机器学习专业词汇中英文对照

机器学习专业词汇中英⽂对照activation 激活值activation function 激活函数additive noise 加性噪声autoencoder ⾃编码器Autoencoders ⾃编码算法average firing rate 平均激活率average sum-of-squares error 均⽅差backpropagation 后向传播basis 基basis feature vectors 特征基向量batch gradient ascent 批量梯度上升法Bayesian regularization method 贝叶斯规则化⽅法Bernoulli random variable 伯努利随机变量bias term 偏置项binary classfication ⼆元分类class labels 类型标记concatenation 级联conjugate gradient 共轭梯度contiguous groups 联通区域convex optimization software 凸优化软件convolution 卷积cost function 代价函数covariance matrix 协⽅差矩阵DC component 直流分量decorrelation 去相关degeneracy 退化demensionality reduction 降维derivative 导函数diagonal 对⾓线diffusion of gradients 梯度的弥散eigenvalue 特征值eigenvector 特征向量error term 残差feature matrix 特征矩阵feature standardization 特征标准化feedforward architectures 前馈结构算法feedforward neural network 前馈神经⽹络feedforward pass 前馈传导fine-tuned 微调first-order feature ⼀阶特征forward pass 前向传导forward propagation 前向传播Gaussian prior ⾼斯先验概率generative model ⽣成模型gradient descent 梯度下降Greedy layer-wise training 逐层贪婪训练⽅法grouping matrix 分组矩阵Hadamard product 阿达马乘积Hessian matrix Hessian 矩阵hidden layer 隐含层hidden units 隐藏神经元Hierarchical grouping 层次型分组higher-order features 更⾼阶特征highly non-convex optimization problem ⾼度⾮凸的优化问题histogram 直⽅图hyperbolic tangent 双曲正切函数hypothesis 估值,假设identity activation function 恒等激励函数IID 独⽴同分布illumination 照明inactive 抑制independent component analysis 独⽴成份分析input domains 输⼊域input layer 输⼊层intensity 亮度/灰度intercept term 截距KL divergence 相对熵KL divergence KL分散度k-Means K-均值learning rate 学习速率least squares 最⼩⼆乘法linear correspondence 线性响应linear superposition 线性叠加line-search algorithm 线搜索算法local mean subtraction 局部均值消减local optima 局部最优解logistic regression 逻辑回归loss function 损失函数low-pass filtering 低通滤波magnitude 幅值MAP 极⼤后验估计maximum likelihood estimation 极⼤似然估计mean 平均值MFCC Mel 倒频系数multi-class classification 多元分类neural networks 神经⽹络neuron 神经元Newton’s method ⽜顿法non-convex function ⾮凸函数non-linear feature ⾮线性特征norm 范式norm bounded 有界范数norm constrained 范数约束normalization 归⼀化numerical roundoff errors 数值舍⼊误差numerically checking 数值检验numerically reliable 数值计算上稳定object detection 物体检测objective function ⽬标函数off-by-one error 缺位错误orthogonalization 正交化output layer 输出层overall cost function 总体代价函数over-complete basis 超完备基over-fitting 过拟合parts of objects ⽬标的部件part-whole decompostion 部分-整体分解PCA 主元分析penalty term 惩罚因⼦per-example mean subtraction 逐样本均值消减pooling 池化pretrain 预训练principal components analysis 主成份分析quadratic constraints ⼆次约束RBMs 受限Boltzman机reconstruction based models 基于重构的模型reconstruction cost 重建代价reconstruction term 重构项redundant 冗余reflection matrix 反射矩阵regularization 正则化regularization term 正则化项rescaling 缩放robust 鲁棒性run ⾏程second-order feature ⼆阶特征sigmoid activation function S型激励函数significant digits 有效数字singular value 奇异值singular vector 奇异向量smoothed L1 penalty 平滑的L1范数惩罚Smoothed topographic L1 sparsity penalty 平滑地形L1稀疏惩罚函数smoothing 平滑Softmax Regresson Softmax回归sorted in decreasing order 降序排列source features 源特征sparse autoencoder 消减归⼀化Sparsity 稀疏性sparsity parameter 稀疏性参数sparsity penalty 稀疏惩罚square function 平⽅函数squared-error ⽅差stationary 平稳性(不变性)stationary stochastic process 平稳随机过程step-size 步长值supervised learning 监督学习symmetric positive semi-definite matrix 对称半正定矩阵symmetry breaking 对称失效tanh function 双曲正切函数the average activation 平均活跃度the derivative checking method 梯度验证⽅法the empirical distribution 经验分布函数the energy function 能量函数the Lagrange dual 拉格朗⽇对偶函数the log likelihood 对数似然函数the pixel intensity value 像素灰度值the rate of convergence 收敛速度topographic cost term 拓扑代价项topographic ordered 拓扑秩序transformation 变换translation invariant 平移不变性trivial 
answer 平凡解under-complete basis 不完备基unrolling 组合扩展unsupervised learning ⽆监督学习variance ⽅差vecotrized implementation 向量化实现vectorization ⽮量化visual cortex 视觉⽪层weight decay 权重衰减weighted average 加权平均值whitening ⽩化zero-mean 均值为零Letter AAccumulated error backpropagation 累积误差逆传播Activation Function 激活函数Adaptive Resonance Theory/ART ⾃适应谐振理论Addictive model 加性学习Adversarial Networks 对抗⽹络Affine Layer 仿射层Affinity matrix 亲和矩阵Agent 代理 / 智能体Algorithm 算法Alpha-beta pruning α-β剪枝Anomaly detection 异常检测Approximation 近似Area Under ROC Curve/AUC Roc 曲线下⾯积Artificial General Intelligence/AGI 通⽤⼈⼯智能Artificial Intelligence/AI ⼈⼯智能Association analysis 关联分析Attention mechanism 注意⼒机制Attribute conditional independence assumption 属性条件独⽴性假设Attribute space 属性空间Attribute value 属性值Autoencoder ⾃编码器Automatic speech recognition ⾃动语⾳识别Automatic summarization ⾃动摘要Average gradient 平均梯度Average-Pooling 平均池化Letter BBackpropagation Through Time 通过时间的反向传播Backpropagation/BP 反向传播Base learner 基学习器Base learning algorithm 基学习算法Batch Normalization/BN 批量归⼀化Bayes decision rule 贝叶斯判定准则Bayes Model Averaging/BMA 贝叶斯模型平均Bayes optimal classifier 贝叶斯最优分类器Bayesian decision theory 贝叶斯决策论Bayesian network 贝叶斯⽹络Between-class scatter matrix 类间散度矩阵Bias 偏置 / 偏差Bias-variance decomposition 偏差-⽅差分解Bias-Variance Dilemma 偏差 – ⽅差困境Bi-directional Long-Short Term Memory/Bi-LSTM 双向长短期记忆Binary classification ⼆分类Binomial test ⼆项检验Bi-partition ⼆分法Boltzmann machine 玻尔兹曼机Bootstrap sampling ⾃助采样法/可重复采样/有放回采样Bootstrapping ⾃助法Break-Event Point/BEP 平衡点Letter CCalibration 校准Cascade-Correlation 级联相关Categorical attribute 离散属性Class-conditional probability 类条件概率Classification and regression tree/CART 分类与回归树Classifier 分类器Class-imbalance 类别不平衡Closed -form 闭式Cluster 簇/类/集群Cluster analysis 聚类分析Clustering 聚类Clustering ensemble 聚类集成Co-adapting 共适应Coding matrix 编码矩阵COLT 国际学习理论会议Committee-based learning 基于委员会的学习Competitive learning 竞争型学习Component learner 组件学习器Comprehensibility 可解释性Computation Cost 计算成本Computational Linguistics 计算语⾔学Computer vision 计算机视觉Concept drift 概念漂移Concept Learning System /CLS 概念学习系统Conditional entropy 条件熵Conditional mutual information 条件互信息Conditional Probability Table/CPT 条件概率表Conditional random field/CRF 条件随机场Conditional risk 条件风险Confidence 置信度Confusion matrix 混淆矩阵Connection weight 连接权Connectionism 连结主义Consistency ⼀致性/相合性Contingency table 列联表Continuous attribute 连续属性Convergence 收敛Conversational agent 会话智能体Convex quadratic programming 凸⼆次规划Convexity 凸性Convolutional neural network/CNN 卷积神经⽹络Co-occurrence 同现Correlation coefficient 相关系数Cosine similarity 余弦相似度Cost curve 成本曲线Cost Function 成本函数Cost matrix 成本矩阵Cost-sensitive 成本敏感Cross entropy 交叉熵Cross validation 交叉验证Crowdsourcing 众包Curse of dimensionality 维数灾难Cut point 截断点Cutting plane algorithm 割平⾯法Letter DData mining 数据挖掘Data set 数据集Decision Boundary 决策边界Decision stump 决策树桩Decision tree 决策树/判定树Deduction 演绎Deep Belief Network 深度信念⽹络Deep Convolutional Generative Adversarial Network/DCGAN 深度卷积⽣成对抗⽹络Deep learning 深度学习Deep neural network/DNN 深度神经⽹络Deep Q-Learning 深度 Q 学习Deep Q-Network 深度 Q ⽹络Density estimation 密度估计Density-based clustering 密度聚类Differentiable neural computer 可微分神经计算机Dimensionality reduction algorithm 降维算法Directed edge 有向边Disagreement measure 不合度量Discriminative model 判别模型Discriminator 判别器Distance measure 距离度量Distance metric learning 距离度量学习Distribution 分布Divergence 散度Diversity measure 多样性度量/差异性度量Domain adaption 领域⾃适应Downsampling 下采样D-separation (Directed separation)有向分离Dual problem 对偶问题Dummy node 哑结点Dynamic Fusion 动态融合Dynamic programming 动态规划Letter EEigenvalue decomposition 特征值分解Embedding 嵌⼊Emotional analysis 
情绪分析Empirical conditional entropy 经验条件熵Empirical entropy 经验熵Empirical error 经验误差Empirical risk 经验风险End-to-End 端到端Energy-based model 基于能量的模型Ensemble learning 集成学习Ensemble pruning 集成修剪Error Correcting Output Codes/ECOC 纠错输出码Error rate 错误率Error-ambiguity decomposition 误差-分歧分解Euclidean distance 欧⽒距离Evolutionary computation 演化计算Expectation-Maximization 期望最⼤化Expected loss 期望损失Exploding Gradient Problem 梯度爆炸问题Exponential loss function 指数损失函数Extreme Learning Machine/ELM 超限学习机Letter FFactorization 因⼦分解False negative 假负类False positive 假正类False Positive Rate/FPR 假正例率Feature engineering 特征⼯程Feature selection 特征选择Feature vector 特征向量Featured Learning 特征学习Feedforward Neural Networks/FNN 前馈神经⽹络Fine-tuning 微调Flipping output 翻转法Fluctuation 震荡Forward stagewise algorithm 前向分步算法Frequentist 频率主义学派Full-rank matrix 满秩矩阵Functional neuron 功能神经元Letter GGain ratio 增益率Game theory 博弈论Gaussian kernel function ⾼斯核函数Gaussian Mixture Model ⾼斯混合模型General Problem Solving 通⽤问题求解Generalization 泛化Generalization error 泛化误差Generalization error bound 泛化误差上界Generalized Lagrange function ⼴义拉格朗⽇函数Generalized linear model ⼴义线性模型Generalized Rayleigh quotient ⼴义瑞利商Generative Adversarial Networks/GAN ⽣成对抗⽹络Generative Model ⽣成模型Generator ⽣成器Genetic Algorithm/GA 遗传算法Gibbs sampling 吉布斯采样Gini index 基尼指数Global minimum 全局最⼩Global Optimization 全局优化Gradient boosting 梯度提升Gradient Descent 梯度下降Graph theory 图论Ground-truth 真相/真实Letter HHard margin 硬间隔Hard voting 硬投票Harmonic mean 调和平均Hesse matrix 海塞矩阵Hidden dynamic model 隐动态模型Hidden layer 隐藏层Hidden Markov Model/HMM 隐马尔可夫模型Hierarchical clustering 层次聚类Hilbert space 希尔伯特空间Hinge loss function 合页损失函数Hold-out 留出法Homogeneous 同质Hybrid computing 混合计算Hyperparameter 超参数Hypothesis 假设Hypothesis test 假设验证Letter IICML 国际机器学习会议Improved iterative scaling/IIS 改进的迭代尺度法Incremental learning 增量学习Independent and identically distributed/i.i.d. 
独⽴同分布Independent Component Analysis/ICA 独⽴成分分析Indicator function 指⽰函数Individual learner 个体学习器Induction 归纳Inductive bias 归纳偏好Inductive learning 归纳学习Inductive Logic Programming/ILP 归纳逻辑程序设计Information entropy 信息熵Information gain 信息增益Input layer 输⼊层Insensitive loss 不敏感损失Inter-cluster similarity 簇间相似度International Conference for Machine Learning/ICML 国际机器学习⼤会Intra-cluster similarity 簇内相似度Intrinsic value 固有值Isometric Mapping/Isomap 等度量映射Isotonic regression 等分回归Iterative Dichotomiser 迭代⼆分器Letter KKernel method 核⽅法Kernel trick 核技巧Kernelized Linear Discriminant Analysis/KLDA 核线性判别分析K-fold cross validation k 折交叉验证/k 倍交叉验证K-Means Clustering K – 均值聚类K-Nearest Neighbours Algorithm/KNN K近邻算法Knowledge base 知识库Knowledge Representation 知识表征Letter LLabel space 标记空间Lagrange duality 拉格朗⽇对偶性Lagrange multiplier 拉格朗⽇乘⼦Laplace smoothing 拉普拉斯平滑Laplacian correction 拉普拉斯修正Latent Dirichlet Allocation 隐狄利克雷分布Latent semantic analysis 潜在语义分析Latent variable 隐变量Lazy learning 懒惰学习Learner 学习器Learning by analogy 类⽐学习Learning rate 学习率Learning Vector Quantization/LVQ 学习向量量化Least squares regression tree 最⼩⼆乘回归树Leave-One-Out/LOO 留⼀法linear chain conditional random field 线性链条件随机场Linear Discriminant Analysis/LDA 线性判别分析Linear model 线性模型Linear Regression 线性回归Link function 联系函数Local Markov property 局部马尔可夫性Local minimum 局部最⼩Log likelihood 对数似然Log odds/logit 对数⼏率Logistic Regression Logistic 回归Log-likelihood 对数似然Log-linear regression 对数线性回归Long-Short Term Memory/LSTM 长短期记忆Loss function 损失函数Letter MMachine translation/MT 机器翻译Macron-P 宏查准率Macron-R 宏查全率Majority voting 绝对多数投票法Manifold assumption 流形假设Manifold learning 流形学习Margin theory 间隔理论Marginal distribution 边际分布Marginal independence 边际独⽴性Marginalization 边际化Markov Chain Monte Carlo/MCMC 马尔可夫链蒙特卡罗⽅法Markov Random Field 马尔可夫随机场Maximal clique 最⼤团Maximum Likelihood Estimation/MLE 极⼤似然估计/极⼤似然法Maximum margin 最⼤间隔Maximum weighted spanning tree 最⼤带权⽣成树Max-Pooling 最⼤池化Mean squared error 均⽅误差Meta-learner 元学习器Metric learning 度量学习Micro-P 微查准率Micro-R 微查全率Minimal Description Length/MDL 最⼩描述长度Minimax game 极⼩极⼤博弈Misclassification cost 误分类成本Mixture of experts 混合专家Momentum 动量Moral graph 道德图/端正图Multi-class classification 多分类Multi-document summarization 多⽂档摘要Multi-layer feedforward neural networks 多层前馈神经⽹络Multilayer Perceptron/MLP 多层感知器Multimodal learning 多模态学习Multiple Dimensional Scaling 多维缩放Multiple linear regression 多元线性回归Multi-response Linear Regression /MLR 多响应线性回归Mutual information 互信息Letter NNaive bayes 朴素贝叶斯Naive Bayes Classifier 朴素贝叶斯分类器Named entity recognition 命名实体识别Nash equilibrium 纳什均衡Natural language generation/NLG ⾃然语⾔⽣成Natural language processing ⾃然语⾔处理Negative class 负类Negative correlation 负相关法Negative Log Likelihood 负对数似然Neighbourhood Component Analysis/NCA 近邻成分分析Neural Machine Translation 神经机器翻译Neural Turing Machine 神经图灵机Newton method ⽜顿法NIPS 国际神经信息处理系统会议No Free Lunch Theorem/NFL 没有免费的午餐定理Noise-contrastive estimation 噪⾳对⽐估计Nominal attribute 列名属性Non-convex optimization ⾮凸优化Nonlinear model ⾮线性模型Non-metric distance ⾮度量距离Non-negative matrix factorization ⾮负矩阵分解Non-ordinal attribute ⽆序属性Non-Saturating Game ⾮饱和博弈Norm 范数Normalization 归⼀化Nuclear norm 核范数Numerical attribute 数值属性Letter OObjective function ⽬标函数Oblique decision tree 斜决策树Occam’s razor 奥卡姆剃⼑Odds ⼏率Off-Policy 离策略One shot learning ⼀次性学习One-Dependent Estimator/ODE 独依赖估计On-Policy 在策略Ordinal attribute 有序属性Out-of-bag estimate 包外估计Output layer 输出层Output smearing 输出调制法Overfitting 过拟合/过配Oversampling 过采样Letter PPaired t-test 成对 t 检验Pairwise 成对型Pairwise Markov property 成对马尔可夫性Parameter 参数Parameter estimation 参数估计Parameter tuning 调参Parse tree 
解析树Particle Swarm Optimization/PSO 粒⼦群优化算法Part-of-speech tagging 词性标注Perceptron 感知机Performance measure 性能度量Plug and Play Generative Network 即插即⽤⽣成⽹络Plurality voting 相对多数投票法Polarity detection 极性检测Polynomial kernel function 多项式核函数Pooling 池化Positive class 正类Positive definite matrix 正定矩阵Post-hoc test 后续检验Post-pruning 后剪枝potential function 势函数Precision 查准率/准确率Prepruning 预剪枝Principal component analysis/PCA 主成分分析Principle of multiple explanations 多释原则Prior 先验Probability Graphical Model 概率图模型Proximal Gradient Descent/PGD 近端梯度下降Pruning 剪枝Pseudo-label 伪标记Letter QQuantized Neural Network 量⼦化神经⽹络Quantum computer 量⼦计算机Quantum Computing 量⼦计算Quasi Newton method 拟⽜顿法Letter RRadial Basis Function/RBF 径向基函数Random Forest Algorithm 随机森林算法Random walk 随机漫步Recall 查全率/召回率Receiver Operating Characteristic/ROC 受试者⼯作特征Rectified Linear Unit/ReLU 线性修正单元Recurrent Neural Network 循环神经⽹络Recursive neural network 递归神经⽹络Reference model 参考模型Regression 回归Regularization 正则化Reinforcement learning/RL 强化学习Representation learning 表征学习Representer theorem 表⽰定理reproducing kernel Hilbert space/RKHS 再⽣核希尔伯特空间Re-sampling 重采样法Rescaling 再缩放Residual Mapping 残差映射Residual Network 残差⽹络Restricted Boltzmann Machine/RBM 受限玻尔兹曼机Restricted Isometry Property/RIP 限定等距性Re-weighting 重赋权法Robustness 稳健性/鲁棒性Root node 根结点Rule Engine 规则引擎Rule learning 规则学习Letter SSaddle point 鞍点Sample space 样本空间Sampling 采样Score function 评分函数Self-Driving ⾃动驾驶Self-Organizing Map/SOM ⾃组织映射Semi-naive Bayes classifiers 半朴素贝叶斯分类器Semi-Supervised Learning 半监督学习semi-Supervised Support Vector Machine 半监督⽀持向量机Sentiment analysis 情感分析Separating hyperplane 分离超平⾯Sigmoid function Sigmoid 函数Similarity measure 相似度度量Simulated annealing 模拟退⽕Simultaneous localization and mapping 同步定位与地图构建Singular Value Decomposition 奇异值分解Slack variables 松弛变量Smoothing 平滑Soft margin 软间隔Soft margin maximization 软间隔最⼤化Soft voting 软投票Sparse representation 稀疏表征Sparsity 稀疏性Specialization 特化Spectral Clustering 谱聚类Speech Recognition 语⾳识别Splitting variable 切分变量Squashing function 挤压函数Stability-plasticity dilemma 可塑性-稳定性困境Statistical learning 统计学习Status feature function 状态特征函Stochastic gradient descent 随机梯度下降Stratified sampling 分层采样Structural risk 结构风险Structural risk minimization/SRM 结构风险最⼩化Subspace ⼦空间Supervised learning 监督学习/有导师学习support vector expansion ⽀持向量展式Support Vector Machine/SVM ⽀持向量机Surrogat loss 替代损失Surrogate function 替代函数Symbolic learning 符号学习Symbolism 符号主义Synset 同义词集Letter TT-Distribution Stochastic Neighbour Embedding/t-SNE T – 分布随机近邻嵌⼊Tensor 张量Tensor Processing Units/TPU 张量处理单元The least square method 最⼩⼆乘法Threshold 阈值Threshold logic unit 阈值逻辑单元Threshold-moving 阈值移动Time Step 时间步骤Tokenization 标记化Training error 训练误差Training instance 训练⽰例/训练例Transductive learning 直推学习Transfer learning 迁移学习Treebank 树库Tria-by-error 试错法True negative 真负类True positive 真正类True Positive Rate/TPR 真正例率Turing Machine 图灵机Twice-learning ⼆次学习Letter UUnderfitting ⽋拟合/⽋配Undersampling ⽋采样Understandability 可理解性Unequal cost ⾮均等代价Unit-step function 单位阶跃函数Univariate decision tree 单变量决策树Unsupervised learning ⽆监督学习/⽆导师学习Unsupervised layer-wise training ⽆监督逐层训练Upsampling 上采样Letter VVanishing Gradient Problem 梯度消失问题Variational inference 变分推断VC Theory VC维理论Version space 版本空间Viterbi algorithm 维特⽐算法Von Neumann architecture 冯 · 诺伊曼架构Letter WWasserstein GAN/WGAN Wasserstein⽣成对抗⽹络Weak learner 弱学习器Weight 权重Weight sharing 权共享Weighted voting 加权投票法Within-class scatter matrix 类内散度矩阵Word embedding 词嵌⼊Word sense disambiguation 词义消歧Letter ZZero-data learning 零数据学习Zero-shot learning 零次学习Aapproximations近似值arbitrary随意的affine仿射的arbitrary任意的amino 
acid氨基酸amenable经得起检验的axiom公理,原则abstract提取architecture架构,体系结构;建造业absolute绝对的arsenal军⽕库assignment分配algebra线性代数asymptotically⽆症状的appropriate恰当的Bbias偏差brevity简短,简洁;短暂broader⼴泛briefly简短的batch批量Cconvergence 收敛,集中到⼀点convex凸的contours轮廓constraint约束constant常理commercial商务的complementarity补充coordinate ascent同等级上升clipping剪下物;剪报;修剪component分量;部件continuous连续的covariance协⽅差canonical正规的,正则的concave⾮凸的corresponds相符合;相当;通信corollary推论concrete具体的事物,实在的东西cross validation交叉验证correlation相互关系convention约定cluster⼀簇centroids 质⼼,形⼼converge收敛computationally计算(机)的calculus计算Dderive获得,取得dual⼆元的duality⼆元性;⼆象性;对偶性derivation求导;得到;起源denote预⽰,表⽰,是…的标志;意味着,[逻]指称divergence 散度;发散性dimension尺度,规格;维数dot⼩圆点distortion变形density概率密度函数discrete离散的discriminative有识别能⼒的diagonal对⾓dispersion分散,散开determinant决定因素disjoint不相交的Eencounter遇到ellipses椭圆equality等式extra额外的empirical经验;观察ennmerate例举,计数exceed超过,越出expectation期望efficient⽣效的endow赋予explicitly清楚的exponential family指数家族equivalently等价的Ffeasible可⾏的forary初次尝试finite有限的,限定的forgo摒弃,放弃fliter过滤frequentist最常发⽣的forward search前向式搜索formalize使定形Ggeneralized归纳的generalization概括,归纳;普遍化;判断(根据不⾜)guarantee保证;抵押品generate形成,产⽣geometric margins⼏何边界gap裂⼝generative⽣产的;有⽣产⼒的Hheuristic启发式的;启发法;启发程序hone怀恋;磨hyperplane超平⾯Linitial最初的implement执⾏intuitive凭直觉获知的incremental增加的intercept截距intuitious直觉instantiation例⼦indicator指⽰物,指⽰器interative重复的,迭代的integral积分identical相等的;完全相同的indicate表⽰,指出invariance不变性,恒定性impose把…强加于intermediate中间的interpretation解释,翻译Jjoint distribution联合概率Llieu替代logarithmic对数的,⽤对数表⽰的latent潜在的Leave-one-out cross validation留⼀法交叉验证Mmagnitude巨⼤mapping绘图,制图;映射matrix矩阵mutual相互的,共同的monotonically单调的minor较⼩的,次要的multinomial多项的multi-class classification⼆分类问题Nnasty讨厌的notation标志,注释naïve朴素的Oobtain得到oscillate摆动optimization problem最优化问题objective function⽬标函数optimal最理想的orthogonal(⽮量,矩阵等)正交的orientation⽅向ordinary普通的occasionally偶然的Ppartial derivative偏导数property性质proportional成⽐例的primal原始的,最初的permit允许pseudocode伪代码permissible可允许的polynomial多项式preliminary预备precision精度perturbation 不安,扰乱poist假定,设想positive semi-definite半正定的parentheses圆括号posterior probability后验概率plementarity补充pictorially图像的parameterize确定…的参数poisson distribution柏松分布pertinent相关的Qquadratic⼆次的quantity量,数量;分量query疑问的Rregularization使系统化;调整reoptimize重新优化restrict限制;限定;约束reminiscent回忆往事的;提醒的;使⼈联想…的(of)remark注意random variable随机变量respect考虑respectively各⾃的;分别的redundant过多的;冗余的Ssusceptible敏感的stochastic可能的;随机的symmetric对称的sophisticated复杂的spurious假的;伪造的subtract减去;减法器simultaneously同时发⽣地;同步地suffice满⾜scarce稀有的,难得的split分解,分离subset⼦集statistic统计量successive iteratious连续的迭代scale标度sort of有⼏分的squares平⽅Ttrajectory轨迹temporarily暂时的terminology专⽤名词tolerance容忍;公差thumb翻阅threshold阈,临界theorem定理tangent正弦Uunit-length vector单位向量Vvalid有效的,正确的variance⽅差variable变量;变元vocabulary词汇valued经估价的;宝贵的Wwrapper包装分类:。
Option Pricing Under a Double Exponential Jump Diffusion Model

Hui Wang
Division of Applied Mathematics, Brown University, Box F, Providence, Rhode Island 02912, huiwang@

Analytical tractability is one of the challenges faced by many alternative models that try to generalize the Black-Scholes option pricing model to incorporate more empirical features. The aim of this paper is to extend the analytical tractability of the Black-Scholes model to alternative models with jumps. We demonstrate that a double exponential jump diffusion model can lead to an analytic approximation for finite-horizon American options (by extending the Barone-Adesi and Whaley method) and analytical solutions for popular path-dependent options (such as lookback, barrier, and perpetual American options). Numerical examples indicate that the formulae are easy to implement, and are accurate.

2. Background and Intuition

2.1. The Double Exponential Jump Diffusion Model

Under the double exponential jump diffusion model, the dynamics of the asset price S(t) are given by

\frac{dS(t)}{S(t-)} = \mu\,dt + \sigma\,dW(t) + d\left( \sum_{i=1}^{N(t)} (V_i - 1) \right),

where W(t) is a standard Brownian motion, N(t) a Poisson process with rate \lambda, and \{V_i\} a sequence of independent identically distributed (i.i.d.) nonnegative random variables such that Y = \log(V) has an asymmetric double exponential distribution with the density

f_Y(y) = p\,\eta_1 e^{-\eta_1 y}\,\mathbf{1}_{\{y \ge 0\}} + q\,\eta_2 e^{\eta_2 y}\,\mathbf{1}_{\{y < 0\}}, \qquad \eta_1 > 1,\ \eta_2 > 0,

where p, q \ge 0, p + q = 1, represent the probabilities of upward and downward jumps.
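To make the dynamics above concrete, here is a minimal Monte Carlo sketch (not from the paper, which instead develops analytic approximations): it samples terminal prices under the double exponential jump diffusion and prices a European call. All numeric parameters (S0, K, r, sigma, lambda, p, eta1, eta2) are hypothetical, and the drift compensation uses the standard E[V] expression for this model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, for illustration only.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.20, 1.0
lam = 3.0                        # Poisson jump intensity (lambda)
p, eta1, eta2 = 0.6, 10.0, 5.0   # up-jump probability, up/down exponential rates
n_paths = 20_000

# Martingale compensator zeta = E[V] - 1, with V = exp(Y).
zeta = p * eta1 / (eta1 - 1.0) + (1.0 - p) * eta2 / (eta2 + 1.0) - 1.0

Z = rng.standard_normal(n_paths)         # Brownian increment over [0, T]
N = rng.poisson(lam * T, n_paths)        # number of jumps per path
Y = np.zeros(n_paths)                    # sum of log-jumps per path
for i in np.nonzero(N)[0]:
    up = rng.random(N[i]) < p
    y = np.where(up, rng.exponential(1.0 / eta1, N[i]),
                     -rng.exponential(1.0 / eta2, N[i]))
    Y[i] = y.sum()

# Terminal log-price under risk-neutral dynamics of the jump diffusion.
logST = np.log(S0) + (r - 0.5 * sigma**2 - lam * zeta) * T + sigma * np.sqrt(T) * Z + Y
call = np.exp(-r * T) * np.maximum(np.exp(logST) - K, 0.0).mean()
print(f"Monte Carlo European call estimate: {call:.4f}")
```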
MP4462DN

ELECTRICAL CHARACTERISTICS
VIN = 12V, VEN = 2.5V, VCOMP = 1.4V, TA = +25°C, unless otherwise noted.

[The electrical characteristics table did not survive extraction; only its skeleton is recoverable. Column headers: Parameter, Symbol, Condition, Min, Typ, Max, Units. Parameter rows: Feedback Voltage; Upper Switch On Resistance; Upper Switch Leakage; Current Limit (0V < VFB < 0.8V); Error Amp Voltage Gain (4); Error Amp Transconductance; Error Amp Min Source Current; Error Amp Min Sink Current; VIN UVLO Threshold; VIN UVLO Hysteresis; Soft-Start Time (4); Oscillator Frequency (RFREQ = 45kΩ: 1.6; RFREQ = 18kΩ: 3.2); Shutdown Supply Current (VEN = 0V); Quiescent Supply Current (no load, VFB = 0.9V); Thermal Shutdown; Thermal Shutdown Hysteresis; Minimum Off Time (4); Minimum On Time (4); EN Up Threshold. The numeric limits and units were lost in extraction.]

Notes: 1) Exceeding these ratings may damage the device. 2) The device is not guaranteed to function outside of its operating conditions. 3) Measured on approximately 1" square of 1 oz copper.
Unit 9 Digital signals and signal processing

Part 1: Digital signal processing

Digital signal processing (DSP) is the study of signals in a digital representation and the processing methods of these signals. DSP and analog signal processing are sub-fields of signal processing. DSP includes sub-fields like audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, image processing, signal processing for communications, biomedical signal processing, etc.

Since the goal of DSP is usually to measure or filter continuous real-world analog signals, the first step is usually to convert the signal from an analog to a digital form, by using an analog to digital converter. Often, the required output signal is another analog output signal, which requires a digital to analog converter.

The algorithms required for DSP are sometimes performed using specialized computers, which make use of specialized microprocessors called digital signal processors (also abbreviated DSP). These process signals in real time, and are generally purpose-designed application-specific integrated circuits (ASICs). When flexibility and rapid development are more important than unit costs at high volume, DSP algorithms may also be implemented using field-programmable gate arrays (FPGAs).

DSP domains
In DSP, engineers usually study digital signals in one of the following domains: time domain (one-dimensional signals), spatial domain (multidimensional signals), frequency domain, autocorrelation domain, and wavelet domains. They choose the domain in which to process a signal by making an informed guess (or by trying different possibilities) as to which domain best represents the essential characteristics of the signal. A sequence of samples from a measuring device produces a time or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain information, that is, the frequency spectrum. Autocorrelation is defined as the cross-correlation of the signal with itself over varying intervals of time or space.

Signal sampling
With the increasing use of computers, the usage of and need for digital signal processing has increased. In order to use an analog signal on a computer it must be digitized with an analog to digital converter (ADC). Sampling is usually carried out in two stages, discretization and quantization. In the discretization stage, the space of signals is partitioned into equivalence classes, and discretization is carried out by replacing the signal with a representative signal of the corresponding equivalence class. In the quantization stage the representative signal values are approximated by values from a finite set.

In order for a sampled analog signal to be exactly reconstructed, the Nyquist-Shannon sampling theorem must be satisfied. This theorem states that the sampling frequency must be greater than twice the bandwidth of the signal. In practice, the sampling frequency is often significantly more than twice the required bandwidth. The most common bandwidth scenarios are: DC to BW ("baseband"), and a band of width BW centered on a carrier frequency ("direct demodulation").

Time and space domains
The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Filtering generally consists of some transformation of a number of surrounding samples around the current sample of the input or output signal.
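As a concrete illustration of the two sampling stages just described (discretization into sample instants, then quantization of each sample), here is a minimal sketch. The 1 kHz tone, 8 kHz sampling rate, and 8-bit uniform quantizer are arbitrary example values, chosen so the sampling frequency exceeds twice the signal bandwidth.

```python
import numpy as np

fs = 8000.0                                # sampling frequency (Hz), > 2x bandwidth
f0 = 1000.0                                # example tone frequency (Hz)
t = np.arange(0.0, 0.01, 1.0 / fs)         # discretization: uniform sample instants
x = 0.8 * np.sin(2 * np.pi * f0 * t)       # "analog" signal evaluated at the samples

# Quantization: map each sample to the nearest of 2**bits uniformly spaced levels.
bits = 8
step = 2.0 / 2**bits                       # full scale assumed to be [-1, 1)
x_q = np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

print("max quantization error:", np.max(np.abs(x - x_q)))   # about step / 2
```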
There are various ways to characterize filters; for example: - A “linear” filter is a linear transformation of input samples; other filters are “non-linear.” Linear filters satisfy the superposition condition, i.e., if an input is a weighted linear combination of different signals, the output is an equally weighted linear combination of the corresponding output signals.- A “causal” filter uses only previous samples of the input or output signals; while a “non-causal” filter uses future input samples. A non-causal filter can usually be changed into a causal filter by adding a delay to it.- A “time-invariant” filter has constant properties over time; other filters such as adaptive filters change in time.- Some filters are “stable”, others are “unstable”. A stable filter produces an output that converges to a constant value with time or remains bounded within a finite interval. An unstable filter produces output which diverges.- A “finite impulse response” (FIR) filter uses only the input signal, while an “infinite impulse response” filter (IIR) uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR filters may be unstable.Most filters can be described in Z-domain (a superset of the frequency domain) by their transfer functions. A filter may also be described as a difference equation, a collection of zeroes and poles or, if it is an FIR filter, an impulse response or step response. The output of an FIR filter to any given input may be calculated by convolving the input signal with the impulse response. Filters can also be represented by block diagrams which can then be used to derive a sample processing algorithm to implement the filter using hardware instructions.Frequency domainSignals are converted from time or space domain to the frequency domain usually through the Fourier transform. The Fourier transform converts the signal information to a magnitude and phase component of each frequency. Often the Fourier transform is converted to the power spectrum, which is the magnitude of each frequency component squared.The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to get information of which frequencies are present in the input signal and which are missing.There are some commonly used frequency domain transformations. For example, the cepstrum converts a signal to the frequency domain through Fourier transform, takes the logarithm, and then applies another Fourier transform. This emphasizes the frequency components with smaller magnitude while retaining the order of magnitudes of frequency components.ApplicationsThe main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, radar, sonar,seismology, and biomedicine. 
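The filtering and frequency-domain steps described above can be sketched together in a few lines: an FIR output obtained by convolving the input with an impulse response, an IIR output obtained from a difference equation via scipy.signal.lfilter, the power spectrum as the squared magnitude of the Fourier transform, and a common real-cepstrum variant (inverse FFT of the log magnitude spectrum). The test signal and filter choices are arbitrary examples.

```python
import numpy as np
from scipy import signal

fs = 1000.0                                      # sample rate (Hz), example value
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# FIR filtering: convolve the input with a finite impulse response (always stable).
h = np.ones(5) / 5.0                             # 5-tap moving-average impulse response
y_fir = np.convolve(x, h)[:len(x)]               # first len(x) samples = causal output

# IIR filtering: difference equation using previous outputs as well as inputs.
b, a = signal.butter(2, 0.2)                     # 2nd-order low-pass, cutoff 0.2*Nyquist
y_iir = signal.lfilter(b, a, x)

# Frequency domain: magnitude/phase per frequency, and the power spectrum |X(f)|^2.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
power = np.abs(X) ** 2

# Real cepstrum: FFT -> log magnitude -> inverse FFT.
cepstrum = np.fft.irfft(np.log(np.abs(X) + 1e-12))

print("strongest frequencies (Hz):", freqs[np.argsort(power)[-2:]])
print("FIR / IIR output power:", np.var(y_fir), np.var(y_iir))
```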
Specific examples are speech compression and transmission in digital mobile phones, room matching equalization of sound in HiFi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, computer-generated animations in movies, medical imaging such as CAT scans and MRI, image manipulation, high fidelity loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

Implementation
Digital signal processing is often implemented using specialized microprocessors such as the DSP56000 and the TMS320. These often process data using fixed-point arithmetic, although some versions are available which use floating-point arithmetic and are more powerful. For faster applications FPGAs might be used. Beginning in 2007, multicore implementations of DSPs started to emerge. For faster applications with vast usage, ASICs might be designed specifically. For slow applications, a traditional slower processor such as a microcontroller can cope.

Part 2: General concepts of digital signal processing

There have been tremendous demands in the use of digital computers and special-purpose digital circuitry for performing varied signal processing functions that were originally achieved with analog equipment. The continued evolution of inexpensive integrated circuits has led to a variety of microcomputers and minicomputers that can be used for various signal processing functions. It is now possible to build special-purpose digital processors within the much smaller size and lower cost constraints of systems that were previously all analog in nature.

We will provide a general discussion of the basic concepts associated with digital signal processing. To do so, it is appropriate to discuss some common terms and assumptions. Wherever possible the definitions and terminology will be established in accordance with the recommendations of the IEEE Group on Audio and Electroacoustics.

An analog signal is a function that is defined over a continuous range of time and in which the amplitude may assume a continuous range of values. Common examples are the sinusoidal function, the step function, the output of a microphone, etc. The term analog apparently originated from the field of analog computation, in which voltages and currents are used to represent physical variables, but it has been extended in usage.

A continuous-time signal is a function that is defined over a continuous range of time, but in which the amplitude may either have a continuous range of values or a finite number of possible values. In this context, an analog signal could be considered a special case of a continuous-time signal. In practice, however, the terms analog and continuous-time are interchanged casually in usage and are often used to mean the same thing. Because of the association of the term analog with physical analogies, preference has been established for the term continuous-time. Nevertheless, there will be cases in which the term analog will be used for clarity, particularly where it relates to the term digital.

The term quantization describes the process of representing a variable by a set of distinct values. A quantized variable is one that may assume only distinct values.

A discrete-time signal is a function that is defined only at a particular set of values of time. This means that the independent variable, time, is quantized.
If the amplitude of a discrete-time signal is permitted to assume a continuous range of values, the function is said to be a sampled-data signal. A sampled-data signal could arise from sampling an analog signal at discrete values of time.

A digital signal is a function in which both time and amplitude are quantized. A digital signal may always be represented by a sequence of numbers in which each number has a finite number of digits.

The terms discrete-time and digital are often interchanged in practice and are often used to mean the same thing. A great deal of the theory underlying discrete-time signals is applicable to purely digital signals, so it is not always necessary to make rigid distinctions.
A High Frequency Core Loss Measurement Method For Arbitrary Excitations

A High Frequency Core Loss Measurement Method For Arbitrary ExcitationsMingkai Mu, Fred C. Lee, Qiang Li, David Gilham, Khai D. T. NgoCenter for Power Electronics SystemsECE Department, Virginia Tech, Blacksburg, USAEmail: mmk@Abstract — Recently, point of load (POL) converter are pushed to higher switching frequency for higher power density. As the frequency increases, magnetic core loss becomes a significant part of the total loss of POL converters. Accurate measurement of this part of loss is important for the converter design. And the core loss under non-sinusoidal excitation is particularly interesting for pulse-width-modulation (PWM) converters. However, precise measurement is difficult with classical four-wire methods, because of high frequency and non-sinusoidal flux waveform (like triangular flux). In this paper, a new method is proposed for high frequency core loss measurement with arbitrary excitation. The principle is to cancel the reactive power in the core under test with a losslessor low-loss core and reduce the sensitivity to phase discrepancy. By doing this, phase discrepancy induced error can be significantly reduced, and accurate measurement can be achieved for higher frequency than conventional method.I. I NTRODUCTIONMagnetic core loss is always a concern for power electronics applications. The converter is less efficientwithout proper magnetic core. Since the core loss increases dramatically with frequency, it draws a lot of attention to investigate and evaluate different magnetic materials for high frequency POL converters. Many efforts were made to search for accurate measurement of core loss. [1]-[9] One way to measure the core loss is the thermal approach.[1][2] Put the winded core in the thermal isolated chamber, and measure the temperature difference between inlet and outlet coolant. This method is universal but it is hard to exclude winding loss and time consuming. Soelectrical engineers usually prefer electric approach which isaccurate and fast. One method is proposed for high frequency measurement.[3] It winds the core under test as aninductor and connects a capacitor in series with it. When the capacitor resonates with the inductor, it is much easier to measure the loss of the inductor. Though this method is good for high frequency, it includes the winding loss, and is limited to sinusoidal excitation.The classical four-wire method for core loss measurement[4][5], is shown in Fig. 1. The core under testare winded as a transformer. Excitation is exerted on the corethrough one winding, and voltage is sensed on the otherwinding, which is the sensing winding. Integrating the(a) four-wire method for core loss measurement1l L 2l L mL(b) equivalent modelFigure 1. four-wire measurement method and its equivalent modelproduct of voltage on sensing winding and the current through the excitation winding, we can calculate the loss consumed in the core. This method doesn't have significant drawbacks in principle, and it excludes the winding loss from the measured core loss.However, the phase difference between the secondary side voltage v 2 and the current sensing resistor voltage v R is very closed to 90°, so it is sensitive to phase discrepancy[6][7]. The phase discrepancy always exists due to current sensing resister parasitics, probe mismatch, transformer parasitics, and oscilloscope time resolution limit, etc. A small phase error will lead to significant loss error. 
As frequency increases, this loss error becomes more severe, because the phase discrepancy is more difficult to control. So the classical method is not accurate when testing high frequency loss. Recently, a new method was proposed to reduce the sensitivity to the phase discrepancy at high frequency [9]. The circuit is shown in Fig. 2. It adds a capacitor to resonate with the magnetizing inductor of the transformer, and measures the voltage v3, which equals the total voltage across the magnetizing inductor, the equivalent core loss resistor, and the resonant capacitor. Because the capacitor voltage cancels the inductor voltage, v3 can be more in phase with the current through the magnetizing inductor. Integrating the product of v3 and vR still gives the core loss, but it is much less sensitive to phase discrepancy. Though this method overcomes the phase sensitivity problem, it only works under sinusoidal excitation. So far, there is still no good solution for high frequency core loss measurement with arbitrary excitation.

Figure 2. Measurement setup with resonant capacitor and its equivalent model: (a) improved method; (b) equivalent model.

© 2011 IEEE. Reprinted, with permission, from IEEE APEC 2011.

II. PROPOSED METHOD

A. Phase Error

Before introducing the proposed method, it is necessary to discuss the sensitivity to phase discrepancy of the classical four-wire method in Fig. 1 and the principle of the method in Fig. 2. The equivalent model of the four-wire method is shown in Fig. 1(b). This method uses the integration of the product of the sensing winding voltage v2 and the current sensor voltage vR to get the core loss. Because the impedance of the magnetizing inductor Lm is usually much larger than the equivalent core loss resistor Rcore, the phase angle difference between v2 and vR is very close to 90°. So a small phase discrepancy leads to significant loss error after the integration [9]. If we assume the excitation is sinusoidal, the sensitivity to phase discrepancy is given by

\Delta = \tan(\varphi_{v\text{-}i}) \times \Delta\varphi \qquad (1)

where φv-i is the phase angle difference between the two terms being integrated: voltage and current. When the voltage and current are nearly 90° apart, the small phase discrepancy Δφ is amplified by tan(φv-i). For low frequency measurement, parasitic effects in the test circuit are small, so the phase discrepancy is small and does not cause severe error. As frequency increases, imperfections such as the parasitics of the current sensing resistor, the mismatch of the two probes, and the oscilloscope sample rate limitation begin to emerge. For example, a 2Ω sensing resistor with 1nH ESL will produce 0.9° phase discrepancy at 5MHz. And for a 5GS/s digital oscilloscope, the minimum sampling period is 200ps, which is 0.36° at 5MHz. Fig. 3 shows how much percentage error 0.1°, 1°, and 10° phase discrepancies cause at different phase angle differences between v2 and iR. We can see that near 90°, a 1° phase discrepancy produces about 100% error.

Figure 3. Loss error induced by 1° phase error at different phase angles between v2 and vR.

From the perspective of power, the product of v2 and iR gives the instantaneous power, which contains a small resistive power and a large reactive power. In principle, averaging the instantaneous power over one period yields the resistive power, which is the core loss. However, this calculated core loss is not reliable, because a tiny phase discrepancy introduces a big error into the calculated resistive power.
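A quick numeric sketch of Eq. (1), using the figures quoted in the text (a 2Ω sensing resistor with 1nH ESL at 5MHz); the 89° angle between v2 and vR is an assumed near-worst-case value, not a number from the paper.

```python
import numpy as np

f = 5e6                          # test frequency: 5 MHz
R_sense, L_esl = 2.0, 1e-9       # 2 ohm sensing resistor with 1 nH ESL (from the text)

# Phase discrepancy introduced by the sense resistor's parasitic inductance.
dphi = np.arctan(2 * np.pi * f * L_esl / R_sense)           # radians
print("sensor phase discrepancy:", np.degrees(dphi), "deg")  # about 0.9 deg

# Eq. (1): relative loss error = tan(phi_vi) * dphi, near the ~90 deg worst case.
phi_vi = np.radians(89.0)                                    # assumed example angle
print("loss error:", 100 * np.tan(phi_vi) * dphi, "%")       # about 90 %
```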
If we can reduce the calculated reactive power, we can reduce the sensitivity of the conventional method. The principle of the method with the resonant capacitor in Fig. 2 is to use the capacitor to cancel the reactive voltage on the magnetizing inductor, so the phase difference between v3 and vR will be close to 0°, and the integration of their product will be less sensitive to phase discrepancy.

B. Proposed Method

The method in Fig. 2 is only for sinusoidal excitation, because it only cancels the reactive voltage at a single frequency. Following the principle of cancelling reactive voltage, we can derive another circuit. To cancel the reactive voltage on the magnetizing inductor over the entire frequency range, the capacitor is replaced with an ideal inductor and the polarity of the transformer is changed, as shown in Fig. 4(a). In principle, the core loss resistor should be in parallel with the magnetizing inductor. For analytical simplicity, the core loss resistor Rcore is placed in series with the magnetizing inductor, and its resistance is nonlinear. If N1:N2 = 1:1, v3 is the sum of the inductor voltage vL and -vm:

V_3 = (j\omega L - j\omega L_m - R_{core})\, I \qquad (1)

Figure 4. Proposed method for arbitrary excitation: (a) proposed method with ideal inductor; (b) proposed method with air core transformer.

If L = Lm, the voltage on the magnetizing inductor is cancelled out for any frequency. v3 then equals the voltage on the equivalent core loss resistor and is in phase with vR. Integrating the product of v3 and vR gives the loss on Rcore, and is less sensitive to phase discrepancy. However, if the inductor is not ideal, the winding loss of the added inductor L will affect the accuracy of the measured result. So the circuit in Fig. 4(b) is proposed. It uses the sensing winding voltage of an air core transformer to cancel the reactive voltage on the magnetizing inductor. Because the air core does not have core loss, its magnetizing inductor can serve as the ideal inductor in Fig. 4(a). According to the new measurement method, the core loss can be calculated as

P_{core} = \frac{1}{T R_{ref}} \int_0^T v_3\, v_r \, dt \qquad (2)

where T is the excitation period and Rref is the current sensing resistance.

With this arrangement of two transformers, this method follows a principle similar to that of the method in Fig. 2: most of the reactive power in the core under test is cancelled with the reactive power in the air core, leaving only the resistive power consumed by the core under test. If the power is more resistive, it is much easier to measure.

From another perspective, the proposed method can be interpreted as comparing the core under test with a reference core, which is the air core in Fig. 4(b). Their power difference is the core loss of the core under test. In some situations, the air core may not be a good choice as the reference core. For example, when the magnetizing inductance of the core under test is high, a big air core transformer is needed. It will introduce non-negligible parasitic capacitances. A low-loss core could be used as the reference core, and its size and parasitic capacitances can be much smaller than the air core's. If its loss is much smaller than that of the core under test, the error can be tolerable.

III. VERIFICATION

A. Simulation Verification

Simulation is performed to verify the concept of this method in Fig. 5. The simulation tool is Simplis®. The excitation is a 2MHz 50% duty cycle rectangular voltage.
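In practice Eq. (2) is evaluated from sampled oscilloscope waveforms. The sketch below shows one way to do that numerically; the waveforms, sample rate, and Rref value are placeholders, not data from the paper.

```python
import numpy as np

fs = 2.5e9                         # oscilloscope sample rate (assumed)
T = 1.0 / 2e6                      # one period of the 2 MHz excitation
t = np.arange(0.0, T, 1.0 / fs)
R_ref = 2.0                        # current sensing resistance (assumed)

# Placeholder waveforms standing in for captured v3 and vR over one period.
v3 = 0.5 * np.sin(2 * np.pi * 2e6 * t)
vR = 1.0 * np.sin(2 * np.pi * 2e6 * t)

# Eq. (2): P_core = (1 / (T * R_ref)) * integral of v3 * vR over one period,
# approximated here by the mean of uniformly spaced samples divided by R_ref.
P_core = np.mean(v3 * vR) / R_ref
print(f"core loss: {P_core * 1e3:.2f} mW")
```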
The magnetizing inductance of the core under test is 1.2μH, the core loss resistor is assumed to be in parallel with the magnetizing inductor, and its value is 1k Ω. The magnetizing inductance of the air core transformer is also 1.2μH, in order to cancel the reactive power in the core under test. In this simulation, assume the error comes from the 1nH parasitic inductance of current sensing resistor. Fig. 5(b) shows the voltage on current sensing resistor v R , the voltage on the secondary winding of the core under test v 2, and the cancelled voltage v 3. v 3 is in phase with v R . They reach the peak value at the same time. So the product of v 3 and v R will give only the real power consumed on R core . From thecomparison in table 1, we can reduce the phase discrepancy induced error percentage from 86% to 1.01%, with the added air core transformer. The error percentage is defined asmeasured actualactualP P P −Δ=(3)(a) simulation circuit in Simplis(b) simulation waveforms Figure 5. simulation verificationTABLE I. Simulation resultP loss (mW)Error PercentageActual Loss on R core 40.48 0 Calculated Loss with Integration of V 2 * V R 75.28 86% Calculated Loss with Integration of V 3 * V R41.06 1.02%B. Experiment ResultsExperiments are performed to verify and demonstrate the new method. The core sample is the a thin toroid core made of sintered low temperature co-fired ceramic (LTCC) magnetic material (40011 from ESL ElectroScience ®). To measure its core loss, a lossless or low loss core is required. In this experiment, a toroid core made of NiZn Ferrite (4F1 from Ferroxcube ®) is chosen as the reference core. To make sure that reference core won't introduce much error, its loss density and core volume should be estimated first. The core loss densities of 4F1 and 40011 are compared in Fig. 6, which are measured with the resonant capacitor method in Fig. 2. The figure shows that NiZn ferrite has much lower loss than 40011 LTCC material at the same flux density. The transformer winded with the core under test and the reference core are shown in Fig. 7(a), both with the 1:1 turns ratio. Calculation shows that their magnetizing inductances and core volumes are similar but the flux density in the NiZn ferrite core is about 1/3 of the flux density in the LTCC core, when exerting the same voltage excitation. So the loss in the NiZn ferrite core is much smaller than the loss in the 40011 LTCC core, and we can use the NiZn ferrite core to be the reference core, without introducing significant error. The measurement setup is shown in Fig. 7(b). The core under test is immersed in a cup of hot oil to keep the core temperature stable. The temperature of the oil is controlled by the hotplate at the bottom. In following experiments, the temperature are kept at 100°C.The first experiment measured core loss of 40011 LTCC material under sinusoidal voltage excitation. The measurement waveforms are shown in Fig. 8(a). The cancelled voltage v 3 is in phase with the voltage on the current sensing resistor v R . So the measured result should be less sensitive to phase discrepancy. The measured core loss density is shown in Fig.8(b), compared with the measured result of the method with resonant capacitor. The results of the two methods are consistent, though with 10% difference.101010101010Figure 6. loss comparison between 4F1 and LTCC 40010(1.5MHz sinusoidal excitation, 100°C)(a) core under test (LTCC, left) and reference core (NiZn, right)(b) core under test in oil bath Figure 7. 
experiment setup(a) measurement waveform101010C o r e L o s sD e n s i t y (k W /m 3)(b) comparison between two methodsFigure 8. experimental waveforms and results for sinusoidal flux excitation(LTCC 40011, 1.5MHz, 100°C)The second experiment measured the core loss under rectangular voltage excitation. The core loss at different duty cycle are measured. Typical waveforms of 50% and 25% duty cycle are shown in Fig. 9. The core loss at different duty cycle are shown in Fig. 10. The peak flux density is kept the same at 10mT. From the measured result, we can see that core loss varies with different duty cycle. At 50% duty cycle, the core loss is the lowest, and lower than sinusoidal excitation. As the duty cycle increases or decreases, the core loss increases. When the duty cycle is lower than 30% or higher than 70%, the core loss is higher than sinusoidal excitation.(a) measurement waveforms (50% duty cycle)v 2v Rv 3(b) measurement waveforms (25% duty cycle) Figure 9. experimental waveform for triangular flux(LTCC 40011, 1.5MHz, 100°C)C o r e L o s sD e n s i t y (k W /m 3)Figure 10. measured core loss for different duty cycle triangular flux(LTCC 40011, 1.5MHz, 100°C)IV.C ONCLUSIONA new high frequency core loss measurement method is proposed in this paper. By using an additional transformer with a lossless or low-loss magnetic core to cancel the reactive voltage, it reduces the sensitivity to phase discrepancy. So the new method can measure at much higher frequency than classical four-wire method. What is more, the measurement circuit is wide-band, so it can measure the loss of arbitrary waveform like triangular flux, which is the practical interest of power electronics applications.R EFERENCES[1]Chucheng Xiao; Gang Chen; Odendaal, W.G.H., "Overview of PowerLoss Measurement Techniques in Power Electronics Systems", IEEE Transactions on Industry Applications, Volume 43, Issue 3, May-june 2007, pp. 657 - 664.[2]R. Linkous, A. W. Kelley, K. C. Armstrong, “An improvedcalorimeter for measuring the core loss of magnetic materials”, in proc. Appl. Power Electron. Conf., vol. 2, 6-10 Feb. 2000, pp. 633 – 639. [3]Yehui Han; Cheung, G.; An Li; Sullivan, C. R.; Perreault, D.J.,"Evaluation of magnetic materials for very high frequency power applications", Power Electronics Specialists Conference, 2008. 15-19, pp. 4270 - 4276.[4] A. Brockmeyer, “Experimental evaluation of the influence of DCpremagnetization on the properties of power electronic ferrites”, in proc. Appl. Power Electron. Conf., vol. 1, 3-7 March 1996, pp. 454 – 460.[5]V. J. Thottuvelil, T. G. Wilson, Jr. H. A. Owen, “High-frequencymeasurement techniques for magnetic cores”, IEEE Trans. Power Electron., vol. 5, no. 1, Jan. 1990, pp. 41 – 53.[6]James N. Lester, Member IEEE and Benjamin M. Alexandrovich,‘‘Compensating Power Measurement Phase Delay Error’’, Industry Applications Conference, vol.3, pp. 1679 -- 1684, 1999.[7]Vencislav Cekov Valchev, Alex Van den Bossche, ‘‘Inductors andTransformers for Power Electronics’’, Chapter 11, CRC Press, 2005.[8]J. Zhang, G. Skutt, F. C. Lee, “Some practical issues related to coreloss measurement using impedance analyzer approach”, in proc.Appl. Power Electron. Conf., vol. 2, no. 0, part 2, 5-9 March 1995, pp. 547 – 553.[9]Mingkai Mu, Qiang Li, David Gilham, Fred C. Lee, Khai D.T. Ngo,"New core loss measurement method for high frequency magnetic materials", IEEE Energy Conversion Congress & Expo, 2010.。
Stability of Time-Delay Systems Equivalence between Lyapunov and Scaled Small-Gain Conditio

Stability of Time-Delay Systems:Equivalence between Lyapunov and Scaled Small-Gain ConditionsJianrong Zhang,Carl R.Knopse,and Panagiotis Tsiotras Abstract—It is demonstrated that many previously reported Lyapunov-based stability conditions for time-delay systems are equivalent to the ro-bust stability analysis of an uncertain comparison system free of delays via the use of the scaled small-gain lemma with constant scales.The novelty of this note stems from the fact that it unifies several existing stability results under the same framework.In addition,it offers insights on how new,less conservative results can be developed.Index Terms—Stability,time-delay systems.II.I NTRODUCTIONThe analysis of linear time-delay systems(LTDS)has attracted much interest in the literature over the half century,especially in the last decade.Two types of stability conditions,namely delay-inde-pendent and delay-dependent,have been studied[17].As the name implies,delay-independent results guarantee stability for arbitrarily large delays.Delay-dependent results take into account the maximum delay that can be tolerated by the system and,thus,are more useful in applications.One of the first stability analysis results was the polyno-mial criteria[8]–[10].An important result was later provided by[3], which gives necessary and sufficient conditions for efficient compu-tation of the delay margin for the linear systems with commensurate delays.This result only requires the computation of the eigenvalues and generalized eigenvalues of constant matrices.Unfortunately,it is not straightforward to extend this to many problems of interest, such as the stability of general(noncommensurate)delays systems, H1performance of LTDS with exogenous disturbances,robust stability of LTDS with dynamical uncertainties,and robust controller synthesis,etc.Recently,much effort has been devoted to developing frequency-domain and time-domain based techniques which may be extendable to such problems.The frequency-domain approaches include integral quadratic constraints[6],singular value tests[25], framework-based criteria[4],and other similar techniques.In[20], the traditional -framework was extended for time-delay systems to obtain a necessary and sufficient stability condition,which was then relaxed to a convex sufficient condition.Other recent stability analysis results have been developed in the time-domain,based on Lyapunov’s Second Method using either Lyapunov–Krasovskii functionals or Lyapunov–Razumikhin functions [26],[12],[13],[16],[22],[14],[17],[19].These results are formulated in terms of linear matrix inequalities(LMIs),and,hence,can be solved efficiently[1].While these results are often extendable to the systems with general multiple delays and/or dynamical uncertainties,they can be rather conservative and the corresponding Lyapunov functionals are complex.A formal procedure for constructing Lyapunov functionals for LTDS was proposed in[11],but a Lyapunov functional,in general, Manuscript received June10,1999;revised August10,2000.Recommended by Associate Editor J.Chen.This work was supported by the National Science Foundation under Grant DMI-9713488.J.Zhang and C.R.Knospe are with the Department of Mechanical and Aerospace Engineering,University of Virginia,Charlottesville,V A22904-4746 USA(e-mail:jz9n@;crk4y@).P.Tsiotras is with the School of Aerospace Engineering,Georgia Institute of Technology,Atlanta,GA30332-0150USA(e-mail:p.tsiotras@). 
Publisher Item Identifier S0018-9286(01)01015-7.does not provide direct information on how conservative the resultant condition may be in practice.In this note,we show that several existing Lyapunov-based results, both delay-independent and delay-dependent,are equivalent to the scaled small-gain condition for robust stability of a comparison system that is free of delay.This result provides a new frequency-domain in-terpretation to some common Lyapunov-based results in the literature. Via a numerical example,we investigate the potential conservatism of the stability conditions,and demonstrate that a major source of conservatism is the embedding of the delay uncertainties in unit disks that the comparison system employs.This source of conservatism is hidden in the Lyapunov-based framework but is quite apparent in the comparison system interpretation.These results also provide insight into how to reduce the conservatism of the stability tests.After a conference version of this note appeared in[28],we be-came aware of the results of[15]and[7]which are related to our approach.Unlike the model transformation class in[15],which con-tains distributed delays,the comparison system employed herein is a delay-free uncertain system stated in frequency domain and permits the immediate application of the standard frequency-domain techniques, such as the framework.The results in[7]are based on a special case of our comparison system,namely M=I n.Neither[15]nor[7]exam-ined the equivalence of existing Lyapunov-based criteria and the scaled small-gain conditions,which is the contribution of this note.The notation is conventional.Let n2m)be the set of all real(complex)n2mmatrices,[f1g,I n be n2n identity matrix,W T be the transpose of real matrix W,and RH1:=f H(s): H(s)2H1,H(s)is a real rational transfer matrix g.P>0indicates that P is a symmetric and positive definite matrix,and k1k1indicates the H1norm defined by k G k1:=sup!2n2n with respect to a block structure 3is defined by 3(M)=0if there is no323such that I0M3 is singular,and3(M)=[min f (3):det(I0M3)=0;323g]01 otherwise.We also define the set1r:=f diag[ 1I n,111, r I n and the closed norm-bounded set B1r:=f12 H1:k1k1 1;1(s)21r g.Finally,for linear time-invariant system P(s)and its input x(t),we define a signal P(s)[x](t)asP(s)[x](t):=L01[P(s)X(s)]where X(s)is the Laplace transform of x(t),and L01[1]is the inverse Laplace operator.III.C OMPARISON S YSTEMFor ease of exposition,we will examine the single-delay case. However,the Lyapunov stability conditions examined here may all be straightforwardly extended to the case of systems with multiple (noncommensurate)delays.Consider the linear time-delay system_x(t)=Ax(t)+A d x(t0 )(1) where A2n2n are constant matrices,and the delay is constant,unknown,but bounded by a known bound as0 . 
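The LMI-type stability certificates discussed in the rest of this note can be checked numerically with a semidefinite programming solver. As an illustration only, the sketch below tests the classical delay-independent condition for system (1), namely the existence of P > 0 and Q > 0 with the block matrix [[AᵀP + PA + Q, PA_d], [A_dᵀP, -Q]] negative definite, using CVXPY. The example matrices are hypothetical, and this block form is the standard delay-independent test, not a transcription of the specific conditions (6)-(9) examined later in the note.

```python
import numpy as np
import cvxpy as cp

# Hypothetical example data for system (1): dx/dt = A x(t) + A_d x(t - tau).
A  = np.array([[-3.0, 1.0], [0.0, -2.0]])
Ad = np.array([[0.5, 0.0], [0.3, 0.5]])
n  = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
eps = 1e-6

# Delay-independent LMI: [[A'P + PA + Q, P Ad], [Ad'P, -Q]] < 0, with P, Q > 0.
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -Q]])

constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("delay-independent LMI feasible:", prob.status == cp.OPTIMAL)
```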
The following assumption is a necessary condition when investigating asymptotic stability of the system(1).Assumption1:The system(1)free of delay is asymptotically stable, that is,the matrix A:=A+A d is Hurwitz.Taking Laplace transforms of both sides,the system(1)can be ex-pressed in the s domain assX(s)=AX(s)+A d e0 s X(s):(2)0018–9286/01$10.00©2001IEEEFig.1.A system with uncertainty.The results of this note depend on the notion of robust stability of afeedback interconnection of a finite-dimensional,linear,time-invariant (FDLTI)system and an uncertain system with known uncertainty struc-ture.The following definition clarifies the type of robust stability used herein.More on this definition can be found in [32].Definition 1:Consider a linear,time-invariant (finite-dimensional)system G (s )interconnected with an uncertain block 1,as shown in Fig.1.The uncertain block 1belongs to a known,uncertainty structure set 121.Then,the system is said to be robustly stable if G (s )is internally stable and the interconnection is well posed and remains internally stable for all 121:To proceed with our analysis,we need the following preliminary results.Lemma 1:Let M2e 0s 01 s e 0s 01 s=A +MA d )X (s )+(I 0M )A d e0 s X (s )+d AX(s )+e 0 s1d A d X (s ):In view of the fact that k e 0 s k 1=1and k (e 0 s 01)=( s )k 1== 1,it follows from the above equation that (2)is a special case of the uncertain system (3)with 11=e 0 s I n ,and 12=(e 0 s 01)=( s )I n .Therefore,the robust stability of (3)guarantees that (1)is asymptotically stable for all 2[0; ].As shown in the next section,the comparison system (3)can be rewritten as an interconnection of an FDLTI system G (s )with a block 1,where 1=diag[11;12]2B 12.Hence,the analysis of the ro-bust stability of the system (3)may be performed via -analysis,since the small- theorem applies even to the case where the uncertainty is nonrational [23].Because the calculation of is NP-hard in general[2],its upper bound with D scales is typically used instead.In partic-ular,the interconnection in Fig.1is robustly stable if G (s )2RH1is internally stable andsup !2(j!)D 01n 2n;D i =D 3i >0g :The test (4),although a convex optimization problem,requires a fre-quency sweep.Alternatively,the analysis of robust stability may be performed without the frequency sweep by solving an LMI.The fol-lowing lemma states this result.Additional conservatism is introduced in this formulation,however,since it implies satisfaction of (4)with the same constant real scaling matrix used for all frequencies.Lemma 2[21](Scaled Small-Gain LMI)1:Consider the system in-terconnection shown in Fig.1where the plant G (s )is FDLTI and the uncertainty block is such that 12B 1r .Let (A;B;C;D )be a min-imal realization of G (s )withG (s )=:Then,the closed-loop system is robustly stable if there exist matricesX >0and Q =diag[Q 1,Q 2,111,Qr ]>0,Qi 22nXA d AXA d A dA T A T d X011XA T d A T dX012X>0(7)where =0 01[(A +A d )T X +X (A +A d )]0( 011+ 012)X .b)[13]There exist matrices P >0;P 1>0and P 2>0satisfyingH P A T P A T dAP 0 P 10 A dP0 P 2<0(8)1Thesmall gain theorem applies to the case where the uncertainty blockscontain infinite dimensional dynamic systems [32].where H=P(A+A d)T+(A+A d)P+ A d(P1+ P2)A T d.c)[19]There exist matrices X>0;U>0;V>0and Wsatisfying10W A d A T A T d V (W+X)0A T d W T0U A T d A T d V0V A d A V A d A d0V0000V<0(9)where 1=(A+A d)T X+X(A+A d)+W A d+A T d W T+U.The following proposition shows that all of above conditions areequivalent to the SSGS conditions for the special case of the 
comparison system (3).

Proposition 1: For the comparison system (3), if $M = 0$, the SSGS condition is equivalent to the condition (6),² and, if $M = I_n$, the SSGS condition is equivalent to the condition (8) and can also be reduced to the condition (7). Moreover, the delay-dependent condition (9) is equivalent to the SSGS condition for (3) with $M$ as a free matrix variable.

Proof: First, let $M = 0$; then the comparison system (3) becomes

$$s X(s) = A X(s) + \Delta_1 A_d X(s), \qquad \Delta_1 \in B\mathbf{\Delta}_1$$

which can be described as the following closed-loop system:

$$\dot{x} = A x + A_d u, \qquad y = x, \qquad u = \Delta_1[y](t).$$

With $G(s) = (sI - A)^{-1} A_d$, the SSGS condition becomes (6).

Next, we let $M = I_n$ and $\Delta_3 = \Delta_1 \Delta_2$. Equation (3) then becomes

$$s X(s) = (A + A_d) X(s) + \bar{\tau} \Delta_2 A_d A X(s) + \bar{\tau} \Delta_3 A_d A_d X(s) \quad (10)$$

with $\mathrm{diag}[\Delta_2, \Delta_3] \in B\mathbf{\Delta}_2$. The last equation can be rewritten as the closed-loop system

$$\dot{x} = (A + A_d) x + \bar{\tau} A_d u_1 + \bar{\tau} A_d u_2, \qquad y_1 = A x, \qquad y_2 = A_d x, \qquad u_1 = \Delta_2[y_1](t), \qquad u_2 = \Delta_3[y_2](t).$$

Then, by applying Lemma 2 with $G(s) = \begin{bmatrix} A \\ A_d \end{bmatrix}\big(sI - (A + A_d)\big)^{-1}\begin{bmatrix} \bar{\tau} A_d & \bar{\tau} A_d \end{bmatrix}$, we see that the system (1) is asymptotically stable for any constant delay, $0 \le \tau \le \bar{\tau}$, if there exist $X > 0$ and $Q = \mathrm{diag}[Q_1, Q_2] > 0$ such that

$$\begin{bmatrix} R & \bar{\tau} X A_d & \bar{\tau} X A_d & A^T Q_1 & A_d^T Q_2 \\ \bar{\tau} A_d^T X & -Q_1 & 0 & 0 & 0 \\ \bar{\tau} A_d^T X & 0 & -Q_2 & 0 & 0 \\ Q_1 A & 0 & 0 & -Q_1 & 0 \\ Q_2 A_d & 0 & 0 & 0 & -Q_2 \end{bmatrix} < 0$$

where $R = (A + A_d)^T X + X (A + A_d)$. Multiplying by $\mathrm{diag}[X^{-1}, I, I, \bar{\tau} Q_1^{-1}, \bar{\tau} Q_2^{-1}]$ on both sides and using Schur complements, the above inequality is equivalent to

$$\begin{bmatrix} H & \bar{\tau} X A_d A & \bar{\tau} X A_d A_d \\ \bar{\tau} A^T A_d^T X & -Q_1 & 0 \\ \bar{\tau} A_d^T A_d^T X & 0 & -Q_2 \end{bmatrix} < 0, \qquad X > 0, \quad Q_1 > 0, \quad Q_2 > 0 \quad (12)$$

where $H = (A + A_d)^T X + X (A + A_d) + Q_1 + Q_2$. Now, letting $Q_1 = \bar{\tau} \alpha_1^{-1} X$ and $Q_2 = \bar{\tau} \alpha_2^{-1} X$, where the constants $\alpha_1 > 0$ and $\alpha_2 > 0$, (12) is reduced to (7).

Finally, consider the general case of (3) and rewrite it as the following:

$$\dot{x} = (A + M A_d) x + (I - M) A_d u_2 + \bar{\tau} M u_1, \qquad y_1 = A_d A x + A_d A_d u_2, \qquad y_2 = x, \qquad u_1 = \Delta_2[y_1](t), \qquad u_2 = \Delta_1[y_2](t). \quad (13)$$

Therefore, we apply Lemma 2 with $G(s) = \hat{C}(sI - \hat{A})^{-1}\hat{B} + \hat{D}$, where $\hat{A} = A + M A_d$, $\hat{B} = \begin{bmatrix} \bar{\tau} M & (I - M) A_d \end{bmatrix}$, $\hat{C} = \begin{bmatrix} A_d A \\ I \end{bmatrix}$, and $\hat{D} = \begin{bmatrix} 0 & A_d A_d \\ 0 & 0 \end{bmatrix}$.

² Similar observations can also be found, for example, in [26] and [4].

Fig. 2. Delay margin versus K. (1) Nyquist criterion. (2) $\mu$ upper bound with frequency-dependent D scaling. (3) Condition of [19]. (4) Condition of [13]. (5) Condition of [16]. (6) Condition of [25], [26].

[13] "… control of uncertain linear delay systems," in Proc. 13th IFAC World Congr., 1996, pp. 113–118.
[14] S.-I. Niculescu, "On the stability and stabilization of systems with delayed state," Ph.D. dissertation, Laboratoire d'Automatique de Grenoble, INPG, 1996.
[15] S.-I. Niculescu and J. Chen, "Frequency sweeping tests for asymptotic stability: A model transformation for multiple delays," in Proc. 38th IEEE Conf. Decision Control, 1999, pp. 4678–4683.
[16] S.-I. Niculescu, A. Trofino Neto, J.-M. Dion, and L. Dugard, "Delay-dependent stability of linear systems with delayed state: An LMI approach," in Proc. 34th IEEE Conf. Decision Control, 1995, pp. 1495–1497.
[17] S.-I. Niculescu, E. I. Verriest, L. Dugard, and J.-M. Dion, "Stability and robust stability of time-delay systems: A guided tour," in Stability and Robust Control of Time Delay Systems. New York: Springer-Verlag, 1997, pp. 1–71.
[18] A. Packard and J. C. Doyle, "The complex structured singular value," Automatica, vol. 29, no. 1, pp. 77–109, 1993.
[19] P. Park, "A delay-dependent stability criterion for systems with uncertain time-invariant delays," IEEE Trans. Automat. Contr., vol. 44, pp. 876–877, Apr. 1999.
[20] G. Scorletti, "Robustness analysis with time-delays," in Proc. 36th IEEE Conf. Decision Control, 1997, pp. 3824–3829.
[21] R. E. Skelton, T. Iwasaki, and K. Grigoriadis, A Unified Algebraic Approach to Linear Control Design. New York: Taylor & Francis, 1998.
[22] E. Tissir and A. Hmamed, "Further results on stability of ẋ(t) = Ax(t) + Bx(t − τ)," Automatica, vol. 32, no. 12, pp. 1723–1726, 1996.
[23] A. L. Tits and M. K. H. Fan, "On the small-μ theorem," Automatica, vol. 31, no. 8, pp. 1199–1201, 1995.
[24] J. Tlusty, "Machine dynamics," in Handbook of High Speed Machining Technology, R. I. King, Ed. New York: Chapman & Hall, 1985, pp. 48–153.
[25] E. I. Verriest, M. K. H. Fan, and J. Kullstam, "Frequency domain robust stability criteria for linear delay systems," in Proc. 32nd IEEE Conf. Decision Control, 1993, pp. 3473–3478.
[26] E. I. Verriest and A. F. Ivanov, "Robust stability of systems with delayed feedback," Circuits, Syst., Signal Processing, vol. 13, pp. 213–222, 1994.
[27] M. Vidyasagar, Nonlinear Systems Analysis, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1993.
[28] J. Zhang, C. R. Knospe, and P. Tsiotras, "A unified approach to time-delay system stability via scaled small gain," in Proc. Amer. Control Conf., 1999, pp. 307–308.
[29] J. Zhang, C. R. Knospe, and P. Tsiotras, "Toward less conservative stability analysis of time-delay systems," in Proc. 38th IEEE Conf. Decision Control, 1999, pp. 2017–2022.
[30] J. Zhang, C. R. Knospe, and P. Tsiotras, "Stability of linear time-delay systems: A delay-dependent criterion with a tight conservatism bound," in Proc. 2000 Amer. Control Conf., 2000.
[31] J. Zhang, C. R. Knospe, and P. Tsiotras, "Asymptotic stability of linear systems with multiple time-invariant state-delays," in Proc. 2nd IFAC Workshop Linear Time Delay Systems, to be published.
[32] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control. Englewood Cliffs, NJ: Prentice-Hall, 1996.
Bounded Stochastic Distributions Control for Pseudo-ARMAX Stochastic Systems

Hong Wang and Jian Hua Zhang

Abstract—Following the recently developed algorithms for the control of the shape of the output probability density functions for general dynamic stochastic systems [6]–[8], this note presents the modeling and control algorithms for pseudo-ARMAX systems, where, different from all the existing ARMAX systems, the considered system is subjected to an arbitrary bounded random input and the purpose of the control input design is to make the output probability density function of the system output as close as possible to a given distribution function. At first, the relationship between the input noise distribution and the output distribution is established. This is then followed by the description of the control algorithm design. A simulated example is used to demonstrate the use of the algorithm, and encouraging results have been obtained.

The output probability density function is represented as

$\gamma(y, u(k)) = \sum_{i=1}^{M} v_i(k) B_i(y), \qquad y \in [a, b]$  (1)

where
u(k): control input;
γ(y, u): measured probability density function of the system output;
V(k) = (v₁, v₂, ..., v_M)^T: weight vector;
B_i(y): pre-specified basis functions for the approximation of γ(y, u) [2];
A and B: constant matrices.

Although there are several advantages in using this type of model to design the required control algorithm, it is difficult to link such a model structure to a physical system. In particular, the key assumption that the control input only affects the weights of the output probability density function is strict for some applications. As such, it would be ideal if a

Manuscript received March 30, 2000; revised July 31, 2000. Recommended by Associate Editor Q. Zhang. This work was supported in part by the U.K. EPSRC under Grant GB/K97721, and in part by the Overseas Scholarship Committee of the P. R. China.
H. Wang is with the Department of Paper Science, Affiliated Member of Control Systems Centre, University of Manchester Institute of Science and Technology, Manchester M60 1QD, U.K.
J. H. Zhang is on leave from the Department of Power Engineering, North China University of Electrical Power, Beijing, P. R. China.
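As a toy illustration of the output-distribution model (1), the Python sketch below represents a probability density on [a, b] as a weighted sum of fixed basis functions. The Gaussian-bump basis and the weight values are hypothetical stand-ins, since the note does not prescribe a particular B_i(y).

import numpy as np

def make_basis(a, b, M, width=None):
    """Return M fixed Gaussian-bump basis functions B_i(y) on [a, b] (a hypothetical choice)."""
    centers = np.linspace(a, b, M)
    width = width or (b - a) / M
    return [lambda y, c=c: np.exp(-0.5 * ((y - c) / width) ** 2) for c in centers]

def output_pdf(y, v, basis):
    """gamma(y) = sum_i v_i * B_i(y), as in (1); v plays the role of the weight vector V(k)."""
    return sum(vi * Bi(y) for vi, Bi in zip(v, basis))

a, b, M = 0.0, 1.0, 5
basis = make_basis(a, b, M)
v = np.array([0.1, 0.3, 0.4, 0.15, 0.05])        # hypothetical weights produced by the controller
y = np.linspace(a, b, 201)
gamma = output_pdf(y, v, basis)
gamma /= np.trapz(gamma, y)                      # normalize so the density integrates to one
print("integral of gamma over [a, b]:", np.trapz(gamma, y))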
A Statistical Theory of Mobile-Radio Reception

For simplicity, it will be assumed that at every point there are exactly N component waves and that these N waves have the same amplitude. In addition it will be assumed that the transmitted radiation is vertically polarized, that is, with the electric-field vector directed vertically, and that the polarization is unchanged on scattering so that the received field is also vertically polarized. The model described so far gives what might be termed the "scattered field," since the energy arrives at the receiver by way of a number of indirect paths. Another term for this scattered field is the "incoherent field," because its phase is completely random. Sometimes a significant fraction of the total received energy arrives by way of the direct line-of-sight path from transmitter to receiver. The phase of the "direct wave" is nonrandom and it may therefore be described as a "coherent wave." It will be seen later that the field in a heavily built-up area such as New York City is entirely of the scattered type, whereas the field in a suburban area with the transmitter not more than a mile or two distant is often a combination of a scattered field with a direct wave.
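The scattered-field model described above can be illustrated numerically: the following Python sketch sums N equal-amplitude waves with independent random phases (the incoherent field) and optionally adds a fixed-phase direct wave (the coherent component). The value of N and the amplitude of the direct wave are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def received_field(n_samples, N=20, direct_amplitude=0.0):
    """Complex envelope of the received field: N equal-amplitude scattered waves with
    uniformly random phases, plus an optional nonrandom (coherent) direct wave."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, N))
    scattered = np.exp(1j * phases).sum(axis=1) / np.sqrt(N)   # incoherent field
    return scattered + direct_amplitude                        # add the coherent wave

env_scattered = np.abs(received_field(100000))                   # heavily built-up area: scattered only
env_mixed = np.abs(received_field(100000, direct_amplitude=2.0)) # suburban case: scattered plus direct
print("mean envelope, scattered only:", env_scattered.mean())
print("mean envelope, with direct wave:", env_mixed.mean())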
Pei Pan - Chinese translation

Chapter 6: Light Sources and Amplifiers

In a fiber-optic system, the light source generates the beam that carries the information. Laser diodes and light-emitting diodes (LEDs) are the two most common sources. Their small size is compatible with the small diameter of optical fibers, and their rugged construction and low power requirements are compatible with modern solid-state electronics. In systems operating up to a few GHz (or a few Gb/s), the information is usually impressed onto the light beam by modulating the source input current. External modulation (discussed in Chapters 4 and 10) is considered when these rates are exceeded. We study both LEDs and laser diodes, including their principles of operation, transfer characteristics, and modulation. We aim to understand the relative merits and differences of the two sources and the circumstances in which each is used. When fiber losses reduce the signal power below the required level, optical amplifiers are needed to boost the signal back to a usable level. Through their use, fiber links can be lengthened. Because light sources and optical amplifiers have so much in common, both are treated in this chapter.

1. Light-Emitting Diodes

A light-emitting diode [1, 2] is a p-n junction semiconductor that emits light when forward biased. Figure 6.1 shows the device connections, circuit symbol, and energy-band diagram associated with the diode. Energy-band theory provides a simple explanation of semiconductor emitters (and detectors). The allowed energy bands, whose widths are indicated in the figure, are separated by a forbidden region, the bandgap, of width Wg. In the upper band, called the conduction band, electrons are not bound to individual atoms and are free to move. A hole carries a positive charge. Holes exist at sites where an electron has been removed from a neutral atom, leaving that atom with a net positive charge. A free electron can recombine with a hole, returning the atom to its neutral state. Energy is released when this occurs. An n-type semiconductor has an excess of free electrons, as illustrated in Fig. 6.1, while a p-type semiconductor has an excess of free holes. When a p-type and an n-type material are joined, the Fermi levels (WF) of the p and n materials align and an energy barrier is created, as shown in the figure. Heavily doped materials provide the large numbers of electrons and holes needed for conduction and for the emission process. In the figure, electron energy increases vertically upward, while hole energy increases vertically downward. Thus, free electrons in the n region do not have enough energy to climb the barrier and move into the p region. Similarly, holes lack sufficient energy to overcome the barrier and move into the n region. With no applied voltage, the energy barrier produced by the differing Fermi levels of the two materials prevents carriers from moving freely. An applied voltage raises the potential energy on the n side and lowers it on the p side, thereby reducing the barrier. If the applied voltage (in electron volts) equals the bandgap energy (Wg), free electrons and free holes have enough energy to move into the junction region, as shown at the bottom of the figure. When a free electron meets a hole in the junction region, the electron can drop into the valence band and recombine with the hole.
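As a quick numerical aside to the band picture above, the emitted photon energy is approximately the bandgap energy Wg, so the emission wavelength follows from lambda = h*c/Wg. The short Python sketch below evaluates this; the example bandgap value is an illustrative placeholder (roughly that of GaAs), not a figure from the text.

PLANCK_EV_S = 4.135667e-15      # Planck constant in eV*s
C_M_PER_S = 2.99792458e8        # speed of light in m/s

def emission_wavelength_um(bandgap_ev):
    """Wavelength (in micrometers) of a photon whose energy equals the bandgap, lambda = h*c/Wg."""
    return PLANCK_EV_S * C_M_PER_S / bandgap_ev * 1e6

# Example: a bandgap of 1.42 eV gives emission near 0.87 micrometers.
print(round(emission_wavelength_um(1.42), 3), "um")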
Stock Econometrics: Solutions to the End-of-Chapter Empirical Exercises

斯托克计量经济学课后习题实证答案P ART T WO Solutions to EmpiricalExercisesChapter 3Review of StatisticsSolutions to Empirical Exercises1. (a)Average Hourly Earnings, Nominal $’sMean SE(Mean) 95% Confidence Interval AHE199211.63 0.064 11.50 11.75AHE200416.77 0.098 16.58 16.96Difference SE(Difference) 95% Confidence Interval AHE2004 AHE1992 5.14 0.117 4.91 5.37(b)Average Hourly Earnings, Real $2004Mean SE(Mean) 95% Confidence Interval AHE199215.66 0.086 15.49 15.82AHE200416.77 0.098 16.58 16.96Difference SE(Difference) 95% Confidence Interval AHE2004 AHE1992 1.11 0.130 0.85 1.37(c) The results from part (b) adjust for changes in purchasing power. These results should be used.(d)Average Hourly Earnings in 2004Mean SE(Mean) 95% Confidence Interval High School13.81 0.102 13.61 14.01College20.31 0.158 20.00 20.62Difference SE(Difference) 95% Confidence Interval College High School 6.50 0.188 6.13 6.87Solutions to Empirical Exercises in Chapter 3 109(e)Average Hourly Earnings in 1992 (in $2004)Mean SE(Mean) 95% Confidence Interval High School13.48 0.091 13.30 13.65 College19.07 0.148 18.78 19.36Difference SE(Difference) 95% Confidence Interval College High School5.59 0.173 5.25 5.93(f) Average Hourly Earnings in 2004Mean SE(Mean) 95% Confidence Interval AHE HS ,2004AHE HS ,19920.33 0.137 0.06 0.60 AHE Col ,2004AHE Col ,19921.24 0.217 0.82 1.66Col–HS Gap (1992)5.59 0.173 5.25 5.93 Col–HS Gap (2004)6.50 0.188 6.13 6.87Difference SE(Difference) 95% Confidence Interval Gap 2004 Gap 1992 0.91 0.256 0.41 1.41Wages of high school graduates increased by an estimated 0.33 dollars per hour (with a 95%confidence interval of 0.06 0.60); Wages of college graduates increased by an estimated 1.24dollars per hour (with a 95% confidence interval of 0.82 1.66). The College High School gap increased by an estimated 0.91 dollars per hour.(g) Gender Gap in Earnings for High School Graduates Yearm Y s m n m w Y s w n w m Y w Y SE (m Y w Y )95% CI 199214.57 6.55 2770 11.86 5.21 1870 2.71 0.173 2.37 3.05 200414.88 7.16 2772 11.92 5.39 1574 2.96 0.192 2.59 3.34There is a large and statistically significant gender gap in earnings for high school graduates.In 2004 the estimated gap was $2.96 per hour; in 1992 the estimated gap was $2.71 per hour(in $2004). The increase in the gender gap is somewhat smaller for high school graduates thanit is for college graduates.Chapter 4Linear Regression with One RegressorSolutions to Empirical Exercises1. (a) ·AHE 3.32 0.45 u AgeEarnings increase, on average, by 0.45 dollars per hour when workers age by 1 year.(b) Bob’s predicted earnings 3.32 0.45 u 26 $11.70Alexis’s predicted earnings 3.32 0.45 u 30 $13.70(c) The R2 is 0.02.This mean that age explains a small fraction of the variability in earnings acrossindividuals.2. (a)There appears to be a weak positive relationship between course evaluation and the beauty index.Course Eval 4.00 0.133 u Beauty. The variable Beauty has a mean that is equal to 0; the(b) ·_estimated intercept is the mean of the dependent variable (Course_Eval) minus the estimatedslope (0.133) times the mean of the regressor (Beauty). Thus, the estimated intercept is equalto the mean of Course_Eval.(c) The standard deviation of Beauty is 0.789. ThusProfessor Watson’s predicted course evaluations 4.00 0.133 u 0 u 0.789 4.00Professor Stock’s predicted course evaluations 4.00 0.133 u 1 u 0.789 4.105Solutions to Empirical Exercises in Chapter 4 111(d) The standard deviation of course evaluations is 0.55 and the standard deviation of beauty is0.789. 
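The standard-error and confidence-interval arithmetic behind these tables can be checked directly. As a verification of the college versus high-school comparison in 1(d) above, the Python sketch below recomputes the difference of the two means, its standard error as the square root of the sum of squared standard errors, and the z-based 95% confidence interval.

import math

def diff_ci(mean_a, se_a, mean_b, se_b, z=1.96):
    """Difference of two independent sample means with a z-based 95% confidence interval."""
    diff = mean_a - mean_b
    se_diff = math.sqrt(se_a**2 + se_b**2)
    return diff, se_diff, (diff - z * se_diff, diff + z * se_diff)

# Numbers from part 1(d): average hourly earnings in 2004, college vs. high school.
diff, se, ci = diff_ci(20.31, 0.158, 13.81, 0.102)
print(f"difference = {diff:.2f}, SE = {se:.3f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")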
A one standard deviation increase in beauty is expected to increase course evaluation by0.133 u 0.789 0.105, or 1/5 of a standard deviation of course evaluations. The effect is small.(e) The regression R2 is 0.036, so that Beauty explains only3.6% of the variance in courseevaluations.3. (a) ?Ed 13.96 0.073 u Dist. The regression predicts that if colleges are built 10 miles closerto where students go to high school, average years of college will increase by 0.073 years.(b) Bob’s predicted years of completed education 13.960.073 u 2 13.81Bob’s predicted years of completed education if he was 10 miles from college 13.96 0.073 u1 13.89(c) The regression R2 is 0.0074, so that distance explains only a very small fraction of years ofcompleted education.(d) SER 1.8074 years.4. (a)Yes, there appears to be a weak positive relationship.(b) Malta is the “outlying” observation with a trade share of 2.(c) ·Growth 0.64 2.31 u TradesharePredicted growth 0.64 2.31 u 1 2.95(d) ·Growth 0.96 1.68 u TradesharePredicted growth 0.96 1.68 u 1 2.74(e) Malta is an island nation in the Mediterranean Sea, south of Sicily. Malta is a freight transportsite, which explains its larg e “trade share”. Many goods coming into Malta (imports into Malta)and immediately transported to other countries (as exports from Malta). Thus, Malta’s importsand exports and unlike the imports and exports of most other countries. Malta should not beincluded in the analysis.Chapter 5Regression with a Single Regressor:Hypothesis Tests and Confidence IntervalsSolutions to Empirical Exercises1. (a) ·AHE 3.32 0.45 u Age(0.97) (0.03)The t -statistic is 0.45/0.03 13.71, which has a p -value of 0.000, so the null hypothesis can berejected at the 1% level (and thus, also at the 10% and 5% levels).(b) 0.45 r 1.96 u 0.03 0.387 to 0.517(c) ·AHE 6.20 0.26 u Age(1.02) (0.03)The t -statistic is 0.26/0.03 7.43, which has a p -value of 0.000, so the null hypothesis can berejected at the 1% level (and thus, also at the 10% and 5% levels).(d) ·AHE 0.23 0.69 u Age(1.54) (0.05)The t -statistic is 0.69/0.05 13.06, which has a p -value of 0.000, so the null hypothesis can berejected at the 1% level (and thus, also at the 10% and 5% levels).(e) The difference in the estimated E 1 coefficients is 1,1,??College HighScool E E 0.69 0.26 0.43. Thestandard error of for the estimated difference is SE 1,1,??()College HighScoolE E (0.032 0.052)1/2 0.06, so that a 95% confidence interval for the difference is 0.43 r 1.96 u 0.06 0.32 to 0.54(dollars per hour).2. ·_ 4.000.13CourseEval Beauty u (0.03) (0.03)The t -statistic is 0.13/0.03 4.12, which has a p -value of 0.000, so the null hypothesis can be rejectedat the 1% level (and thus, also at the 10% and 5% levels).3. (a) ?Ed13.96 0.073 u Dist (0.04) (0.013)The t -statistic is 0.073/0.013 5.46, which has a p -value of 0.000, so the null hypothesis can be rejected at the 1% level (and thus, also at the 10% and 5% levels).(b) The 95% confidence interval is 0.073 r 1.96 u 0.013 or0.100 to 0.047.(c) ?Ed13.94 0.064 u Dist (0.05) (0.018)Solutions to Empirical Exercises in Chapter 5 113(d) ?Ed13.98 0.084 u Dist (0.06) (0.013)(e) The difference in the estimated E 1 coefficients is 1,1,??Female Male E E 0.064 ( 0.084) 0.020.The standard error of for the estimated difference is SE 1,1,??()Female Male E E (0.0182 0.0132)1/20.022, so that a 95% confidence interval for the difference is 0.020 r 1.96 u 0.022 or 0.022 to0.064. 
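The same interval arithmetic applies to differences of estimated regression coefficients. As a check on part 1(e) of the Chapter 5 answers above, the following Python sketch recomputes the difference between the two estimated slope coefficients, its standard error, and the z-based 95% confidence interval.

import math

def coef_diff_ci(b1, se1, b2, se2, z=1.96):
    """Difference of two independently estimated coefficients with a z-based confidence interval."""
    diff = b1 - b2
    se_diff = math.sqrt(se1**2 + se2**2)
    return diff, se_diff, (diff - z * se_diff, diff + z * se_diff)

# Numbers from 1(e): slope on Age for college graduates (0.69, SE 0.05)
# versus high-school graduates (0.26, SE 0.03).
diff, se, ci = coef_diff_ci(0.69, 0.05, 0.26, 0.03)
print(f"difference = {diff:.2f}, SE = {se:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")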
The difference is not statistically different.Chapter 6Linear Regression with Multiple RegressorsSolutions to Empirical Exercises1. Regressions used in (a) and (b)Regressor a bBeauty 0.133 0.166Intro 0.011OneCredit 0.634Female 0.173Minority 0.167NNEnglish 0.244Intercept 4.00 4.07SER 0.545 0.513R2 0.036 0.155(a) The estimated slope is 0.133(b) The estimated slope is 0.166. The coefficient does not change by an large amount. Thus, theredoes not appear to be large omitted variable bias.(c) Professor Smith’s predicted course evaluation (0.166 u 0)0.011 u 0) (0.634 u 0) (0.173 u0) (0.167 u 1) (0.244 u 0) 4.068 3.9012. Estimated regressions used in questionModelRegressor a bdist 0.073 0.032bytest 0.093female 0.145black 0.367hispanic 0.398incomehi 0.395ownhome 0.152dadcoll 0.696cue80 0.023stwmfg80 0.051intercept 13.956 8.827SER 1.81 1.84R2 0.007 0.279R0.007 0.277Solutions to Empirical Exercises in Chapter 6 115(a) 0.073(b) 0.032(c) The coefficient has fallen by more than 50%. Thus, it seems that result in (a) did suffer fromomitted variable bias.(d) The regression in (b) fits the data much better as indicated by the R2, 2,R and SER. The R2 and R are similar because the number of observations is large (n 3796).(e) Students with a “dadcoll 1” (so that the student’s father went to college) complete 0.696 moreyears of education, on average, than students with “dadcoll 0” (so that the student’s father didnot go to college).(f) These terms capture the opportunity cost of attending college. As STWMFG increases, forgonewages increase, so that, on average, college attendance declines. The negative sign on thecoefficient is consistent with this. As CUE80 increases, it is more difficult to find a job, whichlowers the opportunity cost of attending college, so that college attendance increases. Thepositive sign on the coefficient is consistent with this.(g) Bob’s predicted years of education 0.0315 u 2 0.093 u58 0.145 u 0 0.367 u 1 0.398 u0 0.395 u 1 0.152 u 1 0.696 u 0 0.023 u 7.5 0.051 u 9.75 8.82714.75(h) Jim’s expected years of education is 2 u 0.0315 0.0630 less than Bob’s. Thus, Jim’s expectedyears of education is 14.75 0.063 14.69.3.Variable Mean StandardDeviation Unitsgrowth 1.86 1.82 Percentage Pointsrgdp60 3131 2523 $1960tradeshare 0.542 0.229 unit freeyearsschool 3.95 2.55 yearsrev_coups 0.170 0.225 coups per yearassasinations 0.281 0.494 assasinations per yearoil 0 0 0–1 indicator variable (b) Estimated Regression (in table format):Regressor Coefficienttradeshare 1.34(0.88)yearsschool 0.56**(0.13)rev_coups 2.15*(0.87)assasinations 0.32(0.38)rgdp60 0.00046**(0.00012)intercept 0.626(0.869)SER 1.59R2 0.29R0.23116 Stock/Watson - Introduction to Econometrics - Second EditionThe coefficient on Rev_Coups is í2.15. An additional coup in a five year period, reduces theaverage year growth rate by (2.15/5) = 0.43% over this 25 year period. This means the GPD in 1995 is expected to be approximately .43×25 = 10.75% lower. This is a larg e effect.(c) The 95% confidence interval is 1.34 r 1.96 u 0.88 or 0.42 to 3.10. The coefficient is notstatistically significant at the 5% level.(d) The F-statistic is 8.18 which is larger than 1% critical value of 3.32.Chapter 7Hypothesis Tests and Confidence Intervals in Multiple RegressionSolutions to Empirical Exercises1. 
Estimated RegressionsModelRegressor a bAge 0.45(0.03)0.44 (0.03)Female 3.17(0.18)Bachelor 6.87(0.19)Intercept 3.32(0.97)SER 8.66 7.88R20.023 0.1902R0.022 0.190(a) The estimated slope is 0.45(b) The estimated marginal effect of Age on AHE is 0.44 dollars per year. The 95% confidenceinterval is 0.44 r 1.96 u 0.03 or 0.38 to 0.50.(c) The results are quite similar. Evidently the regression in (a) does not suffer from importantomitted variable bias.(d) Bob’s predicted average hourly earnings 0.44 u 26 3.17 u 0 6.87 u 0 3.32 $11.44Alexis’s predicted average hourly earnings 0.44 u 30 3.17 u 1 6.87 u 1 3.32 $20.22 (e) The regression in (b) fits the data much better. Gender and education are important predictors of earnings. The R2 and R are similar because the sample size is large (n 7986).(f) Gender and education are important. The F-statistic is 752, which is (much) larger than the 1%critical value of 4.61.(g) The omitted variables must have non-zero coefficients and must correlated with the includedregressor. From (f) Female and Bachelor have non-zero coefficients; yet there does not seem to be important omittedvariable bias, suggesting that the correlation of Age and Female and Age and Bachelor is small. (The sample correlations are ·Cor(Age, Female) 0.03 and·Cor(Age,Bachelor) 0.00).118 Stock/Watson - Introduction to Econometrics - Second Edition2.ModelRegressor a b cBeauty 0.13**(0.03) 0.17**(0.03)0.17(0.03)Intro 0.01(0.06)OneCredit 0.63**(0.11) 0.64** (0.10)Female 0.17**(0.05) 0.17** (0.05)Minority 0.17**(0.07) 0.16** (0.07)NNEnglish 0.24**(0.09) 0.25** (0.09)Intercept 4.00**(0.03) 4.07**(0.04)4.07**(0.04)SER 0.545 0.513 0.513R2 0.036 0.155 0.1552R0.034 0.144 0.145(a) 0.13 r 0.03 u 1.96 or 0.07 to 0.20(b) See the table above. Intro is not significant in (b), but the other variables are significant.A reasonable 95% confidence interval is 0.17 r 1.96 u 0.03 or0.11 to 0.23.Solutions to Empirical Exercises in Chapter 7 119 3.ModelRegressor (a) (b) (c)dist 0.073**(0.013) 0.031**(0.012)0.033**(0.013)bytest 0.092**(0.003) 0.093** (.003)female 0.143**(0.050) 0.144** (0.050)black 0.354**(0.067) 0.338** (0.069)hispanic 0.402**(0.074) 0.349** (0.077)incomehi 0.367**(0.062) 0.374** (0.062)ownhome 0.146*(0.065) 0.143* (0.065)dadcoll 0.570**(0.076) 0.574** (0.076)momcoll 0.379**(0.084) 0.379** (0.084)cue80 0.024**(0.009) 0.028** (0.010)stwmfg80 0.050*(0.020) 0.043* (0.020)urban 0.0652(0.063) tuition 0.184(0.099)intercept 13.956**(0.038) 8.861**(0.241)8.893**(0.243)F-statitisticfor urban and tuitionSER 1.81 1.54 1.54R2 0.007 0.282 0.284R0.007 0.281 0.281(a) The group’s claim is that the coefficien t on Dist is 0.075 ( 0.15/2). The 95% confidence forE Dist from column (a) is 0.073 r 1.96 u 0.013 or 0.099 to 0.046. The group’s claim is includedin the 95% confidence interval so that it is consistent with the estimated regression.120 Stock/Watson - Introduction to Econometrics - Second Edition(b) Column (b) shows the base specification controlling for other important factors. Here thecoefficient on Dist is 0.031, much different than the resultsfrom the simple regression in (a);when additional variables are added (column (c)), the coefficient on Dist changes little from the result in (b). From the base specification (b), the 95% confidence interval for E Dist is0.031 r1.96 u 0.012 or 0.055 to 0.008. 
Similar results are obtained from the regression in (c).(c) Yes, the estimated coefficients E Black and E Hispanic are positive, large, and statistically significant.Chapter 8Nonlinear Regression FunctionsSolutions to Empirical Exercises1. This table contains the results from seven regressions that are referenced in these answers.Data from 2004(1) (2) (3) (4) (5) (6) (7) (8)Dependent VariableAHE ln(AHE) ln(AHE) ln(AHE) ln(AHE) ln(AHE) ln(AHE) ln(AHE) Age 0.439**(0.030) 0.024**(0.002)0.147**(0.042)0.146**(0.042)0.190**(0.056)0.117*(0.056)0.160Age2 0.0021**(0.0007) 0.0021** (0.0007)0.0027**(0.0009)0.0017(0.0009)0.0023(0.0011)ln(Age) 0.725**(0.052)Female u Age 0.097 (0.084) 0.123 (0.084) Female u Age2 0.0015 (0.0014)0.0019 (0.0014) Bachelor u Age 0.064 (0.083)0.091 (0.084) Bachelor u Age2 0.0009 (0.0014) 0.0013 (0.0014) Female 3.158**(0.176) 0.180**(0.010)0.180**(0.010)0.180**(0.010)(0.014)1.358*(1.230)0.210**(0.014)1.764(1.239)Bachelor 6.865**(0.185) 0.405**(0.010)0.405**(0.010)0.405**(0.010)0.378**(0.014)0.378**(0.014)0.769(1.228)1.186(1.239)Female u Bachelor 0.064** (0.021) 0.063**(0.021)0.066**(0.021)0.066**(0.021)Intercept 1.884(0.897) 1.856**(0.053)0.128(0.177)0.059(0.613)0.078(0.612)0.633(0.819)0.604(0.819)0.095(0.945)F-statistic and p-values on joint hypotheses(a) F-statistic on terms involving Age 98.54(0.00)100.30(0.00)51.42(0.00)53.04(0.00)36.72(0.00)(b) Interaction termswithAge24.12(0.02)7.15(0.00)6.43(0.00)SER 7.884 0.457 0.457 0.457 0.457 0.456 0.456 0.456 R0.1897 0.1921 0.1924 0.1929 0.1937 0.1943 0.1950 0.1959 Significant at the *5% and **1% significance level.122 Stock/Watson - Introduction to Econometrics - Second Edition(a) The regression results for this question are shown in column (1) of the table. If Age increasesfrom 25 to 26, earnings are predicted to increase by $0.439 per hour. If Age increases from33 to 34, earnings are predicted to increase by $0.439 per hour. These values are the samebecause the regression is a linear function relating AHE and Age .(b) The regression results for this question are shown in column (2) of the table. If Age increasesfrom 25 to 26, ln(AHE ) is predicted to increase by 0.024. This means that earnings are predicted to increase by 2.4%. If Age increases from 34 to 35, ln(AHE ) is predicted to increase by 0.024.This means that earnings are predicted to increase by 2.4%. These values, in percentage terms,are the same because the regression is a linear function relating ln(AHE ) and Age .(c) The regression results for this question are shown in column (3) of the table. If Age increasesfrom 25 to 26, then ln(Age ) has increased by ln(26) ln(25) 0.0392 (or 3.92%). The predictedincrease in ln(AHE ) is 0.725 u (.0392) 0.0284. This means that earnings are predicted toincrease by 2.8%. If Age increases from 34 to 35, then ln(Age ) has increased by ln(35) ln(34) .0290 (or 2.90%). The predicted increase in ln(AHE ) is 0.725 u (0.0290) 0.0210. This means that earnings are predicted to increase by 2.10%.(d) When Age increases from 25 to 26, the predicted change in ln(AHE ) is(0.147 u 26 0.0021 u 262) (0.147 u 25 0.0021 u 252) 0.0399.This means that earnings are predicted to increase by 3.99%.When Age increases from 34 to 35, the predicted change in ln(AHE ) is(0. 147 u 35 0.0021 u 352) (0. 147 u 34 0.0021 u 342) 0.0063.This means that earnings are predicted to increase by 0.63%.(e) The regressions differ in their choice of one of the regressors. 
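The percentage-change calculations in the Chapter 8 answers all follow one pattern: the predicted change in ln(AHE) over an age step is the difference of the fitted regression function evaluated at the two ages. The short Python sketch below reproduces the first case in part (d) using the rounded quadratic coefficients quoted there; small discrepancies for other ages can arise from that rounding.

def delta_log_ahe(age_from, age_to, b_age=0.147, b_age2=-0.0021):
    """Predicted change in ln(AHE) from the quadratic specification
    ln(AHE) = ... + b_age*Age + b_age2*Age^2, with the rounded coefficients of part (d)."""
    fitted = lambda a: b_age * a + b_age2 * a * a
    return fitted(age_to) - fitted(age_from)

# Part (d), first case: ageing from 25 to 26 raises ln(AHE) by about 0.0399, i.e. roughly 4%.
print(round(delta_log_ahe(25, 26), 4))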
They can be compared on the basis of the .R The regression in (3) has a (marginally) higher 2,R so it is preferred.(f) The regression in (4) adds the variable Age 2 to regression(2). The coefficient on Age 2 isstatistically significant ( t 2.91), and this suggests that the addition of Age 2 is important. Thus,(4) is preferred to (2).(g) The regressions differ in their choice of one of the regressors. They can be compared on the basis of the .R The regression in (4) has a (marginally) higher 2,R so it is preferred.(h)Solutions to Empirical Exercises in Chapter 8 123 The regression functions using Age (2) and ln(Age) (3) are similar. The quadratic regression (4) is different. It shows a decreasing effect of Age on ln(AHE) as workers age.The regression functions for a female with a high school diploma will look just like these, but they will be shifted by the amount of the coefficient on the binary regressor Female. The regression functions for workers with a bachelor’s degree will also look just like these, but they would be shifted by the amount of the coefficient on the binary variable Bachelor.(i) This regression is shown in column (5). The coefficient on the interaction term Female uBachelor shows the “extra effect” of Bachelor on ln(AHE) for women relative the effect for men.Predicted values of ln(AHE):Alexis: 0.146 u 30 0.0021 u 302 0.180 u 1 0.405 u 1 0.064 u 1 0.078 4.504Jane: 0.146 u 30 0.0021 u 302 0.180 u 1 0.405 u 0 0.064 u 0 0.078 4.063Bob: 0.146 u 30 0.0021 u 302 0.180 u 0 0.405 u 1 0.064 u 0 0.078 4.651Jim: 0.146 u 30 0.0021 u 302 0.180 u 0 0.405 u 0 0.064 u 0 0.078 4.273Difference in ln(AHE): Alexis Jane 4.504 4.063 0.441Difference in ln(AHE): Bob Jim 4.651 4.273 0.378Notice that the difference in the difference predicted effects is 0.441 0.378 0.063, which is the value of the coefficient on the interaction term.(j) This regression is shown in (6), which includes two additional regressors: the interactions of Female and the age variables, Age and Age2. The F-statistic testing the restriction that the coefficients on these interaction terms is equal to zero is F 4.12 with a p-value of 0.02. This implies that there is statistically significant evidence (at the 5% level) that there is a different effect of Age on ln(AHE) for men and women.(k) This regression is shown in (7), which includes two additional regressors that are interactions of Bachelor and the age variables, Age and Age2. The F-statistic testing the restriction that the coefficients on these interaction terms is zero is 7.15 with a p-value of 0.00. This implies that there is statistically significant evidence (at the 1% level) that there is a different effect of Age on ln(AHE) for high school and college graduates.(l) Regression (8) includes Age and Age2 and interactions terms involving Female and Bachelor.The figure below shows the regressions predicted value of ln(AHE) for male and females with high school and college degrees.124 Stock/Watson - Introduction to Econometrics - Second EditionThe estimated regressions suggest that earnings increase as workers age from 25–35, the rangeof age studied in this sample. There is evidence that the quadratic term Age2 belongs in theregression. Curvature in the regression functions in particularly important for men.Gender and education are significant predictors of earnings, and there are statistically significant interaction effects between age and gender and age and education. 
The table below summarizes the regressions predictions for increases in earnings as a person ages from 25 to 32 and 32 to 35Gender, Education Predicted ln(AHE) at Age(Percent per year)25 32 35 25 to 32 32 to 35Males, High School 2.46 2.65 2.67 2.8% 0.5%Females, BA 2.68 2.89 2.93 3.0% 1.3%Males, BA 2.74 3.06 3.09 4.6% 1.0%Earnings for those with a college education are higher than those with a high school degree, andearnings of the college educated increase more rapidly early in their careers (age 25–32). Earnings for men are higher than those of women, and earnings of men increase more rapidly early in theircareers (age 25–32). For all categories of workers (men/women, high school/college) earningsincrease more rapidly from age 25–32 than from 32–35.。
INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONS

INTERNATIONAL JOURNAL OF CIRCUIT THEORY AND APPLICATIONSInt.J.Circ.Theor.Appl.2006;34:559–582Published online in Wiley InterScience().DOI:10.1002/cta.375A wavelet-based piecewise approach for steady-state analysisof power electronics circuitsK.C.Tam,S.C.Wong∗,†and C.K.TseDepartment of Electronic and Information Engineering,Hong Kong Polytechnic University,Hong KongSUMMARYSimulation of steady-state waveforms is important to the design of power electronics circuits,as it reveals the maximum voltage and current stresses being imposed upon specific devices and components.This paper proposes an improved approach tofinding steady-state waveforms of power electronics circuits based on wavelet approximation.The proposed method exploits the time-domain piecewise property of power electronics circuits in order to improve the accuracy and computational efficiency.Instead of applying one wavelet approximation to the whole period,several wavelet approximations are applied in a piecewise manner tofit the entire waveform.This wavelet-based piecewise approximation approach can provide very accurate and efficient solution,with much less number of wavelet terms,for approximating steady-state waveforms of power electronics circuits.Copyright2006John Wiley&Sons,Ltd.Received26July2005;Revised26February2006KEY WORDS:power electronics;switching circuits;wavelet approximation;steady-state waveform1.INTRODUCTIONIn the design of power electronics systems,knowledge of the detailed steady-state waveforms is often indispensable as it provides important information about the likely maximum voltage and current stresses that are imposed upon certain semiconductor devices and passive compo-nents[1–3],even though such high stresses may occur for only a brief portion of the switching period.Conventional methods,such as brute-force transient simulation,for obtaining the steady-state waveforms are usually time consuming and may suffer from numerical instabilities, especially for power electronics circuits consisting of slow and fast variations in different parts of the same waveform.Recently,wavelets have been shown to be highly suitable for describingCorrespondence to:S.C.Wong,Department of Electronic and Information Engineering,Hong Kong Polytechnic University,Hunghom,Hong Kong.†E-mail:enscwong@.hkContract/sponsor:Hong Kong Research Grants Council;contract/grant number:PolyU5237/04ECopyright2006John Wiley&Sons,Ltd.560K.C.TAM,S.C.WONG AND C.K.TSEwaveforms with fast changing edges embedded in slowly varying backgrounds[4,5].Liu et al.[6] demonstrated a systematic algorithm for approximating steady-state waveforms arising from power electronics circuits using Chebyshev-polynomial wavelets.Moreover,power electronics circuits are piecewise varying in the time domain.Thus,approx-imating a waveform with one wavelet approximation(ing one set of wavelet functions and hence one set of wavelet coefficients)is rather inefficient as it may require an unnecessarily large wavelet set.In this paper,we propose a piecewise approach to solving the problem,using as many wavelet approximations as the number of switch states.The method yields an accurate steady-state waveform descriptions with much less number of wavelet terms.The paper is organized as follows.Section2reviews the systematic(standard)algorithm for approximating steady-state waveforms using polynomial wavelets,which was proposed by Liu et al.[6].Section3describes the procedure and formulation for approximating steady-state waveforms of piecewise switched systems.In 
Section4,application examples are presented to evaluate and compare the effectiveness of the proposed piecewise wavelet approximation with that of the standard wavelet approximation.Finally,we give the conclusion in Section5.2.REVIEW OF WA VELET APPROXIMATIONIt has been shown that wavelet approximation is effective for approximating steady-state waveforms of power electronics circuits as it takes advantage of the inherent nature of wavelets in describing fast edges which have been embedded in slowly moving backgrounds[6].Typically,power electronics circuits can be represented by a time-varying state-space equation˙x=A(t)x+U(t)(1) where x is the m-dim state vector,A(t)is an m×m time-varying matrix,and U is the inputfunction.Specifically,we writeA(t)=⎡⎢⎢⎢⎣a11(t)a12(t)···a1m(t)............a m1(t)a m2(t)···a mm(t)⎤⎥⎥⎥⎦(2)andU(t)=⎡⎢⎢⎢⎣u1(t)...u m(t)⎤⎥⎥⎥⎦(3)In the steady state,the solution satisfiesx(t)=x(t+T)for0 t T(4) where T is the period.For an appropriate translation and scaling,the boundary condition can be mapped to the closed interval[−1,1]x(+1)=x(−1)(5) Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582A WA VELET-BASED PIECEWISE APPROACH FOR STEADY-STATE ANALYSIS561 Assume that the basic time-invariant approximation equation isx i(t)=K T i W(t)for−1 t 1and i=1,2,...,m(6) where W(t)is any wavelet basis of size2n+1+1(n being the wavelet level),K T i=[k i,0,...,k i,2n+1] is a coefficient vector of dimension2n+1+1,which is to be found.‡The wavelet transformedequation of(1)isKD W=A(t)K W+U(t)(7)whereK=⎡⎢⎢⎢⎢⎢⎢⎢⎣k1,0k1,1···k1,2n+1k2,0k2,1···k2,2n+1............k m,0k m,1···k m,2n+1⎤⎥⎥⎥⎥⎥⎥⎥⎦(8)Thus,(7)can be written generally asF(t)K=−U(t)(9) where F(t)is a m×(2n+1+1)m matrix and K is a(2n+1+1)m-dim vector,given byF(t)=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣a11(t)W T(t)−W T(t)D T···a1i(t)W T(t)···a1m W T(t)...............a i1(t)W T(t)···a ii(t)W T(t)−W T(t)D T···a im W T(t)...............a m1(t)W T(t)···a mi(t)W T(t)···a mm W T(t)−W T(t)D T⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦(10)K=[K T1···K T m]T(11)Note that since the unknown K is of dimension(2n+1+1)m,we need(2n+1+1)m equations. 
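The remaining equations needed to pin down K come from a periodicity constraint and from collocation at interpolation points, as described next. As a purely illustrative stand-in, the Python sketch below assembles and solves such a square system for a scalar periodic system, using a plain monomial basis rather than the Chebyshev-polynomial wavelets of [6]; the system, the basis, and the collocation points are placeholder choices.

import numpy as np

def steady_state_coeffs(a_fun, u_fun, degree=8):
    """Collocation solution of dx/dt = a(t) x + u(t) for a steady state that is periodic on [-1, 1],
    using a plain monomial basis phi_j(t) = t**j as a stand-in for the wavelet basis."""
    n_basis = degree + 1
    # One equation enforces periodicity; the rest are collocation equations at interior points.
    pts = np.linspace(-1.0, 1.0, n_basis + 1)[1:-1]
    rows, rhs = [], []
    rows.append([1.0**j - (-1.0)**j for j in range(n_basis)])   # x(+1) - x(-1) = 0
    rhs.append(0.0)
    for t in pts:
        # x'(t_i) - a(t_i) x(t_i) = u(t_i)
        rows.append([(j * t**(j - 1) if j > 0 else 0.0) - a_fun(t) * t**j for j in range(n_basis)])
        rhs.append(u_fun(t))
    return np.linalg.solve(np.array(rows), np.array(rhs))

# Illustrative use: a stable scalar system driven by a smooth periodic input (placeholder functions).
K = steady_state_coeffs(lambda t: -3.0, lambda t: np.cos(np.pi * t))
x = lambda t: sum(k * t**j for j, k in enumerate(K))
print("periodicity check, x(1) - x(-1):", x(1.0) - x(-1.0))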
Now,the boundary condition(5)provides m equations,i.e.[W(+1)−W(−1)]T K i=0for i=1,...,m(12) This equation can be easily solved by applying an appropriate interpolation technique or via direct numerical convolution[11].Liu et al.[6]suggested that the remaining2n+1m equations‡The construction of wavelet basis has been discussed in detail in Reference[6]and more formally in Reference[7].For more details on polynomial wavelets,see References[8–10].Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582562K.C.TAM,S.C.WONG AND C.K.TSEare obtained by interpolating at2n+1distinct points, i,in the closed interval[−1,1],and the interpolation points can be chosen arbitrarily.Then,the approximation equation can be written as˜FK=˜U(13)where˜F= ˜F1˜F2and˜U=˜U1˜U2(14)with˜F1,˜F2,˜U1and˜U2given by˜F1=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣[W(+1)−W(−1)]T(00···0)···(00···0)(00···0)[W(+1)−W(−1)]T···(00···0)............(00···0)2n+1+1columns(00···0)···[W(+1)−W(−1)]T(2n+1+1)m columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭m rows(15)˜F2=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎣F( 1)F( 2)...F( 2n+1)(2n+1+1)m columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎪⎪⎪⎭2n+1m rows(16)˜U1=⎡⎢⎢⎢⎣...⎤⎥⎥⎥⎦⎫⎪⎪⎪⎬⎪⎪⎪⎭m elements(17)˜U2=⎡⎢⎢⎢⎢⎢⎣−U( 1)−U( 2)...−U( 2n+1)⎤⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎭2n+1m elements(18)Finally,by solving(13),we obtain all the coefficients necessary for generating an approximate solution for the steady-state system.Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582A WA VELET-BASED PIECEWISE APPROACH FOR STEADY-STATE ANALYSIS5633.WA VELET-BASED PIECEWISE APPROXIMATION METHODAlthough the above standard algorithm,given in Reference[6],provides a well approximated steady-state solution,it does not exploit the piecewise switched nature of power electronics circuits to ease computation and to improve accuracy.Power electronics circuits are defined by a set of linear differential equations governing the dynamics for different intervals of time corresponding to different switch states.In the following,we propose a wavelet approximation algorithm specifically for treating power electronics circuits.For each interval(switch state),we canfind a wavelet representation.Then,a set of wavelet representations for all switch states can be‘glued’together to give a complete steady-state waveform.Formally,consider a p-switch-state converter.We can write the describing differential equation, for switch state j,as˙x j=A j x+U j for j=1,2,...,p(19) where A j is a time invariant matrix at state j.Equation(19)is the piecewise state equation of the system.In the steady state,the solution satisfies the following boundary conditions:x j−1(T j−1)=x j(0)for j=2,3,...,p(20) andx1(0)=x p(T p)(21)where T j is the time duration of state j and pj=1T j=T.Thus,mapping all switch states to the close interval[−1,1]in the wavelet space,the basic approximate equation becomesx j,i(t)=K T j,i W(t)for−1 t 1(22) with j=1,2,...,p and i=1,2,...,m,where K T j,i=[k1,i,0···k1,i,2n+1,k2,i,0···k2,i,2n+1,k j,i,0···k j,i,2n+1]is a coefficient vector of dimension(2n+1+1)×p,which is to be found.Asmentioned previously,the state equation is transformed to the wavelet space and then solved by using interpolation.The approximation equation is˜F(t)K=−˜U(t)(23) where˜F=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣˜F˜F1˜F2...˜Fp⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦and˜U=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣˜U˜U1˜U2...˜Up⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦(24)Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582564K.C.TAM,S.C.WONG AND C.K.TSEwith ˜F0,˜F 1,˜F 2,˜F p ,˜U 0,˜U 1,˜U 2and ˜U p given by ˜F 0=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣F a 00···F b F b F a 0···00F b F a ···0...............00···F b F a (2n 
+1+1)×m ×p columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭m ×p rows (F a and F b are given in (33)and (34))(25)˜F 1=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣F ( 1)0 0F ( 2)0 0............F ( 2n +1) (2n +1+1)m columns 0(2n +1+1)m columns···0 (2n +1+1)m columns(2n +1+1)×m ×p columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭2n +1m rows(26)˜F 2=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎣0F ( 1)···00F ( 2)···0............0(2n +1+1)m columnsF ( 2n +1)(2n +1+1)m columns···(2n +1+1)m columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎦(27)˜F p =⎡⎢⎢⎢⎢⎢⎢⎢⎢⎣0···0F ( 1)0···0F ( 2)...... 0(2n +1+1)m columns···(2n +1+1)m columnsF ( 2n +1)(2n +1+1)m columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎦(28)˜U0=⎡⎢⎢⎢⎣0 0⎤⎥⎥⎥⎦⎫⎪⎪⎪⎬⎪⎪⎪⎭m ×p elements(29)Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582A WA VELET-BASED PIECEWISE APPROACH FOR STEADY-STATE ANALYSIS565˜U1=⎡⎢⎢⎢⎢⎢⎣−U( 1)−U( 2)...−U( 2n+1)⎤⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎭2n+1m elements(30)˜U2=⎡⎢⎢⎢⎢⎣−U( 1)−U( 2)...−U( 2n+1)⎤⎥⎥⎥⎥⎦(31)˜Up=⎡⎢⎢⎢⎢⎢⎣−U( 1)−U( 2)...−U( 2n+1)⎤⎥⎥⎥⎥⎥⎦(32)F a=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣[W(−1)]T0 00[W(−1)]T 0............00···[W(−1)]T(2n+1+1)m columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭m rows(33)F b=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣[−W(+1)]T0 00[−W(+1)]T 0............00···[−W(+1)]T(2n+1+1)m columns⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦⎫⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎬⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎪⎭m rows(34)Similar to the standard approach outlined in Section2,all the coefficients necessary for gener-ating approximate solutions for each switch state for the steady-state system can be obtained by solving(23).It should be noted that the wavelet-based piecewise method can be further enhanced for approx-imating steady-state solution using different wavelet levels for different switch states.Essentially, Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582566K.C.TAM,S.C.WONG AND C.K.TSEwavelets of high levels should only be needed to represent waveforms in switch states where high-frequency details are present.By using different choices of wavelet levels for different switch states,solutions can be obtained more quickly.Such an application of varying wavelet levels for different switch intervals can be easily incorporated in the afore-described algorithm.4.APPLICATION EXAMPLESIn this section,we present four examples to demonstrate the effectiveness of our proposed wavelet-based piecewise method for steady-state analysis of switching circuits.The results will be evaluated using the mean relative error (MRE)and mean absolute error (MAE),which are defined byMRE =12 1−1ˆx (t )−x (t )x (t )d t (35)MAE =12 1−1|ˆx (t )−x (t )|d t (36)where ˆx (t )is the wavelet-approximated value and x (t )is the SPICE simulated result.The SPICE result,being generated from exact time-domain simulation of the actual circuit at device level,can be used for comparison and evaluation.In discrete forms,MAE and MRE are simply given byMRE =1N Ni =1ˆx i −x i x i(37)MAE =1N Ni =1|ˆx i −x i |(38)where N is the total number of points sampled along the interval [−1,1]for error calculation.In the following,we use uniform sampling (i.e.equal spacing)with N =1001,including boundary points.4.1.Example 1:a single pulse waveformConsider the single pulse waveform shown in Figure 1.This is an example of a waveform that cannot be efficiently approximated by the standard wavelet algorithm.The waveform consists of five segments corresponding to five switch states (S1–S5),and the corresponding state equations are given by (19),where A j and U j are given specifically asA j =⎧⎪⎪⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎪⎪⎪⎪⎩0if 0 t <t 10if t 1 t <t 21if t 2 t <t 30if t 3 t <t 40if t 4 t T(39)Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582A WA VELET-BASED 
PIECEWISE APPROACH FOR STEADY-STATE ANALYSIS567S1S2S3S4S50t1t2t3t4THFigure 1.A single pulse waveform consisting of 5switch states.andU j =⎧⎪⎪⎪⎪⎪⎪⎪⎨⎪⎪⎪⎪⎪⎪⎪⎩0if 0 t <t 1H /(t 2−t 1)if t 1 t <t 2−Hif t 2 t <t 3−H /(t 4−t 3)if t 3 t <t 40if t 4 t T(40)where H is the amplitude (see Figure 1).Switch states 2(S2)and 4(S4)correspond to the rising edge and falling edge,respectively.Obviously,when the widths of rising and falling edges are small (relative to the whole switching period),the standard wavelet method cannot provide a satisfactory approximation for this waveform unless very high wavelet levels are used.Theoretically,the entire pulse-like waveform can be very accurately approximated by a very large number of wavelet terms,but the computational efforts required are excessive.As mentioned before,since the piecewise approach describes each switch interval separately,it yields an accurate steady-state waveform description for each switch interval with much less number of wavelet terms.Figures 2(a)and (b)compare the approximated pulse waveforms using the proposed wavelet-based piecewise method and the standard wavelet method for two different choices of wavelet levels with different widths of rising and falling edges.This example clearly shows the benefits of the wavelet-based piecewise approximation using separate sets of wavelet coefficients for the different switch states.Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582568K.C.TAM,S.C.WONG AND C.K.TSE0−0.2−0.4−0.6−0.8−1−20−15−10−50.20.40.60.81−0.2−0.4−0.6−0.8−10.20.40.60.81(a)051015(b)Figure 2.Approximated pulse waveforms with amplitude 10.Dotted line is the standard wavelet approx-imated waveforms using wavelets of levels from −1to 5.Solid lines are the actual waveforms and the wavelet-based piecewise approximated waveforms using wavelets of levels from −1to 1:(a)switch states 2and 4with rising and falling times both equal to 5per cent of the period;and (b)switch states 2and 4with rising and falling times both equal to 1per cent of the period.4.2.Example 2:simple buck converterThe second example is the simple buck converter shown in Figure 3.Suppose the switch has a resistance of R s when it is turned on,and is practically open-circuit when it is turned off.The diode has a forward voltage drop of V f and an on-resistance of R d .The on-time and off-time equivalent circuits are shown in Figure 4.The basic system equation can be readily found as˙x=A (t )x +U (t )(41)where x =[i L v o ]T ,and A (t )and U (t )are given byA (t )=⎡⎢⎣−R d s (t )L −1L 1C −1RC⎤⎥⎦(42)U (t )=⎡⎣E (1−s (t ))+V f s (t )L⎤⎦(43)Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582Figure3.Simple buck convertercircuit.Figure4.Equivalent linear circuits of the buck converter:(a)during on time;and(b)during off time.Table ponent and parameter values for simulationof the simple buck converter.Component/parameter ValueMain inductance,L0.5mHCapacitance,C0.1mFLoad resistance,R10Input voltage,E100VDiode forward drop,V f0.8VSwitching period,T100 sOn-time,T D40 sSwitch on-resistance,R s0.001Diode on-resistance,R d0.001with s(t)defined bys(t)=⎧⎪⎨⎪⎩0for0 t T D1for T D t Ts(t−T)for all t>T(44)We have performed waveform approximations using the standard wavelet method and the proposed wavelet-based piecewise method.The circuit parameters are shown in Table I.We also generate waveforms from SPICE simulations which are used as references for comparison. 
The approximated inductor current is shown in Figure5.Simple visual inspection reveals that the wavelet-based piecewise approach always gives more accurate waveforms than the standard method.Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582−0.5−10.51−0.5−10.51012345670123456712345671234567(a)(b)(c)(d)Figure 5.Inductor current waveforms of the buck converter.Solid line is waveform from piecewise wavelet approximation,dotted line is waveform from SPICE simulation and dot-dashed line is waveform using standard wavelet approximation.Note that the solid lines are nearly overlapping with the dotted lines:(a)using wavelets of levels from −1to 0;(b)using wavelets of levels from −1to 1;(c)using wavelets oflevels from −1to 4;and (d)using wavelets of levels from −1to 5.Table parison of MREs for approximating waveforms for the simple buck converter.Wavelet Number of MRE for i L MRE for v C CPU time (s)MRE for i L MRE for v C CPU time (s)levels wavelets (standard)(standard)(standard)(piecewise)(piecewise)(piecewise)−1to 030.9773300.9802850.0150.0041640.0033580.016−1to 150.2501360.1651870.0160.0030220.0024000.016−1to 290.0266670.0208900.0320.0030220.0024000.046−1to 3170.1281940.1180920.1090.0030220.0024000.110−1to 4330.0593070.0538670.3750.0030220.0024000.407−1to 5650.0280970.025478 1.4380.0030220.002400 1.735−1to 61290.0122120.011025 6.1880.0030220.0024009.344−1to 72570.0043420.00373328.6410.0030220.00240050.453In order to compare the results quantitatively,MREs are computed,as reported in Table II and plotted in Figure 6.Finally we note that the inductor current waveform has been very well approximated by using only 5wavelets of levels up to 1in the piecewise method with extremelyCopyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582123456700.10.20.30.40.50.60.70.80.91M R E (m e a n r e l a t i v e e r r o r )Wavelet Levelsinductor current : standard method inductor current : piecewise methodFigure parison of MREs for approximating inductor current for the simple buck converter.small MREs.Furthermore,as shown in Table II,the CPU time required by the standard method to achieve an MRE of about 0.0043for i L is 28.64s,while it is less than 0.016s with the proposed piecewise approach.Thus,we see that the piecewise method is significantly faster than the standard method.4.3.Example 3:boost converter with parasitic ringingsNext,we consider the boost converter shown in Figure 7.The equivalent on-time and off-time circuits are shown in Figure 8.Note that the parasitic capacitance across the switch and the leakage inductance are deliberately included to reveal waveform ringings which are realistic phenomena requiring rather long simulation time if a brute-force time-domain simulation method is used.The state equation of this converter is given by˙x=A (t )x +U (t )(45)where x =[i m i l v s v o ]T ,and A (t )and U (t )are given byA (t )=A 1(1−s (t ))+A 2s (t )(46)U (t )=U 1(1−s (t ))+U 2s (t )(47)Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582Figure7.Simple boost convertercircuit.Figure8.Equivalent linear circuits of the boost converter including parasitic components:(a)for on time;and(b)for off time.with s(t)defined earlier in(44)andA1=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣−R mL mR mL m00R mL l−R l+R mL l−1L l1C s−1R s C s000−1RC⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦(48)A2=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣−R mR dL mR m R dL m0−R mL m d mR m R dL l−R mR d+R lL l−1L lR mL l d m1C s00R mC(R d+R m)−R mC(R d+R m)0−R+R m+R dC R(R d+R m)⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦(49)Copyright2006John 
Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582U1=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣EL m⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦(50)U2=⎡⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎢⎣EL m−R m V fL m d mR m V fL l(R d+R m)−V f R mC(R d m⎤⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎥⎦(51)Again we compare the approximated waveforms of the leakage inductor current using the proposed piecewise method and the standard wavelet method.The circuit parameters are listed in Table III.Figures9(a)and(b)show the approximated waveforms using the piecewise and standard wavelet methods for two different choices of wavelet levels.As expected,the piecewise method gives more accurate results with wavelets of relatively low levels.Since the waveform contains a substantial portion where the value is near zero,we use the mean absolute error(MAE)forTable ponent and parameter values for simulation ofthe boost converter.Component/parameter ValueMain inductance,L m200 HLeakage inductance,L l1 HParasitic resistance,R m1MOutput capacitance,C200 FLoad resistance,R10Input voltage,E10VDiode forward drop,V f0.8VSwitching period,T100 sOn-time,T D40 sParasitic lead resistance,R l0.5Switch on-resistance,R s0.001Switch capacitance,C s200nFDiode on-resistance,R d0.001Copyright2006John Wiley&Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–5820−0.2−0.4−0.6−0.8−1−50.20.40.60.815100(a)(b)−50.20.40.60.81510Figure 9.Leakage inductor waveforms of the boost converter.Solid line is waveform from wavelet-based piecewise approximation,dotted line is waveform from SPICE simulation and dot-dashed line is waveform using standard wavelet approximation:(a)using wavelets oflevels from −1to 4;and (b)using wavelets of levels from −1to 5.Table IV .Comparison of MAEs for approximating the leakage inductor currentfor the boost converter.Wavelet Number MAE for i l CPU time (s)MAE for i l CPU time (s)levels of wavelets(standard)(standard)(piecewise)(piecewise)−1to 3170.4501710.1250.2401820.156−1to 4330.3263290.4060.1448180.625−1to 5650.269990 1.6410.067127 3.500−1to 61290.2118157.7970.06399521.656−1to 72570.13254340.6250.063175171.563evaluation.From Table IV and Figure 10,the result clearly verifies the advantage of using the proposed wavelet-based piecewise method.Furthermore,inspecting the two switch states of the boost converter,it is obvious that switch state 2(off-time)is richer in high-frequency details,and therefore should be approximated with wavelets of higher levels.A more educated choice of wavelet levels can shorten the simulationCopyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582345670.050.10.150.20.250.30.350.40.450.5M A E (m e a n a b s o l u t e e r r o r )Wavelet Levelsleakage inductor current : standard method leakage inductor current : piecewise methodFigure parison of MAEs for approximating the leakage inductor current for the boost converter.time.Figure 11shows the approximated waveforms with different (more appropriate)choices of wavelet levels for switch states 1(on-time)and 2(off-time).Here,we note that smaller MAEs can generally be achieved with a less total number of wavelets,compared to the case where the same wavelet levels are employed for both switch states.Also,from Table IV,we see that the CPU time required for the standard method to achieve an MAE of about 0.13for i l is 40.625s,while it takes only slightly more than 0.6s with the piecewise method.Thus,the gain in computational speed is significant with the piecewise approach.4.4.Example 4:flyback converter with parasitic ringingsThe final example is a flyback converter,which is shown in Figure 12.The equivalent on-time and off-time circuits are shown in Figure 
13.The parasitic capacitance across the switch and the transformer leakage inductance are included to reveal realistic waveform ringings.The state equation of this converter is given by˙x=A (t )x +U (t )(52)where x =[i m i l v s v o ]T ,and A (t )and U (t )are given byA (t )=A 1(1−s (t ))+A 2s (t )(53)U (t )=U 1(1−s (t ))+U 2s (t )(54)Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–5820−0.2−0.4−0.6−0.8−1−6−4−20.20.40.60.81024680−0.2−0.4−0.6−0.8−1−6−4−20.20.40.60.81024680−0.2−0.4−0.6−0.8−1−6−4−20.20.40.60.81024680−0.2−0.4−0.6−0.8−1−6−4−20.20.40.60.8102468il(A)il(A)il(A)il(A)(a)(b)(c)(d)Figure 11.Leakage inductor waveforms of the boost converter with different choice of wavelet levels for the two switch states.Dotted line is waveform from SPICE simulation.Solid line is waveform using wavelet-based piecewise approximation.Two different wavelet levels,shown in brackets,are used for approximating switch states 1and 2,respectively:(a)(3,4)with MAE =0.154674;(b)(3,5)withMAE =0.082159;(c)(4,5)with MAE =0.071915;and (d)(5,6)with MAE =0.066218.Copyright 2006John Wiley &Sons,Ltd.Int.J.Circ.Theor.Appl.2006;34:559–582。
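The discrete error measures (37) and (38) used to evaluate the examples above are straightforward to compute from two sampled waveforms. A minimal Python version is given below, with synthetic uniformly sampled arrays standing in for the wavelet-approximated and SPICE-simulated waveforms.

import numpy as np

def mre(x_hat, x):
    """Mean relative error, equation (37): average relative deviation of x_hat from x."""
    x_hat, x = np.asarray(x_hat, float), np.asarray(x, float)
    return np.mean(np.abs(x_hat - x) / np.abs(x))

def mae(x_hat, x):
    """Mean absolute error, equation (38): average of |x_hat_i - x_i| over the samples."""
    x_hat, x = np.asarray(x_hat, float), np.asarray(x, float)
    return np.mean(np.abs(x_hat - x))

# Illustrative use with N = 1001 uniform samples on [-1, 1], as in the paper's error calculation.
t = np.linspace(-1.0, 1.0, 1001)
x_spice = 2.0 + np.sin(np.pi * t)            # stand-in for the SPICE-simulated waveform
x_wavelet = x_spice + 0.01 * np.cos(3 * t)   # stand-in for the wavelet approximation
print("MRE:", mre(x_wavelet, x_spice), "MAE:", mae(x_wavelet, x_spice))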

Abstract
The AdaBoost algorithm of Freund and Schapire has been successfully applied to many domains [2, 10, 12], and the combination of AdaBoost with the C4.5 decision tree algorithm has been called the best off-the-shelf learning algorithm in practice. Unfortunately, in some applications, the number of decision trees required by AdaBoost to achieve a reasonable accuracy is enormously large and hence is very space consuming. This problem was first studied by Margineantu and Dietterich [7], where they proposed an empirical method called Kappa pruning to prune the boosting ensemble of decision trees. The Kappa method did this without sacrificing too much accuracy. In this work-in-progress we propose a potential improvement to the Kappa pruning method and also study the boosting pruning problem from a theoretical perspective. We point out that the boosting pruning problem is intractable even to approximate. Finally, we suggest a margin-based theoretical heuristic for this problem.

Keywords: Pruning Adaptive Boosting. Boosting Decision Trees. Intractability.
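For readers unfamiliar with the Kappa method mentioned above, the underlying idea is to measure pairwise agreement between ensemble members with the kappa statistic and keep a subset of maximally diverse trees. The Python sketch below computes kappa for two prediction vectors and greedily selects a small, diverse subset; it is a generic illustration of that idea, not the exact procedure of [7].

import numpy as np

def kappa(pred_a, pred_b, n_classes):
    """Cohen's kappa agreement between two classifiers' predictions on the same examples."""
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    observed = np.mean(pred_a == pred_b)
    pa = np.bincount(pred_a, minlength=n_classes) / len(pred_a)
    pb = np.bincount(pred_b, minlength=n_classes) / len(pred_b)
    expected = float(np.dot(pa, pb))
    return (observed - expected) / (1.0 - expected)

def greedy_diverse_subset(all_preds, n_classes, keep):
    """Greedily grow a subset of ensemble members, at each step adding the member whose
    average kappa against the already-selected members is lowest (i.e., the most diverse)."""
    chosen = [0]
    while len(chosen) < keep:
        remaining = [i for i in range(len(all_preds)) if i not in chosen]
        scores = [np.mean([kappa(all_preds[i], all_preds[j], n_classes) for j in chosen])
                  for i in remaining]
        chosen.append(remaining[int(np.argmin(scores))])
    return chosen

# Illustrative use with random "tree" predictions on 200 examples and 3 classes.
rng = np.random.default_rng(1)
ensemble = [rng.integers(0, 3, size=200) for _ in range(10)]
print("kept members:", greedy_diverse_subset(ensemble, n_classes=3, keep=4))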
the majority of the sculpture must be inflated

The Art of Inflatable Sculptures

In recent years, inflatable sculptures have gained popularity in the art world. These sculptures are created using materials such as plastic or rubber and are inflated using an air pump. The majority of the sculpture must be inflated in order to achieve the desired shape and size.

One of the benefits of inflatable sculptures is their portability. They can be easily deflated and transported to different locations, making them ideal for temporary installations or exhibitions. Additionally, inflatable sculptures are often cheaper to produce than traditional sculptures made from materials such as bronze or marble.

Inflatable sculptures can take on a variety of shapes and sizes, ranging from small, intricate designs to large, towering structures. Some artists use inflatable sculptures to create immersive installations that visitors can walk through or interact with. Others use them as a way to explore themes such as consumer culture, identity, or the environment.

One of the most well-known inflatable sculptures is "Cloud Gate," located in Chicago's Millennium Park. This massive sculpture, which measures 66 feet long and 33 feet high, is made from stainless steel and is covered in a mirror finish. When visitors stand beneath it, they can see their reflection distorted in the curved surface of the sculpture.

While inflatable sculptures may seem like a whimsical addition to the art world, they also serve as a commentary on our society's obsession with consumerism and disposable culture. Many inflatable sculptures are designed to be temporary, lasting only a few days or weeks before being deflated and discarded. This serves as a reminder that even the most seemingly permanent structures are ultimately temporary.

In conclusion, inflatable sculptures are a unique and innovative addition to the art world. They offer artists a new way to express their creativity and engage with audiences, while also serving as a commentary on our society's values and priorities. Whether you're a fan of contemporary art or simply appreciate the whimsy of inflatable sculptures, there is no denying their impact on the art world.
Advanced Computer English Vocabulary

GVPP (Generic Visual Perception Processor)
HL-PBGA (surface-mount, high heat-resistance, thin-profile plastic ball grid array package)
IA (Intel Architecture)
ICU (Instruction Control Unit)
ID (identify; identification number)
IDF (Intel Developer Forum)
IEU (Integer Execution Units)
IMM (Intel Mobile Module)
Instructions Cache (instruction cache)
Instruction Coloring (instruction classification)
IPC (Instructions Per Clock Cycle)
ISA (instruction set architecture)
KNI (Katmai New Instructions, i.e., SSE)
Latency
LDT (Lightning Data Transport bus)
Local Interconnect
MESI (Modified, Exclusive, Shared, Invalid)
MMX (MultiMedia Extensions instruction set)
MMU (Multimedia Unit)
MFLOPS (Million Floating-point Operations Per Second)
MHz (Million Hertz; megahertz)
MP (Multi-Processing, multiprocessor architecture)
MPS (MultiProcessor Specification)
MSRs (Model-Specific Registers)
NAOC (no-account OverClock, ineffective overclocking)
NI (Non-Intel)
OLGA (Organic Land Grid Array)
OoO (Out of Order execution)
PGA (Pin Grid Array; high power consumption)
PR (Performance Rate)
PSN (Processor Serial Number)
PIB (Processor In a Box, boxed processor)
PPGA (Plastic Pin Grid Array package)
PQFP (Plastic Quad Flat Package)
RAW (Read After Write)
Register Contention
Register Pressure (register shortage)
Register Renaming
Remark (chip frequency re-marking)
Resource Contention
Retirement (instruction retirement)
RISC (Reduced Instruction Set Computing)
0.13-μm CMOS Phase Shifters for X-, Ku-, and K-Band Phased Arrays

IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 42, NO. 11, NOVEMBER 200725350.13-m CMOS Phase Shifters for X-, Ku-, and K-Band Phased ArraysKwang-Jin Koh, Student Member, IEEE, and Gabriel M. Rebeiz, Fellow, IEEEAbstract—Two 4-bit active phase shifters integrated with all digital control circuitry in 0.13- m RF CMOS technology are developed for X- and Ku-band (8–18 GHz) and K-band (18–26 GHz) phased arrays, respectively. The active digital phase shifters synthesize the required phase using a phase interpolation process by adding quadrature-phased input signals. The designs are based on a resonance-based quadrature all-pass filter for quadrature signaling with minimum loss and wide operation bandwidth. Both phase shifters can change phases with less than about 2 dB of RMS amplitude imbalance for all phase states through an associated DAC control. For the X- and Ku-band phase shifter, the RMS phase error is less than 10 over the entire 5–18 GHz range. The average insertion loss ranges from 3 dB to 0.2 dB at 5–20 GHz. The input 1dB for all 4-bit phase states is typically 5.4 1.3 dBm at 12 GHz in the X- and Ku-band phase shifter. The K-band phase shifter exhibits 6.5–13 of RMS phase error at 15–26 GHz. The average insertion loss is from 4.6 to 3 dB at 15–26 GHz. The input 1.1 dBm at 24 GHz. 1dB of the K-band phase shifter is 0.8 For both phase shifters, the core size excluding all the pads and the output 50 matching circuits, inserted for measurement purpose only, is very small, 0.33 0.43 mm2 . The total current consumption is 5.8 mA in the X- and Ku-band phase shifter and 7.8 mA in the K-band phase shifter, from a 1.5 V supply voltage.Index Terms—Active phase shifters, CMOS analog integrated circuits, phased arrays, quadrature networks.I. INTRODUCTION LECTRONIC phase shifters (PSs), the most essential elements in electronic beam-steering systems such as phased-array antennas, have been traditionally developed using switched transmission lines [1]–[3], 90 -hybrid coupled lines [4]–[6], and periodic loaded lines [7]–[9]. However, even though these distributed approaches can achieve true time delay along the line sections, their physical sizes make them impractical for integration with multiple arrays in a commercial IC 30 GHz) frequencies. process, especially below K-band ( The migrations from distributed networks to lumped-element configurations, such as synthetic transmission lines with varactors (and/or variable inductors) tuning [10]–[12], lumped hybrid-couplers with reflection loads [13]–[15], or the combined topologies of lumped low-pass filters and high-pass filters [16]–[18], seem to reduce the physical dimensions of the phase shifters with reasonable performance achieved. However, for fine phase quantization levels over wide operation bandwidth,Ethe size of the lumped passive networks grows dramatically, mainly for the various on-chip inductors used, and is not suitable for integrated phased array systems on a chip. Also, in most cases, the relationships between the control signal (voltage or current) and output phase of the lumped passive phase shifters are not linear, which makes the design of the control circuits quite complex [19]. The passive phase shifters by themselves can achieve good linearity without consuming any DC power, but their large insertion loss requires an amplifier to compensate the loss, typically more than two stages at high frequencies 10 GHz), which offsets the major merits of good linearity ( and low power dissipation of the passive phase shifters. 
Compared with the passive designs, active phase shifters [20]–[27] where differential phases can be obtained by the roles of transistors rather than passive networks, can achieve a high integration level with decent gain and accuracy along with a fine digital phase control under a constrained power budget. Although sometimes referred to differently as an endless PS [20], a programmable PS [21], a Cartesian PS [23], or a phase rotator [24], the underlying principle for all cases is to interpolate the phases of two orthogonal-phased input signals through adding the I/Q inputs for synthesizing the required phase. The different amplitude weightings between the I- and Q-inputs result in different phases. Thus, the basic function blocks of a typical active phase shifter are composed of an I/Q generation network, an analog adder, and control circuits which set the different amplitude weightings of I- and Q-inputs in the analog adder for the necessary phase bits. In this work, a 4-bit (phase quantization level 22.5 ) active phase shifters to be integrated on-chip with multiple phased arrays for X-, Ku-, and K-band (8–26 GHz) applications are de65–80 GHz). signed in a 0.13- m RF CMOS technology ( Section II describes the phase shifter architecture and performance requirements in detail. More specific circuit level descriptions of the building blocks are presented in Section III. The implementation details and experimental results are discussed in Section IV. II. SYSTEM ARCHITECTURE Fig. 1 briefly describes the phased array receiver system proposed for this work. The phased array adopts the conventional RF phase-shifting architecture, which is superior to other architectures such as local oscillator (LO) or IF phase-shifting systems in that the RF output signal has a high pattern directivity so that it can substantially reject an interferer before a RF mixer, relaxing the mixer linearity and overall dynamic range requirement [28]. A single-ended SiGe or GaAs low-noise amplifier (LNA) having variable gain function sets the noise figure (NF) and gain ofManuscript received February 1, 2007; revised June 15, 2007. This work was supported by the INTEL UC-Discovery Project at the University of California at San Diego. The authors are with the Department of Electrical and Computer Engineering (ECE), University of California at San Diego, La Jolla, CA 92093 USA (e-mail: kkoh@). Digital Object Identifier 10.1109/JSSC.2007.9072250018-9200/$25.00 © 2007 IEEE2536IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 42, NO. 11, NOVEMBER 2007necessary control logic signals for the DAC and adder, using the 4-bit digital inputs from the DSP. The DAC is an indispensable element for fine digital phase controls in modern phased arrays. Decreasing the phase quantization level needs more sophisticated gain control from a higher resolution DAC, but will not result in any significant increase of the phase-shifter physical area. III. CIRCUIT DESIGN A. Quadrature All-Pass Filter (QAF) In the phase synthesis, which is based on a phase interpolation method by adding two properly weighted quadrature vector signals, the accuracy of the output phase is dominated by the orthonormal precision of the I/Q seed vectors. 
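The phase-interpolation principle described above can be checked numerically: adding the quadrature seed vectors with signed weights w_I and w_Q yields an output whose phase is arctan(w_Q/w_I). The short Python sketch below is a purely behavioral illustration (ideal I/Q inputs, no circuit effects); the weights are hypothetical values chosen to step through the sixteen 4-bit states, whereas in the actual circuit they come from the DAC-controlled VGA gains.

```python
import numpy as np

# Behavioral sketch of 4-bit phase interpolation: the output is the signed,
# weighted sum of the quadrature seed vectors.  Representing the seeds as
# phasors (I = 1, Q = j), the synthesized output phasor is w_i + j*w_q,
# so its phase is arctan(w_q / w_i) and its magnitude is sqrt(w_i^2 + w_q^2).
I_SEED = 1.0 + 0.0j        # 0-degree seed vector
Q_SEED = 0.0 + 1.0j        # 90-degree seed vector

for state in range(16):                       # sixteen 4-bit phase states
    target = 22.5 * state                     # ideal phase for this code
    # Hypothetical weights; in the real circuit these are set by the DAC.
    w_i = np.cos(np.radians(target))
    w_q = np.sin(np.radians(target))
    out = w_i * I_SEED + w_q * Q_SEED         # analog adder, behaviorally
    print(f"{state:2d}  target {target:6.1f} deg   "
          f"synthesized {np.degrees(np.angle(out)) % 360:6.1f} deg   "
          f"|out| = {abs(out):.3f}")
```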
Specifically, as the output phases heavily depend on the amplitude weightings of I- and Q-input, the output phase error is more sensitive to the amplitude mismatch than the phase mismatch of the I/Q inputs, which leads to the use of an all-pass polyphase filter ensuring equal I/Q amplitude for all , rather than a high-pass/low-pass mode one as an I/Q generation network as in [27]. However, although a polyphase filter provides a solid method of quadrature generation and is sometimes used in the LO signal path where the signal amplitude is very large, its loss often prevents it from being used in the main RF signal paths, and this is more true of multistage polyphase filters for wideband operations. To achieve high quadrature precision over wide bandwidth without sacrificing any signal loss, an - resonance based quadrature all-pass filter is developed. 1) Basic Operation: As shown in Fig. 3(a), the quadrature generation is based on the orthogonal phase splitting between ) and ) in the series - - resonators. The transfer function of the single-ended I/Q network isFig. 1. Multiple antenna receiver for phased array applications. A SiGe or GaAs LNA is used depending on the required system noise figure.the RF part, required from the overall system perspective. The system includes transformer-based (1:1) on-chip baluns for differential signaling after the LNA. The 4-bit differential phase shifter, presented in this work, should provide about 5 0 dB level of insertion loss and higher than 5 dBm of input with less than 10 mW of power dissipation from a 1.5 V supply voltage. The input impedance of the phase shifter should be matched with the output impedance of the LNA ( 50 ). As the phase shifter will eventually be integrated on-chip with an active signal combiner network whose input impedance is capacitive 50 fF, i.e., a gate input of a source follower), the output ( matching in the phase shifter is not necessary. However, the phase shifter should provide a digital interface to the DSP for 4-bit phase controls. The building blocks of the differential active phase shifter are shown in Fig. 2. A differential input signal is split into quadrature phased I- and Q-vector signals using a quadrature all-pass filter (QAF), which provides differential 50 matching with the previous stage as well. The QAF is based on - series resonators, utilizing the series resonance to minimize loss, which will be discussed in detail in the next section. An analog differential adder, composed of two Gilbert-cell type signed variable gain amplifiers (VGAs), adds the I- and Q-inputs from the QAF with proper amplitude weights and polarities, giving an interpolated output signal with a synthetic phase of and magnitude of . For 4-bit phase resolution, the different amplitude weightings of each input of the adder can be accomplished through changing the gain of each VGA differently. A current-mode 3-bit DAC takes this role by controlling the bias current of the VGAs. The logic encoder synthesizes the(1)where and Q .The benefits of this I/Q network are that it can guarantee 90 phase shift between Iand Q-paths for all due to a zero at DC from the I-path transfer function, and it can achieve 3 dB voltage gain at resonance fre. The operating bandwidth is high due to quency when Q the relatively low , although the I/Q output magnitudes are exact only at as the quadrature relationships rely on the low-pass and high-pass characteristics. 
Even with these advantages, the single-ended I/Q network does not seem to be very attractive because the quadrature accuracy in the single-ended I/Q network is very sensitive to any parasitic loading capacitance, discussed further in this section. Fig. 3(b) and (c) show the transformation to a balanced second-order all-pass configuration to increase the bandwidth and to make it less sensitive to loading effects. After building up the resonators differentially [Fig. 3(b)], opening nodes A and B from the ground can eliminate the redundant series of and through resonance without causing any difference in theKOH AND REBEIZ: 0.13- m CMOS PHASE SHIFTERS FOR X-, Ku-, AND K-BAND PHASED ARRAYS2537Fig. 2. Building blocks of the active phase shifter.Fig. 3. Generation of the resonance-based second-order all-pass quadrature network. (a) Single-ended I/Q network based on low-pass and high-pass topologies. (b) Differential formation of (a). (c) Elimination of redundancy. (d) Differential quadrature all-pass filter.quadrature operation [Fig. 3(c)]. The final form of the QAF [Fig. 3(d)] has a transfer function given by(2)functions, respectively. The symmetric zero locations between the transfer functions can ensure equal I/Q amplitude for all . For the quadrature phase splitting between the I- and Q-paths at , the difference of output phases contributed a frequency of by each right half-plane zero of the transfer functions must be . Another 45 contribution comes from the role 45 at . Equation (4) must thereof left half-plane zeroes at fore be satisfied, and the solutions are shown in (5).. Intuitively, in Fig. 3(d), while shows high-pass characteristic in the view of , it also . Thereshows low-pass characteristics from the point of fore, the linear combination of these characteristics leads to the all-pass operations shown in (2). The interesting point in (2), compared with (1), is that the Q is effectively divided by half, hence increasing the operation bandwidth, because of the elimination of a redundant series - during the differential transform. The differential I/Q network shows for , which is the all and orthogonal phase splitting at . double-pole frequency of (2) when Q 2) Bandwidth Extension: A slight lowering of the Q from 1 can split the double-pole into two separate negative real poles. The equations in (3) show the poles and zeroes of the transfer are the two left half-plane poles, and functions, where and are the zeroes of the I- and Q-path transferwhere(3)(4)(5)2538IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 42, NO. 11, NOVEMBER 2007input matched differentially to , the input reflection coefficient ) at can be given as ((7)Fig. 4. I/Q phase error characteristics at the increase of R. f L : pH (Q GHz, f GHz), C : , R=R R R; R and R=R : .= 639 = 18 6@12 + 1 = 48 2 1 = 01= 50 =02= 12 GHz, = 275 fF andIt is noted that if Q in (5), which is possible by increasing from the original value of , then one can obtain two frequencies where the QAF can generate an exact 90 phase difference between the I/Q outputs, extending the operation bandwidth further, and these two frequencies are identical to the pole frequencies of the I- and Q-path transfer functions. The phase and at error from the 90 relationships between , defined as , can be expressed asWithin 45% 80% variation of , (7) results in , corresponding to roughly below 10 dB input return loss over more than 100% bandwidth. 
3) Loading Effect: It is worthwhile to consider the errors caused by the loading effects on the QAF, which we have deliberately ignored for simplicity. Fig. 5 addresses this problem conceptually in a single-ended manner, where the , mainly originated from the parasitic loading capacitance input gate capacitance of a transistor in the next stage, can ( ) and modify the output impedances of ( ) differently. Intuitively, will lower the loaded , hence increasing the resisQ of a high-pass network, . Also, will tance and decreasing the inductance of reduce the resistance and increase capacitance of the low-pass network, , hence effectively increasing the loaded Q. The are the by-products of these impedance modifications by and quadrature errors at the output. The degradation of phase and amplitude errors from this loading effect will be mainly dependent on the ratio of , as given in (8) and (9), respectively, for the case of the single-ended I/Q network. The is defined in the same manner as and at .(6)is the offset frequency from the center frequency of . according to Fig. 4 presents the simulation results of for two cases of Q ( ) and Q ( ). means a net increment of from the ideal . The simulations were done at GHz value of pH ( by SPECTRE with process models, GHz and GHz), fF, , given by the IBM 0.13- m CMOS technology. The theoretical values agree well with simulations. The discrepancy at high freof the given inductor. Thequencies is due to the limited from 35% to oretically, one can achieve less than 5 of about 50% variation of with Q . However, this error frequency range can be increased further with a slight increase of . Typically, a 10% increment of exhibits less than 5 of over 0.5 0.65 of . The penalty in this bandwidth extension by the pole-splitting technique is a small reQ at . duction of voltage gain which can be given as For example, when Q is 0.83, the gain is 0.7 dB lower from the , and is acceptable for most ideal 3 dB voltage gain at applications. It is also noteworthy that the effective decrement of Q by half in the QAF makes possible a real value of input impedance over a wider bandwidth and facilitates impedance matching. With(8) dB (9)The all-pass mode differential configuration can suppress these errors because any output node impedance in Fig. 3(d) is composed of low-pass and high-pass networks as mentioned, and provides counterbalances on the effect of . Fig. 6 shows the simulation results of the quadrature errors caused by at GHz for the single-ended and differential QAF, along with the theoretical values evaluated from (8) and (9). For the most practical range of , the differential I/Q network can by more than half of that from the single-ended reduce one, and the slope of is much smaller in the differential case than in the single-ended one. As the capacitance of the QAF becomes smaller with increasing operating frequencies, can go up to moderate values for millimeter-wave applications, causing substantial errors. The lower impedance design of the QAF, where can be kept constant, hence diminishing , can increased while relieve this potential problem at the expense of more powerKOH AND REBEIZ: 0.13- m CMOS PHASE SHIFTERS FOR X-, Ku-, AND K-BAND PHASED ARRAYS2539Fig. 5. Single-ended I/Q network under capacitive loading.Fig. 6. Quadrature errors from the loading effect of C at f = f = 12 GHz. (a) Phase error. (b) Amplitude error. All simulations were done by SPECTRE with = 18:6 @ 12 GHz, f = 50 GHz), C = 275 fF and R = 48:2 . 
foundry passive models (L = 639 pH (Qconsumption for driving the low impedance from the previous stage of the QAF. Another appropriate solution is to insert a from source follower after the QAF, which will minimize the gate of an input transistor of the following stage. In this work, for the X- and Ku-band phase shifter, the [ in QAF is designed with differential 50 Fig. 3(d)] for impedance matching with the previous stage. GHz, the final optimized values of and For through SPECTRE simulations are pH ( GHz) and fF. This takes into account about which includes 70 fF of input pad capacitance and 50 fF of the input capacitance of the following stage (a differential adder) and the parasitic layout capacitance. For the K-band phase shifter, the optimized passive component values are pH ( GHz), fF and [ in Fig. 3(d)]. The inductors are realized incorporating the parasitic layout inductance using the foundry models with full-wave electromagnetic simulations. With all the parasitic capacitances, Monte Carlo simulations assuming( 5%), ( 5%) and Gaussian distributions of ( 10%), show about a maximum 5 of quadrature phase error within 1 statistical variations at 12 GHz. Within 3 variations, the maximum I/Q phase error is 15 and I/Q amplitude mismatch is 1.2 0.3 dB for the X- and Ku-band QAF. For the K-band design, the phase error distribution is 5 13 within 1 variations at GHz. Within 3 variations, the phase error ranges from 15 to 18 and amplitude mismatch is 2.3 0.6 dB, which are just enough for distinguishing 22.5 of phase quantization levels. B. Analog Differential Adder Fig. 7(a) shows the analog differential signed-adder, which adds the - converted I- and Q-input from a QAF together in the current domain at the output node, synthesizing the required phase. The size of the input transistors ( ) is optimized through SPECTRE simulations with respect to the linearity. The polarity of each I/Q input can be reversed by switching the tail current from one side to the other with2540IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 42, NO. 11, NOVEMBER 2007Fig. 7. (a) Analog differential adder with output impedance matching networks. (b) Three-bit differential DAC for bias current controls of the adder.switches and . As the phase shifter is designed to be integrated with multiple arrays on-chip, the small form factor is a critical consideration, leading to the use of an active and , instead of an inductor load composed of on-chip spiral inductor. The equivalent output impedance from , where the active inductor load can be expressed as , [29]. The and are gate-source and gate-drain parasitic capacitances of , respectively, and is the transconductance of , . For expressed as , , and constitute a measurement purposes only, wideband 50 matching T-network (differentially 100 ) of which maximum circuit node Q looking toward the 50 load from the matching network is less than 1. For the X- and Ku-band phase shifter, the total bias current ( ) in the differential adder is 5 mA from a and 1.5 V supply voltage. This provides roughly nH ( GHz) from the active inductor load with and of . In the SPECTRE simulations including I/O pad parasitics, the phase shifter shows 2 0 dB of differential voltage gain at 5-20 GHz. The peak gain variance is less than 2.4 dB and the worst case phase error at 12 GHz is less than 5.2 for all 4-bit phase states. The phase shifter achieves typically 4.7 dBm of at 12 GHz. The is below 10 dB at 8–16.7 GHz input and is less than 10 dB at 6.7–16 GHz with pH GHz), fF and fF. 
( For the K-band phase shifter, with 7 mA of DC current in the adder, and with and of ( and pH), the differential voltage gain 2.5 dB at 15-30 GHz in simulations. At 24 GHz, the is 6 peak gain error is less than 3.5 dB and the peak phase error is less than 9.5 for all phase bits. The input at 24 GHz is 1.3 dBm. The is less than 10 dB at 15–33 GHz and is below 10 dB at 15–28.2 GHz with pH ( GHz), fF and fF in the SPECTRE simulations.TABLE I LOGIC MAPPING TABLE FOR THE SWITCH CONTROLSC. DAC The gain controls of the I- and Q-path of the adder for 4-bit phase resolution can be achieved by changing the bias current ratios between the two paths. For instance, a 6:1 ratio between and results in 6:1 ratio between the I- and Q-paths of the adder based on the long channel model, leading , which is a good to an output phase of approximation for low-level gate overdriving and well matched with the simulation results. This is only 0.3 error from the 4-bit resolution, indicating that the phase shifter can achieve a high accuracy by simple DC bias current controls. A current-mode differential DAC shown in Fig. 7(b) sets the bias current ratiosKOH AND REBEIZ: 0.13- m CMOS PHASE SHIFTERS FOR X-, Ku-, AND K-BAND PHASED ARRAYS2541Fig. 8. Chip microphotograph. (a) X- and Ku-band phase shifter. (b) K-band phase shifter.of the I- and Q-paths of the adder through mirroring to the curfor 4-bit phase synthesis. Table I shows the rent source of control logics for the pMOS switches , , and in the DAC, and nMOS switches and in the adder. “ ” means logically high ( on-state) and “ ” is logically low ( off-state). , where , , 1, 2, and 3, is just the logic inversion of . The differential architecture of the phase shifter causes the 0 -bit, 22.5 -bit, and 45 -bit to be fundamental bits, as the others can be obtained by reversing the switch polarities of these bits in the adder and/or in the DAC (see Table I). It should be also emphasized that the logic and scaling of current sources of the DAC are set such that for all 4-bit phase states, the load current in the adder keeps a constant value, i.e., for all phase bits. This results in a constant impedance of the active inductor load, and the same for all amplitude response proportional to phase states. For instance, for all the cases of 0 -bit, 22.5 -bit, and 45 -bit, the scaling factors of the output load current in the adder have the same values of 5.6 and the gain can , where is a constant be expressed as , process parameters determined by a transistor size of and , load impedance and current mirroring such as ratio from DAC to adder. To improve current matching, the m). The DAC is designed with long channel CMOS ( control logics are implemented with static CMOS gates. IV. EXPERIMENTAL RESULTS AND DISCUSSIONS The phase shifters are realized in IBM 0.13- m one-poly eight-metal (1P8M) CMOS technology. To improve signal balance, all the signal paths have symmetric layouts. The fabricated die microphotographs are shown in Fig. 8. The core size excluding output matching networks for both phase shifters is 0.33 0.43 mm , and the total size including all the pads and matching circuits is 0.75 0.6 mm . The phase shifters are measured on-chip with external 180 hybrid couplers (Krytar, loss 0.5–1.5 dB @ 5–26 GHz) for differential signal inputsand outputs. The balun loss is calibrated out with a standard differential SOLT calibration technique using a vector signal network analyzer (Agilent, PNA-E8364B). 
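Returning to the square-law bias-current example discussed in Section III-C above: under the long-channel assumption the transconductance of each VGA scales with the square root of its tail current, so a bias-current ratio I_I : I_Q maps to an amplitude ratio of sqrt(I_I/I_Q) and a synthesized phase of arctan(sqrt(I_Q/I_I)). The minimal sketch below reproduces the quoted 6:1 case, giving roughly 0.3 degrees of error from the ideal 22.5-degree step; the square-law gm model is an idealization, and real devices at low gate overdrive will deviate from it.

```python
import numpy as np

def synthesized_phase_deg(i_i, i_q):
    """Phase produced by the I/Q adder when the VGA transconductances follow
    the long-channel square law (gm proportional to sqrt of bias current)."""
    g_i = np.sqrt(i_i)          # relative gm of the I-path VGA
    g_q = np.sqrt(i_q)          # relative gm of the Q-path VGA
    return np.degrees(np.arctan2(g_q, g_i))

# 6:1 bias-current split quoted in the text for the 22.5-degree state.
phase = synthesized_phase_deg(6.0, 1.0)
print(f"synthesized phase = {phase:.2f} deg, "
      f"error vs. 22.5 deg = {22.5 - phase:.2f} deg")   # about 0.3 deg
```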
As the input reflection coefficient is dominantly set by the quadrature network, a changing phase at the adder does not discharacteristic. The characteristics also do not turb the change for different phase settings, as the output load currents are the same for all phase states, resulting in a constant output impedance from the active load as discussed. Fig. 9 displays the typical measurement results of the input and output return losses, together with the simulation curves. For X- and Ku-band , converted into differential 50 referphase shifters, the ence using ADS, is below 10 dB from 8.5 GHz to 17.2 GHz. In differential 100 reference, the phase shifter shows less than 10 dB of in the 6.3–16.5 GHz range. For the K-band phase is below 10 dB at 16.8–26 GHz and shifter, the measured the is less than 10 dB at 17–26 GHz. The external 180 hybrid couplers limit the maximum measurement frequency for the K-band case. A. QAF Characteristics The measurement of the 0 -/180 -bit and 90 -/270 -bit at the final output of the phase shifters should reflect the QAF characteristics exactly (Fig. 10). The dashed curves correspond to simulations with 50 fF loading capacitance. For the QAF of the X- and Ku-band phase shifters, the peak I/Q phase error is less than 5.5 and gain error is less than 1.5 dB at 12 GHz. The 10 phase error frequency range is from 5.5–17.5 GHz. The peak I/Q gain error at 5–20 GHz is less than 2.4 dB. For the K-band QAF, the quadrature phase error varies from 2.7 at 15 GHz to a maximum of 15.2 at 26 GHz. The I/Q amplitude error of the K-band QAF is 1.76–3.3 dB at 15–26 GHz. B. X- and Ku-Band Phase Shifters For the X- and Ku-band phase shifters, Fig. 11(a) and (b) shows the frequency responses of the unwrapped insertion phases and insertion gains according to the 4-bit digital input2542IEEE JOURNAL OF SOLID-STATE CIRCUITS, VOL. 42, NO. 11, NOVEMBER 2007Fig. 9. Measured results of input and output return loss of the phase shifters. (a) S of the K-band phase shifter. (d) S of the K-band phase shifter.of the X- and Ku-band phase shifter. (b) Sof the X- and Ku-band. (c) SFig. 10. Quadrature error characteristics of the I/Q networks measured at the output of the adder. (a) I/Q phase error of the X- and Ku-band QAF. (b) I/Q amplitude error of the X- and Ku-band QAF. (c) I/Q phase error of the K-band QAF. (d) I/Q amplitude error of the K-band QAF. All simulations were done with SPECTRE.KOH AND REBEIZ: 0.13- m CMOS PHASE SHIFTERS FOR X-, Ku-, AND K-BAND PHASED ARRAYS2543Fig. 11. Measured insertion phase and gain of the X- and Ku-band phase shifter with 4-bit digital inputs. (a) Insertion phase. (b) Insertion gain. (c) RMS phase error. (d) RMS gain error.codes, measured from 5 to 20 GHz. At 12 GHz, the measured peak-to-peak phase error is 8.5 9.1 and the peak-to-peak insertion gain is 1.5 1.2 dB for all phase states. The average differential gain ranges from 3 dB at 20 GHz to 0.2 dB at around 11–12 GHz. The peak-to-peak gain variations are minimum 1.4 dB at 7 GHz and maximum 5.4 dB at 20 GHz. With reference to 0 -bit which comes from a 0000 digital input code, the RMS phase error can be defined asThe RMS phase error and gain error, calculated at each measured frequency, are shown in Fig. 11(c) and (d), respectively. The phase shifter exhibits less than 5 RMS phase error from 5.3 GHz to about 12 GHz. The 10 RMS error frequency range goes up to 18 GHz, achieving 5-bit accuracy across more than 3:1 bandwidth. The RMS gain error is less than 2.2 dB for 5–20 GHz. 
The phase shifter achieves 5.4 1.3 dBm of input at 12 GHz for all 4-bit phase states with 5.8 mA of DC current consumption from a 1.5 V supply voltage. C. K-Band Phase Shifter Fig. 12(a) shows the measured insertion phases with 4-bit digital input codes of the K-band phase shifter. The insertion loss characteristics are shown in Fig. 12(b), and the RMS phase errors and gain errors versus frequency are presented in Fig. 12(c) and (d), respectively. The RMS phase error is 6.5 –13 at 15–26 GHz. The average insertion loss varies from 4.6 dB at 15 GHz to 3 dB at around 24.5–26 GHz. The peak-to-peak gain variations are minimum 3.3 dB at 15.4 GHz and maximum 6.3 dB at 25.6 GHz. The RMS gain error is less than about 2.1 dB from 15 to 26 GHz. As shown in Fig. 11(c) and (d) and in Fig. 12(c) and (d), the RMS phase errors versus frequency have strong correlations with the RMS gain error patterns versus frequency. This is a typical characteristic of the proposed phase shifter; because the output phase in the phase shifter is set by the gain factors of the I- and Q-(10)and means the th output phase error from where the ideal phase value corresponding to the th digital input sequence in Table I. Similarly, the RMS gain error can be defined asdB(11)dB dB dB . The is th inwhere sertion gain in dB-scale corresponding to th digital input order is the average insertion gain in dB-scale also. and。
Effects of Discretization Methods on the Performance of Resonant Controllers

Effects of Discretization Methods on the Performance of Resonant ControllersAlejandro G.Yepes,Student Member,IEEE,Francisco D.Freijedo,Member,IEEE, Jes´u s Doval-Gandoy,Member,IEEE,´Oscar L´o pez,Member,IEEE,Jano Malvar,Student Member,IEEE,and Pablo Fernandez-Comesa˜n a,Student Member,IEEEAbstract—Resonant controllers have gained significant impor-tance in recent years in multiple applications.Because of their high selectivity,their performance is very dependent on the ac-curacy of the resonant frequency.An exhaustive study about dif-ferent discrete-time implementations is contributed in this paper. Some methods,such as the popular ones based on two integrators, cause that the resonant peaks differ from expected.Such inac-curacies result in significant loss of performance,especially for tracking high-frequency signals,since infinite gain at the expected frequency is not achieved,and therefore,zero steady-state error is not assured.Other discretization techniques are demonstrated to be more reliable.The effect on zeros is also analyzed,establishing the influence of each method on the stability.Finally,the study is extended to the discretization of the schemes with delay compensa-tion,which is also proved to be of great importance in relation with their performance.A single-phase active powerfilter laboratory prototype has been implemented and tested.Experimental results provide a real-time comparison among discretization strategies, which validate the theoretical analysis.The optimum discrete-time implementation alternatives are assessed and summarized.Index Terms—Current control,digital control,power condition-ing,pulsewidth-modulated power converters,Z transforms.N OMENCLATUREVariablesC Capacitance.f Frequency in hertz.G(s)Model in the s domain.G(z)Model in the z domain.H(s)Resonant controller in the s domain.H(z)Resonant controller in the z domain.i Current.K Gain of resonant controller.L Inductance value.m Pulsewidth modulation(PWM)duty cycle. N Number of samples to compensate with com-putational delay compensation.n Highest harmonic to be compensated. Manuscript received September17,2009;revised December29,2009.Date of current version June18,2010.This work was supported by the Spanish Min-istry of Education and Science under Project DPI2009-07004.Recommended for publication by Associate Editor P.Mattavelli.The authors are with the Department of Electronic Technology,University of Vigo,Vigo36200,Spain(e-mail:agyepes@uvigo.es;fdfrei@uvigo.es;jdoval@ uvigo.es;olopez@uvigo.es;janomalvar@uvigo.es;pablofercom@uvigo.es). Color versions of one or more of thefigures in this paper are available online at .Digital Object Identifier10.1109/TPEL.2010.2041256R Equivalent series resistance value.R(s)Resonant term in the s domain.R(z)Resonant term in the z domain.T Period.θPhase of grid voltage.V V oltage.ωAngular frequency in radians per second.u(s)Input value.y(s)Output value.Subscripts1Fundamental component.a Actual value(f).c Generic current controller(G).d Degree of freedom in the zero-pole matchingdiscretization method(K).dc Relative to the dc link(V).f Relative to the passive inductivefilter(V,i,L,R,and G).I Equivalent to the double of the integral gainof a proportional+integral(PI)controller indq frame(K).k Relative to the k th harmonic(H,R,K P,andK I).L Relative to the load(i).Lh Relative to the harmonics of the load(i).o Resonant frequency of a continuous resonantterm or resonant controller(f andω).P Equivalent to the double of the proportionalgain of a PI controller in dq frame(K). 
PCC Relative to the point of common coupling(V).PL Relative to the plant(G).rms Root mean square.s Relative to sampling(f and T).src Relative to the voltage source(V,i,and L). sw Relative to switching(f).T Sum of the gains for every value of harmonicorder k(K P).X Resonant term R or resonant controller Hdiscretized with method X,where X∈{zoh,foh,f,b,t,tp,zpm,imp}.X&Y Resonant term R or resonant controller Himplemented with two discrete integrators,with the direct one discretized with method Xand the feedback one with method Y,whereX,Y∈{zoh,foh,f,b,t,tp,zpm,imp}.0885-8993/$26.00©2010IEEEX−Y Resonant controller H VPI(z),in whichR1(s)is discretized with method X andR2(s)with method Y,where X,Y∈{zoh,foh,f,b,t,tp,zpm,imp}. Superscripts∗Reference value.1Resonant term R of the form s/(s2+ω2o). 2Resonant term R of the form s2/(s2+ω2o).d Including delay compensation(H and R). PR Resonant controller H of the PR type.VPI Resonant controller H of the VPI type. Others∆x Difference between x and its target value(i f).ˆx Estimated value of x(θ1andω1).I.I NTRODUCTIONI N recent years,resonant controllers have gained significantimportance in a wide range of different applications due to their overall good performance.They have been applied with satisfactory results to cases such as distributed power generation systems[1],[2],dynamic voltage regulators[3],[4],wind tur-bines[5],[6],photovoltaic systems[7],[8],fuel cells[9],[10], active rectifiers[11],active powerfilters(APFs)[12]–[17], microgrids[18],and permanent magnet synchronous motors [19].Resonant controllers allow to track sinusoidal references of arbitrary frequencies with zero steady-state error for both single-phase and three-phase applications.An important saving of computational burden and complexity is obtained due to their implementation in stationary frame,avoiding the coordinates transformations,and providing perfect tracking of both positive and negative sequences[1],[13],[14],[20]–[22].Resonant con-trollers in synchronous reference frame(SRF)have been also proposed to control pairs of harmonics simultaneously when no unbalance exist[7],[15]–[17],[22],[23].An essential step in the implementation of resonant digital controllers is the discretization.Because of the narrow band and infinite gain of resonant controllers,they are specially sensitive to this process.Actually,a slight displacement of the resonant poles causes a significant loss of performance.In the case of proportional+resonant(PR)controllers[14],[20]–[22],even for small frequency deviations,the effect of resonant terms becomes minimal,and the PR controller behaves just as a proportional one[14].The resonant regulator proposed in[16]is less sensitive to these variations when cross coupling due to the plant appears in the dq frame,but if these deviations in the resonant poles are present,it does not achieve zero steady-state error either. 
Furthermore,if selectivity is reduced to increase robustness to frequency variations,undesired frequencies and noise may be amplified.Thus,an accurate peak position is preferable to low selectivity.Therefore,it is of paramount importance to study the effectiveness of the different alternatives of discretization for implementing digital resonant controllers,due to the critical characteristics of their frequency response.As proved in this paper,many of the existing discretization techniques cause a displacement of the poles.This fact results in a deviation of the frequency at which the infinite gain occurs with respect to the expected resonant frequency.This error becomes more significant as the sampling time and the desired peak frequency increase.In practice,it can be stated that most of these discretization methods result in suitable implementations when tracking50/60Hz(fundamental)references and even for low-order harmonics.However,as shown in this paper,some of them do not perform so well in applications in which signals of higher frequencies should be tracked,such as APFs and ac motor drives. This error has special relevance in the case of implementations based on two integrators,since it is a widely employed option mainly due to its simplicity for frequency adaptation[8],[13], [15],[23]–[25].Discretization also has an effect on zeros,modifying their distribution with respect to the continuous transfer function. These discrepancies should not be ignored because they have a direct relation with stability.In fact,resonant controllers are often preferred to be based on the Laplace transform of a cosine function instead of that of a sine function because its zero im-proves stability[13],[19].In a similar way,the zeros mapped by each technique will affect the stability in a different man-ner.Consequently,it is also convenient to establish which are the most adequate techniques from the point of view of phase versus frequency response.However,for large values of the resonance frequency,the computational delay affects the system performance and may cause instability.Therefore,a delay compensation scheme should be implemented[14],[15],[17],[23].It can be per-formed in the continuous domain as proposed in[15].However, the discretization of that scheme leads to several different expressions.A possible implementation in the z domain was posed in[14],but there are other possibilities.Consequently,it should be analyzed how each method affects the effectiveness of the computational delay compensation.This aspect has a significant relevance since it will determine the stability at the resonant frequencies.The study of these effects of the discretization on resonant controllers has not been analyzed in the existing literature. Therefore,it is of paramount importance to analyze how each method affects the performance in relation with these aspects.A single-phase APF laboratory prototype has been built to check the theoretical approaches,because it is an application very suitable for proving the controllers performance when tracking different frequencies,and results can be extrapolated to other single-phase and three-phase applications where a perfect tracking/rejection of references/disturbances is sought through resonant controllers.The paper is organized as follows.Section II presents alterna-tive digital implementations of resonant controllers.The reso-nant peak displacement depending on the discretization method, as well as its influence on stability,is analyzed in Section III. 
Several discrete-time implementations including delay compen-sation,and a comparison among them,are posed in Section IV. Section V summarizes the performance of the digital imple-mentations in each aspect and establishes the most optimum alternatives depending on the existing requirements.Finally, experimental results of Section VII validate the theoreticalanalysis regarding the effects of discretization on the perfor-mance of resonant controllers.II.D IGITAL I MPLEMENTATIONS OF R ESONANT C ONTROLLERS A.Resonant Controllers in the Continuous DomainA PR controller can be expressed in the s domain as[14],[20]–[22]H PR(s)=K P+K Iss2+ω2o=K P+K I R1(s)(1)withωo being the resonant angular frequency.R1(s)is the resonant term,which has infinite gain at the resonant frequency (f o=ωo/2π).This assures perfect tracking for components rotating at f o when implemented in closed-loop[21].R1(s) is preferred to be the Laplace transform of a cosine function instead of that of a sine function,since the former provides better stability[13],[19].H PR(s)in stationary frame is equivalent to a propor-tional+integral(PI)controller in SRF[21].However,if cross coupling due to the plant is present in the dq frame,unde-sired peaks will appear in the frequencies around f o in closed loop[17].This anomalous behavior worsens even more the per-formance when frequency deviates from its expected value.An alternative resonant regulator,known as vector PI(VPI)con-troller,is proposed in[16]:H VPI(s)=K P s2+K I ss2+ω2o.(2)The VPI controller cancels coupling terms produced when the plant has the form1/(sL f+R f)[16],[17],[23],such as in shunt APFs and ac motor drives,with L f and R f being, respectively,the inductance and the equivalent series resistance of an R–Lfilter.Parameters detuning due to estimation errors in the values of L f and R f has been proved in[17]to have small influence on the performance.H VPI(s)can be decomposed as the sum of two resonant terms,R1(s)and R2(s),as follows:H VPI(s)=K Ps2s2+ω2o+K Iss2+ω2o=K P R2(s)+K I R1(s).(3) Equation(3)permits to discretize R1(s)and R2(s)with dif-ferent methods.In this manner,the most optimum alternative for H VPI(z)will be the combination of the most adequate discrete-time implementation for each resonant term.B.Implementations Based on the Continuous Transfer Function DiscretizationTable I shows the most common discretization methods.The Simpson’s rule approximation has not been included because it transforms a second-order function to a fourth-order one,which is undesirable from an implementation viewpoint[26].The techniques reflected in Table I have been applied to R1(s) and R2(s),leading to the discrete mathematical expressions shown in Table II.T s is the controller sampling period and f s=1/T s is the sampling rate.From Table II,it can be seen thatTABLE IR ELATIONS FOR D ISCRETIZING R1(s)AND R2(s)BY D IFFERENT METHODS the effect of each discretization method on the resonant poles displacement will be equal in both R1(s)and R2(s),since each method leads to the same denominator in both resonant terms. 
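The resonant-peak displacement that motivates this comparison can be reproduced numerically by discretizing R1(s) = s/(s^2 + w_o^2) with several methods and locating the maximum of the resulting discrete-time frequency response. The sketch below uses scipy.signal.cont2discrete as a stand-in for the closed-form expressions of Table II; the 350 Hz / 10 kHz values follow the seventh-harmonic example used later in the paper, and the Tustin-with-prewarping and zero-pole-matching variants are omitted because they are not built into cont2discrete.

```python
import numpy as np
from scipy import signal

f_o = 350.0                      # 7th harmonic of 50 Hz, as in the paper's examples
f_s = 10e3                       # 10 kHz sampling rate, also used in the paper
w_o = 2 * np.pi * f_o
T_s = 1 / f_s

# Continuous resonant term R1(s) = s / (s^2 + w_o^2)
num, den = [1.0, 0.0], [1.0, 0.0, w_o**2]

freqs = np.linspace(300.0, 400.0, 20001)     # fine grid around f_o (Hz)

for method in ("zoh", "foh", "impulse", "euler", "backward_diff", "bilinear"):
    bd, ad, _ = signal.cont2discrete((num, den), T_s, method=method)
    w, h = signal.freqz(np.ravel(bd), ad, worN=2 * np.pi * freqs / f_s)
    f_peak = freqs[np.argmax(np.abs(h))]
    print(f"{method:13s}: resonant peak at {f_peak:7.2f} Hz "
          f"(shift {f_peak - f_o:+6.2f} Hz)")
```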
It should be noted that zero-pole matching(ZPM)permits a degree of freedom(K d)to maintain the gain for a specific frequency[26].C.Implementations Based on Two Discrete IntegratorsThe transfer function H PR(s)can be discretized by decom-posing R1(s)in two simple integrators,as shown in Fig.1(a) [13].This structure is considered advantageous when imple-menting frequency adaptation,since no explicit trigonometric functions are needed.Whereas other implementations require the online calculation of cos(ωo T s)terms,in Fig.1schemes the parameterωo appears separately as a simple gain,so it can be modified in real time according to the actual value of the frequency to be controlled.Indeed,it is a common practice to implement this scheme due to the simplicity it permits when frequency adaptation is required[13],[15],[24],[25].An analogous reasoning can be applied to H VPI(s),leading to the block diagram shown in Fig.1(b).Instead of developing an equivalent scheme to the total transfer function H VPI(s), it could be obtained as an individual scheme for implementing each resonant term R1(s)and R2(s)could be obtained,but in this case the former is preferable because of the saving of resources.It has been suggested in[8]to discretize the direct integrator of Fig.1(a)scheme using forward Euler method and the feedback one using the backward Euler method.Additional alternatives of discretization for both integrators have been analyzed in[25], and it was also proposed to use Tustin for both integrators,or to discretize both with backward Euler,adding a one-step delay in the feedback line.Nevertheless,using Tustin for both integrators poses implementation problems due to algebraic loops[25].In this paper,these proposals have been also applied to the block diagram shown in Fig.1(b).Table III shows these three discrete-time implementations of the schemes shown in Fig.1.TABLE IIz -D OMAIN T RANSFER F UNCTIONS O BTAINED BY D ISCRETIZING R 1(s )AND R 2(s )BY D IFFERENT METHODSFig.1.Block diagrams of frequency adaptive resonant controllers (a)H P R (s )and (b)H V P I (s )based on two integrators.It should be noted that H jt&t (z )and H j t (z )are equivalent for both j =PR and j =VPI ,since the Tustin transformation is based on a variable substitution.The same is true for the rest of methods that consist in substituting s as a function of z .However,zero-order hold (ZOH),first-order hold (FOH),and impulse invariant methods applied separately to each integratordo not lead to H j zoh,H j foh ,and H jimp ,respectively.Indeed,to dis-cretize an integrator with ZOH or FOH results in the same way as a forward Euler substitution,while to discretize an integrator with the impulse invariant is equivalent to employ backward Euler.III.I NFLUENCE OF D ISCRETIZATION M ETHODSON R OOTS D ISTRIBUTIONA.Resonant Poles DisplacementThe z domain transfer functions obtained in Section II can be grouped in the sets of Table IV,since some of them present an identical denominator,and therefore,coinciding poles.Fig.2represents the pole locus of the transfer functions in Table IV.Damped resonant controllers do not assure perfect tracking [21];poles must be placed in the unit circumference,which corresponds to a zero damping factor (infinite gain).All discretization techniques apart from A and B lead to undamped poles;the former maps the poles outside of the unit circle,whereas the latter moves them toward the origin,causing a damping factor different from zero,so both methods should be avoided.This behavior finds its explanation in the 
fact that these two techniques do not map the left half-plane in the s domain to the exact area of the unit circle [26].However,there is an additional issue that should be taken into account.Although groups C ,D ,and E achieve infinite gain,it can be appreciated that,for an identical f o ,their poles are located in different positions of the unit circumference.This fact reveals that there exists a difference between the actual resonant frequency (f a )and f o ,depending on the employed implementation,as also observed in Fig.3(d).Consequently,the infinite gain may not match the frequency of the controller references,causing steady-state error.Fig.3(a)–(c)depicts the error f o −f a in hertz as a function of f o and f s for each group.The poles displacement increases with T s and f o ,with the exception of group E .The slope of the error is also greater as these parameters get higher.Actually,the denominator of group D is a second-order Tay-lor series approximation of group E .This fact explains the in-creasing difference between them as the product ωo T s becomes larger.Some important outcomes from this study should be highlighted.1)The Tustin transformation,which is a typical choice in digital control due to its accuracy in most applications,features the most significant deviation in the resonant frequency.2)The error exhibited by the methods based on two dis-cretized integrators becomes significant even for highTABLE IIID ISCRETE T RANSFERF UNCTIONS H P R (s )AND H V P I (s )O BTAINED BY E MPLOYING T WO D ISCRETIZED INTEGRATORSTABLE IVG ROUPS OF E XPRESSIONS W ITH I DENTICAL P OLES IN THE z DOMAINFig.2.Pole locus of the discretized resonant controllers at f s =10kHz (fundamental to the 17th odd harmonics).sampling frequencies and low-order harmonics.For in-stance,at f s =10kHz,group D exhibits an error of +0.7Hz for the seventh harmonic,which causes a consid-erable gain loss [see Fig.3(d)].When dealing with higher harmonic orders (h ),such as 13and 17,it raises to 4.6and 10.4Hz,respectively,which is unacceptable.3)Group E leads to poles that match the original continuous ones,so the resonant peak always fits the design frequency f o .B.Effects on Zeros DistributionOnce assured infinite gain due to a correct position of the poles,another factor to take into account is the displacement of zeros caused by the discretization.Resonant controllers that be-long to group E have been proved to be more suitable for an op-timum implementation in terms of resonant peak displacement.However,the numerators of these discrete transfer functions are not the same,and they depend on the discretization method.This aspect has a direct relation with stability,so it should not be ignored.On the other hand,although group D methods produce a resonant frequency error,they avoid the calculation of explicit cosine functions when frequency adaptation is needed.This fact may imply an important saving of resources.Therefore,it is also of interest to establish which is the best option of that set.The analysis will be carried out by means of the frequency response.The infinite gain at ωo is given by the poles po-sition,whereas zeros only have a visible impact on the gain at other frequencies.Concerning phase,the mapping of zeros provided by the discretization may affect all the spectrum,in-cluding the phase response near the resonant frequency.Due to the high gain around ωo ,the phase introduced by the reso-nant terms at ω≈ωo will have much more impact on the phase response of the whole controllers than at the rest of the 
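The resonant-frequency error of the two-integrator implementation (forward-Euler direct integrator with backward-Euler feedback integrator, group D) can be checked directly from its difference equations. The sketch below builds the homogeneous state update of that recursion and reads the actual resonant frequency from the angle of the resulting poles; it reproduces errors of the magnitude quoted above (about 0.7 Hz at the seventh harmonic and several hertz at the 13th and 17th for f_s = 10 kHz). The state-space formulation is a derivation of the Fig. 1(a) scheme made here for illustration, not an expression copied from the paper.

```python
import numpy as np

f_s = 10e3
T_s = 1 / f_s
f_1 = 50.0                              # fundamental frequency

for h in (1, 7, 13, 17):                # harmonics examined in the paper
    w_o = 2 * np.pi * f_1 * h
    # Two-integrator realization of R1(s) = s/(s^2 + w_o^2):
    #   y[n] = y[n-1] + T_s*(u[n-1] - w_o^2 * v[n-1])   (forward Euler, direct)
    #   v[n] = v[n-1] + T_s * y[n]                       (backward Euler, feedback)
    # Eliminating the input gives the homogeneous state matrix below.
    A = np.array([[1.0,        -T_s * w_o**2],
                  [T_s, 1.0 - (T_s * w_o)**2]])
    poles = np.linalg.eigvals(A)
    f_a = np.abs(np.angle(poles[0])) / (2 * np.pi * T_s)   # actual peak frequency
    print(f"h={h:2d}: f_o = {f_1*h:7.1f} Hz, f_a = {f_a:7.1f} Hz, "
          f"deviation f_a - f_o = {f_a - f_1*h:+6.2f} Hz, "
          f"|pole| = {np.abs(poles[0]):.6f}")
```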
spec-trum [14].Therefore,the influence of discretization on the stabil-ity should be studied mainly by analyzing the phase lag caused at ω≈ωo .1)Displacement of R 1(s )Zeros by Group E Discretiza-tions:Fig.4compares the frequency response of a resonant controller R 1(s ),designed for the seventh harmonic,when dis-cretization methods of group E are employed at f s =10kHz.An almost equivalent magnitude behavior is observed,eventhough R 1imp(z )has a lower attenuation in the extremes,and both R 1tp (z )and R 1foh (z )tend to reduce the gain at high fre-quencies.However,the phase versus frequency plot differs more significantly.From Fig.4,it can be appreciated that R 1tp(z )and R 1foh (z )are the most accurate when comparing with R 1(s ).On thecontrary,the phase lag introduced by R 1zoh (z )and R 1zpm (z )is higher than for the continuous model.This fact is particu-larly critical at ω≈ωo ,even though they also cause delay for higher frequencies.As shown in Fig.4,they introduce a phase lag at f o =350Hz of 6.3◦.For higher values of ωo T s ,it be-comes greater.For instance,if tuned at a resonant frequency of f o =1750Hz with f s =10kHz,the delay is 32◦.There-fore,the implementation of R 1zoh (z )and R 1zpm (z )may lead to instability.On the other hand,R 1tp (z ),R 1foh (z ),and R 1imp(z )accurately reproduce the frequency response at the resonance frequency,maintaining the stability of the continuous controllerat ωo .Fig.4also shows that R 1imp(z )can be considered the most advantageous implementation of R 1(s ),since it maintains the stability at ω≈ωo and introduces less phase lag in open-loop for the rest of the spectrum,thereby allowing for a larger phase margin.Fig.3.Deviation of the resonance frequency of the discretized controller f a from the resonance frequency f o of the continuous controller.(a)Group C transfer functions.(b)Group D transfer functions.(c)Group E transfer functions.(d)Discretized seventh harmonic resonant resonant controller at f s= 10kHz.Fig.4.Bode plot of R1(s)discretized with group E methods for a seventh harmonic resonant controller at f s=10kHz.In any case,the influence of the discretization atω=ωo is not as important as its effect on the stability atω≈ωo,since the gain of R1(z)is much lower at those frequencies.Consequently, this aspect can be neglected unless low sample frequencies, high resonant frequencies,and/or large values of K I/K P are employed.In these cases,it can be taken into account in order to avoid unexpected reductions in the phase margin that could affect the stability,or even to increase its value over the phase margin of the continuous system by means of R1imp(z).2)Displacement of R2(s)Zeros by Group E Discretizations: The frequency response of R2(s)discretizations is shown in Fig.5(a).It can be seen that ZOH produces a phase lag near the resonant frequency that could affect stability.Among the rest of possibilities of group E,the impulse invari-ant method is also quite unfavorable:it provides much less gain after the resonant peak than the rest of the discretizations.This fact causes that the zero phase provided by R2(z)forω>ωo has much less impact on the global transfer function H VPI(z), in comparison to the phase delay introduced by R1(z).In this manner,the phase response of H VPI(z)would show a larger phase lag if R2(s)is discretized with impulse invariant instead of other methods,worsening the stability atω>ωo. 
Actually,as shown in Fig.5(b),if R2imp(z)is used,the delay of H VPI(z)can become close to−45◦for certain frequencies, which is certainly not negligible.This is illustrated,as an exam-ple,in Fig.5(b),in which Bode plot of H VPI(z)is shown when it is implemented as R1imp(z),and R2(s)is discretized with the different methods.Fixed values of K I and K P have been employed to make the comparison possible.K I=K P R f/L f has been chosen,so the cross coupling due to the plant is can-celed[16],[17],and an arbitrary value of1has been assigned to K P as an example.According to the real parameters of the laboratory prototype,L f=5mH and R f=0.5Ω.If the ra-tio K I/K P is changed,the differences will become more or less notable,but essentially,each method will still affect in the same manner.It should be remarked that the phase responseFig.5.Study of group E discretizations effect on R2(s)zeros.(a)Frequency response of R2(s)discretized with group E methods for a seventh harmonic resonant controller at f s=10kHz.(b)Frequency response of H V P I(z)for a third harmonic resonant controller at f s=10kHz,with R1im p(z),when R2(s) is discretized by each method of group E.K P=1and K I=K P R f/L f, with R f=0.5Ωand L f=5mH.of H VPI(z)atω≈ωo is not modified by R1imp(z),but only by the discretization of R2(s).Fig.5(b)also shows that some implementations introduce less phase at low frequencies than H VPI(s),but the influence of this aspect on the performance can be neglected.In conclusion,any of the discretization methods of group E, with the exception of impulse invariant and ZOH,are adequate for the implementation of R2(z).Actually,the influence of these two methods is so negative that they could easily lead to instability continuous resonant controllers with considerable stability margins.3)Displacement of Zeros by Group D Discretizations: Fig.6(a)shows the Bode plot of R1(s)implemented with setD schemes.R1f&b (z)produces a phase lead in comparisontoFig.6.Frequency response of R1(s)and H V P I(s)implemented with groupD methods for a seventh harmonic resonant controller at f s=10kHz.(a)R1(s).(b)H V P I(s),K P=1,and K I=K P R f/L f,with R f=0.5Ωand L f=5mH.R1(s),whereas R1b&b(z)causes a phase lag.This is also trueatω≈ωo,which are the most critical frequencies.Therefore,R1f&b(z)is preferable to R1b&b(z).On the other hand,as can beappreciated in Fig.6(b),the Bode plot of H VPIf&b(z)and H VPIb&b(z)scarcely differ.They both achieve an accurate reproduction ofH VPI(s)frequency response.Actually,atω≈ωo,they provideexactly the same phase.Consequently,they can be indistinctlyemployed with satisfactory results.IV.D ISCRETIZATION I NFLUENCE ON C OMPUTATIONALD ELAY C OMPENSATIONA.Delay Compensation in the Continuous DomainFor large values ofωo,the delay caused by T s affects the sys-tem performance and may cause instability.Therefore,a delaycompensation scheme should be implemented[14],[15],[17], [23],[27].1)Delay Compensation for H PR(s):Concerning resonant controllers based on the form H PR(s),a proposal was posed in[15]for performing the compensation of the computational delay.The resulting transfer function can be expressed in the s domain asH PR d(s)=K P+K I s cos(ωo NT s)−ωo sin(ωo NT s)s2+ω2o=K P+K I R1d(4) with N being the number of sampling periods to be compen-sated.According to the work of Limongi et al.[23],N=2is the most optimum value.2)Delay Compensation for H VPI(s):Because of H VPI(s) superior stability,it only requires computational delay for much greater resonant frequencies than H PR(s)[16],[17],[23]. 
Delay compensation could be obtained by selecting KP = cos(ωo N Ts) and KI = −ωo sin(ωo N Ts). However, this approach would not permit choosing the parameters so as to satisfy KI/KP = Rf/Lf; thus, it would not cancel the cross-coupling terms as proposed in [16] and [17]. Therefore, an alternative approach is proposed in the following. R1d(s) and R2d(s) are individually implemented with a delay compensation of N samples each, so that KP and KI can still be adjusted in order to cancel the plant pole:

HVPId(s) = KP [s² cos(ωo N Ts) − s ωo sin(ωo N Ts)] / (s² + ωo²) + KI [s cos(ωo N Ts) − ωo sin(ωo N Ts)] / (s² + ωo²) = KP R2d + KI R1d.    (5)

3) Delay Compensation for R1d(s) and R2d(s): If the resonant terms are decomposed by the use of two integrators, it is possible to perform the delay compensation by means of the block diagrams depicted in Fig. 7(a) and (b) for R1d(s) and R2d(s), respectively.

Fig. 8 illustrates the effect of the computational delay compensation for both R1d(s) and R2d(s), setting fo = 350 Hz and fs = 10 kHz as an example. As N increases, the 180° phase shift at fo rises, compensating the phase lag that would be caused by the delay.

B. Discrete-Time Implementations of Delay Compensation Schemes

As stated in the previous section, the delay compensation should be implemented for each resonant term separately. For this reason, it is convenient to study how each discretization method affects the effectiveness of the delay compensation for R1d(z) and R2d(z) individually. The effects on group E and group D implementations are analyzed, due to their superior performance. Tables V and VI show the discrete transfer functions obtained by applying these methods to R1d(s) and R2d(s), respectively. R1df&b(z) and R1db&b(z) result from applying the corresponding discretization transforms to the scheme shown in Fig. 7(a). On the other hand, R2df&b(z) and R2db&b(z) are obtained by discretizing the integrators shown in Fig. 7(b).

Fig. 7. Implementations of (a) R1d(s) and (b) R2d(s) based on two integrators.

Fig. 8. Frequency response of (a) R1d(s) and (b) R2d(s) for different values of N; fo = 350 Hz and fs = 10 kHz.

Substituting N = 0 in Tables V and VI leads to the expressions of Tables II and III, respectively. It can also be noted that
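As a rough numerical illustration of the zero-displacement effect discussed above, the sketch below (not taken from the paper; it assumes fo = 350 Hz and fs = 10 kHz as in the text, uses SciPy's generic cont2discrete routines, and uses the plain Tustin method rather than Tustin with prewarping) discretizes R1(s) = s/(s² + ωo²) with several methods and compares the phase near the resonance against the continuous-time response:

import numpy as np
from scipy import signal

fo, fs = 350.0, 10e3                             # resonant and sampling frequencies used in the text
wo, Ts = 2 * np.pi * fo, 1.0 / 10e3
num, den = [1.0, 0.0], [1.0, 0.0, wo ** 2]       # R1(s) = s / (s^2 + wo^2)

# Evaluate slightly off resonance (the phase is discontinuous exactly at wo).
f_eval = np.array([fo * 1.01, fo * 1.05, fo * 1.20])            # Hz
_, h_cont = signal.freqs(num, den, worN=2 * np.pi * f_eval)     # continuous-time reference

for method in ("zoh", "foh", "bilinear", "impulse"):
    numd, dend, _ = signal.cont2discrete((num, den), Ts, method=method)
    _, h_disc = signal.freqz(np.squeeze(numd), dend, worN=f_eval, fs=fs)
    err = np.degrees(np.angle(h_disc) - np.angle(h_cont))
    print(f"{method:8s} phase deviation near fo [deg]: {np.round(err, 2)}")

The printed deviations make it easy to compare how far each discretization departs from the continuous phase just above fo, which is the quantity the stability discussion above is concerned with.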
Extensibility in the Large

Matthias Zenger
Programming Methods Laboratory
Swiss Federal Institute of Technology Lausanne
matthias.zenger@epfl.ch

Programming Software Components

Software component technology is driven by the promise of building software from off-the-shelf components that are provided by a global software component industry consisting of independent component developers [14]. Therefore, component programming emphasizes independent development and deployment of software modules. To be independently deployable, a component has to be well separated from other components. In addition, it has to be composable with other components by a third party that does not necessarily have access to the implementation details of all the components involved. An important point which is often not considered is that components have to be extensible, since in general, components do not fit off-the-shelf into an arbitrary deployment context. They first have to be adapted to satisfy the needs of a particular customer. Apart from this, extensibility is also an important requirement for enabling software evolution. Software evolution includes the maintenance and extension of component features and interfaces. Supporting software evolution is important, since components are architectural building blocks and, as such, subject to continuous change. A typical software evolution process yields different versions of a single component being deployed in different contexts. Extensibility is also required when developing families of software applications [11, 2]. For instance, software product lines [8, 16] rely heavily on a mechanism for creating variants of a system which share a common structure but which are configured with possibly different components.

Here is a list of requirements we identified to be important for component-oriented programming practice, including the corresponding implications on the implementation platform:

• It is necessary that components are implemented in a modular way with explicit context dependencies, enabling type-safe separate compilation.
• Mechanisms for composing independently developed components have to be flexible but also safe.
• Component composition has to scale well, since component-oriented programming is targeted towards programming in the large [4].
• Reuse of components in different contexts should imply the least possible need for explicit adaptation code.
• In support of a smooth software evolution process, components have to be extensible without the need to anticipate possible extensions.
• Component systems have to be extensible on the system level as well, allowing alternative or additional components to be plugged in without re-wiring the whole system.
• Extensibility has to be non-invasive, allowing different extensions of a component to be derived independently.
• Different extensions of a component have to be able to coexist, requiring an appropriate versioning mechanism.
• Component deployment and extension must not require the availability of source code, since this would violate the principle of binary component deployment.

In general, these issues pose high demands on the implementation platform. It is clear by now that mainstream object-oriented programming languages, which are today predominantly used for programming software components, do not live up to most of the requirements. Recently, this observation gave rise to research on how to best support component technology on the programming language level.

Language Support for Software Extensibility

Previously published approaches for
supporting component-oriented programming on the language level, such as ComponentJ [12], ACOEL [13], ArchJava [1], etc., each target specific issues (like type safety), but none of them addresses extensibility in particular. In order to investigate extensibility issues related to component-oriented programming, we developed a component model that emphasizes the evolution of components [18]. It is a simple prototype-based model for first-class components built on top of a class-based object-oriented language. The model is formalized as an extension of Featherweight Java [7]. This calculus includes a small set of primitives to dynamically create, compose, and extend software components in a type-safe way, while supporting features like explicit context dependencies, late composition, unanticipated component extensibility, and strong encapsulation. As opposed to most other approaches, which link services of components explicitly, components get composed implicitly with coarse-grained composition operators. We used this framework to discuss the trade-offs between aggregation-based and mixin-based component compositions. To make some of the ideas developed in this work usable in practice, we are currently working on a module system with explicit support for extensibility [17].

Even though classical module systems like the ones of Modula-3 [3], Oberon-2 [10] and Ada 95 [15] can be used to model the modular aspects of software components well, they have severe restrictions concerning extensibility and reuse. These systems allow type-safe separate compilation, but they hard-wire module dependencies; i.e., they refer to other modules by name, which makes it impossible to plug in a module with a different name but a compatible specification without performing consistent renamings on the source code level. For functional programming languages, module systems [9, 5] exist that obey the principle of external connections, i.e., the separation of component definition and component connections.
These module systems maximize reuse, but they yield modules that are not extensible, since everything is hard-wired internally. We consider this lacking support for unanticipated extensibility to be a serious shortcoming. In practice, one is required to use ad-hoc techniques to introduce changes in modules. In most cases this comes down to hacking the changes into the source code of the corresponding modules. This obviously contradicts the idea of deploying compiled module binaries, a process which does not require publishing source code. But even for cases where the source code is available, source code modifications are considered to be error-prone. With modifications on the source code level, one risks invalidating the use of modules in contexts in which they are already successfully deployed.

The design of our module system includes primitives for creating and linking modules as well as type-safe mechanisms for statically extending modules or even fully linked programs [17]. Module composition is based on aggregation, as opposed to other approaches for extensible modules that make use of a mixin-based scheme. The extensibility mechanism relies on two concepts: module refinements and module specializations. Both of them are based on inheritance on the module level. While refinements yield a new version of a module that subsumes the original module, specializations are used to derive new independent modules from a given "prototype". The module system supports software development according to the open/closed principle: programs are closed in the sense that they can be executed, but they are open for extensions that statically add, refine or replace modules or whole subsystems of interconnected modules. Extensibility does not have to be planned ahead and does not require modifications of existing source code, promoting a smooth software evolution process.

The overall design of the module system was guided by the aim to develop a pragmatic, implementable, and conservative extension of Java [6]. We are currently implementing a compiler based on the extensible Java compiler JaCo [19, 20]. JaCo itself is designed to support unanticipated extensions without the need for source code modifications. JaCo is currently written in a slightly extended Java dialect making use of an architectural design pattern that allows refinements in a similar way. We hope to be able to re-implement JaCo in the future using extensible modules. This would also allow us to gain experience with extensible modules and their capabilities to statically evolve software through module refinements and specializations.
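As a rough illustration of these composition and refinement ideas, here is a hypothetical sketch in Python (the module system described above is a Java extension, so the names and mechanics below are illustrative only, not its actual syntax): modules declare explicit context dependencies, a third party links them by role, and a refinement extends a module non-invasively so that the same closed program can be re-linked with the extension.

class Module:
    """A module declares explicit context dependencies (names it requires)."""
    requires = ()

    def __init__(self, context):
        missing = [name for name in self.requires if name not in context]
        if missing:
            raise TypeError(f"unsatisfied dependencies: {missing}")
        self.context = context

class Logger(Module):
    def log(self, msg):
        return f"[log] {msg}"

class App(Module):
    requires = ("logger",)

    def run(self, record):
        return self.context["logger"].log(f"stored {record}")

def link(**providers):
    """Coarse-grained, third-party composition: modules are wired by role.
    In this toy, providers must be listed before the modules that need them."""
    context = {}
    for role, cls in providers.items():
        context[role] = cls(context)
    return context

# A 'refinement': extends Logger without touching its source (non-invasive,
# unanticipated extension) and remains usable wherever a Logger is expected.
class TimestampLogger(Logger):
    def log(self, msg):
        return super().log(f"<t> {msg}")

print(link(logger=Logger, app=App)["app"].run("x"))           # the closed program
print(link(logger=TimestampLogger, app=App)["app"].run("x"))  # same program, re-linked

The point of the sketch is only the shape of the workflow: composition is coarse-grained and by role, and the extension neither modifies nor anticipates anything in the original module.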
References

[1] J. Aldrich, C. Chambers, and D. Notkin. Architectural reasoning in ArchJava. In Proceedings of the 16th European Conference on Object-Oriented Programming, Málaga, Spain, June 2002.
[2] J. Bosch and A. Ran. Evolution of software product families. In 3rd International Workshop on Software Architectures for Product Families, LNCS 1951, pages 168–183, Las Palmas de Gran Canaria, Spain, 2000.
[3] L. Cardelli, J. Donahue, L. Glassman, M. Jordan, B. Kalsow, and G. Nelson. Modula-3 language definition. ACM SIGPLAN Notices, 27(8):15–42, August 1992.
[4] F. DeRemer and H. H. Kron. Programming in the large versus programming in the small. IEEE Transactions on Software Engineering, June 1976.
[5] M. Flatt and M. Felleisen. Units: Cool modules for HOT languages. In Proceedings of the ACM Conference on Programming Language Design and Implementation, pages 236–248, 1998.
[6] J. Gosling, B. Joy, G. Steele, and G. Bracha. The Java Language Specification. Java Series, Sun Microsystems, second edition, 2000. ISBN 0-201-31008-2.
[7] A. Igarashi, B. Pierce, and P. Wadler. Featherweight Java: A minimal core calculus for Java and GJ. In Proceedings of the Conference on Object-Oriented Programming, Systems, Languages & Applications, volume 34(10), pages 132–146, 1999.
[8] M. Jazayeri, A. Ran, and F. van der Linden. Software Architecture for Product Families: Principles and Practices. Addison-Wesley, 2000.
[9] D. MacQueen. Modules for Standard ML. In Conference Record of the 1984 ACM Symposium on Lisp and Functional Programming, pages 198–207, New York, August 1984.
[10] H. Mössenböck and N. Wirth. The programming language Oberon-2. Structured Programming, 12(4):179–195, 1991.
[11] D. Parnas. On the design and development of program families. IEEE Transactions on Software Engineering, SE-2(1):1–9, 1976.
[12] J. C. Seco and L. Caires. A basic model of typed components. In Proceedings of the 14th European Conference on Object-Oriented Programming, pages 108–128, 2000.
[13] V. C. Sreedhar. Programming software components using ACOEL. Unpublished manuscript, IBM T. J. Watson Research Center, 2002.
[14] C. Szyperski. Component Software: Beyond Object-Oriented Programming. Addison-Wesley/ACM Press, New York, 1998. ISBN 0-201-17888-5.
[15] S. T. Taft and R. A. Duff. Ada 95 Reference Manual: Language and Standard Libraries. Lecture Notes in Computer Science. Springer Verlag, 1997. ISBN 3-540-63144-5.
[16] D. Weiss and C. T. R. Lai. Software Product-Line Engineering. Addison-Wesley, 1999.
[17] M. Zenger. Evolving software with extensible modules. In International Workshop on Unanticipated Software Evolution, Málaga, Spain, 2002.
[18] M. Zenger. Type-safe prototype-based component evolution. In Proceedings of the European Conference on Object-Oriented Programming, Málaga, Spain, June 2002.
[19] M. Zenger and M. Odersky. Extensible algebraic datatypes with defaults. In Proceedings of the International Conference on Functional Programming, Firenze, Italy, September 2001.
[20] M. Zenger and M. Odersky. Implementing extensible compilers. In ECOOP Workshop on Multiparadigm Programming with Object-Oriented Languages, Budapest, Hungary, June 2001.
Design of a wideband 10W GaN power

Contents
Preface
Abstract
Figures
Tables
Abbreviations
1 Introduction
2 Theory
2.1 RF amplifier
2.1.1 Transistor
2.1.2 Matching networks
2.1.3 Biasing network and stability
2.1.4 Important parameters in PA design
2.2
Preface
This master's thesis has been prepared by Muhammed Hakan Yilmaz during the spring of 2011 at the Norwegian University of Science and Technology. The assignment was given by the Department of Electronics and Telecommunications. The work has been interesting and challenging. I would like to thank my supervisor, Associate Professor Morten Olavsbråten of the Department of Electronics and Telecommunications at NTNU, for all his invaluable assistance and guidance during this master's thesis. Further, I would like to thank his PhD student Dragan Mitrevski for his continuous help during design and measurement in the laboratory. I am also grateful for the guidance in ADS from Terje Mathiesen at the beginning of this thesis work.
Horizontal Expansion: English Terminology

Horizontal Expansion: Understanding the Concept and its Application in Business

Horizontal expansion, often referred to as horizontal integration, is a strategic move by a company to increase its market share and competitive position by acquiring or merging with other companies that operate at the same level in the supply chain. This type of expansion involves the addition of similar businesses, rather than expanding into new or related industries. The goal is to consolidate resources, enhance economies of scale, and achieve greater operational efficiency.

1. Understanding Horizontal Expansion.

Horizontal expansion occurs when a company acquires another business that offers similar products or services to its existing portfolio. This strategy is distinct from vertical expansion, which involves moving upstream or downstream in the supply chain by acquiring suppliers or distributors. Horizontal expansion allows a company to broaden its market reach and increase its market share without venturing into entirely new areas of business.

The key benefits of horizontal expansion include:

Increased Market Share: Acquiring or merging with another company can instantly boost a company's market share, giving it a larger presence in the industry.

Greater Economies of Scale: By combining resources, a company can achieve economies of scale, which refers to the cost advantages gained by increasing production or operations.

Diversification of Risk: Horizontal expansion can help diversify a company's risk by spreading it across multiple businesses or markets.

Enhanced Competitiveness: By increasing its scale and resources, a company can become more competitive in the marketplace, potentially leading to higher profits.

2. Strategies for Horizontal Expansion.

There are several strategies that companies can employ to achieve horizontal expansion:

Merger and Acquisition (M&A): This involves the combination of two or more companies through a merger or acquisition. M&A allows companies to combine their resources, expertise, and market reach to create a stronger entity.

Joint Ventures: Joint ventures involve two or more companies pooling their resources and expertise to create a new business entity. This allows companies to share risks and costs while leveraging each other's strengths.

Strategic Alliances: These are agreements between companies to collaborate on specific projects or initiatives. Strategic alliances can help companies share resources, technology, and market access to achieve common goals.

3. Challenges of Horizontal Expansion.

While horizontal expansion can offer significant benefits, it also presents some challenges:

Integration Challenges: Merging two or more companies can be complex, involving the integration of different cultures, processes, and systems.

Increased Management Complexity: As a company grows through horizontal expansion, it may face increased management complexity, requiring additional resources and expertise to manage the expanded operations.

Competitive Response: Horizontal expansion may trigger competitive responses from other companies in the industry, who may attempt to defend their market share through countermeasures such as price cuts or new product introductions.

4. Case Studies of Horizontal Expansion.

To illustrate the concept of horizontal expansion, let's consider some real-world case studies:

Amazon's Acquisition of Whole Foods Market: Amazon, a leading e-commerce company, acquired Whole Foods Market, a chain of natural and organic grocers, in 2017.
This horizontal expansion allowed Amazon to expand its physical retail presence and enhance its offering in the grocery sector. By combining Amazon's online retail capabilities with Whole Foods' physical stores and supply chain, the company was able to offer customers a seamless shopping experience across both platforms.

AT&T's Acquisition of Time Warner: In 2018, AT&T, a telecommunications company, acquired Time Warner, a media and entertainment conglomerate that owned brands like HBO, CNN, and Turner Broadcasting. This horizontal expansion allowed AT&T to expand its content offering and create a more comprehensive media and entertainment ecosystem. By combining AT&T's distribution capabilities with Time Warner's content, the company aimed to offer customers a more integrated experience across multiple platforms.

5. Conclusion.

Horizontal expansion is a key strategic tool for companies seeking to increase their market share and competitive position. By acquiring or merging with other companies operating at the same level in the supply chain, companies can consolidate resources, enhance economies of scale, and achieve greater operational efficiency. However, horizontal expansion also presents challenges such as integration difficulties and increased management complexity. Therefore, companies must carefully evaluate the potential benefits and risks involved before pursuing this strategy.

As the business world continues to evolve, horizontal expansion will remain a relevant strategy for companies seeking to grow and compete effectively. By understanding the concept, strategies, and challenges associated with horizontal expansion, companies can make informed decisions about how to best achieve their long-term goals.
Internal Standard and External Standard Methods: Procedures, Principles, Advantages and Disadvantages

An internal standard should be used when performing MS quantitation. An appropriate internal standard will control for extraction, HPLC injection and ionization variability. In a complex matrix it is not uncommon for two different standard levels in SRM integrated plots, at the lower end of the standard curve, to give nearly an identical response. It is only when an internal standard is used that the two points can be differentiated. Some researchers attempt to prepare standard curves and run samples without an internal standard and find moderate success. Often without an internal standard % RSDs of replicates can be as high as 20%. Using an internal standard the % RSDs can be brought down to approximately 2%. We run triplicates at each level of our standard curve.How do I choose an internal standard?The best internal standard is an isotopically labeled version of the molecule you want to quantify. An isotopically labeled internal standard will have a similar extraction recovery, ionization response in ESI mass spectrometry, and a similar chromatographic retention time. If you are performing non-clinical PK quantitation it may be difficult to justify such a standard since a special synthesis of an isotopically labeled standard can be expensive and time consuming. Often if you are working with medicinal chemists they will have a library of compound analogs that can be used as internal standards. These analogs were made in the evolution of the compound to be tested and will be similar to the compound to be quantified and more importantly will be slightly different by parent mass. Try to avoid using de-methylated (-14) or hydroxylated (+16) analogs as internal standards since these are the most common mass shifts observed in naturally occuring metabolites of the parent compound. A common internal standard is a chlorinated version of the parent molecule. A chlorinated version of the parent molecule will commonly have a similar chromatographic retention time which is an important characteristic of an internal standard. We have found that one of the most important characteristics of an internal standard is that it co-elutes with the compound to be quantified.How do I use an internal standard?First of all an internal standard should be added at the beginning of the sample work-up, typically before the plasma crash or solid phase extraction. The internal standard should be added at the same level in every sample including the standards. An internal standard should give a reliable MS response. Care should be taken that the amount of the internal standard is well above the limit of quantitation but not so high as to suppress the ionization of the analyte. "How much internal standard should I add?", this is an important question. It pays to know roughly how much compound is in your sample. This can be accomplished by making trial analyses of an early, middle and late time point with perhaps one or two standard points. This information will be very valuable when building an appropriate standard curve and in knowing how much internal standard to add. If you were trying to quantify samples in the range of 100 fg to 25 pg and the limit of detection was 100 fg you might add 5 to 10 pg of internal standard to every sample. A good rule of thumb is to target the internal standard to the lower 1/3 of the working standard curve. This is a range that will give a comfortable response without interfering with the ionization of the analyte. 
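As a small worked example of the response-ratio approach described above (the numbers are made up for illustration and do not come from any particular assay), a calibration curve can be fit on the analyte/internal-standard peak-area ratio and then used to quantify an unknown:

import numpy as np

# Hypothetical calibration data: analyte amount (pg) vs. measured peak areas.
amount_pg    = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 25.0])
analyte_area = np.array([1.1e3, 5.4e3, 1.05e4, 5.2e4, 1.01e5, 2.6e5])
is_area      = np.array([8.1e4, 7.9e4, 8.3e4, 8.0e4, 7.8e4, 8.2e4])  # ~constant IS spike

# Calibrate on the response ratio, not on the raw analyte area, so that
# extraction, injection, and ionization variability cancel out.
ratio = analyte_area / is_area
slope, intercept = np.polyfit(amount_pg, ratio, 1)

# Quantify an unknown sample spiked with the same amount of internal standard.
unknown_ratio = 6.6e4 / 8.0e4
unknown_pg = (unknown_ratio - intercept) / slope
print(f"estimated amount: {unknown_pg:.2f} pg")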
--------------------------------------------------------------------------------
What is the internal standard method, and how should an internal standard be chosen? The internal standard method is an indirect, or relative, calibration method.
BPL--A Language Derived from APL

File: BPL_For_APL_Users.docAuthor: J.M. SmeenkRevision: 5/23/09 12:06:22 AMBPL--A Language Derived from APLBPL uses ASCII tokens and words rather than the Greek and other symbols of APL, because the APL alphabet is foreign to most users, is difficult to type and edit, and is less expandable than a practically infinite set of keywords.BPL has many more built-in system variables and functions than APL; APL philosophy tends to design everything from basic components, but BPL philosophy recognizes the need for an integrated environment of functions that allow for easier conceptualization. For example, the APL -1 take rho MATRIX requires more mental steps by the reader to understand than the BPL numrows MATRIX. BPL in general attempts to reduce idiomatic programming by providing a standardized and optimized set of functions. Mathematicians do not typically think in terms of primitive operations, but rather in terms of operations derived from primitive operations, and one goal of BPL is to be a tool of thought suitable for mathematics and computer science classes. Thus, BPL has optional hierarchies whereas APL eschews hierarchies.A BPL numeric scalar is usually written in the general form -1.2'-3&-4.5'-6 (for-1.2 x 10^-3 + i x -4.5 x 10 ^-6); alternate-base constants are similar but begin with a 0 and use the letters A-Z (e.g. -0AB.8 for 171.5 in base 16; the base is indicated in a system variable); double-precision constants are similar but use a double decimal point (e.g. 3..14159); vectors and higher-order arrays are constructed similar to APL. BPL literal data is like APL literal data except that double quotes are used instead of single quotes. BPL's grounded system of nested arrays are written with braces enclosing heterogeneous data separated by commas (e.g. {"PQ",2 3#5,{},{{"R"}}}); omitted arrays are permitted to show that data is unavailable or inapplicable.BPL user names begin with A-Z (e.g. MyVar1, X_Y_Z3) while BPL system names begin with a-z (e.g. sin, base, sort_) or are composed of symbol combinations. 
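As an illustration of the scalar notation described above, a small Python sketch (purely illustrative; BPL itself is only specified in prose here) can parse the basic mantissa'exponent&mantissa'exponent form into a complex number:

# Parses the basic BPL scalar form described above: the part before & is the
# real component, the part after & the imaginary one, and ' separates the
# mantissa from the decimal exponent, e.g. -1.2'-3&-4.5'-6.
def parse_bpl_scalar(text):
    def part(p):
        mantissa, _, exponent = p.partition("'")
        return float(mantissa) * 10.0 ** float(exponent or 0)
    real_part, _, imag_part = text.partition("&")
    return complex(part(real_part), part(imag_part) if imag_part else 0.0)

print(parse_bpl_scalar("-1.2'-3&-4.5'-6"))   # (-0.0012-4.5e-06j)
print(parse_bpl_scalar("3"))                 # (3+0j)

The alternate-base, double-precision, and array forms mentioned above are not handled by this sketch; it covers only the basic decimal scalar syntax.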
All system functions are either niladic, monadic, or dyadic; system functions do not have both monadic and dyadic uses as in APL.BPL expressions are computed in right-to-left order with no precedence, just like APL, but setting the system variable preced from its default of 0 to 1 modifies execution order to resemble the order of languages like Fortran.The following table shows some equivalencies between APL and BPL:APL BPL------ -----highminus 2 -2-1.2E-3J-4.5E-6 -1.2'-3&-4.5'-6'DON''T SAY "QUOTE"' "DON'T SAY |"QUOTE|"" +X conj X-X 0-Xtimes X unit X or sign X reciprocal X 1/X*X e^Xbig_circle X pi*Xln X ln Xfloor X floor Xceiling X ceil X|X mag X~X not Xrho X $Xrho rho X $$X,X ##Xiota X 1~~X(Y-1)+iota 1+Y-X (assuming origin of 1) Y~~X?X rand Xgrade_up X seq Xgrade_down X seq_ Xoverstrike_big_circle_and_| X rev Xoverstrike_big_circle_and_- X rev`[1] Xoverstrike_big_circle_and_\ X trnsp Xform X form Xexecute X %Xdomino X invert XY+X Y+XY-X Y-XY times X Y*XY divided_by X Y/XY*X Y^X1 2 3 big_circle X sin X, cos X, tan XY log X Y logof X10 log X log XY minimum X Y min XY maximum X Y max XY residue X X mod YY<X Y<XY>X Y>XY=X Y=XY less_than_or_equal_to X Y<=XY greater_than_or_equal_to X Y>=XY notequalto X Y<>XY and X Y and XY or X Y or XY nand X Y nand XY nor X Y nor XY rho X Y#XY iota X X seek YY,X Y~XY up_arrow X Y take XY down_arrow X Y drop XY?X Y deal XY/X Y keep X(~Y)/X Y lose XY overstrike_/_and_- X Y keep`[1] XY\X Y expand XY overstrike_\_and_- X Y expand`[1] XY overstrike_big_circle_and_| X Y rot XY overstrike_big_circle_and_- X Y rot`[1] XY overstrike_big_circle_and_\ X Y transp XY epsilon X Y in X(~Y) epsilon X Y notin XY encode X Y radix XY decode X Y deradix XY +.times X Y product XY domino X Y quotient XY format X (similar equivalent is Y fmt X) Y[X1;X2;] Y[X1,X2,](various nested array forms) {"P",1 2 3,{"QR"},X,}Y left_arrow X Y: X((1-2)-3)-4 5 ((1-2)-3)-4 5+/X sum X or acc`+ X+overstrike_/_and_- X sum`[1] X or acc`+`[1] X-/X altsum X or acc`- Xtimes/X prod X or acc`* Xdivided_by/X altprod X or acc`/ XX little_circle.F Y X outer`F YX G.F Y X G`inner`F Y(+/X) divided_by rho X mean Xiota 0 zil"" ""(no equivalent)nil <-> 0#{}enclose X {X}disclose X X''quad_CR X Xquad EX X "X": {}quad FX X (not needed, happens automatically)quad NC X type Xquad CT tolquad IO (no equivalent, origin is 1)quad LX Dir._quad PP precquad PW widthquad RL seedquad AI acctquad AV asciiquad TS nowquad WA bytes @Y left_arrow X Y: Xquotequad left_arrow X : XY left_arrow quad Y::num:Y left_arrow quotequad Y::str: or just Y:right_arrow LABEL goto _05LABEL: Statement _05; StatementStatement1 diamond Statement2 Statement1; Statement2Statement comment COMMENT Statement ** COMMENT(no equivalent) Line ++ RestOfLine on next line [*] [*] ++ continues a line onto the next lineBPL employs complementary indexing, whereby Vector[-1] refers to the last element of Vector, Vector[-2] refers to the second last, ..., and Vector[0-$Vector] refers to the first element. The index origin is always 1, so specifying Vector[0] leads to an error.BPL I/O resembles APL I/O. Typing Expr outputs an expression; typing Variable: inputs an variable's value as in APL Quad or QuoteQuad (the user must specify the typeof Variable to get numeric input, otherwise literal input is assumed). Typing : Expr outputs an expression without the final newline character, so that input may follow output on the same line. 
I/O may be redirected using system variables.BPL allows picked indexing, which allows the user to select arbitrary elements of a nested or non-nested array (instead of just those which form a subscripted "slice" of the array), and picked reassignment, which allows the user to modify these arbitrary elements. Complementary indexing may be used with picked indexing.BPL has numerous operators which are system names combined with the ` symbol (and possibly axes). The meta operator works on comparisons so that they are made at a"word" level rather than at a "character" level. In addition to analogs of existing APL operators, BPL has several additional specialized operators.BPL user functions and user operators are literal matrices (or arrays conformable to literal matrices). To execute a BPL function or operator, the user prefixes the name with % (e.g. %PrintReport 2009). System functions need not be prefixed with %. Within a user function, the variables x,y, and z refer to the right argument, left argument, and result; within a user operator, the functions g and f refer to the left functional parameter and the right functional parameter).BPL directories are objects that store names and referents (such as values of expressions or subdirectories). Directories may have subdirectories to arbitrary depth. For example, the multiple assignment Dir."A B C": {3,"PQ",{5}} is the same as saying Dir.A: 3; Dir.B: "PQ"; Dir.C: {5}; specifying Dir.B in an expression returns the value "PQ". To erase a name from the namelist of a directory, the user assigns it to a vacancy (an unspecified array): Dir."P": {}. The active workspace (symbolized @) is an instance of a directory, storing names and referents. External objects, whose names begin with @, are simply variables too large to fit into the active workspace--this represents a major implementation challenge but unifies BPL since files are simply variables (such as a nested vector) and can be processed with BPL code rather than with special file-handling capabilities. Thus in BPL there are no files, only large externally-stored variables.BPL contains several forms of assignment, including moves and swaps, which apply to namelists as well as single variables. The consequence of the directory structure and these forms of assignment is that all of the APL workspacecommands )LOAD, )COPY, )ERASE, etc. are replaced by assignments in a unified fashion. For example, if @ExternalWorkspace contains the three functions (executable literal arrays) Fn1, Fn2, and Fn3, then Fn4: @ExternalWorkspace.Fn1 would copy Fn1 into the active workspace under a new name, and @ExternalWorkspace."Fn3": {} would erase Fn3 from the external workspace.APL has only a primitive control structures, the goto and labels. BPL implements If-Then-Else, OnError-Trap-EndTrap, Case-Choices-Else, Loop-Exit, Goto-Label, and the like two ways. For the most common forms of control structures that require only a few statements on a single line, BPL provides constructions likeX>100 ?? %YesDoThis !! %NoDoThat (for an If-Then-Else) or<< for i: 1 2 3 4 5; X: X+i^2 >> (for a Loop) are provided. For more intricate forms, block-oriented control structures are provided using a 3-character indentation scheme. For instance,X: 0|< for i: 1 2 3 4 5 ** FOR LOOPX: X+i^2 ** FIRST STMT OF FOR LOOP BODY|\ X>100 ** IF (CONDITION)|? %YesDoThis ** THEN (1-LINE VERTICAL BLOCK)|! %NoDoThat ** ELSE (2-LINE VERTICAL BLOCK)"Hi"|/ ** ENDIF|> ** END LOOPBPL has a philosophy of optional hierarchies where APL eschews hierarchies. 
For example, setting the BPL system variable preced to 1 from its default of 0 allows precedence in calculations (so that among other precedences, multiplication precedes addition, e.g. 5*3+4 yielding 19 rather than 35).In keeping with optional hierarchies, BPL allows optional typing. In the absence of explicit type information, BPL uses the values of variables to assign a type. By a declaration like X:: int(1,10)[5=>], the user can restrict X from just any value to only vectors of length 5 or more with values between 1 and 10; subsequently if an item (scalar) of X is later reassigned to 11, the error can be detected. Declarations are useful for documentation and debugging purposes, as well as enabling BPL to allocate storage more effectively. But for a quick-and-dirty program, or in creative development of a prototype, excessive type declarations may be counterproductive; they can be added later if desired. Nested types (e.g. X::{int,real[],{bit[2],bit[3]}}) and derived types (e.g. PhoneNumberType: {int[3],int[3],int[4]} followed by Y:: PhoneNumberType) allow complicated typed data structures. Types are implemented as strings or nested arrays of strings. Functions and directories may be declared. The type of an object may be queried (e.g. type Y returns "PhoneNumberType" and real[]?X returns 1 if and only if X is a real vector of any length. A number of derived system types exist for the user's convenience.One goal for BPL is use in graphical computing using the general model of Visual Basic. To this end, graphical directories are envisioned, with named referents such as MsgBox.Position or MsgBox.Font storing graphical information. Such directories can also store system functions such as MsgBox.click which stores a matrix of code to be executed when the usser clicks on the MsgBox. Arrays of graphical directories are allowed (e.g. MsgBox[5,2].Font returns the name of the font for a particular messagebox within a matrix of message boxes).BPL stores relational database tables in directories composed of matrices and vectors of simple datatypes (literal, integer, real, etc.). A number of functions are provided to manipulate these tables, giving BPL some of the functionality of SQL.BPL security is implemented using directory tables, using an security table which contains information on (1) who may use an object, (2) what the object name is, and (3) what prohibitions the owner puts on the object for the user. The BPL security system is similar to UNIX security, and has the goal of being unobtrusive for users who are not security-conscious. In fact, BPL tries to make all of its features as orthogonal and independent as possible, so a user who does not know anything about operators (or nested arrays or control structures or types, etc.) can use BPL with blissful ignorance about these features.Future directions for BPL include specification of the formatting function, pattern matching, and further operating system interfaces.。
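To make the evaluation-order point concrete, the following small Python sketch (illustrative only, not an actual BPL interpreter) evaluates the same digits-and-operators expression under APL/BPL-style right-to-left order with no precedence and under conventional precedence, reproducing the 35-versus-19 example above:

import operator, re

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def tokenize(expr):
    return re.findall(r"\d+|[+\-*/]", expr)

def eval_right_to_left(expr):
    """No precedence: the rightmost subexpression is evaluated first (preced = 0)."""
    toks = tokenize(expr)
    value = float(toks[-1])
    for i in range(len(toks) - 2, 0, -2):
        value = OPS[toks[i]](float(toks[i - 1]), value)
    return value

def eval_with_precedence(expr):
    """Conventional precedence (preced = 1); Python's own parser is reused here."""
    return eval(expr)  # acceptable for this toy, digits-and-operators-only input

print(eval_right_to_left("5*3+4"))    # 35.0  (3+4 first, then 5*7)
print(eval_with_precedence("5*3+4"))  # 19    (5*3 first, then +4)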
Inexpensive and Scalable High Performance Computing: the OurGrid Approach (1)

Francisco Brasileiro
fubica@.br
Universidade Federal de Campina Grande
Departamento de Sistemas e Computação
Laboratório de Sistemas Distribuídos
Av. Aprígio Veloso, s/n, Bloco CO
58.109-970, Campina Grande - PB, Brazil

(1) This project is developed in cooperation with Hewlett-Packard Brazil R&D.

High performance computing over public resources and computational grids has recently become a reality, with several voluntary computing and grid initiatives currently in operation. However, the mainstream solutions available to implement such systems require substantial computational infrastructure support to allow the aggregation of a large number of resources spread over many administrative domains.

To make high performance computing affordable even for those users that cannot rely on having a highly qualified computing support team, we have proposed, implemented and deployed OurGrid. OurGrid is a peer-to-peer grid middleware that supports the automatic creation of large computational grids for the execution of embarrassingly parallel applications. It has been used to support the OurGrid Community, a public free-to-join grid, and ShareGrid, "a collaborative project, coordinated by TOPIX (TOrino Piemonte Internet eXchange) in the framework of the Innovation Development Program and funded by Regione Piemonte, aimed at providing a computing and storage service to the academic research community" (http://dcs.di.unipmn.it/).

OurGrid takes opportunistic computing, pioneered by the Condor project [1, 2], to a new dimension, potentially allowing thousands of peers to form a collaborative community for the exchange of idle computing capacity [3]. This is possible, first of all, because OurGrid incorporates an incentive mechanism, named the "Network of Favors", that makes it in the best interest of each peer to donate as much idle resources as possible, discouraging free-riding [4]. Also important is its ability to discover idle resources in a very efficient way, thanks to the use of NodeWiz, a highly scalable and fault-tolerant peer-to-peer Grid Information Service that is specialized in answering multi-attribute range queries [5, 6]. In this large-scale, heterogeneous and very dynamic grid, obtaining accurate information about resources and applications in order to perform appropriate application scheduling is an important issue. OurGrid solves this problem by trading cycles for information. Its Workqueue with Replication (WQR) scheduler uses no information about applications or resources and instead uses judicious replication in the assignment of tasks to resources, so as to achieve performance comparable to the traditional bin-packing schedulers that require full and accurate information about the system [7]. Finally, OurGrid provides a security portfolio that encompasses mechanisms with different levels of assurance and deployment difficulty, leaving to the user the option of selecting the correct tradeoff between security guarantees and cost, therefore welcoming a very large spectrum of users to join the community.

Since the OurGrid Community appeared as a production grid, many users have benefited from its technology. Applications executing over the OurGrid Community concern different areas, including several kinds of simulations, molecular dynamics [8], climate forecast [9], hydrological management [9], medical image processing [10], and data mining [11].
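The Workqueue-with-Replication scheduler mentioned above can be illustrated with a small simulation sketch (illustrative only, not the OurGrid implementation; the machine speeds, noise model, and replication limit are made-up parameters): tasks are handed to machines with no information about their speed, and once the queue is empty, still-running tasks are replicated so that the first replica to finish wins.

import heapq, random

def wqr_makespan(task_work, machine_speed, max_replicas=2, seed=0):
    rng = random.Random(seed)
    queue = list(range(len(task_work)))      # tasks not started yet
    replicas = {}                            # task id -> replicas launched
    done = set()
    events = []                              # (finish time, task, machine)
    idle = list(range(len(machine_speed)))
    now = 0.0

    def launch(machine, task):
        # The scheduler knows nothing about speeds; durations are only simulated here.
        t = now + task_work[task] / machine_speed[machine] * rng.uniform(1.0, 1.5)
        heapq.heappush(events, (t, task, machine))
        replicas[task] = replicas.get(task, 0) + 1

    while len(done) < len(task_work):
        while idle:
            if queue:                                   # plain workqueue phase
                launch(idle.pop(), queue.pop(0))
            else:                                       # replication phase
                pending = [t for t in replicas
                           if t not in done and replicas[t] < max_replicas]
                if not pending:
                    break
                launch(idle.pop(), min(pending, key=replicas.get))
        now, task, machine = heapq.heappop(events)
        idle.append(machine)
        done.add(task)                                  # first replica to finish wins
    return now

print(wqr_makespan(task_work=[10.0] * 20, machine_speed=[1.0, 1.0, 0.3, 2.0]))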
These applications share two common characteristics: they have high processing demand and can be parallelized as Bag-of-Tasks applications.

OurGrid applications can be categorized by the way they interact with the middleware. There are two main approaches: script-based and embedded applications. Script-based applications are the simplest ones. These applications comprise the execution of several tasks, in which each task can be defined as the execution of a program that reads input data from a file and outputs its result to another file. In this case the application is written as a text file that contains entries for all tasks that comprise the application, each entry being described as follows: i) a stage-in command used to transfer input data and executable code to the grid machine; ii) the command that will be executed at the grid machine; iii) a stage-out command that transfers the output of the execution to the user's machine; and iv) a validation command that runs a sanity check on the results produced, so as to detect bogus responses generated by faulty or malicious resources. In most cases it is possible to use a script to automate the generation of the application description file, since there may be hundreds or thousands of tasks per application. Embedded applications incorporate calls to the OurGrid middleware Java API. The use of grid resources is transparent to the application users. All the grid complexity is hidden from them. This approach is normally used when the application is more complex than those exemplified before. When using the OurGrid API directly, users are able to access sophisticated features that are not available through the simpler script-based interaction mode.
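The generation script mentioned above might look like the following hypothetical sketch (the field names and file layout are illustrative placeholders, not the actual OurGrid description-file syntax): each generated entry carries the four parts listed above, i.e., stage-in, execution, stage-out, and validation commands.

from pathlib import Path

N_TASKS = 100                     # illustrative number of independent tasks

def task_entry(i):
    # Field names below are placeholders for the four entry parts described above.
    return "\n".join([
        f"task {i}:",
        f"  stage-in:  put input_{i}.dat simulate.bin",
        f"  execute:   ./simulate.bin input_{i}.dat out_{i}.dat",
        f"  stage-out: get out_{i}.dat results/out_{i}.dat",
        f"  validate:  ./check_output.sh results/out_{i}.dat",
        "",
    ])

Path("application.desc").write_text(
    "\n".join(task_entry(i) for i in range(1, N_TASKS + 1)))
print(f"wrote application.desc with {N_TASKS} task entries")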
References

[1] D. Thain, T. Tannenbaum, M. Livny. Distributed Computing in Practice: The Condor Experience. Concurrency and Computation: Practice and Experience, Vol. 17(2-4), pp. 323-356, February-April 2005.
[2] A. R. Butt, R. Zhang, Y. C. Hu. A Self-Organizing Flock of Condors. Journal of Parallel and Distributed Computing, Vol. 66(1), pp. 145-161, January 2006.
[3] W. Cirne, F. Brasileiro, N. Andrade, L. Costa, A. Andrade, R. Novaes, M. Mowbray. Labs of the World, Unite!!! Journal of Grid Computing, Vol. 4(3), pp. 225-246, September 2006.
[4] N. Andrade, F. Brasileiro, W. Cirne, M. Mowbray. Automatic Grid Assembly by Promoting Collaboration in Peer-to-Peer Grids. Journal of Parallel and Distributed Computing, Vol. 67(4), pp. 957-966, August 2007.
[5] S. Basu, S. Banerjee, P. Sharma, S.-J. Lee. NodeWiz: Peer-to-Peer Resource Discovery for Grids. Proceedings of the Fifth IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2005), IEEE Computer Society, Washington, DC, USA, pp. 213-220, May 2005.
[6] F. Brasileiro, L. B. Costa, A. Andrade, W. Cirne, S. Basu, S. Banerjee. A Large Scale Fault-tolerant Grid Information Service. Proceedings of the 4th International Workshop on Middleware for Grid Computing (MGC 2006), ACM Press, New York, NY, USA, p. 14, November 2006.
[7] W. Cirne, F. Brasileiro, D. Paranhos, L. Góes, W. Voorsluys. On the Efficacy, Efficiency and Emergent Behavior of Task Replication in Large Distributed Systems. Parallel Computing, Vol. 33(3), pp. 213-234, April 2007.
[8] C. Veronez, C. Osthoff, P. Pascutti. HIV-I Protease Mutants Molecular Dynamics Research on Grid Computing Environment. Proceedings of the Brazilian Workshop on Bioinformatics (WOB 2003), pp. 161-164, October 2003.
[9] William Voorsluys, Eliane Araújo, Walfredo Cirne, Carlos O. Galvão, Enio P. Souza, Enilson P. Cavalcanti. Fostering Collaboration to Better Manage Water Resources. Concurrency and Computation: Practice and Experience, Vol. 19(12), pp. 1609-1620, August 2007.
[10] M. Oliveira, P. Marques, W. Cirne. Grid Computing in the Optimization of Content-Based Medical Images Retrieval. Radiologia Brasileira, Vol. 40, pp. 255-262, July 2007.
[11] Fabrício A. B. da Silva, Sílvia Carvalho, Hermes Senger, Eduardo R. Hruschka, Cléver R. G. de Farias. Running Data Mining Applications on the Grid: a Bag-of-Tasks Approach. Proceedings of the International Conference on Computational Science and its Applications (ICCSA 2004), Springer, Berlin/Heidelberg, Germany, pp. 168-177, May 2004.