COCOMO Estimation Using Neural Networks (IJISA-V4-N9-3)
AI Core Algorithms: Practice Questions with Answer Key

1. A neural-network-based classification model is a: A. generative model B. discriminative model C. neither D. both. Answer: B
2. Optimizers are an important component of neural-network training. The purposes of using an optimizer do NOT include: A. speeding up convergence B. reducing the difficulty of manual parameter tuning C. avoiding overfitting D. escaping local minima. Answer: C
3. In scikit-learn, the DBSCAN algorithm is very sensitive to the value of the ( ) parameter. A. p B. eps C. n_jobs D. algorithm. Answer: B
4. L1 and L2 regularization are commonly used in traditional machine learning to reduce generalization error. Which of the following statements is correct? A. L1 regularization can perform feature selection B. both L1 and L2 can perform feature selection C. L2 regularization can perform feature selection D. neither can. Answer: A
5. ReLU is not differentiable at zero; how is this handled during backpropagation? A. the derivative is set to 0 B. set to infinity C. left undefined D. set to an arbitrary value. Answer: A
6. In the code array = np.arange(10, 31, 5), what does the 5 represent? A. the number of elements B. the step size C. the first element D. the last element. Answer: B
7. The purpose of lossless compression in image processing is to ( ). A. filter out irrelevant signals B. filter out high-frequency signals C. filter out low-frequency signals D. remove redundant information from the image. Answer: D
8. For DBSCAN with Eps fixed, a relatively large MinPts value will: A. separate the clusters well B. cause only high-density groups of points to form clusters, with the rest labelled as noise C. cause low-density groups of points to form clusters, with the rest labelled as noise D. have no effect. Answer: B
9. To cope with the large number of weights stored in convolutional network models, researchers designed an ultra-lightweight model at a modest cost in accuracy: ( ) A. KNN B. RNN C. BNN D. VGG. Answer: C
10. In a feedforward neural network, as error backpropagation (the BP algorithm) propagates the error from the output end toward the input end, which parameters does it adjust? A. the size of the input data B. whether connections exist between neurons C. the connection weights between neurons in adjacent layers D. the connection weights between neurons in the same layer. Answer: C
11. LSTM is a classic sequence-oriented model that can model natural-language sentences and other time-series signals; it is a kind of ( ). A. recurrent neural network B. convolutional neural network C. naive Bayes classifier D. deep residual network. Answer: A
12. The core training signal of ( ) is the "distinguishability" of images.
The COCOMO Estimation Model

The COCOMO model is a structured cost-estimation model proposed by Boehm and developed at TRW. It is a precise, easy-to-use cost-estimation method. By level of detail, COCOMO comes in three forms: the basic COCOMO model, the intermediate COCOMO model, and the detailed COCOMO model.
The basic model is a static single-variable model: it computes software development effort with an empirical function whose single independent variable is the estimated number of source lines of code (LOC). The intermediate COCOMO model builds on the basic model and further adjusts the effort estimate with influence factors covering the product, the hardware, the personnel, and the project. The detailed COCOMO model includes all the features of the intermediate model and additionally accounts for the influence of each step of the software engineering process (such as analysis and design).
Taking the development environment into account, the model divides software development projects into three types:
1. Organic: relatively small, simple software projects. The developers understand the development goals well, have rich experience with the related software systems, are familiar with the operating environment, face few hardware constraints, and the program is not very large (< 50,000 lines).
2. Embedded: the software must run under tight hardware, software, and operational constraints, and is usually tightly coupled with some complex hardware device. The requirements on interfaces, data structures, and algorithms are high, and the software can be of any size. Examples include large and complex transaction-processing systems, large or very large operating systems, aerospace control systems, and large command-and-control systems.
3. Semidetached: between the two types above. Size and complexity are moderate or higher, up to about 300,000 lines.
The COCOMO model defines the following variables:
L — the number of delivered source instructions, excluding comments; 1 KDSI = 1,000 DSI.
E — the development effort, in person-months (1 MM = 19 person-days = 152 person-hours = 1/12 person-year).
D — the development schedule, in months.
With these definitions, the three models are applied as follows. Basic COCOMO: (1) since COCOMO is a cost-analysis method based on lines of code, first estimate the code size L of the software (in KLOC, thousands of lines of code); (2) then obtain the estimated effort and development time from the formulas E = a·L^b and D = c·E^d (a small sketch of this calculation follows below).
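As an illustration only, here is a minimal Python sketch of the basic COCOMO calculation, using the standard Boehm coefficients also tabulated later in this document; the function name and the 32 KLOC example project are arbitrary choices for the example.

```python
# Basic COCOMO: effort E = a * L^b (person-months), schedule D = c * E^d (months).
BASIC_COCOMO = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc: float, mode: str = "organic"):
    a, b, c, d = BASIC_COCOMO[mode]
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # months
    staff = effort / schedule       # average team size (persons)
    return effort, schedule, staff

if __name__ == "__main__":
    e, t, n = basic_cocomo(32, "organic")   # hypothetical 32 KLOC organic project
    print(f"effort = {e:.1f} PM, schedule = {t:.1f} months, avg staff = {n:.1f}")
```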
Evaluation Metrics of MSCOCO

Introduction
MSCOCO (Microsoft Common Objects in Context) is a general-purpose dataset for object detection, segmentation, and image captioning. In computer vision, evaluating model performance is very important, and a set of evaluation metrics is needed to measure how well a model performs on the MSCOCO dataset. This article introduces the MSCOCO evaluation metrics, how they are computed, and where they apply.
Overview of the MSCOCO dataset
MSCOCO is a widely used computer-vision dataset that contains images of many kinds of content, such as people, animals, and vehicles. It annotates the position, category, and segmentation of every object in each image, and also provides textual captions for the images.
Overview of the evaluation metrics
The evaluation metrics measure model performance on the object detection, segmentation, and image-captioning tasks. The main MSCOCO metrics are Average Precision (AP) for object detection, Average Precision (AP) for instance segmentation, and BLEU, METEOR, and CIDEr for image captioning.
Object detection metrics
The main evaluation metric for object detection is Average Precision (AP). AP measures the overlap between predicted bounding boxes and ground-truth boxes, capturing both how accurately the model localizes objects and how many it recalls. It is computed as follows (a small sketch follows this list):
1. For each category, rank the predicted boxes and match them to ground-truth boxes using the IoU (Intersection over Union).
2. At each IoU threshold (COCO reports AP at 0.5, at 0.75, and averaged over thresholds from 0.50 to 0.95 in steps of 0.05), compute precision and recall.
3. Compute the Average Precision (AP) of each category as the integral of precision over recall (the area under the precision-recall curve).
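To make the precision/recall integration concrete, here is a minimal, illustrative Python sketch of AP for a single category at a single IoU threshold. It assumes detections have already been matched to ground truth (each detection marked as a true or false positive) and uses all-point interpolation, a simplification of the official COCO 101-point procedure.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """AP for one category at one IoU threshold.

    scores           : confidence score of each detection
    is_true_positive : 1 if the detection matched an unmatched ground-truth box, else 0
    num_ground_truth : number of ground-truth boxes for this category
    """
    order = np.argsort(-np.asarray(scores))           # rank detections by confidence
    tp = np.asarray(is_true_positive)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1 - tp)
    recall = cum_tp / max(num_ground_truth, 1)
    precision = cum_tp / (cum_tp + cum_fp)
    # interpolate precision (max precision at any higher recall), then integrate over recall
    interp = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, interp):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# toy example: 4 detections, 3 ground-truth objects
print(average_precision([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 1], 3))
```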
Segmentation metrics
The evaluation metric for instance segmentation is also Average Precision (AP), but it is computed slightly differently: the overlap is measured between the predicted segmentation mask and the ground-truth mask. Concretely:
1. For each category, rank the predictions and match them to the ground truth using the IoU between the predicted and ground-truth masks.
The official evaluation tooling handles both the detection and segmentation cases, as sketched below.
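In practice the official COCO metrics are usually computed with the pycocotools package rather than by hand. A minimal usage sketch follows; the annotation and result file names are placeholders, and the predictions must be in the standard COCO result JSON format.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")          # ground-truth annotations (placeholder path)
coco_dt = coco_gt.loadRes("my_detections.json")   # model predictions in COCO result format

# iouType: "bbox" for detection, "segm" for instance segmentation
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP, AP50, AP75, AP_small/medium/large, AR, ...
```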
Age Prediction from Face Recognition Using the Eigenface Algorithm (IJMECS-V5-N9-6)

explored. In contrast to other face variations, aging variations present several unique characteristics that make age estimation a challenging task. Since human faces provide a great deal of information, many topics have drawn attention and have been studied intensively. The most prominent of these is face recognition. Other research topics include predicting feature faces, classifying gender and expressions from facial images, and so on. However, very few studies have addressed age classification or age estimation. In this research, we try to show that a computer can estimate or classify human age from features extracted from a facial image using eigenfaces. Humans themselves are not perfect at predicting the age of subjects from facial information. The accuracy of age prediction by humans depends on various factors, such as the ethnic origin of the person shown in an image, the overall conditions under which the face is observed, and the actual abilities of the observer to perceive and analyse facial information. The aim of this experiment was to get an indication of the accuracy of age estimation by humans, so that we can compare their performance with the performance achieved by machines.
II. LITERATURE REVIEW
Accurate age prediction is one of the most important issues in human communication and an essential part of human-computer interaction. With the advancement of technology, one thing that concerns the whole world, especially developing countries, is the tremendous increase in population. With such a rapid rate of increase, it is becoming difficult to recognize every individual, because records of each individual, in digital or hard-copy form, must be maintained for different periods of his or her life. Sometimes a database holds the required information about a particular individual, but it is of no use because it is now obsolete. With age a person's facial features change, and it becomes difficult to identify a person given images taken at two different ages. Age prediction from human faces is a challenging problem with a host of applications in forensics, security, biometrics, electronic customer relationship management, entertainment, and cosmetology. The main challenge is the huge heterogeneity in age-related facial changes across different people. Determining the facial changes associated with age is a hard problem.
Research on the COCOMO II Software Estimation Model Based on a Bayesian Calibration Algorithm

To improve its estimation accuracy, a multiple-regression analysis method, the Bayesian calibration algorithm, is used to calibrate the model: on a logically consistent basis, inferences are drawn from prior information and sample information, and the resulting posterior estimates improve the estimation accuracy. Experimental results show that the COCOMO II model calibrated with the Bayesian algorithm further improves the precision of the estimates.
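The Bayesian calibration described here is essentially a precision-weighted combination of a prior (expert-judgment) estimate of a model coefficient with a data-driven (regression) estimate. The Python sketch below illustrates that idea only; the variable names and the numbers in the example are made up and are not taken from the paper.

```python
def bayesian_calibrate(prior_mean, prior_var, sample_mean, sample_var):
    """Posterior estimate of a model coefficient as a precision-weighted average
    of the prior (expert) estimate and the sample (regression) estimate."""
    w_prior = 1.0 / prior_var      # precision of the prior information
    w_sample = 1.0 / sample_var    # precision of the sample (regression) information
    post_mean = (w_prior * prior_mean + w_sample * sample_mean) / (w_prior + w_sample)
    post_var = 1.0 / (w_prior + w_sample)
    return post_mean, post_var

# hypothetical calibration of one effort-multiplier coefficient
print(bayesian_calibrate(prior_mean=1.10, prior_var=0.04, sample_mean=0.95, sample_var=0.01))
```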
The COCOMO II model consists of three sub-models: the Application Composition model, the Early Design model, and the Post-Architecture model. As the name suggests, the Post-Architecture model applies after the software architecture has been fully defined and established. It is based on source lines of code (SLOC) and/or function points together with 5 scale factors and 17 effort multipliers; it takes SLOC and/or function points as the input for project size, and uses adjusted…
Chapter 5: The Basic COCOMO Model

Software engineering economics is concerned with the economic problems associated with software engineering; its core content is the estimation of software cost and schedule. This chapter discusses the basics of estimating software cost with the Constructive Cost Model (the COCOMO model) and introduces the basic COCOMO model. The contents include:
The Basic COCOMO Model
Definitions and Assumptions
The previous chapter introduced the definitions of the software life-cycle phases and activities used in the COCOMO model. Below are some additional definitions and assumptions that form the basis for applying the COCOMO model.
The basic cost driver is the number of delivered source instructions (DSI) produced by the project, defined as follows:
Delivered — this term normally means that non-deliverable support software, such as test drivers, is excluded. However, if developing such software requires the same effort as the delivered software, with its own reviews, test plans, documentation, and so on, then it should also be counted.
Source instruction — this term covers all program instructions written by project members that can be translated into machine code by a preprocessor, compiler, or assembler. It excludes comments and unmodified common (reused) software. It includes job-control language, format statements, and data declarations. Instructions are counted as lines of code.
The COCOMO Model

The COCOMO model — a common software size and cost estimation method. Estimation models for computer software are derived from actual data on previously completed projects and are used in the planning phase of a software project. Because the models are derived from "past" and "local" data, no estimation model can fit all current software projects and every development environment; their results are for reference only.
In 1981 Boehm proposed the Constructive Cost Model (CoCoMo). It is built on a static, single-variable model. CoCoMo has three levels — basic, intermediate, and detailed — used at three different stages of software development. The basic CoCoMo model is used early in system development to estimate the effort of the whole system (including software maintenance) and the time needed for development. The intermediate CoCoMo model is used to estimate the effort and development time of each subsystem. The detailed CoCoMo model is used to estimate individual software components, such as the modules inside a subsystem.
Basic CoCoMo model:
E = a·L^b
D = c·E^d
where E is the effort in person-months (PM), D is the development time in months (M), and L is the estimated number of lines of code of the project, in thousands of lines (KLOC). The constants a, b, c, d take the values in the table below. Boehm divided software into three classes — organic, semidetached, and embedded — so that projects in different application domains and of different complexity can choose a, b, c, d according to the class they fall into.

Software type | a   | b    | c   | d    | Typical applications
Organic       | 2.4 | 1.05 | 2.5 | 0.38 | ordinary application programs
Semidetached  | 3.0 | 1.12 | 2.5 | 0.35 | utility programs, compilers, etc.
Embedded      | 3.6 | 1.20 | 2.5 | 0.32 | real-time processing, control programs, operating systems

Intermediate CoCoMo model: starting from the basic model, the effort equation is multiplied by an effort adjustment factor (EAF):
E = a·L^b · EAF
where L is the estimated size of the software product in thousands of lines of code and a, b are constants from the table below.

Intermediate CoCoMo parameters
Software type | a   | b
Organic       | 3.2 | 1.05
Semidetached  | 3.0 | 1.12
Embedded      | 2.8 | 1.20

The effort adjustment factor (EAF) depends on product attributes, computer attributes, personnel attributes, and project attributes. The product attributes are: 1. required software reliability, 2. software complexity, 3. database size. (A small sketch of the intermediate calculation follows below.)
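As an illustration, here is a minimal Python sketch of the intermediate CoCoMo effort calculation with an effort adjustment factor; the three example cost-driver multipliers are hypothetical values, not taken from this document.

```python
from math import prod

INTERMEDIATE_COCOMO = {         # (a, b) from the table above
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def intermediate_effort(kloc: float, mode: str, multipliers: list[float]) -> float:
    """Effort in person-months: E = a * L^b * EAF, where EAF is the product of the multipliers."""
    a, b = INTERMEDIATE_COCOMO[mode]
    eaf = prod(multipliers)
    return a * kloc ** b * eaf

# hypothetical ratings: high reliability (1.15), high complexity (1.15), capable analysts (0.86)
print(intermediate_effort(32, "semidetached", [1.15, 1.15, 0.86]))
```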
Intermediate COCOMO Model Estimation Equations

K: effort in person-years
Ck: a constant related to the state of technology
For a poor development environment, Ck = 2500; for a good development environment, Ck = 10000; for an excellent development environment, Ck = 12500.
5. Cost estimation based on lines of code

Expected size: L_e = (a + 4m + b) / 6
Deviation over n components: L_d = sqrt( Σ_{i=1..n} ((b_i − a_i) / 6)² )

where a is the estimated number of source lines of code in the best (optimistic) case, m the estimate in the normal (most likely) case, and b the estimate in the worst (pessimistic) case. A short numeric check follows below.
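For example, with hypothetical estimates a = 2,000, m = 3,000, and b = 5,000 lines for one component, the expected-size formula gives:

```python
a, m, b = 2000, 3000, 5000          # optimistic, most likely, pessimistic LOC estimates (hypothetical)
expected_loc = (a + 4 * m + b) / 6  # PERT-style expected size
deviation = (b - a) / 6             # per-component deviation
print(expected_loc, deviation)      # about 3166.7 lines, +/- 500 lines
```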
RMMM
Sample risk projection table

Risk                                          | Category       | Probability | Impact
Loss of funding                               | Budget risk    | 40%         | 1
Changing requirements                         | Product size   | 80%         | 2
Technology does not achieve expected results  | Technical risk | 30%         | 1
Lack of training on the tools                 | Staffing risk  | 80%         | 3
Staff lack experience                         | Staffing risk  | 30%         | 2
Frequent staff turnover                       | Staffing risk  | 60%         | 2
3. The COCOMO model
(1) Basic COCOMO model

Estimation equations: ED = r·S^c, TD = a·(ED)^b
where ED is the total development effort, TD the development time, S the number of source instructions, and r, c, a, b are empirical constants that depend on the overall type of the project.

Overall project types:
Organic: a small product developed in an in-house development environment.
Embedded: the development environment is often severely constrained, for example in time and space; for the same software size, development is harder, the estimated effort is much larger, and productivity is much lower.
Semidetached: between organic and embedded.

(2) Intermediate COCOMO model

Estimation equations: ED = r·S^c, TD = a·(ED)^b
where ED is the total development effort, TD the development time, S the number of source instructions, and r, c, a, b are empirical constants that depend on the overall type of the project.
Image Processing Unit Examination

Candidate name: __________  Date: __________  Score: __________  Grader: __________

I. Single-choice questions (20 questions, 1 point each, 20 points in total; exactly one of the four options given for each question is correct)

1. The most basic operation of an image processing unit is: ( )
A. Color-space conversion
B. Image filtering
D. Image quantization

2. Which of the following image formats does not support lossless compression? ( )
A. PNG
B. JPEG
C. GIF
D. BMP

3. In the RGB color space, the channel corresponding to red is: ( )
A. R
B. G
C. B
D. Y

4. Which of the following algorithms is not an edge-detection algorithm? ( )
A. Sobel
B. Canny
C. Laplacian
D. Huffman coding
D. Directionality

11. Which of the following interpolation methods is not commonly used for image scaling? ( )
A. Nearest-neighbor interpolation
B. Bilinear interpolation
C. Bicubic interpolation
D. Fourier interpolation

12. In digital image processing, which of the following concepts relates to image resolution? ( )
A. Spatial resolution
B. Frequency resolution
C. Temporal resolution
D. Energy resolution

13. Which of the following algorithms is not a region-growing algorithm for image segmentation? ( )
Research and Application of a Function-Point-Based COCOMO II Estimation Model

…complexity is also getting higher and higher. Cost management of software is becoming an important part of computer software project management; only by doing this aspect well can quality be guaranteed and costs be reduced. The improved COCOMO II model combined with Function Point Analysis is more suitable for cost estimates in small and medium-sized software companies.

Key Words: Cost estimation; COCOMO II; Function Point Analysis.

The thesis studies COCOMO II, one of the currently successful methods for estimating the cost of software. In view of the features of the domestic…
…role, and adds cost drivers according to the situation of domestic small and medium-sized enterprises.
(2) Function point analysis is selected as the software size estimation method and is adjusted for the characteristics of the Chinese software industry.
(3) Through a concrete example, the improved function-point-based COCOMO II estimation model is used to estimate size and effort; the estimates are compared with the project's actual effort to evaluate the model, and the results are analysed and explained. (A small sketch of the basic function-point calculation follows below.)
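For reference, here is a minimal sketch of the standard Function Point Analysis calculation that such a model starts from: unadjusted function points weighted by complexity, then scaled by a value adjustment factor derived from the 14 general system characteristics. The counts used in the example are hypothetical.

```python
# IFPUG-style weights for (low, average, high) complexity of each function type
WEIGHTS = {
    "EI": (3, 4, 6),     # external inputs
    "EO": (4, 5, 7),     # external outputs
    "EQ": (3, 4, 6),     # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def function_points(counts, gsc_scores):
    """counts: {type: (n_low, n_avg, n_high)}; gsc_scores: 14 ratings of 0..5."""
    ufp = sum(n * w for t, ns in counts.items() for n, w in zip(ns, WEIGHTS[t]))
    vaf = 0.65 + 0.01 * sum(gsc_scores)   # value adjustment factor
    return ufp * vaf                      # adjusted function points

counts = {"EI": (5, 4, 2), "EO": (4, 3, 1), "EQ": (3, 2, 0), "ILF": (2, 1, 0), "EIF": (1, 1, 0)}
print(function_points(counts, [3] * 14))  # hypothetical example
```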
AI Model Optimization Examination

4. Which of the following techniques can be used to handle class imbalance? ( )
A. Undersampling
B. Oversampling
C. Using a different loss function
D. Increasing the number of iterations

5. Which of the following are types of loss function? ( )
A. Cross entropy
B. Mean squared error
C. Hinge loss
D. Markov chain

6. Which of the following methods can be used for feature selection? ( )
A. Principal component analysis
B. Stepwise regression
C. L1 regularization
D. Decision-tree feature importance

1. Which of the following methods can be used to improve the performance of a neural network? ( )
A. Adding more hidden layers
B. Using batch normalization
C. Reducing the learning rate
D. Increasing the size of the dataset

2. Which of the following are common strategies against overfitting? ( )
A. Adding more training data
B. Increasing the weight of the regularization term
C. Reducing the number of model parameters
D. Increasing model complexity

3. Which of the following are supervised learning algorithms? ( )
A. K-means clustering
B. Support vector machine
C. Linear regression

13. Which of the following methods can be used to improve the initialization of neural-network weights? ( )
A. Random initialization
B. Fully-connected initialization
C. He initialization
D. Constant initialization

14. Which of the following algorithms has strong non-linear fitting ability in machine learning? ( )
A. Linear regression
B. Decision tree
C. K nearest neighbors
D. Neural network

15. Which of the following methods can be used to mitigate the exploding-gradient problem in neural networks? ( )
A. Batch normalization
B. Exponentially decaying learning rate

10. During model training, early stopping is an effective way to prevent overfitting. ( )

V. Open-ended questions (4 questions, 10 points each, 40 points in total)

1. Describe what overfitting is, how it arises in artificial-intelligence models, and how you would detect and avoid it.
2. Briefly explain how ensemble learning works, list at least three common ensemble methods, and state their main advantages.
Applied Research on the COCOMO and SLIM Models for Software Project Effort Estimation

Zhao Yanjun (College of Mathematics, Physics and Information Engineering, Zhejiang Normal University, Jinhua, Zhejiang 321004). Abstract: effort estimation matters greatly for drawing up the software project plan, managing the project schedule, allocating human resources, and controlling project cost. This article focuses on the COCOMO and SLIM estimation methods, analyses and compares the two models, and summarizes a more accurate way to estimate. Keywords: effort estimation; COCOMO model; SLIM model.

0 Introduction. The project manager assigns effort to individual engineers and distributes it across the detailed project plan; this is project management driven by the effort estimate [1]. Effort estimation is therefore a key step in software project planning. Many organizations prefer decomposition or modelling methods to relying on expert judgement or analogy. By building a parametric model of the factors that drive effort or cost (the commonly used modelling approach), software engineers have something to check against when they compare estimated values with actual values. By embedding a model in the process, the estimator can examine the relationship between the model and its accuracy, and tune the model to make future predictions more accurate. Two kinds of model have been used to estimate effort: cost models and constraint models. A cost model gives a direct estimate of effort or duration; the COCOMO model is an empirical cost model. A constraint model, in contrast, describes the relationship over time between two or more parameters such as effort, duration, or staffing level. The Rayleigh curve is used as a constraint model in several commercial products, including Putnam's [2]. This article introduces two representative models with relatively high accuracy, COCOMO and SLIM, compares them, and proposes a way of combining the two to compute effort.

1 COCOMO model estimation. In the 1970s, Barry Boehm studied data collected from a large number of projects at the TRW consulting company in California. Using these data, he formulated the COnstructive COst MOdel (COCOMO). Later he and his colleagues proposed the upgraded COCOMO 2.0, a complete update of the original version.
The COCOMO Model

The Application of the COCOMO Model in Software Project Cost Estimation. Software class 05(1), Xiao Kai, 0510321112. Abstract: software development cost estimation mainly refers to the effort spent during software development and its corresponding price. As an excellent estimation tool, the COCOMO model defines its own practical variables and the formulas that relate them, and for organic, embedded, and semidetached projects it defines the corresponding calculations. Keywords: cost estimation; effort; COCOMO model; project type; organic model.
ABSTRACT: Software development cost estimation mainly refers to the effort spent in the software development process and the corresponding price. As an excellent estimation tool, the COCOMO model defines its own practical variables and the calculation formulas that relate them. For organic, embedded, and semi-detached projects, the COCOMO model defines the corresponding accurate calculations.
Introduction: software development cost estimation mainly refers to the effort spent during software development and its corresponding price. The estimate should be based on the cost of the whole development process, from software planning, requirements analysis, design, coding, unit testing, and integration testing through acceptance testing. To make the best use of time, money, and the resources within the scope of the work, many cost-estimation methods have been developed in project management practice, in order to obtain estimates that are as good as possible.
The COCOMO II Cost Estimation Model

Like many models in IT management, COCOMO has a colourful history, as long as that of the American Levi's brand of jeans. In 1981, Dr. Barry Boehm, then working on software development cost estimation in the computer science department of TRW (Thompson Ramo Wooldridge) in the United States, proposed, after countless failed attempts, a structured cost-estimation model: the Constructive Cost Model (COCOMO). It is a precise, easy-to-use cost-estimation method and, like Putnam's model, one that has been validated against industry data. In the decade that followed, Ray Kile of the US Air Force revised and improved it, producing an enhanced intermediate COCOMO that also became the version used by the US military. Boehm never gave up on applying COCOMO effectively in software project cost-estimation tools. He realized that the IT industry evolves very quickly and that without development and innovation COCOMO would one day be left behind and forgotten, so in 1996, based on how software development had changed, he released an improved version, upgrading COCOMO to COCOMO II. COCOMO II is a thorough update of the classic COCOMO model and reflects modern software processes and construction methods. In the parametric-model guide published by the US Department of Defense in the spring of 1999, it was listed as the preferred software estimation model; Boehm had brought COCOMO out of a vast field of similar tools to the top of the field, and his books "Software Cost Estimation with COCOMO II" and "Software Engineering Economics" became classics admired by countless IT project managers. Having, as the story goes, seen through worldly fame, he eventually withdrew from the scene, but he left the world COCOMO II, an enormous asset and an incalculable contribution to software development.
Research on a COCOMO-Based Software Cost Estimation Tool

Hao Zhenming
[Journal] Modern Computer (Professional Edition)
[Year (volume), issue] 2009 (000) 011
[Abstract] Software cost estimation plays an important role in software project management. Several software cost estimation models exist, of which the COCOMO model attracts the most attention, but relatively little work has addressed how to estimate the software size used in the model. If, starting from function point analysis, the software's function point count is computed and then converted into the corresponding software size using statistical results, the accuracy of the COCOMO model can be greatly improved, and the practicality of COCOMO tools further increased.
[Pages] 4 (P7-9, 13)
[Author] Hao Zhenming
[Affiliation] Department of Computer Science, Jinan University, Guangzhou, 510632
[Language] Chinese
[Related literature]
1. A COCOMO II-based pricing method for military software [J], Fan Chaoyang; Hu Xin
2. Research on the COCOMO II software estimation model based on a Bayesian calibration algorithm [J], Shang Xianlian; Chen Xiaoying; Jia Zhenbin; Chen Jing
3. Using software cost estimation models with measurement tools [J], Yuan Rong; Shu Fengdi; Tang Zinan; Wang Qing
4. Research on the influence of automated software testing based on the COCOMO 2 model [J], Wang Ru
Object Detection Algorithms on the COCO Dataset

The COCO dataset is a large, diverse dataset for object detection and recognition, containing many kinds of objects and scenes. Many object-detection algorithms are commonly used on the COCO dataset, including but not limited to:

1. Faster R-CNN: a classic deep-learning detector that adds a Region Proposal Network (RPN) to a CNN to improve both the accuracy and the speed of detection.
2. YOLO (You Only Look Once): a single-stage detector that predicts the locations and classes of all objects in an image in a single pass.
3. SSD (Single Shot MultiBox Detector): a single-stage, multi-box detector that predicts object locations and classes from several convolutional feature layers.
4. RetinaNet: a single-stage detector that introduces a new loss function (the focal loss) to improve detection accuracy and robustness.
5. Mask R-CNN: an extension of Faster R-CNN that adds an instance-segmentation branch, so it performs detection and segmentation at the same time.

All of these algorithms perform well on the COCO dataset; which one to choose depends on the application scenario, the available computing resources, and the required accuracy and speed. (A short usage sketch follows below.)
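As a practical illustration, here is a minimal inference sketch using torchvision's COCO-pretrained Faster R-CNN; it assumes torchvision >= 0.13 is installed, and the image path is a placeholder.

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT           # COCO-pretrained weights
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("example.jpg")                              # placeholder path, uint8 CHW tensor
with torch.no_grad():
    prediction = model([preprocess(img)])[0]                 # dict with 'boxes', 'labels', 'scores'

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.5:
        print(weights.meta["categories"][label], box.tolist(), float(score))
```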
I.J. Intelligent Systems and Applications, 2012, 9, 22-28Published Online August 2012 in MECS (/)DOI: 10.5815/ijisa.2012.09.03COCOMO Estimates Using Neural NetworksAnupama Kaushik,Assistant Professor, Dept. of IT, Maharaja Surajmal Institute of Technology, GGSIP University, Delhi, Indiaanupama@msit.inAshish Chauhan, Deepak Mittal, Sachin GuptaDept. of IT, Maharaja Surajmal Institute of Technology, GGSIP University, Delhi, IndiaAshish.chauhan004@; deepakm905@; sachin.gupta_15@Abstract—Software cost estimation is an important phase in software development.It predicts the amount of effort and development time required to build a software system. It is one of the most critical tasks and an accurate estimate provides a strong base to the development procedure. In this paper, the most widely used software cost estimation model, the Constructive Cost Model (COCOMO) is discussed. The model is implemented with the help of artificial neural networks and trained using the perceptron learning algorithm. The COCOMO dataset is used to train and to test the network. The test results from the trained neural network are compared with that of the COCOMO model. The aim of our research is to enhance the estimation accuracy of the COCOMO model by introducing the artificial neural networks to it.Index Terms—Artificial Neural Network, Constructive Cost Model, Perceptron Network, Software Cost EstimationI.IntroductionSoftware cost estimation is one of the most significant activities in software project management. Accurate cost estimation is important because it can help to classify and prioritize development projects to determine what resources to commit to the project and how well these resources will be used. The accuracy of the management decisions will depend on the accuracy of the software development parameters. These parameters include effort estimation, development time estimation, cost estimation, team size estimation, risk analysis, etc. These estimates are calculated in the early development phases of the project. So, we need a good model to calculate these parameters. An early and accurate estimation model reduces the possibilities of conflicts between members in the later stages of project development.In the last few decades many software cost estimation models have been developed. The algorithmic models also known as conventional models use a mathematical formula to predict project cost based on the estimates of project size, the number of software engineers, and other process and product factors [1]. These models can be built by analysing the costs and attributes of completed projects and finding the closest fit formula to actual experience. COCOMO (Constructive Cost Model), is the best known algorithmic cost model published by Barry Boehm in 1981 [2]. It was developed from the analysis of sixty three software projects. These conventional approaches lacks in terms of effectiveness and robustness in their results. These models require inputs which are difficult to obtain during the early stages of a software development project. They have difficulty in modelling the inherent complex relationships between the contributing factors and are unable to handle categorical data as well as lack of reasoning capabilities [3]. 
The limitations of algorithmic models led to the exploration of non-algorithmic models, which are based on soft computing. Non-algorithmic models for cost estimation encompass methodologies based on fuzzy logic (FL), artificial neural networks (ANN) and evolutionary computation (EC). These methodologies handle real-life situations by providing flexible information-processing capabilities. This paper proposes a neural network technique using the perceptron learning algorithm for software cost estimation, based on the COCOMO model. Neural networks have been found to be one of the best techniques for software cost estimation. Nowadays many researchers and scientists are constantly working on developing new software cost estimation techniques using neural networks [4, 5, 6, 7].

The rest of the paper is organized as follows: sections 2 and 3 describe the COCOMO model and artificial neural network concepts respectively. Sections 3 and 4 discuss the related work and the proposed neural model. Sections 4 and 5 present the proposed model and the training algorithm implemented. Section 6 discusses the experimental results and evaluation criteria. Finally, section 7 concludes the paper.

II. COCOMO Model

The COCOMO model [2] is the most widely used algorithmic cost estimation technique due to its simplicity. This model provides us with the effort in person-months, the development time in months and the team size in persons. It makes use of mathematical equations to calculate these parameters. The COCOMO model is a hierarchy of software cost estimation models:

1. Basic Model - It estimates effort for small to medium sized software projects in a quick and rough fashion and takes the form

E = a × (SIZE)^b    (1)

where E is the effort applied in person-months and SIZE is measured in thousands of delivered source instructions. The coefficients a and b depend on the three modes of development of projects. Boehm proposed three modes of projects:
(a) Organic mode - for small projects up to 2-50 KLOC (thousand lines of code) with experienced developers in a familiar environment.
(b) Semi-detached mode - for medium-sized projects up to 50-300 KLOC with average previous experience on similar projects.
(c) Embedded mode - for large and complex projects, typically over 300 KLOC, with developers having very little previous experience.

2. Intermediate Model - The basic COCOMO does not take account of the software development environment. Boehm introduced a set of 15 cost drivers in the intermediate COCOMO that add accuracy to the basic COCOMO. The cost drivers are grouped into four categories:
1. Product attributes
(a) Required software reliability (RELY)
(b) Database size (DATA)
(c) Product complexity (CPLX)
2. Computer attributes
(a) Execution time constraint (TIME)
(b) Main storage constraint (STOR)
(c) Virtual machine volatility (VIRT)
(d) Computer turnaround time (TURN)
3. Personnel attributes
(a) Analyst capability (ACAP)
(b) Application experience (AEXP)
(c) Programmer capability (PCAP)
(d) Virtual machine experience (VEXP)
(e) Programming language experience (LEXP)
4. Project attributes
(a) Modern programming practices (MODP)
(b) Use of software tools (TOOLS)
(c) Required development schedule (SCED)

Table I Coefficients for Intermediate COCOMO

The cost drivers have up to six levels of rating: Very Low, Low, Nominal, High, Very High, and Extra High. Each rating has a corresponding real number known as an effort multiplier, based upon the factor and the degree to which the factor can influence productivity.
The estimated effort in person-months (PM) for the intermediate COCOMO is given as:

Effort = a × (SIZE)^b × Π_{i=1..15} EM_i    (2)

In equation (2) the coefficient "a" is known as the productivity coefficient and the coefficient "b" is the scale factor. They are based on the different modes of project as given in Table I. The contribution of the effort multipliers corresponding to the respective cost drivers is introduced into the effort estimation formula by multiplying them together. The numerical value of the i-th cost driver is EM_i (Effort Multiplier).

3. Detailed Model - Boehm introduced two more capabilities in this model: phase-sensitive effort multipliers, which help in determining the manpower allocation for each phase of the project, and a three-level product hierarchy (module, subsystem and system levels). The ratings of the cost drivers are done at the appropriate level.

This research uses the intermediate COCOMO model because its estimation accuracy is greater than that of the basic version, and at the same time comparable to the detailed version.

III. Artificial Neural Networks

Artificial neural networks (ANN) [8] are interconnections of artificial neurons. They are used to solve artificial intelligence problems without the need to create a real biological model. These networks focus on hypothetical matters from an information-processing point of view. ANNs possess a large number of highly interconnected processing elements called neurons. Each neuron is connected to the others by connection links, and each connection link is associated with weights which contain information about the input signal. This information is used by the neuron net to solve a particular problem. Each neuron has an internal state of its own, called the activation level of the neuron, which is a function of the inputs the neuron receives. There are a number of activation functions that can be applied over the net input, such as Gaussian, linear, sigmoid and tanh. Figure 1 shows the structure of a basic neural network.

Fig. 1 Basic Neural Network

A basic neural network consists of a number of inputs multiplied by some weights and combined together to give an output. The feedback from the output is fed back into the inputs to adjust the applied weights and to train the network. This structure of neural networks helps to solve practical, non-linear decision-making problems easily.

The neural network used in our approach is the perceptron neural network [9]. The perceptron is a network that learns concepts, i.e. it can learn to respond with True (1) or False (0) for inputs presented to it, by repeatedly studying examples provided to it. The network weights and biases can be trained to produce a correct target vector when presented with the corresponding input vector. The training technique used is called the perceptron learning rule. The perceptron neural network is selected due to its ability to generalize from its training vectors and to work with randomly distributed connections.

Vectors from a training set are presented to the network one after another. If the network's output is correct, no change is made. Otherwise, the weights and biases are updated using the perceptron learning rule. An entire pass through all of the input training vectors is called an epoch. When such an entire pass of the training set has occurred without error, training is complete. At this time any input training vector may be presented to the network and it will respond with the correct output vector.
If a vector P not in the training set is presented to the network, the network will tend to exhibit generalization by responding with an output similar to the target vectors for input vectors close to the previously unseen input vector P.

The activation function is one of the key components of the perceptron, as in the most common neural network architectures. It determines, based on the inputs, whether the perceptron activates or not. Basically, the perceptron takes all of the weighted input values and adds them together. If the sum is above or equal to some value (called the threshold) then the perceptron fires; otherwise it does not. The output of the perceptron network is given by

y = f(y_in)    (3)

where f(y_in) is the activation function, defined as

f(y_in) = 1 if y_in ≥ θ, and f(y_in) = 0 if y_in < θ    (4)

IV. Relevant Work

Artificial neural networks are good at modeling complex non-linear relationships. For many years, many researchers have worked on project cost estimation using artificial neural networks, and many have applied the neural network approach to estimate software development effort [10, 11, 12, 13, 14, 15 and 16]. A recent study by Jorgensen [17] provides a detailed review of different studies on software development effort. Prasad Reddy P.V.G.D, Sudha K.R, Rama Sree P and Ramesh S.N.S.V.S. [18] explain the radial basis study of neural networks. Another study by Samson et al. [12] uses an Albus multilayer perceptron in order to predict software effort; they use Boehm's COCOMO dataset. Srinivasan and Fisher [15] report the use of a neural network with a back-propagation learning algorithm and found that the neural network outperformed other techniques. K. Vinay Kumar, V. Ravi, Mahil Carr, and N. Raj Kiran [19] use the wavelet neural network for predicting software development cost. N. Tadayon [20] also reports the use of a neural network with a back-propagation learning algorithm; however, it was not clear how the dataset was divided for training and validation purposes. B. Tirimula Rao et al. [21] provided a novel neural network approach for software cost estimation using a functional link artificial neural network. COCOMO is arguably the most popular and widely used software estimation model, which integrates valuable expert knowledge [2].

Fig. 2 Architecture of the proposed Neural Network

V. Proposed Neural Network

Figure 2 shows the basic structure of the proposed network. The performance of a neural network depends on its architecture and its parameter settings. There are many parameters governing the architecture of the neural network, including the number of layers, the number of nodes in each layer, the transfer function in each node, learning algorithm parameters and the weights which determine the connectivity between nodes. Inappropriate selection of network patterns and learning rules may cause serious difficulties in network performance and training. The problem is to decide the number of layers, the number of nodes in the layers and the learning algorithm as well. However, the criterion is to select the minimum number of nodes that would not impair the network performance; the number of layers and nodes should be minimized to amplify the performance. In our network, there are 17 inputs to the network, which are the size of the project in KLOC, the 15 effort multipliers, the actual effort of the project and one bias value. These inputs enter the network as weighted inputs.
The effort is calculated using equation (5). The weights are initialized as W_i = 1 for i = 1 to 17, the learning rate α = 0.001 and the bias b = 1. The inputs, as received, are multiplied by the weights and provided to the network. As the perceptron network uses a summation of its inputs but the COCOMO model uses a multiplication, a log function is used to reconcile them, so equation (2) is modified as:

log(Effort) = log(a × (SIZE)^b × Π_{i=1..15} EM_i)    (5)

The output obtained from the above equation is compared using the activation function and the output signal is sent forward. According to the output of the activation function, the weights applied to the inputs are modified. When the output of the activation function is 1, the difference between the actual effort and the calculated effort is checked against a permissible limit. If it is within the permissible limit the output is accepted, else the weights are adjusted. This completes one epoch for the project.

The algorithm for training the above network and for calculating the new set of weights is depicted in the following steps:

Step 1: Initialize the weights, bias and learning rate α.
Step 2: Perform steps 3-8 until the stopping condition is false.
Step 3: Perform steps 4-7 for each training pair.
Step 4: The input layer receives the input signal and sends it to the hidden layer by applying identity activation functions on all the input units, for i = 1 to 17.
Step 5: Each hidden unit j = 1 to 5 sums its weighted input signals to calculate the net input, y_in,j = Σ_{i=1..17} x_i · w_ij. The activation function given by equation (4) is applied over this net input to calculate the output response y_j = f(y_in,j).
Step 6: Calculate the output, i.e. the effort, at the output layer using the same procedure as in step 5, taking all the weights for j = 1 to 5 as 1.
Step 7: Compare the actual effort with the computed effort; if the difference is within the permissible limit the output is accepted, else the weights are updated as follows: w_i(new) = w_i(old) + α × input(i).
Step 8: Check the stopping condition, i.e. if there is no change in the weights then stop the training process, else start again from step 3.

VI. Evaluation Criteria and Results

The experiments are done with the proposed neural network model by taking some of the original projects from the COCOMO dataset, which is publicly available and consists of 63 projects [22]. We have divided the entire dataset into two sets, a training set and a validation set, to get better prediction accuracy. The model is implemented in Matlab.

The evaluation consists in comparing the accuracy of the estimated effort with the actual effort. There are many evaluation criteria for software effort estimation; among them we applied the most frequent one, the Magnitude of Relative Error (MRE), defined as in equation (6):

MRE = |Actual Effort − Estimated Effort| / Actual Effort    (6)

Table 1 shows some of the experimental values that were tested. These values are then compared with the actual effort of the model; the comparison tells us about the efficiency of our network. Each row of the table corresponds to one project's data, which specifies the size of the project, the actual effort of the project, the effort multiplier values and finally the effort calculated by our program. The input values are entered into the program through a GUI (Graphical User Interface). Table 2 shows the actual effort, the estimated effort and the MRE value for the experimented projects. (An illustrative sketch of this training loop and the MRE computation follows below.)
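To make Sections V and VI concrete, here is a minimal Python re-implementation sketch of the described idea: work in log space so the COCOMO product becomes a weighted sum, nudge the weights with a signed variant of the perceptron-style update rule (so the toy example converges), and measure the error with MRE. The paper's own implementation is in Matlab; the coefficients, tolerance and the single made-up project below are placeholders, not the authors' settings.

```python
import numpy as np

def train_cocomo_perceptron(projects, a=2.8, b=1.20, lr=0.001, tol=0.1, max_epochs=500):
    """Sketch only: log(effort) = log(a) + b*log(size) + sum(log(EM_i)) as a weighted sum,
    weights adjusted until the estimate is within a relative tolerance of the actual effort.

    projects: list of (size_kloc, [15 effort multipliers], actual_effort)
    """
    w = np.ones(17)                                    # weights for the 17 inputs
    for _ in range(max_epochs):
        changed = False
        for size, ems, actual in projects:
            x = np.concatenate(([np.log(a), b * np.log(size)], np.log(ems)))
            est = np.exp(np.dot(w, x))                 # estimated effort in person-months
            mre = abs(actual - est) / actual           # Magnitude of Relative Error, eq. (6)
            if mre > tol:                              # outside the permissible limit: adjust
                sign = 1.0 if actual > est else -1.0   # signed variant of w_i += lr * input_i
                w += sign * lr * x
                changed = True
        if not changed:                                # stopping condition: no weight change
            break
    return w

# toy call with one made-up project: 46 KLOC, nominal multipliers (all 1.0), 240 PM actual effort
print(train_cocomo_perceptron([(46.0, [1.0] * 15, 240.0)]))
```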
Figure 3 is the graphical representation of the actual and the calculated effort of 15 projects of COCOMO dataset [22]. Through this graph, it can be observed that the difference between the actual and the calculated effort is quite less which shows that the proposed algorithm is an accurate and precise algorithm.Table 1 Experimental StudiesTable 2 Comparisons of ResultsFig. 3: Actual and Calculated EffortVII. ConclusionA reliable and accurate estimate of software development effort has always been a challenge for both the industrial and academic communities. There are several software effort forecasting models that can be used in forecasting future software development effort. We have constructed a cost estimation model based on artificial neural networks. Our idea consists in the use of a model that maps COCOMO model to a neural network with minimal number of layers and nodes to increase the performance of the network. The neural network that we have used to predict the software development effort is the Perceptron network. We have use d the COCOM0’81 dataset to train and to test the network. It is observed that the obtained accuracy of the network is acceptable.Thus, it is concluded that the use of the artificial neural network algorithm to model the COCOMO estimation algorithm is an efficient way to find the values of the project estimates. It provides us with nearly accurate values.AcknowledgementThe authors would like to thank the anonymous reviewers for their careful reading of this paper and for their helpful comments.References[1] Ch. Satyananda Reddy, KVSN Raju,“AnImproved Fuz zy Approach for COCOMO’s EffortEstimation using Gaussian Membership Function,” Journal of Software ”, Volume 4, No. 5, July 2009. [2] Boehm, B.W., “Software EngineeringEconomics,” Prentice -Hall, Englewood Cliffs, NJ, USA, 1994.[3] M.O. Saliu, M.Ahmed, “Soft Computing basedEffort Prediction Systems –A Survey, in : E.Damiani, L.C. Jain (Eds),” Computational Intelligence in Software Engineering, Springer-Verlag, July 2004, ISBN 3-540-22030-5.[4] Dawson, C.W., “A neur al network approach tosoftware projects effort estimation,” Transaction Information and Communication Technologies, Vol.16, pages 9, 1996.[5] Idri, A. Khoshgoftaar, T.M. Abran, A., “Canneural networks be easily interpreted in software cost estimation?,” Pro ceedings of the IEEE Internation Conference on Fuzzy Systems, FUZZ-IEEE’02, Vol.:2, 1162-1167, 2002.[6] Finnie, G.R. and Wittig, G.E., “AI tools forsoftware development effort estimation,” In proceedings of the IEEE International Conference on Software Engineering: Education and Practice, Washington DC, pp 346-353, 1996.[7] B. Tirimula Rao, B. Sameet, G. Kiran Swathi, K.Vikram Gupta, Ch. Ravi Teja, S. Sumana, “A novel neural network approach for software cost estimation using Functional Link Artificial Neural Network(FLANN)”, International Journal of Computer Science and Network Society, Vol.9 No.6, June 2009.[8] Stephen Marsland , Jonathan Shapiro, and UlrichNehmzow. “A self-organising network that grows when required ”, Journal Neural Networks, Vol. 15 Issue (8-9):1041- 1058, 2002.[9] S.N. Sivanandam, S.N. Deepa, Principles of SoftComputing, Wiley India (2007).[10] Ch. Satyananda Reddy and KVSVN Raju, “ AnOptimal Neural Network Model for Software Effort Estimation”, Int.J. 
of Software Engineering, IJSE Vol.3 No.1 January 2010[11] Jorgerson, M., “Experience with accuracy ofsoftware maintenance task effort prediction models,” IEEE Transactions on Software Engineering, Volume 21 (8), 674–681, 1995.[12]Samson, B., Ellison, D., Dugard, P., “Softwarecost estimation using an Albus perceptron (CMAC),” Journal of Information and Software Technology, Volume 39 (1), 55–60, 1997.[13]Schofield, C., “Non-algorithmic effort estimationtechniques,” Technical Report TR98-01, 1998. [14]Seluca, C., “An investigation into software eff ortestimation using a back propagation neural network,” M.Sc.Thesis, Bournemouth University, UK, 1995.[15]Srinivasan, K., Fisher, D., “Machine learningapproaches to estimating software development effort,” IEEE Transactions on Software Engineering, Volume 21 (2), 126–137, 1995. [16]Wittig, G., Finnie, G., “Estimating softwaredevelopment effort with connectionist models,”Journal of Information and Software Technology, Volume 39 (7), 469–476, 1997.[17]Hughes, R.T., “An evaluation of machine learningtechniques for software effort estimation,”University of Brighton, 1996.[18]Prasad Reddy P.V.G.D, Sudha K.R, Rama Sree Pand Ramesh S.N.S.V.S., “Software Effort Estimation using Radial Basis and Generalized Regression Neural Networks”, Journal of Computing, Volume 2, Issue 5, May 2010, ISSN 2151-9617[19]K. Vinay Kumar, V. Ravi, Mahil Carr, N. RajKiran, “Software development cost estimation using wavelet neural networks”, The journal of Systems and Software 81(2008) 1853-1867. [20]N. Tadayon, “Neural Network Approach forSoftware Cost Estimation”, proceedings of the International Conference on Information Technology: Coding and Computing(ITCC’05), Vol. 2, pp. 815-818, 2005.[21]B. Tirimula Rao, B. Sameet, G. Kiran Swathi, K.Vikram Gupta, Ch. Ravi Teja, S. Sumana, “A novel neural network approach for software cost estimation using Functional Link Artificial Neural Network (FLANN)”, International Journal of Computer Science and Network Society, Vol. 9 No.6, June 2009.[22]Anupama Kaushik received her B.E (Computer Science) from Bharathiyar University and M.Tech (Information Technology) from Tezpur University. She joined Department of Information Technology of Maharaja Surajmal Institute of Technology as an Assistant Professor in 2004. Her research area includes Software Engineering, Object Oriented Software Engineering and Soft Computing.Ashish Chauhan is a student pursuing his B.Tech from Department of Information Technology of Maharaja Surajmal Institute of Technology. This work was a part of their project on Software Cost Estimation. His research area includes Software Engineering and Artificial Neural Networks.Deepak Mittal is a student pursuing his B.Tech from Department of Information Technology of Maharaja Surajmal Institute of Technology. This work was a part of their project on Software Cost Estimation. His research area includes Software Engineering and Artificial Neural Networks.Sachin Gupta is a student pursuing his B.Tech from Department of Information Technology of Maharaja Surajmal Institute of Technology. This work was a part of their project on Software Cost Estimation. His research area includes Software Engineering and Artificial Neural Networks.。