Postgraduate Entrance Exam English Reading Comprehension: Foreign Periodical Original Text (The Economist)
Artificial intelligence (AI) and machine learning have transformed speech and language recognition technology. A new study published in IEEE Transactions on Affective Computing by researchers affiliated with the Japan Advanced Institute of Science and Technology (JAIST) and Osaka University demonstrates human-like, sentiment-sensing machine learning that uses physiological data.
Emotional intelligence, or emotional quotient (EQ), refers to a person's ability to understand and manage emotions in order to build relationships, resolve conflicts, manage stress, and handle similar demands. Practitioners of applied AI machine learning are striving to integrate more human-like traits, such as EQ, into conversational AI chatbots, virtual assistants, and similar systems for customer service, sales, and other functions.
According to Allied Market Research, the worldwide conversational AI market is projected to reach $32.6 billion by 2030, a compound annual growth rate of 20 percent over 2021-2030. Conversational AI is rapidly moving beyond simple text-based recognition built on natural language processing (NLP) and machine learning toward multimodal models.
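As a quick sanity check on the arithmetic behind that projection, a 20 percent CAGR compounding from 2021 to 2030 implies a 2021 base of roughly $6.3 billion. The short Python sketch below works this out; the $32.6 billion figure comes from the article, while the rest is plain compounding rather than Allied Market Research's own methodology.

```python
# Back out the implied 2021 market size from the article's figures.
end_value = 32.6          # projected 2030 market size, in $ billions (from the article)
cagr = 0.20               # compound annual growth rate (from the article)
years = 2030 - 2021       # nine compounding periods

start_value = end_value / (1 + cagr) ** years
print(f"Implied 2021 market size: about ${start_value:.1f} billion")  # ~$6.3B
```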
Multimodal sentiment analysis is a method of identifying someone's psychological state using AI systems based on factors such as facial expression, posture, tone, and speech. It enriches text-based analysis by integrating other modalities, such as data from images and sound, to provide richer insights into the user's emotional state.
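To make the fusion idea concrete, here is a minimal, illustrative Python sketch, not code from any system mentioned in the article: per-utterance feature vectors from text, audio, and image modalities are concatenated into a single vector before a classifier is trained. The feature dimensions and the random stand-in data are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-utterance feature vectors from three modalities.
# In a real system these would come from an NLP encoder, an audio model,
# and a facial-expression model; here they are random stand-ins.
rng = np.random.default_rng(0)
n_samples = 200
text_feats = rng.normal(size=(n_samples, 32))   # e.g., sentence embeddings
audio_feats = rng.normal(size=(n_samples, 16))  # e.g., prosody/tone features
image_feats = rng.normal(size=(n_samples, 24))  # e.g., facial-expression features
labels = rng.integers(0, 2, size=n_samples)     # 0 = negative, 1 = positive

# Early fusion: concatenate modality features into a single vector.
fused = np.concatenate([text_feats, audio_feats, image_feats], axis=1)

clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("Training accuracy:", clf.score(fused, labels))
```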
The frontier of multimodal sentiment analysis is the incorporation of physiological data. In the life sciences, anything that pertains to the body and its systems can be considered physiological.
In this new study, Japanese researchers sought to understand the effects of physiological signals in multimodal sentiment analysis. The researchers analyzed data from over 2,400 exchanges with 26 participants interacting with a conversational AI, according to a recent release by JAIST.
The researchers used a dataset containing data from facial expressions, posture detection with skin potential, voice color sensors, and speech recognition. They discovered that this biological information was more effective than voice and facial recognition.
“Our results suggest that physiological features are effective in the unimodal model and that the fusion of linguistic representations with physiological features provides the best results for estimating self-sentiment labels as annotated by the users themselves,” the researchers wrote.
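A minimal sketch of the comparison the authors describe might look like the following, with random stand-ins for the study's real features and labels (so it will not reproduce the paper's finding): a unimodal model trained on physiological features alone versus a fused model trained on concatenated linguistic and physiological features, with self-annotated sentiment labels as the target.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative stand-ins; the study's actual features and labels differ.
rng = np.random.default_rng(1)
n = 300
linguistic = rng.normal(size=(n, 64))     # e.g., text-embedding features
physiological = rng.normal(size=(n, 8))   # e.g., skin-potential statistics
self_labels = rng.integers(0, 2, size=n)  # self-annotated sentiment labels

# Unimodal model: physiological features only.
uni = cross_val_score(LogisticRegression(max_iter=1000),
                      physiological, self_labels, cv=5).mean()

# Fused model: linguistic representations + physiological features.
fused = np.concatenate([linguistic, physiological], axis=1)
multi = cross_val_score(LogisticRegression(max_iter=1000),
                        fused, self_labels, cv=5).mean()

print(f"physiological only: {uni:.2f} | linguistic + physiological: {multi:.2f}")
```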
This pioneering study suggests that focusing on human physiological signals may be the key to creating AI machine-learning systems with high emotional intelligence. According to the scientists, AI could in the future help monitor mental health by detecting changes in emotional states.