Academic Talk: Towards Quality Assurance of Deep Learning Systems
2019-12-17

Department of Computer Science and Technology, Nanjing University

Collaborative Innovation Center of Novel Software Technology and Industrialization


Abstract:

The past decade has seen the great potential of applying deep neural network (DNN) based software to safety-critical scenarios, such as autonomous driving. Similar to traditional software, DNNs can exhibit incorrect behaviours caused by hidden defects, leading to severe accidents and losses. Hence, quality and security assurance is very important for deep learning systems, especially those applied in safety- and mission-critical scenarios.

In this talk, I will introduce some of my latest research on testing and security analysis of deep learning models. First, I will introduce DeepHunter (ISSTA'19), a coverage-guided fuzzing framework for testing feedforward neural networks. Second, I will introduce DeepStellar (FSE'19), a quantitative analysis framework for stateful neural networks (e.g., RNNs). Finally, I will introduce DiffChaser (IJCAI'19), a differential testing framework for detecting disagreements between the predictions of different models, frameworks, or platforms.
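The coverage-guided fuzzing idea behind DeepHunter can be sketched in miniature: mutate seed inputs and keep any mutant that exercises previously unseen behaviour in the network. The sketch below is a hypothetical illustration, not DeepHunter's actual design — `toy_model`, its three boolean "neurons" (whose activation pattern stands in for a real coverage criterion such as neuron coverage), and the mutation operator are all assumptions made for the example.

```python
import random

def toy_model(x):
    # Hypothetical stand-in for a DNN: three "neurons" whose boolean
    # activation pattern serves as the coverage signature of an input.
    h1 = x[0] + x[1] > 0
    h2 = x[0] - x[1] > 1
    h3 = x[0] * x[1] < -0.5
    label = int(h1) + int(h2) - int(h3)
    return label, (h1, h2, h3)

def mutate(x):
    # Perturb one randomly chosen input dimension by a small amount.
    i = random.randrange(len(x))
    y = list(x)
    y[i] += random.uniform(-1.0, 1.0)
    return tuple(y)

def coverage_guided_fuzz(seeds, iterations=500):
    """Fuzzing loop: a mutant that triggers a new activation
    pattern is added back to the seed queue for further mutation."""
    covered = set()
    queue = list(seeds)
    for s in queue:
        covered.add(toy_model(s)[1])
    for _ in range(iterations):
        parent = random.choice(queue)
        child = mutate(parent)
        _, pattern = toy_model(child)
        if pattern not in covered:   # new behaviour observed
            covered.add(pattern)
            queue.append(child)      # keep the mutant as a new seed
    return covered, queue
```

Starting from a single seed, the queue grows only when coverage grows, which is the essential feedback loop of coverage-guided fuzzing; real systems such as DeepHunter replace the toy coverage signature with DNN-specific coverage criteria and use domain-aware mutations.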

Speaker Bio:

Xiaofei Xie is a presidential postdoctoral fellow at Nanyang Technological University, Singapore. He received his Ph.D., M.E., and B.E. degrees from Tianjin University. His research mainly focuses on program analysis, loop analysis, traditional software testing, and the security analysis of artificial intelligence. He has published papers on software analysis at top-tier conferences and journals including ISSTA, FSE, TSE, IJCAI, and CCS. In particular, he won two ACM SIGSOFT Distinguished Paper Awards (FSE'16 and ASE'19).

Time: 19:00, December 19 (Thursday)

Venue: Room 233, Computer Science and Technology Building


Copyright: Jiangsu Computer Society