Title: A Deep Fully Convolution Neural Network for Semantic Segmentation Based on Adaptive Feature Fusion
Authors: Liu, Anbang; Yang, Yiqin; Sun, Qingyu; Xu, Qingyang
Affiliations: [Liu, Anbang; Yang, Yiqin; Sun, Qingyu; Xu, Qingyang] School of Mechanical, Electrical and Information Engineering, Shandong University Weihai, Weihai
Conference: 5th International Conference on Information Science and Control Engineering, ICISCE 2018
Conference dates: 20–22 July 2018
Source: Proceedings - 2018 5th International Conference on Information Science and Control Engineering, ICISCE 2018
Publication year: 2019
Pages: 16-20
DOI:10.1109/ICISCE.2018.00013
Keywords: adaptive; deep neural network; feature fusion; fully convolutional neural network; semantic segmentation
Abstract: A fully convolutional neural network is a special deep neural network built on convolutional neural networks and is often used for semantic segmentation. This paper proposes an improved fully convolutional neural network that fuses the feature maps of deeper and shallower layers to improve image segmentation performance. In the fusion process, adaptive parameters are introduced so that different layers participate in feature fusion in different proportions. The deep layers of the network mainly extract abstract information about the object, while the shallow layers extract refined object features such as edge information and precise shape. The adaptive parameters speed up training and improve prediction accuracy. In the early stages of training, the shallow-layer feature maps have a larger fusion coefficient, which allows the network to quickly learn the location and shape of objects. As training progresses, the fusion coefficient of the shallow layers is gradually weakened and that of the deep layers is increased, which enhances the network's ability to predict object details. The MIT Scene Parsing Challenge 2016 dataset is used for training. Experiments show that the proposed method speeds up training and improves pixel prediction accuracy. © 2018 IEEE.
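As an illustration of the fusion scheme the abstract describes, the sketch below fuses a shallow and a deep feature map with a coefficient that starts large (favoring shallow location/shape features) and shrinks over training (favoring deep abstract features). Note the paper learns these coefficients adaptively; the linear schedule, function names, and coefficient values here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fusion_coefficient(epoch, total_epochs, start=0.8, end=0.2):
    """Shallow-layer fusion weight for a given epoch.

    Stands in for the paper's adaptive parameter with a simple linear
    schedule: early training weights shallow features (edges, shape)
    heavily; later training shifts weight to deep abstract features.
    The start/end values are illustrative, not from the paper.
    """
    t = epoch / max(total_epochs - 1, 1)
    return start + (end - start) * t

def fuse(shallow_map, deep_map, alpha):
    """Weighted element-wise fusion of two same-sized feature maps."""
    return alpha * shallow_map + (1.0 - alpha) * deep_map

# Toy same-sized "feature maps": shallow is all ones, deep all zeros,
# so the fused value directly exposes the shallow coefficient.
shallow = np.ones((4, 4))
deep = np.zeros((4, 4))

alpha_early = fusion_coefficient(0, 10)   # 0.8: shallow dominates
alpha_late = fusion_coefficient(9, 10)    # 0.2: deep dominates
print(fuse(shallow, deep, alpha_early)[0, 0])  # 0.8
print(fuse(shallow, deep, alpha_late)[0, 0])   # 0.2
```

In the paper the coefficients are trainable parameters updated by backpropagation rather than a fixed schedule, but the fused output has the same weighted-sum form.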
Indexed in: EI; SCOPUS
Document type: Conference paper; Journal article
Link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85062086517&doi=10.1109%2fICISCE.2018.00013&partnerID=40&md5=dac34431bd05708a76a9617c13dd5a47