Computer Systems & Applications, 2020, 29(1): 137-143
Network Image Sensitive Text Recognition Based on Spatial Transformation Network and Dense Neural Network
(1. College of Photoelectricity Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China; 2. College of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
Received: June 04, 2019    Revised: June 28, 2019
Abstract: The Internet contains large numbers of images with sensitive text in complex scenes, involving mixed fonts, deformation, stretching, left-right structured glyphs, and skew distortion; extracting features from such images is difficult and recognition rates are low. This study proposes a method based on a spatial transformer network and a dense convolutional network to extract features from, and geometrically rectify, the sensitive text in these images. A deep bidirectional GRU network with a Connectionist Temporal Classification (CTC) layer then labels and predicts the sequential feature information; treating the text as a sequence improves the handling of widely spaced characters and blurred text. Experimental results show that the model achieves recognition accuracies of 87.0% on the Caffe-OCR Chinese synthetic dataset and 90.3% on the CTW dataset, with an average recognition time of 26.3 ms per image.
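The abstract describes the pipeline only at a high level. The following PyTorch sketch is not the authors' implementation; it is a minimal illustration of the components named above (a spatial transformer for rectification, dense convolutional blocks for feature extraction, a two-layer bidirectional GRU, and a CTC-style output head). The 32-pixel input height, layer widths, growth rate, and the vocabulary size of 5000 are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    """Spatial transformer: predicts an affine transform and resamples the input image."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(True),
            nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(10, 6)
        self.fc.weight.data.zero_()                       # initialize to the identity transform
        self.fc.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.fc(self.loc(x).flatten(1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

class DenseBlock(nn.Module):
    """Dense connectivity: each layer sees the concatenation of all previous feature maps."""
    def __init__(self, in_ch, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.BatchNorm2d(in_ch + i * growth), nn.ReLU(True),
                          nn.Conv2d(in_ch + i * growth, growth, 3, padding=1))
            for i in range(n_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class SensitiveTextRecognizer(nn.Module):
    """STN rectification -> dense conv features -> bidirectional GRU -> CTC logits."""
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        self.stn = STN()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),  # 32xW -> 16 x W/2
            DenseBlock(64, 32, 4),                                           # 64 -> 192 channels
            nn.Conv2d(192, 128, 1), nn.MaxPool2d(2),                         # 8 x W/4
            DenseBlock(128, 32, 4),                                          # 128 -> 256 channels
            nn.Conv2d(256, 256, 1), nn.MaxPool2d((2, 1)))                    # 4 x W/4
        self.rnn = nn.GRU(256 * 4, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, num_classes)    # index 0 reserved as the CTC blank

    def forward(self, x):                                 # x: (B, 1, 32, W) grayscale text line
        f = self.features(self.stn(x))                    # (B, 256, 4, W/4)
        b, c, h, w = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one feature vector per width step
        out, _ = self.rnn(seq)
        return self.head(out).log_softmax(-1)             # (B, W/4, num_classes)

model = SensitiveTextRecognizer(num_classes=5000)         # vocabulary size is an assumption
log_probs = model(torch.randn(2, 1, 32, 128))             # -> (2, 32, 5000)
# For training, permute to (T, B, C) before nn.CTCLoss: log_probs.permute(1, 0, 2)

In this kind of pipeline the CTC layer lets the network emit one prediction per horizontal feature step and collapse repeats and blanks, so no per-character segmentation of the text line is required.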
Foundation items: National Natural Science Foundation of China (61301124, 61471075, 61671091); Natural Science Foundation of Chongqing Science and Technology Commission (cstc2016jcyjA0347); Innovation Team Building Program of Chongqing Universities
Citation:
LIN Jin-Zhao, CAI Yuan-Qi, PANG Yu, YANG Peng, ZHANG Yan-Jie. Network Image Sensitive Text Recognition Based on Spatial Transformation Network and Dense Neural Network. Computer Systems & Applications, 2020, 29(1): 137-143