Computer Systems & Applications, 2019, Vol. 28, Issue (9): 147-153


Species Recognition of Protected Area Based on AutoML
LIU Yao1,2, LUO Ze1
1. Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China;
2. University of Chinese Academy of Sciences, Beijing 100049, China
Abstract: With increasing investment in ecological protection, infrared camera technology has been deployed rapidly in nature reserves. Species recognition is central to fully mining the information in camera-trap photos and is a prerequisite for further analysis. With the rise of deep learning, image recognition has been revolutionized: convolutional neural networks, the representative architecture, surpass traditional methods in accuracy almost across the board. However, because the network structure strongly affects final recognition accuracy, practitioners usually choose a structure suited to their own dataset from a handful of classic architectures such as VGG16, VGG19, and ResNet50, and may have to repeat this selection for each new dataset. For species recognition in protected areas, this study therefore proposes an AutoML-based technique that automatically constructs an appropriate network structure for each protected-area dataset, avoiding manual selection of network structures while achieving accuracy comparable to manually selected structures.
Key words: species recognition; automated machine learning; automatic construction of network structures

1 Related Theory

1.1 Bayesian Optimization

(1) Assume the inputs follow a Gaussian model $M$.

(2) From $M$, select the next input $x$ with a high acquisition-function value.

(3) Observe the output $y$ of input $x$. If $y$ meets the requirement, terminate the algorithm; otherwise, feed $\left( {x,y} \right)$ back to update the Gaussian model $M$ and return to step (2).
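The three steps above can be sketched as a minimal Bayesian-optimization loop. The 1-D Gaussian-process model with an RBF kernel, the UCB acquisition, the grid of candidate inputs, and all hyperparameters below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-6):
    """Posterior mean and std of a zero-mean GP (the model M above)."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    Kss = rbf_kernel(x_query, x_query)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_obs
    var = np.diag(Kss - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.maximum(var, 0.0))

def bayesian_optimize(f, bounds=(0.0, 10.0), n_iter=15, beta=2.0):
    """Steps (1)-(3): fit M to the observations, pick the input x with
    the highest acquisition value, observe y = f(x), refine M, repeat."""
    grid = np.linspace(bounds[0], bounds[1], 200)
    xs = list(np.random.uniform(bounds[0], bounds[1], size=2))
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        mu, sigma = gp_posterior(np.array(xs), np.array(ys), grid)
        x_next = grid[np.argmax(mu + beta * sigma)]  # UCB acquisition
        xs.append(x_next)
        ys.append(f(x_next))
    i = int(np.argmax(ys))
    return xs[i], ys[i]
```

Here the fixed evaluation budget `n_iter` plays the role of the stopping test in step (3); a real run would stop as soon as $y$ meets the requirement.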

1.2 Simulated Annealing

(1) Initialization: an initial temperature $T$ (sufficiently large), an initial solution state $S$ (the starting point of the iteration), and the number of iterations $L$ at each value of $T$.

(2) For $k = 1,2, \cdots ,L$, perform steps (3) to (6).

(3) Generate a new solution $S'$.

(4) Compute the increment $\Delta T = C(S') - C(S)$, where $C(S)$ is the evaluation function.

(5) If $\Delta T < 0$, accept $S'$ as the new current solution; otherwise accept $S'$ as the new current solution with probability ${e^{\frac{{ - \Delta T}}{T}}}$.

(6) If the termination condition is met, output the current solution as the optimal solution and stop.

(7) Gradually decrease $T$ toward 0 ($T \to 0$), then return to step (2).
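A minimal sketch of the procedure above, using geometric cooling as the temperature schedule; the cost function, neighbor generator, and all parameter defaults are illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, neighbor, s0, t0=100.0, t_min=1e-3,
                        cooling=0.9, n_per_t=20):
    """Steps (1)-(7): at each temperature T, try L = n_per_t new solutions;
    downhill moves are always accepted, uphill moves with prob exp(-dT/T)."""
    s = s0                                   # step (1): initial state
    t = t0                                   # step (1): initial temperature
    best, best_cost = s0, cost(s0)
    while t > t_min:                         # terminate as T -> 0
        for _ in range(n_per_t):             # step (2): L trials per T
            s_new = neighbor(s)              # step (3): new solution S'
            delta = cost(s_new) - cost(s)    # step (4): increment dT
            if delta < 0 or random.random() < math.exp(-delta / t):
                s = s_new                    # step (5): acceptance rule
                if cost(s) < best_cost:      # keep the best state seen
                    best, best_cost = s, cost(s)
        t *= cooling                         # step (7): decrease T
    return best, best_cost
```

For example, minimizing $(x-2)^2$ with a neighbor that perturbs $x$ by a uniform step converges to a value near $x = 2$.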

2 Proposed Method

Figure 1. The AutoML technique

2.1 Classic Network Structure Components

2.2 Automatically Constructing Network Structures

(1) Take a network structure $G$ from the head of the candidate queue.

(2) Apply the four expansion operations described above to $G$, producing four new network structures, and add them to the candidate queue.
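The queue mechanics of the two steps above can be sketched as follows. The four operation functions are hypothetical placeholders standing in for the paper's four expansion operations, which are not reproduced here, and a network structure is represented simply as a list of component names:

```python
from collections import deque

# Placeholder expansion operations; the paper applies its own four
# operations to a network structure, which are not reproduced here.
def op_a(g): return g + ["extra_conv"]
def op_b(g): return g + ["wider_layer"]
def op_c(g): return g + ["skip_connection"]
def op_d(g): return g + ["extra_pooling"]

OPERATIONS = [op_a, op_b, op_c, op_d]

def expand_candidates(queue):
    """Steps (1)-(2): pop the structure G at the head of the candidate
    queue and enqueue the four new structures produced by expanding it."""
    g = queue.popleft()
    for op in OPERATIONS:
        queue.append(op(g))
```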

2.3 Searching for Network Structures

Figure 2. Automatically constructing network structures

 $\alpha \left( f \right) = \mu \left( f \right) + \beta \sigma \left( f \right)$

$\;\beta$ is a balance factor that trades off exploration against exploitation ($\mu(f)$ is the predicted performance of structure $f$ and $\sigma(f)$ its uncertainty). After computing the $\alpha$ value of each network structure, simulated annealing is used to select the next network structure to be trained.

(1) Initialize the temperature decay rate $r$ of simulated annealing, the temperature parameter $T$ and the minimum temperature threshold ${T_{\rm{low}}}$, the best historical model performance ${c_{\max }}$, the best network structure ${f_{\max }}$, and the priority search queue Q.

(2) Take the network structure $f$ at the head of the queue and apply the four operations above to expand it, yielding four new network structures. For each new structure $f'$, if ${e^{\frac{{\alpha \left( {f'} \right) - {c_{\max }}}}{T}}} > Rand()$, add it to the priority search queue; otherwise discard it. If ${c_{\max }} < \alpha \left( {f'} \right)$, then ${c_{\max }} \leftarrow \alpha \left( {f'} \right),{f_{\max }} \leftarrow f'$. Meanwhile decay the temperature: $T \leftarrow T \times r$.

(3) If the queue is not empty and $T > {T_{\rm{low}}}$, return to step (2); otherwise, return the best network structure ${f_{\max }}$.
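The three-step search above can be sketched as follows. The structure representation, the `alpha` scorer, the expansion operations, and the temperature defaults are all illustrative assumptions supplied by the caller, not the paper's implementation:

```python
import math
import random
from collections import deque

def search(init_structures, alpha, expand_ops,
           t0=1.0, t_low=1e-3, decay=0.95):
    """Steps (1)-(3): expand structures from the head of queue Q and let
    a simulated-annealing test decide which expansions re-enter the queue."""
    q = deque(init_structures)               # step (1): queue Q
    t = t0                                   # step (1): temperature T
    f_max = max(init_structures, key=alpha)  # best structure so far
    c_max = alpha(f_max)                     # best performance so far
    while q and t > t_low:                   # step (3): loop condition
        f = q.popleft()                      # step (2): head of Q
        for op in expand_ops:                # the four expansions
            f_new = op(f)
            a = alpha(f_new)
            # SA acceptance: improvements always enter the queue; worse
            # structures enter with probability exp((a - c_max) / T).
            # The a >= c_max shortcut avoids overflowing exp() at low T.
            if a >= c_max or math.exp((a - c_max) / t) > random.random():
                q.append(f_new)
            if a > c_max:                    # track the best structure
                c_max, f_max = a, f_new
        t *= decay                           # T <- T * r
    return f_max, c_max
```

As a toy example, representing a structure by its depth and scoring it with $\alpha(d) = -(d-5)^2$, the search returns the depth-5 structure with the peak score.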

2.4 Algorithm Flow

Figure 3. Flowchart of the algorithm

3 Experimental Analysis

3.1 Dataset

3.2 Model Training

3.3 Experimental Results

4 Conclusion and Outlook
