Fusion Expansion-dual Feature Extraction Applied to Few-shot Learning
    Abstract:

    The goal of few-shot image classification is to identify a category from only a handful of labeled samples. Two key difficulties are the scarcity of labeled data and the presence of unseen categories (the training and test categories are disjoint). To address these, we propose a new few-shot classification model: the fusion expansion-dual feature extraction model. First, we introduce a fusion expansion (FE) mechanism, which exploits the variation patterns between different samples of the same category in the seen (base) classes to expand the support set, increasing the number of support samples and making the extracted features more robust. Second, we propose a dual feature extraction (DF) mechanism. Abundant base-class data are first used to train two different feature extractors, a local feature extractor and a global feature extractor, which together capture more comprehensive sample features. The local and global features are then compared to highlight the features that contribute most to classification, thereby improving accuracy. Our model achieves good results on the Mini-ImageNet and Tiered-ImageNet datasets.
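    The abstract does not give the FE mechanism's details, but the idea of "using change rules between same-category samples of seen classes to expand the support set" can be sketched as follows. This is a hypothetical illustration, not the paper's actual algorithm: intra-class difference vectors are sampled from base-class features and added to novel support features to hallucinate extra support samples. The function names (`collect_base_deltas`, `expand_support`) are invented for this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def collect_base_deltas(base_feats, base_labels, n_deltas=100):
        """Sample intra-class difference vectors from the base (seen) classes.

        Each delta records how two samples of the same class differ in
        feature space, approximating a plausible within-class variation.
        Assumes every base class has at least two samples.
        """
        deltas = []
        classes = np.unique(base_labels)
        for _ in range(n_deltas):
            c = rng.choice(classes)
            idx = np.flatnonzero(base_labels == c)
            i, j = rng.choice(idx, size=2, replace=False)
            deltas.append(base_feats[j] - base_feats[i])
        return np.stack(deltas)

    def expand_support(support_feats, deltas, k=5):
        """Hallucinate k extra features per support sample by adding
        randomly drawn base-class deltas, enlarging the support set."""
        expanded = [support_feats]
        for _ in range(k):
            picked = deltas[rng.choice(len(deltas), size=len(support_feats))]
            expanded.append(support_feats + picked)
        return np.concatenate(expanded, axis=0)
    ```

    Under this sketch, a 5-way 1-shot support set of 5 features grows to 5 × (k + 1) features before prototypes are computed, which is one plausible way the expansion could make the extracted class representations more robust.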
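    Likewise, the DF mechanism (two extractors whose local and global features are compared) can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the "global" extractor is average pooling over a C×H×W feature map, the "local" extractor treats each spatial position as a descriptor, and the two views are combined by averaging a global cosine similarity with the mean best-match similarity of local descriptors.

    ```python
    import numpy as np

    def global_features(fmap):
        """Hypothetical global extractor: average-pool a (C, H, W) feature
        map into a single C-dimensional embedding."""
        return fmap.mean(axis=(1, 2))

    def local_features(fmap):
        """Hypothetical local extractor: treat each spatial position as one
        C-dimensional local descriptor, giving an (H*W, C) matrix."""
        c, h, w = fmap.shape
        return fmap.reshape(c, h * w).T

    def dual_score(query_map, proto_map):
        """Score a query against a class prototype map by combining the
        global view with the local view, so that evidence agreeing at both
        scales dominates the classification decision."""
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

        g = cos(global_features(query_map), global_features(proto_map))
        ql, pl = local_features(query_map), local_features(proto_map)
        qn = ql / (np.linalg.norm(ql, axis=1, keepdims=True) + 1e-8)
        pn = pl / (np.linalg.norm(pl, axis=1, keepdims=True) + 1e-8)
        # For each query descriptor, keep its best-matching prototype descriptor.
        local = (qn @ pn.T).max(axis=1).mean()
        return 0.5 * g + 0.5 * local
    ```

    A query would then be assigned to the class whose prototype map maximizes `dual_score`; the equal 0.5/0.5 weighting is an arbitrary choice for the sketch, not a value from the paper.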

Get Citation

Yang Zhenyu, Hu Xinlong, Cui Laiping, Wang Yu, Ma Kaiyang. Fusion expansion-dual feature extraction applied to few-shot learning. Computer Systems & Applications, 2022, 31(9): 217–225

History
  • Received: December 1, 2021
  • Revised: December 29, 2021
  • Online: June 28, 2022