    2025,34(12):1-15, DOI: 10.15888/j.cnki.csa.010033, CSTR: 32024.14.csa.010033
    Abstract:
    Underwater images are affected by factors such as scattering and absorption in water, resulting in severe image quality degradation. Scene depth, as an important parameter in images, plays a key role in underwater image restoration. It can be used as an intermediate parameter in physical model-based restoration and as a feature in deep learning. Firstly, this study introduces an underwater imaging model based on the principle of underwater image restoration. Secondly, it focuses on analyzing the application of physical models and deep learning in scene depth estimation and image restoration. By classifying and summarizing these different methods, the study compares their advantages and disadvantages to reveal the core role of scene depth in degradation modeling and restoration optimization. Thirdly, several algorithms are compared through experiments from both subjective and objective aspects to analyze their advantages and limitations. Finally, an outlook is presented to provide new ideas and directions for the future development of underwater image restoration.
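The underwater imaging model the survey builds on is commonly written as I(x) = J(x)·t(x) + B·(1 − t(x)), with transmission t(x) = exp(−β·d(x)) tying restoration directly to scene depth d(x). A minimal single-pixel sketch of that inversion, where the attenuation coefficient β, background light B, and the transmission clamp t_min are illustrative assumptions rather than values from any surveyed method:

```python
import math

def transmission(depth, beta):
    """Transmission t(x) = exp(-beta * d(x)) for scene depth d(x) in metres."""
    return math.exp(-beta * depth)

def restore_pixel(observed, backscatter, depth, beta, t_min=0.1):
    """Invert the simplified underwater imaging model
    I(x) = J(x) * t(x) + B * (1 - t(x))  =>  J(x) = (I(x) - B) / t(x) + B.
    t is clamped to t_min to avoid amplifying noise at large depths."""
    t = max(transmission(depth, beta), t_min)
    return (observed - backscatter) / t + backscatter
```

At large depths the clamp dominates, which is why depth estimates matter most for distant scene points.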
    2025,34(12):16-25, DOI: 10.15888/j.cnki.csa.010067, CSTR: 32024.14.csa.010067
    Abstract:
    With the widespread deployment of large language models (LLMs) across various generative tasks, their high computational demands impose stringent performance requirements on the underlying hardware. RISC-V, an emerging open-source instruction-set architecture, shows great potential owing to its excellent customizability and extensibility. Nevertheless, when deploying mainstream LLMs, the RISC-V ecosystem still faces challenges such as an incomplete software stack and limited compute capability. This study proposes an inference acceleration method for LLMs on RISC-V heterogeneous platforms. By establishing a heterogeneous runtime environment that integrates the Cambricon MLU370 accelerator, the device driver is ported, essential libraries are compiled, and the PyTorch framework is adapted. Building on this foundation, a lightweight multi-threading optimization scheme is further designed to improve the efficiency of core operators—especially the attention mechanism—on multi-core architectures. Experimental results on the SG2042+ MLU370-S4 platform show that, without relying on any additional optimizations, the proposed method achieves up to 52.3 times end-to-end inference speedup for several mainstream LLMs, thus demonstrating both the feasibility and broad applicability of the approach on RISC-V heterogeneous systems.
    2025,34(12):26-38, DOI: 10.15888/j.cnki.csa.010014, CSTR: 32024.14.csa.010014
    Abstract:
    Multi-agent evolutionary reinforcement learning integrates evolutionary algorithms into multi-agent reinforcement learning, addressing inherent problems such as low-quality reward signals and non-stationarity. However, existing methods often struggle to balance learning and exploration between reinforcement learning and evolutionary algorithms. On one hand, suboptimal strategies in reinforcement learning can negatively affect the population. On the other hand, the low utilization of high-quality strategies within the population limits overall learning efficiency. Moreover, in complex partially observable environments, agents face challenges in obtaining effective observation representations, which reduces decision-making accuracy. To address these challenges, this study proposes an improved multi-agent evolutionary reinforcement learning method based on strategy optimization and representation search (SORS). First, to tackle the balance between learning and exploration, a reward-driven strategy optimization module is designed, utilizing superior strategies to guide population mutation in evolutionary algorithms and gradient optimization in reinforcement learning. Second, to mitigate partial observability in complex environments, a representation search method is introduced, enhancing agents’ observation representations by perturbing representation network populations. Finally, experiments conducted on a StarCraft simulation platform validate the proposed method. The results show that SORS achieves superior performance, surpassing all baseline algorithms in average win rates across different environments.
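The abstract does not spell out SORS's update rule; as a rough illustration of reward-driven population guidance in evolutionary reinforcement learning, one can keep the highest-reward policies as elites and mutate them to refill the population. All names, the elite fraction, and the Gaussian-noise scheme below are illustrative assumptions, not the paper's method:

```python
import random

def elite_guided_mutation(population, fitness, sigma=0.1, elite_frac=0.25, rng=None):
    """Keep the top elite_frac of parameter vectors unchanged and refill the
    rest of the population by mutating randomly chosen elites with Gaussian
    noise, so superior strategies guide the population's search direction."""
    rng = rng or random.Random(0)
    ranked = sorted(range(len(population)), key=lambda i: fitness[i], reverse=True)
    n_elite = max(1, int(len(population) * elite_frac))
    elites = [population[i] for i in ranked[:n_elite]]
    children = [
        [w + rng.gauss(0.0, sigma) for w in rng.choice(elites)]
        for _ in range(len(population) - n_elite)
    ]
    return elites + children
```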
    2025,34(12):39-54, DOI: 10.15888/j.cnki.csa.010007, CSTR: 32024.14.csa.010007
    Abstract:
    To address the problems of diverse target scales and the difficulty in identifying small-scale targets in UAV aerial images, this study proposes a multi-scale target recognition algorithm named MON-YOLOv8n. First, the C2f_OCA module is designed based on the OrthoNets orthogonal channel attention network. By integrating attention mechanisms with orthogonal channel design, this module enhances the performance of convolutional neural networks for multi-scale feature extraction and improves the model’s multi-dimensional understanding of complex data. Second, the multi-scale edge enhancement module (MEEM) and multi-order gated aggregation module (MOGA) are introduced to enhance the edge details of objects and effectively reduce redundant information in the features. Then, the SPDConv and RepViTBlock modules are introduced into the backbone and neck networks, respectively, to reduce the number of model parameters while improving the detection of small-scale targets. Finally, the target detection layer is modified to utilize the NWDLoss function, further enhancing the model’s ability to detect small targets and improving its robustness. Experimental results show that the proposed model achieves mAP50 values of 95.2%, 85.2%, and 84.1% on the HIT-UAV, DroneVehicle, and DOTAv1 datasets, respectively, which are 7.2%, 4.8%, and 5.0% higher than the YOLOv8n baseline model.
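NWDLoss follows the normalized Wasserstein distance idea for tiny boxes: each box (cx, cy, w, h) is modelled as a 2D Gaussian, and similarity decays smoothly with the Gaussians' Wasserstein-2 distance instead of IoU, so small boxes with no overlap still get a useful gradient. A sketch of the commonly cited formulation; the normalizing constant c is dataset-dependent and assumed here, and the exact variant used in MON-YOLOv8n is not given in the abstract:

```python
import math

def nwd(box1, box2, c=12.8):
    """Normalized Wasserstein distance between two boxes (cx, cy, w, h),
    each modelled as a 2D Gaussian N((cx, cy), diag((w/2)^2, (h/2)^2))."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = box1, box2
    w2_dist_sq = (x1 - x2) ** 2 + (y1 - y2) ** 2 \
               + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2
    return math.exp(-math.sqrt(w2_dist_sq) / c)

def nwd_loss(box1, box2, c=12.8):
    """Loss form: 1 - NWD, zero for identical boxes."""
    return 1.0 - nwd(box1, box2, c)
```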
    2025,34(12):55-66, DOI: 10.15888/j.cnki.csa.010027, CSTR: 32024.14.csa.010027
    Abstract:
    Image processing models have been widely applied across various scenarios, making the protection of their intellectual property increasingly important to prevent unauthorized use. However, existing watermarking methods are facing various problems, such as high-frequency artifacts, reduced model efficiency, and insufficient imperceptibility. To address these problems, this study proposes a black-box watermarking method with adaptive camouflage for image processing models. The method generates naturally blended camouflage textures as trigger patterns by extracting image color features and designs a recognition and transformation module to convert trigger images into high-quality watermarked images. It dynamically extracts dominant color features using the HLS histogram filtering and a local clustering algorithm, and enhances texture imperceptibility through Gaussian filtering and feathered masking techniques, ensuring that the watermark introduces no visual artifacts in either the spatial or frequency domains. Experimental results demonstrate that the proposed method preserves model fidelity, achieves a 100% watermark verification rate, and maintains robustness against various watermark removal and attack strategies such as fine-tuning and pruning.
    2025,34(12):67-74, DOI: 10.15888/j.cnki.csa.010022, CSTR: 32024.14.csa.010022
    Abstract:
    Remote sensing image change detection plays a vital role in urban expansion and disaster monitoring. However, existing methods still exhibit limitations in feature extraction capability, resistance to pseudo-change interference, and multi-scale feature fusion. To this end, this study proposes MS-SwinCE, a change detection model that integrates multi-scale Swin Transformer, a contrastive feature enhancement module (CFEM), and a bi-directional feature pyramid network (BiFPN). The model enhances long-range dependency modeling via the local window and shifted mechanism, with CFEM accurately capturing change differences and suppressing noise, and BiFPN efficiently fusing multi-scale semantic information. Experimental results demonstrate that MS-SwinCE outperforms ChangeFormer on the LEVIR-CD dataset, with an improvement of 1.18% in IoU, 0.70% in F1 score, 0.32% in Precision, and 1.06% in Recall. On the WHU-CD dataset, MS-SwinCE achieves an increase of 1.84% in IoU, 1.06% in F1 score, 0.47% in Precision, and 1.66% in Recall compared to BIT. Additionally, while maintaining high accuracy, MS-SwinCE has a parameter count of 31.66M, notably lower than ChangeFormer (41.03M) with similar accuracy. Thus, an effective balance between accuracy and efficiency is achieved. Ablation studies further confirm the effectiveness and synergistic gain of each module.
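The BiFPN used for multi-scale fusion is usually built on fast normalized fusion: each input feature map gets a learned non-negative weight, and the output is the weighted sum divided by the weight total plus a small epsilon. A minimal sketch over flat feature vectors (the weights here stand in for learned parameters):

```python
def bifpn_fuse(features, weights, eps=1e-4):
    """BiFPN fast normalized fusion: out = sum(w_i * f_i) / (eps + sum(w_i)),
    with weights clamped to be non-negative (ReLU), so each input's
    contribution is a normalized, learnable ratio."""
    w = [max(wi, 0.0) for wi in weights]
    total = sum(w) + eps
    n = len(features[0])
    return [sum(w[i] * features[i][k] for i in range(len(features))) / total
            for k in range(n)]
```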
    Available online: December 01, 2025, DOI: 10.15888/j.cnki.csa.010051
    Abstract:
    Existing super-resolution (SR) networks typically capture multi-scale features by stacking multi-branch structures, leading to slow inference and limited modeling of global pixel dependencies. Some studies introduce Transformer-based self-attention to enhance reconstruction quality but at the cost of increased complexity. To address these challenges, this study proposes a novel SR network that combines multi-scale edge enhancement with a lightweight Transformer (ECTL-SR). A lightweight edge-guided convolutional block effectively captures and fuses fine-grained edge features under different receptive fields, while structural re-parameterization reduces redundant computation and memory overhead. A lightweight position-aware circular convolution is embedded into a modified Transformer to boost the network’s ability to model long-range dependencies and efficiently expand the receptive field at low cost. Experiments show that the proposed network achieves a good balance between performance and efficiency, outperforming existing SR methods on benchmark datasets such as Urban100 with superior reconstruction results.
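Structural re-parameterization, as used in ECTL-SR's edge-guided block, folds parallel training-time branches into one inference-time kernel so the multi-branch capacity costs nothing at deployment. A single-channel toy version merging a 3x3 and a 1x1 branch (the actual branch layout of the paper's block is not specified in the abstract; this is only the standard merge idea):

```python
def merge_branches(k3, k1, b3, b1):
    """Collapse a parallel 3x3 kernel and 1x1 kernel (single channel) into
    one 3x3 kernel by embedding the 1x1 weight at the centre and summing
    the biases; the merged conv is mathematically equivalent."""
    merged = [row[:] for row in k3]
    merged[1][1] += k1
    return merged, b3 + b1

def conv2d_3x3(img, k, b):
    """Naive 'same' 3x3 convolution with zero padding (reference check)."""
    h, w = len(img), len(img[0])
    out = [[b for _ in range(w)] for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        out[i][j] += k[di + 1][dj + 1] * img[ii][jj]
    return out
```

Equivalence can be checked by comparing the merged conv against the sum of the two branch outputs on any input.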
    Available online: December 01, 2025, DOI: 10.15888/j.cnki.csa.010053
    Abstract:
    Modern embedded systems are typically equipped with high-speed caches, which enhance hardware performance but complicate system software design, necessitating the prediction of cache behavior for real-time tasks. Predicting cache behavior often involves cache classification, and implementing an exact cache classification may exhibit non-deterministic polynomial (NP) characteristics. Reducing these NP characteristics in an exact cache classification presents a challenge. To address previous shortcomings, this study proposes both the strongly connected component elimination technique and the extended anti-chain technique to further reduce the NP characteristics. Benchmark tests show that the proposed techniques significantly reduce classification time overhead in most cases, with the maximum reduction exceeding 4 h, while a small portion of classification time overhead increases slightly, with the maximum increase not exceeding 3 min. These techniques help in designing more efficient cache behavior prediction tools.
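The strongly connected component elimination technique presupposes computing SCCs of a state-transition graph and collapsing each one to a single node, shrinking the space an exact classification must enumerate. How the paper applies the condensation is not given in the abstract; the SCC computation itself is standard, sketched here with Kosaraju's algorithm:

```python
def sccs(graph):
    """Strongly connected components by Kosaraju's algorithm.
    graph maps node -> list of successors."""
    order, seen = [], set()
    def dfs1(u):                      # first pass: record finishing order
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in graph:
        if u not in seen:
            dfs1(u)
    rev = {}                          # transposed graph
    for u in graph:
        for v in graph[u]:
            rev.setdefault(v, []).append(u)
    comps, assigned = [], set()
    for u in reversed(order):         # second pass on the transpose
        if u in assigned:
            continue
        comp, stack = [], [u]
        assigned.add(u)
        while stack:
            x = stack.pop()
            comp.append(x)
            for y in rev.get(x, []):
                if y not in assigned:
                    assigned.add(y)
                    stack.append(y)
        comps.append(sorted(comp))
    return comps
```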
    Available online: December 01, 2025, DOI: 10.15888/j.cnki.csa.010056
    Abstract:
    With the continuous advancement of shale oil exploration and development, well logging data has become increasingly important in reservoir evaluation. However, due to factors such as logging equipment failures and cost constraints, missing or abnormal well log curves frequently occur, which severely impact the accuracy of geological interpretation and resource development. To address the issues of missing and abnormal well log curves, a deep learning model termed Inception-BiGRU-Transformer (IBT) is proposed by integrating a Transformer encoder to enhance global feature representation, a bidirectional gated recurrent unit (BiGRU) for temporal modeling, and an Inception module for multi-scale feature extraction. This model effectively improves the reconstruction accuracy and stability of well log curves through its combined multi-scale feature extraction and sequential modeling mechanisms. Experiments are conducted on measured data from twelve wells in the Daqing Gulong shale oil region, involving both single-target and multi-target well log curve reconstruction tasks. The results demonstrate that the IBT model outperforms mainstream models in terms of RMSE, MAE, MAPE, and R2, exhibiting superior predictive accuracy and generalization capability. Furthermore, ablation studies confirm the effectiveness of each component in enhancing the model’s predictive performance.
    Available online: December 01, 2025, DOI: 10.15888/j.cnki.csa.010057
    Abstract:
    This study is dedicated to enhancing the application efficiency and accuracy of deep learning in lung sound analysis. In view of the insufficient robustness and limited generalization capabilities of existing deep learning models in lung sound analysis, it proposes a method that integrates convolutional neural networks (CNN), a long short-term memory network (LSTM), and a support vector machine (SVM) to achieve efficient and in-depth analysis of lung sound signals. The method begins with the preprocessing of lung sound signals to extract reconstructed signals and their corresponding Hilbert spectra. Next, a deep learning network model that integrates CNN, LSTM, and SVM is designed and built. Finally, the processed signal data are input into the CNN-LSTM-SVM network to extract and fuse the time-domain and frequency-domain features of lung sound signals. Experimental results show that the method achieves 96.20% recall, 96.56% accuracy, and an F1-score of 0.96. These results confirm the efficiency and reliability of the proposed method, providing a new technological approach for the early diagnosis of lung diseases and potentially enhancing the speed and accuracy of clinical diagnosis significantly.
    Available online: December 01, 2025, DOI: 10.15888/j.cnki.csa.010087
    Abstract:
    Unsupervised domain adaptation (UDA) aims to apply a model trained on the source domain to a target domain with only unlabeled data. Current UDA approaches learn domain-invariant features by aligning the source and target domain feature spaces via statistical difference minimization or adversarial learning. However, these constraints may distort semantic feature structures and cause a loss of class discriminability. To this end, this study proposes a new method called DAMPL. This method utilizes the CLIP model to inject textual descriptive information to deeply mine the semantic content of images, and adopts a prompt learning paradigm for domain characteristics to effectively retain information specific to different domains, thus avoiding information loss. Additionally, the pseudo-labels of the target domain are corrected via a semantic bootstrapping mechanism to reduce inter-domain differences and enhance the generalization ability of the model. Finally, a mutual information maximization loss (IML) is introduced to preserve the feature distinguishability of the target domain. DAMPL achieves optimal performance, with classification accuracy of 83.8%, 79.7%, and 89.8% on the Office-Home, miniDomainNet, and VisDA-2017 datasets, respectively.
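CLIP-style pseudo-labelling of the kind the method builds on assigns each target image the class whose text embedding is most cosine-similar to the image embedding, discarding low-confidence assignments. A minimal sketch; the threshold and the filtering rule are illustrative assumptions, not DAMPL's semantic bootstrapping mechanism:

```python
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def pseudo_label(image_feat, text_feats, threshold=0.5):
    """Assign the class whose text embedding is most similar to the image
    embedding; return None below the confidence threshold so unreliable
    target-domain labels can be filtered out."""
    sims = [cosine(image_feat, t) for t in text_feats]
    best = max(range(len(sims)), key=lambda i: sims[i])
    return best if sims[best] >= threshold else None
```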
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010063
    Abstract:
    Large language models (LLMs), represented by ChatGPT and DeepSeek, are rapidly developing and widely used in various tasks, such as text generation and intelligent assistants. However, these large models also face severe privacy and security risks. Especially in high-security scenarios such as healthcare and finance, threats such as model theft and data privacy leakage are often key factors hindering the application of large models. Existing security solutions for protecting large model inference usually have certain limitations, such as the lack of runtime protection for the inference computation process, or practical challenges caused by the high cost of computation and communication. Confidential computing can build a secure inference environment based on trusted execution environment (TEE) hardware and is a practical and effective security technology for implementing secure inference of large language models. Therefore, this study proposes a secure inference application scheme for large language models based on confidential computing, which ensures the integrity of the inference computing environment, model weight parameters, and model image files through remote attestation, implements encryption protection for large model inference traffic via confidential interconnection based on TEE hardware, and protects the privacy of user prompts in multi-user scenarios by isolating the inference contexts among different users. The proposed scheme provides comprehensive security protection for the entire process and full chain of large language model inference, while verifying the integrity of the execution environment to achieve efficient and secure confidential inference. Furthermore, a prototype system is implemented on a heterogeneous TEE server platform (SEV and CSV), and the system’s security and performance are evaluated. The results show that while achieving the expected security goals, the performance loss introduced by the proposed scheme theoretically does not exceed 1% of the inference overhead of the native AI model, which is negligible in practical applications.
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010071
    Abstract:
    Precipitation nowcasting, a critical spatiotemporal sequence prediction task, has significant applications in meteorological domains such as agriculture and transportation. While radar echo extrapolation based on deep learning is a commonly used nowcasting method, existing methods have limitations in capturing the complex spatiotemporal patterns of radar echoes. The performance of these methods degrades significantly over time, making it difficult to accurately predict the spatiotemporal evolution of precipitation. This study proposes GloCal-Net, a model that integrates global modes and local variations. The model is based on a U-Net architecture with hybrid Mamba-Transformer experts, designed to enhance the ability to capture complex patterns in radar echo sequences by optimizing the feature extraction mechanism. To validate the proposed model, comparative and ablation experiments are conducted on a real radar dataset from Jiujiang. Compared with mainstream deep learning models, in the 2-hour extrapolation task, the proposed model achieves a comparable Heidke skill score and a 4.19% higher critical success index, reaching 0.36 and 0.29 respectively. The learned perceptual image patch similarity decreases by 3.70%, reaching 0.31. The structural similarity increases by 2.07%, reaching 72.37%. These experimental results show that GloCal-Net improves several key performance indicators and simultaneously verifies the effectiveness of each component.
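The critical success index and Heidke skill score reported above are computed from the standard forecast contingency table of hits, misses, false alarms, and correct negatives:

```python
def csi(hits, misses, false_alarms):
    """Critical success index: hits / (hits + misses + false alarms);
    1.0 is a perfect forecast, 0.0 means no hits at all."""
    return hits / (hits + misses + false_alarms)

def hss(hits, misses, false_alarms, correct_negatives):
    """Heidke skill score: improvement of the forecast over random chance,
    via the expected number of correct forecasts E under independence."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    return (a + d - expected) / (n - expected)
```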
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010074
    Abstract:
    Multi-agent path finding (MAPF) aims to plan conflict-free paths for multiple agents to optimize collaborative task performance. This study reviews the current state of MAPF research, including algorithm classification, application scenarios, and future trends, while discussing the challenges in large-scale dynamic environments. First, the study provides a detailed introduction to the definition of MAPF. Then, it categorizes and summarizes path planning algorithms based on search, bio-inspired methods, sampling, and reinforcement learning. Finally, the study analyzes the advantages and disadvantages of each algorithm and their applicable scenarios. This review aims to help researchers understand the current developments and future directions of MAPF technology, and to promote further progress in this field.
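The conflicts MAPF solvers must avoid are usually of two kinds: vertex conflicts (two agents occupying one cell at the same time step) and edge, or swap, conflicts (two agents exchanging cells between consecutive steps). A minimal detector over discretized paths:

```python
def find_conflict(path_a, path_b):
    """Detect the first vertex or edge (swap) conflict between two agents'
    paths, each given as an equal-length list of grid cells per time step."""
    for t in range(len(path_a)):
        if path_a[t] == path_b[t]:
            return ("vertex", t, path_a[t])
        if t > 0 and path_a[t] == path_b[t - 1] and path_b[t] == path_a[t - 1]:
            return ("edge", t, (path_a[t - 1], path_a[t]))
    return None
```

Search-based solvers such as conflict-based search branch on exactly these detected conflicts.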
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010077
    Abstract:
    The agile requirements process model is suitable for scenarios with frequent requirements iterations. This approach emphasizes a user-centered design concept, using concise text and not relying on complex processes and tools. Introducing requirement models into the agile process can effectively address issues such as an insufficient understanding of agile methods. However, in scenarios with frequent requirement iterations, the introduced requirement models often face challenges such as difficulty in maintenance and outdated versions. In agile development with frequent requirements iterations, the complexity of the requirement model makes its manual maintenance resource-intensive. To address this issue, this study proposes an agile requirements process model based on multi-agent systems, MA-ARP. This model uses an automatic processing system built around multi-agent technology, leveraging the reasoning and recognition capabilities of multiple agents to dynamically update the requirement model according to changes in requirements. This approach effectively reduces the costs associated with maintaining the requirement model during the agile process. Case studies and comprehensive evaluations show that this approach can achieve automatic updates and maintenance of the requirement model, and the proposed model meets or exceeds level 2 in most of the selected requirements engineering process evaluation metrics.
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010079
    Abstract:
    In recent years, with the continuous expansion of code scale and the increasing complexity of functional requirements and system architectures in the field of software development, automated tools have become a core support for improving development efficiency. Among these tools, generative large language models (LLMs) have been widely applied in software development. Although LLMs can significantly shorten the development cycle, studies have shown that the code they generate often contains security vulnerabilities. These vulnerabilities may pose risks to the system. Existing research has found that the outputs of LLMs are influenced by differences in prompt languages. However, research and evaluation on the security of code generated using different prompt languages remain insufficient. This study aims to explore the impact of English and Chinese input prompt languages on code security and to conduct a systematic evaluation of this impact. Specifically, English and Chinese texts are used to instruct code generation in LLMs, and the differences in the security of the generated code are analyzed. The experimental results indicate that when English is used as the prompt language, the code generated by LLMs contains relatively fewer vulnerabilities and bugs. In contrast, when Chinese is used as the prompt language, the generated code is more prone to security vulnerabilities.
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010080
    Abstract:
    Road defect detection, as an important means of measuring pavement damage and supporting road maintenance, faces challenges including extreme length-to-width ratios, varying defect sizes, and uneven distributions of easy versus difficult defects. Current convolution-based methods have achieved larger receptive fields to enhance perception, but at the expense of high-frequency components that contain small defects, making them unsuitable for road defect detection tasks. To address this, a road defect detection algorithm, FS-YOLO, based on frequency enhancement and synergy of geometric shape and category, is proposed. First, to balance the receptive field and high-frequency information, a frequency-adaptive dilation strategy is introduced, dynamically adjusting the spatial expansion rate according to local frequency components, and assigning appropriate convolutional kernels to defects of different sizes. Second, given that different types of defects have distinct geometric shapes and positions, an attention-based three-dimensional explicit synergy dynamic detection head is introduced to achieve explicit synergy between spatial geometric information and category information, enabling the model to leverage the inherent potential of defect categories and spatial locations. Finally, the slide loss function is introduced to address the imbalance in the distribution of difficult and easy defects in real-world roads, particularly enhancing the model’s ability to handle difficult-to-distinguish samples. Experimental results show that FS-YOLO significantly outperforms the baseline model in terms of precision and recall on both the self-built dataset and the public road defect detection datasets RDD 2022 and UAV-PDD. It has also been effectively validated in practical applications on expressways and national and provincial roads, significantly improving the accuracy and efficiency of defect detection.
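The slide loss re-weights samples by their IoU relative to a threshold mu (typically the mean IoU of all samples), up-weighting hard examples near the boundary. This sketch follows the commonly cited Slide weighting from the face-detection literature; the exact variant used in FS-YOLO is not given in the abstract:

```python
import math

def slide_weight(iou, mu):
    """Slide weighting: easy samples (IoU well below mu) keep weight 1,
    samples just below mu get the largest boost exp(1 - mu), and samples
    above mu decay smoothly as exp(1 - iou)."""
    if iou <= mu - 0.1:
        return 1.0
    if iou < mu:
        return math.exp(1.0 - mu)
    return math.exp(1.0 - iou)
```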
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010049
    Abstract:
    Deep reinforcement learning (DRL) has shown significant promise in computer cluster scheduling tasks. However, existing DRL-based cluster scheduling methods often lack sufficient generalization, hindering their effectiveness in highly dynamic and frequently changing cluster environments. To address this challenge, this study proposes an improved meta-learning optimization method for deep reinforcement learning cluster scheduling, termed MRLScheduler. The essence of this methodology lies in two improvements to meta-learning: First, a data generation module based on diffusion models generates diverse synthetic data during the initialization phase of meta-learning to expand and optimize multi-task datasets. Second, an experience replay module based on diffusion models leverages historical task data to generate synthetic experiences during cross-task training in meta-learning, enabling the reuse of historical experiences. Finally, the improved meta-learning is integrated into the deep reinforcement learning cluster scheduling algorithm to fine-tune the strategies of agents in highly dynamic and frequently changing cluster environments, thus improving their generalization ability. The experimental results indicate that MRLScheduler outperforms other baseline algorithms, effectively enhancing the generalization ability of deep reinforcement learning-based cluster scheduling algorithms.
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010050
    Abstract:
    In skin lesion image segmentation tasks, U-Net suffers from insufficient multi-scale adaptability, inefficient cross-layer feature fusion, and computational redundancy that leads to edge information loss when processing dermoscopy images. This study proposes a hierarchical pyramid attention network (HPANet) that achieves dual optimization of multi-scale feature capture and cross-layer feature transmission through pyramid attention module and dual-path feature fusion mechanism. The dual-path adaptive fusion module combines CNN and Transformer dual-branch features, enhancing information interaction of complementary features through channel attention and compressed spatial attention, while utilizing bilinear interaction and residual connections to alleviate feature dilution problems. The pyramid attention module integrates hierarchical multi-kernel convolution, depthwise separable downsampling, and patch-wise spatial-channel attention mechanisms to significantly improve multi-scale lesion feature capture capability. Experimental results demonstrate that the proposed architecture outperforms mainstream models on both ISIC 2017 and ISIC 2018 datasets, confirming its dual advantages in lesion boundary preservation and small lesion detection.
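Channel attention of the squeeze-and-excitation kind used inside the fusion module can be sketched as: global-average-pool each channel, pass the pooled vector through a small bottleneck MLP (ReLU then sigmoid), and rescale the channels by the resulting gates. The weights below are illustrative placeholders for learned parameters, not HPANet's actual module:

```python
import math

def se_attention(channels, w1, w2):
    """Squeeze-and-excitation over a list of flattened per-channel maps:
    squeeze (global average pool), excite (bottleneck MLP with ReLU and
    sigmoid), then rescale each channel by its gate."""
    # squeeze: global average pooling per channel
    pooled = [sum(ch) / len(ch) for ch in channels]
    # excitation: two-layer bottleneck; w1 reduces, w2 restores dimension
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # rescale: channel-wise multiplication by the gates
    return [[g * v for v in ch] for g, ch in zip(gates, channels)]
```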
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010064
    Abstract:
    This study proposes a lightweight detection algorithm, RABL-YOLOv8n, to address the problems of target blur, feature weakening, and small-scale target omission in pedestrian detection under low-illumination nighttime scenes. First, a lightweight RGCSPELAN module is designed, which significantly enhances the ability to capture small targets by optimizing the feature extraction process, while effectively reducing unnecessary computational and storage overhead. Second, a fine-grained classification attention mechanism (AFGC) is introduced in the 10th layer of the backbone network, which utilizes a multi-branch local perception strategy to enhance the recognizability of fine-grained features such as pedestrian clothing texture. Then, a bidirectional feature pyramid network (BiFPN) structure is adopted in the feature fusion layer, combined with an adaptive feature weighting strategy to further enhance the interaction capability of multi-scale features. Finally, the LSCD detection head is used to replace the original detection head. By decoupling the localization and classification tasks and introducing a lightweight context-aware module, the accuracy of small-object detection is significantly improved. The experimental results show that on the self-built NightPerson dataset, compared with the baseline YOLOv8n model, mAP@50 increases by 0.3%, accuracy decreases by only 0.013, recall increases by 0.009, while parameter count and floating-point operations are reduced by 58% and 42%, respectively. Compared with YOLOv5n, YOLOv6n, YOLOv10n and other models, the proposed algorithm achieves the best balance between detection accuracy and model lightweighting.
    Available online: November 26, 2025, DOI: 10.15888/j.cnki.csa.010068
    Abstract:
    Current fault identification methods based on deep learning generally face challenges such as high data dependency, high computational cost and time consumption, and limited model generalization ability. To address these issues, this study proposes a lightweight and high-precision fault identification model that integrates the MobileNetV3, selective kernel network (SKNet), and stacked long short-term memory network (Stacked LSTM). First, the input data is preprocessed, and the processed data is converted into a format suitable for convolutional layer input. In the feature extraction stage, an improved MobileNetV3 backbone network is employed for deep feature mining. On the basis of retaining the efficiency of depthwise separable convolution, an inverted residual module strategically embeds a dual attention mechanism combining squeeze-and-excitation (SE) and selective kernel (SK), effectively enhancing channel information interaction and multi-scale feature adaptive selection, thus significantly improving feature representation and reducing computational complexity. Subsequently, the stacked LSTM captures long-term temporal dependencies in the vibration signals. Finally, feature compression and classification decision-making are achieved through the fully connected layer, thus constructing an end-to-end recognition system. Experimental results show that the recognition accuracy of the proposed model reaches 99.47%, demonstrating significant advantages in recognition accuracy and model generalization ability compared with traditional gearbox fault identification techniques.
    Available online: November 17, 2025, DOI: 10.15888/j.cnki.csa.010029
    Abstract:
    As one of the most common malignant tumors with high mortality rates among women worldwide, breast cancer relies heavily on medical imaging for its diagnosis and treatment. Lesion segmentation plays a crucial role in the precise identification of pathological regions, diagnostic assistance, and treatment planning. Recent advances in deep learning have brought significant progress in automatic segmentation of breast cancer lesions, laying a foundation for further deep learning-based research in this field. Recent research achievements are systematically reviewed, with a focus on the application of deep learning technology to lesion segmentation under different medical imaging modalities, aiming to provide a reference for the advancement of research in breast cancer lesion segmentation. The relevant datasets and common evaluation metrics for image segmentation are briefly introduced. Deep learning-based image segmentation methods for breast cancer are then systematically reviewed, and the algorithms applied under different imaging modalities are summarized. Finally, current challenges for this technology are summarized, and future directions are discussed based on limitations in the existing research.
    Available online: November 17, 2025, DOI: 10.15888/j.cnki.csa.010041
    Abstract:
    Multi-agent reinforcement learning (MARL) is a crucial part of multi-agent system research, demonstrating remarkable effectiveness in complex collaborative tasks. However, in scenarios requiring long-term decision-making, multi-agent systems often underperform due to the difficulty in estimating long-term returns and accurately modeling environmental uncertainties. To this end, this study proposes a multi-agent memory-reinforcement learning model based on quantile regression. The model not only selectively utilizes historical decision-making experience to assist long-term decision-making but also employs quantile functions to model the return distribution, thereby effectively capturing return uncertainties. The model comprises three components, including a memory indexing module, an implicit quantile decision network, and a value distribution decomposition module. Specifically, the memory indexing module generates intrinsic reward by adopting historical decision-making experience to enhance the agents’ full utilization of existing experience. The implicit quantile decision network models reward distribution via quantile regression, providing powerful support for long-term decision-making. The value distribution decomposition module decomposes overall return distributions into the distribution of an individual agent to support single-agent strategy learning. Extensive experiments conducted in StarCraft II environments demonstrate that the proposed method enhances the performance of agents in long-term decision-making tasks, with fast convergence rates.
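As background for the quantile-regression component, the following sketch (not the paper's implicit quantile network) shows the pinball loss, whose minimizer is the tau-quantile of a return sample rather than its outlier-sensitive mean:

```python
import numpy as np

def pinball_loss(prediction, targets, tau):
    """Quantile (pinball) loss: under-prediction is weighted by tau and
    over-prediction by (1 - tau), so the minimizer is the tau-quantile."""
    diff = targets - prediction
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# A heavy-tailed return sample: the mean (22.0) is dragged up by the
# outlier, but the tau = 0.5 pinball loss is minimized near the median.
returns = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
grid = np.linspace(0.0, 100.0, 10001)
best = grid[np.argmin([pinball_loss(g, returns, 0.5) for g in grid])]
```

Sweeping tau over (0, 1) recovers the whole return distribution, which is the idea an implicit quantile network builds on to capture return uncertainty.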
    Available online: November 17, 2025, DOI: 10.15888/j.cnki.csa.010042
    Abstract:
    Currently, satellite remote sensing images feature a large size, and the targets to be detected are mostly small and unevenly distributed, with many targets gathering together. There are also significant scale differences among targets and the background is rather complex. All these factors pose great challenges to land utilization and environmental disaster detection. Therefore, this study proposes a remote sensing image detection method based on improved YOLO11. Firstly, this study introduces an attention mechanism into the C3k2 module of YOLO11 and designs the C3k2_DAB module. This enhances the detection performance under the influence of complex backgrounds while controlling model complexity. Secondly, a PKI module is added to the neck network to boost adaptive feature extraction of local and global context information. Finally, a new detection head PConv is introduced at the detection end to extract spatial features more swiftly under the prerequisite of reducing redundant computations and memory access. Experimental results demonstrate that the improved YOLO11 network model yields excellent performance in remote sensing image target detection. Compared to the original YOLO11 model, the proposed model’s mAP@0.5 increases by 2.4% and mAP@0.5:0.95 improves by 2.1%. Additionally, this model outperforms other mainstream target detection models, thus providing new insights for the application of remote sensing target detection algorithms.
    Available online: November 17, 2025, DOI: 10.15888/j.cnki.csa.010043
    Abstract:
    Time series forecasting finds widespread applications in such fields as weather forecasting, power load forecasting, and financial management. In recent years, deep learning has made remarkable progress in these tasks. However, existing models still have limitations in struggling with non-stationarity and heterogeneous pattern modeling, which is mainly represented by homogenized modeling of trends and seasonal components, and modal aliasing during decomposition. To this end, this study proposes a dual-domain time-frequency cooperative decomposition network (DTFNet), which designs a heterogeneous architecture with parallel time and frequency domains. In the time domain, an MLP network with strong noise resistance is employed to model the long-term evolution characteristics of trends, while in the frequency domain, fast Fourier transform is adopted to extract periodic seasonal components, with multi-scale convolution operations employed to capture spatial correlations between time-frequency characteristics. Meanwhile, this study introduces a decomposition method based on discrete wavelet transformation (DWT) to replace conventional moving average decomposition, effectively mitigating boundary effects and modal aliasing. Experiments on six public datasets demonstrate that DTFNet outperforms the current mainstream models in both accuracy and robustness. Ablation experiments show the notable effectiveness of the proposed DWT-based decomposition module and dual-domain time-frequency modeling architecture. Featuring sound generalization ability, DTFNet is applicable to multiple time series forecasting tasks, offering powerful support for real-world applications such as power load forecasting and weather forecasting.
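DTFNet itself is not reproduced here; as a toy illustration of frequency-domain seasonal extraction, the sketch below keeps only the dominant non-DC harmonic of a trend-plus-seasonality series (the synthetic series and the single-harmonic choice are illustrative assumptions):

```python
import numpy as np

def fft_seasonal(series, k=1):
    """Estimate the seasonal part of a series by keeping only the k
    strongest non-DC harmonics of its real FFT -- a crude frequency-domain
    decomposition, not DTFNet's seasonal branch."""
    spec = np.fft.rfft(series)
    mag = np.abs(spec)
    mag[0] = 0.0                              # ignore the mean (DC) term
    keep = np.argsort(mag)[-k:]               # indices of dominant harmonics
    filtered = np.zeros_like(spec)
    filtered[keep] = spec[keep]
    return np.fft.irfft(filtered, n=len(series))

t = np.arange(256)
trend = 0.01 * t                              # slow drift
season = np.sin(2 * np.pi * t / 16)           # period-16 seasonality
est = fft_seasonal(trend + season, k=1)
```

The trend still leaks a little energy into the kept harmonic, which is one motivation for replacing naive decompositions with the wavelet-based scheme the abstract describes.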
    Available online: November 17, 2025, DOI: 10.15888/j.cnki.csa.010028
    Abstract:
    Small object detection in remote and complex scenes faces persistent challenges, including low detection accuracy and poor robustness, due to the objects’ small size, irregular shape, weak texture, and high susceptibility to background interference. To address these issues, this study proposes an enhanced detection algorithm named remote-enhanced fusion YOLO (ReF-YOLO), which systematically optimizes the YOLO11 framework from three aspects: feature extraction, feature fusion, and detection head design. Specifically, a module called C3k2DCASC is introduced, integrating channel attention and spatial modeling, to enhance the backbone network’s representational capacity for irregular objects. The L-Fuse structure, combined with the same-scale features from the backbone and the efficient downsampling module SCDown, is introduced to improve semantic-detail alignment. Additionally, a high-resolution P2 detection branch is added to strengthen the perception and localization capabilities of the algorithm for extremely small objects. Experiments on VisDrone2019, a representative small object detection dataset, demonstrate that the proposed method improves mAP@0.5 by 4.9% over YOLO11n, along with enhanced accuracy and stability across various small object detection tasks. These results validate the utility and generalization capability of ReF-YOLO in remote and complex scenes.
    Available online: November 17, 2025, DOI: 10.15888/j.cnki.csa.010060
    Abstract:
    As an important therapeutic resource, traditional Chinese medicine (TCM) has undergone thousands of years of clinical practice and application. To promote the modernization of TCM and explore its application potential in new indications, this study draws on research experience from drug repurposing in Western medicine and combines emerging network medicine theories to propose two random walk-based models for predicting potential therapeutic associations between TCM and symptoms: M-RW and GO-DREAMwalk. The two models incorporate path-based and functional information between TCM and symptoms to guide the random walk process. The resulting node sequences are input into a heterogeneous Skip-gram model to learn the embedded vector representations of nodes. Subsequently, an XGBoost classifier is trained by adopting TCM-symptom association labels and the learned embedded vectors. Finally, the models are tested and evaluated by employing clinical data on liver cirrhosis. In the clinically effective prediction task, the top-ranking prediction precision of the two models reaches 0.0798 and 0.0684 respectively, improvements of 145% and 110% over the mechanism-based Proximity, 40% and 20% over the data-driven method node2vec, and 53% and 31% over the data-driven method edge2vec respectively. Furthermore, applying the Rank Aggregation method to integrate the prediction results of both models leads to precision improvements of 75% and 105%, further enhancing the predictive ability of the models. The prediction results on real-world clinical data of the two models demonstrate sound prediction performance, highlighting their potential to promote the effective application of TCM in novel indications.
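As an illustration of the first stage of such pipelines, the sketch below generates uniform random walks over a tiny hypothetical TCM-symptom graph; the node sequences are what a Skip-gram model would consume as "sentences". The node names and the uniform transition rule are invented for illustration; the paper's M-RW and GO-DREAMwalk bias the walk with path-based and functional information:

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=42):
    """Uniform random walks over an adjacency dict {node: set(neighbors)}.
    Returns a list of node sequences for downstream embedding."""
    rng = random.Random(seed)
    walks = []
    for start in sorted(adj):
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(sorted(nbrs)))
            walks.append(walk)
    return walks

# Hypothetical tiny TCM-symptom bipartite graph (names are placeholders).
graph = {
    "herbA": {"fatigue", "edema"},
    "herbB": {"edema"},
    "fatigue": {"herbA"},
    "edema": {"herbA", "herbB"},
}
walks = random_walks(graph)
```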
    Available online: November 11, 2025, DOI: 10.15888/j.cnki.csa.010061
    Abstract:
    Skin cancer is a common and serious type of cancer, with melanoma having the highest fatality rate. Early detection and treatment can significantly improve the survival rate of skin cancer patients. Dermoscopic, macroscopic, and histopathological images all play essential roles in diagnosis. The application of artificial intelligence technology can effectively enhance the efficiency of classifying these three types of images and help reduce diagnostic costs. Deep learning, with its feature extraction capabilities, is more suitable for the classification tasks of detailed skin cancer images. This study reviews the relevant research on the classification tasks of the three commonly used images in skin cancer diagnosis, analyzes the technical focuses of the three types of images due to their different image characteristics, and conducts targeted analysis of the difficulties faced in clinical application. Finally, future developments and challenges are discussed to promote the broader application of artificial intelligence in skin cancer diagnosis.
    Available online: November 11, 2025, DOI: 10.15888/j.cnki.csa.010054
    Abstract:
    Large language models (LLMs) represented by ChatGPT are among the most prominent research topics in artificial intelligence (AI) today. They are considered a critical technological means of driving revolutionary transformation across traditional industries, providing substantial momentum for industrial innovation and upgrading. The healthcare field, a key domain for the long-term exploration and application of AI technologies, currently faces an accelerating aging population, an insufficient supply of medical resources, and tense physician-patient relationships. Against this background, AI is regarded as the most promising means of resolving these conflicts and problems, and the LLMs represented by ChatGPT in particular offer a glimpse of hope. This study first briefly reviews the development of natural language processing (NLP) technologies, followed by a systematic introduction to the historical background and technical evolution of the GPT-series LLMs. Combining the practical demands and current status of the healthcare industry, it then discusses, by category, application scenarios and cases of LLMs represented by ChatGPT in this field. Finally, this study analyzes in depth the inherent limitations of LLMs and the challenges encountered during large-scale deployment, implementation, and utilization, and provides targeted solutions and ideas.
    Available online: November 04, 2025, DOI: 10.15888/j.cnki.csa.010052
    Abstract:
    TIG welding suffers from interference caused by complex working conditions such as strong arc light, smoke and dust, and extreme thermal radiation, and visual feature extraction of the molten pool is further disturbed by the unstable reflection characteristics of the molten pool region caused by the dynamic flow of liquid metal. In view of this, this study proposes an improved molten pool measurement method, comprising a lightweight molten pool segmentation network based on an attention mechanism and multi-scale feature fusion, and an image processing pipeline of closing operations, connected-region labeling, and minimum bounding rectangles applied to the segmentation results. The results show that the improved network performs better on the self-built molten pool segmentation dataset: the mean intersection over union (MIoU) reaches 95.44%, the mean pixel accuracy (mPA) is 98.27%, and the inference time for a single frame is only 11.30 ms. In addition, the length, width, and area of the segmented molten pool are effectively extracted.
    Available online: November 04, 2025, DOI: 10.15888/j.cnki.csa.010045
    Abstract:
    Existing multimodal-based image anomaly detection methods suffer from several limitations: anomaly smoothing during anomaly region extraction, insufficient fine-grained perception, and low discrimination efficiency in defect detection, leading to degraded overall performance. To address these issues, this study proposes a multimodal image anomaly detection model with an asymmetric teacher-student network (MATS), comprising three key components: a cross-modal anomaly amplifier (CAA), a multi-dilated local attention (MDLA) module, and a FastKAN feed-forward network. First, the CAA amplifies anomalous regions while reducing noise by expanding/compressing auxiliary features and fusing them with target features, thus alleviating anomaly smoothing in subsequent detection. Subsequently, the MDLA module enhances fine-grained perception of anomalies through multi-dilation-rate convolutions combined with local attention for multi-scale feature extraction, while integrating normalizing flow (NF) to generate the conditional probability distribution of normal samples. The FastKAN module enables efficient anomaly discrimination via lightweight feature processing, producing feature maps consistent with the teacher network's outputs for pixel-wise distance calculation to evaluate anomaly scores. During testing, regions with significant discrepancies between teacher and student network outputs are identified as anomalies. Experimental results on public industrial image datasets MVTec AD and MVTec 3D-AD demonstrate that the proposed method achieves state-of-the-art performance in multimodal anomaly detection and localization.
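The final scoring step, pixel-wise distance between teacher and student feature maps, can be sketched independently of the MATS architecture (the shapes below are arbitrary):

```python
import numpy as np

def anomaly_map(teacher_feats, student_feats):
    """Pixel-wise squared L2 distance between (C, H, W) teacher and
    student feature maps -> per-pixel anomaly score of shape (H, W)."""
    return ((teacher_feats - student_feats) ** 2).sum(axis=0)

t = np.zeros((8, 4, 4))
s = np.zeros((8, 4, 4))
s[:, 2, 3] = 1.0              # the student disagrees at one location
scores = anomaly_map(t, s)
```

Pixels where the student fails to imitate the teacher (here, position (2, 3)) receive high scores and are flagged as anomalous.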
    Available online: October 29, 2025, DOI: 10.15888/j.cnki.csa.010048
    Abstract:
    To address the complex spatiotemporal characteristics and temporal domain fluctuation challenges in flight trajectory prediction, this study proposes a method integrating spatiotemporal dual extraction and frequency-domain enhancement. The proposed method combines the temporal convolutional network (TCN) with the iTransformer model to capture local temporal features and global variable interactions in trajectory sequences. This enables dual extraction of data features at different levels and granularities, effectively uncovering potential spatiotemporal correlations. The frequency enhanced channel attention mechanism (FECAM) is introduced, which converts trajectory features into the frequency domain using the discrete cosine transform and strengthens the frequency-domain information with channel attention, reducing the impact of temporal domain fluctuations. Experiments on a 3D flight trajectory dataset show that during climb, cruise, and descent phases, the proposed method achieves mean absolute errors of 1.15, 0.15, and 0.82, respectively, demonstrating significant advantages in prediction accuracy and stability over existing methods.
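The FECAM idea, scoring channels in the frequency domain, can be sketched with a hand-rolled orthonormal DCT-II. This toy uses spectral energy and a softmax as the attention score, an illustrative simplification rather than the paper's exact formulation:

```python
import numpy as np

def dct2(x):
    """Orthonormal DCT-II of a 1-D signal (no scipy dependency)."""
    n = len(x)
    k = np.arange(n)[:, None]
    coeffs = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n)) @ x
    coeffs[0] *= np.sqrt(1 / n)
    coeffs[1:] *= np.sqrt(2 / n)
    return coeffs

def frequency_channel_weights(channels):
    """Score each channel by its DCT energy, then softmax the scores into
    attention weights -- the gist of frequency-domain channel attention."""
    energies = np.array([np.sum(dct2(c) ** 2) for c in channels])
    e = np.exp(energies - energies.max())
    return e / e.sum()

chans = np.stack([np.sin(np.linspace(0, 8 * np.pi, 64)),   # informative
                  np.zeros(64)])                           # empty channel
w = frequency_channel_weights(chans)
```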
    Available online: October 29, 2025, DOI: 10.15888/j.cnki.csa.010055
    Abstract:
    As an important formal tool for describing the time-constrained behavior of real-time systems, timed automata are widely employed in fields such as embedded systems and communication protocols. Manually building real-time system models in the traditional way is time-consuming and error-prone, so automatic model inference has become a research hotspot. This study focuses on active learning algorithms for timed automata, organizes them according to data storage structure and equivalence query method, summarizes the latest research status in this field together with the core ideas and technical frameworks of the algorithms, and analyzes the challenges faced by current research. By comparing the advantages and limitations of the various methods, this study aims to provide researchers with a clear reference framework, propose possible directions for future research, and promote the development of the theory and practice of automated timed automata (TA) modeling.
    Available online: October 29, 2025, DOI: 10.15888/j.cnki.csa.010044
    Abstract:
    Information compression and semantic coherence in long texts are persistent challenges in summary generation models. To address this issue, this study proposes a summary generation model integrating content-guided and multi-scale attention. The model adopts a dual-branch architecture to jointly model multi-granularity semantics and utilizes a content-guided mechanism to focus on key information relevant to the summary. Based on the conventional BERT-Transformer framework, a dual-branch structure is introduced to enhance semantic representation, and a cross-branch fusion mechanism (MSAA-SAM) is designed to achieve semantic alignment and unified representation. In addition, the pointer-generator network is improved by incorporating a global sentence vector guidance mechanism to enhance generation control, thereby improving key information extraction and reducing redundancy in long-text summarization. Experimental results on the NLPCC 2017 and LCSTS datasets demonstrate that the proposed model outperforms mainstream baseline models in generative summarization tasks, verifying its comprehensive advantages in semantic modeling, generation quality, and control capability.
    Available online: October 29, 2025, DOI: 10.15888/j.cnki.csa.010040
    Abstract:
    Existing generative adversarial network (GAN) compression methods focus more on optimizing network architecture and the spatial domain, while neglecting the impact of spectral-domain optimization on distillation effectiveness and model performance. This limitation results in discrepancies between lightweight models and teacher models in generating high-frequency image details. In addition, conventional feature extraction methods in image translation often cause detail loss. To address these issues, this study proposes a spectral knowledge distillation scheme with integrated feature enhancement (FESD-CycleGAN). In FESD-CycleGAN, by shifting certain feature channels in the feature map, the receptive field is expanded and feature diversity is enhanced, thus improving both the details and the overall quality of generated images. Moreover, since spectral-domain knowledge distillation enables the generator to capture high-frequency details of images, knowledge distillation in both the spatial and spectral domains is integrated on top of feature enhancement in the feature map. This approach enhances the model’s ability to preserve fine details in generated images. Experimental results show that on the horse2zebra, summer2winter, and edges2shoes datasets, FESD-CycleGAN reduces the FID by 2.19, 0.68, and 0.76 compared to the baseline DCD model, achieving scores of 54.98, 73.41, and 27.45, respectively. The generative performance of lightweight models is effectively improved by FESD-CycleGAN.
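A minimal stand-in for a spectral-domain distillation term (not FESD-CycleGAN's actual loss) compares the FFT amplitude spectra of student and teacher outputs; blurring, which removes high-frequency detail, makes the spectra diverge:

```python
import numpy as np

def spectral_distill_loss(student_img, teacher_img):
    """L1 distance between 2-D FFT amplitude spectra -- a simple
    frequency-domain distillation term for illustration only."""
    fs = np.abs(np.fft.fft2(student_img))
    ft = np.abs(np.fft.fft2(teacher_img))
    return np.mean(np.abs(fs - ft))

rng = np.random.default_rng(1)
img = rng.standard_normal((16, 16))
# A vertical 3-tap box blur as a stand-in for a student that loses detail.
blurred = (img + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)) / 3
```

Penalizing this term pushes a lightweight generator to match the teacher's high-frequency content, which is the motivation the abstract gives for distilling in both domains.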
    Available online: October 21, 2025, DOI: 10.15888/j.cnki.csa.010031
    Abstract:
    Predictive process monitoring (PPM) techniques utilize existing event logs to predict certain key metrics in running business processes. In terms of feature extraction, current PPM methods presuppose that cases are solely influenced by their own attributes or are encoded exclusively by extracting resource-level inter-case behavioral attributes; they typically overlook inter-case behavioral information from the activity perspective. This study proposes a new method to capture the inter-activity behavior of cases (IABC), which involves a feature construction framework covering three dimensions: time window, activity granularity, and behavior state. It constructs a total of 36 types of inter-activity behavioral features. Concurrently, this study proposes two novel algorithms: an influence distribution algorithm for mining positive/negative influence propagation among activities, and a batch behavior detection algorithm for identifying potential batch operations. The effectiveness of the IABC method is evaluated on three publicly available event logs. The results demonstrate that the temporal prediction model integrating the IABC method outperforms both the baseline model, which does not use the method, and the model that employs resource-level inter-case features.
    Available online: October 11, 2025, DOI: 10.15888/j.cnki.csa.010037
    Abstract:
    RISC-V is rapidly evolving, and the porting and development of its software ecosystem are advancing. Due to the complexity of the operating system itself, performance evaluations are often limited to the module level, making systematic analysis at the function level difficult; moreover, software adaptation for RISC-V often borrows optimization strategies from mature architectures such as ARM. This study proposes a method for fine-grained kernel performance evaluation and defect detection based on cross-architecture comparison. The method shifts the evaluation focus from the module level to the function level through cross-architecture context matching and architecture-specific anomaly detection. Applied to the Linux 5.10 kernel to compare the performance of RISC-V and ARM, it revealed limitations of the RISC-V port. Experiments using a targeted test suite identified 9 performance issues from 25 anomalous contexts on RISC-V, with 80% classified as high priority. The results show that the proposed method achieves an evaluation accuracy of 80%, demonstrating its effectiveness and accuracy in identifying performance issues during architecture adaptation.
    2016,25(8):1-7, DOI: 10.15888/j.cnki.csa.005283
    Abstract:
    Since 2006, deep neural networks have achieved great success in big data processing and artificial intelligence, for example in image/video recognition and autonomous driving, and unsupervised learning methods, which first succeeded in pre-training deep neural networks, play an important role in deep learning. This paper therefore gives a brief introduction to and analysis of the unsupervised learning methods used in deep learning, mainly of two types: auto-encoders based on deterministic theory, and contrastive divergence for restricted Boltzmann machines based on probability theory. The applications of the two methods in deep learning are then introduced. Finally, a brief summary and outlook on the challenges faced by unsupervised learning methods in deep neural networks is given.
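As a concrete instance of the deterministic branch (auto-encoders), the following tiny linear auto-encoder trained by gradient descent on reconstruction error recovers a 1-D subspace; the deterministic toy initialization and learning rate are illustrative choices, not from the surveyed literature:

```python
import numpy as np

def train_linear_autoencoder(data, hidden_dim=1, lr=0.1, epochs=500):
    """Tiny linear auto-encoder trained by gradient descent on squared
    reconstruction error -- the basic unsupervised objective."""
    n, d = data.shape
    w_enc = 0.1 * np.ones((d, hidden_dim))
    w_dec = 0.1 * np.ones((hidden_dim, d))
    for _ in range(epochs):
        h = data @ w_enc                     # encode
        err = h @ w_dec - data               # reconstruction error
        grad_dec = h.T @ err / n
        grad_enc = data.T @ (err @ w_dec.T) / n
        w_dec -= lr * grad_dec
        w_enc -= lr * grad_enc
    return w_enc, w_dec

# Points on a 1-D line in 3-D are reconstructed by a single hidden unit.
t = np.linspace(-1.0, 1.0, 50)[:, None]
X = t @ np.array([[1.0, 2.0, -1.0]])
w_enc, w_dec = train_linear_autoencoder(X)
mse = np.mean((X @ w_enc @ w_dec - X) ** 2)
```

The learned bottleneck is exactly the kind of representation that pre-training stacks layer by layer before supervised fine-tuning.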
    2008,17(5):122-126
    Abstract:
    With the rapid development of the Internet, network resources are becoming increasingly abundant, and how to extract information from the Web has become crucial; in particular, Deep Web information retrieval, which accounts for about 80% of online resources, is a difficult problem deserving close attention. To better support research on Deep Web crawler technology, this paper gives a comprehensive and detailed introduction to Deep Web crawlers. It first states the definition and research goals of Deep Web crawlers, then surveys and analyzes recent domestic and international progress on Deep Web crawlers, and on this basis looks ahead to future research trends, laying a foundation for further work.
    2022,31(5):1-20, DOI: 10.15888/j.cnki.csa.008463
    Abstract:
    Although the deep learning method has made a huge breakthrough in machine learning, it requires a large amount of manual work for data annotation. Limited by labor costs, however, many applications are expected to reason and judge the instance labels that have never been encountered before. For this reason, zero-shot learning (ZSL) came into being. As a natural data structure that represents the connection between things, the graph is currently drawing more and more attention in ZSL. Therefore, this study reviews the methods of graph-based ZSL systematically. Firstly, the definitions of ZSL and graph learning are outlined, and the ideas of existing solutions for ZSL are summarized. Secondly, the current ZSL methods are classified according to different utilization ways of graphs. Thirdly, the evaluation criteria and datasets concerning graph-based ZSL are discussed. Finally, this study also specifies the problems to be solved in further research on graph-based ZSL and predicts the possible directions of its future development.
    2011,20(11):80-85
    Abstract:
    Based on a study of current video transcoding solutions, we propose a distributed transcoding system in which video resources are stored in HDFS (Hadoop Distributed File System) and transcoded with FFmpeg by MapReduce programs. This paper discusses the video segmentation strategy on distributed storage and how it affects access time, and defines metadata for video formats and transcoding parameters. A distributed transcoding framework is proposed on the basis of the MapReduce programming model: segmented source videos are transcoded in map tasks and merged into the target video in the reduce task. Experimental results show that transcoding time depends on segment size and transcoding cluster size; compared with a single PC, the proposed system implemented on 8 PCs reduces transcoding time by about 80%.
    2012,21(3):260-264
    Abstract:
    The essential problems of an open platform are user validation and authorization, for which OAuth is now the international standard. Its characteristic is that users can let a third-party application access their protected resources without entering their usernames and passwords in that application. The latest version, OAuth 2.0, makes validation and authorization simpler and safer. This paper investigates the principles of OAuth 2.0, analyzes the refresh token procedure, and offers a design proposal for an OAuth 2.0 server together with concrete application examples.
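One practical detail of the refresh-token procedure is deciding when a cached access token must be refreshed. The sketch below is an illustrative client-side policy; the field names and the clock-skew margin are assumptions of this sketch, not part of the OAuth 2.0 specification:

```python
import time

def access_token_for(session, now=None, skew=60):
    """Return (token, needs_refresh): the cached access token if it is
    still valid with a safety margin, otherwise the refresh token that
    the caller should present in a refresh-token grant."""
    now = time.time() if now is None else now
    if now < session["expires_at"] - skew:
        return session["access_token"], False    # still usable
    return session["refresh_token"], True        # refresh-token grant needed

session = {"access_token": "at-1", "refresh_token": "rt-1",
           "expires_at": 1000.0}
```

Refreshing slightly before expiry (the `skew` margin) avoids races where a token expires mid-request.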
    2019,28(6):1-12, DOI: 10.15888/j.cnki.csa.006915
    Abstract:
    A knowledge graph is a knowledge base that represents objective concepts/entities and their relationships in the form of a graph, and is one of the fundamental technologies for intelligent services such as semantic retrieval, intelligent question answering, and decision support. Currently, the connotation of the knowledge graph is not clear enough, and the usage and reuse rates of existing knowledge graphs are relatively low due to a lack of documentation. This paper clarifies the concept of the knowledge graph by differentiating it from related concepts such as the ontology: the ontology is the schema layer and the logical basis of a knowledge graph, while the knowledge graph is the instantiation of an ontology. Research results on ontologies can thus serve as the foundation of knowledge graph research and promote its development and application. Existing generic/domain knowledge graphs are briefly documented and analyzed in terms of building, storage, and retrieval methods. Moreover, future research directions are pointed out.
    2007,16(9):22-25
    Abstract:
    Based on the actual security status of a legacy logistics system, this paper analyzes the shortcomings of object-oriented programming in handling crosscutting concerns versus core concerns, and points out the advantages of the aspect-oriented programming (AOP) approach in separating the concerns of a system. It then analyzes AspectJ, a concrete implementation of AOP, and proposes a method that uses AspectJ to retrofit IC-card security into the legacy logistics system.
    2011,20(7):184-187,120
    Abstract:
    To meet practical needs in smart homes, environmental monitoring, and similar applications, this paper designs a wireless sensor node for long-distance communication. The node uses the second-generation SoC CC2530, which integrates an RF transceiver and a controller on one chip, as its core module, externally connected to a CC2591 RF front-end power amplifier. In software, the application-layer functions are realized on ZStack, based on the ZigBee 2006 protocol stack. The paper also introduces a wireless data acquisition network built on the ZigBee protocol and gives the hardware schematics and software flow charts of the sensor and coordinator nodes. Experiments show that the node performs well and communicates reliably, with a communication distance markedly greater than that of the first-generation TI product.
    2016,25(7):8-16, DOI: 10.15888/j.cnki.csa.005241
    Abstract:
    This software, developed with Visual C++ 6.0 and Access 2003 and designed with the Unicode character set, solves the compatibility and garbled-character-output problems that arise in software development for minority languages. The development model is simple to use, runs stably, and offers a flexible interface; it also lets users uniformly manage (back up, print) the vocabulary and voice databases, and provides technical guidance for developing translation software for other minority languages. Since no translation support tool has yet been released for the Dai region, the Daile-Chinese audible electronic translation dictionary is an important application innovation in the field of Dai information technology and a basic support for research on representing and extracting cultural information elements of minority languages; its main functions are Dai word lookup, translation, and reading aloud. The dictionary implements common functions such as Dai-Chinese bilingual translation, Dai speech playback, and Dai phonetic display, supports adding, modifying, and deleting custom lexicon entries, and achieves good human-computer interaction.
    2008,17(1):113-116
    Abstract:
    Sorting is an important operation in computer programming. This paper discusses an improvement to the quicksort algorithm in C, namely an implementation that combines quicksort with straight insertion sort. When implementing large-scale internal sorting in C programs, the goal is a simple, effective, and fast algorithm. This paper focuses on the process of improving quicksort, from its basic performance characteristics to concrete algorithmic refinements, and through repeated analysis and experiments arrives at the best improved algorithm.
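The combination described, quicksort that falls back to straight insertion sort on small partitions, can be sketched as follows (in Python rather than C, with an arbitrary cutoff of 8):

```python
def insertion_sort(a, lo, hi):
    """Insertion sort on a[lo..hi]; fast for short, nearly-sorted runs."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None, cutoff=8):
    """Quicksort (Hoare partition) that hands small partitions to
    insertion sort instead of recursing all the way down."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= cutoff:
        insertion_sort(a, lo, hi)
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j, cutoff)
    hybrid_quicksort(a, i, hi, cutoff)

data = [5, 3, 8, 1, 9, 2, 7, 4, 6, 0]
hybrid_quicksort(data)
```

The cutoff trades recursion overhead for insertion sort's low constant factor on tiny ranges, which is the practical improvement the paper measures.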
    2008,17(8):87-89
    Abstract:
    With the wide application of object-oriented software development and the demand for test automation, model-based software testing has gradually been recognized and accepted by software developers and testers. It is one of the main testing methods in the coding phase, offering high testing efficiency and good results in exposing complex logical faults, although false positives, missed faults, and failure mechanisms require further study. This paper analyzes and classifies the main test models, gives a preliminary analysis of parameters such as fault density, and finally proposes a model-based software testing process.
    2008,17(8):2-5
    Abstract:
    This paper presents the design and implementation of a single sign-on (SSO) system for an enterprise information portal. The system, built on the Java EE architecture and combining credential encryption with Web Services, provides unified authentication and access control for portal users. The paper elaborates the overall architecture, design ideas, working principles, and concrete implementation of the system, which has already been successfully deployed in the information portals of the radio and television industry in several provinces and cities.
    2004,13(8):58-59
    Abstract:
    This paper describes several ways to move the input focus among multiple edit boxes in a Visual C++ 6.0 dialog by pressing the Enter key, and proposes an improved method.
    2007,16(10):48-51
    Abstract:
    This paper studies the HDF data format and its function library, taking raster images as the main example, and discusses in detail how to read and process raster data with VC++.NET and VC#.NET and then display the image from the resulting pixel matrix by plotting points. The work was carried out in the context of the National Meteorological Center's development of Micaps3.0 (a comprehensive meteorological information analysis and processing system).
    2002,11(12):67-68
    Abstract:
    This paper describes a method for developing real-time data acquisition under Windows 2000, a non-real-time operating system, with Visual C++ 6.0, using an Advantech PCL-818L data acquisition card. Drawing on the API functions in the PCL-818L DLLs, three approaches to high-speed real-time data acquisition are presented, together with their advantages and disadvantages.

Copyright: Institute of Software, Chinese Academy of Sciences. Beijing ICP No. 05046678-3
Address: 4# South Fourth Street, Zhongguancun, Haidian, Beijing. Postal code: 100190
Phone: 010-62661041 Email: csa (a) iscas.ac.cn
Technical Support: Beijing Qinyun Technology Development Co., Ltd.

Beijing Public Network Security No. 11040202500063