Abstract: Hierarchical federated learning (HFL) aims to optimize model performance and preserve data privacy through multi-layered collaborative learning. However, its effectiveness depends on well-designed incentive mechanisms for participating parties and on strategies to address information asymmetry. To address these issues, this study proposes a layered incentive mechanism that protects the privacy of end devices, edge servers, and cloud servers. At the edge-device layer, edge servers act as intermediaries and use multi-dimensional contract theory to design a menu of contract items, encouraging end devices to participate in HFL with their local data without disclosing their costs of data collection, model training, and model transmission. At the cloud-edge layer, a Stackelberg game models the interaction between the cloud server's unit data reward and the edge servers' data sizes; this game is then transformed into a Markov process while keeping the edge servers' unit profit confidential. Multi-agent deep reinforcement learning (MADRL) is then employed to incrementally approach the Stackelberg equilibrium (SE) while preserving privacy. Experimental results show that the proposed incentive mechanism outperforms traditional approaches, yielding a nearly 11% increase in cloud server revenue and an approximately 18-fold improvement in cost-effectiveness.