BIMW: Blockchain-Enabled Innocuous Model Watermarking for Secure Ownership Verification
Abstract
1. Introduction
- (1) We propose the system architecture of BIMW, a blockchain-enabled innocuous model watermarking framework that ensures secure and trustworthy AI model deployment and sharing in distributed edge computing environments.
- (2) We extend traditional LIME-style feature attribution into a differentiable watermark extraction framework, formulating the interpretation process as a ridge regression optimization problem that supports end-to-end gradient propagation for robust watermark embedding (a minimal sketch of this step appears after this list).
- (3) To improve robustness and stability, we propose a normalization mechanism applied prior to binarization, which effectively mitigates noise sensitivity and ensures consistent watermark recovery across samples and models. We also theoretically analyze and compare two watermark verification mechanisms, combining cosine similarity for differentiable training with the chi-squared test for statistically grounded ownership verification.
- (4) We strengthen the transmission security and transparency of watermark data, models, and trigger samples by using blockchain and encryption technologies.
- (5) We conduct comprehensive experiments applying BIMW to various AI models. The results demonstrate its effectiveness, its resistance to watermark-removal attacks, and its efficiency in model data authentication and ownership verification on edge computing platforms.
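To make contributions (2) and (3) concrete, the following is a minimal, PyTorch-style sketch of what a differentiable, LIME-style extraction step could look like. The helper names and hyper-parameters (`segment_fn`, `masks`, `ridge_lambda`) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def ridge_attribution(model, x, masks, segment_fn, ridge_lambda=1.0):
    """LIME-style attribution solved in closed form so gradients reach the model.

    x:        a single trigger image, shape (C, H, W)
    masks:    (n, L) 0/1 matrix; each row switches the L interpretable segments on/off
    segment_fn(x, m): returns x with the segments disabled by mask m (hypothetical helper)
    """
    target_class = model(x.unsqueeze(0)).argmax(dim=1)          # class whose score is explained
    perturbed = torch.stack([segment_fn(x, m) for m in masks])  # (n, C, H, W)
    y = model(perturbed)[:, target_class].squeeze(-1)           # (n,) responses
    Z = masks.float()                                           # (n, L) design matrix
    # Closed-form ridge regression: w = (Z^T Z + lambda * I)^{-1} Z^T y
    A = Z.T @ Z + ridge_lambda * torch.eye(Z.shape[1])
    return torch.linalg.solve(A, Z.T @ y)                       # (L,) attribution weights

def extract_bits(w, eps=1e-8):
    """Normalize before binarization to reduce noise sensitivity, then threshold at zero."""
    w = (w - w.mean()) / (w.std() + eps)
    return (w > 0).float()                                      # recovered watermark bits

def embedding_loss(w, target_bits):
    """Differentiable surrogate used during training: cosine similarity between the
    attribution vector and the +/-1 encoding of the target watermark."""
    target = 2.0 * target_bits.float() - 1.0
    return 1.0 - F.cosine_similarity(w.unsqueeze(0), target.unsqueeze(0)).squeeze()
```

Because the ridge solution is obtained in closed form, gradients flow from `embedding_loss` through the attribution vector back into the model parameters, which is what allows the watermark to be embedded end to end.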
2. Background Knowledge and Related Work
2.1. Edge Intelligence
2.2. Model Watermarking and Ownership Verification
2.3. Interpretable Machine Learning
2.4. Blockchain for Data Security
3. Methodology
| Algorithm 1 Ownership verification via hypothesis testing. |
|---|
| Require: Trigger set, suspicious model, reference watermark, significance level. |
| Ensure: Boolean flag indicating whether ownership is verified. |
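As a rough illustration of the hypothesis-testing flow in Algorithm 1, the sketch below assumes the watermark bits have already been extracted from the suspicious model over the trigger set; it then counts matches against the reference watermark and runs a one-degree-of-freedom chi-squared test against the 50/50 split expected from an unrelated model. The function and parameter names are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import chi2

def verify_ownership(recovered_bits, reference_bits, alpha=0.05):
    """Return True when the bit-match rate is significantly above chance (illustrative sketch)."""
    recovered = np.asarray(recovered_bits)
    reference = np.asarray(reference_bits)
    L = reference.size
    matches = int((recovered == reference).sum())
    observed = np.array([matches, L - matches], dtype=float)
    expected = np.array([L / 2.0, L / 2.0])          # chance level for a non-watermarked model
    stat = ((observed - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=1)
    # Require both statistical significance and an above-chance match direction.
    return p_value < alpha and matches > L / 2
```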
3.1. Insight of Interpretable Watermarking
3.2. Embedding the Watermark
3.3. Extracting the Watermark via Feature Impact Analysis
3.4. Verifying Ownership
3.5. Blockchain-Based Data Verification
4. Experimental Results
4.1. Masked Sample Generation Process
4.2. Key Parameters
4.3. Performance Evaluation
4.4. Resilience Against Watermark Removal Attacks
4.5. Latency of Model Data Authentication
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| AI | Artificial Intelligence |
| EI | Edge Intelligence |
| IW | Interpretable Watermarking |
| IoT | Internet of Things |
| DL | Deep Learning |
| IP | Intellectual Property |
| FIA | Feature Impact Analysis |
| IML | Interpretable Machine Learning |
| SHAP | Shapley Additive Explanations |
| LIME | Local Interpretable Model-Agnostic Explanations |
| CIDs | Content Identifiers |
| WSR | Watermark Success Rate |
References
- Liu, X.; Xu, R.; Chen, Y. A Decentralized Digital Watermarking Framework for Secure and Auditable Video Data in Smart Vehicular Networks. Future Internet 2024, 16, 11.
- Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proc. IEEE 2019, 107, 1738–1762.
- Deng, S.; Zhao, H.; Fang, W.; Yin, J.; Dustdar, S.; Zomaya, A.Y. Edge intelligence: The confluence of edge computing and artificial intelligence. IEEE Internet Things J. 2020, 7, 7457–7469.
- Wu, H.; Zhang, Z.; Guan, C.; Wolter, K.; Xu, M. Collaborate edge and cloud computing with distributed deep learning for smart city internet of things. IEEE Internet Things J. 2020, 7, 8099–8110.
- Li, L.; Ota, K.; Dong, M. Deep learning for smart industry: Efficient manufacture inspection system with fog computing. IEEE Trans. Ind. Inform. 2018, 14, 4665–4673.
- Zhang, Y.; Jia, R.; Pei, H.; Wang, W.; Li, B.; Song, D. The secret revealer: Generative model-inversion attacks against deep neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 253–261.
- Tian, Z.; Cui, L.; Liang, J.; Yu, S. A comprehensive survey on poisoning attacks and countermeasures in machine learning. ACM Comput. Surv. 2022, 55, 1–35.
- Liang, Y.; Xiao, J.; Gan, W.; Yu, P.S. Watermarking techniques for large language models: A survey. arXiv 2024, arXiv:2409.00089.
- Wang, R.; Li, H.; Mu, L.; Ren, J.; Guo, S.; Liu, L.; Fang, L.; Chen, J.; Wang, L. Rethinking the vulnerability of DNN watermarking: Are watermarks robust against naturalness-aware perturbations? In Proceedings of the 30th ACM International Conference on Multimedia, Lisbon, Portugal, 10–14 October 2022; pp. 1808–1818.
- Jia, H.; Choquette-Choo, C.A.; Chandrasekaran, V.; Papernot, N. Entangled watermarks as a defense against model extraction. In Proceedings of the 30th USENIX Security Symposium (USENIX Security 21), Online, 11–13 August 2021; pp. 1937–1954.
- Yan, Y.; Pan, X.; Zhang, M.; Yang, M. Rethinking white-box watermarks on deep learning models under neural structural obfuscation. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23), Anaheim, CA, USA, 9–11 August 2023; pp. 2347–2364.
- Adi, Y.; Baum, C.; Cisse, M.; Pinkas, B.; Keshet, J. Turning your weakness into a strength: Watermarking deep neural networks by backdooring. In Proceedings of the 27th USENIX Security Symposium (USENIX Security 18), Baltimore, MD, USA, 15–17 August 2018; pp. 1615–1631.
- Shao, S.; Li, Y.; Yao, H.; He, Y.; Qin, Z.; Ren, K. Explanation as a watermark: Towards harmless and multi-bit model ownership verification via watermarking feature attribution. arXiv 2024, arXiv:2405.04825.
- Singh, P.; Devi, K.J.; Thakkar, H.K.; Bilal, M.; Nayyar, A.; Kwak, D. Robust and secure medical image watermarking for edge-enabled e-healthcare. IEEE Access 2023, 11, 135831–135845.
- Liu, X.; Xu, R.; Peng, X. BEWSAT: Blockchain-enabled watermarking for secure authentication and tamper localization in industrial visual inspection. In Proceedings of the Eighth International Conference on Machine Vision and Applications (ICMVA 2025), Melbourne, Australia, 12–14 June 2025; Volume 13734, pp. 54–65.
- Xu, R.; Liu, X.; Nagothu, D.; Qu, Q.; Chen, Y. Detecting Manipulated Digital Entities Through Real-World Anchors. In Proceedings of the International Conference on Advanced Information Networking and Applications; Springer: Berlin/Heidelberg, Germany, 2025; pp. 450–461.
- Saha, A.; Subramanya, A.; Pirsiavash, H. Hidden trigger backdoor attacks. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11957–11965.
- Li, E.; Zhou, Z.; Chen, X. Edge intelligence: On-demand deep learning model co-inference with device-edge synergy. In Proceedings of the 2018 Workshop on Mobile Edge Communications, New York, NY, USA, 20 August 2018; pp. 31–36.
- Liu, X.; Xu, R.; Zhao, C. AGFI-GAN: An Attention-Guided and Feature-Integrated Watermarking Model Based on Generative Adversarial Network Framework for Secure and Auditable Medical Imaging Application. Electronics 2024, 14, 86.
- Boenisch, F. A systematic review on model watermarking for neural networks. Front. Big Data 2021, 4, 729663.
- Pan, X.; Zhang, M.; Yan, Y.; Wang, Y.; Yang, M. Cracking white-box DNN watermarks via invariant neuron transforms. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Long Beach, CA, USA, 6–10 August 2023; pp. 1783–1794.
- Li, Y.; Zhu, M.; Yang, X.; Jiang, Y.; Wei, T.; Xia, S.T. Black-box dataset ownership verification via backdoor watermarking. IEEE Trans. Inf. Forensics Secur. 2023, 18, 2318–2332.
- Dziedzic, A.; Duan, H.; Kaleem, M.A.; Dhawan, N.; Guan, J.; Cattan, Y.; Boenisch, F.; Papernot, N. Dataset inference for self-supervised models. Adv. Neural Inf. Process. Syst. 2022, 35, 12058–12070.
- Battah, A.; Madine, M.; Yaqoob, I.; Salah, K.; Hasan, H.R.; Jayaraman, R. Blockchain and NFTs for trusted ownership, trading, and access of AI models. IEEE Access 2022, 10, 112230–112249.
- Molnar, C. Interpretable Machine Learning; Lulu Press: Morrisville, NC, USA, 2020.
- Murdoch, W.J.; Singh, C.; Kumbier, K.; Abbasi-Asl, R.; Yu, B. Definitions, methods, and applications in interpretable machine learning. Proc. Natl. Acad. Sci. USA 2019, 116, 22071–22080.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13 August 2016; pp. 1135–1144.
- Garreau, D.; Luxburg, U. Explaining the explainer: A first theoretical analysis of LIME. In Proceedings of the International Conference on Artificial Intelligence and Statistics, PMLR, Online, 26–28 August 2020; pp. 1287–1296.
- Benet, J. IPFS - Content addressed, versioned, P2P file system. arXiv 2014, arXiv:1407.3561.
- Jetson Orin Nano Super Developer Kit. Available online: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-orin/nano-super-developer-kit/ (accessed on 22 October 2025).
- Solidity. Available online: https://docs.soliditylang.org/en/v0.8.13/ (accessed on 22 October 2025).
- Ganache. Available online: https://archive.trufflesuite.com/ganache/ (accessed on 22 October 2025).
- Truffle. Available online: https://archive.trufflesuite.com/docs/truffle/ (accessed on 22 October 2025).
- IPFS Docs. Available online: https://docs.ipfs.tech/ (accessed on 22 October 2025).
- Krizhevsky, A.; Hinton, G. Convolutional deep belief networks on CIFAR-10. 2010, pp. 1–9. Available online: https://www.cs.toronto.edu/~kriz/conv-cifar10-aug2010.pdf (accessed on 22 October 2025).
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.




| Symbol | Description | Symbol | Description |
|---|---|---|---|
| (, c) | Clean dataset | (, ) | Watermarked dataset |
|  | Model parameters |  | Target watermark |
|  | Optimal model parameters |  | The interpretation |
|  | Model's loss function |  | Discrepancy between the interpretation and the target watermark |
|  | Control factor | y | Model's prediction |
|  | Hyper-parameter |  | Identity matrix |
|  | Recovered watermark bits |  | Normalized correlation |
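For orientation, the formula below shows one plausible way the listed quantities combine into a regularized embedding objective. It is a hedged reconstruction from the symbol descriptions (with $\theta$ standing for the model parameters, $(x, c)$ for a clean sample, $x_{w}$ for a watermarked sample, $w$ for the target watermark, $g(\cdot)$ for the interpretation, $d(\cdot,\cdot)$ for the discrepancy, and $\lambda$ for the control factor), not the paper's exact formulation:

$$
\theta^{*} \;=\; \arg\min_{\theta}\; \mathcal{L}\big(f_{\theta}(x),\, c\big) \;+\; \lambda\, d\big(g(f_{\theta}, x_{w}),\, w\big).
$$

The first term preserves prediction accuracy on the clean dataset, while the second pushes the interpretation extracted from the trigger samples toward the target watermark, with the control factor balancing the two.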
| L | Metric ↓ / Trigger → | No WM | Noise | Patch | Black-Edge |
|---|---|---|---|---|---|
| 32 | Pred Acc. | 91.36 | 91.28 | 91.25 | 91.23 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| 48 | Pred Acc. | 91.36 | 91.31 | 91.14 | 91.12 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| 64 | Pred Acc. | 91.36 | 91.22 | 91.29 | 91.26 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| 128 | Pred Acc. | 91.36 | 91.34 | 91.33 | 91.18 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| 256 | Pred Acc. | 91.36 | 91.26 | 91.18 | 91.31 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| L | Metric ↓ / Trigger → | No WM | Noise | Patch | Black-Edge |
|---|---|---|---|---|---|
| 32 | Pred Acc. | 75.81 | 74.93 | 75.06 | 74.72 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| 64 | Pred Acc. | 75.81 | 74.98 | 75.15 | 74.89 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 1.000 | 1.000 |
| 256 | Pred Acc. | 75.81 | 75.12 | 75.37 | 75.03 |
|  | p-value | / |  |  |  |
|  | WSR | / | 1.000 | 0.999 | 0.999 |
| L | Method ↓ / Trigger → | Pred Acc. (Noise) | H (Noise) | WSR (Noise) | Pred Acc. (Patch) | H (Patch) | WSR (Patch) | Pred Acc. (Black-Edge) | H (Black-Edge) | WSR (Black-Edge) |
|---|---|---|---|---|---|---|---|---|---|---|
| 32 | No WM | 91.36 | / | / | 91.36 | / | / | 91.36 | / | / |
| 32 | Backdoor | 90.97 | 89.87 | 1.000 | 87.56 | 85.38 | 1.000 | 87.92 | 84.68 | 1.000 |
| 32 | IW | 91.28 | 91.26 | 1.000 | 91.25 | 91.26 | 1.000 | 91.03 | 91.24 | 1.000 |
| 48 | No WM | 91.36 | / | / | 91.36 | / | / | 91.36 | / | / |
| 48 | Backdoor | 90.94 | 89.38 | 1.000 | 89.31 | 86.49 | 1.000 | 89.25 | 85.22 | 1.000 |
| 48 | IW | 91.31 | 91.29 | 1.000 | 91.14 | 91.12 | 1.000 | 91.11 | 91.09 | 1.000 |
| 64 | No WM | 91.36 | / | / | 91.36 | / | / | 91.36 | / | / |
| 64 | Backdoor | 90.91 | 89.24 | 1.000 | 90.02 | 87.83 | 1.000 | 89.81 | 87.04 | 1.000 |
| 64 | IW | 91.22 | 91.19 | 1.000 | 90.68 |  | 1.000 | 90.73 |  | 1.000 |
| 128 | No WM | 91.36 | / | / | 91.36 | / | / | 91.36 | / | / |
| 128 | Backdoor | 90.89 | 87.68 | 1.000 | 88.47 | 87.21 | 1.000 | 90.03 | 87.36 | 1.000 |
| 128 | IW | 91.34 | 91.32 | 1.000 | 91.12 | 90.01 | 1.000 | 91.26 | 90.17 | 1.000 |
| 256 | No WM | 91.36 | / | / | 91.36 | / | / | 91.36 | / | / |
| 256 | Backdoor | 90.85 | 86.33 | 0.979 | 90.18 | 81.65 | 0.998 | 90.11 | 85.79 | 1.000 |
| 256 | IW | 91.26 | 91.23 | 1.000 | 89.63 | 89.87 | 1.000 | 91.29 | 90.03 | 1.000 |
| L | Method ↓ / Trigger → | Pred Acc. (Noise) | H (Noise) | WSR (Noise) | Pred Acc. (Patch) | H (Patch) | WSR (Patch) | Pred Acc. (Black-Edge) | H (Black-Edge) | WSR (Black-Edge) |
|---|---|---|---|---|---|---|---|---|---|---|
| 32 | No WM | 75.81 | / | / | 75.81 | / | / | 75.81 | / | / |
| 32 | Backdoor | 72.09 | 71.87 | 0.813 | 71.63 | 71.38 | 0.806 | 71.72 | 71.40 | 0.811 |
| 32 | IW | 75.42 | 75.39 | 1.000 | 75.48 | 75.41 | 0.996 | 75.46 | 75.40 | 1.000 |
| 64 | No WM | 75.81 | / | / | 75.81 | / | / | 75.81 | / | / |
| 64 | Backdoor | 73.25 | 72.16 | 0.857 | 73.31 | 72.92 | 0.864 | 73.43 | 73.12 | 0.896 |
| 64 | IW | 75.53 | 75.41 | 1.000 | 75.55 | 75.43 | 0.998 | 75.62 | 75.45 | 1.000 |
| 256 | No WM | 75.81 | / | / | 75.81 | / | / | 75.81 | / | / |
| 256 | Backdoor | 73.28 | 72.31 | 0.932 | 73.34 | 72.99 | 0.941 | 73.41 | 73.08 | 0.935 |
| 256 | IW | 75.64 | 75.49 | 1.000 | 75.67 | 75.51 | 1.000 | 75.68 | 75.53 | 0.997 |
|  | Match | Mismatch | Total |
|---|---|---|---|
| Observed frequency (O) | 162 | 94 | 256 |
| Expected frequency (E) | 128 | 128 | 256 |
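Reading this table as a one-degree-of-freedom goodness-of-fit test (an assumption about the exact form of the test), the statistic for the tabulated counts works out to

$$
\chi^{2} \;=\; \frac{(162-128)^{2}}{128} + \frac{(94-128)^{2}}{128} \;=\; \frac{2 \times 34^{2}}{128} \;\approx\; 18.06,
$$

which corresponds to $p \approx 2\times10^{-5}$, far below a typical significance level of 0.05, so the match rate is judged significantly above chance and ownership would be verified in this example.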
| Stage | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| Latency | 0.11 | 4.5 | 1.36 | 0.53 | 0.66 | 0.02 |
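For readers unfamiliar with the kind of pipeline being timed above, the outline below shows one generic way a blockchain-backed model-data authentication flow can be organized: pin the artifacts to IPFS, record their content identifiers (CIDs) and a digest on-chain, then verify by comparison. The helpers `ipfs.add`, `contract.register`, and `contract.lookup` are hypothetical stand-ins, and these steps are not claimed to correspond one-to-one with Stages 1-6 in the table.

```python
import hashlib
import json

def authenticate_model(model_bytes, watermark_bytes, trigger_bytes, ipfs, contract, owner):
    """Generic sketch of a blockchain-backed authentication flow (not the exact BIMW pipeline)."""
    artifacts = {"model": model_bytes, "watermark": watermark_bytes, "trigger": trigger_bytes}
    # Fingerprint all artifacts locally.
    digest = hashlib.sha256(b"".join(artifacts.values())).hexdigest()
    # Pin (possibly encrypted) artifacts to IPFS and collect their content identifiers (CIDs).
    cids = {name: ipfs.add(data) for name, data in artifacts.items()}
    # Record the digest and CIDs on-chain through a smart contract (hypothetical interface).
    receipt = contract.register(owner, digest, json.dumps(cids))
    # A verifier later fetches the on-chain record and compares digests before trusting the model.
    record = contract.lookup(owner)
    verified = record["digest"] == digest
    return verified, cids, receipt
```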
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

