Article

Design and Application of Secret Codes for Learning Medical Data

Department of Electrical and Computer Engineering, University of Ulsan, Ulsan 44610, Korea
*
Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(3), 1709; https://doi.org/10.3390/app12031709
Submission received: 20 December 2021 / Revised: 2 February 2022 / Accepted: 5 February 2022 / Published: 7 February 2022
(This article belongs to the Topic Artificial Intelligence in Healthcare)

Abstract

In distributed learning for data requiring privacy preservation, such as medical data, the distribution of secret information is an important problem. In this paper, we propose a framework for secret codes for application to distributed systems. Then, we provide new methods to construct such codes using the synthesis or decomposition of previously known minimal codes. The numerical results show that the new constructions can generate codes with more flexible parameters than the original constructions in terms of the number of possible weights and the range of weights. Thus, the secret codes from the new constructions may be applied to more general situations or environments in distributed systems.

1. Introduction

With the Fourth Industrial Revolution, the application of artificial intelligence technology is expanding in the medical field [1,2,3,4,5,6]. The biggest obstacle to collaboration on medical data from distinct institutes has been the protection of the private information contained in the distributed system. In particular, federated learning is in the spotlight as a distributed machine learning technique that can simultaneously retain privacy and efficiency [7]. It can produce a result similar to learning all data at once without sharing private data. Since federated learning does not centralize data on a big server, it can protect the private information of each user. Currently, it is being applied in several areas, including health care, smart factories, and finance [8,9,10,11,12,13]. Major companies such as Google and NVIDIA have been conducting research on medical artificial intelligence through the development of their own federated learning algorithms [8,9,10].
Linear block codes have been investigated for applications in several areas of engineering, such as communication systems, cryptography, and security [14]. A minimal code is a block code in which the support of a codeword is not included in that of any other codeword [15]. Using a minimal code, one user's information is not subordinate to other users' information. Minimal codes have been studied constantly as one of the mathematical structures that can be used in secret-sharing schemes [16,17,18,19,20,21,22,23]. Furthermore, a minimal code can be used in federated learning due to its distributed storage of secret information. Almost all the minimal codes known so far have been designed based on the structure and characteristics of finite fields [24]. In particular, for binary cases, several design methods have been proposed [16,17,18,19,20]. On the other hand, non-binary minimal codes have been investigated recently [21,22,23]. Research on previously known minimal codes has focused only on the weight distribution of the codes. Considering recent applications, further characteristics or structures of minimal codes, such as their error-correction capability and relation to the learning rate, should be investigated.
In this paper, we propose a framework for secret codes in application to distributed systems. Then, we provide new methods to construct such codes using the synthesis or decomposition of previously known minimal codes. The numerical results show that new constructions can generate codes with more flexible parameters than original constructions in the sense of the number of possible weights and the range of weights. The secret codes from new constructions may be applied to more general situations or environments in distributed systems.
The remainder of this paper is organized as follows. In Section 2, we present some preliminaries on secret-sharing schemes, finite fields, and minimal codes. We present design methods for secret codes and propose a framework for the application of the designed codes to distributed learning systems in Section 3. We provide the results of our constructions in Section 4 and then compare them with previous research in Section 5. Finally, we present concluding remarks in Section 6.

2. Preliminaries

In the distributed learning schemes depicted in Figure 1, the distribution of secret or private information is very important in order to retain secret properties. Linear block codes can be used as a mathematical tool to design such distribution strategies [17,18]. In this section, we present some preliminary knowledge on finite fields and algebraic codes that are used in secret-sharing schemes.
Minimal codes are a class of linear block codes in which the support of a codeword is not included in that of any other codeword. A binary linear block code $C$ of length $N$ is a subspace of the vector space $\{0,1\}^N$. An element of a linear block code is called a codeword. A codeword $x$ of length $N$ in a binary linear block code can be expressed as

$$x = (x_n : x_n \in \{0,1\},\; n \in \mathbb{Z}_N),$$

where $n$ represents the index of each symbol and $\mathbb{Z}_N$ is the set of integers modulo $N$. Because a linear block code is a subspace, the vector addition of any two codewords in a code yields another codeword in the code. The support of $x$ is a subset of $\mathbb{Z}_N$ defined as

$$\mathrm{supp}(x) = \{n : x_n = 1,\; n \in \mathbb{Z}_N\}.$$

The size of $\mathrm{supp}(x)$ is called the Hamming weight of $x$. In secret-sharing schemes, a secret codeword is assigned to a user or a device. The information in a codeword should not be subordinate to any other codeword. A minimal code is defined as a linear block code in which the support of a codeword is not a subset of the support of any other codeword, as shown in Figure 2. Using this characteristic, secret information can be distributed to users, and the information of a user is not fully revealed to the other users. Due to this characteristic, minimal codes have been applied to secret-sharing schemes [15,17]. However, in distributed learning, the secret-storing structure is as important as the weight distribution of a code. Research on analytic approaches to this structure is ongoing.
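As a small illustration (not from the paper; the helper names and example vectors are our own), the support, Hamming weight, and minimality check described above can be sketched in Python:

```python
# Codewords are modeled as binary tuples; these helpers are illustrative.

def support(x):
    """supp(x): the set of indices n with x[n] = 1."""
    return {n for n, bit in enumerate(x) if bit == 1}

def hamming_weight(x):
    """|supp(x)|, the Hamming weight of x."""
    return sum(x)

def is_minimal(code):
    """Check the minimal-code property: no nonzero codeword's support
    is contained in the support of another nonzero codeword."""
    nonzero = [c for c in code if any(c)]
    return not any(
        a != b and support(a) <= support(b)
        for a in nonzero for b in nonzero
    )

# Three vectors whose supports are pairwise non-inclusive (cf. Figure 2):
vectors = [(1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
print(hamming_weight(vectors[0]))  # 2
print(is_minimal(vectors))         # True
```

By contrast, a list containing `(1, 0, 0, 0)` and `(1, 1, 0, 0)` would fail the check, since the first support is contained in the second.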
A secret-sharing scheme is, from a mathematical viewpoint, constructed on a finite field. A finite field is an algebraic structure in which addition, subtraction, multiplication, and division can be freely performed [24]. For a prime $p$ and a positive integer $m$, the finite field $\mathrm{GF}(p^m)$ consists of the additive identity $0$ and the elements $1 = \alpha^0, \alpha^1, \alpha^2, \ldots, \alpha^{p^m-2}$, where $\alpha$ is called a primitive element of $\mathrm{GF}(p^m)$. The finite field $\mathrm{GF}(p^m)$ is an Abelian group with respect to addition, and $\mathrm{GF}(p^m) \setminus \{0\}$ is a cyclic group with respect to multiplication. Any element of a finite field can be represented not only multiplicatively as a power of the primitive element, but also additively as a vector over the basis $\{1, \alpha, \alpha^2, \ldots, \alpha^{m-1}\}$. Note that $\mathrm{GF}(p^m)$ can be interpreted as a vector space of dimension $m$ over $\mathrm{GF}(p)$. Most of the well-known error-correction codes and pseudorandom sequences have been constructed based on the properties of finite fields [14]. In this paper, we assume that $p = 2$, which corresponds to the case of binary codes. Because most machine learning algorithms are based on binary operations, this assumption is appropriate.
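For concreteness, the arithmetic of a small binary extension field can be sketched as follows. This is an illustrative example only: the field GF(2^3) and the primitive polynomial $x^3 + x + 1$ are our own choices, not taken from the paper.

```python
# Elements of GF(2^3) are 3-bit integers (vector representation over GF(2));
# multiplication is carry-less, reduced modulo the primitive polynomial.

M = 3
PRIM = 0b1011  # x^3 + x + 1, primitive over GF(2)

def gf_mul(a, b):
    """Multiply two elements of GF(2^M), reducing modulo PRIM."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shift of a
        a <<= 1
        if a & (1 << M):    # degree overflow: reduce by PRIM
            a ^= PRIM
        b >>= 1
    return r

# The powers of alpha = x (0b010) enumerate all nonzero field elements:
alpha, e, powers = 0b010, 1, []
for _ in range(2**M - 1):
    powers.append(e)
    e = gf_mul(e, alpha)
print(sorted(powers))  # [1, 2, 3, 4, 5, 6, 7]: alpha is primitive
```

The loop illustrates the cyclic multiplicative structure: the seven powers of the primitive element cover every nonzero element exactly once.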
In a secret-sharing scheme defined on $\mathrm{GF}(p^m)$, a secret corresponds to an element of the finite field. Let $U_1, \ldots, U_l$ be the users or devices engaged in the scheme. Pieces of the secret are assigned to each user using a secret code. This secret information must be encoded so that the whole secret cannot be synthesized from any proper subset of the users' information. Moreover, in a partial group of users, if one user can recover some part of the secret, the other users should also be able to recover the same information. Minimal codes are good mathematical structures for various kinds of secret-sharing schemes due to their linearity and the non-inclusive property of their supports. A minimal code can be parameterized by the length of the codewords, the number of codewords, and the weight distribution. The length is the number of information indices, the number of codewords corresponds to the number of users or devices, and the weight distribution is related to the placement of information. In Table 1, some well-known constructions of minimal codes are summarized. Recently, Mesnager et al. presented a generalized construction tool that includes several existing design methods [23]. Since most known minimal codes are designed using the structure of $\mathrm{GF}(p^m)$, their lengths are restricted to $p^m - 1$ for a prime $p$.
In [16], Ashikhmin and Barg established a sufficient condition for a linear code to be minimal, as in the following theorem.
Theorem 1.
[16] A linear code $C$ defined over a finite field of characteristic $p$ is a minimal code if

$$\frac{w_{\min}}{w_{\max}} > \frac{p-1}{p}, \qquad (1)$$

where $w_{\min}$ (resp. $w_{\max}$) is the minimum (resp. maximum) value among the Hamming weights of the nonzero codewords of $C$.
Note that the ratio between the minimum and the maximum weights reflects the possible range of information throughput or performance of each device that the system can accommodate. Although inequality (1) provides an efficient guideline for designing a minimal code, it also restricts the range of selectable Hamming weights, which is closely related to the flexibility of information distribution in learning data. Recently, some constructions that are not restricted by (1) have been presented [20,22], as shown in Table 1. These constructions provide flexibility in the selection of the amount of information assigned to distinct users.
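Condition (1) is easy to test numerically. The following sketch (the function name and sample weights are our own) checks it for a list of nonzero-codeword weights:

```python
# Check the Ashikhmin-Barg sufficient condition w_min / w_max > (p - 1) / p.
from fractions import Fraction

def satisfies_condition_1(weights, p=2):
    """True if the given nonzero-codeword Hamming weights meet bound (1)."""
    nonzero = [w for w in weights if w > 0]
    return Fraction(min(nonzero), max(nonzero)) > Fraction(p - 1, p)

# A binary code with nonzero weights {3, 4} meets the bound (3/4 > 1/2):
print(satisfies_condition_1([3, 4]))            # True
# The weights of the code from [20] quoted in Table 3 do not (14/38 < 1/2);
# that code is minimal anyway -- (1) is sufficient, not necessary:
print(satisfies_condition_1([14, 30, 32, 38]))  # False
```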

3. Design of Secret Codes for Distributed Systems

The authors of [12] summarized the federated learning structure for health care systems sharing data. In this section, we consider a framework for a group of users or institutions with secret information. If a group of users or participants in the distributed learning system can regenerate some part of the secret information, any other group including this group should also be able to regenerate the same information. On the other hand, the entire secret should not be generated from one user or a small group of users. Furthermore, to support adaptive amounts of information on each user and flexibility in the number of users, the construction of secret codes supporting various numbers of users and adaptive rates of information is required. Our framework, with secret codes for application in distributed systems, is summarized in Figure 3.
The support of a codeword can be regarded as the positions of the secret information assigned to the corresponding user. Let $C = \{x_1, x_2, \ldots, x_l\}$ be a minimal code of length $N$ and size $l$. For each codeword $x_i$, $1 \le i \le l$, we can describe its support, with elements listed in increasing order, as

$$s_i = \mathrm{supp}(x_i) = \{s_{i,j} \mid s_{i,j} < s_{i,j'} \text{ if } j < j'\}.$$

Furthermore, we define the set of user indices as $L = \{1, 2, \ldots, l\}$.
Construction A.
For a minimal code $C$ of length $N$, let $y_{l+1}$ be a new vector whose support is given by the union of the supports of two codewords,

$$\mathrm{supp}(y_{l+1}) = s_{i_1} \cup s_{i_2},$$

where $1 \le i_1 < i_2 \le l$ and $s_i$ is not a subset of $s_{i_1} \cup s_{i_2}$ for any $i \in L \setminus \{i_1, i_2\}$. The new code $C_A$ is constructed as the union of $C \setminus \{x_{i_1}, x_{i_2}\}$ and $\{y_{l+1}\}$. Applied recursively, the construction can merge the supports into an arbitrary combination of supports.
Note that the code $C_A$ in Construction A is not linear, since the new support is the union of the two supports rather than the support of the vector sum of the two codewords. However, the distributed property of the secret is preserved by the characteristics of the minimal code. Construction A corresponds to the situation where a user leaves the federated group, and the corresponding amount of information and learning resources can be assigned to another user. The information of a user is not subordinate to that of the other users even after merging the resources. Although Construction A only describes the case where the information of the departed user is assigned to one other user, it can be generalized to the case where the information is distributed to multiple users.
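Construction A can be sketched with supports as Python sets. The concrete supports below are made-up toy values, not taken from an actual minimal code:

```python
# Merge the supports of codewords i1 and i2 (Construction A), keeping the
# remaining supports, provided none of them lies inside the union.

def construction_a(supports, i1, i2):
    merged = supports[i1] | supports[i2]
    rest = [s for k, s in enumerate(supports) if k not in (i1, i2)]
    if any(s <= merged for s in rest):
        raise ValueError("a remaining support lies inside the merged support")
    return rest + [merged]

supports = [{0, 1}, {1, 2}, {0, 3}, {2, 3}]
print(construction_a(supports, 0, 1))  # [{0, 3}, {2, 3}, {0, 1, 2}]
```

The guard clause enforces the condition of the construction: no surviving support may fall inside the merged one, which would break the non-inclusive property.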
Construction B.
For a minimal code $C$ of length $N$, let $y_{l+1}$ and $y_{l+2}$ be new vectors the union of whose supports satisfies

$$s_i = \mathrm{supp}(y_{l+1}) \cup \mathrm{supp}(y_{l+2}),$$

where $1 \le i \le l$, and $\mathrm{supp}(y_{l+1})$ and $\mathrm{supp}(y_{l+2})$ are not subsets of $s_{i'}$ for any $i' \in L \setminus \{i\}$. The new code $C_B$ is constructed as the union of $C \setminus \{x_i\}$ and $\{y_{l+1}, y_{l+2}\}$. Applying this construction recursively, the supports can be separated into arbitrary sizes.
Because of the non-inclusive property of the supports in Construction B, the situation can be interpreted as one where a new user joins the federated group, and part of an existing user's information and learning resources can be assigned to the new user. The information of a user is not included in that of the other users even after separating the resources. Construction B describes a case in which the support of only one user is separated, but it can easily be generalized to a case in which information is extracted from several different users.
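Symmetrically, Construction B can be sketched as splitting one support into two covering parts; the supports and the particular split below are made-up examples:

```python
# Split support i into part1 and part2 with part1 | part2 == supports[i]
# (Construction B), checking that neither part lies inside a remaining support.

def construction_b(supports, i, part1, part2):
    if part1 | part2 != supports[i]:
        raise ValueError("the two parts must cover the original support")
    rest = [s for k, s in enumerate(supports) if k != i]
    if any(part1 <= s or part2 <= s for s in rest):
        raise ValueError("a new support lies inside a remaining support")
    return rest + [part1, part2]

supports = [{0, 1, 2, 3}, {2, 4}, {3, 5}]
print(construction_b(supports, 0, {0, 1, 2}, {1, 3}))
# [{2, 4}, {3, 5}, {0, 1, 2}, {1, 3}]
```

Note that the two parts may overlap (here both contain index 1); the construction only requires that their union recover the original support.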
When we consider merging two disjoint distributed systems with secret codes, it is impossible to preserve the non-inclusive property of supports if the two codes are unified without any additional modification of the structure. The interleaving of blocks is a well-known technique in the design of pseudorandom sequences [25]. We apply the technique to the structure of minimal codes as in the following construction.
Construction C.
Let $C_1 = \{x_{1,1}, \ldots, x_{1,l}\}$ and $C_2 = \{x_{2,1}, \ldots, x_{2,l}\}$ be minimal codes of odd length $N$ and size $l$. Define codewords $y_i = (y_i(n) \mid y_i(n) \in \{0,1\},\; 0 \le n \le 2N-1)$ of length $2N$, $1 \le i \le l$, as

$$y_i(n) = \begin{cases} x_{1,i}(\lfloor n/2 \rfloor), & \text{if } n \text{ is even}; \\ x_{2,i}(\lfloor n/2 \rfloor), & \text{if } n \text{ is odd}, \end{cases}$$

where $\lfloor x \rfloor$ is the greatest integer not exceeding $x$. Define the new code $C_C$ as the set of all $y_i$ for $1 \le i \le l$.
In Construction C, if the index of y i n is even, the code structure follows C 1 . Otherwise, it follows C 2 . Since the non-inclusive properties and linearity are preserved by the structure of the original codes, the new code could be used in merging two disjoint systems. Moreover, the extension can be generalized to the case of merging three or more distinct systems. Therefore, when two or more distributed systems are merged, we do not need to design a new code for the new system. When we have two distinct secret codes of relatively prime lengths, it is possible to enlarge the number of codewords and the length as in the following construction.
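The interleaving of Construction C can be sketched directly. The toy codewords below are our own, and the final check illustrates the weight additivity stated later in Proposition 1:

```python
# Interleave two length-N codewords into one length-2N codeword:
# even positions follow C1, odd positions follow C2 (Construction C).

def interleave(x1, x2):
    """y(n) = x1(floor(n/2)) for even n, x2(floor(n/2)) for odd n."""
    y = []
    for a, b in zip(x1, x2):
        y.extend([a, b])
    return tuple(y)

def construction_c(c1, c2):
    """Pair codewords of C1 and C2 index by index."""
    return [interleave(a, b) for a, b in zip(c1, c2)]

x1, x2 = (1, 0, 1), (0, 1, 1)
y = interleave(x1, x2)
print(y)                            # (1, 0, 0, 1, 1, 1)
print(sum(y) == sum(x1) + sum(x2))  # True: the weights add up
```

Changing the pairing between the two codes (e.g., reversing one list before calling `construction_c`) yields the different weight distributions discussed in Section 5.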
Construction D.
For two minimal codes $C_1 = \{x_{1,1}, \ldots, x_{1,l_1}\}$ of length $N_1$ and size $l_1$ and $C_2 = \{x_{2,1}, \ldots, x_{2,l_2}\}$ of length $N_2$ and size $l_2$, assume that $N_1$ and $N_2$ are relatively prime. Define codewords $y_{i,j} = (y_{i,j}(n) \mid y_{i,j}(n) \in \{0,1\},\; 0 \le n \le N_1N_2 - 1)$ of length $N_1N_2$, $1 \le i \le l_1$, $1 \le j \le l_2$, as

$$y_{i,j}(n) = \begin{cases} 1, & \text{if } n \bmod N_1 \in \mathrm{supp}(x_{1,i}) \text{ and } n \bmod N_2 \in \mathrm{supp}(x_{2,j}); \\ 0, & \text{otherwise}, \end{cases}$$

where $x \bmod y$ denotes the remainder of $x$ divided by $y$. Define the new code $C_D$ as the collection of all $y_{i,j}$ for $1 \le i \le l_1$ and $1 \le j \le l_2$.
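Construction D can be sketched as a CRT-style product of two supports. The lengths 3 and 4 and the supports below are made-up toy values; real component codes would have much larger, relatively prime lengths:

```python
# Build one codeword of Construction D: y(n) = 1 iff n mod n1 is in s1
# and n mod n2 is in s2, for relatively prime lengths n1 and n2.

from math import gcd

def construction_d_word(s1, s2, n1, n2):
    assert gcd(n1, n2) == 1, "lengths must be relatively prime"
    return tuple(
        1 if (n % n1) in s1 and (n % n2) in s2 else 0
        for n in range(n1 * n2)
    )

s1, s2 = {0, 1}, {1, 2}  # supports of weights 2 and 2
y = construction_d_word(s1, s2, 3, 4)
print(len(y))                       # 12
print(sum(y) == len(s1) * len(s2))  # True: weights multiply (Proposition 2)
```

By the Chinese remainder theorem, each pair of residues (one from each support) corresponds to exactly one position $n$, which is why the weight is the product of the component weights.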
Figure 4 shows application scenarios for Constructions A and B. The two constructions correspond to cases in which a user or a device leaves or joins the federation, respectively. Construction C can be applied to a situation where two distributed systems are merged into one, as shown in Figure 5. In Construction C, the original codes are embedded into the extended code without a change in their structure; therefore, no new algebraic design of the distribution is required for that case. It is also easy to generalize to a case in which three or more systems are merged. Note that the constructions can be combined according to the application scenario, and several known constructions, including those in Table 1, can serve as component codes for the four constructions.

4. Results

In this section, we present some resulting theory and examples obtained from our constructions. Constructions C and D provide extended codes in terms of length and weight. Moreover, their properties can be derived by combining the properties of the original codes with the construction procedures, as shown in the following propositions.
Proposition 1.
The code $C_C$ in Construction C is a minimal code of length $2N$ and size $l$. The weight of each codeword $y_i$ is the sum of the weights of $x_{1,i}$ and $x_{2,i}$.
Proof .
The length and the size of the code $C_C$ are clear from the definition. The next step is to check the linearity of the new code, that is, whether $y_i + y_j \in C_C$ for any $1 \le i, j \le l$. Note that the addition is vector addition modulo 2. Since $C_1$ and $C_2$ are linear codes, we may write $x_{1,i} + x_{1,j} = x_{1,k}$ for some $k$ with $1 \le k \le l$. Similarly, we have $x_{2,i} + x_{2,j} = x_{2,k}$. These two equations imply

$$y_i(n) + y_j(n) = y_k(n) = \begin{cases} x_{1,k}(\lfloor n/2 \rfloor), & \text{if } n \text{ is even}; \\ x_{2,k}(\lfloor n/2 \rfloor), & \text{if } n \text{ is odd}, \end{cases}$$

by the definition in Construction C. Thus, for all $1 \le i, j \le l$, there exists a $k$ with $1 \le k \le l$ such that $y_i + y_j = y_k$, which implies that $C_C$ is linear. The non-inclusive property of the supports can also be shown using the definition. The support $\mathrm{supp}(y_i)$ can be partitioned as $\mathrm{supp}_e(y_i) \cup \mathrm{supp}_o(y_i)$, where $\mathrm{supp}_e(y_i)$ (resp. $\mathrm{supp}_o(y_i)$) is the subset of $\mathrm{supp}(y_i)$ with even (resp. odd) indices. It is clear that $\mathrm{supp}_e(y_i)$ is obtained by multiplying the elements of $\mathrm{supp}(x_{1,i})$ by two, and $\mathrm{supp}_o(y_i)$ by multiplying the elements of $\mathrm{supp}(x_{2,i})$ by two and adding one. Thus, $\mathrm{supp}_e(y_i)$ and $\mathrm{supp}_e(y_j)$ are not included in one another for any $1 \le i \ne j \le l$. Similarly, $\mathrm{supp}_o(y_i)$ and $\mathrm{supp}_o(y_j)$ satisfy the same property. Therefore, the code $C_C$ satisfies the conditions of a minimal code. □
Example 1 shows a resultant code obtained by applying Construction C to a code presented in [20]. Additional code parameters with more flexible choices of weights can be obtained simply by changing the index order of one component code. More parameters from numerical simulations are presented in Section 5.
Example 1.
In Example 20 of [20], a minimal code of length 63 and size 128 is given. Its weight distribution is (1, 1, 49, 63, 14) for the weights (0, 14, 30, 32, 38). Using two such codes obtained from two different primitive elements of the finite field GF(2^6), we can construct a new code of length 126 whose weight distribution is (1, 1, 49, 63, 14) for the weights (0, 28, 60, 64, 76), assuming the codewords are paired in order of weight. If we change the order of the combinations, several different codes with different weight distributions are generated. Table 3 compares one example of the code of length 126 with the original code of length 63.
The properties of codes from Construction D can be formulated in the following proposition.
Proposition 2.
The code $C_D$ in Construction D is a minimal code of length $N_1N_2$ and size $l_1l_2$. The weight of each codeword is the product of the weights of the two component codewords.
Proof .
By the definition, the length of each codeword is clearly $N_1N_2$, and the size of the code is $l_1l_2$. For each element of the support of $x_{1,i}$, all the indices of the support of $x_{2,j}$ can be combined. Thus, the weight of $y_{i,j}$ is equal to the product of the weights of $x_{1,i}$ and $x_{2,j}$. Assume that $x_{1,i_1} + x_{1,i_2} = x_{1,i_3}$ for $1 \le i_1, i_2, i_3 \le l_1$ and $x_{2,j_1} + x_{2,j_2} = x_{2,j_3}$ for $1 \le j_1, j_2, j_3 \le l_2$. Then, it is clear that $y_{i_3,j_3} = y_{i_1,j_1} + y_{i_2,j_2}$ for any such combination, which implies that $C_D$ is linear. Note that for any $1 \le i_1, i_2 \le l_1$ and $1 \le j_1, j_2 \le l_2$, the supports of $y_{i_1,j_1}$ and $y_{i_2,j_2}$ are not included in one another if $i_1 \ne i_2$ or $j_1 \ne j_2$, due to the properties of $C_1$ and $C_2$. □
Example 2.
Assume that there are two codes: $C_1$ of length 31 and $C_2$ of length 63. Furthermore, suppose the weight distribution of $C_1$ is given by $(d_1, d_2, \ldots, d_k)$ for the weights $(w_1, w_2, \ldots, w_k)$, and the weight distribution of $C_2$ by $(d'_1, d'_2, \ldots, d'_{k'})$ for the weights $(w'_1, \ldots, w'_{k'})$. Then, by Construction D, we can obtain a code of length $1953 = 31 \times 63$. The weight of a codeword is the product of two weights, one from $C_1$ and one from $C_2$.
It may be possible to use Construction D in cases where computing resources are enhanced and the number of devices is significantly increased. However, although Construction D extends the size and weights of the component codes, cases where $N_1$ and $N_2$ are relatively prime are very rare among previously known constructions. Thus, finding primary constructions of minimal codes with more general parameters should be a future research topic. Furthermore, as shown in Example 2, the parameters of the codes increase significantly. Therefore, it would be appropriate to apply Construction D to cases in which the resources are also increased considerably. Table 2 summarizes the characteristics of the resultant codes from Constructions A~D.

5. Discussions

In this section, we discuss the flexibility of the resultant codes from Construction C compared with the previously known constructions. Table 3, which was obtained by numerical simulation, shows an example of code parameters with more possible information lengths than the previous construction. In this example, we can observe that there are more choices of possible weights in Construction C, which means that the distribution of information becomes more flexible. Note that several additional combinations of weight distributions are possible by exchanging the order of codewords in Construction C.
Table 4, which was obtained by another numerical simulation, compares the ratio between the minimum and the maximum Hamming weights presented in (1) of Theorem 1. The original codes of lengths 128 and 512 from [20] are not restricted by (1), since the ratio $w_{\min}/w_{\max}$ is less than 1/2. It is possible to obtain ratios farther from 1/2 using Construction C, as shown in Table 4. Thus, Construction C can provide an extended range of information ratios among different users.
From the comparisons in Table 3 and Table 4, we can deduce that Construction C provides more flexibility in the distribution of secret information than the previous construction. Therefore, the new codes can not only be applied to more general cases but also be adapted to flexible learning conditions.

6. Conclusions

In this paper, we have proposed a design scheme and framework for secret codes that can be applied to flexible situations in distributed learning systems for medical data. The new codes from the constructions can be applied to cases that are not covered by previously known codes. It has been shown that secret information can be distributed or merged according to the design of appropriate secret codes based on block codes. According to the parameters of the learning resources, we may be able to apply further algebraic constructions or synthesis algorithms for block codes. The learning systems can be implemented under general conditions, such as TensorFlow, multiple workstations with 8 GB or more of RAM and an RTX 1060 or better GPU, and Mbps-class networks. The secret codes can be used for the distribution of private or secret information, as well as for merging and separating distributed systems. Our future work will focus on the development of more advanced and detailed coding techniques for distributed learning systems, including various kinds of servers and devices. Furthermore, the realization of distributed learning systems with actual parameters from hospital environments will be an interesting topic.

Author Contributions

Conceptualization, D.J.; methodology, J.-H.C.; formal analysis, D.J.; writing, review and editing, D.J. and J.-H.C.; funding acquisition, D.J. and J.-H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the 2021 Research Fund of University of Ulsan.

Institutional Review Board Statement

The data used in the study did not include personal identification information. Ethical review and approval were not required for the study.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors would like to thank the anonymous Reviewers, the Academic Editor, and the Assistant Editor for their valuable comments that helped to improve the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. Calvert, J.; Saber, N.; Hoffman, J.; Das, R. Machine-learning-based laboratory developed test for the diagnosis of sepsis in high-risk patients. Diagnostics 2019, 9, 20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Ahmed, N.; Yigit, A.; Isik, Z.; Alpkocak, A. Identification of Leukemia subtypes from microscopic images using convolutional neural network. Diagnostics 2019, 9, 104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Chakraborty, S.; Aich, S.; Kim, H.-C. Detection of Parkinson’s disease from 3T T1 weighted MRI scans using 3D convolutional neural network. Diagnostics 2020, 10, 402. [Google Scholar] [CrossRef]
  4. Lee, J.-H.; Kim, Y.-T.; Lee, J.-B.; Jeong, S.-N. A Performance Comparison between Automated deep learning and dental professionals in classification of dental implant systems from dental imaging: A multi-center study. Diagnostics 2020, 10, 910. [Google Scholar] [CrossRef]
  5. Saha, R.; Aich, S.; Tripathy, S.; Kim, H.-C. Artificial intelligence is reshaping healthcare amid COVID-19: A review in the context of diagnosis & prognosis. Diagnostics 2021, 11, 1604. [Google Scholar] [PubMed]
  6. Hashmani, M.A.; Jameel, S.M.; Rizvi, S.S.H.; Shukla, S. An adaptive federated machine learning-based intelligent system for skin disease detection: A step toward an intelligent dermoscopy device. Appl. Sci. 2021, 11, 2145. [Google Scholar] [CrossRef]
  7. Konečný, J.; McMahan, B.; Ramage, D. Federated Optimization: Distributed Optimization Beyond the Datacenter. arXiv 2015, arXiv:1511.03575. [Google Scholar]
  8. Federated Learning: Collaborative Machine Learning without Centralized Training Data. Available online: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html (accessed on 30 November 2021).
  9. Federated Learning Powered by NVIDIA Clara. Available online: https://developer.nvidia.com/blog/federated-learning-clara/ (accessed on 30 November 2021).
  10. Sheller, M.J.; Edwards, B.; Reina, G.A.; Martin, J.; Pati, S.; Kotrotsou, A.; Milchenko, M.; Xu, W.; Marcus, D.; Colen, R.R.; et al. Federated learning in medicine: Facilitating multi-institutional collaborations without sharing patient data. Sci. Rep. 2020, 10, 12598. [Google Scholar] [CrossRef] [PubMed]
  11. Drungilas, V.; Vaičiukynas, E.; Jurgelaitis, M.; Butkienė, R.; Čeponienė, L. Towards blockchain-based federated machine learning: Smart contract for model inference. Appl. Sci. 2021, 11, 1010. [Google Scholar]
  12. Prayitno; Shyu, C.-R.; Putra, K.T.; Chen, H.-C.; Tsai, Y.Y.; Tozammel Hossain, K.S.M.; Jiang, W.; Shae, Z.-Y. A systematic review of federated learning in the healthcare area: From the perspective of data properties and applications. Appl. Sci. 2021, 11, 11191. [Google Scholar]
  13. Li, Z.; Li, Z.; Li, Y.; Tao, J.; Mao, Q.; Zhang, X. An intelligent diagnosis method for machine fault based on federated learning. Appl. Sci. 2021, 11, 12117. [Google Scholar]
  14. Lin, S.; Costello, D.J., Jr. Error Control Coding, 2nd ed.; Prentice-Hall: Hoboken, NJ, USA, 2004. [Google Scholar]
  15. Massey, J. Minimal codewords and secret sharing. In Proceedings of the 6th Joint Swedish-Russian International Workshop on Information Theory, Mölle, Sweden, 22–27 August 1993; pp. 276–279. [Google Scholar]
  16. Ashikhmin, A.; Barg, A. Minimal Vectors in Linear Codes. IEEE Trans. Inform. Theory 1998, 44, 2010–2017. [Google Scholar] [CrossRef]
  17. Carlet, C.; Ding, C.; Yuan, J. Linear codes from perfect nonlinear mappings and their secret sharing schemes. IEEE Trans. Inform. Theory 2005, 51, 2089–2102. [Google Scholar] [CrossRef] [Green Version]
  18. Yuan, J.; Ding, C. Secret sharing schemes from three classes of linear codes. IEEE Trans. Inform. Theory 2006, 1, 206–212. [Google Scholar] [CrossRef]
  19. Ding, K.; Ding, C. A class of two-weight and three-weight codes and their applications in secret sharing. IEEE Trans. Inform. Theory 2015, 61, 5835–5842. [Google Scholar] [CrossRef] [Green Version]
  20. Ding, C.; Heng, Z.; Zhou, Z. Minimal binary linear codes. IEEE Trans. Inform. Theory 2018, 64, 6536–6545. [Google Scholar] [CrossRef]
  21. Bartoli, D.; Bonini, M. Minimal linear codes in odd characteristic. IEEE Trans. Inform. Theory 2019, 65, 4152–4155. [Google Scholar] [CrossRef] [Green Version]
  22. Xu, G.; Qu, L. Three classes of minimal linear codes over the finite fields of odd characteristic. IEEE Trans. Inform. Theory 2019, 65, 7067–7078. [Google Scholar] [CrossRef]
  23. Mesnager, S.; Qi, Y.; Ru, H.; Tan, C. Minimal linear codes from characteristic functions. IEEE Trans. Inform Theory 2020, 66, 5404–5413. [Google Scholar] [CrossRef] [Green Version]
  24. Lidl, R.; Niederreiter, H. Finite Fields, 1st ed.; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
  25. Gong, G. Theory and applications of q-ary interleaved sequences. IEEE Trans. Inform. Theory 1995, 41, 400–411. [Google Scholar] [CrossRef]
Figure 1. Distributed learning model for medical data.
Figure 2. A simple example of three vectors whose supports are not included in one another.
Figure 3. A framework for distributed systems with secret codes.
Figure 4. Corresponding scenarios for Constructions (a) A and (b) B.
Figure 5. Corresponding scenario for Construction C.
Table 1. Known classes of minimal codes ($p$ is an odd prime).

Reference | Alphabet Size | Length      | Number of Distinct Weights | Restricted in (1)
[16]      | 2             | $2^m - 1$   | 3                          | Yes
[19]      | 2             | $2^m - 1$   | 3                          | Yes
[20]      | 2             | $2^m - 1$   | 4~6                        | No
[21]      | $p$           | $p^m - 1$   | 3                          | Yes
[22]      | $p$           | $p^m - 1$   | 3~4                        | No
Table 2. Comparison of construction methods.

Construction | Characteristic      | Linearity
A            | Merged supports     | May be broken
B            | Separated supports  | May be broken
C            | Merging two codes   | Preserved
D            | Enhanced parameters | Preserved
Table 3. An example of the weight distribution of the resultant code from Construction C (code length: 127).

                                       | Original Code in [20] | Construction C
Possible weights                       | 1, 14, 30, 32, 38     | 2, 28, 60, 62, 64, 68, 70, 76
Weight distribution                    | (1, 1, 49, 63, 14)    | (1, 1, 25, 38, 41, 10, 6), (1, 1, 30, 32, 44, 6, 6, 8), (1, 1, 35, 20, 50, 8, 6, 7), …
Number of distinct information lengths | 5                     | 8
Table 4. Examples of the ratio between the minimum and the maximum weights from Construction C.

Length | $w_{\min}/w_{\max}$ in [20] | $w_{\min}/w_{\max}$ in Construction C
128    | 48.15%                      | 24.08%, 27.78%, 29.89%, …, 48.15%
512    | 48.28%                      | 24.24%, 25.86%, 31.91%, …, 48.28%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Jo, D.; Chung, J.-H. Design and Application of Secret Codes for Learning Medical Data. Appl. Sci. 2022, 12, 1709. https://doi.org/10.3390/app12031709
