Article

A Multi-Category Inverse Design Neural Network and Its Application to Diblock Copolymers

Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2022, 10(23), 4451; https://doi.org/10.3390/math10234451
Submission received: 30 September 2022 / Revised: 22 November 2022 / Accepted: 23 November 2022 / Published: 25 November 2022
(This article belongs to the Special Issue Computational Intelligence: Theory and Applications)

Abstract
In this work, we design a multi-category inverse design neural network to map ordered periodic structures to physical parameters. The neural network model consists of two parts, a classifier and Structure-Parameter-Mapping (SPM) subnets. The classifier is used to identify structures, and the SPM subnets are used to predict physical parameters for desired structures. We also present an extensible reciprocal-space data augmentation method to guarantee the rotation and translation invariance of periodic structures. We apply the proposed network model and data augmentation method to two-dimensional diblock copolymers based on the Landau–Brazovskii model. Results show that the multi-category inverse design neural network has high accuracy in predicting physical parameters for desired structures. Moreover, the idea of multi-categorization can also be extended to other inverse design problems.

1. Introduction

Material properties are mainly determined by microscopic structures. Therefore, to obtain satisfactory properties, finding desired structures is very important in material design. The formation of ordered structures directly relies on physical conditions, such as temperature, pressure, molecular components, and geometry confinement. However, the relationship between ordered structures and physical conditions is extremely complicated and diversified. The traditional approach is trial and error, i.e., passively finding ordered structures for given physical conditions. This approach, which amounts to solving the direct problem, is time-consuming and expensive. A wiser alternative is inverse design, which instead seeks the physical conditions that produce desired structures.
In this work, we are concerned with the theoretical development of inverse design methods for block copolymers. Block copolymer systems are important materials in industrial applications since they can self-assemble into innumerous ordered structures. There are many approaches for solving the direct problem of block copolymer systems, such as first-principle calculation [1], Monte Carlo simulation [2,3], molecular dynamics [4], dissipative particle dynamics [5,6], self-consistent field simulation [7], and density functional theory [8]. In the past decades, the directed self-assembly (DSA) method has been developed to inversely design block copolymers. Liu et al. [9] presented an integration scheme of block-copolymer-directed assembly with 193 nm immersion lithography, and provided a pattern quality comparable to existing double patterning techniques. Suh et al. [10] obtained nanopatterns via DSA of block copolymer films with a vapor-phase-deposited topcoat. Many DSA strategies have also been developed for the fabrication of ordered square patterns to satisfy the demand for lithography in semiconductors [11,12,13,14].
With the rise of data science and machine learning, many deep-learning inverse design methods have been developed to learn the mapping between structures and physical parameters [15,16,17,18]. These new techniques and methods are beginning to be used to study block copolymers. Yao et al. combined machine learning with self-consistent field theory (SCFT) to accelerate the exploration of parameter space for block copolymers [19]. Lin and Yu designed a deep learning solver inspired by physics-informed neural networks to tackle the inverse discovery of the interaction parameters and the embedded chemical potential fields for an observed structure [20]. Based on the idea of classifying first and fitting later, Katsumi et al. estimated Flory–Huggins interaction parameters of diblock copolymers from cross-sectional images of phase-separated structures [21]. The phase diagrams of block copolymers can be predicted by combining deep learning techniques and SCFT [22,23].
In this work, we propose a new neural network to address the inverse design problem based on the idea of multi-categorization. We take the AB diblock copolymer system as an example to demonstrate the performance of our network. The training and test datasets are generated from the Landau–Brazovskii (LB) model [24]. The LB model is an effective tool to describe the phases and phase transitions of diblock copolymers [25,26,27,28,29]. Let ϕ(r) be the order parameter, a function of spatial position r, which represents the density distribution of diblock copolymers. The free energy functional of the LB model is
$$E(\phi(\mathbf{r})) = \frac{1}{|\Omega|}\int_{\Omega}\left\{\frac{\xi^2}{2}\left[(\Delta+1)\phi\right]^2 + \frac{\tau}{2!}\phi^2 - \frac{\gamma}{3!}\phi^3 + \frac{1}{4!}\phi^4\right\}\,d\mathbf{r}, \tag{1}$$
where ϕ(r) satisfies the mass conservation $\frac{1}{|\Omega|}\int_\Omega \phi(\mathbf{r})\,d\mathbf{r} = 0$ and Ω is the system volume. The model parameters in (1) are associated with the physical conditions of diblock copolymers. Concretely, τ is a temperature-like parameter related to the Flory–Huggins interaction parameter, the degree of polymerization N, and the A-monomer fraction f of each diblock copolymer chain; τ controls the onset of the order–disorder spinodal decomposition, and the disordered phase becomes unstable at τ = 0. γ is associated with f and N; it is nonzero only if the AB diblock copolymer chain is asymmetric. ξ is the bare correlation length. Further relationships can be found in [25,26,27,28,29]. The stationary states of the LB free energy functional correspond to ordered structures.
The rest of the paper is organized as follows. In Section 2, we solve the LB model (1) to obtain datasets. In Section 3, we present the multi-category inverse design neural network and the reciprocal-space data augmentation method for periodic structures. In Section 4, we take the diblock copolymer system confined in two dimensions as an example to test the performance of our proposed inverse design neural network model. In Section 5, we draw a brief summary of this work.

2. Direct Problem

Solving the direct problem involves optimizing the LB free energy functional (1) to obtain stationary states corresponding to ordered structures:
$$\min_{\phi(\mathbf{r})} E(\phi(\mathbf{r})), \quad \text{s.t.}\ \frac{1}{|\Omega|}\int_\Omega \phi(\mathbf{r})\,d\mathbf{r} = 0. \tag{2}$$
Here, we only consider periodic structures. Therefore, we can apply the Fourier pseudospectral method to discretize the above optimization problem.

2.1. Fourier Pseudospectral Method

Consider a periodic order parameter ϕ(r) with r ∈ Ω := T^d = R^d/(AZ^d), where A = (a_1, a_2, …, a_d) ∈ R^{d×d} is the primitive Bravais lattice. The primitive reciprocal lattice B = (b_1, b_2, …, b_d) ∈ R^{d×d} satisfies the dual relationship
$$\mathbf{A}\mathbf{B}^T = 2\pi \mathbf{I}. \tag{3}$$
The order parameter ϕ ( r ) can be expanded as
$$\phi(\mathbf{r}) = \sum_{\mathbf{k}\in\mathbb{Z}^d} \hat{\phi}(\mathbf{k})\, e^{i(\mathbf{B}\mathbf{k})^T\mathbf{r}}, \quad \mathbf{r}\in\mathbb{T}^d, \tag{4}$$
where the Fourier coefficient is
$$\hat{\phi}(\mathbf{k}) = \frac{1}{|\mathbb{T}^d|}\int_{\mathbb{T}^d} \phi(\mathbf{r})\, e^{-i(\mathbf{B}\mathbf{k})^T\mathbf{r}}\,d\mathbf{r}, \tag{5}$$
and |T^d| is the volume of T^d.
We define the discrete grid set as
$$\mathbb{T}^d_N = \left\{\mathbf{r}_j = \mathbf{A}\left(j_1/N, j_2/N, \ldots, j_d/N\right)^T : 0 \le j_i < N,\ j_i\in\mathbb{Z},\ i = 1, 2, \ldots, d\right\}, \tag{6}$$
where the number of elements of T_N^d is M = N^d. Denote the grid periodic function space G_N = {f : T_N^d → C, f is periodic}. For any periodic grid functions F, G ∈ G_N, the ℓ²-inner product is defined as
$$\langle F, G\rangle_N = \frac{1}{M}\sum_{\mathbf{r}_j\in\mathbb{T}^d_N} F(\mathbf{r}_j)\,\overline{G(\mathbf{r}_j)}. \tag{7}$$
The discrete reciprocal space is
$$K^d_N = \left\{\mathbf{k} = (k_j)_{j=1}^{d} \in \mathbb{Z}^d : -N/2 \le k_j < N/2\right\}, \tag{8}$$
and the discrete Fourier coefficients of ϕ ( r ) in T N d can be represented as
$$\hat{\phi}(\mathbf{k}) = \left\langle \phi(\mathbf{r}_j),\, e^{i(\mathbf{B}\mathbf{k})^T\mathbf{r}_j}\right\rangle_N = \frac{1}{M}\sum_{\mathbf{r}_j\in\mathbb{T}^d_N} \phi(\mathbf{r}_j)\, e^{-i(\mathbf{B}\mathbf{k})^T\mathbf{r}_j}, \quad \mathbf{k}\in K^d_N. \tag{9}$$
For k ∈ Z^d and l ∈ Z^d, we have the discrete orthogonality
$$\left\langle e^{i(\mathbf{B}\mathbf{k})^T\mathbf{r}_j},\, e^{i(\mathbf{B}\mathbf{l})^T\mathbf{r}_j}\right\rangle_N = \begin{cases} 1, & \mathbf{k} = \mathbf{l} + N\mathbf{m},\ \mathbf{m}\in\mathbb{Z}^d, \\ 0, & \text{otherwise}. \end{cases} \tag{10}$$
Therefore, ϕ(r_j) can be recovered through the inverse discrete Fourier transform
$$\phi(\mathbf{r}_j) = \sum_{\mathbf{k}\in K^d_N} \hat{\phi}(\mathbf{k})\, e^{i(\mathbf{B}\mathbf{k})^T\mathbf{r}_j}, \quad \mathbf{r}_j\in\mathbb{T}^d_N. \tag{11}$$
The N^d-order trigonometric interpolation polynomial is
$$I_N\phi(\mathbf{r}) = \sum_{\mathbf{k}\in K^d_N} \hat{\phi}(\mathbf{k})\, e^{i(\mathbf{B}\mathbf{k})^T\mathbf{r}}, \quad \mathbf{r}\in\mathbb{T}^d. \tag{12}$$
Then, for r_j ∈ T_N^d, we have ϕ(r_j) = I_Nϕ(r_j).
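The interpolation identity above can be checked numerically. The following minimal sketch assumes a square cell with A = 2πI, so B = I by the dual relation (3); the order parameter is an illustrative superposition of two lattice waves.

```python
import numpy as np

# Discrete Fourier coefficients and the interpolation identity
# phi(r_j) = I_N phi(r_j) on a square cell (A = 2*pi*I, hence B = I).
N = 32
j = np.arange(N)
x, y = np.meshgrid(2 * np.pi * j / N, 2 * np.pi * j / N, indexing="ij")

# A periodic order parameter built from a few lattice waves (illustrative).
phi = np.cos(x) + 0.5 * np.cos(x + y)

phi_hat = np.fft.fft2(phi) / N**2                 # coefficients phi_hat(k)
phi_rec = np.real(np.fft.ifft2(phi_hat) * N**2)   # inverse transform

assert np.allclose(phi, phi_rec)                  # phi(r_j) = I_N phi(r_j)
```

Because the sample is a finite sum of lattice waves, the discrete coefficients are exact: for example, cos(x) contributes a coefficient 1/2 at k = (±1, 0).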
Due to the orthogonality (10), the LB energy functional E(ϕ) can be discretized as
$$E_h[\hat{\Phi}] = \frac{\xi^2}{2}\sum_{\mathbf{h}_1+\mathbf{h}_2=0}\left[1 + (\mathbf{B}\mathbf{h}_1)^T(\mathbf{B}\mathbf{h}_2)\right]^2 \hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2} + \frac{\tau}{2!}\sum_{\mathbf{h}_1+\mathbf{h}_2=0}\hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2} - \frac{\gamma}{3!}\sum_{\mathbf{h}_1+\mathbf{h}_2+\mathbf{h}_3=0}\hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2}\hat{\phi}_{\mathbf{h}_3} + \frac{1}{4!}\sum_{\mathbf{h}_1+\mathbf{h}_2+\mathbf{h}_3+\mathbf{h}_4=0}\hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2}\hat{\phi}_{\mathbf{h}_3}\hat{\phi}_{\mathbf{h}_4}, \tag{13}$$
where h_i ∈ K_N^d, i = 1, 2, 3, 4, and Φ̂ = (ϕ̂_1, ϕ̂_2, …, ϕ̂_M)^T ∈ C^M. The convolutions in the above expression can be efficiently calculated through the fast Fourier transform (FFT). Moreover, the mass conservation constraint $\frac{1}{|\Omega|}\int_\Omega \phi(\mathbf{r})\,d\mathbf{r} = 0$ is discretized as
$$\mathbf{e}^T\hat{\Phi} = 0, \tag{14}$$
where e = (1, 0, …, 0)^T ∈ R^M. Therefore, (2) reduces to a finite-dimensional optimization problem
$$\min_{\hat{\Phi}\in\mathbb{C}^M} E_h[\hat{\Phi}] = G_h[\hat{\Phi}] + F_h[\hat{\Phi}], \quad \text{s.t.}\ \mathbf{e}^T\hat{\Phi} = 0, \tag{15}$$
where G_h and F_h are the discretized interaction and bulk energies
$$G_h(\hat{\Phi}) = \frac{\xi^2}{2}\sum_{\mathbf{h}_1+\mathbf{h}_2=0}\left[1 + (\mathbf{B}\mathbf{h}_1)^T(\mathbf{B}\mathbf{h}_2)\right]^2 \hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2}, \quad F_h(\hat{\Phi}) = \frac{\tau}{2!}\sum_{\mathbf{h}_1+\mathbf{h}_2=0}\hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2} - \frac{\gamma}{3!}\sum_{\mathbf{h}_1+\mathbf{h}_2+\mathbf{h}_3=0}\hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2}\hat{\phi}_{\mathbf{h}_3} + \frac{1}{4!}\sum_{\mathbf{h}_1+\mathbf{h}_2+\mathbf{h}_3+\mathbf{h}_4=0}\hat{\phi}_{\mathbf{h}_1}\hat{\phi}_{\mathbf{h}_2}\hat{\phi}_{\mathbf{h}_3}\hat{\phi}_{\mathbf{h}_4}. \tag{16}$$
In this work, we employ the adaptive APG (accelerated proximal gradient) method to solve (15).
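The discretized energy (13) can be evaluated efficiently with the FFT: by Parseval's identity (and ignoring aliasing errors), the convolution sums become grid averages of pointwise powers, with (Δ + 1)ϕ applied in reciprocal space. The sketch below assumes a square cell with B = I for simplicity; it is an illustration, not the paper's implementation.

```python
import numpy as np

def lb_energy(phi, tau, gamma, xi=1.0):
    """Discretized LB energy on an N x N grid over a square cell (B = I)."""
    N = phi.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)             # integer wave numbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    phi_hat = np.fft.fft2(phi)
    # (Delta + 1) phi: multiply coefficients by (1 - |B k|^2), transform back
    lap1 = np.real(np.fft.ifft2((1.0 - (kx**2 + ky**2)) * phi_hat))
    return np.mean(xi**2 / 2 * lap1**2 + tau / 2 * phi**2
                   - gamma / 6 * phi**3 + phi**4 / 24)
```

For a one-mode lamellar ansatz ϕ = 2cos(x), the [(Δ + 1)ϕ]² term vanishes (|Bk| = 1), and the remaining grid averages can be checked against the analytic values.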

2.2. Phase Diagram

Given parameters (γ, τ), we can obtain the stationary states by minimizing the discretized free energy functional (16). Due to the non-convexity of the LB free energy functional, there may be many, even infinitely many, stationary states for given parameters. We need to determine the stationary state with the lowest energy, which corresponds to the most probable ordered structure observed in experiments. This requires comparing the energies of stationary states to obtain the stable structure and constructing a phase diagram. In the following, we consider disordered (DIS), cylindrical hexagonal (HEX), lamellar (LAM), body-centered cubic (BCC), and double gyroid (DG) phases as candidate structures. We use the AGPD software [30] to produce a (τ, γ)-plane phase diagram, as shown in Figure 1. The obtained phase diagram is consistent with previous work [29,31,32].

3. Inverse Design Neural Network

3.1. Multi-Category Inverse Design Network

The architecture of the multi-category inverse design network for predicting physical parameters of a desired periodic structure is shown in Figure 2. The neural network mainly consists of two modules: a classifier and SPM subnets. The former (orange block) identifies and classifies candidate structures, and the latter (blue block), comprising a family of subnets, maps ordered structures to physical parameters.
According to the characteristics of the problem, we can design corresponding network architectures for the classifier and SPM subnets. In this work, the classifier and each SPM subnet use the same network architecture, as shown in Figure 3. Concretely, the network contains an input layer, three convolutional layers, three max-pooling layers, four fully connected layers, and an output layer. For the classifier network, the size of the output layer represents the number of categories, while for each subnet, it equals the number of predicted physical parameters. The network architecture is a development of the LeNet-5 network [33].

3.2. Computational Complexity

In this subsection, we analyze the computational complexity of the network architecture of the classifier and each SPM subnet. As shown in Figure 3, the network consists of three parts:
  • Convolution. The computational cost of a convolution layer is O((2·C_in·k² − 1)·H·W·C_out), where k² is the size of the convolution kernel, C_in and C_out are the numbers of channels of the previous and current layers, and H and W are the height and width of the current layer.
  • Max-pooling. The computational cost of a pooling layer is O(C_in·k²·H·W·C_out), where k² is the size of the pooling kernel, C_in and C_out are the numbers of channels of the previous and current layers, and H and W are the height and width of the current layer.
  • Full connection. The computational cost of a fully connected layer is O(2·N_in·N_out), where N_in and N_out are the dimensionalities of the previous and current layers.
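The three per-layer counts above can be tallied into a total forward-pass cost. The helper below follows those formulas; the layer sizes are illustrative placeholders, not the exact configuration of Table 1.

```python
# Per-layer operation counts following the formulas listed above.
def conv_flops(c_in, k, h, w, c_out):
    # (2 * C_in * k^2 - 1) multiply-adds per output element
    return (2 * c_in * k**2 - 1) * h * w * c_out

def pool_flops(c_in, k, h, w, c_out):
    return c_in * k**2 * h * w * c_out

def fc_flops(n_in, n_out):
    return 2 * n_in * n_out

# Illustrative tally for a small LeNet-style stack (sizes are assumptions).
total = (conv_flops(1, 5, 60, 60, 6)     # conv: 1 -> 6 channels, 5x5 kernel
         + pool_flops(6, 2, 30, 30, 6)   # 2x2 max-pooling
         + fc_flops(5400, 120)           # flatten -> fully connected
         + fc_flops(120, 2))             # output layer (2 units)
```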

3.3. Reciprocal-Space Data Augmentation (RSDA) Method

Generally, the ability of a network depends not only on its architecture, but also on the amount and properties of data. Due to the rotation and translation invariance of periodic structures, it is important to make the classifier recognize this invariance. Here, we use a data augmentation method to increase the amount of data. Existing data augmentation methods, such as Swapping, Mixup, Insertion, and Substitution, are often used for classification tasks; see a recent review [34] and the references therein. However, as the dataset grows, these data augmentation approaches can become expensive, and it is easy to train the network with wrong labels. For one-dimensional periodic phases, Yao et al. [19] used a Period-Filling method for data augmentation. However, this approach is difficult to extend to higher dimensions.
In this paper, we propose an extensible data augmentation method implemented in reciprocal space, called the RSDA method. For a periodic structure, the primary wave vectors in reciprocal space describe the main features of the structure. The RSDA method uses the information of the primary wave vectors for data augmentation. We denote the fundamental domain of a periodic structure ϕ(r) as Ω = {Σ_{j=1}^d ζ_j a_j : ζ_j ∈ [0, 1)}. Any translation t ∈ R^d is equivalent, modulo AZ^d, to a translation in Ω; therefore, we only need to consider translations in the fundamental domain Ω. From the Fourier expansion (12), for a rotation matrix R ∈ SO(d) and t ∈ Ω, we have
$$\phi(R\mathbf{r}+\mathbf{t}) = \sum_{\mathbf{h}\in K^d_N}\hat{\phi}(\mathbf{h})\, e^{i(\mathbf{B}\mathbf{h})^T(R\mathbf{r}+\mathbf{t})} = \sum_{\mathbf{h}\in K^d_N}\hat{\phi}(\mathbf{h})\, e^{i(\mathbf{B}\mathbf{h})^T\mathbf{t}}\, e^{i(\tilde{\mathbf{B}}\mathbf{h})^T\mathbf{r}} = \sum_{\mathbf{h}\in K^d_N}\tilde{\phi}(\mathbf{h})\, e^{i(\tilde{\mathbf{B}}\mathbf{h})^T\mathbf{r}},$$
where B̃ = RᵀB is the reciprocal lattice after the rotation and ϕ̃(h) = ϕ̂(h)e^{i(Bh)ᵀt} is the Fourier coefficient after the translation. The RSDA method is easy to implement and applies to periodic structures in arbitrary dimensions.
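The transformation above amounts to two cheap operations in reciprocal space: updating the reciprocal lattice and phase-shifting the Fourier coefficients. A minimal two-dimensional sketch, with illustrative names and a square cell assumed in the usage check:

```python
import numpy as np

def rsda(phi_hat, B, theta, t):
    """RSDA sketch: rotate the reciprocal lattice (B_tilde = R^T B) and
    multiply each Fourier coefficient by exp(i (B h)^T t)."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    B_tilde = R.T @ B
    N = phi_hat.shape[0]
    h = np.fft.fftfreq(N, d=1.0 / N)              # integer indices h
    hx, hy = np.meshgrid(h, h, indexing="ij")
    # phase factor (B h)^T t for every reciprocal-space index h
    Bh_t = ((B[0, 0] * hx + B[0, 1] * hy) * t[0]
            + (B[1, 0] * hx + B[1, 1] * hy) * t[1])
    return B_tilde, phi_hat * np.exp(1j * Bh_t)
```

As a check, translating ϕ = cos(x) by t = (π, 0) on a square cell with B = I negates the primary coefficients, giving −cos(x).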

4. Application

In this section, we apply the multi-category network model to diblock copolymers confined on a two-dimensional plane to obtain the mapping from periodic structures to physical parameters. In addition, we compare the approximation accuracy and computational time of forecasting parameters. All the experiments in this section are conducted on a workstation equipped with two Intel(R) Xeon(R) Silver 4214 CPUs @ 2.20 GHz with 128 GB RAM. As the phase diagram shows, in two dimensions, only the LAM and HEX phases are thermodynamically stable. Therefore, the SPM has two subnets, i.e., the LAM and HEX subnets. As shown in Figure 4, the classifier distinguishes the LAM or HEX phase after inputting the order parameter ϕ. For desired structures, the LAM and HEX subnets predict the physical parameters (τ*, γ*). The dataset required for the classifier consists of {(ϕ_j, y_j)}_{j=1}^N, where y_j is the label, and for each SPM subnet, it consists of {(ϕ_j, (τ_j*, γ_j*))}_{j=1}^N; N is the size of training data.
Table 1 shows the network parameters of the classifier and SPM subnets. In all networks, the size of the output layer is 2. We adopt the ReLU function as the activation function and the Adam optimizer with a learning rate of 10⁻⁴ to train the neural network model. Kaiming_uniform initialization [35] is used for the network parameters in both the classifier and the subnets. We set the maximum number of epochs to 20,000 and stop training if the error on the validation set decreases to 10⁻⁵ to prevent overfitting. For the classifier, the loss function is defined as
$$L = -\frac{1}{N}\sum_{i=1}^{N}\left[y_i\log p_i + (1-y_i)\log(1-p_i)\right], \tag{17}$$
where y_i is the label of the i-th sample (0 for the LAM phase, 1 for the HEX phase) and p_i is the predicted probability of the HEX phase. For each SPM subnet, the loss function is
$$\mathrm{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left\|\mathbf{u}_i - \tilde{\mathbf{u}}_i\right\|^2, \tag{18}$$
where u i = ( τ * , γ * ) are the targeted parameters and u ˜ i = ( τ , γ ) are the predicted parameters.
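The two training objectives, binary cross-entropy (17) for the classifier and mean squared error (18) for an SPM subnet predicting (τ, γ), can be written compactly in numpy; this is an illustrative sketch, not the training code itself:

```python
import numpy as np

def bce_loss(y, p):
    """Binary cross-entropy: y are 0/1 labels (LAM = 0, HEX = 1),
    p is the predicted probability of the HEX phase."""
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def mse_loss(u, u_pred):
    """Mean squared error over predicted parameter pairs (tau, gamma)."""
    return np.mean(np.sum((u - u_pred) ** 2, axis=1))
```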
It is well known that the LAM and HEX phases have 2- and 6-fold symmetries, respectively. As shown in Figure 5a,b, the fundamental domain Ω_LAM of the LAM phase is a square, while the fundamental domain Ω_HEX of the HEX phase is a parallelogram due to the 6-fold rotational symmetry. The corresponding primitive Bravais lattices are
$$\mathbf{A}_{LAM} = \lambda_1\begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}, \quad \mathbf{A}_{HEX} = \lambda_2\begin{pmatrix}1 & \cos(\pi/3)\\ 0 & \sin(\pi/3)\end{pmatrix}. \tag{19}$$
Lattice constants λ 1 and λ 2 depend on the model parameters.
Now, we construct the dataset of the LAM phase. We discretize the τ-γ phase space of the stable LAM phase with step size [Δτ, Δγ] = [0.008, 0.02] to form the dataset S_LAM1 for training and validation, and with step size [Δτ, Δγ] = [0.003, 0.015] to form S_LAM2 for testing. We randomly choose 627 parameter pairs as a group G_LAM1 from S_LAM1 to construct training and validation sets, and 262 parameter pairs as a group G_LAM2 from S_LAM2 to build the test set. Then, we generate LAM phases by solving the direct problem for each selected parameter pair in G_LAM1 and G_LAM2.
We augment data by rotation and translation. The rotation matrix is
$$R = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}, \tag{20}$$
where θ is the rotation angle. LAM phases have 2-fold rotational symmetry; therefore, θ ∈ [0, π). Figure 5c gives a sketch of rotating the LAM phase in reciprocal space. Concretely, we rotate the 627 LAM phases in G_LAM1 by θ = {jπ/60}_{j=1}^{60} and translate them by t = (i, i) ∈ Ω_LAM, i ∈ {0, 0.2, 0.4, 0.6, 0.8}. The resulting dataset includes 188,100 LAM phases. We randomly split these samples into a training set and a validation set at a ratio of 4:1. In the group G_LAM2, the 262 LAM phases are rotated by θ ∈ {3°, 7°, 19°, 20°, 27°, 34°, 78°, 80°, 82°, 114°} and translated by t = (i, i), i ∈ {0.17, 0.37, 0.51, 0.65, 0.73}, to form the test set.
Similarly, we construct a dataset of the HEX phase. The τ - γ domain of the stable HEX phase is discretized with step size [ Δ τ , Δ γ ] = [ 0.008 , 0.02 ] to form S H E X 1 for training and validation, and with step size [ Δ τ , Δ γ ] = [ 0.003 , 0.015 ] to form S H E X 2 for testing. We randomly choose 1600 parameter pairs as a group G H E X 1 from S H E X 1 to construct training and validation sets, and 300 parameter pairs as a group G H E X 2 from S H E X 2 to build the test set. We still obtain HEX phases by solving the direct problem for selected parameters.
Then, we augment the dataset by rotation and translation operators. The HEX phase has 6-fold rotational symmetry; therefore, the rotation angle θ in (20) belongs to [0, π/3). An illustration of rotating the HEX phase in reciprocal space is shown in Figure 5d. Concretely, we discretize the rotation angle as θ = {jπ/60}_{j=1}^{20} and select the translation vector t = (i, i) in Ω_HEX, i ∈ {0, 0.2, 0.4, 0.6, 0.8}, for processing each sample in G_HEX1. This results in 160,000 HEX phases. We randomly divide them into a training set and a validation set at a ratio of 4:1. The 300 HEX phases generated from G_HEX2 are rotated by θ ∈ {3°, 11°, 24°, 27°, 38°, 40°, 52°, 54°, 59°} and translated by t = (i, i)ᵀ, i ∈ {0.17, 0.37, 0.51, 0.65, 0.73}, to form the test set.
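The augmented dataset sizes follow directly from multiplying the number of parameter pairs by the numbers of rotations and translations; a quick sanity check:

```python
# Training/validation counts after augmentation:
# samples x rotations x translations.
lam_total = 627 * 60 * 5      # LAM: 627 pairs, 60 angles, 5 translations
hex_total = 1600 * 20 * 5     # HEX: 1600 pairs, 20 angles, 5 translations
```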
Some LAM and HEX phases under rotation and translation transformation are visually illustrated in Figure 6. The sizes of training, validation, and test sets for the classifier are given in Table 2.
Figure 7 shows the training and validation loss of the classifier. One can find that the training and validation losses reach 10⁻⁷ and 10⁻⁸, respectively, at epoch = 5. The accuracy is defined as α = Σ_i M_{i,i} / Σ_{i,j} M_{i,j}, where M is the confusion matrix [21], a table layout that visualizes the predictions of the network. As shown in Table 3, M_{i,j} denotes the number of samples of phase i identified as phase j, with i, j ∈ {LAM, HEX}. These results show that the classifier identifies structures with 100% success.
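The accuracy α can be computed directly from the confusion matrix; the entries below are illustrative counts for a perfect 2×2 classification over {LAM, HEX}, not the paper's exact Table 3:

```python
import numpy as np

# Rows: true phase, columns: predicted phase (order: LAM, HEX).
# Off-diagonal zeros correspond to the 100% accuracy reported above.
M = np.array([[1300,    0],
              [   0, 1500]])

alpha = np.trace(M) / M.sum()   # alpha = sum_i M_ii / sum_ij M_ij
```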
We classify the training and validation data by the classifier and obtain 188,100 LAM and 160,000 HEX phases. Then, we adopt them as the training and validation data for each subnet. For the test data of each subnet, we only consider the data translated by t = (0.17, 0.17) in the test set. Table 4 gives the size of the dataset for each SPM subnet.
Figure 8 presents the training and validation losses of the SPM subnets. We can see that the training loss of the LAM (HEX) subnet is 5.75 × 10⁻⁵ (1.16 × 10⁻⁵) and the validation loss is 1.89 × 10⁻⁵ (8.30 × 10⁻⁶) at epoch = 700 (1200).
We define the relative error
$$E = \frac{\|\mathbf{u}-\tilde{\mathbf{u}}\|_2}{\|\mathbf{u}\|_2}, \tag{21}$$
and the average relative error
$$E_{\text{average}} = \frac{1}{N_{\text{test}}}\sum_{k=1}^{N_{\text{test}}}\frac{\|\mathbf{u}_k-\tilde{\mathbf{u}}_k\|_2}{\|\mathbf{u}_k\|_2}, \tag{22}$$
where N_test is the size of the test data. The predicted accuracy of a single sample is (1 − E) × 100%, while the average accuracy is (1 − E_average) × 100%. Figure 9 illustrates the test accuracy of the LAM and HEX subnets. The results indicate that the parameter prediction accuracy for a single sample ranges from 80% to 100%; the average prediction accuracy of the LAM subnet (blue dashed line) is 84%, while that of the HEX subnet (pink dashed line) reaches 91%.
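The relative error (21) and average relative error (22) are straightforward to evaluate for predicted parameter pairs; the arrays below are illustrative:

```python
import numpy as np

def relative_error(u, u_pred):
    """Relative error E = ||u - u_pred||_2 / ||u||_2 for one sample."""
    return np.linalg.norm(u - u_pred) / np.linalg.norm(u)

def average_relative_error(U, U_pred):
    """Average of the per-sample relative errors over a test set."""
    return np.mean([relative_error(u, up) for u, up in zip(U, U_pred)])
```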
We randomly select two test samples from the test set and input them into the network to predict the corresponding parameters. Next, we obtain the structures corresponding to the predicted parameters by solving the direct problem. Figure 10 shows LAM and HEX phases with targeted and predicted parameters, and the absolute errors between the targeted and predicted phases. From Figure 10c,f, we can see that the absolute error of the LAM phase is on the order of 10⁻² and that of the HEX phase is on the order of 10⁻³. This also reflects the good fitting ability of the SPM subnets.
Table 5 shows the training time of the classifier and each SPM subnet, and the online calculation time of these networks when identifying structures or predicting parameters with one sample in the test set.

5. Conclusions

In this paper, we propose a multi-category neural network for the inverse design of ordered periodic structures. The proposed network can construct the mapping between phases and physical parameters. For periodic phases, we provide an extensible RSDA approach to augment data. Then, we apply these methods to the two-dimensional diblock copolymer system. The dataset is produced by the LB free energy functional. Experimental results show that the structure recognition accuracy of the classifier can reach 100% based on 26,600 randomly selected test data. Moreover, on a dataset consisting of 5320 randomly selected test data, the parameter prediction accuracy of the LAM phase reaches 84% and the accuracy of the HEX phase reaches 91%. The network model and RSDA method are applied to a two-dimensional problem; however, they can be extended to higher-dimensional inverse design problems.

Author Contributions

Conceptualization, Y.H., K.J., D.W. and T.Z.; methodology, Y.H., K.J., D.W. and T.Z.; software, D.W. and T.Z.; validation, K.J., D.W. and T.Z.; formal analysis, K.J., D.W. and T.Z.; investigation, K.J., D.W. and T.Z.; resources, K.J., D.W. and T.Z.; data curation, D.W. and T.Z.; writing—original draft preparation, K.J., D.W. and T.Z.; writing—review and editing, K.J., D.W. and T.Z.; visualization, K.J., D.W. and T.Z.; supervision, Y.H. and K.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by NSFC Project (12171412, 11971410). K.J. is partially supported by the Natural Science Foundation for Distinguished Young Scholars of Hunan Province (2021JJ10037). Y.H. is partially supported by China’s National Key R&D Programs (2020YFA0713500).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Chen Cui and Liwei Tan for useful discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Beu, T.A.; Onoe, J.; Hida, A. First-principles calculations of the electronic structure of one-dimensional C60 polymers. Phys. Rev. B 2005, 72, 155416. [Google Scholar] [CrossRef]
  2. He, X.; Song, M.; Liang, H.; Pan, C. Self-assembly of the symmetric diblock copolymer in a confined state: Monte Carlo simulation. J. Chem. Phys. 2001, 114, 10510–10513. [Google Scholar] [CrossRef]
  3. Sugimura, N.; Ohno, K. A Monte Carlo simulation of water + oil + ABA block copolymer ternary system. I. Patterns in thermal equilibrium. AIP Adv. 2021, 11, 055312. [Google Scholar] [CrossRef]
  4. Lemak, A.S.; Lepock, J.R.; Chen, J.Z. Molecular dynamics simulations of a protein model in uniform and elongational flows. Proteins Struct. Funct. Bioinform. 2003, 51, 224–235. [Google Scholar] [CrossRef] [PubMed]
  5. Ortiz, V.; Nielsen, S.O.; Discher, D.E.; Klein, M.L.; Lipowsky, R.; Shillcock, J. Dissipative particle dynamics simulations of polymersomes. J. Phys. Chem. B 2005, 109, 17708–17714. [Google Scholar] [CrossRef] [Green Version]
  6. Gavrilov, A.A.; Kudryavtsev, Y.V.; Chertovich, A.V. Phase diagrams of block copolymer melts by dissipative particle dynamics simulations. J. Chem. Phys. 2013, 139, 224901. [Google Scholar] [CrossRef] [Green Version]
  7. Fredrickson, G.; Fredrickson, D. The Equilibrium Theory of Inhomogeneous Polymers; International Series of Monographs on Physics; OUP Oxford: Oxford, UK, 2006. [Google Scholar]
  8. Fraaije, J. Dynamic density functional theory for microphase separation kinetics of block copolymer melts. J. Chem. Phys. 1993, 99, 9202–9212. [Google Scholar] [CrossRef]
  9. Liu, C.C.; Nealey, P.F.; Raub, A.K.; Hakeem, P.J.; Brueck, S.R.J.; Han, E.; Gopalan, P. Integration of block copolymer directed assembly with 193 nm immersion lithography. J. Vac. Sci. Technol. B 2010, 28, C6B30. [Google Scholar] [CrossRef] [Green Version]
  10. Suh, H.S.; Kim, D.H.; Moni, P.; Xiong, S.; Ocola, L.E.; Zaluzec, N.J.; Gleason, K.K.; Nealey, P.F. Sub-10-nm patterning via directed self-assembly of block copolymer films with a vapour-phase deposited topcoat. Nat. Nanotechnol. 2017, 12, 575–581. [Google Scholar] [CrossRef]
  11. Li, W.; Gu, X. Square patterns formed from the directed self-assembly of block copolymers. Mol. Syst. Des. Eng. 2021, 6, 355–367. [Google Scholar] [CrossRef]
  12. Ouk Kim, S.; Solak, H.H.; Stoykovich, M.P.; Ferrier, N.J.; De Pablo, J.J.; Nealey, P.F. Epitaxial self-assembly of block copolymers on lithographically defined nanopatterned substrates. Nature 2003, 424, 411–414. [Google Scholar] [CrossRef] [PubMed]
  13. Ji, S.; Nagpal, U.; Liao, W.; Liu, C.C.; de Pablo, J.J.; Nealey, P.F. Three-dimensional directed assembly of block copolymers together with two-dimensional square and rectangular nanolithography. Adv. Mater. 2011, 23, 3692–3697. [Google Scholar] [CrossRef] [PubMed]
  14. Chuang, V.P.; Gwyther, J.; Mickiewicz, R.A.; Manners, I.; Ross, C.A. Templated self-assembly of square symmetry arrays from an ABC triblock terpolymer. Nano Lett. 2009, 9, 4364–4369. [Google Scholar] [CrossRef]
  15. Malkiel, I.; Nagler, A.; Mrejen, M.; Arieli, U.; Wolf, L.; Suchowski, H. Deep learning for design and retrieval of nano-photonic structures. arXiv 2017, arXiv:1702.07949. [Google Scholar]
  16. Gahlmann, T.; Tassin, P. Deep neural networks for the prediction of the optical properties and the free-form inverse design of metamaterials. arXiv 2022, arXiv:2201.10387. [Google Scholar] [CrossRef]
  17. Liu, D.; Tan, Y.; Khoram, E.; Yu, Z. Training deep neural networks for the inverse design of nanophotonic structures. ACS Photonics 2018, 5, 1365–1369. [Google Scholar] [CrossRef] [Green Version]
  18. Peurifoy, J.; Shen, Y.; Jing, L.; Yang, Y.; Cano-Renteria, F.; DeLacy, B.G.; Tegmark, M.; Joannopoulos, J.D.; Soljacic, M. Nanophotonic particle simulation and inverse design using artificial neural networks. Sci. Adv. 2018, 4, eaar4206. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Xuan, Y.; Delaney, K.T.; Ceniceros, H.D.; Fredrickson, G.H. Deep learning and self-consistent field theory: A path towards accelerating polymer phase discovery. J. Comput. Phys. 2021, 443, 110519. [Google Scholar] [CrossRef]
  20. Lin, D.; Yu, H.Y. Deep learning and inverse discovery of polymer self-consistent field theory inspired by physics-informed neural networks. Phys. Rev. E 2022, 106, 014503. [Google Scholar] [CrossRef]
  21. Hagita, K.; Aoyagi, T.; Abe, Y.; Genda, S.; Honda, T. Deep learning-based estimation of Flory–Huggins parameter of A–B block copolymers from cross-sectional images of phase-separated structures. Sci. Rep. 2021, 11, 1–16. [Google Scholar] [CrossRef]
Figure 1. (a) Phase diagram of the Landau–Brazovskii model. (b) LAM. (c) HEX. (d) BCC. (e) DG.
Figure 2. Multi-category inverse design network.
Figure 3. Network architecture of classifier and each SPM subnet.
Figure 4. Network of the two-dimensional diblock copolymer system.
Figure 5. Schematic diagram of the fundamental domain: (a) LAM phase Ω_LAM; (b) HEX phase Ω_HEX. Illustration of rotating the primary spectral points of the (c) LAM and (d) HEX phases in reciprocal space. Reference state (red); transformed state after rotation by θ degrees (blue).
Figure 6. Density and spectral patterns of the LAM and HEX phases under rotation and translation transformations. LAM phases at [τ, γ] = [0.3, 0.2]: (a) θ = 0, t = (0, 0); (b) θ = 0, t = (0.1, 0.1); (c) θ = π/6, t = (0, 0); (d) θ = π/2, t = (0.1, 0.1). HEX phases at [τ, γ] = [0.3, 0.8]: (e) θ = 0, t = (0, 0); (f) θ = 0, t = (0.1, 0.1); (g) θ = π/6, t = (0, 0); (h) θ = π/4, t = (0.1, 0.1).
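The invariance illustrated in Figure 6 rests on a standard Fourier fact: translating a periodic density field changes only the phases of its Fourier coefficients, not their magnitudes, so spectral magnitudes are a translation-invariant representation. A minimal NumPy sketch (the single-cosine field, grid size, and shift are illustrative stand-ins, not the paper's actual data):

```python
import numpy as np

# Illustrative LAM-like periodic density field on a 40x40 grid:
# a single cosine mode, standing in for a real density profile.
n = 40
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.cos(2 * np.pi * 3 * X)

# Translate the field by t = (0.1, 0.1) in units of the period
# (a circular shift on the periodic grid).
shift = (int(0.1 * n), int(0.1 * n))
rho_shifted = np.roll(rho, shift, axis=(0, 1))

# Phases change under translation, but spectral magnitudes do not.
mag = np.abs(np.fft.fft2(rho))
mag_shifted = np.abs(np.fft.fft2(rho_shifted))
print(np.allclose(mag, mag_shifted))  # True
```

Rotations, by contrast, do move the primary spectral points (Figure 5c,d), which is why the reciprocal-space augmentation rotates those points explicitly rather than relying on magnitude invariance.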
Figure 7. Training and validation losses of classifier.
Figure 8. The training and validation losses of SPM subnets: (a) LAM and (b) HEX.
Figure 9. The test accuracy of LAM (blue) and HEX (green) subnets. Scatter plots demonstrate the parameter prediction accuracy of HEX and LAM subnets. The average test accuracy of the two subnets is shown by two dashed lines.
Figure 10. The phases with targeted and predicted parameters are obtained by solving the direct problem (15). LAM phase: (a) [τ*, γ*] = [−0.4, 0.2], θ = 82 and t = (0.17, 0.17); (b) [τ, γ] = [−0.3914, 0.2067], θ = 82 and t = (0.17, 0.17). HEX phase: (d) [τ*, γ*] = [−0.08, 0.28], θ = 40 and t = (0.17, 0.17); (e) [τ, γ] = [−0.0807, 0.2744], θ = 40 and t = (0.17, 0.17). Absolute errors between targeted and predicted LAM (c) and HEX (f) phases.
Table 1. Network parameters of classifier and each subnet. The notations of parameters in the table are as follows: in channels (i), out channels (o), kernel size (k), stride (s), padding (p), batch size (Nb).
| Layer Type | Output Shape | Layer Type | Output Shape |
| --- | --- | --- | --- |
| Conv2d (i = 1, o = 4, k = 5, p = 2, s = 1) | (Nb, 4, 40, 40) | Connect left | |
| MaxPool2d (k = 2, s = 2) | (Nb, 4, 20, 20) | Reshape | (Nb, 256) |
| Conv2d (i = 4, o = 8, k = 5, p = 2, s = 1) | (Nb, 8, 20, 20) | Fully connected | (Nb, 128) |
| MaxPool2d (k = 2, s = 2) | (Nb, 8, 10, 10) | Fully connected | (Nb, 64) |
| Conv2d (i = 8, o = 16, k = 5, p = 1, s = 1) | (Nb, 16, 8, 8) | Fully connected | (Nb, 10) |
| MaxPool2d (k = 5, s = 1) | (Nb, 16, 4, 4) | Fully connected | (Nb, 2) |
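The spatial sizes in Table 1 follow from the standard output-size formula for Conv2d and MaxPool2d layers, floor((n + 2p − k)/s) + 1. A small sketch checking the chain of sizes for a 40×40 input (shapes only, using the layer parameters listed above):

```python
def out_size(n, k, s=1, p=0):
    # Standard Conv2d/MaxPool2d output-size formula.
    return (n + 2 * p - k) // s + 1

n = 40                          # input is 1 x 40 x 40
n = out_size(n, k=5, s=1, p=2)  # Conv2d   -> 40
n = out_size(n, k=2, s=2)       # MaxPool2d -> 20
n = out_size(n, k=5, s=1, p=2)  # Conv2d   -> 20
n = out_size(n, k=2, s=2)       # MaxPool2d -> 10
n = out_size(n, k=5, s=1, p=1)  # Conv2d   -> 8
n = out_size(n, k=5, s=1)       # MaxPool2d -> 4
print(16 * n * n)               # 16 channels x 4 x 4 = 256, the reshape fed to the fully connected layers
```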
Table 2. Amount of training, validation, and test data for the classifier.
| | Training Set | Validation Set | Test Set |
| --- | --- | --- | --- |
| Classifier | 278,480 | 69,620 | 26,600 |
Table 3. The confusion matrix of classifier at epoch = 5.
| | Identified LAM | Identified HEX |
| --- | --- | --- |
| Certain LAM | 13,100 | 0 |
| Certain HEX | 0 | 13,500 |
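The standard classification metrics can be read directly off the confusion matrix in Table 3; with zero off-diagonal entries, every metric equals 1.0. A sketch, treating LAM as the positive class:

```python
# Confusion matrix from Table 3 (rows: certain class, columns: identified class).
tp, fn = 13100, 0   # certain LAM identified as LAM / as HEX
fp, tn = 0, 13500   # certain HEX identified as LAM / as HEX

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(accuracy, precision, recall)  # 1.0 1.0 1.0
```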
Table 4. Amount of training, validation, and test data for LAM and HEX subnets.
| | Training Data | Validation Data | Test Data |
| --- | --- | --- | --- |
| LAM subnet | 150,480 | 37,620 | 2620 |
| HEX subnet | 128,000 | 32,000 | 2700 |
Table 5. Training and test time for the classifier and each SPM subnet.
| | Classifier | LAM Subnet | HEX Subnet |
| --- | --- | --- | --- |
| Training time | 11.5 min | 11.6 h | 20.0 h |
| Test time | 0.0078 s | 0.0034 s | 0.0074 s |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wei, D.; Zhou, T.; Huang, Y.; Jiang, K. A Multi-Category Inverse Design Neural Network and Its Application to Diblock Copolymers. Mathematics 2022, 10, 4451. https://doi.org/10.3390/math10234451
