Predicting Thermal Conductivity of Nanostructures by Implementation of Deep Neural Networks and Genetic Programming Based Symbolic Regression
Poster Presentation
Authors
1. School of Mechanical Engineering, College of Engineering, University of Tehran
2. College of Engineering, School of Mechanical Engineering
Abstract
Predictive models for the thermal conductivity (TC) of novel nanostructures have diverse applications: they can guide the discovery of new materials and improve our understanding of thermal management. However, the nature of TC makes it a difficult quantity to predict. Experiments on this problem are expensive and inefficient, and the classical models used to predict TC are not as accurate as density functional theory (DFT) calculations or molecular dynamics (MD) simulations. While DFT and MD provide researchers with powerful predictive tools, they require hours of CPU time and a relatively high computing cost, a problem that becomes more serious when TC at larger scales is required. Herein, we train deep neural networks and propose a genetic programming based symbolic regression (SR) approach to obtain better predictive models of nanostructure thermal conductivity. Our results show a five-fold reduction in simulation time compared with current methods such as MD or DFT. Our dataset comprises 347 nanostructure samples, all computed with DFT. To compare the models, we use a hybrid cross-validation approach that combines K-fold cross-validation with holdout validation, and we evaluate them with two well-known statistical criteria, RMSE and R^2. The symbolic regression model produced four previously unreported formulas, all of which, when applied to our dataset, outperform the Slack model, the best-known analytical formula for predicting thermal conductivity. The neural network was trained with MAE as the loss function and the Adam optimizer. We observed that neural networks are remarkably strong during training, outperforming the other models in all of our training experiments. All models performed worse in extrapolation experiments than in cross-validation. We believe this occurs because of the limited coverage of our dataset, since changing the training set and drawing training data from across the whole domain increased performance.
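The poster does not specify the SR implementation. A minimal sketch of a genetic programming based symbolic regressor of the kind described, assuming the open-source gplearn library and illustrative hyperparameters and descriptors (none of which are taken from the study), might look like:

```python
# Sketch of genetic-programming-based symbolic regression with gplearn.
# Descriptor count and all hyperparameters are illustrative assumptions;
# the poster does not state the actual configuration.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.random((347, 5))          # 347 samples x 5 hypothetical DFT descriptors
y = rng.random(347)               # placeholder thermal conductivities

sr = SymbolicRegressor(
    population_size=1000,         # candidate formulas per generation
    generations=20,               # evolutionary steps
    function_set=('add', 'sub', 'mul', 'div', 'sqrt', 'log'),
    metric='rmse',                # same error criterion used in the study
    parsimony_coefficient=0.001,  # penalize overly long formulas
    random_state=0,
)
sr.fit(X, y)
print(sr._program)                # best evolved closed-form expression
```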
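The hybrid validation scheme, K-fold cross-validation combined with a holdout split, scored with RMSE and R^2, can be sketched with scikit-learn; the estimator and the choice of K = 5 are assumptions standing in for the models compared in the study:

```python
# Sketch of the hybrid validation scheme: a holdout split kept aside,
# plus K-fold cross-validation on the remaining data, scored with
# RMSE and R^2. Estimator and K are illustrative choices.
import numpy as np
from sklearn.model_selection import KFold, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

X = np.random.rand(347, 5)
y = np.random.rand(347)

# Holdout split: the held-out portion is never seen during CV.
X_dev, X_hold, y_dev, y_hold = train_test_split(X, y, test_size=0.2, random_state=0)

rmses, r2s = [], []
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X_dev):
    model = RandomForestRegressor(random_state=0)
    model.fit(X_dev[train_idx], y_dev[train_idx])
    pred = model.predict(X_dev[val_idx])
    rmses.append(np.sqrt(mean_squared_error(y_dev[val_idx], pred)))
    r2s.append(r2_score(y_dev[val_idx], pred))

print(f"K-fold  RMSE={np.mean(rmses):.3f}  R^2={np.mean(r2s):.3f}")

# Final check on the untouched holdout set.
final = RandomForestRegressor(random_state=0).fit(X_dev, y_dev)
hold_pred = final.predict(X_hold)
print(f"Holdout RMSE={np.sqrt(mean_squared_error(y_hold, hold_pred)):.3f}  "
      f"R^2={r2_score(y_hold, hold_pred):.3f}")
```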
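For reference, the Slack model that the SR formulas are benchmarked against is commonly written in the literature as follows; this standard form is supplied here for context and is not reproduced from the poster:

```latex
% Slack model in one common form:
\kappa_L \;=\; A\,\frac{\bar{M}\,\Theta_D^{3}\,\delta}{\gamma^{2}\,n^{2/3}\,T}
% \bar{M}: average atomic mass, \Theta_D: Debye temperature,
% \delta^3: volume per atom, \gamma: Grueneisen parameter,
% n: atoms per primitive cell, T: temperature,
% A: a constant (weakly dependent on \gamma)
```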
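The abstract fixes only the loss function (MAE) and optimizer (Adam) for the neural network; a minimal Keras sketch consistent with that description, with assumed layer widths and activations, is:

```python
# Minimal sketch of a regression network trained with MAE loss and the
# Adam optimizer, as stated in the abstract. The architecture (layer
# widths, activations) is an illustrative assumption, not the authors'.
import numpy as np
from tensorflow import keras

X = np.random.rand(347, 5).astype("float32")
y = np.random.rand(347).astype("float32")

model = keras.Sequential([
    keras.layers.Input(shape=(5,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),                 # scalar TC prediction
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=50, batch_size=32, validation_split=0.2, verbose=0)
```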
Keywords