# Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions

## Abstract


## 1. Introduction

## 2. Machine Learning and Deep Learning

#### 2.1. The Key Distinctions between Deep Learning and Machine Learning

#### 2.2. Different Machine Learning Categories

#### 2.2.1. Supervised Learning

- Usage:

#### 2.2.2. Unsupervised Learning

- Usage:

#### 2.2.3. Semi-Supervised Learning

- Usage:

#### 2.2.4. Reinforcement Learning

- Usage:

#### 2.3. What Role Does Deep Learning Play in AI? (Evolution of Deep Learning)

#### 2.4. How Deep Learning Works: The DL Workflow

Each neuron first forms a weighted sum of its inputs, where $x = \{x_{1}, \ldots, x_{p}\}$ are the inputs and $w = \{w_{1}, \ldots, w_{p}\}$ are the weights. A constant term $w_{0}$, often known as the bias, is included even though it is not tied to any input. A common activation function, the logistic sigmoid, then provides the actual output $\tilde{y}$:

$$\tilde{y} = \sigma\left(w_{0} + \sum_{i=1}^{p} w_{i} x_{i}\right), \qquad \sigma(z) = \frac{1}{1 + e^{-z}} \tag{1}$$
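As an illustrative sketch (not code from the original article), the single-neuron computation described above can be written in a few lines of Python; the input values and weights below are arbitrary examples:

```python
import math

def sigmoid(z):
    # Logistic sigmoid: squashes any real value into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(x, w, w0):
    # Weighted sum of the inputs plus the bias w0, passed through the sigmoid.
    z = w0 + sum(wi * xi for wi, xi in zip(w, x))
    return sigmoid(z)

# Example: three inputs, three weights, and a bias term w0.
x = [0.5, -1.0, 2.0]
w = [0.4, 0.3, 0.1]
print(round(neuron_output(x, w, w0=0.2), 4))  # 0.5744
```

Training a network amounts to adjusting `w` and `w0` so that this output matches the desired labels.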

#### Convolutional Neural Networks (CNN)

A convolution layer applies $k$ learnable filters, each with a bias $b^{k}$ and weights $W^{k}$, producing $k$ feature maps $h^{k}$, each of size $(m - n + 1)$ when an $m \times m$ input is convolved with an $n \times n$ filter. As in an ordinary neural network layer, the convolution layer computes a dot product between its weights and its inputs as shown in Equation (1), except that each unit is connected to only a small patch of the original picture rather than the whole image. Then, by applying a nonlinearity or activation function to the output of the convolution layer, we obtain the following:

$$h^{k} = \sigma\left(W^{k} * x + b^{k}\right) \tag{2}$$
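To make the feature-map arithmetic concrete, here is an illustrative Python sketch (the 4 × 4 image, 2 × 2 filter, and bias are arbitrary examples) of a "valid" convolution followed by a sigmoid nonlinearity; like most DL frameworks, it actually computes cross-correlation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def conv2d_valid(image, kernel, bias=0.0):
    # 'Valid' convolution: an m x m image and an n x n kernel
    # yield an (m - n + 1) x (m - n + 1) feature map.
    m, n = len(image), len(kernel)
    size = m - n + 1
    out = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            # Dot product between one image patch and the filter weights.
            s = bias
            for a in range(n):
                for b in range(n):
                    s += image[i + a][j + b] * kernel[a][b]
            # Nonlinearity applied to the convolution output, as in Eq. (2).
            out[i][j] = sigmoid(s)
    return out

image = [[1, 0, 2, 1],
         [0, 1, 0, 2],
         [2, 1, 1, 0],
         [1, 0, 2, 1]]
kernel = [[1, 0],
          [0, 1]]  # a 2 x 2 filter on a 4 x 4 image -> 3 x 3 feature map
fmap = conv2d_valid(image, kernel, bias=-1.0)
print(len(fmap), len(fmap[0]))  # 3 3
```

A real convolution layer repeats this with $k$ different filters, giving $k$ such feature maps.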

#### 2.5. Deep Learning Approaches

#### 2.6. DL Properties and Dependencies

#### 2.6.1. Understanding Different Types of Data

- Sequential Data:

- 2D Data (Images):

- Tabular Data:

#### 2.6.2. The Dependencies and Properties of DL

#### 2.6.3. Process for Feature Engineering

#### 2.6.4. Model Training and Execution Time

#### 2.6.5. Black-Box Perception and Interpretability

## 3. Some Deep Learning Applications

#### 3.1. Recognition of Objects in Images

#### 3.2. Biometrics

#### 3.3. Natural Language Processing

#### 3.4. Recommender Systems (RS)

#### 3.5. Mobile

#### 3.6. Clinical Imaging

#### 3.6.1. Medical Applications

#### 3.6.2. Tasks That Machine Learning Can Handle in Healthcare

**Detecting and Diagnosing Critical Diseases:**

**Drug Discovery and Manufacturing:**

**Keeping Health Records:**

**Clinical Trials and Research:**

**Informed Consent to Use:**

**Security and Transparency:**

**Algorithmic Fairness and Bias:**

**Data Privacy:**

## 4. The Future Directions

#### 4.1. The Difficulties of Deep Learning

#### 4.1.1. Absence of Originality in Model Structure

#### 4.1.2. Modernise Training Techniques

#### 4.1.3. Decrease Training Duration

#### 4.1.4. Online Learning

**Data volume:** Deep learning comprises computationally intensive models. Examples include fully connected multi-layer neural networks, in which a multitude of network parameters must be accurately estimated. Massive amounts of data are needed to accomplish this. While there are no hard and fast rules about the minimum number of training samples, a good rule of thumb is to have at least 10 times as many samples as network parameters. This is also one of the reasons why deep learning is so effective in sectors where vast quantities of data can be easily collected (e.g., computer vision, speech, natural language).
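As a rough illustration of this rule of thumb (the layer sizes below are arbitrary examples, not from the article), the parameter count of a fully connected network, and hence the suggested sample count, can be computed directly:

```python
def mlp_param_count(layer_sizes):
    # Each fully connected layer contributes (n_in * n_out) weights
    # plus n_out biases.
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Example: a 784-128-64-10 network (e.g., for 28x28 images, 10 classes).
params = mlp_param_count([784, 128, 64, 10])
rule_of_thumb_samples = 10 * params
print(params, rule_of_thumb_samples)  # 109386 1093860
```

Even this modest network would call for roughly a million training samples under the 10× heuristic, which is why data-rich domains suit deep learning best.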

**Temporality:** The progression and evolution of diseases across time are nondeterministic. Nonetheless, many existing deep learning models, including those recently suggested in the medical area, assume static, vector-based inputs, which cannot naturally account for the time factor. Developing deep learning methods capable of handling temporal healthcare data is a crucial component that will necessitate the creation of unique solutions.

**Interpretability:** Despite the success of deep learning models in a variety of application domains, they are frequently considered black boxes. This may not be a problem in more deterministic domains, such as image annotation (since the end user can objectively verify the tags assigned to the images), but in health care, both the quantitative algorithmic performance and the reason why the algorithm works are vital. In practice, such model interpretability (i.e., indicating which phenotypes are driving the predictions) is essential for persuading medical practitioners to take the steps suggested by a predictive system (e.g., prescribing a specific medication, or flagging a potentially high risk of developing a certain disease). Many of these obstacles present numerous opportunities and future research directions for advancing the field.
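One common model-agnostic way to expose which inputs drive a prediction (a general technique, not one prescribed by this article) is permutation importance: shuffle one feature and measure the drop in the model's score. A minimal sketch, with a hypothetical toy model and data:

```python
import random

def permutation_importance(model, X, y, feature_idx, metric, n_repeats=10, seed=0):
    # Average drop in the score when one feature's values are shuffled,
    # breaking that feature's link to the target.
    rng = random.Random(seed)
    baseline = metric(y, [model(row) for row in X])
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, col)]
        drops.append(baseline - metric(y, [model(row) for row in Xp]))
    return sum(drops) / n_repeats

# Toy model that only uses feature 0; feature 1 is irrelevant.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda y_true, y_pred: sum(a == b for a, b in zip(y_true, y_pred)) / len(y_true)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, 0, accuracy))  # positive drop
print(permutation_importance(model, X, y, 1, accuracy))  # 0.0
```

In a clinical setting, the analogue would be reporting which phenotypes, when perturbed, most degrade the model's predictions.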

## 5. Conclusions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Arel, I.; Rose, D.C.; Karnowski, T.P. Deep machine learning—A new frontier in artificial intelligence research [research frontier]. IEEE Comput. Intell. Mag. **2010**, 5, 13–18. [Google Scholar] [CrossRef]
- Benos, L.; Tagarakis, A.C.; Dolias, G.; Berruto, R.; Kateris, D.; Bochtis, D. Machine Learning in Agriculture: A Comprehensive Updated Review. Sensors **2021**, 21, 3758. [Google Scholar] [CrossRef]
- Huang, J.; Chai, J.; Cho, S. Deep learning in finance and banking: A literature review and classification. Front. Bus. Res. China **2020**, 14, 13. [Google Scholar] [CrossRef]
- Gambella, C.; Ghaddar, B.; Naoum-Sawaya, J. Optimization problems for machine learning: A survey. Eur. J. Oper. Res. **2021**, 290, 807–828. [Google Scholar] [CrossRef]
- Vuong, Q. Machine Learning for Robotic Manipulation. 2021. Available online: https://arxiv.org/abs/2101.00755v1 (accessed on 11 April 2023).
- Yuan, F.-G.; Zargar, S.A.; Chen, Q.; Wang, S. Machine learning for structural health monitoring: Challenges and opportunities. Sens. Smart Struct. Technol. Civ. Mech. Aerosp. Syst. **2020**, 11379, 1137903. [Google Scholar] [CrossRef]
- Kubat, M. An Introduction to Machine Learning. In An Introduction to Machine Learning; Springer International Publishing: Cham, Switzerland, 2017; pp. 321–329. [Google Scholar]
- Hinton, G.E.; Osindero, S.; Teh, Y.-W. A fast learning algorithm for deep belief nets. Neural Comput. **2006**, 18, 1527–1554. [Google Scholar] [CrossRef]
- Deng, L. Deep Learning: Methods and Applications. Found. Trends Signal Process. **2013**, 7, 197–387. [Google Scholar] [CrossRef]
- Karhunen, J.; Raiko, T.; Cho, K. Unsupervised deep learning: A short review. In Advances in Independent Component Analysis and Learning Machines; Academic Press: Cambridge, MA, USA, 2015; pp. 125–142. [Google Scholar] [CrossRef]
- Du, K.L.; Swamy, M.N. Neural Networks and Statistical Learning, 2nd ed.; Springer Science & Business Media: London, UK, 2019; pp. 1–988. [Google Scholar]
- Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques; Morgan Kaufmann: Waltham, MA, USA, 2012. [Google Scholar] [CrossRef]
- Haykin, S. Neural Networks and Learning Machines; Pearson Education USA: Upper Saddle River, NJ, USA, 2008. [Google Scholar]
- Ahmad, J.; Farman, H.; Jan, Z. Deep learning methods and applications. In Deep Learning: Convergence to Big Data Analytics; SpringerBriefs in Computer Science; Springer: Singapore, 2019; pp. 31–42. [Google Scholar] [CrossRef]
- Deep Learning Techniques: An Overview | SpringerLink. Available online: https://link.springer.com/chapter/10.1007/978-981-15-3383-9_54 (accessed on 11 March 2023).
- Srinivas, M.; Sucharitha, G.; Matta, A. Machine Learning Algorithms and Applications; Wiley: Hoboken, NJ, USA, 2021. [Google Scholar] [CrossRef]
- Janiesch, C.; Zschech, P.; Heinrich, K. Machine learning and deep learning. Electron. Mark. **2021**, 31, 685–695. [Google Scholar] [CrossRef]
- Sarker, I.H. Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Comput. Sci. **2021**, 2, 160. [Google Scholar] [CrossRef] [PubMed]
- Hassanien, A.E.; Chang, K.C.; Mincong, T. (Eds.) Advanced Machine Learning Technologies and Applications; Springer Nature: Singapore, 2021; Volume 1141. [Google Scholar] [CrossRef]
- Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. **2015**, 61, 85–117. [Google Scholar] [CrossRef]
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? Adv. Neural Inf. Process. Syst. **2014**, 4, 3320–3328. [Google Scholar] [CrossRef]
- Cireşan, D.C.; Meier, U.; Masci, J.; Gambardella, L.M.; Schmidhuber, J. High-Performance Neural Networks for Visual Object Classification. arXiv **2011**, arXiv:1102.0183. [Google Scholar] [CrossRef]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Adv. Neural Inf. Process. Syst. **2019**. [Google Scholar]
- Zhang, Z.; Cui, P.; Zhu, W. Deep Learning on Graphs: A Survey. IEEE Trans. Knowl. Data Eng. **2020**, 34, 249–270. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature **2015**, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access **2019**, 7, 53040–53065. [Google Scholar] [CrossRef]
- Bengio, Y. Learning Deep Architectures for AI. Found. Trends Mach. Learn. **2009**, 2, 1–127. [Google Scholar] [CrossRef]
- Mathew, A.; Amudha, P.; Sivakumari, S. Deep learning techniques: An overview. Adv. Intell. Syst. Comput. **2021**, 1141, 599–608. [Google Scholar]
- Deng, L. A tutorial survey of architectures, algorithms, and applications for deep learning. APSIPA Trans. Signal Inf. Process. **2014**, 3, e2. [Google Scholar] [CrossRef]
- Osisanwo, F.Y.; Akinsola, J.E.T.; Awodele, O.; Hinmikaiye, J.O.; Olakanmi, O.; Akinjobi, J. Supervised Machine Learning Algorithms: Classification and Comparison. Int. J. Comput. Trends Technol. **2017**, 48, 128–138. [Google Scholar] [CrossRef]
- Nasteski, V. An overview of the supervised machine learning methods. Horizons. B **2017**, 4, 51–62. [Google Scholar] [CrossRef]
- Panigrahi, A.; Chen, Y.; Kuo, C.C.J. Analysis on Gradient Propagation in Batch Normalized Residual Networks. arXiv **2018**, arXiv:1812.00342. [Google Scholar] [CrossRef]
- Kumari, K.; Yadav, S. Linear regression analysis study. J. Pr. Cardiovasc. Sci. **2018**, 4, 33. [Google Scholar] [CrossRef]
- Du, K.-L.; Swamy, M.N.S. Support Vector Machines. In Neural Networks and Statistical Learning; Springer: Berlin/Heidelberg, Germany, 2019; pp. 593–644. [Google Scholar] [CrossRef]
- Swapna, M.; Sharma, Y.K.; Prasad, B. CNN Architectures: Alex Net, Le Net, VGG, Google Net, Res Net. Int. J. Recent Technol. Eng. **2020**, 8, 953–959. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM **2014**, 63, 139–144. [Google Scholar] [CrossRef]
- Xu, W.; Sun, H.; Deng, C.; Tan, Y. Variational Autoencoders for Semi-supervised Text Classification. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 3358–3364. [Google Scholar] [CrossRef]
- Kameoka, H.; Li, L.; Inoue, S.; Makino, S. Supervised determined source separation with multichannel variational autoencoder. Neural Comput. **2019**, 31, 1891–1914. [Google Scholar] [CrossRef] [PubMed]
- Li, Y. Deep Reinforcement Learning: An Overview. arXiv **2017**, arXiv:1701.07274. [Google Scholar] [CrossRef]
- Paliwal, M. Deep Reinforcement Learning. Smart Innov. Syst. Technol. **2022**, 273, 136–142. [Google Scholar] [CrossRef]
- Arulkumaran, K.; Deisenroth, M.P.; Brundage, M.; Bharath, A.A. A Brief Survey of Deep Reinforcement Learning. IEEE Signal Process. Mag. **2017**, 34, 26–38. [Google Scholar] [CrossRef]
- Mnih, V.; Badia, A.P.; Mirza, M.; Graves, A.; Lillicrap, T.; Harley, T.; Silver, D.; Kavukcuoglu, K. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York, NY, USA, 19–24 June 2016; Volume 4, pp. 2850–2869. [Google Scholar] [CrossRef]
- Van Hasselt, H.; Guez, A.; Silver, D. Deep Reinforcement Learning with Double Q-learning. In Proceedings of the 30th AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; pp. 2094–2100. [Google Scholar] [CrossRef]
- Silver, D.; Huang, A.; Maddison, C.J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; et al. Mastering the game of Go with deep neural networks and tree search. Nature **2016**, 529, 484–489. [Google Scholar] [CrossRef]
- Naeem, M.; Paragliola, G.; Coronato, A. A reinforcement learning and deep learning based intelligent system for the support of impaired patients in home treatment. Expert Syst. Appl. **2021**, 168, 114285. [Google Scholar] [CrossRef]
- Reynolds, D. Gaussian Mixture Models. Encycl. Biom. **2009**, 741, 659–663. [Google Scholar] [CrossRef]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE **1998**, 86, 2278–2324. [Google Scholar] [CrossRef]
- Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging **2016**, 35, 1285–1298. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM **2017**, 60, 84–90. [Google Scholar] [CrossRef]
- Hinton, G.E. Deep belief networks. Scholarpedia **2009**, 4, 5947. [Google Scholar] [CrossRef]
- Goyal, P.; Pandey, S.; Jain, K. Introduction to Natural Language Processing and Deep Learning. In Deep Learning for Natural Language Processing; Springer: New York, NY, USA, 2018; pp. 1–74. [Google Scholar] [CrossRef]
- Taye, M.M. Theoretical Understanding of Convolutional Neural Network: Concepts, Architectures, Applications, Future Directions. Computation **2023**, 11, 52. [Google Scholar] [CrossRef]
- Li, Z.; Liu, F.; Yang, W.; Peng, S.; Zhou, J. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Trans. Neural Netw. Learn. Syst. **2021**, 33, 6999–7019. [Google Scholar] [CrossRef]
- Tomè, D.; Monti, F.; Baroffio, L.; Bondi, L.; Tagliasacchi, M.; Tubaro, S. Deep Convolutional Neural Networks for pedestrian detection. Signal Process. Image Commun. **2016**, 47, 482–489. [Google Scholar] [CrossRef]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. **2011**, 12, 2825–2830. [Google Scholar]
- Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. **2021**, 2, 420. [Google Scholar] [CrossRef] [PubMed]
- Najafabadi, M.M.; Villanustre, F.; Khoshgoftaar, T.M.; Seliya, N.; Wald, R.; Muharemagic, E. Deep learning applications and challenges in big data analytics. J. Big Data **2015**, 2, 1. [Google Scholar] [CrossRef]
- Coelho, I.M.; Coelho, V.N.; Luz, E.J.D.S.; Ochi, L.S.; Guimarães, F.G.; Rios, E. A GPU deep learning metaheuristic based model for time series forecasting. Appl. Energy **2017**, 201, 412–418. [Google Scholar] [CrossRef]
- Serin, G.; Sener, B.; Ozbayoglu, A.M.; Unver, H.O. Review of tool condition monitoring in machining and opportunities for deep learning. Int. J. Adv. Manuf. Technol. **2020**, 109, 953–974. [Google Scholar] [CrossRef]
- Singh, P.; Manure, A. Learn TensorFlow 2.0; APress: Berkeley, CA, USA, 2020. [Google Scholar] [CrossRef]
- Gad, A.F. TensorFlow: A Guide to Build Artificial Neural Networks Using Python Build Artificial Neural Networks Using TensorFlow Library with Detailed Explanation of Each Step and Line of Code. Available online: https://www.researchgate.net/publication/321826020_TensorFlow_A_Guide_To_Build_Artificial_Neural_Networks_Using_Python (accessed on 12 March 2023).
- Shafiq, M.; Gu, Z. Deep Residual Learning for Image Recognition: A Survey. Appl. Sci. **2022**, 12, 8972. [Google Scholar] [CrossRef]
- Wasnik, P.; Raja, K.B.; Ramachandra, R.; Busch, C. Assessing face image quality for smartphone based face recognition system. In Proceedings of the 5th International Workshop on Biometrics and Forensics, IWBF 2017, Coventry, UK, 4–5 April 2017. [Google Scholar] [CrossRef]
- Mamoshina, P.; Vieira, A.; Putin, E.; Zhavoronkov, A. Applications of Deep Learning in Biomedicine. Mol. Pharm. **2016**, 13, 1445–1454. [Google Scholar] [CrossRef]
- Li, L.; Zhao, Y.; Jiang, D.; Zhang, Y.; Wang, F.; Gonzalez, I.; Valentin, E.; Sahli, H. Hybrid Deep Neural Network—Hidden Markov Model (DNN-HMM) based speech emotion recognition. In Proceedings of the 2013 Humaine Association Conference on Affective Computing and Intelligent Interaction, ACII 2013, Geneva, Switzerland, 2–5 September 2013; pp. 312–317. [Google Scholar] [CrossRef]
- Vetrekar, N.; Raja, K.B.; Ramachandra, R.; Gad, R.; Busch, C. Multi-spectral imaging for robust ocular biometrics. In Proceedings of the 2018 International Conference on Biometrics, ICB 2018, Gold Coast, QLD, Australia, 20–23 February 2018; pp. 195–201. [Google Scholar] [CrossRef]
- Schwenk, H. Continuous Space Translation Models for Phrase-Based Statistical Machine Translation. 2012, pp. 1071–1080. Available online: http://wwww.lium.univ-lemans.fr/~cslm (accessed on 12 March 2023).
- Tang, D.; Wei, F.; Qin, B.; Liu, T.; Zhou, M. Coooolll: A Deep Learning System for Twitter Sentiment Classification. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; Association for Computational Linguistics: Beijing, China, 2014; pp. 208–212. [Google Scholar]
- Lin, M.S.; Tang, C.G.Y.; Kom, X.J.; Eyu, J.Y.; Xu, C. Building a Natural Language Processing Model to Extract Order Information from Customer Orders for Interpretative Order Management. In Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management, Kuala Lumpur, Malaysia, 7–10 December 2022; pp. 81–86. [Google Scholar] [CrossRef]
- Wang, F.-Y.; Miao, Q.; Li, X.; Wang, X.; Lin, Y. What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence. IEEE/CAA J. Autom. Sin. **2023**, 10, 575–579. [Google Scholar] [CrossRef]
- Du, H.; Teng, S.; Chen, H.; Ma, J.; Wang, X.; Gou, C.; Li, B.; Ma, S.; Miao, Q.; Na, X.; et al. Chat with ChatGPT on Intelligent Vehicles: An IEEE TIV Perspective. In IEEE Transactions on Intelligent Vehicles; IEEE: New York, NY, USA, 2023; pp. 1–7. [Google Scholar] [CrossRef]
- Sedhain, S.; Menon, A.K.; Sanner, S.; Xie, L. Autorec: Autoencoders meet collaborative filtering. In Proceedings of the 24th International Conference on World Wide Web, Florence, Italy, 18–22 May 2015; pp. 111–112. [Google Scholar] [CrossRef]
- Salakhutdinov, R.; Mnih, A.; Hinton, G. Restricted Boltzmann machines for collaborative filtering. ACM Int. Conf. Proc. Ser. **2007**, 227, 791–798. [Google Scholar] [CrossRef]
- He, X.; Liao, L.; Zhang, H.; Nie, L.; Hu, X.; Chua, T.S. Neural Collaborative Filtering. In Proceedings of the 26th International World Wide Web Conference, WWW 2017, Perth, Australia, 3–7 April 2017; pp. 173–182. [Google Scholar] [CrossRef]
- Han, J.; Zhang, Z.; Mascolo, C.; Andre, E.; Tao, J.; Zhao, Z.; Schuller, B.W. Deep Learning for Mobile Mental Health: Challenges and recent advances. IEEE Signal Process. Mag. **2021**, 38, 96–105. [Google Scholar] [CrossRef]
- Zou, J.; Zhang, Q. UbiEi-Edge: Human Big Data Decoding Using Deep Learning on Mobile Edge. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Guadalajara, Mexico, 1–5 November 2021; pp. 1861–1864. [Google Scholar] [CrossRef]
- Lane, N.D.; Georgiev, P. Can Deep Learning Revolutionize Mobile Sensing? In Proceedings of the HotMobile 2015—16th International Workshop on Mobile Computing Systems and Applications, Santa Fe, NM, USA, 12–13 February 2015; pp. 117–122. [Google Scholar] [CrossRef]
- Estonilo, C.G.; Festijo, E.D. Evaluation of the Deep Learning-Based m-Health Application Using Mobile App Development and Assessment Guide. In Proceedings of the 2022 IEEE 12th Annual Computing and Communication Workshop and Conference, CCWC 2022, Las Vegas, NV, USA, 26–29 January 2022; pp. 435–440. [Google Scholar] [CrossRef]
- Liu, L.; Xu, J.; Huan, Y.; Zou, Z.; Yeh, S.-C.; Zheng, L.-R. A Smart Dental Health-IoT Platform Based on Intelligent Hardware, Deep Learning, and Mobile Terminal. IEEE J. Biomed. Health Inform. **2020**, 24, 898–906. [Google Scholar] [CrossRef]
- Liu, S.; Liu, S.; Cai, W.; Pujol, S.; Kikinis, R.; Feng, D. Early diagnosis of Alzheimer's disease with deep learning. In Proceedings of the IEEE 11th International Symposium on Biomedical Imaging, ISBI 2014, Beijing, China, 29 April–2 May 2014; pp. 1015–1018. [Google Scholar] [CrossRef]
- Brosch, T.; Tam, R. Manifold Learning of Brain MRIs by Deep Learning. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8150, pp. 633–640. [Google Scholar] [CrossRef]
- Prasoon, A.; Petersen, K.; Igel, C.; Lauze, F.; Dam, E.; Nielsen, M. Deep Feature Learning for Knee Cartilage Segmentation Using a Triplanar Convolutional Neural Network. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8150, pp. 246–253. [Google Scholar] [CrossRef]
- Yoo, Y.; Brosch, T.; Traboulsee, A.; Li, D.K.; Tam, R. Deep Learning of Image Features from Unlabeled Data for Multiple Sclerosis Lesion Segmentation. In Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2014; Volume 8679, pp. 117–124. [Google Scholar] [CrossRef]
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA **2016**, 316, 2402–2410. [Google Scholar] [CrossRef] [PubMed]
- Alipanahi, B.; Delong, A.; Weirauch, M.T.; Frey, B.J. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat. Biotechnol. **2015**, 33, 831–838. [Google Scholar] [CrossRef] [PubMed]
- Malhotra, C.; Kotwal, V.; Dalal, S. Ethical framework for machine learning. In Proceedings of the 10th ITU Academic Conference Kaleidoscope: Machine Learning for a 5G Future, ITU K, Santa Fe, Argentina, 26–28 November 2018. [Google Scholar] [CrossRef]
- Mageswaran, G.; Nagappan, S.D.; Hamzah, N.; Brohi, S.N. Machine Learning: An Ethical, Social & Political Perspective. In Proceedings of the 4th International Conference on Advances in Computing, Communication and Automation, ICACCA, Subang Jaya, Malaysia, 26–28 October 2018. [Google Scholar] [CrossRef]

**Figure 1.** Different machine learning categories and algorithms [14].

**Figure 4.** Representation of a neural network [20].

| Aspect | Machine Learning | Deep Learning |
|---|---|---|
| Human Intervention | To achieve outcomes, machine learning requires more continuous human engagement. | Deep learning is more difficult to implement initially but requires little intervention afterward. |
| Hardware | Machine learning programmes are typically less complicated than deep learning algorithms and can frequently run on standard computers. | Deep learning systems require significantly more robust hardware and resources. The growing demand for compute has increased the utilisation of graphics processing units (GPUs), whose high-bandwidth memory and thread parallelism (the ability of many operations to run efficiently at the same time) conceal memory-transfer latency. |
| Time | Machine learning systems can be set up and used quickly, but their results may be limited. | Deep learning systems take more time to set up but can produce results immediately (although quality is likely to improve as more data become available). |
| Approach | Machine learning typically requires structured data and uses conventional techniques such as linear regression. | Deep learning utilises neural networks and is designed to handle massive volumes of unstructured data. |
| Applications | Email providers, banks, and doctors' offices already utilise machine learning. | Deep learning enables more complex and autonomous programmes, such as self-driving automobiles and surgical robots. |
| Usage | Machine learning has numerous applications, including regression analysis, classification, and clustering. | Deep learning is typically employed for complex tasks such as image and speech recognition, natural language processing, and autonomous systems. |
| Data | Machine learning algorithms generally use less data than deep learning algorithms, although data quality is more crucial. | Deep learning algorithms require enormous amounts of data to train neural networks but may learn and improve autonomously as additional data are processed. |

| Advantages of Deep Learning | Disadvantages of Deep Learning |
|---|---|
| The potential to generate novel features from the limited existing training data. | There is less room for improvement in the training process, because the entire process depends on a constant flow of data. |
| Can produce dependable, actionable results by using unsupervised learning approaches. | With more datasets available, computational training becomes substantially more expensive. |
| It cuts down the time needed for feature engineering, one of the main activities in applying machine learning. | Transparency in fault revision is lacking: there are no intermediary stages to explain a particular fault, so the whole algorithm must be updated to address the problem. |
| Continuous training makes its architecture adaptive to change and capable of solving a variety of problems. | Training the datasets requires expensive resources, fast processors, and powerful GPUs. |

| Architecture | Advantages | Limitations |
|---|---|---|
| CNN | Highly effective for visual recognition. | The quantity and quality of the training data have a significant impact on CNN performance. |
| | After learning a segment within a certain area of an image, a CNN can recognise that segment anywhere else in the image. | Extremely sensitive to noise. |
| RNN | An RNN uses the same parameters at every step, unlike a conventional neural network; this significantly lowers the number of parameters to learn. | RNNs struggle to track long-term dependencies, particularly when too many words separate the noun and the verb in long phrases and paragraphs. |
| | For unlabelled photos, RNNs may be combined with CNNs to produce precise descriptions. | RNNs cannot be stacked into highly complex models, because the gradient decays over many layers as a result of the activation function employed in RNN models. |
| Generative Adversarial Networks (GANs) | GANs enable effective semi-supervised classifier training. | The effectiveness of the generator and discriminator is essential to a GAN's success; if either fails, the entire system collapses. |
| | The produced data are practically indistinguishable from the original data due to the model's accuracy. | The discriminator and generator are distinct networks trained with different loss functions, so training the entire system can take a long time. |
| Autoencoders | The resulting model depends mostly on data rather than predetermined filters. | Training sometimes demands a lot of time. |
| | Their low complexity makes them simpler to train. | If the training data are not representative of the testing data, the model's output may be hazy and confusing. |
| ResNets | In some situations, ResNets are more accurate and need fewer weights than LSTMs and RNNs. | If a ResNet has too many layers, errors may be hard to detect and to propagate back quickly and accurately; if the layers are too narrow, learning may be less effective. |
| | A network may be built by adding tens of thousands of residual layers and then trained. | |

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Taye, M.M. Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions. *Computers* **2023**, *12*, 91. https://doi.org/10.3390/computers12050091
