Artificial Neural Networks to Investigate Mathematical Models: A Concise Review

Authors

  • Pathipati Lakshmi Durga, Department of Mathematics, School of Advanced Sciences, VIT-AP University, Inavolu, Beside AP Secretariat, Amaravati, AP, 522237, India
  • Sukanta Nayak, Department of Mathematics, School of Advanced Sciences, VIT-AP University, Inavolu, Beside AP Secretariat, Amaravati, AP, 522237, India

DOI:

https://doi.org/10.31181/sa33202554

Keywords:

Artificial neural network architecture, Activation functions, Levenberg-Marquardt algorithm, Modified Levenberg-Marquardt algorithms

Abstract

Mathematical modelling plays an essential role in understanding numerous physical phenomena, which are often described by Differential Equations (DEs). However, handling these models, particularly when inundated with extensive data and inputs, poses challenges for both analytical and numerical approaches, so alternative techniques are sought. In this context, Artificial Neural Networks (ANNs) become a valuable tool. Researchers have dedicated efforts to enhancing the capability of ANNs to handle various types of DEs, such as Ordinary Differential Equations (ODEs), Partial Differential Equations (PDEs), and Nonlinear Partial Differential Equations (NPDEs). This pursuit has led to advancements in learning algorithms tailored for ANNs. Currently, the Levenberg-Marquardt (LM) learning technique stands out as a popular method for training ANN models. Traditional learning techniques, marked by complexities such as the size of the Jacobian matrix, error stability, and computational intensity, have spurred the development of advanced learning techniques aimed at simplifying the training process. These advancements necessitate a thorough analysis of various ANN techniques, providing insights into selecting the most suitable approach for a given system. Consequently, the objective of this paper is to systematically explore ANNs for solving mathematical models, shedding light on their application in diverse domains.
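The LM technique named in the abstract can be illustrated on a toy least-squares problem. The sketch below is not from the paper: it fits the decay rate k in y = exp(-k·x) by the damped normal-equation step (JᵀJ + μI)δ = −Jᵀr, with the adaptive damping μ that lets LM interpolate between gradient descent (large μ) and Gauss-Newton (small μ). The example problem, variable names, and iteration limits are all illustrative assumptions.

```python
import numpy as np

# Toy problem (assumed for illustration): recover k in y = exp(-k*x)
# from exact samples, so the residual can be driven to zero.
x = np.linspace(0.0, 2.0, 20)
k_true = 1.5
y = np.exp(-k_true * x)

def residuals(k):
    return y - np.exp(-k * x)                    # r_i = y_i - f(x_i; k)

def jacobian(k):
    return (x * np.exp(-k * x)).reshape(-1, 1)   # dr_i/dk

k, mu = 0.5, 1e-2                                # initial guess, damping factor
for _ in range(50):
    r, J = residuals(k), jacobian(k)
    # Damped normal equations: (J^T J + mu*I) delta = -J^T r
    delta = np.linalg.solve(J.T @ J + mu * np.eye(1), -J.T @ r)
    k_trial = k + delta.item()
    if np.sum(residuals(k_trial) ** 2) < np.sum(r ** 2):
        k, mu = k_trial, mu * 0.5                # accepted: trust Gauss-Newton more
    else:
        mu *= 2.0                                # rejected: lean toward gradient descent

print(k)   # approaches k_true = 1.5
```

In an ANN setting the scalar k becomes the weight vector, and J is the Jacobian of the network's residuals with respect to all weights; the size of that Jacobian is precisely the complexity the abstract cites as motivation for modified LM schemes.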

References

Iyengar, S. R. K., & Jain, R. K. (2009). Numerical methods. New Age International. https://books.google.com/books?hl=en&lr=&id=5p5jFxb16UEC&oi

Nayak, S., & Chakraverty, S. (2018). Interval finite element method with MATLAB. Academic press. https://www.researchgate.net/publication/325195030

Chapra, S. C., Canale, R. P., & others. (2011). Numerical methods for engineers (Vol. 1221). McGraw-Hill, New York. https://www.researchgate.net/profile/Steven-Chapra/publication/44398580

Gerald, C. F. (2004). Applied numerical analysis. Pearson Education India. https://www.cse.iitm.ac.in/~vplab/downloads/opt/Applied%20Numerical%20Analysis.pdf

Atkinson, K. E. (2008). An introduction to numerical analysis. John Wiley & Sons. https://math.science.cmu.ac.th/docs/qNA2556

Reddy, J. N. (2006). An introduction to the finite element method (Vol. 27). McGraw-Hill Higher Education, New York. http://dx.doi.org/10.1115/1.3265687

Radi, B., & El Hami, A. (2018). Advanced numerical methods with MATLAB 2: Resolution of nonlinear, differential and partial differential equations. John Wiley & Sons. http://dx.doi.org/10.1002/9781119492238

Hildebrand, F. B. (1987). Introduction to numerical analysis. Courier Corporation. https://books.google.com/books?hl=en&lr=&id=f7We11dz0_kC&oi=fnd&pg

Burden, R. L., & Faires, J. D. (2011). Student solutions manual and study guide. Brooks/Cole, Cengage Learning. https://d1wqtxts1xzle7.cloudfront.net/51824818/9th_edition

Trefethen, L. N. (1996). Finite difference and spectral methods for ordinary and partial differential equations. Cornell University-Department of Computer Science and Center for Applied. http://www.math.hmc.edu/~dyong/math165/trefethenbook.pdf

Priyadarshini, S., & Nayak, S. (2022). A numerical approach to study heat and mass transfer in porous medium influenced by uncertain parameters. International communications in heat and mass transfer, 139, 106411. https://doi.org/10.1016/j.icheatmasstransfer.2022.106411

Priyadarshini, S., & Nayak, S. (2023). Effects of imprecisely defined parameters on heat and mass transfer in a vertical annular porous cylinder. International communications in heat and mass transfer, 149, 107097. https://doi.org/10.1016/j.icheatmasstransfer.2023.107097

Priyadarshini, S., & Nayak, S. (2023). A new hybrid approach to study heat and mass transfer in porous medium influenced by imprecisely defined parameters. Case studies in thermal engineering, 51, 103619. https://doi.org/10.1016/j.csite.2023.103619

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115–133. https://doi.org/10.1007/BF02478259

Moghaddam, A. H., Moghaddam, M. H., & Esfandyari, M. (2016). Stock market index prediction using artificial neural network. Journal of economics, finance and administrative science, 21(41), 89–93. https://doi.org/10.1016/j.jefas.2016.07.002

Shakeel, P. M., Tobely, T. E. El, Al-Feel, H., Manogaran, G., & Baskar, S. (2019). Neural network based brain tumor detection using wireless infrared imaging sensor. IEEE access, 7, 5577–5588. https://doi.org/10.1109/ACCESS.2018.2883957

Heydarpour, F., Abbasi, E., Ebadi, M. J., & Karbassi, S.-M. (2020). Solving an optimal control problem of cancer treatment by artificial neural networks. IJIMAI, 6(4), 18–25. http://dx.doi.org/10.9781/ijimai.2020.11.011

de Souza, G. B., da Silva Santos, D. F., Pires, R. G., Marana, A. N., & Papa, J. P. (2019). Deep features extraction for robust fingerprint spoofing attack detection. Journal of artificial intelligence and soft computing research, 9(1), 41–49. http://dx.doi.org/10.2478/jaiscr-2018-0023

Lam, M. W. Y. (2018). One-match-ahead forecasting in two-team sports with stacked Bayesian regressions. Journal of artificial intelligence and soft computing research, 8. http://dx.doi.org/10.1515/jaiscr-2018-0011

Lagaris, I. E., Likas, A., & Fotiadis, D. I. (1998). Artificial neural networks for solving ordinary and partial differential equations. IEEE transactions on neural networks, 9(5), 987–1000. https://doi.org/10.1109/72.712178

Lagaris, I. E., Likas, A. C., & Papageorgiou, D. G. (2000). Neural-network methods for boundary value problems with irregular boundaries. IEEE transactions on neural networks, 11(5), 1041–1049. https://doi.org/10.1109/72.870037

Parisi, D. R., Mariani, M. C., & Laborde, M. A. (2003). Solving differential equations with unsupervised neural networks. Chemical engineering and processing: Process intensification, 42(8–9), 715–721. https://doi.org/10.1016/S0255-2701(02)00207-6

Malek, A., & Beidokhti, R. S. (2006). Numerical solution for high order differential equations using a hybrid neural network—optimization method. Applied mathematics and computation, 183(1), 260–271. https://doi.org/10.1016/j.amc.2006.05.068

Tsoulos, I. G., Gavrilis, D., & Glavas, E. (2009). Solving differential equations with constructed neural networks. Neurocomputing, 72(10–12), 2385–2391. https://doi.org/10.1016/j.neucom.2008.12.004

Raja, M. A. Z., Khan, J. A., & Qureshi, I. M. (2010). Evolutionary computational intelligence in solving the fractional differential equations [presentation]. Asian conference on intelligent information and database systems (pp. 231–240). https://doi.org/10.1007/978-3-642-12145-6_24

Junaid, A., Raja, M. A. Z., & Qureshi, I. M. (2009). Evolutionary computing approach for the solution of initial value problems in ordinary differential equations. World academy of science, engineering and technology, 55, 578–581.

Mall, S., & Chakraverty, S. (2013). Comparison of artificial neural network architecture in solving ordinary differential equations. Advances in artificial neural systems, 2013(1), 181895. https://doi.org/10.1155/2013/181895

Chakraverty, S., & Mall, S. (2014). Regression-based weight generation algorithm in neural network for solution of initial and boundary value problems. Neural computing and applications, 25(3), 585–594. https://doi.org/10.1007/s00521-013-1526-4

Shi, E., & Xu, C. (2021). A comparative investigation of neural networks in solving differential equations. Journal of algorithms & computational technology, 15, 1748302621998605. https://doi.org/10.1177/1748302621998605

Mall, S., & Chakraverty, S. (2014). Chebyshev neural network based model for solving Lane-Emden type equations. Applied mathematics and computation, 247, 100–114. https://doi.org/10.1016/j.amc.2014.08.085

Chakraverty, S., & Mall, S. (2020). Single layer Chebyshev neural network model with regression-based weights for solving nonlinear ordinary differential equations. Evolutionary intelligence, 13(4), 687–694. https://doi.org/10.1007/s12065-020-00383-y

Raja, M. A. Z., Khan, J. A., & Qureshi, I. M. (2010). A new stochastic approach for solution of Riccati differential equation of fractional order. Annals of mathematics and artificial intelligence, 60(3), 229–250. https://doi.org/10.1007/s10472-010-9222-x

Qu, H., & Liu, X. (2015). A numerical method for solving fractional differential equations by using neural network. Advances in mathematical physics, 2015(1), 439526. https://doi.org/10.1155/2015/439526

Raja, M. A. Z., Manzar, M. A., & Samar, R. (2015). An efficient computational intelligence approach for solving fractional order Riccati equations using ANN and SQP. Applied mathematical modelling, 39(10–11), 3075–3093. https://doi.org/10.1016/j.apm.2014.11.024

Jafarian, A., Mokhtarpour, M., & Baleanu, D. (2017). Artificial neural network approach for a class of fractional ordinary differential equation. Neural computing and applications, 28(4), 765–773. https://doi.org/10.1007/s00521-015-2104-8

Pakdaman, M., Ahmadian, A., Effati, S., Salahshour, S., & Baleanu, D. (2017). Solving differential equations of fractional order using an optimization technique based on training artificial neural network. Applied mathematics and computation, 293, 81–95. https://doi.org/10.1016/j.amc.2016.07.021

Pratama, D. A., Bakar, M. A., Ismail, N. B., & M, M. (2022). ANN-based methods for solving partial differential equations: a survey. Arab journal of basic and applied sciences, 29(1), 233–248. https://doi.org/10.1080/25765299.2022.2104224

Sirignano, J., & Spiliopoulos, K. (2018). DGM: A deep learning algorithm for solving partial differential equations. Journal of computational physics, 375, 1339–1364. https://doi.org/10.1016/j.jcp.2018.08.029

Knoke, T., & Wick, T. (2021). Solving differential equations via artificial neural networks: Findings and failures in a model problem. Examples and counterexamples, 1, 100035. https://doi.org/10.1016/j.exco.2021.100035

Valasoulis, K., Fotiadis, D. L., Lagaris, I. E., & Likas, A. (2002). Solving differential equations with neural networks: implementation on a DSP platform [presentation]. 2002 14th international conference on digital signal processing proceedings. DSP 2002 (Cat. No. 02TH8628) (Vol. 2, pp. 1265–1268). https://doi.org/10.1109/ICDSP.2002.1028323

Zang, C., & Wang, F. (2020). Neural dynamics on complex networks [presentation]. Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 892–902). https://doi.org/10.1145/3394486.3403132

Blechschmidt, J., & Ernst, O. G. (2021). Three ways to solve partial differential equations with neural networks—A review. GAMM-Mitteilungen, 44(2), e202100006. https://doi.org/10.1002/gamm.202100006

Althubiti, S., Kumar, M., Goswami, P., & Kumar, K. (2023). Artificial neural network for solving the nonlinear singular fractional differential equations. Applied mathematics in science and engineering, 31(1), 2187389. https://doi.org/10.1080/27690911.2023.2187389

Venkatachalapathy, P., & Mallikarjunaiah, S. M. (2023). A deep learning neural network framework for solving singular nonlinear ordinary differential equations. International journal of applied and computational mathematics, 9(5), 1-68. https://doi.org/10.1007/s40819-023-01563-x

Fojdl, J., & Brause, R. W. (2008). The performance of approximating ordinary differential equations by neural nets [presentation]. 2008 20th IEEE international conference on tools with artificial intelligence (Vol. 2, pp. 457–464). https://doi.org/10.1109/ICTAI.2008.44

Michoski, C., Milosavljević, M., Oliver, T., & Hatch, D. R. (2020). Solving differential equations using deep neural networks. Neurocomputing, 399, 193–212. https://doi.org/10.1016/j.neucom.2020.02.015

Kumar, M., & Yadav, N. (2011). Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: a survey. Computers & mathematics with applications, 62(10), 3796–3811. https://doi.org/10.1016/j.camwa.2011.09.028

Chen, R. T. Q., Rubanova, Y., Bettencourt, J., & Duvenaud, D. K. (2018). Neural ordinary differential equations. Advances in neural information processing systems, 31, 1–13. https://proceedings.neurips.cc/paper_files/paper/2018/file/69386f6

Nayak, S. (2020). Fundamentals of optimization techniques with algorithms. Academic Press. https://shop.elsevier.com/books/fundamentals-of-optimization-techniques-with-algorithms/nayak/978-0-12-821126-7

Kalyanmoy, D. (2004). Optimization for engineering design: Algorithms and examples. Prentice-Hall Of India Pvt. Limited. https://www.semanticscholar.org/paper/Optimization-for-Engineering-Design%3A-Algorithms-and-Deb-Deb/48556b84dd1bf424a45d4f6acd39e29a05cc9158

Rao, S. S. (2019). Engineering optimization: theory and practice. John Wiley & Sons. https://doi.org/10.1002/9781119454816

Panigrahi, P. K., & Nayak, S. (2024). Numerical approach to solve imprecisely defined systems using Inner Outer Direct Search optimization technique. Mathematics and computers in simulation, 215, 578–606. https://doi.org/10.1016/j.matcom.2023.08.025

Panigrahi, P. K., & Nayak, S. (2023). Numerical investigation of non-probabilistic systems using Inner Outer Direct Search optimization technique. AIMS mathematics, 8(9), 21329–21358. https://doi.org/10.3934/math.20231087

Nayak, S., & Pooja, J. (2022). Numerical optimisation technique to solve imprecisely defined nonlinear system of equations with bounded parameters. International journal of mathematics in operational research, 23(3), 394–411. https://doi.org/10.1504/IJMOR.2022.127381

Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323(6088), 533–536. https://doi.org/10.1038/323533a0

Bilski, J., Kowalczyk, B., Marchlewska, A., & Zurada, J. M. (2020). Local Levenberg-Marquardt algorithm for learning feedforward neural networks. Journal of artificial intelligence and soft computing research, 10(4), 299–316. https://doi.org/10.2478/jaiscr-2020-0020

de Jesús Rubio, J. (2020). Stability analysis of the modified Levenberg-Marquardt algorithm for the artificial neural network training. IEEE transactions on neural networks and learning systems, 32(8), 3510–3524. https://doi.org/10.1109/TNNLS.2020.3015200

Wilamowski, B. M., & Yu, H. (2010). Improved computation for Levenberg-Marquardt training. IEEE transactions on neural networks, 21(6), 930–937. https://doi.org/10.1109/TNN.2010.2045657

Li, S., Deng, Y. Q., Zhu, Z. L., Hua, H. L., & Tao, Z. Z. (2021). A comprehensive review on radiomics and deep learning for nasopharyngeal carcinoma imaging. Diagnostics, 11(9), 1523. https://doi.org/10.3390/diagnostics11091523

Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6), 386–408. https://psycnet.apa.org/doi/10.1037/h0042519

Rosenblatt, F., & others. (1962). Principles of neurodynamics: Perceptrons and the theory of brain mechanisms (Vol. 55). Spartan Books, Washington, DC. https://www.scirp.org/reference/referencespapers?referenceid=1501842

LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324. https://doi.org/10.1109/5.726791

Holyoak, K. J. (1987). A connectionist view of cognition [Review of Parallel distributed processing: Explorations in the microstructure of cognition, Vol. 1: Foundations, by D. E. Rumelhart, J. L. McClelland, & the PDP Research Group; MIT Press, Cambridge, MA, 1986]. Science, 236(4804), 992–996. https://www.science.org/doi/abs/10.1126/science.236.4804.992

Shanmuganathan, S. (2016). Artificial neural network modelling: An introduction. In Artificial neural network modelling (pp. 1–14). Springer. https://doi.org/10.1007/978-3-319-28495-8_1

Chakraverty, S., & Mall, S. (2017). Artificial neural networks for engineers and scientists: solving ordinary differential equations. CRC Press. https://doi.org/10.1201/9781315155265

Han, J., Pei, J., & Tong, H. (2022). Data mining: Concepts and techniques. Morgan Kaufmann. https://myweb.sabanciuniv.edu/rdehkharghani/files/2016/02

Zurada, J. (1992). Introduction to artificial neural systems. West Publishing Co. https://dl.acm.org/doi/abs/10.5555/131393

Bonaccorso, G. (2017). Machine learning algorithms. Packt Publishing. https://books.google.com/books/about/Machine_Learning_Algorithms.html?id=_-ZDDwAAQBAJ

Yadav, N., Yadav, A., & Kumar, M. (2015). An introduction to neural network methods for differential equations. SpringerBriefs in applied sciences and technology. https://doi.org/10.1007/978-94-017-9816-7

Demuth, H., Beale, M., & Hagan, M. (1992). Neural network toolbox: For use with MATLAB. The MathWorks Inc.

Karlik, B., Olgac, A. V., & others. (2011). Performance analysis of various activation functions in generalized MLP architectures of neural networks. International journal of artificial intelligence and expert systems, 1(4), 111–122. https://www.cscjournals.org/manuscript/Journals/IJAE/Volume1/Issue4/IJAE-26.pdf

Jagtap, A. D., & Karniadakis, G. E. (2023). How important are activation functions in regression and classification? A survey, performance comparison, and future directions. Journal of machine learning for modeling and computing, 4(1), 21–75. https://doi.org/10.1615/JMachLearnModelComput.2023047367

Han, J., & Moraga, C. (1995). The influence of the sigmoid function parameters on the speed of backpropagation learning [presentation]. International workshop on artificial neural networks (pp. 195–201). https://doi.org/10.1007/3-540-59497-3_175

Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., … & Others. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE signal processing magazine, 29(6), 82–97. https://doi.org/10.1109/MSP.2012.2205597

Nair, V., & Hinton, G. E. (2010). Rectified linear units improve restricted boltzmann machines [presentation]. Proceedings of the 27th international conference on machine learning (icml-10) (pp. 807–814). https://dl.acm.org/doi/10.5555/3104322.3104425

Kwan, H. K. (1992). Simple sigmoid-like activation function suitable for digital hardware implementation. Electronics letters, 28(15), 1379–1380. https://doi.org/10.1049/el:19920877

Mansor, M. A., & Sathasivam, S. (2016). Activation function comparison in neural-symbolic integration [presentation]. AIP conference proceedings (Vol. 1750, p. 20013). https://doi.org/10.1063/1.4954526

Elliott, D. L. (1993). A better activation function for artificial neural networks. https://api.drum.lib.umd.edu/server/api/core/bitstreams/fbe46c3c

Farzad, A., Mashayekhi, H., & Hassanpour, H. (2019). A comparative performance analysis of different activation functions in LSTM networks for classification. Neural computing and applications, 31(7), 2507–2521. https://doi.org/10.1007/s00521-017-3210-6

Chandra, P., & Singh, Y. (2004). An activation function adapting training algorithm for sigmoidal feedforward networks. Neurocomputing, 61, 429–437. https://doi.org/10.1016/j.neucom.2004.04.001

Qin, Y., Wang, X., & Zou, J. (2018). The optimized deep belief networks with improved logistic sigmoid units and their application in fault diagnosis for planetary gearboxes of wind turbines. IEEE transactions on industrial electronics, 66(5), 3814–3824. https://doi.org/10.1109/TIE.2018.2856205

Kong, S., & Takatsuka, M. (2017). Hexpo: A vanishing-proof activation function [presentation]. 2017 international joint conference on neural networks (IJCNN) (pp. 2562–2567). https://doi.org/10.1109/IJCNN.2017.7966168

Eger, S., Youssef, P., & Gurevych, I. (2019). Is it time to swish? Comparing deep learning activation functions across NLP tasks. ArXiv preprint arxiv:1901.02671. https://doi.org/10.48550/arXiv.1901.02671

Maas, A. L., Hannun, A. Y., Ng, A. Y., & others. (2013). Rectifier nonlinearities improve neural network acoustic models [presentation]. Proceedings of the 30th international conference on machine learning. https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf

Lu, L., Shin, Y., Su, Y., & Karniadakis, G. E. (2019). Dying relu and initialization: Theory and numerical examples. ArXiv preprint arxiv:1903.06733. https://doi.org/10.4208/cicp.OA-2020-0165

Shang, W., Sohn, K., Almeida, D., & Lee, H. (2016). Understanding and improving convolutional neural networks via concatenated rectified linear units [presentation]. International conference on machine learning (pp. 2217–2225). http://dx.doi.org/10.48550/arXiv.1603.05201

Godin, F., Degrave, J., Dambre, J., & De Neve, W. (2018). Dual rectified linear units (DReLUs): A replacement for tanh activation functions in quasi-recurrent neural networks. Pattern recognition letters, 116, 8–14. https://doi.org/10.1016/j.patrec.2018.09.006

Cao, J., Pang, Y., Li, X., & Liang, J. (2018). Randomly translational activation inspired by the input distributions of ReLU. Neurocomputing, 275, 859–868. https://doi.org/10.1016/j.neucom.2017.09.031

Liu, Y., Zhang, J., Gao, C., Qu, J., & Ji, L. (2019). Natural-logarithm-rectified activation function in convolutional neural networks. 2019 IEEE 5th international conference on computer and communications (ICCC) (pp. 2000–2008). IEEE. https://doi.org/10.1109/ICCC47050.2019.9064398

Dubey, S. R., & Chakraborty, S. (2021). Average biased ReLU based CNN descriptor for improved face retrieval. Multimedia tools and applications, 80(15), 23181–23206. https://doi.org/10.1007/s11042-020-10269-x

Lee, H., & Kang, I. S. (1990). Neural algorithm for solving differential equations. Journal of computational physics, 91(1), 110–131. https://doi.org/10.1016/0021-9991(90)90007-N

Meade Jr, A. J., & Fernandez, A. A. (1994). Solution of nonlinear ordinary differential equations by feedforward neural networks. Mathematical and computer modelling, 20(9), 19–44. https://doi.org/10.1016/0895-7177(94)00160-X

Meade Jr, A. J., & Fernandez, A. A. (1994). The numerical solution of linear ordinary differential equations by feedforward neural networks. Mathematical and computer modelling, 19(12), 1–25. https://doi.org/10.1016/0895-7177(94)90095-7

Lagaris, I. E., Likas, A., & Fotiadis, D. I. (1997). Artificial neural network methods in quantum mechanics. Computer physics communications, 104(1–3), 1–14. https://doi.org/10.1016/S0010-4655(97)00054-4

Lagaris, I. E., Likas, A., & Papageorgiou, D. G. (1998). Neural network methods for boundary value problems defined in arbitrarily shaped domains. ArXiv preprint cs/9812003. https://doi.org/10.48550/arXiv.cs/9812003

Otadi, M., Mosleh, M., & others. (2011). Numerical solution of quadratic Riccati differential equation by neural network. https://www.sid.ir/paper/322546/en

Ibraheem, K. I., & Khalaf, B. M. (2011). Shooting neural networks algorithm for solving boundary value problems in ODEs. Applications and applied mathematics: An international journal (aam), 6(1), 15. https://digitalcommons.pvamu.edu/aam/vol6/iss1/15/

Tawfiq, L. N. M., & Hussein, A. A. T. (2013). Design feed forward neural network to solve singular boundary value problems. International scholarly research notices, 2013(1), 650467. https://doi.org/10.1155/2013/650467

Mall, S., & Chakraverty, S. (2016). Application of Legendre neural network for solving ordinary differential equations. Applied soft computing, 43, 347–356. https://doi.org/10.1016/j.asoc.2015.10.069

Okereke, R. N., Maliki, O. S., & Oruh, B. I. (2021). A novel method for solving ordinary differential equations with artificial neural networks. Applied mathematics, 12(10), 900–918. https://doi.org/10.4236/am.2021.1210059

Aarts, L. P., & Van Der Veer, P. (2001). Neural network method for solving partial differential equations. Neural processing letters, 14(3), 261–271. https://doi.org/10.1023/A:1012784129883

Beidokhti, R. S., & Malek, A. (2009). Solving initial-boundary value problems for systems of partial differential equations using neural networks and optimization techniques. Journal of the franklin institute, 346(9), 898–913. https://doi.org/10.1016/j.jfranklin.2009.05.003

Rudd, K., & Ferrari, S. (2015). A constrained integration (CINT) approach to solving partial differential equations using artificial neural networks. Neurocomputing, 155, 277–285. https://doi.org/10.1016/j.neucom.2014.11.058

Berg, J., & Nyström, K. (2018). A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing, 317, 28–41. https://doi.org/10.1016/j.neucom.2018.06.056

Anitescu, C., Atroshchenko, E., Alajlan, N., & Rabczuk, T. (2019). Artificial neural network methods for the solution of second order boundary value problems. Computers, materials & continua, 59(1), 345–359. http://dx.doi.org/10.32604/cmc.2019.06641

Liu, Z., Yang, Y., & Cai, Q. (2019). Neural network as a function approximator and its application in solving differential equations. Applied mathematics and mechanics, 40(2), 237–248. https://doi.org/10.1007/s10483-019-2429-8

Panghal, S., & Kumar, M. (2021). Optimization free neural network approach for solving ordinary and partial differential equations. Engineering with computers, 37(4), 2989–3002. https://doi.org/10.1007/s00366-020-00985-1

Jafarian, A., Measoomy Nia, S., Khalili Golmankhaneh, A., & Baleanu, D. (2018). On artificial neural networks approach with new cost functions. Applied mathematics and computation, 339, 546–555. https://doi.org/10.1016/j.amc.2018.07.053

Admon, M. R., Senu, N., Ahmadian, A., Abdul Majid, Z., & Salahshour, S. (2023). A new efficient algorithm based on feedforward neural network for solving differential equations of fractional order. Communications in nonlinear science and numerical simulation, 117, 106968. https://doi.org/10.1016/j.cnsns.2022.106968

Hagan, M. T., & Menhaj, M. B. (1994). Training feedforward networks with the Marquardt algorithm. IEEE transactions on neural networks, 5(6), 989–993. https://doi.org/10.1109/72.329697

Published

2025-09-14

How to Cite

Lakshmi Durga, P., & Nayak, S. (2025). Artificial Neural Networks to Investigate Mathematical Models: A Concise Review. Systemic Analytics, 3(3), 177-192. https://doi.org/10.31181/sa33202554