Ivaniuk A. Study of relationships in data using artificial neural networks


Thesis for the degree of Doctor of Philosophy (PhD)

State registration number

0825U000334


Specialization

  • 113 - Applied Mathematics

13-12-2024

Specialized Academic Board

PhD 7057

National University of Kyiv-Mohyla Academy

Abstract

Ivaniuk A. O. Study of relationships in data using artificial neural networks. — Qualification research work (manuscript). Dissertation for the degree of Doctor of Philosophy in Field of Study 11 “Mathematics and Statistics”, Programme Subject Area 113 “Applied Mathematics”. — National University of Kyiv-Mohyla Academy, Kyiv, 2024.

This dissertation studies relationships in data through the application of artificial neural networks. These relationships can be represented in various forms and modeled in different ways, and modeling them correctly is key to successfully solving a variety of tasks such as classification, regression, and generative modeling.

Modern neural networks are commonly evaluated with standard metrics such as classification accuracy and mean squared error. Good values of these metrics, however, do not guarantee the absence of errors or vulnerabilities: models can produce wrong results with high confidence, especially when presented with adversarial examples, i.e., specially crafted inputs designed to mislead the model. This research addresses this problem through a detailed study of the quantitative assessment of uncertainty and of the robustness of neural networks to adversarial attacks. Using adversarial data as a tool, the work aims to deepen the understanding of model reliability and to develop more robust neural network-based systems that withstand various attacks and perform stably in real-world applications.

By investigating adversarial relationships and patterns in data, the work uses them as a metric of generalization to identify model weaknesses and to assess the ability of models to generalize. Understanding how models respond to adversarial perturbations offers a unique perspective on their internal structure and decision-making mechanisms. This allows not only identifying vulnerabilities but also developing methods to eliminate them, thereby improving the overall reliability and efficiency of models.

As part of this research, various parameterizations of neural networks for sequence modeling are studied, together with their impact on model performance and robustness to adversarial attacks. Special attention is paid to new architectures and activation functions that can improve the generalization ability and robustness of models. Adversarial robustness is treated as an important metric for identifying model weaknesses and evaluating their overall effectiveness.

The research covers effective parameterizations for different types of input data, including images, speech signals, and text. These parameterizations are applied to various machine learning tasks, such as image classification, language modeling, and regression based on latent diffusion models. The experiments aim to identify how different parameterization strategies can improve model performance while maintaining or even enhancing robustness to adversarial attacks. The results provide important insights for developing more reliable and generalizable machine learning models, advancing the field by identifying parameterization techniques that balance performance and robustness across a wide range of practical tasks.
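For readers unfamiliar with the central concept, the sketch below illustrates how an adversarial example can be crafted with the fast gradient sign method (FGSM), one standard attack in this literature. It is a minimal illustration, not the dissertation's method; the function name, the PyTorch classifier `model`, and the perturbation budget `eps` are assumptions made here for the example.

    # Minimal FGSM sketch (illustrative only; assumes a differentiable
    # PyTorch classifier `model` and a labelled input batch (x, y)).
    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, eps=0.03):
        """Craft x_adv = x + eps * sign(grad_x loss): a small perturbation
        in the direction that most increases the classification loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        # Keep the perturbed input in the valid pixel range.
        return x_adv.clamp(0.0, 1.0).detach()

Even with a small `eps`, such perturbations often flip a confident prediction, which is why adversarial robustness serves as a revealing metric of model weaknesses beyond standard accuracy.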
Overall, this dissertation makes a significant contribution to understanding and improving the robustness of neural networks to adversarial attacks by proposing new approaches to parameterization and modeling that can be applied in various fields of machine learning. The results can serve as a foundation for developing more reliable and efficient models capable of ensuring high performance and security in real-world applications. The experiments confirm that the considered parameterizations can improve classification accuracy, but they also reveal difficulties associated with adversarial training of such models. Further research in this direction may lead to models that not only perform well but are also robust to a variety of attacks, which is critically important today, when the security and reliability of machine learning models are becoming increasingly significant.

Keywords: adversarial robustness, adversarial examples, adversarial purification, attention mechanism, model parameterization, diffusion modeling, auto-encoders, signal processing, adaptive algorithms, optimization algorithms, activation functions, regularization, neural networks, artificial neural network, algorithm, convolutional neural network, parameters, error, machine learning, classification, regression, generative modeling, computer vision, natural language processing, audio modeling.

Research papers

A. Ivaniuk and G. Kriukova, “On Geometric Properties of Adversarial Examples,” 2021 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Cracow, Poland, 2021, pp. 964–967.

A. Ivaniuk, “Speech audio modeling by means of causal moving average equipped gated attention,” Mohyla Mathematical Journal (Могилянський математичний журнал), vol. 5, pp. 53–56, 2022.

A. Ivaniuk, “Latent diffusion model for speech signal processing,” Bulletin of V. N. Karazin Kharkiv National University, series Mathematical Modelling. Information Technology. Automated Control Systems, vol. 61, pp. 43–51, 2024.
