
Optimization Strategies and Neural Architectures in Neural Networks: Dataset Pruning, Architecture Search, and Diffusion Models

Abstract

The increasing complexity of neural network applications, particularly in fields such as image super-resolution (SR) and optical character recognition (OCR), has spurred the need for more efficient optimization strategies and innovative neural architectures. This paper reviews recent advances in dataset pruning, neural architecture search (NAS), and latent dataset distillation with diffusion models, and discusses how these techniques improve the training efficiency of deep learning models while maintaining or improving performance across tasks. Dataset pruning, which reduces the size of training datasets without sacrificing accuracy, is shown to be an effective way to lower computational cost. Proxy datasets and NAS contribute further by automating the discovery of well-performing architectures and by reducing the resources needed to search the vast space of candidate models. The paper also examines latent dataset distillation, in which diffusion models produce condensed representations of datasets that significantly speed up training. The impact of these techniques on architectures such as U-Net and the recurrent U-ReNet is evaluated, showcasing their effect on both OCR and SR tasks. Finally, the paper synthesizes research in these areas and outlines future directions for advancing neural network optimization and architecture development.
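To make the dataset pruning idea concrete, the following is a minimal, illustrative sketch of score-based pruning: keep only the fraction of training examples judged most informative by some per-example score (for instance, loss values or forgetting counts from a preliminary training run). The function name `prune_dataset`, the `keep_fraction` parameter, and the random toy data are assumptions for illustration and do not reflect the specific criterion used in the paper.

```python
import numpy as np

def prune_dataset(features, labels, scores, keep_fraction=0.5):
    """Keep the highest-scoring fraction of training examples.

    `scores` is a per-example importance estimate (e.g. loss or
    forgetting counts from an earlier training run); higher-scoring
    examples are retained.
    """
    n_keep = max(1, int(len(scores) * keep_fraction))
    keep_idx = np.argsort(scores)[-n_keep:]  # indices of the most "informative" examples
    return features[keep_idx], labels[keep_idx]

# Toy usage: 1,000 random examples, keep the 30% with the largest scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y = rng.integers(0, 10, size=1000)
importance = rng.random(1000)  # stand-in for real per-example importance scores
X_small, y_small = prune_dataset(X, y, importance, keep_fraction=0.3)
print(X_small.shape, y_small.shape)  # (300, 16) (300,)
```

In practice the scoring function is the crucial design choice; the sketch above only shows the selection step once such scores are available.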
