Impact of Disentanglement on Pruning Neural Networks
Shneider, Carl; Rostami Abendansari, Peyman; Kacem, Anis et al.
Scientific Conference (2023, July 19)

Deploying deep learning neural networks on edge devices, to accomplish task-specific objectives in the real world, requires a reduction in their memory footprint, power consumption, and latency. This can be realized via efficient model compression. Disentangled latent representations produced by variational autoencoder (VAE) networks are a promising approach to model compression because they mainly retain task-specific information, discarding information that is useless for the task at hand. We use the Beta-VAE framework combined with a standard pruning criterion to investigate the impact of forcing the network to learn disentangled representations on the pruning process for the task of classification. In particular, we perform experiments on the MNIST and CIFAR10 datasets, examine disentanglement challenges, and propose a path forward for future work.

Impact of Disentanglement on Pruning Neural Networks
Shneider, Carl; Rostami Abendansari, Peyman; Kacem, Anis et al.
Poster (2023, June 20)

Efficient model compression techniques are required to deploy deep neural networks (DNNs) on edge devices for task-specific objectives. A variational autoencoder (VAE) framework is combined with a pruning criterion to investigate the impact of having the network learn disentangled representations on the pruning process for the classification task.

Compression of Deep Neural Networks for Space Autonomous Systems
Shneider, Carl; Sinha, Nilotpal; Jamrozik, Michele Lynn et al.
Poster (2023, April 19)

Efficient compression techniques are required to deploy deep neural networks (DNNs) on edge devices for space resource utilization tasks. Two approaches are investigated.
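The first abstract pairs the Beta-VAE objective with a standard pruning criterion. As a rough, paper-agnostic sketch of those two ingredients (the function names, the squared-error reconstruction term, and the 50% sparsity default are illustrative assumptions, not the authors' exact setup):

```python
import numpy as np

def beta_vae_loss(x, x_recon, mu, log_var, beta=4.0):
    """Beta-VAE objective: reconstruction error plus a beta-weighted
    KL divergence of the approximate posterior N(mu, sigma^2) from N(0, I).
    beta > 1 pressures the encoder toward disentangled latent factors."""
    recon = np.sum((x - x_recon) ** 2)  # illustrative squared-error reconstruction term
    # Closed-form KL between a diagonal Gaussian and the standard normal prior.
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + beta * kl

def magnitude_prune(weights, sparsity=0.5):
    """Standard magnitude criterion: zero out the fraction `sparsity`
    of the weights with the smallest absolute value."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.sort(np.abs(weights).ravel())[k - 1]
    # Ties at the threshold are also pruned, so sparsity is a lower bound.
    return weights * (np.abs(weights) > threshold)
```

In a pipeline like the one the abstract describes, a classifier head would be trained on the VAE's latent code, and `magnitude_prune` would then be applied to the network's weight matrices to compare pruning behavior with and without the disentanglement pressure from `beta`.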