Faheem, Muhammad (2025) Energy Efficient Neural Architectures for TinyML Applications. International Journal of Scientific Research and Modern Technology, 4 (5): 531. pp. 45-50. ISSN 2583-4622
Tiny Machine Learning (TinyML) is shifting machine learning onto microcontrollers and edge sensors. This article surveys energy-efficient neural network architectures for TinyML that balance accuracy, memory footprint, and power consumption. We review recent advances in model quantization, pruning, and neural architecture search (NAS) that enable deep learning models to run on highly energy-constrained devices. The practical deployment of MobileNet, SqueezeNet, and EfficientNet on edge hardware is examined, with attention to how well each preserves accuracy. We also evaluate approaches that reduce DRAM energy through hardware-software co-design and specialized accelerators. Because real-time decisions are critical in environmental monitoring, wearable technology, and industrial IoT, model deployment must be both efficient and dependable. The article synthesizes recent findings to show how energy-efficient architectures support the rapid progress of TinyML across these domains. By focusing on practical methods and real use cases, it offers actionable guidance for designing intelligent, energy-efficient edge systems.
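To make the quantization technique mentioned in the abstract concrete, the sketch below shows symmetric per-tensor int8 post-training quantization, one of the simplest TinyML compression methods. This is an illustrative example, not code from the article; the function names and the NumPy-based workflow are assumptions for demonstration. Storing weights as int8 instead of float32 cuts their memory footprint by 4x, at the cost of a bounded rounding error.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q.
    The scale maps the largest-magnitude weight to the int8 limit 127."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for inference or error analysis."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights, standing in for a trained model's parameters.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32 for the same tensor shape.
compression = w.nbytes // q.nbytes
# Per-element rounding error is bounded by half the quantization step.
max_err = float(np.max(np.abs(w - dequantize(q, scale))))
print(compression, max_err)
```

In practice, frameworks such as TensorFlow Lite apply the same idea per-channel and also quantize activations, but the memory and error trade-off illustrated here is the core of the technique.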