Speech enhancement (SE) enables robust speech recognition, real-time communication, hearing aids, and other applications where speech quality is crucial. However, deploying such systems on resource-constrained devices involves choosing a static trade-off between performance and computational efficiency. In this paper, we introduce dynamic slimming to DEMUCS, a popular SE architecture, making it scalable and input-adaptive. Slimming lets the model operate at different utilization factors (UF), each corresponding to a different performance/efficiency trade-off, effectively mimicking multiple model sizes without the extra storage costs. In addition, a router subnet, trained end-to-end with the backbone, determines the optimal UF for the current input. Thus, the system saves resources by adaptively selecting smaller UFs when additional complexity is unnecessary. We show that our solution is Pareto-optimal against individual UFs, confirming the benefits of dynamic routing. When training the proposed dynamically-slimmable model to use 10% of its capacity on average, we obtain the same or better speech quality as the equivalent static 25% utilization while reducing MACs by 29%.
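The core mechanism can be illustrated with a minimal sketch (our own, not the paper's code): a layer that activates only the first ⌈UF·C⌉ channels, plus a router that maps the input to one of a few discrete UF levels. The layer widths and the energy-based routing heuristic below are illustrative assumptions; in the paper the router is a subnet trained end-to-end with the backbone.

```python
import numpy as np

# Discrete utilization factors, mirroring the static backbone variants.
UF_LEVELS = [0.125, 0.25, 0.5, 1.0]

def slimmable_linear(x, W, b, uf):
    """Linear layer that computes only the first ceil(uf * C_out) output
    channels; the remaining channels are left at zero (skipped at inference)."""
    c = int(np.ceil(uf * W.shape[0]))
    y = np.zeros(W.shape[0])
    y[:c] = W[:c] @ x + b[:c]
    return y

def toy_router(x):
    """Hypothetical stand-in for the learned router: pick a UF level from
    the input's RMS energy (quieter inputs get a smaller slice of the net)."""
    rms = np.sqrt(np.mean(x ** 2))
    idx = min(int(rms * len(UF_LEVELS)), len(UF_LEVELS) - 1)
    return UF_LEVELS[idx]
```

Because every UF shares the same weight tensor (smaller UFs use a prefix of the channels), the slimmable model mimics several model sizes with no extra storage.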
These are the test samples from the DynSlim (joint) model trained with a target UF of 0.5:
*(Eight test samples, each presented as clean, noisy, and predicted audio; the audio players are not reproduced here.)*
These are the metrics corresponding to the models shown in the Pareto fronts in Figure 5 of the paper:
Model variant | Target UF | PESQ | SI-SDR (dB) | DNSMOS | STOI | MACs/sample | Average UF
---|---|---|---|---|---|---|---
Slimm. backbone | 0.125 | 2.86 | 17.59 | 3.33 | 0.94 | 8944 | |
Slimm. backbone | 0.25 | 2.96 | 17.84 | 3.34 | 0.94 | 15664 | |
Slimm. backbone | 0.50 | 3.00 | 18.09 | 3.36 | 0.94 | 29104 | |
Slimm. backbone | 1.00 | 3.01 | 18.15 | 3.37 | 0.94 | 55985 | |
DynSlim (joint) | 0.10 | 2.96 | 17.86 | 3.35 | 0.94 | | 0.18 |
DynSlim (joint) | 0.25 | 2.96 | 18.04 | 3.36 | 0.94 | | 0.29 |
DynSlim (joint) | 0.50 | 3.03 | 18.19 | 3.37 | 0.95 | | 0.54 |
DynSlim (joint) | 0.75 | 3.04 | 18.24 | 3.38 | 0.95 | | 0.79 |
DynSlim (joint) | 0.90 | 3.08 | 18.24 | 3.39 | 0.95 | | 0.96 |
DynSlim (finetune) | 0.25 | 2.94 | 18.00 | 3.35 | 0.94 | | 0.25 |
DynSlim (finetune) | 0.50 | 2.99 | 18.21 | 3.36 | 0.94 | | 0.50 |
@INPROCEEDINGS{miccini2024adaptive,
author={Miccini, Riccardo and Kim, Minje and Laroche, Clément and Pezzarossa, Luca and Smaragdis, Paris},
booktitle={2025 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)},
title={Adaptive Slimming for Scalable and Efficient Speech Enhancement},
year={2025},
keywords={speech enhancement; dynamic neural networks; edge AI},
doi={}
}