Video compression removes subtle spatial and temporal information to reduce bandwidth and storage. The removed information is usually redundant and unimportant for perceived video quality, but it contains the minuscule skin intensity variations caused by changing blood volume that imaging photoplethysmography (iPPG) relies on. As a result, state-of-the-art iPPG methods fail to recover vital signs from heavily compressed videos. We show that deep learning models can learn how noise at different video compression levels affects the iPPG signal and can reliably recover vital signs from highly compressed videos, even in the presence of large motion. This work was done in collaboration with Daniel McDuff at Microsoft Research. The initial results were published in ICCV-CVPM in 2019 [pdf] [poster] and a full version was published in Biomedical Optics Express in 2020 [pdf] [video].
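To make the fragility concrete, here is a minimal sketch (not the method from the papers) of the classic baseline iPPG measurement: spatially averaging the green channel of each frame and reading the pulse rate from the spectrum. The synthetic "video", the 72 bpm pulse, and the helper names are illustrative assumptions; in real footage the oscillation has roughly this tiny amplitude, which is exactly what aggressive compression discards.

```python
import numpy as np

def ippg_green_mean(frames):
    """Spatially average the green channel of each frame.

    frames: array of shape (T, H, W, 3). Returns a 1-D signal of length T.
    The blood-volume-driven intensity changes survive only as a tiny
    oscillation in this mean skin brightness.
    """
    return frames[:, :, :, 1].mean(axis=(1, 2))

def dominant_frequency(signal, fps):
    """Return the strongest non-DC frequency (Hz) in the signal."""
    sig = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

# Synthetic example: 10 s of 30 fps "video" with a 72 bpm (1.2 Hz) pulse
# encoded as a sub-intensity-level brightness oscillation.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # amplitude << 1 intensity level
frames = np.full((len(t), 8, 8, 3), 128.0)
frames[:, :, :, 1] += pulse[:, None, None]

signal = ippg_green_mean(frames)
hr_bpm = dominant_frequency(signal, fps) * 60
print(round(hr_bpm))  # → 72
```

Because the pulse amplitude is a fraction of a single intensity level, quantization in a lossy codec can flatten it entirely, which is why a learned model that accounts for compression noise is needed.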