A number of deep learning based algorithms have been proposed to recover high-quality videos from low-quality compressed ones. Among them, some restore the missing details of each frame by exploring the spatiotemporal information of neighboring frames. However, these methods usually suffer from a narrow temporal scope and may therefore miss useful details in frames beyond the immediate neighbors. In this paper, to boost artifact removal, on the one hand we propose a Recursive Fusion (RF) module to model temporal dependencies over a long temporal range. Specifically, RF utilizes both the current reference frames and the preceding hidden state to conduct better spatiotemporal compensation. On the other hand, we design an efficient and effective Deformable Spatiotemporal Attention (DSTA) module so that the model can devote more effort to restoring artifact-rich areas, such as the boundaries of moving objects. Extensive experiments show that our method outperforms existing ones on the MFQE 2.0 dataset in terms of both fidelity and perceptual quality.
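The recursive fusion idea can be sketched abstractly: at each step, the module combines the current window of reference frames with a hidden state carried over from the previous step, so information propagates well beyond the local temporal window. Below is a minimal NumPy sketch of that recurrence; the fusion operator here is a simple weighted average standing in for the paper's learned module, and all function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def recursive_fusion_step(frames, hidden, alpha=0.5):
    """One recursive-fusion step (illustrative stand-in).

    frames : (T, H, W) window of reference frames around the target
    hidden : (H, W) hidden state from the previous step (long-range memory)
    alpha  : mixing weight between the current window and the history
    """
    window_feat = frames.mean(axis=0)            # aggregate the current window
    fused = alpha * window_feat + (1 - alpha) * hidden
    return fused                                  # becomes the next hidden state

def run_sequence(video, window=3, alpha=0.5):
    """Slide over the video, carrying the fused hidden state forward."""
    num_frames, height, width = video.shape
    hidden = np.zeros((height, width))            # empty memory at t = 0
    outputs = []
    for t in range(num_frames):
        lo = max(0, t - window // 2)
        hi = min(num_frames, t + window // 2 + 1)
        hidden = recursive_fusion_step(video[lo:hi], hidden, alpha)
        outputs.append(hidden)
    return np.stack(outputs)
```

Because the hidden state is fed back at every step, the output for frame t depends on all earlier frames rather than only the local window, which is the long-range property the RF module is designed to exploit.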