1. School of Electronic Engineering, Xidian University, Xi'an 710071, China
2. School of Mechano-Electronic Engineering, Xidian University, Xi'an 710071, China
[ "夏译蓝(1998—),女,西安电子科技大学硕士研究生,E-mail:[email protected]" ]
[ "王秀美(1978—),女,教授,E-mail:[email protected]" ]
程培涛(1978—),男,副教授,E-mail:[email protected]
Print publication date: 2024-06-20
Online publication date: 2023-11-15
Received: 2023-03-13
夏译蓝, 王秀美, 程培涛. 基于多注意力机制的纹理感知视频修复方法[J]. 西安电子科技大学学报, 2024,51(3):136-146. DOI: 10.19665/j.issn1001-2400.20231004.
Yilan XIA, Xiumei WANG, Peitao CHENG. Texture-aware video inpainting algorithm based on the multi-attention mechanism[J]. Journal of Xidian University, 2024,51(3):136-146. DOI: 10.19665/j.issn1001-2400.20231004.
针对现有视频修复方法无法有效利用远处空间内容信息而导致修复结果中存在结构和纹理不合理的问题，提出了一种基于多注意力机制的纹理感知视频修复方法。该方法设计了由多头时空注意力和单图局部注意力构成的多注意力机制，以保证全局结构并增强局部纹理：其中多头时空注意力关注整体时空信息，单图局部注意力通过局部窗口的自注意力机制精炼提取局部信息。另外，采用可即插即用的快速傅里叶卷积层残差块代替前馈网络中的普通卷积，将感受野扩展为整个图像，进一步增强了模型对图像纹理和结构的全局信息的获取能力。快速傅里叶卷积层残差块和单图局部注意力相辅相成，共同提升局部纹理的修复质量。在YouTube-VOS和DAVIS数据集上的实验结果表明，虽然所提方法修复结果的客观质量评价仅次于最优方法FuseFormer，但其参数量和运行时间分别下降了54.8%和21.5%，而且能够生成视觉上更逼真、语义上更合理的修复内容。
Existing video inpainting methods cannot effectively utilize distant spatial contents, which results in unreasonable structures and textures in the completed regions. To solve this problem, a texture-aware video inpainting algorithm based on a multi-attention mechanism is proposed in this paper. The algorithm designs a multi-attention mechanism composed of multi-head spatiotemporal attention and single-image local attention, which guarantees global structures while enriching local textures. Multi-head spatiotemporal attention focuses on the overall spatiotemporal information, while single-image local attention distills local information through a window-based self-attention mechanism. In addition, a plug-and-play fast Fourier convolution (FFC) residual block replaces the vanilla convolution in the feed-forward network, expanding the receptive field to the entire image so that the global structure and texture of a single frame can be better captured. The FFC residual block and the single-image local attention complement each other and jointly improve the quality of local textures. Experimental results on the YouTube-VOS and DAVIS datasets show that, although the proposed method ranks second only to the best-performing method FuseFormer on objective quality metrics, its number of parameters and running time are reduced by 54.8% and 21.5%, respectively, and it generates visually more realistic and semantically more reasonable content.
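The key property of the fast Fourier convolution used above is that a pointwise operation in the frequency domain touches every spatial position at once, giving a single layer an image-wide receptive field. A minimal NumPy sketch of the spectral branch (the function name, the real/imaginary concatenation, and the weight shape are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def ffc_spectral_branch(x, weight):
    """Sketch of the spectral branch of a fast Fourier convolution (FFC).

    x      : (C, H, W) real-valued feature map
    weight : (C, 2*C) pointwise channel-mixing matrix acting on the
             concatenated real/imaginary parts of the spectrum
    """
    C, H, W = x.shape
    spec = np.fft.rfft2(x, axes=(-2, -1))               # (C, H, W//2 + 1)
    feat = np.concatenate([spec.real, spec.imag], 0)    # (2C, H, W//2 + 1)
    # A 1x1 "convolution" in the frequency domain: each output frequency
    # mixes all channels, so every output pixel sees the whole image.
    mixed = np.tensordot(weight, feat, axes=([1], [0]))  # (C, H, W//2 + 1)
    mixed = np.maximum(mixed, 0.0)                       # ReLU nonlinearity
    out = np.fft.irfft2(mixed, s=(H, W), axes=(-2, -1))  # back to pixels
    return x + out                                       # residual connection
```

In the full method this block would replace the vanilla convolution inside the Transformer feed-forward network; the residual connection keeps the local content while the spectral path injects global structure.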
视频修复; Transformer; 快速傅里叶卷积; 多注意力机制; 纹理感知
video inpainting; Transformer; fast Fourier convolution; multi-attention mechanism; texture-aware
CHAVAN S A, CHOUDHARI N M. Various Approaches for Video Inpainting:A Survey[C]//2019 5th International Conference on Computing,Communication,Control and Automation.Piscataway:IEEE, 2019:1-5.
潘浩. 数字视频的修复方法研究[D]. 合肥: 中国科学技术大学, 2010.
PAN Hao. Research on Inpainting Methods for Digital Video[D]. Hefei: University of Science and Technology of China, 2010.
ZHANG X, LI H, QI Y, et al. Rain Removal in Video by Combining Temporal and Chromatic Properties[C]//IEEE International Conference on Multimedia and Expo. Piscataway:IEEE, 2006:461-464.
HUANG Y, ZHENG F, WANG D, et al. Super-Resolution and Inpainting with Degraded and Upgraded Generative Adversarial Networks[C]//Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. New York: ACM, 2021:645-651.
韦哲, 李从利, 沈延安, 等. 基于两阶段模型的无人机图像厚云区域内容生成[J]. 计算机学报, 2021, 44(11):2233-2247.
WEI Zhe, LI Congli, SHEN Yan’an, et al. Thick Cloud Region Content Generation of UAV Image Based on Two-Stage Model[J]. Chinese Journal of Computers, 2021, 44(11):2233-2247.
TRAN D, BOURDEV L, FERGUS R, et al. Learning Spatiotemporal Features with 3D Convolutional Networks[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE, 2015:4489-4497.
CHANG Y L, LIU Z Y, LEE K Y, et al. Free-form Video Inpainting with 3D Gated Convolution and Temporal PatchGAN[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE, 2019:9066-9075.
KIM D, WOO S, LEE J Y, et al. Deep Video Inpainting[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2019:5792-5801.
HU Y T, WANG H, BALLAS N, et al. Proposal-Based Video Completion[C]//Proceedings of the European Conference on Computer Vision. Cham:Springer, 2020:38-54.
XU R, LI X, ZHOU B, et al. Deep Flow-Guided Video Inpainting[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2019:3723-3732.
GAO C, SARAF A, HUANG J B, et al. Flow-Edge Guided Video Completion[C]//Proceedings of the European Conference on Computer Vision. Cham:Springer, 2020:713-729.
LEE S, OH S W, WON D Y, et al. Copy-and-Paste Networks for Deep Video Inpainting[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE, 2019:4413-4421.
VASWANI A, SHAZEER N, PARMAR N, et al. Attention is All You Need[J]. Advances in Neural Information Processing Systems, 2017, 30:5998-6008.
ZENG Y, FU J, CHAO H. Learning Joint Spatial-Temporal Transformations for Video Inpainting[C]//Proceedings of the European Conference on Computer Vision. Cham:Springer, 2020:528-543.
LIU R, DENG H, HUANG Y, et al. FuseFormer:Fusing Fine-Grained Information in Transformers for Video Inpainting[C]//Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE, 2021:14040-14049.
TANCIK M, SRINIVASAN P, MILDENHALL B, et al. Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains[J]. Advances in Neural Information Processing Systems, 2020, 33:7537-7547.
ZHANG R, ISOLA P, EFROS A A, et al. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE, 2018:586-595.
SIMONYAN K, ZISSERMAN A. Very Deep Convolutional Networks for Large-Scale Image Recognition(2014)[J/OL].[2014-09-04].https://arxiv.org/pdf/1409.1556.pdf.
SUVOROV R, LOGACHEVA E, MASHIKHIN A, et al. Resolution-Robust Large Mask Inpainting with Fourier Convolutions[C]//Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Piscataway:IEEE, 2022:2149-2159.
GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative Adversarial Nets[J]. Advances in Neural Information Processing Systems, 2014, 27:2672-2680.
WANG C, HUANG H, HAN X, et al. Video Inpainting by Jointly Learning Temporal Structure and Spatial Details[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2019, 33(1):5232-5239.
KINGMA D P, BA J. Adam:A Method for Stochastic Optimization[C]//Proceedings of the 3rd International Conference on Learning Representations. San Diego, 2015:1-13.
XU N, YANG L, FAN Y, et al. YouTube-VOS:A Large-Scale Video Object Segmentation Benchmark(2018)[J/OL].[2018-09-06].https://arxiv.org/pdf/1809.03327.pdf.
CAELLES S, MONTES A, MANINIS K K, et al. The 2018 DAVIS Challenge on Video Object Segmentation(2018)[J/OL].[2018-03-01].https://arxiv.org/pdf/1803.00557.pdf.
杨静雅, 齐彦丽, 周一青, 等. CNN-Transformer轻量级智能调制识别算法[J]. 西安电子科技大学学报, 2023, 50(3):40-49.
YANG Jingya, QI Yanli, ZHOU Yiqing, et al. Algorithm for Recognition of Lightweight Intelligent Modulation Based on the CNN-Transformer Networks[J]. Journal of Xidian University, 2023, 50(3):40-49.