Dense light field reconstruction algorithm based on dictionary learning

Xia Zheng-De, Song Na, Liu Bin, Pan Jin-Xiao, Yan Wen-Min, Shao Zi-Hui
    The camera array is an important tool for capturing the light field of a target in space. Obtaining a high-angular-resolution light field with a large-scale dense camera array increases both the sampling difficulty and the equipment cost, while the synchronization and transmission of the resulting mass of data further limit the sampling scale. To achieve dense reconstruction from sparsely sampled light fields, we analyze the correlation and redundancy of multi-view images of the same scene based on sparse light field data, and establish an effective mathematical model of light field dictionary learning and sparse coding. The trained light field atoms sparsely express the local spatial-angular consistency of the light field, and four-dimensional (4D) light field patches can be reconstructed from a two-dimensional (2D) image patch centered on each sensor pixel. The dictionary maps the global and local constraints of the 4D light field into a low-dimensional space, where they appear as the sparsity of each vector in the sparse representation domain and as constraints between the positions and values of the non-zero elements. According to the constraints among the sparse coding elements, we establish a sparse coding recovery model for virtual angular images and propose a sparse coding recovery method in the transform domain: the light field atoms in the dictionary are screened, the light field patches are linearly represented by the sparse representation matrix of the virtual angular image, and the virtual angular images are finally constructed by image fusion after the sparse inverse transform. Multi-scene dense reconstruction experiments verify the effectiveness of the proposed method. The results show that it recovers occlusion, shadow, and complex illumination with satisfactory quality, so it can be used for dense reconstruction of sparse light fields in complex scenes. This study achieves dense reconstruction of linearly sampled sparse light fields; dense reconstruction of nonlinearly sampled sparse light fields will be studied in the future to promote the practical application of light field imaging.
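The sparse coding step at the heart of the method described above can be illustrated with a greedy pursuit. The sketch below is not the paper's implementation; it is a minimal, self-contained example of Orthogonal Matching Pursuit recovering a k-sparse code over a random overcomplete dictionary, with all dimensions and names chosen for illustration only.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: find a k-sparse code x with y ~= D @ x.

    D: dictionary (n_features x n_atoms), columns assumed unit-norm.
    y: signal vector (n_features,).
    k: target sparsity (number of greedy iterations).
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit the signal on all selected atoms (least squares).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: a random unit-norm dictionary and a 3-sparse ground-truth code.
rng = np.random.default_rng(0)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)
true_code = np.zeros(64)
true_code[[3, 17, 41]] = [1.5, -2.0, 0.7]
y = D @ true_code

x_hat = omp(D, y, k=3)
print("residual norm:", np.linalg.norm(y - D @ x_hat))
```

In the paper's setting the dictionary is learned from light field patches rather than drawn at random, and the code of a virtual angular image is additionally constrained by neighboring views, but the pursuit structure is the same.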
      Corresponding author: Liu Bin, liubin414605032@163.com
    • Funding: Key Laboratory of Transient Impact Technology Fund, provincial and ministerial level (Grant No. 614260603030817)

  • Fig. 1. Algorithm workflow.

    Fig. 2. Light field overcomplete dictionary.

    Fig. 3. Quality of the reconstructed images: (a) performance versus sparsity, 256 × 256 pixels; (b) performance versus sparsity, 512 × 512 pixels; (c) PSNR at different resolutions; (d) performance versus redundancy, 256 × 256 pixels.

    Fig. 4. Images reconstructed with different sparsity and redundancy parameters: (a) K = 16, N = 256; (b) K = 34, N = 1024.

    Fig. 5. Dense reconstruction of a light field with occluded targets: (a) dense light field; (b), (e) reference images; (c), (d) reconstructed virtual images of view 2 and view 5; (g), (h) target images; (f), (i) residual images.

    Fig. 6. Dense reconstruction of a light field: (a) image reconstructed by the proposed algorithm; (b) image reconstructed by DIBR; (c) target image; (d) residual image; (e) dense light field.

    Table 1. Quality of images reconstructed with different sparsity and redundancy parameters.

    Sparsity (K), Redundancy (N)    MSE        PSNR/dB    SSIM      Time/s
    K = 16, N = 256                 54.4215    30.7731    0.8860    1266.08
    K = 34, N = 1024                49.0044    31.2285    0.8865    14306.55

    Table 2. Dense reconstruction of light fields in different scenes.

    Scene       Table      Bicycle    town       Boardgames    rosemary    Vinyl      bicycle*
    MSE         21.2124    54.4215    25.8005    53.9244       18.8950     22.4756    49.0044
    PSNR/dB     34.8649    30.7731    34.0145    30.8129       35.3673     34.6137    31.2285
    SSIM        0.9323     0.8860     0.9474     0.9341        0.9699      0.9421     0.8865
    * Sparsity K = 34, redundancy N = 1024.

Publication history
  • Received: 2019-10-23
  • Revised: 2019-12-16
  • Published: 2020-03-20
