In recent years, with the rapid improvement in computing performance, deep learning has developed explosively and has been applied in many fields. Against this background, CGH generation algorithms based on deep learning offer a new route to real-time, high-quality holographic display. The convolutional neural network is the most typical architecture in deep learning: through operations such as convolution, pooling, and full connectivity, it automatically extracts key local features from an image and composes them into more complex global features. Owing to their powerful feature extraction and generalization abilities, convolutional neural networks have been widely used in the field of holographic display. Compared with traditional iterative algorithms, deep-learning-based CGH algorithms compute much faster, but their image quality still needs further improvement.

In this paper, an attention convolutional neural network (RTC-Holo) based on the diffraction model of the angular spectrum method (ASM) is proposed to improve the generation quality of holograms while keeping their generation fast. The whole network consists of a real-valued and a complex-valued convolutional neural network: the real-valued network predicts the phase, the complex-valued network predicts the complex amplitude on the SLM plane, and the phase of the predicted complex amplitude is used for holographic encoding and numerical reconstruction.
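The ASM diffraction model that the network embeds can be sketched as a transfer-function multiplication in the Fourier domain. The following is a minimal numpy sketch of angular spectrum propagation; the function name and the optical parameters used in any demo (wavelength, pixel pitch, distance) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def asm_propagate(field, wavelength, z, pitch):
    """Propagate a complex field over distance z with the angular spectrum method.

    field      : 2-D complex array, the source-plane complex amplitude
    wavelength : wavelength in metres
    z          : propagation distance in metres (negative for back-propagation)
    pitch      : sampling pitch of the grid in metres
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=pitch)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i 2*pi z sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (negative argument) are suppressed.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg >= 0,
                 np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because |H| = 1 for all propagating frequencies, the operator is unitary on band-limited fields, which is what makes it usable as a differentiable forward model inside an unsupervised training loop.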
An attention mechanism is embedded in the downsampling stage of the phase prediction network to strengthen its feature extraction capability and thereby improve the quality of the generated phase-only holograms. The accurate ASM diffraction model is embedded in the whole network, so the network is trained in an unsupervised manner and no large-scale dataset needs to be labeled. The proposed algorithm generates high-quality 2K holograms within 0.015 s: the average peak signal-to-noise ratio (PSNR) of the reconstructed images reaches 32.12 dB, and the average structural similarity index measure (SSIM) reaches 0.934. Numerical simulations and optical experiments verify the feasibility and effectiveness of the proposed algorithm, providing strong support for applying deep learning theory and algorithms to real-time holographic display.
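The channel-attention idea used in the downsampling stage can be illustrated with a squeeze-and-excitation style gate: globally pool each channel, pass the result through a small bottleneck, and rescale the channels by the resulting weights. The layer shapes and names below are assumptions for illustration only, not the paper's actual architecture.

```python
import numpy as np

def channel_attention(feat, w1, b1, w2, b2):
    """Squeeze-and-excitation style channel attention over a (C, H, W) feature map.

    w1 (C/r, C) and w2 (C, C/r) are the two fully connected layers of the
    bottleneck with reduction ratio r; all weights here are illustrative.
    """
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze + b1, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden + b2)))      # sigmoid, one weight per channel
    return feat * gate[:, None, None]                     # reweight channels
```

In a trained network the gate learns to emphasize channels that carry phase-relevant structure, which is how attention improves feature extraction during downsampling.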
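The PSNR figure quoted above is the standard definition, 10 log10(peak² / MSE) between a reference image and its numerical reconstruction. A minimal sketch:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between reference and reconstruction."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```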