基于图像色貌和梯度特征的图像质量客观评价

史晨阳 林燕丹

Objective image quality assessment based on image color appearance and gradient features

Shi Chen-Yang, Lin Yan-Dan
  • 图像质量评价(IQA)方法需要考虑如何从主观视觉度量结果出发, 设计出符合该结果的客观图像质量评价方法, 并应用到相关实际问题中. 本文从视觉感知特性出发, 量化色度和结构特征信息, 提出了基于色貌和梯度两个图像特征的图像质量客观评价模型. 两个色貌新指标(vividness和depth)是色度特征信息提取算子; 梯度算子用来提取结构特征信息. 其中, vividness相似图一方面作为特征提取算子计算失真图像局部质量分数, 另一方面作为图像全局权重系数反映每个像素的重要程度. 为了量化所提模型的主要参数, 根据通用模型性能评价指标, 使用Taguchi实验设计方法进行优化. 为了验证该模型的性能, 使用4个常用图像质量数据库中的94幅参考图像和4830幅失真图像进行对比测试, 从预测精度、计算复杂度和泛化性三方面进行分析. 结果表明, 所提模型的精度PLCC值在4个数据库中最低为0.8455, 最高可达0.9640, 综合性能优于10个典型和近期发表的IQA模型. 研究结果表明, 所提模型是有效、可行的, 是一个性能优异的IQA模型.
    With the rapid development of color image contents and imaging devices in various multimedia communication systems, conventional grayscale images are being replaced by chromatic ones. Under such a transition, an image quality assessment (IQA) model needs to start from subjective visual measurements, be designed in accordance with those results, and then be applied to related practical problems. Based on visual perception characteristics, chromaticity and structural feature information are quantified, and an objective IQA model combining color appearance and gradient features, named the color appearance and gradient similarity (CAGS) model, is proposed in this paper. Two new color appearance indices, vividness and depth, are selected to build the chromatic similarity map, and the structural information is characterized by the gradient similarity map. The vividness map plays two roles in the proposed model: it serves as a feature extractor to compute the local quality of the distorted image, and as a weighting function to reflect the importance of each local region. To quantify the specific parameters of CAGS, the Taguchi method is used, and the four main parameters of the model, i.e., KV, KD, KG and α, are determined based on statistical correlation indices. The optimal parameters of CAGS are KV = KD = 0.02, KG = 50, and α = 0.1. Furthermore, CAGS is tested on 94 reference images and 4830 distorted images from four open image databases (LIVE, CSIQ, TID2013 and IVC), and the influences of 35 distortion types on IQA are analyzed. Extensive experiments comparing CAGS with 10 state-of-the-art and recently published IQA models are performed on the four publicly available benchmark databases, covering prediction accuracy, computational complexity and generalization performance. The experimental results show that the PLCC of the CAGS model ranges from 0.8455 to 0.9640 over the four databases, and the results for the commonly used evaluation criteria show that CAGS is highly consistent with subjective evaluations. Among the 35 distortion types, two of them, namely contrast change and change of color saturation, degrade the performance of CAGS and most other IQA models the most, while CAGS still yields the highest number of top-three rankings. Moreover, the SROCC values of CAGS for the other distortion types are all larger than 0.6, and the SROCC exceeds 0.95 for 14 distortion types. In addition, CAGS maintains a moderate computational complexity. These test and comparison results show that the CAGS model is effective and feasible, and has excellent performance.
      通信作者: 林燕丹, ydlin@fudan.edu.cn
    • 基金项目: 国家重点研发计划(批准号: 2017YFB0403700)资助的课题
      Corresponding author: Lin Yan-Dan, ydlin@fudan.edu.cn
    • Funds: Project supported by the National Key R&D Program of China (Grant No. 2017YFB0403700)
    [1] Yao J C, Liu G Z 2018 Acta Phys. Sin. 67 108702
    [2] Athar S, Wang Z 2019 IEEE Access 7 140030
    [3] Lin W S, Kuo C C J 2011 J. Vis. Commun. Image R. 22 297
    [4] Chang H W, Zhang Q W, Wu Q G, Gan Y 2015 Neurocomputing 151 1142
    [5] Wang Z, Bovik A C, Sheikh H R, Simoncelli E P 2004 IEEE Trans. Image Process. 13 600
    [6] Sheikh H R, Bovik A C 2006 IEEE Trans. Image Process. 15 430
    [7] Sheikh H R, Bovik A C, de Veciana G 2005 IEEE Trans. Image Process. 14 2117
    [8] Wang Z, Simoncelli E P, Bovik A C 2003 37th Asilomar Conference on Signals, Systems and Computers Pacific Grove, CA, November 9−12, 2003 pp1398−1402
    [9] Wang Z, Li Q 2011 IEEE Trans. Image Process. 20 1185
    [10] Larson E C, Chandler D M 2010 J. Electron. Imaging 19 011006
    [11] Zhang L, Zhang L, Mou X Q 2010 IEEE International Conference on Image Processing Hong Kong, China, September 26−29, 2010 pp321−324
    [12] Zhang L, Zhang L, Mou X Q, Zhang D 2011 IEEE Trans. Image Process. 20 2378
    [13] Liu A M, Lin W S, Narwaria M 2012 IEEE Trans. Image Process. 21 1500
    [14] Jia H Z, Zhang L, Wang T H 2018 IEEE Access 6 65885
    [15] Yao J C, Shen J 2020 Acta Phys. Sin. 69 148702
    [16] Robertson A R 1990 Color Res. Appl. 15 167
    [17] Mahy M, Van Eycken L, Oosterlinck A 1994 Color Res. Appl. 19 105
    [18] Lee D, Plataniotis K N 2015 IEEE Trans. Image Process. 24 3950
    [19] Lee D, Plataniotis K N 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) Florence, Italy, May 4−9, 2014 pp166−170
    [20] Berns R S 2014 Color Res. Appl. 39 322
    [21] Zhang L, Shen Y, Li H Y 2014 IEEE Trans. Image Process. 23 4270
    [22] Jain R C, Kasturi R, Schunck B G 1995 Machine Vision (New York: McGraw-Hill) pp140−185
    [23] Sonka M, Hlavac V, Boyle R 2008 Image Processing, Analysis and Machine Vision (3rd Ed.) (Stamford: Cengage Learning) p77
    [24] Xue W F, Zhang L, Mou X Q, Bovik A C 2014 IEEE Trans. Image Process. 23 684
    [25] Kim D O, Han H S, Park R H 2010 IEEE Trans. Consum. Electr. 56 930
    [26] Nafchi H Z, Shahkolaei A, Hedjam R, Cheriet M 2016 IEEE Access 4 5579
    [27] Taguchi G, Yokoyama Y, Wu Y 1993 Taguchi Methods: Design of Experiments (Dearborn, MI: ASI Press) pp59−63
    [28] Ponomarenko N, Jin L, Ieremeiev O, Lukin V, Egiazarian K, Astola J, Vozel B, Chehdi K, Carli M, Battisti F, Kuo C C J 2015 Signal Process. Image Commun. 30 57
    [29] Larson E C, Chandler D M http://vision.eng.shizuoka.ac.jp/mod/page/view.php?id=23 [2020-7-13]
    [30] Sheikh H R, Sabir M F, Bovik A C 2006 IEEE Trans. Image Process. 15 3440
    [31] Ninassi A, Le Callet P, Autrusseau F 2006 Conference on Human Vision and Electronic Imaging XI (Proc. SPIE 6057) San Jose, CA, USA, January 16−18, 2006 p1
    [32] Wang S Q, Gu K, Zeng K, Wang Z, Lin W S 2016 IEEE Comput. Graph. Appl. 38 47
    [33] Lin C H, Wu C C, Yang P H, Kuo T Y 2009 J. Disp. Technol. 5 323
    [34] Preiss J, Fernandes F, Urban P 2014 IEEE Trans. Image Process. 23 1366

  • 图 1  颜色1和2的vividness($ V_{ab}^* $)和depth($ D_{ab}^* $)维度表征, 线段长度定义对应属性[20]

    Fig. 1.  Dimensions of vividness, $ V_{ab}^* $, and depth, $ D_{ab}^* $ for colors 1 and 2. Line lengths define each attribute[20].
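    Berns [20] defines vividness as the Euclidean distance of a color from the CIELAB origin (black) and depth as its distance from the reference white (L* = 100), which is what the line lengths in Fig. 1 illustrate. A minimal sketch of the two attributes is given below; the function names are illustrative, and any normalization applied when these maps enter the CAGS similarity measures is not shown here.

```python
import numpy as np

def vividness(L, a, b):
    """Berns' vividness V*_ab: distance from the CIELAB origin (black point)."""
    return np.sqrt(L ** 2 + a ** 2 + b ** 2)

def depth(L, a, b):
    """Berns' depth D*_ab: distance from the reference white (L* = 100)."""
    return np.sqrt((100.0 - L) ** 2 + a ** 2 + b ** 2)
```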

    图 2  从LIVE数据库中提取的典型图像 (a)为参考图像; (b)为高斯模糊畸变类型的失真图像; (c)和(e)分别是参考图像的Vividness和Depth图; (d)和(f)分别是失真图像的Vividness和Depth图; (g)是色貌相似图; (h)为梯度相似图

    Fig. 2.  Typical images extracted from LIVE: (a) the reference image; (b) the distorted version of it with the Gaussian blur distortion type; (c) and (e) the vividness and depth maps of the reference image, respectively; (d) and (f) the vividness and depth maps of the distorted image, respectively; (g) the color appearance similarity map obtained by combining the vividness and depth similarity maps; (h) the gradient similarity map.

    图 3  本文提出的IQA模型CAGS的计算过程

    Fig. 3.  Illustration for the computational process of the proposed IQA model CAGS.
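    Fig. 3 summarizes the computational flow of CAGS. A minimal sketch of that pipeline is given below, assuming an SSIM/FSIM-style form for the similarity maps, Sobel gradients on the lightness channel, and max-vividness weighting for the pooling; the exact combination rule and weighting in the paper may differ, and all function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def similarity(x, y, k):
    """SSIM/FSIM-style pointwise similarity map with stabilizing constant k."""
    return (2.0 * x * y + k) / (x ** 2 + y ** 2 + k)

def gradient_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    return np.hypot(gx, gy)

def cags_score(ref_lab, dst_lab, kv=0.02, kd=0.02, kg=50.0, alpha=0.1):
    """Sketch of a CAGS-style quality score for CIELAB images of shape (H, W, 3).

    kv, kd, kg are stabilizing constants and alpha a balancing exponent; the
    optimal values reported in the paper are kv = kd = 0.02, kg = 50, alpha = 0.1.
    """
    Lr, ar, br = ref_lab[..., 0], ref_lab[..., 1], ref_lab[..., 2]
    Ld, ad, bd = dst_lab[..., 0], dst_lab[..., 1], dst_lab[..., 2]

    # Color-appearance feature maps (Berns' vividness and depth).
    vr = np.sqrt(Lr ** 2 + ar ** 2 + br ** 2)
    vd = np.sqrt(Ld ** 2 + ad ** 2 + bd ** 2)
    dr = np.sqrt((100.0 - Lr) ** 2 + ar ** 2 + br ** 2)
    dd = np.sqrt((100.0 - Ld) ** 2 + ad ** 2 + bd ** 2)

    # Similarity maps: vividness and depth (color appearance), gradient (structure).
    s_v = similarity(vr, vd, kv)
    s_d = similarity(dr, dd, kd)
    s_g = similarity(gradient_magnitude(Lr), gradient_magnitude(Ld), kg)

    # Merge color-appearance and structural similarities (illustrative rule only).
    s_local = (s_v * s_d) ** alpha * s_g ** (1.0 - alpha)

    # Pool with a vividness-based weight reflecting each pixel's importance.
    w = np.maximum(vr, vd)
    return float(np.sum(s_local * w) / np.sum(w))
```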

    图 4  SROCC和RMSE对应的不同水准的S/N

    Fig. 4.  S/N ratio of different levels for SROCC and RMSE.

    图 5  不同KG值对模型性能的影响

    Fig. 5.  Performance of different KG values.

    图 6  基于TID2013数据库的主观MOS与IQA模型计算结果拟合对比 (a) IW-SSIM; (b) IFC; (c) VIF; (d) MAD; (e) RFSIM; (f) FSIMc; (g) GSM; (h) CVSS; (i) CAGS

    Fig. 6.  Scatter plots of subjective MOS against scores calculated by IQA models’ prediction for TID2013 databases: (a) IW-SSIM; (b) IFC; (c) VIF; (d) MAD; (e) RFSIM; (f) FSIMc; (g) GSM; (h) CVSS; (i) CAGS.

    图 7  CAGS与MPCC在TID2013数据库中不同失真类型的PLCC值对比

    Fig. 7.  PLCC comparison of different distortion types between CAGS and MPCC on TID2013.

    表 1  IQA数据库基本信息

    Table 1.  Benchmark test databases for IQA.

    数据库 | 原始图像数量 | 失真图像数量 | 失真类型 | 观察者
    TID2013 | 25 | 3000 | 24 | 971
    CSIQ | 30 | 866 | 6 | 35
    LIVE | 29 | 779 | 5 | 161
    IVC | 10 | 185 | 4 | 15
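    For each database in Table 1, the objective scores are compared with the subjective MOS/DMOS values using the standard criteria reported in Tables 3–5 (SROCC, KROCC, PLCC and RMSE). A minimal sketch of these criteria is shown below; note that in IQA practice PLCC and RMSE are usually computed after a nonlinear (logistic) mapping of the objective scores, which is omitted here, and the function name is illustrative.

```python
import numpy as np
from scipy import stats

def iqa_criteria(mos, pred):
    """SROCC, KROCC, PLCC and RMSE between subjective scores and model predictions."""
    mos, pred = np.asarray(mos, dtype=float), np.asarray(pred, dtype=float)
    srocc = stats.spearmanr(mos, pred).correlation
    krocc = stats.kendalltau(mos, pred).correlation
    plcc = stats.pearsonr(mos, pred)[0]
    rmse = float(np.sqrt(np.mean((mos - pred) ** 2)))
    return srocc, krocc, plcc, rmse
```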

    表 2  变量参数及其控制水准

    Table 2.  Influence factors and level setting for CAGS.

    代号 | 参数表述 | 水准数 | 水准一 | 水准二 | 水准三
    A | KV | 3 | 0.002 | 0.02 | 0.2
    B | KD | 3 | 0.002 | 0.02 | 0.2
    C | KG | 3 | 10 | 50 | 100
    D | α | 3 | 0.1 | 0.5 | 1

    表 3  采用L9(3^4)正交表的实验设计及IVC数据库测试结果

    Table 3.  Experimental design with an L9(3^4) orthogonal array and test results on the IVC database.

    实验序号 | A | B | C | D | SROCC | SROCC的S/N | RMSE | RMSE的S/N
    1 | 1 | 1 | 1 | 1 | 0.9300 | –0.6303 | 0.4113 | 7.7168
    2 | 1 | 2 | 2 | 2 | 0.9192 | –0.7318 | 0.4533 | 6.8723
    3 | 1 | 3 | 3 | 3 | 0.9096 | –0.8230 | 0.4825 | 6.3301
    4 | 2 | 1 | 2 | 3 | 0.9171 | –0.7517 | 0.4596 | 6.7524
    5 | 2 | 2 | 3 | 1 | 0.9173 | –0.7498 | 0.4672 | 6.6099
    6 | 2 | 3 | 1 | 2 | 0.9291 | –0.6388 | 0.4142 | 7.6558
    7 | 3 | 1 | 3 | 2 | 0.9114 | –0.8058 | 0.4735 | 6.4936
    8 | 3 | 2 | 1 | 3 | 0.9279 | –0.6500 | 0.4174 | 7.5890
    9 | 3 | 3 | 2 | 1 | 0.9195 | –0.7290 | 0.4481 | 6.9725
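    The S/N columns in Table 3 match the standard Taguchi definitions, "larger is better" for SROCC and "smaller is better" for RMSE. A minimal check against the first trial (function names are illustrative):

```python
import math

def sn_larger_is_better(y):
    """Taguchi S/N (dB) for a larger-is-better response, single observation."""
    return -10.0 * math.log10(1.0 / y ** 2)

def sn_smaller_is_better(y):
    """Taguchi S/N (dB) for a smaller-is-better response, single observation."""
    return -10.0 * math.log10(y ** 2)

# Trial 1 of Table 3: SROCC = 0.9300 and RMSE = 0.4113.
print(round(sn_larger_is_better(0.9300), 4))   # -0.6303, as listed
print(round(sn_smaller_is_better(0.4113), 4))  # 7.7168, as listed
```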

    表 4  对比不同IQA模型的4个数据库性能

    Table 4.  Performance comparison of IQA models on four databases.

    数据库 | 指标 | SSIM | IW-SSIM | IFC | VIF | MAD | RFSIM | FSIMc | GSM | CVSS | MPCC | Proposed
    TID2013 | SROCC | 0.7417 | 0.7779 | 0.5389 | 0.6769 | 0.7807 | 0.7744 | 0.8510 | 0.7946 | 0.8069 | 0.8452 | 0.8316
    TID2013 | PLCC | 0.7895 | 0.8319 | 0.5538 | 0.7720 | 0.8267 | 0.8333 | 0.8769 | 0.8464 | 0.8406 | 0.8616 | 0.8445
    TID2013 | RMSE | 0.7608 | 0.6880 | 1.0322 | 0.7880 | 0.6975 | 0.6852 | 0.5959 | 0.6603 | 0.6715 | 0.6293 | 0.6639
    TID2013 | KROCC | 0.5588 | 0.5977 | 0.3939 | 0.5147 | 0.6035 | 0.5951 | 0.6665 | 0.6255 | 0.6331 | — | 0.6469
    CSIQ | SROCC | 0.8756 | 0.9213 | 0.7671 | 0.9195 | 0.9466 | 0.9295 | 0.9310 | 0.9108 | 0.9580 | 0.9569 | 0.9198
    CSIQ | PLCC | 0.8613 | 0.9144 | 0.8384 | 0.9277 | 0.9502 | 0.9179 | 0.9192 | 0.8964 | 0.9589 | 0.9586 | 0.9014
    CSIQ | RMSE | 0.1334 | 0.1063 | 0.1431 | 0.0980 | 0.0818 | 0.1042 | 0.1034 | 0.1164 | 0.0745 | 0.0747 | 0.1137
    CSIQ | KROCC | 0.6907 | 0.7529 | 0.5897 | 0.7537 | 0.7970 | 0.7645 | 0.7690 | 0.7374 | 0.8171 | — | 0.7487
    LIVE | SROCC | 0.9479 | 0.9567 | 0.9259 | 0.9636 | 0.9669 | 0.9401 | 0.9599 | 0.9561 | 0.9672 | 0.9660 | 0.9734
    LIVE | PLCC | 0.9449 | 0.9522 | 0.9268 | 0.9604 | 0.9675 | 0.9354 | 0.9503 | 0.9512 | 0.9651 | 0.9622 | 0.9640
    LIVE | RMSE | 8.9455 | 8.3473 | 10.2643 | 7.6137 | 6.9073 | 9.6642 | 7.1997 | 8.4327 | 7.1573 | 7.4397 | 8.3251
    LIVE | KROCC | 0.7963 | 0.8175 | 0.7579 | 0.8282 | 0.8421 | 0.7816 | 0.8366 | 0.8150 | 0.8406 | — | 0.8658
    IVC | SROCC | 0.9018 | 0.9125 | 0.8993 | 0.8964 | 0.9146 | 0.8192 | 0.9293 | 0.8560 | 0.8836 | — | 0.9195
    IVC | PLCC | 0.9119 | 0.9231 | 0.9093 | 0.9028 | 0.9210 | 0.8361 | 0.9392 | 0.8662 | 0.8438 | — | 0.9298
    IVC | RMSE | 0.4999 | 0.4686 | 0.5069 | 0.5239 | 0.4746 | 0.6684 | 0.4183 | 0.6088 | 0.6538 | — | 0.4483
    IVC | KROCC | 0.7223 | 0.7339 | 0.7202 | 0.7158 | 0.7406 | 0.6452 | 0.7636 | 0.6609 | 0.6957 | — | 0.7488
    权重平均 | SROCC | 0.8051 | 0.8376 | 0.6560 | 0.7750 | 0.8456 | 0.8306 | 0.8859 | 0.8438 | 0.8628 | — | 0.8737
    权重平均 | PLCC | 0.8321 | 0.8696 | 0.6786 | 0.8353 | 0.8752 | 0.8650 | 0.8987 | 0.8730 | 0.8820 | — | 0.8772
    权重平均 | KROCC | 0.6270 | 0.6662 | 0.5002 | 0.6158 | 0.6819 | 0.6575 | 0.7160 | 0.6775 | 0.7020 | — | 0.7044
    直接平均 | SROCC | 0.8668 | 0.8921 | 0.7828 | 0.8641 | 0.9022 | 0.8658 | 0.9178 | 0.8794 | 0.9039 | — | 0.9111
    直接平均 | PLCC | 0.8769 | 0.9054 | 0.8071 | 0.8907 | 0.9164 | 0.8807 | 0.9214 | 0.8901 | 0.9021 | — | 0.9099
    直接平均 | KROCC | 0.6920 | 0.7255 | 0.6154 | 0.7031 | 0.7458 | 0.6966 | 0.7589 | 0.7097 | 0.7466 | — | 0.7526
    注: “—”表示无对应数据.
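    The 权重平均 (weighted average) rows in Table 4 are consistent with weighting each database's score by its number of distorted images from Table 1, while the 直接平均 (direct average) rows are plain means over the four databases. A quick check for the SSIM SROCC column (the function name is illustrative):

```python
def weighted_average(values, weights):
    """Average of per-database scores weighted by database size."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# SSIM SROCC on TID2013, CSIQ, LIVE and IVC, weighted by the numbers of
# distorted images listed in Table 1 (3000, 866, 779 and 185).
srocc = [0.7417, 0.8756, 0.9479, 0.9018]
sizes = [3000, 866, 779, 185]
print(round(weighted_average(srocc, sizes), 4))  # 0.8051, as in Table 4
```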

    表 5  IQA模型的不同失真类型SROCC值对比

    Table 5.  SROCC values of IQA models for different types of distortions.

    数据库 | 失真类型 | SSIM | IW-SSIM | IFC | VIF | MAD | RFSIM | FSIMc | GSM | CVSS | MPCC | Proposed
    TID2013 | AGN | 0.8671 | 0.8438 | 0.6612 | 0.8994 | 0.8843 | 0.8878 | 0.9101 | 0.9064 | 0.9401 | 0.8666 | 0.9359
    TID2013 | ANC | 0.7726 | 0.7515 | 0.5352 | 0.8299 | 0.8019 | 0.8476 | 0.8537 | 0.8175 | 0.8639 | 0.8187 | 0.8653
    TID2013 | SCN | 0.8515 | 0.8167 | 0.6601 | 0.8835 | 0.8911 | 0.8825 | 0.8900 | 0.9158 | 0.9077 | 0.7396 | 0.9276
    TID2013 | MN | 0.7767 | 0.8020 | 0.6932 | 0.8450 | 0.7380 | 0.8368 | 0.8094 | 0.7293 | 0.7715 | 0.7032 | 0.7526
    TID2013 | HFN | 0.8634 | 0.8553 | 0.7406 | 0.8972 | 0.8876 | 0.9145 | 0.9094 | 0.8869 | 0.9097 | 0.8957 | 0.9159
    TID2013 | IN | 0.7503 | 0.7281 | 0.6208 | 0.8537 | 0.2769 | 0.9062 | 0.8251 | 0.7965 | 0.7457 | 0.6747 | 0.8361
    TID2013 | QN | 0.8657 | 0.8468 | 0.6282 | 0.7854 | 0.8514 | 0.8968 | 0.8807 | 0.8841 | 0.8869 | 0.7931 | 0.8718
    TID2013 | GB | 0.9668 | 0.9701 | 0.8907 | 0.9650 | 0.9319 | 0.9698 | 0.9551 | 0.9689 | 0.9348 | 0.9218 | 0.9614
    TID2013 | DEN | 0.9254 | 0.9152 | 0.7779 | 0.8911 | 0.9252 | 0.9359 | 0.9330 | 0.9432 | 0.9427 | 0.9510 | 0.9466
    TID2013 | JPEG | 0.9200 | 0.9187 | 0.8357 | 0.9192 | 0.9217 | 0.9398 | 0.9339 | 0.9284 | 0.9521 | 0.8964 | 0.9585
    TID2013 | JP2K | 0.9468 | 0.9506 | 0.9078 | 0.9516 | 0.9511 | 0.9518 | 0.9589 | 0.9602 | 0.9587 | 0.9160 | 0.9620
    TID2013 | JPTE | 0.8493 | 0.8388 | 0.7425 | 0.8409 | 0.8283 | 0.8312 | 0.8610 | 0.8512 | 0.8613 | 0.8571 | 0.8644
    TID2013 | J2TE | 0.8828 | 0.8656 | 0.7769 | 0.8761 | 0.8788 | 0.9061 | 0.8919 | 0.9182 | 0.8851 | 0.8409 | 0.9250
    TID2013 | NEPN | 0.7821 | 0.8011 | 0.5737 | 0.7720 | 0.8315 | 0.7705 | 0.7937 | 0.8130 | 0.8201 | 0.7753 | 0.7833
    TID2013 | Block | 0.5720 | 0.3717 | 0.2414 | 0.5306 | 0.2812 | 0.0339 | 0.5532 | 0.6418 | 0.5152 | 0.5396 | 0.6015
    TID2013 | MS | 0.7752 | 0.7833 | 0.5522 | 0.6276 | 0.6450 | 0.5547 | 0.7487 | 0.7875 | 0.7150 | 0.7520 | 0.7441
    TID2013 | CTC | 0.3775 | 0.4593 | 0.1798 | 0.8386 | 0.1972 | 0.3989 | 0.4679 | 0.4857 | 0.2940 | 0.7814 | 0.4514
    TID2013 | CCS | 0.4141 | 0.4196 | 0.4029 | 0.3009 | 0.0575 | 0.0204 | 0.8359 | 0.3578 | 0.2614 | 0.7054 | 0.3711
    TID2013 | MGN | 0.7803 | 0.7728 | 0.6143 | 0.8486 | 0.8409 | 0.8464 | 0.8569 | 0.8348 | 0.8799 | 0.8766 | 0.8700
    TID2013 | CN | 0.8566 | 0.8762 | 0.8160 | 0.8946 | 0.9064 | 0.8917 | 0.9135 | 0.9124 | 0.9351 | 0.8174 | 0.9168
    TID2013 | LCNI | 0.9057 | 0.9037 | 0.8160 | 0.9204 | 0.9443 | 0.9010 | 0.9485 | 0.9563 | 0.9629 | 0.8095 | 0.9574
    TID2013 | ICQD | 0.8542 | 0.8401 | 0.6006 | 0.8414 | 0.8745 | 0.8959 | 0.8815 | 0.8973 | 0.9108 | 0.8596 | 0.9060
    TID2013 | CHA | 0.8775 | 0.8682 | 0.8210 | 0.8848 | 0.8310 | 0.8990 | 0.8925 | 0.8823 | 0.8523 | 0.8094 | 0.8768
    TID2013 | SSR | 0.9461 | 0.9474 | 0.8885 | 0.9353 | 0.9567 | 0.9326 | 0.9576 | 0.9668 | 0.9605 | 0.9178 | 0.9580
    CSIQ | AWGN | 0.8974 | 0.9380 | 0.8431 | 0.9575 | 0.9541 | 0.9441 | 0.9359 | 0.9440 | 0.9670 | 0.9329 | 0.9652
    CSIQ | JPEG | 0.9543 | 0.9662 | 0.9412 | 0.9705 | 0.9615 | 0.9502 | 0.9664 | 0.9632 | 0.9689 | 0.9564 | 0.9573
    CSIQ | JP2K | 0.9605 | 0.9683 | 0.9252 | 0.9672 | 0.9752 | 0.9643 | 0.9704 | 0.9648 | 0.9777 | 0.9630 | 0.9545
    CSIQ | AGPN | 0.8924 | 0.9059 | 0.8261 | 0.9511 | 0.9570 | 0.9357 | 0.9370 | 0.9387 | 0.9516 | 0.9517 | 0.9492
    CSIQ | GB | 0.9608 | 0.9782 | 0.9527 | 0.9745 | 0.9602 | 0.9643 | 0.9729 | 0.9589 | 0.9789 | 0.9664 | 0.9574
    CSIQ | CTC | 0.7925 | 0.9539 | 0.4873 | 0.9345 | 0.9207 | 0.9527 | 0.9438 | 0.9354 | 0.9324 | 0.9399 | 0.9273
    LIVE | JP2K | 0.9614 | 0.9649 | 0.9113 | 0.9696 | 0.9676 | 0.9323 | 0.9724 | 0.9700 | 0.9719 | 0.9608 | 0.9822
    LIVE | JPEG | 0.9764 | 0.9808 | 0.9468 | 0.9846 | 0.9764 | 0.9584 | 0.9840 | 0.9778 | 0.9836 | 0.9674 | 0.9836
    LIVE | AWGN | 0.9694 | 0.9667 | 0.9382 | 0.9858 | 0.9844 | 0.9799 | 0.9716 | 0.9774 | 0.9809 | 0.9457 | 0.9837
    LIVE | GB | 0.9517 | 0.9720 | 0.9584 | 0.9728 | 0.9465 | 0.9066 | 0.9708 | 0.9518 | 0.9662 | 0.9561 | 0.9641
    LIVE | FF | 0.9556 | 0.9442 | 0.9629 | 0.9650 | 0.9569 | 0.9237 | 0.9519 | 0.9402 | 0.9592 | 0.9627 | 0.9633

    表 6  计算复杂度对比

    Table 6.  Time cost comparisons.

    IQA模型 | 运行时间/s | IQA模型 | 运行时间/s
    PSNR | 0.0186 | RFSIM | 0.1043
    SSIM | 0.0892 | FSIMc | 0.3505
    IW-SSIM | 0.6424 | GSM | 0.1018
    IFC | 1.1554 | CVSS | 0.0558
    VIF | 1.1825 | MPCC | —
    MAD | 2.7711 | CAGS | 0.4814
出版历程
  • 收稿日期:  2020-05-19
  • 修回日期:  2020-07-12
  • 上网日期:  2020-11-09
  • 刊出日期:  2020-11-20
