Quantized kernel least inverse hyperbolic sine adaptive filtering algorithm

Huo Yuan-Lian, Tuo Li-Hua, Qi Yong-Feng, Zhang Yin
    In the last few decades, the kernel method has been successfully used in the field of adaptive filtering to solve nonlinear problems. A kernel adaptive filter (KAF) uses a Mercer kernel to map data from the input space into a reproducing kernel Hilbert space (RKHS), where inner products can be evaluated cheaply via the so-called kernel trick. Kernel adaptive filtering algorithms outperform conventional adaptive filtering algorithms on nonlinear problems such as nonlinear channel equalization. For such nonlinear problems, a robust kernel least inverse hyperbolic sine (KLIHS) algorithm is proposed by combining the kernel method with the inverse hyperbolic sine function. The main disadvantage of KAF is that its radial-basis-function (RBF) network grows with every new data sample, which increases the computational complexity and memory requirements. Vector quantization (VQ) has been proposed to address this problem and has been successfully applied to existing kernel adaptive filtering algorithms. The main idea of VQ is to compress the input space through quantization so as to curb the growth of the network size. In this paper, vector quantization is applied to the input-space data, and a quantized kernel least inverse hyperbolic sine (QKLIHS) algorithm is constructed to restrain the growth of the network scale. The energy conservation relation and convergence condition of the QKLIHS algorithm are given. Simulation results for Mackey-Glass short-time chaotic time series prediction and a nonlinear channel equalization environment show that the proposed KLIHS and QKLIHS algorithms have advantages in convergence speed, robustness, and computational complexity.
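    As a minimal illustration of the kernel trick mentioned above, the sketch below evaluates the RKHS inner product with a Gaussian kernel; the function name and the kernel width `h` are illustrative choices of ours, not fixed by the paper:

```python
import numpy as np

def gaussian_kernel(x, y, h=1.0):
    # Evaluates the RKHS inner product <phi(x), phi(y)> directly,
    # without ever constructing the (infinite-dimensional) feature map phi.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.exp(-np.sum((x - y) ** 2) / (2 * h ** 2))

# A kernel value of 1 means the two inputs coincide in feature space.
print(gaussian_kernel([1.0, 2.0], [1.0, 2.0]))  # prints 1.0
```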
      Corresponding author: Huo Yuan-Lian, huoyuanlian@163.com
    • Funds: Project supported by the National Natural Science Foundation of China (Grant No. 61561044) and the Natural Science Foundation of Gansu Province, China (Grant No. 20JR10RA077).

  • Fig. 1.  Alpha-stable distribution noise (non-Gaussian environment) for $ \alpha = 1.3 $.

    Fig. 2.  Performance comparison of different algorithms under short-time chaotic time series prediction.

    Fig. 3.  Performance comparison of QKLIHS algorithms with different quantization thresholds $ \gamma $ under short-time chaotic time series prediction.

    Fig. 4.  Network size comparison of QKLIHS algorithms with different quantization thresholds $ \gamma $ under short-time chaotic time series prediction.

    Fig. 5.  Nonlinear channel.

    Fig. 6.  Performance comparison of different algorithms under nonlinear channel equalization.

    Fig. 7.  Performance comparison of QKLIHS algorithms with different quantization thresholds $ \gamma $ under nonlinear channel equalization.

    Fig. 8.  Network size comparison of QKLIHS algorithms with different quantization thresholds $ \gamma $ under nonlinear channel equalization.

    Table 1.  KLIHS algorithm.

    Initialization:
      Choose the step size $ \mu $ and kernel width $ h $; $ {\boldsymbol{a}}(1) = 2\mu \dfrac{1}{\sqrt{1 + d^4(1)}} d(1) $; $ {\boldsymbol{C}}(1) = \left\{ {\boldsymbol{x}}(1) \right\} $
    For each new sample pair $ \left\{ {\boldsymbol{x}}(n), d(n) \right\} $:
      1) Compute the output: $ y(n) = \displaystyle\sum\nolimits_{j = 1}^{n - 1} {\boldsymbol{a}}_j(n) \kappa\left( {\boldsymbol{x}}(j), {\boldsymbol{x}}(n) \right) $
      2) Compute the error: $ e(n) = d(n) - y(n) $
      3) Store the new center: $ {\boldsymbol{C}}(n) = \left\{ {\boldsymbol{C}}(n - 1), {\boldsymbol{x}}(n) \right\} $
      4) Update the coefficients: $ {\boldsymbol{a}}(n) = \Big\{ {\boldsymbol{a}}(n - 1), 2\mu \dfrac{1}{\sqrt{1 + e^4(n)}} e(n) \Big\} $
    End loop
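    The KLIHS recursion in Table 1 can be sketched in Python as follows. The coefficient $ 2\mu e(n)/\sqrt{1 + e^4(n)} $ is $ \mu $ times the gradient of the inverse-hyperbolic-sine cost $ \operatorname{asinh}(e^2) $. The Gaussian kernel, the parameter values, and the toy sine-learning demo are our own illustrative choices, not the paper's benchmark setup:

```python
import numpy as np

def klihs(X, d, mu=0.2, h=1.0):
    """Sketch of the KLIHS filter of Table 1 (scalar desired signal d)."""
    kernel = lambda u, v: np.exp(-np.sum((u - v) ** 2) / (2 * h ** 2))
    C = [X[0]]                                      # dictionary of centers
    a = [2 * mu * d[0] / np.sqrt(1 + d[0] ** 4)]    # coefficient list
    errors = []
    for n in range(1, len(X)):
        y = sum(a_j * kernel(c, X[n]) for a_j, c in zip(a, C))  # step 1
        e = d[n] - y                                # step 2
        C.append(X[n])                              # step 3: store new center
        a.append(2 * mu * e / np.sqrt(1 + e ** 4))  # step 4: IHS-derived gain
        errors.append(e)
    return np.asarray(errors)

# Toy demo: learn y = sin(x) online; prediction error shrinks over time.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=200)
err = klihs(X, np.sin(X))
```

Note that, exactly as the abstract warns, the dictionary `C` here grows by one center per sample; this is the cost that the quantized variant of Table 2 removes.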

    Table 2.  QKLIHS algorithm.

    Initialization:
      Choose the step size $ \mu $, kernel width $ h $, and quantization threshold $ \gamma $; $ {\boldsymbol{a}}(1) = 2\mu \dfrac{1}{\sqrt{1 + d^4(1)}} d(1) $; $ {\boldsymbol{C}}(1) = \left\{ {\boldsymbol{x}}(1) \right\} $
    For each new sample pair $ \left\{ {\boldsymbol{x}}(n), d(n) \right\} $:
     1) Compute the output: $ y(n) = \displaystyle\sum\nolimits_{j = 1}^{n - 1} {\boldsymbol{a}}_j(n) \kappa\left( {\boldsymbol{x}}(j), {\boldsymbol{x}}(n) \right) $
     2) Compute the error: $ e(n) = d(n) - y(n) $
     3) Compute the Euclidean distance between $ {\boldsymbol{x}}(n) $ and the current dictionary $ {\boldsymbol{C}}(n - 1) $:
    $ {\rm{dis}}\left( {\boldsymbol{x}}(n), {\boldsymbol{C}}(n - 1) \right) = \mathop{\min}\limits_{1 \leqslant j \leqslant {\rm{size}}\left( {\boldsymbol{C}}(n - 1) \right)} \left\| {\boldsymbol{x}}(n) - {\boldsymbol{C}}_j(n - 1) \right\| $
     4) If $ {\rm{dis}}\left( {\boldsymbol{x}}(n), {\boldsymbol{C}}(n - 1) \right) > \gamma $, update the dictionary, $ {\boldsymbol{x}}_{\rm{q}}(n) = {\boldsymbol{x}}(n) $, $ {\boldsymbol{C}}(n) = \left\{ {\boldsymbol{C}}(n - 1), {\boldsymbol{x}}_{\rm{q}}(n) \right\} $,
    and append the corresponding coefficient $ {\boldsymbol{a}}(n) = \left[ {\boldsymbol{a}}(n - 1), 2\mu \dfrac{1}{\sqrt{1 + e^4(n)}} e(n) \right] $
      Otherwise, keep the dictionary unchanged: $ {\boldsymbol{C}}(n) = {\boldsymbol{C}}(n - 1) $
    Find the index of the dictionary element nearest to the current datum, $ j^* = \mathop{\arg\min}\limits_{1 \leqslant j \leqslant {\rm{size}}\left( {\boldsymbol{C}}(n - 1) \right)} \left\| {\boldsymbol{x}}(n) - {\boldsymbol{C}}_j(n - 1) \right\| $,
    take $ {\boldsymbol{C}}_{j^*}(n - 1) $ as the quantized value of the current datum, and update the coefficient vector:
    $ {\boldsymbol{x}}_{\rm{q}}(n) = {\boldsymbol{C}}_{j^*}(n - 1) $, $ {\boldsymbol{a}}_{j^*}(n) = {\boldsymbol{a}}_{j^*}(n - 1) + 2\mu \dfrac{1}{\sqrt{1 + e^4(n)}} e(n) $
    End loop
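    The quantization step of Table 2 can be sketched the same way; again the Gaussian kernel, parameter values, and toy data are our own illustrative assumptions:

```python
import numpy as np

def qklihs(X, d, mu=0.2, h=1.0, gamma=0.5):
    """Sketch of the QKLIHS filter of Table 2; returns the final network size."""
    kernel = lambda u, v: np.exp(-np.sum((u - v) ** 2) / (2 * h ** 2))
    C = [X[0]]
    a = [2 * mu * d[0] / np.sqrt(1 + d[0] ** 4)]
    for n in range(1, len(X)):
        y = sum(a_j * kernel(c, X[n]) for a_j, c in zip(a, C))        # step 1
        e = d[n] - y                                                  # step 2
        dists = [np.linalg.norm(np.atleast_1d(X[n] - c)) for c in C]  # step 3
        g = 2 * mu * e / np.sqrt(1 + e ** 4)
        if min(dists) > gamma:               # step 4: grow the dictionary
            C.append(X[n])
            a.append(g)
        else:                                # quantize onto the nearest center
            a[int(np.argmin(dists))] += g
    return len(C)
```

Raising `gamma` merges more inputs into existing centers, so the dictionary (and hence the per-sample cost) stops growing linearly with the data, which is the network-size effect reported in Tables 4 and 5.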

    Table 3.  Mean $ \pm $ standard deviation of different algorithms under short-time chaotic time series prediction.

    Algorithm    Mean $ \pm $ deviation
    KLMS         $ 0.2256 \pm 0.1082 $
    KMCC         $ 0.1847 \pm 0.0034 $
    KLMP         $ 0.1831 \pm 0.0061 $
    KLL          $ 0.1807 \pm 0.0047 $
    KLIHS        $ 0.1773 \pm 0.0045 $

    Table 4.  Comparison of mean square error and network size of the QKLIHS algorithm with different quantization thresholds $ \gamma $ under short-time chaotic time series prediction.

    Quantization threshold    Error mean $ \pm $ deviation    Network size
    0                         $ 0.1738 \pm 0.0046 $           1000
    0.5                       $ 0.1759 \pm 0.0050 $           454
    0.9                       $ 0.1793 \pm 0.0041 $           172
    1.2                       $ 0.1820 \pm 0.0045 $           101
    2.0                       $ 0.2149 \pm 0.0056 $           41

    Table 5.  Steady-state error mean and network size of the QKLIHS algorithm with different quantization thresholds $ \gamma $ under nonlinear channel equalization.

    Quantization threshold    Error mean $ \pm $ deviation    Network size
    0                         $ 0.0422 \pm 0.0060 $           1000
    0.5                       $ 0.0454 \pm 0.0065 $           127
    0.9                       $ 0.0489 \pm 0.0063 $           68
    1.2                       $ 0.0560 \pm 0.0080 $           47
    2.0                       $ 0.0866 \pm 0.0095 $           26

Publication history
  • Received: 2022-05-30
  • Revised: 2022-08-04
  • Available online: 2022-11-08
  • Published: 2022-11-20
