What advantages do generative adversarial networks have over traditional training methods?

Acta Automatica Sinica, pp. 321-332. DOI: 10.16383/j.aas.
Review
Generative Adversarial Networks: The State of the Art and Beyond
WANG Kun-Feng1,2, GOU Chao1,3, DUAN Yan-Jie1,3, LIN Yi-Lun1,3, ZHENG Xin-Hu4, WANG Fei-Yue1,5
1. The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China;
2. Qingdao Academy of Intelligent Industries, Qingdao 266000, China;
3. University of Chinese Academy of Sciences, Beijing 100049, China;
4. Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55414, USA;
5. Research Center for Computational Experiments and Parallel Systems Technology, National University of Defense Technology, Changsha 410073, China
A Brief Look at Generative Adversarial Networks
贤仔科技苑
Supervised deep learning means the input data carry semantic labels, and a human judges whether the output is right or wrong. Many researchers believe, however, that unsupervised learning, in which the machine discovers regularities in raw data on its own, is the direction of the future. The generative adversarial network is one such method.

Christian Szegedy and colleagues introduced the concept of adversarial examples for deep learning in their ICLR 2014 paper: deliberately adding subtle perturbations to the input data produces samples that make a deep neural network give the wrong output. The error is obvious at a glance to a human, yet the machine falls straight into the trap. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy gave a classic illustration in their paper: the original image is a panda, which the network classifies as "panda" with 57.7% confidence. A tiny perturbation, the noise shown in the second image, is then added, and the network classifies the result as "gibbon" with 99.3% confidence, even though the human eye perceives no difference at all.

At first glance this looks like overfitting, since a modest change to the input leads to an incorrect output. In the figure above left, the top and bottom panels show adversarial examples arising from overfitting and underfitting respectively; the green o's and x's are the training set and the red o's and x's are adversarial examples, and clearly an underfit model can also misclassify a perturbed input. Goodfellow, however, gave a more precise explanation: adversarial misclassification is caused by the linear nature of the model.

Here is an example. Suppose we use logistic regression for binary classification, with input vector x = [2, -1, 3, -2, 2, 2, 1, -4, 5, 1] and weight vector w = [-1, -1, 1, -1, 1, -1, 1, 1, -1, 1]. The dot product is -3, so the predicted probability of class 1 is 0.0474. If the input is changed to x + 0.5w = [1.5, -1.5, 3.5, -2.5, 2.5, 1.5, 1.5, -3.5, 4.5, 1.5], the predicted probability of class 1 becomes 0.88. A small change in every dimension of the input flips the overall result.

Because adversarial examples cause recognition errors, some have taken them to be a deep flaw of deep learning. But Zachary Chase Lipton of the University of California, San Diego argued in a KDnuggets article that vulnerability to adversarial examples is not unique to deep learning; it is common across machine learning models, and further research into algorithms that resist adversarial examples will advance the whole field of machine learning.

Constructing adversarial examples: in the figure above, the adversarial examples in the third column are computed from the original images in the first column. The prediction in the first row changes from fox to goldfish, and in the second row to school bus.

The generative adversarial network therefore gives the neural network a special design, making it actively produce perturbed data to train the network's own capability. In short, a GAN consists of two parts: a generator and a discriminator. The generator is like an unscrupulous merchant selling counterfeit goods, while the discriminator is like a discerning buyer who must tell genuine from fake. The merchant's job is to deceive the buyer by any means available (generating adversarial samples), while the buyer must keep learning from being fooled and reduce the chance of being deceived. It is like a military exercise in which the blue army and the red army face off, each side's fighting ability strengthened by the contest. In a GAN, both parties must keep strengthening themselves and evolve together.

What is the benefit? In many situations we face a shortage of data. A generative model can make up the shortfall by manufacturing samples, producing an effect similar to supervised learning while in fact being unsupervised.
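The logistic-regression arithmetic above can be checked in a few lines. A minimal sketch, assuming signed entries for x and w (extraction appears to have dropped the minus signs) so that w·x = -3 as the text states:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: maps the score w.x to a probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary logistic regression from the text (signed entries assumed)
x = np.array([2, -1, 3, -2, 2, 2, 1, -4, 5, 1], dtype=float)
w = np.array([-1, -1, 1, -1, 1, -1, 1, 1, -1, 1], dtype=float)

p_clean = sigmoid(w @ x)    # w.x = -3, so p ~ 0.0474

# Perturb every dimension by 0.5 in the direction of its weight:
# the score rises by 0.5 * (w.w) = 5, from -3 to 2.
x_adv = x + 0.5 * w
p_adv = sigmoid(w @ x_adv)  # p ~ 0.88: the prediction flips

print(f"clean: {p_clean:.4f}  adversarial: {p_adv:.2f}")
```

The effect grows with dimensionality: a perturbation that is imperceptibly small in each coordinate can shift the linear score by a large total amount, which is the linearity argument the passage attributes to Goodfellow.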