Stanford's course CS231n (Convolutional Neural Networks for Visual Recognition) is a foundational course on deep learning and computer vision, and is highly regarded in academia. The course ran again this April, and the new CS231n Spring 2017, still led by Fei-Fei Li, brings plenty of fresh material. What 机器之心 (Synced) shares today is the eighth lecture: Deep Learning Software. The main topics are: a comparison of CPUs and GPUs; an overview of deep learning frameworks; worked examples in TensorFlow and PyTorch; and a comparison of the major frameworks.

**I. CPU and GPU**

CPU: fewer cores, but each core is faster and more capable; better suited to sequential tasks.

GPU: far more cores, but each core runs more slowly; better suited to parallel tasks.

**II. A brief overview of deep learning frameworks**

Last year only Caffe, Torch, Theano and TensorFlow were available to us; this year a whole series of newer frameworks has been added on top of those: Caffe2, PyTorch, PaddlePaddle, CNTK, MXNet and more, a real "hundred flowers blooming". The most commonly used today are PyTorch and TensorFlow, with Caffe and Caffe2 next.

The key points of a deep learning framework are:

(1) it makes building large computational graphs easy;

(2) it makes computing gradients within those graphs easy;

(3) it runs efficiently on GPUs (cuDNN, cuBLAS, etc.).

**III. A simple TensorFlow example**

Below we walk through a simple example of training a neural network in TensorFlow: a two-layer network with ReLU activations, trained on random data.

**a. Define the computational graph**

1. Create placeholders for the input x, the weights w1 and w2, and the targets y.

2. Define the forward pass to compute the predictions for y and the loss. Note that nothing is computed here; this only builds the graph!

3. Tell TensorFlow to compute the gradients of the loss with respect to w1 and w2. Still no computation; this, too, only adds to the graph.
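The original shows the code only as slide screenshots, which are not preserved here; the following is a minimal sketch of the graph-building step in the TensorFlow 1.x API, reconstructed from the description above (the sizes N, D, H are illustrative choices, not values from the slides):

```python
import numpy as np
import tensorflow as tf

N, D, H = 64, 1000, 100

# 1. Placeholders for the input, targets and weights; data is fed in later.
x  = tf.placeholder(tf.float32, shape=(N, D))
y  = tf.placeholder(tf.float32, shape=(N, D))
w1 = tf.placeholder(tf.float32, shape=(D, H))
w2 = tf.placeholder(tf.float32, shape=(H, D))

# 2. Forward pass: two-layer ReLU network and L2 loss.
#    This only adds nodes to the graph; nothing runs yet.
h      = tf.maximum(tf.matmul(x, w1), 0)
y_pred = tf.matmul(h, w2)
diff   = y_pred - y
loss   = tf.reduce_mean(tf.reduce_sum(diff ** 2, axis=1))

# 3. Symbolic gradients of the loss w.r.t. w1 and w2; again, graph only.
grad_w1, grad_w2 = tf.gradients(loss, [w1, w2])
```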
**b. Run the graph**

Now that the graph is built, we move on to actually running it.

Create the NumPy arrays that will be fed into the placeholders defined above.

Run the graph: feed the numpy arrays in for x, y, w1 and w2, and get back numpy arrays holding the loss and the gradients with respect to w1 and w2.

Train the network: run the graph over and over, using the gradients to update the weights.

Change w1 and w2 from placeholder() to Variable(), so the weights live inside the graph instead of being copied in and out on every call.

Add assign operations that update w1 and w2 (as part of the graph).

Run the graph once to initialize w1 and w2, then run it for many training iterations.

But a problem appears: the loss does not decrease! The reason is that the assign operations are never actually executed.

The fix is to add a dummy graph node that depends on the updates, and tell the graph to compute that dummy node.

Alternatively, use an optimizer to compute the gradients and update the weights; remember to run the optimizer's output!

You can also use the predefined common loss functions, and use Xavier initialization via tf.layers, which sets up the weights and biases automatically!
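Again as a reconstruction rather than the slides' exact code, here is a sketch of the trainable TF 1.x version, continuing from the sketch above: the weights become Variables, and an optimizer adds the update ops to the graph.

```python
# Weights as Variables: they persist inside the graph between session.run() calls.
w1 = tf.Variable(tf.random_normal((D, H)))
w2 = tf.Variable(tf.random_normal((H, D)))

h      = tf.maximum(tf.matmul(x, w1), 0)
y_pred = tf.matmul(h, w2)
loss   = tf.losses.mean_squared_error(y, y_pred)   # a predefined loss function

# The optimizer adds the gradient computation and the weight-update ops
# to the graph; running `updates` is what actually applies them.
optimizer = tf.train.GradientDescentOptimizer(1e-5)
updates   = optimizer.minimize(loss)

with tf.Session() as sess:
    # Run the graph once to initialize w1 and w2.
    sess.run(tf.global_variables_initializer())
    values = {x: np.random.randn(N, D),
              y: np.random.randn(N, D)}
    for t in range(50):
        loss_val, _ = sess.run([loss, updates], feed_dict=values)
```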
**c. A higher-level wrapper: Keras**

Keras can be thought of as a layer sitting on top of TensorFlow that makes a lot of the work much simpler (it also supports a Theano backend):

1. Define the model as a sequence of layers.

2. Define the optimizer object.

3. Build the model and specify the loss function.

4. Training the model then takes just a single line of code!
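The Keras screenshots are likewise lost; here is a sketch of those four steps in the Keras 2 API (the lecture used an older Keras release, so some argument names differ):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD

N, D, H = 64, 1000, 100

# 1. The model as a sequence of layers.
model = Sequential()
model.add(Dense(H, input_dim=D))
model.add(Activation('relu'))
model.add(Dense(D))

# 2 + 3. Optimizer object, then compile the model with an explicit loss.
model.compile(loss='mean_squared_error', optimizer=SGD(lr=1e0))

x = np.random.randn(N, D)
y = np.random.randn(N, D)

# 4. Training really is one line.
history = model.fit(x, y, epochs=50, batch_size=N, verbose=0)
```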
Besides Keras, the slide lists several other high-level wrappers that can be used (the screenshot is not preserved).

**IV. A PyTorch example**

PyTorch is Facebook's deep learning framework, widely used in both industry and academia. It has three levels of abstraction:

- Tensor: an imperative multi-dimensional array object (ndarray) that can run on the GPU;
- Variable: a node in a computational graph, used to store data and gradients;
- Module: a neural network layer, which may store state as well as learnable weights.

The slides also show how these abstractions map onto their TensorFlow equivalents (screenshot not preserved).

**a. Tensors in PyTorch**

PyTorch tensors are like numpy arrays, except that they can run on the GPU. Here we set up a two-layer network using only PyTorch tensors. Step by step:

1. Create random tensors for the data and the weights.

2. Set up the forward pass: compute the predictions and the loss.

3. Set up the backward pass: compute the gradients.

4. Apply gradient descent to the weights.

5. To run on the GPU, create the tensors with the cuda data type.

**b. Autograd in PyTorch**

PyTorch Tensors and Variables share the same API; a Variable additionally remembers how it was produced, which is what backpropagation needs. Step by step:

1. We do not want gradients of the loss with respect to the data, but we do want gradients with respect to the weights, and the tensors are flagged accordingly.

2. The forward pass looks just like the tensor version above, except that everything is now a Variable.

3. Compute the gradients of the loss with respect to w1 and w2 (zeroing the gradients first, since they accumulate).

4. Update each weight using its gradient.
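A sketch of this tensor-plus-autograd version of the two-layer network follows. The lecture wraps tensors in Variable; newer PyTorch merges Variable into Tensor, so the requires_grad flag is used here instead:

```python
import torch

N, D_in, H, D_out = 64, 1000, 100, 10

# Random tensors for the data (no gradients wanted) and
# the weights (gradients wanted).
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
w1 = torch.randn(D_in, H, requires_grad=True)
w2 = torch.randn(H, D_out, requires_grad=True)

learning_rate = 1e-6
for t in range(500):
    # Forward pass: looks just like the plain-tensor version.
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()

    # Backward pass: autograd fills in w1.grad and w2.grad.
    loss.backward()

    # Gradient descent on the weights, without recording history.
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad
        w1.grad.zero_()   # gradients accumulate, so reset them each step
        w2.grad.zero_()
```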
**c. Defining new autograd functions**

You can define your own autograd function by writing its forward and backward passes over tensors, and then use the new function in the forward pass of a model.

**d. The nn package in PyTorch**

The nn package is a higher-level wrapper for neural nets, similar in spirit to Keras. Step by step:

1. Define the model as a sequence of layers, and pick one of the common loss functions.

2. Forward pass: feed the data to the model, and the predictions to the loss function.

3. Backward pass: compute all the gradients.

4. Make a gradient step on each model parameter.

5. Better still, add an optimizer from torch.optim and let it update all the parameters once the gradients have been computed.

**e. Defining new models with nn**

A Module in PyTorch is a neural-net layer; note that its inputs and outputs are Variables. A Module can contain weights (treated as Variables) or other Modules, and you can define your own Modules using autograd. Step by step:

1. Define the whole model as a single Module.

2. Set up the two child modules in the initializer (a parent module may contain child modules).

3. Define the forward pass using the child modules and autograd ops on Variables; there is no need to define a backward pass, since autograd handles it.

4. Construct a model instance and train it.

**f. DataLoaders in PyTorch**

A DataLoader wraps a Dataset and provides minibatching, shuffling and multithreading for you; when you need to load custom data, just write your own Dataset class. Iterate over the loader to get minibatches; the loader hands you Tensors, so you need to wrap them in Variables (see the sketch below).

Note: using a pretrained model from torchvision makes all of this even simpler.
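To make the last two subsections concrete, here is a sketch that defines the whole model as one Module and trains it from a DataLoader. It is written against the modern PyTorch API; lecture-era code would additionally wrap each batch in Variable:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

N, D_in, H, D_out = 64, 1000, 100, 10

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super().__init__()
        # Two child modules; the parent module owns their parameters.
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        # Only forward is defined; autograd derives backward automatically.
        return self.linear2(self.linear1(x).clamp(min=0))

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# The DataLoader provides minibatching and shuffling over the Dataset.
loader = DataLoader(TensorDataset(x, y), batch_size=8, shuffle=True)

model = TwoLayerNet(D_in, H, D_out)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for epoch in range(10):
    for x_batch, y_batch in loader:
        loss = criterion(model(x_batch), y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```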
**g. A quick comparison of Torch and PyTorch**

The slide compares the two side by side (screenshot not preserved). The conclusion: use PyTorch for your new projects whenever you can.

**V. A brief look at Caffe2**

(The slides' overview of Caffe2 appears in the original only as a screenshot, which is not preserved.)

**VI. Which deep learning framework wins?**

Which framework to choose really depends on what you want to do. After consulting the relevant material, we can draw the following rough conclusions (for reference only):

- PyTorch and Torch are better suited to academic research; TensorFlow, Caffe and Caffe2 are better suited to industrial production deployment.
- Caffe is suited to static graphs; Torch and PyTorch are better suited to dynamic graphs; TensorFlow is practical in both settings.
- TensorFlow and Caffe2 can be used on mobile.

Main reference: CS231n_2017_Lecture8; the slides can be downloaded directly:

http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture8.pdf

Other references:

http://203.187.160.132:9011/dl.ee.cuhk.edu.hk/c3pr90ntc0td/slides/tutorial-caffe.pdf

http://203.187.160.132:9011/dl.ee.cuhk.edu.hk/c3pr90ntc0td/slides/DL_in_Action.pdf

**Compiled by 机器之心 (Synced).**
> Title image from toyota.csail.mit.edu
>
> This article is a visual analysis of convolutional neural networks.
>
> Other parts of this series: 01 - Simple Linear Model / 02 - Convolutional Neural Network / 03 - PrettyTensor / 04 - Save & Restore / 05 - Ensemble Learning / 06 - CIFAR-10 / 07 - Inception Model / 08 - Transfer Learning / 09 - Video Data / 11 - Adversarial Examples / 12 - Adversarial Noise for MNIST

by Magnus Erik Hvass Pedersen / GitHub / Videos on YouTube

Chinese translation: thrillerist / GitHub

**If you repost this article, please include a link to it.**

## Introduction

In some of the earlier tutorials on convolutional neural networks, such as tutorials #02 and #06, we showed the convolutional filter weights. But from the filter weights alone it is impossible to tell what a convolutional filter actually recognizes in the input image.

This tutorial presents a basic method for visually analyzing the inner workings of a neural network: **generating images that maximize individual features inside the network**. An image is initialized with some random noise and then gradually changed using the gradient of the chosen feature with respect to the input image.

This kind of visual analysis is also called *feature maximization* or *activation maximization*.

This tutorial builds on the earlier ones. You should be roughly familiar with neural networks (see tutorials #01 and #02); knowing the Inception model helps too (tutorial #07).
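Written out in symbols (notation mine; the tutorial states this only in prose, and the update rule below is read off its code further down): if a(x) is the activation of the chosen feature for input image x, each iteration performs one gradient-ascent step

```latex
x_{t+1} = \operatorname{clip}\bigl(x_t + \eta_t \,\nabla_x a(x_t),\ 0,\ 255\bigr),
\qquad
\eta_t = \frac{1}{\operatorname{std}\bigl(\nabla_x a(x_t)\bigr) + 10^{-8}} .
```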
## Flowchart

We use the Inception model from tutorial #07. We want to find the input image that **maximizes a given feature inside the neural network**. The input image is initialized with noise and then updated using the gradient of the given feature. After running a number of optimization iterations we get the image that this particular feature "likes to see".

Since the Inception model is built out of many basic mathematical operations combined, TensorFlow can find the gradient of the loss function quickly using the chain rule of differentiation.

```python
from IPython.display import Image, display
Image('images/13_visual_analysis_flowchart.png')
```

[Image: flowchart of the optimization procedure]

## Imports

```python
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np

# Functions and classes for loading and using the Inception model.
import inception
```

Developed with Python 3.5.2 (Anaconda); the TensorFlow version is:

```python
tf.__version__
```
> '1.1.0'

## The Inception model

### Download the Inception model from the web

Download the Inception model from the web. This is the default folder where the data files are saved. The folder is created automatically if it does not exist.

```python
# inception.data_dir = 'inception/'
```

The Inception model is downloaded automatically if it is not already in the folder. It is 85 MB.

```python
inception.maybe_download()
```

> Downloading Inception v3 Model ...
> - Download progress: 100.0%
> Download finished. Extracting files.
> Done.

### Names of the convolutional layers

This function returns a list of the names of the convolutional layers in the Inception model.

```python
def get_conv_layer_names():
    # Load the Inception model.
    model = inception.Inception()

    # Create a list of names for the operations in the graph
    # for the Inception model where the operator-type is 'Conv2D'.
    names = [op.name for op in model.graph.get_operations() if op.type=='Conv2D']

    # Close the TensorFlow session inside the model-object.
    model.close()

    return names
```

```python
conv_names = get_conv_layer_names()
```

There are 94 convolutional layers in total in the Inception model.

```python
len(conv_names)
```

> 94

Write out the names of the first 5 convolutional layers.

```python
conv_names[:5]
```

> ['conv/Conv2D',
>  'conv_1/Conv2D',
>  'conv_2/Conv2D',
>  'conv_3/Conv2D',
>  'conv_4/Conv2D']

Write out the names of the last 5 convolutional layers.

```python
conv_names[-5:]
```

> ['mixed_10/tower_1/conv/Conv2D',
>  'mixed_10/tower_1/conv_1/Conv2D',
>  'mixed_10/tower_1/mixed/conv/Conv2D',
>  'mixed_10/tower_1/mixed/conv_1/Conv2D',
>  'mixed_10/tower_2/conv/Conv2D']

## Helper function for finding the input image

This function finds the input image that maximizes a given feature in the network. It is essentially optimization by gradient ascent: **the image is initialized with small random values and then updated step by step using the gradient of the given feature with respect to the input image.**
```python
def optimize_image(conv_id=None, feature=0,
                   num_iterations=30, show_progress=True):
    """
    Find an image that maximizes the feature
    given by the conv_id and feature number.

    Parameters:
    conv_id: Integer identifying the convolutional layer to
             maximize. It is an index into conv_names.
             If None then use the last fully-connected layer
             before the softmax output.
    feature: Index into the layer for the feature to maximize.
    num_iterations: Number of optimization iterations to perform.
    show_progress: Boolean whether to show the progress.
    """

    # Load the Inception model. This is done for each call of
    # this function because we will add a lot to the graph
    # which will cause the graph to grow and eventually the
    # computer will run out of memory.
    model = inception.Inception()

    # Reference to the tensor that takes the raw input image.
    resized_image = model.resized_image

    # Reference to the tensor for the predicted classes.
    # This is the output of the final layer's softmax classifier.
    y_pred = model.y_pred

    # Create the loss-function that must be maximized.
    if conv_id is None:
        # If we want to maximize a feature on the last layer,
        # then we use the fully-connected layer prior to the
        # softmax-classifier. The feature no. is the class-number
        # and must be an integer between 1 and 1000.
        # The loss-function is just the value of that feature.
        loss = model.y_logits[0, feature]
    else:
        # If instead we want to maximize a feature of a
        # convolutional layer inside the neural network.

        # Get the name of the convolutional operator.
        conv_name = conv_names[conv_id]

        # Get a reference to the tensor that is output by the
        # operator. Note that ":0" is added to the name for this.
        tensor = model.graph.get_tensor_by_name(conv_name + ":0")

        # Set the Inception model's graph as the default
        # so we can add an operator to it.
        with model.graph.as_default():
            # The loss-function is the average of all the
            # tensor-values for the given feature. This
            # ensures that we generate the whole input image.
            # You can try and modify this so it only uses
            # a part of the tensor.
            loss = tf.reduce_mean(tensor[:, :, :, feature])

    # Get the gradient for the loss-function with regard to
    # the resized input image. This creates a mathematical
    # function for calculating the gradient.
    gradient = tf.gradients(loss, resized_image)

    # Create a TensorFlow session so we can run the graph.
    session = tf.Session(graph=model.graph)

    # Generate a random image of the same size as the raw input.
    # Each pixel is a small random value between 128 and 129,
    # which is about the middle of the colour-range.
    image_shape = resized_image.get_shape()
    image = np.random.uniform(size=image_shape) + 128.0

    # Perform a number of optimization iterations to find
    # the image that maximizes the loss-function.
    for i in range(num_iterations):
        # Create a feed-dict. This feeds the image to the
        # tensor in the graph that holds the resized image, because
        # this is the final stage for inputting raw image data.
        feed_dict = {model.tensor_name_resized_image: image}

        # Calculate the predicted class-scores,
        # as well as the gradient and the loss-value.
        pred, grad, loss_value = session.run([y_pred, gradient, loss],
                                             feed_dict=feed_dict)

        # Squeeze the dimensionality for the gradient-array.
        grad = np.array(grad).squeeze()

        # The gradient now tells us how much we need to change the
        # input image in order to maximize the given feature.

        # Calculate the step-size for updating the image.
        # This step-size was found to give fast convergence.
        # The addition of 1e-8 is to protect from div-by-zero.
        step_size = 1.0 / (grad.std() + 1e-8)

        # Update the image by adding the scaled gradient.
        # This is called gradient ascent.
        image += step_size * grad

        # Ensure all pixel-values in the image are between 0 and 255.
        image = np.clip(image, 0.0, 255.0)

        if show_progress:
            print("Iteration:", i)

            # Convert the predicted class-scores to a one-dim array.
            pred = np.squeeze(pred)

            # The predicted class for the Inception model.
            pred_cls = np.argmax(pred)

            # Name of the predicted class.
            cls_name = model.name_lookup.cls_to_name(pred_cls,
                                                     only_first_name=True)

            # The score (probability) for the predicted class.
            cls_score = pred[pred_cls]

            # Print the predicted score etc.
            msg = "Predicted class-name: {0} (#{1}), score: {2:>7.2%}"
            print(msg.format(cls_name, pred_cls, cls_score))

            # Print statistics for the gradient.
            msg = "Gradient min: {0:>9.6f}, max: {1:>9.6f}, stepsize: {2:>9.2f}"
            print(msg.format(grad.min(), grad.max(), step_size))

            # Print the loss-value.
            print("Loss:", loss_value)

            # Newline.
            print()

    # Close the TensorFlow session inside the model-object.
    model.close()

    return image.squeeze()
```

### Helper functions for plotting images and noise

This function normalizes an image so that its pixel values lie between 0.0 and 1.0.
```python
def normalize_image(x):
    # Get the min and max values for all pixels in the input.
    x_min = x.min()
    x_max = x.max()

    # Normalize so all values are between 0.0 and 1.0
    x_norm = (x - x_min) / (x_max - x_min)

    return x_norm
```

This function plots a single image.

```python
def plot_image(image):
    # Normalize the image so pixels are between 0.0 and 1.0
    img_norm = normalize_image(image)

    # Plot the image.
    plt.imshow(img_norm, interpolation='nearest')
    plt.show()
```

This function plots 6 images in a grid.

```python
def plot_images(images, show_size=100):
    """
    The show_size is the number of pixels to show for each image.
    The max value is 299.
    """

    # Create figure with sub-plots.
    fig, axes = plt.subplots(2, 3)

    # Adjust vertical spacing.
    fig.subplots_adjust(hspace=0.1, wspace=0.1)

    # Use interpolation to smooth pixels?
    smooth = True

    # Interpolation type.
    if smooth:
        interpolation = 'spline16'
    else:
        interpolation = 'nearest'

    # For each entry in the grid.
    for i, ax in enumerate(axes.flat):
        # Get the i'th image and only use the desired pixels.
        img = images[i, 0:show_size, 0:show_size, :]

        # Normalize the image so its pixels are between 0.0 and 1.0
        img_norm = normalize_image(img)

        # Plot the image.
        ax.imshow(img_norm, interpolation=interpolation)

        # Remove ticks.
        ax.set_xticks([])
        ax.set_yticks([])

    # Ensure the plot is shown correctly with multiple plots
    # in a single Notebook cell.
    plt.show()
```

### Helper function for optimizing and plotting images

This function optimizes several images and plots them.
```python
def optimize_images(conv_id=None, num_iterations=30, show_size=100):
    """
    Find 6 images that maximize the 6 first features in the layer
    given by the conv_id.

    Parameters:
    conv_id: Integer identifying the convolutional layer to
             maximize. It is an index into conv_names.
             If None then use the last layer before the softmax output.
    num_iterations: Number of optimization iterations to perform.
    show_size: Number of pixels to show for each image. Max 299.
    """

    # Which layer are we using?
    if conv_id is None:
        print("Final fully-connected layer before softmax.")
    else:
        print("Layer:", conv_names[conv_id])

    # Initialize the array of images.
    images = []

    # For each feature do the following. Note that the
    # last fully-connected layer only supports numbers
    # between 1 and 1000, while the convolutional layers
    # support numbers between 0 and some other number.
    # So we just use the numbers between 1 and 7.
    for feature in range(1, 7):
        print("Optimizing image for feature no.", feature)

        # Find the image that maximizes the given feature
        # for the network layer identified by conv_id (or None).
        image = optimize_image(conv_id=conv_id, feature=feature,
                               show_progress=False,
                               num_iterations=num_iterations)

        # Squeeze the dim of the array.
        image = image.squeeze()

        # Append to the list of images.
        images.append(image)

    # Convert to numpy-array so we can index all dimensions easily.
    images = np.array(images)

    # Plot the images.
    plot_images(images=images, show_size=show_size)
```

## Results

### Optimizing an image for a shallow convolutional layer

As an example, find the input image that maximizes feature no. 2 of the convolutional layer conv_names[conv_id], where conv_id=5.
```python
image = optimize_image(conv_id=5, feature=2,
                       num_iterations=30, show_progress=True)
```

> Iteration: 0
> Predicted class-name: dishwasher (#667), score:   4.81%
> Gradient min: -0.000083, max:  0.000100, stepsize:  76290.32
> Loss: 4.83793
>
> ...
>
> Iteration: 29
> Predicted class-name: bib (#941), score:  18.87%
> Gradient min: -0.000047, max:  0.000059, stepsize: [value lost in the source]
> Loss: 17.9321

```python
plot_image(image)
```
[Image: the optimized input image for feature no. 2 of layer conv_names[5]]

### Optimizing multiple images for convolutional layers

Below we optimize multiple images for convolutional layers inside the Inception model and plot them. The images show what those convolutional layers "like to see". Note how the patterns become more and more complex in the deeper layers.

```python
optimize_images(conv_id=0, num_iterations=10)
```

> Layer: conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5

```python
optimize_images(conv_id=3, num_iterations=30)
```

> Layer: conv_3/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=4, num_iterations=30)
```

> Layer: conv_4/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

```python
optimize_images(conv_id=5, num_iterations=30)
```

> Layer: mixed/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

```python
optimize_images(conv_id=6, num_iterations=30)
```

> Layer: mixed/tower/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=7, num_iterations=30)
```

> Layer: mixed/tower/conv_1/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

```python
optimize_images(conv_id=8, num_iterations=30)
```

> Layer: mixed/tower_1/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

```python
optimize_images(conv_id=9, num_iterations=30)
```

> Layer: mixed/tower_1/conv_1/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

```python
optimize_images(conv_id=10, num_iterations=30)
```

> Layer: mixed/tower_1/conv_2/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=20, num_iterations=30)
```

> Layer: mixed_2/tower/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=30, num_iterations=30)
```

> Layer: mixed_4/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=40, num_iterations=30)
```

> Layer: mixed_5/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=50, num_iterations=30)
```

> Layer: mixed_6/conv/Conv2D
> Optimizing image for feature no. 1
> Optimizing image for feature no. 2
> Optimizing image for feature no. 3
> Optimizing image for feature no. 4
> Optimizing image for feature no. 5
> Optimizing image for feature no. 6

[Image: six optimized images]

```python
optimize_images(conv_id=60, num_iterations=30)
```

[The source text is cut off at this point; the remaining output was not preserved.]
