【DeepLearning】Exercise:Vectorization
Link to the exercise: Exercise:Vectorization
Notes:
The pixel values of the MNIST images have already been normalized.
If they are normalized again with sampleIMAGES.m from Exercise:Sparse Autoencoder, the visualization of the learned weights will look like the figure below:
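To confirm that no second normalization is needed, one can simply inspect the pixel range right after loading. A minimal check, assuming the loadMNISTImages helper from the UFLDL starter code is on the path:

images = loadMNISTImages('train-images.idx3-ubyte');                   % 784 x 60000 matrix of MNIST training images
fprintf('pixel range: [%g, %g]\n', min(images(:)), max(images(:)));    % values should already lie in [0, 1]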
My implementation:
Change the parameter settings and the selection of training samples in train.m:
%% STEP 0: Here we provide the relevant parameters values that will
%  allow your sparse autoencoder to get good filters; you do not need to
%  change the parameters below.

visibleSize = 28*28;   % number of input units
hiddenSize = 196;      % number of hidden units
sparsityParam = 0.1;   % desired average activation of the hidden units.
                       % (This was denoted by the Greek alphabet rho, which looks like a lower-case "p",
                       %  in the lecture notes).
lambda = 3e-3;         % weight decay parameter
beta = 3;              % weight of sparsity penalty term

%%======================================================================
%% STEP 1: Implement sampleIMAGES
%
%  After implementing sampleIMAGES, the display_network command should
%  display a random sample of 200 patches from the dataset

% MNIST images have already been normalized
images = loadMNISTImages('train-images.idx3-ubyte');
patches = images(:,1:10000);
%display_network(patches(:,randi(size(patches,2),200,1)),8);

%  Obtain random parameters theta
theta = initializeParameters(hiddenSize, visibleSize);
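The vectorization itself happens in sparseAutoencoderCost.m: instead of looping over training examples, all 10000 patches are pushed through the network with a few matrix operations. Below is a minimal sketch of the vectorized forward pass only, assuming the usual UFLDL starter-code conventions for unpacking theta; it is not a complete cost/gradient implementation:

W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

m  = size(patches, 2);                    % number of training examples
z2 = W1 * patches + repmat(b1, 1, m);     % hidden-layer pre-activations for all examples at once
a2 = 1 ./ (1 + exp(-z2));                 % sigmoid activation
z3 = W2 * a2 + repmat(b2, 1, m);
a3 = 1 ./ (1 + exp(-z3));                 % reconstructions of all patches
rhoHat = mean(a2, 2);                     % average hidden activation, used by the sparsity penalty

The backward pass follows the same pattern: compute the delta terms for all columns at once and accumulate each weight gradient with a single matrix product per layer.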
Visualization of the learned W1:
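The figure referenced here is typically produced the way the final step of the starter train.m does it; a short sketch, assuming the optimizer returns the trained parameter vector as opttheta:

W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
display_network(W1');            % show each row of W1 as a 28x28 filter
print -djpeg weights.jpg         % optionally save the visualization to a file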