tensorflow-1
The original article is from Zhihu:
https://zhuanlan.zhihu.com/p/22410917
Code blocks copied from Zhihu can lose their indentation.
import tensorflow as tf
import numpy
import matplotlib.pyplot as plt

rng = numpy.random

# Parameters
learning_rate = 0.01
training_epochs = 2000
display_step = 50

# Training Data
train_X = numpy.asarray(
    [3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
     2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray(
    [1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
     1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Create Model

# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")

# Construct a linear model
activation = tf.add(tf.multiply(X, W), b)

# Minimize the squared errors
cost = tf.reduce_sum(tf.pow(activation - Y, 2)) / (2 * n_samples)  # L2 loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)  # Gradient descent

# Initializing the variables
init = tf.global_variables_initializer()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Display logs per epoch step
        if epoch % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1),
                  "cost=", "{:.9f}".format(sess.run(cost, feed_dict={X: train_X, Y: train_Y})),
                  "W=", sess.run(W), "b=", sess.run(b))

    print("Optimization Finished!")
    print("cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}),
          "W=", sess.run(W), "b=", sess.run(b))

    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
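As a quick sanity check, the closed-form least-squares fit from numpy.polyfit should land close to the W and b that the gradient-descent loop converges to, since the cost above is just the squared error scaled by 1/(2 * n_samples). A minimal sketch on the same toy data (the reference values W_ref and b_ref are only for comparison, not part of the training script):

import numpy

# Same toy data as in the training script above.
train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59,
                         2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53,
                         1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])

# Degree-1 polyfit returns [slope, intercept], i.e. the exact least-squares
# W and b that the TensorFlow gradient-descent loop should approach.
W_ref, b_ref = numpy.polyfit(train_X, train_Y, 1)
print("least-squares reference: W=%.4f, b=%.4f" % (W_ref, b_ref))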