This article uses a simple example to implement a neural network in TensorFlow. The training data is a randomly generated, simulated data set for a binary classification problem. First, the general process of training a neural network:

1. define the structure of the neural network and the forward propagation from input to output;

2. define the loss function and the backpropagation optimization algorithm;

3. create a session (Session) and repeatedly run the backpropagation optimization algorithm on the training data.

One point to remember: no matter how the structure of the neural network changes, these three steps do not change.

The complete code is as follows:

import tensorflow as tf                # import the TensorFlow toolkit, referred to as tf
from numpy.random import RandomState   # import from the numpy toolkit to simulate a data set

batch_size = 8                         # define the batch size of the training data

# define the network parameters between layers 1-2 and layers 2-3; the standard
# deviation is 1, and seed=1 keeps the randomly generated numbers consistent
W1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
W2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

# the input has two features per sample, the output is one label; the data type
# is float32, and None leaves the batch size flexible; y_ is the true label
x = tf.placeholder(tf.float32, shape=(None, 2), name='x-input')
y_ = tf.placeholder(tf.float32, shape=(None, 1), name='y-input')

# define the forward propagation process of the neural network
a = tf.matmul(x, W1)
y = tf.matmul(a, W2)

# define the loss function and the backpropagation algorithm
cross_entropy = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0)))
train_step = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)

rdm = RandomState(1)
dataset_size = 128                     # generate 128 groups of data
X = rdm.rand(dataset_size, 2)
# all samples with x1+x2 < 1 are positive samples, labeled 1; the rest are labeled 0
Y = [[int(x1 + x2 < 1)] for (x1, x2) in X]

# create a session to run the TensorFlow program
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)                  # initialize the variables
    # print the network parameter values before training
    print(sess.run(W1))
    print(sess.run(W2))

    STEPS = 5000                       # set the number of training rounds
    for i in range(STEPS):
        # select batch_size samples
        start = (i * batch_size) % dataset_size
        end = min(start + batch_size, dataset_size)
        # train the network on the selected samples and update the parameters
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 1000 == 0:
            # every 1000 rounds, compute the cross entropy on all the data and
            # print it; it decreases as training progresses
            total_cross_entropy = sess.run(cross_entropy, feed_dict={x: X, y_: Y})
            print("After %d training step(s), cross entropy on all data is %g" % (i, total_cross_entropy))

    # print the network parameter values after training
    print(sess.run(W1))
    print(sess.run(W2))

The results of running the program show the network parameters printed before training, which are the randomly generated initial values. The cross entropy is then printed every 1000 rounds during training; it can be seen to decrease, which shows that the classification performance is improving. Finally, the parameters of the network are printed after training ends.

A website that shows the neural network training process graphically is shared here: here. You can define your own network parameters, the number of layers, and the learning rate, and the training process is displayed in a very intuitive form, which gives a very deep understanding of how a neural network trains.

Finally, some additional TensorFlow-related knowledge:

1. TensorFlow computation model: the computation graph

Tensor means tensor, which can be simply understood as a multidimensional data structure; Flow expresses its computation model. Flow translates as "flow," and it intuitively expresses the process of tensors being transformed into one another. Every computation in TensorFlow is a node on the graph, and the edges between nodes describe the dependencies between the computations.
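As a small illustration of nodes and edges, here is a minimal sketch, assuming the TensorFlow 1.x API used throughout this article:

import tensorflow as tf

# every computation is registered as a node in a graph;
# the tensors flowing between nodes are the edges
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b  # the add node depends on the nodes a and b

# operations land in the default graph unless another graph is specified
print(a.graph is tf.get_default_graph())  # prints: True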

The method for specifying the GPU is as follows:

import tensorflow as tf

g = tf.Graph()
# place the operations of this graph on the first GPU
with g.as_default(), g.device('/gpu:0'):
    a = tf.constant([1.0, 2.0], name="a")
    b = tf.constant([3.0, 4.0], name="b")
    result = a + b
sess = tf.Session(graph=g)
sess.run(result)


2. TensorFlow data model: the tensor

A tensor is the form in which TensorFlow manages data. An order-zero tensor is a scalar; an order-one tensor is a vector, that is, a one-dimensional array. In general, an order-n tensor can be understood as an n-dimensional array. A tensor itself does not store the result of a computation; it only holds a reference to that result. The statement tf.Session().run(result) can be used to obtain the actual result of the computation.
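A minimal sketch of the difference between a tensor (a reference to a result) and the value itself, under the same TensorFlow 1.x assumptions:

import tensorflow as tf

a = tf.constant([1.0, 2.0], name="a")  # an order-1 tensor (a vector)
b = tf.constant([3.0, 4.0], name="b")
result = a + b

print(result)                    # a reference: Tensor("add:0", shape=(2,), dtype=float32)
print(tf.Session().run(result))  # the actual value: [4. 6.]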

3. TensorFlow runtime model: the session

We use a session to execute the operations that have been defined.

There are mainly the following two ways to use a session. The first can leak resources (if the program exits through an exception, the call to close the session is never reached), while the second does not have this problem.

# create a session
sess = tf.Session()
sess.run(...)
# close the session to release the resources used in this run
sess.close()

The second way is to use the session through Python's context manager.

with tf.Session() as sess:
    sess.run(...)

In this way the session is closed automatically when the block exits, and its resources are released automatically.
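As a side note, tf.Session in TensorFlow 1.x also accepts a configuration object; a minimal sketch (the two flags shown are illustrative choices, not required). The allow_soft_placement flag also helps with the GPU example above, since it falls back to the CPU when no GPU is available:

import tensorflow as tf

# allow_soft_placement falls back to the CPU when an op cannot run on the GPU;
# log_device_placement logs which device each operation is assigned to
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    print(sess.run(tf.constant(1.0)))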


4. TensorFlow and neural networks


Using a neural network to solve a classification problem can be divided into the following four steps (a short sketch of step (4) follows the list):
(1) Extract the feature vectors of the entities in the problem as input.
(2) Define the structure of the neural network, and define how to get the output from the input of the neural network. This process is the forward propagation algorithm of the neural network.
(3) Adjust the values of the parameters in the neural network through the training data; this is the process of training the network.
(4) Use the trained neural network to make predictions on unknown data.
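A minimal sketch of step (4), assuming it is placed at the end of the with-session block in the complete code above, where sess, x, and y are still in scope; new_samples is hypothetical unlabeled data:

# predict on unseen data with the trained network
new_samples = [[0.3, 0.4], [0.9, 0.8]]  # hypothetical inputs with two features each
pred = sess.run(y, feed_dict={x: new_samples})
print(pred)  # outputs close to 1 can be read as the positive class in this setup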

To declare a 2*3 matrix variable in TensorFlow:

weight = tf.Variable(tf.random_normal([2, 3], stddev=2))

This represents a normal distribution with a mean of 0 and a standard deviation of 2.
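For comparison, TensorFlow 1.x offers several other ways to generate initial values; a short sketch (not exhaustive):

import tensorflow as tf

w = tf.Variable(tf.random_normal([2, 3], stddev=2))    # normal distribution, mean 0, stddev 2
t = tf.Variable(tf.truncated_normal([2, 3], stddev=2)) # normal, but re-draws values beyond 2 stddev
u = tf.Variable(tf.random_uniform([2, 3], -1.0, 1.0))  # uniform distribution in [-1, 1)
z = tf.Variable(tf.zeros([3]))                         # all zeros, often used for biases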

In TensorFlow, before the value of a variable can be used, the initialization process of that variable needs to be invoked explicitly. To initialize all variables at once:

sess = tf.Session()
init_op = tf.initialize_all_variables()
sess.run(init_op)

or (the newer, recommended form):

init_op = tf.global_variables_initializer()
sess.run(init_op)
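As a side note, a single variable can also be initialized on its own through its initializer op, rather than initializing everything at once; a minimal sketch:

import tensorflow as tf

w = tf.Variable(tf.random_normal([2, 3], stddev=2))
sess = tf.Session()
sess.run(w.initializer)  # initializes only w, not every variable in the graph
print(sess.run(w))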

That is the whole of this article. I hope it helps you in your study, and I hope you will support Script Home.
