This article assumes that TensorFlow is already installed.

After installing TensorFlow, most people run its demos, and the most common one is handwritten digit recognition on the MNIST dataset.

However, we usually just run the demo and stop there. Many people probably have the same question I had: if we bring our own picture of a digit, how do we use the trained network model to recognize it? Here we will implement that with the MNIST demo.

1. Training the model

First we train the model and save it as model.ckpt in a specified folder:

 saver = tf.train.Saver()
 saver.save(sess, 'model_data/model.ckpt')

Add these two lines to the training code, and the model will be saved once training finishes. If you have trouble with this part, you can search online for how to save a trained TensorFlow model; we will not go into the details here.

2. Testing the model

After training, the model is saved in the model_data folder, where you will find four files. The following script restores it and recognizes a single image:

 # -*- coding: UTF-8 -*-
 import cv2
 import tensorflow as tf
 import numpy as np
 from sys import path
 path.append('../..')
 from common import extract_mnist

 # initialize the weights of a single convolution kernel
 def weight_variable(shape):
     initial = tf.truncated_normal(shape, stddev=0.1)
     return tf.Variable(initial)

 # initialize the bias value of a single convolution kernel
 def bias_variable(shape):
     initial = tf.constant(0.1, shape=shape)
     return tf.Variable(initial)

 # convolve input feature x with kernel W; strides is the convolution stride,
 # padding='SAME' pads the edges so the output image size is unchanged
 def conv2d(x, W):
     return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

 # max pooling on x; ksize is the pooling window size
 def max_pool_2x2(x):
     return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

 def main():
     # define the session
     sess = tf.InteractiveSession()

     # input image data
     x = tf.placeholder('float', [None, 784])
     x_img = tf.reshape(x, [-1, 28, 28, 1])

     W_conv1 = weight_variable([5, 5, 1, 32])
     b_conv1 = bias_variable([32])
     W_conv2 = weight_variable([5, 5, 32, 64])
     b_conv2 = bias_variable([64])
     W_fc1 = weight_variable([7*7*64, 1024])
     b_fc1 = bias_variable([1024])
     W_fc2 = weight_variable([1024, 10])
     b_fc2 = bias_variable([10])

     saver = tf.train.Saver(write_version=tf.train.SaverDef.V1)
     saver.restore(sess, 'model_data/model.ckpt')

     # convolution, plus relu activation
     h_conv1 = tf.nn.relu(conv2d(x_img, W_conv1) + b_conv1)
     # max pooling
     h_pool1 = max_pool_2x2(h_conv1)
     # second convolution layer
     h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
     h_pool2 = max_pool_2x2(h_conv2)
     # flatten the convolution output
     h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
     # fully connected layer, plus relu activation
     h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
     # output layer, softmax multi-classification
     y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2)

     # mnist_data_set = extract_mnist.MnistDataSet('../../data/')
     # x_img, y = mnist_data_set.next_train_batch(1)

     im = cv2.imread('images/888.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)
     im = cv2.resize(im, (28, 28), interpolation=cv2.INTER_CUBIC)
     # image preprocessing
     # img_gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY).astype(np.float32)
     # scale the data from 0~255 to -0.5~0.5
     img_gray = (im - (255 / 2.0)) / 255
     # cv2.imshow('out', img_gray)
     # cv2.waitKey(0)
     x_img = np.reshape(img_gray, [-1, 784])
     print x_img
     output = sess.run(y_conv, feed_dict={x: x_img})
     print 'the y_conv :\n', output
     print 'the predict is :', np.argmax(output)

     # close the session
     sess.close()

 if __name__ == '__main__':
     main()
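The preprocessing in the script above (scale pixel values from 0~255 down to -0.5~0.5, then flatten into a single 784-dimensional row) can be checked in isolation with plain NumPy. This is a minimal sketch; the random 28x28 array stands in for a real grayscale image read by cv2:

```python
import numpy as np

# a made-up 28x28 grayscale image standing in for cv2.imread(...) output
im = np.random.randint(0, 256, size=(28, 28)).astype(np.float32)

# scale the data from 0~255 to -0.5~0.5, as in the script above
img_gray = (im - (255 / 2.0)) / 255

# flatten to one row of 784 values, the shape the placeholder x expects
x_img = np.reshape(img_gray, [-1, 784])

# x_img.shape is now (1, 784) and all values lie in [-0.5, 0.5]
```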

OK, that's it for the MNIST part.


Finally, here is a cifar10 version. I feel there is still something wrong with the input data: reading the cifar10 data directly tests fine, but preprocessing my own image as input gives wrong results. (For reference: data read by cv2 is in BGR order, data read by PIL is in RGB order, and the cifar10 data is in RGB order.) If any reader can point out the problem, please leave me a message.
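The channel-order mismatch mentioned above is easy to demonstrate (and to fix) with NumPy alone. A minimal sketch, using a made-up 2x2 "image":

```python
import numpy as np

# a made-up 2x2 image in BGR order, as cv2.imread would return it
im_bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                   [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# reversing the last axis converts BGR -> RGB, so the array matches
# what a network trained on RGB data such as cifar10 expects
im_rgb = im_bgr[:, :, ::-1]

# the pixel that was pure blue in BGR ([255, 0, 0]) is now [0, 0, 255]
```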

 # -*- coding: utf-8 -*-
 from sys import path
 import numpy as np
 import tensorflow as tf
 import time
 import cv2
 from PIL import Image
 path.append('../..')
 from common import extract_cifar10
 from common import inspect_image

 # initialize the weights of a single convolution kernel
 def weight_variable(shape):
     initial = tf.truncated_normal(shape, stddev=0.1)
     return tf.Variable(initial)

 # initialize the bias value of a single convolution kernel
 def bias_variable(shape):
     initial = tf.constant(0.1, shape=shape)
     return tf.Variable(initial)

 # convolution operation
 def conv2d(x, W):
     return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

 def main():
     # define the session
     sess = tf.InteractiveSession()

     # input image data
     x = tf.placeholder('float', [None, 32, 32, 3])
     y_ = tf.placeholder('float', [None, 10])

     # first convolution layer
     W_conv1 = weight_variable([5, 5, 3, 64])
     b_conv1 = bias_variable([64])
     # convolution, plus relu activation
     conv1 = tf.nn.relu(conv2d(x, W_conv1) + b_conv1)
     # pool1
     pool1 = tf.nn.max_pool(conv1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool1')
     # norm1
     norm1 = tf.nn.lrn(pool1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm1')

     # second convolution layer
     W_conv2 = weight_variable([5, 5, 64, 64])
     b_conv2 = bias_variable([64])
     conv2 = tf.nn.relu(conv2d(norm1, W_conv2) + b_conv2)
     # norm2
     norm2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='norm2')
     # pool2
     pool2 = tf.nn.max_pool(norm2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME', name='pool2')

     # fully connected layer
     # weights
     W_fc1 = weight_variable([8*8*64, 384])
     # bias value
     b_fc1 = bias_variable([384])
     # flatten the convolution output
     pool2_flat = tf.reshape(pool2, [-1, 8*8*64])
     # fully connected computation, plus relu activation
     fc1 = tf.nn.relu(tf.matmul(pool2_flat, W_fc1) + b_fc1)

     # second fully connected layer
     W_fc2 = weight_variable([384, 192])
     b_fc2 = bias_variable([192])
     fc2 = tf.nn.relu(tf.matmul(fc1, W_fc2) + b_fc2)

     # output layer, softmax multi-classification
     W_fc3 = weight_variable([192, 10])
     b_fc3 = bias_variable([10])
     y_conv = tf.maximum(tf.nn.softmax(tf.matmul(fc2, W_fc3) + b_fc3), 1e-30)

     saver = tf.train.Saver()
     saver.restore(sess, 'model_data/model.ckpt')

     # input
     im = Image.open('images/dog8.jpg')
     im.show()
     im = im.resize((32, 32))
     # r, g, b = im.split()
     # im = Image.merge("RGB", (r, g, b))
     print im.size, im.mode

     im = np.array(im).astype(np.float32)
     im = np.reshape(im, [-1, 32*32*3])
     # scale the data from 0~255 to -0.5~0.5
     im = (im - (255 / 2.0)) / 255
     batch_xs = np.reshape(im, [-1, 32, 32, 3])
     # print batch_xs

     # get the cifar10 data
     # cifar10_data_set = extract_cifar10.Cifar10DataSet('../../data/')
     # batch_xs, batch_ys = cifar10_data_set.next_train_batch(1)
     # print batch_ys

     output = sess.run(y_conv, feed_dict={x: batch_xs})
     print output
     print 'the output is :', np.argmax(output)

     # close the session
     sess.close()

 if __name__ == '__main__':
     main()
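In both scripts the final prediction is simply the index of the largest softmax output, which is what np.argmax returns. A minimal NumPy sketch with made-up logits for the 10 classes:

```python
import numpy as np

# made-up logits for one image over 10 classes (largest at index 5)
logits = np.array([[0.1, 2.0, 0.3, 0.1, 0.1, 5.0, 0.2, 0.1, 0.1, 0.4]])

# softmax turns logits into probabilities that sum to 1;
# subtracting the max first is the usual numerical-stability trick
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
probs = exp / exp.sum(axis=1, keepdims=True)

# the predicted class is the argmax over the probabilities
pred = np.argmax(probs)  # 5
```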

That is all for this article. I hope it helps everyone learn, and I hope you will keep supporting Script Home.

Permanent link to this article: http://www.script-home.com/an-example-of-the-realization-of-a-single-picture-recognition-of-python-tensorflow-learning.html | Script Home

When reprinting this article, please cite: An example of the realization of a single picture recognition of Python tensorflow learning | Script Home
