Implementing Model Evaluation in TensorFlow

We need to evaluate a model's predictions in order to judge how well training went. Model evaluation is very important, and every model we build from here on will have its own evaluation method. In TensorFlow, the evaluation operations must be added to the computational graph and then called after the model has been trained.
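
As a minimal sketch of that pattern (with illustrative data, separate from the full example below): the evaluation tensor lives in the graph, and after training we simply run it again with held-out data.

import numpy as np
import tensorflow as tf

# Minimal sketch: the evaluation op (an MSE tensor here) is part of the
# graph; after training we just call sess.run() on it with new data.
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
A = tf.Variable(tf.constant([[1.]]))
mse = tf.reduce_mean(tf.square(tf.matmul(x_data, A) - y_target))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training steps would run here ...
    eval_x = np.transpose([np.random.normal(1, 0.1, 20)])
    eval_y = np.transpose([np.repeat(10., 20)])
    print('Eval MSE: ' + str(sess.run(mse, feed_dict={x_data: eval_x, y_target: eval_y})))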

During training, model evaluation gives insight into the algorithm and provides hints for debugging, improving, or even changing the model entirely. Evaluation is not always needed during training, so we will show how to use it in both a regression algorithm and a classification algorithm.

After a model has been trained, we need to quantitatively evaluate how well it performs. Ideally, this requires a training set and a test set, and sometimes even a validation set.
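
A minimal NumPy sketch of what a three-way split might look like, assuming a 60/20/20 train/validation/test ratio (these ratios are illustrative; the example below uses a simple 80/20 split):

import numpy as np

# Shuffle the indices once, then carve out contiguous slices.
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
indices = np.random.permutation(len(x_vals))
n_train = int(0.6 * len(x_vals))
n_val = int(0.2 * len(x_vals))
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]
x_train, y_train = x_vals[train_idx], y_vals[train_idx]
x_val, y_val = x_vals[val_idx], y_vals[val_idx]
x_test, y_test = x_vals[test_idx], y_vals[test_idx]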

When we want to evaluate a model, we should do so on a large batch of data points. If we have implemented batch training, we can reuse the model to predict on such a batch. But if we have implemented stochastic training (one data point at a time), we may have to create a separate evaluator that can process data in batches.
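
In this recipe the placeholders are declared with a None batch dimension, so the same graph happens to handle both a single data point and a whole batch; a separate evaluator is only needed when the graph is hard-wired to one sample at a time. A minimal sketch of that shape flexibility (the doubling op is made up purely for illustration):

import numpy as np
import tensorflow as tf

# A placeholder with shape [None, 1] accepts any batch size.
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
doubled = 2. * x_data

with tf.Session() as sess:
    # One data point at a time (stochastic training style):
    print(sess.run(doubled, feed_dict={x_data: [[1.5]]}))
    # A whole batch at once (batch evaluation style):
    print(sess.run(doubled, feed_dict={x_data: np.transpose([[1., 2., 3.]])}))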

A classification model predicts categories from numerical inputs, and the actual targets here are sequences of 1s and 0s, so we need a way to measure the distance between the predictions and the targets. The loss function of a classification model is usually hard to interpret in terms of model quality, so common practice is to look at the percentage of correctly classified examples, i.e. the accuracy.
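
A minimal NumPy sketch of that accuracy idea (the prediction and target values here are made up); the TensorFlow version appears at the end of the classification example below:

import numpy as np

# Accuracy = fraction of predictions that match the 0/1 targets.
preds = np.array([1., 0., 1., 1., 0.])
targets = np.array([1., 0., 0., 1., 0.])
accuracy = np.mean(preds == targets)
print('Accuracy: ' + str(accuracy))  # 0.8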

No matter how well the model seems to predict, we still need to test it on held-out data; this is essential. Evaluating the model on both the training data and the test data lets us determine whether it is overfitting.

# TensorFlow Model Evaluation
#
# This code will implement two models. The first
# is a simple regression model, we will show how to
# call the loss function, MSE during training, and
# output it after for test and training sets.
#
# The second model will be a simple classification
# model. We will also show how to print percent
# classified correctly during training and after
# for both the test and training sets.

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create graph session
sess = tf.Session()

# Regression Example:
# We will create sample data as follows:
# x-data: 100 random samples from a normal ~ N(1, 0.1)
# target: 100 values of the value 10.
# We will fit the model:
# x-data * A = target
# Theoretically, A = 10.

# Declare batch size
batch_size = 25

# Create data
x_vals = np.random.normal(1, 0.1, 100)
y_vals = np.repeat(10., 100)
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

# Split data into train/test = 80%/20%
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

# Create variable (one model parameter = A)
A = tf.Variable(tf.random_normal(shape=[1,1]))

# 增長操做到計算圖
my_output = tf.matmul(x_data, A)

# Add L2 loss operation to graph
loss = tf.reduce_mean(tf.square(my_output - y_target))

# Create Optimizer
my_opt = tf.train.GradientDescentOptimizer(0.02)
train_step = my_opt.minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Run training loop
# Note: if the model output is transformed before the loss function is
# applied (e.g. by sigmoid_cross_entropy_with_logits()), remember to apply
# the same transformation when evaluating the model's predictions.
for i in range(100):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = np.transpose([x_vals_train[rand_index]])
    rand_y = np.transpose([y_vals_train[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1)%25==0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})))

# Evaluate loss (MSE) on the test and train sets
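# Because the placeholders were declared with shape [None, 1], the full
# train and test sets can each be fed in a single run call.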
mse_test = sess.run(loss, feed_dict={x_data: np.transpose([x_vals_test]), y_target: np.transpose([y_vals_test])})
mse_train = sess.run(loss, feed_dict={x_data: np.transpose([x_vals_train]), y_target: np.transpose([y_vals_train])})
print('MSE on test:' + str(np.round(mse_test, 2)))
print('MSE on train:' + str(np.round(mse_train, 2)))

# Classification Example
# We will create sample data as follows:
# x-data: sample 50 random values from a normal = N(-1, 1)
#         + sample 50 random values from a normal = N(2, 1)
# target: 50 values of 0 + 50 values of 1.
# These are essentially 100 values of the corresponding output index
# We will fit the binary classification model:
# If sigmoid(x+A) < 0.5 -> 0 else 1
# Theoretically, A should be -(mean1 + mean2)/2 = -(-1 + 2)/2 = -0.5

# Reset the computational graph
ops.reset_default_graph()

# Create graph session
sess = tf.Session()

# Declare batch size
batch_size = 25

# Create data
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(2, 1, 50)))
y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))
x_data = tf.placeholder(shape=[1, None], dtype=tf.float32)
y_target = tf.placeholder(shape=[1, None], dtype=tf.float32)
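# Note: these placeholders are [1, None] (row vectors) because the model
# below is an element-wise x + A rather than a matrix multiplication.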

# Split data into train/test = 80%/20%
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

# Create variable (one model parameter = A)
A = tf.Variable(tf.random_normal(mean=10, shape=[1]))

# Add operation to graph
# Want to create the operation sigmoid(x + A)
# Note, the sigmoid() part is in the loss function
my_output = tf.add(x_data, A)

# Add classification loss (cross entropy)
xentropy = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output, labels=y_target))

# Create Optimizer
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(xentropy)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Run training loop
for i in range(1800):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = [x_vals_train[rand_index]]
    rand_y = [y_vals_train[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    if (i+1)%200==0:
        print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)))
        print('Loss = ' + str(sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})))

# Evaluate Predictions
# Wrap the prediction operation in squeeze() so the predictions and
# the targets have the same shape.
y_prediction = tf.squeeze(tf.round(tf.nn.sigmoid(tf.add(x_data, A))))
# Use equal() to check element-wise equality between prediction and
# target, cast the resulting boolean tensor to float32, and take its
# mean to get a single accuracy value.
correct_prediction = tf.equal(y_prediction, y_target)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
acc_value_test = sess.run(accuracy, feed_dict={x_data: [x_vals_test], y_target: [y_vals_test]})
acc_value_train = sess.run(accuracy, feed_dict={x_data: [x_vals_train], y_target: [y_vals_train]})
print('Accuracy on train set: ' + str(acc_value_train))
print('Accuracy on test set: ' + str(acc_value_test))

# Plot classification results
A_result = -sess.run(A)
bins = np.linspace(-5, 5, 50)
plt.hist(x_vals[0:50], bins, alpha=0.5, label='N(-1,1)', color='white')
plt.hist(x_vals[50:100], bins, alpha=0.5, label='N(2,1)', color='red')
plt.plot((A_result, A_result), (0, 8), 'k--', linewidth=3, label='A = '+ str(np.round(A_result, 2)))
plt.legend(loc='upper right')
plt.title('Binary Classifier, Accuracy=' + str(np.round(acc_value_test, 2)))
plt.show()

Output:

Step #25 A = [[ 5.79096079]]
Loss = 16.8725
Step #50 A = [[ 8.36085415]]
Loss = 3.60671
Step #75 A = [[ 9.26366138]]
Loss = 1.05438
Step #100 A = [[ 9.58914948]]
Loss = 1.39841
MSE on test:1.04
MSE on train:1.13
Step #200 A = [ 5.83126402]
Loss = 1.9799
Step #400 A = [ 1.64923656]
Loss = 0.678205
Step #600 A = [ 0.12520729]
Loss = 0.218827
Step #800 A = [-0.21780498]
Loss = 0.223919
Step #1000 A = [-0.31613481]
Loss = 0.234474
Step #1200 A = [-0.33259964]
Loss = 0.237227
Step #1400 A = [-0.28847221]
Loss = 0.345202
Step #1600 A = [-0.30949864]
Loss = 0.312794
Step #1800 A = [-0.33211425]
Loss = 0.277342
Accuracy on train set: 0.9625
Accuracy on test set: 1.0

Figure: model evaluation (the two-class histogram with the fitted value of A).