This section walks through implementing softmax regression with TensorFlow 2.
It is still a very basic version: there are no measures against overfitting and no validation set.
First, import the packages:
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
print(tf.__version__)
Output:
2.1.0
Since TensorFlow automatically uses the GPU for computation, we configure GPU memory to be allocated on demand.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
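Note that set_memory_growth must be called before the GPUs are initialized (i.e., before any op runs on them); otherwise TensorFlow raises a RuntimeError.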
Data loading and normalization:
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
# Normalize pixel values from [0, 255] to [0, 1]
X_train = X_train / 255.
X_test = X_test / 255.
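As a quick sanity check, we can inspect the arrays: the training set holds 60,000 grayscale images of 28×28 pixels, the test set 10,000, and the labels are integers in 0..9. The print statements below are just an optional sketch:

print(X_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(X_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
print(y_train[:5])                   # integer class labels in 0..9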
Build the model:
model = tf.keras.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))        # flatten each 28x28 image into a 784-vector
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))  # one output per class
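Under the hood this model is exactly softmax regression: Flatten reshapes each image into a 784-vector x, and the Dense layer computes softmax(xW + b) with a weight matrix W of shape (784, 10) and a bias b of shape (10,). A minimal standalone sketch of that computation (the tensors here are made up for illustration, not taken from the trained model):

import tensorflow as tf

x = tf.random.uniform((1, 784))      # one flattened 28x28 image
W = tf.zeros((784, 10))              # weights, as the Dense layer would learn them
b = tf.zeros((10,))                  # biases
logits = tf.matmul(x, W) + b         # affine transform xW + b
probs = tf.nn.softmax(logits)        # normalize into a probability distribution
print(tf.reduce_sum(probs, axis=1))  # each row sums to 1 over the 10 classes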
Training:
model.compile(optimizer=tf.keras.optimizers.SGD(0.1),
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
model.fit(X_train, y_train, epochs=5, batch_size=256)
Output: we can see that after just 5 epochs of simple training, training accuracy already reaches 83.4%:
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 1s 19us/sample - loss: 0.8008 - acc: 0.7394
Epoch 2/5
60000/60000 [==============================] - 1s 16us/sample - loss: 0.5744 - acc: 0.8114
Epoch 3/5
60000/60000 [==============================] - 1s 16us/sample - loss: 0.5289 - acc: 0.8243
Epoch 4/5
60000/60000 [==============================] - 1s 16us/sample - loss: 0.5052 - acc: 0.8307
Epoch 5/5
60000/60000 [==============================] - 1s 18us/sample - loss: 0.4893 - acc: 0.8343
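For intuition on the loss used above: sparse_categorical_crossentropy takes integer labels directly (no one-hot encoding needed) and returns -log of the probability the model assigned to the true class. A tiny hand-made example (the numbers are invented):

import tensorflow as tf

y_true = tf.constant([2])  # true class index
y_pred = tf.constant([[0.1, 0.2, 0.6, 0.05, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0]])
loss = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
print(loss.numpy())        # -log(0.6) ≈ 0.51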
Then let's look at performance on the test set:
test_loss, test_acc = model.evaluate(X_test, y_test)
print(test_loss, test_acc)
Output: on the 10,000-image test set, accuracy is close to 83%:
10000/10000 [==============================] - 1s 80us/sample - loss: 0.5089 - acc: 0.8264
0.5089153020858764 0.8264
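As an optional follow-up, we can inspect individual predictions. A short sketch (the class_names list follows the standard Fashion-MNIST label order):

import numpy as np

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
probs = model.predict(X_test[:5])  # one probability distribution per image
preds = np.argmax(probs, axis=1)   # pick the most likely class
for p, t in zip(preds, y_test[:5]):
    print('predicted:', class_names[p], '| actual:', class_names[t])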