I'd like to know whether there is a way to plug the different score functions from the scikit-learn package, such as:

from sklearn.metrics import confusion_matrix
confusion_matrix(y_true, y_pred)

into a TensorFlow model to get the different scores.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    for epoch in xrange(1):
        avg_cost = 0.
        total_batch = len(train_arrays) / batch_size
        for batch in range(total_batch):
            train_step.run(feed_dict={x: train_arrays, y: train_labels})
            avg_cost += sess.run(cost, feed_dict={x: train_arrays, y: train_labels}) / total_batch
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)
    print "Optimization Finished!"
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print "Accuracy:", batch, accuracy.eval({x: test_arrays, y: test_labels})
Do I have to run the session again to get the predictions?
You don't actually need sklearn to calculate precision/recall/F1 score. You can easily express them in a TF-ish way by looking at the formulas for each metric:
Now, if you have your `actual` and `predicted` values as vectors of 0/1, you can calculate TP, TN, FP and FN using tf.count_nonzero (https://www.tensorflow.org/api_docs/python/tf/count_nonzero):
TP = tf.count_nonzero(predicted * actual)
TN = tf.count_nonzero((predicted - 1) * (actual - 1))
FP = tf.count_nonzero(predicted * (actual - 1))
FN = tf.count_nonzero((predicted - 1) * actual)
Now your metrics are easy to calculate:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)
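As a quick sanity check of why those four products count the right things, the same arithmetic can be run in plain Python on two small 0/1 lists (the vectors below are made up for illustration; `count_nonzero` is mimicked by counting nonzero products):

```python
# Hypothetical 0/1 label vectors, for illustration only.
actual    = [1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0]

# Each product is nonzero exactly for the case we want to count,
# mirroring the tf.count_nonzero expressions above.
TP = sum(1 for p, a in zip(predicted, actual) if p * a != 0)              # both 1
TN = sum(1 for p, a in zip(predicted, actual) if (p - 1) * (a - 1) != 0)  # both 0
FP = sum(1 for p, a in zip(predicted, actual) if p * (a - 1) != 0)        # pred 1, actual 0
FN = sum(1 for p, a in zip(predicted, actual) if (p - 1) * a != 0)        # pred 0, actual 1

precision = TP / (TP + FP)
recall    = TP / (TP + FN)
f1 = 2 * precision * recall / (precision + recall)

print(TP, TN, FP, FN)  # 2 2 1 1
```

One practical caveat: in TensorFlow, tf.count_nonzero returns integer tensors, so it is safer to cast TP, FP and FN to a float dtype before dividing, to avoid any integer-division surprises.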