Spark dataframes are not used like that in Spark ML; all your features need to be vectors in a single column, usually named features. Here is how you can do it, using the 5 rows you have provided as an example:
spark.version
# u'2.2.0'
from pyspark.sql import Row
from pyspark.ml.linalg import Vectors
# your sample data:
temp_df = spark.createDataFrame([Row(V4366=0.0, V4460=0.232, V4916=-0.017, V1495=-0.104, V1639=0.005, V1967=-0.008, V3049=0.177, V3746=-0.675, V3869=-3.451, V524=0.004, V5409=0), Row(V4366=0.0, V4460=0.111, V4916=-0.003, V1495=-0.137, V1639=0.001, V1967=-0.01, V3049=0.01, V3746=-0.867, V3869=-2.759, V524=0.0, V5409=0), Row(V4366=0.0, V4460=-0.391, V4916=-0.003, V1495=-0.155, V1639=-0.006, V1967=-0.019, V3049=-0.706, V3746=0.166, V3869=0.189, V524=0.001, V5409=0), Row(V4366=0.0, V4460=0.098, V4916=-0.012, V1495=-0.108, V1639=0.005, V1967=-0.002, V3049=0.033, V3746=-0.787, V3869=-0.926, V524=0.002, V5409=0), Row(V4366=0.0, V4460=0.026, V4916=-0.004, V1495=-0.139, V1639=0.003, V1967=-0.006, V3049=-0.045, V3746=-0.208, V3869=-0.782, V524=0.001, V5409=0)])
# pack every column except the last one (V5409) into a dense feature vector, keep V5409 as the label
trainingData = temp_df.rdd.map(lambda x: (Vectors.dense(x[0:-1]), x[-1])).toDF(["features", "label"])
trainingData.show()
# +--------------------+-----+
# | features|label|
# +--------------------+-----+
# |[-0.104,0.005,-0....| 0|
# |[-0.137,0.001,-0....| 0|
# |[-0.155,-0.006,-0...| 0|
# |[-0.108,0.005,-0....| 0|
# |[-0.139,0.003,-0....| 0|
# +--------------------+-----+
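If you prefer to stay in the DataFrame API, a VectorAssembler gives the same result; this is only a sketch, assuming the column names from the sample above with V5409 as the label column:
from pyspark.ml.feature import VectorAssembler
# every column except V5409 (assumed to be the label) becomes a feature
feature_cols = [c for c in temp_df.columns if c != "V5409"]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
trainingData = (assembler.transform(temp_df)
                .withColumnRenamed("V5409", "label")
                .select("features", "label"))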
After this, your pipeline should run fine (I assume you do indeed have multi-class classification, since your sample contains only 0's as labels), once you change the label column in your rf and evaluator as follows:
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
rf = RandomForestClassifier(numTrees=100, maxDepth=5, maxBins=5, labelCol="label", featuresCol="features", seed=42)
evaluator = MulticlassClassificationEvaluator().setLabelCol("label").setPredictionCol("prediction").setMetricName("accuracy")
Finally, print accuracy will not work; you need model.avgMetrics instead.
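For reference, avgMetrics lives on the CrossValidatorModel returned by CrossValidator.fit; here is only a rough sketch, assuming a pipeline and parameter grid along the lines of yours:
from pyspark.ml import Pipeline
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
pipeline = Pipeline(stages=[rf])                                      # your own stages go here
paramGrid = ParamGridBuilder().addGrid(rf.maxDepth, [3, 5]).build()   # hypothetical grid
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=paramGrid,
                    evaluator=evaluator,
                    numFolds=3)
model = cv.fit(trainingData)
print(model.avgMetrics)   # one cross-validated accuracy per point in the grid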