My call to `sagemaker.tensorflow.TensorFlow.fit()` hangs indefinitely, with no error message, when I use `Pipe` instead of `File` as the `input_mode`. I replaced `TFRecordDataset` with `PipeModeDataset` accordingly. Training completes successfully in `File` mode.
My data consists of two S3 buckets, each containing multiple TFRecord files. Despite reading the documentation extensively, I am not confident about how `PipeModeDataset` should be used in this situation, specifically how to set up the `channel`.

Here is my SageMaker notebook setup:
import sagemaker
from sagemaker.tensorflow import TensorFlow

hyperparameters = {
    "batch-size": 1,
    "pipe_mode": 1,
}

estimator_config = {
    "entry_point": "tensorflow_train.py",
    "source_dir": "source",
    "framework_version": "2.3",
    "py_version": "py37",
    "instance_type": "ml.p3.2xlarge",
    "instance_count": 1,
    "role": sagemaker.get_execution_role(),
    "hyperparameters": hyperparameters,
    "output_path": f"s3://{bucket_name}",
    "input_mode": "Pipe",
}

tf_estimator = TensorFlow(**estimator_config)

s3_data_channels = {
    "training": f"s3://{bucket_name}/data/training",
    "validation": f"s3://{bucket_name}/data/validation",
}

tf_estimator.fit(s3_data_channels)
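As a sketch of one way to make the channel configuration more explicit (not necessarily a fix), each S3 prefix can be wrapped in `sagemaker.inputs.TrainingInput`, which allows the input mode to be set per channel; `bucket_name` is the same variable assumed above:

```python
from sagemaker.inputs import TrainingInput

s3_data_channels = {
    "training": TrainingInput(
        s3_data=f"s3://{bucket_name}/data/training",
        input_mode="Pipe",               # per-channel override of the estimator-level input_mode
        distribution="FullyReplicated",  # default: every instance sees every file
    ),
    "validation": TrainingInput(
        s3_data=f"s3://{bucket_name}/data/validation",
        input_mode="Pipe",
    ),
}

tf_estimator.fit(s3_data_channels)
```

The channel keys (`training`, `validation`) must match the `channel=` names passed to `PipeModeDataset` in the training script.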
If I run `aws s3 ls` on the `s3_data_channels` paths, I get a listing of the TFRecord files.
Here is how I set up my datasets (see the if/else statement that branches on whether `pipe_mode` is selected):
import argparse
import os

import tensorflow as tf

if __name__ == "__main__":
    arg_parser = argparse.ArgumentParser()
    ...
    arg_parser.add_argument("--pipe_mode", type=int, default=0)
    arg_parser.add_argument("--train_dir", type=str, default=os.environ.get("SM_CHANNEL_TRAINING"))
    arg_parser.add_argument(
        "--validation_dir", type=str, default=os.environ.get("SM_CHANNEL_VALIDATION")
    )
    arg_parser.add_argument("--model_dir", type=str)
    args, _ = arg_parser.parse_known_args()

    AUTOTUNE = tf.data.experimental.AUTOTUNE

    if args.pipe_mode == 1:
        from sagemaker_tensorflow import PipeModeDataset

        train_ds = PipeModeDataset(channel="training", record_format="TFRecord")
        val_ds = PipeModeDataset(channel="validation", record_format="TFRecord")
    else:
        train_files = tf.data.Dataset.list_files(args.train_dir + "/*tfrecord")
        val_files = tf.data.Dataset.list_files(args.validation_dir + "/*tfrecord")
        train_ds = tf.data.TFRecordDataset(filenames=train_files, num_parallel_reads=AUTOTUNE)
        val_ds = tf.data.TFRecordDataset(filenames=val_files, num_parallel_reads=AUTOTUNE)

    train_ds = (
        train_ds.map(tfrecord_parser, num_parallel_calls=AUTOTUNE)
        .batch(args.batch_size)
        .prefetch(AUTOTUNE)
    )
    val_ds = (
        val_ds.map(tfrecord_parser, num_parallel_calls=AUTOTUNE)
        .batch(args.batch_size)
        .prefetch(AUTOTUNE)
    )
    ...
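For context, `tfrecord_parser` above is elided in the script. A minimal sketch of what such a parser might look like, assuming a hypothetical schema with a raw-bytes `image` feature and an int64 `label` (the real feature spec will depend on how the TFRecords were written):

```python
import tensorflow as tf

# Hypothetical feature spec; adjust to match how the TFRecords were written.
FEATURE_SPEC = {
    "image": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def tfrecord_parser(serialized_example):
    # Parse one serialized tf.train.Example into an (image, label) pair.
    parsed = tf.io.parse_single_example(serialized_example, FEATURE_SPEC)
    return parsed["image"], parsed["label"]
```

The same parser works for both branches, since `PipeModeDataset` with `record_format="TFRecord"` and `tf.data.TFRecordDataset` both yield serialized `tf.train.Example` protos.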