Docker-related issues aside, a Jupyter kernel's settings live in a file named kernel.json, which resides in a kernel-specific directory (one per kernel); you can list these directories with the command jupyter kernelspec list. For example, here is the situation on my (Linux) machine:
$ jupyter kernelspec list
Available kernels:
python2 /usr/lib/python2.7/site-packages/ipykernel/resources
caffe /usr/local/share/jupyter/kernels/caffe
ir /usr/local/share/jupyter/kernels/ir
pyspark /usr/local/share/jupyter/kernels/pyspark
pyspark2 /usr/local/share/jupyter/kernels/pyspark2
tensorflow /usr/local/share/jupyter/kernels/tensorflow
Again as an example, here is the kernel.json for my R kernel (ir):
{
  "argv": ["/usr/lib64/R/bin/R", "--slave", "-e", "IRkernel::main()", "--args", "{connection_file}"],
  "display_name": "R 3.3.2",
  "language": "R"
}
And here is the corresponding kernel.json for my pyspark2 kernel:
{
  "display_name": "PySpark (Spark 2.0)",
  "language": "python",
  "argv": [
    "/opt/intel/intelpython27/bin/python2",
    "-m",
    "ipykernel",
    "-f",
    "{connection_file}"
  ],
  "env": {
    "SPARK_HOME": "/home/ctsats/spark-2.0.0-bin-hadoop2.6",
    "PYTHONPATH": "/home/ctsats/spark-2.0.0-bin-hadoop2.6/python:/home/ctsats/spark-2.0.0-bin-hadoop2.6/python/lib/py4j-0.10.1-src.zip",
    "PYTHONSTARTUP": "/home/ctsats/spark-2.0.0-bin-hadoop2.6/python/pyspark/shell.py",
    "PYSPARK_PYTHON": "/opt/intel/intelpython27/bin/python2"
  }
}
As you can see, in both cases the first element of argv is the executable of the respective language - in my case, GNU R for my ir kernel and Intel Python 2.7 for my pyspark2 kernel. Changing this entry so that it points to your GNU R executable should resolve your problem.
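Since the fix amounts to editing that first argv element, the change can be sketched in a few lines of Python. This is a minimal sketch, not a definitive procedure: the kernel directory is simulated with a temporary directory here, and /usr/bin/R is only an assumed location for your GNU R binary - on your machine, use the directory reported by jupyter kernelspec list and the path reported by which R.

```python
import json
import os
import tempfile

# Stand-in for your real kernel directory, e.g. /usr/local/share/jupyter/kernels/ir
# (an assumption; find yours with `jupyter kernelspec list`).
kernel_dir = tempfile.mkdtemp()
spec_path = os.path.join(kernel_dir, "kernel.json")

# Write the original spec (same content as the ir kernel.json shown above).
spec = {
    "argv": ["/usr/lib64/R/bin/R", "--slave", "-e", "IRkernel::main()",
             "--args", "{connection_file}"],
    "display_name": "R 3.3.2",
    "language": "R",
}
with open(spec_path, "w") as f:
    json.dump(spec, f, indent=1)

# Point argv[0] at the R executable you actually want the kernel to launch.
with open(spec_path) as f:
    spec = json.load(f)
spec["argv"][0] = "/usr/bin/R"  # assumed path; use `which R` to find yours
with open(spec_path, "w") as f:
    json.dump(spec, f, indent=1)

with open(spec_path) as f:
    print(json.load(f)["argv"][0])  # -> /usr/bin/R
```

After saving the edited kernel.json, restart Jupyter (or just open a new notebook with that kernel) so the updated spec is picked up.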