Accessing PySpark from a Jupyter Notebook - datawookie
Jul 04, 2017

1. Install the findspark package:

   $ pip3 install findspark

2. Make sure that the SPARK_HOME environment variable is defined.

3. Launch a Jupyter Notebook:

   $ jupyter notebook

4. Import the findspark package, call findspark.init() to locate the Spark installation, and then load the pyspark module. See below for a simple example.