PySpark in Eclipse using PyDev: ImportError: Module use of python35.dll conflicts with this version of Python


I have Python 3.6.0 installed on a Windows machine and I am trying to run a PySpark job from Eclipse. Eclipse version: Neon, PyDev: 5.7, Spark version: 2.1.0.
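A rough sketch of how I believe the environment should be wired up from inside PyDev is below. The SPARK_HOME value matches the unpacked folder that appears in the traceback; the py4j zip name is my assumption for what ships with Spark 2.1.0, so treat it as a placeholder:

import os
import sys

# Spark unpacked here according to the traceback; the interpreter is whatever PyDev launches.
os.environ["SPARK_HOME"] = r"C:\Users\h192539\Documents\dumps\spark-2.1.0-bin-hadoop2.7"
os.environ["PYSPARK_PYTHON"] = sys.executable          # make workers use the same interpreter
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

# Make the bundled PySpark and py4j sources importable from Eclipse.
sys.path.insert(0, os.path.join(os.environ["SPARK_HOME"], "python"))
sys.path.insert(0, os.path.join(os.environ["SPARK_HOME"], "python", "lib", "py4j-0.10.4-src.zip"))  # zip name assumed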

I am getting a Python version conflict.
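A quick sanity check like the sketch below (plain standard-library calls, nothing Spark-specific) is what I would expect to report 3.6.0 inside the PyDev run, even though the traceback still points at python35.dll:

import os
import sys

# Which interpreter PyDev is actually launching, and which one PySpark will be told to use.
print(sys.executable)
print(sys.version)                           # expected to report 3.6.0
print(os.environ.get("PYSPARK_PYTHON"))      # may point at a different install
print(os.environ.get("PYTHONPATH"))          # may still reference a Python 3.5 tree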

Below is the error:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/04/14 23:18:46 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
    at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
    at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
    at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
    at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
    at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2373)
    at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2373)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2373)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:295)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:247)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:236)
    at py4j.commands.ConstructorCommand.invokeConstructor(ConstructorCommand.java:80)
    at py4j.commands.ConstructorCommand.execute(ConstructorCommand.java:69)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)
17/04/14 23:18:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
  File "C:\Users\h192539\AppData\Local\Programs\Python\Python35\lib\runpy.py", line 174, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Users\h192539\AppData\Local\Programs\Python\Python35\lib\runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "C:\Users\h192539\Documents\dumps\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\__init__.py", line 44, in <module>
  File "C:\Users\h192539\Documents\dumps\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\context.py", line 31, in <module>
  File "C:\Users\h192539\Documents\dumps\spark-2.1.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\accumulators.py", line 90, in <module>
ImportError: Module use of python35.dll conflicts with this version of Python.
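Is there anything wrong with setting things up roughly like the sketch below before creating the SparkContext? The HADOOP_HOME path, app name and master are placeholders of mine, and it assumes winutils.exe has been downloaded separately into that bin folder (it is not part of my current setup):

import os
import sys
from pyspark import SparkConf, SparkContext

# Placeholder location -- assumes winutils.exe sits at C:\hadoop\bin\winutils.exe.
os.environ["HADOOP_HOME"] = r"C:\hadoop"
# Force driver and workers onto the interpreter PyDev is actually running (3.6.0 here).
os.environ["PYSPARK_PYTHON"] = sys.executable
os.environ["PYSPARK_DRIVER_PYTHON"] = sys.executable

conf = SparkConf().setAppName("pydev-smoke-test").setMaster("local[*]")
sc = SparkContext(conf=conf)
print(sc.parallelize(range(10)).sum())   # trivial action just to exercise the workers
sc.stop()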

