com.mysql.jdbc.Driver not found in classpath while starting spark-sql and the Thrift server

I am getting the following errors when starting the spark-sql shell.

But when I start the shell with the command below, it works:

./spark-sql --jars /usr/local/hive/lib/mysql-connector-java.jar

However, when I start the Thrift server the same way with the command below, it throws the same error again:

/usr/local/spark/sbin/start-thriftserver.sh --jars /usr/local/hive/lib/mysql-connector-java.jar

Please help me understand how this can be resolved so that I do not have to pass the jar path externally, and why it works for spark-sql but not for the Thrift server. Is there somewhere I need to set the classpath that I am missing?

Please let me know if you need anything else.

    15/10/18 05:15:33 INFO server.Server: jetty-8.y.z-SNAPSHOT
    15/10/18 05:15:33 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47703
    15/10/18 05:15:33 INFO util.Utils: Successfully started service 'HTTP file server' on port 47703.
    15/10/18 05:15:33 INFO spark.SparkEnv: Registering OutputCommitCoordinator
    15/10/18 05:15:38 INFO server.Server: jetty-8.y.z-SNAPSHOT
    15/10/18 05:15:38 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
    15/10/18 05:15:38 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
    15/10/18 05:15:38 INFO ui.SparkUI: Started SparkUI at http://192.168.1.12:4040
    15/10/18 05:15:38 INFO spark.SparkContext: Added JAR file:/usr/local/hive/lib/mysql-connector-java.jar at http://192.168.1.12:47703/jars/mysql-connector-java.jar with timestamp 1445125538564
    15/10/18 05:15:38 INFO client.AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@192.168.1.12:7077/user/Master...
    15/10/18 05:15:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151018051538-0018
    15/10/18 05:15:38 INFO client.AppClient$ClientActor: Executor added: app-20151018051538-0018/0 on worker-20151018024224-192.168.1.12-50211 (192.168.1.12:50211) with 4 cores
    15/10/18 05:15:38 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20151018051538-0018/0 on hostPort 192.168.1.12:50211 with 4 cores, 512.0 MB RAM
    15/10/18 05:15:38 INFO client.AppClient$ClientActor: Executor updated: app-20151018051538-0018/0 is now LOADING
    15/10/18 05:15:38 INFO client.AppClient$ClientActor: Executor updated: app-20151018051538-0018/0 is now RUNNING
    15/10/18 05:15:39 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43460.
    15/10/18 05:15:39 INFO netty.NettyBlockTransferService: Server created on 43460
    15/10/18 05:15:39 INFO storage.BlockManagerMaster: Trying to register BlockManager
    15/10/18 05:15:39 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.1.12:43460 with 265.4 MB RAM, BlockManagerId(driver, 192.168.1.12, 43460)
    15/10/18 05:15:39 INFO storage.BlockManagerMaster: Registered BlockManager
    15/10/18 05:15:39 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
    15/10/18 05:15:40 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
    15/10/18 05:15:40 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
    15/10/18 05:15:40 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    15/10/18 05:15:40 INFO metastore.ObjectStore: ObjectStore, initialize called
    15/10/18 05:15:41 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    15/10/18 05:15:41 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
    15/10/18 05:15:41 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    15/10/18 05:15:41 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    15/10/18 05:15:42 INFO cluster.SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.1.12:56227/user/Executor#-1120183734]) with ID 0
    15/10/18 05:15:42 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.1.12:34713 with 265.4 MB RAM, BlockManagerId(0, 192.168.1.12, 34713)
    15/10/18 05:15:52 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
    15/10/18 05:15:52 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
    15/10/18 05:15:52 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), after : "".
    15/10/18 05:15:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    15/10/18 05:15:54 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    15/10/18 05:16:01 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
    15/10/18 05:16:01 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
    15/10/18 05:16:03 INFO metastore.ObjectStore: Initialized ObjectStore
    15/10/18 05:16:04 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
    15/10/18 05:16:05 INFO metastore.HiveMetaStore: Added admin role in metastore
    15/10/18 05:16:05 INFO metastore.HiveMetaStore: Added public role in metastore
    15/10/18 05:16:05 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
    15/10/18 05:16:05 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
    15/10/18 05:16:05 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
    15/10/18 05:16:05 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
    15/10/18 05:16:06 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead
    15/10/18 05:16:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    15/10/18 05:16:06 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
    15/10/18 05:16:06 INFO metastore.ObjectStore: ObjectStore, initialize called
    15/10/18 05:16:07 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
    15/10/18 05:16:07 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
    15/10/18 05:16:07 WARN DataNucleus.Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
    Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
        at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:105)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:170)
        at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:166)
        at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:212)
        at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:175)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:55)
        at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:73)
        at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
    Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
        ... 21 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
        ... 26 more
    Caused by: javax.jdo.JDOFatalInternalException: Error creating transactional connection factory
    NestedThrowables:
    java.lang.reflect.InvocationTargetException
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:587)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:310)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:339)
        at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:248)
        at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:223)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:58)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:67)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:497)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:475)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:523)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:397)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
        at org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
        ... 31 more
    Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:325)
        at org.datanucleus.store.AbstractStoreManager.registerConnectionFactory(AbstractStoreManager.java:282)
        at org.datanucleus.store.AbstractStoreManager.<init>(AbstractStoreManager.java:240)
        at org.datanucleus.store.rdbms.RDBMSStoreManager.<init>(RDBMSStoreManager.java:286)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:631)
        at org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301)
        at org.datanucleus.NucleusContext.createStoreManagerForProperties(NucleusContext.java:1187)
        at org.datanucleus.NucleusContext.initialise(NucleusContext.java:356)
        at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:775)
        ... 60 more
    Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "dbcp-builtin" plugin to create a ConnectionPool gave an error : The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:259)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.initialiseDataSources(ConnectionFactoryImpl.java:131)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.<init>(ConnectionFactoryImpl.java:85)
        ... 78 more
    Caused by: org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.
        at org.datanucleus.store.rdbms.connectionpool.AbstractConnectionPoolFactory.loadDriver(AbstractConnectionPoolFactory.java:58)
        at org.datanucleus.store.rdbms.connectionpool.DBCPBuiltinConnectionPoolFactory.createConnectionPool(DBCPBuiltinConnectionPoolFactory.java:49)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl.generateDataSources(ConnectionFactoryImpl.java:238)
        ... 80 more
    15/10/18 05:16:07 INFO spark.SparkContext: Invoking stop() from shutdown hook

person Kundan Kumar    schedule 18.10.2015


Answers (4)


Copy mysql-connector-java-5.1.38-bin.jar into Spark's jars directory (Spark 2.x):

$ cp -r $HIVE_HOME/lib/mysql-connector-java-5.1.38-bin.jar $SPARK_HOME/jars/
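After copying, restart the Thrift server so the newly added jar is picked up (a sketch; the sbin path is assumed from the question):

# stop and start the Thrift server with the connector now in $SPARK_HOME/jars
/usr/local/spark/sbin/stop-thriftserver.sh
/usr/local/spark/sbin/start-thriftserver.sh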
person KARTHIKEYAN.A    schedule 04.09.2017
comment
For some reason --jars did not work for me; I think --driver-class-path has to be added alongside --jars when the jar is also needed on the driver side. Copying the jar into the jars folder worked for me. - person Mahesh; 04.10.2017
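A sketch of combining both flags when launching the Thrift server (jar path taken from the question; whether your setup needs both flags is an assumption to verify):

# --jars ships the connector to the executors; --driver-class-path puts it
# on the driver's own classpath, where the Hive metastore client loads it
/usr/local/spark/sbin/start-thriftserver.sh \
  --jars /usr/local/hive/lib/mysql-connector-java.jar \
  --driver-class-path /usr/local/hive/lib/mysql-connector-java.jar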

The root problem: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient.

You have already copied the MySQL connector into Hive's lib directory; now put its path on the classpath in spark-env.sh:

export SPARK_CLASSPATH=/home/hadoop/work/apache-hive-2.0.0-bin/lib/mysql-connector-java-5.1.38-bin.jar

Finally, put hive-site.xml into Spark's conf folder. Check again; this should resolve the problem.
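A minimal sketch of that last step, assuming the stock install paths used elsewhere in this thread:

# copy the metastore configuration so Spark connects to the MySQL-backed
# metastore instead of falling back to a local Derby one
cp /usr/local/hive/conf/hive-site.xml /usr/local/spark/conf/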

person Venu A Positive    schedule 16.05.2016
comment
It worked for me after adding SPARK_CLASSPATH and hive-site.xml. - person Lijju Mathew; 06.06.2016

Try including the jars in SPARK_CLASSPATH; you can also set this in spark-env.sh. Which Spark version are you using? Spark 1.3 and earlier have problems adding JDBC drivers via --jars.

person Sathish    schedule 18.10.2015
comment
Spark 1.5.2 complains about using SPARK_CLASSPATH: 15/12/22 08:36:31 WARN SparkConf: Detected SPARK_CLASSPATH (set to '/usr/share/java/mysql-connector-java.jar'). This is deprecated in Spark 1.0+. Please instead use: - ./spark-submit with --driver-class-path to augment the driver classpath - spark.executor.extraClassPath to augment the executor classpath - person chrisinmtown; 22.12.2015
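Following that warning, the persistent equivalent of SPARK_CLASSPATH is a pair of entries in spark-defaults.conf. A sketch, reusing the jar path from this thread (adjust to your machine):

# the driver entry is where the metastore client needs the JDBC driver;
# the executor entry covers tasks that read over JDBC
echo "spark.driver.extraClassPath /usr/local/hive/lib/mysql-connector-java.jar" >> $SPARK_HOME/conf/spark-defaults.conf
echo "spark.executor.extraClassPath /usr/local/hive/lib/mysql-connector-java.jar" >> $SPARK_HOME/conf/spark-defaults.conf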

Add SPARK_HOME to your ~/.bash_profile as shown below.

# set spark directory
export SPARK_HOME=/var/www/python_project/extras/spark/spark-2.4.4-bin-hadoop2.7

After saving the file, run the following command:

source ~/.bash_profile
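You can then quickly verify that the variable is visible in the current shell:

echo $SPARK_HOME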

It saved my day :)

Hope this helps.

person Rajarathinam    schedule 18.12.2019