
tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapred.FileInputFormat
Copyright notice: this is an original article by the author; reproduction without permission is prohibited. https://blog.csdn.net/blackweekend/article/details/51719385

Running a WordCount job with Spark fails with the following error:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/06/20 11:05:54 INFO SparkContext: Running Spark version 1.6.1
16/06/20 11:05:54 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/20 11:05:55 INFO SecurityManager: Changing view acls to: Max
16/06/20 11:05:55 INFO SecurityManager: Changing modify acls to: Max
16/06/20 11:05:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Max); users with modify permissions: Set(Max)
16/06/20 11:05:55 INFO Utils: Successfully started service 'sparkDriver' on port 53249.
16/06/20 11:05:55 INFO Slf4jLogger: Slf4jLogger started
16/06/20 11:05:56 INFO Remoting: Starting remoting
16/06/20 11:05:56 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.2.109:53250]
16/06/20 11:05:56 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 53250.
16/06/20 11:05:56 INFO SparkEnv: Registering MapOutputTracker
16/06/20 11:05:56 INFO SparkEnv: Registering BlockManagerMaster
16/06/20 11:05:56 INFO DiskBlockManager: Created local directory at /private/var/folders/z_/thb8h2gd4hqdcz9z5r9xzl140000gn/T/blockmgr-7fd2ee1f-8875-4e1b-87a6-483264892d39
16/06/20 11:05:56 INFO MemoryStore: MemoryStore started with capacity 1140.4 MB
16/06/20 11:05:56 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/20 11:05:56 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/20 11:05:56 INFO SparkUI: Started SparkUI at http://192.168.2.109:4040
16/06/20 11:05:56 INFO Executor: Starting executor ID driver on host localhost
16/06/20 11:05:56 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53253.
16/06/20 11:05:56 INFO NettyBlockTransferService: Server created on 53253
16/06/20 11:05:56 INFO BlockManagerMaster: Trying to register BlockManager
16/06/20 11:05:56 INFO BlockManagerMasterEndpoint: Registering block manager localhost:53253 with 1140.4 MB RAM, BlockManagerId(driver, localhost, 53253)
16/06/20 11:05:56 INFO BlockManagerMaster: Registered BlockManager
16/06/20 11:05:57 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 156.2 KB, free 156.2 KB)
16/06/20 11:05:57 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 14.1 KB, free 170.3 KB)
16/06/20 11:05:57 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:53253 (size: 14.1 KB, free: 1140.4 MB)
16/06/20 11:05:57 INFO SparkContext: Created broadcast 0 from textFile at WC.scala:15
Exception in thread "main" java.lang.IllegalAccessError: tried to access method com.google.common.base.Stopwatch.<init>()V from class org.apache.hadoop.mapred.FileInputFormat
at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:312)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:331)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:330)
at cn.itcast.spark.WC$.main(WC.scala:16)
at cn.itcast.spark.WC.main(WC.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

16/06/20 11:05:58 INFO SparkContext: Invoking stop() from shutdown hook
16/06/20 11:05:58 INFO SparkUI: Stopped Spark web UI at http://192.168.2.109:4040
16/06/20 11:05:58 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/20 11:05:58 INFO MemoryStore: MemoryStore cleared
16/06/20 11:05:58 INFO BlockManager: BlockManager stopped
16/06/20 11:05:58 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/20 11:05:58 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/20 11:05:58 INFO SparkContext: Successfully stopped SparkContext
16/06/20 11:05:58 INFO ShutdownHookManager: Shutdown hook called
16/06/20 11:05:58 INFO ShutdownHookManager: Deleting directory /private/var/folders/z_/thb8h2gd4hqdcz9z5r9xzl140000gn/T/spark-81bc4c8a-5f7f-45c0-97ab-a82b88619ee5


Process finished with exit code 1
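For reference, the job itself was a plain WordCount. The post doesn't include the source, so below is a minimal sketch reconstructed from the stack trace (textFile at WC.scala:15, reduceByKey at WC.scala:16); the package name comes from the trace, while the input path and master setting are placeholders, not the author's actual values. Note the failure happens lazily: FileInputFormat.getSplits() is only invoked once the partitions of the file RDD are computed for the reduceByKey.

package cn.itcast.spark

import org.apache.spark.{SparkConf, SparkContext}

object WC {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WC").setMaster("local")
    val sc = new SparkContext(conf)

    // FileInputFormat.getSplits() runs when this RDD's partitions are first
    // computed; with Hadoop 2.4-2.6 on the classpath that is where the
    // IllegalAccessError on Guava's Stopwatch constructor is thrown.
    val lines = sc.textFile("hdfs://localhost:9000/input/words.txt") // placeholder path

    val counts = lines.flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}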

===================================================================================================================================

The versions of the key packages in my pom.xml are as follows:

<properties>
    <maven.compiler.source>1.7</maven.compiler.source>
    <maven.compiler.target>1.7</maven.compiler.target>
    <encoding>UTF-8</encoding>
    <scala.version>2.10.6</scala.version>
    <spark.version>1.6.1</spark.version>
    <hadoop.version>2.5.2</hadoop.version>
</properties>
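(The post only shows the properties; a dependencies section driven by them would presumably look something like the following. The artifact choices here, scala-library, spark-core_2.10, and hadoop-client, are assumptions based on the versions above, not copied from the original pom.xml.)

<dependencies>
    <!-- Scala 2.10.6, matching the _2.10 Spark artifact -->
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>${scala.version}</version>
    </dependency>
    <!-- Spark core for Scala 2.10 -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Hadoop client; this is the version being varied below -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>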

I ran a few tests with everything else unchanged: setting hadoop.version to 2.4.0, 2.4.1, 2.5.2, 2.6.1, or 2.6.4 all produced the error above. I don't know whether this is a Hadoop bug; some people say recompiling the Hadoop source fixes it, but I haven't tried that.


Solution:

Changing the Hadoop version to 2.2.0 makes the error go away; I also tested 2.7.2 and it works too, so this is presumably a Hadoop issue that 2.7.2 has fixed. I haven't tried other versions. (Most likely cause: in the Hadoop 2.4-2.6 line, FileInputFormat.getSplits() constructs a Guava Stopwatch directly, and that constructor is no longer public in newer Guava releases, so a newer Guava on the classpath triggers the IllegalAccessError; Hadoop 2.7 switched to its own org.apache.hadoop.util.StopWatch in HADOOP-11032, which would explain why 2.7.2 works.)
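In the pom.xml above, that is a one-line change:

<hadoop.version>2.7.2</hadoop.version> <!-- or 2.2.0; both worked in the tests above -->

To confirm which Guava version actually wins on your classpath, Maven's dependency tree can be filtered to it (a standard maven-dependency-plugin invocation, not anything specific to this project):

mvn dependency:tree -Dincludes=com.google.guava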



