
How to View Error Information When Something Goes Wrong in Hadoop
2019-04-22 00:39:11 · Views: 63

While configuring Hadoop earlier, you may have noticed that the very first thing to set after unpacking is the environment variable pointing at the JDK, and that jps (the JVM process status tool) is what we used afterwards to check whether the services were running. That is no coincidence: Hadoop is built on Java, and every one of its daemons is a Java process, which is why the Java environment is the first thing to configure. So when a daemon, or an application running on Hadoop, hits an error, how do you view the log output?
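Since every daemon is a JVM process, a quick first check is to compare the jps output against the daemons you expect. Below is a minimal sketch of that check; the expected-daemon list assumes a pseudo-distributed single-node setup, and the jps output is a made-up sample so the logic runs anywhere (on a real node you would capture it with `running=$(jps)`):

```shell
# Expected daemons for a pseudo-distributed (single-node) setup -- adjust
# the list to match your own cluster layout.
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"

# In practice you would capture live output with: running=$(jps)
# A made-up jps-style sample is used here so the check runs anywhere.
running="4321 NameNode
4513 DataNode
4755 ResourceManager"

# Compare each expected daemon against the process list.
report=""
for d in $expected; do
  if printf '%s\n' "$running" | grep -qw "$d"; then
    report="$report$d: running\n"
  else
    report="$report$d: MISSING -- check its .log file under \$HADOOP_HOME/logs\n"
  fi
done
printf "$report"
```

Any daemon flagged as missing points you straight at the matching .log file described below.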

You can open the logs directory under your Hadoop installation:

[super-yong@bigdata-01 logs]$ ll
total 552
-rw-rw-r-- 1 super-yong super-yong  72671 Jan 18 09:51 hadoop-super-yong-datanode-bigdata-01.superyong.com.log
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:51 hadoop-super-yong-datanode-bigdata-01.superyong.com.out
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:39 hadoop-super-yong-datanode-bigdata-01.superyong.com.out.1
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:38 hadoop-super-yong-datanode-bigdata-01.superyong.com.out.2
-rw-rw-r-- 1 super-yong super-yong  97785 Jan 18 10:08 hadoop-super-yong-namenode-bigdata-01.superyong.com.log
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:51 hadoop-super-yong-namenode-bigdata-01.superyong.com.out
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:39 hadoop-super-yong-namenode-bigdata-01.superyong.com.out.1
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:38 hadoop-super-yong-namenode-bigdata-01.superyong.com.out.2
-rw-rw-r-- 1 super-yong super-yong  48665 Jan 18 10:08 hadoop-super-yong-secondarynamenode-bigdata-01.superyong.com.log
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:51 hadoop-super-yong-secondarynamenode-bigdata-01.superyong.com.out
-rw-rw-r-- 1 super-yong super-yong    722 Jan 18 09:39 hadoop-super-yong-secondarynamenode-bigdata-01.superyong.com.out.1
-rw-rw-r-- 1 super-yong super-yong      0 Jan 18 09:38 SecurityAuth-super-yong.audit
drwxr-xr-x 2 super-yong super-yong   4096 Jan 18 10:11 userlogs
-rw-rw-r-- 1 super-yong super-yong 127182 Jan 18 10:07 yarn-super-yong-nodemanager-bigdata-01.superyong.com.log
-rw-rw-r-- 1 super-yong super-yong   1515 Jan 18 10:07 yarn-super-yong-nodemanager-bigdata-01.superyong.com.out
-rw-rw-r-- 1 super-yong super-yong   1515 Jan 18 10:06 yarn-super-yong-nodemanager-bigdata-01.superyong.com.out.1
-rw-rw-r-- 1 super-yong super-yong   1515 Jan 18 10:05 yarn-super-yong-nodemanager-bigdata-01.superyong.com.out.2
-rw-rw-r-- 1 super-yong super-yong    702 Jan 18 09:44 yarn-super-yong-nodemanager-bigdata-01.superyong.com.out.3
-rw-rw-r-- 1 super-yong super-yong 146611 Jan 18 10:06 yarn-super-yong-resourcemanager-bigdata-01.superyong.com.log
-rw-rw-r-- 1 super-yong super-yong    702 Jan 18 10:06 yarn-super-yong-resourcemanager-bigdata-01.superyong.com.out
-rw-rw-r-- 1 super-yong super-yong    702 Jan 18 10:05 yarn-super-yong-resourcemanager-bigdata-01.superyong.com.out.1
-rw-rw-r-- 1 super-yong super-yong   1524 Jan 18 09:44 yarn-super-yong-resourcemanager-bigdata-01.superyong.com.out.2
[super-yong@bigdata-01 logs]$

You will find many files ending in .log and .out here:

.log

These files hold the log messages that would otherwise go to the console; you can think of them as the equivalent of Java's console error output:

Example:

2019-01-18 09:51:37,072 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-01-18 09:51:38,061 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-01-18 09:51:38,173 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2019-01-18 09:51:38,173 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2019-01-18 09:51:38,181 INFO org.apache.hadoop.hdfs.server.datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
2019-01-18 09:51:38,183 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is bigdata-01.superyong.com
2019-01-18 09:51:38,193 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2019-01-18 09:51:38,224 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2019-01-18 09:51:38,226 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2019-01-18 09:51:38,226 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2019-01-18 09:51:38,404 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-01-18 09:51:38,419 INFO org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-01-18 09:51:38,430 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2019-01-18 09:51:38,443 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-01-18 09:51:38,447 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2019-01-18 09:51:38,447 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2019-01-18 09:51:38,447 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2019-01-18 09:51:38,482 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 46892
2019-01-18 09:51:38,482 INFO org.mortbay.log: jetty-6.1.26
2019-01-18 09:51:38,743 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:46892
2019-01-18 09:51:38,834 INFO org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
2019-01-18 09:51:39,034 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = super-yong
2019-01-18 09:51:39,034 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2019-01-18 09:51:39,126 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2019-01-18 09:51:39,155 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2019-01-18 09:51:39,263 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2019-01-18 09:51:39,277 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2019-01-18 09:51:39,310 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2019-01-18 09:51:39,333 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to bigdata-01.superyong.com/192.168.59.11:8020 starting to offer service
2019-01-18 09:51:39,362 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-01-18 09:51:39,362 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2019-01-18 09:51:39,825 INFO org.apache.hadoop.hdfs.server.common.Storage: Using 1 threads to upgrade data directories (dfs.datanode.parallel.volumes.load.threads.num=1, dataDirs=1)
2019-01-18 09:51:39,843 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/in_use.lock acquired by nodename 4513@bigdata-01.superyong.com
2019-01-18 09:51:39,916 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-684178811-192.168.59.11-1547782880194
2019-01-18 09:51:39,916 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled for /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current/BP-684178811-192.168.59.11-1547782880194
2019-01-18 09:51:39,918 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=1782187720;bpid=BP-684178811-192.168.59.11-1547782880194;lv=-56;nsInfo=lv=-63;cid=CID-19c5e79c-af25-418b-9cdc-9a11f2eb22c2;nsid=1782187720;c=0;bpid=BP-684178811-192.168.59.11-1547782880194;dnuuid=eb6be290-2229-42d2-b8b8-3856f7068873
2019-01-18 09:51:39,983 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: DS-93aec2f1-8f3c-4012-b582-3de985fab6b2
2019-01-18 09:51:39,984 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current, StorageType: DISK
2019-01-18 09:51:40,047 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2019-01-18 09:51:40,050 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-684178811-192.168.59.11-1547782880194
2019-01-18 09:51:40,054 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-684178811-192.168.59.11-1547782880194 on volume /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current...
2019-01-18 09:51:40,068 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Cached dfsUsed found for /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current/BP-684178811-192.168.59.11-1547782880194/current: 32768
2019-01-18 09:51:40,076 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-684178811-192.168.59.11-1547782880194 on /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current: 22ms
2019-01-18 09:51:40,082 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-684178811-192.168.59.11-1547782880194: 32ms
2019-01-18 09:51:40,085 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-684178811-192.168.59.11-1547782880194 on volume /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current...
2019-01-18 09:51:40,086 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-684178811-192.168.59.11-1547782880194 on volume /opt/modules/hadoop-2.7.3/data/tmpData/dfs/data/current: 1ms
2019-01-18 09:51:40,089 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 7ms
2019-01-18 09:51:40,232 INFO org.apache.hadoop.hdfs.server.datanode.VolumeScanner: VolumeScanner(/opt/modules/hadoop-2.7.3/data/tmpData/dfs/data, DS-93aec2f1-8f3c-4012-b582-3de985fab6b2): no suitable block pools found to scan.  Waiting 1763490661 ms.
2019-01-18 09:51:40,235 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1547845205235 with interval 21600000
2019-01-18 09:51:40,237 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-684178811-192.168.59.11-1547782880194 (Datanode Uuid null) service to bigdata-01.superyong.com/192.168.59.11:8020 beginning handshake with NN
2019-01-18 09:51:40,280 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-684178811-192.168.59.11-1547782880194 (Datanode Uuid null) service to bigdata-01.superyong.com/192.168.59.11:8020 successfully registered with NN
2019-01-18 09:51:40,280 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode bigdata-01.superyong.com/192.168.59.11:8020 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2019-01-18 09:51:40,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-684178811-192.168.59.11-1547782880194 (Datanode Uuid eb6be290-2229-42d2-b8b8-3856f7068873) service to bigdata-01.superyong.com/192.168.59.11:8020 trying to claim ACTIVE state with txid=5
2019-01-18 09:51:40,376 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-684178811-192.168.59.11-1547782880194 (Datanode Uuid eb6be290-2229-42d2-b8b8-3856f7068873) service to bigdata-01.superyong.com/192.168.59.11:8020
2019-01-18 09:51:40,457 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Successfully sent block report 0x2641aaa405f,  containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 5 msec to generate and 76 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2019-01-18 09:51:40,457 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-684178811-192.168.59.11-1547782880194
[super-yong@bigdata-01 logs]$ 

This is a snippet of .log content I captured; it is exactly the same as the log output you would see in a Java console!
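Because each .log line starts with a timestamp and a level (INFO, WARN, ERROR), you rarely need to read the whole file: filtering by level surfaces problems quickly. A minimal sketch follows; the WARN and ERROR lines are made-up samples for illustration, and on a real node you would grep the .log files themselves:

```shell
# Made-up log4j-style sample; on a real node you would run something like:
#   grep -E ' (WARN|ERROR) ' hadoop-*-datanode-*.log
log='2019-01-18 09:51:38,061 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
2019-01-18 09:51:39,120 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: sample warning message
2019-01-18 09:51:40,347 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: sample error message'

# Keep only the WARN and ERROR entries.
printf '%s\n' "$log" | grep -E ' (WARN|ERROR) '
```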

.out

As we know, a Java program produces not only logger output but also standard output. The .out files capture that standard output: anything printed with, for example, System.out.println ends up in these files.

[super-yong@bigdata-01 logs]$ tail -100 hadoop-super-yong-datanode-bigdata-01.superyong.com.out
ulimit -a for user super-yong
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14739
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1024
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This is a snippet from a .out file; it looks just like standard output in Java.

So whenever you run into an error later, just open the relevant log file and look at its last 100 lines or so.
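That "check the last 100 lines" workflow can be sketched with tail. The demo below builds a throwaway file so it runs anywhere; the commented commands show the real invocation against daemon logs (the hadoop-&lt;user&gt;-&lt;daemon&gt;-&lt;host&gt;.log names follow the listing shown earlier):

```shell
# Self-contained demo: build a throwaway 200-line file and inspect its
# tail, just as you would with a real daemon log under $HADOOP_HOME/logs.
logfile=$(mktemp)
for i in $(seq 1 200); do echo "line $i" >> "$logfile"; done

# tail -n 100 keeps lines 101..200; grab the first of them to show the cutoff.
first=$(tail -n 100 "$logfile" | head -n 1)
echo "$first"

# Against a real cluster you would instead run something like:
#   tail -n 100 hadoop-<user>-datanode-<host>.log   # last 100 lines
#   tail -f  hadoop-<user>-namenode-<host>.log      # follow live output
rm -f "$logfile"
```

`tail -f` is especially useful when reproducing a failure: keep it running in one terminal while you rerun the failing job in another.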
