Copyright notice: IT资讯科技 https://blog.csdn.net/qq_38460284/article/details/90232118
1. Install and configure Hadoop
Commonly used HDFS commands:
hadoop dfs -ls path
hadoop dfs -rmr file
hadoop dfs -mkdir path
hadoop dfs -cat file
2. Get a WordCount program and save it as wordcount.cpp
You can take it from: http://wiki.apache.org/hadoop/C++WordCount
or from the Hadoop installation tree: /usr/local/hadoop-0.20.2/src/examples/pipes/impl/wordcount-simple.cc
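Before wiring the program into Pipes, it helps to see the core logic in isolation. The sketch below is a hypothetical, standalone version of what the simple WordCount mapper and reducer do together: the map phase tokenizes the input on whitespace and emits (word, 1) pairs, and the reduce phase sums the counts per word. It uses only the C++ standard library and is not the Pipes program itself (which must subclass the HadoopPipes mapper/reducer classes instead).

```cpp
#include <map>
#include <sstream>
#include <string>

// Standalone sketch of the WordCount logic (assumed behavior of the
// simple example): map = split text into whitespace-separated tokens
// and emit (word, 1); reduce = sum the counts for each word.
std::map<std::string, int> wordCount(const std::string& text) {
    std::map<std::string, int> counts;
    std::istringstream in(text);
    std::string word;
    while (in >> word) {   // tokenize on whitespace, as the simple mapper does
        ++counts[word];    // "reduce" step: accumulate the emitted 1s per word
    }
    return counts;
}
```

In the real job, the framework handles the shuffle between the two phases; this sketch just collapses map and reduce into one in-memory pass to show what the output counts mean.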
3. Write a Makefile
HADOOP_INSTALL=/usr/local/hadoop-0.20.2
PLATFORM=Linux-i386-32
CC = g++
CPPFLAGS = -m32 -I$(HADOOP_INSTALL)/c++/$(PLATFORM)/include
wordcount: wordcount.cpp
$(CC) $(CPPFLAGS) $< -Wall -L$(HADOOP_INSTALL)/c++/$(PLATFORM)/lib -lhadooppipes -lhadooputils -lpthread -g -O2 -o $@
###
Check your CPU architecture (e.g. with cat /proc/cpuinfo or uname -m) and set PLATFORM to match the library directory shipped with your Hadoop build: Linux-i386-32 for a 32-bit machine, Linux-amd64-64 for a 64-bit one (also drop the -m32 flag in that case).
4. Run it:
Upload wordcount.cpp to serve as the input file: hadoop fs -put wordcount.cpp input.txt
Upload the executable: hadoop fs -put wordcount bin/wordcount
Run the job:
hadoop pipes \
-D hadoop.pipes.java.recordreader=true \
-D hadoop.pipes.java.recordwriter=true \
-input input.txt \
-output output \
-program bin/wordcount
View the results:
hadoop dfs -cat output/*