1. Download Hadoop
wget http://mirror.bjtu.edu.cn/apache/hadoop/common/hadoop-0.20.2/hadoop-0.20.2.tar.gz
(If this mirror is no longer available, old releases are kept in the Apache release archive.) Then extract the tarball:
tar -xzf hadoop-0.20.2.tar.gz
2. Configure core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
Note: mapred.job.tracker belongs in mapred-site.xml (see step 3), and dfs.replication belongs in hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
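One caveat worth knowing before formatting: with no explicit setting, this release stores all HDFS metadata and blocks under /tmp (the format step later in this walkthrough shows /tmp/hadoop-root), which many systems wipe on reboot. A sketch of the relevant property for core-site.xml follows; the path /data/hadoop-tmp is a placeholder, not part of the original setup:

```xml
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop-tmp</value>
</property>
```

If you change this after formatting, you will need to re-run the NameNode format step.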
3. Configure mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
</configuration>
4. Edit hadoop-env.sh (a shell script, not an XML file) to add the JAVA_HOME setting
export JAVA_HOME=/usr/local/java/jdk1.6.0_29   # path to your Java installation
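The edit can also be scripted. A minimal sketch, assuming the Hadoop conf directory is ./conf and using the same JDK path as above (adjust both to your install); the grep guard keeps the append idempotent:

```shell
# Append the JAVA_HOME export to conf/hadoop-env.sh, but only if it is
# not already set there (so re-running the script does not duplicate it).
HADOOP_CONF=${HADOOP_CONF:-./conf}
mkdir -p "$HADOOP_CONF"
grep -q '^export JAVA_HOME=' "$HADOOP_CONF/hadoop-env.sh" 2>/dev/null || \
  echo 'export JAVA_HOME=/usr/local/java/jdk1.6.0_29' >> "$HADOOP_CONF/hadoop-env.sh"
# Show the resulting line to confirm the edit took effect.
grep '^export JAVA_HOME=' "$HADOOP_CONF/hadoop-env.sh"
```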
5. Set up a passwordless SSH key
[root@vm-platform-dev-138113 conf]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
97:75:f9:7e:42:f9:10:13:35:0f:b9:60:a3:9f:da:b7 root@vm-platform-dev-138113
The key's randomart image is:
(randomart image omitted)
[root@vm-platform-dev-138113 conf]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
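If start-all.sh still prompts for a password after this, the usual culprit is file permissions: sshd silently ignores authorized_keys when the key file or ~/.ssh directory is group/world-writable. A quick sketch of the fix and a check (GNU stat syntax assumed):

```shell
# Ensure the SSH directory and key file exist, then tighten permissions
# to the values sshd requires before it will trust the key.
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# Print the resulting modes: should be 700 and 600.
stat -c '%a' ~/.ssh ~/.ssh/authorized_keys
```

After this, `ssh localhost` should log in without a password prompt.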
6. Start the cluster
a. Format the NameNode
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/hadoop namenode -format
12/05/22 08:47:32 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = vm-platform-dev-138115/10.20.138.115
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
12/05/22 08:47:32 INFO namenode.FSNamesystem: fsOwner=root,root,bin,daemon,sys,adm,disk,wheel
12/05/22 08:47:32 INFO namenode.FSNamesystem: supergroup=supergroup
12/05/22 08:47:32 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/05/22 08:47:32 INFO common.Storage: Image file of size 94 saved in 0 seconds.
12/05/22 08:47:32 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
12/05/22 08:47:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at vm-platform-dev-138115/10.20.138.115
************************************************************/
b. Start the daemons
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/start-all.sh
starting namenode, logging to /root/kevin/hadoop-0.20.2/bin/../logs/hadoop-root-namenode-vm-platform-dev-138115.out
localhost: starting datanode, logging to /root/kevin/hadoop-0.20.2/bin/../logs/hadoop-root-datanode-vm-platform-dev-138115.out
localhost: starting secondarynamenode, logging to /root/kevin/hadoop-0.20.2/bin/../logs/hadoop-root-secondarynamenode-vm-platform-dev-138115.out
starting jobtracker, logging to /root/kevin/hadoop-0.20.2/bin/../logs/hadoop-root-jobtracker-vm-platform-dev-138115.out
localhost: starting tasktracker, logging to /root/kevin/hadoop-0.20.2/bin/../logs/hadoop-root-tasktracker-vm-platform-dev-138115.out
c. Verify
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/hadoop fs -ls /
Found 1 items
drwxr-xr-x - root supergroup 0 2012-05-22 08:50 /tmp
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/hadoop fs -mkdir /user/hadoop/test
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/hadoop fs -chmod g+w /tmp
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/hadoop fs -chmod g+w /user/hadoop/test
[root@vm-platform-dev-138115 hadoop-0.20.2]# bin/hadoop fs -lsr /
drwxrwxr-x - root supergroup 0 2012-05-22 08:50 /tmp
drwxr-xr-x - root supergroup 0 2012-05-22 08:50 /tmp/hadoop-root
drwxr-xr-x - root supergroup 0 2012-05-22 08:50 /tmp/hadoop-root/mapred
drwx-wx-wx - root supergroup 0 2012-05-22 08:50 /tmp/hadoop-root/mapred/system
-rw------- 3 root supergroup 4 2012-05-22 08:50 /tmp/hadoop-root/mapred/system/jobtracker.info
drwxr-xr-x - root supergroup 0 2012-05-22 08:52 /user
drwxr-xr-x - root supergroup 0 2012-05-22 08:52 /user/hadoop
drwxrwxr-x - root supergroup 0 2012-05-22 08:52 /user/hadoop/test