Hadoop - Setting Up, Installing, and Configuring a Distributed Environment


Cluster environment:

1. NameNode (physical host):

Linux yan-Server 3.4.36-gentoo #3 SMP Mon Apr 1 14:09:12 CST 2013 x86_64 AMD Athlon(tm) X4 750K Quad Core Processor AuthenticAMD GNU/Linux

2. DataNode1 (virtual machine):

Linux node1 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

3. DataNode2 (virtual machine):

Linux node2 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

4. DataNode3 (virtual machine):

Linux node3 3.5.0-23-generic #35~precise1-Ubuntu SMP Fri Jan 25 17:13:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux


1. Install VirtualBox

On Gentoo it can be built and installed directly from the command line; alternatively, download a binary package from the official website and install that:

emerge -av virtualbox

2. Install Ubuntu 12.04 LTS in the virtual machines

After installing Ubuntu from the image, clone two more virtual machines from it. (The clones boot with the same hostname and MAC address as the original, which causes conflicts on the LAN.)

To change the hostname, edit the file:

/etc/hostname

To change the MAC address, first delete the file:

/etc/udev/rules.d/70-persistent-net.rules

then set the virtual machine's MAC address in VirtualBox before booting it.


After the machine boots, the deleted file is regenerated automatically with the new MAC address for the network interface.
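As a concrete sketch, the renaming step for one clone might look like the following (the hostname node2 and the use of VBoxManage are illustrative; the MAC address can just as well be set in the VirtualBox GUI):

# inside the cloned guest: give it a unique hostname
echo "node2" > /etc/hostname
# remove the cached udev rule so a fresh one is generated for the new MAC on next boot
rm /etc/udev/rules.d/70-persistent-net.rules
# on the host, with the clone powered off: assign a new random MAC to its first NIC
VBoxManage modifyvm "node2" --macaddress1 auto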

To make it easier to share files between the hosts, you can run NFS on yan-Server and add the mount command to /etc/rc.local on the clients so the NFS directory is mounted automatically at boot.
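A minimal sketch of such an NFS setup, assuming nfs-kernel-server is installed on yan-Server and nfs-common on the nodes; the paths /srv/share and /mnt/share are only examples:

# on yan-Server: export a directory to the cluster subnet
echo "/srv/share 192.168.137.0/24(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
# on each DataNode: mount it automatically at boot via /etc/rc.local
echo "mount -t nfs yan-Server:/srv/share /mnt/share" >> /etc/rc.local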

Remove NetworkManager from each virtual machine and configure static IP addresses by hand. For example, node2's /etc/network/interfaces looks like this:

auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 192.168.137.202
gateway 192.168.137.1
netmask 255.255.255.0
network 192.168.137.0
broadcast 192.168.137.255
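Removing NetworkManager and applying the new interface configuration can be done roughly as follows on Ubuntu 12.04 (standard commands, not from the original post):

# drop NetworkManager so it no longer manages eth0
apt-get remove --purge network-manager
# bring the statically configured interface up
/etc/init.d/networking restart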

With the basic environment of the hosts set up, the hostnames and IP addresses are as follows:

Role        Hostname    IP
NameNode    yan-Server  192.168.137.100
DataNode    node1       192.168.137.201
DataNode    node2       192.168.137.202
DataNode    node3       192.168.137.203
To save resources, you can have the virtual machines boot to a text console by default and log in to them over SSH from a terminal on the host. (The SSH service is already running and allows remote login; its installation is not covered here.)

To set this up, edit /etc/default/grub and uncomment the following line:

GRUB_TERMINAL=console

then run update-grub.

3. Configuring the Hadoop environment


3.1 Configure the JDK environment (this was already done earlier, so it is not covered again here):

export JAVA_HOME=/opt/jdk1.7.0_21
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

3.2 Download Hadoop from the official website and extract it to /opt/ (hadoop-2.0.4-alpha is used here)
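For reference, the download and extraction might look like this (the URL points at the Apache archive and is only an example; any mirror carrying hadoop-2.0.4-alpha works):

wget http://archive.apache.org/dist/hadoop/common/hadoop-2.0.4-alpha/hadoop-2.0.4-alpha.tar.gz
tar -xzf hadoop-2.0.4-alpha.tar.gz -C /opt/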

Then go into /opt/hadoop-2.0.4-alpha/etc/hadoop and edit the Hadoop configuration files.

Edit hadoop-env.sh:

export HADOOP_PREFIX=/opt/hadoop-2.0.4-alpha
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export JAVA_HOME=/opt/jdk1.7.0_21

Edit hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.0.4-alpha/workspace/name</value>
        <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.</description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.0.4-alpha/workspace/data</value>
        <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
        </description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
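The name and data directories above do not exist in a fresh extraction; Hadoop creates some of them itself, but creating them up front (paths taken from the hdfs-site.xml values) does no harm:

mkdir -p /opt/hadoop-2.0.4-alpha/workspace/name
mkdir -p /opt/hadoop-2.0.4-alpha/workspace/data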


Edit mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.job.tracker</name>
        <value>hdfs://yan-Server:9001</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
    </property>
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>3072</value>
    </property>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx2560M</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>
    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>50</value>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>file:/opt/hadoop-2.0.4-alpha/workspace/systemdir</value>
        <final>true</final>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>file:/opt/hadoop-2.0.4-alpha/workspace/localdir</value>
        <final>true</final>
    </property>
</configuration>


Edit yarn-env.sh:

export HADOOP_PREFIX=/opt/hadoop-2.0.4-alpha
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export YARN_CONF_DIR=${HADOOP_PREFIX}/etc/hadoop
export JAVA_HOME=/opt/jdk1.7.0_21

Edit yarn-site.xml:

<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>yan-Server:8080</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>yan-Server:8081</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>yan-Server:8082</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce.shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

Copy the configured Hadoop directory to each DataNode. (The JDK setup on the DataNodes matches the NameNode's, so the JDK configuration does not need to be changed.)
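One way to push the tree out to the DataNodes from yan-Server (hostnames as in the table in section 2):

for node in node1 node2 node3; do
    scp -r /opt/hadoop-2.0.4-alpha root@$node:/opt/
done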


3.3 Edit /etc/hosts on the NameNode host (yan-Server) and add the cluster's hostnames:

192.168.137.100 yan-Server
192.168.137.201 node1
192.168.137.202 node2
192.168.137.203 node3

3.4 Edit /etc/hosts on each DataNode and add the same entries:

192.168.137.100 yan-Server
192.168.137.201 node1
192.168.137.202 node2
192.168.137.203 node3

3.5 Configure passwordless SSH login (all hosts use the root account)

On the NameNode host, run:

ssh-keygen -t rsa
Press Enter through all the prompts, then copy .ssh/id_rsa.pub to each DataNode as the root user's .ssh/authorized_keys file.
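One way to do the copy is with ssh-copy-id (appending id_rsa.pub to each node's ~/.ssh/authorized_keys by hand with scp and cat works just as well):

for node in node1 node2 node3; do
    ssh-copy-id root@$node
done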

Then log in remotely from the NameNode host once:

ssh root@node1
The first login may ask for a password; after that none is needed. (Log in to each of the other DataNodes once as well to confirm that passwordless login works.)

4. Starting Hadoop

For convenience, configure Hadoop's environment variables in /etc/profile on the NameNode host, as follows:

export HADOOP_PREFIX="/opt/hadoop-2.0.4-alpha"
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}

4.1 Format the NameNode

hdfs namenode -format

4.2 Start all the daemons

start-all.sh
Check in a browser at:

http://localhost:8088/


All of the DataNodes have started normally.
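Besides the web UI, the daemons can be checked from the command line with standard tools (not part of the original walkthrough):

jps                     # on each node: NameNode/ResourceManager on yan-Server, DataNode/NodeManager on the DataNodes
hdfs dfsadmin -report   # on yan-Server: lists the live DataNodes and their capacity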

4.3 Stop all the daemons

stop-all.sh

At this point, the Hadoop environment setup is essentially complete.
The version used above is an alpha release, not a stable one. The stable release's configuration files differ somewhat, and copying the configuration above verbatim may leave the JobTracker or TaskTracker unable to start. If you run into problems, check the log files.

Below are the configuration files I used for the stable 1.1.2 release:

conf/core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
 <property>
     <name>fs.default.name</name>
     <value>hdfs://yan-Server:49000</value>
 </property>
 <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp</value>
  </property>
</configuration>

conf/hadoop-env.sh

export HADOOP_PREFIX=/opt/hadoop-1.1.2
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/conf
export YARN_CONF_DIR=${HADOOP_PREFIX}/conf
export JAVA_HOME=/opt/jdk1.7.0_21

conf/hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/opt/hadoop-1.1.2/workspace/name</value>
        <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.</description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/opt/hadoop-1.1.2/workspace/data</value>
        <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
        </description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

 

conf/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>


<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hdfs://yan-Server:9001</value>
        <final>true</final>
    </property>
</configuration>


In conf/masters, enter the NameNode hostname.

In conf/slaves, enter the DataNode hostnames.
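For this cluster the two files would simply contain the hostnames from the table in section 2:

# conf/masters
yan-Server

# conf/slaves
node1
node2
node3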

 

References:

1. 實戰Hadoop--開啟通向云計算的捷徑, by Liu Peng (劉鵬)

2. http://www.cnblogs.com/aniuer/archive/2012/07/16/2594448.html

Source: http://blog.csdn.net/yming0221/article/details/8989203
