Channel: 진성주(Sungju Jin)

How to Configure Hadoop 2.X


These days, many people in tech know about Hadoop and big data (of course, big data != Hadoop), so they try to install and operate Hadoop. However, many soon find installation and configuration harder than expected and ask how to configure Hadoop 2.X, so I am sharing my setup here.

1. Configuration file path of Hadoop
$HADOOP_HOME/etc/hadoop
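A quick way to confirm you are editing in the right place (assuming $HADOOP_HOME is exported in your shell):

```shell
# Fail fast if HADOOP_HOME is not set, then list the config files we care about
echo "${HADOOP_HOME:?HADOOP_HOME is not set}"
ls "$HADOOP_HOME/etc/hadoop" | grep -E 'core-site|hdfs-site|mapred-site|yarn-site|slaves'
```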

2. Core configuration files
– /etc/hosts
– core-site.xml
– hdfs-site.xml
– mapred-site.xml
– yarn-site.xml
– slaves

3. /etc/hosts

xxx.xxx.xxx.xxx namenode
xxx.xxx.xxx.xxx secondarynamenode
xxx.xxx.xxx.xxx datanode01
xxx.xxx.xxx.xxx datanode02
xxx.xxx.xxx.xxx datanode03
......
......
xxx.xxx.xxx.xxx datanode0N
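This /etc/hosts file must be identical on every node, or daemons will fail to find each other. A quick sanity check, using the example hostnames above:

```shell
# Confirm each cluster hostname resolves before starting any daemons
for h in namenode secondarynamenode datanode01 datanode02 datanode03; do
    getent hosts "$h" || echo "WARNING: $h does not resolve"
done
```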

4. core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>
  </property>
</configuration>
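The hadoop.tmp.dir path must exist and be writable on every node. A sketch, assuming the daemons run as a user named `hadoop` (adjust to your own user):

```shell
# Create the temp directory on each node and hand it to the Hadoop user
sudo mkdir -p /data/hadoop/tmp
sudo chown -R hadoop:hadoop /data/hadoop
```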

5. hdfs-site.xml
Replace the $HADOOP_HOME placeholders below with the actual absolute path; Hadoop does not expand shell environment variables inside these XML files.

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>secondarynamenode:50090</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>$HADOOP_HOME/etc/hadoop/exclude</value>
  </property>
  <property>
    <name>dfs.hosts</name>
    <value>$HADOOP_HOME/etc/hadoop/include</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/data/hadoop/dfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
</configuration>
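The metadata and block directories above must exist before the first start, and dfs.hosts / dfs.hosts.exclude must point at files that exist, even if the exclude list starts out empty. A sketch, again assuming a `hadoop` user and the three example DataNodes:

```shell
# NameNode / SecondaryNameNode metadata and DataNode block directories
sudo mkdir -p /data/hadoop/dfs/name /data/hadoop/dfs/data /data/hadoop/dfs/namesecondary
sudo chown -R hadoop:hadoop /data/hadoop/dfs

# Include list: one DataNode hostname per line; exclude list starts empty
printf '%s\n' datanode01 datanode02 datanode03 > "$HADOOP_HOME/etc/hadoop/include"
touch "$HADOOP_HOME/etc/hadoop/exclude"
```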

6. mapred-site.xml
Replace the $HADOOP_HOME placeholders below with the actual absolute path. Note that the two jobtracker host-list properties are MR1-era names; when mapreduce.framework.name is yarn, node include/exclude membership is generally governed by yarn.resourcemanager.nodes.include-path and yarn.resourcemanager.nodes.exclude-path instead.

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.hosts.exclude.filename</name>
    <value>$HADOOP_HOME/etc/hadoop/exclude</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.hosts.filename</name>
    <value>$HADOOP_HOME/etc/hadoop/include</value>
  </property>
</configuration>
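Once the cluster is up (step 7 and the start commands below), you can smoke-test that MapReduce really runs on YARN with the bundled examples jar (the version suffix in the jar name varies by release):

```shell
# Run the pi estimator: 2 maps, 5 samples each
hadoop jar "$HADOOP_HOME"/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 5
```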

7. yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>namenode</value>
  </property>
</configuration>
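The slaves file from step 2 simply lists one DataNode hostname per line, using the same names as /etc/hosts. After distributing all of these configuration files to every node, format HDFS once and start the daemons from the namenode; a sketch with the example hostnames:

```shell
# slaves: one DataNode hostname per line
printf '%s\n' datanode01 datanode02 datanode03 > "$HADOOP_HOME/etc/hadoop/slaves"

# Format HDFS once (on the namenode only), then start HDFS and YARN
hdfs namenode -format
start-dfs.sh
start-yarn.sh

# Each daemon (NameNode, DataNode, ResourceManager, NodeManager, ...)
# should now appear in the jps output on its node
jps
```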
