
Building a Big Data Cluster: Hadoop + Hive + HBase + Spark + Zookeeper + Phoenix + Sqoop + Flume + Kafka + Azkaban — Azkaban Executor Setup

最编程 2024-04-21 12:22:12
...

executor.port=12321

# mail settings

mail.sender=
mail.host=
job.failure.email=
job.success.email=
lockdown.create.projects=false
cache.directory=cache

Configure azkaban-users.xml
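A minimal azkaban-users.xml, matching the defaults shipped with the Azkaban 2.5 distribution (the usernames, passwords, and roles below are the stock defaults, shown for illustration only — change them for any real deployment):

```xml
<azkaban-users>
  <!-- Admin account used to log in to the web UI -->
  <user username="azkaban" password="azkaban" roles="admin" groups="azkaban"/>
  <user username="metrics" password="metrics" roles="metrics"/>

  <!-- Role definitions mapping role names to permission sets -->
  <role name="admin" permissions="ADMIN"/>
  <role name="metrics" permissions="METRICS"/>
</azkaban-users>
```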






3.10.4 azkaban-executor-server-2.5.0.tar.gz
Configure azkaban.properties
#Azkaban
default.timezone.id=Asia/Shanghai

# Azkaban JobTypes Plugins

azkaban.jobtype.plugin.dir=plugins/jobtypes

#Loader for projects
executor.global.properties=conf/global.properties
azkaban.project.dir=projects

database.type=mysql
mysql.port=3306
mysql.host=192.168.1.31
mysql.database=azkaban
mysql.user=root
mysql.password=123456
mysql.numconnections=100

# Azkaban Executor settings

executor.maxThreads=50
executor.port=12321
executor.flow.threads=30
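With the executor listening on port 12321, the setup can be smoke-tested by uploading a trivial flow through the web UI. A minimal sketch (the file name test.job and project name are arbitrary; zip the file and upload it to a new project):

```
# test.job - a minimal command-type job for smoke-testing the executor
type=command
command=echo "azkaban executor ok"
```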

4. Tuning and Testing
4.1 Cluster Tuning
This is only a first pass at tuning, done while setting up the cluster; tuning for Hive, HBase, and the other components will be added later.
4.1.1 yarn-site.xml
Add the following configuration:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>20480</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>20480</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>32</value>
</property>

4.1.2 mapred-site.xml
Add the following configuration:

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx2048m</value>
</property>

4.2 Cluster Testing
4.2.1 MapReduce Test
hadoop jar /home/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar pi 5 5

4.2.2 Hive Creation Test
create database if not exists test;
create external table if not exists test.test
(id int, name string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
location '/user/hive/external/fz_external_table';

4.2.3 Sqoop Operation Test
sqoop list-databases --connect jdbc:mysql://hadoop03:3306/ --username root --password 123456

4.2.4 Kafka Operation Test
Producer:
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaProductTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop01:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        // Send 100 messages to topic test1, using the loop index as both key and value
        for (int i = 0; i < 100; i++) {
            producer.send(new ProducerRecord<String, String>("test1", Integer.toString(i), Integer.toString(i)));
        }
        producer.close();
    }
}

Consumer:
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaConsumerTest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "hadoop01:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("test1"));
        // Poll forever, printing every record received from topic test1
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
            }
        }
    }
}

4.2.5 Phoenix Operation Test
Create test table test.test:
CREATE TABLE if not exists test.test (id INTEGER not null primary key, t.name char(20), t.age integer) t.DATA_BLOCK_ENCODING='DIFF';

UPSERT INTO test.test(id,name,age) VALUES(1,'jack',23);

Connection test code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class PhoenixTest {
    private static String phoenixDriver = "org.apache.phoenix.jdbc.PhoenixDriver";

    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(phoenixDriver);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
        // Connect through the ZooKeeper quorum used by HBase
        Connection con = DriverManager.getConnection("jdbc:phoenix:hadoop01,hadoop02,hadoop03:2181");
        Statement stmt = con.createStatement();
        String sql = "select * from test.test";
        ResultSet rs = stmt.executeQuery(sql);
        while (rs.next()) {
            System.out.println("id:" + rs.getInt("id"));
            System.out.println("name:" + rs.getString("name"));
            System.out.println("age:" + rs.getInt("age"));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}
Result:
id:1
name:jack
age:23

4.2.6 Spark Operation Test
Spark on YARN test:
bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster examples/jars/spark-examples_2.11-2.1.3.jar 10