1. Write the procedural SQL
Building on the test_db database from the previous article, run vi test.sql and add:
use test_db;
begin
insert into t_test2(id,name) values(2,'你好');
insert into t_test2(id,name) values(3,'你好');
insert into t_test2(id,name) values(4,'你好');
delete from t_test2 where id=1;
select * from t_test2;
end;
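The BEGIN...END block above simply replays a short insert/delete/select sequence. As a hedged illustration of the same logic, here is a Python sketch that uses sqlite3 purely as a stand-in backend so it runs anywhere (a real run goes through HiveServer2 via hplsql, not sqlite3):

```python
import sqlite3

# Stand-in demo: replay the same statement sequence as test.sql
# against an in-memory SQLite table. With Hive you would execute
# these statements through hplsql / HiveServer2 instead.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t_test2 (id INTEGER, name TEXT)")
cur.execute("INSERT INTO t_test2 (id, name) VALUES (1, 'hello')")  # pre-existing row

for i in (2, 3, 4):
    cur.execute("INSERT INTO t_test2 (id, name) VALUES (?, ?)", (i, "hello"))
cur.execute("DELETE FROM t_test2 WHERE id = 1")

rows = cur.execute("SELECT id, name FROM t_test2 ORDER BY id").fetchall()
print(rows)  # [(2, 'hello'), (3, 'hello'), (4, 'hello')]
conn.close()
```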
2. Execute the SQL
Running hplsql -f test.sql fails with:
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://localhost:10000: java.net.ConnectException: Connection refused (Connection refused)
...
Caused by: java.net.ConnectException: Connection refused (Connection refused)
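"Connection refused" means nothing is listening on localhost:10000: either HiveServer2 is not started, or it is running on a different host (here node1 rather than localhost). A quick sketch to check reachability before running hplsql (hostname and port below match this article's setup; adjust for yours):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or hostname not resolvable
        return False

# Check whether HiveServer2 is reachable on the default port 10000.
for host in ("localhost", "node1"):
    print(host, port_open(host, 10000))
```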
3. Solution
Create a new hplsql-site.xml under Hive's conf directory with the following content:
vi /data/server/apache-hive-2.0.1-bin/conf/hplsql-site.xml
<configuration>
<property>
<name>hplsql.conn.default</name>
<value>hive2conn</value>
<description>The default connection profile</description>
</property>
<property>
<name>hplsql.conn.hive2conn</name>
<!-- Set the hostname of the Hive node and configure the username and password (root is recommended as the username; the password can be anything) -->
<value>org.apache.hive.jdbc.HiveDriver;jdbc:hive2://node1:10000;root;123456</value>
</property>
<property>
<name>hplsql.conn.init.hive2conn</name>
<value>
set mapred.job.queue.name=dev;
set hive.execution.engine=mr;
use default;
</value>
</property>
<property>
<name>hplsql.conn.convert.hive2conn</name>
<value>true</value>
</property>
<property>
<name>hplsql.conn.mysqlconn</name>
<!-- Connection format: com.mysql.jdbc.Driver;jdbc:mysql://<Hive metastore DB IP>:<port>/<Hive DB name>;<DB username>;<DB password> -->
<value>com.mysql.jdbc.Driver;jdbc:mysql://192.168.1.166:3306/hive;root;123456</value>
<description>MySQL connection</description>
</property>
<property>
<name>hplsql.dual.table</name>
<value>default.dual</value>
<description>Single row, single column table for internal operations</description>
</property>
<property>
<name>hplsql.insert.values</name>
<value>native</value>
<description>How to execute INSERT VALUES statement: native (default) and select</description>
</property>
<property>
<name>hplsql.onerror</name>
<value>exception</value>
<description>Error handling behavior: exception (default), seterror and stop</description>
</property>
<property>
<name>hplsql.temp.tables</name>
<value>native</value>
<description>Temporary tables: native (default) and managed</description>
</property>
<property>
<name>hplsql.temp.tables.schema</name>
<value></value>
<description>Schema for managed temporary tables</description>
</property>
<property>
<name>hplsql.temp.tables.location</name>
<value>/tmp/plhql</value>
<description>Location for managed temporary tables in HDFS</description>
</property>
</configuration>
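Each hplsql.conn.* value packs four semicolon-separated fields: JDBC driver class, JDBC URL, username, and password. A minimal parser sketch of that format, assuming all four fields are present and the URL itself contains no ';':

```python
# Sketch: split an hplsql.conn.* value into its four semicolon-separated
# fields (driver class, JDBC URL, username, password).
def parse_hplsql_conn(value: str) -> dict:
    driver, url, user, password = value.strip().split(";")
    return {"driver": driver, "url": url, "user": user, "password": password}

conn = parse_hplsql_conn(
    "org.apache.hive.jdbc.HiveDriver;jdbc:hive2://node1:10000;root;123456"
)
print(conn["url"])  # jdbc:hive2://node1:10000
```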
4. Re-run the SQL
[root@node1 ~]# hplsql -f test.sql
which: no hbase in (/usr/lib64/qt-3.3/bin:/data/server/apache-hive-2.0.1-bin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.8.0_172-amd64/bin:/data/server/hadoop-2.6.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/java/jdk1.8.0_172-amd64/bin:/data/server/hadoop-2.6.0/Sbin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/server/apache-hive-2.0.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/server/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Open connection: jdbc:hive2://node1:10000 (238 ms)
Starting SQL statement
SQL statement executed successfully (238 ms)
Starting SQL statement
SQL statement executed successfully (238 ms)
Starting SQL statement
SQL statement executed successfully (238 ms)
Starting SQL statement
SQL statement executed successfully (238 ms)
Starting SQL statement
SQL statement executed successfully (238 ms)
Starting query
Query executed successfully (216 ms)
2 你好
4 你好
3 你好
With that, the script runs successfully.
References:
http://lxw1234.com/archives/2015/09/490.htm
https://blog.csdn.net/smile0198/article/details/52343997