
Sqoop Import to HDFS

 

1. Note: when pasting a command from Windows straight into Linux, double-check characters such as -- and quotes, which can get mangled on the way.

sqoop-list-databases --connect jdbc:mysql://122.206.79.212:3306/ --username root -P 

  

First list the available databases. Oddly, only the databases that appear in this listing can be imported.
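To drill one level deeper, the tables inside a given database can be listed the same way (a minimal sketch; the dating database name is taken from the import command below):

sqoop-list-tables --connect jdbc:mysql://122.206.79.212:3306/dating --username root -P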

 

2. Import to HDFS

sqoop import  --connect jdbc:mysql://122.206.79.212:3306/dating --username root --password 123456 --table t_rec_top --driver com.mysql.jdbc.Driver 

Specify the database, the port, the username, the password, and the table; the --driver option does not actually need to be added. Since no HDFS destination is given, Sqoop writes to a default location: /user/<login user>/<table name>.

Notice that the job runs map tasks only; there is no reduce task.

Warning: /home/hxsyl/Spark_Relvant/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hxsyl/Spark_Relvant/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/03/15 11:05:12 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/03/15 11:05:12 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/15 11:05:12 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
17/03/15 11:05:12 INFO manager.SqlManager: Using default fetchSize of 1000
17/03/15 11:05:12 INFO tool.CodeGenTool: Beginning code generation
17/03/15 11:05:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM t_rec_top AS t WHERE 1=0
17/03/15 11:05:13 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM t_rec_top AS t WHERE 1=0
17/03/15 11:05:13 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hxsyl/Spark_Relvant/hadoop-2.6.4/share/hadoop/mapreduce
Note: /tmp/sqoop-hxsyl/compile/ddeeb02cdbd25cddc2662317b89c80f1/t_rec_top.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/03/15 11:05:18 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hxsyl/compile/ddeeb02cdbd25cddc2662317b89c80f1/t_rec_top.jar
17/03/15 11:05:18 INFO mapreduce.ImportJobBase: Beginning import of t_rec_top
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hxsyl/Spark_Relvant/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hxsyl/Spark_Relvant/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/03/15 11:05:19 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/03/15 11:05:19 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM t_rec_top AS t WHERE 1=0
17/03/15 11:05:21 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/03/15 11:05:21 INFO client.RMProxy: Connecting to ResourceManager at CentOSMaster/192.168.58.180:8032
17/03/15 11:05:28 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/15 11:05:28 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(id), MAX(id) FROM t_rec_top
17/03/15 11:05:28 INFO mapreduce.JobSubmitter: number of splits:1
17/03/15 11:05:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1489547007191_0001
17/03/15 11:05:30 INFO impl.YarnClientImpl: Submitted application application_1489547007191_0001
17/03/15 11:05:31 INFO mapreduce.Job: The url to track the job: http://CentOSMaster:8088/proxy/application_1489547007191_0001/
17/03/15 11:05:31 INFO mapreduce.Job: Running job: job_1489547007191_0001
17/03/15 11:05:48 INFO mapreduce.Job: Job job_1489547007191_0001 running in uber mode : false
17/03/15 11:05:48 INFO mapreduce.Job:  map 0% reduce 0%
17/03/15 11:06:06 INFO mapreduce.Job:  map 100% reduce 0%
17/03/15 11:06:07 INFO mapreduce.Job: Job job_1489547007191_0001 completed successfully
17/03/15 11:06:07 INFO mapreduce.Job: Counters: 30
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=127058
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=99
		HDFS: Number of bytes written=21
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Other local map tasks=1
		Total time spent by all maps in occupied slots (ms)=13150
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=13150
		Total vcore-milliseconds taken by all map tasks=13150
		Total megabyte-milliseconds taken by all map tasks=13465600
	Map-Reduce Framework
		Map input records=1
		Map output records=1
		Input split bytes=99
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=183
		CPU time spent (ms)=1200
		Physical memory (bytes) snapshot=107761664
		Virtual memory (bytes) snapshot=2069635072
		Total committed heap usage (bytes)=30474240
	File Input Format Counters
		Bytes Read=0
	File Output Format Counters
		Bytes Written=21
17/03/15 11:06:07 INFO mapreduce.ImportJobBase: Transferred 21 bytes in 46.7701 seconds (0.449 bytes/sec)
17/03/15 11:06:07 INFO mapreduce.ImportJobBase: Retrieved 1 records.


The import creates a directory under /user/<username>; the t_rec_top directory inside it holds our data, although without a header row. The file names contain only an m (as in part-m-00000), confirming that the job ends once the map tasks finish.
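The result can be verified from the command line (a sketch; hxsyl is the login user from the log above, and part-m-00000 is the usual name of the first map output file):

hdfs dfs -ls /user/hxsyl/t_rec_top
hdfs dfs -cat /user/hxsyl/t_rec_top/part-m-00000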


For comparison, wc00 holds the word count of a configuration file:

"AS	1"License");	1${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.	1(the	1-->	32.0	1<!--	3</configuration>	1</description>	1</property>	15<?xml	1<configuration>	1<description>Amount	1<description>List	1<description>Number	1<description>The	7<description>Where	1<description>Whether	1<description>fair-scheduler	1<description>the	1<name>yarn.log-aggregation-enable</name>	1<name>yarn.nodemanager.aux-services</name>	1<name>yarn.nodemanager.local-dirs</name>	1<name>yarn.nodemanager.remote-app-log-dir</name>	1<name>yarn.nodemanager.resource.cpu-vcores</name>	1<name>yarn.nodemanager.resource.memory-mb</name>	1<name>yarn.resourcemanager.address</name>	1<name>yarn.resourcemanager.admin.address</name>	1<name>yarn.resourcemanager.hostname</name>	1<name>yarn.resourcemanager.resource-tracker.address</name>	1<name>yarn.resourcemanager.scheduler.address</name>	1<name>yarn.resourcemanager.scheduler.class</name>	1<name>yarn.resourcemanager.webapp.address</name>	1<name>yarn.resourcemanager.webapp.https.address</name>	1<name>yarn.scheduler.fair.allocation.file</name>	1<property>	15<value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>	1<value>${yarn.resourcemanager.hostname}:8030</value>	1<value>${yarn.resourcemanager.hostname}:8031</value>	1<value>${yarn.resourcemanager.hostname}:8032</value>	1<value>${yarn.resourcemanager.hostname}:8033</value>	1<value>${yarn.resourcemanager.hostname}:8088</value>	1<value>${yarn.resourcemanager.hostname}:8090</value>	1<value>/home/hxsyl/Spark_Relvant/yarn/local</value>	1<value>/tmp/logs</value>	1<value>12</value>	1<value>30720</value>	1<value>CentOSMaster</value>	1<value>mapreduce_shuffle</value>	1<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>	1<value>true</value>	1ANY	1An	1Apache	1BASIS,	1CONDITIONS	1CPU	1Configs	1IS"	1Individual	1KIND,	1LICENSE	1License	3License,	1License.	2Licensed	1MB,	1Manager	1OF	1OR	1RM	3RM.</description>	2Resource	1See	2Site	1Unless	1Version	1WARRANTIES	1WITHOUT	1YARN	1You	1a	1a-zA-Z0-9_	1accompanying	1adddress	1address	4admin	1aggregate	1aggregation</description>	1agreed	1allocated	2an	1and	2applicable	1application‘s	1application.</description>	2applications	1as	1at	1be	4by	1called	1can	3class	1compliance	1conf	1configuration	1contain	1container_${contid},	1containers‘	1containers.</description>	2copy	1cores	1directories	1directories,	1directory	1distributed	2either	1enable	1except	1express	1file	2file.	1files	1for	3found	1governing	1hostname	1http	1http://www.apache.org/licenses/LICENSE-2.0	1https	1implied.	1in	4in.	1in:	1interface	1interface.</description>	2is	1language	1law	1limitations	1localized	2location</description>	1log	1logs	1manager	1may	2memory,	1name	1not	2numbers</description>	1obtain	1of	11on	1only	1or	2permissions	1physical	1properties	1required	1resource	1scheduler	1scheduler.</description>	1service	1should	1software	1specific	2start	1store	1subdirectories	1that	2the	15this	1this.	1to	5to.</description>	1under	3use	2valid	1version="1.0"?>	1web	2will	2with	2work	1writing,	1you	1

  

--target-dir /path puts the data under the given HDFS path; -m specifies the number of mappers.
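Combined into one command (a sketch; the /tmp/t_rec_top target path is arbitrary, and with -m 2 Sqoop splits on the table's primary key):

sqoop import --connect jdbc:mysql://122.206.79.212:3306/dating --username root -P --table t_rec_top --target-dir /tmp/t_rec_top -m 2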

 

Opening the file on HDFS shows that the default field delimiter is a comma. --fields-terminated-by '\t' overrides it. Note that on import this option sets the delimiter Sqoop uses when writing to HDFS; the delimiter of already-delimited input data is declared separately with --input-fields-terminated-by.
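For example, to write tab-separated output (a sketch; the target path is arbitrary):

sqoop import --connect jdbc:mysql://122.206.79.212:3306/dating --username root -P --table t_rec_top --fields-terminated-by '\t' --target-dir /tmp/t_rec_top_tab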

--columns 'id,account,income' imports only the listed columns.

 

Only rows that satisfy a condition are imported with --where "id>2 and id<9".
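The two options combine naturally (a sketch; it assumes id, account, and income are columns of t_rec_top, and the target path is arbitrary):

sqoop import --connect jdbc:mysql://122.206.79.212:3306/dating --username root -P --table t_rec_top --columns 'id,account,income' --where 'id>2 and id<9' --target-dir /tmp/t_rec_top_subset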

 

To import from several tables, or with a hand-written query, use --query 'select * from t_detail where id>5 and $CONDITIONS'. The $CONDITIONS token is mandatory.

But when -m is greater than 1, Sqoop needs to know how to divide the rows among the mappers: name a split column with --split-by t_detail.id. Sqoop queries the minimum and maximum of that column, slices the range into splits, and substitutes each split's range predicate for $CONDITIONS, so every mapper reads only its own slice of the data.
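Put together (a sketch; --target-dir is required whenever --query is used, and the /tmp/t_detail path is arbitrary):

sqoop import --connect jdbc:mysql://122.206.79.212:3306/dating --username root -P --query 'select * from t_detail where id>5 and $CONDITIONS' --split-by t_detail.id -m 2 --target-dir /tmp/t_detail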

 

Single quotes are recommended around the query; with double quotes, $CONDITIONS must be escaped so the shell does not expand it. Options starting with -- are the long names; a single - introduces the short form.
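In other words, these two spellings of the option are equivalent:

--query 'select * from t_detail where id>5 and $CONDITIONS'
--query "select * from t_detail where id>5 and \$CONDITIONS"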

 

 

The biggest difference between single and double quotes is that double quotes still expand the variables inside them, whereas inside single quotes everything is a plain character with no special meaning. The following example illustrates this: suppose you define a variable name=VBird, and now want to define myname so that it displays VBird its me. How do you do it?

[root@linux ~]# name=VBird 
[root@linux ~]# echo $name 
VBird 
[root@linux ~]# myname="$name its me" 
[root@linux ~]# echo $myname 
VBird its me 
[root@linux ~]# myname='$name its me' 
[root@linux ~]# echo $myname 
$name its me 

See that? Exactly: once single quotes are used, $name loses its variable content and is displayed as plain characters. Be especially careful about this!

 
