SQOOP Complete Tutorial Part-8

This is a continuation of the "Sqoop Complete Tutorial" series.

22) Import the table’s structure from MySQL to Hive without data
create-hive-table is the Sqoop tool that generates a Hive table structure based on a table available in the relational data source.

sqoop create-hive-table \
--connect jdbc:mysql://localhost/empdept \
--username root \
--password cloudera \
--table employees \
--hive-table emp


Note: The above command will create only the structure. Data will not be imported.

With the above command, we have finished the Sqoop import commands. Which command to choose depends entirely on the requirement. Although there are no peak hours as in OLTP environments, we still need to monitor resource consumption and avoid extracting excessive amounts of data.

Now, let's focus on Sqoop Export. Sqoop Export transfers data from HDFS to an RDBMS. The target table must already exist in the database for the transfer to succeed.
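Since the target table must exist beforehand, it can be created in MySQL first. A minimal sketch, assuming a hypothetical three-column employee schema; the actual columns must match the fields in the HDFS data:

```shell
# Create the target table before running sqoop export.
# The schema below is an assumption for illustration; adjust the
# columns to match the fields in the HDFS files being exported.
mysql -u root -pcloudera empdept -e "
CREATE TABLE IF NOT EXISTS employee (
  id   INT PRIMARY KEY,
  name VARCHAR(50),
  dept VARCHAR(50)
);"
```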

Based on its behavior, Sqoop Export can be categorized into three modes:
(1) Insert: inserts new rows into the destination table.
(2) Update: updates existing rows in the destination table.
(3) Update and insert: a combination of the above two.

Below are examples of the above-mentioned modes.

INSERT MODE:
sqoop export \
--connect jdbc:mysql://localhost/empdept \
--username root \
--password cloudera \
--table employee \
--export-dir /EmpData/dataset.txt
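By default, sqoop export parses each file in the export directory as comma-separated lines, one database row per line. A hypothetical sample of what /EmpData/dataset.txt might contain (the values are assumed for illustration):

```shell
# Build a sample input file locally; fields are comma-separated,
# one database row per line (hypothetical data for illustration).
cat > dataset.txt <<'EOF'
101,John,Sales
102,Mary,HR
EOF

# On a real cluster, the file would then be copied into HDFS:
# hdfs dfs -mkdir -p /EmpData
# hdfs dfs -put dataset.txt /EmpData/
```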

UPDATE MODE:
sqoop export \
--connect jdbc:mysql://localhost/empdept \
--username root \
--password cloudera \
--table employee \
--export-dir /EmpData/dataset.txt \
--update-key id

INSERT AND UPDATE MODE:
sqoop export \
--connect jdbc:mysql://localhost/empdept \
--username root \
--password cloudera \
--table employee \
--export-dir /EmpData/dataset.txt \
--update-key id \
--update-mode allowinsert
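Note that sqoop export assumes comma-separated fields by default. If the HDFS files use a different delimiter, it has to be declared explicitly with --input-fields-terminated-by. A sketch, assuming the data is tab-separated:

```shell
# Export tab-separated HDFS data; the delimiter must match how the
# files in the export directory are actually formatted.
sqoop export \
--connect jdbc:mysql://localhost/empdept \
--username root \
--password cloudera \
--table employee \
--export-dir /EmpData/dataset.txt \
--input-fields-terminated-by '\t'
```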

We’ll learn about these methods in detail in the upcoming article.


Stay tuned for the next parts.