Google Cloud Platform: Google Cloud Storage (GCS) Buckets and how to use a Bucket for manual Data Import and Export

Google Cloud Platform supports storage for both structured and unstructured data. Its storage technologies can be summarized as follows:

For unstructured data there are two main storage technologies: Persistent Disks and Google Cloud Storage (GCS) buckets.

What is the fundamental difference between Persistent Disks and GCS buckets?

  1. Persistent Disks provide block storage; buckets provide object storage.
  2. A Persistent Disk has an upper size limit of 64 TB; buckets are effectively infinitely scalable.
  3. Persistent Disks are priced pay-for-what-you-allocate, while buckets follow pay-for-what-you-use. That is, if you create a 100 GB disk but use just 5 GB, you still pay for the entire 100 GB.
  4. Persistent Disks are tightly coupled to GCE VMs; GCS buckets are independent of GCE VMs.
  5. Persistent Disks have zonal (or regional) access; buckets have global access.
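The pricing difference in point 3 can be made concrete with a small sketch. The per-GB rates below are hypothetical placeholders, not current GCP prices; only the billing model (allocated vs. used capacity) is the point being illustrated:

```python
# Illustrative cost comparison: Persistent Disk bills for allocated
# capacity, while Cloud Storage bills only for bytes actually stored.
# Both rates are assumed placeholder values, not real GCP pricing.
PD_PRICE_PER_GB_MONTH = 0.04   # assumed Persistent Disk rate
GCS_PRICE_PER_GB_MONTH = 0.02  # assumed Standard-class GCS rate

def pd_monthly_cost(allocated_gb: float) -> float:
    """Persistent Disk: pay for what you allocate."""
    return allocated_gb * PD_PRICE_PER_GB_MONTH

def gcs_monthly_cost(used_gb: float) -> float:
    """Cloud Storage: pay for what you actually store."""
    return used_gb * GCS_PRICE_PER_GB_MONTH

# The 100 GB disk holding only 5 GB of data, as in point 3 above:
print(pd_monthly_cost(100))  # bills all 100 GB allocated -> 4.0
print(gcs_monthly_cost(5))   # bills only the 5 GB stored -> 0.1
```

With the same nominal capacity, the disk's bill is driven entirely by the size you chose at creation time, while the bucket's bill tracks actual usage.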

How to create a GCS bucket in GCP?

  1. From the Resources section, click "Storage" and start with "Create Bucket".

  2. Since bucket names are global, we need to choose a name that is globally unique. A good practice is to add your organization/department name as a prefix or suffix to make the bucket name unique.

  3. GCP offers default selections for the geographic placement of your data (multi-region), storage class (Standard), and access control (uniform). We can change these based on need and usage.

    Significance of storage class and cost optimization: how often the data is accessed is the primary factor driving storage-class selection.

    • The Archive storage class (for the coldest data) is even colder than Coldline and comes with the lowest storage cost and the highest data-access cost.
    • It has a 365-day minimum storage duration, no availability SLA, and higher data-access costs than the other storage classes.

  4. On submitting the create request, our bucket appears in the Cloud Storage panel.
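The same bucket can also be created from the command line with gsutil. The bucket name below is hypothetical; the location, class, and access-control flags mirror the console defaults described above:

```shell
# Create a Standard-class bucket in the US multi-region with uniform
# bucket-level access (bucket name is a placeholder -- it must be
# globally unique, so prefix it with your org/department name).
gsutil mb -l us -c standard -b on gs://acme-dept-example-bucket/
```

Requires the Cloud SDK to be installed and authenticated (`gcloud auth login`) before running.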

Now that we have a GCS bucket created, it can be used for import/export activity.

Cloud SQL provides a managed interface for importing and exporting data, and internally it uses Google Cloud Storage to store the backup. Exporting data involves:

  1. Sending a command for Cloud SQL to run the mysqldump command against your database, and
  2. Putting the output of that command into a bucket on Cloud Storage.
  3. Export commands are run with the --single-transaction flag (meaning that at least InnoDB tables won't be locked while the export runs).

To get started, go to the instance details page for your Cloud SQL instance and click the Export button at the top of the page.

  1. Click Export on the MySQL instance page; in the popup that opens, select the file format and the database/schema whose data we want to export.

  2. Click Browse to select the destination bucket. Finally, click the "EXPORT" button to initiate the data export from the managed relational DB to the GCS bucket.

    If we do not have a bucket created yet, we can create a GCS bucket by clicking the [+] button.

  3. Verify the exported data in the GCS bucket.
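The console steps above also have a CLI equivalent. The instance and bucket names below are hypothetical; the database name reuses the `devInlineTxn` example from later in this post:

```shell
# Export one database from a Cloud SQL instance into a GCS bucket as a
# SQL dump file (instance and bucket names are placeholders).
gcloud sql export sql my-sql-instance \
    gs://acme-dept-example-bucket/orders-export.sql \
    --database=devInlineTxn
```

The Cloud SQL service account needs write access to the bucket for this to succeed, just as with the console flow.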

Export optimization

If we have a replica DB (a read-only replica), we can run the export operation from it instead. By doing this we avoid using the primary instance's CPU time while exporting data to Cloud Storage. Repeat the steps above with the read-replica instance and verify the result in the GCS bucket.
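From the CLI, offloading the export is just a matter of pointing the same command at the replica. The replica instance name below is hypothetical:

```shell
# Same export as before, but run against the read replica so the
# primary instance's CPU is not consumed by mysqldump.
gcloud sql export sql my-sql-replica \
    gs://acme-dept-example-bucket/orders-export.sql \
    --database=devInlineTxn
```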

Import data from a GCS bucket to a Cloud SQL instance:

  1. Upload the SQL INSERT-statement file from the local machine to the GCS bucket.

  2. From the Cloud SQL primary DB instance, click the Import button and choose the uploaded file as the source.
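Both import steps can likewise be done from the command line. The bucket and instance names are placeholders; the file and database names match the example shown below:

```shell
# Step 1: upload the local SQL file to the bucket.
gsutil cp orders-backlog.sql gs://acme-dept-example-bucket/

# Step 2: import it into the target database on the primary instance.
gcloud sql import sql my-sql-instance \
    gs://acme-dept-example-bucket/orders-backlog.sql \
    --database=devInlineTxn
```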


Verify Imported Data from MySQL Client

  1. Before import: 3 rows returned (ORDERS table)
    n0r0082@m-c02z31rnlvdt ~ % mysql -u root --password=PASSWORD -h \
      --ssl-ca=../GCP/certs/server-ca.pem \
      --ssl-cert=../GCP/certs/client-cert.pem \
    Your MySQL connection id is 57547
    Server version: 5.7.36-google-log (Google)
    Copyright (c) 2000, 2022, Oracle and/or its affiliates.
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective owners.
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | devInlineTxn       |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    5 rows in set (0.07 sec)
    mysql> use devInlineTxn
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    Database changed
    mysql> select * from ORDERS;
    +----------+-------------+-------------+------------+
    | order_id | order_total | customer_id | order_date |
    +----------+-------------+-------------+------------+
    |        1 | $120.98     | WM-198721   | 2022-02-26 |
    |        2 | $11.98      | WM-1100     | 2022-02-26 |
    |        3 | $41.98      | WM-4360     | 2022-02-26 |
    +----------+-------------+-------------+------------+
    3 rows in set (0.05 sec)

  2. After import: 6 rows returned (including 3 new rows)
    mysql> select * from ORDERS;
    +----------+-------------+-------------+------------+
    | order_id | order_total | customer_id | order_date |
    +----------+-------------+-------------+------------+
    |        1 | $120.98     | WM-198721   | 2022-02-26 |
    |        2 | $11.98      | WM-1100     | 2022-02-26 |
    |        3 | $41.98      | WM-4360     | 2022-02-26 |
    |        4 | $20.98      | WM-1921     | 2022-02-27 |
    |        5 | $12.98      | WM-1110     | 2022-02-27 |
    |        6 | $41.98      | WM-4060     | 2022-02-27 |
    +----------+-------------+-------------+------------+
    6 rows in set (0.05 sec)

File imported into the GCS bucket from the local machine: orders-backlog.sql
USE `devInlineTxn`;
INSERT INTO `ORDERS` VALUES (4,'$20.98','WM-1921','2022-02-27'),(5,'$12.98','WM-1110','2022-02-27'),(6,'$41.98','WM-4060','2022-02-27');



