Setting up the Openexchangerates.org → Google BigQuery pipeline

This article guides you through setting up the Openexchangerates.org → Google BigQuery pipeline so you automatically receive daily exchange rate data in a ready-made schema in your BigQuery dataset. Follow these steps to get started:

Getting started requirements

  1. Sign up on Openexchangerates.org to obtain an App ID (API token).
  2. Google Cloud Platform (GCP) setup:
    • Have an active GCP project (learn how to set up GCP).
    • Hold either the BigQuery Admin or BigQuery Data Owner role in that project. You can verify access with the sketch below this list.
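
Before moving on, you can optionally confirm that your credentials reach BigQuery in the right project. The snippet below is a minimal sketch using the google-cloud-bigquery Python client; the project ID my-project is a placeholder, and it assumes you have authenticated with application default credentials (for example, via gcloud auth application-default login).

    # Minimal sketch: confirm that your credentials can reach BigQuery in the
    # target project. "my-project" is a placeholder project ID.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    # Listing datasets needs only basic BigQuery access; if this call succeeds,
    # the project is reachable with your current credentials.
    for dataset in client.list_datasets():
        print(dataset.dataset_id)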

Step-by-step setup

Step 1: On the Workspace page, click the ‘New’ button and select the 'Pipeline' option from the drop-down menu.

01.png

Step 2: As a source, select 'Open Exchange Rates.'

02.png

Step 3: Choose the pipeline 'Open Exchange Rates → Google BigQuery' and click the 'Create & Setup' button.

03.png

Step 4: The new pipeline is created in ‘Draft’ status. To start the data import, configure the pipeline settings: access to the source and the destination dataset.

04.png

Step 5: Set up source data:

5.1. Click on the ‘Source access’ section.
05.png

5.2. The dialog that opens contains three sections:

06.png

5.3. Follow the steps to generate the API token on the Openexchangerates.org website.

07.png

5.4. Copy your App ID and paste it into the ‘API token’ section.

08.png

5.5. (Optional) Change the base currency in the ‘Base currency’ section. Note that the Open Exchange Rates Free plan supports only ‘USD’ as the base currency (see the sketch after this step list for deriving cross rates).

5.6. Click the ‘Save’ button to apply changes and close the dialog. A green marker in the ‘Source access’ section confirms successful setup.

09.png
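
Before saving, you can also check that the App ID works outside the pipeline by calling the Open Exchange Rates latest.json endpoint directly. The sketch below is illustrative only: YOUR_APP_ID is a placeholder, and the cross-rate calculation at the end simply shows how a non-USD base can be derived from USD-based rates if your plan (such as the Free plan) does not allow changing the base currency.

    # Minimal sketch: verify an Open Exchange Rates App ID and derive a
    # non-USD base from USD-based rates. "YOUR_APP_ID" is a placeholder.
    import requests

    resp = requests.get(
        "https://openexchangerates.org/api/latest.json",
        params={"app_id": "YOUR_APP_ID"},
        timeout=30,
    )
    resp.raise_for_status()          # 401 here usually means a wrong App ID
    data = resp.json()

    print(data["base"])              # "USD" on the Free plan
    print(data["rates"]["EUR"])      # USD -> EUR rate

    # The Free plan returns USD-based rates only, but any cross rate can be
    # derived from them, e.g. EUR -> GBP:
    eur_gbp = data["rates"]["GBP"] / data["rates"]["EUR"]
    print(eur_gbp)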

Step 6: Set up destination dataset:

6.1. Click on the ‘Destination dataset’ section.
10.png

6.2. In the dialog that opens, choose a shared dataset from the list or click ‘Grant access…’ to add a new dataset, following the on-screen instructions.

11.png

Optionally, after providing access to the GCP project, create a new dataset in the selected location.

12.png

Click the ‘Grant Access’ button to create a new dataset and provide access to it (a programmatic alternative is sketched after Step 6).

6.3. Select your dataset in the list of shared datasets, then click the ‘Save’ button to close the ‘List of shared’ dialog.

13.png

6.4. If everything is set up correctly, the dialog will close automatically, and a green marker with the name project.dataset.table will appear in the ‘Destination dataset’ section.

14.png
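
If you prefer to prepare the destination dataset outside the dialog, the sketch below shows roughly equivalent steps with the google-cloud-bigquery Python client: create a dataset in a chosen location and grant write access to a service account. The project ID, dataset ID, location, and service account e-mail are all placeholders; the actual account to share with is shown in the ‘Grant access…’ instructions in the dialog.

    # Minimal sketch: create a BigQuery dataset and share it with a service
    # account. All identifiers below are placeholders, not real values.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")

    dataset = bigquery.Dataset("my-project.exchange_rates")
    dataset.location = "EU"                      # pick the location you need
    dataset = client.create_dataset(dataset, exists_ok=True)

    # Grant write access to the account the pipeline uses (placeholder e-mail).
    entries = list(dataset.access_entries)
    entries.append(
        bigquery.AccessEntry(
            role="WRITER",
            entity_type="userByEmail",
            entity_id="pipeline-sa@my-project.iam.gserviceaccount.com",
        )
    )
    dataset.access_entries = entries
    client.update_dataset(dataset, ["access_entries"])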

Step 7: Activate the pipeline by clicking the ‘Activate’ button.

15.png

The pipeline status will change from ‘Draft’ to ‘Active.’

16.png

You're done. Tomorrow, the pipeline will start its first run.

 

How to Import Historical Data

Once the pipeline has been created, click the ‘Get data for the past 2 years’ button to retrieve historical data.

17.png

Select the start date and click the ‘Run once’ button. The pipeline will start a manual run to import historical data.

18.png

After the run finishes, open your Google BigQuery dataset to check the resulting data.
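
One way to check the result without leaving the terminal is a small query against the destination table. The table and column names below (exchange_rates.rates with date, currency, and rate columns) are assumptions for illustration; substitute the project.dataset.table name shown in the ‘Destination dataset’ section.

    # Minimal sketch: peek at the most recently imported rows. The table and
    # column names are assumptions; use your own project.dataset.table.
    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")
    sql = """
        SELECT date, currency, rate
        FROM `my-project.exchange_rates.rates`
        ORDER BY date DESC
        LIMIT 10
    """
    for row in client.query(sql).result():
        print(dict(row.items()))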

View all pipeline runs on the ‘Run history’ tab.

19.png
