Setting up the Open Exchange Rates → Google BigQuery pipeline

This article guides you through setting up the Open Exchange Rates → Google BigQuery pipeline so that daily exchange rate data is delivered to your destination dataset automatically. Follow these steps to get started:

Getting started requirements

  1. Sign up on
  2. Google Cloud Platform (GCP) setup:
    • Have an active Google Cloud Platform project (learn how to set up GCP).
    • Hold either the BigQuery Admin or the BigQuery Data Owner role.

Step-by-step setup

Step 1: On the Workspace page, click the ‘New’ button and select the ‘Pipeline’ option from the drop-down menu.


Step 2: As a source, select 'Open Exchange Rates.'


Step 3: Choose the pipeline 'Open Exchange Rates → Google BigQuery' and click the 'Create & Setup' button.


Step 4: The new pipeline is created in ‘Draft’ status. To start importing data, configure the pipeline settings, including access to the source and the destination dataset.


Step 5: Set up source data:

5.1. Click on the ‘Source access’ section.

5.2. The dialog that opens contains three sections.


5.3. Follow the steps on the Open Exchange Rates website to generate the API token.


5.4. Copy your App ID and paste it into the ‘API token’ section.


5.5. (Optional) Change the base currency in the ‘Base currency’ section. Note that the Free plan on Open Exchange Rates supports only ‘USD’ as the base currency.

5.6. Click the ‘Save’ button to apply changes and close the dialog. A green marker in the ‘Source access’ section confirms successful setup.
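For reference, source access can also be sanity-checked outside the pipeline UI. The sketch below builds the Open Exchange Rates latest-rates URL from an App ID and parses a response in the API's documented format; the App ID is a placeholder, and the response shown is an illustrative fragment, not live data:

```python
import json
from urllib.parse import urlencode

# APP_ID is a placeholder -- use the App ID generated in step 5.3.
APP_ID = "YOUR_APP_ID"
BASE = "USD"  # the Free plan supports only USD as the base currency

# Build the Open Exchange Rates "latest rates" endpoint URL.
url = "https://openexchangerates.org/api/latest.json?" + urlencode(
    {"app_id": APP_ID, "base": BASE}
)

# Illustrative response fragment in the documented format
# (real values come from an HTTP GET request to `url`).
sample = '{"base": "USD", "timestamp": 1700000000, "rates": {"EUR": 0.92, "GBP": 0.79}}'
data = json.loads(sample)

print(url)
print(data["base"], data["rates"]["EUR"])
```

If the request with your real App ID returns rates rather than an error, the same token will work in the ‘Source access’ dialog.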


Step 6: Set up destination dataset:

6.1. Click on the ‘Destination dataset’ section.

6.2. In the dialog that opens, choose a shared dataset from the list, or click ‘Grant access…’ and follow the instructions to add a new dataset.


Optionally, after providing access to the GCP project, create a new dataset in the selected location.


Click the ‘Grant Access’ button to create a new dataset and provide access to it.

6.3. Select your dataset in the list of shared datasets, then click the ‘Save’ button to close the ‘List of shared’ dialog.


6.4. If everything is configured correctly, the dialog will close automatically, and a green marker with the name project.dataset.table will appear in the ‘Destination dataset’ section.
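If the green marker does not appear, a common cause is a malformed destination name. As a hedged sketch (the patterns below reflect BigQuery's documented naming constraints: dataset IDs allow letters, digits, and underscores up to 1024 characters; project IDs are 6–30 lowercase letters, digits, and hyphens), you can sanity-check a `project.dataset` name before pasting it into the UI:

```python
import re

# Documented BigQuery naming constraints (see lead-in for caveats):
# project IDs: 6-30 chars, lowercase letters/digits/hyphens, start with
# a letter, no trailing hyphen; dataset IDs: letters/digits/underscores.
PROJECT_RE = re.compile(r"^[a-z][a-z0-9-]{4,28}[a-z0-9]$")
DATASET_RE = re.compile(r"^[A-Za-z0-9_]{1,1024}$")

def is_valid_destination(qualified_name: str) -> bool:
    """Return True if `qualified_name` looks like a valid "project.dataset"."""
    parts = qualified_name.split(".")
    if len(parts) != 2:
        return False
    project, dataset = parts
    return bool(PROJECT_RE.match(project) and DATASET_RE.match(dataset))

print(is_valid_destination("my-project.exchange_rates"))  # expected: True
print(is_valid_destination("my-project.bad-dataset"))     # expected: False
```

Hyphens are the most frequent mistake: they are legal in project IDs but not in dataset IDs.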


Step 7: Activate the pipeline by clicking the ‘Activate’ button.


The pipeline status will change from ‘Draft’ to ‘Active.’


You're done. Tomorrow, the pipeline will start its first run.


How to Import Historical Data

The pipeline has been successfully created. Click the ‘Get data for the past 2 years’ button to retrieve historical data.


Select the start date and click the ‘Run once’ button. The pipeline will start a manual run to import historical data.
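To see which dates the two-year option covers, the earliest eligible start date can be computed with a minimal sketch (the 2-year window comes from the button label; the Feb 29 fallback is an assumption about how a calendar-based lookback is usually handled):

```python
from datetime import date

def two_years_ago(today: date) -> date:
    """Earliest start date for a 2-year historical import (hedged sketch).

    Falls back to Feb 28 when the lookback lands on Feb 29 of a non-leap year.
    """
    try:
        return today.replace(year=today.year - 2)
    except ValueError:  # Feb 29 has no counterpart two years earlier
        return today.replace(year=today.year - 2, day=28)

print(two_years_ago(date(2024, 2, 29)))  # expected: 2022-02-28
```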


After finishing the run, open your Google BigQuery dataset to check the resulting data.

View all pipeline runs on the ‘Run history’ tab.

