Getting Started with dbt Core and Redshift

Steven Johnson

In this guide, we will walk through how to set up dbt Core in the cloud with Redshift. After you finish this guide, you will have uploaded the provided sample data to Redshift and run your first dbt command in the cloud.

Although the steps in this guide specifically use Redshift, they can be modified slightly to work with any database that dbt supports. We also have guides made specifically for BigQuery, Databricks, and Snowflake.

If you would rather watch a video version of this guide, feel free to head over to YouTube. Let's jump right in!

Before getting into the steps of setting up the different cloud data warehouses, please download the sample files that we will use for this tutorial here.

Load data into S3

  1. Navigate to S3 in the AWS console.
  2. Navigate into the bucket you created for this tutorial by clicking on its name.
  3. Create a folder inside of your bucket named fivethirtyeight_football by clicking the Create Folder button.
  4. Once the folder is created, navigate inside of it.
  5. Click the Upload button to begin the process of uploading our sample files.
  6. Click Add Files.
  7. Select the two files from your file system and click Open.
  8. Click Upload. After the upload is complete, you should see an upload succeeded banner.
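If you prefer the command line, the same upload can be sketched with the AWS CLI. The bucket name and file names below are placeholders for your own; substitute whatever you downloaded:

```shell
# Placeholder bucket name -- substitute the bucket you created for this tutorial.
BUCKET=my-dbt-tutorial-bucket
PREFIX=fivethirtyeight_football

# Upload both sample CSVs (assumes the AWS CLI is installed and configured;
# the file names here are placeholders for the two files you downloaded):
# aws s3 cp matches.csv "s3://$BUCKET/$PREFIX/matches.csv"
# aws s3 cp rankings.csv "s3://$BUCKET/$PREFIX/rankings.csv"

# The resulting S3 URI prefix, which you will need in the COPY step later:
echo "s3://$BUCKET/$PREFIX/"
```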

Create Tables in Redshift

  1. Navigate to Redshift and open the query editor.
  2. Create a new schema for our sample data named soccer by running this query:
create schema if not exists soccer;
  3. Create the tables inside our new soccer schema to hold the data uploaded to S3. The following queries will accomplish that:
create table soccer.stg_football_rankings(
  rank integer,
  prev_rank integer,
  name varchar(255),
  league varchar(255),
  offense float,
  def float,
  spi float
);

create table soccer.stg_football_matches(
  season integer,
  date date,
  league_id integer,
  league varchar(255),
  team1 varchar(255),
  team2 varchar(255),
  spi1 float,
  spi2 float,
  prob1 float,
  prob2 float,
  probtie float,
  proj_score1 float,
  proj_score2 float,
  importance1 float,
  importance2 float,
  score1 integer,
  score2 integer,
  xg1 float,
  xg2 float,
  nsxg1 float,
  nsxg2 float,
  adj_score1 float,
  adj_score2 float
);

Now that our tables are set up in Redshift, we need to load the data from S3 into them.

Load data from S3 into Redshift Tables

  1. Navigate to S3 and find the files that we uploaded in the prior steps.
  2. Click the name of each file to locate its S3 URI.
  3. Copy and paste the S3 URIs to a notepad for use later in these steps.
  4. Navigate back to the Redshift console.
  5. Run the following two queries, replacing the S3 URI, iam_role, and region values with the ones specific to you:
copy soccer.stg_football_matches( season, date, league_id, league, team1, team2, spi1, spi2, prob1, prob2, probtie, proj_score1, proj_score2, importance1, importance2, score1, score2, xg1, xg2, nsxg1, nsxg2, adj_score1, adj_score2)
from 'S3 URI'
iam_role 'arn:aws:iam::XXXXXXXXXX:role/RoleName'
region 'us-east-1'
delimiter ','
ignoreheader 1;

copy soccer.stg_football_rankings( rank, prev_rank, name, league, offense, def, spi)
from 'S3 URI'
iam_role 'arn:aws:iam::XXXXXXXXXX:role/RoleName'
region 'us-east-1'
delimiter ','
ignoreheader 1;
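If either COPY command errors out, Redshift records the details in the stl_load_errors system table. A quick sketch for inspecting the most recent failures:

```sql
select starttime, filename, line_number, colname, err_reason
from stl_load_errors
order by starttime desc
limit 5;
```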

You should now be able to query soccer.stg_football_rankings and soccer.stg_football_matches. Feel free to run this query to verify that this process worked successfully:

select * from soccer.stg_football_matches;

dbt Core Part 2 - Setting Up dbt on GitHub

Fork dbt Setup from GitHub

  1. Fork this repository. The repository contains the beginning state of a dbt project.
  2. Clone the repository locally on your computer.
  3. Open dbt_project.yml in your text editor.

dbt Project File Setup

  1. Change the project name to soccer_538.
  2. Change the profile to soccer_538.
  3. Change the model name to soccer_538.
  4. Under the soccer_538 model, add a staging and marts folder that are both materialized as views.
  5. Save your changes.
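After those edits, the relevant parts of dbt_project.yml should look roughly like this (a sketch; other keys from the starter project stay as they were):

```yaml
name: 'soccer_538'
profile: 'soccer_538'

models:
  soccer_538:
    staging:
      +materialized: view
    marts:
      +materialized: view
```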

Profile Setup

  1. Open profiles.yml.
  2. Update the file to this (the host value is a placeholder for your own cluster endpoint):

soccer_538:
  target: dev
  outputs:
    dev:
      type: redshift
      host: your-redshift-host # e.g. examplecluster.abc123.us-east-1.redshift.amazonaws.com
      user: "{{ env_var('redshift_username') }}"
      password: "{{ env_var('redshift_password') }}"
      port: 5439
      dbname: analytics
      schema: soccer
      threads: 4
      keepalives_idle: 240 # default 240 seconds
      connect_timeout: 10 # default 10 seconds
      ra3_node: true

3. Create a new file in the root directory of your dbt project called ``.
4. Paste this code block for the content of ``:

import os
import subprocess

# Read the dbt command to run from an environment variable,
# defaulting to `dbt run` if none is provided.
dbt_command = os.environ.get('dbt_command', 'dbt run')

# Execute the command through the shell; check=True raises an error
# if dbt exits non-zero, which marks the run as failed.
subprocess.run(['sh', '-c', dbt_command], check=True)
  5. Commit and push your changes to GitHub.
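To see why the env_var() references in profiles.yml and the script's os.environ.get call line up, here is a minimal sketch of that substitution pattern. This is illustrative only; dbt's real env_var() is a Jinja function with more behavior:

```python
import os
import re

def render_env_vars(text):
    """Replace {{ env_var('NAME') }} placeholders with values from the environment."""
    pattern = re.compile(r"\{\{\s*env_var\('([^']+)'\)\s*\}\}")
    return pattern.sub(lambda m: os.environ.get(m.group(1), ""), text)

os.environ["redshift_username"] = "demo_user"  # stand-in for the real env var
line = 'user: "{{ env_var(\'redshift_username\') }}"'
print(render_env_vars(line))  # prints: user: "demo_user"
```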

Now that we have our sample data and dbt processes set up, we need to write the example models for the dbt job to run.

dbt Models

  1. Navigate into the models folder in your text editor. There should be a subfolder under models called example. Delete that subfolder and create a new folder called 538_football.
  2. Create two subfolders inside 538_football called staging and marts.
  3. Inside the staging folder, create a file called stg_football_matches.sql.
  4. Paste the following code into that file:
    select * from soccer.stg_football_matches
  5. Inside the staging folder, create a file called stg_football_rankings.sql.
  6. Paste the following code into that file:
    select * from soccer.stg_football_rankings
  7. In the staging folder, add a file called schema.yml.
  8. In this file, paste the following information:
version: 2

models:
  - name: stg_football_matches
    description: Table from 538 that displays football matches and predictions about each match.
  - name: stg_football_rankings
    description: Table from 538 that displays a team's world ranking.
  9. In the marts folder, create a file called mart_football_information.sql.
  10. Paste the following code into that file:
with qryMatches as (
    SELECT * FROM {{ ref('stg_football_matches') }}
    where league = 'Barclays Premier League'
),

qryRankings as (
    SELECT * FROM {{ ref('stg_football_rankings') }}
    where league = 'Barclays Premier League'
),

qryFinal as (
    select
        qryMatches.*,
        team_one.rank as team1_rank,
        team_two.rank as team2_rank
    from qryMatches
    join qryRankings as team_one on
        (qryMatches.team1 = team_one.name)
    join qryRankings as team_two on
        (qryMatches.team2 = team_two.name)
)

select * from qryFinal
  11. In the marts folder, add a file called schema.yml.
  12. In this file, paste the following:
version: 2

models:
  - name: mart_football_information
    description: Table that displays football matches along with each team's world ranking.
  13. Save the changes.
  14. Push a commit to GitHub.

We are ready to move into Shipyard to run our process. First, you will need to create a developer account.

dbt Core Part 3 - Setting Up dbt on Shipyard

Create Developer Shipyard Account

  1. Navigate to Shipyard's sign-up page here.
  2. Sign up with your email address and organization name.
  3. Connect to your GitHub account by following this guide. After connecting your GitHub account, you'll be ready to create your first Blueprint.

Creating dbt Core Blueprint

  1. On the sidebar of Shipyard's website, click Blueprints.
  2. Click Add Blueprint on the top right of your page.
  3. Select Python.
  4. Under Blueprint variables, click Add Variable.
  5. Under display name, enter dbt CLI Command.
  6. Under reference name, enter dbt_command.
  7. Under default value, enter dbt run.
  8. Click the checkbox for Required.
  9. Under placeholder, enter Enter the command for dbt.
  10. Click Next.
  11. Click Git.
  12. Select the repository where your dbt files sit.
  13. Click the source that you want the files pulled from, generally main or master.
  14. Under file to run, enter
  15. Under Git Clone Location, select the option for Unpack into Current Working Directory.
  16. Click Next Step on the bottom right of the screen.
  17. Next to Environment Variable, click the plus sign to add an environment variable.

Add Environment Variables

The environment variables that need to be added will vary based on the cloud database that you use.

Variable Name        Value
redshift_username    username from Redshift
redshift_password    password from Redshift
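The reference names matter: the variable names you enter in Shipyard must exactly match the names passed to env_var() in profiles.yml (they are case-sensitive). A quick sanity check you could run inside the Vessel — a sketch, not part of the official setup:

```python
import os

# The env vars profiles.yml expects via env_var('...') in this guide.
required = ["redshift_username", "redshift_password"]

# Report any variables that are unset or empty.
missing = [name for name in required if not os.environ.get(name)]
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
else:
    print("All required environment variables are set.")
```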

Python Packages

  1. Click the plus sign next to Python Packages.
  2. In the Name field, enter dbt-redshift. In the version field, enter ==1.0.0.
  3. Click Next.

Blueprint Settings

  1. Under Blueprint Name, enter dbt - Execute CLI Command.
  2. Under synopsis, enter This Blueprint runs a dbt core command.
  3. Click Save.
  4. In the top right of your screen, click Use this Blueprint. This will take you over to the Fleet Builder and prompt you to select a project.

Build dbt Core Fleet

  1. On the Select a Project prompt, click the drop down menu to expand it and select Create a New Project.
  2. Under project name, enter dbt Core Testing.
  3. Under timezone, enter your timezone.
  4. Click Create Project.
  5. Select dbt Core Testing and click Select Project. This will create a new Fleet in the project. The Fleet Builder will now be visible, with one Vessel located inside the Fleet.
  6. Click on the Vessel in the Fleet Builder and the settings for the Vessel will pop up on the left of your screen.
  7. Under Vessel Name, enter dbt Core CLI Command.
  8. Under dbt CLI Command, enter dbt debug.
  9. Click the gear on the sidebar to open Fleet Settings.
  10. Under Fleet Name, enter dbt Core.
  11. Click Save & Finish on the bottom right of your screen. This should take you to a page showing that your Fleet was created successfully.
  12. Click Run Your Fleet. This will take you over to the Fleet Log.
  13. Click on the bar to get the output from your run.

If you scroll to the top of the output, you will see that the environment variables that were put in during the Blueprint creation process are hidden from the user.

If dbt debug succeeds, you are ready to move on to the next part of the guide. If it fails, please go back through the steps above and make sure everything is set up correctly. Feel free to send us an Intercom message at any time using the widget on the bottom right of the Shipyard screen.

We will be at the dbt Coalesce conference in New Orleans from October 17-21. We would love to meet and discuss all of the workflows that you have been able to build out using dbt Core!