Data Unloading to S3

Learn how to export data from Optimove Data Share to your S3 bucket

📤 Exporting Data from Snowflake Reader Account to Amazon S3

Introduction

This guide walks you step-by-step through how to export data from a Snowflake Reader Account to Amazon S3 using secure, supported methods. It is designed specifically for environments where you do not have full Snowflake access and must work within the constraints of a Reader Account - a common setup for Optimove clients and Snowflake data share consumers.

We'll cover how to prepare data for export, use presigned URLs for secure file delivery, and ensure smooth extraction without needing a full Snowflake subscription. All examples are copy-paste ready.

🚀 What’s Possible with Reader Accounts

Snowflake Reader Accounts offer a lightweight yet powerful way to access and export data without needing a full Snowflake subscription. While they have a simplified permission model, they still support flexible export workflows.

Here’s what to keep in mind:

  • You can export data securely to Amazon S3 using COPY INTO and a presigned URL.
  • 🔒 Direct use of external stages and storage integrations is restricted to maintain isolation and security.
  • ⚙️ Custom file formats and permanent stages aren’t available, but default formats work well for most use cases.

This setup ensures you can access and share data efficiently while keeping operations simple and controlled.

✅ Prerequisites

  1. Reader Account Role

    You must have the READER_ACCOUNT_DBA role or sufficient privileges to create tables and views.

  2. S3 Presigned URL

    Ask your AWS admin for one, or generate it yourself. Note that the AWS CLI can only presign download (GET) requests:

    aws s3 presign s3://your-bucket-name/path/yourfile.csv --expires-in 3600

    For unloading data you need a presigned upload (PUT) URL, which must be generated with an AWS SDK (for example, boto3's generate_presigned_url with put_object).

  3. Snowflake Worksheet or SQL Editor Access

🧙🏻‍♂️ Step-by-Step: Export Data from Snowflake Reader to S3

Step 1: Create a Dedicated IAM User and Policy in AWS

For security best practices, you'll create a new IAM user with specific, limited permissions to write to your target S3 bucket.

  1. Navigate to the IAM Dashboard: Log in to your AWS Management Console and go to the IAM service.

  2. Create an IAM Policy:

    • In the left navigation pane, click on Policies and then Create policy.
    • Switch to the JSON tab and paste the following policy. This policy grants the minimum required permissions for Snowflake to unload data into your S3 bucket.
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:PutObjectAcl"
                ],
                "Resource": "arn:aws:s3:::<your-bucket-name>/<path-to-unload>/*"
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::<your-bucket-name>",
                "Condition": {
                    "StringLike": {
                        "s3:prefix": [
                            "<path-to-unload>/*"
                        ]
                    }
                }
            }
        ]
    }
    
    • Important:
      • Replace <your-bucket-name> with the name of your S3 bucket.
      • Replace <path-to-unload> with the specific folder path within your bucket where you want to unload the data. If you want to unload to the root of the bucket, you can remove the <path-to-unload>/ from the Resource ARN and use "s3:prefix": ["*"].
    • Add Tags if needed.
    • Give the policy a descriptive name (e.g., optimove-snowflake-unload-policy) and a description.
    • Click Create policy.
  3. Create an IAM User:

    • In the left navigation pane, click on Users and then Add users.
    • Enter a user name (e.g., optimove-snowflake-unload-user).
    • For the AWS credential type, select Access key - Programmatic access.
    • Click Next: Permissions.
    • Select Attach existing policies directly.
    • Search for and select the policy you just created (optimove-snowflake-unload-policy).
    • Click Next: Tags, add any tags, and then Next: Review.
    • Review the user details and click Create user.
  4. Securely Store Your Credentials:

    • On the final screen, you will see the Access key ID and the Secret access key.
    • This is the only time you can view the secret access key. Download the .csv file and store it in a secure location. You will need these credentials in a later step.
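If you manage several buckets or prefixes, the policy document from step 2 can be templated in code instead of edited by hand. A standard-library-only sketch (the bucket and prefix values are placeholders):

```python
import json

def unload_policy(bucket: str, prefix: str) -> str:
    """Render the minimal S3 unload policy for a bucket/prefix pair."""
    doc = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:PutObjectAcl"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
            },
        ],
    }
    return json.dumps(doc, indent=4)

# Paste the output into the JSON tab of the Create policy screen:
print(unload_policy("your-bucket-name", "path-to-unload"))
```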

Step 2: In Snowflake, Create a Simple Table or View (optional)

Use this example if you want to export a filtered dataset:

CREATE OR REPLACE TEMPORARY TABLE my_export_table AS
SELECT *
FROM BI.VW_CAMPAIGN_RESULTS_V1_2
WHERE EXECUTION_DATE >= '2024-07-01';

Step 3: Format Query Output (optional)

To enforce formatting:

CREATE OR REPLACE FILE FORMAT my_csv_format
  TYPE = 'CSV'
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  COMPRESSION = NONE;

Note: In reader accounts, creating file formats may not be allowed. If so, skip this and rely on defaults.


Step 4: Use COPY INTO with Pre-Signed URL

Replace with your pre-signed S3 URL:

-- Default File Format
COPY INTO 'https://your-presigned-s3-url'
FROM (
  SELECT *
  FROM my_export_table
)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
OVERWRITE = TRUE;

------------------------------------------------------------------------

-- Using IAM user credentials and a named file format
-- (with credentials you address the bucket directly instead of a presigned URL)
COPY INTO 's3://<your-bucket-name>/<path-to-unload>/'
FROM (SELECT * FROM my_export_table)
CREDENTIALS = (
  AWS_KEY_ID = '<your-aws-access-key-id>'
  AWS_SECRET_KEY = '<your-aws-secret-access-key>'
)
FILE_FORMAT = (
  FORMAT_NAME = 'my_csv_format' -- Or your chosen file format name
)
HEADER = TRUE; -- Set to TRUE to include column headers
  • Replace the placeholders:
    • <your-aws-access-key-id>: The Access Key ID of the IAM user you created.
    • <your-aws-secret-access-key>: The Secret Access Key of the IAM user you created.
    • my_csv_format: The name of the file format you created in the previous step.

You can also run COPY INTO directly from a view or query without a temp table.

🔐 Security Tip

  • Pre-signed URLs grant write access to your S3 bucket. Always set them to expire within 15–60 minutes.
  • Never log or share them openly.
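To avoid running COPY INTO against a URL that is about to lapse, you can check a URL's expiry before use. A small helper, assuming the URL was signed with SigV4 and therefore carries the standard X-Amz-Date and X-Amz-Expires query parameters:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_expiry(url: str) -> datetime:
    """Compute when a SigV4-presigned URL stops working, from its
    X-Amz-Date (signing time) and X-Amz-Expires (lifetime in seconds)."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(
        qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))
```

If `presigned_expiry(url)` is within a minute or two of the current UTC time, generate a fresh URL before exporting.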

🧪 Example: Export Recent Triggered Campaigns

COPY INTO 'https://your-presigned-url'
FROM (
  SELECT
    ISOLATION_DATE,
    CAMPAIGN_ACTION_NAME,
    GROUP_SIZE,
    CHANNEL_ID,
    TEMPLATE_ID
  FROM BI.VW_TRIGGERED_CAMPAIGNS_RESULTS_V1_2
  WHERE ISOLATION_DATE >= CURRENT_DATE - 7
)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
OVERWRITE = TRUE;

🧪 Example: Export Predictive Churn Risk Scores

This example demonstrates how to export customers’ churn risk scores from the last two days. These scores are generated by Optimove’s machine learning models and can be used for customer segmentation or retention strategies.

COPY INTO 'https://your-presigned-url'
FROM (
  SELECT
    PROFILE_DATE,
    CLIENT_CUSTOMER_ID,
    LIFECYCLESTAGE,
    IF_CHURN_PREDICTION,
    IF_CHURN_PREDICTION_PERCENTILE,
    IF_CHURN_PREDICTION_PERCENTILE_LCS
  FROM BI.VW_CUSTOMER_PROFILES_REAL_FIELD_NAMES
  WHERE
    LIFECYCLESTAGE IN ('Active', 'New') AND
    PROFILE_DATE >= CURRENT_DATE - 2
)
FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' COMPRESSION = NONE)
OVERWRITE = TRUE;

Detailed Guide - How to Create a File Format in Snowflake

If your reader account permits it, you can define the format of the files you will be unloading within Snowflake itself.

  1. Log in to your Snowflake reader account.

  2. Create a new worksheet.

  3. Execute the CREATE FILE FORMAT command. Here are a few common examples. Choose the one that best suits your needs.

    • For CSV files:
    CREATE OR REPLACE FILE FORMAT my_csv_format
      TYPE = 'CSV'
      FIELD_DELIMITER = ','
      SKIP_HEADER = 1
      NULL_IF = ('NULL', 'null')
      EMPTY_FIELD_AS_NULL = true
      FIELD_OPTIONALLY_ENCLOSED_BY = '"'
      COMPRESSION = 'GZIP';
    
    • For Parquet files:
    CREATE OR REPLACE FILE FORMAT my_parquet_format
      TYPE = 'PARQUET';
    
    • For JSON files:
    CREATE OR REPLACE FILE FORMAT my_json_format
      TYPE = 'JSON';