Data Delivery to Optimove
This section covers the methods available for delivering data to Optimove, allowing you to choose the approach that best suits your system. It also includes a link to the QA processes documentation and an overview of the batch data process to help ensure smooth and accurate data transfers.
Data delivery methods
Optimove can access the data you have prepared using one of the following methods:
- Secure Data Sharing
- Delivery of files
- Connection to a database
- API connection
Secure Data Sharing
Optimove supports modern "Zero-Copy" data sharing architectures. This allows Optimove to query your data directly from your data warehouse without the need to physically move files or duplicate data.
Benefits of Secure Data Sharing:
- Efficiency: No need to move large files or manage ETL pipelines.
- Speed: Data is available to Optimove immediately upon readiness.
- Security: Data remains in your environment; Optimove is granted read-only access.
1. Snowflake Secure Data Sharing
Snowflake Secure Data Sharing is a feature of the Snowflake data platform that allows users to securely share data with external parties. With Snowflake Secure Data Sharing, you can create a virtual copy of your data, called a "share", which can be shared with Optimove.
- Cost Effective: Querying costs are covered by Optimove.
- Reliable: Removes reliance on daily file transfers, increasing process stability.
More information can be found in the Snowflake external guide.
How to connect: Optimove will provide the required steps. Please contact your PM/CSM to initiate the share request.
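For orientation, here is a minimal sketch of the kind of statements involved in creating a share, using the snowflake-connector-python package. All names below (user, account, database, schema, table, share, and the consumer account identifier) are hypothetical placeholders; Optimove will supply the actual values and steps.

```python
import snowflake.connector

# Connect with a role that is allowed to create shares (e.g. ACCOUNTADMIN).
# Credentials and account identifier are placeholders.
conn = snowflake.connector.connect(
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    account="YOUR_ACCOUNT",
)
cur = conn.cursor()

# Create a share and grant read-only access to the objects Optimove needs.
# Database, schema, and table names are illustrative examples only.
cur.execute("CREATE SHARE IF NOT EXISTS optimove_share")
cur.execute("GRANT USAGE ON DATABASE analytics TO SHARE optimove_share")
cur.execute("GRANT USAGE ON SCHEMA analytics.public TO SHARE optimove_share")
cur.execute("GRANT SELECT ON TABLE analytics.public.customers TO SHARE optimove_share")

# Add the consumer account; Optimove provides its account identifier.
cur.execute("ALTER SHARE optimove_share ADD ACCOUNTS = OPTIMOVE_ACCOUNT")

cur.close()
conn.close()
```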
2. Apache Iceberg Integration
Optimove supports the Apache Iceberg open table format. This allows you to share data tables from various platforms (such as Databricks or Snowflake) while maintaining high performance and transactional consistency.
To set up an Iceberg share, please refer to the guide specific to your infrastructure (for example, Databricks or Snowflake).
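As an illustration only (not Optimove's documented procedure), the sketch below shows how a consumer could read a shared Iceberg table using the pyiceberg package. The catalog name, connection properties, and table identifier are all hypothetical.

```python
from pyiceberg.catalog import load_catalog

# Load a catalog; the name and properties here are placeholders for whatever
# your Iceberg catalog (e.g. a REST or Glue catalog) actually requires.
catalog = load_catalog(
    "default",
    **{"uri": "https://example-catalog.invalid", "token": "YOUR_TOKEN"},
)

# Load a hypothetical shared table and scan it into an Arrow table.
table = catalog.load_table("analytics.customers")
arrow_table = table.scan().to_arrow()
print(arrow_table.num_rows)
```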
Delivery of files
The common file formats used in Optimove are CSV, JSON, and PARQUET.
All supported file formats and requirements for each source are available here.
To ensure optimal performance and seamless integration, please consider the following limitations when delivering files to Optimove:
- Compression:
  - Supported compression: GZ (ZIP is not supported)
- File Size:
  - The maximum supported file size is 1 GB; however, for optimal performance and faster data processing, it is recommended to split large files into smaller batches of 100 to 250 MB.
- JSON Object Structure:
  - Each JSON object should be on a separate row, with no additional brackets at the beginning or end of the file. Use curly braces {} as the object delimiters, for example:
    {"Id":"30515240","SubscriptionId":null}
    {"Id":"30515241","SubscriptionId":null,"Amount":"99.00","Currency":"INR"}
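To make these requirements concrete, here is a minimal sketch that writes records in the expected newline-delimited JSON format and compresses the output with GZ. The field names and file path are illustrative only.

```python
import gzip
import json

# Hypothetical records; in practice these come from your own systems.
records = [
    {"Id": "30515240", "SubscriptionId": None},
    {"Id": "30515241", "SubscriptionId": None, "Amount": "99.00", "Currency": "INR"},
]

# Write one JSON object per line (no surrounding brackets), GZ-compressed.
with gzip.open("transactions_batch_001.json.gz", "wt", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```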
Files can be provided through the following storage options. For detailed requirements, click here.
1. AWS S3
- Required Details: Access Key ID, Secret Access Key, Bucket Name. External Guide. A minimal upload sketch appears after this list.
- Additional Option: S3 storage integration. External Guide
2. Google Cloud Storage
- Required Details: Bucket Name
- Optimove will provide the relevant permissions details; please contact your PM/CSM.
3. Azure Blob
- Required Details: SAS Token, URL (Stage URL referencing the Azure account)
4. SFTP
- If Cloud Storage is unavailable, SFTP is an alternative option. Optimove will provide the SFTP details; please contact your PM/CSM.
- If SFTP is managed on your end, provide the following details:
- Host
- Username
- Password
- Port
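As an example of the first option, here is a minimal sketch that uploads a prepared GZ-compressed batch file to an S3 bucket using the boto3 package. The bucket name, object key, and credentials are placeholders; use the details agreed with Optimove.

```python
import boto3

# Credentials and bucket name are placeholders; in practice, prefer an IAM
# role or environment-based credentials over hard-coded keys.
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# Upload the GZ-compressed batch file produced earlier.
s3.upload_file(
    Filename="transactions_batch_001.json.gz",
    Bucket="your-optimove-bucket",
    Key="incoming/transactions_batch_001.json.gz",
)
```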
Connection to a Database
All supported database types and requirements for each source are available here.
You can configure security by whitelisting specific IP addresses. Please contact your PM/CSM for the list of IP addresses.
Required Details for Database Connection:
To initiate a successful database connection, the following details must be provided:
- Host
- Username
- Password
- Port
- Database Name
For Google BigQuery, the relevant permissions details are available here. After configuring the permissions, provide the following details:
- Project ID
- Dataset ID
Optimove’s batch-data ETL process will extract the data directly from your server. It is important that the transactional and fact tables include a LastUpdated field, since Optimove extracts only incremental data from those tables.
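For illustration, here is a sketch of the kind of incremental extraction such a process performs, using the psycopg2 package against a hypothetical PostgreSQL database. The connection details, table, and column names are placeholders, not Optimove's actual implementation.

```python
import psycopg2

# Connection details are placeholders for the ones you provide to Optimove.
conn = psycopg2.connect(
    host="db.example.invalid",
    port=5432,
    user="optimove_reader",
    password="YOUR_PASSWORD",
    dbname="analytics",
)

# Extract only rows changed since the last run, using a LastUpdated column
# on a hypothetical transactions table.
last_run = "2024-01-01 00:00:00"
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT * FROM transactions WHERE last_updated > %s",
        (last_run,),
    )
    rows = cur.fetchall()

conn.close()
print(f"Extracted {len(rows)} new or updated rows")
```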
API connection
- To establish the connection, please send Optimove the full documentation, including details of the available API calls (with specific examples) and the relevant credentials.
- A connection may be made available with or without whitelisting.
- Kindly remove any restrictions that might impact the data extraction process (for example, a firewall). In addition, please provision your sources for the expected data volume to avoid overloading the servers.
Optimove’s batch-data ETL process will extract the data directly from your server. It is important that the API calls support a LastUpdated filter, since Optimove extracts only incremental data from the transactional and fact tables.
All supported API types and requirements for each source are available here.
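As a sketch of what such an incremental pull might look like, here is a minimal example using the requests package. The endpoint, parameter name, and auth scheme are all hypothetical and depend entirely on the API documentation you send to Optimove.

```python
import requests

# Endpoint, parameter name, and auth header are placeholders; the actual
# values come from your own API documentation.
response = requests.get(
    "https://api.example.invalid/v1/transactions",
    params={"lastUpdated": "2024-01-01T00:00:00Z"},
    headers={"Authorization": "Bearer YOUR_TOKEN"},
    timeout=60,
)
response.raise_for_status()

# Only records changed since the given timestamp are returned.
records = response.json()
print(f"Fetched {len(records)} incremental records")
```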
Questions
We are here to help! Please contact us with any questions, and we will be happy to assist you. Email us at [email protected] or call us (seven days a week from 5:00 am to 8:00 pm GMT) at +972-3-672-4546.