S3

Learn how to use the Amazon S3 storage action to build your workflow

Introduction

The Amazon S3 storage action in DBSync Cloud Workflow enables users to upload files directly to an Amazon S3 bucket as part of their integration processes. This is useful for scalable storage, backup, compliance, and sharing of files and reports generated by workflows.

Use Cases

  • Store integration logs and audit data in an S3 bucket.

  • Upload CSV or JSON files for reporting or downstream processing.

  • Archive transformed data for compliance or historical access.

Use Case Scenario

Scenario: Archiving Daily Sales Reports to Amazon S3

A retail company runs a DBSync workflow every night to generate a sales report in CSV format. To comply with data retention policies and provide easy access to historical reports, the company uploads the file to an Amazon S3 bucket structured by date.
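A common way to implement a date-structured bucket like this is to derive the object key from the report date. The sketch below is illustrative only; the prefix and file name are assumptions, not DBSync settings:

```python
from datetime import date

def build_report_key(report_date: date, prefix: str = "sales-reports") -> str:
    """Build a date-partitioned S3 object key.

    The prefix and file name here are hypothetical examples.
    """
    return f"{prefix}/{report_date:%Y/%m/%d}/daily-sales.csv"

print(build_report_key(date(2024, 7, 1)))
# sales-reports/2024/07/01/daily-sales.csv
```

Keys structured this way sort chronologically and make it easy to scope lifecycle rules or searches to a single day or month.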

Prerequisites

  • An active AWS account with S3 access.

  • IAM credentials (Access Key ID and Secret Access Key) with PutObject permissions.

  • Configured S3 connection in DBSync.
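As a minimal sketch of the IAM permissions listed above, the policy below grants PutObject (for WRITE) and GetObject (for READ) on a single bucket. The bucket name is an assumption; substitute your own:

```python
import json

# Minimal illustrative IAM policy. "my-dbsync-bucket" is a placeholder;
# scope the Resource ARN to your actual bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::my-dbsync-bucket/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Granting only these two actions on a single bucket follows the least-privilege principle; widen the policy only if your workflows need listing or deletion.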

Configuration Steps

  1. Add S3 Storage Action to Workflow

  • Drag and drop the S3 storage action into your workflow.

  • Click Configure on the S3 action.

  • Click the dropdown in the S3 Connector field and select your Connector.

  • Select the desired operation (READ or WRITE).

Read from S3
  1. Define Storage Parameters for READ operation

  • Bucket Name: The bucket name specified on the connector appears here; it is not editable.

  • Object Name: Specify the name (key) of the object to read from the S3 bucket.

  • File Content: The variable that holds the downloaded file content. By default, the variable name is auto-generated as s3-download-file-content.

  • Properties

    • Use Accelerated Endpoint (Dropdown True/False): Option to use S3 Transfer Acceleration for faster data transfer.

  • Preview: Displays the list of files you have selected.
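Behind the scenes, enabling the accelerated endpoint routes requests through the bucket's S3 Transfer Acceleration hostname rather than the standard one. The helper below is a hypothetical sketch of that difference; the function name is ours, not part of DBSync or the AWS SDK:

```python
def s3_object_url(bucket: str, key: str, accelerated: bool = False) -> str:
    """Return the virtual-hosted-style URL for an S3 object.

    With Transfer Acceleration enabled, requests go to the
    <bucket>.s3-accelerate.amazonaws.com endpoint instead of the
    standard <bucket>.s3.amazonaws.com endpoint.
    """
    host = (
        f"{bucket}.s3-accelerate.amazonaws.com"
        if accelerated
        else f"{bucket}.s3.amazonaws.com"
    )
    return f"https://{host}/{key}"

print(s3_object_url("my-bucket", "reports/daily.csv", accelerated=True))
# https://my-bucket.s3-accelerate.amazonaws.com/reports/daily.csv
```

Note that Transfer Acceleration must also be enabled on the bucket itself in AWS for the accelerated endpoint to work.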

Write to S3
  1. Define Storage Parameters for WRITE operation

  • Bucket Name: The bucket name specified on the connector appears here; it is not editable.

  • Object Name: Specify the name (key) of the object to write to in the S3 bucket.

  • File Content: Select the variable that holds the file content to upload.

  • Properties

    • Use accelerated endpoint (Dropdown True/False): Option to use S3 Transfer Acceleration for faster data transfer.

    • Canned ACL: A canned ACL (Access Control List) in AWS S3 is a predefined set of permissions that can be applied to S3 objects or buckets, specifying who can read, write, or perform other operations on them. AWS provides canned ACLs such as Private, Public-Read, Public-Read-Write, Authenticated-Read, Log-Delivery-Write, Bucket-Owner-Read, Bucket-Owner-FullControl, and Aws-Exec-Read.

  • Advanced Settings: Used to set up S3 tags and metadata key-value pairs. These fields are optional.

    • Tags: Add rows by clicking the '+' icon and remove them by clicking the delete icon. Each row provides two fields to enter a key and a value.

    • Metadata: Add rows by clicking the '+' icon and remove them by clicking the delete icon. Each row provides two fields to enter a key and a value.

    • Search box/Clear all: Search the list of tags or metadata that were added. Click Clear all to remove all tags or metadata at once.
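For context on what those key-value rows become on the wire: in the S3 PutObject API, object tags travel as a URL-encoded string (the Tagging parameter) and user-defined metadata becomes x-amz-meta-* request headers. The sketch below illustrates both encodings; the function names are assumptions:

```python
from urllib.parse import urlencode

def encode_tags(tags: dict) -> str:
    """Object tags are sent as a URL-encoded query string
    (the Tagging parameter of PutObject)."""
    return urlencode(tags)

def metadata_headers(metadata: dict) -> dict:
    """User-defined metadata is sent as x-amz-meta-* request headers."""
    return {f"x-amz-meta-{k.lower()}": v for k, v in metadata.items()}

print(encode_tags({"department": "sales", "retention": "7y"}))
# department=sales&retention=7y
print(metadata_headers({"Source": "dbsync-workflow"}))
# {'x-amz-meta-source': 'dbsync-workflow'}
```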

Best Practices

  • Use environment variables or encrypted credentials for AWS keys.

  • Structure S3 keys (paths) using date or record identifiers to organize files.

  • Use lifecycle rules in S3 for auto-archiving or deletion.
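A lifecycle rule like the one suggested above can be expressed in the shape accepted by the S3 PutBucketLifecycleConfiguration API. The prefix and day counts below are illustrative assumptions:

```python
import json

# Illustrative lifecycle configuration: move objects under the
# (assumed) "sales-reports/" prefix to Glacier after 90 days and
# delete them after roughly 7 years.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-expire-reports",
            "Filter": {"Prefix": "sales-reports/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},  # ~7 years
        }
    ]
}
print(json.dumps(lifecycle, indent=2))
```

Because the rule is scoped by prefix, date-structured keys make it simple to apply different retention policies to different report types.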

Troubleshooting

  • Access Denied: Ensure IAM policy includes necessary S3 permissions.

  • Invalid Bucket: Verify the bucket name and region.

  • Upload Fails: Check internet connectivity and file size limits.

Limitations

  • Large file uploads may require multipart upload support (not currently available).

  • Region-specific endpoints must be configured correctly.

  • No support for client-side encryption in current version.
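Since multipart upload is not available, each upload must fit in a single PUT, which S3 caps at 5 GB. A pre-flight size check along these lines can catch oversized files before the workflow runs; the helper name is ours:

```python
SINGLE_PUT_LIMIT = 5 * 1024**3  # S3 caps a single PUT request at 5 GB

def can_upload_single_put(size_bytes: int) -> bool:
    """Return True if the file fits in one PUT (no multipart needed)."""
    return size_bytes <= SINGLE_PUT_LIMIT

print(can_upload_single_put(100 * 1024**2))  # 100 MB file -> True
print(can_upload_single_put(6 * 1024**3))    # 6 GB file -> False
```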
