# S3

## Introduction

The Amazon S3 storage action in DBSync Cloud Workflow enables users to upload files directly to an Amazon S3 bucket as part of their integration processes. This is useful for scalable storage, backup, compliance, and sharing of files and reports generated by workflows.

## Use Cases

* Store integration logs and audit data in an S3 bucket.
* Upload CSV or JSON files for reporting or downstream processing.
* Archive transformed data for compliance or historical access.

## Use Case Scenario

**Scenario: Archiving Daily Sales Reports to Amazon S3**

A retail company runs a DBSync workflow every night to generate a sales report in CSV format. To comply with data retention policies and provide easy access to historical reports, the company uploads the file to an Amazon S3 bucket structured by date.
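The date-structured key layout described in this scenario can be sketched in Python. The `sales-reports/` prefix and the `sales.csv` file name below are illustrative assumptions, not DBSync defaults:

```python
from datetime import date

def daily_report_key(run_date: date, prefix: str = "sales-reports") -> str:
    """Build a date-structured S3 object key, e.g. sales-reports/2024/05/12/sales.csv."""
    return f"{prefix}/{run_date:%Y/%m/%d}/sales.csv"

print(daily_report_key(date(2024, 5, 12)))
# → sales-reports/2024/05/12/sales.csv
```

Structuring keys by year/month/day makes it easy to browse historical reports and to scope lifecycle rules to a prefix.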

## Prerequisites

* An active AWS account with S3 access.
* IAM credentials (Access Key ID and Secret Access Key) with PutObject permissions.
* Configured S3 connection in DBSync.
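A minimal IAM policy granting the permissions above might look like the following sketch. The bucket name is a placeholder, and a read operation additionally needs `s3:GetObject`; your actual policy may require more actions (for example, `s3:ListBucket`):

```python
import json

# Minimal policy sketch: PutObject (write) and GetObject (read) on one bucket.
# "example-bucket" is a placeholder, not a DBSync default.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```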

## Configuration Steps

#### Add S3 Storage Action to Workflow

1. Drag and drop the S3 storage action into your workflow.
2. Click Configure on the S3 action.
3. Click the dropdown in the Select App field and select your Connector.
4. Select the required operation (Read or Write).

Based on the selected operation, define the storage parameters as follows.

#### Define Storage Parameters for Read Operation

1. **Bucket Name:** The bucket name specified on the connector appears here; it is non-editable.
2. **Object Name**: Specify the name (key) of the object in the S3 bucket.
3. **File Content:** The variable that holds the downloaded file content. By default, the variable name is auto-generated as `s3-download-file-content`.
4. **Properties**
   1. **Use Accelerated Endpoint** (Dropdown True/False): Option to use S3 Transfer Acceleration for faster data transfer.
5. **Preview:** Displays the list of files you have selected.

<figure><img src="https://1036205596-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fv9avy716UiAsS24zOznZ%2Fuploads%2FXmoU0FDYqd5cdGSpM361%2Funknown.png?alt=media&#x26;token=971ca532-5e17-47bb-b4d9-b048637bc02a" alt=""><figcaption></figcaption></figure>
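Under the hood, these read parameters map naturally onto an S3 `GetObject` request. A sketch of how they might be assembled — the function and field names here are illustrative, not DBSync internals:

```python
def build_read_request(bucket: str, object_name: str,
                       use_accelerated_endpoint: bool = False):
    """Assemble GetObject-style parameters plus the transfer-acceleration flag."""
    request = {"Bucket": bucket, "Key": object_name}
    # Transfer Acceleration is a client/endpoint setting, not a per-request field,
    # so it is kept separate from the request parameters.
    client_config = {"use_accelerate_endpoint": use_accelerated_endpoint}
    return request, client_config

req, cfg = build_read_request("example-bucket", "reports/sales.csv", True)
print(req, cfg)
```

Note that the bucket comes from the connector (non-editable), so only the object name and the acceleration flag vary per action.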

#### Define Storage Parameters for Write Operation

1. **Bucket Name**: The bucket name specified on the connector appears here; it is non-editable.
2. **Object Name:** Specify the name (key) of the object to write in the S3 bucket.
3. **File Name**: Choose the source file name from the dropdown.
4. **File Content**: Select the variable that holds the file content to be uploaded (for example, the output of an earlier read step).
5. **Properties**
   1. **Use Accelerated Endpoint** (Dropdown True/False): Option to use S3 Transfer Acceleration for faster data transfer.
6. **Canned ACL**: A canned ACL (Access Control List) in AWS S3 is a predefined set of permissions that can be applied to S3 objects or buckets. ACLs manage access to S3 resources by specifying who can read, write, or perform other operations on them. AWS provides canned ACLs such as `private`, `public-read`, `public-read-write`, `authenticated-read`, `log-delivery-write`, `bucket-owner-read`, `bucket-owner-full-control`, and `aws-exec-read`.
7. **Advanced Settings**: Used to set up S3 tags and metadata key-value pairs. These fields are optional.
   1. **Tags**: Add rows by clicking the '+' icon and remove them with the delete icon. Each row provides fields for a key and a value.
   2. **Metadata**: Add rows by clicking the '+' icon and remove them with the delete icon. Each row provides fields for a key and a value.
   3. **Search box/Clear all**: Search the list of tags or metadata you have added, or click Clear all to remove them all at once.

<figure><img src="https://1036205596-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fv9avy716UiAsS24zOznZ%2Fuploads%2FCw97B8lQkKSxF0RnETjd%2Funknown.png?alt=media&#x26;token=edefa4f3-8080-463e-90e2-2ef5b4abe842" alt=""><figcaption><p>Write to S3</p></figcaption></figure>
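The write parameters correspond closely to an S3 `PutObject` request, where tags are URL-encoded into a single `Tagging` string and metadata is a plain key-value map. A sketch of how they might be combined — names and values below are illustrative assumptions:

```python
from urllib.parse import urlencode

def build_write_request(bucket, object_name, body,
                        acl="private", tags=None, metadata=None):
    """Assemble PutObject-style parameters. Tags are URL-encoded into one
    Tagging string, as the S3 API expects; metadata stays a key-value map."""
    request = {"Bucket": bucket, "Key": object_name, "Body": body, "ACL": acl}
    if tags:
        request["Tagging"] = urlencode(tags)
    if metadata:
        request["Metadata"] = metadata
    return request

req = build_write_request(
    "example-bucket", "reports/2024/05/12/sales.csv", b"id,amount\n1,9.99\n",
    acl="bucket-owner-full-control",
    tags={"department": "sales"},
    metadata={"generated-by": "dbsync-workflow"},
)
print(req["Tagging"])   # → department=sales
```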

## Best Practices

* Use environment variables or encrypted credentials for AWS keys.
* Structure S3 keys (paths) using date or record identifiers to organize files.
* Use lifecycle rules in S3 for auto-archiving or deletion.
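A lifecycle rule of the kind suggested above is expressed as a configuration document on the bucket. This sketch transitions objects under a prefix to Glacier after 90 days and deletes them after 365 — the prefix and day counts are examples, not recommendations:

```python
import json

# Sketch of an S3 lifecycle configuration; "sales-reports/" is a placeholder prefix.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-reports",
            "Filter": {"Prefix": "sales-reports/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```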

## Troubleshooting

* Access Denied: Ensure IAM policy includes necessary S3 permissions.
* Invalid Bucket: Verify the bucket name and region.
* Upload Fails: Check internet connectivity and file size limits.

## Limitations

* Large file uploads may require multipart upload support (not currently available).
* Region-specific endpoints must be configured correctly.
* No support for client-side encryption in the current version.
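For context on the multipart limitation: per AWS documentation, a single S3 PUT is capped at 5 GiB, and multipart uploads split larger files into parts of at least 5 MiB each. A sketch of the part arithmetic (the 100 MiB chunk size is an arbitrary example):

```python
import math

SINGLE_PUT_LIMIT = 5 * 1024**3   # 5 GiB: max object size for one PUT
MIN_PART_SIZE = 5 * 1024**2      # 5 MiB: minimum multipart part size (except last)

def needs_multipart(size_bytes: int) -> bool:
    """True when the file exceeds what a single PUT can carry."""
    return size_bytes > SINGLE_PUT_LIMIT

def part_count(size_bytes: int, part_size: int = 100 * 1024**2) -> int:
    """Number of parts when uploading in fixed-size chunks (here 100 MiB)."""
    return math.ceil(size_bytes / part_size)

print(needs_multipart(6 * 1024**3))  # → True
print(part_count(6 * 1024**3))       # → 62
```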


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.mydbsync.com/cloud-workflow/2026_create-your-workflow/action/storage-actions/s3.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
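The question must be URL-encoded when placed in the `ask` parameter. A sketch of building such a request URL with the Python standard library (the sample question is arbitrary):

```python
from urllib.parse import urlencode

BASE = "https://docs.mydbsync.com/cloud-workflow/2026_create-your-workflow/action/storage-actions/s3.md"

def ask_url(question: str) -> str:
    """Append the natural-language question as a URL-encoded `ask` parameter."""
    return f"{BASE}?{urlencode({'ask': question})}"

print(ask_url("Which canned ACLs are supported?"))
```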
