# CSV

## Introduction

In this article, you will learn how to use the CSV Storage Action in DBSync Cloud Workflow to create and parse CSV (Comma-Separated Values) files during workflow execution.

This action allows you to:

* Generate CSV files from source data as part of an integration
* Parse and process incoming CSV files from external systems

The CSV format is widely used for exporting and importing structured data. This feature is particularly useful for backing up records, generating reports, integrating with external systems, or preparing data for manual review or downstream processing.

By including this action in your workflow, you can easily manage CSV-based data exchanges within your automated integration pipelines.

## Use Cases

* Archiving processed records in a structured, portable format.
* Exporting data for use in third-party systems.
* Creating audit trails or logs of data operations.
* Supporting offline or manual data review processes.

## Configuration Steps

#### Add **CSV** Storage Action to Workflow

1. Drag and drop the **CSV** storage action into your workflow.
2. Click **Configure** on the **CSV** action.
3. Click the **Operation** dropdown and select the operation (*Compose* or *Parse*) that you need to perform.

#### Define Compose Operation

After selecting the *Compose* operation from the **Operation** dropdown, configure the following fields:

1. **Compose To**: Specify the desired output file name. You may use dynamic variables (e.g., output\_${date}.csv) for uniqueness.
2. **Select Separator**: Select the separator to use in the composed CSV file (for example: Comma, Space, Tab, Semicolon, Pipe, or Custom).
3. **Custom Separator**: If you select *Custom* from the **Separator** dropdown, manually specify the separator character you want to use.
4. **Include Header Row**: Enable the checkbox to include the headers in the CSV file that is generated.
5. **Header Layout**: Provide the column names in order to map.
6. **Map**: Define how data fields from the source system are transformed and transferred to corresponding fields in the target system during integration.
7. **Properties**
   1. **Encoding type**: Select one of the following encoding types:
      1. UTF-8
      2. ASCII
   2. **Batch size**: Select a batch size of *1*, *50*, *100*, *200*, or *500* from the dropdown.

<figure><img src="https://1036205596-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fv9avy716UiAsS24zOznZ%2Fuploads%2Fd8LuE5EkP6aWeietNHYJ%2Funknown.jpeg?alt=media&#x26;token=73f2afd8-2703-4e3d-b666-e0dde93bb3cf" alt=""><figcaption></figcaption></figure>
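To make the *Compose* settings concrete, here is a minimal Python sketch of the equivalent behavior using the standard `csv` module. The records, the header layout, and the pipe separator are hypothetical illustrations, not part of the DBSync product itself:

```python
import csv
import io

# Hypothetical mapped records from the source system (see the Map field).
records = [
    {"id": "1", "name": "Acme Corp", "amount": "2500"},
    {"id": "2", "name": "Globex", "amount": "1800"},
]

# Header Layout: column names in the order they should appear.
header = ["id", "name", "amount"]

buf = io.StringIO()
# Select Separator: a pipe here, standing in for the "Custom" option.
writer = csv.DictWriter(buf, fieldnames=header, delimiter="|")
writer.writeheader()   # Include Header Row enabled
writer.writerows(records)

print(buf.getvalue())  # first line: id|name|amount
```

The same pattern applies for any of the other separators (comma, tab, semicolon, and so on); only the `delimiter` value changes.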

#### Define Parse Operation

After selecting the *Parse* operation from the **Operation** dropdown, configure the following fields:

1. **Parse From**: Specify the location from which you want to fetch the file. For example: VALUE("$file\_download\_6")
2. **Select Separator**: Select the separator used in your CSV file (for example: Comma, Space, Tab, Semicolon, Pipe, or Custom).
3. **Custom Separator**: If you select *Custom* from the **Separator** dropdown, specify the custom separator character used in the file.
4. **Include Header Row**: Enable the checkbox if the incoming CSV file contains a header row.
5. **Header Layout**: Provide the column names in order to map.
6. **Properties**
   1. **Encoding type**: Select one of the following encoding types:
      1. UTF-8
      2. ASCII
   2. **Batch size**: Select a batch size of *1*, *50*, *100*, *200*, or *500* from the dropdown.
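
The *Parse* settings can be sketched in Python as follows. The file content, the semicolon separator, and the batch size of 2 are hypothetical examples; the `batches` helper only illustrates how a batch-size property chunks parsed rows for downstream steps:

```python
import csv
import io

# Hypothetical file content fetched by a prior download step.
raw = "id;name;amount\n1;Acme Corp;2500\n2;Globex;1800\n3;Initech;950\n"

# Include Header Row: the first line supplies the column names.
reader = csv.DictReader(io.StringIO(raw), delimiter=";")

def batches(rows, size):
    """Yield rows in fixed-size chunks, mimicking the Batch size property."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

chunks = list(batches(reader, 2))
print(len(chunks))  # 3 rows with batch size 2 -> 2 batches
```

With a batch size of 2, three parsed rows arrive as one batch of two records followed by one batch of one record.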

## Best Practices

* Use timestamp variables in file names to prevent overwriting critical data.
* Regularly monitor storage usage and archive older files if needed.
* Ensure sensitive data is securely stored and encrypted if necessary.
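
As a small sketch of the first tip, the following Python snippet builds a timestamped file name of the kind a Compose To value such as `output_${date}.csv` might produce. The exact pattern is a hypothetical example:

```python
from datetime import datetime, timezone

# UTC timestamp keeps file names unique and sortable across runs.
stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
filename = f"output_{stamp}.csv"
print(filename)  # e.g. output_20250101_120000.csv
```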

## Limitations

* File size is subject to plan-specific limits.
* Only plain-text CSV format is supported (no Excel-specific features).
* No automatic upload to external storage (use additional steps/actions if required).

## Troubleshooting Tips

* Ensure the specified file path exists.
* Validate that all required fields are present in the incoming data.
* Check file permissions if writing fails.
