S3 Unload

    Article Summary

    This article is specific to the following platforms: Snowflake, Redshift.

    S3 Unload Component

    Creates files in a specified S3 bucket and loads them with data from a table or view.

    For Snowflake users: by default, your data will be unloaded in parallel.

    For Amazon Redshift users: your data will be unloaded in parallel by default, creating separate files for each slice on your cluster.

    For Amazon Redshift users: this component is similar in effect to the Text Output component. Since S3 Unload unloads data in parallel directly from Amazon Redshift to S3, it tends to be faster than using Text Output. However, S3 Unload sacrifices some of the added functionality that comes from Text Output pulling the data through the Matillion ETL instance (such as adding column headers to each file).
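
    As a rough illustration of what a parallel unload amounts to on Redshift, the sketch below shows a representative UNLOAD statement. The table, bucket, prefix, and role ARN are placeholder names, and the exact statement Matillion ETL generates depends on the component's properties.

        -- Illustrative only; all object names and the role ARN are placeholders.
        UNLOAD ('SELECT * FROM public.flights')
        TO 's3://example-bucket/flights_unload_'    -- S3 object prefix
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
        PARALLEL ON;
        -- PARALLEL ON (the default) writes one file per cluster slice, e.g.
        -- flights_unload_0000_part_00, flights_unload_0001_part_00, ...
        -- PARALLEL OFF writes a single file, split only if it exceeds 6.2 GB.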

    To access an S3 bucket from a different AWS account, the following is required:


    Properties

    Snowflake Properties

    Property | Setting | Description
    Name | Text | A human-readable name for the component.
    Stage | Select | Choose a predefined stage for your data. These stages must be created from your Snowflake account console. Alternatively, choose "Custom" to base the staging on the component's properties.
    S3 Object Prefix | Text/Select | The name of the file for data to be unloaded into.
    When a user enters a forward slash character / after a folder name, a validation of the file path is triggered. This works in the same manner as the Go button.
    File Prefix | Text | Filename prefix for unloaded data to be named on the S3 bucket. Each file is named as the prefix followed by a number denoting which node it was unloaded from. All unloads are parallel and will use the maximum number of nodes available at the time.
    Encryption | Select | Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
    None: No encryption.
    Client Side Encryption: Encrypt the data according to a client-side master key. Read Protecting data using client-side encryption to learn more.
    SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
    SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
    KMS Key ID | Select | The ID of the KMS encryption key you have chosen to use in the Encryption property.
    Master Key | Select | The ID of the client-side encryption key you have chosen to use in the Encryption property.
    Authentication | Select | Select the authentication method. Users can choose either:
    Credentials: Uses AWS security credentials. Read Manage Credentials to learn more.
    Storage Integration: Uses a Snowflake storage integration. A storage integration is a Snowflake object that stores a generated identity and access management (IAM) entity for your external cloud storage, along with an optional set of permitted or blocked storage locations (Amazon S3, Google Cloud Storage, or Microsoft Azure). More information can be found at CREATE STORAGE INTEGRATION.
    Credentials | Select | Select your AWS credentials. The special value, [Environment Default], uses the set of credentials specified in your Matillion ETL environment; this is the default value. Click Manage to edit or create new credentials in Manage Credentials.
    Storage Integration | Select | Select the storage integration. Storage integrations are required to permit Snowflake to read data from and write to a cloud storage location. Integrations must be set up in advance of selecting them in Matillion ETL. To learn more about setting up a storage integration, read Storage Integration Setup Guide.
    Note: Storage integrations can be configured to support Amazon S3, Google Cloud Storage, or Microsoft Azure cloud storage regardless of the cloud provider that hosts your Snowflake account.
    Warehouse | Select | Choose a Snowflake warehouse that will run the unload.
    Database | Select | Select the database containing the table or view to unload.
    Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article.
    Table Name | Text | The table or view to unload to S3.
    Format | Select | Choose from preset file formats available in your Snowflake database. Additional file formats can be created using the Create File Format component. Selecting the [Custom] file format will use the S3 Unload component's properties to define the file format.
    File Type | Select | Choose whether Matillion ETL should unload the data in CSV, JSON, or PARQUET format.
    Compression | Select | Whether the unloaded files are compressed (gzip, BROTLI, BZ2, DEFLATE, RAW_DEFLATE, or ZSTD) or not compressed at all.
    Nest Columns | Select | Note: This parameter is only available when the File Type parameter is set to "JSON".
    Specify whether or not ("True" or "False") the table columns should be nested into a single JSON object so that the file can be configured correctly. A table with a single variant column will not require this setting to be "True". Default is "False".
    Record Delimiter | Text | The delimiter used to separate records (rows) in the file. Defaults to newline. \n can also signify a newline; \r can signify a carriage return.
    Field Delimiter | Text | The delimiter that separates columns. The default is a comma. A [TAB] character can be specified as "\t".
    Date Format | Text | Defaults to "auto" - this can be used to manually specify a date format.
    Time Format | Text | Defaults to "auto" - this can be used to manually specify a time format.
    Timestamp Format | Text | Defaults to "auto" - this can be used to manually specify a timestamp format.
    Escape | Select | (CSV only) Specify a single character to be used as the escape character for field values that are enclosed. Default is NONE.
    Escape Unenclosed Field | String | (CSV only) Specify a single character to be used as the escape character for unenclosed field values only. Accepts common escape sequences, octal values, or hex values, as well as NONE. The default is a backslash (\\).
    If a character is specified in the "Escape" field, it will override this field.
    If you have set a value in the property Field Optionally Enclosed, all fields will become enclosed, rendering the Escape Unenclosed Field property redundant, in which case it will be ignored.
    Field Optionally Enclosed | Text | A character used to enclose strings. Can be a single quote ('), a double quote ("), or NONE (default). The chosen character can be escaped by that same character. Note: This MUST be set when unloading data that contains NULL values into CSV format.
    Null If | String | Specify a string used to represent SQL NULL values in the unloaded file. When an SQL NULL value is found, it is replaced with the first value listed in this property. For unloads, only the first string that you specify is used.
    Allow Overwrites | Select | If the target file already exists, overwrite data instead of generating an error.
    Single File | Boolean | When True, the unload runs in serial rather than parallel. This results in a slower unload but a single, complete file.
    The default setting is False.
    When True, no file extension is used in the output filename (regardless of the file type, and regardless of whether or not the file is compressed).
    When False, a filename prefix must be included in the path.
    Max File Size | Text | The maximum size (in bytes) of each file generated, per thread. The default is 16000000 bytes (16 MB), and Snowflake has a 6.2 GB file limit for copy-into-location operations. Files that exceed the stated maximum will be split into multiple size-abiding parts.
    Include Headers | Select | If set to "True", write column names as headers at the top of the unloaded files.
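
    For orientation, the Snowflake properties above correspond roughly to options of Snowflake's COPY INTO <location> statement. The sketch below is illustrative only; the stage, database, schema, and table names are placeholders, and it is not the exact statement the component generates.

        -- Illustrative only; stage and object names are placeholders.
        COPY INTO @my_stage/carriers_unload_           -- Stage plus S3 Object Prefix / File Prefix
        FROM my_db.my_schema.my_table                  -- Database, Schema, Table Name
        FILE_FORMAT = (TYPE = CSV                      -- File Type
                       COMPRESSION = GZIP              -- Compression
                       FIELD_DELIMITER = ','           -- Field Delimiter
                       FIELD_OPTIONALLY_ENCLOSED_BY = '"'
                       NULL_IF = ('NULL'))             -- Null If
        OVERWRITE = TRUE                               -- Allow Overwrites
        SINGLE = FALSE                                 -- Single File
        MAX_FILE_SIZE = 16000000                       -- Max File Size (bytes)
        HEADER = TRUE;                                 -- Include Headers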

    Redshift Properties

    Property | Setting | Description
    Name | Text | A human-readable name for the component.
    Schema | Select | Select the table schema. The special value, [Environment Default], will use the schema defined in the environment. For more information on using multiple schemas, see this article. Note: An external schema is required if the Type property is set to "External".
    Table Name | Text | The table or view to unload to S3.
    S3 URL Location | Text | The URL of the S3 bucket to unload the data into.
    Note: This component can unload to any accessible bucket, regardless of region.
    When a user enters a forward slash character / after a folder name, a validation of the file path is triggered. This works in the same manner as the Go button.
    S3 Object Prefix | Text | Create data files in S3 beginning with this prefix. The format of the output is:
    <prefix><slice-number>_part_<file-number>
    where <slice-number> is the number of the cluster slice that produced the file and <file-number> increments when a file exceeds the 6.2 GB limit and is split into multiple parts.
    IAM Role ARN | Text | Supply the value of a role ARN that is already attached to your Redshift cluster and has the necessary permissions to access S3. This is optional; without this style of setup, the credentials of the environment (instance credentials or manually entered access keys) will be used.
    See the Redshift documentation for more information about using a Role ARN with Redshift.
    Generate Manifest | Select | Whether or not to generate a manifest file detailing the files that were added.
    Note: Selecting the option Yes (Verbose) will create a manifest file that explicitly lists details for the data files created by the Unload process. For more information, please visit the Redshift documentation.
    Data File Type | Select | Choose the file type from: CSV, Delimited, Fixed Width, or Parquet.
    Delimiter | Text | The delimiter that separates columns. The default is a comma. A [TAB] character can be specified as "\t".
    Fixed Width Spec | Text | Unloads the data to a file where each column has a fixed width rather than being separated by a delimiter. Each column is described by a name and length, separated by a colon; each described column is then separated by a comma.
    For example, suppose we have four columns: name, id, age, and state, with respective lengths of 12, 8, 2, and 2.
    The written description to convert this data into a table using fixed-width columns would then be:
    name:12,id:8,age:2,state:2

    Note that the columns can have any plaintext name. For more information on fixed-width inputs, please consult the AWS documentation.
    Compress Data | Select | Whether or not the resultant files are to be compressed.
    Compression Type | Select | (If Compress Data is Yes) Select either gzip or Bzip2 as the compression method.
    NULL As | Text | Writes the specified string in place of NULL values in the unloaded files. Use this if your data has a particular representation of missing data.
    Escape | Select | Whether or not to insert backslashes to escape special characters. This is often a good idea if you intend to re-load the data back into a table later, since COPY also supports this option.
    Allow Overwrites | Select | If the target file already exists, overwrite data instead of generating an error.
    Parallel | Select | If set, the unload will work in parallel, creating multiple files (one for each slice of the cluster). Disabling parallel will result in a slower unload but a single, complete file.
    Add Quotes | Select | If set, quotation marks are added around the data values.
    S3 Bucket Region | Select | The Amazon S3 region hosting the S3 bucket. This is not normally required and can be left as "None" (default) if the bucket is in the same region as your Redshift cluster.
    Max File Size | Text | The maximum size (in MB) of each file generated, per thread. The default is 16 MB, and AWS has a 6.2 GB file limit for Unload operations. Files that exceed the stated maximum will be split into multiple size-abiding parts.
    Include Headers | Select | If set to Yes, Matillion will write column names as headers at the top of unloaded files.
    Note: This property cannot be used in tandem with the Fixed Width property.
    Encryption | Select | Decide how the files are encrypted inside the S3 bucket. This property is available when using an existing Amazon S3 location for staging.
    None: No encryption.
    SSE KMS: Encrypt the data according to a key stored on KMS. Read AWS Key Management Service (AWS KMS) to learn more.
    SSE S3: Encrypt the data according to a key stored on an S3 bucket. Read Using server-side encryption with Amazon S3-managed encryption keys (SSE-S3) to learn more.
    KMS Key ID | Select | The ID of the KMS encryption key you have chosen to use in the Encryption property.
    Master Key | Select | The ID of the server-side encryption key you have chosen to use in the Encryption property.
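
    Similarly, most of the Redshift properties map onto options of Redshift's UNLOAD statement. The sketch below shows a representative mapping for a delimited, gzip-compressed unload; all object names and the role ARN are placeholders, and the exact statement generated depends on the component configuration.

        -- Illustrative only; object names and the role ARN are placeholders.
        UNLOAD ('SELECT * FROM my_schema.my_table')    -- Schema, Table Name
        TO 's3://example-bucket/my_prefix'             -- S3 URL Location + S3 Object Prefix
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'   -- IAM Role ARN
        MANIFEST                                       -- Generate Manifest
        DELIMITER ','                                  -- Delimiter
        GZIP                                           -- Compress Data / Compression Type
        NULL AS 'NULL'                                 -- NULL As
        ESCAPE                                         -- Escape
        ALLOWOVERWRITE                                 -- Allow Overwrites
        PARALLEL ON                                    -- Parallel
        MAXFILESIZE 16 MB                              -- Max File Size
        HEADER                                         -- Include Headers
        REGION 'eu-west-1';                            -- S3 Bucket Region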

    Example

    In this example, we have a table "t_carriers" and we wish to unload the data to an S3 bucket for long-term storage. To this end, we use the S3 Unload component in a job as shown below.

    In the S3 Unload component, an S3 URL is set, and an object prefix is set to "carriers_unload". This means that files will be created on the S3 bucket with the common name of "carriers_unload" followed by the slice number (if "Parallel" is enabled, which it is) and part number of the file. For example: "carriers_unload_3_part_2".

    Note that although the table is unloaded, it still exists and must be cleaned up separately.
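
    A hedged sketch of the Redshift-side equivalent of this example follows; the bucket name, schema, and role ARN are assumptions rather than values from the article.

        -- Roughly what the example configuration unloads (placeholder bucket, schema, and role ARN).
        UNLOAD ('SELECT * FROM public.t_carriers')
        TO 's3://example-bucket/carriers_unload_'      -- object prefix "carriers_unload"
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
        PARALLEL ON;

        -- The unload leaves the source table in place; remove it in a separate step if it is no longer needed.
        DROP TABLE IF EXISTS public.t_carriers;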