RDS Query Component


This feature is only available for instances hosted on AWS.

Run an SQL Query on an RDS database and copy the result to a table, via S3.

This component is for data-staging - getting data into a table in order to perform further processing and transformations on it. The target table should be considered temporary, as it will either be truncated or recreated each time the component runs.

Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated. Otherwise, the target table is truncated. Setting the Load Option 'Recreate Target Table' to 'Off' will prevent both recreation and truncation. Do not modify the target table structure manually.

Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Choice Basic - This mode will build a Query for you using settings from Data Source, Data Selection and Data Source Filter parameters. In most cases, this will be sufficient.
Advanced - This mode will require you to write an SQL-like query to call data from the RDS Database.
Database Type Choice postgresql - the default option, PostgreSQL (see AWS Documentation here).
aurora - Amazon Aurora (see AWS Documentation here).
mysql - MySQL (see AWS Documentation here).
mssql - Microsoft SQL Server (see AWS Documentation here).
oracle - Oracle (see AWS Documentation here).
Note: For oracle you must first provide an Oracle JDBC driver as this is not distributed with Matillion ETL. Contact support for more information.
RDS Endpoint Select/Text This is the RDS Database Endpoint. You can find this in the Amazon AWS Console; it typically includes a long dotted name and a port number, separated by a colon.
By default, this parameter offers a list of all the RDS instances available within your current region that are the same type as the selected Database Type.
If the endpoint you wish to load from is located in a different region, or you are not running on Amazon EC2 and therefore do not have a region, you can type in your own values.
If no region is available, then the parameter is a freeform text field - otherwise, you can enter your own value in addition to the dropdown. When typing your own endpoint, include the port number.
Database Name Text This is the name of the database within your RDS instance.
Username Text This is your RDS connection username.
Password Text This is your RDS connection password. The password is masked so it can be set, but not read. You can store the password inside the component, but we highly recommend using the Password Manager option instead.
JDBC Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are determined automatically from the driver, and may change from version to version.
They are usually not required as sensible defaults are assumed.
Value A value for the given Parameter. Values are somewhat database-specific. The links below may help.
  1. PostgreSQL
  2. Amazon Aurora and MySQL
  3. Microsoft SQL Server
  4. Oracle

Please contact support if you think you require an advanced JDBC option.
Data Source Filter Input Column The available input columns vary depending upon the Data Source.
Qualifier Is - Compares the column to the value using the comparator.
Not - Reverses the effect of the comparison, so "equals" becomes "not equals", "less than" becomes "greater than or equal to", etc.
Comparator Choose a method of comparing the column to the value. Possible comparators include: 'Equal To', 'Greater than', 'Less than', 'Greater than or equal to', 'Less than or equal to', 'Like', 'Null'.
'Equal To' can match exact strings and numeric values while other comparators such as 'Greater than' will work only with numerics. The 'Like' operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.
Not all data sources support all comparators, thus it is likely only a subset of the above comparators will be available to choose from.
Value The value to be compared.
Combine Filters Choice And - Multiple filters must ALL be true for a row to be returned.
Or - Any one of the filters must be true for a row to be returned.
Limit Number Limits the number of rows that are loaded from the data source.
SQL Query Text This is an SQL query, written in the dialect of the RDS database. It should be a simple select query, and can be as simple as
select * from tablename
A slightly fuller sketch of an advanced-mode query is given after this table.
Target Table Text Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
Staging Select Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
Existing Amazon S3 Location: Selecting this option reveals additional properties for specifying a custom staging area on S3.
S3 Staging Area Text The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access and permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
This property is available when using an Existing Amazon S3 Location for Staging.
Warehouse Select Choose a Snowflake warehouse that will run the load.
Database Select Select the database that the newly-created table will belong to.
Schema Select Select the table schema. The special value, [Environment Default] will use the schema defined in the environment. For more information on using multiple schemas, see this article.
Concurrency Integer The number of S3 files to create. This helps when loading into Amazon Redshift, as the files are loaded in parallel. In addition, Matillion ETL will be able to upload parts of these files concurrently.
Note: The maximum concurrency is 8 times the number of processors on your cloud instance. For example: An instance with 2 processors has a maximum concurrency of 16.
Table Distribution Style Select Even - the default option, distribute rows around the Redshift Cluster evenly.
All - copy rows to all nodes in the Redshift Cluster.
Key - distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance - see the Amazon Redshift documentation for more information.
Table Distribution Key Select This is only displayed if the Table Distribution Style is set to Key. It is the column used to determine which cluster node the row is stored on.
Table Sort Key Select This is optional, and specifies the columns from the input that should be set as the table's sort-key.
Sort-keys are critical to good performance - see the Amazon Redshift documentation for more information.
Sort Key Options Select Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.
Primary Key Select Multiple Select one or more columns to be designated as Primary Keys for the table.
Load Options Multiple Selection Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 Bucket (if ON). Default is ON. Effectively decides whether to keep the staged data in the S3 Bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used.
File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
Load Options Choice & Text Clean Staged Files: Destroy staged files after loading data. Default is ON.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is OFF.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.
File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
Project Text The target BigQuery project to load data into.
Dataset Text The target BigQuery dataset to load data into.
Cloud Storage Staging Area Text The URL and path of the target Google Storage bucket to be used for staging the queried data.
Encryption Select Decide how the files are encrypted inside the S3 Bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data using S3-managed encryption keys.
KMS Key ID Select The ID of the KMS encryption key you have chosen to use in the 'Encryption' property.
Load Options Multiple Select Clean Cloud Storage Files: (If On) Destroy staged files on Cloud Storage after loading data. Default is On.
Cloud Storage File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
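
As referenced in the SQL Query property above, the following is a minimal sketch of the kind of query the component runs - written directly in Advanced mode, or generated from the Data Source, Data Selection and Data Source Filter properties in Basic mode. The table name ses_bounce comes from the example later in this article; the column names and filter value are hypothetical:

select bounce_type, bounce_date, recipient
from ses_bounce
where bounce_type = 'Permanent'

The WHERE clause above corresponds to a Basic-mode filter with input column 'bounce_type', qualifier 'Is', comparator 'Equal To' and value 'Permanent'.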

Variable Exports

This component makes the following values available to export into variables:

Source Description
Time Taken To Stage The amount of time (in seconds) taken to fetch the data from the source database and upload it to S3.
Time Taken To Load The amount of time (in seconds) taken to execute the COPY statement to load the data into the target table from S3.
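
For example, after mapping these exports onto job variables (hypothetically named time_stage and time_load here), a later SQL component could record them for auditing using Matillion's ${...} variable substitution. The etl_audit table below is an assumption for illustration:

insert into etl_audit (component_name, stage_seconds, load_seconds)
values ('RDS Query', ${time_stage}, ${time_load});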

Strategy

Connect to the RDS database and issue the query. Stream the results into objects on S3. Then create or truncate the target table and issue a COPY command to load the S3 objects into the table. Finally, clean up the temporary S3 objects.
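
As an illustration only - assuming a Redshift target, 'Recreate Target Table' set to Off and gzipped staged files - the statements issued resemble the following sketch. The bucket, prefix and IAM role are placeholders, and the exact statements vary by version and configuration:

truncate table target_table;
copy target_table
from 's3://my-staging-bucket/some-prefix/'
iam_role 'arn:aws:iam::123456789012:role/my-redshift-role'
gzip;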

Example

In this example we connect to a source database that contains a table of records indicating that an email sent via SES has been rejected (bounced). The job canvas is shown below.


We have selected 'postgresql' as the Database Type and provided a username and password for the specified RDS endpoint. The SQL Query is 'select * from ses_bounce', meaning we take the entire table.

When run, the results of the query are copied to rds_ses_bounce (as specified in the 'Target Table' property of the RDS Query component), which is reloaded each time the component runs. Further processing of the rds_ses_bounce table can now be done using a Transformation job.
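
As a hypothetical follow-on, a Transformation job could aggregate the staged rows - for example, counting bounces per day. The bounce_date column is an assumption about the ses_bounce schema:

select date_trunc('day', bounce_date) as bounce_day,
       count(*) as bounce_count
from rds_ses_bounce
group by 1
order by 1;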
