Table Output Component
Writes the input data flow out to an existing output table.
Note: Successful validation of this component ensures the target table exists, and the target columns have been found. However, data is only written to the table when the Job containing the table output is actually run.
|Name||Text||The descriptive name for the component. This is automatically determined from the table name when the Target Table property is first set.|
|Warehouse||Select||Choose a Snowflake warehouse that will run the load.|
|Database||Select||Choose the database containing the target table.|
|Schema||Select||Select the table schema. The special value [Environment Default] uses the schema defined in the environment. For more information on using multiple schemas, see this article.|
|Target Project||Text||Enter the name of the Google Cloud Platform Project that the table belongs to.|
|Target Dataset||Text||Enter the name of the Google Cloud Platform Dataset that the table belongs to.|
|Target Table||Select||The name of the output table. The tables found in the currently selected environment are provided to choose from. You can change the currently selected environment in the Environments section.|
|Fix Data Type Mismatches||Select||Yes - If the source column type does not match the target table type, attempt to CAST the value to the required target type. No - Do not cast types. Amazon Redshift may still attempt to coerce the types in this case.|
|Column Mapping||Input Column||The source field from the input flow.|
|Output Column||The target table output column to insert data into.|
|Truncate||Select||Truncate - Empty existing rows in the target table before loading the input rows. This may use a TRUNCATE or DELETE FROM style query, depending upon whether the current flow is in a transaction. TRUNCATE is faster, and therefore preferred, but is not transaction-safe and is not used if a transaction is in progress. Append - Add the records to the end of the table.|
|Automatic Compression||Select||When set to Yes (only available when Truncate is selected above), the target table is truncated by a DROP TABLE followed by a CREATE TABLE with a set of column encodings determined automatically by scanning a sample of input data. The encodings are determined by sampling up to 100,000 rows of input data and running ANALYZE COMPRESSION on the sample. This is similar to the S3 option COMPUPDATE.|
|Cast Input||Select||Yes - If the source column type does not match the target table type, attempt to CAST the value to the required target type. No - Do not cast types. Google BigQuery may still attempt to coerce the types in this case.|
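The cast behaviour described by the Fix Data Type Mismatches and Cast Input properties can be sketched as follows. This is an illustrative Python sketch of the underlying idea, not Matillion's implementation; the helper name, column names, and type names are assumptions.

```python
# Sketch: when a source column's type differs from the target column's
# type and casting is enabled, wrap the column in a CAST; otherwise
# select it as-is. All names and types are illustrative assumptions.

def build_select_expression(column, source_type, target_type, cast_mismatches):
    """Return the SELECT-list expression for one mapped column."""
    if cast_mismatches and source_type != target_type:
        return f'CAST("{column}" AS {target_type})'
    return f'"{column}"'

exprs = [
    build_select_expression("flight_count", "VARCHAR", "INTEGER", True),
    build_select_expression("carrier", "VARCHAR", "VARCHAR", True),
]
print(", ".join(exprs))  # -> CAST("flight_count" AS INTEGER), "carrier"
```

With casting disabled (the "No" setting), the column is selected unchanged and the warehouse itself may still coerce the value, as noted above.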
A full list of common component exports can be found here.
|Automatic Compression||If the 'Automatic Compression' property is set to 'If not already', this variable is set to "Yes" when compression has been applied where it had not been before. Otherwise, it is "No".|
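The Truncate property's choice between a TRUNCATE and a DELETE FROM style query can be sketched as below. This is a hedged illustration of the decision described in the property table, not the component's actual code; the function and table names are assumptions.

```python
# Sketch: TRUNCATE is faster but not transaction-safe, so a DELETE FROM
# style query is used when the current flow is inside a transaction.
# Function and table names are illustrative assumptions.

def empty_table_sql(table, in_transaction):
    """Return the statement used to empty the target table."""
    if in_transaction:
        return f'DELETE FROM "{table}"'    # transaction-safe, but slower
    return f'TRUNCATE TABLE "{table}"'     # faster; not transaction-safe

print(empty_table_sql("flight_summary", in_transaction=False))
print(empty_table_sql("flight_summary", in_transaction=True))
```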
Perform a bulk-insert into the target table, possibly after a truncate.
In this example we have a summary of flight data that is the result of a long Transformation process (shown below). We want to write this summary to a permanent table and we will use the Table Output component to do this.
The Table Output properties are shown below. The Target Table property points to the table we want to output our data to. The data itself is automatically passed to the Table Output component through the linked components in the Matillion ETL Transformation job.
The Column Mapping property allows users to choose the name each column of their data will take in the output, shown below. Note that the input and output column names do not have to match.
When this job is run, it ends by loading the final data into the target table, where the data remains.
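Putting the example together, the Table Output step amounts to optionally emptying the target table and then inserting the mapped input columns into the mapped output columns. The sketch below illustrates that shape in Python; the table names, column names, and mapping are illustrative assumptions, not the actual flight-data schema.

```python
# Sketch: the overall effect of Table Output as SQL statements.
# Optionally empty the target, then insert the mapped input columns
# into the target columns. All names are illustrative assumptions.

column_mapping = {              # input column -> output column
    "carrier_name": "carrier",
    "total_flights": "flight_count",
}

def table_output_sql(source, target, mapping, truncate):
    """Return the list of statements a truncate-then-load run would issue."""
    stmts = []
    if truncate:
        stmts.append(f'TRUNCATE TABLE "{target}"')
    out_cols = ", ".join(f'"{c}"' for c in mapping.values())
    in_cols = ", ".join(f'"{c}"' for c in mapping.keys())
    stmts.append(
        f'INSERT INTO "{target}" ({out_cols}) SELECT {in_cols} FROM "{source}"'
    )
    return stmts

for stmt in table_output_sql("flight_summary_flow", "flight_summary",
                             column_mapping, truncate=True):
    print(stmt)
```

With Append selected instead of Truncate, only the INSERT statement would run, adding the records to the end of the table.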