SharePoint Query

The SharePoint Query component integrates with the SharePoint Data API to retrieve data from Microsoft SharePoint and load that data into a table.

This component's driver currently supports Windows SharePoint Services 3.0, Microsoft Office SharePoint Server 2007, SharePoint Server 2010, SharePoint Server 2013, SharePoint Server 2016, and SharePoint Online.

Warning: This component is potentially destructive. If the target table undergoes a change in structure, it will be recreated; otherwise, the target table is truncated. Setting the Load Options property's 'Recreate Target Table' option to OFF will prevent both recreation and truncation. Do not modify the target table structure manually.

Properties

Property Setting Description
Name Text The descriptive name for the component.
Basic/Advanced Mode Select Basic: This mode will build a query for you using settings from the Data Source, Data Selection, and Data Source Filter properties. In most cases, this will be sufficient.
Advanced: This mode requires you to write an SQL-like query to retrieve data from SharePoint. The available fields and their descriptions are documented in the SharePoint Data Model.
URL Text The web address that you visit to sign in to your SharePoint account.
e.g. https://companyname.sharepoint.com
User Text A valid SharePoint username to use for authentication.
Password Text A valid SharePoint password. Users have the option to store their password inside the component; however, we highly recommend using the Password Manager feature instead.
SharePoint Edition Select Select your edition of SharePoint.
SQL Query Text An SQL-like SELECT query. Treat collections as table names and fields as columns; see the example query following this properties list. (This property is only available in Advanced Mode.)
Data Source Select The name of a SharePoint collection. Collections are analogous to Tables in other databases.
Data Selection Choice Choose one or more columns to return from the query. Columns are determined by scanning the first few documents and looking for fields that appear in each document.
Data Source Filter Input Column The available input columns vary depending on the Data Source and are determined automatically by scanning a number of documents.
Qualifier Is: Compares the column to the value using the comparator.
Not: Reverses the effect of the comparison, so "Equals" becomes "Not equals", "Less than" becomes "Greater than or equal to", etc.
Comparator Choose a method of comparing the column to the value. Possible comparators include: "Equal To", "Greater than", "Less than", "Greater than or equal to", "Less than or equal to", "Like", "Null".

"Equal To" can match exact strings and numeric values while other comparators, such as "Greater than", will work only with numerics. The "Like" operator allows the wildcard character (%) to be used at the start and end of a string value to match a column. The Null operator matches only Null values, ignoring whatever the value is set to.

Note: Not all data sources support all comparators, meaning that it is likely that only a subset of the above comparators will be available to choose from.
Value The value to be compared.
Combine Filters Select Use the defined filters in combination with one another according to either "and" or "or".
Limit Integer Limit the number of rows that are loaded from the data source.
Connection Options Parameter A JDBC parameter supported by the Database Driver. The available parameters are determined automatically from the driver, and may change from version to version.
They are usually not required, since sensible defaults are assumed.
Value A value for the given parameter.
Warehouse Select Choose a Snowflake warehouse that will run the load.
Database Select Choose a database to create the new table in.
Project Text The target BigQuery project to load data into.
Dataset Text The target BigQuery dataset to load data into.
Type Select Choose between using a standard table or an external table.
Standard: The data will be staged on an S3 bucket before being loaded into a table.
External: The data will be put into an S3 Bucket and referenced by an external table.
Schema Select Select the table schema. The special value (Environment Default) will use the schema defined in the environment. For more information on using multiple schemas, see this article.
 
Note: An external schema is required if the "Type" property is set to "External".
Table Text Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
Staging Select (AWS Only) Snowflake Managed: Allow Matillion ETL to create and use a temporary internal stage on Snowflake for staging the data. This stage, along with the staged data, will cease to exist after loading is complete.
Existing Amazon S3 Location: Selecting this option reveals additional properties for specifying a custom staging area on S3.
Target Table Text Provide a new table name.
Warning: This table will be recreated and will drop any existing table of the same name.
Cloud Storage Staging Area Text The URL and path of the target Google Storage bucket to be used for staging the queried data.
Load Options Multiple Select Clean Cloud Storage Files: (If On) Destroy staged files on Cloud Storage after loading data. Default is ON.

Cloud Storage File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).

Recreate Target Table: Setting this Load Option to OFF will prevent both recreation and truncation of the target table.

Use Grid Variable: Check this checkbox to use a grid variable.
S3 Staging Area Text (AWS Only) The name of an S3 bucket for temporary storage. Ensure your access credentials have S3 access, as well as permission to write to the bucket. See this document for details on setting up access. The temporary objects created in this bucket will be removed after the load completes; they are not kept.
This property is available when using an Existing Amazon S3 Location for Staging.
Distribution Style Select Auto: (Default) Allow Redshift to manage your distribution style.
Even: Distributes rows around the Redshift cluster evenly.
All: Copy rows to all nodes in the Redshift cluster.
Key: Distribute rows around the Redshift cluster according to the value of a key column.
Table distribution is critical to good performance. See the Amazon Redshift documentation for more information.
Sort Key Select This is optional, and specifies the columns from the input that should be set as the table's sort key.
Sort keys are critical to good performance - see the Amazon Redshift documentation for more information.
Sort Key Options Select Decide whether the sort key is of a compound or interleaved variety - see the Amazon Redshift documentation for more information.
Load Options Multiple Select Columns Comp Update: Apply automatic compression to the target table (if ON). Default is ON.
Stat Update: Automatically update statistics when filling a table (if ON). Default is ON. In this case, it is updating the statistics of the target table.
Clean S3 Objects: Automatically remove UUID-based objects on the S3 Bucket (if ON). Default is ON. Effectively decides whether to keep the staged data in the S3 Bucket or not.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is ON.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.
File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
Encryption Select (AWS Only) Decide on how the files are encrypted inside the S3 Bucket. This property is available when using an Existing Amazon S3 Location for Staging.
None: No encryption.
SSE KMS: Encrypt the data according to a key stored on KMS.
SSE S3: Encrypt the data according to a key stored on an S3 bucket.
KMS Key ID Select (AWS Only) The ID of the KMS encryption key you have chosen to use in the Encryption property.
Load Options Multiple Select Clean Staged Files: Destroy staged files after loading data. Default is ON.
String Null is Null: Converts any strings equal to "null" into a null value. This is case sensitive and only works with entirely lower-case strings. Default is ON.
Recreate Target Table: Choose whether the component recreates its target table before the data load. If OFF, the existing table will be used. Default is ON.
File Prefix: Give staged file names a prefix of your choice. Default is empty (no prefix).
Auto Debug Select Choose whether to automatically log debug information about your load. These logs can be found in the Task History and should be included in support requests concerning the component. Turning this on will override any debugging Connection Options.
Debug Level Select The level of verbosity with which your debug information is logged. Levels above 1 can log huge amounts of data and result in slower execution.

1: Will log the query, the number of rows returned by it, the start of execution and the time taken, and any errors.

2: Will log everything included in Level 1, plus cache queries and additional information about the request, if applicable.

3: Will additionally log the body of the request and the response.

4: Will additionally log transport-level communication with the data source. This includes SSL negotiation.

5: Will additionally log communication with the data source, as well as additional details that may be helpful in troubleshooting problems. This includes interface commands.
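
Example (Advanced Mode): The query below is a minimal sketch of the SQL-like syntax used in Advanced Mode. The list name (Tasks) and column names are hypothetical; substitute the collections and fields exposed by your own SharePoint site, as documented in the SharePoint Data Model.

    SELECT Title, AssignedTo, DueDate
    FROM Tasks
    WHERE Status = 'In Progress'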

Variable Exports

This component makes the following values available to export into variables:

Source Description
Time Taken To Stage The amount of time (in seconds) taken to fetch the data from the data source and to upload it to storage.
Time Taken To Load The amount of time (in seconds) taken for the COPY statement to load the data into the target table from the staging area.
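
Once mapped to job variables via the component's Export tab, these values can be referenced elsewhere in the job. The sketch below assumes the exports have been mapped to hypothetical variables named time_to_stage and time_to_load, and writes them to a hypothetical audit table from a downstream SQL component:

    INSERT INTO load_audit (component_name, stage_seconds, load_seconds)
    VALUES ('SharePoint Query', ${time_to_stage}, ${time_to_load});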
 

Strategy

Connect to the SharePoint API and issue one or more queries. Stream the results into objects in a storage area, then recreate or truncate the target table as necessary. Next, use a COPY command to load the staged objects into the table. Finally, clean up the temporary staged objects.
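
For illustration, the load step resembles the following simplified Amazon Redshift sketch. The table name, bucket, prefix, and IAM role are hypothetical, and the exact statements the component generates may differ:

    -- Recreate or truncate the target, depending on the Recreate Target Table load option
    TRUNCATE TABLE sharepoint_tasks;

    -- Load the staged objects from the S3 staging area
    COPY sharepoint_tasks
    FROM 's3://my-staging-bucket/sharepoint/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-role'
    FORMAT AS CSV GZIP;

    -- The staged objects are then deleted by the component (if Clean S3 Objects is ON)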