Use case: You wish to populate a blank table with Matches and Values taken from a Datastream extract, without requiring constant manual intervention.
To achieve this, we need to adapt our Mapping Table so that it automatically deletes and repopulates all entries every time an extract is fetched. "Flush Table" is crucial here: it must be ticked in order to remove any pre-existing mappings.
Example
-
For this example, let's presume we wish to map the field "Employee" to "Team", and create a mapping table that includes every value found in the extract.
In your Datastream, add a Transformation script with the command "Map", and the following fields:
Sourcefield: "Employee" - The value from this field becomes the "Match" column in the mapping table.
Fieldname: Random* - This field is only needed to validate the Map command; it can be removed later, as it is not otherwise necessary.
Mapping: The name of your Mapping Table.
Missing: "Create" - This way, every unique new entry found in the "Employee" column becomes a new entry in the mapping table.
Flush Table: "True" - Otherwise, each newly fetched extract appends to the previous mapping table, creating duplicates.
Alternative: "Team" - When populating a mapping table from an extract, "Alternative" defines the column within the extract that is used to populate the "Value" element of the Mapping Table.
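The combined effect of these settings can be sketched in plain Python. This is an illustrative model of the behavior described above, not the product's actual implementation: the function name and the row format are assumptions, while the field names "Employee" and "Team" come from the example.

```python
def rebuild_mapping_table(extract_rows, source_field="Employee", alternative="Team"):
    """Flush and repopulate a mapping table from an extract (behavioral sketch)."""
    mapping_table = {}  # Flush Table = "True": start from an empty table each run
    for row in extract_rows:
        match = row[source_field]           # Sourcefield: becomes the "Match" column
        if match not in mapping_table:      # Missing = "Create": add unseen entries
            mapping_table[match] = row[alternative]  # Alternative: fills the "Value" column
    return mapping_table

rows = [
    {"Employee": "Alice", "Team": "Sales"},
    {"Employee": "Bob", "Team": "Support"},
    {"Employee": "Alice", "Team": "Sales"},  # repeated match is not duplicated
]
print(rebuild_mapping_table(rows))  # {'Alice': 'Sales', 'Bob': 'Support'}
```

Because the table is rebuilt from scratch on every run, repeated fetches never accumulate duplicate entries, which is exactly what "Flush Table" guarantees.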
Additionally, if desired, you can add a "cutout" command to remove the unnecessary "Random" field.
Following these steps, every time an extract is fetched from the File Server, the map transformation will run and replace the existing mapping table with the latest version.