MetaData Process Input

From All n One's bxp software Wiki


1 Overview



When creating a MetaData Input rule you are creating a set of parameters for loading data from a file into a bxp software Campaign Database.

These rules dictate the way the system will process the file.

Parameter Reason Options
0 Function Delimited file, Excel, Structured File
1 Campaign_Id intCampaign_Id
2 FilenamePattern *.csv
3 Delimiting Character ,
4 Delete When Done False
5 Force Field Match False
6 Field Mapping Fields to put into
7 Field Length Structured field width
8 Escape Characters *
9 Escape Delimited \
10 Blind False
11 Encoding -1 = Unicode, 0 = ASCII
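
How bxp software actually stores these parameters is not shown on this page. Purely as an illustration of how such a rule hangs together, the parameter set for a delimited-file rule could be sketched as a simple structure like the one below; the key names and values are invented for the example and are not bxp software identifiers.

# Illustrative sketch only: the key names below are invented for this
# example and are not actual bxp software identifiers.
delimited_rule = {
    "Function": "Read_Delimited",       # or Read_Excel / Read_Structured
    "Campaign_Id": 1234,                # intCampaign_Id of the target campaign (assumed value)
    "FilenamePattern": "*.csv",         # which files in the back end to pick up
    "DelimitingCharacter": ",",         # comma, tab, '#', ';' ...
    "DeleteWhenDone": False,            # remove the file after processing?
    "ForceFieldMatch": False,           # reject records whose field count differs?
    "FieldMapping": "",                 # custom field order / header-row mapping
    "FieldLength": "",                  # structured field widths (Read_Structured only)
    "EscapeCharacters": "*",
    "EscapeDelimited": "\\",
    "Blind": False,
    "Encoding": 0,                      # -1 = Unicode, 0 = ASCII
}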


2 Input rules

2.1 Function: Read_Delimited

Delimited files are very common. In a CSV file, values are separated by commas; however, "CSV" is often used as a covering title for any character-delimited / separated file. The tab character, the hash character (#) and the semicolon (;) are also often used to delimit fields.


Options Description
Destination Form The form into which the file will attempt to be loaded.
Filename Pattern If there are multiple files in the back end, it is important to help differentiate the files using patterns in their file names. For example, *.csv would match all files that end with .csv.
Delete When Done Should the file be deleted once processed, to prevent reprocessing of the same file?
Force Field Count Match If the field count of a record does not match the destination form, should the record still be inserted? If there are too many fields in the data, only the first matching fields are loaded; if there are too few fields, the values are matched to the first available fields. If this option is true, non-matching records will not be added at all.
Field Mapping Should the system expect a header row in the data and should it attempt to perform Field Mapping using the bxp field mapping engine?
Delimiting Character What character delimits the fields
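
bxp software performs this processing internally, so the following is only a rough outside sketch of the idea rather than the actual implementation. It shows, in Python, how files matching a filename pattern might be read with a configurable delimiting character and a force-field-count-match check; the function name, parameter names and form fields are assumptions for the example.

import csv
import glob
from itertools import zip_longest

def read_delimited(pattern, delimiter, form_fields, force_field_match=False):
    # Sketch: load every file matching the pattern (e.g. "*.csv") and map
    # each row onto the destination form's field names.
    records = []
    for path in glob.glob(pattern):
        with open(path, newline="", encoding="utf-8") as handle:
            for row in csv.reader(handle, delimiter=delimiter):
                if force_field_match and len(row) != len(form_fields):
                    continue  # non-matching records are not added at all
                # Extra values are dropped; missing values are left blank.
                trimmed = row[:len(form_fields)]
                records.append(dict(zip_longest(form_fields, trimmed, fillvalue="")))
    return records

# Example usage with hypothetical form field names:
rows = read_delimited("*.csv", ",", ["FirstName", "LastName", "Phone"])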


2.2 Function: Read_Excel

Reading from an Excel spreadsheet is supported for all types of Excel spreadsheet. A number of options are also available for Excel sheets.


Options Description
Destination Form The form into which the file will attempt to be loaded.
Filename Pattern If there are multiple files in the back end, it is important to help differentiate the files using patterns in their file names. For example, *.xls would match all files that end with .xls.
Delete When Done Should the file be deleted once processed, to prevent reprocessing of the same file?
Force Field Count Match If the field counts do not match, should the file be inserted? If there are too many fields in the data, the first matching fields are added; if there are too few fields, the matches are made to the first available fields. If this option is true, non-matching files will not be added at all.
Field Mapping Should the system expect a header row in the data and should it attempt to perform Field Mapping using the bxp field mapping engine?
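
Again, bxp software reads the spreadsheet itself; purely as an outside illustration of header-row field mapping, a sheet could be read with the openpyxl library as sketched below. openpyxl is not part of bxp software and handles the modern .xlsx format (older .xls files would need a different library); the file name and function name are assumptions for the example.

from openpyxl import load_workbook

def read_excel(path, use_header_row=True):
    # Sketch: read the first worksheet and, when a header row is expected,
    # map each data row onto the header names (crude field mapping).
    workbook = load_workbook(path, read_only=True, data_only=True)
    rows = workbook.active.iter_rows(values_only=True)
    if use_header_row:
        header = next(rows)  # first row supplies the destination field names
        return [dict(zip(header, row)) for row in rows]
    return [list(row) for row in rows]

# Example usage with a hypothetical file name:
records = read_excel("contacts.xlsx")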


2.3 Function: Read_Structured

Reading structured (fixed-width) data is equally straightforward, but the spacing of the data must be provided.


Options Description
Destination Form The form into which the file will attempt to be loaded.
Filename Pattern If there are multiple files in the back end, it is important to help differentiate the files using patterns in their file names. .dat would be all files that have .dat somewhere in the file name.
Delete When Done Should the file be deleted once processed, to prevent reprocessing of the same file?
Force Field Count Match If the field count of a record does not match the destination form, should the record still be inserted? If there are too many fields in the data, only the first matching fields are loaded; if there are too few fields, the values are matched to the first available fields. If this option is true, non-matching records will not be added at all. The field count is matched on the total width of the data line: a line with too many or too few characters is considered not to match. The positional data is taken regardless of length, and spaces are appended where fields are too short.
Field Mapping A custom order for the fields. Otherwise default blind matching will be used: the first item of data is placed in the first field, the second item of data goes into the second field, and so on.
Field Sizes The exact widths of the fields to retrieve. Combined with the field mapping, these determine how the fields are loaded.
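
As an illustration of the idea only (not bxp software's own code), slicing a fixed-width line by the configured field sizes and mapping the slices onto field names could be sketched as follows; the widths, field names and sample line are assumptions for the example.

def read_structured(line, field_sizes, field_names):
    # Sketch: cut one fixed-width data line into fields using the
    # configured widths, padding short fields with spaces.
    record, position = {}, 0
    for name, width in zip(field_names, field_sizes):
        raw = line[position:position + width]
        record[name] = raw.ljust(width).rstrip()  # pad if short, trim trailing spaces
        position += width
    return record

# Example: FirstName is 10 characters wide, LastName 10 and Phone 12.
line = "John      Murphy    0123456789  "
print(read_structured(line, [10, 10, 12], ["FirstName", "LastName", "Phone"]))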