Xceptor Core Configuration Basics - 1
Xceptor brings together data, automation and artificial intelligence into a single no-code solution that can be configured by non-technical users.
- Handling complex processes from end to end requires a flexible, AI-enabled platform that deploys the right automation technology for the right task at the right time; Xceptor is that platform.

Xceptor can be configured to watch specified input channels and import relevant data when it arrives. It then ingests and reformats that data to standardize it into one common format.
It can then apply enrichments to normalize the captured data; these could be calculations, lookups to reference data and filters, for example. Xceptor then applies business-specific steps such as validation checks and business rules.
- If needed, Xceptor can also store the data and flag data errors, which can be validated and resolved within the system itself.
- As a final step, it delivers the processed data to your specified locations in a specified format and style.
- Once configured, only exceptions need any intervention from the user; the rest is automated (the end-to-end flow is sketched below).
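As a rough mental model only (Xceptor itself is configured through its UI rather than code), the end-to-end flow above might be sketched in Python as follows; the file format, field names and rules here are assumptions for illustration, not Xceptor functionality:

```python
# Illustrative sketch of the capture -> enrich -> validate -> deliver flow.
# Nothing here is an Xceptor API; all names and rules are hypothetical.
import csv

def capture(path):
    """Capture rows from a CSV input file into a common dict structure."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def enrich(rows):
    """Example enrichment: normalise the currency code to upper case."""
    for row in rows:
        row["Currency"] = row.get("Currency", "").strip().upper()
    return rows

def apply_business_rules(rows):
    """Example validation: a price must be a positive number."""
    clean, exceptions = [], []
    for row in rows:
        try:
            valid = float(row["Price"]) > 0
        except (KeyError, ValueError):
            valid = False
        (clean if valid else exceptions).append(row)
    return clean, exceptions

def deliver(rows, destination):
    """Write the processed rows to the specified output location."""
    if rows:
        with open(destination, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)

def run_process(input_path, output_path):
    clean, exceptions = apply_business_rules(enrich(capture(input_path)))
    deliver(clean, output_path)
    return exceptions  # only exceptions need user intervention
```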
How is this achieved?
We need to configure the automation.
For structured files, Xceptor users deploy traditional rules to capture and process information. For unstructured files, users may use natural language processing to classify the text or extract data entities. Users can also combine both techniques and/or OCR.

In this course, we will focus on traditional rules and the functionality used to capture and enrich structured and semi-structured data.

1) Input Formats: Once set up with the appropriate rules, an input format captures data from input files so you can perform enrichments, validations and transformations according to your requirements.
- Generally, a new input format is required for each different input file. However, two input files in the same format can be processed with the same input format.
- Input formats support a range of file types, including Excel, PDF, CSV, SWIFT, XML and more.

2) Internal Formats: To transform all of the different input formats into one standardized common format, you define the common set of fields that represent your normalized transactional data. You will map captured fields from input formats to the internal format defined here. You can also apply repeatable business rules here.

3) Output Formats: An output format takes the ingested information and generates a file containing the relevant information. It is the data structure that you want your exported data to follow, and this structure will be based on your internal format. Now that you have a high-level overview of how Xceptor works, find out more about the data process you are going to be building in Xceptor on the next screen.

Questions that need to be asked:
- What improvements does the client want to make to the current process?
- What steps does the team apply to the data that are always repeated?

Our Internal Format

- Analyzing our business requirement: The first thing to think about once you have reviewed the business requirement is the internal format. We need to push data to a downstream system, and the business team have sent through a document detailing the data they need. This is the basis on which we can create an internal format that reflects the business requirement.
- Creating the internal format: This means that the internal format needs to have the following fields:
  - InternalSecurityID
  - SecurityID
  - SourceName
  - Price
  - Currency
  - PriceDate
- Mapping our input format: As a next step, you need to review the input files alongside the internal format to identify which data points can be mapped onto the internal format directly (a sketch of such a mapping follows below).
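To make the mapping step concrete, here is a minimal Python sketch. The input column names (ISIN, Source, Px, Ccy, Price Date) are invented for illustration; only the internal format field names come from the business requirement above:

```python
# Hypothetical mapping from one input format's column names to the
# internal format fields listed above. Input column names are invented.
INPUT_TO_INTERNAL = {
    "ISIN": "SecurityID",
    "Source": "SourceName",
    "Px": "Price",
    "Ccy": "Currency",
    "Price Date": "PriceDate",
}

def to_internal_format(captured_row: dict) -> dict:
    """Map a captured input row onto the internal format fields."""
    internal = {internal_name: captured_row.get(input_name)
                for input_name, internal_name in INPUT_TO_INTERNAL.items()}
    # InternalSecurityID is not on the input file; assume it is derived
    # later, e.g. via a reference-data lookup (see Translation Tables).
    internal["InternalSecurityID"] = None
    return internal

row = {"ISIN": "US0378331005", "Source": "BrokerA",
       "Px": "189.95", "Ccy": "usd", "Price Date": "2024-03-28"}
print(to_internal_format(row))
```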
Input Formats:
Data Capture: Xceptor comes with a complete toolkit of data capture and formatting functions to help you normalize the data.
For example, these functions allow you to take a report date from the file's header and include it on all captured rows as a new column. You might also want Xceptor to ignore the disclaimer text at the bottom of a table.
You don't have to do all your data formatting and cleansing at this stage, but you should always do as much as you can. These transformations are carried out before the data is loaded, which creates huge gains in processing efficiency, especially for files with a large number of rows. Transformations requiring more flexibility can be run after the data is loaded using enrichment rules; these are explained in the Enrichment hotspot.
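Outside Xceptor, the same idea can be pictured in plain Python. The file layout assumed here (a report date on the first line, a table, then disclaimer text) and all names are illustrative assumptions, not Xceptor functions:

```python
# Hypothetical sketch: promote a header value to a column on every row
# and drop trailing disclaimer text, before the data is loaded.
import csv

def capture_with_header_date(path):
    with open(path, newline="") as f:
        lines = list(csv.reader(f))
    report_date = lines[0][1]        # assume first line looks like "Report Date,2024-03-28"
    header, *rows = lines[1:]
    captured = []
    for row in rows:
        if row and row[0].startswith("Disclaimer"):
            break                    # ignore disclaimer text under the table
        captured.append(dict(zip(header, row)) | {"ReportDate": report_date})
    return captured
```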
Enrichment: Transformations that require more flexibility can be run after the data is loaded, using enrichment rules. Enrichments are applied to enhance your input format so that it matches the internal format and normalizes your data; these could be calculations, lookups to reference data and filters, for example.
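As a hedged illustration of a post-load enrichment (the date formats and field names are assumptions, not Xceptor functions):

```python
# Hypothetical enrichment rule: after load, standardise mixed date formats
# into a single ISO format so the data matches the internal format.
from datetime import datetime

KNOWN_FORMATS = ("%d/%m/%Y", "%Y-%m-%d", "%d-%b-%Y")

def enrich_price_date(row: dict) -> dict:
    for fmt in KNOWN_FORMATS:
        try:
            row["PriceDate"] = datetime.strptime(row["PriceDate"], fmt).date().isoformat()
            return row
        except ValueError:
            continue
    row["PriceDate"] = None  # leave unparseable dates for a validation rule to flag
    return row

print(enrich_price_date({"PriceDate": "28/03/2024"}))  # {'PriceDate': '2024-03-28'}
```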
Internal Formats:

Business Rules: Once the data is normalized, we can define rules to carry out business tasks using the same data transformation toolkit.
This toolkit is used for different purposes at the two stages: enrichments are applied to enhance your input format to match the internal format and normalize your data, while business rules are applied to enhance data, make business decisions and validate data as per requirements.
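A minimal sketch of the kind of checks a business rule might apply to the normalized data; the required fields and the positive-price rule are assumptions for illustration:

```python
# Hypothetical business rules applied to normalized (internal-format) rows:
# validate the data and flag exceptions for review in the UI.
REQUIRED_FIELDS = ("SecurityID", "Price", "Currency", "PriceDate")

def validate(row: dict) -> list[str]:
    errors = [f"missing {f}" for f in REQUIRED_FIELDS if not row.get(f)]
    try:
        if float(row.get("Price", 0)) <= 0:
            errors.append("price must be positive")
    except ValueError:
        errors.append("price is not numeric")
    return errors

row = {"SecurityID": "US0378331005", "Price": "-1",
       "Currency": "USD", "PriceDate": "2024-03-28"}
print(validate(row))  # ['price must be positive']
```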
Data Storage
There are two types of data storage:
Translation tables
Data Sets
Translation tables:
A translation table stores reference data which can be matched with your transactional data via a key field or a unique combination of key fields. The translation table will return any number of lookup fields to the internal format.
This is very similar to the functionality of a VLOOKUP in Excel.
A good example of this would be FX rate conversions. If you were capturing prices in various currencies, you could add a lookup to exchange rate data stored in your translation table to convert these prices to a default currency, as sketched below.
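The VLOOKUP-like behaviour can be pictured with a plain Python dictionary; the rates, the default currency and the field names are made up for illustration:

```python
# Hypothetical translation table: reference FX rates keyed by currency,
# used to convert captured prices into a default currency (USD here).
FX_RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.26}  # illustrative rates

def convert_to_default_currency(row: dict) -> dict:
    rate = FX_RATES_TO_USD.get(row["Currency"])   # lookup on the key field
    if rate is None:
        row["PriceUSD"] = None                    # no match: flag for review
    else:
        row["PriceUSD"] = round(float(row["Price"]) * rate, 4)
    return row

print(convert_to_default_currency({"Price": "100", "Currency": "EUR"}))
# {'Price': '100', 'Currency': 'EUR', 'PriceUSD': 108.0}
```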
Translation tables are defined through configuration in the UI and are managed by Xceptor. Typically, someone will only use a translation table for 10,000 rows of data or less, but this will depend on a number of factors, including the architecture of a deployment.

By contrast, data sets provide more user interaction capabilities and can handle larger amounts of data. They are created and managed as traditional SQL tables by a DBA.
Data Sets:
Data sets are typically used to store transactional data (in our case, price data). Data sets allow users to interact with the data in the Xceptor UI to:
- Carry out analysis and reporting.
- Create work queues to streamline transactional user tasks.
- Complete exception management (review and fix exceptions in the UI).
- Archive and audit.
Data sets store data in a traditional SQL table, and a DBA can architect the table to achieve the performance required. Setting up a data set requires IT involvement, as the SQL table needs to be created, so sufficient time needs to be allocated in advance.
You won't be configuring a data set in this module. If you need guidance on setting up a data set, go to Xceptor Docs for step-by-step guidance.
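Purely as a mental model of the SQL table a DBA might create for this data set (you will not build one in this module), here is a sketch using Python's sqlite3; the table and column names are assumptions based on the internal format fields above:

```python
# Hypothetical shape of a price data set's SQL table (sketch only).
import sqlite3

conn = sqlite3.connect(":memory:")  # a real data set lives in a managed database
conn.execute("""
    CREATE TABLE price_data (
        InternalSecurityID TEXT,
        SecurityID         TEXT,
        SourceName         TEXT,
        Price              REAL,
        Currency           TEXT,
        PriceDate          TEXT,
        Status             TEXT  -- e.g. 'OK' or 'Exception' for exception management
    )
""")
conn.execute(
    "INSERT INTO price_data VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("INT-001", "US0378331005", "BrokerA", 189.95, "USD", "2024-03-28", "OK"),
)
print(conn.execute("SELECT COUNT(*) FROM price_data").fetchone()[0])  # 1
```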
Message Processor:

The message processor configures the automation of the overall data process by identifying which data to import and which actions to execute.
A message processor can be configured to monitor certain input channels. It can have multiple processing rules, which identify the input format that Xceptor should use to read each file. It will then send that data through the internal format before executing actions such as delivering output files internally or loading data into translation tables or data sets.
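A hedged sketch of the routing idea; the file-name patterns, input format names and actions are all invented for illustration:

```python
# Hypothetical message processor: match incoming files to processing rules,
# which choose the input format and the follow-up actions.
from fnmatch import fnmatch

PROCESSING_RULES = [
    {"pattern": "broker_prices_*.xlsx", "input_format": "BrokerPricesXLS",
     "actions": ["load_data_set", "deliver_output"]},
    {"pattern": "fx_rates_*.csv", "input_format": "FXRatesCSV",
     "actions": ["load_translation_table"]},
]

def route(file_name):
    """Return the first processing rule whose pattern matches the file."""
    for rule in PROCESSING_RULES:
        if fnmatch(file_name, rule["pattern"]):
            return rule
    return None  # unmatched files would typically be ignored or flagged

print(route("broker_prices_20240328.xlsx")["input_format"])  # BrokerPricesXLS
```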
Business Requirements:

The accounting team needs the processed data to be sent to their shared file location so that they can upload it to the accounting system. The data must be split into four separate files by broker, as calculations and reviews are run separately.
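One way to picture the split-by-broker delivery outside Xceptor; the broker names, output folder and field layout are assumptions:

```python
# Hypothetical sketch: write one output file per broker so the accounting
# team can run calculations and reviews separately.
import csv
from collections import defaultdict
from pathlib import Path

def deliver_by_broker(rows, out_dir="accounting_share"):
    """Group internal-format rows by broker and write one CSV per broker."""
    Path(out_dir).mkdir(exist_ok=True)
    groups = defaultdict(list)
    for row in rows:
        groups[row["SourceName"]].append(row)   # assume SourceName holds the broker
    for broker, broker_rows in groups.items():
        with open(Path(out_dir) / f"prices_{broker}.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=broker_rows[0].keys())
            writer.writeheader()
            writer.writerows(broker_rows)

rows = [
    {"SourceName": "BrokerA", "SecurityID": "US0378331005", "Price": "189.95"},
    {"SourceName": "BrokerB", "SecurityID": "GB0002634946", "Price": "6.12"},
]
deliver_by_broker(rows)  # with four brokers in the data, four files are written
```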
Next steps:
That was an overview of Xceptor and how it can be configured to run the data process. It's a lot to take in, so there is a PDF guide which contains all the information. You can also visit Xceptor Docs for further support.
XCEPTOR SANDBOX:
XLS Input Formats:

Testing Top Tips:
Testing is a vital part of configuring a process in Xceptor. As you build, you should run tests on each part of the process to identify errors and make corrections and improvements early on.
Testing different documents:
When defining your input format from a sample file, it's good practice to test with several sample files. This ensures that your input format has been configured to capture all typical instances of the file type, not just the specific sample file you used.
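No code is needed to use Xceptor's format tester, but the principle of testing against several samples can be pictured like this; the capture function and expected fields are hypothetical:

```python
# Hypothetical check: run the same capture logic over several sample files
# and confirm every expected field is captured from each of them.
EXPECTED_FIELDS = {"SecurityID", "SourceName", "Price", "Currency", "PriceDate"}

def check_samples(sample_paths, capture):
    """capture(path) is whatever parsing logic is under test."""
    for path in sample_paths:
        rows = capture(path)
        assert rows, f"{path}: no rows captured"
        missing = EXPECTED_FIELDS - set(rows[0])
        assert not missing, f"{path}: missing fields {missing}"
    print(f"All {len(sample_paths)} sample files captured the expected fields.")
```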
Testing large sets of data:
For very large files, you should avoid displaying too many rows in the format tester. You can do this by setting a limit on the maximum rows displayed (the view defaults to 100 rows).
XCEPTOR:
- Dashboard: Input Activity
- Configuration: Input Formats, Output Formats, Translation Tables, Message Processors, Data Sets, Reconciliation Rules, Processing Rules, Download Sites, Utilities
- Data: Datasets, Reference Data, Reporting
- Reconciliation: Activity Dashboards, Summary, Results
- Administration: Users, User Roles, Folders, Reports, System Log, Utilities