
  •     QAD Glossary

  • Jobs
    Jobs can be roughly categorized into the following types, listed in the order in which they run in Chained job processing. The descriptions include the naming conventions associated with each type.
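As a rough sketch, the run order of these job types can be written as a simple list (Python is used for illustration only; the type names here are shorthand for the jobs described below, not actual QAD identifiers):

```python
# Illustrative ordering of job types within a chained load.
# These shorthand names are not QAD job names.
JOB_TYPE_ORDER = [
    "TRUNCATE",  # clear the load tables
    "LOAD",      # extract from ERP sources into the Data Warehouse
    "PERM",      # build the permanent data stores
    "EXTRACT",   # pull a working subset from the perm tables
    "PROCESS",   # build dimensions and facts
    "SUMMARY",   # aggregates/rollups/snapshots
    "CUBES",     # refresh analytical cubes
]

def run_order(job_type: str) -> int:
    """Return the position of a job type within the chain."""
    return JOB_TYPE_ORDER.index(job_type)
```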
    The purpose of a Truncate job is to truncate—delete all data from—load tables before beginning a new load. This ensures that load jobs are starting with a “clean slate.” There is one such job per analytical module, named PREPROCESS_[MODULE]_TRUNCATE.
    The purpose of a Load job is to extract data from the ERP sources and load the data into the Data Warehouse for processing. The Load job for an analytical module is named PREPROCESS_[MODULE]_LOAD. However, if there are specific tables that are relevant to history loads only, or to daily loads only, they are loaded in a job named PREPROCESS_[MODULE]_HISTONLY_LOADS or PREPROCESS_[MODULE]_DAILYONLY_LOADS, respectively. Also, there can be further distinction when a module requires separate job streams; for example, there are some tables processed for EE sources only, and some for SE sources only. In this case, the EE or SE distinction is included in the job name; for example, PREPROCESS_FIN_EE_LOADS is a load job for the Financials analytical module that runs for EE sources only. COMMON module load tables with SE- or EE-specific tables include SEONLY or EEONLY in their job names; for example, PREPROCESS_COMMON_EEONLY_LOADS or PREPROCESS_COMMON_SEONLY_LOADS.
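The PREPROCESS_* naming convention above is mechanical enough to sketch as a small helper. This is illustrative only (the helper and the example module abbreviations are not part of QAD); it simply assembles the name parts in the order the convention implies:

```python
def preprocess_job_name(module, kind="LOADS", scope=None, source=None):
    """Assemble a PREPROCESS_* job name from its parts.

    module -- analytical module abbreviation, e.g. "FIN" or "COMMON"
    kind   -- "TRUNCATE" or "LOADS"
    scope  -- None, "HISTONLY", or "DAILYONLY"
    source -- None, "EE", "SE", "EEONLY", or "SEONLY"
    """
    parts = ["PREPROCESS", module]
    if source:
        parts.append(source)   # source-specific job stream
    if scope:
        parts.append(scope)    # history-only or daily-only tables
    parts.append(kind)
    return "_".join(parts)
```

For example, `preprocess_job_name("FIN", source="EE")` reproduces the PREPROCESS_FIN_EE_LOADS name cited above.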
    Perm jobs create the “perm” tables: permanent data stores where the BI system keeps data extracted from the ERP sources that must be maintained for long periods of time. There are two such jobs for each module:
    History loads. The History job expects a large amount of data to be processed at once as the Data Warehouse is loaded for the first time.
    Daily loads. Daily loads usually process a smaller amount of data: only the data that has changed in the sources since the last data load.
    These jobs are named [HIST_or_DAILY]_[MODULE]_PERM_CHAINED. For Financials, there can be an _EE or _SE after the [MODULE] if the job applies to one type of ERP source only.
    Extract jobs build extract and work tables from the perm tables. Not all of the data in the perm tables is typically extracted at one time, because much of that data has already been processed and has not changed. So usually a subset of data, covering a certain period and/or targeting changed data, is extracted from the permanent data stores for further processing into the Fact tables. There are two such jobs for each module—one for History loads and one for Daily loads. These jobs are named [HIST_or_DAILY]_[MODULE]_PERM_EXTRACT. For Financials, there can be an _EE or _SE after the [MODULE] if the job applies to one type of ERP source only. Extract job names do not end in CHAINED; these jobs are not an explicit part of the job chain, but are always called from a Perm job.
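The idea of extracting only a changed or period-limited subset from a perm table can be sketched as a simple filter. The row fields `modified` and `trans_date` below are hypothetical, chosen only to illustrate the two criteria (changed since the last load, within the targeted period):

```python
from datetime import date

def extract_changed(perm_rows, last_load_date, period_start=None):
    """Select the subset of perm rows worth reprocessing.

    perm_rows      -- dicts with hypothetical "modified" and "trans_date" fields
    last_load_date -- rows not modified since this date are skipped
    period_start   -- optional cutoff; earlier transactions are skipped
    """
    subset = []
    for row in perm_rows:
        if row["modified"] <= last_load_date:
            continue  # already processed, unchanged since the last load
        if period_start and row["trans_date"] < period_start:
            continue  # outside the targeted period
        subset.append(row)
    return subset
```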
    Process jobs take extracted and loaded data through various stages to build the dimensions and facts. These jobs do not build summaries, aggregates, snapshots, or cubes. They are named [HIST_or_DAILY]_[MODULE]_PROCESS_CHAINED. For Financials, there can be an _EE or _SE after the [MODULE] if the job applies to one type of ERP source only.
    Additionally, some modules can have a main job containing the module name, plus additional jobs for processing related to a specific submodule. These jobs are named [HIST_or_DAILY]_[MODULE]_[SUB_MODULE_ABBREVIATION]_PROCESS_CHAINED and run after the primary job for the module. For example, the Financials module has its primary Daily job, DAILY_FIN_PROCESS_CHAINED, plus separate jobs for processing Accounts Payable (AP), Accounts Receivable (AR), and General Ledger (GL). The submodule jobs are DAILY_FIN_[EE_or_SE]_AP_PROCESS_CHAINED, DAILY_FIN_[EE_or_SE]_AR_PROCESS_CHAINED, and DAILY_FIN_[EE_or_SE]_GL_PROCESS_CHAINED.
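The [HIST_or_DAILY]_* conventions for Perm, Extract, and Process jobs can likewise be sketched as a small name builder (illustrative only, not QAD code; note the convention that only Extract job names omit the CHAINED suffix):

```python
def chained_job_name(load_type, module, stage, source=None, submodule=None):
    """Assemble a [HIST_or_DAILY]_* job name from its parts.

    load_type -- "HIST" or "DAILY"
    module    -- module abbreviation, e.g. "FIN"
    stage     -- "PERM", "PERM_EXTRACT", or "PROCESS"
    source    -- optional "EE" or "SE" for source-specific jobs
    submodule -- optional abbreviation, e.g. "AP", "AR", "GL"
    """
    parts = [load_type, module]
    if source:
        parts.append(source)
    if submodule:
        parts.append(submodule)
    parts.append(stage)
    if stage != "PERM_EXTRACT":
        parts.append("CHAINED")  # Extract jobs do not end in CHAINED
    return "_".join(parts)
```

For example, `chained_job_name("DAILY", "FIN", "PROCESS", source="EE", submodule="AP")` reproduces DAILY_FIN_EE_AP_PROCESS_CHAINED from the Financials example above.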
    Rollup (aggregate/rollup/snapshot)
    Rollup jobs create aggregate/rollup/snapshot tables, which mainly rely on the relevant facts and dimensions that are already populated. There are two such jobs for each submodule—one for History and one for Daily loads. They are typically named POSTPROCESS_[MODULE]_SUMMARY. However, if the post-processing applies to history loads only, or to daily loads only, the job name is POSTPROCESS_[MODULE]_HISTONLY_SUMMARY or POSTPROCESS_[MODULE]_DAILYONLY_SUMMARY, respectively. Additionally, if the post-processing is specific to a submodule of a module, the submodule’s abbreviation appears in the name; for example, POSTPROCESS_OP_PO_DAILYONLY_SUMMARY.
    Cubes jobs process analytical cubes so that they include the most recent data from the fact and dimension tables. There is one job per module. The naming convention is POSTPROCESS_[MODULE]_CUBES. For Financials, there can be an _SE or _EE after the [MODULE] if the cubes are specific to one source type; for example, POSTPROCESS_FIN_SE_CUBES. While not installed in the job chain automatically, these jobs can be added to the Daily job chain if desired.
    Notes on ERP Transaction Load History
    The Transaction History load (HIST_LOAD_TR_HIST) processes data in chunks, rather than trying to extract and load all data at once, since the tables can contain large amounts of data. The History Load processing runs this job many times, processing some subset of the data with each pass. The TR_HIST jobs use the following parameters:
    TR_HIST_LOAD_MAXSIZE is the maximum number of rows to load in an iteration loading tr_hist.
    Important: Do not update these parameters manually. The HIST_LOAD jobs use them to track the data that has already been processed, so that if the load job is interrupted, it can pick up where it left off.
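A minimal sketch of the chunked, resumable loading pattern described above. The `fetch_rows`/`load_rows` callables, the checkpoint shape, and the key-based resume logic are assumptions made for illustration; they are not the actual HIST_LOAD implementation:

```python
# Illustrative sketch only: chunked loading with a resume checkpoint,
# in the spirit of HIST_LOAD_TR_HIST and TR_HIST_LOAD_MAXSIZE.
TR_HIST_LOAD_MAXSIZE = 100_000  # max rows per pass (value is illustrative)

def load_tr_hist(fetch_rows, load_rows, checkpoint, maxsize=TR_HIST_LOAD_MAXSIZE):
    """Run load passes until the source is exhausted.

    fetch_rows(after_key, limit) -- returns (key, row) pairs ordered by key
    load_rows(rows)              -- loads one chunk into the warehouse
    checkpoint                   -- dict recording the last processed key
    """
    while True:
        batch = fetch_rows(checkpoint.get("last_key"), maxsize)
        if not batch:
            break  # all source data processed
        load_rows([row for _, row in batch])
        checkpoint["last_key"] = batch[-1][0]  # resume point if interrupted
    return checkpoint
```

Because the checkpoint is updated only after each chunk loads, an interrupted run restarted with the same checkpoint continues from the last completed chunk rather than reloading everything.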