Oracle HRMS Data Pump

This essay provides the information that you need to understand and use the Oracle HRMS Data Pump. To understand this information you should already have a good functional and technical knowledge of the Oracle HRMS product architecture, including:

  • The data model for Oracle HRMS and the importance of DateTrack.

  • The API strategy and how to call APIs directly.

  • How to code PL/SQL. Some PL/SQL code is normally required to convert legacy data for use with Data Pump.

  • The HRMS parameters that control the running of concurrent processes (for example, to make the process run in parallel).

    Note, however, that the Data Pump Process Manager reduces some of this complexity by providing a user interface that enables you to view the progress of your Data Pump batches, and run multiple Data Pump processes with a consistent set of parameters.

    See: Using Data Pump Process Manager

Restrictions

This essay does not describe the entire Data Pump schema in detail. Details are given as needed for some of the tables and in most cases you will use the PL/SQL routines to insert data to these batch interface tables. Full details are provided in the Oracle HRMS electronic Technical Reference Manual (eTRM), available on My Oracle Support.

Oracle delivers seed data to enable Data Pump API calls to use features such as passing in user values instead of system identifiers. This support is not available for all of the APIs that are delivered with Oracle HRMS. This essay describes a mechanism for calling APIs using Data Pump where the supporting seed data is not present.

When purging data from the Data Pump tables, take extra care that you do not delete information on User Keys that you might need for future loading of external data. See: User Key Values.

Contents

This essay includes the following sections:

Overview

Oracle HRMS has a set of predefined APIs that are business process related and you are strongly advised always to use these APIs to load data. The predefined APIs enforce all the business rules in the system and guarantee the integrity of any data loaded into the system.

The Oracle HRMS Data Pump supports rapid implementation by simplifying and standardizing the common tasks associated with loading batch data into the Oracle HRMS tables. This is done by providing a set of predefined batch tables and standard processes that simplify the tasks of data-loading using the supported APIs.

With the Oracle HRMS Data Pump, you:

  1. Map the data items from your external system to the parameter values of the appropriate APIs.

    Because you map data to the parameters of the APIs you do not need to know the complexity of the HRMS data model. For example, to create an employee you need to co-ordinate inserting data into multiple tables. The create_employee API does this automatically, using the parameter values you pass in.

    A special feature of the Data Pump is that you can use user values in place of system IDs for the API parameters. These are translated automatically by the Data Pump.

  2. Load your data into a single generic batch lines table. (There is also a single batch header table to help you manage your batch loading processes.)

    The Data Pump works with a single generic batch lines table. It generates a specific view for each API so that you can easily review and update the data for each API using the parameter names for the API.

    Also, there are PL/SQL interface routines to insert your external data into the generic batch lines table.

  3. Run a standard process that automatically calls the appropriate API for each line of data in the batch table.

Components of Data Pump

Data Pump consists of the following components:

Meta-Mapper Process

This process generates the specific PL/SQL procedures and views for each of the supported API modules you want to use.

Use the Meta-Mapper to generate a set of views that you can use to examine or update data in the batch tables. For example you might want to correct data or change the order in which data is loaded.

Note: The Meta-Mapper is similar to an install process. You must run the Meta-Mapper before making a Data Pump API call. The Meta-Mapper usually runs during the loading of your software, but there are occasions when you may need to run it manually: for example, if you cannot find the Meta-Mapper, or if its version displays as invalid.

Batch Header Table and Batch Lines Table

Use these two tables to hold the header and lines information from your external data.

  • HR_PUMP_BATCH_HEADERS

  • HR_PUMP_BATCH_LINES

Note: The Meta-Mapper creates views based on the batch lines table called HRDPV_<API Procedure Name>, for example, HRDPV_CREATE_EMPLOYEE.

PL/SQL Routines

Use the predefined and generated PL/SQL routines to insert your external or legacy data into the batch lines table. Meta-Mapper generates a separate routine for each API that is supported by the Data Pump.

  • HR_PUMP_UTILS.CREATE_BATCH_HEADER(...)

  • HRDPP_<API Procedure Name>.INSERT_BATCH_LINES

    For example, HRDPP_CREATE_EMPLOYEE.INSERT_BATCH_LINES

There is also a help routine to provide detailed information on the parameter options for specific procedures.

  • HR_PUMP_META_MAPPER.HELP ( <package_name>, <procedure_name>)

The Data Pump Engine Process

The Data Pump Engine process is a standard concurrent process that performs the actual data validation and loading operations. It takes these parameters:

  • Batch name

  • Processing mode

  • Action Parameter Group

Special Features of Data Pump

The following is a list of the special features provided with Data Pump:

User Keys

Data Pump enables you to define the combination of data items that uniquely identify records for loading into Oracle HRMS. For example, when you are loading data for a Person, you could use a combination of Last Name, First Name, Date of Birth, and Gender to identify that person uniquely in Oracle HRMS.

You store these user key definitions in the table HR_PUMP_BATCH_LINE_USER_KEYS.

Use Actual Values

In nearly all cases you can load data using actual names or values without having to identify a system value in Oracle HRMS. The conversion of name to ID is transparent to the user. For example, you can use a real Job Name without needing to identify the JOB_ID in Oracle HRMS; or you can use the value 'Male' for gender without needing to know that the code value is 'M'.

Alternative Meta-Mapper Generation Mode

It is possible to call the Meta-Mapper so that a Data Pump API call is essentially a direct call to the API. This feature is most useful when the seed data for Data Pump support is absent.

Automatic Parallel Processing Of Batch Load Process

Data Pump automatically supports parallel processing on multi-processor systems without any extra code. You turn this on by inserting or updating a row for THREADS in the PAY_ACTION_PARAMETER_VALUES table.

This is the same parameter that controls parallel processing for the Payroll Run and other processes in Oracle HRMS.

Note: When you are using parallel processing, use the P_LINK_VALUE parameter in the batch lines to group transactions that must be run within the same thread.

Explicit User Ordering of Operations

When loading batch lines with related data you must perform some operations in a strict sequence. For example, entering salary information for an employee must take place after the employee record has been created.

With Data Pump, you use the P_USER_SEQUENCE parameter to control the order of processing of batch lines.

Note: Data Pump cannot validate the sequence numbers you enter. It accepts the sequence and tries to process as instructed. If you use incorrect numbers the process may return validation errors when it tries to load your data in the wrong sequence. See: Running the Data Pump.

Validation Mode Operation

When you submit the Data Pump concurrent process you can choose to run it in validation mode. This enables you to review errors in batches or in related records in a batch and to change them before any of them are committed to the HRMS database.

Processing Batches

When you run Data Pump the process only loads data that has not already been processed successfully. This means that you can run a batch, review and correct errors for any specific lines, and then rerun the same batch. You can repeat this process until you have successfully loaded all lines in the batch.

To do this you submit the concurrent process with the same batch name. All unprocessed or errored lines are reprocessed automatically.

Logging Options

There are many logging options with Data Pump that help you find errors when running the process.

Using Data Pump

To use Data Pump, follow this sequence of tasks:

  1. Decide which of the supported API modules you require for loading your external data and run the Meta-Mapper to generate interface procedures for these APIs.

    See: Running the Meta-Mapper.

  2. Use the predefined PL/SQL routines and those created by the Meta-Mapper to transfer your external data into the Data Pump tables.

    See: Loading Data Into the Batch Tables.

    Note: For each entity that requires a User Key you must include the value you want to use as a unique identifier. For example, the parameters P_PERSON_USER_KEY and P_ASSIGNMENT_USER_KEY for create_employee.

  3. Optional. Run Data Pump in validation mode to check and correct data before it is loaded.

    See: Running the Data Pump Process.

  4. Run Data Pump to load data from batch tables into the Oracle HRMS tables.

    Note: When you load a record for the first time, Data Pump automatically inserts your user key value from the batch lines, and the unique key ID generated by the API into the HR_PUMP_BATCH_LINE_USER_KEYS table. This combination is used for all further data loads that update existing records in Oracle HRMS.

    For example, P_PERSON_USER_KEY = USER_KEY_VALUE and PERSON_ID = UNIQUE_KEY_ID.

  5. Review any errors and correct causes.

    See: Finding and Fixing Errors.

  6. If necessary, rerun Data Pump to load corrected batch lines.

    See: Rerunning the Data Pump Process.

    Repeat 5 and 6 until all lines are successfully loaded.

  7. Optional. Purge data from the batch tables.

    See: Purging Data.

Running the Meta-Mapper

Based on your implementation you might decide that you do not need to use all of the predefined APIs to load external data. Run the Meta-Mapper for all APIs or for each single API that you select. The Meta-Mapper generates a specific PL/SQL package and view for each API.

Note: For APIs with overloaded interfaces, the Meta-Mapper will only generate code for the latest interface. The latest interface is the interface that has the greatest number of mandatory parameters.

Use the following SQL*PLUS command to generate packages and views for a number of APIs (not, however, for all APIs, as the GENERATEALL name appears to suggest):
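Assuming GENERATEALL is a procedure of the hr_pump_meta_mapper package named earlier in this essay, the SQL*Plus call would take this form (a sketch, not verified against your release):

```sql
-- Sketch: generate Data Pump packages and views for the set of APIs
-- covered by GENERATEALL. The package name hr_pump_meta_mapper is taken
-- from this essay; run from SQL*Plus as the APPS user.
execute hr_pump_meta_mapper.generateall;
```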

See also: APIs Supported by the GENERATEALL Command.

Use the following SQL*PLUS command to generate packages and views for one API:

For example:
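For a single API, the generator procedure name and argument list shown here are assumptions (modelled on the help procedure's parameters); a call for hr_employee_api.create_employee might look like:

```sql
-- Hypothetical sketch: generate the package and view for one API.
-- The procedure name 'generate' and its arguments (API package name,
-- API procedure name) are assumptions for illustration.
execute hr_pump_meta_mapper.generate('hr_employee_api', 'create_employee');
```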

The naming convention for the view is hrdpv_<api_module_name> and the naming convention for the PL/SQL package is hrdpp_<api module name>. This applies unless the name would exceed 30 bytes, in which case the name is truncated to 30 bytes. In the example, the name of the view is hrdpv_create_employee, and the name of the package is hrdpp_create_employee.

You can use the view to insert legacy data into the batch tables, or to update data already in the batch lines table. The PL/SQL package contains an insert_batch_lines procedure, which makes it easy to insert data from your external systems into the batch lines table, and a call procedure that executes the API for a row in the batch lines table.

View Generated by the Meta-Mapper

For each API the Meta-Mapper generates a view on the HR_PUMP_BATCH_LINES table that reflects the parameters of the API. This makes it easier to examine and update row values. The name of the view reflects the API name. For example, HRDPV_CREATE_EMPLOYEE. For a full listing of this view see: Table and View Descriptions.

In addition to the parameters for the API, the Meta-Mapper always creates the following columns in the view:

Meta-Mapper also creates other columns for specific APIs. For example, some of the columns on the create employee view are:

  • P_EFFECTIVE_DATE

  • P_MANAGER_FLAG

  • P_ASSIGNMENT_USER_KEY

Other columns are created to reflect the PL/SQL OUT values returned from the API so that you can examine these values. For example:

You do not need to know which columns of the batch lines table hold specific parameters for the API.

Required Columns

If you use the view to insert data into the batch lines table, then in addition to the data required for the insert batch lines procedure you also need:

  • batch_line_id

    Primary key generated using the hr_pump_batch_lines_s sequence.

  • line_status

    Must be set to 'U' (unprocessed).

  • api_module_id

    Foreign key to hr_api_modules.

    The following query gets the api_module_id for create employee:
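A sketch of such a query follows; the hr_api_modules column names and the module type code are assumptions, so check them against the eTRM:

```sql
-- Sketch: look up the api_module_id for the create_employee API.
-- The column names and the 'BP' module type code are assumptions.
select api_module_id
from   hr_api_modules
where  api_module_type = 'BP'
and    upper(module_name) = 'CREATE_EMPLOYEE';
```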

PL/SQL Package Generated by the Meta-Mapper

The Meta-Mapper also generates a separate package for each API to make it easier for you to load data to the batch lines table or to review the content of the table for specific APIs.

For example, the create_employee package hrdpp_create_employee contains two procedures:

Insert Batch Lines Procedure

Use this procedure to simplify loading data into the batch lines table.

See also: Default and Null Values for API Parameters.

A call to this procedure creates one row in the batch lines table, complete with all the parameter values. For create_employee there are many parameters; use the Meta-Mapper help procedure to list them in full.

The optional p_data_pump_business_grp_name parameter specifies a business group name to override the name specified in the batch header.

The optional p_data_pump_batch_line_id parameter specifies the batch_line_id for the inserted row (if necessary an existing row with this batch_line_id will be deleted).

Note: This procedure requires two user key values p_person_user_key and p_assignment_user_key. You must supply values for these keys. If you use Data Pump to create records in Oracle HRMS then Data Pump automatically inserts your key values and the HRMS key values generated by the APIs into the user keys table. For subsequent actions Data Pump can use these keys to match records from your external system with the Oracle HRMS records. A more detailed explanation and example is included in a later section of this document.
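As an illustration, a minimal load of one employee might look like the following sketch. The exact insert_batch_lines parameter list depends on the package generated in your release, so the parameter names here are illustrative assumptions, and hr_pump_utils.create_batch_header is assumed to be a function returning the new batch ID:

```sql
declare
  l_batch_id number;
begin
  -- Create a batch header for this load (assumed to return the batch_id).
  l_batch_id := hr_pump_utils.create_batch_header
                  ('EMP_CONVERSION_01', 'Example Business Group');

  -- Insert one create_employee request into the batch lines table.
  -- Parameter names are illustrative; dates are passed as strings.
  hrdpp_create_employee.insert_batch_lines
    (p_batch_id            => l_batch_id
    ,p_user_sequence       => 1
    ,p_hire_date           => '1995/01/01'
    ,p_last_name           => 'Smith'
    ,p_sex                 => 'Male'              -- user value, not code 'M'
    ,p_person_user_key     => 'SMITH-0001'        -- caller-chosen user key
    ,p_assignment_user_key => 'SMITH-0001-ASG1'); -- caller-chosen user key
  commit;
end;
/
```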

Call Procedure

This is the actual 'wrapper' procedure executed by the Data Pump process to call the API and pass in the appropriate parameter values. The procedure takes two arguments: p_business_group_id and p_batch_line_id.

Note: Direct calls to this procedure are NOT supported. You must use the Data Pump concurrent process to execute the procedures.

Meta-Mapper Help Procedure

The Meta-Mapper package also includes a help procedure, hr_pump_meta_mapper.help, that returns information on the generated PL/SQL package and view names, and the batch lines table parameter values for a given API.

The help procedure has two parameters:

  • p_module_package

    The name of the API PL/SQL package

  • p_module_name

    The name of the API PL/SQL procedure

You must set server output on before calling this procedure.

For example, use the following SQL*PLUS to get help for hr_employee_api.create_employee:

The output is as follows:

The following is an explanation of the help output:

  • In the above example, the insert_batch_lines procedure is: hrdpp_create_employee.insert_batch_lines.

  • The Parameter Name column shows the name of the parameter as it appears in the insert_batch_lines procedure and generated view.

  • A parameter can have type USER_KEY which means that it is a user key (see the section User Key Values for more details). For example, P_SUPERVISOR_USER_KEY USER_KEY IN DEFAULT. User key parameters are implicitly of type VARCHAR2.

  • DATE parameter values are passed to the insert_batch_lines procedure as VARCHAR2 strings in the appropriate date format.

    Data Pump makes use of two different date formats:

    • Unsigned Date Format: either YYYY/MM/DD or, for dates in the Oracle HRMS internal date format, YYYY/MM/DD HH24:MI:SS.

    • Signed Date Format: either SYYYY/MM/DD or, for dates in the Oracle HRMS internal date format, SYYYY/MM/DD HH24:MI:SS. The signed date format was introduced for HR_API.G_SYSDATE, which must be capable of holding a negative date.

  • BOOLEAN parameter values are passed to the insert_batch_lines procedure as VARCHAR2 strings with the values 'TRUE' or 'FALSE'.

  • The In/Out column has the value IN for parameters that are PL/SQL IN or IN/OUT when passed to the API, or are user key parameters. If the parameter is an API PL/SQL OUT parameter, then the In/Out column value is OUT.

  • Only IN parameters are arguments to the insert_batch_lines procedure. OUT parameters appear in the generated view.

  • The Default column has the value DEFAULT if the parameter's value is not required in the batch lines table. For mandatory parameters this column is empty.

  • Mandatory parameter values must be passed to the insert_batch_lines procedure.

  • If the parameter is a lookup parameter, the Lookup Type column contains the name of the parameter's lookup type.

Loading Data Into the Batch Tables

The Meta-Mapper generates a specific PL/SQL package and view for each API. Use these PL/SQL interface procedures and views for loading data into the batch tables, except where stated otherwise in this document.

It is particularly important that inserts are performed exclusively through the interfaces. There are two reasons for this:

  • Using the PL/SQL procedure insulates you from the complexities of the underlying schema.

  • Using the PL/SQL procedure insulates you from any schema changes that might be made in any future release. This is important if you intend to use Data Pump on a continuing basis.

Tip: Test the validity of the legacy data capture code on a subset of the batch to be loaded. For example, if you plan to load details for 100000 people, test your routines to validate and load a subset of 100 representative people. This should help you to identify and resolve any obvious problems with your capture code before you attempt to load the bulk of your data.

The Batch Interface Tables

The main objective of the interface design was to keep everything as simple as possible. The result is that Data Pump only has one batch header and one batch lines table for loading data for all APIs. Views are generated by the Meta-Mapper with specific column names for each API.

Each row of the batch lines table holds the reference to an API and data values. Data Pump executes each API with the data passed in as parameters.

How to Control Processing Order

There are many instances where you need to control the order in which batch lines are loaded into the database. For example, Data Pump would generate an error if it tried to create an address for a person before it created the person.

To control the order in which operations are performed, use the p_user_sequence parameter to set the order manually. Choose some appropriate numeric values for this parameter when you insert the data to the batch lines table. Data Pump uses these numbers to determine processing order.

Different Approaches to Batch Loading

There are a number of approaches you can take when setting the order for processing batch lines.

One approach would be to load disparate data in separate batches. For example load personal information in one batch and address information in a second batch.

Another approach would be to create a batch containing lines with related API calls. For example, you could load person, address, and assignment information for one employee as part of one batch. In this approach, if you are using the parallel processing option, you would use the p_link_value parameter to make sure all the lines are processed in the same chunk. Use the default or p_user_sequence parameter to make sure that the different API calls are made in the correct order within the linked group.
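To make the second approach concrete, here is a sketch of two linked batch lines. The hrdpp_create_address package and all parameter names are illustrative assumptions, and l_batch_id is assumed to come from create_batch_header:

```sql
-- Group one employee's related lines into the same thread with
-- p_link_value, and order them within the group with p_user_sequence.
hrdpp_create_employee.insert_batch_lines
  (p_batch_id            => l_batch_id
  ,p_user_sequence       => 1              -- person first
  ,p_link_value          => 100            -- same link value for the group
  ,p_hire_date           => '1995/01/01'
  ,p_last_name           => 'Smith'
  ,p_sex                 => 'Male'
  ,p_person_user_key     => 'SMITH-0001'
  ,p_assignment_user_key => 'SMITH-0001-ASG1');

hrdpp_create_address.insert_batch_lines
  (p_batch_id        => l_batch_id
  ,p_user_sequence   => 2                  -- address after the person exists
  ,p_link_value      => 100                -- processed in the same chunk
  ,p_person_user_key => 'SMITH-0001'
  ,p_address_line1   => '10 Example Street');
```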

Processing Order When Running Parallel

The Data Pump process has been optimized to take advantage of parallel processing options. If you want to run a multi-threaded process there are some special considerations for ordering batch lines.

When you run the Data Pump process in parallel, the concurrent manager generates multiple threads, each of which processes a defined number of batch lines before it commits them to the database. The number of lines is controlled by the CHUNK_SIZE payroll action parameter - see Other Parameters for details.

With parallel processing and chunking of lines, in theory a transaction that includes more than one line could be split between processes. This would mean that lines might not be processed in the order set by the p_user_sequence parameter.

You can prevent this by using the p_link_value parameter. This parameter tells Data Pump that a set of batch lines must be processed in the same chunk. Use the same link value for all the lines that must be processed by the same thread - this will automatically extend the number of rows processed by a single thread when necessary.

When lines have a common link value, they must also be in consecutive user sequence in order to be processed within a single chunk.

For example, in the following table, only the lines with the user sequences 1, 2 and 5 are guaranteed to be processed in the same thread.

Note: When running Data Pump in parallel you may find that performance does not scale as expected. Remember that running business process APIs in parallel may cause lock contention because of extended validation. For example, in the past, the personal payment method and element entry APIs were known to have problems in this area.

Default and Null Values for API Parameters

Specifying a Default or NULL Parameter Value

Part of the design for the APIs in Oracle HRMS is that many parameters have default values set for them. This means that they can be called directly without having to pass values for all parameters.

When you use Data Pump there is a similar mechanism that means you do not have to supply values for all parameters.

The following rules apply:

  • If an insert batch lines parameter is passed NULL or is not passed a value and can be defaulted, the appropriate default value will be passed to the API module itself.

  • If you want to set up an explicit NULL value for a parameter, use the special reserved string <NULL>. You may want to do this to update to a null value.

Any other value passed as a parameter will be the value inserted into the batch line and subsequently passed to the appropriate API process.
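For example, to update a previously loaded value to NULL, you would pass the reserved string itself. In this sketch the hrdpp_update_person package and its parameters are illustrative assumptions:

```sql
-- Pass the reserved string '<NULL>' to set a value to NULL explicitly,
-- rather than leaving the parameter to take its default.
hrdpp_update_person.insert_batch_lines
  (p_batch_id        => l_batch_id
  ,p_person_user_key => 'SMITH-0001'
  ,p_effective_date  => '1996/01/01'
  ,p_middle_names    => '<NULL>');   -- explicit NULL, not "use default"
```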

Indicator Parameters

The insert_batch_lines procedure may be generated with indicator parameters. Each indicator parameter is generated in addition to the corresponding standard parameter, for example, I_AMOUNT (indicator parameter) and P_AMOUNT (standard parameter). Indicator parameters allow the special value NULL to be specified for non-mandatory number and date parameters whose default value is not NULL. If the indicator parameter is set to Y, the value NULL is written to the batch lines table; otherwise the standard parameter's value is used. The usual case for this is an update API where a number or date value needs to be updated to NULL.

Assumed Default Values

Occasionally, when the value NULL is used to specify a non-mandatory parameter, the wrong default value is passed to the API call. The usual reason is that the parameter in question has a non-standard default value that the seed data has not taken into account. In such a case, explicitly set the correct default value for the parameter in the batch lines row for the Data Pump API call.

Unless seeded otherwise, the Meta-Mapper assumes certain default values for API parameters, because it is not possible to read the actual default values from the database. The default value used for a create API (for example, create_employee) is NULL. For all other APIs, the default values used are shown in the following table:

Parameter Type    Assumed Default Value
BOOLEAN           NULL
DATE              HR_API.G_DATE
LONG              NULL
NUMBER            HR_API.G_NUMBER
VARCHAR2          HR_API.G_VARCHAR2

Default and Null Values for Mapped Parameters

A mapped parameter is one where an actual value (or a user key) is used rather than a system identifier in the Data Pump API call. The Meta-Mapper call procedure calls a mapping function before making the API call to resolve the system identifier value from the input value. Such a mapping function usually has two or more parameters: an obvious name parameter, for example P_JOB_NAME, and other parameters such as P_EFFECTIVE_DATE.

If one or more of the mapping function parameters is set to <NULL> in batch lines then the mapped parameter is passed to the API as NULL. Otherwise, if one or more of the mapping function parameters is set to NULL in batch lines and the default value is NULL or an HR_API value (e.g. HR_API.G_NUMBER) then the mapped parameter is passed to the API with its default value.

Recommendation: To use this feature, set the name parameter to <NULL> or NULL in the batch lines table; there is no need to worry about the values of the other mapping function parameters.

Running the Data Pump Process

Use the Submit Reports and Processes form to start the Data Pump Engine process. It takes these parameters:

  • BATCH NAME

    The batch_name is one of the batches inserted via the create_batch_header procedure.

  • VALIDATE FLAG

    The default value for this flag is No, which commits all valid lines to the database.

    If the validate flag is set to Yes, the process runs in validation mode. The APIs are called, but their results are rolled back. Use this mode to check and correct data before committing changes to the database.

  • ACTION PARAMETER GROUP

    The action parameter group specifies the set of action parameter values to configure this Data Pump run.

    The default value for this parameter is set from the HR: Data Pump Action Parameter Group profile option.

Note: Before running the Data Pump process you should decide whether to use parallel threads and whether you want to turn on any logging options.

Overview of Data Pump Action Parameters

You control the running of the Data Pump process through action parameter settings. A number of these action parameters (THREADS, CHUNK_SIZE, MAX_ERRORS_ALLOWED) are also used by other processes, for example the Payroll Run.

Action parameter groups make it possible to keep separate action parameter values for different processes, which is highly recommended. They also let you switch in a group for debugging, for example so that Data Pump runs as a single thread with logging switched on.

Any action parameters not set within the specified action parameter group take their values from the default action parameter group (the null action parameter group). Furthermore, if action parameters are NULL then the Data Pump process uses default values for them.

You can set action parameter values from the Action Parameters form (navigate to Process And Reports->Action Parameters).

Running In Parallel

To enable parallel processing you set a value for the THREADS parameter in PAY_ACTION_PARAMETER_VALUES.

The threads value includes the starting process. That means that if you set a value of 2, the main engine code starts with one slave process to make a total of two concurrent processes. When running in parallel, the 'master' process may finish before the slave processes. This is normal.

Note: The THREADS parameter also controls the parallel execution of the other Oracle Payroll processes. We recommend that you use action parameter groups to separate action parameters for Data Pump from normal payroll processing.
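A sketch of enabling two threads by direct insert follows; the PAY_ACTION_PARAMETER_VALUES column names and the bound group ID are assumptions, and the Action Parameters form is the safer route:

```sql
-- Sketch: set THREADS = 2 so the Data Pump engine starts one slave
-- process in addition to the master process. Column names are assumed.
insert into pay_action_parameter_values
  (action_parameter_group_id, parameter_name, parameter_value)
values
  (:data_pump_group_id, 'THREADS', '2');
```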

Other Parameters

There are six other payroll action parameters you can set for Data Pump.

CHUNK_SIZE

Default = 10

Controls how many batch API calls are processed at a time per thread when running in parallel. It also controls the number of API calls per commit. Note that there are certain circumstances under which the actual number can vary from this number. For example, it can be higher if the p_link_value parameter is set.

MAX_ERRORS_ALLOWED

Default = 20

Controls how many errors in calling an API will be tolerated before the entire Data Pump engine fails. This is the number of errors per parallel thread.

PUMP_DEBUG_LEVEL

Use this parameter to turn on logging for tracking errors generated by the Data Pump process. For a list of valid values for this parameter, see Logging Options.

DATA_PUMP_DISABLE_CONTINUOUS_CALC

Default = N

Use this parameter to turn off continuous calculation triggers. This may be desirable for performance reasons. The value Y turns off the continuous calculation triggers.

DATA_PUMP_NO_FND_AUDIT

Default = N

Use this parameter to turn off Oracle Applications auditing. This may be desirable for performance reasons. The value Y turns off the auditing.

DATA_PUMP_NO_LOOKUP_CHECKS

Default = N

Use this parameter to turn off lookup validation in the Data Pump API call. The Data Pump API call assumes that values for lookup parameters are passed in as lookup codes only. This may be desirable for performance reasons. The value Y turns off the lookup validation.

Checking Run Status

The Data Pump runs as a concurrent process. You can check process status at any time using the View Concurrent Requests window. The concurrent manager only reports failure if the entire process has failed. Usually this happens because the number of errors exceeded the value set by the MAX_ERRORS_ALLOWED parameter.

Note: Even if the concurrent process completes successfully there may be some data errors encountered by the process. You should always check for batch line errors.

Finding and Fixing Errors

This section deals with the logging options available for tracking errors generated by the Data Pump process, as well as hints and tips on how to deal with these.

Logging Options

You enable logging options for Data Pump by inserting appropriate values for the PUMP_DEBUG_LEVEL parameter in the PAY_ACTION_PARAMETER_VALUES table.

Note: Turning logging on always affects the overall performance of the data pump process. You should only use logging to help track down problems when they occur. Remember also to switch logging off after you have solved your problem.

Valid values for PUMP_DEBUG_LEVEL are as follows.

Tip: The first three options are likely to be the most useful to you.

AMD API Module Debug (enables trace output from API)
RRP Range Row Processing logging (logs the number of errors that occurred for each unit of work, or range)
GID Get_id function failure information (logs failures in functions that map user values to IDs)
MSG Output specific logging messages
ROU Routing information (entry to and exit from procedures)
WCD Wrapper cache debug logging
STK Stack dump logging (trace information on failure)
EXT Exit information (trace information on success)
RRI Range row insert logging
BLI Batch Line Information (output the batch line number for the batch line being processed)
CLF Concurrent Log File (logging messages output with the MSG option go to the concurrent manager log file)

You can combine any number of these options by concatenating the values, separated by a colon. For example, the string 'MSG:RRI:RRP' combines MSG, RRI, and RRP debugging.

How to View Logging Output

When you enable logging options, output is produced for every thread that may be running. Use the PYUPIP command to view this output.

To use this command you need to know the ID of the concurrent process you are logging. Online, you can use the View My Requests window to find the Concurrent Request IDs. Alternatively, you can query the HR_PUMP_REQUESTS table, which holds one row for each running process. For example, a query returning two rows with request_id values 98533 and 98534 tells you that two processes are running.
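A minimal query for this, assuming only the REQUEST_ID column (see the eTRM for the full column list of HR_PUMP_REQUESTS), might be:

```sql
-- Illustrative sketch: list the concurrent request IDs of the
-- running Data Pump processes, one row per process.
SELECT request_id
  FROM hr_pump_requests;
```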

Use PYUPIP to trace the output in a separate command line window.

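The exact PYUPIP invocation depends on your installation; the connect-string format and the pipe-name convention shown here (REQID followed by the concurrent request ID) are assumptions, so check the PYUPIP documentation for your release. A typical form might be:

```shell
# Illustrative only: APPS connect string and pipe name are assumptions.
PYUPIP <username>/<password>@<database> REQID98533
```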
Note: If you are running multiple threads, you should trace all of them. If you do not trace all threads, processing comes to a halt when the database trace pipe fills up. It may be advisable to run a single thread only when tracing.

How to Find Errors in Batch Lines

When an error occurs during processing, Data Pump generates one or more rows in the HR_PUMP_BATCH_EXCEPTIONS table. There will be multiple rows if the API supports multiple messaging. In this release you must use SQL*PLUS to view this information.

Additionally, you can use SQL*PLUS to query rows in HR_PUMP_BATCH_LINES where the LINE_STATUS has a value of E - error.
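For example, a check for errored lines in a named batch might look like the following sketch. The BATCH_ID join column is an assumption based on the table relationships described later in this essay; verify the column names in the eTRM.

```sql
-- Illustrative sketch: find batch lines in error for one batch.
SELECT l.batch_line_id, l.line_status
  FROM hr_pump_batch_lines   l,
       hr_pump_batch_headers h
 WHERE h.batch_name  = 'MYBATCH'
   AND l.batch_id    = h.batch_id
   AND l.line_status = 'E';
```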

Note: In validation mode, LINE_STATUS is set to V - validated, for a successful API call. In update mode, it is set to C - complete, for a successful API call.

Investigating the Cause of Errors

Investigation strategies depend on the type of error and the indications of its origin. For some errors you may need experience with the use of APIs and the Oracle HRMS application to recognize what might be wrong.

Some specific advice for Data Pump follows:

  • Start with the columns of the HR_PUMP_BATCH_EXCEPTIONS table to identify which batch line has caused the error. Use this to check the parameters and values of the batch line itself.

  • One common error is 'no data found'. This is most likely to happen because of an error in one of the functions called to convert user meaning to ID values. In this case, the exact cause of the error will not be obvious from looking in the exceptions table. More information can be gained from using the GID logging value. When failure occurs, the name of the function that failed, plus the argument values passed in, is displayed in the trace.

  • The AMD logging value can be used to help track down problems. It activates the logging in the API modules themselves - providing copious output to examine.

  • Another common cause of errors is incorrect ordering of the data load. For instance, attempting to load a person's address before the person. An associated error may occur if you are using parallel processing and do not use LINK_VALUE to associate multiple batch lines.

  • When running in validation mode, ordering errors will occur if the batch is not split up into chunks that are independent of the results of other chunks. This will occur even if the validation is done with a single thread. The reason is that the results of APIs over a single chunk are rolled back to release rollback segments. This is another reason to use the p_link_value parameter to control the running of a load.

How to Fix Errors

The most common cause of errors is likely to be that incorrect values have been loaded via the insert_batch_lines procedure and that these need to be corrected.

Using The Views To Correct Data

Use the HRDPV_ views on HR_PUMP_BATCH_LINES to correct values in the appropriate columns. You can use normal update statements on these views and this makes fixing data problems much simpler.

Warning: When using the views to make changes to problem data, you must not alter the LINE_STATUS on the HR_PUMP_BATCH_LINES table. The Data Pump engine uses this for processing.

Note: Views on HR_PUMP_BATCH_LINES display rows only for the APIs for which they were generated. Any attempt to update the API_MODULE_ID column with an incorrect value will fail with an ORA-01402 error. The views are generated with a WITH CHECK OPTION on the where-clause to prevent you from using a view to generate any row that the view could not select.

(The same warning applies to inserting rows into HR_PUMP_BATCH_LINES using the generated views.)
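For example, to correct a misspelled surname on a create_employee batch line through its generated view, an update along the following lines could be used. The BATCH_LINE_ID key column is an assumption; check the columns of your generated view.

```sql
-- Illustrative sketch: fix a bad value through the generated view.
-- Do not touch LINE_STATUS; the Data Pump engine maintains it.
UPDATE hrdpv_create_employee
   SET p_last_name = 'Smith'
 WHERE batch_line_id = 1001;
COMMIT;
```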

Rerunning The Data Pump Process

After you have fixed any problems, you can rerun the batch by submitting the Data Pump process again with the same batch name. You can submit the process any number of times until all lines complete successfully. Batch lines with a status of E - error, U - unprocessed, or V - validated are automatically reprocessed.

You do not have to take any action to remove rows from the exception table. Data Pump automatically deals with this.

Lines validated in previous Data Pump runs are reprocessed even if the Data Pump is run in validation mode because the results of the associated API calls would have been rolled back in the previous runs. Only lines with a status of C - complete are not reprocessed.

The following list summarizes each of the possible entries for batch line status, and shows whether each status is automatically reprocessed.

U - Unprocessed. Automatically reprocessed: Yes.

N - Not Processed. Applies to Data Pump purges and indicates that this call was not processed because a previous linked call failed to complete processing. Automatically reprocessed: Yes.

R - Rolled Back. This linked API call completed successfully, indicating that the call parameters were valid. However, the results were rolled back because a subsequent linked API call failed to complete. Automatically reprocessed: Yes.

V - Validated. Not yet committed. Automatically reprocessed: Yes.

E - Error. Not committed. Automatically reprocessed: Yes.

C - Complete. Committed to the database. Automatically reprocessed: No.

Purging Data

You can use the Data Pump Purge process to remove unwanted batches when you have successfully uploaded them to your database. You may wish to retain some of the batch information for future processing; when you purge a Data Pump batch, you can therefore select how much of the batch information to purge. The Data Pump Purge process enables you to select your criteria for purging and then submit your purge.

Before you submit a purge request, you should make sure that:

  • You have completed all processing for the batch.

  • You have not prematurely removed data from the HR_PUMP_BATCH_LINE_USER_KEYS table. For example, if you delete assignment and person user keys, you cannot create a secondary assignment for that employee until you run the add_user_key procedure to recreate the keys.

If you have existing Data Pump purge practices you can continue to use them in preference to the Data Pump Purge process.

See: How to Purge

You run the Data Pump Purge process from the Submit Requests window.

To run the Data Pump Purge process:

  1. Enter the name of the batch that you want to purge. If you do not supply a batch name, confirm that you want to purge all batches.

  2. Specify whether you want to preserve the user keys so that you can use them for future batches.

  3. If you want to purge the unprocessed batch lines, enter Yes. However, these lines are probably unprocessed because Data Pump encountered the maximum number of errors and could not process any of the later batch lines. You can run these unprocessed lines again when you have fixed the data. So, if you want to run these batch lines again, enter No.

  4. If you want to purge the failed batch lines, enter Yes. However, if you intend to process these batch lines again when you have corrected the reason for failure, enter No to preserve these batch lines for future use.

  5. If you want to purge the completed batch lines, enter Yes.

  6. If you want to delete the batch header, enter Yes.

  7. Confirm that the action parameter group is correct.

How To Purge

In all cases, start by identifying the batch you want to purge, for example by querying HR_PUMP_BATCH_HEADERS for its batch_id and reviewing the status of its lines in HR_PUMP_BATCH_LINES.

Simple Purge Of All Rows

If you want to purge all rows regardless of status then use the following:
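A sketch of such a purge for a single batch follows, deleting in dependency order. The BATCH_ID and BATCH_LINE_ID columns are assumptions based on the table relationships described in this essay; verify them against the eTRM. Note that this also removes the batch's user keys, so heed the earlier warning about User Key Values.

```sql
-- Illustrative sketch: purge every row for one batch, regardless of
-- status. :batch_id is a bind variable holding the batch to purge.
DELETE FROM hr_pump_batch_exceptions
 WHERE batch_id = :batch_id;
DELETE FROM hr_pump_batch_line_user_keys
 WHERE batch_line_id IN
       (SELECT batch_line_id
          FROM hr_pump_batch_lines
         WHERE batch_id = :batch_id);
DELETE FROM hr_pump_batch_lines
 WHERE batch_id = :batch_id;
DELETE FROM hr_pump_batch_headers
 WHERE batch_id = :batch_id;
COMMIT;
```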

Purge Of All Successful Rows

This is more complicated. You should purge data only when all loads have been successful. This avoids the danger of purging rows that are still needed. Perform the following actions:

  • Use the HR_PUMP_BATCH_LINES.LINE_STATUS column to tell which rows have been successful, and therefore can be purged.

    • Look for a status of C. Of course, if all rows in a batch have status C then simply purge all rows in that batch.

  • Remove all appropriate rows in the following tables, in the order shown below:

    • HR_PUMP_BATCH_EXCEPTIONS

    • HR_PUMP_BATCH_LINE_USER_KEYS

    • HR_PUMP_BATCH_LINES

    If all rows in HR_PUMP_BATCH_LINES have been deleted, remove the appropriate batch from the HR_PUMP_BATCH_HEADERS table.
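Under the same column-name assumptions as before (verify against the eTRM), a sketch that purges only the completed lines of a batch might be:

```sql
-- Illustrative sketch: purge only lines with status C - complete,
-- preserving errored and unprocessed lines for reprocessing.
-- Caution: this also deletes the user keys of those lines; keep
-- them if future loads will need them (see User Key Values).
DELETE FROM hr_pump_batch_line_user_keys
 WHERE batch_line_id IN
       (SELECT batch_line_id
          FROM hr_pump_batch_lines
         WHERE batch_id = :batch_id AND line_status = 'C');
DELETE FROM hr_pump_batch_lines
 WHERE batch_id = :batch_id AND line_status = 'C';
-- Remove the header only if no lines remain for the batch.
DELETE FROM hr_pump_batch_headers h
 WHERE h.batch_id = :batch_id
   AND NOT EXISTS (SELECT 1 FROM hr_pump_batch_lines l
                    WHERE l.batch_id = h.batch_id);
COMMIT;
```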

Sample Code

This section contains some sample code showing how you could call the batch lines procedures.

This example is artificial in that the data for the API calls is generated. However, it shows how we can prepare the Data Pump to create a number of batch lines that:

  • Create an employee

  • Create an address for the employee

  • Update the default assignment criteria

  • Create a secondary assignment

The example also illustrates the use of p_link_value to make sure that the separate transactions for each employee and assignment are processed by the same thread.

Notes on Using The Generated Interfaces

The Meta-Mapper process generates a view and PL/SQL packages for each API. This section explains some of the factors that you should keep in mind when using them.

Finding System IDs from Names or Values

When you use APIs directly, you must supply lookup codes and surrogate primary keys for many parameters.

Without Data Pump you would need to write additional code to convert values from your external system to Oracle HRMS system IDs for each API.

However, with Data Pump you have a set of predefined procedures for each of the supported APIs that automatically convert user names or values into lookups and system IDs.

Note: For lookup parameters, you can use the meaning or the lookup code itself. For non-lookup type IDs you will find an alternative parameter to use.
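As an illustration, where a direct API call would require a system ID, the generated batch-lines procedure accepts a user value instead. The procedure and parameter names below are hypothetical, chosen only to show the pattern described in this essay:

```sql
-- Hypothetical sketch: the generated insert_batch_lines procedure
-- takes user values (keys, names, lookup meanings), which Data Pump
-- resolves to system IDs at run time.
BEGIN
  hrdpp_create_employee.insert_batch_lines
    (p_batch_id        => 123,
     p_person_user_key => 'SMITH1001',  -- user key, not a person_id
     p_last_name       => 'Smith',
     p_sex             => 'Male',       -- lookup meaning, not code
     p_hire_date       => TO_DATE('01-01-1998', 'DD-MM-YYYY'));
END;
/
```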

Exceptions

There are three major exceptions to the use of names for parameter values:

  • Flexfield Attribute Parameters

  • PL/SQL IN/OUT Parameters

  • Legislation Specific Lookup Parameters

Flexfield Attribute Parameters

Most of the API processes include flexfield attribute parameters with names like P_SEGMENT18 or P_ATTRIBUTE20. Data Pump cannot know what the mappings of these values are in your specific implementation and therefore value conversion is not supported.

This means that you must take responsibility for passing the correct lookup code or other value as appropriate.

PL/SQL IN/OUT Parameters

When an API performs a combination of different actions then you need to provide the appropriate ID or code values for the parameters rather than the user meanings. This should not be a great problem where the values for these items can be derived before the Data Pump run.

For example, in hr_assignment_api.update_emp_asg, p_special_ceiling_step_id must be passed in as an ID, even though other APIs require it to be a user key.

Note: You cannot provide user keys for PL/SQL IN/OUT parameters of the API because the Data Pump code that calls the specific API has no way to determine whether the user key existed before the API call and therefore whether it is to be created or its ID value updated after the API call.

Many APIs generate a comment_id as an output parameter. However, you are not required to supply a user key value for the comment_id. This avoids the generation of a lot of meaningless user keys.

Note: A comment_id user key is required for the comment_id parameters to the element entry creation and update APIs. You must add these user keys if you require them for the element entry API calls.

Legislation Specific Lookup Parameters

A similar situation arises with legislation-specific business process API calls where a specific lookup in the legislation-specific API call corresponds to a generic parameter in the generic business process API call.

For example, the p_region_1 parameter in the hr_person_address_api.create_person_address API corresponds to p_county lookup parameter in the hr_person_address_api.create_gb_person_address API.

When calling hr_person_address_api.create_person_address for a GB address via Data Pump, you would have to pass the 'GB_COUNTY' lookup code for the p_region_1 parameter. Alternatively you could use the 'GB_COUNTY' lookup meaning if you used hr_person_address_api.create_gb_person_address.

Note: You should use legislation-specific APIs where these are available.

User Key Values

When you are mapping data from your external system to Oracle HRMS you will find that there are some cases where an ID value for an Oracle entity cannot be derived from a logical unique key or name. Examples of this are Person, Assignment and Address. Consider the unique identifier for a person. It is very difficult, if not impossible, to identify a person uniquely. In theory different people may share the same first and last names, gender, birth date, marital status, and so forth.

There are similar problems if an entity does not have a logical key, and its surrogate ID cannot be derived easily from the names of any of its component entities. For example, it isn't easy to identify a unique Element Link by looking simply at names of its components - Payroll, Job, Position etc.

Or, the entity may be an abstract entity specific to the Oracle Applications products and is only identifiable using an ID value. For example an ID_FLEX_NUM.

The solution provided by Data Pump is to enable you to set a 'User Key' value. This value must be a unique character string. It could be a unique ID taken from your external system or it could be a concatenation of multiple values. For example, a user key for a person could be the person's name concatenated with the existing employee number from your legacy system: 'SMITH' plus employee number 1001 would give the user key 'SMITH1001'.

You must define user key values for any parameters with a name that ends 'user_key'. Data Pump uses these user key values to identify IDs for the records in the Oracle HRMS system.

Note: User key values must be unique across all entities. For example, it is not possible to have a Person user key value of 'SMITH1001', and an Assignment user key value also of 'SMITH1001'.

In most cases you will have one user key value for each system ID. However, with Data Pump you can define many different user keys for the same system ID. This is important if you are loading data from different external systems and the unique keys do not match.

User keys are held as rows in the HR_PUMP_BATCH_LINE_USER_KEYS table.

Creating User Key Values

User keys are created in one of two ways:

  • Data Pump inserts new user keys

    Using Data Pump you must specify user keys for several API parameters. After a successful call to an API that creates a new record, Data Pump inserts a new row in the user keys table with the name you specified and the system ID value returned from the API. The returned ID value is a PL/SQL OUT parameter to the API.

  • Manually insert a new user key

    If you have already loaded data from an external system, or you want to create multiple user keys for the same system ID you can manually insert rows into HR_PUMP_BATCH_LINE_USER_KEYS using the add_user_key utility procedure.

Once the user keys have been created you can use the same key with other APIs to update an existing entity, or to specify another entity. For example, two person user keys can be used to specify a contact relationship.

Utility Procedures Available With Data Pump

This section lists the utility procedures that are provided with the Data Pump.

All the procedures are in the HR_PUMP_UTILS package.

create_batch_header

An example of a call to this procedure is:
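The sketch below assumes a signature of a batch name plus a business group name, returning the new batch ID; this is inferred from the essay's description of batch headers, so verify it against the HR_PUMP_UTILS package specification.

```sql
-- Hypothetical sketch of creating a Data Pump batch header.
DECLARE
  l_batch_id NUMBER;
BEGIN
  l_batch_id := hr_pump_utils.create_batch_header
                  (p_batch_name          => 'MYBATCH',
                   p_business_group_name => 'AKA Enterprises');
END;
/
```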

add_user_key
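The original example for this procedure is not present in this section, so the following is a sketch only. The parameter names (a user key value and the Oracle system ID it maps to) are assumptions consistent with the description of HR_PUMP_BATCH_LINE_USER_KEYS later in this essay; check the package specification.

```sql
-- Hypothetical sketch: map an existing system ID to a user key so
-- that later batches can refer to the record as 'SMITH1001'.
BEGIN
  hr_pump_utils.add_user_key
    (p_user_key_value => 'SMITH1001',
     p_unique_key_id  => 6272);  -- e.g. an existing person_id
END;
/
```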

modify_user_key

APIs Supported by the GENERATEALL Command

HR_APPLICANT_API CREATE_APPLICANT
  CREATE_GB_APPLICANT
  CREATE_US_APPLICANT
HR_ASSIGNMENT_API ACTIVATE_EMP_ASG
  ACTUAL_TERMINATION_EMP_ASG
  CREATE_SECONDARY_EMP_ASG
  CREATE_GB_SECONDARY_EMP_ASG
  CREATE_US_SECONDARY_EMP_ASG
  SUSPEND_EMP_ASG
  UPDATE_EMP_ASG
  UPDATE_EMP_ASG_CRITERIA
  UPDATE_GB_EMP_ASG
  UPDATE_US_EMP_ASG
HR_CONTACT_API CREATE_PERSON
HR_CONTACT_REL_API CREATE_CONTACT
HR_EMPLOYEE_API CREATE_EMPLOYEE
  CREATE_GB_EMPLOYEE
  CREATE_US_EMPLOYEE
HR_EX_EMPLOYEE_API ACTUAL_TERMINATION_EMP
  FINAL_PROCESS_EMP
HR_JOB_API CREATE_JOB
HR_JOB_REQUIREMENT_API CREATE_JOB_REQUIREMENT
HR_PERSONAL_PAY_METHOD_API CREATE_GB_PERSONAL_PAY_METHOD
  CREATE_PERSONAL_PAY_METHOD
  CREATE_US_PERSONAL_PAY_METHOD
  DELETE_PERSONAL_PAY_METHOD
  UPDATE_PERSONAL_PAY_METHOD
  UPDATE_GB_PERSONAL_PAY_METHOD
  UPDATE_US_PERSONAL_PAY_METHOD
HR_PERSON_ADDRESS_API CREATE_GB_PERSON_ADDRESS
  CREATE_PERSON_ADDRESS
  CREATE_US_PERSON_ADDRESS
  UPDATE_PERSON_ADDRESS
  UPDATE_GB_PERSON_ADDRESS
  UPDATE_US_PERSON_ADDRESS
HR_PERSON_API UPDATE_PERSON
  UPDATE_GB_PERSON
  UPDATE_US_PERSON
HR_POSITION_API CREATE_POSITION
  UPDATE_POSITION
HR_POSITION_REQUIREMENT_API CREATE_POSITION_REQUIREMENT
HR_SIT_API CREATE_SIT
HR_VALID_GRADE_API CREATE_VALID_GRADE
PY_ELEMENT_ENTRY_API CREATE_ELEMENT_ENTRY
  DELETE_ELEMENT_ENTRY
  UPDATE_ELEMENT_ENTRY

Using Data Pump with Unsupported APIs

Sometimes the necessary seed data for a Data Pump call to a particular API is not present. The usual problem when running the meta-mapper generate is the lack of mapping functions to resolve system identifiers from user values, for example:

ORA-20001: Seed data error: Mapping function get_set_of_books_id does not exist. Please contact your support representative.

This type of error is usually caused by API parameters with names ending in _ID, for example, P_JOB_ID.

You can call the meta-mapper in an alternative generate mode that essentially generates a direct call to the API rather than processing parameter values beforehand to get system values. Making a Data Pump call with this generate mode requires a better understanding of the API itself than is required when using the standard generate mode.

Use this SQL*PLUS command to generate packages and views for an API:

SQL> execute hr_pump_meta_mapper.generate(<package_name>, <procedure_name>, false);

Use these SQL*PLUS commands to display the help text for the API:

SQL> set serveroutput on size 1000000;

SQL> execute hr_pump_meta_mapper.help(<package_name>, <procedure_name>);

The view and package generated are the same as in the standard generation mode discussed earlier in this essay, and can be used in the same way. However, when using this generate mode you should note that:

  • There must be a row for the API with API_MODULE_TYPE A1 or BP in HR_API_MODULES. Note that Oracle does not support customer creation of rows in HR_API_MODULES, because problems can occur if conflicting data is delivered in future patches.

  • You must explicitly set the correct default values for API parameters when you make the Data Pump API call. This is because API parameter default values are not predefined and the meta-mapper makes assumptions about the default parameter values. For details about these assumptions, see Default and NULL Values for API Parameters (Assumed Default Values).

  • You will have to resolve the system values when you set up the data for each individual API call. This is because the generated Data Pump API does not have user keys, or names to identify the system values. This also restricts the mix of API calls within a batch because you cannot pass system identifiers implicitly between API calls. The same restriction applies to the object version number where an API call creates or updates an object.

Table and View Descriptions

The following section provides more details of the specific tables and views that you use with Oracle HRMS Data Pump.

HR_API_MODULES

API modules supported by Data Pump

HR_PUMP_BATCH_LINE_USER_KEYS

This table holds key mappings between your external system and the Oracle HRMS system. These keys are required for specific entities where it may be difficult to identify the record uniquely in Oracle HRMS from a single field in the batch line table. For example, you might want to use Name||National Identifier from the external system to map to Person ID in Oracle HRMS.

This table is populated automatically by the Data Pump process when you create new records in Oracle HRMS, for example when you load your legacy data. You can insert new lines into this table if you have already loaded your legacy data.

You can have multiple external key mappings to the same unique_key_id in Oracle HRMS. For example, if you want to interface data from an external payroll system and an external benefits system to Oracle HR where the unique IDs are different.

HR_PUMP_BATCH_HEADERS

This table holds batch header information for Data Pump. BATCH_NAME is a parameter for the Data Pump concurrent process.

HR_PUMP_BATCH_LINES

This table holds the individual batch lines that will be loaded by Data Pump.

HR_PUMP_BATCH_EXCEPTIONS

Holds exception information.

HRDPV_CREATE_EMPLOYEE

An example of the views generated on HR_PUMP_BATCH_LINES, in this case for the create_employee API.

PAY_ACTION_PARAMETER_GROUPS

Holds the action parameter groups used when submitting Data Pump processes.

PAY_ACTION_PARAMETER_VALUES

Holds action parameter values, such as the PUMP_DEBUG_LEVEL logging parameter.

Human Resource Management

Category: Corporate Management

Introduction

Human resource management is the strategic and coherent approach to the management of an organization's most valued assets - the people working there who individually and collectively contribute to the achievement of the objectives of the business. HRM is the acronym for the term "Human Resource Management". Human Resource Management is the organizational function that deals with issues related to people, such as compensation, hiring, performance management, organization development, safety, wellness, benefits, employee motivation, communication, administration, and training. HRM can also be performed by line managers.

A Human Resource Management System (HRMS), also called a Human Resource Information System (HRIS), HR technology, HR modules, or simply "Payroll", refers to the systems and processes at the intersection of human resource management (HRM) and information technology.

It merges HRM as a discipline, and in particular its basic HR activities and processes, with the information technology field, as the programming of data processing systems evolved into standardized routines and packages of enterprise resource planning (ERP) software.

On the whole, these ERP systems have their origin in software that integrates information from different applications into one universal database. The linkage of the financial and human resource modules through one database is the most important distinction from the individually and proprietarily developed predecessors.

What is Human Resource Information System?

The Human Resource Information System (HRIS) is a software or online solution for the data entry, data tracking, and data information needs of the Human Resources, payroll, management, and accounting functions within a business. Normally packaged as a database, hundreds of companies sell some form of HRIS, and every HRIS has different capabilities. Pick your HRIS carefully, based on the capabilities you need in your company.

Typically, the better Human Resource Information Systems provide overall:

  • Management of all employee information.
  • Reporting and analysis of employee information.
  • Company-related documents such as employee handbooks, emergency evacuation procedures, and safety guidelines.
  • Benefits administration including enrollment, status changes, and personal information updating.
  • Complete integration with payroll and other company financial software and accounting systems.
  • Applicant tracking and resume management.

The HRIS that most effectively serves companies tracks:

  • attendance and PTO use
  • pay raises and history
  • pay grades and positions held
  • performance development plans
  • training received
  • disciplinary action received
  • personal employee information

and occasionally:

  • management and key employee succession plans
  • high potential employee identification
  • applicant tracking, interviewing, and selection.

A paid time off (PTO) policy combines vacation, sick time, and personal time into a single bank of days for employee paid time off, creating a pool of days that an employee may use at his or her discretion when time off from work is needed.

An effective HRIS provides information on just about anything the company needs to track and analyze about employees, former employees, and applicants. Your company will need to select a Human Resources Information System and customize it to meet your needs.

With an appropriate HRIS, Human Resources staff enable employees to do their own benefits updates and address changes, freeing HR staff for more strategic functions. Additionally, the data necessary for employee management, knowledge development, career growth and development, and equal treatment is more easily maintained.

Finally, managers can access the information they need to legally, ethically, and effectively support the success of their reporting employees.

Purpose

Human Resource management is based on the efficient utilization of employees to achieve two main goals within a corporation or other type of organization.

The first goal is to effectively make use of the talents and abilities of employees to achieve the operational objectives that are the ultimate aim of the organization. Along with realizing the goals of the organization, Human Resource management also seeks to ensure that the individual employee is satisfied with both the working environment and the compensation and benefits that he or she receives.

Key Elements of HRM

HRM systems allow enterprises to automate many aspects of human resource management, with the dual benefits of reducing the workload of the HR department and increasing its efficiency by standardizing HR processes. Currently, Human Resource Management Systems typically include the following key modules:

  • Organization
  • Payroll
  • Time & Attendance
  • Benefits Administration
  • HR Management Information System
  • Recruiting
  • Training
  • Employee Self-Service
  • Reports

Organization

The Organization module covers the organization structure, such as company, location, department, designation, and employee group, and organization changes such as resignation, termination, transfer, and promotion.

Payroll

The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic pay cheques and employee tax reports. Data is generally fed from the human resources and time keeping modules, providing automatic deposit and manual cheque writing capabilities. This module can encompass all employee-related transactions as well as integrate with existing financial management systems.

Time & Attendance

The Time & Attendance module automates time-tracking processes and enhances the organization's performance by eliminating the paperwork and manual processes associated with time and attendance needs. The module helps to organize labor data efficiently, improve workforce management, and minimize errors in enforcing the company's attendance policies.

Benefits Administration

The benefits administration module provides a system for organizations to administer and track employee participation in benefits programs. These typically encompass insurance, compensation, profit sharing and retirement.

HR Management Information System

The HR management module is a component covering many HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records, and other related activities. Leading-edge systems provide the ability to "read" applications and enter relevant data into applicable database fields, notify employers, and provide position management and position control. The human resource management function involves the recruitment, placement, evaluation, compensation, and development of the employees of an organization.

Recruiting

Online recruiting has become one of the primary methods employed by HR departments to garner potential candidates for available positions within an organization. Talent Management systems typically encompass:

  • analyzing personnel usage within an organization
  • identifying potential applicants
  • recruiting through company-facing listings
  • recruiting through online recruiting sites or publications that market to both recruiters and applicants.

The significant cost incurred in maintaining an organized recruitment effort, cross-posting within and across general or industry-specific job boards and maintaining a competitive exposure of availabilities has given rise to the development of a dedicated Applicant Tracking System, or 'ATS', module.

Training

The training module provides a system for organizations to administer and track employee training and development efforts. The system, normally called a Learning Management System (LMS) if a stand-alone product, allows HR to track the education, qualifications, and skills of employees, as well as outlining what training courses, books, CDs, web-based learning, or other materials are available to develop particular skills. Courses can then be offered in date-specific sessions, with delegates and training resources being mapped and managed within the same system. Sophisticated LMSs allow managers to approve training, budgets, and calendars alongside performance management and appraisal metrics.

Employee Self-Service

The Employee Self-Service module allows employees to query HR-related data and perform some HR transactions over the system. Employees may query their attendance record from the system without asking HR personnel for the information. The module also lets supervisors approve overtime (O.T.) requests from their subordinates through the system without placing the burden of the task on the HR department.

Reports

The Reports module provides customized reporting according to employees' individual needs. Any number of reports can be defined by selecting from a range of search criteria and report fields. Report definitions can be saved to avoid repeating this task. Once a report definition is saved, the report can be generated by providing the required criteria data.

Many organizations have gone beyond the traditional functions and developed human resource management information systems, which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, health, safety and security, while others integrate an outsourced Applicant Tracking System that encompasses a subset of the above.

Advantages of HRM Systems

While the HR modules described above each provide benefits to the HR department itself, the applications also aid the enterprise as a whole.

HRM systems convert human resources information into a digital format, allowing that information to be added to the knowledge management systems of the enterprise. As a result, HR data can be integrated into the larger Enterprise Resource Planning (ERP) systems of the enterprise.

This data can prove invaluable in analysing enterprise-wide resource usage. Data related to the time usage of the workforce can enhance the decision-making abilities of management, allowing the HR department to form an integral part of strategy formation for the enterprise as a whole.

E-HRM

E-HRM is the planning, implementation, and application of information technology for both networking and supporting at least two individual or collective actors in their shared performing of HR activities.

E-HRM is not the same as HRIS (Human Resource Information System), which refers to ICT systems used within HR departments. Nor is it the same as V-HRM, or Virtual HRM, which Lepak and Snell define as "...a network-based structure built on partnerships and typically mediated by information technologies to help the organization acquire, develop, and deploy intellectual capital."

E-HRM is in essence the devolution of HR functions to management and employees. They access these functions typically via intranet or other web-technology channels. The empowerment of managers and employees to perform certain chosen HR functions relieves the HR department of these tasks, allowing HR staff to focus less on the operational and more on the strategic elements of HR, and allowing organizations to lower HR department staffing levels as the administrative burden is lightened. It is anticipated that, as E-HRM develops and becomes more entrenched in business culture, these changes will become more apparent, but they have yet to be manifested to a significant degree.

Types

There are three tiers of E-HRM, described as:

  • Operational
  • Relational
  • Transformational

Operational

Operational E-HRM is concerned with administrative functions, such as payroll and employee personal data.

Relational

Relational E-HRM is concerned with supporting business processes by means of training, recruitment, performance management and so forth.

Transformational

Transformational E-HRM is concerned with strategic HR activities such as knowledge management and strategic re-orientation.

An organization may choose to pursue E-HRM policies at any of these tiers to achieve its HR goals.

Purpose

E-HRM is seen as offering the potential to improve services to HR department clients (both employees and management), improve efficiency and cost-effectiveness within the HR department, and allow HR to become a strategic partner in achieving organizational goals.

