Thursday 17 January 2013

About SQL Loader



The SQL*Loader utility is explained below with its different options, along with examples.
Overview:
SQL*Loader is an Oracle utility used to load data into a table from a data file containing the records to be loaded. SQL*Loader takes a data file, as well as a control file, to insert data into the table. When a control file is executed, it can create three files: a log file, a bad (reject) file, and a discard file.
  • The log file tells you the state of the tables and indexes and the number of logical records already read from the input data file. This information can be used to resume a load where it left off.
  • The bad (reject) file contains the records that were rejected because of formatting errors or because they caused Oracle errors.
  • The discard file contains the records that did not meet any of the loading criteria, such as the WHEN clauses specified in the control file. These records differ from rejected records.
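The bad and discard file names can be set explicitly in the control file. Below is a minimal sketch (the file and table names are illustrative, not from an actual system):

LOAD DATA
INFILE      'budget.dat'
BADFILE     'budget.bad'    -- records rejected because of formatting or Oracle errors go here
DISCARDFILE 'budget.dsc'    -- records failing the WHEN clause go here
APPEND
INTO TABLE budget
WHEN (1) != '#'             -- sample WHEN clause: discard lines beginning with '#'
FIELDS TERMINATED BY '|'
(ITEM_NUMBER, QTY, REVENUE, EXT_COST)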
Structure of the data file:
The data file can be in fixed record format or variable record format.
 Fixed Record Format looks like the sample below. In this case you give a specific position where the control file can expect each data field:
 7369 SMITH      CLERK        7902  12/17/1980         800                  
 7499 ALLEN      SALESMAN  7698  2/20/1981           1600    
 7521 WARD      SALESMAN  7698  2/22/1981           1250    
 7566 JONES      MANAGER   7839  4/2/1981             2975             
 7654 MARTIN    SALESMAN  7698  9/28/1981           1250    
 7698 BLAKE      MANAGER   7839  5/1/1981             2850             
 7782 CLARK      MANAGER   7839  6/9/1981             2450             
 7788 SCOTT      ANALYST    7566  12/9/1982           3000             
 7839 KING        PRESIDENT          11/17/1981         5000             
 7844 TURNER    SALESMAN  7698  9/8/1981            1500           
 7876 ADAMS     CLERK         7788  1/12/1983          1100             
 7900 JAMES      CLERK         7698  12/3/1981          950              
 7902 FORD        ANALYST     7566  12/3/1981          3000            
 7934 MILLER     CLERK         7782  1/23/1982          1300            
 Variable Record Format looks like the sample below, where the data fields are separated by a delimiter.
 Note: The Delimiter can be anything you like. In this case it is "|"
 1196700|9|0|692.64
 1378901|2|3900|488.62
 1418700|2|2320|467.92
 1418702|14|8740|4056.36
 1499100|1|0|3.68
 1632800|3|0|1866.66
 1632900|1|70|12.64
 1637600|50|0|755.5
Structure of a Control file:
Sample CTL file for loading a Variable record data file:
OPTIONS (SKIP = 1)   --The first row in the data file is skipped without loading
LOAD DATA
INFILE '$FILE'             -- Specify the data file path and name
APPEND                     -- Type of loading (INSERT, APPEND, REPLACE, TRUNCATE)
INTO TABLE "APPS"."BUDGET"   -- the table to be loaded into
FIELDS TERMINATED BY '|'           -- Specify the delimiter if variable format datafile
 OPTIONALLY ENCLOSED BY '"'   --the values of the data fields may be enclosed in "
TRAILING NULLCOLS     -- columns that are not present in the record treated as null
  (ITEM_NUMBER    "TRIM(:ITEM_NUMBER)", -- Can use all SQL functions on columns
  QTY                 DECIMAL EXTERNAL,
  REVENUE             DECIMAL EXTERNAL,
  EXT_COST            DECIMAL EXTERNAL TERMINATED BY WHITESPACE "(TRIM(:EXT_COST))"  ,
  MONTH           "to_char(LAST_DAY(ADD_MONTHS(SYSDATE,-1)),'DD-MON-YY')" ,
DIVISION_CODE    CONSTANT "AUD"  -- Can specify constant value instead of Getting value from datafile
   )
The OPTIONS clause precedes the LOAD DATA statement. It allows you to specify runtime arguments in the control file rather than on the command line. The following arguments can be specified using the OPTIONS parameter:
 SKIP = n -- Number of logical records to skip (Default 0)
 LOAD = n -- Number of logical records to load (Default all)
 ERRORS = n -- Number of errors to allow (Default 50)
 ROWS = n   -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional Path 64, Direct path all)
 BINDSIZE = n -- Size of conventional path bind array in bytes (System-dependent default)
 SILENT = {FEEDBACK | ERRORS | DISCARDS | ALL} -- Suppress messages during run
                (header, feedback, errors, discards, partitions, all)
 DIRECT = {TRUE | FALSE} --Use direct path (Default FALSE)
 PARALLEL = {TRUE | FALSE} -- Perform parallel load (Default FALSE)
The LOAD DATA statement is required at the beginning of the control file.
INFILE: The INFILE keyword is used to specify the location of the data file (or files).
INFILE * specifies that the data is found in the control file itself and not in an external file. INFILE '$FILE' can be used to pass the file path and file name as a parameter when the loader is registered as a concurrent program.
 INFILE '/home/vision/kap/import2.csv' specifies the file path and the file name.
 Example where datafile is an external file:
 LOAD DATA
 INFILE   '/home/vision/kap/import2.csv'
 INTO TABLE kap_emp
 FIELDS TERMINATED BY ","
( emp_num, emp_name, department_num, department_name )
 Example where datafile is in the Control file:
LOAD DATA
INFILE *
INTO TABLE kap_emp
FIELDS TERMINATED BY ","              
( emp_num, emp_name, department_num, department_name )
BEGINDATA
 7369,SMITH,7902,Accounting
7499,ALLEN,7698,Sales
 7521,WARD,7698,Accounting
7566,JONES,7839,Sales
 7654,MARTIN,7698,Accounting
 Example where the file name and path are sent as a parameter when registered as a concurrent program:
LOAD DATA
INFILE '$FILE'
INTO TABLE kap_emp
FIELDS TERMINATED BY ","              
( emp_num, emp_name, department_num, department_name )
TYPE OF LOADING:
 INSERT   -- Use INSERT if the table you are loading is empty.
 APPEND   -- If data already exists in the table, SQL*Loader appends the new rows to it. If data doesn't already exist, the new rows are simply loaded.
 REPLACE  -- All rows in the table are deleted and the new data is loaded.
 TRUNCATE -- All existing rows are removed with the SQL TRUNCATE command before the new data is loaded.
INTO TABLE is required to identify the table to be loaded. In the example above, INTO TABLE "APPS"."BUDGET", APPS refers to the schema and BUDGET is the table name.
FIELDS TERMINATED BY specifies how the data fields are terminated in the data file (whether the file is comma delimited, pipe delimited, etc.).
OPTIONALLY ENCLOSED BY '"' specifies that data fields may also be enclosed in quotation marks.
TRAILING NULLCOLS tells SQL*Loader to treat any relatively positioned columns that are not present in the record as null columns.
Loading a fixed format data file:
 LOAD DATA
 INFILE 'sample.dat'
 INTO TABLE emp
(      empno         POSITION(01:04)   INTEGER EXTERNAL,
       ename          POSITION(06:15)   CHAR,
       job            POSITION(17:25)   CHAR,
       mgr            POSITION(27:30)   INTEGER EXTERNAL,
       sal            POSITION(32:39)   DECIMAL EXTERNAL,
       comm           POSITION(41:48)   DECIMAL EXTERNAL,
       deptno         POSITION(50:51)   INTEGER EXTERNAL)
Steps to run SQL*Loader from UNIX:
At the prompt, invoke SQL*Loader as follows:
      sqlldr USERID=scott/tiger CONTROL=<control file name> LOG=<log file name>
 SQL*Loader loads the tables, creates the log file, and returns you to the system prompt. You can check the log file to see the results of the run.
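For example, to load the BUDGET table using the control file shown earlier (the file names here are illustrative):

      sqlldr USERID=scott/tiger CONTROL=budget.ctl LOG=budget.log BAD=budget.bad DATA=budget.dat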
Register as concurrent Program:
 Place the Control file in $CUSTOM_TOP/bin.
 Define the Executable. Give the Execution Method as SQL*LOADER.
 Define the Program. Add the Parameter for FILENAME.
 Skip columns:
You can skip columns in the data file using the FILLER keyword.
LOAD DATA
INFILE '$FILE'                 -- data file (illustrative)
INTO TABLE emp                 -- target table (illustrative)
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
 name   FILLER,                -- FILLER: read from the file but not loaded
 empno,
 sal
)
Here the column NAME is read from the data file but skipped during the load.

Thursday 27 December 2012

An Introduction to Basics of Interfaces in Oracle Apps.

Interfaces in Oracle Application: An Introduction

What are Interfaces?
·         Interfaces are used in Oracle Applications to integrate external systems and for data conversion.
·         They are mainly used either to transfer data from Oracle Applications to a flat file, or to transfer data from a legacy system into Oracle Applications.
·         They are used extensively at the time of data conversion from legacy/old systems to a fresh implementation of Oracle Applications.
·         They are also used at regular intervals when data is transferred from other live systems that are not part of the Oracle Applications implementation.
·         Oracle provides flexible tools in the form of interface programs to import master and transactional data such as Customers, Invoices, and Sales Orders from external systems into Oracle Applications.
Types of Interfaces
There are two major types of Interfaces:

·         Inbound Interface : These interfaces are used to transfer data from external systems to Oracle Applications.
·         Outbound Interface :  These interfaces are used to transfer data from Oracle Applications to external systems.
Two other distinctions of Interfaces:

·         Open Interface: If the interface logic is provided by Oracle Applications, it is called an Open Interface.
·         Custom Interface: If the interface logic needs to be developed by the implementation team, it is called a Custom Interface.
Interface Components

Open Interface Logic
·         First, the data from the source application is loaded into a database table (called the interface table).
·         Then the provided validation program logic validates whether each record is correct.
·         If validation fails, the errors are transferred into another table (called the error table).
·         If validation succeeds, the correct records are transferred, through a process, into the destination application table.
Components of an Interface
a] Source Application:
You obtain data from a source application to pass on to a destination application for further processing and/or storage.
b] Source Data Issues:
Type of file, Size, Frequency of upload, Record Length (Variable or fixed), Delimiter, Datatype for each field, Any unwanted data, Naming convention and uniqueness of file, Location of the file, Access on the file.
c] Destination Application:
You send data to a destination application so that the application can perform further processing and/or storage.
d] Interface Table:
For inbound interfaces, the interface table is the intermediary table where the data from your source application temporarily resides until it is validated and processed into the destination application.
e] Identifier Columns:
Identifier columns uniquely identify rows in the interface table and provide foreign key references to both the source and destination applications.
f] Control Columns:

·         Control columns track the status of each row in the interface table as it is inserted, validated, rejected, processed, and ultimately deleted.
·         WHO columns are also control columns.
g] Data Columns:

·         Data columns store the data that is being converted.
·         Required columns store the minimum information needed by the destination application to successfully process the interface row.
h] Derived Columns:
Derived columns are created by the destination application from information in the required columns.
i] Optional Columns:
Optional columns are not necessarily required by the destination application, but can be used by the destination application for additional value-added functionality beyond the basics.
j] Error Table:

·         For inbound interfaces, the errors table stores all errors found by the validation and processing functions.
·         In some cases, the errors table is a child of the interface table. This allows each row in the interface table to have many errors, so that you can easily manage multiple errors at once.
·         In other cases, the errors are stored in a column within the interface table, which requires you to fix each error independently.
Developing an Interface
1] Identification:
Find out if there exists an Open Interface to carry out the functionality.
2] Creation of a Pre-Interface (Staging) Table:
Create a table in the format of the data file; its contents can be cleaned up ("pruned") so that data as clean as possible is loaded into the interface table.
3] Load data into Pre-Interface table:
SQL*Loader can be used to load the flat file into the pre-interface table.
4] Validate data in the Pre-Interface table:
Basic validation of the data loaded into the pre-interface table can be carried out, for example (a sample SQL sketch follows this list):

·         Checking for NULL values in required columns
·         Checking for foreign key and Quick Code values
·         Duplicate validation
·         Business Rule validation
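As a rough sketch of these checks, assuming a hypothetical staging table XX_EMP_STG with STATUS and ERROR_MESSAGE columns, the validations can be written as plain SQL updates:

-- Required-column check
UPDATE xx_emp_stg
SET    status        = 'ERROR',
       error_message = 'Employee number is required'
WHERE  emp_num IS NULL;

-- Duplicate check
UPDATE xx_emp_stg s
SET    status        = 'ERROR',
       error_message = 'Duplicate employee number'
WHERE  EXISTS (SELECT 1
               FROM   xx_emp_stg d
               WHERE  d.emp_num = s.emp_num
               AND    d.ROWID  <> s.ROWID);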
5] Mapping the values:
Generated fields in Oracle Applications can be mapped in this step to either default values or sequences.
6] Load data into Interface table:

·         Once the data is as clean as you can get it, it can be inserted into the interface table.
·         At this point, certain columns that are necessary in Applications but not found in the legacy system, such as the WHO columns, need to be populated accordingly (see the sketch below).
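A minimal sketch of this step, again with hypothetical table names; the WHO columns are populated from FND_GLOBAL, which holds the current Applications user context:

INSERT INTO xx_emp_interface
       (emp_num, emp_name, department_num,
        created_by, creation_date,
        last_updated_by, last_update_date, last_update_login)
SELECT emp_num, emp_name, department_num,
       fnd_global.user_id, SYSDATE,                          -- WHO columns
       fnd_global.user_id, SYSDATE, fnd_global.login_id
FROM   xx_emp_stg
WHERE  status = 'VALIDATED';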
7] Run the interface program
8] Check for Errors
9] Report on the Interface



ABOUT INTERFACES:
In Oracle Apps, interfaces are generally tables that act as a medium to transfer data from one module to another, or from a legacy system into Oracle Applications. There are 352 such tables provided by the Oracle package, and each module has its own interface tables.
A typical path to transfer the data from a legacy system to Oracle Apps:

[Diagram: Legacy System -> Transfer Program -> Interface Table -> Import Program -> Oracle Apps Base Tables]

What is Interfacing?
It is the process of converting records from one format to another. The main components of interfacing are:
• Transfer Program
• Interface Table and
• Import Program

A] Transfer Program:
If the source module's data is implemented in Oracle Applications, the transfer programs are integrated with the package. If the source module is implemented in an external system (i.e., other than Oracle Applications), we have to develop our own transfer programs. These transfer programs are generally developed using PL/SQL, Java, or SQL*Loader.
What do they do?

·         They map the columns of the source table to the columns of the interface tables.
·         They perform row-level and column-level validations.
·         They transfer the data from the source to the interface table.

B] Interface Tables:
The Interface tables basically have 4 types of columns.

1.     Mandatory Columns.
2.     Conditionally Required Columns.
3.     Optional Columns.
4.     Internal Processing Columns.

Mandatory Columns:
These are the main columns that are required in the destination tables (i.e., the Oracle Application module tables). Only with the help of the mandatory columns can the Import Program convert the records from source to destination.

Conditionally Required Columns:
The values for these columns are based on the values of the mandatory columns. For example:
If you are converting foreign currency transactions to INR, it is compulsory to provide conditionally required columns like the currency conversion rate, conversion time, and conversion date.

Optional Columns:
These are used when a client wants to transfer some additional information from source to destination. They are based on the client's requirements.

Internal Processing Columns:
Status and error message columns are called internal processing columns. They are specific to the interface table, and they are used by the Import Program to update the status and error message if a record fails validation while being imported from the interface table to the destination table.

C] Import Program:
For all interface tables, the Oracle Application package provides import programs. These are generally registered with the destination modules. The import programs are designed using PL/SQL, Java, C, C++, etc.
What do they do?

·         They map the columns of the interface table to one or more columns in the destination table.
·         They perform row-level and column-level validation.
·         They import the data from the interface table to the destination tables if the records are validated successfully.
·         They delete all successfully validated records from the interface table.
·         If a record fails validation, the Import Program updates the status and error message columns of the interface table.
Interface Vs. Application Program Interface (API):
Interfaces are used to transfer data from a legacy system to the Oracle Application system, whereas an API is used to convert data from one form to another within an Oracle Application module.



Data Migration vs. Data conversion
When we need to enter data into Oracle Apps, the following are a few techniques:
• The data can be entered using the application screens (for small amounts of data, such as creating a PO or entering sales orders using the Oracle Apps screens).
• The data can be entered using Oracle's Open System Interface (for regular operations, e.g. moving data from one module to another).
• The data can be stored in the database tables directly. (This is not recommended by Oracle and can be very risky: on any event, data is stored in many tables and should be validated before insertion; direct inserts can cause data integrity and inconsistency problems, and may sometimes corrupt the data completely.)
• Third-party tools like Data Loader can be used. (These are suitable when the data volume is relatively small, say 25-200 records, because such a tool captures keystrokes and works like manually entering the data into an Oracle form, but much faster since the process is automated.)

What is the need of Migration/Conversion?
Migration/conversion is required when we are upgrading from one version to another (e.g. Oracle Apps 11.5.7 to 11.5.10) or moving data from some legacy system to Oracle Apps. There will be bulk data (sometimes millions of records or more) that needs to be moved from one system to another, and before the move the data should be validated so that only valid records are entered into Oracle Apps.
If both systems (target and source) do not have the same structure for the data (the tables are not the same, the table structures differ, or the data stored in the database differs), the data needs to be translated (e.g. upgrading from Oracle 11i to R12, where the table structures are not the same). In that case we call it conversion (any kind of translation of the source data to make it suitable for the target system); otherwise, migration.


What is Migration?


Migration of data means moving the data from one system to another using interface programs/APIs, where both systems have the same data structure.
Process of migrating data:
• Identify the data to be imported into the new system (business requirement).
• Extract the data into a flat file/staging table.
• Load the data into the interface table (using SQL*Loader, a DB link, or other means) after validation (if loading the data using an interface).

What is Conversion?
Conversion of data means translating the data to suit the target system (the data should be formatted according to the target system) and then moving the translated data using interface programs/APIs.
• Identify the data to be imported into the new system (business requirement).
• Extract the data into a flat file/staging table.
• Translate/convert/format the data.
• Load the data into the interface table (using SQL*Loader, a DB link, or other means) after validation (if loading the data using an interface), then launch the standard interface concurrent program to load the data into the Oracle Apps base tables.
• If using an API, fetch the data, validate it, and then call the API to import the data.


How conversion/Migration and interface differ?


There are a good number of parameters on which they can be categorized. Here are a few of them:
Frequency
• Conversions/migrations are a one-time event.
• Interfaces are ongoing.
Occurrence in the project timeline
• Conversions/migrations are executed before production.
• Interfaces are executed during production.
Manner of execution
• Conversions/migrations are batch processes.
• Interfaces may be batch or real time.
Complexity
• Conversion/migration complexity depends entirely on the data-mapping activity.
• Coordinating with other systems makes interfaces more complex.
Maintenance
• Maintenance of an interface is a somewhat cost-intensive task.
 ***************************************************************************************
During my work on a Projects conversion, I came across this very nice document. It is very informative, so I am sharing it with you all. Happy reading!
Main Article:
In this article I will explain the general steps involved in any conversion/data migration for the Oracle Projects module.
At the end of this article, you would have learned:

·         Stages in Oracle Projects Conversion.
·         How to setup the Oracle Projects module for the conversion/Data Migration.
·         Options for the Load (flat file, csv, or direct Loads).
·         Oracle Projects AMG APIs needed to perform the Conversion.
·         Testing the Conversion Process.
·         Verifying the Conversion Process.


Scenario:
Company 'XYZ' has been using project management and accounting software for years. Management has decided to move from the existing system to the Oracle Projects module because of its vast functionality and integration with the other financial modules.


How to deal with it? 
Now the question arises: what data should be migrated from the legacy system to Oracle Projects? Well, it depends upon the type of projects.
If the projects are used for internal administration and tracking of costs, you may want to migrate the existing projects, tasks (the work breakdown structure), cost budgets, and costs (timecards, employee expenses, miscellaneous expenses).
If the projects are used for billing clients for work done (typical contract projects), then you may want to migrate revenue, agreements (contracts), revenue budgets, and invoices in addition to the above data.
Once the decision is made as to which data to migrate, the next step is setting up Oracle Projects for the conversion, which we will see in detail shortly. Once the system has been set up, the technical elements (programs, concurrent processes, etc.) have to be created in order to migrate the data from the legacy system to Oracle Projects.


Stages in Oracle Projects Conversion
1.     The first stage is to obtain the data that needs to be migrated to Oracle Projects from the legacy system.
2.     The second stage is the most crucial step in the process: massaging the data according to the Oracle Projects conversion interface (programs built using the AMG APIs). This step is indeed time consuming and labor intensive (massaging the data, rectifying errors, etc.), but completing it successfully pays dividends in the subsequent stages.
3.     The third stage is uploading the data obtained from the legacy system into the staging area (staging tables created to hold the data temporarily until it is migrated into Oracle Projects). Once the data is uploaded to the staging tables, the programs built for the migration (we will see how to build these programs in detail) validate the staging table data to confirm that it complies with the Projects conversion program. (The AMG APIs used in the programs need the data in a certain format, and the data must also be validated against the Oracle Projects setup. For instance, when migrating cost or hours from the legacy system to Projects, we might need to validate that the expenditure type is already set up in Oracle Projects; if it is not, the program/APIs will throw an error. So it is always better to capture these kinds of scenarios in the validation step of the migration.)
The second and third stages are repeated until you get valid data from the legacy system that can be migrated into Oracle Projects without any errors or issues.
The fourth stage is the actual migration process, which migrates the data from the staging tables to the Oracle Projects base tables. Once this step is done, the projects, tasks, and other data are available for use in Oracle Projects.
Before going through the stages, we will look at some of the basic setups that need to be done in Oracle Projects. 


Oracle Projects Setup For Conversion


Product Code: 
The product code needs to be set up in Oracle Projects in the AMG Gateway - Source Products form under the Oracle Projects Implementation Super User responsibility. This setup is mandatory, since this product code needs to be passed when using the Oracle Projects AMG APIs.

Project Types and Project Templates:
The project types and project templates for the conversion projects need to be set up. This is a mandatory setup since, while migrating projects, we need to tell the APIs which project template/type the projects use.
For contract projects, set up a Contract project type template. For administrative or internal projects, set up an Indirect project type template.
If you are migrating cost and revenue budgets, then the plan types need to be attached to the templates in order to create the budgets for the migrated projects.


Implementation Option Setup:

Project Numbering: This implementation option is set to 'Automatic' by default, which means that when creating projects in Oracle Projects, the project number is derived automatically and users are not required to provide project numbers. This option is best suited to creating projects within Oracle Projects. But when migrating projects from third-party systems, there is an option to migrate the projects with the same project numbers as in the legacy system. This is not mandatory but is recommended, since it makes it easy to refer back to the projects in the source system using the project numbers.
In order to pass the project number to the migration program, this implementation option needs to be set to 'Manual'. Once the migration is done, this setup can be reverted to 'Automatic'.

Setup Transaction Source:
The Transaction Source needs to be setup in Oracle Projects in the Transaction Sources form in Oracle Projects Implementation Super User Responsibility. This is a mandatory setup for the Costs/hours migration from the legacy system to Oracle Projects. We need to tell the migration API’s what the source system is and how the data is handled when it is imported to Oracle Projects.

Setup Expenditure Types:
Expenditure Types are needed to categorize the cost/hours when it is imported to Oracle Projects. This is a mandatory setup for Cost/hours migration. We need to tell the system which expenditure type the cost/hour belongs to.

Setup Employee Cost Rates:
Setting up cost rates for employees is not mandatory. But if you need to cost the hours migrated into the system, the labor cost distribution process in Oracle Projects does need the rates set up in order to calculate the costs.
If you are migrating costs directly from the legacy system instead of hours, this step is not needed. Ideally, though, the cost rates are required in a general production scenario where employees/contractors enter their timecards.
You can setup job rate schedule, employee level rate schedule or employee level overrides. Alternatively, the costing client extension can be setup to calculate the cost according to the business scenario.
Refer to the Oracle Projects User Guide for how to setup the employee cost rates.

First Stage: Obtain Data from Legacy System
The first stage deals with obtaining the data from the legacy system in the desired format. The data can be obtained as a flat text file: comma-separated (CSV), tab-delimited, or using any other delimiter. Generally, tab-delimited files are recommended, since comma-separated files behave strangely when there is a comma in the data itself.
If there is a database link between the source legacy database and the Oracle Projects database, the data can be obtained directly using SELECT statements against the source DB from within the Oracle Projects DB. But this method is not preferred, as it is performance intensive when selecting large volumes of data over the network.
For projects migration, generally two files are obtained: one for the projects data and the other for the tasks data.
For transaction migration, a single file with all the cost/hours data is enough.
For cost/revenue budget migration, a single file with all the budgets data is enough.
Create a SQL*Loader concurrent program to upload the obtained data into the Oracle staging tables, as sketched below.
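A sketch of such a control file, assuming a hypothetical staging table XX_PROJECTS_STG and the tab-delimited projects file recommended above (X'09' is the tab character):

OPTIONS (SKIP = 1)
LOAD DATA
INFILE '$FILE'
APPEND
INTO TABLE xx_projects_stg
FIELDS TERMINATED BY X'09'
TRAILING NULLCOLS
(project_reference,
 project_number,
 project_name,
 project_manager,
 start_date       DATE "DD-MON-YYYY")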
It is also best practice to create a control table in the staging area to drive the data migration. For example, your control table might look like the one below:

Parameter Type      Parameter            Parameter Value
--------------      ---------            ---------------
Template            Contract             Contract_Template
Template            Indirect             Indirect_Template
Expenditure Type    Hours                Labor
Expenditure Type    Expenses             Employee_Expense
Transaction         Transaction Source   Legacy1
Product Code        Product code         LEGACY1
Project             Publish Workplan     Yes
Project             Baseline Workplan    Yes
Cost Budget         Baseline             Yes
Revenue Budget      Baseline             Yes
This control table is consulted by the migration program. Whenever there is a change in the templates or expenditure types, it is easy to change this control table instead of the code. So the advice is: never hard-code any values in the code; always handle them through the control table.
It is also better to have a form based on this table, so that the table data can be changed from the front end.
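A minimal sketch of the control table and a typical lookup against it (all names are illustrative):

CREATE TABLE xx_pa_conv_control
( parameter_type   VARCHAR2(30),
  parameter        VARCHAR2(30),
  parameter_value  VARCHAR2(60)
);

-- e.g., fetch the template to use for contract projects
SELECT parameter_value
FROM   xx_pa_conv_control
WHERE  parameter_type = 'Template'
AND    parameter      = 'Contract';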

Second Stage & Third Stage: Validate and Format the Data
I am coupling the second and third stages because they are interdependent. Validating the data is very important: it prevents some of the time-consuming tasks in the actual migration, such as troubleshooting errors caused by invalid data.
Below are some of the key validations that need to be done before doing the actual migration. 

Projects/Tasks Migration:
Though the projects and tasks are in different staging tables, the migration of projects/tasks is done using a single program. We could always migrate projects and tasks separately, but there is a performance issue when adding tasks one by one to each project. So it is always better to create projects and tasks together to take advantage of the bulk loading of tasks.


Project/Task – Setup Validations: 

·         Validate the Product code is setup.
·         Validate if the required Project Templates are setup.
·         Validate if the Project Numbering is set to ‘Manual’ for creating projects with the   predefined project numbers.


Project/Task Data Validations:

·         Validate that the project name is unique; a project with the same name should not exist in Oracle Projects.
·         Validate that the project number is unique; a project with the same number should not exist in Oracle Projects.
·         Validate that the project long name is unique; a project with the same long name should not exist in Oracle.
·         Validate the project reference. (This field is mandatory in the projects file; it can be the project identifier or project number of the source project, but it has to be unique in the source system as well. This field needs to be populated on all the converted projects in order to track back and identify each project in the source system.)
·         The project name and project number can be at most 30 characters long, the project long name at most 240 characters, and the project description at most 250 characters. The project description is not a mandatory field when creating a project.
·         Check that the project has a project manager, and that the project manager is active in Oracle HR with an assignment and a job assigned. The project manager also has to be active from the project start date; otherwise you cannot create a project with that project manager.
·         In the case of contract projects, check that the customer of the project is a valid, defined customer with a valid Bill To site assigned.
Apart from the above necessary validations, you may have to validate additional data, such as Projects DFF data that you want to populate with your custom field values. For example, you may want to populate the project cost center value in Segment1 of the Project DFF. In that case you have to validate that the cost center value is a valid value for Segment1 (sometimes you may have attached an LOV to that segment, in which case the cost center has to be validated against the LOV values). A sample uniqueness check is sketched below.
For tasks, the values for task types, work types, and task managers have to be validated. Task types and work types have to be defined in Oracle Projects before tasks with those values are migrated; otherwise the tasks will not be created.
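For instance, the project number uniqueness check might look as follows against the hypothetical staging table XX_PROJECTS_STG (PA_PROJECTS_ALL.SEGMENT1 holds the project number):

UPDATE xx_projects_stg s
SET    status        = 'ERROR',
       error_message = 'Project number already exists in Oracle Projects'
WHERE  EXISTS (SELECT 1
               FROM   pa_projects_all p
               WHERE  p.segment1 = s.project_number);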


Cost/hours validation
Setup Validations:

·         Validate if the Transaction source is setup.
·         Validate if the Expenditure type is setup.

Data Validations:

·         Check if the hours value is greater than zero.
·         Check if the employee number is valid in HR and is active on the timecard date.
·         If the transaction source is set up as costed, then the cost has to be provided while migrating the transactions. If the transaction source is set up as accounted, then the code combination IDs need to be provided when migrating the transactions.
Apart from the above validations, you may want to validate the additional DFF segments that you are going to populate for that expenditure item.


Budgets Validation
Setup Validations:

·         Validate the project template has the required financial plan type attached. Financial plans are the project management versions of the Budget types in the Forms applications.
·         Budget amount has to be greater than zero.
·         There is no need to create revenue budgets if the ‘Baseline funding without budget’ option is checked at the project or project type level. Whenever the funding is created for the contract project and is baselined, the revenue budget is automatically created and baselined. If that option is not checked, it is necessary that a revenue budget with the same amount as the funding amount needs to be created and baselined in order to baseline the funding.
Data Validations:
The cost budget for the project can be from the source system’s budgeting system. If there is no budgeting in the source system, a cost budget with the total cost of the project can be created in Oracle Projects.
For revenue budgets, it has to be equal to the funding amount of the project. If there is no funding amount in the source system, the sum of the revenue amount can be the funding amount and it is the revenue budget amount as well.


Agreements and Funding Validation
Data Validations:

·         Agreement type should be valid.
·         Agreement Amount should be greater than zero.
·         Hard Limits can be setup according to business rules. If the hard limits are setup for revenue and invoice then the revenue and invoice has to be within the funding limits for that project.
·         Funding amount has to be within the Agreement amount.
·         If the funding is at the top task level, then 'Customer at top task' has to be enabled and the customer should have been assigned at the top task.
·         Funding amount should be same as the Revenue budget amount which in general will be same as the total revenue amount for that project. If there are no hard limits then the revenue or invoice can exceed the funding amounts.
Records which fail the above validations have to be rectified before doing the actual migration.


Revenue and Invoice Validations
Data Validations:

·         The project/task should already have been converted to Oracle.
·         The event amount should be non-zero.
·         For a revenue event, the revenue amount should be populated.
·         For an invoice event, the invoice amount should be populated.
Generally for a project, the total revenue is obtained from the source system and is created as a revenue event for that project. The total invoiced amount is calculated per project and an invoice event is created for each project.
Once these events are created successfully in the system, the Generate Draft Revenue process and Generate Draft Invoice process needs to be run so that the desired revenue and invoices are generated.
The revenue and invoice automatic approval and release client extensions can be used to automatically release the revenue when it is generated and approve/release invoices respectively. 
If the revenue amounts are already interfaced to General Ledger (GL) through a different interface, then uncheck the 'Interface Revenue to GL' option in the implementation options and run the 'Interface Revenue to GL' process in Oracle Projects. This will mark the flags on the revenue records as accepted in GL, even though the revenue is not actually interfaced. Once this is done, revert the implementation option to its original state.
If the invoice amounts are already interfaced to Accounts Receivable (AR) by other means, it is not desirable to interface the projects invoices to AR again, since that would double the invoice amounts in AR. In this case there is no implementation option like the one for revenue, so a script can be created to update the invoices' flags to the Accepted state. Alternatively, the generated projects invoices can be interfaced to AR, tied back to Oracle Projects, and then deleted in AR.

Stage 4: Actual Migration
Once the data is validated, the conversion program is executed to migrate the data into the Oracle Projects base tables. There might still be errors from the AMG APIs, which have to be analyzed and resolved. But the chance of such AMG API issues is below 10% in any migration (based on my experience with Oracle Projects conversions).
Below is a table showing each conversion and the AMG APIs used for it:


Conversion                       AMG APIs
----------                       --------
Projects/Tasks Conversion        PA_PROJECT_PUB.CREATE_PROJECT
Budgets Conversion               PA_BUDGET_PUB.CREATE_DRAFT_BUDGET, PA_BUDGET_PUB.BASELINE_BUDGET
Agreements                       PA_AGREEMENT_PUB.CREATE_AGREEMENT
Funding                          PA_AGREEMENT_PUB.ADD_FUNDING
Revenue/Invoice                  PA_EVENT_PUB.CREATE_BILLING_EVENT
User Defined Attributes (UDA)    PA_PROJECT_PUB.LOAD_EXTENSIBLE_ATTRIBUTE
For transactions (cost/hours) migration, there is no API to create the expenditures in Oracle. The pa_transaction_interface_all table needs to be populated with the migration data, and once it is populated, the PRC: Transaction Import process needs to be run in Oracle Projects with the transaction source as a parameter. All invalid records need to be rectified in order to migrate all the transactions.
The rejected records can be found in the same interface table with the transfer_status_code of 'R'.
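For example, a quick query to list the rejected rows (a sketch; the selected columns are illustrative):

SELECT transaction_source,
       batch_name,
       expenditure_item_date,
       transaction_rejection_code
FROM   pa_transaction_interface_all
WHERE  transfer_status_code = 'R';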

Conversion Tips:
1.     Make sure the templates are defined properly and exactly the way they are needed. If projects are created from a wrongly defined template, it takes ages to rectify the converted projects.
2.     Create the conversion program to operate in two modes: Validate and Run. A concurrent process can be created with a parameter called Mode accepting Validate/Run, so the same concurrent program can be used to validate as well as to run the actual migration.
3.     It is good practice to have the source project ID / project number as a parameter to the projects conversion program. This allows us to test the conversion for a single project and validate the data for that project.
4.     The validation process can write the invalid records to the output file. Once the validation process completes, the output will have all the invalid records that need to be rectified.
5.     Create a separate concurrent program to report the status of an already running migration process, in terms of the number of records migrated, number of records rejected, etc. If the volume of migration data is huge, the conversion programs may run for hours; in those scenarios this concurrent program is helpful for finding the status of the migration.
6.     For transactions migration, the custom program written to populate the interface table can kick off the PRC: Transaction Import process and wait for its completion. Once the transaction import completes, the custom process can print the invalid records from the interface table to the output file.
7.     There are APIs to publish and baseline the workplans created as part of the projects migration, but these APIs need to be used with care: there are a lot of performance issues and bugs when using them.



Supplier Conversion in R12 
The vendor conversion program loads the supplier master, sites, and contacts data from data files into staging tables, validates the data, and then loads it into the interface tables; finally, the validated data is imported into the standard Oracle supplier tables using the standard Oracle supplier import programs.
Prerequisite setups are: payment terms, pay groups, CCIDs, supplier classifications, bank accounts, and employees (if employees have to be set up as vendors).
The Interface Tables are:

·         AP_SUPPLIERS_INT
·         AP_SUPPLIER_SITES_INT
·         AP_SUP_SITE_CONTACT_INT

AP_SUPPLIERS_INT:
This is the open interface table for AP Suppliers. It holds Supplier information which is loaded by the user for import. The columns in the table map to corresponding columns in the PO_VENDORS table. The Oracle Payables application uses this information to create a new Supplier record when the Supplier Open Interface Import program is submitted. Each row in the table will be identified by a unique identifier, the VENDOR_INTERFACE_ID.
Mandatory Columns:

·         VENDOR_INTERFACE_ID (ap_suppliers_int_s.NEXTVAL)- Supplier interface record unique identifier
·         VENDOR_NAME – Supplier name
Other important columns:

·         SEGMENT1 – Supplier Number
·         VENDOR_TYPE_LOOKUP_CODE – Supplier type
·         SHIP_TO_LOCATION_CODE – Default ship-to-location name
·         BILL_TO_LOCATION_CODE – Default bill-to-location name
·         TERMS_NAME – Payment terms name
·         TAX_VERIFICATION_DATE – Tax verification date(1099)
·         VAT_REGISTRATION_NUM – Tax registration number
·         ATTRIBUTE1 -15 – Descriptive Flexfield Segments
·         PAY_GROUP_LOOKUP_CODE – Payment group type
·         INVOICE_CURRENCY_CODE – Default currency unique identifier
·         PAYMENT_CURRENCY_CODE – Default payment currency unique identifier
·         NUM_1099 – Tax identification number (1099)
·         VAT_CODE – Default invoice tax code
·         HOLD_FLAG – Indicates whether or not the supplier is on purchasing hold
·         SUMMARY_FLAG – Key flexfield summary flag
·         ENABLED_FLAG – Key flexfield enable flag
·         EMPLOYEE_ID – Employee unique identifier if supplier is an employee
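A minimal insert sketch for the supplier header interface (the values are illustrative; TERMS_NAME and PAY_GROUP_LOOKUP_CODE must match your own setup):

INSERT INTO ap_suppliers_int
       (vendor_interface_id, vendor_name, segment1,
        vendor_type_lookup_code, terms_name, pay_group_lookup_code)
VALUES (ap_suppliers_int_s.NEXTVAL, 'ACME SUPPLIES', '100001',
        'VENDOR', 'NET 30', 'STANDARD');

COMMIT;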


AP_SUPPLIER_SITES_INT:
This is the open interface table for AP Supplier Sites. It holds Supplier Site information which is loaded by the user for import. The columns in the table map to corresponding columns in PO_VENDOR_SITES_ALL table. The Oracle Payables application uses this information to create a new Supplier Site record when the Supplier Sites Open Interface Import program is submitted. Each row in the table will be joined to the appropriate Supplier using the VENDOR_ID column.
Mandatory Columns:

·         VENDOR_SITE_INTERFACE_ID (ap_supplier_sites_int_s.NEXTVAL) – Supplier Site interface record unique identifier
·         VENDOR_SITE_CODE – Supplier Site name
Other important columns:

·         ADDRESS_LINE1 – First line of supplier address
·         ADDRESS_LINE2 – Second line of supplier address
·         ADDRESS_LINE3 – Third line of supplier address
·         CITY – City name
·         STATE – State name or abbreviation
·         ZIP – Postal code
·         COUNTRY – Country name
·         PHONE – Phone number
·         FAX – Supplier site facsimile number
·         SHIP_TO_LOCATION_CODE – Default ship-to-location name
·         BILL_TO_LOCATION_CODE – Default bill-to-location name
·         PAYMENT_METHOD_LOOKUP_CODE – Default payment method type
·         VAT_CODE – Invoice default tax code
·         PAY_GROUP_LOOKUP_CODE – Payment group type
·         TERMS_NAME – Payment terms name
·         INVOICE_CURRENCY_CODE – Default currency unique identifier
·         PAYMENT_CURRENCY_CODE – Default payment currency unique identifier
·         EMAIL_ADDRESS – E-mail address of the supplier contact
·         PURCHASING_SITE_FLAG – Indicates whether purchasing is allowed from this site
·         AUTO_TAX_CALC_FLAG – Level of automatic tax calculation for supplier
·         HOLD_ALL_PAYMENTS_FLAG – Indicates if Oracle Payables should place payments for this supplier on hold
AP_SUP_SITE_CONTACT_INT:
This is the open interface table for AP Supplier Site Contacts. It holds Supplier contact data. The columns in the table map to corresponding columns in PO_VENDOR_CONTACTS table. The Oracle Payables application uses this information to create a new Supplier Contact record when the Supplier Sites Open Interface Import program is submitted. Each row in the table will be joined to the appropriate Supplier Site using the VENDOR_SITE_CODE  and ORG_ID.
Mandatory Columns:

·         VENDOR_INTERFACE_ID – Supplier interface record unique identifier
·         VENDOR_CONTACT_INTERFACE_ID (AP_SUP_SITE_CONTACT_INT_S.NEXTVAL) – Vendor Contact Interface Identifier
·         VENDOR_SITE_CODE – Supplier Site name
Other important columns:

·         FIRST_NAME – Contact First name
·         LAST_NAME – Contact last name
·         AREA_CODE – Area code of contact phone number
·         PHONE – Contact phone number
·         FIRST_NAME_ALT – Alternate Supplier contact first name
·         LAST_NAME_ALT – Alternate Supplier contact last name
·         EMAIL_ADDRESS – Email address for the Supplier Site contact
·         FAX – Facsimile number for the Supplier Site contact
·         VENDOR_ID – Supplier unique identifier


Validations:

·         Vendor Number (Check for duplicate records in ap_suppliers table)
·         Vendor Name (Check for duplicate records in staging as well as in ap_suppliers table)
·         Terms Name (Check for proper record in ap_terms_tl table)
·         Pay Group (Check for proper record in fnd_lookup_values_vl table where lookup_type = ‘PAY GROUP’)
·         Employee Id (Check for proper employee record in per_all_people_f table)
·         Vendor Type (Check for proper record in po_lookup_codes table where lookup_type = ‘VENDOR TYPE’)
·         Vendor Site Code (Check for duplicate records in ap_supplier_sites_all table)
·         Country Code ( Check for proper country code in fnd_territories_vl table)
·         Payment Method (Check for proper payment method in iby_payment_methods_vl table)
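As a sketch, the vendor name check might look like this against a hypothetical staging table XX_SUPPLIER_STG:

UPDATE xx_supplier_stg s
SET    status        = 'ERROR',
       error_message = 'Vendor already exists in ap_suppliers'
WHERE  EXISTS (SELECT 1
               FROM   ap_suppliers a
               WHERE  UPPER(a.vendor_name) = UPPER(s.vendor_name));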


Interface programs:

1.     Supplier Open Interface Import
2.     Supplier Sites Open Interface Import
3.     Supplier Site Contacts Open Interface Import
The data inserted via these interfaces is automatically populated into the TCA tables.
Note: The AP_SUPPLIER_INT_REJECTIONS table contains rejection information for suppliers, sites, and contacts.

AP Invoice Interface
This interface is used to import vendor invoices from external systems into Oracle Applications.

Interface tables: 
1] AP_INVOICES_INTERFACE
This is the open interface table for importing AP invoices from external sources; it stores header information about the invoices. Invoice data comes from sources including:

·         EDI invoices from suppliers that are loaded through Oracle e-Commerce Gateway
·         Supplier invoices that are transferred through the Oracle XML Gateway
·         Invoices that are loaded using Oracle SQL*Loader
·         Lease invoices from Oracle Property Manager
·         Lease payments from Oracle Assets
·         Credit card transaction data that are loaded using the Credit Card Invoice Interface Summary
·         Expense Report invoices from Oracle Internet Expenses
·         Payment Requests from Receivables
·         Invoices that are entered through the Invoice Gateway.
There is one row for each invoice you import. The Oracle Payables application uses this information to create invoice header information when the Payables Open Interface Import program is submitted.
Data in the AP_INVOICES_INTERFACE table is used in conjunction with the AP_INVOICE_LINES_INTERFACE table to create Payables invoices, invoice lines, distributions, and scheduled payments. Data in this table can be viewed and edited using the 'Open Interface Invoices' window.

The Payables Open Interface program validates each record in this interface table selected for import, and if the record contains valid data then the program creates a Payables Invoice.

Important columns: 
INVOICE_ID (Required) : Unique identifier for this invoice within this batch. Same value should be populated in invoice’s lines in the AP_INVOICE_LINES_INTERFACE table to identify the data as belonging to the same invoice.
INVOICE_NUM (Required) :  Enter the invoice number that needs to be assigned to the invoice created in Payables from this record.
INVOICE_TYPE_LOOKUP_CODE (Optional) : Type of invoice: Credit or Standard.
INVOICE_DATE (Optional) : Date of the invoice. If you do not enter a value, the system uses the date you submit Payables Open Interface Import as the invoice date.
PO_NUMBER (Optional) : Purchase order number, from PO_HEADERS.SEGMENT1. This column needs to be populated if the invoice is to be matched with a purchase order.
VENDOR_ID & VENDOR_SITE_ID (Required) : VENDOR_ID is the unique identifier for a supplier, and VENDOR_SITE_ID is the internal supplier site identifier. The supplier of the invoice is derived from the value in one of the following columns in this table: VENDOR_ID, VENDOR_NUM, VENDOR_NAME, VENDOR_SITE_ID, or PO_NUMBER.
VENDOR_NUM & VENDOR_NAME (Optional) : Supplier number and name. You must identify the supplier by entering a value for one of the following columns in this table: VENDOR_ID, VENDOR_NUM, VENDOR_SITE_ID, VENDOR_SITE_CODE, or PO_NUMBER.
INVOICE_AMOUNT (Required) : Amount of the invoice.
INVOICE_CURRENCY_CODE (Optional)  : Currency code for the invoice. If you want to create foreign currency invoices, enter a currency code that is different from your functional currency.
EXCHANGE_RATE (Optional) :  This column is required if you enter a foreign currency code in the INVOICE_CURRENCY_CODE column and you enter User as the EXCHANGE_RATE_TYPE.
TERMS_ID (Optional) : Internal identifier for the payment terms.
DESCRIPTION (Optional) : Enter the description that you want to assign to the invoice created from this record.
SOURCE (Required) : Source of the invoice data. If you import EDI invoices from the Oracle EDI Gateway, the source is EDI Gateway. For invoices you import using SQL*Loader, use a QuickCode with the type Source that you have defined in the QuickCodes window in Payables.

2] AP_INVOICE_LINES_INTERFACE
This is the lines interface table for the AP invoice open interface, used in conjunction with the AP_INVOICES_INTERFACE table. AP_INVOICE_LINES_INTERFACE stores information used to create one or more invoice distributions. Note that one row in this table may create more than one invoice distribution during the import process.

Important columns: 
INVOICE_ID (Required) :Enter the INVOICE_ID of the corresponding invoice in the AP_INVOICES_INTERFACE table. 
INVOICE_LINE_ID : This value is not required. You can enter a unique number for each invoice line of an invoice.
LINE_NUMBER (Optional) : You can enter a unique number to identify the line.
LINE_TYPE_LOOKUP_CODE (Required) : Enter the lookup code for the type of invoice distribution that you want Payables Open Interface Import to create from this record. The code you enter must be ITEM, TAX, MISCELLANEOUS, or FREIGHT. These lookup codes are stored in the AP_LOOKUP_CODES table.
AMOUNT (Required) : The invoice distribution amount. If you are matching to a purchase order, AMOUNT = QUANTITY_INVOICED x UNIT_PRICE. If the total amount of all the invoice distributions does not equal the amount of the invoice with the same INVOICE_ID, Payables Open Interface Import will reject the invoice.




Concurrent program:  
Payables Open Interface Import
Parameters:

Source: Choose the source of the invoices from the list of values. Use EDI Gateway, Credit Card, or a Source type QuickCode you defined in the Payables QuickCodes window.
Group: To limit the import to invoices with a particular Group ID, enter the Group ID. The Group must exactly match the GROUP_ID in the Payables Open Interface tables.

Batch Name: Payables groups the invoices created from the invoices you import and creates an invoice batch with the batch name you enter. You can enter a batch name only if you have enabled the Use Batch Control Payables option, and if you have enabled the Use Batch Control Payables option, you must enter a batch name. If you use a batch name and some invoices are rejected during the import process, you can import the corrected invoices into the same batch if you enter the exact batch name during the subsequent import.

Hold Name: If you want to place all invoices on hold at the time of import, enter an Invoice Hold Reason. You can define your own hold reasons in the Invoice Approvals window.

Hold Reason: Payables displays the Invoice Hold Reason Description.

GL Date: If you want to assign a specific GL Date to all invoices, enter a GL Date. If you do not enter a value here, the system will assign a GL Date based on the GL Date Payables option.

Purge: Enter Yes if you want Payables to delete all successfully imported invoice records that match the Source and Group ID of this import. Payables does not delete any invoice data for which it has not yet created invoices. If you want to purge later, you can use the Payables Open Interface Purge Program.


Steps:
1] First, let's get a unique number to use as the INVOICE_ID for the invoice to be imported. This ensures that each invoice has a unique INVOICE_ID.

  • select ap_invoices_interface_s.nextval from dual;
2] Then, create records in the Invoice Open Interface tables through SQL queries.


Insert into AP_INVOICES_INTERFACE, populating columns such as the following (a sample insert sketch follows this list):

  1. invoice_id, 
  2. invoice_num,
  3. vendor_id,
  4. vendor_site_id, 
  5. invoice_amount, 
  6. invoice_currency_code, 
  7. invoice_date, 
  8. description, 
  9. pay_group_lookup_code, 
  10. source, 
  11. org_id
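A minimal sketch of such inserts, using the sequence value obtained in step 1 (shown here as 10001). All other values are illustrative: SOURCE must be a valid Source QuickCode, and VENDOR_ID, VENDOR_SITE_ID, and ORG_ID must exist in your instance.

INSERT INTO ap_invoices_interface
       (invoice_id, invoice_num, vendor_id, vendor_site_id,
        invoice_amount, invoice_currency_code, invoice_date,
        description, pay_group_lookup_code, source, org_id)
VALUES (10001, 'INV-2013-001', 25, 112,
        1000, 'USD', SYSDATE,
        'Legacy invoice', 'STANDARD', 'INVOICE GATEWAY', 204);

INSERT INTO ap_invoice_lines_interface
       (invoice_id, invoice_line_id, line_number,
        line_type_lookup_code, amount)
VALUES (10001, 10001, 1, 'ITEM', 1000);

COMMIT;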

3] You can go to Payables > Invoices > Entry > Open Interface Invoices to check the details of the invoice and invoice lines from the front end. If required, you can make modifications here. Alternatively, you can use these forms to enter invoice data into the AP_INVOICES_INTERFACE and AP_INVOICE_LINES_INTERFACE tables directly.
4] From the front end, run the concurrent program "Payables Open Interface Import" to submit a request for the invoice import.





Output:

[Screenshot: output of the Payables Open Interface Import request]
5] The imported invoices become available for review in the Invoices Workbench.