Staging Table Challenge in a PL/SQL Proc
Hi Guys,
I need your ideas on the requirement below.
I have two DBs, and we sync data daily from DB1 to DB2 (similar structures) using some plain SQL queries.
We sync only a few tables, and only a few columns of those, on certain conditions (we use nearly 25 temp tables to copy the data), but we have been unable to track which data gets updated daily.
We badly need to track the data that is updated each day, so please suggest how I could do this.
Staging tables? Note: we can't maintain staging tables for all the temp tables, and we also change the table structures every now and then.
Or is there another way to achieve this?
In the end we need the data that was updated each day, and reports on that data.
Please help me.
Cheers,
Naresh
Naresh wrote:
(original question quoted above)
Change Data Capture
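If full Change Data Capture is more than is needed, a simple trigger-based audit on each synced table gives the same daily report. A minimal sketch (Oracle; the EMP table, SALARY column, and audit layout are all hypothetical, not from this thread):

```sql
-- Hypothetical audit table recording what the daily sync changed.
CREATE TABLE emp_audit (
  emp_id       NUMBER,
  changed_col  VARCHAR2(30),
  old_value    VARCHAR2(4000),
  new_value    VARCHAR2(4000),
  change_type  VARCHAR2(1),           -- 'I', 'U' or 'D'
  changed_on   DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_emp_audit
AFTER INSERT OR UPDATE OR DELETE ON emp
FOR EACH ROW
BEGIN
  IF UPDATING('SALARY') THEN
    INSERT INTO emp_audit (emp_id, changed_col, old_value, new_value, change_type)
    VALUES (:OLD.emp_id, 'SALARY', TO_CHAR(:OLD.salary), TO_CHAR(:NEW.salary), 'U');
  END IF;
END;
/

-- Daily report of what the sync touched:
SELECT * FROM emp_audit WHERE changed_on >= TRUNC(SYSDATE);
```

The audit table survives structure changes on the base tables better than parallel staging copies, since only the audited columns appear in the trigger.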
Similar Messages
-
Copy from staging table to multiple tables
We are using an SSIS package to fast load our data source into a staging table for processing.
The reason we are using a staging table is that we need to copy the data from staging to our actual DB tables (4 in total), and the insertion order matters as we have foreign key relationships.
When copying the data from our staging table, should we enumerate through all the records and use an insert-select for each row, or is there a more efficient way to do this?
Our raw data source is an .mdb file that we fast-load into SQL Server with SSIS, and we want to transform the data set into 3 tables (using a stored proc):
Site (SiteID, Name)
Gas (ID, Date, Time, GasField1, GasField2, ..., SiteID)
GenSet (ID, Date, Time, GenSetField1, GenSetField2, ..., SiteID)
Each record in our raw data source contains a Name field which identifies the Site. We only need to add a new site to the Site table if it does not already exist. This is already coded and working using insert-select and NOT EXISTS.
We now need to iterate over all records, extract a subset of data for the Gas table and a subset for the GenSet table, and link each row with the associated SiteID using the Name field.
The insertion order should be Site table first then remaining tables.
Are you saying it would be better to transform this data using SSIS and not to use a staging table and stored procedure?
I would prefer the staging table + stored procedure approach here, as it uses set-based logic and will be faster performance-wise.
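A set-based sketch of that stored procedure (T-SQL; the staging table name and its column names are assumptions, not from the post):

```sql
-- 1) Add new sites only (the NOT EXISTS step the poster already has working):
INSERT INTO Site (Name)
SELECT DISTINCT s.Name
FROM   Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM Site AS t WHERE t.Name = s.Name);

-- 2) Then load each child table in one pass, resolving SiteID by join
--    rather than iterating row by row:
INSERT INTO Gas ([Date], [Time], GasField1, GasField2, SiteID)
SELECT s.[Date], s.[Time], s.GasField1, s.GasField2, t.SiteID
FROM   Staging AS s
JOIN   Site    AS t ON t.Name = s.Name;

INSERT INTO GenSet ([Date], [Time], GenSetField1, GenSetField2, SiteID)
SELECT s.[Date], s.[Time], s.GenSetField1, s.GenSetField2, t.SiteID
FROM   Staging AS s
JOIN   Site    AS t ON t.Name = s.Name;
```

Running the Site insert first satisfies the foreign keys, and each child load is a single set-based statement instead of a per-row loop.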
Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs -
How to store data file name in one of the columns of staging table
My requirement is to load data from .dat file to oracle staging table. I have done following steps:
1. Created control file and stored in bin directory.
2. Created data file and stored in bin directory.
3. Registered a concurrent program with execution method as SQL*Loader.
4. Added the concurrent program to request group.
I am passing the file name as a parameter to concurrent program. When I am running the program, the data is getting loaded to the staging table correctly.
Now I want to store the filename (which is passed as a parameter) in one of the columns of staging table. I tried different ways found through Google, but none of them worked. I am using the below control file:
OPTIONS (SKIP = 1)
LOAD DATA
INFILE '&1'
APPEND INTO TABLE XXCISCO_SO_INTF_STG_TB
FIELDS TERMINATED BY ","
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
COUNTRY_NAME
,COUNTRY_CODE
,ORDER_CATEGORY
,ORDER_NUMBER
,RECORD_ID "XXCISCO_SO_INTF_STG_TB_S.NEXTVAL"
,FILE_NAME CONSTANT "&1"
,REQUEST_ID "fnd_global.conc_request_id"
,LAST_UPDATED_BY "FND_GLOBAL.USER_ID"
,LAST_UPDATE_DATE SYSDATE
,CREATED_BY "FND_GLOBAL.USER_ID"
,CREATION_DATE SYSDATE
,INTERFACE_STATUS CONSTANT "N"
,RECORD_STATUS CONSTANT "N"
)
I want to store the file name in the FILE_NAME column stated above. I tried with and without CONSTANT using "$1", "&1", ":$1", ":&1", &1, $1, but none of them worked. Please suggest a solution for this.
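One possible workaround (my assumption, not something confirmed in this thread): since the file name is already a parameter of the concurrent program, leave FILE_NAME out of the control file and stamp it afterwards from the program's PL/SQL wrapper, once SQL*Loader has finished:

```sql
-- Hypothetical post-load step inside the concurrent program wrapper;
-- p_file_name is the file-name parameter already passed to the program.
UPDATE xxcisco_so_intf_stg_tb
SET    file_name = p_file_name
WHERE  request_id = fnd_global.conc_request_id;

COMMIT;
```

This sidesteps the question of how SQL*Loader substitutes parameters into CONSTANT clauses entirely.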
Thanks,
Abhay
Pl post details of OS, database and EBS versions. There is no easy way to achieve this.
Pl see previous threads on this topic
SQL*Loader to insert data file name during load
Sql Loader with new column
HTH
Srini -
Synchronizing Updates on a Staging Table
Please help me out with the resolving the following issue:
A load script is running for moving records from a data file to a staging table.
After this script completes, there is a code to update two fields of the staging table.
To do this, the shell script runs a script (generate_ranges.sql). It takes a parameter of 5000000 and creates ranges based on this passed-in number, up to the total number of rows in the staging table. So say the staging table has 65,000,000 rows.
This script will create a file that looks like the following (when 5000000 is passed in):
1 | 5000000
5000001 | 10000000
10000001 | 15000000
15000001 | 20000000
20000001 | 25000000
25000001 | 30000000
30000001 | 35000000
35000001 | 40000000
40000001 | 45000000
45000001 | 50000000
50000001 | 55000000
55000001 | 60000000
60000001 | 65000000
The script then reads that file row by row, calling a shell script and passing in each range; in this case there are 13 ranges. What happens is that 13 separate updates run against the staging table in the background.
The first updates rows 1 - 5000000, the second rows 5000001 - 10000000, etc.
So there are 13 updates happening behind the scenes.
The problem is that the script has no way to know that all the updates completed successfully before proceeding. Right now I manually check that all the updates completed, then restart the script at the next step. We want the code to verify automatically that all the updates are done and then move on. So we need a way to count the number of candidate updates (currently 13, but it could be 14 or more in future) and to know when all "x" updates have completed. One update (1 - 5000000) may take 30 minutes while the next (5000001 - 10000000) takes 35; all the updates run in parallel, and only after all 13 parallel updates complete can the script proceed with the subsequent steps.
So please help me out with fixing this problem programmatically.
Thanks for your cooperation in advance.
Regards,
Ayan.
Ayan,
Are you really sure you want to update 65 million rows?
An alternative: CREATE TABLE AS SELECT <all columns, with the 2 columns already transformed> FROM the staging table.
With this approach, you probably don't need to split the update at all.
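A sketch of that CTAS alternative (Oracle; the table name, column names, and the two transforms are hypothetical stand-ins for whatever the real update does):

```sql
-- Instead of 13 parallel range UPDATEs, rebuild the table with the two
-- columns already derived; parallel + NOLOGGING keeps the rebuild fast.
CREATE TABLE staging_new
PARALLEL 8 NOLOGGING
AS
SELECT col1,
       col2,
       UPPER(col3)      AS col3,   -- first "updated" column (example transform)
       TRUNC(load_date) AS col4    -- second "updated" column (example transform)
FROM   staging;

-- Then swap the tables once the build finishes:
-- RENAME staging     TO staging_old;
-- RENAME staging_new TO staging;
```

Because the CTAS is a single statement, the shell script's "are all 13 updates done?" problem disappears: the statement returning is the completion signal.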
Regards,
Rob. -
How to load the data from a staging table to interface table
Hi..
I have a staging table with the following columns:
invoice_number,invoice_date,vendor_name,vendor_site_code,description,line-amount,line-description,segment1,segment2,segment3,segment4,segment5
I want to insert data into oracle interface tables
1st table is ap_invoices_interface which is primary
and 2nd is ap_invoice_lines_interfaces.
Grouped by invoice_id, I have to insert the sum of the line amounts into the amount column of the primary table.
Can anyone please give the code?
Any help is appreciated.
Hi,
You need to write a PL/SQL procedure or package to validate the data and insert it.
First you need to know which columns are mandatory. I'm giving a simple example here:
CREATE OR REPLACE PROCEDURE xxstg_po_vendors_int (errbuf OUT VARCHAR2, retcode OUT NUMBER) IS
  CURSOR po_cur IS
    SELECT sno, vendor_name, summary_flag, enabled_flag FROM xxstg_po_vendor;
  l_summary_flag VARCHAR2(1);
  l_enabled_flag VARCHAR2(1);
  l_vendor_name  VARCHAR2(240);
  l_err_msg      VARCHAR2(240);
  l_flag         VARCHAR2(2);
  l_err_flag     VARCHAR2(2);
BEGIN
  DELETE FROM ap_suppliers_int;
  COMMIT;
  FOR rec_cur IN po_cur LOOP
    l_flag     := 'A';
    l_err_flag := 'A';
    BEGIN
      SELECT summary_flag INTO l_summary_flag
      FROM   po_vendors
      WHERE  summary_flag = rec_cur.summary_flag;
    EXCEPTION
      WHEN OTHERS THEN
        l_summary_flag := NULL;
        l_flag         := 'E';
        l_err_msg      := 'Summary_flag does not exist';
    END;
    fnd_file.put_line(fnd_file.log, 'Inserting data into interface table ' || l_flag);
    BEGIN
      SELECT enabled_flag INTO l_enabled_flag
      FROM   po_vendors
      WHERE  enabled_flag = rec_cur.enabled_flag;
    EXCEPTION
      WHEN OTHERS THEN
        l_enabled_flag := NULL;
        l_flag         := 'E';
        l_err_msg      := 'Enabled_flag does not exist';
    END;
    fnd_file.put_line(fnd_file.log, 'Inserting data into interface table ' || l_flag);
    INSERT INTO ap_suppliers_int
      (vendor_interface_id, vendor_name, summary_flag, enabled_flag)
    VALUES
      (rec_cur.sno, rec_cur.vendor_name, rec_cur.summary_flag, rec_cur.enabled_flag);
    l_flag    := NULL;
    l_err_msg := NULL;
  END LOOP;
  COMMIT;
END;
Regards
Goutham -
How to move data from a staging table to three entity tables #2
Environment: SQL Server 2008 R2
I have a few questions:
How would I prevent duplicate records when/if the SSIS package is executed many times?
How would I know that the entire huge volume of data got loaded into the entity tables?
In reference to "how to move data from a staging table to three entity tables": since I am loading a large volume of data while using a lookup transformation,
which of the merge components is best suited?
How do I configure the merge component correctly? (A screenshot is preferred.)
Please refer to the following link
http://social.msdn.microsoft.com/Forums/en-US/5f2128c8-3ddd-4455-9076-05fa1902a62a/how-to-move-data-from-a-staging-table-to-three-entity-tables?forum=sqlintegrationservices
You can use a RowCount transformation in the path where you want to capture record details, and pass an integer variable into it to hold the count.
The event handler can then be configured with an Execute SQL Task containing an INSERT statement that writes the row count to your audit table.
Can you also show me how to check against the destination table using key columns inside a Lookup task, and insert only non-matched records (No Match output)?
This is explained clearly in the link below, which Arthur posted:
http://www.sqlis.com/sqlis/post/Get-all-from-Table-A-that-isnt-in-Table-B.aspx
For large data volumes I would prefer doing this in T-SQL. So what you could do is dump the data to a staging table and then apply a T-SQL MERGE between the tables (or even a combination of INSERT/UPDATE statements).
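A sketch of that MERGE step (T-SQL; the table names and the Name business key are assumptions carried over from the earlier Site/Gas/GenSet example):

```sql
-- Upsert from staging into one entity table; rerunning the package
-- cannot create duplicates because matched rows are skipped.
MERGE dbo.Site AS tgt
USING (SELECT DISTINCT Name FROM dbo.Staging) AS src
      ON tgt.Name = src.Name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Name) VALUES (src.Name);
-- Repeat for the remaining entity tables, loading parents before
-- children so foreign keys are satisfied.
```

Note that MERGE must be terminated with a semicolon, and the whole load stays set-based.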
-
Injecting data into a star schema from a flat staging table
I'm trying to work out a best approach for getting data from a very flat staging table and then loading it into a star schema - I take a row from a table with for example 50 different attributes about a person and then load these into a host of different tables, including linking tables.
One of the attributes in the staging table will be an instruction either to insert the person and their new data, to update a person and some component of their data, or even to terminate a person's records.
I plan to use PL/SQL but I'm not sure on the best approach.
The staging table data will be loaded every 10 minutes and will contain about 300 updates.
I'm not sure if I should just select the staging records into a cursor then insert into the various tables?
Has anyone got any working examples based on a similar experience?
I can provide a working example if required.
The database has some elements that make SQL a tad harder to use.
For example:
CREATE TABLE staging
(person_id NUMBER(10) NOT NULL ,
title VARCHAR2(15) NULL ,
initials VARCHAR2(5) NULL ,
forename VARCHAR2(30) NULL ,
middle_name VARCHAR2(30) NULL ,
surname VARCHAR2(50) NULL,
dial_number VARCHAR2(30) NULL,
Is_Contactable CHAR(1) NULL);
INSERT INTO staging
(person_id, title, initials, forename, middle_name, surname, dial_number, Is_Contactable)
VALUES (12345, 'Mr', NULL, 'Joe', NULL, 'Bloggs', '0117512345', 'Y');
CREATE TABLE person
(person_id NUMBER(10) NOT NULL ,
title VARCHAR2(15) NULL ,
initials VARCHAR2(5) NULL ,
forename VARCHAR2(30) NULL ,
middle_name VARCHAR2(30) NULL ,
surname VARCHAR2(50) NULL);
CREATE UNIQUE INDEX XPKPerson ON Person
(Person_ID ASC);
ALTER TABLE Person
ADD CONSTRAINT XPKPerson PRIMARY KEY (Person_ID);
CREATE TABLE person_comm
(person_id NUMBER(10) NOT NULL ,
comm_type_id NUMBER(10) NOT NULL ,
comm_id NUMBER(10) NOT NULL );
CREATE UNIQUE INDEX XPKPerson_Comm ON Person_Comm
(Person_ID ASC,Comm_Type_ID ASC,Comm_ID ASC);
ALTER TABLE Person_Comm
ADD CONSTRAINT XPKPerson_Comm PRIMARY KEY (Person_ID,Comm_Type_ID,Comm_ID);
CREATE TABLE person_comm_preference
(person_id NUMBER(10) NOT NULL ,
comm_type_id NUMBER(10) NOT NULL ,
Is_Contactable CHAR(1) NULL);
CREATE UNIQUE INDEX XPKPerson_Comm_Preference ON Person_Comm_Preference
(Person_ID ASC,Comm_Type_ID ASC);
ALTER TABLE Person_Comm_Preference
ADD CONSTRAINT XPKPerson_Comm_Preference PRIMARY KEY (Person_ID,Comm_Type_ID);
CREATE TABLE comm_type
(comm_type_id NUMBER(10) NOT NULL ,
NAME VARCHAR2(25) NULL ,
description VARCHAR2(100) NULL ,
comm_table_name VARCHAR2(50) NULL);
CREATE UNIQUE INDEX XPKComm_Type ON Comm_Type
(Comm_Type_ID ASC);
ALTER TABLE Comm_Type
ADD CONSTRAINT XPKComm_Type PRIMARY KEY (Comm_Type_ID);
insert into comm_type (comm_type_id, NAME, description, comm_table_name) values (23456,'HOME PHONE','Home Phone Number','PHONE');
CREATE TABLE phone
(phone_id NUMBER(10) NOT NULL ,
dial_number VARCHAR2(30) NULL);
Take the record from Staging then update:
'person'
'Person_Comm_Preference' Based on a comm_type of 'HOME_PHONE'
'person_comm' Derived from 'Person' and 'Person_Comm_Preference'
Then update 'Phone' with the number, via the link derived from 'Person_Comm': 'Comm_ID' (part of that composite primary key) relates to the Phone table's primary key, Phone_ID.
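A hedged PL/SQL sketch of the insert path through those four tables, using the HOME PHONE comm_type (23456) from the example DDL. The phone_seq sequence and the row-at-a-time loop are my assumptions; the update/terminate branches of the staging instruction are left out:

```sql
DECLARE
  l_phone_id phone.phone_id%TYPE;
BEGIN
  FOR r IN (SELECT * FROM staging) LOOP
    -- 1) Person: insert only if not already present.
    MERGE INTO person p
    USING (SELECT r.person_id AS person_id FROM dual) s
       ON (p.person_id = s.person_id)
    WHEN NOT MATCHED THEN
      INSERT (person_id, title, initials, forename, middle_name, surname)
      VALUES (r.person_id, r.title, r.initials, r.forename, r.middle_name, r.surname);

    -- 2) Preference row for HOME PHONE.
    MERGE INTO person_comm_preference pcp
    USING (SELECT r.person_id AS person_id, 23456 AS comm_type_id FROM dual) s
       ON (pcp.person_id = s.person_id AND pcp.comm_type_id = s.comm_type_id)
    WHEN NOT MATCHED THEN
      INSERT (person_id, comm_type_id, is_contactable)
      VALUES (s.person_id, s.comm_type_id, r.is_contactable);

    -- 3) Phone row first, capturing its key, then the link row.
    INSERT INTO phone (phone_id, dial_number)
    VALUES (phone_seq.NEXTVAL, r.dial_number)   -- phone_seq is hypothetical
    RETURNING phone_id INTO l_phone_id;

    INSERT INTO person_comm (person_id, comm_type_id, comm_id)
    VALUES (r.person_id, 23456, l_phone_id);
  END LOOP;
END;
/
```

At ~300 rows every 10 minutes, a row-by-row loop like this is comfortably fast; the RETURNING clause is what carries Phone_ID into the Person_Comm link row.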
Does your head hurt as much as mine? -
Hello everybody. I am a SQL Server DBA planning to implement table partitioning on some of our large data warehouse tables, using the sliding-window scenario. I do have one concern, though: the staging tables we use for loading new data and for switching out the old partition are going to be non-partitioned, right? I don't have an issue with the second staging table, used for switching out the old partition. My concern is the first staging table, used for switch-in: since it is non-partitioned and holds the new data, how are we going to access that data for reporting before we switch it in to our target partitioned table? Say this staging table holds a month's worth of data and we switch it in at the end of the month. Correct me if I am wrong: one way I can think of to access this non-partitioned staging table is by creating views, but we don't want to change our code.
Could you share your thoughts and experiences?
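The view idea raised above can be sketched as follows (T-SQL; the fact table and staging table names are assumptions). Reporting code points at the view, so nothing changes when the partition switch happens:

```sql
-- Union the partitioned target with the not-yet-switched-in staging
-- table, so reports see current-month rows before the monthly switch.
CREATE VIEW dbo.FactSales_All
AS
SELECT * FROM dbo.FactSales          -- partitioned target
UNION ALL
SELECT * FROM dbo.FactSales_StageIn; -- non-partitioned switch-in staging
```

This assumes the two tables keep identical column lists (which a switch-in staging table must anyway); after the switch, the staging side of the union is simply empty.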
We really appreciate your help.
Hi BG516,
According to your description, you need to implement table partitioning on some of the large tables in your data warehouse, and you need the partitioned table to hold only one month of data; please correct me if I have misunderstood anything.
In this case, you can create a non-partitioned table and import the records older than one month into it, leaving the records less than a month old in the data warehouse table. Then create a job to copy the data from the partitioned table into the non-partitioned table on the last day of each month, so that the partitioned table only contains the current month's data. Please refer to the links below for details.
http://blog.sqlauthority.com/2007/08/15/sql-server-insert-data-from-one-table-to-another-table-insert-into-select-select-into-table/
https://msdn.microsoft.com/en-us/library/ms190268.aspx?f=255&MSPPError=-2147217396
If this is not what you want, please provide us more information, so that we can make further analysis.
Regards,
Charlie Liao
TechNet Community Support -
How can I INSERT INTO from Staging Table to Production Table
I’ve got a Bulk Load process which works fine, but I’m having major problems downstream.
Almost everything is Varchar(100), and this works fine.
Except for these fields:
INDEX SHARES, INDEX MARKET CAP, INDEX WEIGHT, DAILY PRICE RETURN, and DAILY TOTAL RETURN
These five fields must be some kind of numeric, because I need to perform sums on them.
Here’s my SQL:
CREATE TABLE [dbo].[S&P_Global_BMI_(US_Dollar)]
(
[CHANGE] VARCHAR(100),
[EFFECTIVE DATE] VARCHAR(100),
[COMPANY] VARCHAR(100),
[RIC] VARCHAR(100),
Etc.
[INDEX SHARES] NUMERIC(18, 12),
[INDEX MARKET CAP] NUMERIC(18, 12),
[INDEX WEIGHT] NUMERIC(18, 12),
[DAILY PRICE RETURN] NUMERIC(18, 12),
[DAILY TOTAL RETURN] NUMERIC(18, 12),
From the main staging table, I’m writing data to 4 production tables.
CREATE TABLE [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)]
(
[CHANGE] VARCHAR(100),
[EFFECTIVE DATE] VARCHAR(100),
[COMPANY] VARCHAR(100),
[RIC] VARCHAR(100),
Etc.
[INDEX SHARES] FLOAT(20),
[INDEX MARKET CAP] FLOAT(20),
[INDEX WEIGHT] FLOAT(20),
[DAILY PRICE RETURN] FLOAT(20),
[DAILY TOTAL RETURN] FLOAT(20),
INSERT INTO [dbo].[S&P_Global_Ex-U.S._LargeMidCap_(US_Dollar)]
SELECT
[CHANGE],
Etc.
[DAILY TOTAL RETURN]
FROM [dbo].[S&P_Global_BMI_(US_Dollar)]
WHERE isnumeric([Effective Date]) = 1
AND [CHANGE] is null
AND [COUNTRY] <> 'US'
AND ([SIZE] = 'L' OR [SIZE] = 'M')
The Bulk Load is throwing errors like this (unless I make everything Varchar):
Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
Msg 4863, Level 16, State 1, Line 1
When I try to load data from the staging table to the production table, I get this.
Msg 8115, Level 16, State 8, Line 1
Arithmetic overflow error converting varchar to data type numeric.
The statement has been terminated.
There must be an easy way to overcome this, right?
Please advise!
Thanks!!
Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
Nothing is returned. Everything is VARCHAR(100). The problem is this:
If I use FLOAT(18) or REAL, I get numbers in exponential notation, which is useless to me.
If I use DECIMAL(18,12) or NUMERIC(18,12), I get errors.
Msg 4863, Level 16, State 1, Line 41
Bulk load data conversion error (truncation) for row 7, column 23 (INDEX SHARES).
Msg 4863, Level 16, State 1, Line 41
Bulk load data conversion error (truncation) for row 8, column 23 (INDEX SHARES).
Msg 4863, Level 16, State 1, Line 41
Bulk load data conversion error (truncation) for row 9, column 23 (INDEX SHARES).
There must be some data type that fits this!
Here's a sample of what I'm dealing with.
-0.900900901
9.302325581
-2.648171501
-1.402805723
-2.911830584
-2.220960866
2.897762349
-0.219640074
-5.458448607
-0.076626094
6.710940231
0.287200186
0.131682908
0.124276221
0.790818723
0.420505119
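One hedged fix for data like the sample above (my suggestion, not from the thread): NUMERIC(18,12) leaves only 6 digits before the decimal point, so large values such as index share counts overflow even though the return columns fit. A wider precision plus a defensive conversion avoids both errors:

```sql
-- Widen the target so large integer parts fit (28-12 = 16 digits
-- before the decimal point instead of 6):
ALTER TABLE [dbo].[S&P_Global_BMI_(US_Dollar)]
    ALTER COLUMN [INDEX SHARES] DECIMAL(28, 12);

-- When copying from the all-VARCHAR staging table, TRY_CONVERT
-- (SQL Server 2012+) yields NULL instead of raising an overflow error:
SELECT TRY_CONVERT(DECIMAL(28, 12), [INDEX SHARES]) AS [INDEX SHARES]
FROM   [dbo].[S&P_Global_BMI_(US_Dollar)];
```

DECIMAL keeps fixed-point display (no exponential notation, unlike FLOAT/REAL) and still supports SUM.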
-
Passing an array of beans to PL/SQL proc - can't quite figure it out
Hi
I'm trying to pass in a Java array of beans to a PL/SQL proc and I can't quite get it to work. I did have the more simple case of an array of strings working but I'm stumped as to how to get this more complicated case to work.
I'm using Java 5 and Oracle 10.
My Oracle User Defined Types
create or replace type MY_OBJECT as object (
id integer,
join_table_name varchar(30)
);
create or replace type MY_OBJECT_ARRAY as table of MY_OBJECT;
My PL/SQL proc
create or replace package threshold is
function validateThresholdSequence (
thresholdSeqId integer,
testValue number,
testDate date,
validationCriteria in MY_OBJECT_ARRAY
) return number;
end;
My Java
public class ThresholdValidationCriteriaBean {
    private String joinTableName = null;
    private Integer id = null;
    // Getters and setters...
}

// Map my bean to the PL/SQL UDT - thought this might help but it seems not!
Map<String, Class<?>> map = c.getTypeMap();
map.put("MY_OBJECT", ThresholdValidationCriteriaBean.class);

// Prepare my statement
String sql = new String("{call threshold.validateThresholdSequence(?,?,?,?) }");
ps = c.prepareStatement(sql);

// Set the values to insert
ps.setInt(1, thresholdSequenceId);
ps.setDouble(2, testValue);
ps.setDate(3, new java.sql.Date(date.getTime()));

// Sort out the array thing
ArrayDescriptor desc = ArrayDescriptor.createDescriptor("MY_OBJECT_ARRAY", c);
ThresholdValidationCriteriaBean[] beanArray = new ThresholdValidationCriteriaBean[validationCriteria.size()];
validationCriteria.toArray(beanArray);
ARRAY array = new ARRAY(desc, c, beanArray);
((oracle.jdbc.driver.OraclePreparedStatement) ps).setARRAY(4, array);

When I run this I get the following error on the creation of the ARRAY object:
java.sql.SQLException: Fail to convert to internal representation: uk.co.cartesian.ascertain.imm.threshold.ThresholdValidationCriteriaBean@15c7850
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146)
at oracle.jdbc.oracore.OracleTypeADT.toDatum(OracleTypeADT.java:239)
at oracle.jdbc.oracore.OracleTypeADT.toDatumArray(OracleTypeADT.java:274)
at oracle.jdbc.oracore.OracleTypeUPT.toDatumArray(OracleTypeUPT.java:115)
at oracle.sql.ArrayDescriptor.toOracleArray(ArrayDescriptor.java:1314)
at oracle.sql.ARRAY.<init>(ARRAY.java:152)
I've spent most of the day so far going from one error to the next, but I seem to be stuck now.
Any help or hints very much appreciated
Cheers
Ian
Edited by: Yanis on Feb 28, 2008 12:12 PM
I've found the answer. I'll put the code here so everyone else can see what seems to work for me.
First off the object that is being passed into the array needs to implement a couple of interfaces and so becomes
public class ThresholdValidationCriteriaBean implements SQLData, Serializable {
    private String joinTableName = null;
    private Integer id = null;
    // Getters and setters...

    public String getSQLTypeName() throws SQLException {
        return "MY_OBJECT";
    }

    public void readSQL(SQLInput stream, String typeName) {
        // No need to implement this
    }

    public void writeSQL(SQLOutput stream) {
        // No need to implement this
    }
}
The code that I used to call the PL/SQL procedure with an array of MY_OBJECTs is:
//Sort out our array stuff
ArrayDescriptor desc = ArrayDescriptor.createDescriptor("MY_OBJECT_ARRAY", c);
ThresholdValidationCriteriaBean[] ba = new ThresholdValidationCriteriaBean[validationCriteria.size()];
//Populate array
ARRAY arrayToPass = new ARRAY (desc, c, ba);
//Create our statement
String sql = new String("{call ? := threshold.validateThresholdSequence(?,?,?,?) }");
ps = c.prepareCall(sql);
//Register our out parameter
((oracle.jdbc.OracleCallableStatement)ps).registerOutParameter(1, Types.INTEGER);
// Set the values to insert
ps.setInt(2, thresholdSequenceId);
ps.setDouble(3, testValue);
ps.setDate(4, new java.sql.Date(date.getTime()));
((oracle.jdbc.driver.OraclePreparedStatement)ps).setARRAY(5, arrayToPass);
//Execute call to PL/SQL
ps.execute();
Edited by: Yanis on 10-Mar-2008 13:17
Table names in generated stored procs are qualified with sa schema name
I am using OMW 9.2.0.1.2 with the 9.2.0.1.3 SQL Serevr plugin to help with a SQL Server 7 to Oracle 9.2.0.1 migration on NT.
As is common with SQL Server databases, the dbo is sa. I don't want my Oracle schema to be called sa. I have succesfully gotten around this by renaming the sa user in the Oracle model in OMW.
However, the stored procedure code that OMW generates has table names qualified with sa as the schema (the tables names in the original T/SQL procs were not qualified).
How can I stop OMW from generating table names qualified with sa?
Thanks.
Hi,
this is a bug in the OMWB. As a workaround, you can generate the migration scripts (see reference guide and user guide for more information) from the OMWB Oracle Model and then edit these scripts to ensure that the 'sa' prefix does not appear in the text of the stored procedures. Then use these scripts to generate the schema in your database.
An alternative is to migrate the stored procedures, schema and data over to the Oracle database using OMWB and then open each procedure in Enterprise Manager, remove the references to the 'sa' prefix and re-compile the procedure.
I will keep you updated on the release this fix will appear in.
I hope this helps,
Tom. -
Conditional Insert into staging table by using sqlloader
Hi,
In Oracle apps I'm submitting a concurrent program programmatically which will call sqlloader and insert into a staging a table.
This table consists of 30 columns.Program has one input parameter.If parameter value = REQUIRED Then it should insert into first three columns of staging table.If it's APPROVED then it should insert into first 10 columns of the same table.
The data file is pipe-delimited and may or may not have all possible values; for REQUIRED, I may not have all three column values.
>
I think you understood things wrongly. OP marked UTL_FILE as the correct answer, which is again a server-side solution.
>
Perhaps you failed to notice that the answer now marked correct was posted AFTER mine. I'm not clairvoyant, and the OP did not state whether a server-side or client-side solution was appropriate.
I stand by my comments. Using an external table is an alternative to sql loader but, as I said, simply using an external table instead of sql loader doesn't address OPs problem.
>
And IMO, external table will be faster than UTL_FILE
>
I'd be more concerned with why OP wants to write a procedure to parse flat files, deal with data conversion errors and perform table inserts when Oracle already provides that functionality using either external tables or sql loader.
I would suggest loading the file into a new staging table that can hold all of the columns that might be contained in the flat file. Then the appropriate data can be moved to OPs staging table according to the required business rules.
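An external-table sketch of that suggestion (Oracle; the directory object, file name, and columns are all assumptions for illustration):

```sql
-- Read the pipe-delimited file through an external table instead of
-- SQL*Loader; data_dir, so_stage.dat and the columns are hypothetical.
CREATE TABLE so_stage_ext (
  col1  VARCHAR2(100),
  col2  VARCHAR2(100),
  col3  VARCHAR2(100)
  -- ... widened out to the largest layout the file can carry
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY '|'
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('so_stage.dat')
);

-- Then move only the columns the program parameter calls for,
-- e.g. the first three when the parameter is REQUIRED:
INSERT INTO so_stage (col1, col2, col3)
SELECT col1, col2, col3
FROM   so_stage_ext;
```

The conditional column logic then lives in ordinary SQL rather than in the control file, which is the point of the suggestion.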
Sounds like we each think OP should reconsider the architecture. -
Pros/Cons of replicating to files versus staging tables
I am new to GoldenGate and am trying to figure out the pros and cons of replicating to flat files (to be processed by an ETL tool) versus replicating directly to staging tables. We are using GoldenGate to source data from multiple transaction systems to flat files, and then using Informatica to load thousands of flat files into our ODS staging area. I am trying to figure out whether it would be better just to push the data directly to staging tables. I am not sure which is better in terms of recovery, reconciliation, etc. Any advice or thoughts on this would be appreciated.
Hi,
My Suggestion would be to push the data from multiple source systems directly to staging table and then populate target system using ELT tool like ODI.
Oracle Data Integrator can be combined with Oracle GoldenGate (OGG), which provides cross-platform data replication and change data capture. Oracle GoldenGate works in a similar way to Oracle's asynchronous change data capture, but handles greater volumes and works across multiple database platforms.
Source -> Staging -> Target
ODI-EE supports all leading data warehousing platforms, including Oracle Database, Teradata, Netezza, and IBM DB2. This is complemented by the Oracle GoldenGate architecture, which decouples source and target systems, enabling heterogeneity of databases as well as operating systems and hardware platforms. Oracle GoldenGate supports a wide range of database versions for Oracle Database, SQL Server, DB2 z/Series and LUW, Sybase ASE, Enscribe, SQL/MP and SQL/MX, Teradata running on Linux, Solaris, UNIX, Windows, and HP NonStop platforms as well as many data warehousing appliances including Oracle Exadata, Teradata, Netezza, and Greenplum. Companies can quickly and easily involve new or different database sources and target systems to their configurations by simply adding new Capture and Delivery processes.
ODI-EE and Oracle GoldenGate combined enable you to rapidly move transactional data between enterprise systems:
Real-time data. - Immediately capture, transform, and deliver transactional data to other systems with subsecond latency. Improve organizational decision-making through enterprise-wide visibility into accurate, up-to-date information.
Heterogeneous. - Utilize heterogeneous databases, packaged or even custom applications to leverage existing IT infrastructure. Use Knowledge Modules to speed the time of implementation.
Reliability. - Deliver all committed records to the target, even in the event of network outages. Move data without requiring system interruption or batch windows. Ensure data consistency and referential integrity across multiple masters, back-up systems, and reporting databases.
High performance with low impact. - Move thousands of transactions per second with negligible impact on source and target systems. Transform data at high performance and efficiency using E-LT. Access critical information in real time without bogging down production systems.
Please refer to below links for more information on configuration of ODI-OGG.
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_gg_integration/odi_gg_integration.htm
http://www.biblogs.com/2010/03/22/configuring-odi-10136-to-use-oracle-golden-gate-for-changed-data-capture/
Hope this information helps.
Thanks & Regards
SK -
How to create staging Tables with synonyms
Hi All,
I have a package which runs and updates a table weekly.
My question is,the web service is hitting on this table(TABLE CC) all the time.When my script runs it will be at least 20 minutes.Hence there would be a gap and the web service is unable to hit anything on the table as the previous data will be deleted and loaded with new data.
I have heard of staging tables and synonyms but I hope someone could share how do I start about it.
1) Rename CC to CC_1;
2)Create synonym CC for CC_1;
3)Run the scripts to load the data into a new table called CC_2;
4)Drop the synonym CC
5)Create synonym CC for CC_2;
6) Query all_synonyms to check which table the synonym points to.
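The swap in steps 1-6 can be sketched as one refresh cycle (Oracle; CC_1/CC_2 as named in the steps, the load query elided):

```sql
-- First time only: create the idle copy with the same shape as CC_1.
CREATE TABLE cc_2 AS SELECT * FROM cc_1 WHERE 1 = 0;

-- Each week: load the idle table while CC (synonym -> CC_1) stays live.
-- INSERT INTO cc_2 SELECT ... ;   (the ~20-minute weekly load, elided)
-- COMMIT;

-- Then repoint the synonym; this is near-instant, so the web service
-- never sees an empty table:
CREATE OR REPLACE SYNONYM cc FOR cc_2;

-- Next week, load CC_1 and repoint back:
-- CREATE OR REPLACE SYNONYM cc FOR cc_1;
```

The load script always writes to whichever table the synonym does not currently point at; only the final CREATE OR REPLACE SYNONYM needs to alternate, which a small PL/SQL block can decide from all_synonyms.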
What about my scripts that update the table CC: do I need to change them to CC_1 or CC_2 all the time, or can it be done dynamically? I'm stuck and don't know where to start.
Thanks!
Why exactly do you need to do all this, and what type(s) of DML are done on the table? In other words, WHY do all the rows need to be deleted and why does the table need to be loaded with completely new rows?
"What-cha driving at dude?"
Is it possible to just insert, delete or update rows in place, based on whatever new information you've loaded into a staging table? Because if you have, say, 100,000 rows and you need to delete 20,000, update 5,000 and insert 15,000, you could just do it all live even if it takes 20 minutes; and I think that could likely be improved upon with bulk collections and bulk binding in your PL/SQL.
OTOH, if your table contains last weeks data and you need to load this weeks data, I'd suggest partitioning by range (1 week at a time). Then you can merely load the new week's data and afterwards drop the prior week's data partition. If you do all the inserts on a single commit, no user will ever notice the difference until they see this week's data instead of last week's.
HTH -
How to Compare Data length of staging table with base table definition
Hi,
I've two tables :staging table and base table.
I'm loading data from flat files into the staging table. Per the requirement, the two structures differ: each column in the staging table is 25% longer than its counterpart in the base table, so data can be dumped without errors. For example, a CITY column that is VARCHAR2(40) in the staging table is VARCHAR2(25) in the base table. Once data is in the staging table, I want to compare the actual data length of every column against the base table's definition (DATA_LENGTH for each column from ALL_TAB_COLUMNS), and whenever a column's data exceeds the base length, update the corresponding staging row, which has a flag column called ERR_LENGTH.
For this I'm using:
cursor c1 is select length(a.id), length(a.name)... from staging_table a;
cursor c2(p_name varchar2) is select data_length from all_tab_columns where table_name = 'BASE_TABLE' and column_name = p_name;
The first query returns all the lengths at once, whereas with the second cursor I have to fetch each column's definition one at a time and then compare it against the first. Can anyone tell me how to get the desired results?
Thanks,
Mahender.
This is a shot in the dark, but take a look at the example below:
SQL> DROP TABLE STAGING;
Table dropped.
SQL> DROP TABLE BASE;
Table dropped.
SQL> CREATE TABLE STAGING
2 (
3 ID NUMBER
4 , A VARCHAR2(40)
5 , B VARCHAR2(40)
6 , ERR_LENGTH VARCHAR2(1)
7 );
Table created.
SQL> CREATE TABLE BASE
2 (
3 ID NUMBER
4 , A VARCHAR2(25)
5 , B VARCHAR2(25)
6 );
Table created.
SQL> INSERT INTO STAGING VALUES (1,RPAD('X',26,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (2,RPAD('X',25,'X'),RPAD('X',26,'X'),NULL);
1 row created.
SQL> INSERT INTO STAGING VALUES (3,RPAD('X',25,'X'),RPAD('X',25,'X'),NULL);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
SQL> UPDATE STAGING ST
2 SET ERR_LENGTH = 'Y'
3 WHERE EXISTS
4 (
5 WITH columns_in_staging AS
6 (
7 /* Retrieve all the columns names for the staging table with the exception of the primary key column
8 * and order them alphabetically.
9 */
10 SELECT COLUMN_NAME
11 , ROW_NUMBER() OVER (ORDER BY COLUMN_NAME) RN
12 FROM ALL_TAB_COLUMNS
13 WHERE TABLE_NAME='STAGING'
14 AND COLUMN_NAME != 'ID'
15 ORDER BY 1
16 ), staging_unpivot AS
17 (
18 /* Using the columns_in_staging above UNPIVOT the result set so you get a record for each COLUMN value
19 * for each record. The DECODE performs the unpivot and it works if the decode specifies the columns
20 * in the same order as the ROW_NUMBER() function in columns_in_staging
21 */
22 SELECT ID
23 , COLUMN_NAME
24 , DECODE
25 (
26 RN
27 , 1,A
28 , 2,B
29 ) AS VAL
30 FROM STAGING
31 CROSS JOIN COLUMNS_IN_STAGING
32 )
33 /* Only return IDs for records that have at least one column value that exceeds the length. */
34 SELECT ID
35 FROM
36 (
37 /* Join the unpivoted staging table to the ALL_TAB_COLUMNS table on the column names. Here we perform
38 * the check to see if there are any differences in the length if so set a flag.
39 */
40 SELECT STAGING_UNPIVOT.ID
41 , (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_A
42 , (CASE WHEN ATC.DATA_LENGTH < LENGTH(STAGING_UNPIVOT.VAL) THEN 'Y' END) AS ERR_LENGTH_B
43 FROM STAGING_UNPIVOT
44 JOIN ALL_TAB_COLUMNS ATC ON ATC.COLUMN_NAME = STAGING_UNPIVOT.COLUMN_NAME
45 WHERE ATC.TABLE_NAME='BASE'
46 ) A
47 WHERE COALESCE(ERR_LENGTH_A,ERR_LENGTH_B) IS NOT NULL
48 AND ST.ID = A.ID
49 )
50 /
2 rows updated.
SQL> SELECT * FROM STAGING;
ID A B E
1 XXXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX Y
2 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXXX Y
3 XXXXXXXXXXXXXXXXXXXXXXXXX XXXXXXXXXXXXXXXXXXXXXXXXX
Hopefully the comments make sense. If you have any questions, please let me know.
This assumes the column names are the same between the staging and base tables. In addition as you add more columns to this table you'll have to add more CASE statements to check the length and update the COALESCE check as necessary.
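As an alternative that avoids editing the DECODE and COALESCE every time a column is added, a dynamic-SQL loop could flag over-length rows one column at a time. This is a sketch only; it assumes the column names match between STAGING and BASE and that only character columns need checking:

```sql
-- For each character column in BASE, flag any STAGING row whose data
-- is longer than the BASE column definition allows.
BEGIN
  FOR c IN (SELECT column_name, data_length
              FROM all_tab_columns
             WHERE table_name = 'BASE'
               AND data_type IN ('VARCHAR2', 'CHAR'))
  LOOP
    EXECUTE IMMEDIATE
      'UPDATE staging SET err_length = ''Y''' ||
      ' WHERE LENGTH(' || c.column_name || ') > ' || c.data_length;
  END LOOP;
END;
/
```

The trade-off is one UPDATE per column instead of a single statement, but the loop needs no changes as columns are added to the tables.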
Thanks!