Best approach to get the source tables into Target
Hi
I am new to GoldenGate and I would like to know the best approach to get the source tables replicated to the target (Oracle to Oracle) before performing the initial load, without using exp/expdp. Is there a native GoldenGate utility I can use during, or before, the initial load that will create the tables on the target before loading the data?
Thanks
I don't think so; for the initial load, the table structure must already be available on the target machine. Once your machines are in sync, you can use the GoldenGate DDL setup to replicate new tables (with data) automatically.
A better approach for you is to create the structure on the target machine using export/import. In the export, use CONTENT=METADATA_ONLY to copy the structure only,
like
EXPDP <<user>>/<<password>>@connection_string SCHEMAS=abc DIRECTORY=TEST_DIR DUMPFILE=gg.dmp CONTENT=METADATA_ONLY LOGFILE=gg.log
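On the target side, the matching metadata-only import would look like this (a sketch; the directory, dump file, and connection string are the same placeholders as in the export above):

```text
IMPDP <<user>>/<<password>>@target_connection_string SCHEMAS=abc DIRECTORY=TEST_DIR DUMPFILE=gg.dmp LOGFILE=gg_imp.log
```

The dump file must first be copied to (or be visible from) the directory object on the target database.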
Similar Messages
-
Re: getting the source tables into models in designer
Hi all
I need help extracting the source tables into ODI Designer.
my source: Oracle
Question:
I have the source schema information, and with it I created the logical and physical schemas in Topology Manager.
Now I am trying to create a model to extract the source tables into ODI.
Since not all the tables are in the same schema (some come from different users, and I don't have those users' credentials), I am unable to see those tables in the Selective Reverse tab.
I have requested SELECT privileges on those tables for the schema I am using.
Once I have those privileges, will I be able to see the tables in the Selective Reverse tab?
Could someone guide me through the steps?
Thanks.
917704 wrote:
Hi Alastair
firstly thank you for your reply.
My source is Oracle ERP.
I cannot create physical/logical schemas for that user because it is the head user in Oracle ERP and I don't have access to it.
Hi, I've done change data capture from E-Business Suite using ODI. What we did was this:
Get a 'read only' database account set up in the E-Business Suite database; this is your connecting user and your work schema (for the CDC objects).
Grant SELECT ANY TABLE (or be more specific on just the objects you need to read, if you wish) to the ODI user, then connect as your read-only user and map the physical schemas as you wish.
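A sketch of those grants (ODI_RO and the object name are placeholders for your actual read-only account and tables; run as a suitably privileged user):

```sql
-- broad option: lets the ODI user read every table
GRANT SELECT ANY TABLE TO odi_ro;

-- narrower option: grant per object instead
GRANT SELECT ON apps.some_source_table TO odi_ro;
```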
Back to your original question: a model can only have one logical schema, which in turn maps to one physical schema, so I think you're stuck needing to read across more than one schema on the source system. -
Best approach to join multiple statistics tables into one
I have read different approaches to join multiple statistics tables into one, they all have a column "productobjectid".
I want to get all the data for each product, export it to Excel, and output a JPEG chart with cfchart.
How would you do this, the sql part?
Thanks.
A couple of suggestions:
1) When joining tables, it's best to list both table/field pairs you are joining in the FROM clause:
FROM shopproductbehaviour_views INNER JOIN shopproductbehaviour_sails ON shopproductbehaviour_views.productobjectid = shopproductbehaviour_sails.productobjectid
2) You add tables to a SQL join by placing another join statement after the SQL above:
SELECT *
FROM TableA INNER JOIN TableB on TableA.myField = TableB.myField
INNER JOIN TableC on TableA.anotherField = TableC.anotherField
3) If you have columns in the tables that are named the same, you can use column aliases to change the way they appear in your record set:
SELECT TableA.datetimecreated AS tablea_create_date, TableB.datetimecreated AS tableb_create_date
4) Certainly not a requirement, but you might want to look into using <cfqueryparam> in your where clause:
WHERE shopproductbehaviour_sails.productobjectid = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#all.productobjectid#">
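Putting suggestions 1-4 together, a single query might look like this (a sketch using the shopproductbehaviour_* table and column names from the examples above):

```sql
SELECT v.productobjectid,
       v.datetimecreated AS views_create_date,
       s.datetimecreated AS sails_create_date
FROM   shopproductbehaviour_views v
INNER JOIN shopproductbehaviour_sails s
        ON v.productobjectid = s.productobjectid
WHERE  v.productobjectid = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#all.productobjectid#">
```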
You might want to consider checking out one of the many tutorials on SQL available online. Many of the questions you posed in your post are covered in pretty much every basic SQL tutorial. Alternately, a good SQL book is worth its weight in gold for a beginning web applications developer. -
Best approach to delete records that are not in the source table anymore.
I have a situation where I need to remove records from dimensions that are no longer in the source data. Right now we are not maintaining history, i.e. not using SCDs, but we are planning to in the next release. If we did, it would be easy to identify the latest records. The load is nightly; records are updated and new ones added.
The approach I am considering is to join the dimension tables to the sources on their keys and delete whatever doesn't join. However, is there perhaps some function in OWB that would do this automatically on import, so it would also be in place for the future?
Thanks!
Bear in mind that deleting dimension records becomes problematic if you have facts attached to them. Just because a record is no longer in the active set doesn't mean it wasn't used historically, so it may have foreign key constraints on it in your database. If this is the case, a short-term solution is to add an expiry_date field to the dimension and change the load to set this value when the record disappears, rather than deleting it.
To do that, use the target dimension as a source table, outer join it to the actual source table on the natural key, and have your update set expiry_date = NVL(expiry_date, SYSDATE) on all records where the outer join fails, so the date is only set if the record has not already been expired.
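A minimal sketch of that update, written with NOT EXISTS (equivalent to the failed outer join); dim_table, src_table, and natural_key are placeholders for your actual objects:

```sql
UPDATE dim_table d
SET    d.expiry_date = NVL(d.expiry_date, SYSDATE)  -- keep the original date if already expired
WHERE  NOT EXISTS (SELECT 1
                   FROM   src_table s
                   WHERE  s.natural_key = d.natural_key);
```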
Further consideration: what do you do if the record is re-inserted into the source table? create a new dimension key? Or remove the expiry date?
But I will say that I am not a fan of deleting records in most circumstances. What do you do if you discover a calculation error and need to fix that and republish historical cubes? Without the historical data, you lose the ability to do things like that. -
ORA-30926: unable to get a stable set of rows in the source tables
Hi,
I am loading data from a source table to a target table in an interface, using LKM incremental update.
In the merge rows step I get the error below:
30926 : 99999 : java.sql.SQLException: ORA-30926: unable to get a stable set of rows in the source tables
Please help: what should be done to resolve this? Below is the query in the merge step.
When I run it directly in SQL, I get the same error:
SQL Error: ORA-30926: unable to get a stable set of rows in the source tables
30926. 00000 - "unable to get a stable set of rows in the source tables"
*Cause: A stable set of rows could not be got because of large dml
activity or a non-deterministic where clause.
*Action: Remove any non-deterministic where clauses and reissue the dml.
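In practice this error usually means the staging table feeds more than one row per match key. A quick check for duplicates (table and key names taken from the MERGE that follows):

```sql
SELECT organization_id, item_id, COUNT(*)
FROM   tfr.i$_inventories
GROUP  BY organization_id, item_id
HAVING COUNT(*) > 1;
```

Any rows returned identify keys that would match the same target row more than once.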
merge into TFR.INVENTORIES T
using TFR.I$_INVENTORIES S
on (
T.ORGANIZATION_ID = S.ORGANIZATION_ID
and T.ITEM_ID = S.ITEM_ID
)
when matched
then update set
T.ITEM_TYPE = S.ITEM_TYPE,
T.SEGMENT1 = S.SEGMENT1,
T.DESCRIPTION = S.DESCRIPTION,
T.LIST_PRICE_PER_UNIT = S.LIST_PRICE_PER_UNIT,
T.CREATED_BY = S.CREATED_BY,
T.DEFAULT_SO_SOURCE_TYPE = S.DEFAULT_SO_SOURCE_TYPE,
T.MATERIAL_BILLABLE_FLAG = S.MATERIAL_BILLABLE_FLAG,
T.LAST_UPDATED_BY = S.LAST_UPDATED_BY,
T.ID = TFR.INVENTORIES_SEQ.NEXTVAL,
T.CREATION_DATE = CURRENT_DATE,
T.LAST_UPDATE_DATE = CURRENT_DATE
when not matched
then insert (
T.ORGANIZATION_ID,
T.ITEM_ID,
T.ITEM_TYPE,
T.SEGMENT1,
T.DESCRIPTION,
T.LIST_PRICE_PER_UNIT,
T.CREATED_BY,
T.DEFAULT_SO_SOURCE_TYPE,
T.MATERIAL_BILLABLE_FLAG,
T.LAST_UPDATED_BY,
T.ID,
T.CREATION_DATE,
T.LAST_UPDATE_DATE
)
values (
S.ORGANIZATION_ID,
S.ITEM_ID,
S.ITEM_TYPE,
S.SEGMENT1,
S.DESCRIPTION,
S.LIST_PRICE_PER_UNIT,
S.CREATED_BY,
S.DEFAULT_SO_SOURCE_TYPE,
S.MATERIAL_BILLABLE_FLAG,
S.LAST_UPDATED_BY,
TFR.INVENTORIES_SEQ.NEXTVAL,
CURRENT_DATE,
CURRENT_DATE
) -
MERGE error : unable to get a stable set of rows in the source tables
Hi,
For an update, the following MERGE statement throws the error-unable to get a stable set of rows in the source tables:
MERGE INTO table2t INT
USING (SELECT DISTINCT NULL bdl_inst_id, .......
FROM table1 ftp
WHERE ftp.gld_business_date = g_business_date
AND ftp.underlying_instrument_id IS NOT NULL) ui
ON ( ( INT.inst_id = ui.inst_id
AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
OR ( INT.ric = ui.ric
AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
OR ( INT.isin = ui.isin
AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
OR ( INT.sedol = ui.sedol
AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date )
OR ( INT.cusip = ui.cusip
AND g_business_date BETWEEN INT.valid_from_date AND INT.valid_to_date ) )
WHEN MATCHED THEN
UPDATE
SET INT.inst_id = ui.inst_id, INT.ric = ui.ric
WHEN NOT MATCHED THEN
INSERT (inst_key, ......)
VALUES (inst_key, ......);
The logic to determine whether a record exists is: first check for a match on the first key; if none is found, search on the second key, and so on.
Now, two records with different first keys (inst_id) can have the same ric (the second key). On a rerun, with the target table already populated, the code fails because it finds duplicate entries for the second key.
Any suggestions on how to make this work?
Thanks in advance.
Annie
Annie,
You've spotted the problem (two records have the same RIC). MERGE doesn't allow that: each record in the data being updated may be updated at most once.
Is there a PK column (or columns) that we can rely on?
What you can try is to outer join FTP to INT. Something like:
MERGE INTO INT int1
USING (
select columns_you_need
from (
select ftp.columns -- whatever they are
, int2.columns
, row_number() over (partition by int2.pk_columns order by int2.somecolumns) as rn
from ftp
left join int int2
on (the condition you used in your query)
) s
where s.rn = 1
) u
ON (your matching condition)
WHEN MATCHED THEN UPDATE ...
WHEN NOT MATCHED THEN INSERT ...
So if you can restrict the driving query so that only the first of the possible updates actually gets presented to the MERGE operation, you might be in with a chance :-)
And of course this error has nothing to do with any triggers.
HTH
Regards Nigel -
MERGE Statement - unable to get a stable set of rows in the source tables
OWB Client: 10.1.0.2.0
OWB Repository: 10.1.0.1.0
I am trying to create a MERGE in OWB.
I get the following error:
ORA-12801: error signaled in parallel query server P004 ORA-30926: unable to get a stable set of rows in the source tables
I have read the other posts regarding this and can't seem to get a fix.
The target table has a unique index on the field that I am matching on.
The "incoming" data doesn't have a unique index, but I have checked and confirmed that it is unique on the appropriate key.
The "incoming" data is created by a join and filter in the mapping and I'd rather avoid having to load this data into a new table and add a unique index on this.
Any help would be great.
Thanks
Laura
Hello Laura,
The MERGE statement does not require any constraints on its target table or source table. The only requirement is that two input rows cannot update the same target row; that is, each existing target row may be matched by at most one input row (otherwise the MERGE would be non-deterministic, since you wouldn't know which of the input rows you would end up with in the target).
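A way to verify this condition (a sketch; target_table, incoming_data, and match_col are placeholders for your actual objects): count how many input rows hit each target row and look for counts above one.

```sql
SELECT t.match_col, COUNT(*) AS input_rows
FROM   target_table t
JOIN   incoming_data s ON s.match_col = t.match_col
GROUP  BY t.match_col
HAVING COUNT(*) > 1;
```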
If a table takes ages to load (and is not really big) I suspect that your mapping is not running in set mode and that it performs a full table scan on source data for each target row it produces.
If you ARE running in set mode you should run explain plan to get a hint on what is wrong.
Regarding your original mapping, try to set the target operator property:
Match by constraint=no constraints
and then check the Loading properties on each target column.
Regards, Hans Henrik -
Getting the Source File name Info into Target Message
Hi all,
I want to put the source file name into one of the fields of the target message.
i followed Michal BLOG /people/michal.krawczyk2/blog/2005/11/10/xi-the-same-filename-from-a-sender-to-a-receiver-file-adapter--sp14
Requirement :
1) I am able to get the target file name to be the same as the source file name when I check ASMA in the sender and receiver adapters, without any UDF. This part is OK.
2) I added one extra field, "FileName", to the target structure and mapped it like:
Constant("")--UDF-->FileName
I checked the ASMA option in both the sender and receiver adapters.
Here I get the target file name the same as the source file name, plus the source file name in the target field "FileName".
I don't want the target file name to be the same as the source file name; I want something like Out.xml as the target file name.
If I deselect the ASMA option in the adapters, the target field "FileName" shows a null value.
Please provide a solution for this.
Regards
Bopanna
Hi All,
I was able to do this by checking the ASMA option in the sender adapter only.
Regards
Bopanna -
How to get the source of a strange posted pic into my camera roll?
A strange picture appeared in my Camera Roll, most likely put there by an app that has access to my photo gallery. All I want to know is how to find the source that put this image into my Camera Roll. I have a bunch of apps with photo-access permission and don't want to disable all of them because of just one.
p.s.: the apps with photo access on my device are ProCam, CameraArtFXFree, Photo Editior-, Instagram, Poster++, Photo Vault, Facebook, Tango, Viber, Y! Messenger, Ipadio, Line, and WhatsApp. The only apps open when this photo was pushed to my Camera Roll were Viber, Tango, Line, and WhatsApp.
Thanks in advance.
>
Nitesh Kumar wrote:
> Hi,
>
> FM to get the program source code: RPY_PROGRAM_READ
>
> By using this FM you can get the program name(say report_name) and then you can use
>
> READ REPORT report_name INTO itab
>
> Thanks
> Nitesh
You don't need the last statement; the FM itself returns an itab with the code in it. -
How will get the source code of all the tables in a given schema using SQL?
Hi All,
How can we get the source code of all the tables in a given schema using SQL?
Thanks in Adv.
Junu
Try something like...
set heading off
set pagesize 0
col meta_data for a96 word_wrapped
set long 100000
SELECT DBMS_METADATA.GET_DDL(object_type, object_name, owner) ||';' AS meta_data
FROM dba_objects
WHERE owner = '<SCHEMA NAME>'
AND object_type not in (<list of stuff you do not want>); -
ORA-30926: unable to get a stable set of rows in the source table
When users try to open a form, they get the error below.
com.retek.platform.exception.RetekUnknownSystemException:ORA-30926: unable to get a stable set of rows in the source tables
Please advice
Edited by: user13382934 on Jul 9, 2011 1:32 PM
Please try this:
create table UPDTE_DEFERRED_MAILING_RECORDS nologging as
SELECT distinct a.CUST_ID,
a.EMP_ID,
a.PURCHASE_DATE,
a.drank,
c.CONTACT_CD,
c.NEW_CUST_CD,
a.DM_ROW_ID
FROM (SELECT a.ROWID AS DM_ROW_ID,
a.CUST_ID,
a.EMP_ID,
a.PURCHASE_DATE,
dense_rank() over(PARTITION BY a.CUST_ID, a.EMP_ID ORDER
BY a.PURCHASE_DATE DESC, a.ROWID) DRANK
FROM deferred_mailing a) a,
customer c
WHERE a.CUST_ID = c.CUST_ID
AND a.EMP_ID = c.EMP_ID
AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR
c.PURCHASE_DATE IS NULL)
and a.drank=1;
The query you've posted is behaving according to the expectations. The inner select is returning one row and the outer is returning two as the
WHERE a.CUST_ID = c.CUST_ID
AND a.EMP_ID = c.EMP_ID
AND (a.PURCHASE_DATE <= c.PURCHASE_DATE OR
c.PURCHASE_DATE IS NULL)
conditions are seeing two rows in the table customer.
I've added the a.drank=1 clause to skip the duplicates from the inner table and distinct in the final result to remove duplicates from the overall query result.
For example, if you have one more row in DEFERRED_MAILING like this:
SQL> select * from DEFERRED_MAILING;
CUST_ID EMP_ID PURCHASE_
444 10 11-JAN-11
444 10 11-JAN-11
then the query without "a.drank=1" will return 4 rows like this by the outer query.
CUST_ID EMP_ID PURCHASE_ DM_ROW_ID DRANK C N
444 10 11-JAN-11 AAATi2AAGAAAACcAAB 2 Y Y
444 10 11-JAN-11 AAATi2AAGAAAACcAAA 1 Y Y
444 10 11-JAN-11 AAATi2AAGAAAACcAAB 2 Y Y
444 10 11-JAN-11 AAATi2AAGAAAACcAAA 1 Y Y
It'll return the following even if we use distinct on the same query (i.e. without a.drank=1):
CUST_ID EMP_ID PURCHASE_ DM_ROW_ID DRANK C N
444 10 11-JAN-11 AAATi2AAGAAAACcAAB 2 Y Y
444 10 11-JAN-11 AAATi2AAGAAAACcAAA 1 Y Y
which contains duplicates again.
So, we need a combination of DISTINCT and DENSE_RANK here.
By the way, please mark the thread as 'answered' if you feel your question was answered. This will save the time of others who search for open questions to answer.
Regards,
CSM -
Row should get added in the target as soon as the data in the source table
I have done the following:
* The source table is part of the CDC process.
* I have started the journal on the source table.
Whenever I change the data in the source, I expect a new row to be added to the target with a new sequence number as the surrogate key. I find that even though the source data changes, the new row does not get added.
Could someone point out why the new row is not getting added?
Step 1 - Sequence Number
Create a sequence in your RDBMS, e.g.:
CREATE SEQUENCE SEQUENCE_NAME
MINVALUE 1
MAXVALUE 99999
START WITH 1
INCREMENT BY 1
You can use the above sequence in your mapping as schema_name.sequence_name.NEXTVAL, executed on the target.
Then select only the Insert option for the sequence column.
Step 2 - Journalized Data
Click on the source datastore, and in the Properties panel you will find an option called "Journalized Data Only". Now whenever this interface runs, only the journalized data gets transferred.
The other way to see the journalized data on the source side is to right-click the source datastore under the journalized model, then go to "Changed Data Capture" and then "Journal Data...".
Now you can see only the journalized data.
As CDC creates a trigger at the source, whenever there is a change in the source it gets captured at the target when you run the above interface with the Journalized Data Only option.
I hope that is clear and elaborate enough.
Thanks -
What is the best way to get the end of record from internal table?
Hi,
What is the best way to get the latest year and month, i.e. the last record per material (KD00011001H 1110 2007 11), not KE00012002H or KA00012003H?
Is there any function for the MBEWH table?
MATNR BWKEY LFGJA LFMON
========================================
KE00012002H 1210 2005 12
KE00012002H 1210 2006 12
KA00012003H 1000 2006 12
KD00011001H 1110 2005 12
KD00011001H 1110 2006 12
KD00011001H 1110 2007 05
KD00011001H 1110 2007 08
KD00011001H 1110 2007 09
KD00011001H 1110 2007 10
KD00011001H 1110 2007 11
thank you
dennis
Edited by: ogawa Dennis on Jan 2, 2008 1:28 AM
Edited by: ogawa Dennis on Jan 2, 2008 1:33 AM
Hi Dennis,
You can try this:
SORT <your internal table of MBEWH> BY matnr lfgja DESCENDING lfmon DESCENDING.
Then read the first row for each MATNR to get its latest period.
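If the data can instead be read from the database, an analytic query is an alternative to sorting an internal table (a sketch; assumes Oracle-style SQL and the MBEWH columns shown in the listing above):

```sql
SELECT matnr, bwkey, lfgja, lfmon
FROM  (SELECT m.*,
              ROW_NUMBER() OVER (PARTITION BY matnr, bwkey
                                 ORDER BY lfgja DESC, lfmon DESC) AS rn
       FROM   mbewh m)
WHERE rn = 1;
```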
Thanks
William Wilstroth -
SLT - Splitting one source table into two tables in the destination
Hi,
I am wondering if we can split the content of one source table into two different tables in the destination (a HANA DB, in my case) with SLT, based on codified mapping rules.
We have the VBAK table in ERP which has the header information about various business objects (quote, sales order, invoice, outbound delivery to name a few). I want this to be replicated into tables specific to business object (like VBAK_QUOT, VBAK_SO, VBAK_INV, etc) based on document type column.
There is one way to do it as far as I know: have multiple configurations and replicate to different schemas. But then we would be limited to four different configurations at most.
Any help here will be highly appreciated
Regards,
Sesh
Please take a look at these links related to your query.
http://stackoverflow.com/questions/5145637/querying-data-by-joining-two-tables-in-two-database-on-different-servers
http://stackoverflow.com/questions/7037228/joining-two-tables-together-in-one-database -
Dynamically passing the source table name to OWB mapping
I am building a mapping wherein one of the source tables is a view. The view name varies with a time parameter I pass in. I am looking for a way to pass the time parameter to the mapping procedure so that it first gets the view name from a table and then uses that view as the source table to fetch data. Any directions?
In normal PL/SQL coding, I can first get the view name and use it to build a dynamic query, which can then be executed.
This is a common question. The best way to do this is to use a synonym.
Create the synonym in the database and import it into OWB. Use the synonym in your mapping. Have the mapping accept a mapping input for the table you want the synonym to point to. Set up a premapping process to re-create the synonym pointing at the table you want to use.
Here is the procedure that I use. It defaults to a private synonym. Remember, the synonym will be created in the same schema that the mapping is deployed to.
CREATE OR REPLACE PROCEDURE "CAWDATA"."CREATE_SYNONYM_PRC" ("P_SYNONYM_NAME" IN VARCHAR2, "P_OBJECT_NAME" IN VARCHAR2,
"P_IS_PUBLIC_SYNONYM" IN BOOLEAN DEFAULT false) IS
BEGIN
if p_is_public_synonym = true then
execute immediate 'create or replace public synonym '|| p_synonym_name || ' for '|| p_object_name;
else
execute immediate 'create or replace synonym '|| p_synonym_name || ' for '|| p_object_name;
end if;
exception
when others
then
raise_application_error(sqlcode,sqlerrm) ;
END;
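A quick usage sketch for the procedure above, as it might appear in the premapping step (the synonym and view names are placeholders):

```sql
-- premapping step: point the synonym at this run's view before the mapping reads it
BEGIN
  CAWDATA.CREATE_SYNONYM_PRC('SRC_DATA_SYN', 'MY_VIEW_200701');
END;
/
```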