How to lock my source table?
There are other jobs updating my source table at the same time my mapping is running, and this is causing problems. Is there any way I can lock my source table until my mapping is complete? In other words, can I issue a "select for update"?
I am running 10gR1 and 10gR2.
Thanks,
Tim O'Brien
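For reference, Oracle does support explicit locking from a session; a minimal sketch (table and column names are hypothetical, and note that ODI would have to issue this in the same session/transaction as the load for the lock to hold):

```sql
-- Option 1: lock the whole table until this transaction ends.
-- DML from other sessions waits (or fails immediately with NOWAIT).
LOCK TABLE src_orders IN EXCLUSIVE MODE NOWAIT;

-- Option 2: lock only the rows you are about to read.
SELECT order_id, amount
FROM   src_orders
FOR UPDATE;

-- Either lock is released as soon as the session issues COMMIT or
-- ROLLBACK, so per-step commits in a mapping will drop it.
```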
Similar Messages
-
How to find the Source Table in one corresponding schema?
Hi All,
How to find the Source Table in one corresponding schema?
regards,
DB

DB wrote:
Hi All,
How to find the Source Table in one corresponding schema?
regards,
DB

HUH? I do not understand your question.
How do I ask a question on the forums?
SQL and PL/SQL FAQ -
How to Order a Source Table ???
How can I order the SELECT on my source table?
I have tried to put an ORDER BY in the same field as my filter, but it is built with parentheses.
If I put a filter with:
COL1='3' ORDER BY COL2
it is built like:
Select ... where ((COL1 = '3') (ORDER BY COL2))
and the parentheses raise an error...
If someone knows....
Thanks

I'll share two (not so straightforward) approaches:
1) In the KM, append "order by 1" as the last line of the SELECT in the load step of the KM. Then modify the source datastore so that the required sort column is pushed to the top, i.e. order 1. Whenever you use this KM it will sort by the first column.
2) Do a "cheat" in the filter. Write the ORDER BY clause like this:
table.col1=0) order by (table.col_sort
When compiled, it results in:
(table.col1=0) order by (table.col_sort)
If you have more than one column in the ORDER BY clause, then use the following:
table.col1=0) order by (table.col_sort1), (table.col_sort2
Enjoy ODI,
Karthik -
How to read the source table using a dblink in oracle
Hi,
I want to read data from a source table which I can access via a dblink from the data warehouse. I have tried different things in DI but could not work out how to do it. Can anyone help me with this?
Thanks,
Gowri

Two options:
First, you could create a view in the DWH database that uses that remote link: create or replace view r_source as select * from source@dblink.
The better option is to create the source system datastore - but I assume that exists already - and then go to the DWH datastore and declare that there is a dblink of a given name with which the target database can read from the other database. DI will then execute an insert...select from source@dblink kind of statement whenever possible. In cases where no such full pushdown is possible and the data goes through the DI engine anyway, reading via the dblink brings no benefit.
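A sketch of the two options (view name, link name and columns are hypothetical):

```sql
-- Option 1: wrap the remote table in a local view in the DWH database.
CREATE OR REPLACE VIEW r_source AS
SELECT * FROM source@dblink;

-- Option 2 (full pushdown): when DI can push the whole statement down,
-- it generates something of this shape against the target database:
INSERT INTO dwh_target (id, name)
SELECT id, name FROM source@dblink;
```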
https://boc.sdn.sap.com/node/5065
https://boc.sdn.sap.com/node/5814 -
How to delete the source table rows once loaded in Destination Table in SSIS?
Database = kssdata
Table = Userdetails, having 1000 rows
Using SSIS:
Taking an
OLE DB Source ---------------> OLE DB Destination
I am taking 200 rows from the source table and loading them into the destination table at a time.
Constraint: once 200 rows have been exported to the destination table, those 200 rows must be deleted from the source table.
Repeat the task until all the records from the source table have been loaded into the destination table.
After that I take another 200 rows from the source table and load them into the destination table.

Provided you have a sequential primary key or an audit timestamp (datetime/date) column in the table, you can use an approach like this:
1. Add an Execute SQL Task connecting to the source DB with the statement below, and store the result in a variable:
SELECT COUNT(*) FROM table
2. Have another variable and set it to the expression below by setting EvaluateAsExpression to true. Here CountVariable is the variable created in the previous step:
(@[User::CountVariable] / 200) + (@[User::CountVariable] % 200 > 0 ? 1 : 0)
3. Have a For Loop Container with the settings below:
InitExpression
@NewVariable = @CounterVariable
EvalExpression
@NewVariable > 0
AssignExpression
@NewVariable = @NewVariable - 1
4. Inside the loop, add a Data Flow Task with an OLE DB source and OLE DB destination.
5. Use this source query (use the PK or the audit column, whichever one is sequential):
SELECT TOP 200 columns...
FROM Table
ORDER BY [PK | AuditColumn]
6. After the data flow task, have an Execute SQL Task with the statement below:
DELETE t
FROM (SELECT ROW_NUMBER() OVER (ORDER BY PK) AS Rn
      FROM Table) t
WHERE Rn <= 200
This will make sure the 200 records get deleted each time.
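For what it's worth, if source and destination live on the same SQL Server instance, the move-and-delete can also be done in one set-based statement instead of a loop; a minimal sketch (table and column names are hypothetical):

```sql
-- Deletes up to 200 rows from the source and inserts the deleted rows
-- into the destination in a single atomic statement.
DELETE TOP (200) src
OUTPUT deleted.UserId, deleted.UserName
INTO   DestDb.dbo.Userdetails (UserId, UserName)
FROM   kssdata.dbo.Userdetails AS src;
```

Note that OUTPUT ... INTO cannot target a remote (linked-server) table, which is why the data-flow approach above is needed when the servers differ.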
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
My Wiki User Page
My MSDN Page
My Personal Blog
My Facebook Page -
How to join multiple source tables and do lookup?
I have a requirement to load a target table by joining 4 source tables. Also I have to do a lookup on a domain table to transform codes and check for nulls. What will be the best approach to load the target table?
Is it possible to do it in one interface or do I need to build multiple interfaces to achieve this?
My source and target databases are both Oracle and I am planning to use Oracle Incremental Update Merge.
Thank you

You are going in the right direction by creating one interface for this transformation.
You will need to drag and drop the 4 source tables + the lookup table onto the Sources window of the interface and then make the appropriate joins.
Also, check for NULLs in the transformation. It depends what you want to do with the NULLs. If you want to ignore them, use a filter.
If you want them to error out, use a constraint.
If you want to convert them, use NVL.
Start with Oracle Incremental Update and once successful, use Oracle Incremental Update MERGE. -
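The three NULL-handling options above can be sketched as ODI expressions (column names are hypothetical):

```sql
-- Ignore NULLs: add a filter on the source
SRC.STATUS_CODE IS NOT NULL

-- Error them out: a check condition (constraint) on the target datastore
TGT.STATUS_CODE IS NOT NULL

-- Convert them: map the target column with
NVL(SRC.STATUS_CODE, 'UNKNOWN')
```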
How to identify EBS Source tables for SC and OM modules?
Hi,
I need to identify EBS source tables for Supply Chain and Order Management module.
What prefix should I check in the EBS tables?
Is there any document on this?
Regards
Sudipta

Check etrm.oracle.com.
For Order Management you should check OE; for Purchasing, PO; for Inventory, INV; and I guess MSC for ASCP.
Mahendra -
Hi
I am doing a POC on joining two MSSQL 2K source tables and populating an Oracle table.
I did not find the joiner operator and other transformation operators.
Can anyone help me in using them?
Thanks,
Ganesh

Hi,
First, just drag & drop the two source tables into the interface.
Then, connect the two tables with each other using your mouse while selecting the two columns that you want to use in your join.
Transformations can be found by selecting a field in your target datastore; a screen will appear in the 'properties' panel of your interface editor, where you can edit transformations.
Good luck ...
Steffen -
How to get source table name according to target table
hi all
another question:
Once a map has been created and deployed, the corresponding information is stored in the design repository and the runtime repository. My question is: how do I find the source table name for a given target table, and in which tables are these records recorded?
Somebody help me please!!
Thanks a lot!

This is a query that will get you the operators in a mapping. To get sources and targets you will need some additional information, but this should get you started:
set pages 999
col PROJECT format a20
col MODULE format a20
col MAPPING format a25
col OPERATOR format a20
col OP_TYPE format a15
select mod.project_name PROJECT
, map.information_system_name MODULE
, map.map_name MAPPING
, cmp.map_component_name OPERATOR
, cmp.operator_type OP_TYPE
from all_iv_xform_maps map
, all_iv_modules mod
, all_iv_xform_map_components cmp
where mod.information_system_id = map.information_system_id
and map.map_id = cmp.map_id
and mod.project_name = '&Project'
order by 1,2,3;
Jean-Pierre -
Hi Experts,
Please can someone tell me how to find the source table for a structure. I have a structure AFVGD and a transparent table AFIH, and I want to find the physical table these are based on; for example, I want to know which table the price field in AFVGD comes from.
Thanks
Robbie

Go to SE11 and display the structure AFVGD.
For the price field PREIS, take the data element name; it will be PREIS.
Come back to the initial screen of SE11, go to Data type, enter the name PREIS, and in the application toolbar click the where-used-list button. A box will pop up; check 'Tables' there and press Enter, and it will display the list of tables.
I think you will find it in table AFVC.
regards
shiba dutta -
Hi Experts,
Need your help. Please let me know how to find the source tables for a particular field. I am using transaction MCTA, and when I try to get the table for KUNNR, i.e. Sold-to Party, it shows me a structure and not the source table. Though we know the tables for customer master, I would like to know the procedure for identifying the source table rather than the structure....
Thanks in Advance
Regards,
Shivaji.

Hi,
If you know the field, then go to transaction SE84. Click on ABAP Dictionary > Fields. Double-click on the view fields. There you will find 'Field Name'. Put in KUNNR and execute. You will find the list of tables and views. You can recognize the table by its description.
Else, go to SE12, put in DD03L and go to display. Put in the field name and you will get the list of all the views, structures and tables.
SE84 will be more useful for you.
Thanks
Mukund S
Reward points if helpful.... -
Hi,
Please let me know how to find the source table for GRN in MM .
Thanks in Advance,
Manu

Hi,
You can also use ST05:
Switch on the SQL trace,
execute the transaction,
then switch off the SQL trace.
You can then see the tables that were used in the transaction.
Reward points if found helpful..
Cheers,
Chandra Sekhar. -
Locking source tables during process flow
Hi All,
I have a process flow which calls multiple mappings and stored procedures. I pull data from a source located in a different database and insert the data into the current database. Since this is a delta migration, I want to make sure that during the migration no user can add data to the source until the migration has completed successfully.
I called a procedure which locks the individual tables, but as soon as the next mapping completes, the lock gets released, since each mapping commits its data. I want the lock to stay active until all mappings have completed.
Regards,
Danish

Commit will release all locks. Therefore, no matter how you lock the table, the lock will be released when you commit the map. An alternative might be to capture a frozen image of the data into a local GTT created with the ON COMMIT PRESERVE ROWS option.
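A minimal sketch of that GTT approach (table and column names are hypothetical):

```sql
-- Session-scoped snapshot table: rows survive commits
-- and disappear when the session ends.
CREATE GLOBAL TEMPORARY TABLE src_orders_snap
ON COMMIT PRESERVE ROWS
AS SELECT * FROM src_orders WHERE 1 = 0;

-- At the start of the process flow, freeze the source image:
INSERT INTO src_orders_snap SELECT * FROM src_orders;
COMMIT;

-- All subsequent mappings read from src_orders_snap instead of
-- src_orders, so the per-mapping commits cannot expose mid-flight
-- changes made to the real source.
```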
-
How to get Materialized View to ignore unused columns in source table
When updating a column in a source table, records are generated in the corresponding materialized view log table. This happens even if the column being updated is not used in any MV that references the source table. That could be OK, so long as those updates are ignored. However they are not ignored, so when the MV is fast refreshed, I find it can take over a minute, even though no changes are required or made. Is there some way of configuring the materialized view log such that the materialized view refresh ignores these updates ?
So for example, if I have table TEST:
CREATE TABLE test (
  d_id   NUMBER(10) PRIMARY KEY,
  d_name VARCHAR2(100),
  d_desc VARCHAR2(256)
);
This has an MV log MLOG$_TEST:
CREATE MATERIALIZED VIEW LOG ON test WITH ROWID, SEQUENCE, PRIMARY KEY;
CREATE MATERIALIZED VIEW test_mv
REFRESH FAST ON DEMAND
AS
SELECT d_id, d_name
FROM test;
-- INSERT 200,000 records
exec dbms_mview.refresh('TEST_MV','f');
UPDATE test SET d_desc = UPPER(d_desc);
exec dbms_mview.refresh('TEST_MV','f'); -- This takes 37 seconds, yet no changes are required.
Oracle 10g/11g

I would love to hear a positive answer to this question - I have the exact same issue :-)
In the "old" days (version 8, I think it was) populating the materialized view logs was done by Oracle auto-creating triggers on the base table. A "trick" could then make that trigger become "FOR UPDATE OF <used_column_list>". Nowadays it has been internalized, so such "triggers" are not visible and modifiable by us mere mortals.
I have not found a way to explicitly tell Oracle "only populate MV log for updates of these columns." I think the underlying reason is that the MV log potentially could be used for several different materialized views at possibly several different target databases. So to be safe that the MV log can be used for any MV created in the future - Oracle always populates MV log at any update (I think.)
One way around the problem is to migrate to STREAMS replication rather than materialized views - but it seems to me like swatting a fly with a bowling ball...
One thing to be aware of: once the MV log has been "bloated" with a lot of unnecessary logging, you may see that all your FAST REFRESHes afterwards become slow - even after the one that checked all the 200,000 unnecessary updates. We have seen that Oracle can decide on full-table-scanning the MV log when it does a fast refresh - which usually makes sense. But after a "bloat" has happened, the high water mark of the MV log is unnaturally high, which can make the full table scan slow by scanning a lot of empty blocks.
We have a nightly job that checks each MV log if it is empty. If it is empty, it locks the MV log and locks the base table, checks for emptiness again, and truncates the MV log if it is still empty, before finally unlocking the tables. That way if an update during the day has happened to bloat the MV log, all the empty space in the MV log will be reclaimed at night.
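That nightly cleanup could be sketched roughly like this (object names are hypothetical, and a production version would need error handling and site-approved lock modes):

```sql
-- Nightly job: if the MV log is empty, lock out writers, re-check,
-- and truncate to reset the high water mark.
DECLARE
  v_cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_cnt FROM mlog$_test WHERE ROWNUM = 1;
  IF v_cnt = 0 THEN
    LOCK TABLE test IN EXCLUSIVE MODE;  -- block new DML on the base table
    SELECT COUNT(*) INTO v_cnt FROM mlog$_test WHERE ROWNUM = 1;
    IF v_cnt = 0 THEN
      -- DDL, so it also ends the transaction and releases the lock
      EXECUTE IMMEDIATE 'TRUNCATE TABLE mlog$_test';
    END IF;
  END IF;
  COMMIT;  -- releases the lock in case we did not truncate
END;
/
```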
But I hope someone can answer both you and me with a better solution ;-) -
How can I replicate a source table that doesn't have any primary keys?
We have transactional replication setup in our workplace.
In the source database, there are some tables that do not have any primary key.
1) How can I get these tables to replicate in the current scenario?
2) Is it possible to introduce foreign elements in a replicated instance of the database?
Example, additional records in a table that don't exist in the source or additional tables in the database?1) You need to add a primary key to this table. There must be a criteria that the app uses to identify which row it wants to up date or delete. If not you might be able to add an identity column to the table and then add a primary key to it. If this is not
possible you might want to use snapshot replication or CDC to do change tracking and then something like SSIS or service broker to move the change to the destination server.
2) yes, but be careful. They should not modify the schema or the data of tables which are being replicated.
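The identity-column route from option 1 might look like this (table and constraint names are hypothetical):

```sql
-- Add a surrogate key so the table becomes replicable.
ALTER TABLE dbo.SourceNoKey
    ADD RowId INT IDENTITY(1,1) NOT NULL;

ALTER TABLE dbo.SourceNoKey
    ADD CONSTRAINT PK_SourceNoKey PRIMARY KEY CLUSTERED (RowId);
```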
looking for a book on SQL Server 2008 Administration?
http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X looking for a book on SQL Server 2008 Full-Text Search?
http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941