Setup table Administration with Direct Delta mode
Hi guys,
I have Direct Delta as the update mode for my 2LIS_11_VAITM extractor in R/3.
I'd like to know whether I also have to manage the corresponding setup table (MC11VA0ITMSETUP) with this update mode.
For instance, if I delete the content of an ODS and restart a new init on the BW system, do I need to delete and then refill the setup table?
In general, do I always need to manage the setup table with every update mode (direct delta, queued delta and V3 update)?
If you know the answers, please reply directly rather than posting a link.
Many thanks.
Yes, you do. The initial loads for LO DataSources are completely independent of the delta loads, and hence of the delta method.
/people/sap.user72/blog/2005/01/19/logistic-cockpit-delta-mechanism--episode-three-the-new-update-methods
M.
Similar Messages
-
2LIS_03_UM - FULL LOAD VIA SETUP TABLE NOT EQUAL TO DELTA
Hello,
I am trying to load stock data with 2LIS_03_UM.
When I extract data via the setup table with a full load, I don't get the same records as with a delta load. Although the key figure values are the same, the document numbers differ: most documents from the delta load are ML (material ledger) documents, while documents from the full load (via the setup table) are mainly FI documents.
What can be the reason? How can I fix it?
Thank you,
Hadar
Hi,
A delta load extracts data from the delta queue, based on the update mode selected in LBWE.
A full or init load extracts data from the setup tables. If you are extracting from the setup tables, did you refill them after deleting their existing contents?
Are you getting the same values when comparing R/3 and BW data for some document numbers? -
Serial table scan with direct path read compared to db file scattered read
Hi,
The environment
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
8K block size
db_file_multiblock_read_count is 128
show sga
Total System Global Area 1.6702E+10 bytes
Fixed Size 2219952 bytes
Variable Size 7918846032 bytes
Database Buffers 8724152320 bytes
Redo Buffers 57090048 bytes
16GB of SGA with 8GB of db buffer cache.
-- database is built on Solid State Disks
-- SQL trace and wait events
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true )
-- The underlying table is called tdash. It has 1.7 Million rows based on data in all_objects. NO index
TABLE_NAME Rows Table Size/MB Used/MB Free/MB
TDASH 1,729,204 15,242 15,186 56
TABLE_NAME Allocated blocks Empty blocks Average space/KB Free list blocks
TDASH 1,943,823 7,153 805 0
Objectives
To show that for serial scans, the performance gain of a database built on Solid State Disks (SSD) over magnetic disks (HDD) is far smaller than the gain SSD shows over HDD for random reads with index scans.
Approach
We want to read the first 100 rows of the tdash table randomly into the buffer, recording the wait events and wait times generated. The idea is that on SSD the wait times will be better than on HDD, but not by much, given the serial nature of table scans.
The code used
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_with_tdash_ssdtester_noindex';
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
/
Server is rebooted prior to any tests.
When run with defaults, the optimizer (although some attribute this to the execution engine) chooses direct path read into the PGA in preference to db file scattered read.
With this choice it takes 6,520 seconds to complete the query. The results are shown below.
SQL ID: 78kxqdhk1ubvq
Plan Hash: 1148949653
SELECT *
FROM
TDASH WHERE OBJECT_ID = :B1
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 2 47 0 0
Execute 100 0.00 0.00 1 51 0 0
Fetch 100 10.88 6519.89 194142802 194831012 0 100
total 201 10.90 6519.90 194142805 194831110 0 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL TDASH (cr=1948310 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 2 0.00 0.00
direct path read 1517504 0.05 6199.93
asynch descriptor resize 196 0.00 0.00
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 3.84 4.03 320 48666 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 3.84 4.03 320 48666 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID: 9babjv8yq8ru3
Plan Hash: 0
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 3.84 4.03 320 48666 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.84 4.03 320 48666 0 2
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.01 0.00 2 47 0 0
Execute 129 0.01 0.00 1 52 2 1
Fetch 140 10.88 6519.89 194142805 194831110 0 130
total 278 10.91 6519.91 194142808 194831209 2 131
Misses in library cache during parse: 9
Misses in library cache during execute: 8
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 5 0.00 0.00
Disk file operations I/O 3 0.00 0.00
direct path read 1517504 0.05 6199.93
asynch descriptor resize 196 0.00 0.00
102 user SQL statements in session.
29 internal SQL statements in session.
131 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: mydb_ora_16394_test_with_tdash_ssdtester_noindex.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
102 user SQL statements in trace file.
29 internal SQL statements in trace file.
131 SQL statements in trace file.
11 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ssdtester.plan_table
Schema was specified.
Table was created.
Table was dropped.
1531657 lines in trace file.
6520 elapsed seconds in trace file.
I then force the query not to use direct path read by invoking
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1'; -- disable direct path read
In this case the optimizer predominantly uses db file scattered read, and the query takes 4,299 seconds to finish, around 34% faster than with direct path read (the default).
The report is shown below
SQL ID: 78kxqdhk1ubvq
Plan Hash: 1148949653
SELECT *
FROM
TDASH WHERE OBJECT_ID = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 2 47 0 0
Execute 100 0.00 0.00 2 51 0 0
Fetch 100 143.44 4298.87 110348670 194490912 0 100
total 201 143.45 4298.88 110348674 194491010 0 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL TDASH (cr=1944909 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 129759 0.01 17.50
db file scattered read 1218651 0.05 3770.02
latch: object queue header operation 2 0.00 0.00
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 3.92 4.07 319 48625 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 3.92 4.07 319 48625 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID: 9babjv8yq8ru3
Plan Hash: 0
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 3.92 4.07 319 48625 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.92 4.07 319 48625 0 2
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.01 0.00 2 47 0 0
Execute 129 0.00 0.00 2 52 2 1
Fetch 140 143.44 4298.87 110348674 194491010 0 130
total 278 143.46 4298.88 110348678 194491109 2 131
Misses in library cache during parse: 9
Misses in library cache during execute: 8
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 129763 0.01 17.50
Disk file operations I/O 3 0.00 0.00
db file scattered read 1218651 0.05 3770.02
latch: object queue header operation 2 0.00 0.00
102 user SQL statements in session.
29 internal SQL statements in session.
131 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: mydb_ora_26796_test_with_tdash_ssdtester_noindex_NDPR.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
102 user SQL statements in trace file.
29 internal SQL statements in trace file.
131 SQL statements in trace file.
11 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ssdtester.plan_table
Schema was specified.
Table was created.
Table was dropped.
1357958 lines in trace file.
4299 elapsed seconds in trace file.
I note that there are 1,517,504 direct path read waits with a total wait time of nearly 6,200 seconds. In comparison, without direct path read there are 1,218,651 db file scattered read waits with a total wait time of 3,770 seconds. My understanding is that direct path read can use single- or multi-block reads into the PGA, whereas db file scattered read does multi-block reads into multiple discontiguous SGA buffers. So is it possible, given the higher number of direct path waits, that the engine could not do multi-block reads (contiguous buffers within the PGA) and had to revert to single-block reads, resulting in more calls and more waits?
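One way to probe the single-block-read theory is to divide the physical blocks read by the number of waits in each trace, which gives the average read size per wait. A minimal Python sketch; the figures are copied from the tkprof outputs above, and the interpretation is only my reading of them:

```python
# Back-of-the-envelope check of the average read size implied by the traces.
# All figures are copied from the tkprof wait summaries above.
dp_blocks_per_wait = 194_142_802 / 1_517_504   # direct path read: disk blocks / waits
sc_blocks_per_wait = 110_348_670 / 1_218_651   # db file scattered read: disk blocks / waits

print(f"direct path read:       ~{dp_blocks_per_wait:.0f} blocks per wait")
print(f"db file scattered read: ~{sc_blocks_per_wait:.0f} blocks per wait")
```

With an 8K block size and db_file_multiblock_read_count = 128, the direct path reads average roughly the full multiblock read count per wait, so they do not appear to have degenerated into single-block reads; the scattered reads average somewhat fewer blocks, plausibly because a scattered read cannot span blocks already in the buffer cache.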
Appreciate any advice, and apologies for being long-winded.
Thanks,
Mich
Hi Charles,
I am running your tests for the t1 table on my server.
Just to clarify my environment is:
I did the whole of this test on my server. My server has I7-980 HEX core processor with 24GB of RAM and 1 TB of HDD SATA II for test/scratch backup and archive. The operating system is RHES 5.2 64-bit installed on a 120GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive.
Oracle version installed was 11g Enterprise Edition Release 11.2.0.1.0 -64bit. The binaries were created on HDD. Oracle itself was configured with 16GB of SGA, of which 7.5GB was allocated to Variable Size and 8GB to Database Buffers.
For Oracle tablespaces including SYS, SYSTEM, SYSAUX, TEMPORARY, UNDO and redo logs, I used file systems on a 240GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive. With 4K Random Read at 53,500 IOPS and 4K Random Write at 56,000 IOPS (manufacturer's figures), this drive is probably one of the fastest commodity SSDs using NAND flash memory with Multi-Level Cell (MLC). My T1 table was created as per your script and has the following rows and blocks (8K block size):
SELECT
NUM_ROWS,
BLOCKS
FROM
USER_TABLES
WHERE
TABLE_NAME='T1';
NUM_ROWS BLOCKS
12000000 178952
which is pretty identical to yours.
Then I ran the query as below:
set timing on
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_bed_T1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
SELECT
COUNT(*)
FROM
T1
WHERE
RN=1;
which gives
COUNT(*)
60000
Elapsed: 00:00:05.29
tkprof output shows
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.02 5.28 178292 178299 0 1
total 4 0.02 5.28 178292 178299 0 1
Compared to yours:
Fetch 2 0.60 4.10 178493 178498 0 1
It appears to me that my CPU utilisation is an order of magnitude better, but my elapsed time is worse!
Now, the way I see it, elapsed time = CPU time + wait time. Further down I have
Rows Row Source Operation
1 SORT AGGREGATE (cr=178299 pr=178292 pw=0 time=0 us)
60000 TABLE ACCESS FULL T1 (cr=178299 pr=178292 pw=0 time=42216 us cost=48697 size=240000 card=60000)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
60000 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 3 0.00 0.00
SQL*Net message from client 3 0.00 0.00
Disk file operations I/O 3 0.00 0.00
direct path read 1405 0.00 4.68
Your direct path reads are
direct path read 1404 0.01 3.40
which indicates to me that you have faster disks than mine, whereas it sounds like my CPU is faster than yours.
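The decomposition elapsed = CPU time + wait time can be sanity-checked against the 5.29 s run above. A rough Python sketch, using the figures from my tkprof fetch line and wait summary; the residual covers CPU queueing and anything the trace did not capture:

```python
# Figures from the tkprof fetch line and the wait-event summary above.
elapsed = 5.28            # elapsed seconds (fetch)
cpu = 0.02                # CPU seconds (fetch)
direct_path_wait = 4.68   # total 'direct path read' wait seconds

residual = elapsed - cpu - direct_path_wait
print(f"unaccounted time: {residual:.2f} s")  # time not attributed to CPU or traced waits
```

So roughly half a second of the elapsed time is not explained by CPU plus traced waits, which is consistent with the decomposition being approximate rather than exact.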
With db file scattered read I get
Elapsed: 00:00:06.95
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.22 6.93 178293 178315 0 1
total 4 1.22 6.94 178293 178315 0 1
Rows Row Source Operation
1 SORT AGGREGATE (cr=178315 pr=178293 pw=0 time=0 us)
60000 TABLE ACCESS FULL T1 (cr=178315 pr=178293 pw=0 time=41832 us cost=48697 size=240000 card=60000)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
60000 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
Disk file operations I/O 3 0.00 0.00
db file sequential read 1 0.00 0.00
db file scattered read 1414 0.00 5.36
SQL*Net message from client 2 0.00 0.00
compared to your
db file scattered read 1415 0.00 4.16
On the face of it, this test shows a 21% improvement for direct path read over db file scattered read. So now I can go back and revisit my original test results:
First default with direct path read
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 2 47 0 0
Execute 100 0.00 0.00 1 51 0 0
Fetch 100 10.88 6519.89 194142802 194831012 0 100
total 201 10.90 6519.90 194142805 194831110 0 100
CPU ~ 11 sec, elapsed ~ 6520 sec
wait stats
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
direct path read 1517504 0.05 6199.93
roughly 0.004 sec for each I/O.
Now with db file scattered read I get
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 2 47 0 0
Execute 100 0.00 0.00 2 51 0 0
Fetch 100 143.44 4298.87 110348670 194490912 0 100
total 201 143.45 4298.88 110348674 194491010 0 100
CPU ~ 143 sec, elapsed ~ 4299 sec
and waits:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 129759 0.01 17.50
db file scattered read 1218651 0.05 3770.02
roughly 17.5/129759 = 0.00013 sec per single-block I/O and 3770.02/1218651 = 0.0030 sec per multi-block I/O.
Now my theory is that the improvement comes from the large buffer cache (8,320 MB) inducing some read-aheads (async pre-fetch). Read-aheads are quasi-logical I/Os and are cheaper than physical I/Os. When there is a large buffer cache and read-aheads can be done, is using the buffer cache a better choice than the PGA?
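The per-I/O arithmetic above can be written out as a small Python check (figures copied from the two tkprof wait summaries; the interpretation is mine):

```python
# Average wait per I/O for each access path, from the wait-event totals above.
direct_path  = 6199.93 / 1_517_504   # direct path read (default run)
single_block = 17.50   / 129_759     # db file sequential read
multi_block  = 3770.02 / 1_218_651   # db file scattered read

print(f"direct path read:        {direct_path:.4f} s per I/O")   # ~0.0041
print(f"db file sequential read: {single_block:.5f} s per I/O")  # ~0.00013
print(f"db file scattered read:  {multi_block:.4f} s per I/O")   # ~0.0031
```

On these numbers the buffered multi-block reads are cheaper per I/O than the direct path reads, and the single-block reads satisfied from the cache path are cheaper still.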
Regards,
Mich -
Tax issue with Direct Input mode of RFBIBL00
Hi, I have a problem using <b>RFBIBL00</b> (direct input mode) to create A/R invoices. There is no tax associated with the invoice, however, when I use direct input mode, instead of posting immediately, a batch input session is created. In the log, an <b>information</b> message: <i>'Specify a tax jurisdiction key'</i>. The BDC session is processed with no error.
When using Call transaction mode for RFBIBL00, the document is posted immediately but the requirement is to use Direct input mode.
There is no converted data in the BBTAX structure, since the document does not need to post to a tax account. Do I need to populate the tax amount, tax code and jurisdiction code in this structure and in BBSEG in order to bypass the information message?
Any advice is appreciated.
- Minami
Problem solved. I just needed to untie the relationship between the ITEM import structure and BBTAX so that the direct input program no longer requires a tax jurisdiction code.
-
IMS52 (with Direct LDAP Mode) Directory Failover
I would like to configure all components of iMS5.2 for Directory Server failover. That should include (Direct LDAP) MTA, Messaging Express, authentication, Personal Address Book, Delegated Administration, etc.
What are all the settings I need to configure for any of these components to failover to an alternate directory server?
Thanks,
Fred.
./configutil -o local.ugldaphost -v "host.domain,host.domain,host.domain"
See the 5.2 Reference Manual, Chapter 4 for all of the configutil variables. -
Do setup tables need to be deleted every time with delta?
Hi gurus,
Can you tell me how the setup tables work when delta loading takes place?
Also, what is the difference between the following tables, and what are their names (e.g. VBAK, VBEP etc.)?
Application tables I know: VBAK, VBAP, VBEP, VBUP etc.
Are application tables the same as update tables?
Statistical tables?
Hi,
Setup tables (used one time) hold historical data, which is very important for initialization.
For the daily delta run, the data is picked from the application tables (for SD sales order data: VBAK (header level), VBAP (item level) and VBEP (schedule line)), depending on the collection mode used to reach the delta queue.
we have three types of collection mode:
Direct Delta
Direct transfer of extraction data to the delta queue (use transaction RSA7 to check the delta queue).
Each posting to a document results in an LUW in the delta queue.
Queued Delta
Extraction data is collected in a so-called extraction queue(LBWQ)
Collective run transfers data from extraction queue to delta queue(RSA7)
Unserialized V3 Update
As for the Unserialized V3 Update, extraction data is stored intermediately in the update queue.
Collective run transfers data from update queue(SM13) to delta queue without keeping the chronological sequence of the data.
Once data reaches the delta queue, it is sent to the target by scheduling an InfoPackage.
Hope you are clear now!
Assign points if it helps.
Regards,
Swapna.G -
How can one find the table names of the delta queue and setup table?
Hi!
Is there any method to find the table names of the delta queue and the extraction queue?
setup table = <extract structure>_SETUP
example: setup table for MC11VA0ITM = MC11VA0ITM_SETUP
Delta Queue is based on following 3 tables:
ARFCSDATA
ARFCSSTATE
TRFCQOUT
assign points if useful ***
Thanks,
Raj -
Setup tables, load failed. Please help!
Hi, I have newly created a setup for an SD InfoCube and have completed 5 delta loads from BI. But now my 6th delta load has failed, and the data in my cube is lost. I have a queued delta setup. So now:
1) Delete the init flag.
2) Delete all the setup table data with LBWG.
3) Fill the setup tables again with OLI7BW.
4) Now if I do a full upload, will I get all my previous data (all data up to my 5th delta load)?
5) By doing a full upload, will my 6th delta load also be included in the setup tables, since I'm refilling them?
What should I do if it's an ODS? Is it the same case for an ODS?
Hi,
Are you sure the data is lost? Have you repeated the failed load? If you delete the failed request from the target after forcing the QM status to red in the InfoPackage Monitor >> Status tab, the init pointer is set back. If you then repeat the load, the system will ask you for a repeat delta, meaning it will pick up all the records that were missed because of the failed load.
If you deleted the failed load without setting the QM status to red, but have not yet repeated the load, there is nothing to worry about: just force the QM status to red, then repeat the load, and the system will ask you for a repeat delta. But if you deleted the request without setting the QM status to red and have already repeated the load, your delta mechanism is corrupted and data is lost.
In that case:
1) Delete the init flag.
2) Delete all the setup table data with LBWG.
3) Fill the setup tables again with OLI7BW, with a selection to get the lost records.
4) After that, either do a full upload followed by an init without data transfer, or simply do an init with data transfer; it will pick up the records as well as set the init pointer, in which case there is no need for a full upload.
In the case of an ODS, never do a full upload, because an ODS does not support full and delta uploads in parallel and your ODS activation would fail. In this case, instead of a full upload you have to do a full repair, which is the same thing: go to the InfoPackage scheduler >> Scheduler tab >> Repair full request >> check the box, then schedule the load as you would a full upload. If you have already done a full upload, you have to convert the full upload request to a full repair request using program RSSM_SET_REPAIR_FULL_FLAG.
Hope this helps.
Regards,
Debjani.... -
Direct Delta behaviour in a BI 7.0 Upgrade
Hi all,
We are in a BI upgrade process and a doubt has arisen regarding direct delta.
Scenario:
The BW 3.1 system will be upgraded to BI 7.0, but the R/3 4.7 system will remain the same.
There are some R/3 logistics extractors with direct delta enabled.
The upgrade best practices recommend stopping the V3 process on the R/3 side, but direct delta has no V3 process, so what are the recommendations for an upgrade when direct delta is used?
Can the R/3 direct delta remain active during the BI upgrade, or should we stop the current delta and re-initialize 'without any data' after the upgrade?
Best Regards,
Mauro.
Kedar,
In most cases the BI upgrade process takes 2 to 4 days, so it is very difficult to keep the productive R/3 system stopped for the logistics processes.
Our plan is :
1) Block the users on the R/3 side until the delta queue has been emptied.
2) Load all delta data from the delta queue from R/3 to BW.
3) Make sure all delta loads were made and the QM statuses are green on the BW side.
4) Unblock the users on the R/3 side after step 3 (the delta queue is then free for new postings).
5) Start the PREPARE phase for the BI migration.
6) Execute the upgrade.
7) Re-establish the R/3 source system.
8) Execute the delta loads from R/3 (to obtain the delta data since step 4).
Do you think this will work?
Regards,
Mauro. -
I am trying to install CS5 Master Collection (trial). I receive the Exit Code 7, and the error 1402 - Could not open key. I have followed all the suggestions provided by Adobe Support, to no avail. I have run the setup as administrator with full rights. I am installing on a desktop computer, Win 7 64-bit, AMD 64 processor, 6 gigs of RAM, 274 gigs free on a SATA HD. Please help, I am tearing my hair out. I started the day with a full head of hair and am now bald.
You may follow the suggestions mentioned in the below article.
http://helpx.adobe.com/creative-suite/kb/error-1401-1402-1404-or.html
In case you still get the same issue, give us the install.log file as already mentioned by Manish and Mylenium. -
What other datasources require building setup tables
1) Hi, I would like to find out whether there are any other DataSources in other application areas that require a build of setup tables before deltas can run.
2) What are the standard DataSources for PP, QM and WM most commonly used?
Hi,
Only LO cockpit DataSources require filling of setup tables. You can check transaction LBWE; it should give you a complete list.
For PP, most common datasources that I have come across are 2LIS_04_P_COMP and 2LIS_04_P_MATNR
For QM, 2LIS_05_QE1 and 2LIS_05_QE2, for WM which i think is the same as MM, its 2LIS_03_BF and 2LIS_03_UM.
Hope this helps,
Regards. -
hi
greetings.
when will the setup tables be deleted, and when will they be filled?
thanks
regards
sridhar
[email protected]
Hi,
You have to fill the setup tables to do the delta init. After the first load from the setup tables, all further data is loaded from the delta queue. But if you have problems and lose some data, you will have to delete the setup tables and fill them again.
I hope it helps. -
LIS11 - will filling the setup tables again conflict with delta?
Hi experts,
we are using LIS11 delta mechanism. LIS11 data is used for two different dataflows in BW.
Because of a delta problem with the R3 enhancements the delta doesn't work properly for one of these.
For this one we want to do a full load until the problem in R3 is solved.
Therefore we have to fill the setup tables again each time we want to do a full load.
My question:
Does filling the setup tables again conflict with the delta mechanism for the other flow?
Or will delta continue to work properly?
Marco
Hi,
The delta type for all 2LIS_11* DataSources is ABR: complete delta with deletion flag via delta queue.
This means there will be no conflict or data duplication in the case of a full repair; the delta is independent of the setup tables.
Delete and reload the setup tables: the delta will not be affected, and the data will not be duplicated in the cube.
regards,
Arvind.
Edited by: Arvind Tekra on May 4, 2011 9:35 AM -
Data in setup tables and queued delta
Hi Samuris'
How can I see: 1. the data in the setup tables, 2. the data in the queued delta,
3. from which table (in R/3) the data is extracted when we schedule an InfoPackage for a delta?
Do we have any transaction code to view 1 and 2?
Could anyone let me know the exact flow of data when a request is raised from BW? I mean, if it is a full load request, from which table in R/3 is the data captured? And if it is a delta load request, from which table in R/3 is the data captured?
Any help is appreciated,
Thanks in Advance!
James
Hi,
First, it depends on the DataSource; I guess you are talking about LO DataSources.
1. You can open the table itself (usually MCaax_SETUP, e.g. MC03BF0SETUP), but this will be raw data, like a tRFC LUW. Otherwise call transaction RSA3 with your LO DataSource in full mode, and you'll see the data as it will be extracted to BW.
2. Call SMQ1, double-click MCEX03 for instance, then the queue name on the next screen; you'll see the pending transactions (LUWs) to be updated into the BW delta queue. Double-clicking the TID leads you to the data in the transaction.
The corresponding table is ARFCSDATA, and it is populated when, in our example, a material document is committed.
3. It depends on your DataSource: LO DataSources take their data from ARFCSDATA; others might extract their data directly from several DB tables.
hope this helps...
Olivier.
Message was edited by:
Olivier Cora -
Question regarding deltas and filling of setup tables
Hello Friends,
We are using the Sales Order Item Datasource 2LIS_11_VAITM. We have transported it to BI Production. We have initialized the delta and the deltas are running for the past 3-4 months.
Now we had a new requirement which was getting satisfied by the Sales Order Header and Sales Order Header Status DataSources, 2LIS_11_VAHDR & 2LIS_11_VASTH respectively.
Now we want to transport both these (header & header status) DataSources to BI Production. My questions are:
1) Do we have to refill the setup tables again in R/3 for the Sales (11) application?
2) Do we have to reload the entire sales data again from scratch?
3) Is there any way to transport the 2 new DataSources to BI Production without disturbing the already-running deltas of the Sales Order Item DataSource?
Regards,
Prem.
Hi,
1) Do we have to refill the setup tables again in R/3 for the Sales (11) application?
Yes, you need to refill the setup tables: to load deltas, you need to do an init and then deltas. Otherwise, for the period between the first init load (from the data then in the setup tables) and today, you would have to do a full load before you can set up the delta. It is better and cleaner to fill the setup tables and load.
2) Do we have to reload the entire sales data again from scratch?
You need downtime anyway to load the other 2 DataSources, and you are also refilling the setup tables for application 11, so for consistency it is better to load from scratch.
3) Is there any way to transport the 2 new DataSources to BI Production without disturbing the already-running deltas of the Sales Order Item DataSource?
If you transport the new DataSources, the delta in BW won't be disturbed. But if you made changes to an existing DataSource and try to transport it to production, it will give an error, because LBWQ and RSA7 will contain data; you first need to clear them and then transport.
Thanks
Reddy