Table with Full / Delta Load information?
Is there a table I can go to where I can see whether a cube is loaded via full load vs. delta?
Thanks, I will assign points!
~Nathaniel
Hi,
Check the table ROOSPRMSC in R/3. It gives you the complete details of your Init and Delta loads.
Hope this helps.
Assign points if useful.
Regards,
Venkat
Similar Messages
-
Issues with 0PP_C03 delta loads
Hi,
I loaded the data from 2LIS_04_P_ARBPL & 0PP_WCCP to 0PP_C03. Initialization of the data load went fine. I did the delta loads to BI and observed that I am not getting any of the key figure data in the cube; all the values show as zero.
When I looked at the PSA table and at RSA3, I observed that two entries get created for every order, one with +ve values and another with -ve. So at the end, while updating to the cube, the values are nullified. Because of this I am not able to view the latest data updated via deltas.
I am not sure what settings I missed. Could someone please help me fix this issue?
Thanks & Regards,
Shanthi.
Thanks Francisco Milan and Shilpa for the links. They are very useful, but I still haven't been able to find the cause of the issue.
My data source 2LIS_04_P_ARBPL is of ABR type and the update mode for the key figures is Summation. At the data source level I am getting the values with before & after images (the same entries, one with +ve and another with -ve values), and because I am using "Summation" as my update type for the key figures, they get nullified.
Because of this I cannot get the values in my report. Could anyone please help me with this, as I am approaching the go-live date and need to address this issue immediately. Thanks for all your inputs.
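To illustrate the cancellation mechanically, here is a minimal sketch (hypothetical record values and field names, not the actual extractor output) of how a before/after image pair behaves under a Summation update:

```python
# Minimal sketch of the cancellation described above. An ABR datasource
# sends a before image (reversed sign) and an after image for every
# change; field names here are illustrative, not real extractor fields.
records = [
    {"order": "1000123", "image": "before", "qty": -50},
    {"order": "1000123", "image": "after", "qty": 50},
]

def update_with_summation(target, recs):
    # "Summation" update: every image is added to the cube key figure,
    # so an unchanged value nets out to zero across the image pair.
    for r in recs:
        target[r["order"]] = target.get(r["order"], 0) + r["qty"]
    return target

cube = update_with_summation({}, records)
print(cube["1000123"])  # 0 -> the pair nullifies when the value is unchanged
```

For a genuinely changed key figure the pair nets to the difference, which is what a Summation update into a cube relies on; an all-zero result usually points at image pairs carrying identical values.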
Regards,
Shanthi. -
I am loading data from one DSO to another. The change log of the source DSO has been deleted for any request older than 20 days.
I want to do a full load from the active table and then start delta loads from the change log. The problem is that the load picks up all the previous deltas from the change log again, instead of picking up only the deltas after the full load.
Is there a way to do an "init without data transfer", like the option we had before, which resets the deltas so that only the deltas after the full load are picked up?
Thanks. Points will be awarded.
Hello Sachin,
In BI7 there is no longer the notion of delta initialization. Just do a delta from your DSO.
Load the full data into your DSO and launch the delta; then run the deltas to populate your DSO and the corresponding deltas from it.
Hope it helps, -
Init ,full,delta load
Hi friends,
Can you tell me the difference between init, full and delta updates,
in what situations we use each of them, and whether there are any other update methods besides these?
Manohar,
init - used to initialize the data load; it will be followed by delta loads thereafter
delta - incremental data loads; can be done only after an init
full - you load all the data at once and do not wish to init. You cannot run a delta load after a full load; you will always have to init first and then do the delta loads.
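The three modes can be sketched as a toy simulation (plain Python lists stand in for the source tables and delta queue; function names are illustrative, not SAP APIs):

```python
# Toy sketch of the three update modes: full, init, delta.
source = [("doc1", 100), ("doc2", 200)]
delta_queue = []
init_done = False

def full_load():
    # Full: everything from the source, no delta pointer kept.
    return list(source)

def init_load():
    # Init: like a full load, but it also arms the delta queue.
    global init_done
    init_done = True
    delta_queue.clear()
    return list(source)

def post_change(doc):
    # Changes made after the init accumulate in the delta queue.
    source.append(doc)
    if init_done:
        delta_queue.append(doc)

def delta_load():
    # Delta: only what accumulated since the last init/delta.
    if not init_done:
        raise RuntimeError("delta requires an init first")
    batch = list(delta_queue)
    delta_queue.clear()
    return batch

init_load()
post_change(("doc3", 300))
print(delta_load())  # [('doc3', 300)]
```

The sketch also shows why a delta after a plain full load fails: without the init, nothing was ever queued.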
please refer back to previous postings for more information on the same.
Arun -
Populate setup table impact on delta loading
Hi Expert,
For an ECC standard data source supporting delta mode, taking 2LIS_03_BF as an example: DSO A is currently loading data from 2LIS_03_BF, but there is a new requirement to load historical data from this data source. So I want to populate the setup table of 2LIS_03_BF and then make a full upload to another DSO B. If I do this, is there any impact on the delta loading to DSO A?
In my opinion, populating the setup table has no impact on the delta loading to DSO A, because delta loading is related to the delta queue and full upload is related to the setup table.
Thanks,
Dragon
Hi Draco,
Are you in 3.x or 7.0 version?
This scenario needs to be addressed differently based on the version.
If it is 3.x, then you need to create a full repair request (overwrite mode in DSO A) and load the data into both targets,
or
create a full update InfoPackage, remove the tick from the data target tab for DSO A, and load to DSO B.
If 7.x:
Create a full upload InfoPackage, extract the data into the PSA, and use a new DTP from the PSA to DSO B.
Create another DTP from the PSA to DSO A, do the init without data transfer, and run regular deltas. If you don't do this, the DTP for DSO A will pick up the full update request from the PSA in the delta run.
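A rough sketch of why the DTP to DSO A needs the "init without data transfer" (simplified: the delta pointer is modelled as a PSA request index; these are not real DTP APIs):

```python
# Sketch: a delta DTP tracks which PSA requests it has already moved.
psa = []  # list of (request_id, payload)

def load_to_psa(payload):
    psa.append((len(psa) + 1, payload))

class DeltaDTP:
    def __init__(self):
        self.pointer = 0  # number of PSA requests already transferred

    def init_without_data_transfer(self):
        # Mark everything currently in the PSA as already seen.
        self.pointer = len(psa)

    def delta(self):
        batch = psa[self.pointer:]
        self.pointer = len(psa)
        return batch

dtp_a = DeltaDTP()
load_to_psa("historic full upload meant for DSO B")
dtp_a.init_without_data_transfer()   # skip the full request for DSO A
load_to_psa("regular delta records")
print([p for _, p in dtp_a.delta()]) # ['regular delta records']
```

Without the init step, the first `delta()` call would also return the historic full request, which is exactly the duplication warned about above.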
Gurus: what are your comments on this approach?
thanks
Srikanth -
Hi,
I'm trying to use the example load_cell_null_off_shuntcal.vi with a full-bridge load cell (Honeywell Model 31, unamplified). I am using LabVIEW 8.6, a cDAQ-9172 and an NI 9237. The load cell is connected to pins 2, 3, 6 and 7.
The inputs on the VI front panel are: internal excitation 10 V; mV/V 2.1492 (calibration sheet); max weight 10 lbs; bridge resistance 350 ohms (Honeywell specs); 9237 internal shunt resistance 100 kohms; shunt location R4 (default setting). I have selected "Do offset null" and "Do shunt cal".
This is the error I receive:
Error -200077 occurred at DAQmx Perform Shunt Calibration (Bridge).vi:1
Possible reason(s):
Measurements: Requested value is not a supported value for this property.
Property: AI.Bridge.ShuntCal.GainAdjust
You Have Requested: -61.980405e3
Valid Values Begin with: 500.0e-3
Valid Values End with: 1.500000
If the "Do shunt cal"
green button is not selected, there is no error. I understand that the Gain
adjust value should be approx 1, whereas the one I get is much larger. The subVI DAQmx PerformShuntCalibration
(bridge).vi contains a "Call library function node" which I don't
know how to interrogate.
Has anyone else had experience with this error? Do you have any advice on:
1) How to "see" the calculations being performed inside the "call library function node"?
2) What the correct shunt element location for a full-bridge load cell is? (Changing this location does not eliminate the error, but I can't find this info.)
3) Anything I may be doing wrong with my inputs to cause this error?
Thanks,
Claire.
Hi Claire,
You have to physically connect the SC terminals to one arm of the bridge (normally R3). The terminal is not provided for connecting external resistors.
See example
C:\Program Files\National Instruments\LabVIEW 8.6\examples\DAQmx\Analog In\Measure Strain.llb\Cont Acq Strain Samples (with Calibration) - NI 9237.vi
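As a back-of-the-envelope illustration of why the gain adjust should come out near 1 (simplified bridge math; the exact computation inside DAQmx's call library function node is not public, and the 0.98 "measured" factor below is purely hypothetical):

```python
# Simplified shunt-calibration sketch: compare the theoretical bridge
# unbalance from shunting one arm against a (hypothetical) measured
# value; their ratio is the gain-adjust factor, expected near 1.0.

def bridge_output_ratio(r_arm, r_shunt):
    # Approximate bridge unbalance (Vout/Vex) caused by shunting one
    # arm of an otherwise balanced full bridge with r_shunt.
    r_eff = r_arm * r_shunt / (r_arm + r_shunt)
    return r_eff / (r_eff + r_arm) - 0.5

theoretical = bridge_output_ratio(350.0, 100e3)   # expected reading
measured = theoretical * 0.98                     # hypothetical measurement
gain_adjust = theoretical / measured              # should land near 1.0
print(round(gain_adjust, 3))                      # 1.02
```

A gain adjust of -61980 means the measured shunted reading was tiny and of the wrong sign relative to the expected one, which is consistent with the shunt not actually being connected across a bridge arm.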
"A VI inside a Class is worth hundreds in the bush"
That guy is a tiger, I tell you!!! -
Handling Init/Full/Delta-Loading for a generic DS based on function-module
Hi Experts,
we need to build a generic datasource based on a function module, and we would like to
implement slightly different logic in the function module depending on the data staging mode that was selected in the InfoPackage.
What we are missing is a way to distinguish (in the datasource function module) whether an init, delta or full load was requested.
Could you please provide some information/hints about that topic?
Thanks in advance,
Marco
Hi Anjum,
we found an alternative that seems to be closer to SAP's idea of how it should work:
We check the status of the init table (RSA7); depending on the result:
a) Init exists --> Delta or Full
b) Init does not exist --> Init or Full
Then we check whether the system itself passes a timestamp together with the data request.
1) If it does --> Delta/Init
2) If it doesn't provide a timestamp --> Full
Summary:
a1 --> Delta
a2 --> Full
b1 --> Init
b2 --> Full -
Load data from flat file to target table with scheduling in loader
Hi All,
I have requirement as follows.
I need to load data into the target table every Saturday. My source file consists of data for several states. Every week I have to load one particular state's data into the target table.
If in the first week I loaded AP data, then in the second week, on Saturday, Karnataka, etc.
Can you please provide code, and explain how I can schedule the data load for every Saturday with different state column values automatically?
Thanks & Regards,
Sekhar
The best solution would be:
get the flat file to the Database server
define an External Table pointing to that flat file
insert into destination table(s) as select * from external_table
Loading only a single state's data each Saturday might mean trouble, but assuming there are valid reasons to do so, you could:
create a job with a p_state parameter executing insert into destination table(s) as select * from external_table where state = p_state
create a Scheduler chain where each member runs on the next Saturday, executing the same job with a different p_state parameter
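The weekly rotation of p_state values can be sketched as follows (the state list and reference date are illustrative; the returned value would feed the Scheduler job's parameter):

```python
# Sketch: pick which state to load on a given Saturday from a fixed
# rotation list, counting whole weeks from a reference Saturday.
from datetime import date

STATES = ["AP", "Karnataka", "Tamil Nadu", "Kerala"]
ROTATION_START = date(2024, 1, 6)  # an arbitrary reference Saturday

def state_for_saturday(run_date):
    if run_date.weekday() != 5:  # Monday=0 ... Saturday=5
        raise ValueError("loads are scheduled for Saturdays only")
    weeks = (run_date - ROTATION_START).days // 7
    return STATES[weeks % len(STATES)]

print(state_for_saturday(date(2024, 1, 13)))  # Karnataka (second Saturday)
```

The same modular-arithmetic idea can be expressed directly in the Scheduler job's PL/SQL if you prefer to keep the logic in the database.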
Managing Tables
Oracle Scheduler Concepts
Regards
Etbin -
SRM issue with Invoice delta loads
I am loading data into DSO 0SRIV_D3 (Invoices) from SRM, where I am getting duplicate data records updated into the DSO. The extractor for this DSO, 0SRM_TD_IV, is of type AIMD, which should deliver only after-image delta records on changes.
Can someone advise what needs to be done to avoid data record duplication ?
In the data source settings, 'Delivery of Duplicate Data Records' is set to "Undefined". I have tried to change the setting to "None" but it is greyed out and cannot be changed in BI. Any help?
Thanks.
Ramesh
I have been wondering about the same thing. Some information would be greatly appreciated. Anyone?
Thanks and best regards
debru -
Repeating Table with Full Document URL in column
All,
SP 2010 + InfoPath 2010
I can easily connect to a library or list on my SP library from an InfoPath form.
I can pull the SP library Name column into a repeating table, yet the Name field doesn't appear (thus you don't get the full path of a document).
I want to be able to open a document up by clicking a url from the InfoPath repeating table.
I don't want to add another column to my library or list, as I want to replicate this functionality across multiple libraries/lists. That would mean writing a second workflow for each library and/or updating all my existing workflows.
Thus how can I retrieve the full path of a document and have it in a repeating table and make it clickable?
Thanks
W
Hi W,
How about using the Document ID as a workaround? The Document ID for a document is unique and can be used to access the document.
Add the document ID to the data connection and display it as a column; it should display as a hyperlink in the repeating table.
For reference to enable and configure document id in SharePoint Server 2010:
http://office.microsoft.com/en-in/sharepoint-server-help/enable-and-configure-unique-document-ids-HA101790471.aspx
Regards,
Rebecca Tu
TechNet Community Support -
Loading through Process Chains 2 Delta Loads and 1 Full Load (ODS to Cube).
Dear All,
I am loading through process chains with 2 delta loads and 1 full load from ODS to cube in 3.5. I am in the development process.
My loading process is:
Start - 2 Delta Loads - 1 Full Load - ODS Activation - Delete Index - Further Update - Delete overlapping requests from infocube - Creating Index.
My question is:
When I load for the first time I get some data, and for the next load I should get zero records, as there is no new data, but I am getting the same number of records again. Maybe it is taking data from the full upload, I guess. Please guide me.
Krishna.
Hi,
The reason you are getting the same number of records is, as you said, the full load: after running the deltas you got all the changed records, but after those two deltas you again have a full load step, which picks up the whole of the data all over again.
The reason you are getting the same number of records is:
1> You are running the chain for the first time.
2> You ran these delta InfoPackages for the first time. While initializing these deltas you might have chosen "Initialization without data transfer", so when you ran them for the first time they picked up the whole of the data. Running a full load after that will also pick up the same number of records.
If the two deltas you mention run one after another, then I'd say you got the data because of some changes. Since you are loading from a single ODS to a cube, both your delta and your full load will pick up the same data "for the first time" during data marting, as they have the same data source (the ODS).
Hopefully this serves your purpose.
Thanks & Regards
Vaibhave Sharma
Edited by: Vaibhave Sharma on Sep 3, 2008 10:28 PM -
Slow Query Using index. Fast with full table Scan.
Hi;
(Thanks for the links)
Here's my question correctly formated.
The query:
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
This runs in 32 seconds!
Same query, but with one extra where clause:
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD') --- ADDED HERE
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
This runs in 400 seconds.
It should return data from one table, given the conditions.
The version of the database is Oracle9i Release 9.2.0.7.0
These are the parameters relevant to the optimizer:
SQL> show parameter optimizer
NAME TYPE VALUE
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 99
optimizer_index_cost_adj integer 10
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE
SQL>
Here is the output of EXPLAIN PLAN for the first, fast query:
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes | Cost |
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
|* 2 | TABLE ACCESS FULL | EHCONS | | | |
Predicate Information (identified by operation id):
PLAN_TABLE_OUTPUT
2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE"
IS NULL AND "EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy
-mm-dd
hh24:mi:ss') AND "EC"."TYPE"='BAR')
Note: rule based optimization
Here is the output of EXPLAIN PLAN for the slow query:
PLAN_TABLE_OUTPUT
| |
| 1 | SORT AGGREGATE | | |
| |
|* 2 | TABLE ACCESS BY INDEX ROWID| ehgeoconstru | |
| |
|* 3 | INDEX RANGE SCAN | ehgeoconstru_VSN | |
| |
PLAN_TABLE_OUTPUT
Predicate Information (identified by operation id):
2 - filter(SUBSTR("EC"."strgfd",1,8)<>'[CIMText' AND "EC"."DEATHDATE" IS
NULL AND "EC"."TYPE"='BAR')
PLAN_TABLE_OUTPUT
3 - access("EC"."CONTEXTVERSION"='REALWORLD' AND "EC"."BIRTHDATE"<=TO_DATE('2
009-10-06
11:52:12', 'yyyy-mm-dd hh24:mi:ss'))
filter("EC"."BIRTHDATE"<=TO_DATE('2009-10-06 11:52:12', 'yyyy-mm-dd hh24:
mi:ss'))
Note: rule based optimization
The TKPROF output for this slow statement is:
TKPROF: Release 9.2.0.7.0 - Production on Tue Nov 17 14:46:32 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: gen_ora_3120.trc
Sort options: prsela exeela fchela
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
SELECT count(1)
from ehgeoconstru ec
where ec.TYPE='BAR'
and ( (ec.contextVersion = 'REALWORLD')
AND ( ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') ) )
and deathdate is null
and substr(ec.strgfd, 1, length('[CIMText')) <> '[CIMText'
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 538.12 162221 1355323 0 1
total 4 0.00 538.12 162221 1355323 0 1
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 153
Rows Row Source Operation
1 SORT AGGREGATE
27747 TABLE ACCESS BY INDEX ROWID OBJ#(73959)
2134955 INDEX RANGE SCAN OBJ#(73962) (object id 73962)
alter session set sql_trace=true
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.02 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 1 0.00 0.02 0 0 0 0
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 153
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 0.00 0.02 0 0 0 0
Fetch 2 0.00 538.12 162221 1355323 0 1
total 5 0.00 538.15 162221 1355323 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
total 0 0.00 0.00 0 0 0 0
Misses in library cache during parse: 0
2 user SQL statements in session.
0 internal SQL statements in session.
2 SQL statements in session.
Trace file: gen_ora_3120.trc
Trace file compatibility: 9.02.00
Sort options: prsela exeela fchela
2 sessions in tracefile.
2 user SQL statements in trace file.
0 internal SQL statements in trace file.
2 SQL statements in trace file.
2 unique SQL statements in trace file.
94 lines in trace file.
Edited by: PauloSMO on 17/Nov/2009 4:21
Edited by: PauloSMO on 17/Nov/2009 7:07
Edited by: PauloSMO on 17/Nov/2009 7:38 - Changed title to be more correct.
Although your optimizer_mode is choose, it appears that there are no statistics gathered on ehgeoconstru. The lack of cost estimates and estimated row counts from each step of the plan, and the "Note: rule based optimization" at the end of both plans, would tend to confirm this.
Optimizer_mode choose means that if statistics are gathered then it will use the CBO, but if no statistics are present in any of the tables in the query, then the Rule Based Optimizer will be used. The RBO tends to be index happy at the best of times. I'm guessing that the index ehgeoconstru_VSN has contextversion as the leading column and also includes birthdate.
You can either gather statistics on the table (if all of the other tables have statistics) using dbms_stats.gather_table_stats, or hint the query to use a full scan instead of the index. Another alternative would be to apply a function or operation against the contextversion to preclude the use of the index. something like this:
SELECT COUNT(*)
FROM ehgeoconstru ec
WHERE ec.type = 'BAR' and
ec.contextVersion||'' = 'REALWORLD' and
ec.birthDate <= TO_DATE('2009-10-06 11:52:12', 'YYYY-MM-DD HH24:MI:SS') and
deathdate is null and
SUBSTR(ec.strgfd, 1, LENGTH('[CIMText')) <> '[CIMText'
or perhaps UPPER(ec.contextVersion) if that would not change the rows returned.
John -
Delta Load on DSO and Infocube
Hi All,
I would like to know the procedure for the scenario mentioned below.
Example Scenario:
I have created a DSO with 10 characteristics and 3 key figures. I have 10 customers whose transactions happen every day. A full upload to the DSO was done on 7th October 09. How can I load their changing data from 8th Oct to date into the DSO? And what would the situation be for the same scenario in the case of an InfoCube?
Step by step guidance will be a great help
Thanks in advance
Liquid
Hi,
The key fields play an important role at DSO level alone, as you get the power of overwriting records. Once that's done at the DSO level, you can simply carry on with a delta load into the cube from your change log table, and you don't have to worry about anything. Just to add: all the characteristics in the cube act as key fields, so you will get a new record for each different combination of values; only the key figures will sum up for a set of identical characteristics.
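The overwrite-then-delta flow can be sketched as a toy simulation (plain dicts and lists stand in for the real DSO tables; nothing here is an actual SAP API):

```python
# Toy sketch: overwrite in the DSO, then delta the change log images
# into the cube, where they are summed.
dso_active = {}   # key fields -> key figure
change_log = []   # images produced by DSO activation

def load_to_dso(records):
    for key, value in records:
        before = dso_active.get(key)
        if before is not None:
            change_log.append((key, -before))  # before image (reversed)
        change_log.append((key, value))        # after image
        dso_active[key] = value

def delta_to_cube(cube):
    # The cube sums the images, so an overwrite nets to the new value.
    for key, value in change_log:
        cube[key] = cube.get(key, 0) + value
    change_log.clear()
    return cube

load_to_dso([("C1", 100)])      # initial full upload on 7th October
load_to_dso([("C1", 250)])      # the customer's value changed later
print(delta_to_cube({})["C1"])  # 250
```

This is why the change log, not the active table, feeds the cube delta: the before/after image pairs carry exactly the net change the cube needs.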
Thanks,
Arminder -
How should I reload the delta load?
Dear all,
An ODS includes sales invoice information with a daily delta load. The loading is fine, but one key figure got a different logic from April onwards; e.g., for the same order this KF has the correct value after April but an incorrect value before April. So I am going to reload the data older than April, but all the old loads are fine and still exist, so what should I do?
Thanks
Hi,
Thanks for your reply. There is no date for selection. So I found some data in the ODS for a specific company code and bill type (I got 16 records), then I created an InfoPackage with the selection set to the same company code and bill type, set it to full load and ran it immediately, but 0 records were loaded. I am sure the corresponding data exists in R/3 because I checked it using VA03. So what's the problem?
Thanks in advance -
Job terminated in source system - Request set to red during delta load
Hi All,
There is an issue with the delta load (the full load works fine); it results in the job termination below, with the following log.
Job started
Step 001 started (program SBIE0001, variant &0000000128086, user ID ALEREMOTE1)
Asynchronous transmission of info IDoc 2 in task 0001 (0 parallel tasks)
DATASOURCE = 2LIS_13_VDITM
RLOGSYS = BWP_010
REQUNR = REQU_DA9R6Y5VOREXMP21CICMLZUZU
UPDMODE = R
LANGUAGES = *
Current Values for Selected Profile Parameters *
abap/heap_area_nondia......... 0 *
abap/heap_area_total.......... 17173577728 *
abap/heaplimit................ 40000000 *
zcsa/installed_languages...... DE *
zcsa/system_language.......... E *
ztta/max_memreq_MB............ 2047 *
ztta/roll_area................ 3000000 *
ztta/roll_extension........... 2000000000 *
Object requested is currently locked by user ALEREMOTE1
Job cancelled after system exception ERROR_MESSAGE
Please help me to resolve this issue.
Thanks,
Madhu
Hello,
As per the screenshot provided, the data source belongs to LO, so first check in RSA7 whether the data source has entries there. If you cannot find the data source in RSA7, then you need to set up the update method based on the requirement:
1) Direct Delta
2) Queued Delta
3) Unserialised V3 Update
Once that is done, delete the previous initialisation, perform the re-init without data transfer again, and then schedule the delta data loads.
Hope this may Help you.
Thanks
PT