Full vs Delta processing
Hello, can someone please help me understand the difference between the two, and whether BPC MS 7 has issues with delta processing? Thanks.
Hi,
When we process a dimension, BPC decides how to process it depending on the nature of the updates made to the dimension.
Full processing: whenever a new property is added, a member is inserted in the middle of the worksheet, or the hierarchy is changed, BPC does a full process.
Incremental processing: whenever a member is added that does not affect the hierarchy, a dimension formula is modified, or a property value is added, BPC does an incremental process.
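The rules above can be sketched as a small decision function (a hypothetical Python illustration; the change-type names are invented, and BPC's actual implementation is internal):

```python
# Hypothetical sketch of how BPC chooses the processing mode for a
# dimension, based on the rules described above. Not BPC's actual code.

def processing_mode(changes):
    """Return 'FULL' or 'INCREMENTAL' for a set of dimension changes."""
    # Structural changes force a full process.
    full_triggers = {"property_added", "member_inserted_mid_sheet", "hierarchy_changed"}
    # Non-structural changes only need incremental processing.
    incremental_ok = {"member_appended", "formula_modified", "property_value_changed"}
    if any(c in full_triggers for c in changes):
        return "FULL"
    if all(c in incremental_ok for c in changes):
        return "INCREMENTAL"
    return "FULL"  # unknown change types: process fully to be safe

print(processing_mode({"member_appended"}))    # INCREMENTAL
print(processing_mode({"hierarchy_changed"}))  # FULL
```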
Hope this helps.
Similar Messages
-
ODS Delta Process, DataSource and Delta Queue
Hi to all,
loading data from an SAP source system into SAP BW 7.0 via a generic 3.x DataSource causes problems.
Here is a short description:
The data flow is from a source table using a generic extractor into a target ODS in full update mode.
Update rules should move data from table structure into ODS structure in this way:
Source table structure
CustKey1 Key2
13386 C23
13386 B14
13387 A13
13387 E25
ODS structure
CustKey1 col1 col2
13387 A13 E25
This works pretty well - as long as all records with the same CustKey1 are transferred in the same data package. The Data Browser (SE16) shows the situation in the ODS "New data" view (data is not activated):
Request Data_packet_number Data_record_number CustKey1
112 000003 1.061 0000013386
112 000004 1 0000013386
112 000004 2 0000013387
There are two records for CustKey1 0000013386 with
data record 1.061 in data packet 000003 and
data record 1 in data packet 000004.
The above constellation is the cause of errors in the ODS delta queue and in subsequent data processing.
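The package-split problem can be reproduced in a few lines (an illustrative Python sketch with made-up data; the real logic lives in the update rules):

```python
# Sketch of the update-rule pivot: Key2 values for the same CustKey1 are
# spread across ODS columns col1/col2. When records for one CustKey1 are
# split across data packages, each package sees only part of the picture.

def pivot(package):
    """Pivot (CustKey1, Key2) rows into {CustKey1: [col1, col2, ...]}."""
    out = {}
    for cust, key2 in package:
        out.setdefault(cust, []).append(key2)
    return out

# All records for 13387 arrive in one package: both columns are filled.
print(pivot([("13387", "A13"), ("13387", "E25")]))  # {'13387': ['A13', 'E25']}

# Records for 13386 are split across two packages: each package produces
# an incomplete row, and the second collides with the first in the ODS.
pkg3 = pivot([("13386", "C23")])
pkg4 = pivot([("13386", "B14")])
print(pkg3, pkg4)  # {'13386': ['C23']} {'13386': ['B14']}
```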
I think one solution may be to change the delta process of the DataSource.
The properties are:
- Record type: Full, delta, before, after images
- Delta type: Delta from delta queue or from extractor
- Serialization: No serialization, serialization of request, serialization of data packages
But how can I change these settings? Transactions RSA2, RSO2, RSO6 and RSO8 don't allow this.
What is the right delta process to use?
I have tried changing the delta process by generating several 3.x DataSources with the following delta processes (see table RODELTAM):
- " " (Full-upload)
- AIE
- ADD
Unfortunately with no effect.
Who can help?
Regards
Josef
Edited by: Josef L. on Mar 20, 2009 7:44 PM
Hi Venkat,
Whenever you load a delta from the ODS to the cube, whatever data has changed in the ODS since the last delta will be updated. Hence, in your case, both req1 and req2 will be loaded to the cube.
Assign Points if useful
Ramesh -
SAP BW Purchasing - Scheduling Agreement delta processing
Hi,
I have a question about Scheduling Agreements coming from ECC to BW with delta processing.
When a goods receipt happens for a Scheduling Agreement item, it registers process key 2.
However, it seems that when a second goods receipt happens for that same scheduling agreement item, the old goods receipt is updated with process key 4, and the new receipt takes process key 2 when we aggregate at the InfoCube level. In the first-level write-optimized DSO, the process keys are intact for every line item and schedule line (all goods receipts with process key 2 are there).
Can somebody confirm this logic?
We are having issues because we need all of that goods receipt history stored in our InfoCubes, and we do not know how to resolve. Please help! Points will be assigned...
Thanks very much,
Courtney
Are you saying that the process key is 2 for both goods receipts in the write-optimized DSO, but in the cube the first goods receipt has process key 4 once the second goods receipt is also there?
If yes, it seems that the problem is with the InfoObject for the process key - is it a key figure or a characteristic?
It should be a characteristic, so that the value is overwritten rather than added.
Are you doing a full or a delta load from the DataSource to the DSO, and from the DSO to the cube?
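The overwrite-versus-addition distinction matters here: a minimal sketch, assuming the process key is modelled as a field that is updated either by overwrite or by addition (illustrative Python, invented keys):

```python
# Sketch of DSO update behaviour for a non-key field: 'overwrite' keeps
# only the latest value, 'addition' sums values across loads.
# Characteristics are always overwritten; key figures can do either,
# which is why a process key modelled as an additive key figure misbehaves.

def update_dso(dso, key, value, mode):
    """Apply one delta record to a DSO for a given non-key field."""
    if mode == "overwrite":
        dso[key] = value
    else:  # "addition"
        dso[key] = dso.get(key, 0) + value
    return dso

dso = {}
update_dso(dso, ("sched_agr_1", "item_10"), 2, "overwrite")
update_dso(dso, ("sched_agr_1", "item_10"), 2, "overwrite")
print(dso)  # {('sched_agr_1', 'item_10'): 2} - second receipt overwrites

dso2 = {}
update_dso(dso2, ("sched_agr_1", "item_10"), 2, "addition")
update_dso(dso2, ("sched_agr_1", "item_10"), 2, "addition")
print(dso2)  # {('sched_agr_1', 'item_10'): 4} - values are summed
```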
Regards,
Gaurav -
Error in Full and Delta CPACache refresh.
Hi Experts,
In our production environent we have a SOAP -> XI - > PROXY scenario [Synchronous].
The Consumer of the WebService has reported an error:
500_Internal_Server_Error = SOAP:Server_Server_Error_XIAdapter_ADAPTER.JAVA_EXC
When I checked, there were no messages in SXMB_MONI, but when I checked the Cache Refresh History I found
that there were errors in both the Full and the Delta Cache Refresh. A restart of the XI (production) server resolved the issue.
The error is as shown below:
<?xml version="1.0" encoding="UTF-8" ?>
- <CacheRefreshError>
<EngineType>AE</EngineType>
<EngineName>af.pxi.su0956</EngineName>
<RefreshMode>F</RefreshMode>
- <GlobalError>
<Message>Couldn't parse Configuration Data cache update XML string from Directory.</Message>
org.xml.sax.SAXException: JavaErrors Tag found in cache update XML. at com.sap.aii.af.service.cpa.impl.cache.directory.DirectoryDataSAXHandler.startElement(DirectoryDataSAXHandler.java:157) at com.sap.engine.lib.xml.parser.handlers.SAXDocHandler.startElementEnd(SAXDocHandler.java(Compiled Code)) at com.sap.engine.lib.xml.parser.XMLParser.scanElement(XMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.XMLParser.scanContent(XMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.XMLParser.scanElement(XMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.XMLParser.scanDocument(XMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.XMLParser.parse0(XMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.AbstractXMLParser.parseAndCatchException(AbstractXMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.AbstractXMLParser.parse(AbstractXMLParser.java(Inlined Compiled Code)) at com.sap.engine.lib.xml.parser.AbstractXMLParser.parse(AbstractXMLParser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.Parser.parse_DTDValidation(Parser.java(Inlined Compiled Code)) at com.sap.engine.lib.xml.parser.Parser.parse(Parser.java(Compiled Code)) at com.sap.engine.lib.xml.parser.SAXParser.parse(SAXParser.java(Compiled Code)) at javax.xml.parsers.SAXParser.parse(Unknown Source) at javax.xml.parsers.SAXParser.parse(Unknown Source) at com.sap.aii.af.service.cpa.impl.cache.directory.DirectoryDataParser.updateCentralCache(DirectoryDataParser.java:56) at com.sap.aii.af.service.cpa.impl.cache.CacheManager.updateCacheWithDirectoryData(CacheManager.java:871) at com.sap.aii.af.service.cpa.impl.cache.CacheManager.performCacheUpdate(CacheManager.java:640) at com.sap.aii.af.service.cpa.impl.servlet.CacheRefresh.process(CacheRefresh.java:104) at com.sap.aii.af.service.cpa.impl.servlet.CacheRefresh.doGet(CacheRefresh.java:53) at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled Code)) at 
javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled Code)) at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.runServlet(HttpHandlerImpl.java(Compiled Code)) at com.sap.engine.services.servlets_jsp.server.HttpHandlerImpl.handleRequest(HttpHandlerImpl.java(Compiled Code)) at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java(Inlined Compiled Code)) at com.sap.engine.services.httpserver.server.RequestAnalizer.startServlet(RequestAnalizer.java(Compiled Code)) at com.sap.engine.services.httpserver.server.RequestAnalizer.invokeWebContainer(RequestAnalizer.java(Compiled Code)) at com.sap.engine.services.httpserver.server.RequestAnalizer.handle(RequestAnalizer.java(Compiled Code)) at com.sap.engine.services.httpserver.server.Client.handle(Client.java(Inlined Compiled Code)) at com.sap.engine.services.httpserver.server.Processor.request(Processor.java(Compiled Code)) at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java(Compiled Code)) at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java(Compiled Code)) at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java(Compiled Code)) at java.security.AccessController.doPrivileged1(Native Method) at java.security.AccessController.doPrivileged(AccessController.java(Compiled Code)) at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java(Compiled Code)) at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java(Compiled Code))
Can anyone tell me what the cause of this error is, and how a restart of the XI server resolved the issue?
Thanks,
Guys,
I have deployed the components below, and after this the repository could not be started; it gives the error below.
1000.7.11.10.18.20130529165100
SAP AG
sap.com
SAP_XIAF
1000.7.11.10.11.20130529165100
SAP AG
sap.com
SAP_XIESR
1000.7.11.10.15.20130529165100
SAP AG
sap.com
SAP_XITOOL
We have the same issue, but the restart did not solve it.
Can anybody help? We are not able to start the XI Repository.
Our PI components are as below
com.sap.engine.services.ejb3.runtime.impl.refmatcher.EJBResolvingException: Cannot start application sap.com/com.sap.xi.repository; nested exception is: java.rmi.RemoteException: [ERROR CODE DPL.DS.6125] Error occurred while starting application locally and wait.; nested exception is:
com.sap.engine.services.deploy.container.DeploymentException: Cannot activate endpoint for message-driven bean sap.com/com.sap.xi.repository*xml|com.sap.xpi.ibrep.server.jar*xml|CacheRefreshMDB
at com.sap.engine.services.ejb3.runtime.impl.DefaultContainerRepository.startApp(DefaultContainerRepository.java:315)
at com.sap.engine.services.ejb3.runtime.impl.DefaultContainerRepository.getEnterpriseBeanContainer(DefaultContainerRepository.java:106)
at com.sap.engine.services.ejb3.runtime.impl.DefaultRemoteObjectFactory.resolveReference(DefaultRemoteObjectFactory.java:55)
1000.7.11.10.6.20121015232500
SAP AG
sap.com
MESSAGING
Regards
Omkar -
Hi BW Experts,
For AP (Accounts Payable) and AR (Accounts Receivable) we can run a delta process to pick up delta records. How?
Could anyone please let me know?
Thanks
FI extractors work on an after-image delta. Delta records are selected directly from the R/3 tables using a time stamp mechanism and transferred directly to BW; nothing needs to be written to the BW delta queue. Please go through the following regarding the FI DataSources 0FI_GL_4, 0FI_AP_4 and 0FI_AR_4.

0FI_GL_4 (G/L Accounts: line items): no redundant fields are transferred into BW - only fields from the FI document tables (BKPF/BSEG) that are relevant to General Ledger Accounting (compare table BSIS), and no customer- or vendor-related fields.

0FI_AP_4 (AP: line items) and 0FI_AR_4 (AR: line items): vendor/customer related information (e.g. payment/dunning data).

Coupled, consistent snapshot of FI data in BW: the G/L account extraction determines the selection criteria (company code, fiscal period) and the upper time limit of all extracted FI line items. AP and AR extraction: no further selection criteria are necessary or possible. Uncoupled extraction is possible with PlugIn PI 2002.2, see OSS note 551044.

0FI_GL_4, 0FI_AP_4 and 0FI_AR_4 use an after-image delta. Delta type "Extractor": delta records are selected directly from the R/3 tables using a time stamp mechanism and transferred directly to BW; no record is written to the BW delta queue. After-image delta: FI line items are transferred from the source system in their final state (= after image). This delta method is not suitable for direct InfoCube update; an ODS object is obligatory to determine the delta for the InfoCube update. The delta queue and the BW scheduler ensure correct serialization of the records (e.g. inserts must not overtake changes) and distribution of delta records to multiple BW systems. The selection criteria of the delta-init upload are used to couple the DataSources logically.

Time mechanism: new FI documents are those posted in R/3 since the last line-item extraction; selection is based on the field BKPF-CPUDT.
Hierarchy Extractor for Balance Sheet & P&L Structure
Technical name: 0GLACCEXT_T011_HIER
Technical data: type of DataSource: hierarchies; application component: FI-IO.
Use: the extractor is used for loading hierarchies (balance sheet / P&L structures) for the characteristic (InfoObject) 0GLACCEXT.
Fields of origin in the extract structure (field in extract structure / field in table of origin / table of origin):
.INCLUDE / ROSHIENODE / RFDT (CLUSTD)
FIELDNM / RSFIELDNM / RFDT (CLUSTD)
GLACCEXT / GLACCEXT / RFDT (CLUSTD)
RSIGN / RR_RSIGN / RFDT (CLUSTD)
PLUMI / RR_PLUMI / RFDT (CLUSTD)
Features of the extractor: extractor FBIW_HIERARCHY_TRANSFER_TO; extraction structure DTFIGL_HIERNODE_1.

Financial Accounting: Line Item Extraction Procedure
General information about the line item extraction procedure: BW Release 3.1 makes consistent data extraction in the delta method possible for line items in General Ledger Accounting (FI-GL), selected subsidiary ledgers (Accounts Receivable FI-AR and Accounts Payable FI-AP) and tax reporting. The extraction procedure delivered with BW Release 2.0B, based on DataSources 0FI_AR_3 and 0FI_AP_3, can be replaced; this is described in note 0410797. The decisive advantage of choosing the R/3 line item tables as the data source is that extra fields can be transferred to BW that were not available with the transaction figures from table GLT0, the previous R/3 data source in General Ledger Accounting (FI-GL). This allows more extensive and flexible analysis in BW.
To enable you to ensure consistent delta data, four new InfoSources are provided in the OLTP system (with the corresponding DataSources and extractors in the SAP R/3 system):
Application / InfoSource / Description
FI-GL / 0FI_GL_4 / General Ledger: Line Items
FI-AP / 0FI_AP_4 / Accounts Payable: Line Items (extraction linked to 0FI_GL_4)
FI-AR / 0FI_AR_4 / Accounts Receivable: Line Items (extraction linked to 0FI_GL_4)
FI-TX / 0FI_TAX_4 / General Ledger: Data for Tax on Sales/Purchases
For the General Ledger, selection is made from tables BKPF and BSEG, while selection for the subsidiary accounts is made from tables BSID/BSAD (Accounts Receivable) and BSIK/BSAK (Accounts Payable). InfoSource 0FI_GL_4 transfers only those fields that are relevant for General Ledger Accounting from the Financial Accounting document (tables BKPF and BSEG) to the BW system. The consistent recording of data from General Ledger Accounting and subledger accounting is provided by means of coupled delta extraction using the time stamp procedure. General Ledger Accounting is the main process in delta mode and provides the subsidiary ledger extraction with time stamp information (time intervals of previously selected general ledger line items). This time stamp information can also be used for a loading history: it shows which line items have previously been extracted from the SAP R/3 system.

Delta method: delta extraction enables you to load into the BW system only data that has been added or changed since the last extraction event. Data that is already loaded and has not changed is retained, and does not need to be deleted before a new upload. This procedure improves performance compared with periodic extraction of the overall dataset. Financial Accounting line items are read by the extractors directly from the tables in the SAP R/3 system. A time stamp on the line items serves to identify the status of the delta data.
Time stamp intervals that have already been read are stored in a time stamp table. The delta dataset is transferred to the BW system directly, without records being transferred to the delta queue in the SAP R/3 system (extractor delta method). The Financial Accounting line items are extracted from the SAP R/3 system in their most recent state (after-image delta method). This delta method is not suitable for filling InfoCubes directly in the BW system. Therefore, the line items must first be loaded in the BW system into an ODS object that identifies the changes made to individual characteristics and key figures within a delta data record. Other data targets (InfoCubes) can then be supplied with data from this ODS object.

Time stamp method: for Financial Accounting line items that have been posted in the SAP R/3 system since the last data request, the extractors identify the delta dataset using the time stamp in the document header (BKPF-CPUDT). When a delta dataset has been selected successfully, the SAP R/3 system logs two time stamps that delimit a selection interval for a DataSource in table BWOM2_TIMEST:
Field Name / Key / Description
MANDT / X / Client
OLTPSOURCE / X / DataSource
AEDAT / X / SYSTEM: Date
AETIM / X / SYSTEM: Time
UPDMODE / / Data update mode (full, delta, delta init)
TS_LOW / / Lower limit of the time selection (time stamp in seconds since 1.1.1990)
TS_HIGH / / Upper limit of the time selection (time stamp in seconds since 1.1.1990)
LAST_TS / / Flag: 'X' = last time stamp interval of the delta extraction
TZONE / / Time zone
DAYST / / Daylight saving time active?
The time stamps are determined from the system date and time and converted to the format seconds since 1.1.1990, taking the time zone and daylight saving time into account. To ensure correct and unique reconversion to date and time, the time zone and daylight saving time must be stored in table BWOM2_TIMEST. Table BWOM2_TIMEST therefore documents the loading history of Financial Accounting line items.
It also provides defined restart points following incorrect data requests. For a better overview, the time stamps in the example table are shown in date format; the columns TZONE and DAYST are left out.
OLTPSOURCE / AEDAT/AETIM / UPD / DATE_LOW / DATE_HIGH / LAST_TS
0FI_GL_4 / 16 May 2000 20:15 / Init / 01 Jan. 1990 / 15 May 2000 /
0FI_GL_4 / 24 May 2000 16:59 / Delta / 16 May 2000 / 23 May 2000 /
0FI_GL_4 / 02 June 2000 21:45 / Delta / 24 May 2000 / 01 June 2000 /
0FI_GL_4 / 15 June 2000 12:34 / Delta / 02 June 2000 / 14 June 2000 /
0FI_GL_4 / 21 June 2000 18:12 / Delta / 15 June 2000 / 20 June 2000 / X
0FI_AP_4 / 18 May 2000 21:23 / Init / 01 Jan. 1990 / 15 May 2000 /
0FI_AP_4 / 30 May 2000 12:48 / Delta / 16 May 2000 / 23 May 2000 /
0FI_AP_4 / 10 June 2000 13:19 / Delta / 24 May 2000 / 01 June 2000 / X
0FI_AR_4 / 17 May 2000 18:45 / Init / 01 Jan. 1990 / 15 May 2000 /
0FI_AR_4 / 04 June 2000 13:32 / Delta / 16 May 2000 / 01 June 2000 /
0FI_AR_4 / 16 June 2000 15:41 / Delta / 02 June 2000 / 14 June 2000 / X
0FI_TAX_4 / 17 May 2000 18:45 / Init / 01 Jan. 1990 / 15 May 2000 /
0FI_TAX_4 / 04 June 2000 13:32 / Delta / 16 May 2000 / 01 June 2000 /
0FI_TAX_4 / 16 June 2000 15:41 / Delta / 02 June 2000 / 14 June 2000 / X

Constraints: per day, no more than one delta dataset can be transferred for InfoSource 0FI_GL_4; the extracted data therefore has the status of the previous day. For further data requests on the same day, the InfoSource does not provide any data. In delta mode, data requests with InfoSources 0FI_AR_4 and 0FI_AP_4 do not provide any data if no new extraction has taken place with InfoSource 0FI_GL_4 since the last data transfer. This ensures that the data in the BW system for Accounts Receivable and Accounts Payable Accounting is exactly as up to date as the data for General Ledger Accounting. If you delete the initialization selection in the source system for InfoSource 0FI_GL_4 in the BW system Administrator Workbench, the time stamp entries for InfoSources 0FI_GL_4, 0FI_AP_4, 0FI_AR_4 and 0FI_TAX_4 are also removed from table BWOM2_TIMEST.
Recording changed line items: in the case of Financial Accounting line items that have been changed since the last data request in the SAP R/3 system, there is no reliable time stamp that can document the time of the last change. For this reason, all line items that are changed in a way relevant to BW must be logged in the SAP R/3 system.
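The timestamp-interval bookkeeping described above can be sketched roughly as follows (an illustrative Python sketch; the field names follow BWOM2_TIMEST and BKPF-CPUDT, but the logic is a simplification, not SAP's actual extractor code):

```python
# Minimal sketch of timestamp-interval delta selection: each successful
# request logs a (ts_low, ts_high) interval, and the next delta starts
# just after the last interval's upper limit.
from datetime import datetime

EPOCH = datetime(1990, 1, 1)  # time stamps are seconds since 1.1.1990

def to_ts(dt):
    return int((dt - EPOCH).total_seconds())

class DeltaExtractor:
    def __init__(self):
        self.intervals = []  # logged (ts_low, ts_high) selection intervals

    def extract(self, documents, now):
        """Select documents posted since the last interval's upper limit."""
        ts_low = self.intervals[-1][1] + 1 if self.intervals else 0
        ts_high = to_ts(now)
        self.intervals.append((ts_low, ts_high))
        return [d for d in documents if ts_low <= to_ts(d["cpudt"]) <= ts_high]

docs = [
    {"belnr": "100", "cpudt": datetime(2000, 5, 10)},
    {"belnr": "101", "cpudt": datetime(2000, 5, 20)},
]
ex = DeltaExtractor()
init = ex.extract(docs, now=datetime(2000, 5, 15))   # picks up document 100
delta = ex.extract(docs, now=datetime(2000, 5, 24))  # picks up document 101 only
```

Because the selection is driven purely by the posting time stamp, documents changed after the fact are invisible to it, which is exactly why changed line items need separate logging, as the text notes.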
-
Difference: Full load, Delta Load & INIT load in BW
Hi Experts,
I am new to SAP BW Data warehousing.
What is the difference between Full load, Delta Load & INIT load in BW.
Regards
Praveen
Hi Praveen,
Hope the below helps you...
Full update:
A full update requests all the data that corresponds with the selection criteria you have determined in the scheduler.
You can indicate that a request with full update mode is a full repair request using the scheduler menu. This request can be posted to every data target, even if the data target already contains data from an initialization run or delta for this DataSource/source system combination, and has overlapping selection conditions.
If you use the full repair request to reload the data into the DataStore object after you have selectively deleted data from the DataStore object, note that this can lead to inconsistencies in the data target. If the selections for the repair request do not correspond with the selections you made when selectively deleting data from the DataStore object, posting this request can lead to duplicated data records in the data target.
Initialization of the delta process
Initializing the delta process is a precondition for requesting the delta.
More than one initialization selection is possible for different, non-overlapping selection conditions to initialize the delta process when scheduling InfoPackages for data from an SAP system. This gives you the option of loading the data relevant for the delta process into the Business Information Warehouse in steps. For example, you could load the data for cost center 1000 to BI in one step and the data for cost center 2000 in another.
The delta requested after several initializations combines all successful init selections as its selection criterion. After this, the selection condition for the delta can no longer be changed. In the example, data for cost centers 1000 and 2000 is loaded to BI during a delta.
Delta update
A delta update requests only the data that has appeared since the last delta. Before you can request a delta update you first have to initialize the delta process. A delta update is only possible for loading from SAP source systems. If a delta update fails (status red in the monitor), or the overall status of the delta request was set to red manually, the next data request is carried out in repeat mode. With a repeat request, the data that was loaded incorrectly or incompletely in the failed delta request is extracted along with the data that has accrued from this point on. A repeat can only be requested in the dialog screen. If the data from the failed delta request has already been updated to data targets, delete the data from the data targets in question. If you do not delete the data from the data targets, this could lead to duplicated data records after the repeat request.
Repeat delta update
If the loading process fails, the delta for the DataSource is requested again.
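The full / init / delta / repeat behaviour described above can be sketched with a toy delta queue (illustrative Python with invented names; the real mechanism lives in the SAP source system):

```python
# Illustrative sketch of the request modes: 'init' establishes the delta
# pointer, 'delta' returns only records added since the last request, and
# a failed delta is re-requested as a 'repeat' (failed delta + new data).

class DeltaQueue:
    def __init__(self):
        self.records, self.pointer, self.last_delta = [], 0, 0

    def post(self, rec):
        self.records.append(rec)

    def request(self, mode):
        if mode == "full":
            return list(self.records)            # everything, no pointer kept
        if mode == "init":
            self.pointer = len(self.records)     # delta starts after this point
            return list(self.records)
        if mode == "delta":
            self.last_delta = self.pointer       # remember start for a repeat
            out = self.records[self.pointer:]
            self.pointer = len(self.records)
            return out
        if mode == "repeat":
            out = self.records[self.last_delta:]  # failed delta plus new records
            self.pointer = len(self.records)
            return out

q = DeltaQueue()
q.post("r1"); q.post("r2")
print(q.request("init"))    # ['r1', 'r2'] - loads history, sets the pointer
q.post("r3")
print(q.request("delta"))   # ['r3'] - only what arrived since the init
q.post("r4")
print(q.request("repeat"))  # ['r3', 'r4'] - the failed delta again, plus new data
```

This also shows why a repeat can duplicate data if the failed delta was already posted to a target: the repeat re-delivers 'r3'.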
Early delta initialization: this is an advanced concept, but just for your information...
With early delta initialization you can already write the data to the delta queue, or to the delta tables in the application, while the initialization is being requested in the source system. This enables you to initialize the delta process, in other words the init request, without having to stop posting data in the source system. You can only carry out early delta initialization if it is supported by the extractor for the DataSource that is called in the source system with this data request. Extractors that support early delta initialization were first supplied with. -
Extraction - full or delta or neither?
Hi all,
I have encountered a strange behaviour.
I have a DataSource that extracts data from R/3 to BW. In the "Test DataSource" screen, I put a range of 2007.10 for the posting period, and it extracts all the records for 2007.10.
But when I execute the InfoPackage in BW, and subsequently DTP and activation of DSO, I have got both 2007.10 and 2007.11 data in my DSO (which means all data ran into my DSO).
I suspect there is something wrong with my DataSource or DTP, because in my Data Transfer Process Monitor, under the "Details" tab under "Process Request", there are multiple data packages - and it seems this includes all the previous requests and loads that I have performed on this DSO.
In my DTP extraction mode, I have tried both "Full" and "Delta", and in both cases I got the same problem.
Does anybody have any clue on how to resolve this? Is there a way to reset, or have I missed something? I appreciate the help. Thanks a lot.
- JK
Hi Alec,
If I delete the InfoPackage, would all the requests be deleted as well?
What I have tried is to delete the InfoPackage, delete data in DSO, and create Infopackage again, then execute DTP again.
As said, the PSA gets only the data I need, but when I load via the DTP and activate the ODS, I get everything...
Still don't know why it behaves like that...
- JK -
2LIS_11_VAHDR - Initialize delta process error
Hi,
I have an issue initializing infosource 2LIS_11_VAHDR for delta process on a new BW3.1c instance created from a system copy. Other delta loads work properly but not this one.
In the AWB monitor, the status indicates 'no data available' even if the source system LIS setup tables have been recreated with some records available. What's more puzzling is the fact that the error also indicates that 'the data request was a full update' even if the update mode is set to initialize delta process.
Any tips would be appreciated.
Thank you,
Frank
RSA3 simulated 0 records, but the R/3 table had plenty of records to be processed.
But what resolved the problem was deleting and re-creating the BW infopackage and re-initializing the delta process.
Thank you all for your time.
Frank -
First time delta process for 2lis_13_vdkon in bw 3.5?
Hi all,
Until now the data has been loaded through setup tables, and there is no delta process set up for 2LIS_13_VDKON.
Can anyone let me know how to set up the delta process, so that the daily updates run in the system rather than updating through a full update?
Thanks
Pooja
Hi Pooja,
Once you delete the setup table, it deletes the data for all the DataSources of the application area - in your case 13 (Sales).
Then, initialise (refill) the setup table. Make sure no transactions are happening while the initialisation run is executed.
Once the initialisation is complete on the R/3 side, run the initialisation InfoPackage for the DataSource.
From the next load onwards, all the data will be stored in the delta queue in the case of direct delta, or in the update queue in the case of queued delta. In either case you will get the delta data. You might have to create a new InfoPackage for the delta.
Roy -
Hi,
I need some help!!
In my current project, we are using DataSource 0FI_GL_10, which has an AIED delta process. Does anyone know whether this configuration could cause any kind of extraction problem, such as duplication of data in BI? This is what is happening in my project. In the init of 0FI_GL_10 I extract one record, and after a few days, when I run a delta extraction, I have a new identical record in the PSA (with the same key fields and attributes as the other). Also, in my transformation rule that links the PSA with a DSO, the key figure 0SALE is parameterized as summation in the aggregation. Is this wrong - do I have to use overwrite aggregation instead - or is my problem in the type of delta process that the 0FI_GL_10 extractor has?
Thanks in advance for any kind of help.
Regards,
Juan Manuel
Hi,
There is a chance that this DataSource sends the same records again when run in delta mode, and that is why a DSO in overwrite mode is needed in between.
There are also a couple of notes describing problems related to this DataSource:
Notes 1002272, 1153944 and 1127034.
Refer to these if you are having specific issues with this DataSource.
Hope this helps
Gaurav -
Informatica Night Update (Delta) Process - More than one Delta Source
We are set up to run a nightly process to update our DW with changes that took place that day in our Siebel OLTP environment. We refer to this as our "delta process", since it only updates rows that were changed in the OLTP or adds new rows that were added to the OLTP. Here is our design:
* In our Siebel OLTP we have a view (V table) that contains only the records that have been changed since the last delta process. This way we can identify only those rows that need to be updated. So in these examples, when you see a table prefixed with "S_" it references the entire table, and a table prefixed with "V_" references only the changes to the underlying "S_" table.
Ex 1: the Order Item table (S_ORDER_ITEM) joins to the Account table (S_ORG_EXT). In the Informatica mapping SQ_JOINER we have a query with two SELECT statements whose results are concatenated with a UNION. The first SELECT statement selects all rows from V_ORDER_ITEM joined to S_ORG_EXT, so that all delta rows on the order item view are updated with the corresponding data from the Account table (S_ORG_EXT). The second SELECT statement selects all rows from S_ORDER_ITEM joined to V_ORG_EXT, so that all the order item records containing account information that changed (per the view) are updated. The result is an updated Order Item DW table that contains all updates made to the order items and any associated account information stored on the order item.
SELECT A.*, B.* FROM V_ORDER_ITEM A, S_ORG_EXT B WHERE A.ORG_ID = B.ROW_ID
UNION
SELECT A.*, B.* FROM S_ORDER_ITEM A, V_ORG_EXT B WHERE A.ORG_ID = B.ROW_ID
The issues:
This works fine when you have two tables joined together that contain deltas and you need only one UNION statement. However, the issue arises when I have 14 tables joined to S_ORDER_ITEM that contain deltas. This cannot be accomplished (that I can see) with one UNION statement; you would need a UNION branch for each delta table.
Ex 2: this example contains just 3 tables. The Order Item table (S_ORDER_ITEM) joins to the Account table (S_ORG_EXT) and to the Product table (S_PROD_INT). In this example you need one UNION branch for each delta table. If you combine delta tables in the same branch you will ultimately end up missing data in the final result, because each delta table contains only the rows that have changed: if one delta table contains a change and needs to pull data from another delta table that did not have a corresponding change, it will not pull the information.
SELECT A.*, B.*, C.* FROM V_ORDER_ITEM A, S_ORG_EXT B, S_PROD_INT C WHERE A.ORG_ID = B.ROW_ID AND A.PROD_ID = C.ROW_ID
UNION
SELECT A.*, B.*, C.* FROM S_ORDER_ITEM A, V_ORG_EXT B, S_PROD_INT C WHERE A.ORG_ID = B.ROW_ID AND A.PROD_ID = C.ROW_ID
UNION
SELECT A.*, B.*, C.* FROM S_ORDER_ITEM A, S_ORG_EXT B, V_PROD_INT C WHERE A.ORG_ID = B.ROW_ID AND A.PROD_ID = C.ROW_ID
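One way to keep the "one UNION branch per delta table" pattern manageable is to generate the SQL rather than hand-write it. A hedged Python sketch, using the table names and join conditions from the examples above (the generator itself is illustrative, not an Informatica feature):

```python
# Sketch: generate the N-branch UNION described above, where each branch
# swaps exactly one S_ table for its V_ delta view. Purely illustrative.

def delta_union(driver, joins):
    """driver: base table name; joins: list of (table, join_condition) pairs."""
    tables = [driver] + [t for t, _ in joins]
    conds = " AND ".join(c for _, c in joins)
    branches = []
    for i in range(len(tables)):
        # In branch i, table i contributes the delta (V_); the rest are full (S_).
        froms = ", ".join(
            ("V_" if j == i else "S_") + t + " " + chr(ord("A") + j)
            for j, t in enumerate(tables)
        )
        branches.append("SELECT * FROM " + froms + " WHERE " + conds)
    return "\nUNION\n".join(branches)

sql = delta_union(
    "ORDER_ITEM",
    [("ORG_EXT", "A.ORG_ID = B.ROW_ID"), ("PROD_INT", "A.PROD_ID = C.ROW_ID")],
)
print(sql)  # three SELECTs joined by UNION, one V_ table per branch
```

With 15 delta tables this still produces 15 branches, so the underlying cost does not go away, but the query becomes a derived artifact instead of 100-column hand-maintained SQL.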
The questions:
1. Is my understanding of how the delta process works correct?
2. Is my understanding that I will need a UNION branch for each delta table correct?
3. Is there another way to perform the delta process?
My issue is that I join roughly 15 delta tables and select about 100 columns to denormalize the data in the DW. If this is the only option I have, it will generate a very large and complex query that would be very difficult to manage and update.
Thanks...
Hi,
Going through your post, I see that you have both the delta view (V_) and the main table (S_) as drivers, i.e. you have two driver tables, and hence you make outer joins (with respect to each other) and a union to get the complete set of data.
Can you please tell me why both are considered drivers? Is there a possibility that the V_ view may not have some data, but the corresponding S_ table might have an update?
Regards,
Bharadwaj Hari -
Full and delta Data loads in SAP BI 7.0
Hi,
Kindly explain me the difference between full and delta data loads in cube and DSO.
When do we select change log or active table(with or without archive) with full/delta in DTP for DSO.
Please explain the differences in data load for above different combinations.
With thanks and regards,
Mukul
Please search the forums. This topic has already been discussed many times.
-
Re: Update Cache Objects in Delta Process Dosn't work
Hi All,
Re: Update Cache Objects in Delta Process doesn't work.
BI 7 - SP 17
This is the scenario I am working on: I am running a BEx query on a cube (via a MultiProvider) with a bunch of aggregates.
The daily extraction and aggregate rollup are correct, but when I run the BEx query it displays incorrect key figure values compared to what we see in LISTCUBE for the InfoCube.
When I ran the same query in RSRT with "Do not use Cache", it gave correct results; when I then ran the BEx query again, it fixed itself and displayed correctly.
InfoCube - standard & No compression for requests
Query Properties are
Read Mode - H
Req Status - 1
Cache - Main Memory Cache Without Swapping
Update Cache Objects in Delta Process (Flag selected)
SP grouping - 1
This problem occurs once every couple of weeks, and my question is: is there a permanent fix for it?
Or should we turn the cache off?
Can anyone please help.
Thanking You.
Rao

Hi Kevin/Rao,
We are currently experiencing problems with the 'Update Cache Objects in Delta' process. Did either of you manage to resolve your issues, and if so, how? -
Explanation of the delta process in SAP BI - ROCANCEL and 0RECORDMODE
Hello,
I use the delta process with cockpit datasources (2LIS_17_I3HDR and 2LIS_17_I0NOTIF).
I have transferred all data from the extraction queue to the BW queue.
Afterwards, when I launch the delta process in SAP BI (with the InfoPackage), two ODS will be updated. I wanted to know how SAP knows what the delta really is. I have seen that there is a ROCANCEL field in the PSA; how does SAP choose the right row? Does it recognize ROCANCEL and replace the cancelled row with the new one? Do we have to do anything special (like mapping ROCANCEL to 0RECORDMODE)?
Can you explain a little how SAP handles this (ROCANCEL values, 0RECORDMODE, etc.) and what I have to do?
Thanks a lot,
Regards,
Julien

Check:
Re: Indicator: Cancel Data Record
Re: 0RECORDMODE, 0STORNO, ROCANCEL -
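For intuition only, here is a hypothetical Python sketch of how a cancellation indicator like ROCANCEL can be mapped to a record mode and applied to an ODS-like target. The mapping and the 'R' (reversal) semantics are simplified assumptions, not BW internals; in BW the ROCANCEL-to-0RECORDMODE mapping is delivered with the DataSource and transfer rules.

```python
# Illustrative sketch (not SAP internals): mapping a ROCANCEL-style flag
# to a record mode and applying it to an ODS-like target with an
# additive key figure. Values and semantics are simplified assumptions.

def to_recordmode(rocancel):
    """'' (blank) is treated as an after image; 'X' as a reversal ('R')."""
    return "R" if rocancel == "X" else ""

ods = {}  # target: key figure per document key

def apply_row(key, amount, rocancel):
    mode = to_recordmode(rocancel)
    if mode == "R":
        ods[key] = ods.get(key, 0) - amount  # a reversal subtracts the amount
    else:
        ods[key] = ods.get(key, 0) + amount  # an after image adds it

apply_row("NOTIF1", 100, "")    # original document
apply_row("NOTIF1", 100, "X")   # cancellation reverses it
```

The point of the sketch: the target does not need to "find and delete" the old row; the record-mode-driven update rule nets the cancellation against the original.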
FBU Internal Job Queue Full for Synchronous Processing of Print Requests
Hello All,
We are getting a lot of errors in system log (SM21) with the
Error : FBU Internal Job Queue Full for Synchronous Processing of Print Requests
User :SAPSYS
MNO:FBU
=============================================================
Documentation for system log message FB U :
If a spool server is to process output requests in the order they were
created using multiple work processes, the processing of requests for a
particular device can only run on one work process. To do this an
internal queue (limited size) is used.
If too many requests are created for this device too quickly, the work
process may get overloaded. This is recognized when, as in this case,
the internal queue is exhausted.
This can only be solved by reducing the print load or increasing
processor performance or, if there is a connection problem to the host
spooler, by improving the transfer of data to the host spooler.
Increasing the number of spool work processes will not help, as
requests for one device can only be processed by one work process. If
processing in order of creation is not required, sequential request
processing can be deactivated (second page of device configuration in
Transaction SPAD). This allows several work processes to process
requests from the same device thus alleviating the bottleneck.
Enlarging the internal queue will only help if the overload is
temporary. If the overload is constant, a larger queue will eventually
also be overloaded.
===========================================================
Can you please tell me how to proceed.
Best Regards,
Pratyusha

Solution is here:
412065 - Incorrect output sequence of output requests
Reason and Prerequisites
The following messages appear in the developer trace (dev_trc) of the SPO processes or in the syslog:
S *** ERROR => overflow of internal job queue [rspowunx.c 788]
Syslog Message FBU:
Internal job queue for synchronous request processing of output requests full
The "request processing sequence compliance" on a spool server with several SPO processes only works as long as the server-internal job queue (see Note 118057) does not overflow. The size of this request queue is set using the rspo/global_shm/job_list profile parameter; the default value is 50 requests. However, if more output requests arrive for the spool server than can be processed (so that the internal request queue fills up), additional SPO processes are used to process the requests in parallel, and the output sequence of the requests is no longer guaranteed.
Solution
Increase the rspo/global_shm/job_list profile parameter to a much larger value. Unfortunately, the value actually required cannot be found by trial and error, because this queue contains all incoming output requests on a spool server, not just the "sequence compliant" ones. A practical lower bound for this value is the maximum number of sequence-compliant output requests for a single output device. If, for example, 1000 documents that should be output in sequence are sent from an application program to one output device, the queue must be able to hold 1000 entries so that it does not overflow even if the SPO process works through the requests at its slowest.
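The overflow behaviour the note describes can be sketched in a few lines. This is an illustrative Python model, not SAP spool code; the queue size of 50 is simply the default for rspo/global_shm/job_list mentioned above.

```python
# Illustrative model (not SAP code): a bounded per-server job queue like
# the one sized by rspo/global_shm/job_list. With sequential processing,
# a single worker drains requests for a device; if producers outrun it,
# the queue fills and further requests overflow (the FBU syslog message).

from collections import deque

QUEUE_SIZE = 50  # default value of rspo/global_shm/job_list per the note

queue = deque()
overflowed = []

def submit(request_id):
    """Enqueue a print request, or record an overflow if the queue is full."""
    if len(queue) >= QUEUE_SIZE:
        overflowed.append(request_id)
    else:
        queue.append(request_id)

# 60 requests arrive before the single worker has processed any of them:
for i in range(60):
    submit(i)
```

This also shows why adding spool work processes does not help the sequential case: only the queue size or the drain rate of the one worker changes the outcome.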