Process chains - different data loads at different points of time
Hi friends,
I want to load master data at 4:00 AM and then transaction data at 4:25 AM. How can I achieve this using process chains? Also, to load other data at different times, where do we specify these different timings?
Thanks in advance.
Hi Venkat,
Create a process chain for your data and follow the steps below.
In the process chain, double-click the Start process:
1) Select the "Direct Scheduling" option.
2) Click "Maintain Selections".
3) Enter the date and time when you want the job to run.
4) The job will start according to the time you entered in the selections.
Regards,
Lakshman kumar Ghattamaneni
Similar Messages
-
Load fails while running a Process Chain even before loading to the PSA.
Hi Experts,
We are working on BI 7.0 and ECC 6.0 on an IS-U Project.
We are loading Transaction Data using Process Chains. Data load is failing even before loading to the PSA and the error message is as below:
Error message during processing in BI
Diagnosis
An error occurred in BI while processing the data. The error is documented in an error message.
System Response
A caller 01, 02 or equal to or greater than 20 contains an error message.
Further analysis:
The error message(s) was (were) sent by:
Inbound Processing
Procedure
Check the error message (pushbutton below the text).
Select the message in the message dialog box, and look at the long text for further information.
Follow the instructions in the message.
Has anyone come across this type of error? If so, what could be the reason for the failure, and what is a probable solution?
Your time is very much appreciated.
Thanks in Advance.
Best Regards,
Chandu.
Hi,
It might be a network connection issue between the source system and BW. Can you get your Basis colleague to look into the connection?
Hope this helps.
Cheers,
Gimmo -
Aggregating data loaded into different hierarchy levels
I have some problems when I try to aggregate a variable called PRUEBA2_IMPORTE, dimensioned by a time dimension (parent-child type).
I read the help in the DML Reference of the OLAP Worksheet, and it says the following:
When data is loaded into dimension values that are at different levels of a hierarchy, then you need to be careful in how you set status in the PRECOMPUTE clause in a RELATION statement in your aggregation specification. Suppose that a time dimension has a hierarchy with three levels: months aggregate into quarters, and quarters aggregate into years. Some data is loaded into month dimension values, while other data is loaded into quarter dimension values. For example, Q1 is the parent of January, February, and March. Data for March is loaded into the March dimension value. But the sum of data for January and February is loaded directly into the Q1 dimension value. In fact, the January and February dimension values contain NA values instead of data. Your goal is to add the data in March to the data in Q1. When you attempt to aggregate January, February, and March into Q1, the data in March will simply replace the data in Q1. When this happens, Q1 will only contain the March data instead of the sum of January, February, and March. To aggregate data that is loaded into different levels of a hierarchy, create a valueset for only those dimension values that contain data.
DEFINE all_but_q4 VALUESET time
LIMIT all_but_q4 TO ALL
LIMIT all_but_q4 REMOVE 'Q4'
Within the aggregation specification, use that valueset to specify that the detail-level data should be added to the data that already exists in its parent, Q1, as shown in the following statement.
RELATION time.r PRECOMPUTE (all_but_q4)
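As an aside, the overwrite behaviour that excerpt describes can be modelled in a few lines of plain Python. This is not OLAP DML; the dictionary-based rollup and the month values are invented purely to illustrate why leaving a preloaded parent out of the precompute set preserves its value:

```python
# Q1 already holds the Jan+Feb total (70.0); only March has detail data.
data = {"JAN": None, "FEB": None, "MAR": 30.0, "Q1": 70.0}
children = {"Q1": ["JAN", "FEB", "MAR"]}

def rollup(values, tree, precompute):
    """Recompute each parent listed in `precompute` from its children,
    replacing the parent's stored value (as the AGGREGATE command does)."""
    for parent in precompute:
        kids = [values[c] for c in tree[parent] if values[c] is not None]
        values[parent] = sum(kids) if kids else None

naive = dict(data)
rollup(naive, children, precompute=["Q1"])   # Q1 overwritten: 70.0 -> 30.0

kept = dict(data)
rollup(kept, children, precompute=[])        # Q1 excluded from precompute: 70.0 kept
```

In the naive run Q1 ends up holding only the March value; excluding Q1 from the precompute list, as the valueset trick does, is what keeps the directly loaded total alive.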
How do I do this for more than one dimension?
Below is my case study:
DEFINE T_TIME DIMENSION TEXT
T_TIME
200401
200402
200403
200404
200405
200406
200407
200408
200409
200410
200411
2004
200412
200501
200502
200503
200504
200505
200506
200507
200508
200509
200510
200511
2005
200512
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
2004 NA
200412 2004
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
2005 NA
200512 2005
DEFINE PRUEBA2_IMPORTE FORMULA DECIMAL <T_TIME>
EQ -
aggregate(this_aw!PRUEBA2_IMPORTE_STORED using this_aw!OBJ262568349 -
COUNTVAR this_aw!PRUEBA2_IMPORTE_COUNTVAR)
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 4,00 ---> here it's right!! but...
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00 ---> here it must be 30,00, not 10,00
200512 NA
DEFINE PRUEBA2_IMPORTE_STORED VARIABLE DECIMAL <T_TIME>
T_TIME PRUEBA2_IMPORTE_STORED
200401 NA
200402 NA
200403 NA
200404 NA
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
2004 NA
200412 NA
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
2005 10,00
200512 NA
DEFINE OBJ262568349 AGGMAP
AGGMAP
RELATION this_aw!T_TIME_PARENTREL(this_aw!T_TIME_AGGRHIER_VSET1) PRECOMPUTE(this_aw!T_TIME_AGGRDIM_VSET1) OPERATOR SUM -
args DIVIDEBYZERO YES DECIMALOVERFLOW YES NASKIP YES
AGGINDEX NO
CACHE NONE
END
DEFINE T_TIME_AGGRHIER_VSET1 VALUESET T_TIME_HIERLIST
T_TIME_AGGRHIER_VSET1 = (H_TIME)
DEFINE T_TIME_AGGRDIM_VSET1 VALUESET T_TIME
T_TIME_AGGRDIM_VSET1 = (2005)
Regards,
Mel.
Mel,
There are several different types of "data loaded into different hierarchy levels", and the approach to solving the issue is different depending on the needs of the application.
1. Data is loaded symmetrically at uniform mixed levels. An example would be loading data at "quarter" in historical years but at "month" in the current year; this does /not/ include data loaded at both quarter and month within the same calendar period.
= solved by the setting of status, or in 10.2 or later with the load_status clause of the aggmap.
2. Data is loaded at both a detail level and its ancestor, as in your example case.
= The aggregate command overwrites aggregate values based on the values of the children; this is the only repeatable thing that it can do. The recommended way to solve this problem is to create 'self' nodes in the hierarchy representing the data loaded at the aggregate level, each of which is then added as one of the children of the aggregate node. This enables repeatable calculation as well as auditability of the resultant value.
Also note the difference in behavior between the aggregate command and the aggregate function. In your example the aggregate function looks at '2005', finds a value and returns it, for a result of 10; the aggregate command would recalculate based on January and February, for a result of 20.
To solve your usage case I would suggest a hierarchy that looks more like this:
DEFINE T_TIME_PARENTREL RELATION T_TIME <T_TIME T_TIME_HIERLIST>
-----------T_TIME_HIERLIST-------------
T_TIME H_TIME
200401 2004
200402 2004
200403 2004
200404 2004
200405 2004
200406 2004
200407 2004
200408 2004
200409 2004
200410 2004
200411 2004
200412 2004
2004_SELF 2004
2004 NA
200501 2005
200502 2005
200503 2005
200504 2005
200505 2005
200506 2005
200507 2005
200508 2005
200509 2005
200510 2005
200511 2005
200512 2005
2005_SELF 2005
2005 NA
Resulting in the following cube:
T_TIME PRUEBA2_IMPORTE
200401 NA
200402 NA
200403 2,00
200404 2,00
200405 NA
200406 NA
200407 NA
200408 NA
200409 NA
200410 NA
200411 NA
200412 NA
2004_SELF NA
2004 4,00
200501 5,00
200502 15,00
200503 NA
200504 NA
200505 NA
200506 NA
200507 NA
200508 NA
200509 NA
200510 NA
200511 NA
200512 NA
2005_SELF 10,00
2005 30,00
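The effect of the 'self' node can be sketched in plain Python as well (again hypothetical, not OLAP DML): the 10,00 loaded directly at 2005 is moved to a 2005_SELF child, so recomputing the parent from its children is repeatable and yields 30,00:

```python
# The directly loaded value lives in 2005_SELF, a child of 2005.
data = {"200501": 5.0, "200502": 15.0, "2005_SELF": 10.0, "2005": None}
children = {"2005": ["200501", "200502", "2005_SELF"]}

for parent, kids in children.items():
    vals = [data[k] for k in kids if data[k] is not None]
    data[parent] = sum(vals) if vals else None  # NA-skipping SUM, as in the aggmap

# data["2005"] is now 30.0, and re-running the loop gives 30.0 again:
# the recalculation is repeatable because no child value was overwritten.
```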
3. Data is loaded at a level based upon another dimension; for example product being loaded at 'UPC' in EMEA, but at 'BRAND' in APAC.
= This can currently only be solved by issuing multiple aggregate commands to aggregate the different regions with different input status, which unfortunately means that it is not compatible with compressed composites. We will likely add better support for this case in future releases.
4. Data is loaded at both an aggregate level and a detail level, but the calculation is more complicated than a simple SUM operator.
= often requires the use of ALLOCATE in order to push the data to the leaves in order to correctly calculate the aggregate values during aggregation. -
Process chain for full load.
Hi,
I have developed a process chain for delta loads. Now my question is: can we use a process chain to load a full-load DataSource for master data? If yes, what are the steps?
If we can use a process chain for a full load of master data, can we then use both delta and full-load InfoPackages in the process chain?
Please update.
Hi Sata,
You can Include both Full and Delta in the Process Chain. But make sure that you execute the Init InfoPackage manually before executing the delta InfoPackage in the Process Chain.
For a full-load process chain, add: InfoPackage --> DTP for full load --> delete PSA data variant (after 2 days) --> attribute change run.
Normally, load the hierarchy first, then the attributes and texts.
Assign points if it helped you.
Regards,
Senoy -
Authorization objects for Process chain and Data source in BW 3.x
Hi,
Can anyone tell me the authorization objects regarding process chains and DataSources in BW 3.x versions? I guess we have auth objects for both of them in BW 3.5, namely S_RS_PC and S_RS_DS.
Can anyone help me solve this issue?
Thanks
Bharat
Hi Bharat,
I don't think these objects are part of 3.0.
Check in SU03 whether these objects are present in the 3.0 box.
In 7.0 they exist:
http://help.sap.com/saphelp_bw33/helpdata/en/8b/134c3b5710486be10000000a11402f/frameset.htm
To see whether these objects exist:
Go to Tools -> Administration -> User Maintenance -> Information System -> Authorization Objects -> Authorization Objects by Complex Selection Criteria -> By Object Class. For the object class, enter RS (for Business Information Warehouse objects)
OR
S_BCE_68001413 (the transaction code for this report)
Thanks,
Raj -
Authorization objects for Process chain and Data sources in BW 3.x version
Hi,
Can anyone tell me the authorization objects regarding process chains and DataSources in BW 3.x versions? I guess we have auth objects for both of them in BW 3.5, namely S_RS_PC and S_RS_DS.
Can anyone help me solve this issue?
Thanks
Bharat
It's the same thread again:
/community [original link is broken]
Thanks,
Raj -
Regarding master data loading for different source systems
Hi Friends,
I have an issue regarding master data loading.
We have two source systems: one is 4.6C and the other is ECC 6.0.
First I am loading the master data from 4.6C into BI 7.0.
Now this 4.6C is upgraded to ECC 6.0.
In both 4.6C and ECC 6.0 the master data is changing.
After some time there will be no 4.6C; only ECC 6.0 will remain.
Now if I load master data from ECC 6.0 into BI 7.0, what will happen? Is it possible?
Could you please tell me?
Regards,
ramnaresh.
Hi ramnaresh porana,
Yes, it's possible. You can load data from ECC.
The data will not change. You may get more fields in the DataSource on the R/3 side, but on the BW/BI side there is no change: the mappings and structures are the same, so the data is also the same.
You need to take care of the deltas before and after the upgrade.
Hope it Helps
Srini -
Error in process chain (psa data)
We have got an error in our process chain for master data.
InfoObject name: 0CUSTOMER
It was showing an error in our request, but when I wanted to correct the data in the request, it showed no error records. All the records are good, yet the request is still flagged as in error. I deleted the request from the InfoObject and also from the PSA, and now the last good request also shows the error "No data packets available for the request".
0CUSTOMER is a delta update; the process chain runs daily.
My doubt is whether or not the records in the deleted requests will be reloaded from SAP R/3 into the PSA as requests during the process chain.
While monitoring the request details in the Monitor – Administrator Workbench, it shows the following:
1. Requests (messages) Everything Ok
2. Extraction (messages) Missing messages
3. Transfer (IDocs and TRFC) Missing messages
4. Processing (data Packet) No data
5. Process chains : Everything Ok.
Even after deleting the requests, my process chain is not getting updated.
Kindly help me in solving the above problem.
Regards
Padma
Hi,
Once you have deleted the request, try to trigger the load manually and see whether it completes. If it completes and pulls records, then you can push the chain process ahead with the program RSPC_PROCESS_FINISH.
Regards,
Shashank -
Loading through Process Chains 2 Delta Loads and 1 Full Load (ODS to Cube).
Dear All,
I am loading through process chains with 2 delta loads and 1 full load from an ODS to a cube in 3.5. I am in the development process.
My loading process is:
Start - 2 Delta Loads - 1 Full Load - ODS Activation - Delete Index - Further Update - Delete overlapping requests from infocube - Creating Index.
My question is:
When I load for the first time I get some data, and for the next load I should get zero, as there is no data for the next load; but I am getting the same number of records for the next load. Maybe it is taking data from the full upload, I guess. Please guide me.
Krishna.
Hi,
The reason you are getting the same number of records is, as you said, the full load: after running the deltas you get all the changed records, but after those two deltas you again have a full-load step, which picks up the whole of the data all over again.
The reason you are getting the same number of records is:
1> You are running the chain for the first time.
2> You ran these delta InfoPackages for the first time. While initializing the deltas you might have chosen "Initialization without data transfer", so when you ran the deltas for the first time they picked up the whole of the data. Running a full load after that will also pick up the same number of records.
If the two deltas run one after another, I'd say you got the data because of some changes. Since you are loading from a single ODS to a cube, both your delta and your full load will pick up the same data "for the first time" during data marting, because they have the same DataSource (the ODS).
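A toy model in plain Python (the records are invented; this is not the actual extractor logic) of why the full-load step re-picks what the deltas just loaded: a delta reads only records changed since the last run and then advances its pointer, while a full load always reads the whole ODS:

```python
ods = [{"doc": 1, "changed": True},
       {"doc": 2, "changed": True},
       {"doc": 3, "changed": False}]

def delta_load(source):
    picked = [r for r in source if r["changed"]]
    for r in picked:
        r["changed"] = False  # the delta pointer advances past these records
    return picked

def full_load(source):
    return list(source)  # a full load ignores the delta pointer entirely

first_delta = delta_load(ods)   # picks 2 records
second_delta = delta_load(ods)  # picks 0 records: nothing changed since last run
full = full_load(ods)           # picks all 3 records again, hence the repeated counts
```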
Hopefully this will serve your purpose.
Thanks & Regards
Vaibhave Sharma
Edited by: Vaibhave Sharma on Sep 3, 2008 10:28 PM -
Process Chain Master Data Failed. Showing "Entire chain has status R"
Hi,
Every day the SDMasterChain runs successfully.
Today one of the local chains (subchains) failed.
I have noticed that it failed because the last delta for one InfoPackage had not yet completed, and the chain shows the status "Entire chain now has status 'R'".
Can anybody resolve the issue?
Below is the log for the error.
Job started
Step 001 started (program RSPROCESS, variant &0000000113991, user ID ALEREMOTE)
Last delta upload not yet completed. Cancel
Data saved successfully
Start InfoPackage ZPAK_3VB5JI5I0N6QJ7RFSFU7Z3PF2
Last delta upload not yet completed. Cancel
Last delta upload not yet completed. Cancel
InfoPackage ZPAK_3VB5JI5I0N6QJ7RFSFU7Z3PF2 created request
Request REQU_449HFL0OFD22BSU2GX1X2MJJJ could not be generated with InfoPackage REQU_449HFL0OFD22BSU2GX1X2MJJJ without errors
Last delta upload not yet completed. Cancel
Error After Starting InfoPackage ZPAK_3VB5JI5I0N6QJ7RFSFU7Z3PF2 in Process Chain
Entire chain now has status 'R'
Process Attribute Change Run, variant Compounding Object has status Undefined (instance )
Process Save Hierarchy, variant Generated from LOADING ZPAK_3VBSFASA7NWCNP1JX9WXI5 has status Undefined (instance )
Process Execute InfoPackage, variant 0CUST_SALES_ATTR - Full has status Undefined (instance )
Process Execute InfoPackage, variant 0CUST_SALES_TEXT has status Undefined (instance )
Process Execute InfoPackage, variant 0CUST_SALES_TID_LKDH_HIER has status Undefined (instance )
Process Execute InfoPackage, variant ZMAT_SALET - TEXT has status Undefined (instance )
Process Start Process, variant Bekaert Master Data Loads - Start Variant has status Undefined (instance 449HEDPDI8N6AP5XKDUODJS6N)
Process Execute InfoPackage, variant Load from 0MAT_SALES_ATTR into ZMAT_SALE has status Undefined (instance REQU_449HB4K9K3W7GPJEF52YM83N3)
Process Execute InfoPackage, variant Load from 0MAT_SALES_TEXT into ZMAT_SALE has status Undefined (instance REQU_449HJOO6322QV09OL73P18ODR)
Process Execute InfoPackage, variant ZMAT_SALEM - ATTR - FULL has status Undefined (instance REQU_449HEJW4S44QAUTY9LQKH4QY7)
Process Execute InfoPackage, variant Delta load from 0MAT_PLANT_ATTR into 0MAT_PLANT has status Undefined (instance REQU_449HFL0OFD22BSU2GX1X2MJJJ)
Termination of chain has been reported to meta chain 449CKM1O64AHRLQJLNZ2GBWQ7
Message sent successfully
Job finished
Rgds,
CV.
Hi,
There are times when a master data load compels us to do a re-init. I guess you need a re-init.
Check these links:
1: Re: Update mode R is not supported by the extraction API
2: pl help me with repeat delta for text info object
Regards
Happy Tony -
Process Chain - Master data Reorganizing and Realignment
Hello Experts,
1) Could you provide me with the best practice on using "Reorganize master data" for time-independent and time-dependent attributes/texts?
2) In addition to the attribute change run for master data attributes/hierarchies/texts, how significantly would it help to add the "Reorganize master data" process step to the master data load process chains? Is it mandatory to have this step at all?
3) Is it sufficient to run the realignment for attributes alone, or do you have to run it again after loading the hierarchies?
4) Please advise whether to add a "Reorganize master data" step for every single attribute and text, since it can't be grouped like other process types such as the attribute realignment run!
Look forward for your updates!
Thank you, Vijay
Hello,
Please have a look at the following link:
http://help.sap.com/saphelp_nw04/helpdata/en/e3/e60138fede083de10000009b38f8cf/frameset.htm
It is probably not mandatory, but if your master data is a mess, then it would be better for performance.
Mike -
Number of parallel process definition during data load from R/3 to BI
Dear Friends,
We are using BI 7.00. We have a requirement in which I should increase the number of parallel processes during a data load from R/3 to BI. I want to modify this for a particular DataSource and check. Can the experts provide helpful answers for the following questions?
1) When a load is taking place, or has taken place, where can we see how many parallel processes that particular load has used?
2) Where should I change the setting for the number of parallel processes for the data load (from R/3 to BI), and not within BI?
3) How does the system work, and what will be the net result of increasing or decreasing the number of parallel processes?
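As a generic illustration of question 3 (plain Python, not SAP internals; the packets are made up), the parallel-process setting is essentially a worker-pool size: with N processes, N data packets are in flight at once, while with one process they are handled sequentially:

```python
from concurrent.futures import ThreadPoolExecutor

def process_packet(packet):
    return sum(packet)  # stand-in for posting one data packet

packets = [[1, 2], [3, 4], [5, 6]]

with ThreadPoolExecutor(max_workers=3) as pool:  # "3 parallel processes"
    results = list(pool.map(process_packet, packets))

# The results are identical to a sequential run with max_workers=1;
# only the elapsed time and the number of work processes occupied differ.
```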
Expecting Experts help.
Regards,
M.M
Dear Des Gallagher,
Thank you very much for the useful information provided. The following was my observation:
From the posts in this forum, I was given to understand that the setting for a specific DataSource can be made at the InfoPackage and DTP level. I carried this out and found that there is no change in the load; i.e., the system by default takes only one parallel process even though I maintained 6.
Can you kindly explain the above-mentioned point? I.e.:
1) Even though the value is maintained at the InfoPackage level, will the system consider it or not? If not, then from which transaction does the system derive the single parallel process?
Actually, we wanted to increase the package size, but we failed because I could not understand what values have to be maintained. Can you explain in detail?
Can you clarify my doubt and provide a solution?
Regards,
M.M -
Hi Experts,
I am using BI 7.0 (BW 7.0). My process chain is scheduled to run daily in the morning and it works fine, but sometimes the data load is delayed, even for a long time, or it gets stuck right at the start. Please help me find a solution so that the process chain doesn't get stuck, and advise how to reduce the load time.
thanks
Sankaresh S
One reason could be that the system resources are fully utilized when this process chain is about to start. If that's the case, you can change the schedule time.
You can check the Monitor screen's Details tab for further information when it hangs, and also check the job log and job status in SM37 and any dumps recorded in ST22. -
Hi
I have created a process chain and I have set it to run every day at night.
The selection in the InfoPackage must be the present date: today 2007.09.16, and tomorrow it should be 2007.09.17.
Is there any way it can change the date automatically, instead of me changing the date in the InfoPackage and activating the process chain daily?
Thanks
sandeep
Hi,
Check these links; they might be helpful for you.
Re: scheduling process chain 3 specific times a day
Re: How to schedule process chain to run only during specific time.
Process chain schedule daily twice
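The usual fix behind those links is a small ABAP routine (selection type 6) in the InfoPackage data selection that recomputes the date at each run. The sketch below only models that logic in Python; the function name and return format are made up for illustration:

```python
from datetime import date

def infopackage_date_selection(today=None):
    """Return low/high selection values for the date field: always the
    current calendar day, recomputed every time the chain runs."""
    today = today or date.today()
    value = today.strftime("%Y%m%d")  # SAP-style internal date, e.g. 20070916
    return {"low": value, "high": value}

sel = infopackage_date_selection(date(2007, 9, 16))
# sel == {"low": "20070916", "high": "20070916"}
```

Because the routine runs each time the InfoPackage is executed, the selection tracks the current day without anyone touching the InfoPackage.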
regards
biplab
<i>***Reward with points if it helps you!!</i> -
Process Chain error while loading CVC's into planning book
Hi, I am trying to run the process chain. Up to the halfway point it was good, but at one place, after "VMI Generate CVCs", there is a box showing in red: "Load InfoCube". When I checked it in batch processing, it says "ended with errors", and I couldn't find out exactly what to do to run the entire process chain. Can anybody tell me how I can resolve this issue?
Thanks
Hari
Hi Senthil,
For some of the SKUs it is giving the error: some abrupt values are stored in liveCache for these SKUs (CVCs), and that's why it is giving the error.
The only solution is that I have to delete the CVCs and create new ones.
But I will lose the data, as we can't take a backup of the old CVC data (it gives a COM error).
The reason for the abrupt values in liveCache could be macros/alerts.
How can we fix these types of errors, as I am getting them regularly?
Is there any note/patch to overcome this issue?
It would be great if I could get the solution.
Regards,
Rajesh Patil