DTP issue

Hi everyone,
I created a cube and the associated DTP, and it worked well.
Then I changed something in my cube (I added a dimension), and now I'm trying to execute the data transfer process again. But I can't get the request monitor to display; the request status popup doesn't open. When I execute the DTP, I have to wait for a while, and then I get this error message:
Runtime errors: TIME_OUT
Time limit exceeded
Package 1 Status 'processed with errors'
Message no. RSBK257
Have you ever had this problem?
Thank you in advance for your help.

Two things you can check to give us a better view:
1. Check ST22 for a related short dump for this DTP.
2. Go to SM21 and check whether there are any errors that might help us troubleshoot.

Similar Messages

  • DTP issue in process chain

    Hi guys,
    I have an issue with a DTP in a process chain.
    We have a process chain in quality as well as in development and production. The structure of the chain in production and development is the same, but the structure in quality does not match: there is one additional DTP in the chain in the quality system. When we run the chain, it fails at that additional DTP and the process chain stops.
    Now I am not able to find out where this additional DTP came from, how to find out where exactly it is from, and how to restart the chain.
    Can anybody help me out? It's very urgent for me.
    Regards
    sk

    Hi,
    I have asked you to deschedule the process chain and then check it using the check view.
    I think this is a dummy process which got added when you did the transport.
    Check whether you can see the error DTP when you run the check. If so, you can delete this DTP, as it is a dummy one, and re-link the processes if the link breaks because of this. Then check the process chain again and activate it.
    Finally, you can use the process chain after this.
    Thanks
    Assign points if this helps

  • Open Hub DTP issue (Error 21 when writing to the local workstation)

    Hi,
    I am using an open hub destination to output my CSV file. It's a shared folder on a local workstation. The data provider has around 50,000 rows (I'm expecting a 40 MB file). Whenever I execute the DTP, it runs for the first data package and then fails with the error message below.
    Error 21 when writing to the local workstation
    Error while updating to target XXXXXXX (type Open Hub Destination)
    Operation  could not be carried out for
    Exception CX_RS_FAILED logged
    Exception CX_RS_FAILED logged
    Sometimes the file gets created with partial entries; most of the time the file won't even get created. I created new DTPs and tried the same thing, and changed the package size, servers, parallel processes etc., but to no avail.
    Thanks Neo

    Hi Neo,
    Did you try to define the logical/physical file path in transaction FILE? If not, can you try this option and see if it works?
    Also, can you change your DTP settings to reduce the data packet size and see whether you still face this error message?
    Thanks
    Abhishek Shanbhogue

  • DTP issue in QA system

    Experts,
    I am facing this error with a DTP in Q. The DTP successfully loaded data in BI Dev, but when it was transported to Q there was an error, so I deleted and retransported it. I can see it again in Q, but when I double-click it the message is:
    "Object Data Transfer Process DTP_D54DXT95LZOV29CXCI6HBHCII not found". Has anyone had similar experiences?
    Thanks much.

    I figured it out. The second transport of the same DTP showed it, but when I clicked on it the message was still "DTP not found". A third transport finally worked.
    Sai

  • SAP BI issues.

    Hi Experts,
    I am a novice candidate pursuing SAP BW/BI opportunities and appearing for different SAP BW/BI interviews. Below are some of the questions I was asked by the interviewers. I appreciate your time and consideration in answering my queries.
    1) How do you initialize the setup tables for filling only the data of the past three years?
    And after a full load from the setup tables and executing the delta, there is later a requirement to add more application tables to the DataSource in the LO cockpit. How should we go about this without getting duplicate records from the source system, while retaining the original delta?
    2) What are the general transformation issues and the methods to solve them?
    3) How do you handle DTP issues, e.g. the DSO object has 1000 records but the InfoCube received only 900, bad characters, etc.?
    4) What is the RDA background process flow?
    5) How do you set up plan/actual comparisons, say for Profitability Analysis?
       Which data does the plan InfoCube contain with respect to the actual InfoCube? Can you explain with an example?
    6) In Business Content activation, how do you exclude the objects that have already been activated?
    7) Does a change run affect all aggregates, or only the aggregates containing the master data that is undergoing the change run?
    8) Can somebody send me sample functional requirements design documents/blueprints/detail design docs?
    9) What are functional support issues in SAP BI implementations?
    10) What is the fastest and best method to improve query performance?
        If I am right, is it cache settings?
    11) I am also preparing for the BI 7.0 certification exam, which is on March 7th. I need some sample questions.
    12) I need more information to prepare for SAP BI functional analyst and developer interviews.
    Thanks
    Mujtaba.
    Lot of points will be given for urgent replies.
    Edited by: Nazeeruddin Mujtaba Mohammed on Feb 16, 2008 7:20 PM

    Hi Jacky
    6) In Business Content activation, how do you exclude the objects that have already been activated?
         Just select the particular object, open the context menu, and choose "Do Not Install Below".
    12) Need more information to prepare for SAP BI functional analyst and developer interviews.
    Normally the production support activities include
    Scheduling
    R/3 Job Monitoring
    B/W Job Monitoring
    Taking corrective action for failed data loads.
    Working on some tickets with small changes in reports or in AWB objects.
    The activities in a typical Production Support would be as follows:
    1. Data Loading - could be using process chains or manual loads.
    2. Resolving urgent user issues - helpline activities
    3. Modifying BW reports as per the need of the user.
    4. Creating aggregates in Prod system
    5. Regression testing when version/patch upgrade is done.
    6. Creating adhoc hierarchies.
    we can perform the daily activities in Production
    1. Monitoring Data load failures thru RSMO
    2. Monitoring Process Chains Daily/weekly/monthly
    3. Perform Change run Hierarchy
    4. Check Aggr's Rollup
    To add to the above
    1)check data targets are ready for reporting,
    2) No failed or cancelled jobs in sm37 monitors and Bw Monitor.
    3) All requests are loaded for day, monthly and yearly also.
    4) Also to note down time taken for loading of critical info cubes which are used for reporting.
    5) Is there any break in any schedules from your process chains.
    Why are there frequent load failures during extractions, and how do you analyse them?
    If the failures are related to data, there might be data inconsistencies in the source system, even though we handle them properly in the transfer rules. We can monitor these issues in transaction RSMO and in the PSA (failed records) and update from there.
    If we are talking about the whole extraction process, there might be issues with work process scheduling and with the IDoc transfer from the source system to the target system. These can be re-initiated by cancelling that specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
    What are the daily tasks we do in production support? How often do we extract the data, and at what times?
    It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have; if the transfer rules are roundabout and the update rules contain calculations for customized key figures, long runtimes are to be expected.
    Usually you need to work in RSMO, see which records are failing, and update from the PSA.
    What are some of the frequent failures and errors?
    There is no single fixed reason for a load to fail; from an interview perspective I would answer it this way. Loads can fail because of:
    a) invalid characters
    b) a deadlock in the system
    c) a previous load failure, if the load is dependent on other loads
    d) erroneous records
    e) RFC connections
    These are some of the reasons for load failures.
    For RFC connections:
    We use SM59 for creating RFC destinations.
    Some questions
    1) RFC connection lost.
    A) Check it in transaction SM59:
    RFC Destinations
    + R/3 connections
    CRD (our R/3 client)
    double-click, then choose Test Connection from the menu
    2) Invalid characters while loading.
    A) Correct them in the PSA and load from there. (If they keep recurring, they can also be cleaned in a routine; a rough sketch follows below.)
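    Side note, not from the original answer: a common alternative to fixing the PSA by hand is to clean the characters in a BI 7-style start routine. This is only a sketch; the text field /bic/ztext and the allowed character set are illustrative (the permitted characters of a real system are maintained in transaction RSKC), and _ty_s_SC_1 stands for the generated source-package structure.
    CONSTANTS: lc_allowed(64) TYPE c
      VALUE ' !"%&()*+,-./:;<=>?_0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'.
    FIELD-SYMBOLS: <ls_rec> TYPE _ty_s_SC_1.
    DATA: lv_len TYPE i,
          lv_off TYPE i.

    LOOP AT SOURCE_PACKAGE ASSIGNING <ls_rec>.
      " replace every character that is not in the allowed set by a space
      lv_len = strlen( <ls_rec>-/bic/ztext ).
      DO lv_len TIMES.
        lv_off = sy-index - 1.
        IF NOT <ls_rec>-/bic/ztext+lv_off(1) CO lc_allowed.
          <ls_rec>-/bic/ztext+lv_off(1) = ' '.
        ENDIF.
      ENDDO.
    ENDLOOP.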
    3) ALEREMOTE user is locked.
    A) Ask your Basis team to release the user (it is mostly ALEREMOTE). Typical causes:
    - the password was changed
    - too many incorrect login attempts with ALEREMOTE
    - stale locks (use transaction SM12 to check for locks)
    4) Lowercase letters not allowed.
    A) Uncheck the "Lowercase letters" check box on the "General" tab of the InfoObject.
    5) While loading the data I get a message that 'Record ...'
    A) The field mentioned in the error message is not mapped to any InfoObject in the transfer rule.
    6) Object locked.
    A) It might be locked by some other process or user. Also check authorizations.
    7) "Non-updated IDocs found in Source System".
    8) While loading master data, one of the data packages has a red-light error message:
    Master data/text of characteristic ZCUSTSAL already deleted.
    9) Extraction job aborted in R/3.
    A) It might have been cancelled because it ran longer than expected, or it may have been cancelled by R/3 users if it was hampering performance.
    10) Request couldn't be activated because there is another request in the PSA with a smaller SID.
    A)
    11) Repeat of last delta not possible.
    12) DataSource not replicated.
    A) Replicate the DataSource from R/3 via the source system in the AWB, assign it to the InfoSource, and activate it again.
    13) DataSource/transfer structure not active.
    A) Run the program RS_TRANSTRU_ACTIVATE_ALL (e.g. via SE38) to activate it.
    14) ODS activation error.
    A) ODS activation errors can occur mainly for the following reasons:
    1. Invalid characters (e.g. # characters)
    2. Invalid data values for units/currencies etc.
    3. Invalid values for the data types of characteristics and key figures
    4. Errors generating SID values for some data
    15) Conversion routine error.
    A) Check the data format in the source.
    16) Object cannot be activated / error when activating an object.
    A) Check the consistency of the object.
    17) No data found (in a query).
    A) Check whether the InfoProvider contains data and delete any unsuccessful requests.
    18) Error generating or activating update rules.
    1. What are the extractor types?
    • Application-specific
    o BW Content extractors: FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-generated extractors: LIS, FI-SL, CO-PA
    • Cross-application (generic extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o OLI*BW Fill setup tables
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
    • MultiProvider: Does not contain physical data; it allows access to data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
    6. What is the difference between start routine and update routine, when, how and why are they called?
    • Start routine can be used to access InfoPackage while update routines are used while updating the Data Targets.
    7. What is the table that is used in start routines?
    • The table structure will always be the structure of the ODS or InfoCube. For example, if it is an ODS, the active table structure will be the table.
    8. Explain how you used Start routines in your project?
    • Start routines are used for mass processing of records. In the start routine, all the records of the data package are available, so we can process them together. In one scenario we wanted to apply a size split to forecast data: for example, if material M1 is forecast at, say, 100 for May, then after applying the size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted 4 records in place of the single incoming record. This is achieved in the start routine; a rough sketch follows below.
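    Purely as an illustration of that split in a BI 7.x transformation start routine (the field names size and forecast_qty, the percentages, and the generated types _ty_s_SC_1/_ty_t_SC_1 are placeholders for the real source structure):
    TYPES: BEGIN OF ty_split,
             size TYPE c LENGTH 2,
             pct  TYPE p DECIMALS 2,
           END OF ty_split.
    DATA: lt_split  TYPE STANDARD TABLE OF ty_split,
          ls_split  TYPE ty_split,
          lt_result TYPE _ty_t_SC_1,
          ls_rec    TYPE _ty_s_SC_1,
          ls_new    TYPE _ty_s_SC_1.

    ls_split-size = 'S'.  ls_split-pct = '0.20'. APPEND ls_split TO lt_split.
    ls_split-size = 'M'.  ls_split-pct = '0.40'. APPEND ls_split TO lt_split.
    ls_split-size = 'L'.  ls_split-pct = '0.20'. APPEND ls_split TO lt_split.
    ls_split-size = 'XL'. ls_split-pct = '0.20'. APPEND ls_split TO lt_split.

    LOOP AT SOURCE_PACKAGE INTO ls_rec.
      " explode each incoming forecast record into one record per size
      LOOP AT lt_split INTO ls_split.
        ls_new = ls_rec.
        ls_new-size         = ls_split-size.
        ls_new-forecast_qty = ls_rec-forecast_qty * ls_split-pct.
        APPEND ls_new TO lt_result.
      ENDLOOP.
    ENDLOOP.

    " replace the incoming package with the exploded records
    SOURCE_PACKAGE[] = lt_result[].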
    9. What are Return Tables?
    • When we want to return multiple records instead of a single value, we use the return table in the update routine. Example: if we have the total telephone expense for a cost center, using a return table we can split it into an expense per employee (see the sketch below).
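    A rough sketch of what the body of a 3.x update routine with the "Return table" option might look like; the incoming record is COMM_STRUCTURE and the routine fills RESULT_TABLE instead of RESULT, while the employee list and all field names are made up for illustration:
    TYPES: BEGIN OF ty_emp,
             employee TYPE c LENGTH 8,
           END OF ty_emp.
    DATA: lt_emp    TYPE STANDARD TABLE OF ty_emp,
          ls_emp    TYPE ty_emp,
          ls_target LIKE LINE OF result_table,
          lv_lines  TYPE i.

    " in a real routine lt_emp would be read from the master data of
    " comm_structure-costcenter; here it is simply assumed to be filled
    DESCRIBE TABLE lt_emp LINES lv_lines.
    CHECK lv_lines > 0.

    LOOP AT lt_emp INTO ls_emp.
      CLEAR ls_target.
      MOVE-CORRESPONDING comm_structure TO ls_target.
      ls_target-employee = ls_emp-employee.
      " split the total telephone expense evenly across the employees
      ls_target-amount   = comm_structure-amount / lv_lines.
      APPEND ls_target TO result_table.
    ENDLOOP.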
    10. How do start routine and return table synchronize with each other?
    • Return table is used to return the Value following the execution of start routine
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
    o V1 & V2 don’t need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
    12. What is compression?
    • Compression removes the request IDs from an InfoCube (the data is moved from the F fact table to the E fact table and aggregated), which saves space and speeds up reading.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
    • It is a method of dividing a table to enable quick access. SAP uses fact table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster because data is read from the relevant partitions only, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
    • Two extra partitions are created: one for dates before the begin date and one for dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible and we have to take care that no single dimension crosses more than 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP code for unit/currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units in pounds to kilos (see the sketch below).
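    For illustration, the body of such a routine might look like this; UNIT_CONVERSION_SIMPLE is a standard function module, while the source field name (weight) and the units are only an example:
    DATA: lv_weight_kg TYPE p DECIMALS 3.

    " convert the incoming weight from pounds to kilograms
    CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
      EXPORTING
        input    = comm_structure-weight
        unit_in  = 'LB'
        unit_out = 'KG'
      IMPORTING
        output   = lv_weight_kg
      EXCEPTIONS
        OTHERS   = 1.

    IF sy-subrc = 0.
      result = lv_weight_kg.
      unit   = 'KG'.       " the additional unit parameter mentioned above
    ELSE.
      returncode = 4.      " flag the record as erroneous
    ENDIF.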
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination requirement.
    22. What is ODS?
    • An ODS (Operational Data Store) is used for detailed storage of data. We can overwrite data in the ODS, and the data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are a group of Business Content InfoCubes used to measure performance for query and load monitoring. They also show the usage of aggregates, the OLAP engine and warehouse management.
    24. What are the steps to extract data from R/3?
    • Replicate DataSources
    • Assign InfoSources
    • Maintain the communication structure and transfer rules
    • Create an InfoPackage
    • Load data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
    SAP BW Interview Questions 2
    1) What is a process chain? How many types are there, and how many do we use in a real-time scenario? Can we define interdependent processes with tasks like data loading, cube compression, index maintenance, and master data and ODS activation with the best possible performance and data integrity?
    2) What is data integrity and how can we achieve it?
    3) What is index maintenance and what is its purpose in real time?
    4) When and why do we use InfoCube compression in real time?
    5) What is meant by data modelling, and what does the consultant do in data modelling?
    6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
    7) What is fine-tuning, how many types are there, and what do we tune in real time? Can tuning only be done via InfoCube partitions and aggregates, or are there other options?
    8) What is meant by MultiProvider and what do we use it for?
    9) What are scheduled and monitored data loads, and for what purpose?
    Ans # 1:
    Process chains exist in the Administrator Workbench. Using them we can automate ETL processes; they allow BW administrators to schedule all activities and monitor them (transaction RSPC).
    PROCESS CHAIN - Before defining a process chain, let us define a process within a process chain: it is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
    A PROCESS CHAIN is a set of such processes linked together in a chain; each process is dependent on the previous one, and the dependencies are clearly defined in the chain.
    This is normally done in order to automate a job or task that has to execute more than one process to complete.
    1. Check the source system for that particular process chain.
    2. Select the request ID of the process chain (it is in the Header tab).
    3. Go to SM37 in the source system.
    4. Double-click on the job.
    5. You will navigate to a screen.
    6. There, click the "Job Details" button.
    7. A small pop-up window appears.
    8. In the pop-up, take note of:
    a) the executing server
    b) the WP number/PID
    9. Open a new SM37 session (/OSM37).
    10. Click the "Application Servers" button.
    11. You can see the different application servers.
    12. Go to the executing server from step 8 (a) and double-click it.
    13. Go to the PID from step 8 (b).
    14. On the far left you can see a check box; select it.
    15. On the menu bar you can see "Process".
    16. Under "Process" you have the option "Cancel with Core".
    17. Click on that option.
    Ans # 2:
    Data integrity is about eliminating duplicate entries in the database and achieving normalization.
    Ans # 4:
    InfoCube compression collapses the requests in the cube and aggregates duplicate records. Compressed InfoCubes require less storage space and are faster to read. The catch is that once you compress, you can no longer delete by request ID, so you are safe only as long as there is no error in your modelling or loads.
    Compression can be done through a process chain and also manually.
    Ans#3
    Indexing is a process where data is stored with indexes so it can be found quickly. Think of a phone book: Prasad's number is filed under "P" and Rajesh's number under "R". The phone book is the index; similarly, storing data by creating indexes on it is called indexing.
    Ans#5
    Data modelling is the process where you collect the facts, the attributes associated with the facts, the navigation attributes etc., and then decide which of them you will use. This collection is done by interviewing the end users, the power users, the stakeholders etc., and it is generally done by the team lead, the project manager or sometimes a senior consultant (4-5 years of experience). So if you are new you don't have to worry about it, but do remember that it is an important aspect of any data warehousing solution, so make sure you have read up on data modelling before attending any interview or even starting to work.
    Ans#6
    We can enhance Business Content by adding fields to it. Since Business Content is delivered by SAP, it may not contain all the InfoObjects, InfoCubes etc. that you need according to your company's data model. For example, you have a customer InfoCube (in Business Content) but your company uses an extra attribute, say apartment number; instead of constructing a whole new InfoCube, you can add that field to the existing Business Content InfoCube and get going.
    Ans#7
    Tuning is one of the most important processes in BW. Tuning is done to increase efficiency: lowering the time to load data into a cube, to execute a query, to do a drill-down, and so on. Fine-tuning means lowering the time for everything possible. Tuning is not limited to partitions and aggregates; there are various other things you can do, for example compression.
    Ans#8
    A MultiProvider can combine various InfoProviders for reporting purposes. For example, you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects and master data. You can refer to help.sap.com for more information.
    Ans#9
    A scheduled data load means you have scheduled the loading of data for a particular date and time; you do this in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, using the monitor (transaction RSMON).
    1. Procedure for a repeat delta?
    You need to set the request status to red in the monitor screen and then delete it from the ODS/cube. Then, when you open the InfoPackage again, the system will prompt you for a repeat delta.
    Also:
    Go to RSA7 -> F2 -> Update Mode -> Delta Repetition
    Delta repetition is done based on the type of upload you are carrying out.
    1. If you are loading master data, most of the time you will change the QM status to red and then repeat the delta; the repeat is only allowed after you make this change.
    Sometimes you need to investigate further if the repeat of the delta is still not allowed even after the QM status has been set to red.
    If this is not the case, the source system and therefore also the extractor, have not yet received any information regarding the last delta and you must set the request to GREEN in the monitor using a QM action.
    The system then requests a delta again since the last delta request has not yet occurred for the extractor.
    Afterwards, you must reset the old request that you previously set to GREEN to RED since it was incorrect and it would otherwise be requested as a data target by an ODS.
    Caution: If the terminated request was itself a REPEAT request, always set it to RED so that the system tries to carry out a repeat again.
    To determine whether a delta or a repeat are to be requested, the system ONLY uses the status of the monitor.
    It is irrelevant whether the request is updated in a data target somewhere.
    When activating requests in an ODS, the system checks delta repeat requests for completeness and the correct sequence.
    Each green delta/repeat request in the monitor that came from the same DataSource/source system combination must be updated in the ODS before activation, which means that in this case, you must set them back to RED in the monitor using a QM action when using the solution described above.
    If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the roosprmsc table in the system in which the source DataMart is, which is usually your BW system since it is a Myself extraction in this case), rather the status of the request tabstrip control is relevant as well.
    Therefore, after the last delta request has terminated, go to the administration of your data source and check whether the DataMart indicator is set for the request that you wanted to update last.
    If this is NOT the case, you must NOT request a repeat since the system would also retransfer the data of the last delta but one.
    This means, you must NOT start a delta InfoPackage which then would request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
    For more information about this, see also Note 873401.
    Proceed as follows:
    Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR and request a new DELTA.
    Only if the DataMart indicator is set does the system carry out a repeat correctly and transfers only this data again.
    This means that only in this case can you leave the monitor status as it is and restart the delta InfoPackage; this then creates a repeat request.
    In addition, you can generally also reset the DATAMART indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
    Simply start the delta InfoPackage after you have reset the DATAMART indicator AND after you have set the last request that was terminated to GREEN in the monitor.
    After the delta request has been carried out successfully, remember to reset the old incorrect request to RED since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
    What is a process chain and how have you used it?
    A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between the processes.
    B) In one of our scenarios we wanted to upload a wholesale-price InfoObject, which holds the wholesale price for every material, and then load transaction data. While loading the transaction data, the wholesale price was populated by a look-up in the update rule on this InfoObject's master data table. The dependency of first uploading the master data and then uploading the transaction data was handled through the process chain (a rough sketch of such a look-up follows below).
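    Purely as an illustration of that kind of look-up in an update routine, the price could be read from the material's active master data table; the InfoObjects ZMATERIAL/ZWHPRICE, the generated table name /BIC/PZMATERIAL, and the key-figure calculation are all hypothetical:
    DATA: lv_price TYPE p DECIMALS 2.

    " read the wholesale-price attribute from the active master data
    " of the material (loaded earlier in the same process chain)
    SELECT SINGLE /bic/zwhprice
      FROM /bic/pzmaterial
      INTO lv_price
      WHERE /bic/zmaterial = comm_structure-/bic/zmaterial
        AND objvers        = 'A'.

    IF sy-subrc = 0.
      result = comm_structure-quantity * lv_price.
    ELSE.
      returncode = 0.    " keep the record; result stays initial
    ENDIF.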
    What is a process chain and how have you used it?
    A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-time example in the attachment.
    1. What is a process chain and how have you used it?
    Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between the processes.
    2. What is the transaction for creating process chains?
    RSPC.
    3. Explain collector processes.
    Collector processes are used to manage multiple predecessor processes that feed into the same subsequent process. The collector processes available in BW are:
    AND: All of the direct predecessor processes must raise an event in order for the subsequent process to be executed.
    OR: At least one predecessor process must send an event; the first predecessor process that sends an event triggers the subsequent process. Any additional predecessor process that sends an event triggers the subsequent process again (only if the chain is planned as "periodic").
    EXOR (exclusive OR): similar to the regular OR, but there is only ONE execution of the successor process, even if several predecessor processes raise an event.
    4. What are application processes?
    Application processes represent BW activities that are typically performed as part of BW operations. Examples include:
    Data load
    Attribute/hierarchy change run
    Aggregate rollup
    Reporting Agent settings
    5. Tell some facts about process chains.
    - Process chains are transportable; there is a button for writing to a change request when maintaining a process chain in RSPC.
    - Process chains are available in the transport connection wizard (Administrator Workbench).
    - If a process "dumps", it is treated in the same manner as a failed process.
    - The graphical display of process chain maintenance requires the 6.20 SAP GUI and the SAP BW 3.0B frontend GUI.
    - A special control background job runs to facilitate the execution of the other batch jobs of the process chain.
    - Note your BTC process distribution, and make sure that an extra BTC process is available so the supporting control job can run immediately.
    6. What happens when a chain is activated?
    When a chain is activated, it is copied into the active version. The processes are planned in batch as program RSPROCESS, with the process type and variant given as parameters, under the job name BI_PROCESS_<TYPE>, waiting for an event, except for the trigger. The trigger is planned as specified in its variant; if it is "start via meta-chain", it is not planned in batch.
    7. Steps in a process chain?
    Go to transaction RSPC.
    The basic flow of a process chain is:
    1. Start chain
    2. Delete BasicCube indexes
    3. Load data from the source system into the PSA
    4. Load data from the PSA into the ODS object
    5. Activate data in the ODS object
    6. Load data from the ODS object in the BasicCube
    7. Create indexes after loading for the BasicCube
    Regards,
    Hari

  • Need answers - interview questions

    Hi Experts,
    I am a novice candidate pursuing SAP BW/BI opportunities and appearing for different SAP BW/BI interviews. Below are some of the questions I was asked by the interviewers. I appreciate your time and consideration in answering my queries.
    1) How do you initialize the setup tables for filling only the data of the past three years?
    And after a full load from the setup tables and executing the delta, there is later a requirement to add more application tables to the DataSource in the LO cockpit. How should we go about this without getting duplicate records from the source system, while retaining the original delta?
    2) What are the general transformation issues and the methods to solve them?
    3) How do you handle DTP issues, e.g. the DSO object has 1000 records but the InfoCube received only 900, bad characters, etc.?
    4) What is the RDA background process flow?
    5) How do you set up plan/actual comparisons, say for Profitability Analysis?
    Which data does the plan InfoCube contain with respect to the actual InfoCube? Can you explain with an example?
    6) In Business Content activation, how do you exclude the objects that have already been activated?
    7) Does a change run affect all aggregates, or only the aggregates containing the master data that is undergoing the change run?
    8) Can somebody send me sample functional requirements design documents/blueprints/detail design docs?
    9) What are functional support issues in SAP BI implementations?
    10) What is the fastest and best method to improve query performance?
    If I am right, is it cache settings?
    11) I am also preparing for the BI 7.0 certification exam, which is on March 7th. I need some sample questions.
    12) I need more information to prepare for SAP BI functional analyst and developer interviews.
    Thanks
    Mujtaba.
    Lot of points will be given for urgent replies.
    Edited by: Nazeeruddin Mujtaba Mohammed on Feb 16, 2008 7:20 PM

    1) How do you initialize the setup tables for filling only the data of the past three years? And after the initialization/full load from the setup tables and running the delta, there is later a requirement to add more application tables to the DataSource in the LO cockpit; we still need the delta load for the original records.
    You can restrict to the date range of the past 3 years when you fill the setup tables.
    2) What are the general transformation issues and the methods to solve them?
    Short dump while creating a transformation
    Inactive transformations
    Errors while copying transformations
    Automatic transformation not possible, etc.
    3) How do you handle DTP issues, e.g. the DSO object has 1000 records but the InfoCube received only 900, bad characters, etc.?
    You can do it with error handling (an error DTP: you maintain the records in the error DTP and schedule the load from the error DTP to the target).
    4) What is the RDA background process flow?
    The RDA (real-time data acquisition) concept was introduced in BI 7. It is used to get real-time data from the source system at a regular interval of one minute or more.
    http://help.sap.com/saphelp_nw70/helpdata/en/43/fc4d7d87e1025de10000000a1553f7/frameset.htm
    /people/kamaljeet.kharbanda/blog/2006/11/13/real-time-data-acquisition-bi2004s
    5) How do you set up plan/actual comparisons, say for Profitability Analysis?
    Which data does the plan InfoCube contain with respect to the actual InfoCube? Can you explain with an example?
    Actual data is loaded from R/3 or any other source; planned data is loaded from APO or entered manually.
    You can switch between actual and planned mode and either load data from the source or enter the data in planned mode.
    Example:
    Stock/sales can be predicted and entered in plan mode versus actual mode.
    6) In Business Content activation, how do you exclude the objects that have already been activated?
    Check the compare/copy check box.
    7) Does a change run affect all aggregates, or only the aggregates containing the master data that is undergoing the change run?
    It will hit only the aggregates for which master data changes have happened.
    8) Can somebody send me sample functional requirements design documents/blueprints/detail design docs?
    The blueprint for Business Intelligence:
    http://help.sap.com/bp_biv335/BI_EN/html/Business_Blueprint.htm
    If you are looking for the blueprint for any other module, select it from the list:
    http://www.sap.com/services/servsuptech/bestpractices/index.epx
    9) what are functional support issues in SAP BI implementations?
    Check these threads:
    http://help.sap.com/bp_epv170/EP_US/Documentation/How-to/Planning/Master_Guide_NW2004s.pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/2e8e5288-0b01-0010-2ea8-bcd4df5084a7
    10) What is the fastest and best method to improve query performance?
    If I am right, is it cache settings?
    Aggregates
    Avoid navigation attributes
    Don't fetch line-item dimensions unless needed
    Avoid too many characteristics in the rows
    Avoid using RKFs and CKFs, as they fetch data from the database
    Query read mode (adjust with A, X and H)
    11) I am also preparing for the BI 7.0 certification exam, which is on March 7th. I need some sample questions.
    Browse SDN, you'll find a lot. Check this link:
    https://forums.sdn.sap.com/search.jspa?threadID=&q=BI7.0certification+questions&objID=f131
    Regards,
    B

  • Overwrite attributes of infoobject

    Hi,
    We have an InfoObject ZMATERIAL in which MATERIAL is the key, and ZMATERIAL has other attributes such as some indicators.
    It is loaded from a DSO using DTPs.
    The issue is: when we load from the DSO to ZMATERIAL, the attributes are not overwritten by the deltas. We are using DTPs.
    Example:
    Records in ZMATERIAL:
    Material    Indicator
    348970        X
    When this indicator is changed from X to Y (and we are loading as delta), it is not updated in ZMATERIAL; instead the record is written to the error stack with a message saying that duplicates were written to the error stack, even though we have not chosen semantic groups and "Handle Duplicate Records" is not checked.
    Please help.
    Praveen,

    Hi Praveen,
    Select the check boxes for Material and Indicator under Semantic Keys on the first tab of the DTP (you are making these key fields). Also tick the "Handle Duplicate Keys" check box on the Update tab of the DTP. Now try the load again; hope it solves your issue.
    Regards,
    Pavan

  • Issue for DTP from DSO to open hub destination

    Hello Gurus,
    I have an issue with a DTP from a DSO to an open hub destination. The long text of the error in the monitor is as follows:
    "Could not open file SAPBITFS\tst_bi\bit\BIWork\GENLGR_OneTimeVendr_2 on application server"
    "Error while updating to target ZFIGLH03 (type Open Hub Destination)"
    For the open hub destination, I checked the configuration for the logical file name, which is "SAPBITFS\tst_bi\bit\BIWork\GENLGR_OneTimeVendr".
    I am wondering where the file "SAPBITFS\tst_bi\bit\BIWork\GENLGR_OneTimeVendr_2" in the error message comes from?
    Many thanks,

    Hi
    You do not need to create the file on the application server; it will be created automatically.
    But if you have defined a logical file name in transaction FILE and used it in the OHD, and it is not correct, then it will cause a conflict. Check this.

  • DTP Short dump issue

    Hi Expert,
    I am facing a DTP short dump issue when loading data from the PSA to a DSO.
    The DSO is write-optimized.
    The DTP uses semantic grouping and 3 parallel processes; the package size is the default.
    The situation is as follows:
    1. The issue occurs every two days; in some situations it occurs every day.
    2. The DTP short dumps. When we rerun the DTP, everything goes well.
    3. Not all systems face this issue (for example, we have dev/qas/prd, and the issue cannot be reproduced in qas).
    Below is the shortdump point in ST22
       when '3'.
          select single * from rsmonicdp into l_s_monicdp where
                 rnr   = l_rnr and
                 icube = p_ods and
                 dp_nr = i_datapakid.
          if sy-subrc <> 0.
            data: l_s_iccont type rsiccont.
            data: l_s_reqdone    type rsreqdone.
            select single * from rsreqdone into l_s_reqdone where
                   rnr = l_rnr.
            if sy-subrc <> 0.
              message x000.
            endif.
            l_s_reqdone-datum = l_s_reqdone-datum + 3.
            if l_s_reqdone-datum < sy-datum.
              select single * from rsiccont into l_s_iccont where
                     rnr   = l_rnr and
                     icube = p_ods.
              if sy-subrc <> 0.
                exit.
              else.
                message x000.
              endif.
            else.
            message x000.  " <-- short dump is raised here
            endif.
          endif.
          l_s_monicdp-status     = icon_red_light.
    Could you kindly let me know whether you have faced a similar issue before, and any possible solution?
    Thanks and best regards
    Alex yang

    Hi KP,
    Thanks for your information.
    Please find below the information from the ST22 side.
    Error analysis
        Short text of error message:
    Status Manager *****
        Long text of error message:
        Technical information about the message:
        Message class....... "RSSTATMAN"
        Number.............. 000
        Variable 1.......... " "
        Variable 2.......... " "
        Variable 3.......... " "
        Variable 4.......... " "
    How to correct the error
        Probably the only way to eliminate the error is to correct the program.
        If the error occures in a non-modified SAP program, you may be able to
        find an interim solution in an SAP Note.
        If you have access to SAP Notes, carry out a search with the following
        keywords:
        "MESSAGE_TYPE_X" " "
        "CL_RSSM_STATMAN_TARGET_ODSO===CP" or "CL_RSSM_STATMAN_TARGET_ODSO===CM007"
        "IF_RSBK_DATAPACKAGE_OBSERVER~UPDATE_STATUS_TABLES"
    BTW, do you think this issue could be related to the larger data volume on the DSO side?
    Thanks and best regards
    Alex yang

  • BI Load issue in the transformation/DTP after SP 17

    Hello,
    After SP 17 was applied to our BI system, we are having a load issue in one particular transformation/DTP.
    See the errors below:
    • Messages for 101 data records saved; request is red acc. to configuration
    • There is no active nametab in the system, although an active ABAP/4 Dictionary object was expected. Reactivate the ABAP/4 Dictionary object.
    I have also activated the DSO, transformation and DTP, went to SE14 to try to adjust my DSO, and activated the table in SE11, but I am still having the issue.
    Can someone advise me on how to resolve this issue?
    Thanks a lot.
    Cesar

    Hi,
    Do the following: in BI, run transaction SE38 and execute RSDG_CUBE_ACTIVATE to recreate the metadata information for the InfoProvider.
    Then execute another program via SE38: RSDD_RSTMPLWIDTP_GEN_ALL.
    See Notes 1161940 and 1152453 for more details.
    Regards,

  • BI - DTP activation issue

    Hi, Gurus.
    Good day.
    We have performed a refresh of our BI system and all its connected source systems to build our QA landscape. All post-processing steps were performed in accordance with OSS note 886102.
    However, in RSA1 a lot of DTPs are still pointing to the old logical source system. We have tried to reactivate the affected DTPs but got the following errors:
    -Transformation does not exist (see long text) RSTRAN10
    -Error saving Data Transfer Process DTP_<>
    We have applied the following notes and rerun BDLS and RSBKDTP_BLDS (with the Force Conversion option), but the issue still persists.
    1493814
    1489612
    1480336
    1477575
    1476616
    1604726
    1612875
    1521968
    1632276
    BI Version is 7.0 Ehp1
    Please advise.
    Thanks,
    Virgilio
    Edited by: Virgilio Padios on Nov 11, 2011 3:24 AM

    Run the following programs:
    1. Activate DataSources - RSDS_DATASOURCE_ACTIVATE_ALL (activates all DataSources of a logical system)
    2. Activate transfer structures - RS_TRANSTRU_ACTIVATE_ALL (activates all active transfer structures for a source system)
    3. Update initialization/delta in case of a dump while running the InfoPackage - RSSM_OLTP_INIT_DELTA_UPDATE
    Regards,
    Sushant

  • Issue in process chain with DTP after import to production

    Hi,
    We have an issue with a process chain.
    After transport, the DTP technical name differs between production and development, and the process chain has gone inactive.
    I have read some related threads, but I have not found a suitable solution for this.
    We don't have access to change the process chain (they are not willing to open the production system).
    could you please provide solution on this issue .
    Thanks,
    EDK....
    Edited by: EdK666 on Oct 21, 2011 2:54 PM

    Hi,
    I have observed this behaviour of the DTP technical name changing, but I still think your process chain should not become inactive because of it. The process chain should somehow pick up the new technical name, but in any case you should be able to change the DTP in the process chain directly in the production system.
    If that is not possible, you can simply try re-transporting it.
    Regards,
    Durgesh.

  • Urgent Issue - Copy DTP and Process Chains in New Client Instance

    We have created a new client instance (CLNT 510) based on our present client (CLNT 500).
    We have all the process chains and application components from source system CLNT 500, and the data (both master and transaction data) in place for CLNT 500.
    Now we need to use CLNT 510 for our further development. The RFC connection has been established.
    I just want to know: do we need to create all the application component transfers, transformations, DTPs, InfoPackages and process chains again for the CLNT 510 data?
    please help.
    Thanks and Kind Regards,
    Pratap Gade.

    Hello Jr Roberto,
    We are a company implementing multiple SAP implementations in different countries.
    The client for BW Dev is 400.
    Source systems for client BWD:
    ECC client ERP Dev - 500 (for the Holland implementation)
    ECC client ERP Dev - 510 (for the German implementation, an instance of ECC 500)
    We have completed the Client 500 implementation.
    We are now starting the German implementation, and an instance of the Holland ECC Dev client 500 has been created for the German implementation, with German data.
    They have now set up 2 RFC connections in BWD 400.
    Also, when they finally move to the QAS client, they are going to integrate the data of both countries into one client.
    Presently I need to use ECC client 510 for our German development.
    We have around 40 SAP BI reports in our first implementation and around 10 in our second implementation.
    I just want to know what issues need to be considered when loading master data (will it overwrite the master data from client 500 if I extract the master data from client 510?), and whether I can copy all the process chains etc. from ECC client 500 to ECC 510 without creating all the transformations, DTPs and InfoPackages again to load all the master data from client 510.
    I need a broad understanding of how to go about the development, starting with the application component transfer from source system client 510 through to report development (if possible reusing the already created objects: InfoPackages, DTPs etc.).
    Your help would be really appreciated, Mr Roberto.
    Thanks and Kind Regards,
    Shekar.

  • Data Packet issue for DTP

    Hi,
    I have a data packet issue with a DTP when loading the General Ledger Account 2009 cube. The data load is from DataSource 0FI_GL_4; the data is first loaded to a DSO and then to the cube. Up to that level the data is fine, but when the data is loaded to the 2009 cube using the DTP as a full load, I have an issue in the report: the net balance is not 0.
    However, when I do a manual load with selective company codes (as single-value selections in the DTP filter condition), the data matches in the report for all company codes and the net balance is 0.
    Because of this, I think there is an issue with the data packets of the DTP.
    Please suggest in this regard.
       Regards,
       prasad.

    Hi Ngendra,
    Yes, there can sometimes be problems with data loads via DTP.
    This can be resolved by setting semantic grouping at the DTP level; make sure the semantic fields you define are unique.
    I am not sure about the functional side, but based on your post I assume your problem will be resolved by setting the company code as a semantic field, or by setting some other unique field as semantic during data loading.
    Hope this helps you.
    Best Regards,
    Maruthi

  • VirtualProvider based on DTP- An issue with 0FI_GL_10

    We have a VirtualProvider based on a DTP, and the DataSource is 0FI_GL_10. When a report was generated from it, it gave the correct values, as in ERP. In the RSDS transformation, the field for Trading Partner (RASSC) was not mapped to any InfoObject.
    However, when we used a basic InfoCube to get the data from 0FI_GL_10 (via a DSO in the data flow after the PSA for 0FI_GL_10), we found that some of the data did not match ERP. Part of the problem is that we can only include 16 key fields in a DSO. When we looked into the PSA data, the field for Trading Partner (RASSC), which is not mapped in the transformation to any InfoObject, is the only field that has different values in the two records (other than data fields like balance, debit etc.). This makes me believe that Trading Partner should have been one of the keys; however, we already have the maximum of 16 key fields in the DSO and hence could not include it.
    I am wondering how the data comes in accurately in the report based on the "VirtualProvider based on DTP" (as mentioned above) when RASSC is not in the mapping, while I could not get the data in accurately in the basic InfoCube.

    no answers
