Infospoke Issue

Hi-
We have an InfoSpoke which runs successfully in production, exporting 4 million records. For some reason, it is not exporting any records in the quality box: it runs successfully, but it exports 0 records.
I checked the "RSBREQUID3RD" table to see if the status of any request is in "Green" or not. After changing few request from "o" to "G", I re ran it. but no use....
Any one encountered this? if so, please let me know what steps u followed to overcome this problem?

Hi Balaji,
Thanks for your response.
But where do you specify the escape character for the CSV file? It only gives you the option of specifying the field separator.
Regards,
Jasprit

Similar Messages

  • Issue regarding Full and Delta InfoSpoke

    Hi Experts,
    I have created two InfoSpokes, Full and Delta, to extract data from a DSO to SQL Server; the settings in both InfoSpokes are identical except for the extraction mode. However, the Delta InfoSpoke is pulling the correct number of records from the DSO (360 records), while the Full InfoSpoke is extracting only 196 records. I also tried loading the DSO using both full/delta InfoPackages and full/delta DTPs, but the results were still the same. Can anyone tell me how this issue can be resolved?
    Thanks,
    Rajan

    Dear Rajan,
    I'll try to help you with your question.
    Your case is curious: did you check the filter in both InfoPackages? Try deleting and creating them again (it is very important to check the whole filter). Otherwise, I suggest using an Open Hub Destination, which in BI 2004s provides the same functionality (exporting data).
    I hope this suggestion can help you,
    Luis

  • Infospoke - urgent issue

    Hi Experts,
    I have a small doubt.
    I executed the InfoSpoke successfully, without any errors, to the application server as a CSV file.
    When I try to view the values using transaction AL11, it does not display all the columns: I have around 100 columns, but it shows only a few of them (all fields in the target structure are not displayed).
    Why does this happen? Any suggestions please.
    cheers
    sailekha

    Hi
    Change the destination to a local file in the InfoSpoke Destination tab: uncheck the 'Application server' checkbox and select a destination on your local PC.
    This downloads two files to your PC: one contains the structure information, such as field names, lengths etc., and the second is the data file, which contains the actual information.
    Hope this helps.
    Regards
        GSK

  • Infospoke informatica issue

    hi,
    we are creating an InfoSpoke to send data to Informatica.
    The process is like this:
    1. Informatica triggers the process chain (PC).
    2. The PC starts and, when the extraction is complete, it sends a status to Informatica saying that the extraction is complete and the database table is ready for extraction.
    The problem I am facing is that Informatica triggers the PC correctly, but sending the status to Informatica (the third-party system) fails.
    Error: communication error, failed to send status to third party.
    Although the extraction is complete and successful, the message Informatica receives is that the PC has failed, even though the RFC connection is working fine.
    Studying the SM21 logs, they point to one of the following:
    a gateway problem
    a shared memory problem
    a network problem
    a SAP Web AS problem
    Please help.
    thanks and regards
    sapsaps

    Hi,
    The problem could be due to the connectivity between the SAP and Informatica systems. The Basis team could be of help in this regard.
    regards
    Hemanth

  • Missing BAdI after the changes to InfoSpoke in BI 7.0

    Hi Gurus,
    Recently our BW system was upgraded from BW 3.5 to BI 7.0; it was a technical upgrade.
    Now we are trying to change the selection of one InfoSpoke which has a BAdI defined in the Transformations tab.
    We didn't touch anything on the BAdI; it was just a selection change in the Selection tab.
    After activating the InfoSpoke, when we go back to the Transformations tab we no longer see the target structure or the BAdI implementation name.
    But when the InfoSpoke was executed, we could see the transformation step in the monitor screen.
    That means that even though the BAdI name was not visible in the InfoSpoke definition, the transformation was still being performed during the extraction.
    However, this is not the case once the InfoSpoke changes were migrated to the quality system.
    There, the BAdI and target structure are not visible in the InfoSpoke and no transformation step is executed in the extraction.
    Both the BAdI and target structure objects are active in the repository; only the link between the InfoSpoke and the BAdI is missing.
    This is happening to all the InfoSpokes that are being changed, and it did not happen before the upgrade to BI 7.0.
    Can anybody share some input on how to solve this issue?
    Thanks in advance.
    Regards,
    Padma.

    Hi,
    Maybe the following will help you!
    Regards,
    Subha
    API: RSB_API_OHS_SPOKE_GETDETAIL
    This API identifies the InfoSpoke details.
    Parameters (Parameter / Type / Description):
    Import:
      INFOSPOKE (RSINFOSPOKE) - Name of the InfoSpoke
    Export:
      RFCOHDEST (RSOHDEST) - Name of the open hub destination
      RFCUPDATEMETHOD (RSBUPDMODE) - Extraction mode for the InfoSpoke
      RFCPROCESSCHAIN (RSPC_CHAIN) - Process chain
      RFCOHSOURCE (RSOHSOURCE) - Open hub data source
      RFCTLOGOSRC (RSTLOGOSRC) - TLOGO type of the data source
      RFCMAXPACKSIZE (RSIDOCSIZE) - Maximum number of records per package
      RETURN (BAPIRET2)
    Tables:
      T_MESSAGES (BAPIRETTAB) - Messages
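    As a rough illustration, a minimal ABAP call sketch based on the parameter list above; the InfoSpoke name 'ZIS_SALES' is invented, and you should verify the exact function module interface in SE37 before relying on it:

        DATA: lv_ohdest   TYPE rsohdest,
              lv_updmode  TYPE rsbupdmode,
              lv_chain    TYPE rspc_chain,
              lv_ohsource TYPE rsohsource,
              lv_tlogo    TYPE rstlogosrc,
              lv_packsize TYPE rsidocsize,
              ls_return   TYPE bapiret2,
              lt_messages TYPE bapirettab.

        CALL FUNCTION 'RSB_API_OHS_SPOKE_GETDETAIL'
          EXPORTING
            infospoke       = 'ZIS_SALES'   " hypothetical InfoSpoke name
          IMPORTING
            rfcohdest       = lv_ohdest     " open hub destination
            rfcupdatemethod = lv_updmode    " extraction mode (full/delta)
            rfcprocesschain = lv_chain      " assigned process chain
            rfcohsource     = lv_ohsource   " open hub data source
            rfctlogosrc     = lv_tlogo      " TLOGO type of the data source
            rfcmaxpacksize  = lv_packsize   " max records per data package
            return          = ls_return
          TABLES
            t_messages      = lt_messages.

        IF ls_return-type CA 'EA'.
          WRITE: / 'Error reading InfoSpoke details:', ls_return-message.
        ELSE.
          WRITE: / 'Destination:', lv_ohdest, 'Mode:', lv_updmode.
        ENDIF.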

  • Regarding the Deltas in Infospoke

    Hi Gurus,
    We are using two InfoSpokes, one for Full and one for Delta. We get the full data using a full update, but when we run the Delta InfoSpoke, the first run picks up all the records; from the second run onwards it picks up only the deltas. Please shed some light on this issue.
    Thanks
    Kiran

    Hi,
    Refer
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/11e1b990-0201-0010-bf9a-bf9d0ca791b0
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3042c5fc-21a8-2910-c79e-ad530260ae2e
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/01d3a090-0201-0010-9783-bc
    http://help.sap.com/saphelp_erp2005vp/helpdata/en/bf/50453c01f4f75fe10000000a11402f/frameset.htm
    http://help.sap.com/saphelp_nw2004s/helpdata/en/43/58e1cdbed430d9e10000000a11466f/content.htm
    thanks,
    JituK

  • Regarding the Summarization in Infospoke

    Hi Gurus,
    We have one million records in the InfoCube, but when we send the data through a logical file we get only two hundred thousand records. Does the InfoSpoke summarize the data in the BAdI? Please shed some light on this issue.
    Thanks And Regards
    Bhaskar

    Hi Bhaskar,
    Do you have any inverse routines?
    Also match the totals of the data; if they do not match, cross-check the file data against the cube data (with some selection criteria).
    Regards,
    Vijay.

  • Issue in transport regarding Open Hub Services

    Hi All,
    I have a strange scenario: I am moving one of my open hub related transports from the system test environment to UAT, but each time it fails.
    The background is: I have an InfoSpoke based on an ODS, where I have included a field called BRAND in the source structure.
    This transport went fine from Dev to the system test environment, but it fails while moving from system test to UAT.
    The error is:
    Program ZCL_IM_ZLP_WRKF===============CP, Include ZCL_IM_ZLP_WRKF===============CM001: Syntax error in line 000085
    The data object 'WA_SOURCE' does not have a component called '0IS_CLAIM__BRANDVALE'.
    Any comment will be appreciated.
    Regards,
    Kironmoy Banerjee.

    Hi,
    Generally, transports fail in different situations: missing sequence or dependencies, inactive objects, or RFC issues.
    So we should follow the sequence and dependencies while transporting the objects, and they should be in an active state in the source system first.
    In your case, check whether your source fields are mapped correctly and whether the source structure is active.
    As per your error:
    The data object 'WA_SOURCE' does not have a component called '0IS_CLAIM__BRANDVALE'.
    Check the source structure 'WA_SOURCE' and why it does not have the component '0IS_CLAIM__BRANDVALE'; most likely the source structure is missing that component.
    See whether it is mapped correctly and active, follow the sequence if there are any dependencies, and re-transport the request.
    Regards.
    Rambabu

  • Master Data Extraction through Open Hub services(InfoSpoke)

    Hello,
             I need your help on this issue,
    Here is my scenario...
    I want to extract master data from BW (an InfoObject) to a file and put it on a server, where it can later be picked up by another department for their process. This process has to be automated to run on a weekly basis.
    I know this can be done through an InfoSpoke (either ATTR or TEXT) and a process chain, but I want both attribute and text data in a single file, whereas through an InfoSpoke we can get either ATTR or TEXT data in a file.
    1) Is there any way we can have both TEXT and ATTRIBUTE data in a single file?
    2) Also, what are the options for transferring the file from the BW application server to another, non-BW server?
    Currently I have created a BEx query against the InfoObject, and I run the report and send them the file manually. I want to automate the process.
    I would appreciate it if you could give me input on the above.
    Thanks
    BI Consultant

    Hi,
    The simplest way to do this is to create an ODS with the required columns and map it to your TEXT and ATTR data. Then you can use either a query or an InfoSpoke to extract the data however you want it. Since you are using master data, it will already have been validated, so you don't need a reportable ODS, which takes time to activate the data. Create an InfoSet on top of the ODS and build a simple query on it if you want to go that route. If you want to automate the process completely and deliver the output as a .CSV file to the destination server, you will have to do a bit more, and that will involve some serious ABAP work. The simplest option is to have a non-reportable ODS and extract the data using an InfoSpoke.
    Let me know if you need more info.
    Thanks,
    Alex.
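    As a rough illustration of the "serious ABAP work" mentioned above, a minimal sketch of writing a combined ATTR + TEXT table to an application server file; all field names and the file path are invented, and moving the file on to the non-BW server would still need a separate transfer step (e.g. an OS-level job):

        TYPES: BEGIN OF ty_out,
                 material  TYPE c LENGTH 18,
                 matl_text TYPE c LENGTH 60,
                 matl_grp  TYPE c LENGTH 9,
               END OF ty_out.

        DATA: lt_out  TYPE STANDARD TABLE OF ty_out,
              ls_out  TYPE ty_out,
              lv_file TYPE string VALUE '/interface/out/material_master.csv',
              lv_line TYPE string.

        " ... fill lt_out from the ODS / InfoSet that combines ATTR and TEXT ...

        OPEN DATASET lv_file FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
        LOOP AT lt_out INTO ls_out.
          " build one comma-separated line per record and write it to the file
          CONCATENATE ls_out-material ls_out-matl_text ls_out-matl_grp
                 INTO lv_line SEPARATED BY ','.
          TRANSFER lv_line TO lv_file.
        ENDLOOP.
        CLOSE DATASET lv_file.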

  • Infospoke inactive after transport

    Experts,
    In version 3.5, I created an InfoSpoke with a transformation and unit tested it in the Dev box. After transport, I got a return code 8 error while importing into QA. I checked whether anything related to this transport was missing or inactive in the Dev box, but could not find anything.
    Looking into QA, I noticed the InfoSpoke was in the revised version. Do you have any suggestions about what could be missing in the transport, or what the next possible steps to resolve the issue should be? Thanks.
    Regards,
    Nimesh

    Hi Anil, I am having the same problem. My ABAP routine sets the correct selection variables, but the InfoSpoke keeps going into the 'Revised' version. Can you please let me know if you have a fix for this?
    Thanks,
    Naveen

  • SAP BI issues.

    Hi Experts,
    I am a novice candidate pursuing SAP BW/BI opportunities and appearing for different SAP BW/BI interviews. Following are some of the questions that interviewers fired at me. I appreciate your time and consideration in answering my queries.
    1) How do you initialize the setup tables to fill only the data of the past three years?
    And after a full load from the setup tables and executing the delta, there is later a requirement to add more application tables to the DataSource in the LO cockpit. How should we go about this without getting duplicate records from the source system, while retaining the original delta?
    2) What are the typical transformation issues and the methods to solve them?
    3) How do you handle DTP issues, for example if there were 1000 records in the DSO object and the InfoCube received just 900, bad character problems, etc.?
    4) What is the RDA background process flow?
    5) How do you build plan/actual comparisons, say for Profitability Analysis?
       Which data do the plan InfoCubes contain with respect to the actual InfoCube? Can you explain with an example?
    6) In Business Content activation, how do you exclude objects that have already been activated?
    7) Does a change run affect all aggregates, or only the aggregates containing the master data that is undergoing the change run?
    8) Can somebody send me sample functional requirements documents / blueprints / detailed design documents?
    9) What are typical functional support issues in SAP BI implementations?
    10) What is the fastest and best method to improve query performance?
        If I am right, is it cache settings?
    11) I am also preparing for the BI 7.0 certification exam, which is on March 7th. I need some sample questions.
    12) I need more information to prepare for SAP BI functional analyst and developer interviews.
    Thanks
    Mujtaba.
    Lot of points will be given for urgent replies.

    Hi,
    6) In Business Content activation, how do you exclude objects that have already been activated?
         Just select the particular object, open the context menu, and choose 'Do Not Install Below'.
    12) Need more information to prepare for SAP BI functional analyst and developer interviews.
    Normally the production support activities include
    Scheduling
    R/3 Job Monitoring
    B/W Job Monitoring
    Taking corrective action for failed data loads.
    Working on some tickets with small changes in reports or in AWB objects.
    The activities in a typical Production Support would be as follows:
    1. Data Loading - could be using process chains or manual loads.
    2. Resolving urgent user issues - helpline activities
    3. Modifying BW reports as per the need of the user.
    4. Creating aggregates in Prod system
    5. Regression testing when version/patch upgrade is done.
    6. Creating adhoc hierarchies.
    we can perform the daily activities in Production
    1. Monitoring Data load failures thru RSMO
    2. Monitoring Process Chains Daily/weekly/monthly
    3. Perform Change run Hierarchy
    4. Check Aggr's Rollup
    To add to the above:
    1) Check that the data targets are ready for reporting.
    2) Check that there are no failed or cancelled jobs in SM37 and in the BW monitor.
    3) Check that all requests are loaded, for daily, monthly and yearly loads.
    4) Also note down the time taken to load the critical InfoCubes that are used for reporting.
    5) Check whether there is any break in the schedules of your process chains.
    Why are there frequent load failures during extractions, and how do you analyse them?
    If these failures are related to data, there might be data inconsistencies in the source system, even though we handle them properly in the transfer rules. We can monitor these issues in transaction RSMO and in the PSA (failed records) and update from there.
    If we are talking about the whole extraction process, there might be issues with work process scheduling and with the IDoc transfer from the source system to the target system. These loads can be re-initiated by cancelling the specific data load (usually by changing the request colour from yellow to red in RSMO) and restarting the extraction.
    What are the daily tasks we do in production support? How many times do we extract data, and at what times?
    It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have provided. If the transfer rules contain roundabout logic and the update rules have calculations for customized key figures, long runtimes are to be expected.
    Usually you need to work in RSMO, see which records are failing, and update from the PSA.
    What are some of the frequent failures and errors?
    There is no single fixed reason for a load to fail; from an interview perspective I would answer it this way:
    a) Loads can fail due to invalid characters
    b) Because of a deadlock in the system
    c) Because of a previous load failure, if the load is dependent on other loads
    d) Because of erroneous records
    e) Because of RFC connections
    These are some of the reasons for load failures.
    for Rfc connections:
    We use SM59 for creating RFC destinations
    Some questions
    1) RFC connection lost.
    A) We can check it in the SM59 t-code: RFC Destinations -> R/3 connections -> CRD client (our R/3 client) -> double-click -> Test Connection from the menu.
    2) Invalid characters while loading.
    A) Change them in the PSA & load them.
    3) ALEREMOTE user is locked.
    A) Ask your Basis team to release the user; it is mostly ALEREMOTE. Common causes are a changed password or too many incorrect login attempts with ALEREMOTE. Use the SM12 t-code to find out whether there are any locks.
    4) Lower case letters not allowed.
    A) Check the 'Lowercase letters' checkbox under the 'General' tab of the InfoObject.
    5) While loading the data I get a message 'Record ...'.
    A) The field mentioned in the error message is not mapped to any InfoObject in the transfer rules.
    6) object locked.
    A) It might be locked by some other process or a user. Also check for authorizations
    7) "Non-updated Idocs found in Source System".
    8) While loading master data, one of the data packages has a red traffic light with the error message:
    Master data/text of characteristic ZCUSTSAL already deleted.
    9) Extraction job aborted in R/3.
    A) It might have been cancelled because it ran longer than expected, or it may have been cancelled by R/3 users if it was hampering performance.
    10) The request couldn't be activated because there is another request in the PSA with a smaller SID.
    A)
    11) repeat of last delta not possible
    12) datasource not replicated
    A) Replicate the datasource from R/3 through source system in the AWB & assign it to the infosource and activate it again.
    13) datasource/transfer structure not active.
    A) Use the function module RS_TRANSTRU_ACTIVATE_ALL to activate it
    14) ODS activation error.
    A) ODS activation errors can occur mainly due to following reasons-
    1.Invalid characters (# like characters)
    2.Invalid data values for units/currencies etc
    3.Invalid values for data types of char & key figures.
    4.Error in generating SID values for some data.
    15. Conversion routine error.
    A) Check the data format in the source.
    16. Object cannot be activated / error when activating an object.
    A) Check the consistency of the object.
    17. No data found (in a query).
    A) Check whether the InfoProvider contains data, and delete unsuccessful requests.
    18. Error generating or activating update rules.
    1. What are the extractor types?
    • Application Specific
    o BW Content FI, HR, CO, SAP CRM, LO Cockpit
    o Customer-Generated Extractors
    LIS, FI-SL, CO-PA
    • Cross Application (Generic Extractors)
    o DB View, InfoSet, Function Module
    2. What are the steps involved in LO Extraction?
    • The steps are:
    o RSA5 Select the DataSources
    o LBWE Maintain DataSources and Activate Extract Structures
    o LBWG Delete Setup Tables
    o OLI*BW Fill setup tables
    o RSA3 Check extraction and the data in Setup tables
    o LBWQ Check the extraction queue
    o LBWF Log for LO Extract Structures
    o RSA7 BW Delta Queue Monitor
    3. How to create a connection with LIS InfoStructures?
    • LBW0 Connecting LIS InfoStructures to BW
    4. What is the difference between ODS and InfoCube and MultiProvider?
    • ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
    • CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
    • MultiProvider: Does not have physical data. It allows to access data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
    5. What are Start routines, Transfer routines and Update routines?
    • Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
    • Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
    6. What is the difference between start routine and update routine, when, how and why are they called?
    • Start routine can be used to access InfoPackage while update routines are used while updating the Data Targets.
    7. What is the table that is used in start routines?
    • Always the table structure will be the structure of an ODS or InfoCube. For example if it is an ODS then active table structure will be the table.
    8. Explain how you used Start routines in your project?
    • Start routines are used for mass processing of records. In the start routine, all the records of the DataPackage are available for processing, so we can process them all together. In one scenario, we wanted to apply size percentages to forecast data. For example, if material M1 is forecast at, say, 100 for May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted four records in place of the single record coming in the InfoPackage. This is achieved in the start routine (see the sketch after this question list).
    9. What are Return Tables?
    • When we want to return multiple records, instead of single value, we use the return table in the Update Routine. Example: If we have total telephone expense for a Cost Center, using a return table we can get expense per employee.
    10. How do start routine and return table synchronize with each other?
    • Return table is used to return the Value following the execution of start routine
    11. What is the difference between V1, V2 and V3 updates?
    • V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
    • V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
    o V1 & V2 don’t need scheduling.
    • Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
    12. What is compression?
    • It is a process used to delete the Request IDs and this saves space.
    13. What is Rollup?
    • This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
    14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
    • It is a method of dividing a table to enable quick access. SAP uses fact table partitioning to improve performance. We can partition only by 0CALMONTH or 0FISCPER. Table partitioning helps reports run faster, as data is read only from the relevant partitions, and table maintenance becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
    15. How many extra partitions are created and why?
    • Two extra partitions are created: one for dates before the begin date and one for dates after the end date.
    16. What are the options available in transfer rule?
    • InfoObject
    • Constant
    • Routine
    • Formula
    17. How would you optimize the dimensions?
    • We should define as many dimensions as possible and we have to take care that no single dimension crosses more than 20% of the fact table size.
    18. What are Conversion Routines for units and currencies in the update rule?
    • Using this option we can write ABAP code for Units / Currencies conversion. If we enable this flag then unit of Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
    19. Can an InfoObject be an InfoProvider, how and why?
    • Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
    20. What is Open Hub Service?
    • The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
    21. How do you transform Open Hub Data?
    • Using BADI we can transform Open Hub Data according to the destination requirement.
    22. What is ODS?
    • ODS stands for Operational Data Store; it is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
    23. What are BW Statistics and what is its use?
    • They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
    24. What are the steps to extract data from R/3?
    • Replicate DataSources
    • Assign InfoSources
    • Maintain Communication Structure and Transfer rules
    • Create an InfoPackage
    • Load Data
    25. What are the delta options available when you load from flat file?
    • The 3 options for Delta Management with Flat Files:
    o Full Upload
    o New Status for Changed records (ODS Object only)
    o Additive Delta (ODS Object & InfoCube)
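    To illustrate question 8 above, a minimal sketch of the core loop of such a 3.x-style start routine, assuming the data package is available as the internal table DATA_PACKAGE; the fields /BIC/ZSIZE and /BIC/ZFCQTY are invented and would have to match your own transfer structure:

        " Split each incoming forecast record into four size records
        " (Small 20%, Medium 40%, Large 20%, Extra Large 20%).
          DATA: lt_out LIKE DATA_PACKAGE OCCURS 0 WITH HEADER LINE,
                ls_in  LIKE LINE OF DATA_PACKAGE,
                ls_out LIKE LINE OF DATA_PACKAGE.

          LOOP AT DATA_PACKAGE INTO ls_in.
            ls_out = ls_in.

            ls_out-/bic/zsize  = 'S'.
            ls_out-/bic/zfcqty = ls_in-/bic/zfcqty * '0.20'.
            APPEND ls_out TO lt_out.

            ls_out-/bic/zsize  = 'M'.
            ls_out-/bic/zfcqty = ls_in-/bic/zfcqty * '0.40'.
            APPEND ls_out TO lt_out.

            ls_out-/bic/zsize  = 'L'.
            ls_out-/bic/zfcqty = ls_in-/bic/zfcqty * '0.20'.
            APPEND ls_out TO lt_out.

            ls_out-/bic/zsize  = 'XL'.
            ls_out-/bic/zfcqty = ls_in-/bic/zfcqty * '0.20'.
            APPEND ls_out TO lt_out.
          ENDLOOP.

        " Replace the original package with the expanded records.
          DATA_PACKAGE[] = lt_out[].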
    SAP BW Interview Questions 2
    1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes, with tasks like data loading, cube compression, index maintenance, and master data & ODS activation, with the best possible performance and data integrity?
    2) What is data integrity and how can we achieve it?
    3) What is index maintenance, and what is it used for in real time?
    4) When and why do we use InfoCube compression in real time?
    5) What is meant by data modelling, and what does the consultant do in data modelling?
    6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
    7) What is fine-tuning, how many types are there, and for what purpose do we tune in real time? Can tuning only be done via InfoCube partitions and aggregates, or by other means as well?
    8) What is meant by a MultiProvider, and for what purpose do we use MultiProviders?
    9) What are scheduled and monitored data loads, and what are they for?
    Ans # 1:
    Process chains exist in the Administrator Workbench. Using them we can automate ETL processes; they allow BW people to schedule all activities and monitor them (T-code: RSPC).
    PROCESS CHAIN - Before defining a process chain, let us define a process: a process is a procedure, either within SAP or external to it, with a start and an end. It runs in the background.
    A PROCESS CHAIN is a set of such processes linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
    This is normally done in order to automate a job or task that has to execute more than one process to complete.
    1. Check the Source System for that particular PC.
    2. Select the request ID (it will be in Header Tab) of PC
    3. Go to SM37 of Source System.
    4. Double Click on the Job.
    5. You will navigate to a screen
    6. In that Click "Job Details" button
    7. A small Pop-up Window comes
    8. In the Pop-up screen, take a note of
    a) Executing Server
    b) WP Number/PID
    9. Open a new SM37 session (/OSM37).
    10. Click on the "Application Servers" button.
    11. You can see the different application servers.
    12. Go to the executing server (point 8 (a)) and double-click it.
    13. Go to the PID (point 8 (b)).
    14. On the left-most side you can see a check box.
    15. Check the check box.
    16. On the menu bar you can see "Process".
    17. Under "Process" you have the option "Cancel with Core".
    18. Click on that option.
    Ans # 2:
    Data integrity is about eliminating duplicate entries in the database and achieving normalization.
    Ans # 4:
    InfoCube compression removes the request IDs and collapses duplicate records. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is that once you compress, you can't alter the compressed data by request any more; you are safe as long as you don't have any errors in your modelling.
    This compression can be done through Process Chain and also manually.
    Ans#3
    Indexing is a process where data is stored with an index for quick access. E.g. a phone book: when we write down somebody's number, Prasad's number goes under "P" and Rajesh's number goes under "R". The phone book's organisation is indexing; similarly, storing data by creating indexes on it is called indexing.
    Ans#5
    Data modelling is a process where you collect the facts, the attributes associated with the facts, navigational attributes, etc., and after you have collected all of these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders, etc. It is generally done by the team lead, the project manager, or sometimes a senior consultant (4-5 years of experience), so if you are new you don't have to worry about it too much. But do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modelling before attending any interview or even starting to work.
    Ans#6
    We can enhance Business Content by adding fields to it. Since BC is delivered by SAP, it may not contain all the InfoObjects, InfoCubes etc. that you want to use according to your company's data model. E.g. you have a customer InfoCube (in BC), but your company uses an additional attribute, say apartment number; then instead of constructing the whole InfoCube you can add that field to the existing BC InfoCube and get going.
    Ans#7
    Tuning is the most important process in BW. Tuning is done to increase efficiency: lowering the time to load data into a cube, lowering the time to run a query, lowering the time to do a drill-down, etc. Fine-tuning means lowering time for everything possible. Tuning can be done by many things, not only by partitions and aggregates; there are various other options, for example compression.
    Ans#8
    A MultiProvider can combine various InfoProviders for reporting purposes; for example, you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects and master data, etc. You can refer to help.sap.com for more information.
    Ans#9
    A scheduled data load means you have scheduled the loading of data for a particular date and time; you can do this in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or other loads, by using the monitor transaction (RSMON).
    1. Procedure for repeat delta?
    You need to set the request status to red in the monitor screen and then delete it from the ODS/cube. Then, when you open the InfoPackage again, the system will prompt you for a repeat delta.
    Also:
    Go to RSA7 -> F2 -> Update Mode -> Delta Repetition.
    Delta repetition is done based on the type of upload you are carrying out.
    1. If you are loading master data, then most of the time you will change the QM status to red and then repeat the delta; the repeat delta is allowed only if you make this change.
    Sometimes you need to investigate further if the repeat delta is not allowed even after the QM status has been set to red.
    If this is not the case, the source system and therefore also the extractor, have not yet received any information regarding the last delta and you must set the request to GREEN in the monitor using a QM action.
    The system then requests a delta again since the last delta request has not yet occurred for the extractor.
    Afterwards, you must reset the old request that you previously set to GREEN to RED since it was incorrect and it would otherwise be requested as a data target by an ODS.
    Caution: If the terminated request was a REPEAT request itself, always set this to RED so that the system tries to carry out a repeat again.
    To determine whether a delta or a repeat are to be requested, the system ONLY uses the status of the monitor.
    It is irrelevant whether the request is updated in a data target somewhere.
    When activating requests in an ODS, the system checks delta repeat requests for completeness and the correct sequence.
    Each green delta/repeat request in the monitor that came from the same DataSource/source system combination must be updated in the ODS before activation, which means that in this case, you must set them back to RED in the monitor using a QM action when using the solution described above.
    If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the roosprmsc table in the system in which the source DataMart is, which is usually your BW system since it is a Myself extraction in this case), rather the status of the request tabstrip control is relevant as well.
    Therefore, after the last delta request has terminated, go to the administration of your data source and check whether the DataMart indicator is set for the request that you wanted to update last.
    If this is NOT the case, you must NOT request a repeat since the system would also retransfer the data of the last delta but one.
    This means, you must NOT start a delta InfoPackage which then would request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
    For more information about this, see also Note 873401.
    Proceed as follows:
    Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR and request a new DELTA.
    Only if the DataMart indicator is set does the system carry out a repeat correctly and transfers only this data again.
    This means that only in this case can you leave the monitor status as it is and restart the delta InfoPackage. This then creates a repeat request.
    In addition, you can generally also reset the DATAMART indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
    Simply start the delta InfoPackage after you have reset the DATAMART indicator AND after you have set the last request that was terminated to GREEN in the monitor.
    After the delta request has been carried out successfully, remember to reset the old incorrect request to RED since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
    What is a process chain and how have you used it?
    A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between the individual processes.
    B) In one of our scenarios, we wanted to upload the wholesale price InfoObject, which holds the wholesale price for all materials, and then load transaction data. While loading the transaction data, the update rules performed a lookup on this InfoObject's master data table to populate the wholesale price. This dependency of first uploading the master data and then uploading the transaction data was handled through the process chain.
    What is a process chain and how have you used it?
    A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-time example in the attachment.
    1. What is a process chain and how have you used it?
    Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between the individual processes.
    2. What is the transaction for creating process chains?
    RSPC.
    3. Explain collector processes.
    Collector processes are used to manage multiple predecessor processes that feed into the same subsequent process. The collector processes available in BW are:
    AND:
    All of the direct predecessor processes must raise an event in order for the subsequent process to be executed.
    OR:
    At least one predecessor process must send an event; the first predecessor process that sends an event triggers the subsequent process.
    Any additional predecessor process that sends an event will trigger the subsequent process again (only if the chain is planned as "periodic").
    EXOR (exclusive "OR"):
    Similar to the regular "OR", but the successor process is executed only once, even if several predecessor processes raise an event.
    4. What are application processes?
    Application processes represent BW activities that are typically
    performed as part of BW operations.
    Examples include:
    Data load
    Attribute/Hierarchy Change run
    Aggregate rollup
    Reporting Agent Settings
    5. Tell some facts about process chains.
    Process chains are transportable; there is a button for writing to a change request when maintaining a process chain in RSPC.
    Process chains are available in the transport connection wizard (Administrator Workbench).
    If a process "dumps", it is treated in the same manner as a failed process.
    The graphical display of process chain maintenance requires the 620 SAP GUI and the SAP BW 3.0B frontend GUI.
    A special control background job runs to facilitate the execution of the other batch jobs of the process chain.
    Note your BTC process distribution, and make sure that an extra BTC process is available so the supporting control job can run immediately.
    6. What happens when a chain is activated?
    When a chain is activated, it is copied into the active version. The processes are planned in batch as program RSPROCESS, with type and variant passed as parameters, under the job name BI_PROCESS_<TYPE>, each waiting for its event - except the trigger. The trigger is planned as specified in its variant; if it is set to "start via meta-chain", it is not planned in batch.
    7. Steps in process chains ?
    Go to transaction code-> RSPC
    Follow the Basic Flow of Process chain..
    1. Start chain
    2. Delete BasicCube indexes
    3. Load data from the source system into the PSA
    4. Load data from the PSA into the ODS object
    5. Activate data in the ODS object
    6. Load data from the ODS object in the BasicCube
    7. Create indexes after loading for the BasicCube
    Regards,
    Hari

  • BADI in InfoSpoke

    Hello,
    I would like to use a BAdI to transform data from a source structure to a target structure in an InfoSpoke.
    I am having trouble changing the target structure. Apparently, there should be some how-to document ("How To... Change an InfoSpoke's Extraction Layout").
    Has anyone come across this document? I am not able to find it anywhere.
    Further, does anyone know how to customize the target structure? I am trying to get a file in the following format:
    COST_CENTER COST_CENTER_TEXT COST_ELEM COST_ELEM_TEXT AMT
    I am extracting the data from the cube:0CCA_C11. As I activate the BADI in the InfoSpoke, BW automatically creates a Source Structure (/BIC/CYZCOCCC11_IS001 for e.g.) & a Target Structure (/BIC/CZZCOCCC11_IS001 for e.g.)
    I have the following problems:
    BW forces me to extract the currency field along with the amount key figure. I do not want the currency field, as I am extracting data in only one currency (the CO area currency).
    When I make changes and try to save, BW does not allow me to use a normal customer-defined package/development class starting with Z*. It gives a message that the target structure should be stored in a package for generated objects starting with /BIC/, and SAP does not seem to allow me to create such a package in SE80.
    Would anyone have answers to these issues?
    Thank you in advance.
    Raja

    Hi Rajasekhar,
    Hope I understood your issue correctly.
    Issue:
    Ignore CURRENCY in the o/p.
    Solution:
    This can be done in two ways:
    1. In the BAdI target structure, ensure that this field is not selected; then, in the BAdI, make sure you populate the data correctly into the respective fields.
    2. Output the file with the currency and then use an ABAP program to rewrite this file into another one without the currency column.
    By default the currency comes along with key figures such as net value.
    These are the steps I followed:
    1. Goto RSBOH1.
    2. Create your InfoSpoke with default options.
    3. Activate you InfoSpoke.
    4. Go to the "Transformations" tab and check the box "InfoSpoke with ... BAdI".
    5. This will take you to the BAdI. In the BAdI enter the filter to be your InfoSpoke Name.
    6. Activate the BAdI.
    7. Goto SE12 and edit the TARGET STRUCTURE and ACTIVATE it.
    8. Go back to the BAdI and write the source code for the BAdI.
    Thats it!!
    Hope this helps.
    Regards,
    GPK.

  • Urgent Help needed - BADI's in Infospoke

    Hi,
    My Scenario:
    I am pulling master data using an InfoSpoke onto the application server. I need some kind of simple transformation during this stage.
    I have a ZSTATE field in my data and I need to restrict my output to only certain states (e.g. NJ, CA, TX, MN). Since I can't specify those selection conditions in the InfoSpoke, I need to try a BAdI. I have created a BAdI and have the target and source structures. Can anyone write me a small piece of code that eliminates the other states and allows NJ, CA, MN, TX etc.?
    Source structure: /BIC/CYZZTEST
    target Structure: /BIC/CZZZTEST
    infospoke: ZZTEST
    Field: Zstate
    Class:ZCL_IM_ZZTEST
    Method:IF_EX_OPENHUB_TRANSFORM~TRANSFORM
    Full points to helpful answer!!
    Anil.
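    For reference, a minimal sketch of what such a filter could look like in the OPENHUB_TRANSFORM BAdI method, assuming the generated source/target structures /BIC/CYZZTEST and /BIC/CZZZTEST named in the post, a generated field /BIC/ZSTATE, and incoming/outgoing table parameters named I_T_DATA_IN and E_T_DATA_OUT (verify the exact parameter names and structures in your generated implementation class):

        METHOD if_ex_openhub_transform~transform.

          DATA: ls_source TYPE /bic/cyzztest,   " generated source structure (assumed)
                ls_target TYPE /bic/czzztest.   " generated target structure (assumed)

          REFRESH e_t_data_out.

          LOOP AT i_t_data_in INTO ls_source.
            " keep only the wanted states, drop everything else
            IF ls_source-/bic/zstate = 'NJ' OR
               ls_source-/bic/zstate = 'CA' OR
               ls_source-/bic/zstate = 'MN' OR
               ls_source-/bic/zstate = 'TX'.
              MOVE-CORRESPONDING ls_source TO ls_target.
              APPEND ls_target TO e_t_data_out.
            ENDIF.
          ENDLOOP.

        ENDMETHOD.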

    The problem is cross user too - I have two user files on the machine and the same thing happens regardless of which user file I'm working in.
    That points to a 'system-wide' issue.
    Try resetting your SMC.
    Resetting the System Management Controller >>
    Also, you could try booting from your install DVD and see if it does it there. If it does not, it's more than likely a software issue and an Archive and Install should fix it.
    Mac OS X: About the Archive and Install feature >>
    -Bmer

  • How to Stop runnnig  Infospoke

    Hi Experts,
    I am working in SAP BW. My process chain has a sequence of steps:
    (1) Start
    (2) Load data
    (3) Attribute change run
    (4) ABAP program to delete data from the open hub
    (5) InfoSpoke
    My process chain ran fine up to the fourth step but failed at the InfoSpoke.
    I have tried to restart from the InfoSpoke using the RSBO t-code, but I could not find a background run option; there is only a dialog option.
    When I monitor the records, it shows running status (yellow colour).
    Can anyone please explain how to solve this issue, and how I can find my InfoSpoke in the SM51 t-code to stop it?
    Thanks in Advance

    Hi,
    To stop the job, find the job with the help of the request name, then in the job details find the PID.
    Find the process in the process overview (SM50 / SM51).
    Set the restart option to 'No' and cancel the process without core.
    You can also see these threads that were already posted:
    stopping v3 run job LIS-BW-VB_APPLICATION_02_010
    how to cancel or change the background job which is scheduled
    Shreya

  • Infospoke - 3rd party tool not setting request to G (successful )

    Hi,
    Has anyone of you experienced this issue before? I have an InfoSpoke that loads data into a DB table, and a 3rd-party tool (DataStage) extracts the data from that table. Everything works as expected: BW sends a notification to the 3rd-party tool that the data is ready for extraction, and the 3rd-party tool extracts the data successfully. The problem is that the request is not set to G (successful). The people working on DataStage said that their system issued a message to BW that the extraction was successful, but it seems BW did not accept/receive it. So although the data has been successfully extracted by the 3rd-party tool, the request is still set to N (new) and not G. No errors are issued.
    Below is the Infospoke Message run:
    1. Extraction is running
    2. Extraction running in background process BW_OH19802
    3. Data Package 1
    4. Send message to DATASTG
    5. Message from DATASTG confirmed
    6. The extraction ended successfully
    7. Data is being read: Open hub destination DW1234
    8. Finished reading data: Open hub destination DW1234
    In this case, the log should also contain:
    9. DataStage job completed successfully (status set to G)
    10. Status for request 154,567 was set to G.
    But the InfoSpoke only runs up to step 8. The catch here is that this doesn't happen consistently: sometimes the InfoSpoke status is set correctly, but sometimes it is not.
    Any ideas what causes this problem? I will really appreciate any input.

    Hi,
    The third-party tool (in this case DataStage) is responsible for setting the status to G in the table RSBREQUID3RD. If this does not happen, then normally the problem is with the third-party tool, and often with the RFC communication. However, there is a known issue on the BI side, as explained in note 1243168; if this is relevant for your BI support package level, please apply the note and check whether the problem still exists.
    Best Regards,
    Des.
