Use of RSRV

Hello All,
          I am doing consistency checks on different objects using RSRV for the first time. Please tell me how to check the consistency of objects, and how a successful check is indicated.

Transaction RSRV: BI Data and Metadata Test and Repair Environment.
Transaction RSRV checks the consistency of data stored in BI. It mostly examines the foreign key relationships between individual tables in the enhanced star schema of the BI system.
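To make the idea of these foreign-key tests concrete, here is a hedged sketch in ABAP that checks, on made-up in-memory tables, that every SID referenced by a dimension row exists in the characteristic's SID table. All names and values are placeholders; the real tests run against the generated /BIC/D* (dimension) and /BIC/S* (SID) tables, not against internal tables like these.

    REPORT z_sid_check_sketch.
    * Sketch only: the idea behind RSRV's foreign-key tests, shown on
    * in-memory tables with made-up values.
    TYPES: BEGIN OF ty_dim_row,
             dimid TYPE i,   " dimension key, referenced by the fact table
             sid   TYPE i,   " SID referenced by this dimension row
           END OF ty_dim_row,
           ty_dim_tab TYPE STANDARD TABLE OF ty_dim_row WITH EMPTY KEY,
           ty_sid_tab TYPE STANDARD TABLE OF i WITH EMPTY KEY.

    DATA(lt_dim) = VALUE ty_dim_tab( ( dimid = 1 sid = 10 )
                                     ( dimid = 2 sid = 99 ) ). " 99 is orphaned
    DATA(lt_sid) = VALUE ty_sid_tab( ( 10 ) ( 11 ) ).          " SID table contents

    LOOP AT lt_dim INTO DATA(ls_dim).
      " An RSRV error corresponds to a dimension row whose SID has no
      " counterpart in the SID table.
      IF NOT line_exists( lt_sid[ table_line = ls_dim-sid ] ).
        WRITE: / |Dimension row { ls_dim-dimid } references missing SID { ls_dim-sid }|.
      ENDIF.
    ENDLOOP.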
The transaction interface was re-designed for SAP BW release 3.0A. The following provides an introduction to running the transaction.
Starting the Transaction
You can reach the test and repair environment:
by entering the transaction code RSRV
in the SAP Easy Access Menu under SAP Menu -> Administration -> Analysis Tool
from InfoObject maintenance (transaction RSD1): either by choosing Analyze from the initial screen,
or, in the maintenance screen for a characteristic, by choosing Edit -> Analyze InfoObject from the main menu.
The Initial Screen
When using the test and repair environment for the first time, the message "Values were not found for all setting parameters" draws your attention to the fact that there are not any saved settings for your user.
After confirming the dialog box, you reach the initial screen of the transaction. This is divided into two parts:
1. On the left-hand side, you can see a tree structure with the pool of available tests.
2. The right-hand side is empty at first. The tests you have selected will be put here. A selection of tests is called a Test Package here.
Combined and Elementary Tests
An Elementary Test is a test that cannot be divided into smaller tests and can therefore only be executed as a whole (or not at all).
A Combined Test, by contrast, determines which elementary tests are to be executed once its parameters have been entered. You can remove individual elementary tests from the test package before carrying out the actual test run, for example to reduce the run time.
Combining a Test Package and Executing It
First, select one or more tests with drag & drop or by double-clicking. Each selected test appears as a closed folder in the view of your test package. (An exception is elementary tests without parameters: these do not appear as a folder.) You can also drag a whole folder of tests from the test pool across to the right-hand screen area; all tests located in the hierarchical structure under this folder are then added to the test package. You can also display a short description of a test, if required, by right-clicking on it and choosing Description from the context menu.
Afterwards, you must supply the tests with parameters. (Tests that do not require parameters cannot be given any; you are notified of this when selecting them.) You can enter parameters by double-clicking on a test (test package) or by opening a test folder.
A popup appears in which you have to enter the required parameter values. Often, value help is available. After the parameters are entered, a folder with the name Parameter is added under the test; this contains the parameter values. The test name can change in some circumstances, enabling you to see at first glance which parameter values the test is to be executed for. It is possible, and often useful, to select the same test several times and give it different parameters. When you have supplied a combined test with parameters, a folder with the name Elementary Tests is added under it; this contains the elementary tests from which the combined test is built. You can remove individual elementary tests from the test package by dragging them back to the test pool.
After supplying all tests with parameters, you can start the test run by clicking on the Execution button. After execution, the test icons change from a gray rhombus to a red, yellow or green one, depending on whether the test had errors, warnings or was error-free.
Test Results
The test results are written to the application log. Depending on the settings, the system jumps automatically to this display, or you can reach it by clicking on the Display button. The results are saved in the database, and can therefore be compared later with additional test runs.
In the left-hand side of the window, you can see an overview of the most recent test runs. Double-clicking on a folder displays all messages under these nodes as a flat (non-hierarchical) list in the right-hand screen area. Long texts or detail data may be available for individual messages, which can be displayed with a mouse click.
Repairs
Some tests can repair inconsistencies and errors, but automatic correction is generally not possible. If, for example, entries are missing from the SID table for a characteristic while the lost SIDs are still being used in a dimension table of an InfoCube (and the corresponding dimension keys are still being used in the fact table), the inconsistency can only be removed by reloading the transaction data of the InfoCube. Also note that you must make repairs in the correct sequence. Always read the documentation for the test and have a good idea of how the error occurred before making the repairs.
After executing the test run, go from the application log back to the initial screen to make these repairs. Click on the Fix Errors button to start an error run. Since the dataset could have changed between the test and the repair run, the required tests are executed again before the actual repair. The results can be found in the application log once again.
After a repair, the test package should be executed again in order to check that the error has been fixed.
Test Packages
The test package is lost if you do not save it before leaving the test environment. To save, choose Test Package -> Save Test Package from the main menu. You can do the following from options in the Test Package menu:
Load packages (no lock is set for the package, so it can only be saved under a different name).
Load packages for processing (the package is then locked against changes by others, and you can save it again under the same name).
Delete packages.
Schedule execution at a later date or at regular intervals in background processing.
Note that the execution of test packages can be integrated in process chains. See below for how you do this.
Settings
In the Settings menu, you can make user-specific settings (for example, adjusting the size of the screen areas) and save them. These settings are read automatically when the test environment starts. Since the test environment is still under development, additional setting options are delivered with support packages. A message notifies you at the start if there are no values yet for these setting options.
Jobs Menu Option
You can access the job overview via the Jobs -> Job Overview menu. Use this when you want to check the status of a test package you have scheduled.
Application Log Menu Option
In this dialog box, you can display old logs from previous test runs, including scheduled ones. The option of deleting test logs can also be found here.
New Selection
The currently selected test package is deleted using the New Selection function (from the memory, though not from the database if the test package had already been saved).
Filter
Use Filter to delete all elementary tests without errors or warnings from the test package after a test run.
Executing Test Packages in Process Chains
In process chain maintenance (transaction RSPC), add the ABAP program RSRV_JOB_RUNNER to your process chain. To do this, in the process type view under General Services, choose the ABAP Program process type by drag & drop. When you maintain the process variant, you are asked to specify the program name and a program variant: enter RSRV_JOB_RUNNER as the name of the program, choose a program variant name, and then choose Change. On the next screen, you can change or display existing variants and create new ones. When creating a new variant, you are asked to specify the package name (value help is available), the level of detail up to which the RSRV log is to be integrated into the process chain log, and the message type at which process chain processing is to be terminated.
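Behind the scenes, this process type simply runs the ABAP program RSRV_JOB_RUNNER with your variant. As a hedged sketch, scheduling the same program directly as a background job from ABAP could look like the following; the job and variant names are assumptions, and the variant must already exist and point at a saved test package (normally you would simply schedule from RSRV or RSPC).

    REPORT z_schedule_rsrv_sketch.
    * Sketch: schedule RSRV_JOB_RUNNER as a background job.
    * 'ZRSRV_DAILY' is an assumed variant name that must already exist.
    DATA: lv_jobname  TYPE btcjob    VALUE 'ZRSRV_DAILY_CHECK',
          lv_jobcount TYPE btcjobcnt.

    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount
      EXCEPTIONS
        OTHERS   = 1.
    IF sy-subrc <> 0.
      MESSAGE 'Could not open background job' TYPE 'E'.
    ENDIF.

    SUBMIT rsrv_job_runner
      USING SELECTION-SET 'ZRSRV_DAILY'     " assumed variant name
      VIA JOB lv_jobname NUMBER lv_jobcount
      AND RETURN.

    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobcount  = lv_jobcount
        jobname   = lv_jobname
        strtimmed = 'X'                     " start immediately
      EXCEPTIONS
        OTHERS    = 1.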
The RSRV processes log in the process chain is structured as follows:
It starts with a summary of any errors or warnings that have been produced for each elementary test.
It finishes with a view of the log from the RSRV test package, up to and including the specified level of detail.
Example: If you choose 3 as the level of detail, only messages at levels up to 3 are included in the process chain log. Messages produced for a more detailed level of the test package when it is tested are not displayed in this log. Note that, in contrast to the application log, errors are not passed from more to less detailed levels in the process log. For example, if a single error is produced at level 4, the initial summary reports that the test has produced an error, but this error is not listed in the second part of the log.
A complete log is always written, independently of the log for the RSRV process in the process chain. You can view this log from the menu option Application Log -> Display Log -> From Batch.
Note that there are currently no transport objects for test packages, meaning that they cannot be transported. Process chains that execute RSRV test packages therefore have to be post-processed manually after being transported into another system: you have to create the corresponding test packages there.

Similar Messages

  • Data loaded in Infocube is not visible in the Reporting

    Hi all,
    In the InfoCube, the data is not visible for reporting. How can I make it available for reporting?
    When I select the Manage option from the context menu of the cube and select the Requests tab, I get a pop-up message that says:
    " There is an inconsistency between the load status of the data and the option of reporting on this data.
    There is data in the InfoCube/ODS object that is OK from a quality point of view, but is not yet displayed in reporting.
    The problem, for example, is to do with request 0000018049, number REQU_F4ZBFRMDGBULE9WCUN3R5UX5X."
    How do I find out the inconsistencies?
    PS: All the requests are delta uploads.
    There are no aggregates for this cube.
    Thanks n regards
    Girikumar

    Hi Girikumar
    Use the RSRV transaction and choose All Elementary Tests -> Transaction Data -> Consistency of the Time Dimension for an InfoCube. Select your cube and execute. If there are any errors, the inconsistency is displayed; repair it with the Correct Error button.
    Check the other tests as well.
    Hope the above helps you.
    Bye
    Shu Moh..

  • ASSERTION_FAILED when Activate a DTP

    I got an error message when trying to activate a DTP. Does anyone know how to fix it? Thanks!
    Runtime Errors         ASSERTION_FAILED
    Date and Time          03/27/2007 14:29:57
    Short dump has not been completely stored (too big)
    Short text
         The ASSERT condition was violated.
    What happened?
         In the running application program, the ASSERT statement recognized a
         situation that should not have occurred.
         The runtime error was triggered for one of these reasons:
         - For the checkpoint group specified with the ASSERT statement, the
           activation mode is set to "abort".
         - Via a system variant, the activation mode is globally set to "abort"
           for checkpoint groups in this system.
         - The activation mode is set to "abort" on program level.
         - The ASSERT statement is not assigned to any checkpoint group.
    Error analysis
         The following checkpoint group was used: "No checkpoint group specified"
         If in the ASSERT statement the addition FIELDS was used, you can find
         the content of the first 8 specified fields in the following overview:
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
    Trigger Location of Runtime Error
         Program                                 CL_RSAR_PSA===================CP
         Include                                 CL_RSAR_PSA===================CM006
         Row                                     152
         Module type                             (METHOD)
         Module Name                             UPDATEDIRECTORY_TABLES
    Source Code Extract
    Line  SourceCde
    122               i_uni_idc25       = l_codeid
    123               i_program_class   = 'RSAR_ODS_MAINTAIN'
    124             EXCEPTIONS
    125               deletion_rejected = 2
    126               OTHERS            = 3.
    127         ENDIF.
    128       ENDIF.
    129       UPDATE rstsods SET tstpnm   = sy-uname
    130                 timestmp  = l_s_ods-timestmp
    131                 userapp   = p_userapp
    132                 userobj   = p_userobj
    133                 maintprog = ''
    134            WHERE odsname = l_s_odsfield-odsname
    135            AND   version = l_s_odsfield-version.
    136       IF sy-subrc = 0.
    137         IF i_partitioned = rs_c_true.
    138 *--   Entry could exist but includes no partition number,
    139 *     because the PSA was not partitioned before
    140           l_tablnm = p_psa_techname.
    141
    142           CALL FUNCTION 'RSDU_PARTITIONS_INFO_GET'
    143             EXPORTING
    144               i_tablnm              = l_tablnm
    145             IMPORTING
    146               e_ts_part_info        = l_ts_part_info
    147             EXCEPTIONS
    148               table_not_exists      = 1
    149               table_not_partitioned = 2
    150               OTHERS                = 3.
    151
    >>>           ASSERT sy-subrc = 0.
    153
    154           DESCRIBE TABLE l_ts_part_info LINES l_num_partitions.
    155           READ TABLE l_ts_part_info INDEX l_num_partitions INTO l_s_part_info.
    156           l_highest_partvalue = l_s_part_info-high_value.
    157
    158           UPDATE rstsods SET partno  = l_highest_partvalue
    159             WHERE odsname = l_s_odsfield-odsname
    160             AND   version = l_s_odsfield-version.
    161
    162         ENDIF.
    163       ELSE.
    164 *       create new version
    165         l_s_ods-odsname       = l_s_odsfield-odsname.
    166         l_s_ods-version       = i_next_version.
    167         l_s_ods-dateto        = rsods_c_dateto_01019999.
    168         l_s_ods-datefrom      = rsods_c_datefrom_01011998.
    169         l_s_ods-objstat       = rs_c_objstat-active.
    170         l_s_ods-odsname_tech  = p_psa_techname.
    171         l_s_ods-progname      = i_progname.

    I guess it is Note 1012607.
    Summary
    Symptom
    Note: This note is relevant only for 'ORACLE' and 'MSSQL' database systems. After you implement this note, you must also carry out some manual corrections (see 'Solution', below).
    If you are working with database system DB2 or MSSQL, also implement Note 1022026.
    When data is written or activated or when a DataStore object is activated, the following errors occur:
    Similar errors may also occur for the DataSource and data transfer process (DTP).
    ORA-01502: index 'SAPDAT./BIC/A*KE' or partition of such index is in unusable state
    Column 'PARTNO' is partitioning column of the index '/BIC/A*KE'. Partition columns for a unique index must be a subset of the index key.
    error #RSDU_TABLE_TRUNC_PARTITION_MSS: Error While Calling Module MSS_TRUNC_PARTITION_FROM_TABLE Message no. 0U534#.
    ASSERTION_FAILED in class 'CL_RSAR_PSA'.
    Error message D0 313 in the activation log. The message does not contain any text. In the activation log it is displayed as an empty line with a red traffic light.
    Other terms
    DBIF_RSQL_SQL_ERROR, D0 313, D0313
    Reason and Prerequisites
    Reason:
    The partitioning logic of the persistent staging area (PSA) service does not recognize that the PARTNO field must not be deleted.
    For write-optimized DataStore objects, the active table is created as a partitioned table, even though a global index is used to ensure uniqueness of data. This is not compatible with the 'drop of a partition'.
    In the DataSource maintenance, you have the option to define key fields. For the first 16 key fields of the DataSource field list, a global index is also created.
    If 'semantic groups' are used in the DTP, the error stack is created with a global index.
    Solution
    Implement the corrections by importing the Support Package or by implementing the advance correction. As a result, the 'range' partitioning is deactivated in the PSA service as soon as a global index is requested.
    The error can occur for the objects: DataStore (only the write-optimized type), DataSource, and error stack of the DTP.
    This note contains the 'RSAR_PSA_PARTITION_CHECK' program, which you can use to analyze the objects. Execute the program. Use the search strings listed in section 5), depending on whether you want to analyze individual objects or object types. If you do not make an entry in the PODSTECH field (technical name of the PSA), the system checks all existing PSA tables, which may take some time.
    You can use transaction SLG1 to display the log for 'RSAR_PSA_PARTITION_CHECK'. Select the following:
    Object        = 'RSAR'
    Subobject     = 'METADATA'
    Ext. Identif. = 'RSAR_PSA_PARTITION_CHECK'
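    If you only want to check a single PSA table rather than scanning everything, a minimal call could look like the following sketch. The table name is a placeholder, and the sketch assumes PODSTECH is a plain input field (adjust the WITH clause if it turns out to be a select-option).

        REPORT z_run_psa_check_sketch.
        * Sketch: run the check program for one PSA table so the run does
        * not take as long. The table name below is a placeholder; see the
        * search strings in section 5) for pattern-based analysis.
        SUBMIT rsar_psa_partition_check
          WITH podstech = '/BIC/B0001234000'    " placeholder PSA table name
          AND RETURN.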
    You must make different manual changes to repair each of the different object classes.
    1) DataStore (write-optimized)
    Incorrect DataStores are identified in the log of the check program with the PSA type 'FASTSTORE'. The name after 'Obj:' is the technical name of the corresponding DataStore object.
    For a DataStore object of the 'write-optimized' type, a global index with relation to the semantic key is created if the 'Do Not Check Uniqueness of Data' indicator is not set.
    Check if you need to ensure that data is unique in your scenario.
    1. If you do not need the data to be unique:
    Set the 'Do Not Check Uniqueness of Data' indicator and activate the DataStore object. The DataStore object is now consistent again.
    2. If you need unique data:
    In this case, you must departition and convert the table.
    If the error occurred when you activate the DataStore itself or when you activate the data, you must activate the DataStore object after converting the active table. You need the technical name of the active table for the conversion. You can get this directly from the log of 'RSAR_PSA_PARTITION_CHECK'. If you know which DataStore contains errors, find the technical name of the active table in the Maintain DataStore screen by choosing:
    <Extras> -> 'Information (logs/status)'.
    Choose 'Dictionary DB status' to access the status popup. You can find the technical name in the 'Active table' field.
    If the table does not contain any data according to the 'RSAR_PSA_PARTITION_CHECK' log, the table is automatically departitioned when you activate the DataStore.
    If the table contains data, you must departition and convert the table as described in section 4.
    After that, use the AdminWorkBench (transaction RSA1) to activate the DataStore object.
    2) DataSource:
    Incorrect DataSources are identified in the log of the check program with the PSA type 'NEW_DS'. The 'Obj:' indicator is followed by two additional character strings: the first is the technical name of the relevant DataSource, and the second is the technical name of the source system.
    PSA tables for DataSources with a key definition must be departitioned.
    The name of the PSA table for the DataSource is contained directly in the 'RSAR_PSA_PARTITION_CHECK' log.
    If the table does not contain any data according to the 'RSAR_PSA_PARTITION_CHECK' log, the table is automatically departitioned when you activate the DataStore.
    If the table contains data, you must departition and convert the table as described in section 4.
    Call transaction 'RSDS' and enter the technical name of the DataSource and the source system and activate the DataSource.
    3) Error stacks for the DTP:
    Incorrect error stacks are identified in the log of the check program with the PSA type 'ERRORSTACK'. The 'Obj:' indicator is followed by the technical name of the relevant DTP. There may be more than one error stack table for each DTP.
    PSA tables for ErrorStack with a key definition must be departitioned.
    The name(s) of the PSA Error Stack table(s) for the DTP is/are contained directly in the 'RSAR_PSA_PARTITION_CHECK' log.
    If the table does not contain any data according to the 'RSAR_PSA_PARTITION_CHECK' log, the table is automatically departitioned when you activate the DTP.
    If the table contains data, you must departition and convert the table as described in section 4.
    Now call transaction RSDTP, enter the technical name of the DTP and activate the DTP.
    4) Departitioning and converting
    The following manual conversion using transaction SE14 is supported only for ORACLE database systems. Open a problem message under component BW-SYS-DB-MSS if you need to convert tables on a MSSQL database system.
    Call transaction SE14 (Database Utility) for the tables you need to convert. Select 'Table', enter the technical name of the table and choose 'Edit'.
    On the next screen, choose 'Storage Parameters' (Shift+F6).
    On the next screen (Storage Parameters), choose 'For new creation' (F8).
    In the dialog box that then appears, select 'Current database parameters' and copy it by choosing 'Enter'.
    You now get an overview of the storage parameters <Tables>, <Indexes> and existing <Partitions>.
    Under the 'Table' node, if the content of the 'TABLESPACE' field is initial, enter the value from the 'TABLESPACE' field of the first partition.
    For the field 'PARTITIONED BY', choose the option 'No partitioning' and save your changes.
    Exit the screen with the storage parameters.
    On the next screen, ensure that the 'Save data' radio button after 'Activate and adjust database' is selected, then execute the conversion. You execute the conversion by choosing 'Force Conversion' in the <Extras> menu.
    Next, you must correct the PARTNO indicator in table RSTSODS. To do this, call transaction RSRV and execute the test 'Consistency Between PSA Partitions and SAP Administration Information'. You can find this test in transaction RSRV under
    <All Elementary Tests> -> <PSA Tables>.
    You can execute the RSRV test and repair for all converted tables at once. For further information about how to use transaction RSRV in this case, see the online documentation. You can call the online documentation by choosing the 'Info' icon.
    5) Search strings:
    a) Use the search string '/BI+/B*' to find the relevant entries for the DataSource, the change logs and the error stack.
    b) Use the search string '/BI+/A*00' to find the relevant entries in the active tables for the DataSource objects.
    SAP NetWeaver 2004s BI
    Import Support Package 13 for SAP NetWeaver 2004s BI (BI Patch 13 or SAPKW70013) into your BI system. The Support Package is available once Note 991093 "SAPBINews BI 7.0 Support Package 13", which describes this Support Package in more detail, has been released for customers.
    In urgent cases, you can implement the correction instructions as an advance correction.
    You must first implement Notes 932065, 935140, 948389, 964580, 969846, 975510, 983212 and 1000448, which provide information about transaction SNOTE. Otherwise, problems and syntax errors may occur when you deimplement certain notes.
    To provide information in advance, the notes mentioned above may already be available before the Support Package is released. In this case, the short text of the note still contains the words "Preliminary version".
    Before you implement an advance correction (if one exists and you want to implement it), see Note 875986. This contains notes regarding the SAP Note Assistant and these notes prevent problems during the implementation.

  • Error in Query (Pls help)

    Hi friends,
    When I executed one of the queries, I got this error:
    Abort System error in program CL_RSDM_READ_MASTER_DATA and form_sidval_direct
    Diagnosis
    This internal error is a targeted termination since the program has an incorrect status.
    Procedure
    Analyse the situation and inform SAP.
    I have checked all my InfoObjects and master data... everything is active, but I still don't understand why I get this....
    Please suggest how to get rid of this error. It's urgent.
    Regards
    Balaji

    Hi balaji,
       This can be a program error; you may need to apply a support package or correction.
    It seems no OSS note is found for form_sidval_direct .....
    Perhaps it's a SID error; try to analyze and repair the InfoProvider that the query is created on, using transaction RSRV.
    hope this helps.
    assign points if useful
    Regards,
    Archna

  • ODS Activation Problem - Error getting SID

    Hi All,
    I am facing the problem while activating the request,
    "Error getting SID for ODS object xxxx.i know lot of threads are there in forum but those all are taking about while loading the data to ODS,Here i loaded the data into my first ODS(here fine) and then transferring the data to another ODS while activation of Data in second ODS i am getting problem.the second ODS checked the Bex reporting.
    If masterdata not existis then it can be a problme in my first ODS but i didn't face any problem in First ODS.First ODS loaded the data from R/3 and second ODS using the datamarts.
    Any ideas how to solve this problem.
    Thanks,

    Use Tcode RSRV  Tests in Transaction RSRV  All Elementary Tests  ODS Objects  Foreign Key relationship of reporting-relevant ODS object and SID table characteristics (Dbl Click)
    From the right side window, expand u201CForeign Key relationship of reporting-relevant ODS object and SID table characteristics & Enter second ODS name, and click u201CTransferu201D
    Now, click u201CExecuteu201D (Toolbar)u2026 and check whether the results displayed in green icon..
    Otherwise, go back and click u201CCorrect erroru201D (Toolbar)...then try to activate second ODS

  • Restricting time dependent records on employee master data in BW

    Experts,
    We have a future hire report which gets data from 0EMPLOYEE. Whenever a hire date is assigned to a new hire, a record gets created in BW.
    Now, if for some reason this new hire is not able to join on the hire date first assigned, a new hire date is updated in PA30 for this employee in ECC.
    This creates a new record in BW with the new hire date, valid from the date it moved into BW until 9999, and the 'Valid To' date of the previous record is changed to the day before the 'Valid From' date of the newly entered record.
    All these records have also moved into multiple cubes; therefore we cannot delete them from master data until the transaction data is deleted.
    The client HR team says that the previous records of all such employees are not valid, so they should not appear in BW reports even when running the report on historical dates.
    0EMPLOYEE is time-dependent master data, so it makes sense to have one record for any action done for an employee in the HR system.
    Cleaning up the master data for all such employees would be a tedious task and needs a lot of effort.
    Is there any way we can restrict the previous records of all such employees, so that they do not appear in BW reports when users run the report for previous dates?

    We were doing this activity as part of a weekly housekeeping job. Weekly, we get a list of invalid PERNR records from the ECC team.
    While deleting a particular PERNR (say 10015824) from 0EMPLOYEE, the log (SLG1) says the SID exists in a DIM table (like /BIC/D0PA_C011).
    First delete that PERNR (10015824) from the cube (0PA_C01):
    Use Tcode RSRV --> Tests in Transaction RSRV --> All Elementary Tests --> Transaction Data --> Entries Not Used in the dimension of an InfoCube (double-click).
    From the right-side window, expand the "Entries Not Used in the Dimension of an InfoCube" node, enter the InfoCube (like 0PA_C01), and click "Transfer".
    Now click "Execute" (toolbar); the results are displayed in the right-side window.
    Then click "Correct error" (toolbar).
    Now try to delete the same PERNR from 0EMPLOYEE; if the log says the SID is used in some other DIM table, follow the above process.
    If the log says the SID is used in a DSO/cube, then go to Manage (of the DSO/cube) and do a selective deletion.

  • Damaged inventory cube 0IC_C03

    Hi all,
              My inventory cube is damaged because some data was not loaded via delta.
    What I did: I stopped the background job that was running for compression of the request, did an init with a full repair request, and loaded the missing data (_03_BF) as a full load.
    Now my question is about setting up the marker for BF. My opening stock is from around 2006. I loaded from 02.2007 till date. It is not historical data (because it is after 2006); it is current data.
    Can I do the compression with the marker update or without it?
    Please help me; this is very important for me.
    Thanks.

    Hi
    About the marker: understood. I did the compression without checking it; that is not the issue. Please see below the details of the error log. LISTCUBE does not show any entries. The range of posting dates I loaded was 1/1/2000 to 1/10/2010, valuated stock only.
    This is the error message:
    Multiple entries found with NCUMTIM 'unlimited'; compression not possible
    Message no. DBMAN380
    Diagnosis
    The time dimension contains multiple entries for which the time reference characteristic (NCUMTIM) is set to 'unlimited'.
    The value 'unlimited' is the maximum possible value for the time reference characteristic. This value is reserved for markers in non-cumulative InfoCubes and cannot be used for standard transaction records.
    System Response : Compression is terminated.
    Procedure : Contact your system administrator.
    Procedure for System Administration
    To allow compression to be carried out, the entries must first be removed from the InfoCube. To do this, you can proceed as follows:
    1. Delete all packages that contain records in which the time reference characteristic has the value 'unlimited'. You can identify these packages by using transaction LISTCUBE.
    2. Reload these packages and create a rule in the transformation that maps the value 'unlimited' to a year before 9000.
    3. Using transaction RSRV, then delete all entries that are no longer used from the time dimension.
    The InfoCube can be compressed again.
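    As a hedged illustration of step 2 above, the mapping logic could look like the following standalone snippet. It assumes a calendar-day-like time reference characteristic whose 'unlimited' value is 9999-12-31; in a real system this logic belongs in a transformation rule routine, and all names here are illustrative only.

        REPORT z_map_unlimited_sketch.
        * Sketch of step 2: map the reserved 'unlimited' value of the time
        * reference characteristic to a year before 9000.
        CONSTANTS: lc_unlimited TYPE d VALUE '99991231',  " reserved for markers
                   lc_capped    TYPE d VALUE '89991231'.  " any year before 9000
        DATA(lv_ncumtim) = CONV d( '99991231' ).          " incoming record value

        IF lv_ncumtim = lc_unlimited.
          lv_ncumtim = lc_capped.
        ENDIF.
        WRITE: / |Mapped value: { lv_ncumtim DATE = ISO }|.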

  • Loaded data not visible for Reporting?

    Hi all,
       In the InfoCube, the data is not visible for reporting. How can I make it available for reporting?
    When I select the Manage option from the context menu of the cube and select the Requests tab, I get a pop-up message that says:
    " There is an inconsistency between the load status of the data and the option of reporting on this data.
    There is data in the InfoCube/ODS object that is OK from a quality point of view, but is not yet displayed in reporting.
    The problem, for example, is to do with request 0000018049, number REQU_F4ZBFRMDGBULE9WCUN3R5UX5X."
    How do I find out the inconsistencies?
    PS: All the requests are delta uploads.
    Girikumar

    Hi Girikumar
    Use the RSRV transaction and check the consistency of the cube. If any inconsistency is there, repair it with the repair option in the toolbar.
    Let me know if it is not resolved.
    Bye
    Shu Moh..

  • Missing indexes in db02

    Hello,
    Checking DB02, I've found a list of missing primary indexes. I know that I can use SE14 to adjust the indexes, but I found some documentation that says I should first check whether duplicate indexes for the primary index exist and eliminate them.
    Please Advice.
    David.

    hi David,
    oss note 157918
    BW: DB02 shows "missing indexes"  
    Symptom
    DB02 shows indexes of a fact table to be missing. Such indexes have names that start with prefixes /BI0/F or /BIC/F (BW 1.2, BW 2.x) or /BI0/E or /BIC/E (BW 2.x only).
    Additional key words
    Business Information Warehouse, InfoCube, Fact Table, Bitmap Indexes, Oracle, DB02, Unique Index
    Cause and prerequisites
    For BW 1.2, this only applies to Oracle-based systems. For BW 2.x, this might apply to any DB-platform.
    BW 1.2 and BW 2.x take advantage of certain DB-specific features which are not supported by the data dictionary of R/3 4.0, 4.5 or 4.6 base systems. Prominent examples are bitmap indexes on fact tables (Oracle-based BW systems), partitioned/fragmented indexes, nologging and parallel index building facilities etc. Such features are used in BW by triggering native SQL statements which bypass the data dictionary.
    While all these features improve the performance of the BW system, some other transactions are also affected when the data dictionary is bypassed. Amongst these is DB02. It sometimes claims that certain indexes of InfoCube fact tables are missing, while direct checks on the database level show that those indexes are not missing at all or are substituted by equivalent indexes. The latter might happen in BW 2.x, where the primary index on fact tables might be replaced by a non-unique index or simply skipped, as it is not required. Therefore you can usually ignore those messages. We are currently working on removing such inconsistent information. If you want to be sure about the indexing, check the solution section of this note.
    Solution
    For BW 1.2A systems (Oracle only):
    You have to ask your local DBA to check the state of the indexes directly by looking at the USER_INDEXES table on the Oracle database.
    For BW 1.2B systems (Oracle only):
    There are three alternatives to check the secondary indexes of InfoCube and aggregate fact tables:
               (1) Go to the Admin Workbench. Go to the infocube. Click the right mouse button and choose "InfoCube Performance". This leads you to a screen that shows the state of those indexes via traffic light semantics. There are also buttons to repair inconsistent states of the indexes.
               (2) Use transaction RSRV. Go to the tabstrip "Database". Choose the item "Indices of an InfoCube and its aggregates" and insert the (technical) infocube name (e.g. 0BWTC_C01) in the input box at the bottom of the screen. Press F8 ("Analysis") and wait until a red, yellow or green light appears beside "Indices of an InfoCube and its aggregates", i.e. in the "Result" column. Then press F6 ("Results") in order to see a detailed report on the index situation of that cube.
               (3) Go to SE37 and do a "single test" for the function module RSDU_CHECK_SECONDARY_INDEXES. Use the infocube's technical name (e.g. 0BWTC_C01) as the input parameter and 'X' for both, the I_COMPLETE_CHECK and I_WITH_AGGREGATES, parameters. Press F8 to run the module. Only the C_T_INDEX output parameter is relevant. It shows a list of indexes. Check the TYPE and STATUS columns. These should show 'BITMAP' and 'VALID' respectively.
    For BW 2.0A systems (all DB-platforms):
    For checking secondary indexes on individual infocubes the following methods can be applied, similar to the BW 1.2B solution:
               (1) Go to the Admin Workbench. Go to the infocube. Click the right mouse button and choose "Manage". Choose the tabstrip "Performance". This leads you to a screen that shows the state of those indexes via traffic light semantics. There are also buttons to repair inconsistent states of the (secondary) indexes.
               (2) same as (2) for BW 1.2B.
               (3) Go to SE37 and do a "single test" for the function module RSDU_INFOCUBE_INDEXES_CHECK. Use the infocube's technical name (e.g. 0BWTC_C01) as the input parameter, leave the I_FACTTAB initial, use 'X' for both, the I_COMPLETE_CHECK and I_WITH_AGGREGATES, parameters and use 'U' in the I_DOUBLE_FACTTAB parameter. Press F8 to run the module. Only the C_T_INDEX output parameter is relevant. It shows a list of indexes. Check the TYPE_CHECK, UNIQUE_CHECK, PARTITIONED_CHECK and STATUS_CHECK columns. These should show 'G' (= "green" = ok) respectively.
    If your BW 2.0A system is on patch level 11, make DB02 consistent by running the report SAP_UPDATE_DBDIFF (via SE38); then go to DB02 and press the "Refresh" button in order to synchronize the information in DB02 with the DBDIFF table. This should provide you with a consistent view.
    For BW 2.0B and BW 2.1C systems (all DB-platforms):
    DB02 should work consistently in BW 2.0B / BW 2.1C  with infocubes created in 2.0B / BW 2.1C. If you wish, you can still use the BW 2.0A approach. For infocubes that were created in BW 2.0A or BW 1.2 you need to adjust the index setup (on the facttables) by running the report SAP_INFOCUBE_INDEXES_REPAIR. The latter is available from BW 2.0B patch 3 onwards. It should be run in a background process as it might take a while to run through.
    You also might want to run the report SAP_UPDATE_DBDIFF once in order to update the table DBDIFF that lists database objects whose data dictionary setting do not correspond to the actual setup and that should therefore be omitted in DB02 checks.

  • Performance issues in bw

    Hi All,
    What is number range buffering? How is it useful for performance issues? Where is the option available, and what are the navigation steps to set it?
    Thanks in advance.
    Yogeswar

    Hi Yogi,
    A nice weblog by Vikas on number range buffering; please do check this:
    /people/vikash.agrawal/blog/2006/04/05/load-lots-of-data-147faster148-with-buffering-number-range
    Check these links.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/b4674415-0b01-0010-ae81-deb009860b7e [original link is broken]
    following are the links that may help you
    http://help.sap.com/search/highlightContent.jsp
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/s-u/sap%20bw%20business%20planning%20and%20simulation%20-%20how%20to%20guides%20list.htm
    http://help.sap.com/search/highlightContent.jsp
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b7bdde90-0201-0010-26b1-dcda5e0b394d
    How to improve performance on the reporting side?
    Query Performance Techniques:
    1. Check Query properties—Use RSRT tcode
    2. Check whether cube is compressed
    3. Optimize query definition
    4. Analyze query execution
    5. Check for additional indexes
    6. Archive unwanted data
    7. Check for partitioning options
    8. Check for additional aggregates ( Consider DB ratio and KPI ratio)
    9. Check for parallelization options
    10. Use Nav attributes instead of hierarchies, use free char and filters.
    Possible causes of poor performance:
    A) High Database Runtime
    B) High OLAP Runtime
    C) High Frontend Runtime
    Depending upon your analysis:
    A) Strategy - High Database Runtime
    Check if an aggregate is suitable (use "All data" to compare selected records to transferred records; a high number here is an indicator that query performance could be improved with an aggregate).
    Check if the database statistics are up to date for the cube/aggregate (use the RSRV database checks for statistics and indexes).
    Check if the read mode of the query is unfavourable (recommended: H).
    B) Strategy - High OLAP Runtime
    Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    a) Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
    b) Check if a user exit is involved in the OLAP runtime.
    c) Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    C) Strategy - High Frontend Runtime
    1) Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    2) Check if the bandwidth of the WAN connection is sufficient.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1e553368-0601-0010-49ab-c429607f3eb3
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5401ab90-0201-0010-b394-99ffdb15235b
    check this, you can download lot of performance materials
    Business Intelligence Performance Tuning [original link is broken]
    and e-learning -> intermediate course and advanced course
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/fe5b0b5e-0501-0010-cd88-c871915ec3bf [original link is broken]
    e.g
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/10b589ad-0701-0010-0299-e5c282b7aaad
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/8e6183ad-0701-0010-e083-9ab1c6afe6f2
    performance tools in bw 3.5
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/07a4f070-0701-0010-3b91-a6bf7644c98f
    (here you can also download the presentation by right-clicking the disk drive icon)
    Check the following links,
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    Business Intelligence Performance Tuning [original link is broken]
    http://help.sap.com/saphelp_nw04/helpdata/en/06/b5f8926ba22b45bc9eaa589f1c835b/content.htm
    Some bw docs/ performance material
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    and don't miss bw performance knowledge centre, there are e-learning
    Business Intelligence Performance Tuning [original link is broken]
    Hope this Helps.
    Regards,
    Ravikanth.

  • Performance issues in reporting side

    Hi gurus,
       How can I improve performance on the reporting side?
    Thank you.

    Hi kumar,
    Query Performance Techniques:
    1. Check Query properties—Use RSRT tcode
    2. Check whether cube is compressed
    3. Optimize query definition
    4. Analyze query execution
    5. Check for additional indexes
    6. Archive unwanted data
    7. Check for additional aggregates ( Consider DB ratio and KPI ratio)
    8. Check for partitioning options
    9. Check for parallelization options
    10. Use Nav attributes instead of hierarchies, use free char and filters.
    Possible causes of poor performance:
    A) High Database Runtime
    B) High OLAP Runtime
    C) High Frontend Runtime
    Depending upon your analysis:
    A) Strategy - High Database Runtime
    Check if an aggregate is suitable (use "All data" to compare selected records to transferred records; a high number here is an indicator that query performance could be improved with an aggregate).
    Check if the database statistics are up to date for the cube/aggregate (use the RSRV database checks for statistics and indexes).
    Check if the read mode of the query is unfavourable (recommended: H).
    B) Strategy - High OLAP Runtime
    Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    a) Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
    b) Check if a user exit is involved in the OLAP runtime.
    c) Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    C) Strategy - High Frontend Runtime
    1) Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    2) Check if the bandwidth of the WAN connection is sufficient.
    Hope this helps you..
    Regards
    Mallikarjun

  • Ods is not consistent

    When I checked an ODS I had created, it showed the message "ODS XXX is not consistent". I want to know what "not consistent" means. Does it compare with something? If yes, what does it compare with, and how can I correct this mistake?

    Hi,
    Try to activate the ODS; it will list the errors that caused the inconsistency.
    What errors is it displaying? Can you let us know?
    Check those and correct them. If the ODS has already been activated and is in use, go to RSRV and run the consistency check; this will list the inconsistencies, and you can even correct them there.
    Hope this helps
    regards
    akhan

ABAP issues... it's urgent

    Can anyone please list the problems an ABAP consultant should inquire about when going on a company visit to check the problems there?
    Reward points will surely be given on an urgent basis.
    Message was edited by:
            Ameet Jassani

    Check these if they are of any help to you.
    A nice weblog by Vikas on number range buffering; please do check this:
    /people/vikash.agrawal/blog/2006/04/05/load-lots-of-data-147faster148-with-buffering-number-range
    Check these links.
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/b4674415-0b01-0010-ae81-deb009860b7e [original link is broken]
    following are the links that may help you
    http://help.sap.com/search/highlightContent.jsp
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/s-u/sap%20bw%20business%20planning%20and%20simulation%20-%20how%20to%20guides%20list.htm
    http://help.sap.com/search/highlightContent.jsp
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/afbad390-0201-0010-daa4-9ef0168d41b6
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b7bdde90-0201-0010-26b1-dcda5e0b394d
    How to improve performance on the reporting side?
    Query Performance Techniques:
    1. Check Query properties—Use RSRT tcode
    2. Check whether cube is compressed
    3. Optimize query definition
    4. Analyze query execution
    5. Check for additional indexes
    6. Archive unwanted data
    7. Check for partitioning options
    8. Check for additional aggregates ( Consider DB ratio and KPI ratio)
    9. Check for parallelization options
    10. Use Nav attributes instead of hierarchies, use free char and filters.
    Possible causes of poor performance:
    A) High Database Runtime
    B) High OLAP Runtime
    C) High Frontend Runtime
    Depending upon your analysis:
    A) Strategy - High Database Runtime
    Check if an aggregate is suitable (use "All data" to compare selected records to transferred records; a high number here is an indicator that query performance could be improved with an aggregate).
    Check if the database statistics are up to date for the cube/aggregate (use the RSRV database checks for statistics and indexes).
    Check if the read mode of the query is unfavourable (recommended: H).
    B) Strategy - High OLAP Runtime
    Check if a high number of cells is transferred to the OLAP processor (use "All data" to get the value "No. of Cells").
    a) Use the RSRT technical information to check whether any extra OLAP processing is necessary (stock query, exception aggregation, calculation before aggregation, virtual characteristics/key figures, attributes in calculated key figures, time-dependent currency translation) together with a high number of records transferred.
    b) Check if a user exit is involved in the OLAP runtime.
    c) Check if large hierarchies are used and the entry hierarchy level is as deep as possible. This limits the levels of the hierarchy that must be processed.
    C) Strategy - High Frontend Runtime
    1) Check if the frontend PCs are within the recommendations (RAM, CPU MHz).
    2) Check if the bandwidth of the WAN connection is sufficient.
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1e553368-0601-0010-49ab-c429607f3eb3
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5401ab90-0201-0010-b394-99ffdb15235b
    check this, you can download lot of performance materials
    Business Intelligence Performance Tuning [original link is broken]
    and e-learning -> intermediate course and advanced course
    https://www.sdn.sap.com/irj/sdn/developerareas/bi?rid=/webcontent/uuid/fe5b0b5e-0501-0010-cd88-c871915ec3bf [original link is broken]
    e.g
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/10b589ad-0701-0010-0299-e5c282b7aaad
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/d9fd84ad-0701-0010-d9a5-ba726caa585d
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/8e6183ad-0701-0010-e083-9ab1c6afe6f2
    performance tools in bw 3.5
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/media/uuid/07a4f070-0701-0010-3b91-a6bf7644c98f
    (here you can also download the presentation by right-clicking the disk drive icon)
    Check the following links,
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    Business Intelligence Performance Tuning [original link is broken]
    http://help.sap.com/saphelp_nw04/helpdata/en/06/b5f8926ba22b45bc9eaa589f1c835b/content.htm
    Some bw docs/ performance material
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    and don't miss bw performance knowledge centre, there are e-learning
    Business Intelligence Performance Tuning [original link is broken]
    Hope this Helps.

  • Nav attrib in master data

    Hello,
    I can't see any values for sales dist, which is a nav attrib of 0sold_to. All my R/3 transactions capture customer values in the 0sold_to object.
    0sold_to is created with reference to 0customer.
    0customer also has sales dist as a nav attrib.
    My reports and cubes don't show any values for sales dist. I want sales dist to be picked up from the customer master data and displayed in reports.
    I have loaded 0customer master data, and it (the master data for 0customer) shows values for sales dist.
    I have not loaded 0sold_to separately (as it references 0customer).
    Please suggest....
    thanks

    Hello A.H.P,
    Thanks a lot. I could also assign points to the earlier post today; there seemed to be a problem in assigning yesterday.
    Anyway, can you please elaborate on the need to run the attribute change run? Also, do we need to select specific InfoObjects (which ones?) in the attribute change run screen, or do I need to run it for all available objects?
    Also, how do I use transaction RSRV? I have never used it before!
    thanks

  • Urgent : Error in transfer rule of 0material_attr

    Hello,
    We are trying to activate transfer rules of InfoSource 0MATERIAL_ATTR
    in the BW Quality System, but we are getting the following error message:
    ================================================
    Error when generating the message type for transfer structure /BIC/CCBA0MATERIAL_ATTR
    Message no. R3115
    Diagnosis
    An error occurred in the generation of the ALE message type for the transfer of changes to a certain basic characteristic.
    System response
    The transfer of changes to this basic characteristic cannot be carried out.
    Procedure
    Errors in the number range object for the naming of the message type, or errors with the entries in the respective ALE tables, could be the possible cause.
    ======================================================
    Please help me to resolve the issue.

    Hi,
    Use tcode RSRV to repair the object. Go to RSRV -> All Combined Tests -> Master Data -> Check Master Data for a Characteristic.
    hope it helps...
    regards,
    raju
