PSA Partition

Hi All,
When I load the master data, I am getting the following short dump for one master data object, again and again.
Error with partition when loading
Error:
Communication errors (RFC)
DBIF_RSQL_SQL_ERROR
CX_SY_OPEN_SQL_DB ORA-14400:
inserted partition key does not map to any partition
Internal call code.........: "RSQL/INSR//BIC/B0000xxxxxx"
Solution:
RSRV -> PSA Tables -> Consistency Check for PSA
Error calling number range object for dimension when loading cube
I followed the solution below...
Solution:
1. RSRV > All Elementary Tests > Master Data > Compare Number Range and Maximum SID > Execute. Press [...]
It works for only about a week; then the nightly data load for the same master data fails again.
Is there any long-term solution to fix this issue?
Is it a good idea to create a program around function module RSDDCVER_PSA_PARTITION and run it on a daily basis?
Since it happens in only one PSA table, does anyone have an idea why only this one master data PSA table is affected?

Hi,
Check SAP Note 339896, but it is for BW 2.0.
Also check SAP Note 675760 for your reference:
Symptom
You want to change the definition of an existing InfoCube.
Other terms
Conversion, ORA 14400, field change
Reason and Prerequisites
If you add or delete a key figure in an InfoCube, the structure of the fact tables changes.
These must therefore be converted in se14.
Solution
Before you start the conversion, you must compress all requests of the F fact table into the E fact table, because the converter cannot handle the BW fact tables that are structurally different from normal R/3 database tables.
You can also try to repair the PSA partition in transaction RSRV (see the check sketched below).
Regards,
Debjani
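
For a quick check before the nightly load, the drift that this RSRV test repairs can also be watched from the database side. The following is only a sketch for Oracle: it assumes the SAPR3 schema used elsewhere in this thread, keeps the masked table name from the dump above (replace it with the real PSA table name), and uses the RSTSODS fields (ODSNAME, ODSNAME_TECH, VERSION, PARTNO) that appear in the code excerpt quoted further down this page. It only illustrates what the consistency check reconciles; it is not a replacement for running RSRV.

    -- partition number recorded in the BW PSA directory
    SELECT odsname, version, partno
    FROM sapr3.rstsods
    WHERE odsname_tech = '/BIC/B0000xxxxxx';   -- masked name from the dump; use the real table name

    -- partitions that physically exist, with their upper bounds
    SELECT partition_name, high_value
    FROM dba_tab_partitions
    WHERE table_owner = 'SAPR3'
    AND table_name = '/BIC/B0000xxxxxx'
    ORDER BY partition_position;

When the partition number BW wants to use for new records lies above the highest HIGH_VALUE, the insert fails with ORA-14400; comparing the two outputs shows whether the administration information and the physical partitions have drifted apart again.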

Similar Messages

  • PSA Partitioning at the DBA Level

    Hi,
    Can anyone kindly give me the steps for creating the partitions at the database level for a PSA table?
    What would be the impact of doing this?
    Thanks
    Sonu
    Edited by: Sonu Sharma on May 16, 2008 12:25 PM

    Check note 105047 - Support for Oracle functions in the SAP environment
    Point 35 and follow-up notes; an illustration of the partition layout is sketched at the end of this message.
    Markus
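
    For illustration only: BW creates and manages PSA partitions itself (range partitioning on the PARTNO column, with the threshold maintained in RSCUSTV6, as discussed further down this page), so manual DDL against /BIC/ tables is not something to run in a live system. The sketch below merely shows, with hypothetical table and column names, what such a range-partitioned layout looks like on Oracle.

        -- hypothetical, simplified layout of a PSA-like table partitioned by PARTNO
        CREATE TABLE zpsa_demo (
          request    VARCHAR2(30) NOT NULL,   -- request ID (key field, name assumed)
          datapakid  VARCHAR2(6)  NOT NULL,   -- data package number (key field, name assumed)
          record     NUMBER       NOT NULL,   -- data record number (key field, name assumed)
          partno     NUMBER       NOT NULL    -- partition number maintained by BW
        )
        PARTITION BY RANGE (partno) (
          PARTITION p0001 VALUES LESS THAN (2),
          PARTITION p0002 VALUES LESS THAN (3)
        );

    The ORA-14400 errors discussed elsewhere in this thread are what happens when a PARTNO value arrives that lies above the highest VALUES LESS THAN bound of such a table.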

  • Unable to transport transfer rules in BI 7(Partition in PSA)

    Hi,
    I want to activate transfer rules in production but am not able to, due to a partition issue in the PSA.
    Errors accessing partition parameters for table /BIC/B0000341000 (-> long text)
    We used the program called RS_TRANSTRU_ACTIVATE_ALL, but still got the same error.
    Even when we transported the transfer rules from QA to PRD, the transport failed with error 8. The error analysis gives the same message, "PSA partition".
    T.R: Transfer Rules
    Object Type: Master data Text(0ACTIONREAS_TXT)
    Thanks,
    Gattu.
    Thanks= Points in SDN

    Laxman,
    Try to repair the PSA table at RSRV --> All Elementary Tests --> PSA Tables --> Consistency Between PSA Partitions and SAP Administration Information.
    After the repair, try to activate again.
    If it still does not work, then empty the PSA data in the production system (if it is not required), import the transfer rules, and activate them if required.
    Hope it Helps
    Srini

  • SAP-BW Oracle tablespace for ODS/PSA data

    Hi Experts,
    Could you please give some knowledge on the below " Releasing table space " issue:
    We are on BW 3.5 with database system Oracle 9.2.0.8.0.
    We've deleted all ODS/PSA data (/BIC/B0000590000 - transfer structure application 8ZSDDLITM; it's an ODS), and even after refreshing the DB02 statistics it's still the same: the space is not getting released.
    The partitioning is set up with the field “PARTNO”. But this field has only one value:
    SQL> select partno, count(*) from sapr3."/BIC/B0000590000"
      2  group by partno
      3  having count(*) > 1000000;
             0   67616209
    SQL> select count(*) from sapr3."/BIC/B0000590000";
      67616209
    All the records have the value "0" for the PARTNO field, so every record is placed in the same partition; the partitioning is effectively useless.
    So, can the space be regained by a reorganization? Will the reorganization affect anything?
    I have also found some notes on this : 565176 , 733371
    Please suggest ...
    Regards,
    KANTH

    Dear Oliver,
    Thanks for response and sorry for the late reply.
    I have checked our ODS/PSA DDIC table (/BIC/B0000590000). It still has a huge number of records (more than 6 million), although I have already deleted the PSA data from BW administration.
    I ran the RSRV consistency check, but I found no inconsistency between PSA partitions and SAP administration information.
    I think I will approach BASIS on this, but what exactly should I request from them (see the query sketch at the end of this message)? I am afraid a reorganization on their side might cause problems.
    Thanks a lot..
    Regards,
    KANTH
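
    Before handing this to BASIS, the Oracle dictionary shows how much space the table still allocates and how the rows are spread over its partitions, which is usually what a reorganization request needs to quote. This is only a sketch: it assumes DBA access and the SAPR3 schema from the queries above, and NUM_ROWS comes from optimizer statistics, so it is only as current as the last analyze.

        -- space still allocated to the table (all partitions)
        SELECT segment_type, ROUND(SUM(bytes)/1024/1024) AS size_mb
        FROM dba_segments
        WHERE owner = 'SAPR3'
        AND segment_name = '/BIC/B0000590000'
        GROUP BY segment_type;

        -- rows and tablespace per partition (from statistics)
        SELECT partition_name, tablespace_name, num_rows
        FROM dba_tab_partitions
        WHERE table_owner = 'SAPR3'
        AND table_name = '/BIC/B0000590000'
        ORDER BY partition_position;

    Deleting rows alone does not move the high-water mark, so the allocated extents typically come back only through a reorganization or by dropping/truncating the partitions.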

  • PSA tables in SAP BW

    Hi experts,
    I'm having trouble creating PSA tables in SAP BW. Can somebody please explain the different ways to create PSA tables?
    Thank You Very Much
    My Regards...

    Hi,
    After it is extracted from the source systems, data is transferred to the entry layer of the data warehouse, the persistent staging area (PSA). In this layer, data is stored in the same form as in the source system. The way in which data is transferred from here to the next layer incorporates quality-assuring measures and the transformations and cleanup required for a uniform, integrated view of the data.
    When you activate the DataSource, BI generates a PSA table and a transfer program.
    The data in the DataSource (R3TR RSDS) is transferred to the PSA.
    When you transport the restored 3.x DataSource into the target system, the DataSource
    (R3TR RSDS) is deleted in the after image. The PSA and InfoPackages are retained. If a
    transfer structure (R3TR ISTS) is transported with the restore process, the system tries to
    transfer the PSA for this transfer structure. This is not possible if no transfer structure exists
    when you restore the 3.x DataSource or if IDoc is specified as the transfer method for the 3.x
    DataSource. The PSA is retained in the target system but is not assigned to a DataSource/3.x DataSource or to a transfer structure.
    The PSA table to which the data is written is created when the transfer structure is activated.
    A transparent PSA table is created for every DataSource that is activated.
    The PSA tables each have the same structure as their respective DataSource. They are also flagged with key
    fields for the request ID, the data package number, and the data record number.
    InfoPackages load the data from the source into the PSA. The data from the PSA is processed with data transfer processes.
    With the context menu entry Manage for a DataSource in the Data Warehousing Workbench
    you can go to the PSA maintenance for data records of a request or delete request data from the PSA table of this DataSource. You can also go to the PSA maintenance from the monitor for requests of the load process.
    Using partitioning, you can separate the dataset of a PSA table into several smaller, physically independent, and redundancy-free units. This separation can mean improved performance when you update data from the PSA.
    In the Implementation Guide with SAP NetWeaver -> Business Intelligence -> Connections to Other Systems -> Maintain Control Parameters for Data Transfer, you define the number of data records needed to create a new partition. Only data records from a complete request are stored in a partition. The specified value is a threshold value.
    The number of fields is limited to a maximum of 255 when using TRFCs to transfer data. The length of the data record is limited to 1962 bytes when you use TRFCs.
    PSA Table
    For PSA tables, you access the database storage parameter maintenance by choosing Goto -> Technical Attributes in DataSource maintenance. In dataflow 3.x, you access this setting in transfer rule maintenance in the Extras menu.
    You can also assign storage parameters for a PSA table that already exists in the system. However, this has no effect on the existing table. If the system generates a new PSA version (a new PSA table) due to changes to the DataSource, it is created in the data area for the current storage parameters.
    Define the PSA partition size (the minimum number of records in a PSA partition) using transaction RSCUSTV6.
    Use program SAP_PSA_PARTITION_COMPRESS for existing PSA tables.
    RSTSODS is the directory of all PSA tables (a lookup is sketched at the end of this message).
    with regards,
    hari kv
    Message was edited by:
            hari k
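
    Since RSTSODS is the directory of all PSA tables, the generated /BIC/B* table name and the recorded partition number can be looked up there, either with SE16 on RSTSODS or directly on the database as sketched below. The sketch assumes an Oracle database with the SAPR3 schema used elsewhere in this thread and relies on the RSTSODS fields visible in the code excerpt further down this page (ODSNAME, ODSNAME_TECH, VERSION, PARTNO); the selection value is a hypothetical name.

        -- look up the generated PSA table and its recorded partition number
        SELECT odsname, odsname_tech, version, partno
        FROM sapr3.rstsods
        WHERE odsname LIKE 'ZMYSOURCE%';   -- hypothetical InfoSource/DataSource name, replace with your own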

  • Duplicate record found short dump, if routed through PSA

    Hi Experts,
    I am getting these errors extracting the master data for 0ACTIVITY
    1 duplicate record found. 1991 recordings used in table /BI0/XACTIVITY
    1 duplicate record found. 1991 recordings used in table /BI0/PACTIVITY
    2 duplicate record found. 2100 recordings used in table /BI0/XACTIVITY
    2 duplicate record found. 2100 recordings used in table /BI0/PACTIVITY.
    If I extract via the PSA with the option to ignore duplicate records, I am getting a short dump:
    ShrtText                                            
        An SQL error occurred when accessing a table.   
    How to correct the error                                                             
         Database error text........: "ORA-14400: inserted partition key does not map to  
          any partition"                                                                  
    What is causing the errors in my extraction?
    thanks
    D Bret

    Go to RSRV --> All Elementary Tests --> PSA Tables --> Consistency Between PSA Partitions and SAP Administration Information, and correct the error.

  • Delete requests in PSA found error DDL time

    Hello Experts,
    I have the following situation and I'd like some suggestions from anyone who has run into similar problems:
    - SAP BI-QA were refreshed from SAP BI-Production
    - delete old requests in PSA
    - found the following errors:
    (1) DDL time(___1): .........1 milliseconds     @35@
    (2) Delete request REQU_D34ZBFRB4LSZ3R99FR4BRPF1Z from PSA /BIC/B0000747: Error - subrc:  2     @35@
    Does anyone know the cause of this kind of problem?
    How can these problem requests be cleared?
    Any help would be appreciated.
    Thank you very much.
    -WJ-

    Well.
    I did the following steps and it resolved my problems.
    - transaction = RSRV
    - under: Tests in Transaction RSRV > All Elementary Tests > PSA Tables
    - double-click: Consistency Between PSA Partitions and SAP Administration Information
    - input target PSA table
    - click Execute button
    - click Correct Error
    That's all.
    -WJ-

  • PSA Data Load Error

    I am getting the following PSA error and am currently fixing this error manually. Could someone explain me how to have this in a Process Chain.
    failed step: PSA cleaning
    cause: not able to truncate the PSA because of inconsistent data in a partition
    steps taken: make the partition consistent using RSRV and rerun the load
    Thanks,
    Priya

    Go to RSRV --> Tests in Transaction RSRV --> PSA Tables --> Consistency Between PSA Partitions and SAP Administration Information --> give the PSA table name here and press F8. Once you see errors, press the 'Correct Error' button and it should get fixed.
    You should be able to get the PSA table name from the process chain step variant.

  • How to calculate the size each PSA request or all PSA for the Datasources?

    Hi All,
    Can anybody tell me how to calculate the size of each PSA request, or of all PSA requests, for all DataSources?
    Regards,
    Rajesh

    Hi,
    The PSA technical name can be found as follows:
    1) Right-click on your data target, go to Display Data Flow, and click on the PSA; there you can find the technical name.
    2) Go to transaction RSRV > PSA Tables > Consistency Between PSA Partitions and SAP Administration Information --> give your InfoSource name; it shows the PSA technical name as well.
    Once you have the technical table name, the size on the database can be estimated as sketched below.
    Thanks & Regards,
    Praveena.
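
    On an Oracle-based BW, a rough database-side estimate can be taken as sketched below once the technical table name is known. This assumes the SAPR3 schema used elsewhere in this thread, a hypothetical PSA table name, and that the request ID sits in the key field REQUEST (the PSA keys are request ID, data package number, and record number); rows per request multiplied by AVG_ROW_LEN from DBA_TABLES gives an estimate only, not an exact size.

        -- space currently allocated to the whole PSA table
        SELECT ROUND(SUM(bytes)/1024/1024) AS size_mb
        FROM dba_segments
        WHERE owner = 'SAPR3'
        AND segment_name = '/BIC/B0000123456';   -- hypothetical PSA table name

        -- records per request; multiply by AVG_ROW_LEN (DBA_TABLES) for a rough size per request
        SELECT request, COUNT(*) AS recs
        FROM sapr3."/BIC/B0000123456"
        GROUP BY request
        ORDER BY recs DESC;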

  • ASSERTION_FAILED when Activate a DTP

    I got an error message when trying to activate a DTP. Does anyone know how to fix it? Thanks!
    Runtime Errors         ASSERTION_FAILED
    Date and Time          03/27/2007 14:29:57
    Short dump has not been completely stored (too big)
    Short text
         The ASSERT condition was violated.
    What happened?
         In the running application program, the ASSERT statement recognized a
         situation that should not have occurred.
         The runtime error was triggered for one of these reasons:
         - For the checkpoint group specified with the ASSERT statement, the
           activation mode is set to "abort".
         - Via a system variant, the activation mode is globally set to "abort"
           for checkpoint groups in this system.
         - The activation mode is set to "abort" on program level.
         - The ASSERT statement is not assigned to any checkpoint group.
    Error analysis
         The following checkpoint group was used: "No checkpoint group specified"
         If in the ASSERT statement the addition FIELDS was used, you can find
         the content of the first 8 specified fields in the following overview:
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
         " (not used) "
    Trigger Location of Runtime Error
         Program                                 CL_RSAR_PSA===================CP
         Include                                 CL_RSAR_PSA===================CM006
         Row                                     152
         Module type                             (METHOD)
         Module Name                             UPDATEDIRECTORY_TABLES
    Source Code Extract
    Line  SourceCde
    122               i_uni_idc25       = l_codeid
    123               i_program_class   = 'RSAR_ODS_MAINTAIN'
    124             EXCEPTIONS
    125               deletion_rejected = 2
    126               OTHERS            = 3.
    127         ENDIF.
    128       ENDIF.
    129       UPDATE rstsods SET tstpnm   = sy-uname
    130                 timestmp  = l_s_ods-timestmp
    131                 userapp   = p_userapp
    132                 userobj   = p_userobj
    133                 maintprog = ''
    134            WHERE odsname = l_s_odsfield-odsname
    135            AND   version = l_s_odsfield-version.
    136       IF sy-subrc = 0.
    137         IF i_partitioned = rs_c_true.
    138 *--   Entry could exist but includes no partition number,
    139 *     because the PSA was not partitioned before
    140           l_tablnm = p_psa_techname.
    141
    142           CALL FUNCTION 'RSDU_PARTITIONS_INFO_GET'
    143             EXPORTING
    144               i_tablnm              = l_tablnm
    145             IMPORTING
    146               e_ts_part_info        = l_ts_part_info
    147             EXCEPTIONS
    148               table_not_exists      = 1
    149               table_not_partitioned = 2
    150               OTHERS                = 3.
    151
    >>>           ASSERT sy-subrc = 0.
    153
    154           DESCRIBE TABLE l_ts_part_info LINES l_num_partitions.
    155           READ TABLE l_ts_part_info INDEX l_num_partitions INTO l_s_part_info.
    156           l_highest_partvalue = l_s_part_info-high_value.
    157
    158           UPDATE rstsods SET partno  = l_highest_partvalue
    159             WHERE odsname = l_s_odsfield-odsname
    160             AND   version = l_s_odsfield-version.
    161
    162         ENDIF.
    163       ELSE.
    164 *       create new version
    165         l_s_ods-odsname       = l_s_odsfield-odsname.
    166         l_s_ods-version       = i_next_version.
    167         l_s_ods-dateto        = rsods_c_dateto_01019999.
    168         l_s_ods-datefrom      = rsods_c_datefrom_01011998.
    169         l_s_ods-objstat       = rs_c_objstat-active.
    170         l_s_ods-odsname_tech  = p_psa_techname.
    171         l_s_ods-progname      = i_progname.

    I guess it is Note 1012607; a database-side check related to it is sketched at the end of this message.
    Summary
    Symptom
    Note: This note is relevant only for 'ORACLE' and 'MSSQL' database systems. After you implement this note, you must also carry out some manual corrections (see 'Solution', below).
    If you are working with database system DB2 or MSSQL, also implement Note 1022026.
    When data is written or activated or when a DataStore object is activated, the following errors occur:
    Similar errors may also occur for the DataSource and data transfer process (DTP).
    ORA-01502: index 'SAPDAT./BIC/A*KE' or partition of such index is in unusable state
    Column 'PARTNO' is partitioning column of the index '/BIC/A*KE'. Partition columns for a unique index must be a subset of the index key.
    error #RSDU_TABLE_TRUNC_PARTITION_MSS: Error While Calling Module MSS_TRUNC_PARTITION_FROM_TABLE Message no. 0U534#.
    ASSERTION_FAILED in class 'CL_RSAR_PSA'.
    Error message D0 313 in the activation log. The message does not contain any text. In the activation log it is displayed as an empty line with a red traffic light.
    Other terms
    DBIF_RSQL_SQL_ERROR, D0 313, D0313
    Reason and Prerequisites
    Reason:
    The partitioning logic of the persistent staging area (PSA) service does not recognize that the PARTNO field must not be deleted.
    For write-optimized DataStore objects, the active table is created as a partitioned table, even though a global index is used to ensure uniqueness of data. This is not compatible with the 'drop of a partition'.
    In the DataSource maintenance, you have the option to define key fields. For the first 16 key fields of the DataSource field list, a global index is also created.
    If 'semantic groups' are used in the DTP, the error stack is created with a global index.
    Solution
    Implement the corrections by importing the Support Package or by implementing the advance correction. As a result, the 'range' partitioning is deactivated in the PSA service as soon as a global index is requested.
    The error can occur for the objects: DataStore (only the write-optimized type), DataSource, and error stack of the DTP.
    This note contains the 'RSAR_PSA_PARTITION_CHECK' program, which you can use to analyze the objects. Execute the program. Use the search strings listed in section 5), depending on whether you want to analyze individual objects or object types. If you do not make an entry in the PODSTECH field (technical name of the PSA), the system checks all existing PSA tables, which may take some time.
    You can use transaction SLG1 to display the log for 'RSAR_PSA_PARTITION_CHECK'. Select the following:
               Object        = 'RSAR'
    Subobject   = 'METADATA'
    Ext. Identif. = 'RSAR_PSA_PARTITION_CHECK'
    You must make different manual changes to repair each of the different object classes.
    1) DataStore (write-optimized)
    Incorrect DataStores are identified in the log of the check program with the PSA type 'FASTSTORE'. The name after 'Obj:' is the technical name of the corresponding DataStore object.
    For a DataStore object of the 'write-optimized' type, a global index with relation to the semantic key is created if the 'Do Not Check Uniqueness of Data' indicator is not set.
    Check if you need to ensure that data is unique in your scenario.
    1. If you do not need the data to be unique:
                        Set the flag: 'Do Not Check Uniqueness of Data', and activate the DataStore object. The DataStore object is now consistent again.
    2. If you need unique data:
    In this case, you must departition and convert the table.
    If the error occurred when you activate the DataStore itself or when you activate the data, you must activate the DataStore object after converting the active table. You need the technical name of the active table for the conversion. You can get this directly from the log of 'RSAR_PSA_PARTITION_CHECK'. If you know which DataStore contains errors, find the technical name of the active table in the Maintain DataStore screen by choosing:
               <Extras> ->
              'Information (logs/status)
    Choose 'Dictionary DB status' to access the status POPUP. You can find the technical name in the 'Active table' field.
    If the table does not contain any data according to the 'RSAR_PSA_PARTITION_CHECK' log, the table is automatically departitioned when you activate the DataStore.
    If the table contains data, you must departition and convert the table as described in section 4.
    After that, use the AdminWorkBench (transaction RSA1) to activate the DataStore object.
    2) DataSource:
    Incorrect DataSources are identified in the log of the check program with the PSA type 'NEW_DS'. The 'Obj:' indicator  is followed by two additional character strings. The first is the technical name of the relevant DataSource. The second is the technical name of the source system.
    PSA tables for DataSources with a key definition must be departitioned.
    The name of the PSA table for the DataSource is contained directly in the 'RSAR_PSA_PARTITION_CHECK' log.
    If the table does not contain any data according to the 'RSAR_PSA_PARTITION_CHECK' log, the table is automatically departitioned when you activate the DataStore.
    If the table contains data, you must departition and convert the table as described in section 4.
    Call transaction 'RSDS' and enter the technical name of the DataSource and the source system and activate the DataSource.
    3) Error stacks for the DTP:
    Incorrect Error Stacks are identified in the log of the check program with the PSA type 'ERRORSTACK'. The 'Obj:' indicator  is  followed by the technical name of the relevant DTP. There may be more than one error stack table for each DTP.
    PSA tables for ErrorStack with a key definition must be departitioned.
    The name(s) of the PSA Error Stack table(s) for the DTP is/are contained directly in the 'RSAR_PSA_PARTITION_CHECK' log.
    If the table does not contain any data according to the 'RSAR_PSA_PARTITION_CHECK' log, the table is automatically departitioned when you activate the DTP.
    If the table contains data, you must departition and convert the table as described in section 4.
    Now call transaction RSDTP, enter the technical name of the DTP and activate the DTP.
    4) Departitioning and converting
    The following manual conversion using transaction SE14 is supported only for ORACLE database systems. Open a problem message under component BW-SYS-DB-MSS if you need to convert tables on a MSSQL database system.
    Call transaction SE14 (Database Utility) for the tables you need to convert. Select 'Table', enter the technical name of the table and choose 'Edit'.
    On the next screen, choose 'Storage Parameters' (Shift+F6).
    On the next screen (Storage Parameters), choose 'For new creation' (F8).
    In the dialog box that then appears, select 'Current database parameters' and copy it by choosing 'Enter'.
    You now get an overview of the storage parameters <Tables>, <Indexes> and existing <Partitions>.
    Under the 'Table' node, if the content of the 'TABLESPACE' field is initial, enter the value from the 'TABLESPACE' field of the first partition.
    For the field 'PARTITIONED BY', choose the option 'No partitioning' and save your changes.
    Exit the screen with the storage parameters.
    On the next screen, ensure that the 'Save data' radio button after 'Activate and adjust database' is selected, then execute the conversion. You execute the conversion by choosing 'Force Conversion' in the <Extras> menu.
    Next, you must correct the PARTNO indicator in table RSTSODS. To do this, call transaction RSRV and execute the test 'Consistency Between PSA Partitions and SAP Administration Information'. You can find this test in transaction RSRV under
    <All Elementary Tests>
                 -> <PSA Tables>
    You can execute the RSRV test and repair for all converted tables at once. For further information about how to use transaction RSRV in this case, see the online documentation. You can call the online documentation by choosing the 'Info' icon.
    5) Search strings:
    a) Use the search string '/BI+/B*' to find the relevant entries for the DataSource, the change logs and the error stack.
    b) Use the search string '/BI+/A*00' to find the relevant entries in the active tables for the DataSource objects.
    SAP NetWeaver 2004s BI
               Import Support Package 13 for SAP NetWeaver 2004s BI (BI Patch 13 or SAPKW70013) into your BI system. The Support Package is available once Note 991093 "SAPBINews BI 7.0 Support Package 13", which describes this Support Package in more detail, has been released for customers.
    In urgent cases, you can implement the correction instructions as an advance correction.
    You must first implement Notes 932065, 935140, 948389, 964580, 969846, 975510, 983212 and 1000448, which provide information about transaction SNOTE. Otherwise, problems and syntax errors may occur when you deimplement certain notes.
    To provide information in advance, the notes mentioned above may already be available before the Support Package is released. In this case, the short text of the note still contains the words "Preliminary version".
    Before you implement an advance correction (if one exists and you want to implement it), see Note 875986. This contains notes regarding the SAP Note Assistant and these notes prevent problems during the implementation.
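
    The corrections in this note revolve around tables that are range-partitioned by PARTNO while also carrying a unique, global (non-partitioned) index. Whether a given PSA, error stack, or write-optimized DataStore table is in that state can also be seen directly on the Oracle database. The sketch below assumes the SAPR3 schema and a hypothetical table name; it is only a rough database-side counterpart to running RSAR_PSA_PARTITION_CHECK, not a replacement for the note's procedure.

        -- is the table range-partitioned, and into how many partitions?
        SELECT partitioning_type, partition_count
        FROM dba_part_tables
        WHERE owner = 'SAPR3'
        AND table_name = '/BIC/B0000123456';   -- hypothetical table name

        -- does a unique index exist, and is it global (not partitioned)?
        SELECT index_name, uniqueness, partitioned
        FROM dba_indexes
        WHERE table_owner = 'SAPR3'
        AND table_name = '/BIC/B0000123456';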

  • Error while loading the master data

    Hi Experts,
    I am working on BI. I am trying to extract data from 0PM_MEASDOC_ATTR to the PSA, and while loading the data I got the below error:
    ERROR OCCURED WHILE DECIDING PARTITION NUMBER
    What could be the problem, and how can I solve it? Please help me with this.
    <removed by moderator>
    Thanks in advance,
    Venkat
    Edited by: Siegfried Szameitat on Feb 16, 2009 3:09 PM

    Hi Venkat,
    Check the PSA table name and run these two reports, RSAR_PSA_CLEANUP_DIRECTORY and SAP_PSA_PARTNO_CORRECT, to check whether there are inconsistencies in the PSA table.
    Also check RSRV -> All Elementary Tests -> PSA Tables -> Consistency Between PSA Partitions and SAP Administration Information.
    Run this with the technical name of the PSA table, and you can see whether anything is wrong.
    Regards
    Srini

  • Error Message When Activating InfoSource

    Hello - I receive the following error message when I attempt to activate an InfoSource in BW (I'm using the program "RS_TRANSTRU_ACTIVATE_ALL" via SE38):
    Mass generation: No versioning for PSA table /BIC/B0000729000 for InfoSource 8ZO_CHGRE
    Message no. RSAR023
    The InfoSource is used to load a cube with ODS data.  If anyone has any suggestions, I would greatly appreciate it.  Thank you in advance.

    1. I performed the test in RSRV (i.e. PSA Tables / Consistency Between PSA Partitions and SAP Administration Information) and I received the following results:
    PSA Table /BIC/B0000729000 is not partitioned (according to RSTSODS)
    Message no. RSRV186
    2.  Generating the export datasource was not successful - I receive the following message from a "Runtime Error - Description of Exception" screen:
    Runtime Errors         MESSAGE_TYPE_X
    Date and Time          06/27/2007 12:13:49
    ShrtText
         The current application triggered a termination with a short dump.
    What happened?
         The current application program detected a situation which really
         should not occur. Therefore, a termination with a short dump was
         triggered on purpose by the key word MESSAGE (type X).
    Error analysis
         Short text of error message:
         Serious internal error:
         Technical information about the message:
          Diagnosis
              A serious internal error occurred. It could not be corrected.
          Procedure
              The following information is available on this error:
              1.
              2.
              3.
              4.   OSS note
              Check the OSS for corresponding notes and create a new problem
              message if necessary.
         Message classe...... "RSAR"
         Number.............. 001
         Variable 1.......... " "
         Variable 2.......... " "
    Trigger Location of Runtime Error
        Program                                 SAPLRSAC
        Include                                 LRSACU75
        Row                                     536
        Module type                             (FUNCTION)
        Module Name                             RSAR_TRANSTRUCTURE_ACTIVATE

  • Short Dump in SAP BW Sys while extracting data from R/3

    Hi All,
    I am getting a short dump in SAP BW while extracting data from R/3 into a BW InfoCube.
    The error message is like this:
    Runtime Error          DBIF_RSQL_SQL_ERROR
    Except.                CX_SY_OPEN_SQL_DB
    Date and Time          11.05.2011 09:36:51
    ShrtText
    An SQL error occurred when accessing a table.
    What can you do?
    Make a note of the actions and input which caused the error.
    To resolve the problem, contact your SAP system administrator.
    You can use transaction ST22 (ABAP Dump Analysis) to view and administer
    termination messages, especially those beyond their normal deletion
    date.
    How to correct the error
    Database error text........: "ORA-14400: inserted partition key does not map to
    any partition"
    Internal call code.........: "[RSQL/INSR//BIC/B0000115000 ]"
    Please check the entries in the system log (Transaction SM21).
    You may able to find an interim solution to the problem
    in the SAP note system. If you have access to the note system yourself,
    use the following search criteria:
    "DBIF_RSQL_SQL_ERROR" CX_SY_OPEN_SQL_DBC
    "GPCX2UNC5JG556R7DPSFJUGJS1Z" or "GPCX2UNC5JG556R7DPSFJUGJS1Z"
    |    "INSERT_ODS"                
    If you cannot solve the problem yourself and you wish to send
    an error message to SAP, include the following documents:
    1. A printout of the problem description (short dump)
    To obtain this, select in the current display "System->List->
    Save->Local File (unconverted)".
    2. A suitable printout of the system log
    To obtain this, call the system log through transaction SM21.
    Limit the time interval to 10 minutes before and 5 minutes
    after the short dump. In the display, then select the function
    "System->List->Save->Local File (unconverted)".
    3. If the programs are your own programs or modified SAP programs,
    supply the source code.
    To do this, select the Editor function "Further Utilities->
    Upload/Download->Download".
    4. Details regarding the conditions under which the error occurred
    or which actions and input led to the error.
    The exception must either be prevented, caught within the procedure
    "INSERT_ODS"
    "(FORM)", or declared in the procedure's RAISING clause.
    To prevent the exception, note the following:
    System environment
    SAP Release.............. "640"
    Application server....... "bwdev"
    Network address.......... "10.10.100.36"
    Operating system......... "Windows NT"
    Release.................. "5.2"
    Hardware type............ "8x AMD64 Level"
    Character length......... 8 Bits
    Pointer length........... 64 Bits
    Work process number...... 0
    Short dump setting....... "full"
    Database server.......... "BWDEV"
    Database type............ "ORACLE"
    Database name............ "BWD"
    Database owner........... "SAPR3"
    Character set............ "English_United State"
    SAP kernel............... "640"
    Created on............... "Aug 17 2008 20:56:58"
    Created in............... "NT 5.2 3790 Service Pack 1 x86 MS VC++ 14.00"
    Database version......... "OCI_10201_SHARE "
    Patch level.............. "247"
    Patch text............... " "
    Supported environment....
    Database................. "ORACLE 9.2.0.., ORACLE 10.1.0.., ORACLE
    10.2.0.."
    SAP database version..... "640"
    Operating system......... "Windows NT 5.0, Windows NT 5.1, Windows NT 5.2,
    Windows NT 6.0"
    Memory usage.............
    Roll..................... 16128
    EM....................... 6265200
    Heap..................... 0
    Page..................... 0
    MM Used.................. 5972296
    MM Free.................. 289384
    SAP Release.............. "640"
    User and Transaction
    Information on where terminated
    The termination occurred in the ABAP program "GPCX2UNC5JG556R7DPSFJUGJS1Z" in
    "INSERT_ODS".
    The main program was "SAPMSSY1 ".
    The termination occurred in line 41 of the source code of the (Include)
    program "GPCX2UNC5JG556R7DPSFJUGJS1Z"
    of the source code of program "GPCX2UNC5JG556R7DPSFJUGJS1Z" (when calling the
    editor 410).
    Processing was terminated because the exception "CX_SY_OPEN_SQL_DB" occurred in
    the
    procedure "INSERT_ODS" "(FORM)" but was not handled locally, not declared in
    the
    RAISING clause of the procedure.
    The procedure is in the program "GPCX2UNC5JG556R7DPSFJUGJS1Z ". Its source code
    starts in line 21
    |    of the (Include) program "GPCX2UNC5JG556R7DPSFJUGJS1Z ".

    Hi,
    This error occurs when an entry is inserted into the table that does not match the value range of any partition. In such a case, you must compare the value of the entry with the definitions of the partitions to determine the cause of the error (the partition definitions can be listed as sketched below).
    Please run the following test/repair in transaction RSRV:
    RSRV -> All Elementary Tests -> PSA Tables -> Consistency Between PSA Partitions and SAP Administration Information
    -> include the PSA table
        /BIC/B0000115000
    Check OSS note 339896 which explains  the repair procedure.
    Regards,
    Lokesh
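
    To compare the inserted value with the definitions of the partitions, as suggested above, the partition bounds of /BIC/B0000115000 can be listed directly on the Oracle database. This is only a sketch and assumes the SAPR3 schema shown in the dump; a PARTNO above the highest HIGH_VALUE cannot be mapped to any partition, which is exactly the ORA-14400 in the dump, and the RSRV repair is what brings the administration information back in line.

        -- upper bounds of the existing partitions of the PSA table
        SELECT partition_name, high_value
        FROM dba_tab_partitions
        WHERE table_owner = 'SAPR3'
        AND table_name = '/BIC/B0000115000'
        ORDER BY partition_position;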

  • ODS to CUBE loading - taking too much time

    Hi Experts,
    I am loading data from R/3(4.7) to BW (3.5).
    I am loading with option --> PSA and then Data Target (ODS ).
    I have a selection criteria in Infopackage while loading from standard Datasource to ODS.
    It takes me 20 mins to load 300K records.
    But, from ODS to Infocube ( update method: Data Target Only), it is taking 8 hours.
    The data packet size in Infopackage is 20,000 ( same for ODS and Infocube).
    I also tried changing the data packet size, tried with full load , load with initialization,..
    I tried scheduling it as a background job too.
    I do not have any selection criteria in the infopackage from ODS to Cube.
    Please let me know how can I decrease this loading time from ODS to Infocube.

    Hi,
    To improve the data load performance
    1. If they are full loads, see whether you can make them delta loads.
    2. Check if there are complex routines/transformations being performed in any layer. In that case, see if you can optimize that code with the help of an ABAPer.
    3. Ensure that you are following the standard procedures in the chain like deleting Indices/secondary Indices before loading etc.
    4. Check whether the system processes are free when this load is running
    5. try making the load as parallel as possible if the load is happening serially. Remove PSA if not needed.
    6. When the load is not getting processed due to a huge volume of data, or a large number of records per data packet, try the following:
    1) Reduce the IDoc size to 8000 and the number of data packets per IDoc to 10. This can be done in the InfoPackage settings.
    2) Run the load only to the PSA.
    3) Once the load is successful, push the data to the targets.
    In this way you can overcome this issue.
    Check the data packet sizing and also the number range buffering, the PSA partition size, and the upload sequence, i.e. always load master data first, perform the change run, and then run the transaction data loads.
    Check this doc on BW data load perfomance optimization
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    BI Performance Tuning
    FAQ - The Future of SAP NetWeaver Business Intelligence in the Light of the NetWeaver BI&Business Objects Roadmap
    Thanks,
    JituK

  • Can someone take some time to answer these interview Qs?

    Hi BW Experts,
    1) How many internal tables are there (or rather, internal table types)?
    2) In the ODS settings there is a setting for the BEx flag. What exactly happens behind the scenes when we set that flag?
    3) How can performance be improved for data loading (not for queries)?
    Thanks in advance,
    Sam

    Hi Samay,
    I don't know about the internal tables, but for the other two queries I can answer to the best of my knowledge.
    2a) Setting the BEx reporting flag on an ODS object creates SID tables, which costs performance; that is why an InfoSet query is often used to report on an ODS instead.
    3a) Data loading performance can be improved by:
    1) disabling indexes before the load
    2) PSA partitioning with a suitable data packet size
    3) line-item dimensions, since there is no dimension table between the fact table and the SID table.
    Hope it helps you..
