Data package size will be determined dynamically.

Dear SDNers,
I have seen that for some DTPs in my projects the data package size is determined dynamically. How does this happen?
I am getting this message on the DTP -> Extraction tab:
"The package size corresponds to the package size in the source.
It is determined dynamically at runtime."
Thanks,
Swathi

Hello,
You would get this when semantic keys are not defined in the DTP.
Regards..
Balaji

Similar Messages

  • How do we control the data package size that comes into the DSO?

    Hi experts,
    I have this scenario:
    Initial information (numbers are not real):
I have 10 contracts in CRM (One Order documents).
    Each contract when extracted becomes 50 records.
    Running BW 3.x
(1) Now when I start data extraction in BW, I will receive 5 packets, split like the following:
    DP1: 100 records (contract 1 and 2)
    DP2: 100 records (contract 3 and 4)
    DP3: 50 records (contract 5)
    These records are stored in the PSA.
(2) Then it seems the system keeps the same package size and sends these DPs to the DSO, like the following:
    DP1 -> 100 records -> DSO
    DP2 -> 100 records -> DSO
    DP3 -> 50 records -> DSO
What I want:
I have a special case and I want to be able to do the following, starting from (2).
    Instead of sending
    DP1 -> 100 records -> DSO
    DP2 -> 100 records -> DSO
    DP3 -> 50 records -> DSO
    I want to send:
    DP1 -> 10 records -> DSO
    DP2 -> 10 records -> DSO
    DP3 -> 10 records -> DSO
    DP25 -> 10 records -> DSO
    Do I have control over the data package size (number of records)?
Can the DPs between DataSource <-> DSO be different than the ones between SourceSystem <-> DataSource?
Can I even go further and do some kind of selection, to be able to send like the following:
    DP1 -> all records from item 01 to 10 of contract 1 -> DSO
    DP2 -> all records from item 11 to 20 of contract 1 -> DSO
    DP3 -> all records from item 01 to 10 of contract 2 -> DSO
    DP4 -> all records from item 11 to 20 of contract 2 -> DSO
    DPn -> all records from item 11 to 20 of contract 10 -> DSO
    Thanks!

Hi,
If you are using an InfoPackage, try the setting in the InfoPackage: in the Scheduler menu at the top, choose DataS. Default Data Transfer, where you can change the package size of the data.
If you are using a DTP, you can specify the package size on the Extraction tab.
Hope this helps.
Thanks,
Arun

  • Increase data package size

    Hi,
    I'm using a flat file datasource to load data from the BW server into a cube.  When I load a file containing 9000 records, the data package size is 1000.  I'm trying to improve the load time and would like to increase the data package size to 2000.
    I made an entry to specify 2000 records for my flat file source in SBIW -> Maintain Control Parameters for Data Transfer.  My file still loads with 1000 records in the data package. 
    I've also tried changing the setting on the infopackage itself, but it keeps telling me the max is 1000.
    Any suggestions?
    Thanks

The settings you maintained in SBIW are for other BI systems, not for the current system. When you exchange data between your current BI system and another BI system, you maintain them in SBIW.
If you want to change it for the current system, you need to maintain it in SPRO, not SBIW. I can't tell you the exact navigation; it is under "Links to Other Systems".
If you are on BI 7.0, follow the navigation below:
SPRO --> F5 --> SAP NetWeaver --> BI --> Links to Other Systems --> Maintain Control Parameters for Data Transfer.
Let me know if you need any more information.
    Nagesh Ganisetti.

  • Data package size

What is the basic difference between RSCUSTV6 and SBIW -> General Settings -> Maintain Control Parameters with regard to modifying the data package settings?

    Hi,
    Just see the help on
    Maintain Control Parameters for Data Transfer:
    1. Source System
    Enter the logical system of your source client and assign the control parameters you selected to it.
    You can find further information on the source client in the source system by choosing the path
    Tools -> Administration -> Management -> Client Maintenance.
    2. Maximum Size of the Data Package
When you transfer data into BW, the individual data records are sent in packages of variable size. You can use these parameters to control how large such a data packet typically is.
If no entry is maintained, the data is transferred with a default setting of 10,000 kBytes per data packet. The memory requirement depends not only on the settings of the data package, but also on the size of the transfer structure and the memory requirement of the relevant extractor.
    3. Maximum Number of Rows in a Data Package
With large data packages, the memory requirement mainly depends on the number of data records that are transferred with the package. Using this parameter, you control the maximum number of data records that the data package should contain.
By default, a maximum of 100,000 records are transferred per data package.
The maximum main memory requirement per data package is approximately 2 x 'Max. Rows' x 1000 bytes.
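For example, with the default of 100,000 rows per package, that is roughly 2 x 100,000 x 1000 bytes = 200 MB of main memory per data package.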
    4. Frequency
The specified frequency determines the number of data IDocs after which an Info IDoc is sent, i.e. how many data IDocs an Info IDoc describes.
Frequency 1 is set by default. This means that an Info IDoc follows every data IDoc. In general, you should select a frequency between 5 and 10, but no higher than 20.
The bigger the data IDoc packets, the lower the frequency setting should be. In this way, during the upload you obtain information on the respective data load at relatively short intervals.
    With the help of every Info IDoc, you can check the BW monitor to see if there are any errors in the loading process. If there are none, then the traffic light in the monitor will be green. The Info IDocs contain information such as whether the respective data IDocs were uploaded correctly.
    5. Maximum number of parallel processes for the data transfer
    An entry in this field is only relevant from release 3.1I onwards.
    Enter a number larger than 0. The maximum number of parallel processes is set by default at 2. The ideal parameter selection depends on the configuration of the application server, which you use for transferring data.
    6. Background job target system
    Enter the name of the application server on which the extraction job is to be processed.
    To determine the name of the application server, choose
    Tools -> Administration -> Monitor -> System monitoring -> Server. The name of the application server is displayed in the column Computer.
    7. Maximum Number of Data Packages in a Delta Request
    With this parameter, you can restrict the number of data packages in a delta request or in the repetition of a delta request.
    Only use this restriction when you expect delta requests with a very high data volume, so that, despite sufficiently large data package sizes, more than 1000 data packages can result in a request.
With an initial value or when the value is 0, there is no restriction. Only a value larger than 0 leads to a restriction in the number of data packages. For reasons of consistency, this number is not generally exactly adhered to. The actual restriction can, depending on how much the data is compressed in the qRFC queue, deviate from the given limit by up to 100.
    RSA6:
    Used to change the Datapacket Size.
    Thanks
    Reddy

  • Impact of Changing Data Package Size with DTP

    Hi All,
We have a delta DTP to load data from a DSO to an InfoCube. The default data package size of the DTP is 50,000 records.
Due to the huge volume of data, the internal table memory is exhausted and the data load fails.
We then changed the data package size to 10,000, and the data load executes successfully.
The DTP with a package size of 50,000 took 40 minutes to execute and failed, but the DTP with a package size of 10,000 took 15 minutes (for the same amount of data).
    Please find below my questions:
Why does a DTP with a bigger packet size run longer than a DTP with a smaller packet size?
Also, by reducing the standard data package size from 50,000 to 10,000, will it impact any other data loading?
    Thanks

    Hi Sri,
If your DTP is taking more time, then check your transformation.
1. Transformations with routines always take more time, so if you want to reduce the execution time the routines should be optimized for good performance.
2. Also check whether you have filters at DTP level. Due to filters the DTP can take a long time; if the same data is filtered at routine level it can take much less time.
3. If you cannot change the routine, you can set semantic keys in your DTP. The package data will be sorted by the semantic keys, which may help the routine process faster.
4. Your routine is failing due to internal table memory, so check whether you have a SELECT statement in the routine without a FOR ALL ENTRIES IN RESULT_PACKAGE or SOURCE_PACKAGE clause; using it will reduce the record count (see the sketch after this list).
5. Wherever possible delete duplicate records, and if possible filter useless data in the start routine itself.
6. Refresh internal tables once the data is no longer needed. If your tables are global, the data will be present at every routine level, so refreshing will help reduce the size.
7. The maximum memory that can be occupied by an internal table (including its internal administration) is 2 gigabytes. A more realistic figure is up to 500 megabytes.
8. Also check the number of jobs running at that time. Maybe you have many jobs active at the same time, so less memory is available and the DTP can fail.
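To illustrate points 4 and 5, here is a minimal start routine sketch (not taken from this thread; the DSO table /BIC/AZSALES00 and the fields DOC_NUMBER and RECORDMODE are made-up names): the lookup SELECT is restricted to the records of the current package with FOR ALL ENTRIES, and unneeded records are dropped before any further processing.
* Minimal sketch of a BW 7.x start routine (assumed names)
    DATA: lt_lookup TYPE STANDARD TABLE OF /bic/azsales00.

    IF source_package IS NOT INITIAL.
*     Point 4: read lookup data only for the records of this package
      SELECT * FROM /bic/azsales00
        INTO TABLE lt_lookup
        FOR ALL ENTRIES IN source_package
        WHERE doc_number = source_package-doc_number.
    ENDIF.

*   Point 5: drop records that are not needed as early as possible
    DELETE source_package WHERE recordmode = 'D'.
The same FOR ALL ENTRIES pattern applies in an end routine, using RESULT_PACKAGE instead of SOURCE_PACKAGE.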
Why does a DTP with a bigger packet size run longer than a DTP with a smaller packet size?
Start and end routines work at package level, so the routine runs for each package one by one. By default a package contains data sorted by key (the non-unique keys (characteristics) of the source or target), and by setting semantic keys you can change this order. So a package with more data takes more processing time than a package with less data.
By reducing the standard data package size from 50,000 to 10,000, will it impact any other data loading?
It will only impact the running of that load. But if lots of other loads are running simultaneously, the server can allocate more space to them. So before reducing the package size, check whether it actually helps routine performance (start and end) or just increases overhead.
    Hope these points will be helpful .
    Regards,
    Jaya Tiwari

  • Restrict Data Package Size

    hi all,
I have created a generic DataSource which extracts data using a function module.
I want to restrict my data by packet size. How can I do this?
My FM is:
    FUNCTION zstock_requirement.
    ""Local interface:
    *"  IMPORTING
    *"     VALUE(I_DSOURCE) TYPE  RSISOURCE OPTIONAL
    *"     VALUE(I_REQUNR) TYPE  RSREQUNR
    *"     VALUE(I_MAXSIZE) TYPE  RSMAXSIZE DEFAULT 1000
    *"     VALUE(I_INITFLAG) TYPE  RSINITFLG OPTIONAL
    *"     VALUE(I_UPDMODE) TYPE  RSUPDMODE OPTIONAL
    *"     VALUE(I_DATAPAKID) TYPE  RSDATAPID DEFAULT 50000
    *"     VALUE(I_READ_ONLY) TYPE  SBIW_BOOL DEFAULT SBIW_C_FALSE
    *"  TABLES
    *"      I_T_SELECT TYPE  SBIWA_T_SELECT OPTIONAL
    *"      I_T_FIELDS TYPE  SBIWA_T_FIELDS OPTIONAL
    *"      E_T_DATA STRUCTURE  MDEZ OPTIONAL
* Maximum number of lines for DB table
  STATICS: s_s_if TYPE srsc_s_if_simple,
* counter
           s_counter_datapakid LIKE sy-tabix,
* cursor
           s_cursor TYPE cursor.
* i_matnr / i_input / wa_input: declarations assumed from the SELECTs below
  TYPES: BEGIN OF ty_matnr,
           matnr TYPE mara-matnr,
         END OF ty_matnr,
         BEGIN OF ty_input,
           matnr TYPE marc-matnr,
           werks TYPE marc-werks,
         END OF ty_input.
  DATA: lt_mdezx TYPE TABLE OF mdez,
        la_mdezx LIKE LINE OF lt_mdezx,
        i_matnr  TYPE STANDARD TABLE OF ty_matnr,
        i_input  TYPE STANDARD TABLE OF ty_input,
        wa_input TYPE ty_input.
* Initialization mode (first call by SAPI) or data transfer mode
* (following calls) ?
  IF i_initflag = sbiwa_c_flag_on.
* Initialization: check input parameters
*                 buffer input parameters
*                 prepare data selection
* Check DataSource validity
        CASE i_dsource.
          WHEN 'ZSTOCK_REQUIREMENT1'.
          WHEN OTHERS.
            IF 1 = 2. MESSAGE e009(r3). ENDIF.
* this is a typical log call. Please write every error message like this
            log_write 'E'                  "message type
                      'R3'                 "message class
                      '009'                "message number
                      i_dsource   "message variable 1
                      ' '.                 "message variable 2
            RAISE error_passed_to_mess_handler.
        ENDCASE.
        APPEND LINES OF i_t_select TO s_s_if-t_select.
* Fill parameter buffer for data extraction calls
        s_s_if-requnr    = i_requnr.
        s_s_if-dsource = i_dsource.
        s_s_if-maxsize   = i_maxsize.
* Fill field list table for an optimized select statement
* (in case that there is no 1:1 relation between InfoSource fields
* and database table fields this may be far from being trivial)
        APPEND LINES OF i_t_fields TO s_s_if-t_fields.
      ELSE.                 "Initialization mode or data extraction ?
* Data transfer: First Call      OPEN CURSOR + FETCH
*                Following Calls FETCH only
* First data package -> OPEN CURSOR
    IF s_counter_datapakid = 0.
      SELECT matnr FROM mara INTO TABLE i_matnr
        WHERE mtart IN ('ZFIN','ZRAW','ZSMI').
      IF i_matnr IS NOT INITIAL.
        SELECT matnr werks FROM marc INTO TABLE i_input
          FOR ALL ENTRIES IN i_matnr
          WHERE matnr = i_matnr-matnr.
      ENDIF.
      CLEAR wa_input.
      LOOP AT i_input INTO wa_input.
        CALL FUNCTION 'MD_STOCK_REQUIREMENTS_LIST_API'
          EXPORTING
*           plscn                    =
            matnr                    = wa_input-matnr
            werks                    = wa_input-werks
*           berid                    =
*           ergbz                    =
*           afibz                    =
*           inper                    =
*           display_list_mdpsx       =
*           display_list_mdezx       =
*           display_list_mdsux       =
*           nobuf                    =
*         IMPORTING
*           e_mt61d                  =
*           e_mdkp                   =
*           e_cm61m                  =
*           e_mdsta                  =
          TABLES
*           mdpsx                    =
            mdezx                    = lt_mdezx
*           mdsux                    =
          EXCEPTIONS
            material_plant_not_found = 1
            plant_not_found          = 2
            OTHERS                   = 3.
              IF sy-subrc <> 0.
                MESSAGE i000(abc) WITH 'information'.
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
            WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
              ENDIF.
              LOOP AT lt_mdezx INTO la_mdezx.
                APPEND la_mdezx TO e_t_data.
              ENDLOOP.
            ENDLOOP.
          ENDIF.
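*   NOTE: once all data has been returned, subsequent calls should
*   RAISE no_more_data (as in the SAP template RSAX_BIW_GET_DATA_SIMPLE)
*   so that the Service API stops requesting further packages.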
          s_counter_datapakid = s_counter_datapakid + 1.
        ENDIF.              "Initialization mode or data extraction ?
      ENDFUNCTION.

Hi,
One thing: why do you want to restrict the data package? Anyway, if it is delta-enabled you will make it an initial load and then delta loads.
It is very complex to implement data packaging (as well as the cursor and fetch statements) in a function module.
Thanks,
Debasish

  • Data package size-LAN and WAN

    HI Experts,
Could anybody give an explanation/document for the query below?
When transferring from R/3 to BW, how is the data package size determined? Would the size of data packets be different over LAN and WAN? Why?
    Thanks
    Pradeep

    In transaction SBIW -> General Settings -> Maintain Control Parameters for Data Transfer (in the OLTP System), you can see and edit the default values for the source system.
    If you display the infopackage in the BW system and click on the menu option Scheduler -> DataS. Default Data Transfer, you will be able to edit the settings for the infopackage and also see what's the default configuration for the source system.
    You may also consult the following SAP Notes for more information (including exceptions):
    [417307 - Extractor package size: Collective note for applications.|https://websmp107.sap-ag.de/sap/support/notes/417307]
    [409641 - Examples of packet size dependency on ROIDOCPRMS.|https://websmp107.sap-ag.de/sap/support/notes/409641]
    I don't see what difference it could have over LAN or WAN, though...

  • Data package/container size

We are using MI and I have a little question about data package sizes and container sizes.
If I use data packaging and the size is very small, I read at help.sap.com that MI sends 20 containers.
But how much data is sent by MI? What is the size of 1 container?
I ask this question because we have a little problem with the sizes of the data packages/containers.
So the major question is: what is the size of a container which is sent from MI to the middleware (SAP WAS)?
How can I see the data I sent from MI to the middleware (SAP WAS), and how can I see whether the packages are full or not?
I hope someone can help me. Thanks in advance.
    Kind regards,
    Bart Elshout

Hi Thomas,
Do you know anything about a "fragment bit" which is set when sending data from the SAP MI client to SAP WAS (middleware)?
The data is sent with the "don't fragment" bit set, and one of our switches/routers or something else in the network has to fragment our data, but the bit is set. So the MI client receives an error message (Unexpected end of file from server).
We want to set something in the SAP MI client which says this data packet may be fragmented, so it can pass the network switches/routers/etc.
Can someone help us with this problem?
    Thanks in advance and kind regards,
    Bart Elshout

  • Same set of Records not in the same Data package of the extractor

    Hi All,
I have got one scenario. While extracting records from ECC, based on some condition I want to add some more records. To be more clear: based on some condition I want to add additional lines of data by doing APPEND to C_T_DATA.
For example:
I have a set of records with the same company code, the same contract, the same delivery leg and different pricing legs.
If the delivery leg and pricing leg are 1, then I want to add one line of records.
There will be several records with the same company code, contract, delivery leg and pricing leg. In the extraction logic I will extract with the command i_t_data[] = c_t_data[], then sort by company code, contract, delivery and pricing leg, then DELETE ADJACENT DUPLICATES to get one record; based on this record, with some condition, I will populate the new line of records that my business needs.
My concern is:
If the same set of records overshoots the data package size, how do I handle this? Is there any option?
My data package size is 50,000. Suppose I get the same set of records, i.e. the same company code, contract, delivery leg and pricing leg, as the 49,999th record. If there are 10 records with the same characteristics, the extraction will happen in 2 data packages, and the delete-duplicates logic above will go wrong. How can I handle this scenario? Would a delta-enabled function module help me tackle this? I want to do it only in extraction, as a DataSource enhancement.
    Anil.

Hi,
You will have to do an enhancement of the DataSource.
Please follow the link below.
You can write your logic to add the additional records in the CASE statement for your DataSource (see the sketch below the link).
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c035c402-3d1a-2d10-4380-af8f26b5026f?quicklink=index&overridelayout=true
    Hope this will solve your issue.
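A minimal sketch of that idea, assuming the enhancement is done in the classic customer exit for transaction data (CMOD project RSAP0001, function exit EXIT_SAPLRSAP_001, include ZXRSAU01, with the usual parameters i_datasource and c_t_data); the DataSource name, the extract structure ZOE_CONTRACT_ITEM and the fields DELIVERY_LEG and PRICING_LEG are made-up names:
* Include ZXRSAU01 (sketch only, assumed names)
    DATA: ls_data  TYPE zoe_contract_item,
          lt_extra TYPE STANDARD TABLE OF zoe_contract_item.

    CASE i_datasource.
      WHEN 'ZCONTRACT_ITEMS'.
        LOOP AT c_t_data INTO ls_data.
          IF ls_data-delivery_leg = '1' AND ls_data-pricing_leg = '1'.
*           derive the additional line from the current record here
            APPEND ls_data TO lt_extra.
          ENDIF.
        ENDLOOP.
*       append the derived lines after the loop, so the loop itself is not extended
        APPEND LINES OF lt_extra TO c_t_data.
    ENDCASE.
Note that the exit runs once per data package, so the package-boundary concern from the question still applies; keeping records with the same keys together in one package is what the semantic group setting of the DTP (mentioned further down in this thread) is for.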

  • Issue in Update routine due to Data Package

    We have this peculiar situation.
    The scenario is ..
    We have to load data from ODS1 to ODS2.
    The data package size is 9980 while transferring data from ODS1 to ODS2.
    In the update rule we have some calculations and we rank the records based on these calculations.
    The ODS key for both ODS1 and ODS2 is same ie Delivery Number , Delivery Item & Source System.
    For example a Delivery Number has 12 Delivery Items.
    These Delivery Items are in different Data Packages namely Data Package 1 and Data Package 4.
So instead of having the ranks as 1 to 10, it calculates them as 1 to 5 and then 1 to 5 again for the second set.
But what we require is ranks 1 to 10.
This is due to the fact that the items are in different data packages.
In this case the ABAP routine is working fine, but the data package split is the problem.
Can anybody suggest an alternative solution to this issue?
Thanks in advance for assistance.

    CODE FOR INTER DATA PACKAGE TREATMENT
    PROGRAM UPDATE_ROUTINE.
*$*$ begin of global - insert your declaration only below this line  -*
* TABLES: ...
* DATA:   ...
    DATA: v_packet_nbr TYPE i VALUE 1.
    DATA:
      g_requnr  TYPE rsrequnr.
    DATA:
      l_is        TYPE string VALUE 'G_S_IS-RECNO',
      l_requnr    TYPE string VALUE 'G_S_MINFO-REQUNR'.
    FIELD-SYMBOLS: <g_f1> TYPE ANY,
                   <g_requnr> TYPE ANY.
    TYPES:
      BEGIN OF global_data_package.
            INCLUDE STRUCTURE /bic/cs8ydbim001.
    TYPES: recno   LIKE sy-tabix,
      END OF global_data_package.
    DATA lt_data_package_collect TYPE STANDARD TABLE OF global_data_package.
    DATA ls_datapack TYPE global_data_package.
* datapackage enhancement Declaration
    TYPES: BEGIN OF datapak.
            INCLUDE STRUCTURE /bic/cs8ydbim001.
    TYPES: END OF datapak.
    DATA: datapak1 TYPE STANDARD TABLE OF datapak,
          wa_datapak1 LIKE LINE OF datapak1.
* Declaration for Business Rules implementation
    TYPES : BEGIN OF ty_ydbsdppx.
            INCLUDE STRUCTURE /bic/aydbsdppx00.
    TYPES: END OF ty_ydbsdppx.
    DATA : it_ydbsdppx TYPE STANDARD TABLE OF ty_ydbsdppx WITH HEADER LINE,
           wa_ydbsdppx TYPE ty_ydbsdppx,
           temp TYPE /bic/aydbim00100-price,
           lv_tabix TYPE sy-tabix.
*$*$ end of global - insert your declaration only before this line   -*
* The follow definition is new in the BW3.x
    TYPES:
      BEGIN OF DATA_PACKAGE_STRUCTURE.
         INCLUDE STRUCTURE /BIC/CS8YDBIM001.
    TYPES:
         RECNO   LIKE sy-tabix,
      END OF DATA_PACKAGE_STRUCTURE.
    DATA:
      DATA_PACKAGE TYPE STANDARD TABLE OF DATA_PACKAGE_STRUCTURE
           WITH HEADER LINE
           WITH NON-UNIQUE DEFAULT KEY INITIAL SIZE 0.
    FORM startup
      TABLES   MONITOR STRUCTURE RSMONITOR "user defined monitoring
               MONITOR_RECNO STRUCTURE RSMONITORS " monitoring with record n
               DATA_PACKAGE STRUCTURE DATA_PACKAGE
      USING    RECORD_ALL LIKE SY-TABIX
               SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS
      CHANGING ABORT LIKE SY-SUBRC. "set ABORT <> 0 to cancel update
*$*$ begin of routine - insert your code only below this line        -*
* fill the internal tables "MONITOR" and/or "MONITOR_RECNO",
* to make monitor entries
    TABLES: rsmonfact.
      TYPES:
        BEGIN OF ls_rsmonfact,
          dp_nr TYPE rsmonfact-dp_nr,
        END OF ls_rsmonfact.
      DATA: k TYPE i,
            v_lines_1 TYPE i,
            v_lines_2 TYPE i,
            v_packet_max TYPE i.
* declaration of internal tables
      DATA: it_rsmonfact TYPE STANDARD TABLE OF ls_rsmonfact.
******************* INTER-PACKAGE COLLECTION TREATMENT *******************
      ASSIGN (l_requnr) TO <g_requnr>.
      SELECT dp_nr FROM rsmonfact
        INTO TABLE it_rsmonfact
        WHERE rnr = <g_requnr>.
      DESCRIBE TABLE it_rsmonfact LINES v_packet_max.
      IF v_packet_nbr < v_packet_max.
      APPEND LINES OF DATA_PACKAGE[] TO lt_data_package_collect[].
        CLEAR: DATA_PACKAGE.
        REFRESH DATA_PACKAGE.
        v_packet_nbr = v_packet_nbr + 1.
        CLEAR: MONITOR[], MONITOR.
        MONITOR-msgid = '00'.
        MONITOR-msgty = 'I'.
        MONITOR-msgno = '398'.
        MONITOR-msgv1 = 'All data_packages have been gathered in one. '.
        MONITOR-msgv2 = 'The last DATA_PACKAGE contains all records.'.
        APPEND MONITOR.
      ELSE.
* last data_package => perform Business Rules.
        IF v_packet_max > 1.
          APPEND LINES OF DATA_PACKAGE[] TO lt_data_package_collect[].
          CLEAR: DATA_PACKAGE[], DATA_PACKAGE.
          k = 1.
* We put back all package collected into data_package, handling recno.
          LOOP AT lt_data_package_collect INTO ls_datapack.
            ls_datapack-recno = k.
            APPEND ls_datapack TO DATA_PACKAGE.
            k = k + 1.
          ENDLOOP.
          CLEAR : lt_data_package_collect.
          REFRESH : lt_data_package_collect.
        ENDIF.
* sorting global data package and only keep the first occurence of the
* record
      SORT DATA_PACKAGE BY material plant calmonth.
      DELETE ADJACENT DUPLICATES FROM DATA_PACKAGE
            COMPARING material plant calyear.
      SELECT * FROM /bic/aydbsdppx00
          INTO TABLE it_ydbsdppx
          FOR ALL ENTRIES IN DATA_PACKAGE
            WHERE material = DATA_PACKAGE-material
              AND plant    = DATA_PACKAGE-plant
              AND calyear  = DATA_PACKAGE-calyear.
* Enhance Data_package with Target additionnal fields.
      LOOP AT DATA_PACKAGE.
        CLEAR : wa_datapak1, wa_ydbsdppx.
        MOVE-CORRESPONDING DATA_PACKAGE TO wa_datapak1.
        READ TABLE it_ydbsdppx INTO wa_ydbsdppx
          WITH KEY material = DATA_PACKAGE-material
                      plant = DATA_PACKAGE-plant
                    calyear = DATA_PACKAGE-calyear.
        IF sy-subrc NE 0.       "new product price
          APPEND wa_datapak1 TO datapak1.
        ELSE.                   " a product price already exists
          IF wa_ydbsdppx-calmonth GE DATA_PACKAGE-calmonth.
* keep the eldest one (for each year), or overwrite price if same month
            APPEND wa_datapak1 TO datapak1.
          ENDIF.
        ENDIF.
      ENDLOOP.
    ENDIF.
* if abort is not equal zero, the update process will be canceled
      ABORT = 0.
*$*$ end of routine - insert your code only before this line         -*
    ENDFORM.

My DSO does not activate. How do I see the contents of a data package?

My DSO has data in the new data table, but its status turns RED when I try to activate. I used the standard DataSource 0FI_GL_4 and the standard DSO 0FIGL_O02. Pretty straightforward, no added fields or objects. I have deleted 2 requests from previous loads and executed the DTP as a full update. The new data table has new data now, but I could not activate.
Please help; what should I check for?
REQUEST STATUS is 'Error occurred during activation process'.
REQUEST FOR REPORT AVAILABLE shows 'Request available for reporting'.
In the log I can see RED on data package 000039. When I click on it, the error is:
value 'electronic account statum' of characteristic 0DOC_HD_TXT cont...
From the long text it looks like an INVALID CHARACTER issue.
How do I see the contents of a particular data package (000039)?
Thanks

    Hi,
This can happen if the data package size from R/3 to PSA is larger than the data package size from PSA to DSO. Assume that up to the PSA the data package size is 50,000 and you receive 9 data packages; the total number of records then equals 50,000 x 9. Now suppose that from PSA to DSO the data package size is 10,000; the number of data packages increases from 9 to 45, that is 9 x 50,000 / 10,000.
If you have a problem in data package 39 (in the above scenario), then you should look in data package 8 of the PSA (39 x 10,000 / 50,000 = 7.8, so round up), rectify the corresponding record in the PSA and load it to the DSO.
Similarly, you will have to calculate the data package number of the PSA in your own scenario.
    Navesh

  • Data Package Issue in DTP

    Hi gurus,
My data flow is DataSource -> InfoSource -> write-optimized DSO with semantic key.
In the source I have 10 records, of which 7 are duplicates (same set of keys).
I reduced the DTP data package size from 50,000 to 5.
When I executed the DTP, I got 2 data packages: the first data package held all 7 records for the same set of keys, and the second data package held the remaining records.
My doubt is: I have defined the data package size as 5, so how can the first data package hold 7 records instead of 5?
    Thanks in advance !

Hi,
It is because of the semantic key setting that you have maintained. Data records that have the same key are combined in a single data package. This setting is only relevant for DataStore objects with data fields that are overwritten.
Semantic groups define how the data packages read from the source (DataSource or InfoProvider) are built.
This setting also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated in the target in the correct order once the incorrect data records have been corrected.
Hope it helps.
    Thanks
    Kamal Mehta

  • Info on package size

    Hi,
I would like to understand what this setting in the InfoPackage means:
Maximum size of a data packet in kByte = 50000
How do I determine what package size is going to come from the source system? Is there any documentation or note that tells what a suitable package size is?
Number of data packets per Info-IDoc = 10
How do records get divided per package? On what logic?
    I extracted the data into PSA.
    DP 1 = 8000 records
    DP2 = 9500
    DP3 = 8400.
The number of records is not the same in all the packages. How does it work?
    Thanks
    Annie

The individual records are sent in packages of varying sizes during the data transfer to the Business Information Warehouse. Using these parameters you determine the maximum size of such a package, and therefore how much main memory may be used to create the data package.
SAP recommends a data package size between 10 and 50 MB.
If you transfer a lot of fields, you can transfer fewer lines;
if you transfer fewer fields, you can transfer more lines.
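As a rough rule of thumb (based on the ROIDOCPRMS control parameters, see the notes linked elsewhere in this thread): the number of records per package is approximately MAXSIZE (in kByte) x 1000 divided by the width of the transfer structure in bytes, capped by the maximum number of rows. With MAXSIZE = 50,000 kByte and a 500-byte record, for example, a package would hold roughly 100,000 records.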
    more info :
    http://help.sap.com/saphelp_nw70/helpdata/en/51/85d6cf842825469a51b9a666442339/content.htm
    M.

  • Determining support package size

    Hi,
I want to apply new support packages in my BI system. Our system is low on memory. I want to know how you determine how much space a support package will take in the system.

    Hi Priya,
The support package size is mentioned in the Service Marketplace itself; while downloading, you can see the size of the SAR file.
    According to that you can plan.
    Regards,
    Raja. G

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error:
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
I did some googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
Set the data cache size.
Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
So my doubt is: if we edit the essbase.cfg file, add the above settings and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
CALCLOCKBLOCKHIGH 500
CALCLOCKBLOCKDEFAULT 200
CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
The support doc is saying to change your config file so that those settings are available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.
