Performance considerations for XI

Hello
Suppose we have a scenario where around 1000 IDocs are to be pumped into XI (say, 100 orders are generated every day and sent to different vendors at a time). What typical performance problems should we anticipate while going ahead with the design, and how should we take care of them?
A few things that came to my mind were:
1] Going with message mapping with an increased heap size (since I guess message mapping performs best compared to the other mapping types).
2] Taking the unwanted segments off on the sending side itself by using the IDoc distribution technique.
Please provide any suggestions and additions to this.
Thanks in advance.
Regards
rajeev

1. You may also use XSL mapping in case the mapping is complex.
2. This is a good way of handling IDocs.
In addition, you may use the IDoc packaging option:
/people/michal.krawczyk2/blog/2007/12/02/xipi-sender-idoc-adapter-packaging
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/877c0d53-0801-0010-3bb0-e38d5ecd352c
Regards,
Prateek
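
A further option when mapping memory is the concern is a stream-based Java mapping. The sketch below is illustrative only, not taken from the thread, and assumes the classic XI 3.0 com.sap.aii.mapping.api.StreamTransformation interface; it simply copies the payload through in chunks rather than doing a real order mapping:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.Map;
import com.sap.aii.mapping.api.StreamTransformation;
import com.sap.aii.mapping.api.StreamTransformationException;

public class PassThroughMapping implements StreamTransformation {

    public void setParameter(Map param) {
        // Runtime parameters (sender/receiver service, interface, etc.) arrive here.
    }

    public void execute(InputStream in, OutputStream out)
            throws StreamTransformationException {
        try {
            // Copy the payload through in 8 KB chunks instead of building a
            // DOM, so memory use stays flat however large the IDoc bundle is.
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.flush();
        } catch (IOException e) {
            throw new StreamTransformationException(e.getMessage());
        }
    }
}

Because nothing is ever held as a full tree in memory, this streaming pattern keeps heap usage roughly constant regardless of how many IDocs are bundled into one message.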

Similar Messages

  • Performance tuneup for a special DB (disable locking, check-pointing,...)

    Hi,
    I have a simple database containing key/value records. The program is a multi-threaded application that iterates over records. Each worker thread reads a record and, after some calculations, replaces it. The records are completely independent of each other. The following is my DBController class, which is shared between all threads. Are there any considerations for achieving the best performance? For example, I don't want any locking, check-pointing, or caching overheads. I just want to achieve THE BEST PERFORMANCE to store and retrieve each record independently.
    import gnu.trove.*;
    import java.io.*;
    import com.sleepycat.bind.tuple.*;
    import com.sleepycat.je.*;

    public class DBController {

        private class WikiSimTupleBinding extends TupleBinding<TIntObjectHashMap<TIntDoubleHashMap>> {

            // Write an appropriate object to a TupleOutput (a DatabaseEntry)
            public void objectToEntry(TIntObjectHashMap<TIntDoubleHashMap> object, TupleOutput to) {
                try {
                    ByteArrayOutputStream bout = new ByteArrayOutputStream();
                    ObjectOutputStream oout = new ObjectOutputStream(bout);
                    oout.writeObject(object);
                    oout.flush();
                    oout.close();
                    bout.close();
                    byte[] data = bout.toByteArray();
                    to.write(data);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            // Convert a TupleInput (a DatabaseEntry) to an appropriate object
            public TIntObjectHashMap<TIntDoubleHashMap> entryToObject(TupleInput ti) {
                TIntObjectHashMap<TIntDoubleHashMap> object = null;
                try {
                    byte[] data = ti.getBufferBytes();
                    object = (TIntObjectHashMap<TIntDoubleHashMap>) new java.io.ObjectInputStream(
                            new java.io.ByteArrayInputStream(data)).readObject();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return object;
            }
        }

        private Environment myDbEnvironment = null;
        private Database db_R = null;
        private WikiSimTupleBinding myBinding;

        public DBController(File dbEnv) {
            try {
                // Open the environment. Create it if it does not already exist.
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                myDbEnvironment = new Environment(dbEnv, envConfig);
                // Open the database. Create it if it does not already exist.
                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                db_R = myDbEnvironment.openDatabase(null, "R", dbConfig);
                // Initialize the binding API
                myBinding = new WikiSimTupleBinding();
            } catch (DatabaseException dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }

        private final byte[] intToByteArray(int value) {
            return new byte[] { (byte) (value >>> 24), (byte) (value >>> 16), (byte) (value >>> 8), (byte) value };
        }

        public void put(int id, TIntObjectHashMap<TIntDoubleHashMap> repository) {
            try {
                DatabaseEntry theKey = new DatabaseEntry(intToByteArray(id));
                DatabaseEntry theData = new DatabaseEntry();
                myBinding.objectToEntry(repository, theData);
                db_R.put(null, theKey, theData);
            } catch (Exception dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }

        public TIntObjectHashMap<TIntDoubleHashMap> get(int id) {
            TIntObjectHashMap<TIntDoubleHashMap> repository = null;
            try {
                // theKey is used to perform the search; theData receives the
                // data returned by the get() operation.
                DatabaseEntry theKey = new DatabaseEntry(intToByteArray(id));
                DatabaseEntry theData = new DatabaseEntry();
                // Perform the get.
                if (db_R.get(null, theKey, theData, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    // Recreate the data repository
                    repository = myBinding.entryToObject(theData);
                } else {
                    System.out.println("No record found for key '" + id + "'.");
                }
            } catch (Exception e) {
                // Exception handling goes here
                e.printStackTrace();
            }
            return repository;
        }

        public void close() {
            // Close the database and the environment
            try {
                if (db_R != null)
                    db_R.close();
                if (myDbEnvironment != null)
                    myDbEnvironment.close();
            } catch (DatabaseException dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }
    }

    If you are writing and you need to recover in a reasonable amount of time after a crash, you need checkpointing.
    If multiple threads may access a record concurrently, you need locking.
    If you need good read performance, you need as large a JE cache as possible.
    If you want to tune performance, the first step is to print the EnvironmentStats (Environment.getStats) periodically, and read the FAQ performance section. Try to find out if your app's performance is limited by CPU or I/O.
    If you are reading records in key order, then you'll get better performance if you also write them in key order.
    I'm not sure why you're using a TupleBinding to do Java object serialization. If you want Java serialization, try using a SerialBinding.
    --mark
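
    Building on the points above, here is a minimal, hypothetical sketch of how those knobs map onto the JE API (cache size, periodic stats, and, only if you truly accept the crash-recovery and concurrency trade-offs described above, disabling locking and using deferred write). The path and cache size are made-up example values:

    import java.io.File;
    import com.sleepycat.je.*;

    public class TunedEnvSketch {
        public static void main(String[] args) throws DatabaseException {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            // Give JE as large a cache as the heap allows (512 MB here is arbitrary).
            envConfig.setCacheSize(512L * 1024 * 1024);
            // Only safe when no two threads ever touch the same record and the
            // data can be rebuilt after a crash -- see the caveats above.
            envConfig.setLocking(false);
            Environment env = new Environment(new File("/tmp/je-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            // Deferred write buffers changes in the cache; flush them yourself
            // with Database.sync() instead of paying for a checkpoint on every write.
            dbConfig.setDeferredWrite(true);
            Database db = env.openDatabase(null, "R", dbConfig);

            // Print environment statistics periodically to see whether the
            // workload is CPU- or I/O-bound, as suggested above.
            StatsConfig sc = new StatsConfig();
            sc.setClear(true); // reset counters after each snapshot
            System.out.println(env.getStats(sc));

            db.sync(); // flush deferred writes before closing
            db.close();
            env.close();
        }
    }

    On the binding question: com.sleepycat.bind.serial.SerialBinding with a StoredClassCatalog stores the Java class descriptor once in a catalog database instead of embedding it in every record, which makes records noticeably smaller than the ObjectOutputStream-per-record approach in the original code.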

  • XML Embedded in Stored Function - Performance Considerations

    A developer in my company approached us with a question about performance considerations while writing stored procedures or functions that embed XML.
    The primary use for this function would be to provide a quick decision given a set of parameters. The function will use the input parameters along with some simple calculations and DB lookups to come up with an answer. These parameters will be stored in the database. Potentially even more parameters that are currently represented in the xml will be available in the DB and therefore could be looked up by the function.
    My biggest question is whether this way of using XML as an input parameter introduces any performance considerations or concerns for storage/bandwidth etc.
    Thank you
    Edited by: user8699561 on May 19, 2010 9:24 AM

    Storage/bandwidth will be determined by the size of the XML doc, but there are ways to keep those to a minimum (binary XML support in JDBC, e.g.). Performance overhead in general... eh... "it depends" (on how you set it up)...

  • Physical Database Design Steps & Performance Considerations

    Hi,
    We have an Oracle 9i installation and need help to create a DB.
    We need to know the physical database design steps and performance considerations,
    like:
    1 - Technical considerations for the DB as per server capacity. How to calculate this?
    2 - What will be the best design parameters for the DB?
    Can you please help with how to do that? Any Metalink ID would help to get this information.
    thanks
    kishor

    there is SOOO much to consider . . . .
    Just a FEW things are . . .
    Hardware - What kind of Host is the database going to run on?
    CPU and Memory
    What kind of Storage
    What is the Network like?
    What is the database going to do OLTP or DW?
    Start with your NEEDS and work to fulfill those needs on the budget given.
    Since you say Physical Database Design . . . is your Logical Database Design done?
    Does it fulfill the need of your application?

  • Performance enhancement for parallel loops

    Hi,
    I have a performance problem with the following parallel loops. Please help me solve this to improve the performance of the report, urgently.
    LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
      lv_wa_final-afnam = lv_wa_ekpo-afnam.
      LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebeln = lv_wa_ekpo-ebeln
                                            AND ebelp = lv_wa_ekpo-ebelp.
        lv_wa_final-meins = lv_wa_ekpo-meins.
        READ TABLE xt_git_ekko INTO lv_wa_ekko
             WITH KEY ebeln = lv_wa_ekpo-ebeln
             BINARY SEARCH.
        IF sy-subrc IS INITIAL.
          lv_wa_final-ebeln = lv_wa_ekko-ebeln.
          lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
          lv_wa_final-txz01 = lv_wa_ekpo-txz01.
          lv_wa_final-aedat = lv_wa_ekko-aedat.
          READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
               WITH KEY lifnr = lv_wa_ekko-lifnr
               BINARY SEARCH.
          IF sy-subrc IS INITIAL.
            lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
            lv_wa_final-name1 = lv_wa_lfa1-name1.
          ENDIF.
          LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebeln = lv_wa_ekpo-ebeln
                                                AND ebelp = lv_wa_ekpo-ebelp.
    Waiting for a quick reply.

    Hi,
    You can use SORTED TABLEs instead of STANDARD TABLEs:
    DATA: xt_git_ekkn TYPE SORTED TABLE OF ekkn WITH NON-UNIQUE KEY ebeln ebelp,
          xt_git_ekbe TYPE SORTED TABLE OF ekbe WITH NON-UNIQUE KEY ebeln ebelp.
    LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
      lv_wa_final-afnam = lv_wa_ekpo-afnam.
      LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebeln = lv_wa_ekpo-ebeln
                                            AND ebelp = lv_wa_ekpo-ebelp.
        lv_wa_final-meins = lv_wa_ekpo-meins.
        READ TABLE xt_git_ekko INTO lv_wa_ekko
             WITH KEY ebeln = lv_wa_ekpo-ebeln BINARY SEARCH.
        IF sy-subrc IS INITIAL.
          lv_wa_final-ebeln = lv_wa_ekko-ebeln.
          lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
          lv_wa_final-txz01 = lv_wa_ekpo-txz01.
          lv_wa_final-aedat = lv_wa_ekko-aedat.
          READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
               WITH KEY lifnr = lv_wa_ekko-lifnr BINARY SEARCH.
          IF sy-subrc IS INITIAL.
            lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
            lv_wa_final-name1 = lv_wa_lfa1-name1.
          ENDIF.
          LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebeln = lv_wa_ekpo-ebeln
                                                AND ebelp = lv_wa_ekpo-ebelp.
    Anyway, you should consider loading into the internal tables only the records of the current document; in this case you need to move the SELECTs into the loop:
    SORT xt_git_ekpo BY ebeln ebelp.
    LOOP AT xt_git_ekpo INTO lv_wa_ekpo.
      lv_wa_final-afnam = lv_wa_ekpo-afnam.
      IF lv_wa_ekkn-ebeln <> lv_wa_ekpo-ebeln.
        SELECT * FROM ekkn INTO TABLE xt_git_ekkn WHERE ebeln = lv_wa_ekpo-ebeln.
        SELECT * FROM ekbe INTO TABLE xt_git_ekbe WHERE ebeln = lv_wa_ekpo-ebeln.
      ENDIF.
      LOOP AT xt_git_ekkn INTO lv_wa_ekkn WHERE ebelp = lv_wa_ekpo-ebelp.
        lv_wa_final-meins = lv_wa_ekpo-meins.
        READ TABLE xt_git_ekko INTO lv_wa_ekko
             WITH KEY ebeln = lv_wa_ekpo-ebeln BINARY SEARCH.
        IF sy-subrc IS INITIAL.
          lv_wa_final-ebeln = lv_wa_ekko-ebeln.
          lv_wa_final-ebelp = lv_wa_ekpo-ebelp.
          lv_wa_final-txz01 = lv_wa_ekpo-txz01.
          lv_wa_final-aedat = lv_wa_ekko-aedat.
          READ TABLE xt_git_lfa1 INTO lv_wa_lfa1
               WITH KEY lifnr = lv_wa_ekko-lifnr BINARY SEARCH.
          IF sy-subrc IS INITIAL.
            lv_wa_final-lifnr = lv_wa_lfa1-lifnr.
            lv_wa_final-name1 = lv_wa_lfa1-name1.
          ENDIF.
          LOOP AT xt_git_ekbe INTO lv_wa_ekbe WHERE ebelp = lv_wa_ekpo-ebelp.
    In my experience (with a very large number of records) the second solution was faster than the first one:
    - Using the first solution (load all the data into internal tables and use sorted tables): my job took 2-3 days.
    - Using the second solution: my job took 1 hour.
    Max

  • Performance Optimization for Cubes

    Hi All,
    In our project we have a daily process chain which refreshes four reporting cubes and takes 8-10 hours to complete. We suggested archiving the historical data into a new cube to improve the performance of the daily load.
    In UAT, the performance of the daily load did not improve after we performed the archiving.
    Kindly suggest performance improvements for the cubes.
    Regards
    Suresh Kumar

    Hi,
    Before loading the cube you need to delete the indexes, and once the load is complete, recreate them. For this, go to the manage screen of the InfoCube -> Performance tab.
    Also create the DB statistics, in the same place (manage screen of the InfoCube -> Performance tab). This will reduce the load time by a considerable amount.
    Also increase the maximum size of the data packet in the InfoPackage: go to the InfoPackage -> Scheduler (menu bar) -> Data S. Default Data Transfer, and increase the size to a considerable amount (not very high). Also increase the number of data packets per info IDoc; this field is available just after the maximum data packet size in the InfoPackage.
    Hope It Helps,
    Regards,
    Amit
    Edited by: Amit Kr on Sep 4, 2009 5:37 PM

  • Performance Consideration when updating NX-OS?

    Are there any performance considerations on the SAN switches that we need to monitor prior to updating NX-OS?

    If it's a non-disruptive upgrade, you never lose FC connectivity. Always read the release notes for the version you are upgrading to, to make sure there are no surprises (hardware support, etc.).
    @dynamoxxx

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It is a pre-productive system and its performance is degrading badly every day. While checking the system I came to know that the program buffer is swapping heavily and the swaps are increasing every day, so the parameter abap/buffersize was changed from 300 MB to 500 MB. But still no major improvement can be seen in the system.
    There is 16 GB RAM available, and the server is HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes way too long.
    Kindly help me.

    Hello Siva,
    Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer               HitRatio %  Alloc. KB  Freesp. KB  % Free Sp.  Dir. Size  FreeDirEnt  % Free Dir    Swaps    DB Accs
    Nametab (NTAB)
      Table definition        99,60      6.798                             20.000                            29.532    153.221
      Field definition        99,82     31.562        784        2,61      20.000      6.222       31,11     17.246     41.248
      Short NTAB              99,94      3.625      2.446       81,53       5.000      2.801       56,02          0      2.254
      Initial records         73,95      6.625        998       16,63       5.000        690       13,80     40.069     49.528
    Program                   97,66    300.000      1.074        0,38      75.000     67.177       89,57    219.665    725.703
    CUA                       99,75      3.000        875       36,29       1.500      1.401       93,40     55.277      2.497
    Screen                    99,80      4.297      1.365       33,35       2.000      1.811       90,55        119      3.214
    Calendar                 100,00        488        361       75,52         200         42       21,00          0        158
    OTR                      100,00      4.096      3.313      100,00       2.000      2.000      100,00          0
    Tables
      Generic Key             99,17     29.297      1.450        5,23       5.000        350        7,00      2.219  3.085.633
      Single record           99,43     10.000      1.907       19,41         500        344       68,80         39    467.978
    Export/import             82,75      4.096         43        1,30       2.000        662       33,10    137.208
    Exp./ Imp. SHM            89,83      4.096        438       13,22       2.000      1.482       74,10          0

    SAP Memory         Curr. Use %  CurUse[KB]  MaxUse[KB]  In Mem[KB]  OnDisk[KB]  SAPCurCach  HitRatio %
    Roll area                 2,22       5.832      22.856     131.072     131.072  IDs              96,61
    Page area                 1,08       2.832      24.144      65.536     196.608  Statement        79,00
    Extended memory          22,90     958.464   1.929.216   4.186.112           0                    0,00
    Heap memory                                0           0   1.473.767          0                    0,00

    Call Stati        HitRatio %   ABAP/4 Req   ABAP Fails   DBTotCalls  AvTime[ms]   DBRowsAff.
      Select single        88,59   63.073.369    5.817.659    4.322.263           0   57.255.710
      Select               72,68  284.080.387            0   13.718.442           0   32.199.124
      Insert                0,00      151.955        5.458      166.159           0      323.725
      Update                0,00      378.161       97.884      395.814           0      486.880
      Delete                0,00      389.398      332.619      415.562           0      244.495
    Edited by: Srikanth Sunkara on May 12, 2011 11:50 AM

  • Performance issue for this function-module(HR_TIM_REPORT_ABSENCE_DATA)

    Hi Friends
    I am having a performance issue with this function module (HR_TIM_REPORT_ABSENCE_DATA), and my client has over 8 thousand employees. This function module takes forever to read the data. Is there any other function module to read the absence data (IT2001)?
    I used it as shown below. If I take out the function module 'HR_TIM_REPORT_ABSENCE_DATA_INI', the other function module does not work. Please suggest.
    CALL FUNCTION 'HR_TIM_REPORT_ABSENCE_DATA_INI'
      EXPORTING                           "publishing to global memory
        option_string = option_s          "string of sel org fields
        trig_string   = trig_s            "string of req data
        alemp_flag    = sw_alemp          "all employees required
        infot_flag    = space             "split per infotype necessary
        sel_modus     = sw_apa
      IMPORTING
        org_num       = fdpos_lines       "number of sel org fields
      TABLES
        fieldtab      = fdtab             "all org fields
        field_sel     = fieldnametab_m.   "sel org fields

    * Read all infotypes for absence types.
    RP_READ_ALL_TIME_ITY PN-BEGDA PN-ENDDA.

    * Central function unit to provide internal tables: abse orgs empl
    CALL FUNCTION 'HR_TIM_REPORT_ABSENCE_DATA'
      EXPORTING
        pernr       = pernr-pernr
        begda       = pn-begda
        endda       = pn-endda
      IMPORTING
        subrc       = subrc_rta
      TABLES
        absences    = absences_01
        org_fields  = orgs
        emp_fields  = empl
    *   reftab      =
    *   apltab      =
        awart_sel_p = awart_s[]
        awart_sel_a = awart_s[]
        abstp_sel   = abstp_s[]
        i0000       = p0000
        i0001       = p0001
        i0002       = p0002
        i0007       = p0007
        i2001       = p2001
        i2002       = p2002
        i2003       = p2003.
    Thanks & Regards
    Reddy

    Guessing will not help you much; check with SE30 to get a better insight.
    SE30
    The ABAP Runtime Trace (SE30) - Quick and Easy
    What is the total time, and what are the Top 10 in the hitlist?
    Siegfried

  • Performance Tuning for a report

    Hi,
    We have developed a program which updates 2 fields, namely Reorder Point and Rounding Value, on the MRP1 tab in transaction MM03.
    To update the fields, we are using the BAPI BAPI_MATERIAL_SAVEDATA.
    The problem is that when we upload the data using a txt file, the program takes a very long time. Recently, when we uploaded a file containing 200,000 records, it took 27 hours. Below is the main portion of the code (I have omitted the OPEN DATASET etc.). Please help us fine-tune this so that we can upload these 200,000 records in 2-3 hours.
    SELECT matnr FROM mara INTO TABLE t_mara.
    SELECT werks FROM t001w INTO CORRESPONDING FIELDS OF TABLE t_t001w.
    SELECT matnr werks FROM marc INTO CORRESPONDING FIELDS OF TABLE t_marc.

    LOOP AT str_table INTO wa_table.
      IF NOT wa_table-partnumber IS INITIAL.
        CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
          EXPORTING
            input  = wa_table-partnumber
          IMPORTING
            output = wa_table-partnumber.
      ENDIF.
      CLEAR wa_message.
      READ TABLE t_mara INTO wa_mara WITH KEY matnr = wa_table-partnumber.
      IF sy-subrc IS NOT INITIAL.
        CONCATENATE 'material ' wa_table-partnumber ' does not exist'
          INTO wa_message.
        APPEND wa_message TO t_message.
      ENDIF.
      READ TABLE t_t001w INTO wa_t001w WITH KEY werks = wa_table-hostlocid.
      IF sy-subrc IS NOT INITIAL.
        CONCATENATE 'plant ' wa_table-hostlocid ' does not exist'
          INTO wa_message.
        APPEND wa_message TO t_message.
      ELSE.
        CASE wa_t001w-werks.
          WHEN 'DE40' OR 'DE42' OR 'DE44' OR 'CN61'
            OR 'US62' OR 'SG70' OR 'FI40'.
            READ TABLE t_marc INTO wa_marc WITH KEY matnr = wa_table-partnumber
                                                    werks = wa_table-hostlocid.
            IF sy-subrc IS NOT INITIAL.
              CONCATENATE 'material' wa_table-partnumber ' not extended to plant'
                wa_table-hostlocid INTO wa_message.
              APPEND wa_message TO t_message.
            ENDIF.
          WHEN OTHERS.
            CONCATENATE 'plant ' wa_table-hostlocid ' not allowed'
              INTO wa_message.
            APPEND wa_message TO t_message.
        ENDCASE.
      ENDIF.
      IF wa_message IS INITIAL.
        DATA: wa_headdata   TYPE bapimathead,
              wa_plantdata  TYPE bapi_marc,
              wa_plantdatax TYPE bapi_marcx.
        wa_headdata-material     = wa_table-partnumber.
        wa_plantdata-plant       = wa_table-hostlocid.
        wa_plantdatax-plant      = wa_table-hostlocid.
        wa_plantdata-reorder_pt  = wa_table-rop.
        wa_plantdatax-reorder_pt = 'X'.
        wa_plantdata-round_val   = wa_table-eoq.
        wa_plantdatax-round_val  = 'X'.
        CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
          EXPORTING
            headdata   = wa_headdata
            plantdata  = wa_plantdata
            plantdatax = wa_plantdatax
          IMPORTING
            return     = t_bapiret.
        CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
        WRITE t_bapiret-message.
      ENDIF.
      CLEAR: wa_mara, wa_t001w, wa_marc.
    ENDLOOP.

    LOOP AT t_message INTO wa_message.
      WRITE wa_message.
    ENDLOOP.
    Thanks in advance.
    Peter
    Edited by: kishan P on Sep 17, 2010 4:50 PM

    Hi Peter,
    I would suggest a few changes in your code. Please refer to the procedure below to optimize it.
    Steps:
    1. Run an SE30 runtime analysis and find out whether the ABAP code or the database fetch is taking the time.
    2. Run the extended program check or Code Inspector to remove any errors and warnings.
    3. A few code changes that I would suggest:
       - For the SELECTs from t001w and marc, remove the CORRESPONDING FIELDS clause, as it reduces performance. (Define an internal table with only the required fields, in the order they appear in the database table, and select into that.)
       - Add an initial check that str_table[] is not empty before you execute the loop.
       - Wherever you have used READ TABLE, sort the table first and use BINARY SEARCH.
       - Clear the work areas after every APPEND statement.
       - As I don't have an SAP system handy, I would also check whether the BAPI's importing parameter is a table. In case it is a table, pass all the records to it at once and call the BAPI a single time, rather than looping over every record and updating one at a time.
    Hope this helps to resolve your problem.
    Have a nice day
    Thanks

  • Regarding performance widget (performance objects) for custom dashboard

    Hi,
    I'm making a custom dashboard and trying to use the performance widget for a NetApp volume, but the performance objects are not available and I only see "(All)" in the options (see screenshot). When I create the dashboard, it shows the widget with empty information.
    Thanks for any suggestions.

    Hi all,
    I managed to fix all my dashboards. From what I understand it is not a supported MS fix, as per articles I found on forums etc.
    My issues started when I renamed some of my groups because of requirements. I think SCOM 2012 doesn't like the rename; it's a serious bug that I think causes the info in the DBs to go out of sync.
    I also found that data from the main OpsMgr DB had not been submitted to the DW DB for months, so my reporting was way out. Hence blank stats on dashboards, and uptime reports showed grey.
    I followed the article below and fixed it. Please ensure that your DBs are fully backed up before doing this: both the OperationsManager DB and the OperationsManagerDW DB.
    The issue in the end is actually with the DW DB.
    http://operationsmanager2012.blogspot.com/2013/02/scom-availability-report-monitoring.html
    Regards
    Simon Craner

  • Insert, search, delete performance overhead for different collections

    Hi,
    I am trying to create a table which compares the performance overheads of different collection data structures. Does anyone want to help me out? I want to put a number from 1-9 in each of the question marks, 1 being very poor performance and 9 being very good performance. (The reason I am doing this is that I had this question in a job interview test and I didn't pass it.)
    anyone have any comments?
                 Searching   Inserting   Deleting
    ArrayList        ?           ?           ?
    LinkedList       ?           ?           ?
    TreeSet          ?           ?           ?
    TreeMap          ?           ?           ?
    HashMap          ?           ?           ?
    HashSet          ?           ?           ?
    Stack            ?           ?           ?

    Sorry, the formatting got screwed up a bit when I posted it. It should have a list of the collection types and three columns (inserting, deleting, searching).
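
    No numbers were posted in the thread, but you can measure them yourself. The sketch below is a rough, hypothetical micro-benchmark (the sizes and step values are arbitrary, and for trustworthy numbers you would use a proper harness such as JMH). As a rule of thumb, hash-based structures average O(1) per lookup, tree-based ones O(log n), and ArrayList is O(n) for searching but O(1) amortized for appending:

    import java.util.*;

    public class CollectionBench {
        static final int N = 100_000; // arbitrary workload size

        public static void main(String[] args) {
            Map<String, Collection<Integer>> candidates = new LinkedHashMap<>();
            candidates.put("ArrayList", new ArrayList<>());
            candidates.put("LinkedList", new LinkedList<>());
            candidates.put("TreeSet", new TreeSet<>());
            candidates.put("HashSet", new HashSet<>());
            candidates.put("Stack", new Stack<>());

            for (Map.Entry<String, Collection<Integer>> e : candidates.entrySet()) {
                Collection<Integer> c = e.getValue();
                long t0 = System.nanoTime();
                for (int i = 0; i < N; i++) c.add(i);            // inserting
                long t1 = System.nanoTime();
                for (int i = 0; i < N; i += 100) c.contains(i);  // searching
                long t2 = System.nanoTime();
                // The static type is Collection, so this resolves to
                // remove(Object), not List.remove(int index).
                for (int i = 0; i < N; i += 100) c.remove(i);    // deleting
                long t3 = System.nanoTime();
                System.out.printf("%-10s insert %4d ms  search %4d ms  delete %4d ms%n",
                        e.getKey(), (t1 - t0) / 1_000_000,
                        (t2 - t1) / 1_000_000, (t3 - t2) / 1_000_000);
            }

            // TreeMap and HashMap are Maps, not Collections, so time them separately.
            Map<Integer, Integer> tree = new TreeMap<>();
            long t0 = System.nanoTime();
            for (int i = 0; i < N; i++) tree.put(i, i);
            System.out.printf("%-10s insert %4d ms%n", "TreeMap",
                    (System.nanoTime() - t0) / 1_000_000);
        }
    }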

  • iPhoto was working fine. Performed update for HP printer, which should have nothing to do with iPhoto. But the next time I tried to open iPhoto, got this message: "To open your library with this version of iPhoto, it first needs to be prepared." When I g

    iPhoto was working fine. Performed update for HP printer, which should have nothing to do with iPhoto. But the next time I tried to open iPhoto, got this message: "To open your library with this version of iPhoto, it first needs to be prepared." When I g

    What version of iPhoto? Assuming 09 or later...
    Option 1
    Back up and try to rebuild the library: hold down the Command and Option (or Alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. (In early versions of Library Manager it's the File -> Rebuild command. In later versions it's under the Library menu.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new Library, and arrange it as close as it can to what you had in the damaged Library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there, so no slideshows, books or calendars, for instance - but it should get all your events, albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.  
    Regards
    TD

  • Cannot perform rounding for invoices with a currency other than the documen

    Hi all,
    I need someone to help me.
    I want to process an incoming payment.
    The A/R Invoice is in USD.
    In the Incoming Payment I want to pay using SGD.
    I have already set the BP to all currencies.
    I have also set the bank account in the bank transfer payment means to all currencies.
    But when I add the document I get this message:
    "Cannot perform rounding for invoices with a currency other than the document currency. [Message 3524-31]"
    What should I do?
    Thanks in advance
    Regards
    KK

    Hi Gordon,
    Thanks for your response.
    I still do not understand what you mean.
    I tested it in SBO DEMO AU, where the local currency is AUD and the system currency is also AUD.
    Is there any special setting for this issue?
    Or did I miss a setting in SBO?
    My version is 8.81 PL 9.
    Thanks in advance

  • Performance Tuning for BAM 11G

    Hi All
    Can anyone point me to any documents or tips related to performance tuning for BAM 11g on Linux?

    It would help to know if you have a specific issue. There are a number of tweaks all the way from the DB to the browser.
    Few key things to follow:
    1. Make sure you create indexes on the DO. If there is too much old data in the DO that is no longer useful, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
    2. Ensure that IE is set up to do automatic caching. This will help reduce server round trips.
    3. Tune DB performance. This typically requires a DBA. Identify the SQL statements most likely to be causing the waits by looking at the drilldown Top SQL Statements Ordered by Wait Time. Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified. Check the data object tables involved in the query for missing indexes.
    4. Use batching (this is on by default for most cases).
    5. Use a fast network.
    6. Use profilers to look at machine load/CPU usage, and distribute components onto different boxes if needed.
    7. Use better server AND client hardware. BAM dashboards are heavy users of AJAX/JavaScript logic on the client.
