Performance Improvement for Join

I have the following join statement. I have 220,000 records ... when I run this program, it times out. Can anyone suggest a performance improvement?
  SELECT vbap~vbeln  vbap~posnr  vbap~kwmeng vbap~matnr
         vbap~werks  vbap~kdmat  vbap~posex  vbak~auart
         vbak~kunnr  vbak~vdatu  vbep~edatu  lips~vbeln
         lips~posnr  lips~lfimg  marc~dispo  marc~beskz "--> PV
         makt~maktx  mara~meins  vbkd~ihrez  vbkd~empst
         vbkd~bstkd  vbkd~bstkd_e vbpa~ablad
  INTO TABLE lt_orders_t
  FROM vbak INNER JOIN vbap
              ON ( vbak~vbeln = vbap~vbeln )
            INNER JOIN vbep
              ON ( vbap~posnr = vbep~posnr AND vbap~vbeln = vbep~vbeln )
            LEFT OUTER JOIN lips
              ON ( vbap~vbeln = lips~vgbel AND vbap~posnr = lips~vgpos )
            LEFT OUTER JOIN marc
              ON ( vbap~matnr = marc~matnr AND vbap~werks = marc~werks )
            LEFT OUTER JOIN makt
              ON ( vbap~matnr = makt~matnr AND makt~spras = 'E'        )
            LEFT OUTER JOIN mara
              ON ( vbap~matnr = mara~matnr )
            LEFT OUTER JOIN vbkd
              ON ( vbap~vbeln = vbkd~vbeln AND vbap~posnr = vbkd~posnr )
            LEFT OUTER JOIN vbpa
              ON ( vbap~vbeln = vbpa~vbeln AND vbpa~posnr = '000010' AND
                   vbpa~parvw = 'WE'       )
  WHERE vbak~vbeln IN s_vbeln AND
        vbep~etenr = '0001'   AND
        vbap~posnr IN s_posnr AND
        vbap~werks IN s_werks AND
        vbak~auart IN s_auart AND
        vbak~kunnr IN s_kunnr AND
        vbak~vbtyp IN s_vbtyp AND
        vbap~matnr IN s_matnr AND
        vbep~ettyp IN s_ettyp AND
        vbep~edatu IN s_edatu.

Hi PV,
As others have mentioned, you really need to break the statement down. Three joins is usually pushing it (depending on the tables). By the way, you can go to SE12 and look up each table: on the main screen there is an Indexes pushbutton. Click on it and it will show you the secondary indexes for that table. There may be some or none defined, and they may be SAP-defined or customer-defined (Z).
Keep your inner joins on vbak, vbap and vbep. The resulting table will be used as your driver for the other tables.
You can create another inner join of mara, marc and makt.
Use your driver table with FOR ALL ENTRIES for the other tables.
If you keep your driver table as shown, create another table with the first 5 fields (vbeln, posnr, kwmeng, matnr, werks). The reason for this is so you can load the table with:
it_newtab[] = lt_orders_t[].
All you want from the new table is matnr and werks:
SORT it_newtab BY matnr werks.
DELETE ADJACENT DUPLICATES FROM it_newtab COMPARING matnr werks.
Now you can use it_newtab as your driver for the marc-mara-makt join. The reason is that even with 220,000 records, once you narrow down the material/plant combinations you will probably have only a few hundred materials. So your FOR ALL ENTRIES table will hold a few hundred rows instead of 220,000.
The result will be in a new table, it_materials. When you read this table, don't forget to sort it by matnr and werks first, and READ it WITH BINARY SEARCH.
Do the same once you have created all your other smaller tables: SORT them and READ WITH BINARY SEARCH as you loop through your driver table, populating the missing data from the smaller tables.
I think you will end up with 5 Select statements.
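A minimal sketch of the whole approach (trimmed to a few fields; it_newtab, it_materials, wa_order and wa_material are illustrative names, and the shortened WHERE clauses are assumptions):

  " Step 1: keep the inner join on header, item and schedule line only.
  SELECT vbak~vbeln vbap~posnr vbap~kwmeng vbap~matnr vbap~werks
    INTO TABLE lt_orders_t
    FROM vbak INNER JOIN vbap
                ON vbak~vbeln = vbap~vbeln
              INNER JOIN vbep
                ON vbap~vbeln = vbep~vbeln AND
                   vbap~posnr = vbep~posnr
    WHERE vbak~vbeln IN s_vbeln
      AND vbep~etenr = '0001'.

  " Step 2: shrink the driver table to unique material/plant pairs
  " (it_newtab must share the first five fields so the table-body copy works).
  it_newtab[] = lt_orders_t[].
  SORT it_newtab BY matnr werks.
  DELETE ADJACENT DUPLICATES FROM it_newtab COMPARING matnr werks.

  " Step 3: one select for the material data, driven by the small table.
  IF it_newtab[] IS NOT INITIAL.
    SELECT marc~matnr marc~werks marc~dispo marc~beskz
           makt~maktx mara~meins
      INTO TABLE it_materials
      FROM marc INNER JOIN mara
                  ON marc~matnr = mara~matnr
                INNER JOIN makt
                  ON marc~matnr = makt~matnr AND makt~spras = 'E'
      FOR ALL ENTRIES IN it_newtab
      WHERE marc~matnr = it_newtab-matnr
        AND marc~werks = it_newtab-werks.
  ENDIF.

  " Step 4: sort once, then read with binary search inside the loop.
  SORT it_materials BY matnr werks.
  LOOP AT lt_orders_t INTO wa_order.
    READ TABLE it_materials INTO wa_material
         WITH KEY matnr = wa_order-matnr
                  werks = wa_order-werks
         BINARY SEARCH.
    " ...move the material fields into the output row...
  ENDLOOP.

The same pattern (one FOR ALL ENTRIES select, then sorted binary-search reads) applies to the lips, vbkd and vbpa lookups.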
Indexes are important. Once you have split out the Select statements, if the code is still running unacceptably slow, you can use ST05 to see what indexes are being used and see if new indexes need to be created to help your cause.
You'll probably have to involve the Basis team to help you with your new indexes and they can help with the analysis of your ST05 results as well. Indexes must be created correctly in the proper order, or they may not be effective.
Hope this helps.
Filler

Similar Messages

  • Inner Join. How to improve the performance of inner join query

    Query is:
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
                on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I want to modify the query, since it is taking a lot of time to load the data.
    Please suggest.
    This is very urgent.

    Hi Shyamal,
    In your program, you are using "into corresponding fields of".
    Try not to use this addition in your select query.
    Instead, just use "into table it_list".
    As an example,
    Just write a normal query using "into corresponding fields of" in a program. Now go to SE30 (Runtime Analysis), give the program name and execute it.
    Now if you click on the Analyze button, you can see the analysis for the query. The one given in the "Red" line tells you that you need to look for alternate methods.
    On the other hand, if you use "into table itab", it will give you an entirely different analysis.
    So try not to give "into corresponding fields" in your query.
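    For illustration, a minimal before/after sketch (the three-field list and the matching declaration of it_list are assumptions; the real query selects all the fields listed above):

    " Slower: each row is moved field by field, matched by name.
    SELECT f1~ablbelnr f1~gernr f2~anlage
      FROM eabl AS f1 INNER JOIN eablg AS f2
        ON f1~ablbelnr = f2~ablbelnr
      INTO CORRESPONDING FIELDS OF TABLE it_list
      WHERE f1~ablstat IN s_mrstat.

    " Faster: declare it_list with exactly these fields, in this order,
    " so each row can be moved as one block.
    SELECT f1~ablbelnr f1~gernr f2~anlage
      FROM eabl AS f1 INNER JOIN eablg AS f2
        ON f1~ablbelnr = f2~ablbelnr
      INTO TABLE it_list
      WHERE f1~ablstat IN s_mrstat.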
    Regards,
    SP.

  • Performance issue with joins on table VBAK, VBEP, VBKD and VBAP

    hi all,
    I have a report with a join on all 4 tables VBAK, VBEP, VBKD and VBAP.
    The report is giving performance issues because of this join.
    All the key fields are used in the join, but some non-key fields like vbap-vstel, vbap-abgru and vbep-wadat are also part of the select query and are getting filled.
    Because of these, there is a performance issue.
    Is there any way I can improve the performance of the join select query?
    I am trying the "for all entries" clause...
    Kindly provide any alternative if possible.
    thanks.

    Hi,
    Please perform some of the steps below, as applicable, for performance improvement (a sketch of (a) and (b) follows this list):
    a) Remove the join across all the tables and join only header and item (VBAK & VBAP).
    b) The code should have separate selects for VBEP and VBKD.
    c) Remove the non-key fields from the WHERE clause. Once you have retrieved the data from the database into an internal table, sort the table and delete the entries that do not match the non-key fields like vstel, abgru and wadat.
    d) As a last option, you can create an index on the VBAP & VBEP tables for the fields vstel, abgru & wadat (not advisable).
    e) Buffering on the database tables is also possible.
    f) Select only the fields into the internal table that the processing logic needs, and list the field names in the same order as in the database table.
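    A minimal sketch of steps (a) and (b), where lt_vbap, lt_vbep and the select-option s_vbeln are assumed names, and the field lists are trimmed for illustration:

    " (a) Join only header and item.
    SELECT vbak~vbeln vbap~posnr vbap~matnr vbap~vstel vbap~abgru
      INTO TABLE lt_vbap
      FROM vbak INNER JOIN vbap
        ON vbak~vbeln = vbap~vbeln
      WHERE vbak~vbeln IN s_vbeln.

    " (b) Separate select for VBEP (and analogously for VBKD),
    " driven by the item table.
    IF lt_vbap[] IS NOT INITIAL.
      SELECT vbeln posnr etenr wadat
        FROM vbep INTO TABLE lt_vbep
        FOR ALL ENTRIES IN lt_vbap
        WHERE vbeln = lt_vbap-vbeln
          AND posnr = lt_vbap-posnr.
    ENDIF.

    " (c) Filter the non-key fields in memory, not in the WHERE clause
    " (dropping rejected items via abgru is an illustrative assumption).
    DELETE lt_vbap WHERE abgru IS NOT INITIAL.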
    Hope this helps.
    Regards
    JLN

  • Performance Issue for BI system

    Hello,
    We are facing performance issues with our BI system. It is a preproductive system and its performance is degrading badly every day. While checking the system I came to know that program buffer swapping is increasing every day, so we changed the parameter abap/buffersize from 300 MB to 500 MB. But still no major improvement is seen in the system.
    There is 16 GB of RAM available, and the server is HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
    The main problem is that running a report or creating a query takes far too long.
    Kindly help me.

    Hello SIva,
    Thanks for your reply, but I have checked ST02, ST03 and also SM50, and it all looks normal.
    We have 9 dialog processes, 3 background, 2 update and 1 spool.
    No one is using the system currently, but in ST02 I can see the swaps are in red.
    Buffer                 HitRatio   % Alloc. KB  Freesp. KB   % Free Sp.   Dir. Size  FreeDirEnt   % Free Dir    Swaps    DB Accs
    Nametab (NTAB)                                                                                0
       Table definition     99,60     6.798                                                   20.000                                            29.532    153.221
       Field definition     99,82      31.562        784                 2,61           20.000      6.222          31,11          17.246     41.248
       Short NTAB           99,94     3.625      2.446                81,53          5.000        2.801          56,02             0            2.254
       Initial records      73,95        6.625        998                 16,63          5.000        690             13,80             40.069     49.528
                                                                                    0
    Program                97,66     300.000     1.074                 0,38           75.000     67.177        89,57           219.665    725.703
    CUA                    99,75         3.000        875                   36,29          1.500      1.401          93,40            55.277      2.497
    Screen                 99,80         4.297      1.365                 33,35          2.000      1.811          90,55              119         3.214
    Calendar              100,00       488            361                  75,52            200         42              21,00               0            158
    OTR                   100,00         4.096      3.313                  100,00        2.000      2.000          100,00              0
                                                                                    0
    Tables                                                                                0
       Generic Key          99,17    29.297      1.450                  5,23           5.000        350             7,00             2.219      3.085.633
       Single record        99,43    10.000      1.907                  19,41           500         344            68,80              39          467.978
                                                                                    0
    Export/import          82,75     4.096         43                      1,30            2.000        662          33,10            137.208
    Exp./ Imp. SHM         89,83     4.096        438                    13,22         2.000      1.482          74,10               0    
    SAP Memory      Curr.Use %    CurUse[KB]    MaxUse[KB]    In Mem[KB]    OnDisk[KB]    SAPCurCach      HitRatio %
    Roll area               2,22                5.832               22.856             131.072     131.072                   IDs           96,61
    Page area              1,08              2.832                24.144               65.536    196.608              Statement     79,00
    Extended memory     22,90       958.464           1.929.216          4.186.112          0                                         0,00
    Heap memory                                    0                  0                    1.473.767          0                                         0,00
    Call Stati             HitRatio %     ABAP/4 Req      ABAP Fails     DBTotCalls         AvTime[ms]      DBRowsAff.
      Select single     88,59               63.073.369        5.817.659      4.322.263             0                         57.255.710
      Select               72,68               284.080.387          0               13.718.442             0                        32.199.124
      Insert                 0,00                  151.955             5.458             166.159               0                           323.725
      Update               0,00                    378.161           97.884           395.814               0                            486.880
      Delete                 0,00                    389.398          332.619          415.562              0                             244.495

  • Performance issue for this function-module(HR_TIM_REPORT_ABSENCE_DATA)

    Hi Friends
    I am having a performance issue with this function module (HR_TIM_REPORT_ABSENCE_DATA), and my client has over 8 thousand employees. The function module takes forever to read the data. Is there any other function module to read the absence data (IT2001)?
    I used it as shown below. If I take out the FM 'HR_TIM_REPORT_ABSENCE_DATA_INI', the other function module does not work. Please suggest.
    " Publish the selection parameters to global memory.
    call function 'HR_TIM_REPORT_ABSENCE_DATA_INI'
      exporting
        option_string = option_s        "string of selected org fields
        trig_string   = trig_s          "string of required data
        alemp_flag    = sw_alemp        "all employees required
        infot_flag    = space           "split per infotype necessary
        sel_modus     = sw_apa
      importing
        org_num       = fdpos_lines     "number of selected org fields
      tables
        fieldtab      = fdtab           "all org fields
        field_sel     = fieldnametab_m. "selected org fields

    " Read all time infotypes, including absences (IT2001).
    RP_READ_ALL_TIME_ITY PN-BEGDA PN-ENDDA.

    " Central function unit to provide the internal tables: abse, orgs, empl.
    call function 'HR_TIM_REPORT_ABSENCE_DATA'
      exporting
        pernr       = pernr-pernr
        begda       = pn-begda
        endda       = pn-endda
      importing
        subrc       = subrc_rta
      tables
        absences    = absences_01
        org_fields  = orgs
        emp_fields  = empl
        " reftab and apltab are optional and left unsupplied here
        awart_sel_p = awart_s[]
        awart_sel_a = awart_s[]
        abstp_sel   = abstp_s[]
        i0000       = p0000
        i0001       = p0001
        i0002       = p0002
        i0007       = p0007
        i2001       = p2001
        i2002       = p2002
        i2003       = p2003.
    Thanks & Regards
    Reddy

    Guessing will not help you much; check with SE30 to get a better insight.
    See: The ABAP Runtime Trace (SE30) - Quick and Easy.
    What is the total time, and what are the Top 10 in the hitlist?
    Siegfried

  • Can we implement the custom sql query in CR for joining the two tables

    Hi All,
    Is there any way to implement a custom SQL query in CR for joining the two tables?
    My requirement here is that I need to write SQL logic for joining the two tables...
    Thanks,
    Gana

    In the Database Expert, expand the Create New Connection folder and browse the subfolders to locate your data source.
    Log on to your data source if necessary.
    Under your data source, double-click the Add Command node.
    In the Add Command to Report dialog box, enter an appropriate query/command for the data source you have opened.
    For example:
    SELECT
        Customer.`Customer ID`,
        Customer.`Customer Name`,
        Customer.`Last Year's Sales`,
        Customer.`Region`,
        Customer.`Country`,
        Orders.`Order Amount`,
        Orders.`Customer ID`,
        Orders.`Order Date`
    FROM
        Customer Customer INNER JOIN Orders Orders ON
            Customer.`Customer ID` = Orders.`Customer ID`
    WHERE
        (Customer.`Country` = 'USA' OR
        Customer.`Country` = 'Canada') AND
        Customer.`Last Year's Sales` < 10000
    ORDER BY
        Customer.`Country` ASC,
        Customer.`Region` ASC
    Note: The use of double or single quotes (and other SQL syntax) is determined by the database driver used by your report. You must, however, manually add the quotes and other elements of the syntax as you create the command.
    Optionally, you can create a parameter for your command by clicking Create and entering information in the Command Parameter dialog box.
    For more information about creating parameters, see To create a parameter for a command object.
    Click OK.
    You are returned to the Report Designer. In the Field Explorer, under Database Fields, a Command table appears listing the database fields you specified.
    Note:
    To construct the virtual table from your Command, the command must be executed once. If the command has parameters, you will be prompted to enter values for each one.
    By default, your command is called Command. You can change its alias by selecting it and pressing F2.

  • Performance Tuning for a report

    Hi,
    We have developed a program which updates 2 fields, namely Reorder Point and Rounding Value, on the MRP1 tab in TCode MM03.
    To update the fields, we are using the BAPI BAPI_MATERIAL_SAVEDATA.
    The problem is that when we upload the data using a txt file, the program takes a very long time. Recently when we uploaded a file containing 200,000 records, it took 27 hours. Below is the main portion of the code (I have omitted the OPEN DATASET etc.). Please help us fine-tune this, so that we can upload these 200,000 records in 2-3 hours.
    select matnr from mara into table t_mara.
    select werks from t001w into corresponding fields of table t_t001w.
    select matnr werks from marc into corresponding fields of table t_marc.
    loop at str_table into wa_table.
    if not wa_table-partnumber is initial.
    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
       EXPORTING
         INPUT         =  wa_table-partnumber
         IMPORTING
        OUTPUT        = wa_table-partnumber.
    endif.
    clear wa_message.
    read table t_mara into wa_mara with key matnr = wa_table-partnumber.
    if sy-subrc is not initial.
    concatenate 'material ' wa_table-partnumber ' does not exist'
    into wa_message.
    append wa_message to t_message.
    endif.
    read table t_t001w into wa_t001w with key werks = wa_table-HostLocID.
      if sy-subrc is not initial.
      concatenate 'plant ' wa_table-HostLocID  ' does not exist' into
      wa_message.
      append wa_message to t_message.
      else.
      case wa_t001w-werks.
    when 'DE40'
    or  'DE42'
    or  'DE44'
    or  'CN61'
    or  'US62'
    or  'SG70'
    or  'FI40'.
    read table t_marc into wa_marc with key matnr = wa_table-partnumber
                                            werks = wa_table-HostLocID.
    if sy-subrc is not initial.
    concatenate 'material' wa_table-partnumber  ' not extended to plant'
    wa_table-HostLocID  into  wa_message.
    append wa_message to t_message.
    endif.
    when others.
    concatenate 'plant ' wa_table-HostLocID ' not allowed'
      into wa_message.
    append wa_message to t_message.
    endcase.
    endif.
        if wa_message is initial.
          data: wa_headdata type BAPIMATHEAD,
          wa_PLANTDATA type BAPI_MARC,
          wa_PLANTDATAx type BAPI_MARCX.
          wa_headdata-MATERIAL = wa_table-PartNumber.
          wa_PLANTDATA-plant = wa_table-HostLocID.
          wa_PLANTDATAX-plant = wa_table-HostLocID.
          wa_PLANTDATA-REORDER_PT = wa_table-ROP.
          wa_PLANTDATAX-REORDER_PT = 'X'.
          wa_plantdata-ROUND_VAL = wa_table-EOQ.
          wa_plantdatax-round_val =  'X'.
          CALL FUNCTION 'BAPI_MATERIAL_SAVEDATA'
            EXPORTING
              HEADDATA                   = wa_headdata
             PLANTDATA                  = wa_PLANTDATA
             PLANTDATAX                 = wa_PLANTDATAX
          IMPORTING
             RETURN                     =  t_bapiret.
          CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
    write t_bapiret-message.
    endif.
    clear: wa_mara, wa_t001w, wa_marc.
    endloop.
    loop at t_message into wa_message.
    write wa_message.
    endloop.
    Thanks in advance.
    Peter

    Hi Peter,
    I would suggest few changes in your code. Please refer below procedure to optimize the code.
    Steps:
               Run an SE30 runtime analysis and find out whether the ABAP code or the database fetch is taking the time.
               Run the extended program check or Code Inspector to remove any errors and warnings.
               A few code changes I would suggest:
               For the select queries on t001w & marc, remove the CORRESPONDING FIELDS clause, as this also reduces performance. (Define an internal table with only the required fields, in the order they are specified in the table, and execute a select query to fetch those fields.)
               Also add an initial check that str_table[] is not initial before you execute the loop.
               Wherever you have used READ TABLE, please sort these tables and use BINARY SEARCH (see the sketch below).
               Please clear the work areas after every APPEND statement.
               As I don't have an SAP system handy, I would also check whether the importing parameter for the BAPI structure is a table. In case it is a table, I would pass all the records to it directly and then pass it to the BAPI, rather than looping over every record and updating one at a time.
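    A minimal sketch of the sort and binary-search point, using the table names from the code above:

    " Sort the lookup tables once, before the loop.
    SORT t_mara  BY matnr.
    SORT t_t001w BY werks.
    SORT t_marc  BY matnr werks.

    LOOP AT str_table INTO wa_table.
      " Binary search turns each lookup from a linear scan into O(log n).
      READ TABLE t_mara INTO wa_mara
           WITH KEY matnr = wa_table-partnumber BINARY SEARCH.
      READ TABLE t_marc INTO wa_marc
           WITH KEY matnr = wa_table-partnumber
                    werks = wa_table-HostLocID BINARY SEARCH.
      " ...remaining checks and the BAPI call as in the original loop...
      CLEAR: wa_mara, wa_t001w, wa_marc, wa_message.
    ENDLOOP.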
    Hope this helps to resolve your problem.
    Have a nice day
    Thanks

  • Regarding performance widget (performance objects) for custom dashboard

    Hi,
    I'm making a custom dashboard and trying to use the performance widget for a NetApp volume, but the performance objects are not available and I only see "(All)" in the options (see screenshot). When I create the dashboard, the widget shows empty information.
    Thanks for any suggestions.

    Hi all,
    I managed to fix all my dashboards; from what I understand it is not a supported MS fix, as per articles I found on forums etc.
    My issues started when I renamed some of my groups because of requirements. I think SCOM 2012 doesn't like the rename; it's a serious bug that causes the info in the DBs to go out of sync.
    I also found that data from the main OpsMgr DB had not been submitted to the DW DB for months, so my reporting was way out; hence the blank stats on dashboards and the grey uptime reports.
    I followed the article below and fixed it. Please ensure that your DBs are fully backed up before doing this: the OperationsManager DB and the OperationsManagerDW DB.
    The issue in the end is actually with the DW DB.
    http://operationsmanager2012.blogspot.com/2013/02/scom-availability-report-monitoring.html
    Regards
    Simon Craner

  • Insert, search, delete performance overhead for different collections

    Hi,
    I am trying to create a table which compares performance overheads for different collections data structures. Does anyone want to help me out? I want to put a number from 1 - 9 in each of the question marks. 1 being very poor performance and 9 being very good performance (the reason I am doing this is that I had this question in a job interview test and I didn't pass it)
    anyone have any comments?
                   Searching   Inserting   Deleting
    ArrayList          ?           ?           ?
    LinkedList         ?           ?           ?
    TreeSet            ?           ?           ?
    TreeMap            ?           ?           ?
    HashMap            ?           ?           ?
    HashSet            ?           ?           ?
    Stack              ?           ?           ?

    sorry the formatting has screwed up a bit when I posted it. It should have a list of the collection types and three columns (inserting, deleting, searching)

  • Performance tuneup for a special DB (disable locking, check-pointing,...)

    Hi,
    I have a simple database containing key/value records. The program is a multi-threaded application whose worker threads iterate over records. Each worker thread reads a record and, after some calculations, replaces it. The records are completely independent of each other. The following DBController class is shared between all threads. Are there any considerations for achieving the best performance? For example, I don't want any locking, checkpointing, or caching overheads. I just want THE BEST PERFORMANCE to store and retrieve each record independently.
    import gnu.trove.*;
    import java.io.*;
    import com.sleepycat.bind.tuple.*;
    import com.sleepycat.je.*;
    public class DBController {
    public class DBController {

        private class WikiSimTupleBinding extends TupleBinding<TIntObjectHashMap<TIntDoubleHashMap>> {

            // Write an appropriate object to a TupleOutput (a DatabaseEntry)
            public void objectToEntry(TIntObjectHashMap<TIntDoubleHashMap> object, TupleOutput to) {
                try {
                    ByteArrayOutputStream bout = new ByteArrayOutputStream();
                    ObjectOutputStream oout = new ObjectOutputStream(bout);
                    oout.writeObject(object);
                    oout.flush();
                    oout.close();
                    bout.close();
                    byte[] data = bout.toByteArray();
                    to.write(data);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            // Convert a TupleInput (a DatabaseEntry) to an appropriate object
            public TIntObjectHashMap<TIntDoubleHashMap> entryToObject(TupleInput ti) {
                TIntObjectHashMap<TIntDoubleHashMap> object = null;
                try {
                    byte[] data = ti.getBufferBytes();
                    object = (TIntObjectHashMap<TIntDoubleHashMap>) new java.io.ObjectInputStream(
                            new java.io.ByteArrayInputStream(data)).readObject();
                } catch (Exception e) {
                    e.printStackTrace();
                }
                return object;
            }
        }

        private Environment myDbEnvironment = null;
        private Database db_R = null;
        private WikiSimTupleBinding myBinding;

        public DBController(File dbEnv) {
            try {
                // Open the environment. Create it if it does not already exist.
                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                myDbEnvironment = new Environment(dbEnv, envConfig);
                // Open the databases. Create them if they don't already exist.
                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                db_R = myDbEnvironment.openDatabase(null, "R", dbConfig);
                // initialize Binding API
                myBinding = new WikiSimTupleBinding();
            } catch (DatabaseException dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }

        private final byte[] intToByteArray(int value) {
            return new byte[] { (byte) (value >>> 24), (byte) (value >>> 16), (byte) (value >>> 8), (byte) value };
        }

        public void put(int id, TIntObjectHashMap<TIntDoubleHashMap> repository) {
            try {
                DatabaseEntry theKey = new DatabaseEntry(intToByteArray(id));
                DatabaseEntry theData = new DatabaseEntry();
                myBinding.objectToEntry(repository, theData);
                db_R.put(null, theKey, theData);
            } catch (Exception dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }

        public TIntObjectHashMap<TIntDoubleHashMap> get(int id) {
            TIntObjectHashMap<TIntDoubleHashMap> repository = null;
            try {
                // theKey is used to perform the search; theData stores the
                // data returned by the get() operation.
                DatabaseEntry theKey = new DatabaseEntry(intToByteArray(id));
                DatabaseEntry theData = new DatabaseEntry();
                // Perform the get.
                if (db_R.get(null, theKey, theData, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    // Recreate the data repository
                    repository = myBinding.entryToObject(theData);
                } else {
                    System.out.println("No record found for key '" + id + "'.");
                }
            } catch (Exception e) {
                // Exception handling goes here
                e.printStackTrace();
            }
            return repository;
        }

        public void close() {
            // closing the DB
            try {
                if (db_R != null)
                    db_R.close();
                if (myDbEnvironment != null)
                    myDbEnvironment.close();
            } catch (DatabaseException dbe) {
                // Exception handling goes here
                dbe.printStackTrace();
            }
        }
    }

    If you are writing and you need to recover in a reasonable amount of time after a crash, you need checkpointing.
    If multiple threads may access a record concurrently, you need locking.
    If you need good read performance, you need as large a JE cache as possible.
    If you want to tune performance, the first step is to print the EnvironmentStats (Environment.getStats) periodically, and read the FAQ performance section. Try to find out if your app's performance is limited by CPU or I/O.
    If you are reading records in key order, then you'll get better performance if you also write them in key order.
    I'm not sure why you're using a TupleBinding to do Java object serialization. If you want Java serialization, try using a SerialBinding.
    --mark

  • IPhoto was working fine. Performed update for HP printer, which  should have nothing to do with iPhoto. But the next time I tried to open iPhoto, got this message. "To open your library with this version of iPhote, it first needs to be prepared." When I g

    iPhoto was working fine. Performed update for HP printer, which  should have nothing to do with iPhoto. But the next time I tried to open iPhoto, got this message. "To open your library with this version of iPhote, it first needs to be prepared." When I g

    What version of iPhoto? Assuming 09 or later...
    Option 1
    Back up and try rebuilding the library: hold down the Command and Option (or Alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. (In early versions of Library Manager it's the File -> Rebuild command. In later versions it's under the Library menu.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new library, and arrange it as closely as it can to what you had in the damaged library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there: no slideshows, books or calendars, for instance. It should, however, get all your events, albums, keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.  
    Regards
    TD

  • Cannot perform rounding for invoices with a currency other than the documen

    Hi all,
    I need someone to help me.
    I want to process an incoming payment.
    The AR invoice is in USD.
    In the incoming payment I want to pay in SGD.
    I have already set the BP to all currencies.
    I have also set the bank account for bank transfer payments to all currencies.
    But when I add the document I see a message like this:
    "Cannot perform rounding for invoices with a currency other than the document currency.  [Message 3524-31]"
    What should i do ?
    Thanks in advance
    Regards
    KK

    Hi gordon,
    Thanks for your response.
    I still do not understand what you mean.
    I tested it in the SBO DEMO AU, in which both the local currency and the system currency are AUD.
    Is there any special setting for this issue?
    Or did I miss a setting in SBO?
    My version is 8.81 pl 9
    Thanks in advance

  • Performance Tuning for BAM 11G

    Hi All
    Can anyone point me to any documents or tips related to performance tuning for BAM 11G on Linux?

    It would help to know if you have a specific issue. There are a number of tweaks, all the way from the DB to the browser.
    A few key things to follow:
    1. Make sure you create indexes on the DO. If there is too much old data in the DO that is no longer useful, delete it periodically. Similar to relational database indexes, defining indexes in Oracle BAM creates and maintains an ordered list of data object elements for fast retrieval.
    2. Ensure that IE is set up to do automatic caching. This helps reduce server round trips.
    3. Tune DB performance. This typically requires a DBA. Identify the SQL statements most likely to be causing the waits by looking at
    the drilldown Top SQL Statements Ordered by Wait Time. Use SQL Analyze, EXPLAIN PLAN, or the tkprof utility to tune the queries that were identified.
    Check the data object tables involved in the query for missing indexes.
    4. Use batching (this is on by default for most cases).
    5. Fast network.
    6. Use profilers to look at machine load/CPU usage and distribute components across different boxes if needed.
    7. Use better server AND client hardware. BAM dashboards are heavy users of AJAX/JavaScript logic on the client.

  • Performance counter for BTS 2010

    As per the link below, BizTalk 2006 shipped with 294 performance counters.
    http://blogs.technet.com/b/clint_huffman/archive/2008/09/02/how-to-use-the-pal-tool-for-biztalk-performance-analysis.aspx
    How many performance counters are supported by BTS 2010?

    If you are looking for performance counters for 2010, here is the msdn link -
    msdn.microsoft.com/en-us/library/aa578394.aspx

  • One performance view for two classes, possible?

    Hi there,
    My system is still running 2007 R2. I am writing an MP now which contains two classes. There are a few performance collection rules targeting those two classes. I want to create one performance view to display performance data for BOTH classes. Is that possible?
    I have already created an instance group and added both classes as members of the group. By using the group, I can create one alert view to display alerts from either class. Can I use the same trick for the performance view? Thanks!

    In addition, we can also add a dashboard view with two columns for the two classes, and add a performance widget to each column.
    Regards,
    Yan Li
    Please remember to mark the replies as answers if they help and unmark them if they provide no help.

Maybe you are looking for

  • Purchase Order Status Tab

    Hello gurus, We have a purchase order where all of the line items have been marked for deletion except for one, however the purchase order status tab "Ordered" amount is not being updated.  However, the to be delivered and to be invoiced on the tab ar

  • ITunes will not launch in Windows 7

    I have had iTunes on my windows 7 PC for some time.  Today it would not open when I tried to launch it.  I have uninstalled and reinstalled several times.  I have also tried all the recommended fixes in the Apple Support Center. I do not know what to

  • How to run the batch files

    Hi Experts, I am trying to implement automatic cache purging every day at specified time and i created txt file with SAPurgeAllCache() and created bat file with nqcmd -q Analyticsweb -u username -p password -s (txt file location). Now i want to run

  • External drive with PC files.

    I have already backed up content from my older PC on my external harddrive .When I connect it to Time Machine it wants to delete the content. Obviously a way around this is to partition in Disc utilities but can it do this with the PC content already

  • Using Linux with Nvidia binary drivers, how do I enable WebGL in Firefox 8?

    I've heard it is possible to use WebGL in firefox 8 for linux if using Nvidia binary drivers, but I haven't found a succinct recipe. can someone point me to a working recipe? thanks!