Poor performance for the first SELECT of the day on the AFRU table

Hello everyone, I have performance problems with the AFRU table. Every day, the first time I run a "Z" transaction it takes around 100-120 seconds, but the second and subsequent runs take only about four seconds. What can I do to reduce the first execution time?
This is the select:
SELECT * FROM AFRU WHERE MANDT = :A0 AND CATSBELNR = :A1 AND BUDAT = :A2 AND PERNR = :A3 AND STOKZ <> :A4 AND STZHL = :A5
The execution plan for this SELECT uses the index AFRU~ZCA with an acceptable cost of 6.319. AFRU~ZCA is a non-unique index on these columns: MANDT + CATSBELNR + BUDAT + PERNR.
I'd appreciate any ideas.
Thanks in advance,
Santi.

What database system are you using (ASE, Oracle, etc.)?
If ASE, for the general issue of the first execution of a query taking longer, the two most likely reasons would be
a) the table's data has aged out of cache, so the query has to do a lot of physical I/O to read the data back into cache,
or
b) the query plan for the query has aged out of the statement cache and needs to be recompiled.
This query looks pretty simple, so the data cache (a) seems much more likely.
To get a better feel, some morning run the query with
set statistics io on
set statistics time on
then run it again and look for differences in the physical vs. logical I/O numbers and in the compile vs. execution times.
You could use a scheduled event (Job Scheduler, cron job) to run the query, or one very like it, a little earlier in the day to prime the data cache with the table data; one way to do that is sketched below.
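
If the system here is SAP sitting on top of the database (which the AFRU/MANDT context suggests), one way to implement that priming idea is a small ABAP report scheduled as a daily background job. The following is only a hedged sketch: the report name, its parameters, and the SM36 scheduling are illustrative assumptions, not part of the original answer.

REPORT zprime_afru_cache.
" Hypothetical cache-priming report: schedule it (e.g. via SM36) a few
" minutes before the first real run of the Z transaction.

PARAMETERS: p_belnr TYPE catsbelnr,   " CATS document number to probe
            p_budat TYPE budat,       " posting date
            p_pernr TYPE pernr_d.     " personnel number

DATA: lt_afru TYPE STANDARD TABLE OF afru,
      lv_rows TYPE i.

" Same indexed predicates (MANDT + CATSBELNR + BUDAT + PERNR, i.e. the
" AFRU~ZCA access path) as the slow first SELECT. The STOKZ/STZHL filters
" of the original query are applied after the rows are fetched, so they
" are not needed just to warm the cache. The result is discarded; the
" point is to trigger the physical reads now instead of during the first
" user run.
SELECT * FROM afru
  INTO TABLE lt_afru
  WHERE catsbelnr = p_belnr
    AND budat     = p_budat
    AND pernr     = p_pernr.

DESCRIBE TABLE lt_afru LINES lv_rows.
WRITE: / 'Rows touched:', lv_rows.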

Similar Messages

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data set on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available to you, however, if you have your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: the record ID (key), the URL itself, and grouped data such as the number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID linking it to the primary database; this database is used to order URLs in the report by the number of hits
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release version). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at about 20K records/sec.
    BDB is configured as a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache fills up, the machine goes into some weird state and can sit there for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops to 2K rec/sec. And that's after most of the analysis has been done, while just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache fills up; then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though the disk can write at about 12-15MB/sec.
    Another problem is that the OS cache fills up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance: even though the OS cache was filling up, it was being flushed as well, and SSW eventually finished processing this log, sporting 2K rec/sec. At least it finished, though; other combinations of these options led to never-ending tests.
    In database mode, stale data is put into BDB after processing every N records (e.g. 300K records). In this mode BDB behaves similarly: until the cache fills up, performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped this would be the solution to my problems: trickle some, make sure it's on disk, and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73,557 times while profiling the database save at the end, took 281 seconds. Interestingly enough, this method called the ReadFile function (Win32) 20,000 times, which took 258 seconds. The majority of the Db.put time was spent looking up the records being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that periodically calls DbEnv::memp_trickle. This works especially well on multi-core machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.

    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What percentage of clean pages did you specify?
    2. How often was your thread calling memp_trickle?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.

  • WAD-Analysis Item-Poor performance for new lines

    Hi.
    I use WAD7 for entering new records in IP.
    According to our business requirements, in analysis item properties I have defined "Number of New Lines in Planning Queries" = 50.
    After that I see extremely poor performance: it takes about 1-2 minutes until the page has loaded (note that it is an empty page - only new blank rows).
    When I define new lines = 1, performance is very good (2-3 sec).
    Does anybody know what could be the problem ?
    Thanks.

    Hello Andrey,
    the number of new lines configured in WAD is completely unknown to the ABAP backend and has no impact on backend performance; in fact, the front end gets information about one 'line' of template cells, and this costs nothing.
    Checks for characteristic relationships etc. may only have significant costs for cells of the result set, not for new lines.
    Whether this problem comes from the ABAP backend, from the Java stack, or from the browser rendering can be checked with an RSTT trace. Is the run time of the trace different between 1 and 50 new lines? Were different backend calls recorded?
    If not, the problem comes from the Java stack or from the browser rendering. One can check the latter via the task manager.
    To check (partially) the run time in the Java and/or ABAP stack, add the parameter &PROFILING=X to the URL, cf. note 1048691.
    Regards,
    Gregor

  • Poor performance for full screen desktop

    Hi,
    A full-screen desktop (displayed as a kiosk) of Linux with GNOME (I believe it's the same for all window managers) performs poorly (even with a command like ls you see the delays).
    It happens on the local network. The connection to the application server is SSH.
    SGD server: Solaris 10, Sun Fire 280. The application server is a regular modern PC.
    Regular Windows performance is very good.
    Any suggestions ?
    Thanks

    I think you will find the poor performance occurs only with GTK applications.
    For example, if you go into a large directory of files and do an ls -aRl, you will notice it is a lot slower in a gnome-terminal than in a plain xterm.
    I think 4.3 will resolve this performance issue.

  • How to use a single PERFORM for various tables?

    PERFORM test TABLES t_header.

    FORM test TABLES t_header.
      SELECT konh~knumh
             konh~datab
             konh~datbi
             konp~kbetr
             konp~konwa
             konp~kpein
             konp~kmein
             konp~krech
        FROM konh INNER JOIN konp
               ON konp~knumh = konh~knumh
        INTO TABLE itabxxx          " any temporary internal table
        FOR ALL ENTRIES IN t_header
        WHERE konh~kschl = t_header-kschl
          AND konh~knumh = t_header-knumh.
    ENDFORM.
    How can I use the above PERFORM for various internal tables of DIFFERENT LINE TYPES that all have the fields KSCHL and KNUMH?

    You can use a single PERFORM.
    Just see this example; hope this is what you are expecting:
    tables : pa0001.
    parameters : p_pernr like pa0001-pernr.
    data : itab1 like pa0001 occurs 0 with header line.
    data : itab2 like pa0002 occurs 0 with header line.
    perform get_data tables itab1 itab2.
    if not itab1[] is initial.
    loop at itab1.
    write :/ itab1-pernr.
    endloop.
    endif.
    if not itab2[] is initial.
    loop at itab2.
    write :/ itab2-pernr.
    endloop.
    endif.
    *&---------------------------------------------------------------------*
    *&      Form  get_data
    *&---------------------------------------------------------------------*
    *      -->P_ITAB1  text
    *      -->P_ITAB2  text
    form get_data  tables   itab1 structure pa0001
                            itab2 structure pa0002.
    select * from pa0001 into table itab1 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
    select * from pa0002 into table itab2 where pernr = p_pernr and begda le sy-datum and endda ge sy-datum.
    endform.                    " get_data
    Regards
    vasu
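
    The STRUCTURE-typed example above still ties each TABLES parameter to one fixed line type (PA0001/PA0002). For internal tables with genuinely different line types that only share the fields KSCHL and KNUMH, as the question asks, one generic alternative is sketched below. This is a hedged sketch: the form name GET_CONDITIONS, the local key table, and the result type are illustrative assumptions, not standard code.

    TYPES: BEGIN OF ty_key,
             kschl TYPE kschl,
             knumh TYPE knumh,
           END OF ty_key,
           BEGIN OF ty_cond,
             knumh TYPE knumh,
             datab TYPE datab,
             datbi TYPE datbi,
             kbetr TYPE kbetr,
             konwa TYPE konwa,
             kpein TYPE kpein,
             kmein TYPE kmein,
             krech TYPE krech,
           END OF ty_cond,
           ty_cond_tab TYPE STANDARD TABLE OF ty_cond WITH DEFAULT KEY.

    " Works for any internal table whose line type contains KSCHL and
    " KNUMH, no matter what else the line carries.
    FORM get_conditions TABLES   pt_any
                        CHANGING pt_cond TYPE ty_cond_tab.
      DATA: lt_keys TYPE STANDARD TABLE OF ty_key,
            ls_key  TYPE ty_key.
      FIELD-SYMBOLS: <ls_row>  TYPE any,
                     <lv_comp> TYPE any.

      " Copy the two shared fields out of whatever line type was passed in.
      LOOP AT pt_any ASSIGNING <ls_row>.
        ASSIGN COMPONENT 'KSCHL' OF STRUCTURE <ls_row> TO <lv_comp>.
        CHECK sy-subrc = 0.
        ls_key-kschl = <lv_comp>.
        ASSIGN COMPONENT 'KNUMH' OF STRUCTURE <ls_row> TO <lv_comp>.
        CHECK sy-subrc = 0.
        ls_key-knumh = <lv_comp>.
        APPEND ls_key TO lt_keys.
      ENDLOOP.

      " FOR ALL ENTRIES with an empty driver table would select everything.
      CHECK lt_keys IS NOT INITIAL.

      SELECT konh~knumh konh~datab konh~datbi
             konp~kbetr konp~konwa konp~kpein konp~kmein konp~krech
        FROM konh INNER JOIN konp ON konp~knumh = konh~knumh
        INTO TABLE pt_cond
        FOR ALL ENTRIES IN lt_keys
        WHERE konh~kschl = lt_keys-kschl
          AND konh~knumh = lt_keys-knumh.
    ENDFORM.

    The same call, e.g. PERFORM get_conditions TABLES t_header CHANGING lt_cond., then works unchanged for every table that carries the two fields.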

  • Increase performance of the following SELECT statement.

    Hi All,
    I have the following SELECT statement which I want to fine-tune.
      CHECK NOT LT_MARC IS INITIAL.
      SELECT RSNUM
             RSPOS
             RSART
             MATNR
             WERKS
             BDTER
             BDMNG FROM RESB
                          INTO TABLE GT_RESB 
                          FOR ALL ENTRIES IN LT_MARC
                          WHERE XLOEK EQ ' '
                          AND MATNR EQ LT_MARC-MATNR
                          AND WERKS EQ P_WERKS
                          AND BDTER IN S_PERIOD.
    The above query is run for a period of one year, where the number of records returned will be approximately 3 million. When the program is run in the background, the execution time is around 76 hours. When I run the same program dividing the selection period into smaller parts, I am able to execute it in about an hour.
    After a previous posting I had changed the select statement to
      CHECK NOT LT_MARC IS INITIAL.
      SELECT RSNUM
             RSPOS
             RSART
             MATNR
             WERKS
             BDTER
             BDMNG FROM RESB
                          APPENDING TABLE GT_RESB  PACKAGE SIZE LV_SIZE
                          FOR ALL ENTRIES IN LT_MARC
                          WHERE XLOEK EQ ' '
                          AND MATNR EQ LT_MARC-MATNR
                          AND WERKS EQ P_WERKS
                          AND BDTER IN S_PERIOD.
      ENDSELECT.
    But the performance improvement is negligible.
    Please suggest.
    Regards,
    Karthik

    Hi Karthik,
    Do not use the APPENDING statement.
    Also, you said that if you reduce the period you get the result quickly.
    Why not try dividing your internal table LT_MARC into small internal tables having at most 1000 entries each?
    You can read entries 1-1000 into the first table, use that in the SELECT query, and append the results.
    Then you can refresh that table, read entries 1001-2000 of LT_MARC into the same table, and execute the same query again; see the sketch below.
    I know this sounds strange, but you can bargain for better performance by increasing the number of database hits in this case.
    Try this and let me know.
    Regards
    Nishant
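
    A hedged sketch of the package idea above (the package size of 1000 follows Nishant's description; the helper table and variable names are illustrative):

    DATA: lt_marc_pkg LIKE lt_marc,
          lv_lines    TYPE i,
          lv_from     TYPE i VALUE 1,
          lv_to       TYPE i.

    DESCRIBE TABLE lt_marc LINES lv_lines.

    WHILE lv_from <= lv_lines.
      lv_to = lv_from + 999.
      IF lv_to > lv_lines.
        lv_to = lv_lines.
      ENDIF.

      " Next package of at most 1000 LT_MARC entries.
      REFRESH lt_marc_pkg.
      APPEND LINES OF lt_marc FROM lv_from TO lv_to TO lt_marc_pkg.

      SELECT rsnum rspos rsart matnr werks bdter bdmng
             FROM resb
             APPENDING TABLE gt_resb
             FOR ALL ENTRIES IN lt_marc_pkg
             WHERE xloek EQ ' '
               AND matnr EQ lt_marc_pkg-matnr
               AND werks EQ p_werks
               AND bdter IN s_period.

      lv_from = lv_to + 1.
    ENDWHILE.

    " If the same MATNR can occur in more than one package, APPENDING may
    " have produced duplicate reservation lines; remove them via the key.
    SORT gt_resb BY rsnum rspos rsart.
    DELETE ADJACENT DUPLICATES FROM gt_resb COMPARING rsnum rspos rsart.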

  • Performance for ALTER TABLE statements

    Hi,
    I'd like to improve performance for scripts running several ALTER TABLE statements. I have two questions regarding this.
    This is the original code:
    ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
    QTY_TO_INVOICE NUMBER NULL );
    ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
    QTY_INVOICED NUMBER NULL );
    1. Would I gain any performance by making the following changes?
    ALTER TABLE CUSTOMER_ORDER_DELIVERY_TAB ADD (
    QTY_TO_INVOICE NUMBER NULL,
    QTY_INVOICED NUMBER NULL );
    These columns are later on filled with values and then made NOT NULL.
    2. Would I gain anything by making these columns NOT NULL with a DEFAULT value in the first statement and then later on inserting the values?
    /Roland Bali

    1. It would definitely increase performance.
    2. You can only add NOT NULL columns (without a DEFAULT) to an existing table if the table is empty.
    Naveen

  • Why does the APPEND operation not work for a hashed table?

    Could you please explain why APPEND does not work for a hashed table while it works for standard and sorted tables?
    Moderator Message: Interview-type questions are not allowed. Read the Rules of Engagement of these forum to avoid getting your ID deleted.
    Edited by: kishan P on Mar 1, 2012 11:25 AM

    Hello,
    See, hashed tables do not support index operations like standard and sorted tables do; rather, their individual entries are accessed by key. The hashed internal table has been developed specifically using a hashing algorithm. In other words, the APPEND statement will not work on hashed internal tables, but only on standard tables.
    The processing of hashed tables is undertaken using a KEY, whereas for a standard table you may or may not use a key to access its contents.
    For more info you can refer to following link below -
    [http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb35de358411d1829f0000e829fbfe/content.htm]
    Hope this helps !
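
    A minimal sketch of the difference (the table and field names are illustrative): the key-based INSERT works for a hashed table, while APPEND is rejected by the syntax check because a hashed table has no index.

    TYPES: BEGIN OF ty_count,
             matnr TYPE matnr,
             hits  TYPE i,
           END OF ty_count.

    DATA: lt_hash TYPE HASHED TABLE OF ty_count WITH UNIQUE KEY matnr,
          ls_line TYPE ty_count.

    ls_line-matnr = 'MAT-001'.
    ls_line-hits  = 1.

    " Key-based insert: the hash administration decides where the row lives.
    INSERT ls_line INTO TABLE lt_hash.

    " APPEND ls_line TO lt_hash.
    " The line above would not compile: APPEND places a row at the last
    " index position, and a hashed table has no index at all.

    " Reads go through the unique key, in constant time on average.
    READ TABLE lt_hash INTO ls_line WITH TABLE KEY matnr = 'MAT-001'.
    IF sy-subrc = 0.
      WRITE: / ls_line-matnr, ls_line-hits.
    ENDIF.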

  • Performance issue when using select count on large tables

    Hello Experts,
    I have a requirement where I need to get a count of data from a database table. Later on I need to display the count in ALV format.
    As per my requirement, I have to use this SELECT COUNT inside nested loops.
    Below is the code snippet:
    LOOP AT systems ASSIGNING <fs_sc_systems>.
      LOOP AT date ASSIGNING <fs_sc_date>.
        SELECT COUNT( DISTINCT crmd_orderadm_i~header )
          FROM crmd_orderadm_i
          INNER JOIN bbp_pdigp
             ON crmd_orderadm_i~client EQ bbp_pdigp~client   "MANDT is referred to as client
            AND crmd_orderadm_i~guid   EQ bbp_pdigp~guid
          INTO w_sc_count
          WHERE crmd_orderadm_i~created_at BETWEEN <fs_sc_date>-start_timestamp
                                               AND <fs_sc_date>-end_timestamp
            AND bbp_pdigp~zz_scsys EQ <fs_sc_systems>-sys_name.
      ENDLOOP.
    ENDLOOP.
    In the above code snippet,
    <fs_sc_systems>-sys_name contains the system name,
    <fs_sc_date>-start_timestamp contains the start date of the month,
    and <fs_sc_date>-end_timestamp is the end date of the month.
    Also, the data in the tables crmd_orderadm_i and bbp_pdigp is very large, and it grows every day.
    Now, the above SELECT query is taking a lot of time to return the count, due to which I am facing performance issues.
    Can anyone please help me optimize this code?
    Thanks,
    Suman

    Hi Choudhary Suman ,
    Try this:
    SELECT crmd_orderadm_i~header
      INTO TABLE it_header                  " internal table
      FROM crmd_orderadm_i
      INNER JOIN bbp_pdigp
         ON crmd_orderadm_i~client EQ bbp_pdigp~client
        AND crmd_orderadm_i~guid   EQ bbp_pdigp~guid
      FOR ALL ENTRIES IN date
      " BETWEEN is not allowed on FOR ALL ENTRIES fields, hence GE/LE.
      WHERE crmd_orderadm_i~created_at GE date-start_timestamp
        AND crmd_orderadm_i~created_at LE date-end_timestamp
        AND bbp_pdigp~zz_scsys EQ date-sys_name.

    SORT it_header BY header.
    DELETE ADJACENT DUPLICATES FROM it_header COMPARING header.
    DESCRIBE TABLE it_header LINES v_lines.
    Hope this information is of help to you.
    Regards,
    José

  • Improving performance: how to know the selected rows in af:table through a check box

    Hi,
    I have a VO which has 3 transient attributes. Two of these transient attributes get their values from a view accessor. The other transient attribute is used in the UI to select rows in the af:table.
    In my UI, I have a table and a check box to select rows. I want to get the rows selected through the check box in the backing bean, for my business requirements.
    I have used the logic below to get the selected rows. Here I am iterating through the entire view object's rows, so it impacts the performance of the UI. How can I find the selected rows in the bean, or how can I improve the performance?
    I also applied the view criteria below, but it is still not performant.
    // ViewCriteria vc = vo.createViewCriteria();
    // //.getViewCriteriaManager().getViewCriteria("SelectedMerchantCriteria");
    // ViewCriteriaRow vcr = vc.createViewCriteriaRow();
    // vc.setCriteriaMode(ViewCriteria.CRITERIA_MODE_CACHE | ViewCriteria.CRITERIA_MODE_QUERY);//MerchantName1
    // //vcr.setAttribute("MerchantName1", "1017 CAFE");
    // vcr.setAttribute("SelectToMap", "true");
    // vc.addRow(vcr);
    // vo.applyViewCriteria(vc);
    // vo.executeQuery();
    public void mapSupplierMerchant2(ActionEvent actionEvent) {
        AdfFacesContext adfFacesContext = AdfFacesContext.getCurrentInstance();
        Map pageFlowScope = adfFacesContext.getPageFlowScope();
        ArrayList merchantMappingList = new ArrayList();

        CollectionModel tableModel = (CollectionModel) this.getMerchantTable().getValue();
        JUCtrlHierBinding tableBinding = (JUCtrlHierBinding) tableModel.getWrappedData();
        ViewObject vo = tableBinding.getViewObject();

        // Full scan over every row of the view object to find the checked
        // ones - this iteration is the performance problem described above.
        Row r = vo.first();
        while (r != null) {
            System.out.println("================================ " + r.getAttribute("MerchantName1") + " " + r.getAttribute("SelectToMap"));
            Boolean isMerchantSelected = (Boolean) r.getAttribute("SelectToMap");
            if (isMerchantSelected != null && isMerchantSelected) {
                Long merchantIdToMap = (Long) r.getAttribute("MerchantMapId");
                if (merchantIdToMap != null) {
                    merchantMappingList.add(merchantIdToMap);
                }
            }
            r = vo.next();
        }

        if (merchantMappingList.size() > 0) {
            Long primaryVendorId = (Long) pageFlowScope.get("primaryVendorId");
            Long groupId = (Long) pageFlowScope.get("groupId");
            OperationBinding method = (OperationBinding) ADFUtil.findOperation("mapSupplierMerchant");
            if (method != null) {
                Map paramMap = method.getParamsMap();
                paramMap.put("vendorId", primaryVendorId);
                paramMap.put("groupId", groupId);
                paramMap.put("merchantList", merchantMappingList);
                Boolean result = (Boolean) method.execute();
                List errors = method.getErrors();
                if (result != null && result) {
                    this.getMerchantMappedFlag().setValue(true);
                    AdfFacesContext.getCurrentInstance().addPartialTarget(this.getMerchantMappedFlag());
                    AdfFacesContext.getCurrentInstance().addPartialTarget(this.getMerchantTable());
                    tableBinding.getViewObject().setRangeSize(25);
                    FacesMessage message = new FacesMessage("The selected merchants have been mapped to the group. Please save the changes.");
                    message.setSeverity(FacesMessage.SEVERITY_INFO);
                    FacesContext.getCurrentInstance().addMessage(null, message);
                } else {
                    FacesMessage message = new FacesMessage("No merchants have been selected for mapping. Please select at least one merchant.");
                    message.setSeverity(FacesMessage.SEVERITY_ERROR);
                    FacesContext.getCurrentInstance().addMessage(null, message);
                }
            }
        }
    }

    Hi
    Please check this post: [http://adfwithejb.blogspot.in/2012/08/hi-i-came-across-one-common-use-case.html].
    It has a clear explanation of how to provide a checkbox for row selection. You can also go through my comments at the end of the post. That should solve your problem of iterating through the entire VO.
    Thanks,
    Rakesh

  • Poor performance for my wlan in conference rooms

    Hi,
    I have real problems in my conference rooms. I have deployed about 25 APs in my building: 1242s, 1131s, 3501s and 1142s. It is a two-story building. I've used the WCS maps feature to provide me with a coverage-area map. I think the upstairs and downstairs APs are interfering with each other. It was suggested I put 3 APs into one conference room, each with its own a/b or g radio. How do I pinpoint what is causing the connectivity problems in my conference rooms?
    Also, have you tried manually adjusting power levels? I believe once I start with that, I'll have to touch each and every AP. Any suggestions?
    Thanks            

    Remember that when working with wireless we should always do a site survey to determine the site's current RF environment, where to locate the access points, how many access points to get, and where to install each AP so that the overlap between APs is not more than 15%.
    Also, once the APs are installed you can use the WLC options and check how many rogue APs the WLC's APs are seeing, since these rogues affect your APs and auto RRM will not be able to decide what channel to use on the APs managed by your WLC, so you would need to configure it manually. You can also go to the Monitor tab and select AP by AP to see how each AP sees the signal of the other APs managed by the WLC.
    For auto RRM to work on the WLC, each AP needs to see at least 3 other APs with a good signal to be able to set the correct power and channel to use.
    Sent from Cisco Technical Support iPhone App

  • Help! Poor Performance for BW reporting in WAD

    Hi, Gurus:
    I have one BW report: if I run it in the Analyzer I get the result within 10 seconds; however, when I run it in WAD it takes more than 4 minutes, even though it returns no data. I'm really confused by this phenomenon, please help. Thanks in advance!

    Hi
    Please check this link .
    Re: Tips for Frontend performance -  Web Reports (WAD).
    Hope this helps.
    Cheers
    Chanda

  • Observing poor performance in the execution of queries

    I am executing a relatively simple query which takes roughly 48-50 seconds to run. Can someone suggest an alternate way to query the semantic model so that we can achieve a response time of a second or under? Here is the query:
    PREFIX bp:<http://www.biopax.org/release/biopax-level3.owl#>
    PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
    PREFIX ORACLE_SEM_FS_NS:<http://oracle.com/semtech#dop=24,RESULT_CACHE,leading(t0,t1,t2)>
    SELECT distinct ?entityId ?predicate ?object
    WHERE
    {
      ?entityId rdf:type bp:Gene .
      ?entityId bp:name ?x .
      ?entityId bp:displayName ?y .
      ?entityId ?predicate ?object .
      FILTER(regex(?x, "GB035698", "i") || regex(?y, "GB035698", "i"))
    }
    The same query executed from SQL Developer takes about as long as well:
    SELECT distinct /*+ parallel(24) */ subject, p, o
    FROM TABLE
      (sem_match('{?subject rdf:type bp:Gene .
                   ?subject bp:name ?x .
                   ?subject bp:displayName ?y .
                   ?subject ?p ?o
                   filter(regex(?x, "GB035698", "i") || regex(?y, "GB035698", "i"))}',
                 sem_models('biopek'),
                 null,
                 sem_aliases(sem_alias('bp',
                             'http://www.biopax.org/release/biopax-level3.owl#')),
                 null,
                 null));
    Is there anything I am missing? Can we do anything to optimize our data retrieval?
    Best Regards,
    Ami

    For better performance when using a FILTER involving regular expressions, you may want to create a full-text index on the MDSYS.RDF_VALUE$ table as described in:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e11828/sdo_rdf_concepts.htm#CIHJCHBJ
    I am assuming that you are checking for a case-insensitive occurrence of the string GB035698 in ?x or ?y. (On the other hand, if you are checking whether ?x or ?y is equal to a case-insensitive form of the string GB035698, then the filter could be written in an expanded form involving just value-equality checks and would not need a full-text index for performance.)
    Thanks.

  • Poor Performance of the query.

    Hi all,
    I am using this query:
    select address1, address2, address3, city, place, pincode, siteid, bpcnum_0,
           contactname, fax, mobile, phone, website
      from (select address1, address2, address3, city, place, pincode, siteid, bpcnum_0,
                   contactname, fax, mobile, phone, website,
                   row_number() over (partition by contactname, address1
                                      order by contactname, address1) as rn
              from vw_sub_cl_add1
             where siteid = 10 and bpcnum_0 = '0063') emp
     where rn = 1
    I used EXPLAIN PLAN for the query; the result is:
    Plan hash value: 3976107967
    | Id | Operation        | Name | Rows | Bytes | Cost (%CPU) | Inst | IN-OUT |
    |  0 | SELECT STATEMENT |      |      |       |       0 (0) |      |        |
    |  1 | REMOTE           |      |      |       |             | INFO | R->S   |
    8 rows returned in 0.04 seconds - but the query actually returns 10 rows.
    The view vw_sub_cl_add1 is created using database links (a remote database server).
    I am using this query in a FOR loop to retrieve the records and print them one by one.
    The problem is that the performance of the query is poor: it takes 1.08 seconds to display all the records.
    What steps should I take to minimize the retrieval time?
    Thanks in advance
    bye
    Srikavi

    Since this query is processed completely on the remote site, there are at least two potential issues you should check if you don't want to use the "materialized view" approach:
    1. The time it takes to transport the result set to your local database, i.e. potential network issues
    2. The time it takes to process the query on the remote site
    Since you're only fetching 10 rows - if I understand you correctly - the first point shouldn't be an issue in your case.
    If you have suitable access to the remote site, you would need to generate an execution plan of the "local" version of the query by logging directly into the remote site, to find out why it takes longer than you expect. Probably it's missing some indexes, if the number of rows to process should be only a few and you expect it to return more quickly.
    Here are simple instructions how to generate a meaningful execution plan if you want to post it here:
    Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the \[code\] and \[/code\] tags to enhance the readability of the output provided.
    In SQL*Plus:
    SET LINESIZE 130
    EXPLAIN PLAN FOR <your statement>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on. In previous versions you could run the following in SQL*Plus (on the server) instead:
    @?/rdbms/admin/utlxpls
    A different approach in SQL*Plus:
    SET AUTOTRACE ON EXPLAIN
    <run your statement>;
    will also show the execution plan.
    In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
    and post the "tkprof" output here, too.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Poor Performance of the WebLogic Portal System

    Hi,
    I am facing an issue which has become a bottleneck for the development of my application.
    My problem is that when I run my portal page (web page) and wish to see it in Internet Explorer/Mozilla Firefox, it takes a very long time to render (approx. 10 mins). This is affecting productivity, as rendering the page is a frequent step to see the output of your work and changes.
    I would be very thankful if anyone can tell me what is wrong. Is this problem only on my side? Why is the WebLogic Portal system so slow compared to other portal systems like Microsoft's SharePoint and IBM's WebSphere Portal?
    I am using WebLogic Portal v10.
    The CPU is 3.2 GHz, with 4 GB RAM and 3 MB cache.
    Please guide me. I would appreciate it if someone can provide a way to speed up the page rendering. I have tried changing the heap size etc. but failed.
    Thank you all. Have a great day.

    10 minutes?!
    We need to narrow that down; it may be something in your portlet implementation. An easy way to get an idea would be to take a series of Java thread dumps of the WLP server instance while it is processing that portlet. On Windows, press Ctrl-Break, or Google the way to do it for your platform.
    It will print out what each thread is working on. If you see your code in there over a period of time, you've got a problem in your portlet. If it is stuck in WLP code, let us know.
    I also did a blog entry about performance improvement tips during iterative development, some might apply for you:
    http://peterlaird.blogspot.com/2007/05/optimized-development-for-weblogic.html
    Peter
