LOV load delay issue

Hi Gurus,
I have a big performance disparity with an LOV across instances. The LOV queries quickly in the local instance, but when we deploy the same page to the client instance, the LOV takes minutes to pop up and even longer to return a value to the base page.
What I observed is that when the LOV is selected, the form gets submitted (the page blinks) and a count appears on the status bar saying "300 items remaining" (to download). The count gradually decreases until the LOV pop-up appears.
Please suggest a solution. Could the delay be because the client instance is connected through a VPN?
Thanks,
Srikanth
Edited by: Srikanth Parupally on May 4, 2009 12:23 PM

Hi Sumit,
Thanks for your reply. I have checked the query and its cost is very low. I have nearly 50 LOVs (torch icon), 50 date items (calendar icon), and 50 delete icons on my page; it is an extensive data-entry page.
When I click on the torch, the page submits and all of these icons get refreshed. I guess the status bar shows the count of icons being refreshed (similar to any web page downloading images).
I tried putting a single LOV on the page and it executes without problems (normal time). Is there any way to keep the LOV from submitting or refreshing the page? Please suggest.
Thanks,
Srikanth

Similar Messages

  • CSS arrowpoint cookie load balancing issue

    Hi guys,
    I need some advice on a load-balancing issue.
    We have connections hitting the CSS via a proxy environment, so I see only one source IP address. I want to use ArrowPoint cookies for session stickiness. However, when I enable the rule, the TCP session negotiation fails: the CSS sends a TCP/RST which terminates the session.
    Here's the rule config:
    content HTTP_rule
    add service ZSTS299102
    add service ZSTS281101
    vip address <filtered>
    add service LONS299102
    add service LONS281101
    balance weightedrr
    change service ZSTS299102 weight 5
    change service ZSTS281101 weight 5
    advanced-balance arrowpoint-cookie
    protocol tcp
    port 80
    url "/*"
    active
    Any help would be much appreciated.

    Remko,
    In L3/L4 mode the CSS sends the SYN directly to the server, so when the FIN comes in we simply pass it to the server.
    With L5, the CSS spoofs the connection and selects the server only after receiving the GET.
    If there were some delay between the GET and the FIN, the CSS would have time to establish a connection with the server, and the FIN could simply be forwarded.
    Unfortunately, in this case the FIN arrives right after the GET with no delay.
    Gilles.
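    The delayed-binding behavior Gilles describes (L5 rules spoof the client handshake and pick a server only after seeing the GET) can be sketched as a toy simulation. This is only an illustration of the general technique, not CSS internals; the function name, event names, and action strings are all made up.

```python
def process_segments(mode, segments):
    """Simulate when a load balancer binds a client flow to a backend.

    mode "L4": the SYN is passed straight to a server chosen up front.
    mode "L5": delayed binding -- the LB spoofs the TCP handshake itself
    and chooses a server only after it has seen the HTTP GET.
    Returns the list of actions taken, in order.
    """
    actions = []
    server_ready = False          # has the backend handshake completed?
    for seg in segments:
        if seg == "SYN":
            if mode == "L4":
                actions.append("connect-server")  # SYN forwarded immediately
            else:
                actions.append("spoof-synack")    # LB answers the client itself
        elif seg == "SERVER_SYNACK":
            server_ready = True                   # backend connection is up
        elif seg == "GET":
            if mode == "L5":
                actions.append("connect-server")  # server chosen only now
            actions.append("queue-get")           # GET held until backend is up
        elif seg == "FIN":
            # A FIN with no backend connection yet cannot be forwarded;
            # the flow is reset instead -- the failure described above.
            actions.append("forward-fin" if server_ready else "reset")
    return actions
```

    With an L5 rule, a client that sends its FIN immediately after the GET hits the "reset" branch, matching the TCP/RST seen on the CSS.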

  • In Log-shipping what is load delay period on secondary server - Skipping log backup file since load delay period has not expired ....

    During log shipping, the job on the secondary server runs successfully but reports:
    "Skipping log backup file since load delay period has not expired ...."
    What is this "load delay period"? Can we configure it somehow, somewhere?
    NOTE: On the "Restore Transaction Log" tab, "Delay restoring backups at least" = default (zero minutes).
    Thanks
    Think BIG but Positive, may be GLOBAL better UNIVERSAL.

    How to get the LSBackup, LSCopy, and LSRestore jobs back in sync...
    When I last posted, the issue was that my trn backups were not being copied from Primary to Secondary.
    Upon further inspection of the LS-related tables in MSDB, I found the following likely candidates for adjustment:
    1) dbo.log_shipping_monitor_secondary, column last_copied_file:
    change the last_copied_file value to something older than the file that is stuck. For example, the value in the table was
    E:\SQLLogShip\myDB_20140527150001.trn
    I changed last_copied_file to E:\SQLLogShip\myDB_20140525235000.trn. Note that this is just a made-up file name a few minutes (4 minutes and 28 seconds, to be exact) before the actual file I would like to restore (myDB_2014525235428.trn).
    LSCOPY runs and voilà! The file is copied from primary to secondary. That appears to be the only change needed to get the copy going again.
    2) For LSRestore, see the MSDB table dbo.log_shipping_monitor_secondary and change last_restored_file;
    again I used the made-up file E:\SQLLogShip\myDB_20140525235000.trn.
    LSRESTORE runs and my just-copied myDB_2014525235428.trn is restored.
    ** Note that dbo.log_shipping_secondary_databases also has a last_restored_file column. Changing it did not seem to have any effect, though I see that it updates after the above is done and LSRestore has run successfully, so now it is correct as well.
    3) The LSBackup job is still not running; it still has a last run date in the future. I could just leave it and eventually it would come right, but I made a fairly significant time change, plus it's all an experiment... back to MSDB:
    look at dbo.sysjobs and get the job_id of your LSBackup job, then
    edit dbo.sysjobschedules, changing next_run_date / next_run_time as needed to a datetime before the current time, or to when you would like the job to start running.
    I wouldn't be so cavalier with data that was important, but that's the benefit of being in Test, and it appears that these time comparisons are very rudimentary: a value in the relevant log shipping table and the name of the trn file. That said, if you were facing a problem of this nature due to lost or corrupted trn files, or some similar scenario, this wouldn't fix your problem, though it _might_ allow you to continue. In my case I know I have all the trn files; it's just the time that changed (on my Primary server, in this case), and thus the names of the trn logs got out of sync.
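    The "made-up file name a few minutes earlier" trick above can be sketched as a small helper that rewrites the timestamp embedded in a log-shipping backup name. This is a hypothetical illustration based on the naming convention shown (DBNAME_yyyymmddhhmmss.trn); the function name is invented, and nothing here touches MSDB itself.

```python
from datetime import datetime, timedelta
import re

def earlier_trn_name(trn_path, minutes=5):
    """Build a made-up .trn name stamped a few minutes before the given one,
    suitable as a last_copied_file / last_restored_file value (the file
    itself does not need to exist)."""
    m = re.match(r"(.*_)(\d{14})(\.trn)$", trn_path)
    if not m:
        raise ValueError("unexpected .trn name: " + trn_path)
    # Parse the 14-digit timestamp, shift it back, and rebuild the name.
    stamp = datetime.strptime(m.group(2), "%Y%m%d%H%M%S") - timedelta(minutes=minutes)
    return m.group(1) + stamp.strftime("%Y%m%d%H%M%S") + m.group(3)
```

    For example, earlier_trn_name(r"E:\SQLLogShip\myDB_20140527150001.trn") yields a name five minutes earlier, which log shipping's simple name comparison will sort before the stuck file.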

  • Firmware update for HP OfficeJet Pro 8000 wireless to fix load paper issue - as for the 8500?

    Is there any hope that there will soon be a firmware update to fix the paper-loading and printhead issues on the HP OfficeJet Pro 8000 wireless printer?
    A month or two ago a firmware update came out for the HP OfficeJet Pro 8500 wireless, which has some of the same issues
    and allegedly shares much of the same load mechanism with the 8000.
    It would be great, as we are on our fourth (4th) replacement of this HP 8000 printer. For example, it would be great if we could use normal (80g) paper and not have to use (HP) 90g paper, as this prevents us from using the duplexer and forces printing on one side only.

    Thanks for the advice to contact HP tech support!
    I have tried multiple ways to contact HP since November 2009, when I got the first (of 4) printers. The first 3 printers were subsequently replaced.
    Finally, in late June 2010, I got the message that a firmware update was coming for Windows, but not for Macs. The firmware should improve, among other things, the paper pickup (the load-paper issue) and printhead life.
    I have applied the Windows firmware to the printer from a PC, but it still seems to have load-paper issues, even using HP Bright White paper, when printing from Macs. I have not tested printing from a PC, but I assume a firmware update upgrades the printer itself and is not specific to a certain OS.

  • ITS load balancing issue

    Hi all,
    During our testing we are seeing a load-balancing issue. One of the AGates in our ITS network has more CPU power than the other AGates; the memory on all the AGate servers is the same.
    The issue is that the AGate with more CPU power acquires more sessions than the other two AGates: roughly 60 more sessions per AGate process. Does having more CPU on an AGate affect load balancing in ITS? We are on ITS patch level 19 with the hotfix.
    Thanks,
    Jin Bae

    Hello Jin,
    yes, at (re)initialization the WGate retrieves the capacity from the AGates.
    This is an accumulated number based on CPU performance and the number of CPUs!
    The number can be seen in "wgate-status" as the "Capacity" of the AGate.
    When running multiprocess AGates, the number is retrieved from the MManager and also involves the number of AGate processes.
    The WGate dispatches the load in proportion to these capacity numbers.
    To my knowledge there is no way these values can be configured (fixed).
    Regards,
      Fekke

  • SIP load balancing issue with ACE 4710

    I have a Cisco ACE 4710 running version A4(2.2). I configured simple SIP load balancing, first without stickiness. Without stickiness we have a problem: the BYE packet does not always go to the same server, which leaves ports in use even though the user has hung up the phone. This happens randomly. I have a total of 20 licensed ports and they fill up very quickly, so I decided to use stickiness with Call-ID, but I still have the same issue. Below is the config:
    rserver host CIN-VOX-31
      ip address 172.20.130.31
      inservice
    rserver host CIN-VOX-32
      ip address 172.20.130.32
      inservice
    serverfarm host CIN-VOX
      probe SIP-5060
      rserver CIN-VOX-31
        inservice
      rserver CIN-VOX-32
        inservice
    sticky sip-header Call-ID VOX_SIP_GROUP
      timeout 1
      timeout activeconns
      replicate sticky
      serverfarm CIN-VOX
    class-map match-all CIN_VOX_L4_CLASS
      2 match virtual-address 172.22.12.30 any
    class-map match-all CIN_VOX_SIP_L4_CLASS
      2 match virtual-address 172.22.12.30 udp eq sip
    policy-map type loadbalance sip first-match CIN_VOX_LB_SIP_POLICY
      class class-default
        sticky-serverfarm VOX_SIP_GROUP
    policy-map multi-match GLOBAL_DMZ_POLICY
       class CIN_VOX_SIP_L4_CLASS
        loadbalance vip inservice
        loadbalance policy CIN_VOX_LB_SIP_POLICY
        loadbalance vip icmp-reply
      class CIN_VOX_L4_CLASS
        loadbalance vip inservice
        loadbalance policy CIN_VOX_LB_SIP_POLICY
        loadbalance vip icmp-reply
    interface vlan 20
      description VIP_DMZ_VLAN
      ip address 172.22.12.4 255.255.255.192
      alias 172.22.12.3 255.255.255.192
      peer ip address 172.22.12.5 255.255.255.192
      access-group input PERMIT-ANY-LB
      service-policy input GLOBAL_DMZ_POLICY
    Could you please help me with this?
    thanks
    Rakesh Patel

    I mean there should be one more statement:
    class-map type sip loadbalance match-any CIN_VOX_LB_SIP_POLICY 
    match sip header Call_ID header-value sip:
    and that will be called under:
    policy-map multi-match GLOBAL_DMZ_POLICY
       class CIN_VOX_SIP_L4_CLASS
        loadbalance vip inservice
        loadbalance policy CIN_VOX_LB_SIP_POLICY
        loadbalance vip icmp-reply
    Is that missing in your config?

  • Load monitoring issue

    Hi,
    In the load-monitoring process chain we have an issue with one master data load. It fails with "Error message when processing in the Business Warehouse", and the monitor shows 2 duplicate records. How can I solve the issue and continue the process chain? Could anyone please suggest how to resolve it?
    Thanks!
    regards,
    Buvana.

    Hi Bhuvana,
    There is one option you can try, though I am not sure whether it is correct: go to the DTP, open the Update tab, and check the option called "Handle Duplicate Record Keys". It should work. In the past, when we had a similar duplicate-records error, we fixed it the same way.
    regards,
    raghu.

  • Page loading delay

    Is there a way to go in to Safari and set the page loading delay to a shorter time ?

    Empty Safari's cache (from the Safari menu), then close Safari.
    Go to Home/Library/Safari and delete the following files:
    form values
    download.plist
    Then go to Home/Library/Preferences and delete
    com.apple.Safari.plist
    Repair permissions.
    Start up Safari again, and things should have improved.
    If not, MacFixIt has published a very detailed (very!) article on speeding up a slow Safari, here:
    http://www.macfixit.com/article.php?story=20070416000657464

  • Delay issue with mighty mouse and macbook pro. help?

    Connected but delayed in movement. I am connected to an HD TV via HDMI, and the delay issue is only on the TV monitor. The mouse responds as normal on the laptop screen if I have it open, but on the TV it is delayed. Any clue as to what might be wrong? I am running HDMI to a converter for the Mini DisplayPort and have seen it work many times with no delay. I have changed the batteries and reconnected/restarted the computer and the mouse numerous times.

    Hey there, I'm on the forums because I can't get my wireless mouse to connect to my new MacBook Pro either (just got it yesterday). Bluetooth Device Manager seems to "see" it when it searches, but insists it can't connect to it, saying either "device not found" (which is ridiculous, since it's right up there in the list) or "cannot find the necessary services." Yet I cannot find anything definitive on the tubes to say that the two don't work together. Anyone else have any ideas for us??
    I should mention I changed the batteries b/c at first I thought that's all it was. When it didn't work I brought it to a 2009 MacBook Pro, and that one connected no problem ::shaking head::

  • Logshipping: Skipping Log back up file since load delay period has not Expired

    Skipping Log back up file since load delay period has not Expired
    What does it mean?
    Thanks.

    Check step 19 in the link
    http://msdn.microsoft.com/en-gb/library/ms190640.aspx
    "If you want to delay the restore process on the secondary server, choose a delay time under
    Delay restoring backups at least." It just means that the restore of a particular log backup will be delayed until the time specified.
    It is also explained here
    http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-730-multiple-mirrors-and-log-shipping-load-delays/
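    The check behind that message is just a time comparison: a copied log backup becomes eligible for restore only once the configured delay has passed. A minimal sketch (the function name and parameters are illustrative, not SQL Server's actual internals):

```python
from datetime import datetime, timedelta

def should_restore(backup_finished_at, delay_minutes, now):
    """True once the 'Delay restoring backups at least' period has expired
    for this log backup; until then the restore job skips the file."""
    return now >= backup_finished_at + timedelta(minutes=delay_minutes)
```

    With the default of zero minutes, every copied backup is eligible immediately.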
    Regards, Ashwin Menon My Blog - http:\\sqllearnings.com

  • Program delay issue

    1. Once or twice a week, though not necessarily every week, I will turn on my TV and the box will be stuck playing a program from an hour before. If I press the FiOS TV button to go to live programming, it will flash the correct program info but then change the info back to the earlier program and keep showing the earlier program. Turning the box on and off does nothing, and I have to unplug and reboot the box to see current programming. I have a Motorola QIP6416-2 HD DVR. I figure I'm probably going to have to get a new box.
    2. However, I had a newer box when I first had my fios hooked up and had all kinds of issues with it.
    So, I guess I have two questions,
    1. Any suggestions on the program delay issue?
    And
    2. Have they fixed the problems with the sound/etc on the new boxes?
    "If your problem has been solved, please mark it as such. Don't forget to hand out your Kudos!"

    Thank you. I have not tried a 2-channel move, only 1. Hopefully they get the newer boxes fixed. The ability to record 2 programs and watch a third using the buffer is very useful.

  • Outlook delay issue

    Outlook delay issue

    So, what's your point?
    1. You want to send email via Outlook's delay function?
    2. You have a problem with an Outlook sending/receiving delay?
    If #1, please see
    http://office.microsoft.com/en-us/outlook-help/delay-or-schedule-sending-email-messages-HP010355051.aspx
    If #2, could you please check the Internet headers?

  • ERPi Data load mapping Issue

    Hi,
    We are facing an issue with ERPi data load mappings. The mapping file (a txt file) has 36k records, and whenever we try to load the mappings it takes a very long time, nearly 1 hour 30 minutes. We want to reduce that time. Is there any way to reduce the data load mapping time?
    Hyperion version: 11.1.2.2.300
    Please help, thanks in advance!!
    Thanks.

    Has anyone faced the same kind of issue?

  • Loading performance issues

    Hi gurus,
    Please can you help with a loading issue: I am extracting data from a standard extractor in Purchasing, and for 3 lakh (300,000) records it is taking 18 hours. Can you please suggest ways to improve loading performance?
    -KP

    Hi,
    Loading Performance:
    a) Always load and activate the master data before you load the transaction data so that the SIDs don't have to be created at the loading of the transaction data.
    b) Have the optimum packet size. If you have too small a packet size, the system writes messages to the monitor and those entries keep increasing over time and cause slow down. Try different packet sizes to arrive at the optimum number.
    c) Fine tune your data model. Make use of the line item dimension where possible.
    d) Make use of the load parallelization as much as you can.
    e) Check your CMOD code. If you have direct reads, change them to read all the data into internal table first and then do a binary search.
    f) Check code in your start routine, transfer rules and update rules. If you have BW Statistics cubes turned on, you can find out where most of the CPU time is spent and concentrate on that area first.
    g) Work with the basis folks and make sure the database parameters are optimized. If you search on OSS based on the database you are using, it provides recommendations.
    h) Set up your loads processes appropriately. Don't load all the data all the time unless you have to. (e.g) If your functionals say the historical fiscal years are not changed, then load only current FY and onwards.
    i) Set up your jobs to run when there is not much activity. If the system resources are already strained, your processes will have to wait for resources.
    j) For the initial loads only, always buffer the number ranges for SIDs and DIM IDs.
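    Tip (e), replacing per-record direct reads with one bulk read plus a binary search, is an ABAP pattern, but the idea can be sketched in Python. The field names (matnr, matkl) and the enrich function are illustrative only:

```python
from bisect import bisect_left

def enrich(records, lookup_rows):
    """Instead of one database SELECT per record (a 'direct read'),
    fetch the lookup table once, sort it, and binary-search per record."""
    table = sorted(lookup_rows)            # (key, value) pairs, sorted once
    keys = [k for k, _ in table]
    out = []
    for rec in records:
        i = bisect_left(keys, rec["matnr"])
        if i < len(keys) and keys[i] == rec["matnr"]:
            rec = dict(rec, matkl=table[i][1])   # attach looked-up attribute
        out.append(rec)
    return out
```

    In ABAP the equivalent is a single SELECT into a sorted internal table followed by READ TABLE ... BINARY SEARCH inside the loop, which avoids one database round trip per record.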
    Hareesh

  • Loads delayed due to Indexing Issue

    HI SAP GURUS ,
    Please suggest the correct approach, as we have an issue where the deletion/creation of index process takes a long time to complete.
    Also, how do the partitions in an InfoCube grow, and what steps should one take to increase performance?
    Full points will be rewarded for right answers.
    Regards ,
    Subash Balakrishnan

    Hi Subash,
    Firstly, your question is very generic. It depends on whether the time is being spent creating an index on a DSO or on an InfoCube.
    InfoCubes are more prone to index-creation issues than DSOs, due to the high number of uncompressed requests. However, this also depends on the database, because of its architecture:
    ORACLE:
    With an Oracle database, if you have a cube with 1,000 uncompressed requests and 10 dimensions (every dimension brings with it a local bitmap index), you have 1,000 partitions for the table itself and 10 x 1,000 index partitions, which makes a total of 11,000 database objects. This amount of objects hurts most when updating statistics or dropping/recreating secondary indexes. In situations with a high load frequency you will almost certainly run into problems in this area sooner or later.
    If you do a statistics update, all 11,000 objects will receive new statistics, which will take some time and may raise locking problems during execution, or at least a serialization when changing the statistics fields in the database catalog tables.
    If you drop/recreate secondary indexes, all these objects must be removed from or created in the database catalog, which is also a serial operation and may again raise locking situations and/or long runtimes. Additionally, there is a lot of DB cost for returning the allocated disk space.
    You will not experience trouble from this direction in the beginning, when the number of partitions is low. But you will run into problems randomly at first, as the number of partitions increases, and permanently after the number of partitions exceeds some (not specifiable, system- and context-dependent) threshold.
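    The object count above follows from simple arithmetic: one table partition per uncompressed request, plus one index partition per request for each dimension's bitmap index. As a sketch (the function name is illustrative):

```python
def db_objects(uncompressed_requests, dimensions):
    """Table partitions plus partitioned bitmap-index partitions."""
    table_parts = uncompressed_requests               # one per request
    index_parts = uncompressed_requests * dimensions  # one per request per index
    return table_parts + index_parts

# 1,000 requests and 10 dimensions give 11,000 database objects.
```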
    SQL Server:
    From SQL Server 2005 onwards there is a limit of 1,000 partitions per table, and each loaded uncompressed request in an InfoCube is a partition in SQL Server. In a BW system, each uncompressed request is loaded into its own partition; once the 1,000-partition limit has been reached, the database will continue writing each new request to the 1,000th partition and will write an error to the system log (SQL Error 7719). This is done only to avoid a hard failure when loading data; it is not a recommended business process to keep loading requests into the last partition. Continuing to load into the 1,000th partition can cause performance problems later when deleting requests, creating indexes, and updating statistics on the InfoCube.
    How to check number of partitions on the SQL server database:
    Execute report RSDD_MSSQL_CUBEANALYZE
    - Menu Settings => Expert mode
    - Press the Details button
    - Choose the number of minimum partitions (choose let say 500)
    - Press the button Start Checks
    This will display all the tables with more than 500 partitions in the database.
    So, to avoid this index-creation issue for InfoCubes, it is suggested to keep the number of uncompressed requests in the InfoCube low. Compress the InfoCube up to the latest request wherever possible.
    Hope this helps. Award points if helpful.
    Regards
    Tanuj
