Enormous response times to simple queries

Dear Forum
I am using TopLink via JPA in a Spring (v2) environment, using org.springframework.orm.jpa.LocalEntityManagerFactoryBean for tests.
There are 2700 Location objects stored in memory; each has one many-to-one relationship and many lazy one-to-many relationships to objects that do not yet exist.
I defined a query "findLocationByCode" ("SELECT DISTINCT l FROM Location l WHERE l.code = :code"), which normally takes about 30 ms!
When I run that query while creating a very complicated object with about 1000 dependent objects, it may take up to 30 seconds(!). Inspecting the database shows no relevant activity, so the time seems to be spent mainly within TopLink (cache?).
I use Oracle 10g 10.2.0 Enterprise Edition on a notebook with 2 GB RAM (Intel Pentium 1.7 GHz) running Windows XP Professional SP2.
TopLink: 2.0-b41-beta2 (03/30/2007)
Below is a snapshot of the persistence.xml settings and the annotations of the Location and LocClass classes.
Yours sincerely,
Wolfgang
TopLink settings in persistence.xml
<property name="toplink.sessions-xml"
     value="META-INF/sessions.xml" />
<property name="toplink.session-name" value="solver" />
<property name="toplink.cache.type.default" value="HardWeak" />
<property name="toplink.cache.size.default" value="15000"/>
<property name="toplink.cache.type" value="NONE" />
<property name="toplink.refresh" value="true"/>
<property name="oracle.orm.throw.exceptions" value="false" />
<property name="toplink.weaving" value="true" />
<property name="toplink.jdbc.bind-parameters" value="true" />
<property name="toplink.jdbc.native-sql" value="true" />
<property name="toplink.jdbc.batch-writing"
     value="BUFFERED" />
<property name="toplink.jdbc.cache-statements.size"
     value="100" />
<property name="toplink.jdbc.read-connections.max"
     value="10" />
<property name="toplink.jdbc.read-connections.min"
     value="2" />
<property name="toplink.jdbc.read-connections.shared"
     value="true" />
<property name="toplink.jdbc.write-connections.max"
     value="4" />
<property name="toplink.jdbc.write-connections.min"
     value="4" />
<property name="toplink.logging.level" value="SEVERE" />
<property name="toplink.logging.timestamp" value="true" />
<property name="toplink.logging.thread" value="true" />
<property name="toplink.logging.session" value="true" />
<property name="toplink.logging.exceptions" value="true" />
<property name="toplink.logging.file"
Location Class
@Entity
@Table(name = "LOCATION")
@NamedQuery(
name="findLocationByCode",
query="SELECT DISTINCT l FROM Location l WHERE l.code = :code")
public class Location implements ILocation {
     // persistent part
     // primary key field
     @Id
     @GeneratedValue
     @Column(name = "LOC_ID")
     transient
     private Long id;
     @Id
     @Column(name = "LOC_CODE", nullable = false)
     private String code;
     @Column(name = "LOC_NAME", nullable = true)
     private String name;
     @ManyToOne(cascade = CascadeType.ALL)
     @JoinColumn(name = "LOC_CLASS_ID")
     private LocClass locClass;
     @Column(name = "LOC_GPSX", nullable = true)
     private Float gpsX;
     @Column(name = "LOC_GPSY", nullable = true)
     private Float gpsY;
     @Column(name = "LOC_GRX", nullable = true)
     private Float grX;
     @Column(name = "LOC_GRY", nullable = true)
     private Float grY;
     @Column(name = "LOC_LOCAL_RADIO", nullable = true)
     private String localRadio;
     @OneToMany(mappedBy = "fromLoc", fetch = FetchType.LAZY, cascade = CascadeType.ALL)
     private Set<Connection> fromConnections = new HashSet<Connection>();
     @OneToMany(mappedBy = "toLoc", fetch = FetchType.LAZY, cascade = CascadeType.ALL)
     private Set<Connection> toConnections = new HashSet<Connection>();
     @OneToMany(mappedBy = "location", fetch = FetchType.LAZY, cascade = CascadeType.ALL)
     private Set<ViewLoc> viewLocs = new HashSet<ViewLoc>();
     @OneToMany(mappedBy = "location", fetch = FetchType.LAZY, cascade = CascadeType.ALL)
     private Set<Entry> entries = new HashSet<Entry>();
     // ... (constructors, getters and setters omitted)
}
LocClass Class
@Entity
@Table(name = "LOCCLASS")
@NamedQuery(
name="findLocClassByCode",
query="SELECT DISTINCT lc FROM LocClass lc WHERE lc.code = :code")
public class LocClass implements ILocClass{
//persistent part
// primary key field
@Id
@GeneratedValue
@Column(name = "LC_ID")
transient     
private Long id;
@Id
@Column(name = "LC_CODE", nullable=false)
public String code;
@Column(name = "LC_NAME", nullable=false)
public String name;
@Column(name = "LC_DIS_NAME", nullable=false)
public Boolean displayLocName;
@Column(name = "LC_D_LINE_MODE", nullable=false)
public Integer displayLineMode;
@Column(name = "LC_D_RUNTIME")
public Boolean displayRuntime;
@Column(name = "LC_SWITCH_POSS")
public Boolean switchPossibility;
// ... (constructors, getters and setters omitted)
}

Hello Wolfgang,
You mention your notebook has 2 GB, but not how much memory or heap space the JVM is configured to use (or how much it is actually using). If you haven't tuned the JVM, this could be one cause of the slowdown.
I do not see anything wrong with your entities - assuming the LOC_CLASS_ID field is a string type, anyway. One thing to try is to mark the many-to-one relationship as lazy, or bring it in using a fetch join in the location query, e.g.:
SELECT DISTINCT l FROM Location l join fetch l.locClass WHERE l.code = :code
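Both suggestions can be sketched as mapping/query fragments like the ones below. Note this is only a sketch, not a complete class: in JPA 1.0, FetchType.LAZY on a @ManyToOne is merely a hint to the provider, and with TopLink Essentials it generally requires weaving to be enabled before it takes effect.

```java
// Sketch only - mapping fragments for the Location entity shown above.

// Option 1: hint the provider to load locClass lazily.
@ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.ALL)
@JoinColumn(name = "LOC_CLASS_ID")
private LocClass locClass;

// Option 2: keep the mapping eager, but fetch the relationship in one
// SQL statement via a fetch join in the named query.
@NamedQuery(
    name = "findLocationByCode",
    query = "SELECT DISTINCT l FROM Location l JOIN FETCH l.locClass WHERE l.code = :code")
```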
Best Regards,
Chris

Similar Messages

  • Slow response time on data dictionary queries with optimizer_mode=rule in 10g

    I have two databases: DB1 (9i) and DB2 (10g), on Windows 2000.
    They are two development databases with the same schemas and the same tables. The application executes the same commands, but with different results and execution plans.
    In DB2, the queries with the slowest response times are the queries on the data dictionary (for example: all_synonyms).
    These queries are very fast with the cost-based optimizer and very slow with optimizer_mode=rule.
    And the problem is this: in DB1 and DB2, the application executes this command after connecting:
    ALTER SESSION SET OPTIMIZER_MODE = 'RULE';
    These are the traces of the session in db1 and db2:
    The queries are created dynamically by the application.
    Is there a solution for this?
    thanks

    Here is a simple example of what can happen,
    @>alter session set optimizer_mode=all_rows;
    @>SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    | Id  | Operation        | Name | Rows  | Cost (%CPU)|
    |   0 | SELECT STATEMENT |      |     1 |     2   (0)|
    |   1 |  FAST DUAL       |      |     1 |     2   (0)|
    @>alter session set optimizer_mode=rule;
    @>SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
    PLAN_TABLE_OUTPUT
    | Id  | Operation        | Name |
    |   0 | SELECT STATEMENT |      |
    |   1 |  FAST DUAL       |      |
    Note
          - rule based optimizer used (consider using cbo)
    As you can see, the explain plan is incomplete. Therefore it is not advised.
    Adith

  • How to get query response time from ST03 via a script ?

    Hello People,
    I am trying to get the average query response time for BW queries with a script (for monitoring/historization).
    I know that this data can be found manually in ST03n in the "BI workload" section.
    However, I don't know how to get this statistic from a script.
    My idea is to run a SQL query to get this information; here is the current state of my query:
    select count(*) from sapbw.rsddstat_olap
    where calday = 20140401
    and (eventid = 3100 or eventid = 3010)
    and steptp = 'BEX3'
    The problem is that this query does not return the same number of navigations as shown in ST03n.
    Can you help me set the correct filters to get the same number of navigations as in ST03n?
    Regards.

    Hi Experts,
    Do you have any ideas for this SQL query?
    Regards.

  • Faster response time of queries

    I have a query which joins a few tables with several thousand rows each. This query normally returns tens of thousands of rows, and the response time is almost 10 minutes, which is not acceptable for a web application.
    To speed it up I just want Oracle to return only the first, say, 1000 rows.
    Changing the max rows returned parameter (APEX) to 1000 doesn't help at all. It seems the query executes in full and only then are the first 1000 rows of the result set sent.
    So my question is: is there a way to instruct Oracle to stop execution of the query once the first n rows are retrieved?
    I tried SELECT /* FIRST_ROWS(1000) */ .... but this doesn't help, and I wonder how it could, when TOAD treats it as a comment and doesn't change the optimizer mode - still ALL_ROWS.
    What am I doing wrong here? This is the first time I am trying to use the FIRST_ROWS hint - is there another, better way to speed up my query?

    Hi Bob, thanks for the response. rownum < n was the first thing I tried. One would think that if a query takes 5 minutes to execute and returns 50,000 rows, then after adding rownum < 5000 it shouldn't take more than a minute - well, it takes pretty much the same time as without rownum < n. It seems like rownum is determined for the whole result set and only then is the where condition applied.
    The tables actually have much more than a few thousand rows: one has close to 250,000, a couple of other tables have over a million, and I don't see much that I can optimize. I think being able to return only the first n rows quickly must be a fairly common situation for web applications dealing with large tables/views.
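    One standard Oracle idiom for this is to apply ROWNUM outside an ordered inline view, which lets the optimizer stop once n rows have been produced (a COUNT STOPKEY plan). A minimal sketch that just builds such a statement - the table and column names are invented for illustration:

```java
public class TopN {
    // Wrap an ordered query so Oracle can stop after the first n rows.
    // ROWNUM must be applied OUTSIDE the inline view: "WHERE ROWNUM <= n"
    // combined directly with ORDER BY is evaluated before the sort.
    static String topN(String orderedQuery, int n) {
        return "SELECT * FROM (" + orderedQuery + ") WHERE ROWNUM <= " + n;
    }

    public static void main(String[] args) {
        // Hypothetical query; any SELECT with an ORDER BY works here.
        System.out.println(topN("SELECT id, name FROM locations ORDER BY name", 1000));
    }
}
```

    Note also that a valid optimizer hint needs the plus sign, `/*+ FIRST_ROWS(1000) */`; without the `+`, Oracle treats it as an ordinary comment, which matches the behaviour described above.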

  • Explain plan - lower cost but higher response time in 11g compared to 10g

    Hello,
    I have a strange scenario where I'm migrating a db from a standalone Sun FS running the 10g RDBMS to a 2-node Sun/ASM 11g RAC env. The issue is with the response time of queries -
    In 11g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANALYZED NUM_ROWS
    11-08-2012 18:21:12 3413956
    Elapsed: 00:00:00.30
    In 10g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANAL NUM_ROWS
    07-NOV-12 3502160
    Elapsed: 00:00:00.04
    If you look at the response times, even a simple query on dba_tables takes ~8 times longer. Any ideas what might be causing this? I have compared the execution plans and they are exactly the same; moreover, the cost is lower in the 11g env than in the 10g env, but the response time is still higher.
    BTW - I'm running the queries directly on the server, so there is no network latency in play here.
    Thanks in advance
    aBBy.

    11g Env:
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    10g Env:
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    The query used is:
    explain plan for
    select
    NCP_DETAIL_ID ,
    NCP_ID ,
    STATUS_ID ,
    FIBER_NODE ,
    NODE_DESC ,
    GL ,
    FTA_ID ,
    OLD_BUS_ID ,
    VIRTUAL_NODE_IND ,
    SERVICE_DELIVERY_TYPE ,
    HHP_AUDIT_QTY ,
    COMMUNITY_SERVED ,
    CMTS_CARD_ID ,
    OPTICAL_TRANSMITTER ,
    OPTICAL_RECEIVER ,
    LASER_GROUP_ID ,
    UNIT_ID ,
    DS_SLOT ,
    DOWNSTREAM_PORT_ID ,
    DS_PORT_OR_MOD_RF_CHAN ,
    DOWNSTREAM_FREQ ,
    DOWNSTREAM_MODULATION ,
    UPSTREAM_PORT_ID ,
    UPSTREAM_PORT ,
    UPSTREAM_FREQ ,
    UPSTREAM_MODULATION ,
    UPSTREAM_WIDTH ,
    UPSTREAM_LOGICAL_PORT ,
    UPSTREAM_PHYSICAL_PORT ,
    NCP_DETAIL_COMMENTS ,
    ROW_CHANGE_IND ,
    STATUS_DATE ,
    STATUS_USER ,
    MODEM_COUNT ,
    NODE_ID ,
    NODE_FIELD_ID ,
    CREATE_USER ,
    CREATE_DT ,
    LAST_CHANGE_USER ,
    LAST_CHANGE_DT ,
    UNIT_ID_IP ,
    US_SLOT ,
    MOD_RF_CHAN_ID ,
    DOWNSTREAM_LOGICAL_PORT ,
    STATE
    from markethealth.NCP_DETAIL_TAB
    WHERE UNIT_ID = :B1
    ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
    This is the query used for Query 1.
    The stats differences are:
    1. The row count differs by approx. 90K - more rows in the 10g env.
    2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
    3. Gather stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g.

  • Query Tuning - Response time Statistics collection

    Our application is load tested for a period of 1 hour at peak load.
    During this specific period, say thousands of queries get executed in the database.
    What we need is, for one particular query "select XYZ from ABC" within this 1-hour span, statistics like:
    Number of times executed
    Average response time
    Maximum response time
    Minimum response time
    90th percentile response time (sorted in ascending order, the value at the 90th percentile)
    All these statistics are possible if I can get all the response times for that particular query for that 1-hour period.
    I tried using SQL trace and TKPROF but was unable to get all of these statistics.
    The application uses connection pooling, so connections are taken as and when needed.
    Any thoughts on this?
    Appreciate your help.

    I don't think v$sqlarea can help me out with the exact stats I need, but it certainly has a lot of other stats. By the way, there is no dictionary view called v$sqlstats.
    There are other applications sharing the same database in which I am trying to capture stats for my application, so flushing a cache which currently has 30K rows is not a feasible solution.
    Any more thoughts on this?
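    Once per-execution response times have been captured on the application side (e.g. by timing each query around the JDBC call), the requested statistics are easy to compute offline. A minimal sketch in plain Java, using the nearest-rank method for the 90th percentile; the sample values are invented:

```java
import java.util.Arrays;

public class ResponseStats {
    // Nearest-rank percentile: the value at position ceil(p * n)
    // (1-based) in the ascending-sorted array.
    public static double percentile(double[] sorted, double p) {
        int rank = (int) Math.ceil(p * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        double[] ms = {12, 15, 9, 30, 22, 18, 11, 45, 14, 16}; // sample response times (ms)
        Arrays.sort(ms);
        double sum = Arrays.stream(ms).sum();
        System.out.println("count = " + ms.length);
        System.out.println("min   = " + ms[0]);
        System.out.println("max   = " + ms[ms.length - 1]);
        System.out.println("avg   = " + sum / ms.length);
        System.out.println("p90   = " + percentile(ms, 0.90));
    }
}
```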

  • Significant difference in response times for same query running on Windows client vs database server

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
    In both cases the query plans are the same.
    The query and plan is shown below :
    {code}
    SQL> explain plan
      2  set statement_id = 'SLOW'
      3  for
      4  SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      5  FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
      6  ;
    Explained.
    SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT   |           |  2852K|    46M|       | 69851   (1)|
    |   1 |  HASH UNIQUE       |           |  2852K|    46M|   153M| 69851   (1)|
    |*  2 |   TABLE ACCESS FULL| DOCUMENTS |  2852K|    46M|       | 54063   (1)|
    {code}
    Are there are configuration changes that can be done on the Oracle client or database to improve the response times for the query when it is running from the client?
    The version on the database server is 10.2.0.1.0
    The version of the oracle client is also 10.2.0.1.0
    I am happy to provide any further information if required.
    Thank you in advance.

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
    A client chooses when to FETCH the query results. In SQL Developer (or TOAD) I can choose to get 10 rows at a time. Until I ask for the next set of 10 rows, no further rows are returned from the server to the client; that query might never complete.
    You may be seeing the same effect, depending on the client you are using. Post your question in a forum for whatever client that is.
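    The fetch behaviour described above is why client settings matter so much: each batch of rows costs roughly one network round trip, so total round trips ≈ ceil(rows / fetchSize) (in JDBC the batch size is controlled with Statement.setFetchSize; SQL*Plus uses ARRAYSIZE). A rough back-of-the-envelope sketch:

```java
public class FetchCost {
    // Approximate network round trips needed to fetch `rows` rows
    // when the client pulls `fetchSize` rows per round trip.
    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        // Hypothetical numbers, roughly the result-set size in the plan above.
        System.out.println(roundTrips(2_852_000, 10));   // small fetch size: many trips
        System.out.println(roundTrips(2_852_000, 500));  // larger fetch size: far fewer
    }
}
```

    On a high-latency client link (e.g. a remote Windows workstation), the difference between hundreds of thousands of round trips and a few thousand can easily account for minutes versus seconds.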

  • Webservice response times --How can we improve ?

    Hi All,
    I am making two different calls to a function module from Java:
    1. web service   2. JCo
    When I go to the STAD transaction I can see that the web service response times are higher compared to JCo.
    What is interesting here are the CPU timings and DB timings.
    In some cases the DB timings for the web service are 3 to 4 times higher than for JCo.
    Ideally the DB timings should be similar in both cases, right?
    The CPU timings are also higher for the web service. Why? How do we optimize this for good web service performance?
    My web service is a simple one containing 4 input parameters (simple types) and returning a simple structure.
    JCo response time is around 500-2000 ms.
    Web service response time is 2000-5000 ms.
    Looking for expert suggestions from our community.
    Thanks in advance.
    Best Regards, Anilkumar

    Hi,
    JCo is a native binary RFC or FastRFC and goes through the Gateway.
    Web services are text-oriented, with more overhead and, overall, lower performance; they go through the ICM.
    E.g. RFC connections are ca. 10 times faster than web services!

  • High Response Times with 50 content items in a Publisher 6.5 portlet

    Folks,
    I have set up a load test, running with a single user, in which new News Article content items are inserted into a Publisher 6.5 portlet created from the News Portlet template. Inserts have good response times through the first 25 or so content items in the portlet. Then response times grow linearly, until it takes ten minutes to insert a content item when there are already 160 content items.
    This is a test system that is experiencing no other problems. There are no other users on the system, only the single test user in LoadRunner, inserting one content item at a time. The actual size of each content item is tiny. Memory usage in the Publisher JVM (as seen on the Diagnostics page) does not vary from 87% used with 13% free. So I asked for a DB trace, to determine if there were long-running queries. I can provide it on request; it zips to less than 700k.
    Have seldom seen this kind of linear scalability!
    Looked at the trace through SQL Server Profiler. There are several items running for more than one second, the Audit Logout EventClass repeatedly occurs with long durations (ten minutes and more). The users are publisher user, workflow user, an NT user and one DatabaseMail transaction taking 286173 ms.
    In most cases there is no TextData, and the ApplicationName is i-net opta 2000 (which looks like a JDBC driver) in the longest-running cases.
    Nevertheless, for the short running queries, there are many (hundreds) of calls to exec sp_execute and IF @@TRANCOUNT > 0 ROLLBACK TRAN. This is most of what fills the log. This is strange because only a few records were actually inserted successfully, during the course of the test. I see numerous calls to sp_prepexec related to the main table in question, PCSCONTENTITEMS, but very short duration (no apparent problems) on the execution of the stored procedures. Completed usually within 20ms.
    I am unable to tell if a session has an active request but is being blocked, or is blocking others. Can anyone with SQL Server DBA knowledge help me interpret these results?
    Thanks !!!
    Robert

    Hmmm... is this the OOTB news portlet? Does it keep all content items in one Publisher folder? If so, it is probably trying to re-publish the entire folder for every content item and choking on multiple republish executes. I don't think that OOTB portlet was meant to cover a use case of inserting multiple content items so quickly; by definition, newsworthy items shouldn't need bulk inserts. Is there another way to insert all of the items using Publisher admin and then do one publish for all of them?
    I know from past migration efforts, when I've written utilities to migrate from legacy systems to Publisher, that the inserts and saves for each item took a couple of seconds each. The publishing was done at the end and took quite a long time.

  • OSB - Service Invocation instance response times

    Hi,
    In my research and discussions with the OSB vendor team, I found there is no product feature to gather statistics on per-invocation response times for an OSB service.
    My requirement is to gather the per-invocation response time of a service. I am contemplating a few ways of doing this:
    1. Java callouts before the start and at the end of the service.
    The downside of this approach is that in my composite service (composing 10 biz services) with challenging response time requirements, wrapping each biz service with Java callouts for measurement might be an overhead. Any thoughts?
    2. There is a report feature in OSB. How about using SNMP traps for reporting the starts and ends? I am wondering if this is any better than Java callouts, which might be synchronous I/O operations.
    Do you folks see alternate approaches?
    TIA

    "I think that generally it's not a good idea to modify production logic (code or configuration) to gather statistics. It may look simple, but there is still the possibility of an unexpected failure that would cause a failure of your service. Not to mention the complexity of such a step."
    I totally agree.
    "This kind of data should be gathered from your infrastructure components. I know that OSB doesn't provide such a feature, but if you have your services published over HTTP, then you can always use some kind of proxy server. In our company, we use the feature-rich Apache HTTP server for many reasons; response time logging is one of them."
    Interesting. Thanks. This approach might help gather stats on the proxy services. However, the biz services composed inside the proxy may not get the stats.
    "Another possibility is to use a specialized component. I think that OWSM could be useful. However, I don't have any experience with it and it could be overkill considering your needs. http://www.oracle.com/technology/products/webservices_manager/index.html"
    We are looking into OWSM; as you rightly said, we wanted to keep it simple without OWSM.
    Thanks

  • High RFC response time in SAP BW system

    Hello all,
    How do I analyze and fine-tune high RFC response time in a SAP BW system?
    Regards,
    Archana

    Hi,
    Kindly check the following:
    1. Are the RFC connections correctly configured? You can execute the program "RSRFCTRC" to get a full log of the RFC connection details.
    2. Are the BW queries properly optimized? Are there any network issues?
    3. At which time of day are you facing the high RFC response times?
    4. Kindly refer to SCN & the SAP Notes below for overall system performance:
    Short Notes on PERFORMANCE MONITORING - ABAP Development - SCN Wiki
    1063061 - Information about response time in STAD/ST03
    948066 - Performance Analysis: Transactions to use
    Regards
    Sriram

  • Get alerted after 5 metric breaches for Response Time (msec) metric

    Hi Guys,
    Can I edit the Response Time (msec) metric under the Listener Availability notification rule to alert only if the response time breaches the alerting threshold for 5 minutes, i.e. for 5 consecutive occurrences when this metric is polled?
    For example, we can execute dbms_server_alert.set_threshold(.....
    consecutive_occurrences => 5
    for a metric like redo_generated_sec.
    Can we do this for the Response Time (msec) metric under the Listener Availability notification rule?
    I am also not seeing this metric listed under V$METRICNAME.
    Any ideas?
    Regards,
    Swanand

    I have no doubt this is it. I switched to the OID of the timeticks that the system has been up for and tried:
    <CollectionItem NAME="Response">
    <Schedule>
    <IntervalSchedule INTERVAL="5" TIME_UNIT="Min"/>
    </Schedule>
    <Condition COLUMN_NAME="Status" CRITICAL="1" OPERATOR="LT"/>
    </CollectionItem>
    The corresponding snmpwalk retrieves:
    -bash-3.00# ./snmpwalk -Os -v1 -c public 100.100.100.100 .1.3.6.1.2.1.1.3
    sysUpTimeInstance = Timeticks: (26302086) 3 days, 1:03:40.86
    Now I get a clock once in a while (it looks like it's collecting), but then it reverts to down. I assume that this structure is basically:
    RetrievedValueForStatus OPERATOR CRITICAL triggers a CRITICAL alert, and I could add another for WARNING.
    And then I'm making the assumption that "Running" is the state when neither CRITICAL nor WARNING is triggered?
    I noticed Example 2-9 Default Collection File for Simple Server Alpha uses NO condition.
    What happens if the system simply cannot be contacted, is there an assumption that the value is always available in these Conditions? For example, if I return the name of the system in the Snmp Fetchlet and SNMP cannot retrieve ANY values (because the system is down), would the CRITICAL condition be:
    <Condition COLUMN_NAME="Status" CRITICAL="" OPERATOR="EQ"/>
    I will keep going by trial and error.
    Thanks...

  • Experiencing very slow response time using AirPort while wired response is fine. Suggestions?

    After a surfing session a few weeks ago, the response time of my AirPort internet connection became painfully slow. When I plug into the wired connection it works fine. Resetting the router and modem doesn't seem to fix it. What might have happened, and what can I do to fix it?

    It is very possible that you may have some form of Wi-Fi interference that appears during these hours that is preventing your AirPort Base Station from providing a clean RF signal.
    I suggest you perform a simple site survey, using utilities like iStumbler, Wi-Fi Explorer or AirRadar to determine potential areas of interference, and then, try to either eliminate or significantly reduce them where possible.

  • "Windows 8 using 100% of HDD with high average response times and low read/write speed"

    Turns out this is a fairly well-known Windows 8.1 issue that has been plaguing users since at least 2013, and there is no one simple fix, so it may not be *entirely* HP's fault; but I've had two of these laptops, both the same model, the first one needing to be returned and exchanged for an entirely unrelated issue (hardware failure: ethernet port nonfunctional with lights stuck on). Both are refurbished. Both have been extremely slow and unresponsive, even compared to a lesser Brazos-powered laptop I had before, but I've only recently decided to investigate why.
    So if there is something HP-specific going on here, I hope there is one simple fix. My average response time has gone as high as well over a minute (>60,000 ms), so I may be an outlier case compared to the typical Windows 8.1 hard drive responsiveness/bandwidth problem.
    Edit: there is a case with another HP Pavilion laptop (Intel-powered though, so it may be the Intel storage driver issue described in the first link) being much, much worse.
    This question was solved.
    View Solution.

    Guess what just happened again...
    So using DISM did _not_ fix it.

  • Anyway to speed up the response time of E62/E61?

    I bought the E62 and found it is considerably slower than most smartphones, especially when compared with BlackBerry handsets. It really takes a while to open mails or applications/folders.
    Is there any way to improve the response time of the E62?

    You aren't by any chance calling a function in your repeating frame that in turn goes back and queries the database, are you? If so ... don't. We regularly do 500+ page PDF-file reports, and one thing we discovered early on was that repeatedly going back to the database while generating the report output (in our case, in calculations that were being done on each line of a report) slowed the output down by an order of magnitude. Instead, we now retrieve all the data needed for each report up front (via functions or views called in the initial SQL for the report), and just use Reports to format the output. MUUUUUUUCH faster -- 200 page reports that used to take 15 minutes to complete now complete in just seconds.
    One way you can visually see if this is part of your problem is to watch the report execute in the Report Queue Manager application. If it spends all its time on the "Opening" stage then breezes through each page, this is not your problem. If instead it seems to take a long time generating each page, I'd suspect that this may be at least part of your delay.
    - Bill
