Response times to queries

I have a number of unresolved queries with BT that have been ongoing for quite some time. I have recently received a Final bill, as BT seem to no longer want me as a customer, and my emails are being ignored.
I have advised them that I wish to escalate my query as I am not satisfied with the lack of a response; however, there has been no reaction.
Can anyone confirm whether I am wasting my time and should just pay up and shut up?

Hi SuffolChap,
I would like to take a look at the details of your complaint. Please could you send me your details using the link found in the "About me" section of my profile?
Thanks
Paddy
BTCare Community Mod
If we have asked you to email us with your details, please make sure you are logged in to the forum, otherwise you will not be able to see our ‘Contact Us’ link within our profiles.
We are sorry, but we are unable to deal with service/account queries via the private message (PM) function, so please don't PM your account info; we need to deal with this via our email account :-)

Similar Messages

  • Faster response time of queries

    I have a query which joins a few tables with several thousand rows each. This query normally returns tens of thousands of rows, and the response time is almost 10 minutes, which is not acceptable for a web application.
    To speed it up I just want Oracle to return only the first, let's say, 1000 rows.
    Changing the max rows returned parameter (APEX) to 1000 doesn't help at all. It seems like the query executes in full and then only the first 1000 rows of the result set are sent.
    So my question is: is there a way to instruct Oracle to stop execution of the query once the first n rows are retrieved?
    I tried SELECT /* FIRST_ROWS(1000) */ .... but this doesn't help, and I wonder how it could, when it seems that TOAD treats this as a comment and doesn't change the optimizer mode - still ALL_ROWS.
    What am I doing wrong here? This is the first time I am trying to use the FIRST_ROWS hint - is there another, better way to speed up my query?

    Hi Bob, thanks for the response. rownum < n was the first thing I tried. One would think that if a query takes 5 minutes to execute and returns 50,000 rows, then after adding rownum < 5000 it shouldn't take more than a minute - well, it takes pretty much the same time as without rownum < n. It seems like rownum is determined for the whole result set and then the WHERE condition is applied.
    The tables actually have much more than a few thousand rows: one has close to 250,000 and a couple of other tables have over a million, and I don't see much that I can optimize. Being able to return just the first n rows fast must be a fairly common situation for web applications dealing with large tables/views.
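    For reference, the likely culprit is the missing "+" in the hint syntax: /*+ FIRST_ROWS(1000) */ is a hint, while /* FIRST_ROWS(1000) */ is just a comment, which matches what TOAD reported about the optimizer mode. A minimal sketch of both variants, with hypothetical view and column names:
    {code}
    -- The '+' immediately after '/*' is what makes this a hint, not a comment.
    -- Without an ORDER BY, the ROWNUM predicate lets Oracle stop execution early:
    SELECT /*+ FIRST_ROWS(1000) */ *
      FROM my_big_view
     WHERE ROWNUM <= 1000;

    -- If the result must be sorted, apply ROWNUM outside the ORDER BY:
    SELECT *
      FROM (SELECT * FROM my_big_view ORDER BY some_col)
     WHERE ROWNUM <= 1000;
    {code}
    Note that when the query contains a sort, or a join that must complete before the first row can be produced, Oracle still has to do that work before returning anything, which would explain why rownum < n made little difference here.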

  • How to improve response time of queries?

    Although it looks as though this question relates to Reports, the problem is actually related to SQL.
    I have a Catalogue-type report which retrieves data and prints it. There are not many calculations involved.
    It uses six tables.
    The data model is such that I have a separate query for each table, and all
    these tables are linked through the link tool.
    Each table contains 3,000 to 9,000 rows, but one table contains 35,000 rows.
    The problem is that the report is taking too much time - about 3-1/2 hours - while
    the expectation is that it should take 20 to 40 minutes max.
    What can I do to reduce the time it takes to produce the report?
    I mean, should I modify the data model to make a single query with an equi-join?
    A) Specifically, I want to know what the traffic between client and server is when
    1) we have multiple queries and the LINK tool is used
    2) a single query with an equi-join is used
    B) Which activity is taking most of the time?
    1) Retrieving data from the server to the client
    2) Sorting according to groups (at the client), formatting the data, and saving to file
    Please guide.
    Everybody is requested to contribute as per his/her experience and knowledge.
    M Tariq

    Generally speaking, your server is faster than your PC (if it is not, then you have bigger problems than a slow query), so let the server do as much of the work as possible, particularly things like sorting and grouping. Any calculations that can be pushed off onto the server (e.g. aggregate functions, cola + colb, etc.) should be.
    A single query will almost always be faster than multiple linked queries. Let the server do the joining.
    The more rows you return from the server to your PC, the more bytes the network and your PC have to deal with. Network traffic is "expensive", so get the result set as small as possible on the server before sending it back over the network. PCs generally have little RAM and slow disks compared to servers. Large datasets cause swapping on the PC, and this is really expensive.
    Unless you are running on a terribly underpowered server, I think even 30 - 40 minutes would be slow for the situation you describe.
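    To make the single-query suggestion concrete, here is a minimal sketch with hypothetical table and column names standing in for the report's six linked queries:
    {code}
    -- One server-side equi-join replaces the linked queries, so joining and
    -- sorting happen on the server and only the final rows cross the network
    SELECT m.item_no, m.descr, d.qty, s.supplier_name
      FROM master_tab m, detail_tab d, supplier_tab s
     WHERE d.item_no     = m.item_no
       AND s.supplier_id = d.supplier_id
     ORDER BY m.item_no;
    {code}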

  • Explain plan - lower cost but higher response time in 11g compared to 10g

    Hello,
    I have a strange scenario where I'm migrating a db from a standalone Sun FS running the 10g RDBMS to a 2-node Sun/ASM 11g RAC environment. The issue is with the response time of queries -
    In 11g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANALYZED NUM_ROWS
    11-08-2012 18:21:12 3413956
    Elapsed: 00:00:00.30
    In 10g Env:
    SQL> select last_analyzed, num_rows from dba_tables where owner='MARKETHEALTH' and table_name='NCP_DETAIL_TAB';
    LAST_ANAL NUM_ROWS
    07-NOV-12 3502160
    Elapsed: 00:00:00.04
    If you look at the response times, even a simple query on dba_tables takes ~8 times as long. Any ideas what might be causing this? I have compared the XPlans and they are exactly the same; moreover, the cost is lower in the 11g env compared to the 10g env, but the response time is still higher.
    BTW - I'm running the queries directly on the server, so there is no network latency in play here.
    Thanks in advance
    aBBy.

    *11g Env:*
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1104 | 376K| 394 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1104 | 376K| 393 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1136 | | 15 (0)| 00:00:01 |
    PLAN_TABLE_OUTPUT
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    *10g Env:*
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 4147636274
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 1 | SORT ORDER BY | | 1137 | 373K| 389 (1)| 00:00:05 |
    | 2 | TABLE ACCESS BY INDEX ROWID| NCP_DETAIL_TAB | 1137 | 373K| 388 (1)| 00:00:05 |
    |* 3 | INDEX RANGE SCAN | IDX_NCP_DET_TAB_US | 1137 | | 15 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    3 - access("UNIT_ID"='ten03.burien.wa.seattle.comcast.net')
    15 rows selected.
    The query used is:
    explain plan for
    select
    NCP_DETAIL_ID ,
    NCP_ID ,
    STATUS_ID ,
    FIBER_NODE ,
    NODE_DESC ,
    GL ,
    FTA_ID ,
    OLD_BUS_ID ,
    VIRTUAL_NODE_IND ,
    SERVICE_DELIVERY_TYPE ,
    HHP_AUDIT_QTY ,
    COMMUNITY_SERVED ,
    CMTS_CARD_ID ,
    OPTICAL_TRANSMITTER ,
    OPTICAL_RECEIVER ,
    LASER_GROUP_ID ,
    UNIT_ID ,
    DS_SLOT ,
    DOWNSTREAM_PORT_ID ,
    DS_PORT_OR_MOD_RF_CHAN ,
    DOWNSTREAM_FREQ ,
    DOWNSTREAM_MODULATION ,
    UPSTREAM_PORT_ID ,
    UPSTREAM_PORT ,
    UPSTREAM_FREQ ,
    UPSTREAM_MODULATION ,
    UPSTREAM_WIDTH ,
    UPSTREAM_LOGICAL_PORT ,
    UPSTREAM_PHYSICAL_PORT ,
    NCP_DETAIL_COMMENTS ,
    ROW_CHANGE_IND ,
    STATUS_DATE ,
    STATUS_USER ,
    MODEM_COUNT ,
    NODE_ID ,
    NODE_FIELD_ID ,
    CREATE_USER ,
    CREATE_DT ,
    LAST_CHANGE_USER ,
    LAST_CHANGE_DT ,
    UNIT_ID_IP ,
    US_SLOT ,
    MOD_RF_CHAN_ID ,
    DOWNSTREAM_LOGICAL_PORT ,
    STATE
    from markethealth.NCP_DETAIL_TAB
    WHERE UNIT_ID = :B1
    ORDER BY UNIT_ID, DS_SLOT, DS_PORT_OR_MOD_RF_CHAN, FIBER_NODE
    This is the query used for Query 1.
    The stats differences are:
    1. The row count differs by approx 90K (more rows in the 10g env).
    2. The RAC env has 4 additional columns (excluded from the select statement for analysis purposes).
    3. Gather Stats was performed with estimate_percent = 20 in 10g and estimate_percent = 50 in 11g.
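    Given difference 3, one thing worth ruling out is the statistics themselves: regathering with identical settings on both environments makes the optimizer inputs like-for-like. A minimal sketch (the sampling percentage is illustrative):
    {code}
    -- Gather statistics the same way on both systems before re-comparing timings
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'MARKETHEALTH',
        tabname          => 'NCP_DETAIL_TAB',
        estimate_percent => 50,    -- use the same value in both environments
        cascade          => TRUE); -- include the indexes
    END;
    /
    {code}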

  • Improve the response time of my queries

    Hi, I have a DB2 database on an AS400, and I would like to improve the response time of my queries because one table has more than one million records.
    I select a subset of the records (select * from sergt where wbtruc='A735'), but to find the rows where wbtruc='A735' it reads the whole table, i.e. more than one million records.
    Is it possible for it to read only part of the table and not the whole table,
    or is there another solution to improve the response time?
    Thanks in advance.

    Uh, oh, a temporary index for 1 million records sounds like an OutOfMemoryError :)
    If this query is going to be run regularly, then an access path should be created on the AS400 to support it. An access path over 1 million records isn't a big deal there; the largest file we have is about 120 million records with two access paths, and I'm sure there are much, much bigger files in the AS400 world.
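    For reference, a permanent access path on the filter column would look something like the sketch below (the index name is hypothetical):
    {code}
    -- With an index on WBTRUC, DB2 can locate the matching rows directly
    -- instead of scanning all million-plus records
    CREATE INDEX SERGT_WBTRUC_IX ON SERGT (WBTRUC);
    {code}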

  • Query Tuning - Response time Statistics collection

    Our application is load-tested for a period of 1 hour at peak load.
    During this specific period, say, thousands of queries get executed in the database.
    What we need is, for one particular query "select XYZ from ABC" within this 1-hour span, statistics like
    Number of times executed
    Average response time
    Maximum response time
    Minimum response time
    90th percentile response time (sorted ascending, the value at the 90th percentile)
    All these statistics would be possible if I could get all the response times for that particular query over that 1-hour period.
    I tried using SQL trace and TKPROF but was unable to get all of these statistics.
    The application uses connection pooling, so connections are taken as and when needed.
    Any thoughts on this?
    Appreciate your help.

    I don't think v$sqlarea can help me out with the exact stats I need, but it certainly has a lot of other stats to offer. BTW, there is no dictionary view called v$sqlstats on my version.
    There are other applications sharing the same database I am trying to capture for my application, so flushing the cache, which currently has 30K rows, is not a feasible solution.
    Any more thoughts on this?
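    For what the cumulative views can offer, a sketch along these lines yields the execution count and average response time (the LIKE pattern is a placeholder for the query of interest):
    {code}
    -- Averages only: v$sqlarea is cumulative since the cursor was loaded,
    -- so min/max/percentiles are not recoverable from it
    SELECT sql_text,
           executions,
           ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 3) AS avg_response_sec
      FROM v$sqlarea
     WHERE sql_text LIKE 'select XYZ from ABC%';
    {code}
    For the minimum, maximum and 90th-percentile figures, per-execution timings are needed, which generally means tracing the sessions (as attempted with TKPROF) and aggregating the individual elapsed times from the raw trace files.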

  • Significant difference in response times for same query running on Windows client vs database server

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
    In both cases the query plans are the same.
    The query and plan are shown below:
    {code}
    SQL> explain plan
      2  set statement_id = 'SLOW'
      3  for
      4  SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      5  FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
      6  ;
    Explained.
    SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT   |           |  2852K|    46M|       | 69851   (1)|
    |   1 |  HASH UNIQUE       |           |  2852K|    46M|   153M| 69851   (1)|
    |*  2 |   TABLE ACCESS FULL| DOCUMENTS |  2852K|    46M|       | 54063   (1)|
    {code}
    Are there any configuration changes that can be made on the Oracle client or database to improve the response times for the query when it is run from the client?
    The version on the database server is 10.2.0.1.0.
    The version of the Oracle client is also 10.2.0.1.0.
    I am happy to provide any further information if required.
    Thank you in advance.

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
    A client can choose when to FETCH query results. In SQL Developer (or TOAD) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows, NO rows will be returned from the server to the client; that query might NEVER complete.
    You may be seeing exactly this effect, depending on the client you are using. Post your question in a forum for whatever client you are using.
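    If the gap really is in fetching rather than execution, fetch size and screen rendering are the usual suspects. A SQL*Plus sketch that separates the two (the arraysize value is illustrative):
    {code}
    SET TIMING ON
    SET ARRAYSIZE 500   -- fetch 500 rows per round trip instead of the default 15
    SET TERMOUT OFF     -- suppress screen rendering (effective when run from a script)
    SPOOL results.txt
    -- run the query here
    SPOOL OFF
    SET TERMOUT ON
    {code}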

  • High Response Times with 50 content items in a Publisher 6.5 portlet

    Folks,
    Have set up a load test, running with a single user, in which new News Article content items are inserted into a Publisher 6.5 portlet created from the News Portlet template. Inserts have good response times through the first 25 or so content items in the portlet. Then response times grow linearly, until it takes ten minutes to insert a content item once there are 160 content items already.
    This is a test system that is experiencing no other problems. There are no other users on the system, only the single test user in LoadRunner, inserting one content item at a time. The actual size of each content item is tiny. Memory usage in the Publisher JVM (as seen on the Diagnostics page) does not vary from 87% used with 13% free. So I asked for a DB trace, to determine if there were long-running queries. I can provide this on request; it zips to less than 700k.
    Have seldom seen this kind of linear scalability!
    Looked at the trace through SQL Server Profiler. There are several items running for more than one second, the Audit Logout EventClass repeatedly occurs with long durations (ten minutes and more). The users are publisher user, workflow user, an NT user and one DatabaseMail transaction taking 286173 ms.
    In most cases there is no TextData, and the ApplicationName is i-net opta 2000 (which looks like a JDBC driver) in the longest-running cases.
    Nevertheless, for the short running queries, there are many (hundreds) of calls to exec sp_execute and IF @@TRANCOUNT > 0 ROLLBACK TRAN. This is most of what fills the log. This is strange because only a few records were actually inserted successfully, during the course of the test. I see numerous calls to sp_prepexec related to the main table in question, PCSCONTENTITEMS, but very short duration (no apparent problems) on the execution of the stored procedures. Completed usually within 20ms.
    I am unable to tell if a session has an active request but is being blocked, or is blocking others... can anyone with SQL Server DBA knowledge help me interpret these results?
    Thanks !!!
    Robert

    Hmmm... is this the OOTB news portlet? Does it keep all content items in one Publisher folder? If so, then it is probably trying to re-publish that entire folder for every content item and choking on multiple republish executes. I don't think that OOTB portlet was meant to cover a use case of many content item inserts in quick succession; by definition, newsworthy stuff should not need bulk inserts. Is there another way to insert all of the items using Publisher admin and then do one publish for all?
    I know in past migration efforts, when I've written utilities to migrate from legacy to Publisher, the inserts and saves for each item took a couple of seconds each. The publishing was done at the end and took quite a long time.
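    On the blocking question raised above: assuming a SQL Server 2005 or later instance, the dynamic management views can show whether a session is currently blocked and by whom. A minimal sketch:
    {code}
    -- Sessions whose current request is waiting on another session
    SELECT r.session_id,
           r.blocking_session_id,  -- the session holding the contested resource
           r.wait_type,
           r.wait_time,            -- milliseconds spent on the current wait
           r.status
      FROM sys.dm_exec_requests r
     WHERE r.blocking_session_id <> 0;
    {code}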

  • High RFC response time in SAP BW system

    Hello all,
    How do I analyze and fine-tune high RFC response time in an SAP BW system?
    Regards,
    Archana

    Hi
    Kindly check the following:
    1. Are the RFC connections correctly configured? You can execute the program "RSRFCTRC" and get a full log of the RFC connection details.
    2. Are the BW queries properly optimized? Are there any network issues?
    3. At which time of day are you facing the high RFC response times?
    4. Kindly refer to the SCN wiki and SAP Notes below for overall system performance:
    Short Notes on PERFORMANCE MONITORING - ABAP Development - SCN Wiki
    1063061 - Information about response time in STAD/ST03
    948066 - Performance Analysis: Transactions to use
    Regards
    Sriram

  • Any way to speed up the response time of the E62/E61?

    I bought the E62 and found it is considerably slower than most smartphones, especially when compared with BlackBerry handsets. It really takes a while to open mails or applications/folders.
    Is there any way we can improve the response time of the E62?

    You aren't by any chance calling a function in your repeating frame that in turn goes back and queries the database, are you? If so ... don't. We regularly do 500+ page PDF-file reports, and one thing we discovered early on was that repeatedly going back to the database while generating the report output (in our case, in calculations that were being done on each line of a report) slowed the output down by an order of magnitude. Instead, we now retrieve all the data needed for each report up front (via functions or views called in the initial SQL for the report), and just use Reports to format the output. MUUUUUUUCH faster -- 200 page reports that used to take 15 minutes to complete now complete in just seconds.
    One way you can visually see if this is part of your problem is to watch the report execute in the Report Queue Manager application. If it spends all its time on the "Opening" stage then breezes through each page, this is not your problem. If instead it seems to take a long time generating each page, I'd suspect that this may be at least part of your delay.
    - Bill
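    For reference, a minimal sketch of Bill's approach, with hypothetical names: compute per-line values in the report's initial query so the database is visited once, not once per output line.
    {code}
    -- Was: a formula column calling a DB function for every report line.
    -- Now: computed once in the initial SQL, so Reports only formats rows.
    SELECT o.order_no,
           o.qty * o.unit_price AS line_total
      FROM order_lines o
     ORDER BY o.order_no;
    {code}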

  • High Response time

    Hi All ,
    In my SANDBOX, transaction code execution is OK. But after that, during any customization or table viewing, it uses much more response time, and sometimes the session even gets terminated. How do I fix this problem? Can anybody help me?
    Regards
    Asad

    Hi,
    I would look into the following areas in ST03N, which would give you an idea of which transactions are taking more time and where they are struggling: DB time, CPU time, number of DB reads, and GUI time.
    Also check whether your system has sufficient resources in terms of memory, CPU and paging files (swap size). Check the paging-out/paging rates, as these would indicate problem areas.
    You can run a trace as mentioned earlier, and check the ST22 dumps and the SM21 system log for what the error messages indicate. This should make it clear what the core issues are, and if you have any queries, please post them here so that we can check them for you.
    Cheers Sam

  • Slow response time on query from one schema to another

    I've got two separate databases (one test, the other production). The schemas are pretty much identical and contain the same data. This particular query is executed from .NET (SQL text command, not an SP) and takes about 3 seconds against my test db, but anywhere from 45 seconds to 5 minutes on the production system. I've executed the same statement using AquaData Studio and get the same response times.
    I didn't build the queries or the schemas, nor do I manage the database servers, but I am responsible for making this work now. These queries have worked in the past, but have recently been changed. I am able to speed this one up with some SQL changes, but the others I'm not sure about. I've checked, and the production db has indexes on the fields I'm querying while the test db does not; I would expect test to be slower, but it's not.
    Any ideas that I could try? On Monday I'll ask our server admins to check out the server end of it and see if they need to reboot or something. In the meantime, I'm going to work on revamping the SQL.
    Any ideas you can provide would be much appreciated.
    Sincerely,
    Zam

    When your query takes too long ...
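    If this is an Oracle database (the linked checklist suggests so), a common first step is capturing and comparing the execution plans on both systems, since an index used in one environment but not the other shows up immediately. A sketch, with a hypothetical statement standing in for the slow query:
    {code}
    EXPLAIN PLAN FOR
      SELECT * FROM orders WHERE customer_id = :id;  -- hypothetical slow statement
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    {code}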

  • How to get query response time from ST03 via a script?

    Hello People,
    I am trying to get the average query response time for BW queries with a script (for monitoring/history tracking).
    I know that this data can be found manually in ST03N in the 'BI workload' section.
    However, I don't know how to get this stat from a script.
    My idea is to run a SQL query to get this information, here is the state of my query :
    select count(*) from sapbw.rsddstat_olap
    where calday = 20140401
    and (eventid = 3100 or eventid = 3010)
    and steptp = 'BEX3'
    The problem is that this query is not returning the same number of navigations as the number shown in ST03N.
    Can you help me set the correct filters to get the same number of navigations as in ST03N?
    Regards.

    Hi Experts,
    Do you have any ideas for this SQL query?
    Regards.

  • How to get query response time?

    I am on BI 7.0. I ran some queries using transaction RSRT. I want to find out how much time the queries took.
    I went to
    ST03 -> expert mode -> BI system load -> select today/week/month according to the query runtime day.
    I do not see any InfoProviders. The query was on a cube, so why are there no InfoProviders?
    Does something have to be turned on for the InfoProvider to show?
    When I look in the RSDDSTAT_OLAP table, I do see many rows but cannot make sense of them. Is there any documentation on how to get the total query time from this table?
    Is there any other way to get the query response time?
    Thanks a lot.

    Hi,
    Why not use RSRT? You can add the database statistics option in "Execute + Debug" and get all the runtime metrics of your query.
    In transaction RSRT, enter the query name and press 'Execute + Debug'.
    Select 'Display Statistics Data'.
    After execution, the query will return a list of the measured metrics.
    The event id / text describes the steps (duration in seconds):
    "OLAP: Read data" gives the SQL statement's response time (OK, because the SAP
    application server acts as an Oracle client, a little network traffic from the db server is included,
    but as long as you are not transferring zillions of rows it can be ignored).
    But it gives you much more (e.g. whether the OLAP cache gets used or not)...
    In the "Aggregate statistics" you get all the InfoProviders involved in that query.
    bye
    yk

  • Response time much higher on migrated AIX database than on Windows database

    We migrated a 9.2.0.6 database from Windows to AIX (same Oracle version). Initially we were getting really slow response on direct SQLs and batch SQLs from the application server. Then we tuned the AIX database a bit and got a better response time for direct SQLs on the AIX database.
    Now queries run faster on the AIX database than on the Windows db.
    However, the queries from the application (the ACCURATE application) are still twice as slow on the AIX database compared to the Windows database.
    Can anyone suggest a possible cause or tuning area?
    We have checked on network level and found no bottlenecks.
    Regards,
    Garima

    user650817 wrote:
    We migrated a 9.2.0.6 database from Windows to AIX (same Oracle version). Initially we were getting really slow response on direct SQLs and batch SQLs from the application server. Then we tuned the AIX database a bit and got a better response time for direct SQLs on the AIX database.
    Now queries run faster on the AIX database than on the Windows db.
    However, the queries from the application (the ACCURATE application) are still twice as slow on the AIX database compared to the Windows database.
    Can anyone suggest a possible cause or tuning area?
    Garima,
    if you have particular queries that you know of that are behaving differently, you could start right away by comparing the execution plans and tracing the execution using the SQL trace facility and the "tkprof" trace file analyzer.
    If you want to get an overview of the database performance, you should consider installing STATSPACK on both systems if you haven't done so yet and create snapshots to compare statspack reports from both systems for a representative time period. This way you can gather information about the top wait events and top SQLs of the periods analyzed.
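    For reference, taking comparable STATSPACK snapshots on both systems is straightforward; a minimal sketch, run as the PERFSTAT user with the standard report script:
    {code}
    EXEC STATSPACK.SNAP           -- snapshot at the start of the period
    -- ... run the representative workload ...
    EXEC STATSPACK.SNAP           -- snapshot at the end of the period
    @?/rdbms/admin/spreport.sql   -- report between the two snapshot ids
    {code}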
    By the way, how did you "migrate" the database? Did you use export/import to migrate? What about the statistics for the cost based optimizer, could these be different between the two systems or are you still using the rule based optimizer?
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
