Query on v$flash_recovery_area_usage slow (high I/O)
Hi,
When I query the view v$flash_recovery_area_usage it takes several seconds and causes high disk I/O, and it is getting worse over time. I'm running 10.2 on Windows 2003 x64. The Flash Recovery Area is on a RAID 5 SCSI disk subsystem and also contains the last autobackup and the last full backup. I understand that the amount of flashback logs is related to the amount of change in the database.
1) Flashback logs take up 22GB at the moment, and I suspect that the larger they grow, the slower the query on v$flash_recovery_area_usage gets. Is this correct?
2) Flashback retention is set to 24 hours. The database is small, about 4GB. In OEM, Flashback Time showed 02-Jul-2007 17:48:50 when I posted this (the last-modified dates on the files also go back to this date). Should this not show a time/date of roughly now minus 24 hours?
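If it helps cross-check what OEM reports, the flashback window and log footprint can be queried directly (a sketch for 10.2, using the documented columns of V$FLASHBACK_DATABASE_LOG and V$FLASH_RECOVERY_AREA_USAGE):

```sql
-- Oldest time you can currently flash back to, the retention target
-- (in minutes), and the flashback log footprint in MB.
SELECT oldest_flashback_time,
       retention_target,
       flashback_size / 1024 / 1024 AS flashback_mb
FROM   v$flashback_database_log;

-- Space usage per file type in the flash recovery area.
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;
```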
Thanks
Hi,
Go into the backup management page where the Crosscheck All, Delete Expired, and Delete Obsolete RMAN buttons are located. Do a Crosscheck All: this updates the control file and marks as expired all files outside the retention period. Next click Delete Expired, which deletes those records from the control file; then do a Delete Obsolete, which should delete the obsolete files from disk. Try this through the OEM tool; if you are not satisfied, go into the RMAN client and run these at the command prompt.
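At the RMAN prompt, the sequence behind those OEM buttons looks roughly like this (standard RMAN commands; run them connected to the target database, and check your retention policy settings first):

```sql
-- RMAN commands, run from the RMAN client connected AS TARGET.
CROSSCHECK BACKUP;          -- mark backup pieces missing on disk as EXPIRED
CROSSCHECK ARCHIVELOG ALL;  -- do the same for archived redo logs
DELETE EXPIRED BACKUP;      -- remove expired records from the control file
DELETE OBSOLETE;            -- delete backups outside the retention policy from disk
```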
hope this helps.
regards,
al
Similar Messages
-
Querying the TopLink cache under high load
We've had some interesting experiences with "querying" the TopLink Cache lately.
It was recently discovered that our "read a single object" method was incorrectly
setting query.checkCacheThenDB() for all ReadObjectQueries. This was brought to light
when we upgraded our production servers from 4 cores to 8. We immediately started
experiencing very long response times under load.
We traced this down to the following stack: (TopLink version 10.1.3.1.0)
at java.lang.Object.wait(Native Method)
- waiting on <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
at java.lang.Object.wait(Object.java:474)
at oracle.toplink.internal.helper.ConcurrencyManager.acquireReadLock(ConcurrencyManager.java:179)
- locked <0x00002aab08fd26d8> (a oracle.toplink.internal.helper.ConcurrencyManager)
at oracle.toplink.internal.helper.ConcurrencyManager.checkReadLock(ConcurrencyManager.java:167)
at oracle.toplink.internal.identitymaps.CacheKey.checkReadLock(CacheKey.java:122)
at oracle.toplink.internal.identitymaps.IdentityMapKeyEnumeration.nextElement(IdentityMapKeyEnumeration.java:31)
at oracle.toplink.internal.identitymaps.IdentityMapManager.getFromIdentityMap(IdentityMapManager.java:530)
at oracle.toplink.internal.queryframework.ExpressionQueryMechanism.checkCacheForObject(ExpressionQueryMechanism.java:412)
at oracle.toplink.queryframework.ReadObjectQuery.checkEarlyReturnImpl(ReadObjectQuery.java:223)
at oracle.toplink.queryframework.ObjectLevelReadQuery.checkEarlyReturn(ObjectLevelReadQuery.java:504)
at oracle.toplink.queryframework.DatabaseQuery.execute(DatabaseQuery.java:564)
at oracle.toplink.queryframework.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:779)
at oracle.toplink.queryframework.ReadObjectQuery.execute(ReadObjectQuery.java:388)
We moved the query back to the default, query.checkByPrimaryKey() and this issue went away.
The bottleneck seemed to stem from the read lock on the CacheKey from IdentityMapKeyEnumeration:
public Object nextElement() {
    if (this.nextKey == null) {
        throw new NoSuchElementException("IdentityMapKeyEnumeration nextElement");
    }
    // CR#... Must check the read lock to avoid
    // returning half built objects.
    this.nextKey.checkReadLock();
    return this.nextKey;
}
We had many threads that were having contention while searching the cache for a particular query.
From the stack we know that the contention was limited to one class. We've since refactored that code
not to use a query in that code path.
Question:
Armed with this better knowledge of how TopLink queries the cache, we do have a few objects that we
frequently read by something other than the primary key. A natural key, but not the oid.
We have some other caching mechanisms in place (JBoss TreeCache) to help eliminate queries to the DB
for these objects. But the TreeCache also tries to acquire a read lock when accessing the cache.
Presumably a read lock over the network to the cluster.
Is there anything that can be done about the read lock on CacheKey when querying the cache in a high load
situation?

CheckCacheThenDatabase will check the entire cache for a match using a linear search. This can be inefficient if the cache is very large. Typically it is more efficient to access the database if your cache is large and the field you are querying on is indexed in the table.
The cache concurrency was greatly improved in TopLink 11g/EclipseLink, so you may wish to try it out.
Supporting indexes in the TopLink/EclipseLink cache is something desirable (feel free to log the enhancement request on EclipseLink). You can simulate this to some degree using a named query and a query cache.
-- James : http://www.eclipselink.org -
SqlDeveloper Query = Fast, PL/SQL = Slow
I've got a nagging problem that is driving me crazy. Database is 11.1.0.7 and SQLDeveloper is 1.5.1 with the same behavior in 3.0.03.
Often I will develop a complex query in SQLDeveloper and get it tuned to a point where performance is great. However when I take that query and put it in a PL/SQL procedure with dynamic SQL the performance takes a nose dive. This happens when taking a query to an Apex report as well. I use bind variables in my queries in SQLDeveloper as well as Apex/PLSQL.
If I run an explain plan in SQLDeveloper it is often identical to the plan for the query in the pl/sql environment (seen through tkprof), yet the sqldeveloper query window is always faster.
The difference in speed is remarkable: my current "problem" query runs in 71 seconds inside a PL/SQL stored procedure using dynamic SQL with bind variables. If I print that query out and copy-paste it into a SQL Developer worksheet, the exact same query prompts me for the bind variable values and then runs in 0.07 seconds. Just to make sure the rows aren't being served from cache, if I run the stored procedure version again it still takes 71+ seconds.
If I hard code values in the PL/SQL query instead of using bind variables, the stored procedure runs as fast as SQLDeveloper, 0.07 seconds.
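For reference, the two variants being compared would look something like this in PL/SQL (a minimal sketch with a hypothetical table and predicate, since the actual procedure is not shown in the post):

```sql
DECLARE
  l_cnt NUMBER;
BEGIN
  -- Bind-variable version: one shared cursor whose plan is fixed by
  -- the values peeked at hard-parse time.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM orders WHERE status = :s'
    INTO l_cnt USING 'OPEN';

  -- Hard-coded version: a distinct cursor, optimized for this literal.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM orders WHERE status = ''OPEN'''
    INTO l_cnt;
END;
/
```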
I originally posted a similar problem over in the Apex forum and there we suspected the issue might be related to the 11g optimizer using bind variable peeking. SLOW report performance with bind variable
Ultimately, the goal is to have predictable results when taking a query from SQLDeveloper to PL/SQL.
Are there any SQL Developer developers out there that can confirm that "bind variable" syntax in SQL Developer is not using bind variables, but is instead rewriting the query with hard-coded values in query strings?

mcallister wrote:
I've got a nagging problem that is driving me crazy. Database is 11.1.0.7 and SQLDeveloper is 1.5.1 with the same behavior in 3.0.03.
when I take that query and put it in a PL/SQL procedure with dynamic SQL the performance takes a nose dive

That's one of the reasons not to use dynamic SQL: tuning the resulting queries is very difficult.
Is the dynamic SQL necessary? If you are only using bind variables it probably is not. If you are swapping WHERE clause predicates in and out through program logic (or using different column or table names), then predicting performance will be very hard.
When I must use dynamic SQL I find a useful technique is to create a debugging table with a CLOB into which I can insert the generated SQL code for later reference. That code can be used for both debugging (looking for syntax errors) and tuning. I also find it very, very useful to write the dynamic SQL with proper formatting and linefeeds so it is immediately readable on inspection
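A minimal sketch of that debugging-table technique (table and procedure names here are hypothetical):

```sql
-- One row per generated statement, kept for later syntax checks and tuning.
CREATE TABLE dyn_sql_log (
  logged_at TIMESTAMP DEFAULT SYSTIMESTAMP,
  source    VARCHAR2(100),
  sql_text  CLOB
);

-- Inside the generating procedure, just before EXECUTE IMMEDIATE:
--   INSERT INTO dyn_sql_log (source, sql_text)
--   VALUES ('my_report_proc', l_generated_sql);
-- An autonomous transaction keeps the log row even if the caller rolls back.
```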
Are there any SQL Developer developers out there that can confirm that "bind variable" syntax in SQL Developer is not using bind variables, but is instead rewriting the query with hard-coded values in query strings?

You can see if query rewrites are taking place by using the NO_QUERY_TRANSFORMATION hint and seeing if the execution plan changes while tuning. To a lesser degree you can examine execution plans from V$SQL_PLAN to look for signs of changes, like predicates you didn't code, but this is also very hard. -
Select query performance is very slow
Could you please explain BITMAP CONVERSION FROM ROWIDS to me?
Why is the query below going through BITMAP CONVERSION FROM ROWIDS twice on the same table?
SQL> SELECT AGG.AGGREGATE_SENTENCE_ID ,
2 AGG.ENTITY_ID,
3 CAR.REQUEST_ID REQUEST_ID
4 FROM epic.eh_aggregate_sentence agg ,om_cpps_active_requests car
5 WHERE car.aggregate_sentence_id =agg.aggregate_sentence_id
6 AND car.service_unit = '0ITNMK0020NZD0BE'
7 AND car.request_type = 'CMNTY WORK'
8 AND agg.hours_remaining > 0
9 AND NOT EXISTS (SELECT 'X'
10 FROM epic.eh_agg_sent_termination aggSentTerm
11 WHERE aggSentTerm.aggregate_sentence_id = agg.aggregate_sentence_id
12 AND aggSentTerm.date_terminated <= epic.epicdatenow);
Execution Plan
Plan hash value: 1009556971
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 5 | 660 | 99 (2)| 00:00:02 |
|* 1 | HASH JOIN ANTI | | 5 | 660 | 99 (2)| 00:00:02 |
| 2 | NESTED LOOPS | | | | | |
| 3 | NESTED LOOPS | | 7 | 658 | 95 (0)| 00:00:02 |
|* 4 | TABLE ACCESS BY INDEX ROWID | OM_CPPS_ACTIVE_REQUESTS | 45 | 2565 | 50 (0)| 00:00:01 |
| 5 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 6 | BITMAP AND | | | | | |
| 7 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 8 | INDEX RANGE SCAN | OM_CA_REQUEST_REQUEST_TYPE | 641 | | 12 (0)| 00:00:01 |
| 9 | BITMAP CONVERSION FROM ROWIDS| | | | | |
|* 10 | INDEX RANGE SCAN | OM_CA_REQUEST_SERVICE_UNIT | 641 | | 20 (0)| 00:00:01 |
|* 11 | INDEX UNIQUE SCAN | PK_EH_AGGREGATE_SENTENCE | 1 | | 0 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | EH_AGGREGATE_SENTENCE | 1 | 37 | 1 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID | EH_AGG_SENT_TERMINATION | 25 | 950 | 3 (0)| 00:00:01 |
|* 14 | INDEX RANGE SCAN | DATE_TERMINATED_0520 | 4 | | 2 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - access("AGGSENTTERM"."AGGREGATE_SENTENCE_ID"="AGG"."AGGREGATE_SENTENCE_ID")
4 - filter("CAR"."AGGREGATE_SENTENCE_ID" IS NOT NULL)
8 - access("CAR"."REQUEST_TYPE"='CMNTY WORK')
10 - access("CAR"."SERVICE_UNIT"='0ITNMK0020NZD0BE')
11 - access("CAR"."AGGREGATE_SENTENCE_ID"="AGG"."AGGREGATE_SENTENCE_ID")
12 - filter("AGG"."HOURS_REMAINING">0)
14 - access("AGGSENTTERM"."DATE_TERMINATED"<="EPIC"."EPICDATENOW"())

Now this query is giving the correct result, but performance is slow.
Please help to improve the performance.
SQL> desc epic.eh_aggregate_sentence
Name Null? Type
ENTITY_ID CHAR(16)
AGGREGATE_SENTENCE_ID NOT NULL CHAR(16)
HOURS_REMAINING NUMBER(9,2)
SQL> desc om_cpps_active_requests
Name Null? Type
REQUEST_ID NOT NULL VARCHAR2(16)
AGGREGATE_SENTENCE_ID VARCHAR2(16)
REQUEST_TYPE NOT NULL VARCHAR2(20)
SERVICE_UNIT VARCHAR2(16)
SQL> desc epic.eh_agg_sent_termination
Name Null? Type
TERMINATION_ID NOT NULL CHAR(16)
AGGREGATE_SENTENCE_ID NOT NULL CHAR(16)
DATE_TERMINATED NOT NULL CHAR(20)
user10594152 wrote:
Thanks for your reply.
Still I am getting the same problem

It is not a problem. Bitmap conversion usually is a very good thing. Using this feature the database can use one or several unselective b*tree indexes, combine them, and do a kind of bitmap selection. This should be slightly faster than an FTS and much faster than a normal index access.
Your problem is that your filter criteria seem to be not very useful. Which criterion gives the best reduction of rows?
Also, any kind of NOT EXISTS is potentially not very fast (NOT IN is worse). You can rewrite your query with an OUTER JOIN. Sometimes this will help, but not always.
SELECT AGG.AGGREGATE_SENTENCE_ID ,
AGG.ENTITY_ID,
CAR.REQUEST_ID REQUEST_ID
FROM epic.eh_aggregate_sentence agg
JOIN om_cpps_active_requests car ON car.aggregate_sentence_id = agg.aggregate_sentence_id
LEFT JOIN epic.eh_agg_sent_termination aggSentTerm ON aggSentTerm.aggregate_sentence_id = agg.aggregate_sentence_id and aggSentTerm.date_terminated <= epic.epicdatenow
WHERE car.service_unit = '0ITNMK0020NZD0BE'
AND car.request_type = 'CMNTY WORK'
AND agg.hours_remaining > 0
AND aggSentTerm.aggregate_sentence_id IS NULL

Edited by: Sven W. on Aug 31, 2010 4:01 PM -
Windows 8.1 laptop running slow, high memory and disk utilization
When I restart my laptop its memory utilization is around 50%. If I run several programs (maybe 2 or 3), like an Internet browser and an IDE like NetBeans, my PC becomes very slow.
When I was working on my other laptop, which runs Windows 7 (with 4GB RAM), I never had such an issue (I normally open heavyweight applications simultaneously without any problem).
But right now I am unable to do my work on my new laptop with Windows 8.1. Most of the time its disk usage also becomes 100%.
My RAM is 4GB, Processor Intel Core i5 2.60GHz
I am a software developer and I have to meet deadlines, but with such a slow PC a lot of time is wasted every day. I don't want to study how Windows 8.1 works or investigate its issues every day; I just want to use it for my software development purposes.
I went through some other questions in this forum, but those are not clear enough for me. I tried to connect to online chat with the help desk, but no luck (it always displays that they're busy with other customers, and I don't prefer phone calls).
So please help.

Chitrangi,
Install the WPT (windows Performance Toolkit) http://social.technet.microsoft.com/wiki/contents/articles/4847.install-the-windows-performance-toolkit-wpt.aspx ,
open a CMD prompt with administrative rights (right click it>run as admin) and run this command for 60 secs to capture the high HD usage:
xperf -on PROC_THREAD+LOADER+CSWITCH+FILENAME+FILE_IO+FILE_IO_INIT+DISK_IO+DISK_IO_INIT+DRIVERS+PROFILE -f kernel.etl -stackwalk Profile+CSwitch+DiskReadInit+DiskWriteInit+DiskFlushInit+FileCreate+FileCleanup+FileClose+FileRead+FileWrite -BufferSize 1024 -MaxBuffers 1024 -MaxFile 1024 -FileMode Circular && timeout -1 && xperf -d XperfSlowIOcir.etl
When finished zip the trace and upload it to skydrive or another file sharing service and put a link to it in your next post. -
I have two master-detail tables, for instance dept and emp.
The situation is this: I need the records of those departments that have clerks hired in a specific time period. I wrote the following query, but due to millions of records in the two tables it is very, very slow.
select * from emp a,dept b
where a.deptno in (select distinct deptno from emp where job='clerk')
and hiredate >= '01-jun-2004' and hiredate <='01-jan-2007'
and a.deptno=b.deptno
Can anybody tune it?

One thing I am seeing, that I find very troubling, is that posters such as the OP on this thread seem to be facing these tasks in a classroom or testing environment where they don't actually have access to Oracle and SQL*Plus.
They are being asked to tune something that exists only on paper and don't have the tools to provide an explain plan, probably don't even have the ability to do a describe on the table(s) because they don't actually exist.
If this is the case then the education system is cheating these people because tuning is not done in a room with the lights off by guessing. It is done, and only done, with:
EXPLAIN PLAN FOR
DBMS_XPLAN
TKPROF
a look at the statistics, etc.
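For the record, the first two tools on that list take only two statements to use; a sketch against the tables from the question (with explicit DATE literals instead of the implicit string conversions in the original):

```sql
-- Load the optimizer's chosen plan into PLAN_TABLE...
EXPLAIN PLAN FOR
SELECT *
FROM   emp a, dept b
WHERE  a.deptno = b.deptno
AND    a.hiredate >= DATE '2004-06-01'
AND    a.hiredate <= DATE '2007-01-01';

-- ...then pretty-print it.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```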
Here's a real-world example of what is wrong with what I see. Anyone want to guess which of these queries is the most efficient?
SELECT srvr_id
FROM servers
INTERSECT
SELECT srvr_id
FROM serv_inst;
SELECT srvr_id
FROM servers
WHERE srvr_id IN (
SELECT srvr_id
FROM serv_inst);
SELECT srvr_id
FROM servers
WHERE srvr_id IN (
SELECT i.srvr_id
FROM serv_inst i, servers s
WHERE i.srvr_id = s.srvr_id);
SELECT DISTINCT s.srvr_id
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id;
SELECT /*+ NO_USE_NL(s,i) */ DISTINCT s.srvr_id
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id;
SELECT DISTINCT srvr_id
FROM servers
WHERE srvr_id NOT IN (
SELECT srvr_id
FROM servers
MINUS
SELECT srvr_id
FROM serv_inst);
SELECT srvr_id
FROM servers s
WHERE EXISTS (
SELECT srvr_id
FROM serv_inst i
WHERE s.srvr_id = i.srvr_id);
WITH q AS (
SELECT DISTINCT s.srvr_id
FROM servers s, serv_inst i
WHERE s.srvr_id = i.srvr_id)
SELECT * FROM q;
SELECT DISTINCT s.srvr_id
FROM servers s, serv_inst i
WHERE s.srvr_id(+) = i.srvr_id;
SELECT srvr_id
FROM (
SELECT srvr_id, SUM(cnt) SUMCNT
FROM (
SELECT DISTINCT srvr_id, 1 AS CNT
FROM servers
UNION ALL
SELECT DISTINCT srvr_id, 1
FROM serv_inst)
GROUP BY srvr_id)
WHERE sumcnt = 2;
SELECT DISTINCT s.srvr_id
FROM servers s, serv_inst i
WHERE s.srvr_id+0 = i.srvr_id+0;

And yes, they all return the exact same result set using the test data.
The chance that anyone can guess the most efficient query looking at what I just presented is precisely zero.
Because the query most efficient in 8.1.7.4 is not the most efficient query in 9.2.0.4 which is not the most efficient query in 10.2.0.2. -
Query Execution is very slow through Forms
Hello Friends
I am facing a problem with D2k Forms. When I run a query through SQL*Plus its execution speed is very fast. When I run the same query through Forms (create a base-table block, set_block_property, and execute), it is very slow. How do I overcome this problem?
What are the various steps to keep in mind when writing code in Forms?
Thanks in advance.

Hi,
In order to gather schema statistics in EBS, you will have to login as Sysadmin and submit a request named as "Gather Schema Statistics"
Refer below link:
http://appsdba.info/docs/oracle_apps/R12/GatherSchemaStatistics.pdf
Also ensure to schedule the respective request so as to have your statistics up to date, either weekly or monthly as per your system environment.
Please refer note:
How Often Should Gather Schema Statistics Program be Run? [ID 168136.1]
How to Gather Statistics on Custom Schemas for Ebusiness Suite 11i and R12? [ID 1065813.1]
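Outside of E-Business Suite (where the concurrent request above is the supported route), the plain-database equivalent is DBMS_STATS; a sketch with a hypothetical schema name:

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APPS_CUSTOM',  -- hypothetical schema name
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);          -- gather index statistics too
END;
/
```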
Hope this helps!
Best Regards -
Query With BETWEEN Clause Slows Down
hi,
I am experiencing a query slowdown when using a BETWEEN clause. Is there any solution for it?

Here is the difference if I use equality predicates instead of BETWEEN.
SQL> select to_char(sysdate,'MM-DD-YYYY HH24:MI:SS') from dual;
TO_CHAR(SYSDATE,'MM
11-14-2005 15:44:03
SQL> SELECT COUNT(*) /*+ USE_NL(al2), USE_NL(al3), USE_NL(al4),
2 USE_NL(al5), USE_NL(al6) */
3 FROM acct.TRANSACTION al1,
4 acct.account_balance_history al2,
5 acct.ACCOUNT al3,
6 acct.journal al4,
7 acct.TIME al5,
8 acct.object_code al6
9 WHERE ( al1.reference_num = al4.reference_num(+)
10 AND al1.timekey = al5.timekey
11 AND al5.timekey = al2.timekey
12 AND al3.surrogate_acct_key = al2.surrogate_acct_key
13 AND al3.surrogate_acct_key = al1.surrogate_acct_key
14 AND al1.report_fy = al3.rpt_fy
15 AND al6.object_code = al1.object_adj
16 )
17 AND ((al1.timekey = 20040701
18 or al1.timekey = 20040801
19 or al1.timekey = 20040901
20 or al1.timekey = 20041001
21 or al1.timekey = 20041101
22 or al1.timekey = 20041201
23 or al1.timekey = 20050101
24 or al1.timekey = 20050201
25 or al1.timekey = 20050301
26 or al1.timekey = 20050401
27 or al1.timekey = 20050501
28 or al1.timekey = 20050601
29 or al1.timekey = 20050701
30 or al1.timekey = 20050801
31 or al1.timekey = 20050901)
32 AND al3.dept = '480');
COUNT(*)/*+USE_NL(AL2),USE_NL(AL3),USE_NL(AL4),USE_NL(AL5),USE_NL(AL6)*/
34245
SQL> select to_char(sysdate,'MM-DD-YYYY HH24:MI:SS') from dual;
TO_CHAR(SYSDATE,'MM
11-14-2005 15:44:24 -
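Two things are worth noting about the transcript above. First, a hint comment is only recognized by the optimizer when it immediately follows the SELECT keyword; placed after COUNT(*) as above, it is treated as an ordinary comment (the column heading in the output shows it became part of the select list). Second, the fifteen OR'd timekeys are all first-of-month values, so they collapse to a BETWEEN only on the assumption that timekey never holds other days of the month; otherwise BETWEEN admits extra rows. A hedged rewrite:

```sql
SELECT /*+ USE_NL(al2 al3 al4 al5 al6) */ COUNT(*)
FROM   acct.TRANSACTION al1,
       acct.account_balance_history al2,
       acct.ACCOUNT al3,
       acct.journal al4,
       acct.TIME al5,
       acct.object_code al6
WHERE  al1.reference_num = al4.reference_num(+)
AND    al1.timekey = al5.timekey
AND    al5.timekey = al2.timekey
AND    al3.surrogate_acct_key = al2.surrogate_acct_key
AND    al3.surrogate_acct_key = al1.surrogate_acct_key
AND    al1.report_fy = al3.rpt_fy
AND    al6.object_code = al1.object_adj
AND    al1.timekey BETWEEN 20040701 AND 20050901  -- only valid if timekey is always month-start
AND    al3.dept = '480';
```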
Query with contains is slower than like
Hi,
I have a column indexed with interMedia ConText. However, I ran the same query twice, differing only in the WHERE clause, as below.
Both queries select the same columns from the same table.
Query 1.
......where contains (text, 'pass')>0
Query 2.
......where upper(text) like '%PASS%'
Both queries return 476 rows,
but query 1 returns its result after 32 secs
and query 2 returns its result in only 3 secs.
Why? Isn't ConText supposed to be faster?
If not, it should at least not be so much slower than the normal LIKE statement.
Please explain, and advise if there is a possibility that something is wrong.
Thanks.

You don't state the version number; I'm assuming it's at least 8.1.5, since you call it interMedia. How long are your documents?
Do your queries really need to do double-wildcarding ("grep")? iMT indexes are "inverted lists" which do a direct look-up of the word you are searching for. In this case, it needs to do a full-table-scan of the $I table to find all matches.
There is a "double-truncation" feature in 8.1.6 that will get rid of the performance problem, but this was built for pharmaceutical searches, where the words get very long and are composed of many compounds. It is usually not useful for general text retrieval.
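To make the distinction concrete (a sketch; `docs` is a hypothetical table standing in for the poster's, with `text` as the indexed column):

```sql
-- Exact token: a direct probe of the inverted $I index.
SELECT COUNT(*) FROM docs WHERE CONTAINS(text, 'pass') > 0;

-- Double-wildcarded token: every entry in the $I table must be
-- scanned to find matching words, defeating the inverted list.
SELECT COUNT(*) FROM docs WHERE CONTAINS(text, '%pass%') > 0;
```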
What are your users querying on, what information needs do they have?
Paul
-
I am linking my question from Stack Overflow here. The link: http://stackoverflow.com/questions/27943913/sql-azure-query-with-row-number-executes-slow-if-columns-with-nvarchar-of-bi
Appreciate your help!
Gorgi

Hi,
Thanks for posting here.
I suggest you check these links to optimize your query on SQL Azure:
http://www.sqlusa.com/articles/query-optimization/
http://sqlblog.com/blogs/paul_white/archive/2011/02/23/Advanced-TSQL-Tuning-Why-Internals-Knowledge-Matters.aspx
Also check this thread, which had a similar issue:
https://social.msdn.microsoft.com/Forums/en-US/c1da08b4-265d-4ec8-a252-8d7090234e3e/simple-select-query-takes-long-time-to-execute-with-nvarchar-columns?forum=transactsql
Girish Prajwal -
Oracle query / view performing very slowly
Hello,
I had created a view in my application, for example:
CREATE OR REPLACE VIEW EmpTransfer
( EMPID,EMPNAME,EMPMANAGER,EMPDOB,EMPTRANSFER)
AS
SELECT EMPID,EMPNAME,EMPMANAGER,EMPDOB,EMPTRANSFER
FROM EMP ;
After a couple of months we changed a column name in the table EMP and added a new column.
We renamed column EMPTRANSFER to OLDEMPTRANSFER and added a new column NEWEMPTRANSFER.
The index was recreated on OLDEMPTRANSFER and a new index was created on NEWEMPTRANSFER.
The view was then recreated:
CREATE OR REPLACE VIEW EmpTransfer
( EMPID,EMPNAME,EMPMANAGER,EMPDOB,OldEMPTRANSFER,NEWEMPTRANSFER )
AS
SELECT EMPID,EMPNAME,EMPMANAGER,EMPDOB,OldEMPTRANSFER ,NEWEMPTRANSFER
FROM EMP ;
This view is working as expected, but sometimes it is very slow.
The performance of this view is randomly slow.
Is it possible that the column name change caused the slowness?

What are the explain plans from both before and after the column change? It could possibly be running slow the first time because of a hard parse due to the query change, which will remain until the statement is in the shared_pool.
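One way to compare what actually ran, rather than a predicted plan (a sketch; DBMS_XPLAN.DISPLAY_CURSOR with no arguments shows the last statement executed in the current session, 10g and later):

```sql
-- Run the query through the view first, then in the same session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- Or locate the cached cursor and its execution statistics by text:
SELECT sql_id, child_number, executions, elapsed_time
FROM   v$sql
WHERE  sql_text LIKE '%EMPTRANSFER%';
```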
Edited by: DaleyB on 07-Oct-2009 04:53 -
Broadband speed slow, high noise margin
This is my first time posting on the forums here and I would just like to report an issue with my connection. I live right at the end of the main phone line on a farm and have never had a good connection, but recently the speeds have become appallingly slow and I don't understand what is causing it. Here are the stats of the line:
Very slow as you can see.
Here are the advanced options when doing the ping test which state that the maximum download speed is 0.25, which I don't understand as the day before I had 0.50. I asked a friend who said it was likely a noise issue so here are the stats for that:
I'm unfamiliar with what these stats mean, but my friend who used to work for BT said my noise ratio is too high and would cause caps on my connection. I'm at a loss as to what to do and would appreciate any help on what may be causing this and any ways to fix it. As a bit of background, I switched from AOL to BT a month ago due to infernally slow internet speeds; after a brief spell where the speeds under BT were good, they suddenly went back to these levels again. I have spoken to BT staff over the phone who have reset the line twice, which causes a brief spike in performance before it goes back to this level. Any help would be appreciated, and thank you for your time.

Hi Paul_Mastrangelo,
With a line attenuation of 70.9dB your landline is approx 5.1km long from your property to your local exchange.
With a line attenuation of 70dB you may not be able to get a broadband connection much over 1mbps or even much over 0.5mbps as your line is just too long in distance from your property to the exchange.
I also see you're on ADSL2 — nice to see such a long line that can at least support it; mine can't.
As Keith and imjolly have said, it is recommended to keep the Home Hub/router connected 24/7, especially for those with longer lines, as any slight upset to the sync rate can affect the speed.
You may also be interested in this here: http://goo.gl/MNb8s
It's about providing superfast broadband for rural communities and to areas with slow broadband speeds.
Hope that helps,
Cheers
EDIT
PS: The quiet line test is when you dial 17070 from your landline phone and select option 2, called Quiet Line Test. There should be no noise (such as crackling, hissing, popping, etc.), but a slight hum on a cordless phone is normal.
I'm no expert, so please correct me if I'm wrong -
Extremely slow, high ping internet
I'm again having problems with my internet. I never, ever had problems with BT, but the past month it's been dire. Around a month ago for a few days my internet was really unstable, but it sorted itself out, now today my internet is worse again.
I log in today to find out that it's extremely slow to load pages up and when I check my ping it's in the thousands and won't go down.
Phoned up BT, no help there.
Can't do anything, my ping is way too high. :/
Any help ?
Solved!
Go to Solution.

Connection information
Line state: Connected
Connection time: 0 days, 0:21:03
Downstream: 8,128 Kbps
Upstream: 448 Kbps
ADSL settings
VPI/VCI: 0/38
Type: PPPoA
Modulation: ITU-T G.992.1
Latency type: Interleaved
Noise margin (Down/Up): 8.3 dB / 22.0 dB
Line attenuation (Down/Up): 23.0 dB / 13.0 dB
Output power (Down/Up): 19.8 dBm / 12.1 dBm
Loss of Framing (Local): 8
Loss of Signal (Local): 1
Loss of Power (Local): 0
FEC Errors (Down/Up): 7 / 0
CRC Errors (Down/Up): 0 / 2147480000
HEC Errors (Down/Up): nil / 0
Error Seconds (Local): 1
Macbook pro running 10.9.4 very slow, high memory pressure
All functions are running very slowly. I have to limit the number of open applications to about three, or the 4GB of memory hits its maximum and the system slows to almost completely non-responsive. Perhaps it was slowing down with earlier versions of OS X, but I notice it particularly with 10.9 and 10.9.4.
EtreCheck version: 1.9.15 (52)
Report generated September 30, 2014 at 9:30:26 AM CDT
Hardware Information: ?
MacBook Pro (15-inch Early 2008) (Verified)
MacBook Pro - model: MacBookPro4,1
1 2.4 GHz Intel Core 2 Duo CPU: 2 cores
4 GB RAM
Video Information: ?
GeForce 8600M GT - VRAM: 256 MB
Color LCD 1440 x 900
DELL 2408WFP 1920 x 1200 @ 60 Hz
System Software: ?
OS X 10.9.4 (13E28) - Uptime: 0 days 0:19:37
Disk Information: ?
SAMSUNG HM250HJ disk0 : (250.06 GB)
S.M.A.R.T. Status: Verified
EFI (disk0s1) <not mounted>: 209.7 MB
Macintosh HD (disk0s2) / [Startup]: 249.2 GB (120.62 GB free)
Recovery HD (disk0s3) <not mounted>: 650 MB
USB Information: ?
Alps Electric Hub in Apple Extended USB Keyboard
Mitsumi Electric Apple Optical USB Mouse
Alps Electric Apple Extended USB Keyboard
Generic Flash Card Reader
Apple Inc. Built-in iSight
Apple Inc. BRCM2046 Hub
Apple Inc. Bluetooth USB Host Controller
Apple, Inc. Apple Internal Keyboard / Trackpad
Apple Computer, Inc. IR Receiver
Gatekeeper: ?
Mac App Store and identified developers
Kernel Extensions: ?
[not loaded] com.cisco.cscotun (1.0) Support
[loaded] com.logmein.driver.LogMeInSoundDriver (1.0.3 - SDK 10.5) Support
Startup Items: ?
CiscoTUN: Path: /System/Library/StartupItems/CiscoTUN
vpnagentd: Path: /System/Library/StartupItems/vpnagentd
FanControlDaemon: Path: /Library/StartupItems/FanControlDaemon
Launch Daemons: ?
[loaded] com.adobe.fpsaud.plist Support
[running] com.micromat.TechToolProDaemon.plist Support
[loaded] com.microsoft.office.licensing.helper.plist Support
[loaded] com.oracle.java.Helper-Tool.plist Support
[loaded] com.oracle.java.JavaUpdateHelper.plist Support
Launch Agents: ?
[running] com.micromat.TechToolProAgent.plist Support
[loaded] com.oracle.java.Java-Updater.plist Support
User Launch Agents: ?
[failed] com.apple.MobileMeSyncClientAgent.plist
[loaded] com.google.keystone.agent.plist Support
[running] com.microsoft.LaunchAgent.SyncServicesAgent.plist Support
User Login Items: ?
Stickies
LMILaunchAgentFixer
Dropbox
Microsoft Database Daemon
OneDrive
Internet Plug-ins: ?
iPhotoPhotocast: Version: 7.0
FlashPlayer-10.6: Version: 15.0.0.152 - SDK 10.6 Support
JavaAppletPlugin: Version: Java 7 Update 67 Check version
Flash Player: Version: 15.0.0.152 - SDK 10.6 Support
Default Browser: Version: 537 - SDK 10.9
QuickTime Plugin: Version: 7.7.3
SharePointBrowserPlugin: Version: 14.4.4 - SDK 10.6 Support
Google Earth Web Plug-in: Version: 6.1 Support
Scorch: Version: 5.2.5 Support
Silverlight: Version: 5.1.20913.0 - SDK 10.6 Support
Photo Center Plugin: Version: Photo Center Plugin 1.1.2.2 Support
Audio Plug-ins: ?
BluetoothAudioPlugIn: Version: 1.0 - SDK 10.9
AirPlay: Version: 2.0 - SDK 10.9
AppleAVBAudio: Version: 203.2 - SDK 10.9
iSightAudio: Version: 7.7.3 - SDK 10.9
iTunes Plug-ins: ?
Quartz Composer Visualizer: Version: 1.4 - SDK 10.9
3rd Party Preference Panes: ?
Fan Control Support
Flash Player Support
Flip4Mac WMV Support
Java Support
MacFUSE Support
TechTool Protection Support
Time Machine: ?
Time Machine not configured!
Top Processes by CPU: ?
8% WindowServer
6% Microsoft Outlook
3% com.apple.WebKit.WebContent
2% Safari
1% com.apple.WebKit.Networking
Top Processes by Memory: ?
164 MB com.apple.WebKit.WebContent
135 MB Microsoft Outlook
135 MB mds_stores
106 MB com.apple.IconServicesAgent
94 MB Safari
Virtual Memory Information: ?
1.04 GB Free RAM
1.73 GB Active RAM
738 MB Inactive RAM
521 MB Wired RAM
753 MB Page-ins
0 B Page-outs

You might obtain some relief by installing a 4 GB RAM module in place of one of the two 2 GB modules. That would then allow the MBP to operate with the maximum of 6 GB RAM that an early 2008 15" MBP supports.
The best sources for Mac compatible RAM are OWC and Crucial.
Ciao. -
Hi
In OC, I understand each question group corresponds to a view, and we can query it using SQL. I have tried to query one record (by patient ID) and it takes quite a while to respond. From the explain plan, it can be seen that it takes a long path to return a row.
When I create a temp table with all records from this view, returning the same record takes less than a split second. As such, may I know if there is any way to improve the query speed on such views?
Asking for some advice, please.
Thank you
Boon Yiang

One thing that you need to do to ensure good query performance in OC is to make sure the database has statistics created for the OC tables via the "analyze table" command. The cost-based query optimizer in Oracle relies on up-to-date table statistics to optimize queries. Table analysis should be performed regularly by the DBA. OC provides scripts to run the table analysis; they are in the $RXC_INSTALL directory and are named anarxctab.sql, anadestab.sql, and analrtab.sql.
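The per-table command those OC scripts issue looks like this (a sketch; the table and schema names are hypothetical, and on later Oracle versions DBMS_STATS is the preferred interface):

```sql
-- Classic statistics collection, as referenced in the OC scripts.
ANALYZE TABLE responses COMPUTE STATISTICS;

-- DBMS_STATS equivalent for a single table:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'RXC', tabname => 'RESPONSES');
END;
/
```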
Bob Pierce
Constella Group
[email protected]