Fixing sluggishness with SMART Utility reading: "1 Reallocated bad sector, and 6 errors"
Specs are as follows for the MacBook Pro 15" Late 2008:
Processor: 2.93 GHz Intel Core 2 Duo
Memory: 4 GB 1067 MHz DDR3
Graphics: NVIDIA GeForce 9400M 256 MB
Software: OS X 10.9.5
As instructed in other discussions, due to the sluggishness (apps loading slowly, web-development tasks requiring significant loading downtime) of my recently acquired MacBook Pro 15" Late 2008, I've installed Volitans Software's SMART Utility tool, and the results I have are:
1 reallocated bad sector
6 total errors (last error type uncorrectable, prior command READ DMA EXT)
My question, therefore, is: to fix the sluggishness and make this a responsive web-development machine, what should I do?
Insofar as I've researched, I see my current options as
Upgrade the RAM to 8GB
Do the above and replace the drive
Do some kind of factory reset or electrickery to fix the sluggishness.
I'll happily provide more info if required. Any and all advice would be appreciated.
My opinion: back up, then replace the drive. Increasing the RAM may be a good thing too, but unless you're really pushing things, 4 GB of memory should be sufficient; and if that's the original drive, then it's high time to replace it anyway. Besides, it will only cost you maybe a hundred bucks and less than an hour of work.
Also, check how full your drive is. If you only have a few gigabytes left, it can start to slow things down. Some people recommend that you keep 5% of the drive free at all times for system use; I usually suggest 1 to 2 times the amount of installed RAM.
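As a rough illustration of the free-space rules of thumb above (a sketch only: the 250 GB drive size is a hypothetical example, while the 4 GB matches the poster's installed RAM):

```shell
# Work out both rules of thumb for a hypothetical 250 GB drive with 4 GB RAM.
ram_gb=4
drive_gb=250
five_pct=$((drive_gb * 5 / 100))
echo "5% rule: keep at least ${five_pct} GB free"
echo "RAM rule: keep ${ram_gb}-$((ram_gb * 2)) GB free"
# To see actual free space on the startup disk: df -h /
```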
Similar Messages
-
I cannot read the data from the module. Can someone show me how to do this, preferably with examples (reading data from the module and transmitting data between channels)? Thanks in advance.
Hello. Most of the engineers on Developer Exchange are more familiar with NI products. Contacting ICS for technical support would be a better course of action. -
I bought a brand new Mac 27" desktop that came loaded with Adobe CS6. Everything worked like a charm until the hard drive, 1 TB, developed a bad sector, and the Apple Store installed a new one, as I had an extended warranty with them. They recycled the drive immediately (like a fool, I didn't ask for it back to get the data off it). But luckily I have all my data on CrashPlan. I downloaded it and it worked great, except I downloaded it to the desktop and not the original location, so it got squirrely. I also have an external 1.5 TB drive that I wanted to make bootable, so I installed Mavericks 10.9.3 on it. I then went ahead and installed it on the newly installed drive too.
I think that because I have a new drive, Adobe thinks I have a new computer. I bought the 27" Mac brand new from eBay and it came loaded with software, including CS6. I have a serial number for CS6, but Adobe said it wasn't valid. (I have owned CS2, CS4 and now CS6, which came preloaded onto the Mac by the seller, who told me that the software was registered to the Mac???)
I am going to re-download the backup, but this time to the original location (I still don't think it will work with Adobe). What can I do about this?
iMac 27-inch, Late 2012
Processor 3.2 GHz Intel Core i5
Memory 32 GB 1600 MHz DDR3
Graphics NVIDIA GeForce GTX 675MX 1024 MB
Software OS X 10.9.3 (13D65)
You need to contact Adobe Support, either by chat or via phone, when you have serial number and activation issues.
Here is a link to a page with options to help make contact:
http://www.adobe.com/support/download-install/supportinfo/ -
Setting Disk Utility to lock out bad sectors
When my Leopard DVD arrives I'm planning on doing a clean install to my internal HD. I would like to format the drive to lock out bad sectors, but I can't seem to find a setting to do that in Disk Utility, or anything about it in Help.
Hi Peggy Lynn,
To the best of my knowledge there is no way to lock out bad sectors.
What you can do is erase the disk and, under the Security tab, have zeros written to the disk. When you do this, if any bad sectors are found, they will be mapped out so that they will no longer be used.
Disk drives always have some extra sectors which can be mapped in to replace any that go bad. The drive can continue doing this until the spare sectors are all used up; then you will have to replace it.
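For what it's worth, the number of sectors a drive has already remapped to spares can be read from its SMART attributes. This is a sketch assuming the third-party smartmontools package (not mentioned in this thread); the sample output line and its values are hypothetical:

```shell
# On a real drive (requires smartmontools): smartctl -A /dev/disk0
# Below we parse a sample attribute line instead; the last field is the raw
# count of sectors already remapped to spares.
sample='  5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 1'
echo "$sample" | awk '{print "reallocated sectors:", $NF}'
```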
Allan -
Time capsule not working - fixing hdd with disk utility
My Time Capsule stopped backing up today. It couldn't connect to the AirPort disk. So I opened up the Time Capsule and connected the HDD to my laptop, hoping that running Disk Utility would find some simple error and fix it. After running Verify, Disk Utility gave me the following message -
At this point I'm definitely beyond my knowledge of what to do... should I create an EFI partition? But it says it's for boot disks and RAIDs, neither of which applies to the Time Capsule/AirPort disk... I think.
Any hardware guys out there who can advise me on my next steps? Please write with the assumption that I'm new to Apple and computers in general.
DCRican: should I create an EFI partition?
No, it will lead to errors in the Time Capsule if you modify its disk in any way. Do not touch the disk. This is not an error, by the way, which you did realise: the disk is not meant to have EFI because the TC boots from flash, not EFI.
I am a bit surprised that your immediate reaction to a failure to link to the TC was to assume disk errors and then go to this extraordinary length to test it.
We recommend a disk pull only as a last resort to recover files, or when the disk is dead; not for this error.
My time capsule stopped backing up today. It couldn't connect to the airport disk.
I think you mean Time Machine (the backup software on your computer; not related to the Time Capsule other than as a target for the backup).
This is a software problem in the computer and has nothing to do with the Time Capsule (hardware).
It looks like you're on Yosemite... poor blighter!
It is about Apple's worst release, at least for networking; just read the discussions, many, many people have issues.
When you hit a problem like this, first google, then ask on the discussion forum; please don't start pulling your hardware apart.
There is no real known fix yet.
There are things to try, but nothing so far is proving infallible, which means we haven't really hit the main cause.
Start with a full factory reset.
Factory reset universal
Power off the TC, i.e. pull the power cord or power off at the wall, and wait 10 sec. Hold in the reset button (be gentle), power on again while still holding reset, and keep holding it in for another 10 sec. You may need some help, as it is hard to both hold in reset and apply power. It will show success by rapidly blinking the front LED. Release the reset and wait a couple of minutes for the TC to reset and come back with factory settings. If the front LED doesn't blink rapidly, you missed it; simply try again. The reset button is fairly fragile in these: press it so you feel it just click and no more. I have seen people bend the lever or even break it. I use a toothpick as a tool.
N.B. None of your files on the hard disk of the TC are deleted; this simply clears out the router settings of the TC.
Then, in AirPort Utility, set up the TC with short names: no spaces, pure alphanumeric names (and passwords of 8-20 characters).
Good names: TCgen4 as the name of the TC, TCwifi as the wireless name.
If you then have issues, please first of all make sure you can mount the TC disk in the Finder: click TCgen4 in the left pane of the Finder and the disk appears on the right. Access the disk.
If that fails, post again and we can go to the next step. -
Can't UN-Mount internal HD to fix it with Disk Utility
Hi all,
When booting from an external FireWire backup clone disk, I am not able to unmount the internal (original) HD in order to verify or repair it with Disk Utility, because supposedly it is in use by some application.
This is no big deal, because if I boot in safe mode (from my external backup disk) I am able to use Disk Utility on it. But I am very curious: how can I find out which application is preventing DU from unmounting it?
Thanks in advance.
If your Spotlight is working just fine, I wouldn't monkey with it. Putting the drive into the Privacy pane adds an instruction ON THE DRIVE to turn off Spotlight, so when you reboot to the drive, Spotlight would be turned off; turning it back on would force the drive to be re-indexed.
You can try using the Terminal (in your Utilities folder) to discover what is in use, with the list-open-files command, lsof, followed by the drive name. I have an internal drive named Panther, so if I type this into Terminal:
lsof "/Volumes/Panther"
and hit the Return key, it returns the following information:
Finder 112 francine 15r VDIR 14,19 68 540057 /Volumes/Panther/.Trashes/501
The only file open on the drive is actually the drive's trash folder, and it is open in the Finder. I think any mounted drive would have that open.
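When a volume has many open files, awk can pull just the process name, PID and path out of each lsof line. A small sketch using the sample line quoted above:

```shell
# Extract the interesting columns from an lsof output line: field 1 is the
# process name, field 2 the PID, and the last field the open file's path.
line='Finder 112 francine 15r VDIR 14,19 68 540057 /Volumes/Panther/.Trashes/501'
echo "$line" | awk '{print "process=" $1, "pid=" $2, "file=" $NF}'
```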
Francine
Francine
Schwieder -
Need to backup NTFS drive with bad sectors and restore to new drive
So far, what I've tried is downloading Clonezilla and doing a drive-to-drive clone; this resulted in the new drive thinking it had bad sectors just like the old one. Since each attempt eats 2-3 hours of my time and I have to leave soon to go back to school, I don't exactly have all the time in the world. What is the recommended course of action here?
What I've been reading up on is possibly doing a clone to an image file. From what I understand, a clone to image will not copy over bad sectors (there is a very small amount on this drive at this time; I'm not worried about a massive amount of corrupted files) and instead just leaves the files that existed in those sectors corrupted on the new drive. Is this true? How do I go about this correctly so I can successfully re-image to my new drive?
I would like to note that I'm trying to back up the entire DRIVE, not just the main partition: boot sector and all.
edit:
Before anyone suggests running chkdsk /r /f in Windows: I've done this, and the partition is still giving me reports of bad sectors. This gives me the idea this drive is on its way out, and I'm trying to limit my attempts at cloning the entire thing at this point so I don't make the problem worse. Someone must have dealt with this before?
Last edited by whaevr (2014-02-10 00:14:33)
I've had good success with dd_rescue in the past. The default behavior of dd_rescue is to skip sectors it cannot read (see the man page for a more precise explanation of its behavior). However, it's fairly simple-minded and requires manual intervention if you want to maximize the amount of data you recover.
It seems the new hotness is ddrescue, a GNU project separate from dd_rescue, which provides a number of improvements. Most notably, it takes the dd_rescue approach and adds greater sophistication, making it capable of recovering more data more efficiently while reducing strain on the failing drive.
If I were to recover a drive now, I'd probably give ddrescue a whirl.
Hope this helps.
Last edited by hezekiah (2014-02-10 04:28:19) -
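The GNU ddrescue workflow suggested above is typically run in two passes against a map file. A minimal sketch with hypothetical device names; the commands are echoed rather than executed here, since running them would overwrite a real disk:

```shell
# Hypothetical devices; adjust before use.
SRC=/dev/sdX    # failing source drive
DST=/dev/sdY    # healthy destination drive (or an image file)
MAP=rescue.map  # map file lets ddrescue resume and track bad areas

# Pass 1: copy the easy data first, skipping unreadable areas (-n: no scraping).
echo "ddrescue -f -n $SRC $DST $MAP"
# Pass 2: go back and retry only the bad areas, up to 3 times (-r3).
echo "ddrescue -f -r3 $SRC $DST $MAP"
```

The map file is the key design point: because ddrescue records which regions it has already read, the second pass touches only the bad spots, minimizing wear on the dying drive.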
How do I fix problem with Smart Sharpen & CR as filter
Smart Sharpen doesn't apply (or doesn't apply completely). Photoshop CC on Mac OS X 10.7.5.
new merged layer > Filter > Sharpen > Smart Sharpen > adjust sliders > toggle Preview > click OK (progress bar) > nothing or 50% applied
1. Smart Sharpen (SS) shows changes from sliders in SS dialog box AND on underlaying image in PS but checking and unchecking the Preview box only toggles the image in PS, the SS dialog box does not change.
2. After clicking 'OK', SS does not seem to apply at normal settings, but when the radius is cranked up to super-ugly levels it appears to apply at about 50% (Opacity and Fill are at 100%).
Related Problem:
After using 'Camera Raw as a filter' on a new merged layer, then adding a layer mask, and then painting with a brush at 100% Opacity and Flow, it appears that only about 50% opacity is applied to the mask from the brush.
I did a lot of such hymn things in the past, but I was always working on the scans in Photoshop: adjusting, retouching spots and cropping. And I was always separating those images to be more flexible, so I never ran into such a problem.
I always scan at a higher quality level than I need, and I scan in color; this allows me to eliminate the paper color easily. Even if I need 1-bit images at the end, I do it in color, so I can also rotate the image.
I did once run into a similar problem with a scan I got from a copier-scanner machine (it was not a song), but saving as PSD resolved my problem.
So I would suggest: open your files in Photoshop, resave them as PSD files and use those instead. If you use 1-bit images (which is fine for this type of image), you should use a resolution equal to the printer's resolution. -
Unable to bootstrap transaction group 7845 when attempting to fix harddrive with disk utility
Hello,
I recently tried burning a CD from the Mac. I pressed cancel because I forgot to set the right speed. So I kept clicking cancel, then eventually shut it down. I started it up and got a "prohibited" sign.
The topic title is the error I get.
(unable to bootstrap transaction group 7845)
No, I don't have enough money to buy one of those "Time Machine" thingys.
But it's OK. The only thing that is valuable to me was my documents, and they're all sent to my email.
BDAqua wrote:
I was afraid of that after posting and then re-reading your title!
Do you have a Backup? -
I just bought a 5G iPod touch and was wondering if there are any iHomes out, or coming out soon, that are compatible with this device. I know there are adapters and cables you can purchase to use with your existing iHome, but I read the poor reviews on them and was scared away. I use my iHome very regularly and need this accessory to work with my new iPod. If you have any information on this, please help!
iHome already makes docks that support the Lightning connector:
http://www.ihomeaudio.com/products/#/?filter=28
Regards. -
My 12" PowerBook Aluminum was refusing to shut down, meaning I couldn't clone it to my external. So I ran SMART Utility. It found Pending Bad Sectors: 2, Reallocated Bad Sectors: 3, Total Errors: 30, and Last Error Type: Uncorrectable.
So I ordered a new drive. Every time I try to clone my current drive to my external, it bumps up against some file it fails to copy. The first couple of times these were Adobe files. So I removed CS2 from the computer (a lengthy process), ran AppleJack, and tried again. This is the error message I get from SuperDuper: WARNING: Caught I/O exception(22): Invalid argument
| 11:32:39 PM | Info |WARNING: Source: /private/var/log/system.log, lstat(): 0
| 11:32:39 PM | Info |WARNING: Target: /Volumes/GB 12" Backup/private/var/log/system.log, lstat(): 0
| 11:32:39 PM | Info |Attempting to copy file using copyfile().
| 11:32:39 PM | Info |Attempting to copy file using ditto.
| 11:32:39 PM | Error | ditto: /private/var/log/./system.log: Result too large
So I have a new hard drive to install, but I can't clone my old one. I'd really like to shorten this process as much as possible as I've been having weeks of computer **** (three laptops, two failing hard drives and a bad memory card spread out over all three).
Questions: 1) How do I clone this drive? And 2) if I DO successfully clone it, am I just cloning the problems I'm trying to rectify?
Please help.
Cheers,
Giles
Giles:
I would like to copy my applications over from my old failing drive, but I don't want to copy any corruption. Can you please advise on the best way to do this?
If there is an issue with the files and apps on the drive, there may be cause for concern. However, if the files and apps were not having issues, and it is simply a matter of the HDD itself failing, there should not be a problem with moving them over. Then later, if you find there are issues, you can simply reinstall the problem apps.
Or should I just bite the bullet and reinstall from scratch?
That is certainly an option. However, as I noted above, you may not need to do so up front.
Even if I reinstall from scratch, can I safely copy my user data (preferences, bookmarks, etc) for Safari and Entourage?
Yes, you can. I suggest that you use the procedure laid out in the article A Basic Guide for Migrating to Intel-Macs. I realize that you are working with PowerPC Macs, but you need the same stuff moved. It works very well. I would be sure to test the drive to which you have copied before deleting the old stuff.
Good luck.
cornelius -
Serial table scan with direct path read compared to db file scattered read
Hi,
The environment
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit
8K block size
db_file_multiblock_read_count is 128
show sga
Total System Global Area 1.6702E+10 bytes
Fixed Size 2219952 bytes
Variable Size 7918846032 bytes
Database Buffers 8724152320 bytes
Redo Buffers 57090048 bytes
16GB of SGA with 8GB of db buffer cache.
-- database is built on Solid State Disks
-- SQL trace and wait events
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true )
-- The underlying table is called tdash. It has 1.7 Million rows based on data in all_objects. NO index
TABLE_NAME Rows Table Size/MB Used/MB Free/MB
TDASH 1,729,204 15,242 15,186 56
TABLE_NAME Allocated blocks Empty blocks Average space/KB Free list blocks
TDASH 1,943,823 7,153 805 0
Objectives
To show that when serial scans are performed on database built on Solid State Disks (SSD) compared to Magnetic disks (HDD), the performance gain is far less compared to random reads with index scans on SSD compared to HDD
Approach
We want to read the first 100 rows of tdash table randomly into buffer, taking account of wait events and wait times generated. The idea is that on SSD the wait times will be better compared to HDD but not that much given the serial nature of table scans.
The code used
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_with_tdash_ssdtester_noindex';
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
/
The server is rebooted prior to any tests.
When run as default, the optimizer (although some attribute this to the execution engine) chooses direct path read into the PGA in preference to db file scattered read.
With this choice it takes 6,520 seconds to complete the query. The results are shown below.
SQL ID: 78kxqdhk1ubvq
Plan Hash: 1148949653
SELECT *
FROM
TDASH WHERE OBJECT_ID = :B1
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 2 47 0 0
Execute 100 0.00 0.00 1 51 0 0
Fetch 100 10.88 6519.89 194142802 194831012 0 100
total 201 10.90 6519.90 194142805 194831110 0 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL TDASH (cr=1948310 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 2 0.00 0.00
direct path read 1517504 0.05 6199.93
asynch descriptor resize 196 0.00 0.00
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 3.84 4.03 320 48666 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 3.84 4.03 320 48666 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID: 9babjv8yq8ru3
Plan Hash: 0
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 3.84 4.03 320 48666 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.84 4.03 320 48666 0 2
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.01 0.00 2 47 0 0
Execute 129 0.01 0.00 1 52 2 1
Fetch 140 10.88 6519.89 194142805 194831110 0 130
total 278 10.91 6519.91 194142808 194831209 2 131
Misses in library cache during parse: 9
Misses in library cache during execute: 8
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 5 0.00 0.00
Disk file operations I/O 3 0.00 0.00
direct path read 1517504 0.05 6199.93
asynch descriptor resize 196 0.00 0.00
102 user SQL statements in session.
29 internal SQL statements in session.
131 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: mydb_ora_16394_test_with_tdash_ssdtester_noindex.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
102 user SQL statements in trace file.
29 internal SQL statements in trace file.
131 SQL statements in trace file.
11 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ssdtester.plan_table
Schema was specified.
Table was created.
Table was dropped.
1531657 lines in trace file.
6520 elapsed seconds in trace file.
I then force the query not to use direct path read by invoking:
ALTER SESSION SET EVENTS '10949 trace name context forever, level 1'; -- no direct path read
In this case the optimizer predominantly uses db file scattered read, and the query takes 4,299 seconds to finish, which is around 34% faster than using direct path read (the default).
The report is shown below
SQL ID: 78kxqdhk1ubvq
Plan Hash: 1148949653
SELECT *
FROM
TDASH WHERE OBJECT_ID = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 2 47 0 0
Execute 100 0.00 0.00 2 51 0 0
Fetch 100 143.44 4298.87 110348670 194490912 0 100
total 201 143.45 4298.88 110348674 194491010 0 100
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER) (recursive depth: 1)
Rows Row Source Operation
1 TABLE ACCESS FULL TDASH (cr=1944909 pr=1941430 pw=0 time=0 us cost=526908 size=8091 card=1)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (FULL) OF 'TDASH' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 129759 0.01 17.50
db file scattered read 1218651 0.05 3770.02
latch: object queue header operation 2 0.00 0.00
DECLARE
type array is table of tdash%ROWTYPE index by binary_integer;
l_data array;
l_rec tdash%rowtype;
BEGIN
SELECT
a.*
,RPAD('*',4000,'*') AS PADDING1
,RPAD('*',4000,'*') AS PADDING2
BULK COLLECT INTO
l_data
FROM ALL_OBJECTS a;
DBMS_MONITOR.SESSION_TRACE_ENABLE ( waits=>true );
FOR rs IN 1 .. 100
LOOP
BEGIN
SELECT * INTO l_rec FROM tdash WHERE object_id = l_data(rs).object_id;
EXCEPTION
WHEN NO_DATA_FOUND THEN NULL;
END;
END LOOP;
END;
call count cpu elapsed disk query current rows
Parse 0 0.00 0.00 0 0 0 0
Execute 1 3.92 4.07 319 48625 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 1 3.92 4.07 319 48625 0 1
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
SQL ID: 9babjv8yq8ru3
Plan Hash: 0
BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.00 0 0 0 1
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 96 (SSDTESTER)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 2 3.92 4.07 319 48625 0 2
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.92 4.07 319 48625 0 2
Misses in library cache during parse: 0
Misses in library cache during execute: 1
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 0.00 0.00
log file sync 1 0.00 0.00
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call count cpu elapsed disk query current rows
Parse 9 0.01 0.00 2 47 0 0
Execute 129 0.00 0.00 2 52 2 1
Fetch 140 143.44 4298.87 110348674 194491010 0 130
total 278 143.46 4298.88 110348678 194491109 2 131
Misses in library cache during parse: 9
Misses in library cache during execute: 8
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 129763 0.01 17.50
Disk file operations I/O 3 0.00 0.00
db file scattered read 1218651 0.05 3770.02
latch: object queue header operation 2 0.00 0.00
102 user SQL statements in session.
29 internal SQL statements in session.
131 SQL statements in session.
1 statement EXPLAINed in this session.
Trace file: mydb_ora_26796_test_with_tdash_ssdtester_noindex_NDPR.trc
Trace file compatibility: 11.1.0.7
Sort options: default
1 session in tracefile.
102 user SQL statements in trace file.
29 internal SQL statements in trace file.
131 SQL statements in trace file.
11 unique SQL statements in trace file.
1 SQL statements EXPLAINed using schema:
ssdtester.plan_table
Schema was specified.
Table was created.
Table was dropped.
1357958 lines in trace file.
4299 elapsed seconds in trace file.
I note that there are 1,517,504 direct path read waits with a total time of nearly 6,200 seconds. In comparison, with no direct path read there are 1,218,651 db file scattered read waits with a total wait time of 3,770 seconds. My understanding is that direct path read can use single- or multi-block reads into the PGA, whereas db file scattered reads do multi-block reads into multiple discontiguous SGA buffers. So is it possible, given the higher number of direct path waits, that the optimizer cannot do multi-block reads (contiguous buffers within the PGA) and hence has to revert to single-block reads, which results in more calls and more waits?
I appreciate any advice, and apologies for being long-winded.
Thanks,
Mich
Hi Charles,
I am doing your tests for t1 table using my server.
Just to clarify my environment is:
I did the whole of this test on my server. My server has an i7-980 hex-core processor with 24 GB of RAM and a 1 TB SATA II HDD for test/scratch, backup and archive. The operating system is RHEL 5.2 64-bit, installed on a 120 GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive.
Oracle version installed was 11g Enterprise Edition Release 11.2.0.1.0 -64bit. The binaries were created on HDD. Oracle itself was configured with 16GB of SGA, of which 7.5GB was allocated to Variable Size and 8GB to Database Buffers.
For Oracle tablespaces, including SYS, SYSTEM, SYSAUX, TEMPORARY, UNDO and the redo logs, I used file systems on a 240 GB OCZ Vertex 3 Series SATA III 2.5-inch Solid State Drive. With 4K random reads at 53,500 IOPS and 4K random writes at 56,000 IOPS (manufacturer's figures), this drive is probably one of the fastest commodity SSDs using NAND flash memory with multi-level cells (MLC). Now, my T1 table, created as per your script, has the following rows and blocks (8K block size):
SELECT
NUM_ROWS,
BLOCKS
FROM
USER_TABLES
WHERE
TABLE_NAME='T1';
NUM_ROWS BLOCKS
12000000 178952
which is pretty identical to yours.
Then I ran the query as below:
set timing on
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'test_bed_T1';
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 8';
SELECT
COUNT(*)
FROM
T1
WHERE
RN=1;
which gives
COUNT(*)
60000
Elapsed: 00:00:05.29
tkprof output shows
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.02 5.28 178292 178299 0 1
total 4 0.02 5.28 178292 178299 0 1
Compared to yours:
Fetch 2 0.60 4.10 178493 178498 0 1
It appears to me that my CPU utilisation is better by an order of magnitude, but my elapsed time is worse!
Now, the way I see it, elapsed time = CPU time + wait time. Further down I have:
Rows Row Source Operation
1 SORT AGGREGATE (cr=178299 pr=178292 pw=0 time=0 us)
60000 TABLE ACCESS FULL T1 (cr=178299 pr=178292 pw=0 time=42216 us cost=48697 size=240000 card=60000)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
60000 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 3 0.00 0.00
SQL*Net message from client 3 0.00 0.00
Disk file operations I/O 3 0.00 0.00
direct path read 1405 0.00 4.68
Your direct path reads are
direct path read 1404 0.01 3.40
which indicates to me that you have faster disks than mine, whereas it sounds like my CPU is faster than yours.
With db file scattered read I get
Elapsed: 00:00:06.95
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 1.22 6.93 178293 178315 0 1
total 4 1.22 6.94 178293 178315 0 1
Rows Row Source Operation
1 SORT AGGREGATE (cr=178315 pr=178293 pw=0 time=0 us)
60000 TABLE ACCESS FULL T1 (cr=178315 pr=178293 pw=0 time=41832 us cost=48697 size=240000 card=60000)
Rows Execution Plan
0 SELECT STATEMENT MODE: ALL_ROWS
1 SORT (AGGREGATE)
60000 TABLE ACCESS MODE: ANALYZED (FULL) OF 'T1' (TABLE)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
Disk file operations I/O 3 0.00 0.00
db file sequential read 1 0.00 0.00
db file scattered read 1414 0.00 5.36
SQL*Net message from client 2 0.00 0.00
compared to your
db file scattered read 1415 0.00 4.16
On the face of it, this test shows a 21% improvement for direct path read compared to db file scattered read. So now I can go back and re-visit my original test results:
First default with direct path read
call count cpu elapsed disk query current rows
Parse 1 0.01 0.00 2 47 0 0
Execute 100 0.00 0.00 1 51 0 0
Fetch 100 10.88 6519.89 194142802 194831012 0 100
total 201 10.90 6519.90 194142805 194831110 0 100
CPU ~ 11 sec, elapsed ~ 6520 sec
wait stats
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
direct path read 1517504 0.05 6199.93
roughly 0.004 sec for each I/O.
Now with db file scattered read I get
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 2 47 0 0
Execute 100 0.00 0.00 2 51 0 0
Fetch 100 143.44 4298.87 110348670 194490912 0 100
total 201 143.45 4298.88 110348674 194491010 0 100
CPU ~ 143 sec, elapsed ~ 4299 sec
and waits:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 129759 0.01 17.50
db file scattered read 1218651 0.05 3770.02
roughly 17.5/129759 = .00013 sec for each single-block I/O and 3770.02/1218651 = .0031 sec for each multi-block I/O.
Now my theory is that the improvement comes from the large buffer cache (8320MB) inducing some read-aheads (async pre-fetch). Read-aheads are like quasi-logical I/Os, and they are cheaper than physical I/Os. So when there is a large buffer cache and read-aheads can be done, is using the buffer cache a better choice than the PGA?
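The per-I/O arithmetic above, plus the headline comparison between the two long runs, can be reproduced with a short Python sketch (all numbers are copied from the traces above; nothing here is measured live):

```python
# Average wait per I/O, from the wait-event totals above.
direct_path  = 6199.93 / 1517504    # direct path read
single_block = 17.50 / 129759       # db file sequential read
multi_block  = 3770.02 / 1218651    # db file scattered read

print(f"direct path read:        {direct_path:.4f} s per I/O")
print(f"db file sequential read: {single_block:.5f} s per I/O")
print(f"db file scattered read:  {multi_block:.4f} s per I/O")

# Headline comparison of the two long runs: the scattered-read run finished
# faster overall despite burning far more CPU (~143 s vs ~11 s).
elapsed_direct, elapsed_scattered = 6519.90, 4298.88
saved = (1 - elapsed_scattered / elapsed_direct) * 100
print(f"elapsed time saved by scattered read: {saved:.0f}%")
```

Note the asymmetry this makes visible: per multi-block I/O the scattered reads were cheaper here, which is consistent with the buffer-cache/read-ahead theory, but the extra CPU cost of going through the cache is substantial.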
Regards,
Mich -
Lion: error trying to delete a partition with Disk Utility
Hi to all. Today I tried to delete an old, unused partition with Disk Utility, but after starting the process an error popped up saying: "File system resize support is required, such as HFS+ with journaling enabled."
What should I do? I've created a CD with iPartition, and it works, but I prefer to use Disk Utility. Can anyone help?
Select the partition in Disk Utility and see if you can turn on journaling, if it's an HFS+ volume. How is the drive presently partitioned: APM, GUID, or MBR?
How old is this partition and how was it created? -
Well, both the token and the smart card reader go undetected on OS X 10.9 unless they are plugged into a USB port during system boot. So if I'm already working in the system and need to use my certificates, I have to plug the token or smart card reader into a USB port and restart Mavericks.
The token is a GD Starsign and the smart card reader is an SCR3310 v2.
Thoughts?
SCS is a very good app, since I've read that Apple has discontinued support for PC/SC interfaces after the release of Mountain Lion.
(My previous installation was a Mavericks upgrade from Lion)
However, I don't know what and how to debug using Smart Card Services. Do you know any commands to use?
Apparently, the SC reader reports no issues: the LED blinks blue when no smart card is present and turns solid blue when a smart card is inserted – according to the manuals, this indicates correct communication between the OS and the CCID reader.
I don't know what to do; I'm beginning to suspect it's a digital-signer issue. In fact, my smart card only supports one application, File Protector (by Actalis), for officially signing digital documents. This application seems to have major difficulty identifying the miniLector EVO.
The generic and ambiguous internal error appears when I try to identify the peripheral manually.
Athena CNS is one of the Italian smart cards and is automatically recognized and configured (so it's correct – no doubts about this), while "ACS ACR 38U-CCID 00 00" seems to be the real name of the miniLector.
(I'm assuming this because System Information also returns that the real manufacturer is ACS... bit4id is a re-brander)
However, when I click on it and then tap OK, it returns internal error.
As a first attempt, I would completely erase and clean out the File Protector files and try a reinstall. Then, if that still doesn't work, I'd debug from the terminal.
So:
- Do you know any applications to 100% clean files created by an installer?
- Do you have in mind any solutions that I might have forgotten?
Thanks in advance from an OS X fan! -
Access to smart card reader on Win 8.1 RDP Host
Hi,
I have a customer that has a couple of Windows 8.1 Pro computers, that has a smart card reader in the local keyboard.
Until a few months ago, they could RDP to the desktop computer from a RDP client such as another Windows PC, a Mac or a mobile device.
The problem is now that when accessing the desktop computer (with the smart card reader keyboard) from a RDP client, the smart card reader is not available in the RDP session anymore. This prevents them from logging on to an application in the network that
requires their smart card.
Can someone perhaps point me in a direction where this can be solved, either with the MS RDP host or with some 3rd party RDP host applications?
(Teamviewer or similar remote support applications works, but that is not what the customer want...)
Since it worked like a charm up until 2-3 months ago, there must have been some update to Win 8.1 that prevents this by default?
Thanks in advance,
/Mikael Forslund
Hi Mikael Forslund,
I am assuming you attempted to use a smart card reader connected directly to the Remote Desktop host. An RDP session redirects smart card readers from the client side and will not see readers connected to the host side, because exposing host-side security devices such as smart card readers to a remote session would be highly insecure; blocking them is by design.
We suggest using a smart card reader on the local RDP client for your issue.
“The reverse is also true; if you RDP into a session from the start you will never see any local smartcard readers as Winscard will detect it’s running in an RDP session and no calls to Winscard will ever reach the local PC/SC layer –
everything will be redirected to the connecting client.”
Quote from this TechNet article
http://blogs.technet.com/b/instan/archive/2011/03/27/why-can-t-i-see-my-local-smartcard-readers-when-i-connect-via-rdp.aspx
A similar case has been posted; for your reference:
https://social.technet.microsoft.com/Forums/windowsserver/en-US/47972083-b9bd-49fd-8708-b296af81bda3/usb-smart-card-reader-and-smart-card-connected-directly-to-remote-desktop-server?forum=winserverTS
Regards
D. Wu