ASM: determine total bytes written/read & IOPS
Hello
My Environment:
Oracle 11.2.0.3 EE
SuSE Enterprise Linux 11 SP1, 64 Bit
ASM Diskgroups on RAW Luns (2 per group) in Hitachi Disk Storage
DB-Server: HP DL380 G7
I would like to switch the ASM disks from spinning disk to SSDs, direct-attached via an HP DS2700 enclosure:
Diskgroup +DATA
5x400GB Enterprise SSD, RAID 5, 1.6TB raw capacity
used for Redo, Datafiles
Diskgroup +ARCH
4x400GB Enterprise SSD, RAID 0+1, 800GB raw capacity
used for Redo, Archives, Flash Recovery Area
As far as I understand SSD technology, SSDs have a limited lifetime, meaning there is a guaranteed amount of data that can be written to them before they wear out. My aim is to engineer a system that avoids the failure mode of two worn-out SSDs in the same RAID group at the same time.
How can I determine the total bytes written/read to the ASM diskgroups so far?
My idea was to determine the ASM block size (select block_size from v$asm_diskgroup) and then look at the Reads and Writes columns of the iostat command in asmcmd. I assume these values are cumulative since the last ASM startup.
Is there a way to determine IOPS per diskgroup?
Thanks
scsi
Doing the calculation for question 1 gives me a strange result:
sys@+ASM> select block_size from v$asm_diskgroup;
BLOCK_SIZE
4096
4096
sys@+ASM> select STARTUP_TIME from v$instance;
STARTUP_T
23-MAR-13
ASMCMD> iostat
Group_Name Dsk_Name Reads Writes
ARCH ARCH_0000 56794017285632 39811971420672
ARCH ARCH_0001 50857383503360 38789309743616
DATA DATA_0000 80190065973760 42021664440320
DATA DATA_0001 80085260539392 42192811246080
Total Reads x Block Size = Total amount of data read
(56794017285632 + 50857383503360) x 4096 Bytes = 4.409401376317112e+17 Bytes ≈ 401,032.72 tebibytes (about 391.6 pebibytes) — clearly implausible, which suggests the Reads/Writes columns may already be reported in bytes rather than blocks.
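As a sanity check, the arithmetic above can be reproduced in a few lines. The figures are the ARCH Reads values from the iostat output above; the "bytes vs. blocks" interpretation is an assumption being tested, not a documented fact:

```java
// SanityCheck: reproduce the conversion above and compare the two
// possible interpretations of the asmcmd iostat Reads column.
public class SanityCheck {
    public static void main(String[] args) {
        long reads0 = 56794017285632L;   // ARCH_0000 Reads from asmcmd iostat
        long reads1 = 50857383503360L;   // ARCH_0001 Reads
        long blockSize = 4096L;          // v$asm_diskgroup.block_size

        // If Reads counted blocks, multiplying by the block size gives:
        double asBlocks = (reads0 + reads1) * (double) blockSize;
        double pib = asBlocks / Math.pow(2, 50);
        System.out.printf("If Reads were blocks: %.1f PiB (implausible)%n", pib);

        // If Reads is already in bytes, the total is far more believable:
        double asBytesTiB = (reads0 + reads1) / Math.pow(2, 40);
        System.out.printf("If Reads are bytes:   %.1f TiB (plausible)%n", asBytesTiB);
    }
}
```

Roughly 392 PiB of reads since a March startup is not physically possible on two LUNs, while about 98 TiB is, which is why the bytes interpretation looks more likely.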
Similar Messages
-
How to get the total bytes read by Windows Media Player?
I am using the WMP ActiveX control to play an AVI-format file, and CVI's ActiveX tools to generate the WMP control. It can play. I'd like to know which method gets the number of bytes read by WMP?
Thanks.
user13019948 wrote:
Hi,
I have a tkprof file. How can I get the total execution time.
call count cpu elapsed disk query current rows
total 1406 0.31 4.85 350 16955 0 521
TOTAL ELAPSED TIME is 4.85 seconds, from the line above. -
9582.69043 gigabytes of physical read total bytes and increasing!
In EM
Database Instance: PROD > Top Activity > I got the following:
physical read total bytes 62763565056 10289335500800 4183122176
cell physical IO interconnect bytes 62763565056 10289335500800 4183122176
physical read bytes 62763565056 10289335500800 4183122176
And the session is running following update procedure:
declare
FM_BBBB MT.BBBB_CODE%TYPE;
l_start NUMBER;
cursor code_upd is select /*+ parallel(FM_KWT_POP_BBBB_MISMATCH, 10) */ DDD_CID, DDD_BBBB, CCCC_BBBB from MT_MISMATCH;
begin
-- Time regular updates.
l_start := DBMS_UTILITY.get_time;
FOR rec IN code_upd LOOP
update /*+ parallel(MT, 10) nologging */ MT
set BBBB_code = rec.CCCC_BBBB
where source= 0
and cid_no = rec.DDD_CID
and BBBB_code = rec.DDD_BBBB;
commit;
END LOOP;
DBMS_OUTPUT.put_line('Bulk Updates : ' || (DBMS_UTILITY.get_time - l_start));
end;
There are 9.5 million records in MT, but source=0 covers only 3 million of them, and there are 376K records in MT_MISMATCH. What I don't understand is why this takes so much time and reads so many bytes. Both tables were analyzed before running this procedure.
Can someone shed some light on this? Is there any better way of doing the same job?
Lots of badness going on here.
1) looping / procedural code where none is needed.
2) commit within the loop, one of the worst evils of all in Oracle. Please read this
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2680799800346456179
I'd look into rewriting this as a single SQL statement (maybe a MERGE). Or at worst, a bulk process utilizing collections and FORALL. -
How to determine total clicks in an EOS 5D
How do you determine the total clicks in an EOS 5D?
The Canon EOS-1D X comes with an actuation counter but other than that, I do not know of an accurate way of getting a 'real' count but you can read my Blog post on this topic and download a third party software to estimate your camera's actuation count.
http://blog.michaeldanielho.com/2012/08/canon-dslr-camera-shutter-actuation.html
http://MichaelDanielHo.com -
CharBuffer view on ByteBuffer and No Bytes Written to SocketChannel
Hi,
I've actually got two problems that might be connected. I'm new to the java.nio.* package. I wanted to try SocketChannel's to see if I could improve performance.
If this isn't the appropriate place for java.nio questions, just let me know.
My first problem is that I create a ByteBuffer by allocating x number of bytes. These bytes are the length of the message I want to send to the server. Then, I attempt to get a CharBuffer view of the ByteBuffer (buffer.asCharBuffer). For some reason, it only returns a CharBuffer with half the capacity of the ByteBuffer. So of course, when I stuff my String into the CharBuffer, it doesn't fit.
Well, I hack that and make the ByteBuffer twice as big. Which brings me to problem two, my SocketChannel does not write any bytes to the server when told to.
Here's the code (with hack):
ByteBuffer buf;
CharBuffer cbuf;
SocketChannel sockChan;
try {
int msgLength = message.length();
logger.info("Message length=" + msgLength);
logger.info("message length in bytes=" + message.getBytes().length);
buf = ByteBuffer.allocateDirect(msgLength*2);
logger.info("position=" + buf.position());
logger.info("capacity=" + buf.capacity());
logger.info("limit=" + buf.limit());
cbuf = buf.asCharBuffer();
logger.info("capacity of cbuf=" + cbuf.capacity());
cbuf.put(message);
buf.flip();
sockChan = SocketChannel.open();
sockChan.configureBlocking(true);
sockChan.socket().setSoTimeout(TIMEOUT_MS);
logger.info("socket configured");
sockChan.connect(new InetSocketAddress(ipAddress, portNumber));
int numBytesWritten = sockChan.write(buf);
logger.info("connected and wrote message. NumBytes writen=" + numBytesWritten);
if (numBytesWritten != msgLength) {
//throw error
logger.error("The number of bytes written do not match the " +
"message length (in bytes).");
}
} catch (IOException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
And the console outputs the following:
[Dec 13, 11:46:17] INFO - Message length=50
[Dec 13, 11:46:17] INFO - message length in bytes=50
[Dec 13, 11:46:17] INFO - position=0
[Dec 13, 11:46:17] INFO - capacity=100
[Dec 13, 11:46:17] INFO - limit=100
[Dec 13, 11:46:17] INFO - capacity of cbuf=50
[Dec 13, 11:46:17] INFO - socket configured
[Dec 13, 11:46:17] INFO - connected and wrote message. NumBytes writen=0
[Dec 13, 11:46:17] ERROR - The number of bytes written do not match the message length (in bytes).
My batch program freezes at this point. Don't know why it does that either.
Thanks for any help,
CowKing
ByteBuffer (buffer.asCharBuffer). For some reason, it only returns a CharBuffer with half the capacity of the ByteBuffer.
The reason is simply that chars are twice as big as bytes, so you can only get half as many of them into the same space. The capacity of a ByteBuffer is measured in bytes. The capacity of a CharBuffer is measured in chars. The capacity of a DoubleBuffer is measured in doubles.
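A minimal sketch of this capacity relationship, together with the manual position fix that the view-buffer behavior requires (the message string here is a stand-in for the original poster's data):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;

// A CharBuffer view has half the capacity of its ByteBuffer (2 bytes per
// char), and writing through the view does NOT advance the ByteBuffer's
// own position; that must be propagated manually before flipping.
public class ViewBufferDemo {
    public static void main(String[] args) {
        String message = "hello";
        ByteBuffer buf = ByteBuffer.allocate(message.length() * 2); // 2 bytes/char
        CharBuffer cbuf = buf.asCharBuffer();   // view: own position and limit

        System.out.println(cbuf.capacity());    // prints 5 (half of 10)

        cbuf.put(message);                      // fills the view only
        System.out.println(buf.position());     // prints 0 - still untouched!

        buf.position(cbuf.position() * 2);      // propagate position manually
        buf.flip();                             // limit = 10, position = 0
        System.out.println(buf.remaining());    // prints 10 - ready for write(buf)
    }
}
```

Without the manual `buf.position(...)` step, `flip()` sets the limit to 0, which is exactly why the original code wrote 0 bytes to the SocketChannel.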
Well, I hack that and make the ByteBuffer twice as
big. Which brings me to problem two, my SocketChannel
does not write any bytes to the server when told to.As it says in the Javadoc for ByteBuffer, a view buffer has its own position, limit, and mark. When you put data into it the data goes 'through' into the underlying ByteBuffer but the revised position/limit do not. You have to do that yourself manually, remembering to multiply by two as above to account for the difference widths of chars and bytes. -
Revision: 14462
Author: [email protected]
Date: 2010-02-26 14:11:20 -0800 (Fri, 26 Feb 2010)
Log Message:
Quick fix to ensure total byte counts are accurate for -size-report.
QE notes: None
Doc notes: None
Bugs: SDK-25600
Reviewer: Paul
Tests run: Checkin
Is noteworthy for integration: No
Ticket Links:
http://bugs.adobe.com/jira/browse/SDK-25600
Modified Paths:
flex/sdk/trunk/modules/swfutils/src/java/flash/swf/tools/SizeReport.java
If this is textual input (as in a CSV file), you wouldn't use a DataInputStream or DataOutputStream. Use FileReader and FileWriter objects instead.
Actually, unless this is a homework assignment that tells you to write it yourself, you should just use a library that can parse and format CSV files and just use that.
My battle plan is to count the number of rows and commas. I would then use these counts in my subsequent loops.
Why? What is the purpose of those counts?
Personally, I'd suggest that you start by writing a method that does the changes you need on a single line of input. (The changes you describe sound like they can all happen on a per-line basis.) Write lots of unit tests to try out various inputs and that you get the correct outputs, given the changes you say you need on a given line.
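A sketch of that per-line approach. The thread never states the actual transformation, so trimming whitespace around each comma-separated field stands in as a hypothetical example:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

// One small, easily unit-tested method that fixes a single line; the
// file-reading loop is written separately and just calls this.
public class LineFixer {
    static String fixLine(String line) {
        return Arrays.stream(line.split(",", -1))   // -1 keeps trailing empty fields
                     .map(String::trim)             // hypothetical per-field change
                     .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(fixLine(" a , b ,c "));  // prints "a,b,c"
    }
}
```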
When you're done with that and have it working perfectly, then you can write the code to read a file, loop through it, call the method that fixes a single line, and then output that line. -
Bytes not read from DataInputStream
I want to read a binary file and translate the data to ascii. In order to do so, I've decided to let one method take care of the opening and reading the data and this is where the problem is; no data is read from the binary file. When the for-loop is executed, the index is printed on the screen and since the number is increasing, it cannot be the for-loop that is malfunctioning. However, the data that should be read, should also be displayed, but that does not happen. Can someone tell me what is wrong with my code?
Thanks
Simon
public void readFromBinary() throws FileNotFoundException, IOException
//open stream
FileInputStream fInputStream = new FileInputStream(inFile);
int lengthInFile = fInputStream.available();
inStream = new DataInputStream(fInputStream);
byte[] values = new byte[lengthInFile];
//read data
for (int i = 0; i < lengthInFile; i++) {
System.out.println("i = " + i);
values[i] = inStream.readByte();
String str = new String(values);
System.out.println(str);
}
System.out.println("readFromBinary: Data read, length is " + values.length);
//close stream
fInputStream.close();
values[ i ] = inStream.readByte();
You've only read one (more) of the bytes; the other values in the array are still zeros.
String str = new String(values);
System.out.println(str);
This doesn't make sense, creating a string out of the array which just keeps adding one more byte every time through the loop.
Why not just read the thing as one operation?
inStream.read(values);
Actually meant:
fInputStream.read(values);
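One caveat on "read the thing as one operation": a single `read(values)` call is not guaranteed to fill the array. A loop over offsets is the safe version of that suggestion (sketch; a ByteArrayInputStream stands in for the real file):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Read exactly 'length' bytes, looping because read() may return fewer
// bytes than requested in a single call.
public class ReadFully {
    static byte[] readFully(InputStream in, int length) throws IOException {
        byte[] values = new byte[length];
        int off = 0;
        while (off < length) {
            int n = in.read(values, off, length - off);
            if (n < 0) throw new IOException("unexpected end of stream");
            off += n;
        }
        return values;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "binary payload".getBytes();
        InputStream in = new ByteArrayInputStream(data);
        System.out.println(new String(readFully(in, data.length))); // prints "binary payload"
    }
}
```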
You shouldn't be using DataInputStream on a stream that wasn't created by a DataOutputStream. But then I don't know what this file is you're messing with. -
Hi,
I have a TCP connection between 2 computers. Is there any way to read the data without specifying the number of bytes to read? Because I don't know the number of bytes beforehand.
Hi,
If you go to "help" > "find examples" you'll find a couple of example VIs called "TCP - Communicator ..." I think you can get inspiration from these.
For instance, in "TCP - Communicator - active" you can see that a constant "512" is linked to "bytes to read", and apparently if there is less data at the specified port, only the available data will be read, so you can just count the characters you received.
Hope this will help you
When my feet touch the ground each morning the devil thinks "bloody hell... He's up again!" -
Hi
I receive with a DataInputStream with:
Byte b = new Byte((byte)in.read());
If I send bytes < 0x79 or > 0xa0 all works; otherwise I obtain, e.g.:
0x80 = -84 (right)
0x81 = -3
0x82 = 26
Why???
On the sender side it seems ok; in the dump file the bytes are ok. This is the sender side (dataSend is a LinkedList):
for (Iterator iter = dataSend.iterator(); iter.hasNext();) {
Byte b = (Byte)iter.next();
bytePark = b.byteValue();
dataOut.write(bytePark);
// then I dump bytePark on file
receiver side:
BufferedReader in = new BufferedReader(
new InputStreamReader(client.getInputStream()));
Byte e = new Byte((byte)in.read());
if (e.byteValue() == (byte)0xf3) { // I need analize when receive 0xf3
// here e.byteValue() has wrong behavior -
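A likely explanation (an assumption, since the thread has no accepted answer): the receiver wraps the socket stream in an InputStreamReader, which decodes *characters* using the platform charset; bytes in the 0x80–0x9F range are not valid single-byte characters in many charsets and come back corrupted. Reading the raw InputStream preserves every byte value. A sketch, with a byte array standing in for the socket stream:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Read raw bytes instead of decoded characters: InputStream.read() returns
// each byte as an int in 0..255 (or -1 at EOF), with no charset involved.
public class RawBytes {
    public static void main(String[] args) throws IOException {
        byte[] sent = { (byte) 0x80, (byte) 0x81, (byte) 0xf3 };
        InputStream in = new ByteArrayInputStream(sent);

        int v;
        while ((v = in.read()) != -1) {
            byte b = (byte) v;                   // back to signed byte: 0x80 -> -128
            System.out.printf("0x%02x = %d%n", v, b);
        }
        // Marker-byte comparison without sign surprises:
        //   if (b == (byte) 0xf3) { ... }
    }
}
```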
If I am doing a serial read for a specific instrument, how do I know how many bytes are coming in.
The instrument is transfering data once every second.
The data is 20 lines total, each line ending with "\r\n".
Is there a way I can read in each line by successively reading a certain number of bytes?
Cory K
I'm with Ravens fan here: use a byte termination character.
After reading the port you will have exactly 1 (one) message. Then look at how many more bytes are available at the port. If the number is big enough, do another read.
Make sure you keep firing read events, otherwise your buffer will overflow (by default at 4k).
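The termination-character framing suggested above can be sketched as follows (in Java, since the thread itself has no code; a byte stream stands in for the serial port):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Read one message by consuming bytes until the "\r\n" terminator,
// instead of guessing a fixed byte count in advance.
public class ReadLineFramed {
    static String readMessage(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int prev = -1, c;
        while ((c = in.read()) != -1) {
            if (prev == '\r' && c == '\n') {
                sb.setLength(sb.length() - 1);   // drop the trailing '\r'
                return sb.toString();
            }
            sb.append((char) c);
            prev = c;
        }
        return sb.toString();                    // EOF without terminator
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("line1\r\nline2\r\n".getBytes());
        System.out.println(readMessage(in)); // prints "line1"
        System.out.println(readMessage(in)); // prints "line2"
    }
}
```

Calling this 20 times per update would collect all 20 lines of the instrument's one-second burst, one line per call.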
Ton
Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
Nederlandse LabVIEW user groep www.lvug.nl
My LabVIEW Ideas
LabVIEW, programming like it should be! -
Problem rendering certain bytes when reading binary file
I have a two part problem. I am trying to read files of any type from a client and transfer them over a pipe to a UNIX host running a C API. I have this process working for text data just fine however when I try and submit binary data there appears to be some data loss. Most of the file appears to end up on the host but it just isn't as long as the source, nor will it render correctly.
In investigating this problem I tried to simply output a temporary copy of the transferred file on the client as I was reading the file just to see if my "thinking/process" was correct. The copy of the file ends up the same length but some bytes seem to have been misread. Upon doing a windiff on the source and copy it appears that several characters that are rendered as blocks in the original show up as '?' in the destination file.
I believe this is an entirely different problem from why I am losing data on the host side, but I want to first figure out why this problem is occurring. The code below is how I am reading and writing the binary file. I realize it has some problems; it is more of a POC at this point.
final int BUF_SIZE = 1000; // 1K
char[] cBuffer = new char[BUF_SIZE];
byte[] bBuffer = new byte[BUF_SIZE];
int read = BUF_SIZE;
long length = fLocalFile.length();
FileInputStream fis = new FileInputStream(fLocalFile);
DataInputStream dis = new DataInputStream(fis);
FileOutputStream fos = new FileOutputStream("C:\\temp.file", false);
DataOutputStream dos = new DataOutputStream(fos);
for (int start = 0; start < length && reply.getSuccess(); start += read) {
System.out.println("length: " + length + " start: " + start);
read = dis.read(bBuffer, 0, BUF_SIZE);
// Send the file data
String sTemp = sDestName + ":" + new String(bBuffer,0,read);
dos.write(bBuffer,0,read);
reply = axBridge.execute (Commands.CMD_FILE_TRANSFER_SEND, sTemp);
dos.close();
}
It seems as if when reading or writing on the data streams some of the characters aren't getting converted correctly. Can anyone help? I've been testing with a PDF, if that sheds any light.
Yes, but you ARE converting to a String first, which you then send to the axBridge (sTemp!). Try just sending the bytes. You can easily pre-pend the "<filename>:" by sending those first.
I know that some conversions occur when converting to a String, what they are exactly and what the exact effects are escapes me. Past experience though has taught me to ALWAYS send bytes, with no conversions, what you read is what you send.
You may need to modify the send/receive protocol so that you send the command first with the filename then the bytes are sent after...
As for why the file is not being written correctly to: c:\\temp.file, don't know... try the following code, it tends to be one of the "standard" ways of "streaming" data...
byte buf[] = new byte[bufSize];
int bRead = -1;
while ((bRead = in.read(buf)) != -1) {
out.write(buf, 0, bRead);
}
And try just using a FileOutputStream or wrapping it in a BufferedOutputStream. -
Group Policy for "let printer determine color" in Adobe Reader X
I need to make a group policy to activate "let printer determine color" for a huge group of users/PCs, as Reader prints the wrong colors now.
It only needs to activate this single thing. The PCs are running Adobe Reader 10.1.0. Can anyone please help me?
Do you have the Enterprise Deployment documentation for Acrobat/Reader?
-
External Table - possible bug related to record size and total bytes in fil
I have an External Table defined with fixed record size, using Oracle 10.2.0.2.0 on HP/UX. At 279 byte records (1 or more fields, doesn't seem to matter), it can read almost 5M bytes in the file (17,421 records to be exact). At 280 byte records, it can not, but blows up with "partial record at end of file" - which is nonsense. It can read up to 3744 records, just below 1,048,320 bytes (1M bytes). 1 record over that, it blows up.
Now, If I add READSIZE and set it to 1.5M, then it works. I found this extends further, for instance 280 recsize with READSIZE 1.5M will work for a while but blows up on 39M bytes in the file (I didn't bother figuring exactly where it stops working in this case). Increasing READSIZE to 5M works again, for 78M bytes in file. But change the definition to have 560 byte records and it blows up. Decrease the file size to 39M bytes and it still won't work with 560 byte records.
Anyone have any explanation for this behavior? The docs say READSIZE is the read buffer, but only mentions that it is important to the largest record that can be processed - mine are only 280/560 bytes. My table definition is practically taken right out of the example in the docs for fixed length records (change the fields, sizes, names and it is identical - all clauses the same).
We are going to be using these external tables a lot, and need them to be reliable, so increasing READSIZE to the largest value I can doesn't make me comfortable, since I can't be sure in production how large an input file may become.
Should I report this as a bug to Oracle, or am I missing something?
Thanks,
Bob
Hex 00 in every other byte while reading a unix file
Hi, I am having trouble reading certain files on the unix server. When I read one of them, I am finding that hex 00 have been inserted in between the characters (e.g. if # represents 00 then "byte" appears as "b#y#t#e#")
I do not have this problem with other files in the same directory.
When I login to unix and look at these files, they all look fine.
Can anyone tell me what might be causing this and what I need to do to fix this?
Thanks,
Jim Hoffman
OPEN DATASET i_dsn IN LEGACY TEXT MODE FOR INPUT
For this statement, add ENCODING DEFAULT.
Reward if useful
Naveen
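The "hex 00 in every other byte" pattern described above is characteristic of UTF-16 little-endian text being read as if it were a single-byte encoding, which is consistent with the encoding fix suggested. A quick demonstration (sketch, in Java rather than ABAP):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// "byte" encoded as UTF-16LE interleaves each character's low byte with
// 0x00 - exactly the "b#y#t#e#" pattern seen in the question.
public class Utf16Demo {
    public static void main(String[] args) {
        byte[] bytes = "byte".getBytes(StandardCharsets.UTF_16LE);
        System.out.println(Arrays.toString(bytes)); // prints [98, 0, 121, 0, 116, 0, 101, 0]
    }
}
```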
JDBC to Lotus Notes, only 255 bytes can be read for each field. Any clue?
Hi,
I am programming with JDBC to Lotus Notes. I can extract data from .nsf files but for each field, only 255 bytes can be read, all the rest got truncated. I ran the ResultSetMetaData.DisplayColumnSize(), it returns 255.
What is the problem? How can I extract larger strings (such as 700 bytes)? I tried the blob data type; it doesn't help.
In the API docs, there is a RowSetMetaData that has a SetColumnSize() method, but I can not find an implemented package of it, also not sure it will work. Any suggestion will be deeply appreciated!
Thanks!
William
You must declare the field as Rich Text, not only Text, because the method getString only gets 255... although your field has more characters...
If you declare Rich Text you must use the getAsciiStream method...
Suppose field 2 is Rich Text...
byte[] buffer = new byte[4096];
int size;
InputStream strin = rs.getAsciiStream(2);
if (strin == null) {
System.out.println("notas es null\n");
} else {
for (;;) {
try {
size = strin.read(buffer);
if (size <= 0) {
break;
}
System.out.print(new String(buffer, 0, size));
} catch (java.io.IOException e) {
e.printStackTrace();
}
}
}