Cell multiblock physical read on Exadata system
A DELETE is taking forever and the session is waiting on 'cell multiblock physical read' on an Exadata system.
delete from ept.prc_rules_ref where end_dt < ( select min(cycle_dt) from ( select distinct cycle_dt from ept.prc order by 1 desc ) cd where rownum < 4 ) ;
whereas the SELECT alone runs in a few seconds.
This is the explain plan of the DELETE:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Pstart| Pstop |
| 0 | DELETE STATEMENT | | 1454 | 43620 | 95432 (3)| | |
| 1 | DELETE | PRC_RULES_REF | | | | | |
| 2 | TABLE ACCESS STORAGE FULL | PRC_RULES_REF | 1454 | 43620 | 273 (1)| | |
| 3 | SORT AGGREGATE | | 1 | 9 | | | |
| 4 | COUNT STOPKEY | | | | | | |
| 5 | VIEW | | 3 | 27 | 95159 (3)| | |
| 6 | SORT UNIQUE STOPKEY | | 3 | 24 | 94120 (2)| | |
| 7 | PARTITION RANGE ALL | | 19M| 152M| 21356 (1)|1048575| 1 |
| 8 | INDEX STORAGE FAST FULL SCAN| PRC_2_IE | 19M| 152M| 21356 (1)|1048575| 1 |
When I check the explain plan of the SELECT part, this is what we get:
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 9 | 95159 (3)| | |
| 1 | SORT AGGREGATE | | 1 | 9 | | | |
| 2 | COUNT STOPKEY | | | | | | |
| 3 | PARTITION RANGE ALL | | 3 | 27 | 95159 (3)|1048575| 1 |
| 4 | VIEW | | 3 | 27 | 95159 (3)| | |
| 5 | SORT UNIQUE STOPKEY | | 3 | 24 | 94120 (2)| | |
| 6 | INDEX STORAGE FAST FULL SCAN| PRC_2_IE | 19M| 152M| 21356 (1)|1048575| 1 |
SQL Statement
delete from ept.prc_rules_ref where end_dt < ( select min(cycle_dt) from ( select distinct cycle_dt from ept.prc order by 1 desc ) cd where rownum < 4 )
Event Wait Information
SID 564 is waiting on event : cell multiblock physical read
P1 Text : cellhash#
P1 Value : 398250101
P2 Text : diskhash#
P2 Value : 1099358214
P3 Text : bytes
P3 Value : 729088
Any pointers on why it is not going for a smart scan? The table is not huge either; it has only 27,000+ records.
Taking Exadata out of the picture, DELETEs have a lot more overhead than SELECTs (an extra level of locking, index maintenance, buffer cache operations, redo/undo generation, etc.). From the DELETE's explain plan, it is going for a full table scan (TABLE ACCESS STORAGE FULL), which is good and expected on Exadata. Note also that smart scans happen only for direct-path reads; a full scan performed as part of DML on a small table is typically done through the buffer cache, which is why you see 'cell multiblock physical read' rather than a smart scan wait.
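One way to confirm from the database side whether any smart scan occurred is to compare the session's offload statistics. A minimal sketch, assuming the SID 564 from the wait information above and the standard Exadata statistic names in V$STATNAME:

```sql
-- Bytes eligible for offload vs. bytes actually returned by smart scan.
-- If 'eligible' stays at 0, the reads were buffered (no direct path read),
-- which is the usual reason DML full scans do not smart scan.
SELECT sn.name, st.value
FROM   v$sesstat  st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  st.sid = 564  -- SID taken from the wait information above
AND    sn.name IN (
         'cell physical IO bytes eligible for predicate offload',
         'cell physical IO interconnect bytes returned by smart scan');
```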
Similar Messages
-
What causes BUFFER GETS and PHYSICAL READS to be high in an INSERT operation?
Hi All,
I am performing a huge number of INSERTs into a newly installed Oracle XE 10.2.0.1.0 on Windows. There is no SELECT statement running, just INSERTs one after the other, 550,000 in total. When I monitor the Session I/O from Home > Administration > Database Monitor > Sessions, I see the following stats:
BUFFER GETS = 1,550,560
CONSISTENT GETS = 512,036
PHYSICAL READS = 3,834
BLOCK CHANGES = 1,034,232
The presence of these two stats confuses me. Though this session is only doing INSERTs, why should there be BUFFER GETS of this magnitude, and why should there be PHYSICAL READS? Aren't these counters for read operations? The BLOCK CHANGES value is clear, as there are huge writes and the writes change that many blocks. Can any kind soul explain to me what causes these parameters to show high values?
The total columns in the display table are as follows (from the link mentioned above)
1. Status
2. SID
3. Database Users
4. Command
5. Time
6. Block Gets
7. Consistent Gets
8. Physical Reads
9. Block Changes
10. Consistent Changes
What do CONSISTENT GETS and CONSISTENT CHANGES mean in a typical INSERT operation? And does someone know which tables are involved in producing these values?
Thanks,
...Flake wrote:
Hans, gracias.
The table has just 2 columns, both VARCHAR2(500). No constraints, no indexes, and no foreign key references are in place. The total RAM in the system is 1GB, and yes, there are other GUIs running, like the Firefox browser, notepad, and command terminals.
But what do these other applications have to do with Oracle BUFFER GETS, PHYSICAL READS, etc.? Awaiting your reply.

Total RAM is 1GB. If you let XE decide how much RAM is to be allocated to buffers, on startup that needs to be shared with any/all other applications. Let's say that leaves us with, say, 400M for the SGA + PGA.
PGA is used for internal stuff, such as sorting, which is also used in determining the layout of secondary facets such as indexes and uniqueness. Total PGA usage varies in size based on the number of connections and required operations.
And then there's the SGA. That needs to cover the space requirement for the data dictionary, any/all stored procedures and SQL statements being run, user security and so on. As well as the buffer blocks which represent the tablespace of the database. Since it is rare that the entire tablespace will fit into memory, stuff needs to be swapped in and out.
So - put too much space pressure on the poor operating system before starting the database, and the SGA may be squeezed. Put that space pressure on the system and you may end up with swapping or paging.
This is one of the reasons Oracle professionals will argue for dedicated machines to handle Oracle software. -
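For the original question about where the Database Monitor numbers come from: the Session I/O columns listed above (block gets, consistent gets, physical reads, block changes, consistent changes) map to V$SESS_IO, joined to V$SESSION for the SID and username. A minimal sketch:

```sql
-- Reproduce the Home > Administration > Database Monitor > Sessions
-- I/O columns from the dictionary views directly.
SELECT s.sid, s.username,
       i.block_gets, i.consistent_gets, i.physical_reads,
       i.block_changes, i.consistent_changes
FROM   v$session s
JOIN   v$sess_io i ON i.sid = s.sid
WHERE  s.username IS NOT NULL;
```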
Hint or parameter to force physical read
I am using Oracle 11.2.0.3. I have a query which took 45 minutes the first time and takes 4 minutes in subsequent runs in the QC environment. In both cases it uses the same plan. If I try the query again after a few days, the first run again takes a considerable amount of time. Most of the wait is in a range index scan - 'db file parallel read'.
The same query runs within 2 minutes in a lower environment with a different plan. I have used a hint to make the plan the same in the QC environment. Now the query runs as expected, but I suspect it might slow down if the data is not cached. I do not have the ALTER SYSTEM FLUSH BUFFER_CACHE privilege.
Is there any hint or parameter I can use to force physical reads?
Which view will tell me if a table is still cached in memory?

spur230 wrote:
<snip>
Which view will tell me if a table is still cached in memory?
v$bh will tell you what blocks of an object are cached:
orcla> select file#,block#,status from v$bh where objd=(select data_object_id
2 from dba_objects where owner='SCOTT' and objecT_name='DEPT');
no rows selected
orcla> select * from scott.dept;
DEPTNO DNAME LOC
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
orcla> select file#,block#,status from v$bh where objd=(select data_object_id
2 from dba_objects where owner='SCOTT' and objecT_name='DEPT');
FILE# BLOCK# STATUS
4 131 xcur
4 134 xcur
4 132 xcur
4 135 xcur
4 130 xcur
4 133 xcur
6 rows selected.
orcla>
but you do need to be aware of the status. There may be several versions of a block cached. -
High user input output (I/O) and physical reads
Hi guys
Recently we have noticed that our database performance degraded significantly. As we looked around in Grid Control, we noticed that the user I/O and physical reads are really high, which causes the database to lag.
At first we thought it was a memory problem, so we added 2GB to the SGA. We have set the SGA to be automatic and it's been like this for the past 2 years.
How do I troubleshoot this problem? I haven't found any troubleshooting guide for it yet.
Please instruct me on how to solve it.
database: 10.2.0.4 sparc
regards
Sina

Top 5 Timed Events
Event                     Waits  Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
db file sequential read  23,582      679            29               46.3  User I/O
CPU time                             626                             42.7
db file scattered read    3,089       70            23                4.8  User I/O
db file parallel write    3,478       28             8                1.9  System I/O
log file parallel write   4,201       24             6                1.7  System I/O
SGA Memory Summary
SGA regions Begin Size (Bytes) End Size (Bytes) (if different)
Database Buffers 2,768,240,640 2,835,349,504
Fixed Size 2,050,240
Redo Buffers 14,721,024
Variable Size 3,657,439,040 3,590,330,176
Process Memory Summary
Category Alloc (MB) Used (MB) Avg Alloc (MB) Std Dev Alloc (MB) Max Alloc (MB) Hist Max Alloc (MB) Num Proc Num Alloc
B Other 569.22 0 2.62 3.14 22 24 217 217
Freeable 101.63 0.00 0.77 0.45 3 132 132
SQL 91.51 45.35 0.44 0.76 4 78 209 192
PL/SQL 12.50 5.57 0.06 0.07 0 4 217 217
JAVA 5.40 5.38 1.08 0.09 1 2 5 5
E Other 603.20 3.03 3.62 22 24 199 199
SQL 115.21 64.78 0.60 1.67 21 78 191 182
Freeable 112.31 0.00 0.83 0.43 3 135 135
PL/SQL 13.89 5.94 0.07 0.12 1 4 199 199
JAVA 8.93 8.89 1.12 0.23 2 2
for some reason i can't paste a table into this post but if you want i can email you above tables -
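Since 'db file sequential read' dominates the Top 5 list above, a useful next step on 10.2 is to find which segments are taking the physical reads. A sketch using V$SEGMENT_STATISTICS (the ROWNUM wrapper is needed because 10g has no FETCH FIRST syntax):

```sql
-- Top 10 segments by physical reads since instance startup.
SELECT *
FROM  (SELECT owner, object_name, object_type, value AS physical_reads
       FROM   v$segment_statistics
       WHERE  statistic_name = 'physical reads'
       ORDER  BY value DESC)
WHERE rownum <= 10;
```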
No physical reads, plenty of consistent gets
Hi All,
Oracle v11.2 on Linux.
Please have a look at the query I ran and the output. The SQL_ID is of a SELECT command.
What does this situation represent, where there are zero physical reads and plenty of consistent gets?
For consistent gets, we do read undo information (correct?). If that undo is read from disk, will that be a "physical read"? I.e., if we read disks for consistent gets, will that be counted under physical reads or not?
How can I describe the exact data retrieval of the command here? Is it the case that "everything it needs is found in the buffer cache"?
select a.sid, a.value , B.NAME , s.sql_id
from v$sesstat a, v$statname b, v$session s
where A.STATISTIC# = B.STATISTIC#
and b.name in ( 'redo size','physical read bytes','physical reads cache','consistent gets' )
and a.sid = s.sid
and a.sid=1018
order by a.sid;
SID VALUE NAME SQL_ID
1018 7281396 consistent gets 434u36htuz0s9
1018 0 physical reads cache 434u36htuz0s9
1018 0 physical read bytes 434u36htuz0s9
1018 4448 redo size 434u36htuz0s9
4 rows selected.
Thanks in advance.
>
There are no physical reads, so whether it's doing consistent gets or not, can I say all the data required for the SELECT was in the buffer cache?
>
The data for those system views is cached in memory so Oracle does not read the disk (except at startup) to gather the information.
Some static information, like dictionary objects, is stored on disk in the SYSTEM tablespace, but this data is read when the database is mounted and kept in memory structures.
Other dynamic information, like session info, is only stored in memory structures (similar to C arrays) and Oracle can query these as if they were tables.
So no, the data was not in the buffer cache. It was already in system memory. -
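The point that V$ views are built over in-memory X$ structures can be seen in the dictionary itself: V$FIXED_VIEW_DEFINITION shows the SQL each fixed view is defined with. For example:

```sql
-- The underlying definition of a fixed view (note it selects from X$
-- tables or other GV$ views, never from ordinary on-disk tables).
SELECT view_definition
FROM   v$fixed_view_definition
WHERE  view_name = 'GV$SESSTAT';
```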
Hi
My Oracle version is 10.2.0.4.
Is there any way to reduce physical reads apart from tuning the query and creating indexes?
Can I have my whole table arranged in blocks sequentially, one after the other, so that my search becomes simple?
Is there any option for that, like Coalesce / deallocate unused space / Compact?

littleboy wrote:
Is there any way to reduce physical reads apart from tuning the query and creating indexes?

Incorrectly phrased. By reducing PIO (physical I/O) you imply that you want to increase LIO (logical I/O), as this is faster and will thus increase performance.
That is not tuning. That is hacking of a terrible kind.
In fact, a high percentage of LIO is indicative of an application design problem.
The correctly phrased question is "<i>how to reduce I/O</i>?" - as less I/O means less work. And less work does not equate to only fewer PIOs. It means less I/O (of all kinds). Period.
Can I have my whole table arranged in blocks sequentially, one after the other, so that my search becomes simple?
Is there any option for that, like Coalesce / deallocate unused space / Compact?

From the questions you have asked over the past few days, I get the feeling that you are looking for magical silver-bullet solutions to performance. A knob to turn somewhere in Oracle, a switch to throw to enable some special behaviour.
That is not, and never was, performance tuning. Performance starts with the design of the system. It continues with the architecture used and implemented. And it remains with every single line of code written.
You do not pop Oracle's hood and rummage around in the engine, mucking about with the fuel injectors, to get it to go faster. You design the application to use Oracle correctly. You implement Oracle correctly. That is where performance starts. Not with popping the hood.
Messing with space management to make Oracle go faster? Messing with undocumented parameters? Changing process priorities? Supersizing this and that? That is a <b><font color="red">FAIL</font></b> as far as the correct software engineering approach to performance goes. -
Find physical reads and logical reads ?
Hi,
how will I find out physical reads and logical reads?

Well, I would suggest you read a Statspack/AWR report. They, as suggested by Amit, have a Load Profile section. That would be helpful for you in finding the details of this. Also, which version are you on? If you are on 10g, then EM is able to give you a Compare Periods report where you can compare two different days' periods and check which particular part has changed.
In addition to this, look for information on the metrics in the documentation. Physical reads and logical reads are statistics of what is happening in the system, so from 10g onwards Oracle keeps track of the deviation in the statistics. If you see that, then it will be easy for you to manage and monitor.
I shall try to find the names of some related views and post them.
Aman.... -
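To complement the Statspack/AWR suggestion above: the raw counters behind a Load Profile are plain statistics in V$SYSSTAT (instance-wide) and V$SESSTAT (per session). A minimal instance-wide query:

```sql
-- Cumulative counters since instance startup; a Load Profile is computed
-- from the delta between two snapshots of these values.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('physical reads', 'db block gets',
                'consistent gets', 'session logical reads');
```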
Logical read Vs Physical Reads
Hi,
I want to know how we should balance logical reads and physical reads on the database.
I generally assume logical reads are always better, but what if too many sessions accessing the same objects cause latch contention?
If it is a physical read, then it will take more time to read from disk than from the SGA.
If I look at statspack report I check the
Logical reads: 59,846.54 840.03
Physical reads: 1,095.91 15.38
Is there any standard for logical reads on a database? If there is latch contention, can we go for decreasing the SGA to avoid the contention, so that there is a balance between physical reads and logical reads?

amitbansode wrote:
I want to know how we should balance logical reads and physical reads on the database.

There is no balance, as that implies some kind of "+perfect ratio+" for logical I/O vs. physical I/O.
A high percentage of physical I/O can be perfectly acceptable and normal and correct for a specific database (e.g. think of a system collecting telemetry data where 90% or more of all I/O is writing new telemetry into the database and the remaining 10% is querying the data, with old data being aged out from the database using partition drops which is negligible I/O).
A high percentage of logical I/O can be indicative of a serious application design problem - where 80GB of database data is read and read again and again and again.. resulting in over a TB of logical I/O. (actually saw this in production database some years ago)
So there is no balance (e.g. not true that physical I/O = BAD and logical I/O = GOOD). No perfect cache hit ratio figure that tells you that the database is doing the right amounts of logical and physical I/O.
And I want to emphasise what Mark said - tuning requires you to identify the performance problem first, before trying to solve it.
It is very dangerous to take one metric, like the I/O cache hit ratio, and attempt to tune that. It alone is meaningless. Just like memory utilisation alone is useless and CPU utilisation alone is useless. It does not by any means point to an actual performance problem. E.g. 100% CPU utilisation can mean hardware has insufficient horses, instead of performance issues related with application design, database setting or kernel configuration.
I often repeat the following mantra here on OTN - a fundamental concept IMO for software engineering:
A solution is only as good as the problem definition.
Identify the problem first - correctly and comprehensively. And then solve it. -
9582.69043 gigabytes of physical read total bytes and increasing!
In EM
Database Instance: PROD > Top Activity > I got following
physical read total bytes 62763565056 10289335500800 4183122176
cell physical IO interconnect bytes 62763565056 10289335500800 4183122176
physical read bytes 62763565056 10289335500800 4183122176
And the session is running following update procedure:
declare
FM_BBBB MT.BBBB_CODE%TYPE;
l_start NUMBER;
cursor code_upd is select /*+ parallel(FM_KWT_POP_BBBB_MISMATCH, 10) */ DDD_CID, DDD_BBBB, CCCC_BBBB from MT_MISMATCH;
begin
-- Time regular updates.
l_start := DBMS_UTILITY.get_time;
FOR rec IN code_upd LOOP
update /*+ parallel(MT, 10) nologging */ MT
set BBBB_code = rec.CCCC_BBBB
where source= 0
and cid_no = rec.DDD_CID
and BBBB_code = rec.DDD_BBBB;
commit;
END LOOP;
DBMS_OUTPUT.put_line('Bulk Updates : ' || (DBMS_UTILITY.get_time - l_start));
end;
There are 9.5 million records in MT, but source=0 covers only 3 million records, and there are 376K records in MT_MISMATCH. What I don't understand is why this is taking so much time and reading so many bytes. Both tables were analyzed before running this procedure.
Can someone shed some light on this? Is there any better way of doing the same job?

Nabeel Khan wrote:
<snip>

Lots of badness going on here.
1) Looping / procedural code where none is needed.
2) Commit within the loop, one of the worst evils of all in Oracle. Please read this:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2680799800346456179
I'd look into rewriting this as a single SQL statement (maybe a MERGE). Or at worst, a bulk process utilizing collections and FORALL.
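A hedged sketch of the single-statement rewrite, using the table and column names from the posted procedure. A MERGE is awkward here because BBBB_CODE is both a join column and the column being updated (Oracle raises ORA-38104 when a column referenced in the ON clause is also updated), so a correlated UPDATE is used instead. It assumes MT_MISMATCH has at most one row per (DDD_CID, DDD_BBBB) pair; otherwise the subquery raises ORA-01427:

```sql
-- Single-statement replacement for the row-by-row loop, with one commit
-- at the end instead of a commit per cursor row.
UPDATE mt t
SET    t.bbbb_code = (SELECT m.cccc_bbbb
                      FROM   mt_mismatch m
                      WHERE  m.ddd_cid  = t.cid_no
                      AND    m.ddd_bbbb = t.bbbb_code)
WHERE  t.source = 0
AND    EXISTS (SELECT 1
               FROM   mt_mismatch m
               WHERE  m.ddd_cid  = t.cid_no
               AND    m.ddd_bbbb = t.bbbb_code);
COMMIT;
```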
Solaris 10 - After installation read only file system
Dear All,
I have installed Solaris 10 on my x86 system without any problem. The installation completed successfully. I followed the installation instructions in the following link.
http://docs.sun.com/app/docs/doc/817-0544/6mgbagb19?a=view
I am facing a different problem. I am not able to create even a single file or sub-directory in any of the existing directories. It always says READ ONLY file system, cannot create file / directory.
Please help me how to resolve this issue.
Thanks in advance.
Regards,
Srinivas G

What do you get for 'svcs -xv' output?
Darren -
Crashes and read-only file systems
Notice: I apologize for the long post; I've tried to be as thorough as possible. I have searched everywhere for possible solutions, but the things I've found end up being temporary workarounds or don't apply to my situation. Any help, even as simple as "have you checked out XYZ log, it's hidden here", would be greatly appreciated. Thanks.
I'm not sure what exactly caused the issues below, but they did start to happen within a day of running pacman -Syu. I hadn't run that since I first installed Arch on December 2nd of this year.
Setup:
Thinkpad 2436CTO
UEFI/GPT
SSD drive
Partitions: UEFISYS, Boot, LVM
The LVM is encrypted and is broken up as: /root, /var, /usr, /tmp, /home
All LVM file systems are EXT4 (used to have /var and /tmp as ReiserFS)
The first sign that something was wrong was gnome freezing. Gnome would then crash and I'd get booted back to the shell with all filesystems mounted as read-only. I started having the same issues as this OP:
https://bbs.archlinux.org/viewtopic.php?id=150704
At the time, I had /var and /tmp as ReiserFS, and would also get reiserfs_read_locked_inode errors.
When shutting down (even during non-crashed sessions) I would notice this during shutdown:
Failed unmounting /var
Failed unmounting /usr
Followed by a ton of these:
device-mapper: remove ioctl on <my LVM group> failed: Device or resource busy
Neither of these errors had ever appeared before.
After hours of looking for solutions (and not finding any that worked) I was convinced (without any proof) that my Reiser file systems were corrupt, so I reformatted my entire SSD and started anew - not the Arch way, I know. I set all logical volumes as EXT4.
After starting anew, I noticed
device-mapper: remove ioctl on LVM_SysGroup failed: Device or resource busy
was still showing up, even with just a stock Arch setup (maybe even when powering off via Arch install ISO, don't remember). After a lot of searching, I found that most people judged it a harmless error, so I ignored it and continued setting up Arch.
I set up Gnome and a basic LAMP server, and everything seemed to work for a couple of hours. Soon after, I got the same old issues back. The System-journald issue came back and per the workaround on https://bbs.archlinux.org/viewtopic.php?id=150704 and a couple other places, I rotated the journals and stopped journald from saving to storage. That seemed to stop THOSE errors from at least overwhelming the shell, but I would still get screen freezes, crashes, and read-only file systems.
I had to force the laptop to power off, since poweroff/reboot/halt commands weren't working (would get errors regarding the filesystems mounted as read-only).
I utilized all disk checking functions possible. From running the tests (SMART test included) that came as part of my laptop's BIOS to full blown fsck. All tests showed the drive was working fine, and Fsck would show everything was either clean, or
Clearing orphaned inode ## (uid=89, gid=89, mode=0100600, size=###
Free blocks count wrong (###, counted=###)
Which I would opt to fix. Nothing serious, though.
I could safely boot back into Arch and use the system fine until the system decides to freeze/crash and do the above all over again.
The sure way of recreating this for me is to run a cron job on a local site I'm developing. After a brief screen freeze (mouse still movable but everything otherwise unresponsive) I'll run systemctl status mysqld.service and notice that mysqld went down.
It seems that it's at this point my file systems are mounted as read only, as trying to do virtually anything results in:
unable to open /var/db/sudo/...: Read-only file system
After some time, X/Gnome crashes and I get sent back to shell with
ERROR: file_stream_metrics.cc(37)
RecordFileError() err = 30 source = 1 record = 0
Server terminated successfully (0)
Closing log file.or_delegate.h(30)] sqlite erro1, errno 0: SQL logic error or missing database[1157:1179
rm: cannot remove '/tmp/serverauth.teuroEBhtl': Read-only file system
Before all this happened, I was using Arch just fine for a few weeks. I wiped the drives and started anew, and this still happens with just the minimal number of packages installed.
I've searched for solutions to each individual problem, but come across a hack that doesn't solve anything (like turning off storing logs for journal), or the solution doesn't apply to my case.
At this point, I'm so overwhelmed I'm not even sure where exactly to pick up figuring this issue out.
Thanks in advance for any help.

Did this occur when you booted from the live/install media?
What is your current set up? That is, partitions, filesystems etc. I take it you have not yet reinstalled X but are in the default CLI following installation?
If turning off log storage didn't help, reenable it so that you may at least stand a chance of finding something useful.
What services, if any, are you running? What non-default daemons etc.?
Does it happen if you keep the machine off line?
Have you done pacman -Syu since installation and dealt with any *.pacnew files?
Last edited by cfr (2012-12-26 22:17:57) -
Consistent gets and physical reads
Hi all,
I am tuning a DM SQL query by comparing execution plans with STAR TRANSFORMATION enabled and disabled. I got the following results:
STAR TRANSFORMATION ON
74889 consistent gets
254365 physical reads
STAR TRANSFORMATION OFF
1945892 consistent gets
168028 physical reads
I thought a physical read would be counted as a logical read as well, because the data block would be read from disk (1 physical I/O), placed in the buffer cache, and then read from there (1 more logical I/O, or consistent get).
So, one physical I/O does not cause a logical I/O?
Thanks!
Edited by: user10634835 on 12-Jul-2011 08:40

But shouldn't consistent gets be >= physical reads (since, as per my understanding, 1 PIO causes at least 1 LIO)? In this case it is not.
74889 consistent gets
254365 physical reads

Just clarifying for my knowledge.
regards -
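One way consistent gets can end up smaller than physical reads is direct path I/O: direct reads (parallel query, sorts and hash joins spilling to temp, serial direct reads) bypass the buffer cache and are counted block by block as physical reads without a matching consistent get. A minimal check of the current session's breakdown, assuming access to V$MYSTAT:

```sql
-- 'physical reads direct' and the temp variant are subsets of 'physical
-- reads' that never touch the buffer cache, so they add no consistent gets.
SELECT sn.name, st.value
FROM   v$mystat   st
JOIN   v$statname sn ON sn.statistic# = st.statistic#
WHERE  sn.name IN ('consistent gets', 'physical reads',
                   'physical reads direct',
                   'physical reads direct temporary tablespace');
```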
Query on data dictionary results in large number of physical reads
I don't understand why I am getting 80,000 physical reads for this query. I am not looking for help rewriting it; I just don't understand why I would hit the disk at all.
My understanding had been that V$ views are SQL structures that point to X$ tables, and that underneath, the X$ tables are linked lists stored in memory. This is why, when you bounce the database, all the data gets reset: it is not saved to disk.
I am doing a simple insert/select off of v$open_cursor that is resulting in 80,000+ physical reads. I am posting the tkprof. It is all from v$open_cursor.
my_sid_table has 6 records. It is 1 MB in size.
If I index my_sid_table.sid, the query reduces to 20,000 physical reads (but then all the physical reads are on v$session_event).
The sequence number I am passing returns 2 SIDs.
insert into my_save_table
select *
from v$session_event
where sid in (select sid
from my_sid_table
where id = vseq);
vrowcount := sql%rowcount;
call count cpu elapsed disk query current rows
Parse 1 0.01 0.01 0 0 0 0
Execute 1 31.70 47.57 88570 22 0 1
Fetch 0 0.00 0.00 0 0 0 0
total 2 31.71 47.58 88570 22 0 1
Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 22
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
latch: row cache objects 1 0.00 0.00
log file sync 1 0.00 0.00
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

It seems like there is some missing information.
You have a wait for a log file sync, but no commit.
Your table my_sid_table is 1 MB for only 6 records?
Does the target table you are inserting into (my_save_table) have indexes on it? -
How to get the size of physical memory by using a system call?
What system call can I use to get the size of physical memory? Thanks.
%vmstat 3
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr s0 -- -- -- in sy cs us sy id
0 0 0 3025816 994456 4 19 6 0 0 0 0 8 0 0 0 459 253 139 1 1 99
0 0 0 2864688 777408 0 2 0 0 0 0 0 3 0 0 0 428 134 175 0 1 99
0 0 0 2864688 777408 0 0 0 0 0 0 0 7 0 0 0 448 112 166 0 0 100
One interesting observation about vmstat I've found (mostly on Solaris) is that the first line of output is always off the charts, so I usually run a few intervals to get a consistent result.
If you use Linux, just:
cat /proc/meminfo
High no. of physical reads of a query in statspack report
We have an Oracle database 9.2.0.6 on a Solaris box.
SQL ordered by Reads for DB: ic Instance: ic12 Snaps: 19 -20
-> End Disk Reads Threshold: 1000
CPU Elapsd
Physical Reads Executions Reads per Exec %Total Time (s) Time (s) Hash Value
1,103,253 77 14,328.0 53.1 1641.98 11428.17 3825541888
Module: w3wp.exe
select MRH_MRN,DECODE(MRH_SEX,'M','MALE','FEMALE') AS SEX, trim
((mrh_sur_name||' '||mrh_first_name||' '||mrh_middle_name)) as M
EMNAME, decode(nvl(mrh_fellow_status_yn,'333'),'Y','FCA','ACA')
AS ACA_FCA, DECODE(MRH_RESI_STATUS,'I',MRH_PROF_ADDR_LINE_1,'A',
MRH_RES_ADDR_LINE_1) AS L_ADD1, DECODE(MRH_RESI_STATUS,'I',MRH_P
Explain plan:
SQL> explain plan for select MRH_MRN,DECODE(MRH_SEX,'M','MALE','FEMALE') AS SEX
, trim((mrh_sur_name||' '||mrh_first_name||' '||mrh_middle_name)) as MEMNAME, de
code(nvl(mrh_fellow_status_yn,'333'),'Y','FCA','ACA')AS ACA_FCA, DECODE(MRH_RESI
_STATUS,'I',MRH_PROF_ADDR_LINE_1,'A',
2 MRH_RES_ADDR_LINE_1) AS L_ADD1, DECODE(MRH_RESI_STATUS,'I',MRH_PROF_ADDR_LI
NE_2,'A',MRH_RES_ADDR_LINE_2) AS L_ADD2, DECODE(MRH_RESI_STATUS,'I',MRH_PROF_ADD
R_LINE_3,'A',MRH_RES_ADDR_LINE_3) ASL_ADD3, DECODE(MRH_RESI_STATUS,'I',MRH_PROF_
ADDR_LINE_4,'A',
3 MRH_RES_ADDR_LINE_4) AS L_ADD4, DECODE(MRH_RESI_STATUS,'I',a.city_name,'A',
C.CITY_NAME) AS L_CITY, DECODE(MRH_RESI_STATUS,'I',MRH_PROF_ZIP_POSTAL_CODE,'A',
MRH_RES_ZIP_POSTAL_CODE) AS L_PIN, DECODE(MRH_RESI_STATUS,'I',b.cou_name,'A',D.C
OU_NAME) as L_Country,
4 DECODE(MRH_RESI_STATUS,'I','NOT APPLICABLE',MRH_PROF_ADDR_LINE_1)AS R_ADD1,
DECODE(MRH_RESI_STATUS,'I',' ',MRH_PROF_ADDR_LINE_2)AS R_ADD2, DECODE(MRH_RESI_
STATUS,'I',' ',MRH_PROF_ADDR_LINE_3)
5 AS R_ADD3, DECODE(MRH_RESI_STATUS,'I',' ',MRH_PROF_ADDR_LINE_4)AS R_ADD4, D
ECODE(MRH_RESI_STATUS,'I',' ','A',A.CITY_NAME) AS R_CITY, DECODE(MRH_RESI_STATUS
,'I',' ','A',MRH_PROF_ZIP_POSTAL_CODE) AS R_PIN, DECODE(MRH_RESI_STATUS,'I',' ',
'A',B.COU_NAME) as
6 R_Country, decode(nvl(mrh_mem_sub_status,'555'),'26','EXPIRED','') as sub_s
tatus, decode(nvl(mrh_mem_status,'777'),'1','ACTIVE','2','REMOVED') as mem_statu
s,mrh_resi_status, DECODE(MRH_COP_STATUS,'1',DECODE(MRH_COP_TYPE ,'13','FULLTIME
-COP','1',
7 'FULLTIME-COP', '12','PARTTIME-COP','2','PARTTIME-COP'),'NOT HOLDING COP')
AS COP_STATUS, TO_CHAR(MRH_ENROL_DT,'RRRR') AS ASSO_YR,TO_CHAR(MRH_FELLOW_DT,'RR
RR') AS FELLOW_YR from om_mem_reg_head,om_city A,
8 om_country B,om_city C,om_country D where mrh_doc_status=5 and mrh_prof_
city_code=A.City_code(+) and mrh_prof_cou_code=B.cou_code(+) and mrh_res_city_c
ode=C.City_code(+) and mrh_res_cou_code=D.cou_code(+) and trim((mrh_sur_name||'
'||mrh_first_name||
9 ''||mrh_middle_name)) like upper('%%') ORDER BY trim((mrh_sur_name||' '||m
rh_first_name||' '||mrh_middle_name))
10 ;
Explained.
SQL> select * from table(dbms_xplan.displaY());
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost
|
| 0 | SELECT STATEMENT | | 2870 | 762K| | 202
PLAN_TABLE_OUTPUT
8 |
| 1 | SORT ORDER BY | | 2870 | 762K| 1592K| 202
8 |
| 2 | NESTED LOOPS OUTER | | 2870 | 762K| | 190
2 |
| 3 | NESTED LOOPS OUTER | | 2870 | 714K| | 190
2 |
PLAN_TABLE_OUTPUT
| 4 | HASH JOIN OUTER | | 2870 | 667K| | 190
2 |
| 5 | HASH JOIN OUTER | | 2870 | 616K| | 189
2 |
| 6 | TABLE ACCESS FULL| OM_MEM_REG_HEAD | 2870 | 566K| | 188
2 |
| 7 | TABLE ACCESS FULL| OM_COUNTRY | 677 | 12186 | |
4 |
PLAN_TABLE_OUTPUT
| 8 | TABLE ACCESS FULL | OM_COUNTRY | 677 | 12186 | |
4 |
| 9 | INDEX UNIQUE SCAN | CITY_CODE_PK | 1 | 17 | |
|
| 10 | INDEX UNIQUE SCAN | CITY_CODE_PK | 1 | 17 | |
|
PLAN_TABLE_OUTPUT
Note: cpu costing is off, PLAN_TABLE' is old version
18 rows selected.
SQL>
Please suggest what can be done to overcome this.
Edited by: user00726 on Feb 3, 2009 5:03 AM

SQL> show arraysize
arraysize 15
Should I set the SDU parameter in tnsnames.ora and listener.ora?
For more info related to the same, please visit the thread below:
n/w performance related problem