INSERT causing lots of buffer gets
select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
desc tab
Name Null? Type
DEF_ID NOT NULL VARCHAR2(64)
INST_ID NOT NULL VARCHAR2(64)
BUSINESS_KEY NOT NULL VARCHAR2(64)
BUSINESS_DATA CLOB
TIME NOT NULL TIMESTAMP(6)
REQUEST_CXT NOT NULL VARCHAR2(4000)
IS_PROCESSED NOT NULL NUMBER(3)
ON_STATUS TIMESTAMP(6)
RVN NOT NULL NUMBER(10)
HV NUMBER(10)
ID NOT NULL VARCHAR2(128)
STATE CHAR(1)
insert into tab(def_id, inst_id, business_key,
businessdata, time, is_processed, next_retry_time, on_status, req_cxt, hv,
id, op_type, rvn ) values
(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, 'I', 0)
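For a single-row INSERT like this, most of the per-execution buffer gets typically come from index maintenance and LOB handling rather than from the row insert itself. One hedged way to see the cumulative figures from the shared pool (the sql_id is taken from further down the post):

```sql
select sql_id,
       executions,
       buffer_gets,
       -- NULLIF avoids divide-by-zero when the cursor has no executions yet
       round(buffer_gets / nullif(executions, 0)) as gets_per_exec,
       disk_reads,
       rows_processed
from   v$sqlstats
where  sql_id = 'ayzxbxqvp5dk3';
```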
BUSINESS_DATA is a CLOB column.
The table has two indexes:
1. idx1 (PK) on (INST_ID, DEF_ID, RVN)
2. idx2 on (ID, STATE, OP_TYPE)
Buffer gets per execution were high on 27-Mar (first query below) compared with the previous day, 26-Mar (second query):
select a.sql_id,TO_DATE(TO_CHAR(b.BEGIN_INTERVAL_TIME,'DD-MON-YY HH24:MI'),'DD-MON-YY HH24:MI') as sdate,a.PLAN_HASH_VALUE,a.EXECUTIONS_DELTA EXE_D,
case (executions_delta) when 0 then 0 else ceil(a.ROWS_PROCESSED_DELTA/a.EXECUTIONS_DELTA) end as RD,
case (executions_delta) when 0 then 0 else ceil(a.ELAPSED_TIME_DELTA/a.EXECUTIONS_DELTA) end as ELA_D_MCR_SEC,
case (executions_delta) when 0 then 0 else ceil(a.BUFFER_GETS_DELTA/a.EXECUTIONS_DELTA) end as BG_D,
case (executions_delta) when 0 then 0 else ceil(a.IOWAIT_DELTA/a.EXECUTIONS_DELTA) end as W_IO_D_MCR_SEC
from dba_hist_sqlstat a ,dba_hist_snapshot b where a.snap_id=b.snap_id and a.sql_id = 'ayzxbxqvp5dk3' and a.PARSING_SCHEMA_NAME='SAL_WORKFLOW'
and BEGIN_INTERVAL_TIME>=TO_DATE('27-MAR-2013 21:00', 'dd-mon-yyyy hh24:mi') AND END_INTERVAL_TIME<=TO_DATE('27-MAR-2013 23:00', 'dd-mon-yyyy hh24:mi')
order by 1,2;
SQL_ID SDATE PLAN_HASH_VALUE EXE_D RD ELA_D_MCR_SEC BG_D W_IO_D_MCR_SEC
ayzxbxqvp5dk3 27.Mar.13/21:00:00 0 2406 1 15876 1396 7052
ayzxbxqvp5dk3 27.Mar.13/21:10:00 0 2502 1 16401 1420 7001
ayzxbxqvp5dk3 27.Mar.13/21:20:00 0 3552 1 17444 1203 7183
ayzxbxqvp5dk3 27.Mar.13/21:30:00 0 2825 1 17860 1474 6310
ayzxbxqvp5dk3 27.Mar.13/22:00:00 0 1942 1 16250 1629 6510
ayzxbxqvp5dk3 27.Mar.13/22:10:00 0 2114 1 16276 1795 6170
ayzxbxqvp5dk3 27.Mar.13/22:20:00 0 2031 1 16769 1746 6604
ayzxbxqvp5dk3 27.Mar.13/22:40:00 0 2233 1 16435 1838 6348
8 rows selected.
select a.sql_id,TO_DATE(TO_CHAR(b.BEGIN_INTERVAL_TIME,'DD-MON-YY HH24:MI'),'DD-MON-YY HH24:MI') as sdate,a.PLAN_HASH_VALUE,a.EXECUTIONS_DELTA EXE_D,
case (executions_delta) when 0 then 0 else ceil(a.ROWS_PROCESSED_DELTA/a.EXECUTIONS_DELTA) end as RD,
case (executions_delta) when 0 then 0 else ceil(a.ELAPSED_TIME_DELTA/a.EXECUTIONS_DELTA) end as ELA_D_MCR_SEC,
case (executions_delta) when 0 then 0 else ceil(a.BUFFER_GETS_DELTA/a.EXECUTIONS_DELTA) end as BG_D,
case (executions_delta) when 0 then 0 else ceil(a.IOWAIT_DELTA/a.EXECUTIONS_DELTA) end as W_IO_D_MCR_SEC
from dba_hist_sqlstat a ,dba_hist_snapshot b where a.snap_id=b.snap_id and a.sql_id = 'ayzxbxqvp5dk3' and a.PARSING_SCHEMA_NAME='SAL_WORKFLOW'
and BEGIN_INTERVAL_TIME>=TO_DATE('26-MAR-2013 21:00', 'dd-mon-yyyy hh24:mi') AND END_INTERVAL_TIME<=TO_DATE('26-MAR-2013 23:00', 'dd-mon-yyyy hh24:mi')
order by 1,2;
SQL_ID SDATE PLAN_HASH_VALUE EXE_D RD ELA_D_MCR_SEC BG_D W_IO_D_MCR_SEC
ayzxbxqvp5dk3 26.Mar.13/21:00:00 0 2948 1 11052 21 10543
ayzxbxqvp5dk3 26.Mar.13/21:30:00 0 2042 1 12381 23 11857
ayzxbxqvp5dk3 26.Mar.13/21:40:00 0 2329 1 12089 23 11586
ayzxbxqvp5dk3 26.Mar.13/21:50:00 0 2421 1 12209 23 11684
ayzxbxqvp5dk3 26.Mar.13/22:00:00 0 2360 1 10889 22 10398
ayzxbxqvp5dk3 26.Mar.13/22:10:00 0 2081 1 11059 22 10562
ayzxbxqvp5dk3 26.Mar.13/22:20:00 0 2384 1 11464 22 10894
ayzxbxqvp5dk3 26.Mar.13/22:40:00 0 2196 1 11514 22 11021
8 rows selected.
The table is non-partitioned and has no triggers.
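As an aside, the CASE guards in the AWR query above can be written more compactly with NULLIF, and the snapshot join is safer when it also matches dbid and instance_number; a sketch (same sql_id, column list trimmed):

```sql
select a.sql_id,
       b.begin_interval_time,
       a.plan_hash_value,
       a.executions_delta,
       -- NULLIF returns NULL (instead of erroring) when executions_delta is 0
       ceil(a.buffer_gets_delta  / nullif(a.executions_delta, 0)) as bg_per_exec,
       ceil(a.elapsed_time_delta / nullif(a.executions_delta, 0)) as ela_per_exec_us
from   dba_hist_sqlstat  a,
       dba_hist_snapshot b
where  b.snap_id         = a.snap_id
and    b.dbid            = a.dbid
and    b.instance_number = a.instance_number
and    a.sql_id          = 'ayzxbxqvp5dk3'
order  by b.begin_interval_time;
```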
select OWNER,TABLE_NAME,SEGMENT_NAME,COLUMN_NAME,INDEX_NAME,CHUNK,ENCRYPT,COMPRESSION,SECUREFILE from dba_lobs where TABLE_NAME='&TABLE_NAME';
OWNER TABLE_NAME SEGMENT_NAME COLUMN_NAME INDEX_NAME CHUNK ENCR COMPRE SEC
CLASS TAB SYS_LOB0000119958C00004$$ BUSINESS_DATA SYS_IL0000119958C00004$$ 8192 NONE NONE NO
COLUMN_NAME NUM_DISTINCT NUM_NULLS LAST_ANALYZED SAMPLE_SIZE AVG_COL_LEN HISTOGRAM DENSITY
BUSINESS_DATA 0 0 16.Apr.13/09:28:06 14584877 87 NONE .0000000000
Can anyone help me troubleshoot such high spike in buffer_gets ?
> Can anyone help me troubleshoot such high spike in buffer_gets ?
You haven't posted anything indicating a 'high spike in buffer_gets'.
All you posted are two sets of data with one set having higher values. Nothing indicates that the values are wrong or larger than they should be.
When a new row is inserted it goes into a block. Oracle has to 'get' that block before it can put the row into it. The more rows you INSERT the more blocks that will likely be needed and the more 'gets' that will likely occur.
You need to post something to support your assertion that there has been a 'spike' and also need to post some reason why that would be an issue even if there has been a spike.
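If you do want to see where an INSERT's gets are going, one hedged starting point is v$segment_statistics, which breaks logical reads down by segment; the owner and object names below follow the post's shorthand and may differ in your schema:

```sql
select object_name,
       statistic_name,
       value
from   v$segment_statistics
where  owner = 'SAL_WORKFLOW'                   -- parsing schema from the AWR query
and    object_name in ('TAB', 'IDX1', 'IDX2')   -- table and index names as posted
and    statistic_name in ('logical reads', 'physical reads', 'db block changes')
order  by object_name, statistic_name;
```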
Similar Messages
-
Hi,
I have the table my_log in a 9.2.0.7 database. This table is used only for select and insert purposes (there are no deletions). The insertions are simple INSERT statements without any APPEND hint. From Enterprise Manager I see that the insert performs a lot of buffer gets. This is reasonable because Oracle must read the segment to find blocks to write to (I imagine it must also read the dictionary and index segments). In this scenario I imagine that the number of buffer gets can only grow over time. The strange thing is that the number of buffer gets can also decrease!
For example:
at 7:00 am => 9966 buffer gets
at 8:00 am => 9422 buffer gets
at 9:00 am => 8912 buffer gets
at 10:00 am => 9543 buffer gets
Why is the number of buffer gets so variable? I expected it to be increasing...
-
I have checked the checklist below:
No joins can be made between a stored procedure and a table in a universe
No Query Filters can be used
No predefined conditions
The procedure itself may contain a variable that will prompt, but it cannot be manipulated.
They cannot be used in Linked Universes.
Not All Databases support stored procedures
These SQL Commands are not ALLOWED: COMPUTE, PRINT, OUTPUT or STATUS
The stored procedures do not support OUT or dynamic result sets parameters
An IF statement cannot be used in the where clause.
You can only create a new universe based on the stored procedure. You cannot add it to an existing universe.
The stored procedure creates all objects in the universe automatically. If there is a long text object it will not generate an object.
If the stored procedure is changed on the database, the universe will not update the schema; the stored procedure must be re-inserted. (This causes the object ID to change!)
In order to avoid parsing errors on stored procedure columns, it is recommended that you alias result columns based on complex SQL, for example those using aggregate functions such as sum or count. The creation of aliased objects cannot be constrained. -
Hi all,
possible someone could help me on following issue:
I'm working for a software vendor, and one of our customers reports that two SQL statements of our application in particular are "executed ineffectively" in their database environment.
They are specifically saying that "These statements are consuming a lot of CPU and doing a lot of buffer gets in relation to the number of executions."
They provided following extracts out of the statspack report.
SQL1:
SQL Statistics
~~~~~~~~~~~~~~
-> CPU and Elapsed Time are in seconds (s) for Statement Total and in
milliseconds (ms) for Per Execute
% Snap
Statement Total Per Execute Total
Buffer Gets: 322,101 16.6 .89
Disk Reads: 631 0.0 .48
Rows processed: 19,444 1.0
CPU Time(s/ms): 19 1.0
Elapsed Time(s/ms): 26 1.3
Sorts: 0 .0
Parse Calls: -2 -.0
Invalidations: 0
Version count: 1
Sharable Mem(K): 43
Executions: 19,444
SQL2:
SQL Statistics
~~~~~~~~~~~~~~
-> CPU and Elapsed Time are in seconds (s) for Statement Total and in
milliseconds (ms) for Per Execute
% Snap
Statement Total Per Execute Total
Buffer Gets: 628,517 22.9 3.26
Disk Reads: 128 0.0 .18
Rows processed: 27,492 1.0
CPU Time(s/ms): 27 1.0
Elapsed Time(s/ms): 30 1.1
Sorts: 0 .0
Parse Calls: 0 .0
Invalidations: 0
Version count: 1
Sharable Mem(K): 39
Executions: 27,492
SQL1 is an UPDATE and SQL2 an INSERT on the same table.
The accessed table has 6 indexes and a primary key column. In SQL1, the update, the WHERE condition refers to the primary key column.
Both statements use bind variables.
From my point of view, the customer should provide the execution plans of both statements to verify that SQL1 is using the primary key.
As far as I understand "buffer gets", this isn't an issue in itself, because it only says that the data comes out of the cache (which would be good) instead of being read from disk.
But I don't really see there any bottleneck.
Could you please give me some suggestions?
Many Thanks
Joerg
-
Hi,
Recently we have encountered one performance issue, which is most likely caused by a sudden increase in the buffer gets per execution.
The SQL is an update statement, updating a table using a primary key (we have checked to confirm the running execution plan is using the primary key), and one field being updated is a BLOB column.
As shown in the below statistics, there is no major change in the number of executions during the every 20 minutes monitoring interval. However, the buffer gets per executions has been more than double, and the CPU time is almost doubled, hence the exec_time (elapsed time) has been doubled.
The same SQL has been running for the past four years in multiple similar databases. The database is Oracle 9.2.0.4 running on Solaris 9. For the past 300 days, the average elapsed time per execution is about 0.0093s, while the average buffer gets per execution is about 670. The update statement has been executed about 9 times per second.
The question is why there is a sudden increase in the buffer gets? The sudden increase happened twice for the past two days.
<pre>
B_TIME E_TIME EXECUTIONS_DIFF EXEC_TIME CPU_TIME BUFFER_GETS EXEC_PER_DAY
2009-11-25-12:23 2009-11-25-12:43 9363 .0081 .008 530.04 671338
2009-11-25-12:43 2009-11-25-13:03 11182 .0083 .008 538.59 799772
2009-11-25-13:03 2009-11-25-13:23 10433 .0078 .0077 474.61 761970
2009-11-25-13:23 2009-11-25-13:43 10043 .008 .0078 496.65 713581
2009-11-25-13:43 2009-11-25-14:04 8661 .0076 .0074 401.22 598169
2009-11-25-14:04 2009-11-25-14:23 8513 .0069 .0068 315.56 646329
2009-11-25-14:23 2009-11-25-14:43 10170 .007 .0068 312.28 726188
2009-11-25-14:43 2009-11-25-15:05 11873 .0072 .0069 320.17 787885
2009-11-25-15:05 2009-11-25-15:23 8633 .011 .0101 844.83 675014
2009-11-25-15:23 2009-11-25-15:44 9668 .0144 .0137 1448.51 680778
2009-11-25-15:44 2009-11-25-16:04 9671 .0163 .0156 1809.04 702163
2009-11-25-16:04 2009-11-25-16:25 10260 .0188 .0177 2107.67 711447
2009-11-25-16:25 2009-11-25-16:44 9827 .0157 .0151 1834.3 739593
2009-11-25-16:44 2009-11-25-17:05 10586 .0171 .0164 2008.25 714555
2009-11-25-17:05 2009-11-25-17:24 9625 .0189 .0181 2214.07 745829
2009-11-25-17:24 2009-11-25-17:44 9764 .016 .0154 1877.34 679782
2009-11-25-17:44 2009-11-25-18:04 8812 .0167 .0163 1989.61 652405
2009-11-26-07:24 2009-11-26-07:43 8230 .0141 .014 1614.6 614051
2009-11-26-07:43 2009-11-26-08:04 11494 .0165 .0159 1833.1 785044
2009-11-26-08:04 2009-11-26-08:24 11028 .0182 .0172 1979.61 800688
2009-11-26-08:24 2009-11-26-08:44 10533 .0154 .0149 1734.62 750248
2009-11-26-08:44 2009-11-26-09:04 9367 .018 .0168 2043.95 685274
2009-11-26-09:04 2009-11-26-09:24 10307 .0214 .0201 2552.43 729938
2009-11-26-09:24 2009-11-26-09:45 10932 .0251 .0234 3111.48 762328
2009-11-26-09:45 2009-11-26-10:05 10992 .0278 .0254 3386.41 797404
2009-11-26-10:05 2009-11-26-10:24 10179 .0289 .0269 3597.24 764088
2009-11-26-10:24 2009-11-26-10:45 10216 .032 .0286 3879.47 681592
2009-11-26-10:45 2009-11-26-11:04 10277 .0286 .0263 3539.44 799219
2009-11-26-11:20 2009-11-26-11:23 1378 .0344 .0312 4261.62 688203
2009-11-26-11:23 2009-11-26-11:36 7598 .0299 .027 3675.36 805481
2009-11-26-11:36 2009-11-26-11:43 3345 .0298 .0272 3610.28 752625
2009-11-26-11:43 2009-11-26-12:03 10383 .0295 .0278 3708.36 728158
2009-11-26-12:03 2009-11-26-12:23 10322 .0332 .03 4002.33 745669
2009-11-26-12:23 2009-11-26-12:43 11847 .0316 .0292 3899.34 852273
2009-11-26-12:43 2009-11-26-13:03 10027 .0331 .0298 4030.5 722546
2009-11-26-13:03 2009-11-26-13:23 10130 .035 .0309 4199.08 730577
2009-11-26-13:23 2009-11-26-13:43 9783 .0331 .0306 4161.3 707915
2009-11-26-13:43 2009-11-26-14:03 10460 .0322 .0291 3947.63 753748
2009-11-26-14:03 2009-11-26-14:23 9452 .0333 .0309 4143.31 678283
2009-11-26-14:23 2009-11-26-14:43 9127 .0318 .03 4051.52 659341
2009-11-26-14:51 2009-11-26-15:03 5391 .0358 .0328 4358.58 652356
2009-11-26-15:03 2009-11-26-15:16 7183 .0425 .0348 4615.42 746824
2009-11-26-15:16 2009-11-26-15:23 2921 .0417 .0373 4887.75 682092
2009-11-26-15:23 2009-11-26-15:43 9597 .0393 .0352 4603.62 679656
2009-11-26-15:43 2009-11-26-16:03 8797 .0411 .0362 4783.66 630755
2009-11-26-16:03 2009-11-26-16:23 9957 .0453 .0391 5168.28 718100
2009-11-26-16:23 2009-11-26-16:43 11209 .0436 .0369 4870.77 808395
2009-11-26-16:43 2009-11-26-17:03 10729 .0428 .0375 5119.56 766103
2009-11-26-17:03 2009-11-26-17:23 9116 .0409 .0363 4912.58 659098
</pre>
GaoYuan
Edited by: user12194561 on Nov 26, 2009 7:34 PM -
What causes BUFFER GETS and PHYSICAL READS in INSERT operation to be high?
Hi All,
I am performing a huge number of INSERTs into a newly installed Oracle XE 10.2.0.1.0 on Windows. There is no SELECT statement running, just INSERTs one after the other, 550,000 in count. When I monitor the session I/O from Home > Administration > Database Monitor > Sessions, I see the following stats:
BUFFER GETS = 1,550,560
CONSISTENT GETS = 512,036
PHYSICAL READS = 3,834
BLOCK CHANGES = 1,034,232
The presence of these two stats confuses me. Though the operation in this session is just INSERTs, why should there be BUFFER GETS of this magnitude, and why should there be PHYSICAL READS? Aren't these counters for read operations? The BLOCK CHANGES value is clear, as the heavy writes change that many blocks. Can any kind soul explain what causes these counters to show such high values?
The total columns in the display table are as follows (from the link mentioned above)
1. Status
2. SID
3. Database Users
4. Command
5. Time
6. Block Gets
7. Consistent Gets
8. Physical Reads
9. Block Changes
10. Consistent Changes
What do CONSISTENT GETS and CONSISTENT CHANGES mean in a typical INSERT operation? And does someone know which tables are involved in producing these values?
Thanks.
Flake wrote:
Hans, thanks.
The table just have 2 columns, both of which are varchar2 (500). No constraints, no indexes, neither foreign key references are in place. The total size of RAM in system is 1GB, and yes, there are other GUI's going on like Firefox browser, notepad and command terminals.
But what do these other applications have to do with Oracle BUFFER GETS, PHYSICAL READS etc.? Awaiting your reply.
Total RAM is 1GB. If you let XE decide how much RAM is allocated to buffers, on startup that needs to be shared with any/all other applications. Let's say that leaves us with, say, 400M for the SGA + PGA.
PGA is used for internal work such as sorting, which is also used in determining the layout of secondary facets such as indexes and uniqueness. Total PGA usage varies in size based on the number of connections and required operations.
And then there's the SGA. That needs to cover the space requirements for the data dictionary, any stored procedures and SQL statements being run, user security, and so on, as well as the buffer blocks which represent the tablespace of the database. Since it is rare that the entire tablespace fits in memory, blocks need to be swapped in and out.
So - put too much space pressure on the poor operating system before starting the database, and the SGA may be squeezed. Put that space pressure on the system and you may end up with swapping or paging.
This is one of the reasons Oracle professionals will argue for dedicated machines to handle Oracle software. -
Sudden increase in buffer gets per executions in update statement
Hi,
Recently we have encountered one performance issue, which is most likely caused by a sudden increase in the buffer gets per execution.
The SQL is an update statement, updating a table using a primary key (we have checked to confirm the running execution plan is using the primary key), and one field being updated is a BLOB column.
As shown in the below statistics, there is no major change in the number of executions during the every 20 minutes monitoring interval. However, the buffer gets per executions has been more than double, and the CPU time is almost doubled, hence the exec_time (elapsed time) has been doubled.
The same SQL has been running for the past four years in multiple similar databases. The database is Oracle 9.2.0.4 running on Solaris 9. For the past 300 days, the average elapsed time per execution is about 0.0093s, while the average buffer gets per execution is about 670. The update statement has been executed about 9 times per second.
The question is why there is a sudden increase in the buffer gets? The sudden increase happened twice for the past two days.
<pre>
B_TIME E_TIME EXECUTIONS_DIFF EXEC_TIME CPU_TIME BUFFER_GETS EXEC_PER_DAY
2009-11-25-14:04 2009-11-25-14:23 8513 .0069 .0068 315.56 646329
2009-11-25-14:23 2009-11-25-14:43 10170 .007 .0068 312.28 726188
2009-11-25-14:43 2009-11-25-15:05 11873 .0072 .0069 320.17 787885
2009-11-25-15:05 2009-11-25-15:23 8633 .011 .0101 844.83 675014
2009-11-25-15:23 2009-11-25-15:44 9668 .0144 .0137 1448.51 680778
2009-11-25-15:44 2009-11-25-16:04 9671 .0163 .0156 1809.04 702163
2009-11-25-16:04 2009-11-25-16:25 10260 .0188 .0177 2107.67 711447
2009-11-25-16:25 2009-11-25-16:44 9827 .0157 .0151 1834.3 739593
2009-11-25-16:44 2009-11-25-17:05 10586 .0171 .0164 2008.25 714555
2009-11-26-08:04 2009-11-26-08:24 11028 .0182 .0172 1979.61 800688
2009-11-26-08:24 2009-11-26-08:44 10533 .0154 .0149 1734.62 750248
2009-11-26-08:44 2009-11-26-09:04 9367 .018 .0168 2043.95 685274
2009-11-26-09:04 2009-11-26-09:24 10307 .0214 .0201 2552.43 729938
2009-11-26-09:24 2009-11-26-09:45 10932 .0251 .0234 3111.48 762328
2009-11-26-09:45 2009-11-26-10:05 10992 .0278 .0254 3386.41 797404
2009-11-26-15:03 2009-11-26-15:16 7183 .0425 .0348 4615.42 746824
2009-11-26-15:16 2009-11-26-15:23 2921 .0417 .0373 4887.75 682092
2009-11-26-15:23 2009-11-26-15:43 9597 .0393 .0352 4603.62 679656
2009-11-26-15:43 2009-11-26-16:03 8797 .0411 .0362 4783.66 630755
2009-11-26-16:03 2009-11-26-16:23 9957 .0453 .0391 5168.28 718100
2009-11-26-16:23 2009-11-26-16:43 11209 .0436 .0369 4870.77 808395
2009-11-26-16:43 2009-11-26-17:03 10729 .0428 .0375 5119.56 766103
2009-11-26-17:03 2009-11-26-17:23 9116 .0409 .0363 4912.58 659098
</pre>
Yesterday I did a trace on one of the sessions running the update statement, and below is the tkprof output:
<pre>
call count cpu elapsed disk query current rows
Parse 76 0.03 0.00 0 0 0 0
Execute 76 4.58 5.14 0 567843 19034 76
Fetch 0 0.00 0.00 0 0 0 0
total 152 4.61 5.14 0 567843 19034 76
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 88
Rows Row Source Operation
1 UPDATE (cr=30 r=0 w=0 time=6232 us)
1 INDEX UNIQUE SCAN <PK Index Name> (cr=3 r=0 w=0 time=58 us)(object id 81122)
Elapsed times include waiting on following events:
Event waited on                       Times Waited  Max. Wait  Total Waited
SQL*Net message to client 152 0.00 0.00
SQL*Net message from client 152 0.00 0.22
SQL*Net more data from client 1894 0.00 0.03
SQL*Net break/reset to client 152 0.00 0.00
buffer busy waits 14 0.00 0.00
enqueue 1 0.61 0.61
</pre>
GaoYuan
Hi,
I've reformatted your output for better understanding (with {noformat}...{noformat}):
B_TIME E_TIME EXECUTIONS_DIFF EXEC_TIME CPU_TIME BUFFER_GETS EXEC_PER_DAY
2009-11-25-14:04 2009-11-25-14:23 8513 .0069 .0068 315.56 646329
2009-11-25-14:23 2009-11-25-14:43 10170 .007 .0068 312.28 726188
2009-11-25-14:43 2009-11-25-15:05 11873 .0072 .0069 320.17 787885
2009-11-25-15:05 2009-11-25-15:23 8633 .011 .0101 844.83 675014
2009-11-25-15:23 2009-11-25-15:44 9668 .0144 .0137 1448.51 680778
2009-11-25-15:44 2009-11-25-16:04 9671 .0163 .0156 1809.04 702163
2009-11-25-16:04 2009-11-25-16:25 10260 .0188 .0177 2107.67 711447
2009-11-25-16:25 2009-11-25-16:44 9827 .0157 .0151 1834.3 739593
2009-11-25-16:44 2009-11-25-17:05 10586 .0171 .0164 2008.25 714555
2009-11-26-08:04 2009-11-26-08:24 11028 .0182 .0172 1979.61 800688
2009-11-26-08:24 2009-11-26-08:44 10533 .0154 .0149 1734.62 750248
2009-11-26-08:44 2009-11-26-09:04 9367 .018 .0168 2043.95 685274
2009-11-26-09:04 2009-11-26-09:24 10307 .0214 .0201 2552.43 729938
2009-11-26-09:24 2009-11-26-09:45 10932 .0251 .0234 3111.48 762328
2009-11-26-09:45 2009-11-26-10:05 10992 .0278 .0254 3386.41 797404
2009-11-26-15:03 2009-11-26-15:16 7183 .0425 .0348 4615.42 746824
2009-11-26-15:16 2009-11-26-15:23 2921 .0417 .0373 4887.75 682092
2009-11-26-15:23 2009-11-26-15:43 9597 .0393 .0352 4603.62 679656
2009-11-26-15:43 2009-11-26-16:03 8797 .0411 .0362 4783.66 630755
2009-11-26-16:03 2009-11-26-16:23 9957 .0453 .0391 5168.28 718100
2009-11-26-16:23 2009-11-26-16:43 11209 .0436 .0369 4870.77 808395
2009-11-26-16:43 2009-11-26-17:03 10729 .0428 .0375 5119.56 766103
2009-11-26-17:03 2009-11-26-17:23 9116 .0409 .0363 4912.58 659098
call count cpu elapsed disk query current rows
Parse 76 0.03 0.00 0 0 0 0
Execute 76 4.58 5.14 0 567843 19034 76
Fetch 0 0.00 0.00 0 0 0 0
total 152 4.61 5.14 0 567843 19034 76
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 88
Rows Row Source Operation
1 UPDATE (cr=30 r=0 w=0 time=6232 us)
1 INDEX UNIQUE SCAN <PK Index Name> (cr=3 r=0 w=0 time=58 us)(object id 81122)
Elapsed times include waiting on following events:
Event waited on                       Times Waited  Max. Wait  Total Waited
SQL*Net message to client 152 0.00 0.00
SQL*Net message from client 152 0.00 0.22
SQL*Net more data from client 1894 0.00 0.03
SQL*Net break/reset to client 152 0.00 0.00
buffer busy waits 14 0.00 0.00
enqueue 1 0.61 0.61
********************************************************************************
Can you please provide the DDL for the table and indexes, the type of tablespace(s) they reside in (ASSM/MSSM, extent sizes), the UPDATE statement, how many sessions on average/at peak are doing the same thing concurrently, and how many sessions are working on this table concurrently and how they use it? -
Inspection lot is not getting updated in quant
Hi Experts,
Need your help..
I have posted two material documents with the same material, with the following consequences:
1. We have an immediate TO creation process, so now we have 2 TOs and 2 inspection lots.
2. When I do the UD for the first inspection lot, a PCN does get created.
3. Now I do TO confirmation for putaway for the TR, and then TO creation and confirmation for the PCN, and the material is in the bin.
4. When I confirm the TO (the putaway TO for the TR), the inspection lot is not getting updated in the quant, due to which I am not able to convert the PCN to a TO,
because in the quant I found the previous inspection lot, which is different from the one in the PCN document.
So is there any solution that will update the inspection lot in the quant once we confirm the putaway TO?
Any help on this will be highly appreciated.
Regards,
OM
Hi jugen,
Thanks for your help, but now I am going for the development below:
Once the UD is done, the system will take the inspection lot from the quant and overwrite it onto the newly created PCN,
so that during PCN-to-TO creation (LU04) the inspection lot will remain the same in both the quant and the PCN.
Do you have any idea where in the QA32 tcode I have to write this logic?
Thank you..
Regards, -
I've found this page
[http://www.billmagee.co.uk/oracle/sqltune/080_identify.html]
in which is said:
BUFFER_GETS Cumulative total of memory blocks read for this statement
so if I want to see how many blocks a query reads per execution I must do ((disk_reads+buffer_gets)/executions), as suggested by the query on the same page.
select sql_text,
executions,
to_char((((disk_reads+buffer_gets)/executions) * 8192)/1048576,
'9,999,999,990.00') as total_gets_per_exec_mb,
to_char((( disk_reads /executions) * 8192)/1048576,
'9,999,999,990.00') as disk_reads_per_exec_mb,
to_char((( buffer_gets /executions) * 8192)/1048576,
'9,999,999,990.00') as buffer_gets_per_exec_mb,
parsing_user_id
from v$sqlarea
where executions > 0
order by 6 desc
Is this correct?
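As a sanity check on the arithmetic in that query, the conversion can be reproduced on its own; the 8192-byte block size is an assumption (it should really come from db_block_size):

```sql
-- 1,000,000 gets over 1,000 executions = 1,000 blocks per execution;
-- 1,000 blocks * 8192 bytes / 1048576 = 7.8125 MB per execution
select (0 + 1000000) / 1000 * 8192 / 1048576 as gets_per_exec_mb
from   dual;
```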
Does "buffer gets" refer only to blocks found in the buffer cache (not loaded from disk), or to the total number of db blocks on which the query works (independently of whether they are found immediately in the buffer cache or must be read from disk)?
Hope you can help me.
Thanks
Adriano Aristarco
-
Yes, the index appears in dba_indexes; however, its state is 'UNUSABLE'.
It looks like impdp takes the state from the source metadata, puts it into the destination, and does not even try to rebuild the index.
Of course, after running ALTER INDEX ... REBUILD, its status changed to VALID.
So what is the point of impdp reporting its state? There can be hundreds of unusable indexes; why does it not just rebuild them?
I'm using the latest Photoshop CC 2014 with the most up-to-date Camera Raw. I am having a lot of repeated trouble getting files to "synch" with corrections, no matter what options I select (i.e. "everything"). What is wrong with me/my computer/Adobe?! Help, fast, please.
BOILERPLATE TEXT:
Note that this is boilerplate text.
If you give complete and detailed information about your setup and the issue at hand,
such as your platform (Mac or Win),
exact versions of your OS, of Photoshop (not just "CS6", but something like CS6v.13.0.6) and of Bridge,
your settings in Photoshop > Preference > Performance
the type of file you were working on,
machine specs, such as total installed RAM, scratch file HDs, total available HD space, video card specs, including total VRAM installed,
what troubleshooting steps you have taken so far,
what error message(s) you receive,
if having issues opening raw files also the exact camera make and model that generated them,
if you're having printing issues, indicate the exact make and model of your printer, paper size, image dimensions in pixels (so many pixels wide by so many pixels high). if going through a RIP, specify that too.
etc.,
someone may be able to help you (not necessarily this poster, who is not a Windows user).
a screen shot of your settings or of the image could be very helpful too.
Please read this FAQ for advice on how to ask your questions correctly for quicker and better answers:
http://forums.adobe.com/thread/419981?tstart=0
Thanks! -
How to unblock the payment to the vendor even though the lot will get rejected
Hi,
We receive material from a vendor. When the stores people unloaded it, it got broken,
so quality will reject the lot for that reason. But if we reject it, the vendor will not get the payment.
Since this is not the vendor's mistake, he expects the payment.
How can we unblock the payment to the vendor even though the lot will be rejected?
Hi
Can you not set the UD code to Partially Accepted?
Put the OK material in Unrestricted.
Put the not-OK material into blocked stock.
From blocked stock you can move it to QRNT (quarantine storage location, movement type 311) or directly to Unrestricted (movement type 343).
From here you can move the material to scrap with movement type 551.
This will allow payment to the vendor, and the quantity can be scrapped. -
Data Loader On Demand Inserting Causes Duplicates on Custom Objects
Hi all,
I am having a problem: I need to import around 250,000 records on a regular basis, so I have built a solution using Data Loader with two processes, one to insert and one to update. I was expecting that imports with an existing EUI would fail, so that only new records would get inserted (as it says in the PDF), but it keeps creating duplicates even when all the data is exactly the same.
does anyone have any ideas?
Cheers
Mark
Yes, you have encountered an interesting problem. There is a field on every object labelled "External Unique Id" (but it is inconsistent as to whether there is a unique index on it or not). Some of the objects have keys that are unique and some seemingly have none. The best way to test this is to use the command line bulk loader (because the GUI import wizard can do both INSERT/UPDATE in one execution, so you don't always see the problem).
I can run the same data over and over through the command line loader with the INSERT option and you don't get unique key constraint errors - for example on ASSET, CONTACT, and CUSTOM OBJECTS. Once you have verified whether the bulk loader is creating duplicates or not, that might drive you to the decision of using a web service.
The FINANCIAL TRANSACTION object I believe has a unique index on the "External Unique Id" field and the FINANCIAL ACCOUNT object has a unique key on the "Name" field I believe.
Hope this helps a bit.
Mychal Manie ([email protected])
Hitachi Consulting -
Good or bad to have buffer gets
Is it good to have high buffer gets?
In the manuals they seem to say it is not good...
There is not a query, application, database table or forum that can answer this question. You might see a statement that takes 10 hours to run, performs 42 billion consistent reads and 42 million disk reads... and it is running just fine. There might be a statement that takes 1 minute to run, performs 1000 consistent reads and no disk reads, and is the worst performing SQL in the application.
Before you decide that I am out of my mind...consider...
The first statement is an overnight batch job that performs an ETL process for the data warehouse. As long as it finishes by 7am the next morning, it does not matter how fast it runs (no one logs in to the system until 8am).
The second statement retrieves the customer account information that is used hundreds of thousands of times during the day by the sales force. I know that I would be an unhappy customer if I had to wait 1 minute every time I called in to place an order.
The only way to find the worst performing sql is to talk to the users and business, have them tell you what their important processes are and how they are impacted by response time.
There is nothing worse than spending hours tuning a sql statement that did not need to be tuned!
Regards,
Daniel Fink -
How to calculate # Buffer Gets, # Exec, and Buffer Gets/Exec
Hi,
How do I calculate # Buffer Gets, # Exec, and Buffer Gets/Exec for a SQL query?
Nirmal,
You can find out these statistics from two places
1) using SQL_TRACE (10046 trace) and then TKPROF (or Autotrace in SQL*Plus)
2) or looking at V$SQL which records the cost assigned to each SQL statement since the statement was first cached.
If you use Statspack or AWR, you can see the difference between two points in time, so you can calculate the cost for a period of time.
See Using SQL_Trace and TKPROF
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96533/sqltrace.htm#1018
and Using Statspack
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96533/statspac.htm#34837
(see the 10g documentation equivalents if necessary).
Remember, ratios (eg gets/exec) aren't always very helpful. You're best off concentrating on those operations which take longest (ie where there is the most potential to save time). See (eg) www.hotsos.com, www.oraperf.com, and others to identify effective performance methodologies.
HTH
Regards Nigel -
Hello !
Does the "buffer gets" counter include logical reads?
Does the "buffer gets" metric include the event of reading from the undo buffer?
Thanks and regards,
Pavel
Edited by: Pavel on Jun 27, 2012 3:08 AM
Edited by: Pavel on Jun 27, 2012 3:35 AM
Edited by: Pavel on Jun 27, 2012 4:13 AM
Hi,
buffer gets = the number of times a block was requested from the buffer cache. A buffer get always results in a logical read. Depending on whether or not a copy of the block is available in the buffer cache, a logical read may or may not involve a physical read. So "buffer gets" and "logical reads" are basically synonyms and are often used interchangeably.
Oracle doesn't have a special "undo buffer". Undo blocks are stored in rollback segments in UNDO tablespace, and are managed in the same way data blocks are (they're even protected by redo). If a consistent get requires reading from UNDO tablespace, then statistics counters will show that, i.e. there will be one more consistent get in your autotrace.
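One quick way to watch these counters for a single statement is SQL*Plus autotrace; a sketch (the table name is hypothetical):

```sql
set autotrace traceonly statistics

-- 'some_table' is a placeholder; substitute any table you can query
select count(*) from some_table;

-- the statistics section then shows "db block gets" and "consistent gets";
-- their sum is what v$sql reports as buffer gets for the statement
```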
For more information and some examples, see a thread at askTom:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:549546900346542976
Best regards,
Nikolay -
XQuery is causing lots of disk writing
Hi,
In our system, we found that XQuery is causing lots of disk writing. I guess it is due to cache.
In dbxml document, it says:
All Berkeley DB XML applications are capable of writing temporary files to disk. This happens when the disk cache fills up and so BDB XML is forced to write overflow pages. For the most part, these temporary files can be safely ignored.
I don't understand what it says above.
My question is: why do XQuery operations cause lots of disk writes? (No other processes are writing to BDB.)
Thanks.
-YuanHi Yuan,
DB XML uses temporary Berkeley DB databases for a number of things during query evaluation, including re-sorting index entries and storing the parsed documents from a WholedocContainer format container. Berkeley DB will write this information out to disk if the cache becomes full. I would suggest that the behaviour you're seeing indicates that you need to allocate more cache for DB XML to work with.
John -
Consistent gets and buffer gets
Hi,
what is the difference between Consistent gets and buffer gets ?
Many thanks in advance.
Please - the documentation is your friend. You can search from tahiti.oracle.com or the 10.2 (or your version) documentation library.