Estimate Redo Size
Hi All,
Is there any math to estimate how much redo a SQL statement will generate? I know the redo size statistic shows it after the fact, but I am asking in order to understand the basic concept.
For example, updating the LOC column of the DEPT table in the SCOTT schema (no index on the column) generates about 200-300 bytes of redo.
Best Regards..
>
What is the purpose of giving my user statistics?
>
The purpose is to remind you that you appear to be violating forum etiquette by not marking your questions answered.
You have 66 previous questions that you have not marked as ANSWERED, and it is statistically unlikely that none of them have actually been answered.
The likely reason is that you are not following forum etiquette, which asks you to mark questions answered. See the FAQ (link in the upper right corner of this page) for the forum rules.
>
What is proper discussion forum etiquette?
When asking a question, provide all the details that someone would need to answer it including your database version, e.g. Oracle 10.2.0.4.
Format your code using code tags (see "How do I format code in my post?" below). Consulting documentation first is highly recommended. Furthermore, always be courteous; there are different levels of experience represented. A poorly worded question is better ignored than flamed - or better yet, help the poster ask a better question.
Finally, it is good form to reward answerers with points (see "What are 'reward points'?" below) and also to mark the question answered when it has been.
>
When people review forum questions for possible answers, they do not want to waste their time on questions that have already been answered. So when you do not mark your answered questions as such, those questions just junk up the forum and waste people's time.
Not marking questions answered also suggests that you are not a team player; you want people to help you, but you are unwilling to help them by keeping the forum clean.
Please revisit those 66 previous questions, give HELPFUL or ANSWERED credit where credit is due and then mark them ANSWERED if they have been answered.
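Back to the original question: there is no exact formula, but you can measure the redo for any single statement empirically. A minimal sketch (run in the same session as the DML; assumes SELECT access to the V$ views):

```sql
-- Session-level redo total before the statement
SELECT s.value AS redo_bytes
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';

UPDATE dept SET loc = 'DALLAS' WHERE deptno = 10;

-- Same query again; the delta is the redo generated by the UPDATE
SELECT s.value AS redo_bytes
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';
```

Conceptually, the redo for a small update is the change vector for the data block plus the change vector for the undo block, plus per-record and per-transaction bookkeeping, which is why a single-row, single-column update typically lands in the low hundreds of bytes.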
Similar Messages
-
Estimate db size and change tablespace configuration before restoring RMAN
Hi,
I am wondering whether it is possible to estimate the size of the restored database just from the RMAN files I have?
Also, is there a setting I can change in these files to disable the 'autoextend' feature in the tablespaces of the restored database?
thank you in advance.
Hi,
But using this method I will damage the existing control file of the original server.
SQL 'alter database mount';
sql statement: alter database mount
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of sql command on default channel at 09/16/2009 17:18:59
RMAN-11003: failure during parse/execution of SQL statement: alter database mount
ORA-01103: database name 'OLD_DB' in control file is not 'JUPITER'
And when I start JUPITER:
SQL> startup
ORACLE instance started.
Total System Global Area 192937984 bytes
Fixed Size 2169752 bytes
Variable Size 134079592 bytes
Database Buffers 54525952 bytes
Redo Buffers 2162688 bytes
ORA-00205: error in identifying control file, check alert log for more info
Is there a way to restore the control file without damaging the existing server?
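One way to avoid touching the live control file is RMAN's RESTORE CONTROLFILE ... TO syntax, which writes the restored copy to a path you choose instead of the locations in the parameter file (a sketch; the paths and backup piece name are illustrative):

```
RMAN> RESTORE CONTROLFILE TO '/tmp/restored_control01.ctl'
      FROM '/backup/JUPITER/ctl_c-1234567890-20090916-00';
```

You can then mount a scratch instance against the restored copy and query V$DATAFILE and V$TABLESPACE to estimate the restored database size, without the original server's control file ever being overwritten.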
thank you for your support. -
Estimate index size on a table column before creating it
Hi
Is it possible to estimate the size of an index before actually creating it on a table column?
I tried the query below, but it gives the size of the index only after creating it.
SELECT (SUM(bytes)/1048576)/1024 Gigs, segment_name
FROM user_extents
WHERE segment_name = 'IDX_NAME'
GROUP BY segment_name;
Can anyone throw some light on which system table will give this information?
You can get an approximation by estimating the number of rows to be indexed, the average column lengths of the columns in the index, and the overhead for an index entry - once you have some reasonable stats on the table.
I wrote a piece of code to demonstrate the method a few years ago - it has some errors, but I've highlighted them in an update to the note: http://www.jlcomp.demon.co.uk/index_efficiency_2.html
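On 10g and later you can also ask the database itself for an estimate before creating the index, via DBMS_SPACE.CREATE_INDEX_COST; it relies on reasonably fresh optimizer statistics on the table. A sketch (table and column names are illustrative):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_used_bytes  NUMBER;  -- estimated bytes of actual index data
  l_alloc_bytes NUMBER;  -- estimated bytes allocated in the tablespace
BEGIN
  DBMS_SPACE.CREATE_INDEX_COST(
    ddl         => 'CREATE INDEX idx_name ON my_table (my_column)',
    used_bytes  => l_used_bytes,
    alloc_bytes => l_alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Estimated used bytes:      ' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Estimated allocated bytes: ' || l_alloc_bytes);
END;
/
```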
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
http://www.jlcomp.demon.co.uk
To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
"Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan -
Audit dml vs noaudit dml - session stat huge difference in redo size DB 9.2
Hi,
I've just finished a test where I compared AUDIT UPDATE TABLE, DELETE TABLE, INSERT TABLE BY <user> BY ACCESS against the corresponding NOAUDIT settings.
The DB version is 9.2.0.8; the same test was run twice on the same configuration and data, so the results are comparable.
What concerns me most is the difference in redo size and redo entries. Here is a table with the results:
noaudit          audit            statname
486 439,00       878 484,00       calls to kcmgas
40 005,00        137 913,00       calls to kcmgcs
2 917 090,00     5 386 386,00     db block changes
4 136 305,00     6 709 616,00     db block gets
116 489,00       285 025,00       deferred (CURRENT) block cleanout applications
1,00             3 729,00         leaf node splits
361 723 368,00   773 737 980,00   redo size
4 235,00         50 752,00        active txn count during cleanout
Could you explain the differences in these statistics, especially the redo size?
I'm surprised, because in a 9.2 DB, AUDIT of DML doesn't log the actual SQL statements, only an indication of usage.
Regards.
Greg -
Looking for average and max redo size generated per second
Please,
I'm implementing Dataguard physical standby on Oracle 10g R2 on Windows 2003.
My issue now is how to get the redo size generated per second, to compare with the actual bandwidth between the primary and standby databases.
I know I can use the Database Console, but it wasn't installed on the production database.
Is there any link, script, or view that I could use to resolve this issue?
Thanks
It depends on the statements and the datatypes that are inserted, updated, or deleted:
select b.name,a.value from v$sesstat a, v$statname b where a.statistic#=b.statistic# and b.name = 'redo size' and a.sid=<your SID>;
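In addition to the per-session total above, V$SYSMETRIC_HISTORY (10g and later) keeps roughly the last hour of system metrics at one-minute granularity, which gives the average and peak redo rate directly:

```sql
-- Average and peak redo generation (bytes/sec) over the retained history
SELECT ROUND(AVG(value)) AS avg_redo_bytes_per_sec,
       ROUND(MAX(value)) AS max_redo_bytes_per_sec
  FROM v$sysmetric_history
 WHERE metric_name = 'Redo Generated Per Sec';
```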
courtesy to Daljit -
We are planning to use an Oracle database (Oracle 10g on Unix). We are in the process of estimating the hard disk capacity (in GB) required, based on past data for each business process, but we are faced with the following queries:
1. Assuming the structure for a table is as follows :-
create table temp1(
Name varchar2(4),
Age Number(2),
Salary Number(8,2),
DOB Date)
The estimated number of records per year is assumed to be 500. How can we estimate the size that will be required for this table?
2. We are planning to allocate 20% space for indexes on each table. Is it ok?
3. Audit Logs (to keep track of changes made through update) :- Should it be kept in different partition/hard disk?
Is there anything else to consider. Is there a better way to estimate the hard disk capacity required in more accurate manner?
Our current database in Informix takes around 100GB per year, but there is a lot of redundant data, and due to a business process change we cannot take that into consideration.
Kindly guide. Thanks in advance.
Well, you can estimate the size of a table by estimating the average row size, multiplying by the expected number of rows, and then adding overhead.
3 for the row header + 4 for name + 2 for age + 5 for Salary + 7 for a date + 1 for each column null/length indicator is about 25 bytes per row (single byte character set) * 500 rows * 1.20 (20% overhead) = small.
The overhead is the fixed block header, the ITL, the row table (2 bytes per row) and the pctfree. You can estimate this but I just said 20% which is probably a little high.
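On 10g the same arithmetic is packaged in DBMS_SPACE.CREATE_TABLE_COST, which applies the block overhead and PCTFREE of a given tablespace for you (a sketch; the tablespace name is illustrative):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_used_bytes  NUMBER;
  l_alloc_bytes NUMBER;
BEGIN
  DBMS_SPACE.CREATE_TABLE_COST(
    tablespace_name => 'USERS',
    avg_row_size    => 25,   -- estimated average row size in bytes
    row_count       => 500,  -- expected number of rows
    pct_free        => 10,
    used_bytes      => l_used_bytes,
    alloc_bytes     => l_alloc_bytes);
  DBMS_OUTPUT.PUT_LINE('Estimated used bytes:      ' || l_used_bytes);
  DBMS_OUTPUT.PUT_LINE('Estimated allocated bytes: ' || l_alloc_bytes);
END;
/
```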
The total space needed by indexes often equals or surpasses the space needed by tables. You need to know the design, and be pretty sure additional indexes will not be necessary for performance reasons, before you allocate such a small percentage of the table space to index space.
Where you keep audit tables or extracts is totally dependent on your disk setup.
HTH -- Mark D Powell -- -
APPEND (direct path) - redo size
I am sure I must be missing something obvious. I was under the impression that DIRECT PATH loads (such as inserts with the APPEND hint) would generate less redo, but I am not sure why I am seeing this...
Regular Insert (Not direct path):
====================
SQL> insert into c2 Select * from dba_objects where rownum < 301;
300 rows created.
Statistics
10 recursive calls
74 db block gets
353 consistent gets
1 physical reads
*31044 redo size*
821 bytes sent via SQL*Net to client
752 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
300 rows processed
Direct Path Insert
===========
SQL> insert /*+ APPEND */ into c2 Select * from dba_objects where rownum < 301;
300 rows created.
Statistics
8 recursive calls
13 db block gets
346 consistent gets
1 physical reads
*39048 redo size*
809 bytes sent via SQL*Net to client
770 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
300 rows processed
Not sure why I am seeing more redo being generated with DIRECT PATH... either I am missing something obvious or I have got DIRECT PATH loads completely wrong. I would really appreciate any help with this.
Hello,
Check out this thread:
[http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3224814814761]
Additionally, if you were to switch off logging at the table level, before the INSERT, you should see a (further) reduction in redo:
ALTER TABLE your_table NOLOGGING; -
Redo size(KB) over the period from AWR
Hi,
Could anyone please let me know how to write a SQL query (involving AWR tables/views) to report the redo size (KB) generated per 1-hour interval over the last week? My current AWR settings are a 1-hour interval and a retention period of 7 days. Using EM DBConsole, when I click a particular snapshot id it gives, among other metrics, the redo size (KB) generated during the corresponding snapshot interval, but I don't know where this data is stored.
Thanks,
Sreekanth
dba_hist_sysstat
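A sketch of such a query: 'redo size' in DBA_HIST_SYSSTAT is cumulative since instance startup, so the per-interval figure is the delta between consecutive snapshots (requires the Diagnostics Pack license; deltas spanning an instance restart come out negative and should be discarded):

```sql
SELECT TO_CHAR(sn.begin_interval_time, 'YYYY-MM-DD HH24:MI') AS interval_start,
       ROUND((st.value - LAG(st.value) OVER (ORDER BY sn.snap_id)) / 1024) AS redo_kb
  FROM dba_hist_sysstat  st
  JOIN dba_hist_snapshot sn
    ON  sn.snap_id         = st.snap_id
   AND  sn.dbid            = st.dbid
   AND  sn.instance_number = st.instance_number
 WHERE st.stat_name = 'redo size'
   AND sn.begin_interval_time > SYSDATE - 7
 ORDER BY sn.snap_id;
```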
Sybrand Bakker
Senior Oracle DBA -
Hi,
I have the query below:
with D_FLO_PTN as (SELECT *
FROM FLO_PTN
WHERE FLO_PTN.ID_ETU IN (SELECT id_etu
FROM music_adm.adm_users_rights
WHERE user_name = UPPER ('SALG_CB'))
OR FLO_PTN.PDTETU IN
(SELECT id_pdtetu
FROM music_adm.adm_users_rights
WHERE user_name = UPPER ('SALG_CB'))
OR EXISTS
(SELECT 1
FROM music_adm.adm_users_rights
WHERE id_etu = 0
AND id_pdtetu = '0'
AND user_name = UPPER ('SALG_CB'))),
D_FAI_VIS as (SELECT *
FROM FAI_VIS
WHERE FAI_VIS.ID_ETU IN (SELECT id_etu
FROM music_adm.adm_users_rights
WHERE user_name = UPPER ('SALG_CB'))
OR FAI_VIS.PDTETU IN
(SELECT id_pdtetu
FROM music_adm.adm_users_rights
WHERE user_name = UPPER ('SALG_CB'))
OR EXISTS
(SELECT 1
FROM music_adm.adm_users_rights
WHERE id_etu = 0
AND id_pdtetu = '0'
AND user_name = UPPER ('SALG_CB'))),
d_dim_etu as (SELECT *
FROM DIM_ETU
WHERE DIM_ETU.ID_ETU IN (SELECT id_etu
FROM music_adm.adm_users_rights
WHERE user_name = UPPER ('SALG_CB'))
OR DIM_ETU.PDTETU IN
(SELECT id_pdtetu
FROM music_adm.adm_users_rights
WHERE user_name = UPPER ('SALG_CB'))
OR EXISTS
(SELECT 1
FROM music_adm.adm_users_rights
WHERE id_etu = 0
AND id_pdtetu = '0'
AND user_name = UPPER ('SALG_CB')))
SELECT TRIM (TO_CHAR (D_FLO_PTN.PTNNUMETU, '00000')),
FLO_ITM.FMLNOMPAP,
FLO_ITM.ITMNOMPAP,
FLO_ITM.ITMVALCHR,
FLO_ITM.ITMVALNUM,
D_FAI_VIS.VISCOD,
TRIM (TO_CHAR (D_FLO_PTN.NBRCEN, '0000')),
TRIM (TO_CHAR (D_FAI_VIS.ID_ETU, '000')) || '-'
|| TRIM(TO_CHAR (
SUBSTR (D_FAI_VIS.ID_PTN,
1,
LENGTH (D_FAI_VIS.ID_PTN) - 5),
'00000'))
|| '-'
|| D_FAI_VIS.VISCOD,
D_DIM_ETU.NUMETU,
D_FLO_PTN.LIBPAY,
FLO_ITM.ITMVALNUM,
D_FAI_VIS.VISDATTXT,
FLO_ITM.ITMVALNUM,
FLO_ITM.FMLKEYVAL
FROM
D_FLO_PTN,
FLO_ITM,
D_FAI_VIS,
D_DIM_ETU
WHERE (FLO_ITM.ID_VIS = D_FAI_VIS.ID_VIS)
AND (D_FLO_PTN.ID_PTN = D_FAI_VIS.ID_PTN)
AND (D_DIM_ETU.ID_ETU = D_FLO_PTN.ID_ETU)
AND ( D_DIM_ETU.NUMETU IN ('CL2-38093-011')
AND FLO_ITM.FMLNOMPAP IN ('MMS')
AND (D_FLO_PTN.INDSLC = 'YES')
AND TRIM (TO_CHAR (D_FLO_PTN.PTNNUMETU, '00000')) IN
(SELECT TRIM (
TO_CHAR (D_FLO_PTN.PTNNUMETU, '00000'))
FROM D_FLO_PTN,
FLO_ITM,
D_FAI_VIS
WHERE (FLO_ITM.ID_VIS = D_FAI_VIS.ID_VIS)
AND (D_FLO_PTN.ID_PTN = D_FAI_VIS.ID_PTN)
AND ( FLO_ITM.ITMNOMPAP IN ('TMMS_D')
AND FLO_ITM.ITMVALNUM <= 20
AND D_FAI_VIS.VISCOD IN ('ASSE'))))
Two things I don't understand:
- LIO is almost equal to PIO, yet I have 2.5GB for the buffer cache
- redo size: I know about delayed block cleanout,
but shouldn't the second execution return redo size = 0?
I am the only user on the database
11.2.0.1
1st execution
1879 recursive calls
332 db block gets
250090 consistent gets
248221 physical reads
1804 redo size
631470 bytes sent via SQL*Net to client
14706 bytes received via SQL*Net from client
1090 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
16329 rows processed
2nd execution
1879 recursive calls
332 db block gets
250423 consistent gets
248220 physical reads
1732 redo size
1149536 bytes sent via SQL*Net to client
14706 bytes received via SQL*Net from client
1090 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
16329 rows processed
user12045475 wrote:
Two things I don't understand:
- LIO is almost equal to PIO, yet I have 2.5GB for the buffer cache
- redo size: I know about delayed block cleanout,
but shouldn't the second execution return redo size = 0?
I am the only user on the database
11.2.0.1
2nd execution
1879 recursive calls
332 db block gets
250423 consistent gets
248220 physical reads
1732 redo size
1149536 bytes sent via SQL*Net to client
14706 bytes received via SQL*Net from client
1090 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
16329 rows processed
Given the amount of work you're doing, the actual redo generated is not worth worrying about other than for reasons of curiosity.
LIO ~ PIO often indicates table scans or index fast full scans - and on 11.2 such scans can use serial direct reads; this would be sufficient to answer both your questions:
a) On direct path reads the blocks are read into the PGA, not the buffer cache, so you have to re-read them on the second execution
b) On direct path read you can do cleanout and generate redo, but the blocks are not in the cache so they don't get written back to the database, so you repeat the cleanout. (A small variation in redo could indicate some cleanout taking place on space management blocks which, I think, will go through the cache).
Regards
Jonathan Lewis -
OEM Instance Activity/ Physical I/O / REDO Size
This is Oracle 10g R2 running on zLinux.
When I look at our physical I/O activity, the most activity, by far, is listed as REDO Size. Sometimes it shows we are doing 10,000 I/O per second. Yep, we can handle that. However, most other I/O activity is shown in the 50-100 per second range.
It seems to me that I have something rather inefficient going on. I haven't found any documentation that discusses what overhead factors relate to high redo activity. Yes, I see, from a query side, how that contributes to redo activity, but I don't see any reason for such high redo activity versus relatively low application activity.
Any suggestions?
Thanks
Tom Duerbusch
THD Consulting
Dear Mr. Hunt,
I'm an experienced Oracle DBA.
The techniques for organizing VLDB systems (very large databases) are quite similar to what you have shown here.
May I suggest, perhaps for your readers' benefit (especially those who do professional large-scale editing projects), that it would be a good idea to elaborate on some modern storage system architectures like NetApp, EMC, etc.
Using a modern storage system can solve much of the I/O problem, and it seems to me after reading this article that you and I are dealing with the same I/O problem: large amounts of data that need to be written and accessed very fast.
Thank you for your effort.
Sincerely Yours,
shimon. -
Estimate table size for last 4 years
Hi,
I am on Oracle 10g
I need to estimate a table's size for the last 4 years. What I plan to do is get a count of the data in the table for the last 4 years and then multiply that value by the average row length (avg_row_len) to get the total size for 4 years. Is this technique correct, or do I need to add some overhead?
Thanks
Yes, the technique is correct, but it is better to account for some overhead. I usually multiply the results by 10 :)
The most important thing to check is whether there is any trend in data volumes. Was the count of records 4 years ago more or less equal to last year's? Is the business growing or steady? How fast is it growing? What are the prospects for the future? Last year is not always 25% of the last 4 years; it happens that the last year is more than the 3 other years added together.
The other, technical issue is the internal organisation of data in Oracle datafiles: the famous PCTFREE. If you expect that the data will be updated, then it is much better to keep some unused space in each database block in case some of your records get larger. This is much better for performance reasons. For example, you leave 10% of each database block free, and when you update a record with a longer value (like replacing a NULL column with an actual 25-character string), the record still fits into the same block. You should account for this and add it to your estimates.
On the other hand, if your records never get updated and you load them in batch, then maybe they can be ORDERed before insert and you can set up the table with the COMPRESS clause. The Oracle COMPRESS clause has very little in common with zip/gzip utilities; however, it can bring you significant space savings.
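A sketch of the ORDER-then-COMPRESS idea (table and column names are illustrative; basic COMPRESS only helps for bulk/direct-path loads, and sorting similar values together is what makes the block-level compression effective):

```sql
CREATE TABLE sales_history
COMPRESS  -- basic block compression; applied during the direct-path CTAS load
AS
SELECT *
  FROM sales
 ORDER BY region, product_id;  -- clusters repeating values into the same blocks
```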
Finally, there is no point in making estimates too accurate. They are only estimates, and reality will almost always be different. In general, it is better to overestimate and have some disk space unused than to underestimate and need people to deal with the issue. Disks are cheap; people on the project are expensive. -
How to estimate the size of the database object, before creating it?
A typical question has arisen in my mind.
As DBAs, we all know how to determine an object's size from the database.
But before creating object(s) in a database or schema, we do an analysis of the object(s)' size in relation to data growth, i.e. we estimate the size of the object(s) we are about to create.
for example,
Create table Test1 (Id Number, Name Varchar2(25), Gender Char(2), DOB Date, Salary Number(7));
A table is created.
Now what is the maximum size of a record for this table, i.e. the maximum row length for one record? And how do we estimate this?
Please help me on this...
To estimate a table size before you create it, you can do the following. For each variable character column, try to figure out the average size of the data; say on average the name will be 20 out of the allowed 25 characters. Add 7 for each date column. For numbers:
p = number of significant digits in the value
s = 0 for a positive number, 1 for a negative number
bytes = ROUND((p + s) / 2) + 1
Now add one byte for the null/length indicator for each column, plus 3 bytes for the row header. This is your row length.
Multiply by the expected number of rows to get a rough size, which you then need to adjust for the PCTFREE factor that will be used in each block, plus block overhead. With an 8K Oracle block size and the default PCTFREE of 10, you lose 819 bytes of storage, so 8192 - 819 - 108 (estimated overhead) = 7265 usable bytes. Now divide 7265 by the average row length to get an estimate of the number of rows that will fit in this space, and reduce the number of usable bytes by 2 bytes for each row (for the row table). This is your new usable space.
So: (number of rows x estimated row length) divided by the usable space per block gives the size in blocks. Convert to megabytes or gigabytes as desired.
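Once a few representative rows exist, the VSIZE function reports the exact stored size of each column value, which avoids doing the digit arithmetic by hand. A sketch against the Test1 table above (the +5 is one null/length byte per column, the +3 the row header):

```sql
SELECT AVG(  NVL(VSIZE(id), 0)
           + NVL(VSIZE(name), 0)
           + NVL(VSIZE(gender), 0)
           + NVL(VSIZE(dob), 0)
           + NVL(VSIZE(salary), 0)
           + 5 + 3) AS avg_row_bytes
  FROM test1;
```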
HTH -- Mark D Powell --
Edited to add "number of usable". -
Impdp := Estimate schema size
I have a compressed dump of one schema, 4 GB in size. Now I want to import it. Can anyone please tell me how much space I should reserve for importing the dump file?
Thanks in advance.
You should be able to get it from the dump file. This is what you need to do if you are importing the complete dumpfile(s):
impdp user/password directory=your_dir dumpfile=your_dump.dmp master_only=y keep_master=y jobname=est_size
sqlplus user/password
select sum(dump_orig_length) from est_size where process_order > 0 and duplicate = 0 and object_type = 'TABLE_DATA'
If you only want some of the dumpfile, then
add your filters to the impdp command, like schemas=foo, or tables=foo.tab1, or whatever... then
select sum(dump_orig_length) from est_size where process_order > 0 and duplicate = 0 and object_type = 'TABLE_DATA' and processing_state != 'X';
This will give you the uncompressed size of the data that was exported.
when you are done
sql> drop table user.est_size;
Hope this helps.
Dean -
How to estimate the size of Encore final authoring output for a DVD or .iso image?
I animated a lot of pictures on timelines, but did not realise some are over 10 gigs, meaning they won't even fit on a double-layer DVD! Is it possible to reduce this size? I thought I'd run into this problem only with HD clips! I know I can always split them, but I did not want too many DVDs.
Well, I did not post this and went on authoring instead, to see what would happen. I had a few 300MB files and two over 10GB, but after encoding (the lot) it was only 2.6GB! What a relief, but how can I predict my output? I put the question to the forums in various forms for previous discussion, but somehow it did not return anything relevant. (I seem to have run into this problem before but can't recall the outcome!)
>some are over 10 gigs
Photo Scaling for Video http://forums.adobe.com/thread/450798
-Too Large May = Crash http://forums.adobe.com/thread/879967 -
I'm trying to pick a handful of tables in a particular schema which are taking up quite a bit of space, and move them onto another tablespace. There's one I know of in particular, but I'd like to move a couple more if they're also using up a good deal of space. Could I use a query like the following to determine which tables are taking up the majority of the space:
select table_name, num_rows, blocks, num_rows/blocks as div
from dba_tables
where owner = <schema>
and blocks > 0 and num_rows > 10000
order by 4
It's mostly a blunt instrument to find the 10 or so most space-intensive tables. Or is there a better way, such as multiplying (row count) * (max bytes per row), to get a better estimate?
Thanks
--=ChuckHello,
select table_name, num_rows, blocks, num_rows/blocks as div from dba_tables
Be aware that the columns num_rows and blocks are only populated if you collect statistics on the table (with DBMS_STATS, for instance). So you need fresh statistics if you want reliable values in these columns.
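DBA_SEGMENTS reports actually allocated space, which sidesteps the statistics-freshness caveat (a sketch; substitute the schema name):

```sql
-- Top 10 segments by allocated space in one schema
SELECT *
  FROM (SELECT segment_name, segment_type,
               ROUND(SUM(bytes) / 1024 / 1024) AS mb
          FROM dba_segments
         WHERE owner = 'SCHEMA_NAME'
         GROUP BY segment_name, segment_type
         ORDER BY SUM(bytes) DESC)
 WHERE ROWNUM <= 10;
```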
Hope this helps.
Best regards,
Jean-Valentin