Archived redo log size much less than online redo logs

Hi,
My database is around 27 GB and the online redo logs are 50M each, but the archived logs are only 11M or 13M, and the logs are switching every 5-10 minutes. Why?
Regards
Azer Imamaliyev

Azer_OCA11g wrote:
1) Almost all archived log files are 11M or 13M, but sometimes they are 30M or 37M.
2)
select to_char(completion_time, 'dd.mm.yyyy HH24:MI:SS')
from v$archived_log
order by completion_time desc;
10.02.2012 11:00:26
10.02.2012 10:50:23
10.02.2012 10:40:05
10.02.2012 10:29:34
10.02.2012 10:28:26
10.02.2012 10:18:07
10.02.2012 10:05:04
10.02.2012 09:55:03
10.02.2012 09:40:54
10.02.2012 09:28:06
10.02.2012 09:13:44
10.02.2012 09:00:17
10.02.2012 08:45:04
10.02.2012 08:25:04
10.02.2012 08:07:12
10.02.2012 07:50:06
10.02.2012 07:25:05
10.02.2012 07:04:50
10.02.2012 06:45:04
10.02.2012 06:20:04
10.02.2012 06:00:12
3) There haven't been any serious changes at the DB level; these messages have been showing in the alert log almost since the database was created.

Two simple thoughts:
1) Are you running with archived log compression? Add the COMPRESSED column to the query above to see whether the archived log files are compressed.
2) The difference may simply be a reflection of the number and sizes of the public and private redo strands you have enabled. When anticipating a log file switch, Oracle leaves enough space in the log file to cater for strands that still need to be flushed into it, and then doesn't necessarily have to use that space.
Here's a query (if you can run it as SYS) to show you your allocation of public and private strands:
select
     PTR_KCRF_PVT_STRAND           ,
     FIRST_BUF_KCRFA               ,
     LAST_BUF_KCRFA                ,
     TOTAL_BUFS_KCRFA              ,
     STRAND_SIZE_KCRFA             ,
     indx
from
     x$kcrfstrand
;
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: <b><em>Oracle Core</em></b>
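As an aside, the redo rate implied by the original post can be checked from the v$archived_log output quoted above. A throwaway sketch (the timestamps are copied from the listing; the ~1MB/min figure follows from the 11-13M archive sizes, not from anything in the thread):

```python
from datetime import datetime

# Completion times copied from the v$archived_log output above (newest first)
times = [
    "10.02.2012 11:00:26", "10.02.2012 10:50:23", "10.02.2012 10:40:05",
    "10.02.2012 10:29:34", "10.02.2012 10:28:26", "10.02.2012 10:18:07",
    "10.02.2012 10:05:04", "10.02.2012 09:55:03", "10.02.2012 09:40:54",
]
stamps = [datetime.strptime(t, "%d.%m.%Y %H:%M:%S") for t in times]

# Gaps between consecutive log switches, in minutes
gaps_min = [(a - b).total_seconds() / 60 for a, b in zip(stamps, stamps[1:])]
print([round(g, 1) for g in gaps_min])

# Most gaps are roughly 10-14 minutes for 11-13M of archived redo, i.e. about
# 1MB of redo per minute -- nowhere near enough to fill a 50M log before
# something (a forced or time-based switch) closes it early.
```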

Similar Messages

  • Lines size to less than 0,29cm?

    Is it possible to reduce the line size to less than 0,29 cm?

    I don't know what the minimum is in centimeters but 0,29 sounds pretty close.
    What is your goal? Why do you need/want it smaller?

  • Why should texture size be less than power of 2?

    Hi!
    I just read in the packagerforiphone_devguide.pdf
    Make bitmaps in sizes that are close to, but less than, 2^n by 2^m bits. The dimensions do not have to be power of 2,
    but they should be close to a power of 2, without being larger. For example, a 31-by-15–pixel image renders faster
    than a 33-by-17–pixel image. (31 and 15 are just less than powers of 2: 32 and 16.) Such images also use memory
    more efficiently.
    Can someone explain this to me? As a game developer I'm used to power-of-2 textures. Why should they be smaller than a power of 2 instead of exactly a power of 2?

    I have read that too, and it doesn't make sense. The article would have been a lot clearer if it had used 32x16 as the second example instead of 33x17. Like you, I would expect advice saying not to exceed a power of 2, but it doesn't seem right to say that an exact power of 2 isn't recommended either.
    Hey, I just found an earlier passage that pretty well states that we're right, on page 34:
    "The GPU uses memory allocations that are powers of 2 for each dimension of the bitmap image. For example, the GPU can reserve memory in 512 x 1024 or 8 x 32 sizes. So a 9 x 15-pixel image takes up the same amount of memory as a 16 x 16-pixel image. For cached display objects, you may want to use dimensions that are close to powers of 2 (but not more) in each direction. For example, it is more efficient to use a 32 x 16-pixel display object than a 33 x 17-pixel display object."
    So there you go, a 32x16 bitmap does use less memory than a 33x17 bitmap.
    I think, or hope, that the later part was just making the point that you don't have to use exact powers of 2, but in doing so you want to not go over the power of 2. Hence using examples of 31x15 and 33x17. They didn't use 32x16 as an example because those are powers of 2, which wasn't what was being illustrated.
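The guide's memory rule, as read above, can be sketched numerically. This is a toy model only (the function name is mine, not a Flash/AIR API): each dimension's cost is the next power of 2 at or above it.

```python
def gpu_alloc_pixels(w, h):
    """Toy model of the guide's rule: the GPU reserves a power-of-2
    allocation per dimension, so cost = roundup_pow2(w) * roundup_pow2(h)."""
    def next_pow2(n):
        p = 1
        while p < n:
            p *= 2
        return p
    return next_pow2(w) * next_pow2(h)

print(gpu_alloc_pixels(9, 15))    # 256  -- same as a 16 x 16 image, as the guide says
print(gpu_alloc_pixels(31, 15))   # 512  (rounds up to 32 x 16)
print(gpu_alloc_pixels(32, 16))   # 512  -- exact powers of 2 cost nothing extra
print(gpu_alloc_pixels(33, 17))   # 2048 (rounds up to 64 x 32: 4x the memory)
```

Which supports the reading above: 31x15 and 32x16 cost the same; only exceeding a power of 2 hurts.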

  • Display more / less than 3 Radio Buttons in RadioButtonGroupByIndex

    Hi
    I have started learning web dynpro ABAP recently.
    My question is -:
    When we use "RadioButtonGroupByIndex", by default 3 radio buttons are displayed.
    If I want more or fewer radio buttons (say 6 or 2), how can I proceed?
    Regards,
    Debi
    Edited by: Debidutta Mohanty on May 25, 2010 2:47 PM

    Like all ByIndex UI elements, the number of inner instances (in this case radio buttons) is controlled by the number of elements in the context node that the UI element is bound to. You control the actual number of radio buttons by how many context elements are in the context node at runtime (this will not be reflected in the View Editor preview area, since it is determined at runtime).
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/bb/69b441b0133531e10000000a155106/frameset.htm

  • ORA-21560: while size is less than 1 GB for a CLOB variable

    Hi ,
    I use DBMS_XSLPROCESSOR.CLOB2FILE to write data from a CLOB variable (Oracle 10g) to a TXT file on UNIX. I assumed that
    the CLOB could hold around 4 GB of data, but it throws an "Out of Range" error if the size exceeds
    22502381 bytes, which is not at all sufficient for my requirement. I want to collect around 3 GB in the variable.
    Is that possible? Please help.
    2008-12-22 11:08 Error Occurred
    ORA-21560: argument 2 is null, invalid, or out of range
    Declare
        Xfile       CLOB;
        v_buffer    VARCHAR2(32767);
        v_eol       VARCHAR2(2);
        v_eollen    PLS_INTEGER;
        c_maxline   CONSTANT PLS_INTEGER := 32767;
        v_lines     PLS_INTEGER := 0;
        v_dir       VARCHAR2(1000) := 'XXX/YYY';
    Begin
        -- platform-dependent line terminator
        v_eol := CASE
                     WHEN DBMS_UTILITY.PORT_STRING LIKE 'IBMPC%'
                     THEN CHR(13)||CHR(10)
                     ELSE CHR(10)
                 END;
        v_eollen := LENGTH(v_eol);
        DBMS_LOB.CREATETEMPORARY(Xfile, TRUE);
        FOR r IN ( SELECT desc1||'~'||desc2||.....desc30 AS csv FROM desc_t )
        LOOP
            IF LENGTH(v_buffer) + v_eollen + LENGTH(r.csv) <= c_maxline THEN
                v_buffer := v_buffer || v_eol || r.csv;
            ELSE
                IF v_buffer IS NOT NULL THEN
                    DBMS_LOB.WRITEAPPEND(
                        Xfile, LENGTH(v_buffer) + v_eollen, v_buffer || v_eol
                    );
                END IF;
                v_buffer := r.csv;
            END IF;
            v_lines := v_lines + 1;
        END LOOP;
        -- flush the last buffer
        IF LENGTH(v_buffer) > 0 THEN
            DBMS_LOB.WRITEAPPEND(
                Xfile, LENGTH(v_buffer) + v_eollen, v_buffer || v_eol
            );
        END IF;
        DBMS_XSLPROCESSOR.CLOB2FILE(Xfile, v_dir, 'V_DESC,TXT');
        DBMS_LOB.FREETEMPORARY(Xfile);
    Exception
        When Others Then
            dbms_output.put_line('Error '||v_lines||' --'||SQLERRM);
            dbms_output.put_line('Error '||dbms_lob.getlength(Xfile));
    End;
    Thanks

    Well, ORA-21560 could indicate file opening issue. Check the file name 'V_DESC,TXT'. Most likely your UNIX either does not allow commas in file name or requires them to be escaped. And anyway, I believe it is a typo and you meant 'V_DESC.TXT'.
    SY.

  • Creating Custom Page Size less than one inch

    I currently have Acrobat 6 on Windows XP, and it appears that I cannot set a custom page size less than one inch (Using Printer Preferences). Is there another method I can use to create a PDF from Publisher that has a page size less than one inch? If not, would updating to Acrobat 9 allow me to print to a custom page size of less than one inch?

    Thanks Peggy,
    I tried deleting the preference file and restarting Pages but I still have the same problem with the custom page size. When I start Pages I am presented with a bunch of templates so I choose Blank then I go through my procedure to select the custom page size that I created and always end up back at an A4 document.
    I have used the demo version of Stone Create to do what I needed to do with the small page size (ie. create a small ad) but it is a pain that I paid for Pages and now will need to buy something like Stone Create to do what I need. Although, that said, having used Stone Create for this little ad I must say that for the price it seems like a pretty decent program that will do a whole lot of the desktop publishing type of functions without the rather steep prices of the Adobe products. I kind of wish I had bought this program instead of Pages now.
    Cheers,
    Graham

  • How to configure an image to specific parameters. Needed: no more than 1200 pixels on the long side, JPEG format, no less than 5 MB. Image will be viewed on a large standup screen. (Original image is 22 MB.)

    I often have requests for images to be sent within certain parameters:  such as "no longer than 1200 pixels on longest side + jpeg format + file size no less than 5 mb."  Images will be viewed on a large stand up screen.  My original image sizes are usually around 22 mb.  When I use Export and specify 1200 pixels (quality setting is all the way over to 12), Save the file to a folder on the desktop, and then open the image in Preview, Inspector shows that the file size is now 233 KB.  How do I meet all specifications without reducing the image size so drastically?
    Thank you.

    Pardon, it reduced it down to 833 KB, not 233 (I mixed it up with another image), from the original 22 MB RAW image. Should it reduce it that much?
    That will depend on the amount of detail in your jpeg. Given a certain pixel size and level of quality, a jpeg with many details will require a larger file size than a jpeg with large plain regions of constant color.
    Is 833 KB a big enough file size to view on a big screen?
    The file size is really unimportant for the quality. What counts is the pixel size and the jpeg quality you set. What is the pixel size of the large screen? That should match the pixel size you are using in your export preset. 1200 pixels on the longest dimension is extremely unlikely for a large screen; even the 17" display of my MacBook Pro has a larger pixel size, 1920x1200.
    I share William Lloyd's doubts that the specifications you were given are correct. I'd try to export with a larger pixel size and a high jpeg quality, perhaps 10.
    If you export with 10 megapixels and a good jpeg quality, you will probably get a 5 MB file, not that the size should matter.

  • IPhone4 stores only less than 3GB of photos/videos

    Hello All,
    I have a 16GB iPhone 4, and I've now realised that it stores less than 3 GB worth of photos/videos (HD) before saying there is no more space left.
    When I check through iTunes it says 12.3 GB of photos, 1.17 GB apps and 0.06 GB Others, and nothing free...
    But when I copy this to my machine the folder size is less than 3 GB -:(
    Should it not be able to store much more ?
    Once I had 120 photos + videos
    and another time 75 photos + videos...
    I know the length of the videos matter, but again the storage in size should be much more than 3 GB, right ?
    Any information is appreciated !!
    Thanks in Advance !!
    Best
    Naresh
    PS - I am at the moment upgrading the OS to see if that helps

    Please ignore this question; I guess I had created another library on the phone which I had forgotten about.
    Thanks for reading !!

  • OR is taking much more time than UNION

    hi gems..
    I have written a query using the UNION clause and it took 12 seconds to return its result.
    Then I wrote the same query using the OR operator, and it took 78 seconds to return the result set.
    The tables referred to by this query have no indexes.
    The cost in the plan for the query with OR is also much lower than for the one with UNION.
    Please suggest why OR is taking more time.
    thanks in advance

    Here's a ridiculously simple example.  (these tables don't even have any rows in them)
    If you had separate indexes on col1 and col2, the optimizer might use indexes in the union but not in the or statement:
    Which is faster will depend on the usual list of things.
    Of course, the union also requires a sort operation.
    SQL> create table table1
      2  (col1 number, col2 number, col3 number, col4 number);
    Table created.
    SQL> create index t1_idx1 on table1(col1);
    Index created.
    SQL> create index t1_idx2 on table1(col2);
    Index created.
    SQL> explain plan for
      2  select col1, col2, col3, col4
      3  from table1
      4  where col1 >= 123
      5  or col2 <= 456;
    Explained.
    SQL> @xp
    | Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |        |     1 |    52 |     2   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| TABLE1 |     1 |    52 |     2   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter("COL1">=123 OR "COL2"<=456)
    SQL> explain plan for
      2  select col1, col2, col3, col4
      3  from table1
      4  where col1 >= 123
      5  union
      6  select col1, col2, col3, col4
      7  from table1
      8  where col2 <= 456;
    Explained.
    SQL> @xp
    | Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |         |     2 |   104 |     4  (75)| 00:00:01 |
    |   1 |  SORT UNIQUE                  |         |     2 |   104 |     4  (75)| 00:00:01 |
    |   2 |   UNION-ALL                   |         |       |       |            |          |
    |   3 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
    |*  4 |     INDEX RANGE SCAN          | T1_IDX1 |     1 |       |     1   (0)| 00:00:01 |
    |   5 |    TABLE ACCESS BY INDEX ROWID| TABLE1  |     1 |    52 |     1   (0)| 00:00:01 |
    |*  6 |     INDEX RANGE SCAN          | T1_IDX2 |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("COL1">=123)
       6 - access("COL2"<=456)

  • After system copy, new system size is less than old system

    Hii,
    Our client system is being upgraded, so a Production BW system copy was made to a new system.
    The new system size is smaller than the Production system.
    Could anyone let me know why this happened? Everything available in the production BW is available in the new system, yet the new system's size is less.
    Regards,
    sharma.

    Usually when you delete data from a table, the table space does not get released; you have to run a reorg on the database so that the additional space is released. If you do the same on the production system then the sizes should get aligned. Also run BRCONNECT on the copy so that everything is in line.
    Arun

  • The file structure online redo log, archived redo log and standby redo log

    I have read some Oracle documentation on file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or set of settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 kinds of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
    1. Online redo logs --- These must exist on the primary database and on a logical standby database. They are not strictly necessary on a physical standby, because a physical standby is not open and doesn't generate redo. However, if online redo logs are not set up on the physical standby, how can the standby operate after a failover makes it the primary? In my standby databases, online redo logs have been set up.
    2. Archived redo logs --- Obviously the primary database, logical standby and physical standby all need these set up. The primary uses them to archive log files and ship redo to the standby; the standby uses them to receive the redo and apply it to the database.
    3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that it is used to store redo data received from another database, and that a standby redo log is required if you want to implement the maximum protection and maximum availability levels of data protection, real-time apply, or cascaded destinations. So it seems that standby redo logs should only be set up on the standby database, not on the primary. Is my understanding correct? Reviewing the current settings in my environment, I found that standby redo log directories and files have been set up on both the primary and the standby databases. I would like to get more information and education from the experts. What is the best setup or structure on the primary and standby databases?

    FZheng:
    Thanks for your input. It is clear that we need all 3 types of redo logs on both databases; that answers my question.
    But I have another one. The Oracle documentation says that if you have configured a standby redo log on one or more standby databases in the configuration, you should ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
    My current Data Guard environment is set up as follows: on the primary DB, the online redo log groups are 512M and the standby redo log groups are 500M; on the standby DB, the online redo log groups are 500M and the standby redo log groups are 750M.
    This was set up by someone I don't know. Is this setting OK, or should I change the standby redo logs on the standby DB to 512M to exactly match the redo log size on the primary?
    Edited by: 853153 on Jun 22, 2011 9:42 AM
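A minimal sketch of that sizing rule, using the poster's numbers (the helper name is mine; the documented minimum quoted elsewhere in these threads is "at least as large as the largest online redo log of the redo source", with identical sizes recommended for administrative ease):

```python
def standby_redo_large_enough(source_online_mb, standby_redo_mb):
    """Every standby redo log file must be at least as large as the
    largest online redo log file of the redo source database."""
    largest_source_log = max(source_online_mb)
    return all(s >= largest_source_log for s in standby_redo_mb)

# Poster's setup, sizes in MB:
print(standby_redo_large_enough([512], [750]))  # True: standby's 750M SRLs cover the primary's 512M logs
print(standby_redo_large_enough([500], [500]))  # True: primary's 500M SRLs cover the standby's 500M logs
```

So the mismatched sizes are untidy but not below the documented minimum in either direction; making everything one size would still be the cleaner setup.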

  • Recover Database is taking more time for first archived redo log file

    Hai,
    Environment Used :
    Hardware : IBM p570 machine with P6 processor Lpar of .5 CPU and 2.5 GB Ram
    OS : AIX 5.3 ML 07
    Cluster: HACMP 5.4.1.2
    Oracle Version: 9.2.0.4 RAC
    SAN : DS8100 from IBM
    I used the flash copy option to copy the database from production to a test machine, then tried to recover the database to a consistent state using the command "recover automatic database until cancel". The system was taking a long time, and from the alert log it was found that, for the first archived log only, it reads all the datafiles at about 3 seconds per datafile. Since I have more than 500 datafiles, it takes nearly 25 minutes to apply the first archived redo log file. All other log files are applied immediately without any delay. Any suggestion to improve the speed will be highly appreciated.
    Regards
    Sridhar

    After changing the LPAR settings to 2 CPUs and 5 GB RAM, the problem was solved.

  • Where RFS exactly write redo data ?  ( archived redo log or standby redo log ) ?

    Good Morning to all ;
    I am getting a bit confused by this official Oracle link. REF_LINK: Log Apply Services
    Redo data transmitted from the primary database is received by the RFS on the standby system ,
    where the RFS process writes the redo data to either archived redo log files  or  standby redo log files.
    On the standby site, does RFS write the redo data to one of these files, or to both?
    Thanks in advance ..

    Hi GTS,
    GTS (DBA) wrote:
    Primary & standby log file sizes should be the same - this is okay.
    1) What are you trying to disclose about largest & smallest here? You are confusing me.
    Read: http://docs.oracle.com/cd/E11882_01/server.112/e25608/log_transport.htm#SBYDB4752
    "Each standby redo log file must be at least as large as the largest redo log file in the redo log of the redo source database. For administrative ease, Oracle recommends that all redo log files in the redo log at the redo source database and the standby redo log at a redo transport destination be of the same size."
    GTS (DBA) wrote:
    2) What about group members? Should they be the same as the primary, or do we need to add some members additionally?
    Data Guard best practice for performance is to create one member per group in the standby DB. On the standby DB, one member per group is reasonable enough. Why? To avoid a write penalty from writing to more than one log file at the standby DB.
    SCENARIO 1: if in your source primary DB you have 2 log members per group, in the standby DB you can have 1 member per group, and additionally create an extra group:

                              primary   standby
        Members per group        2         1
        Number of log groups     4         5

    SCENARIO 2: you can also have this scenario, but I will not encourage it:

                              primary   standby
        Members per group        2         2
        Number of log groups     4         5
    GTS (DBA) wrote:
    All standby redo logs of the correct size have not yet been archived.
    - in this situation, can we force it on the standby site? Any possibilities?
    You cannot force it; just size your standby redo files correctly and make sure you do not have network failures that will cause a redo gap.
    hope there is clarity now
    Tobi
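Both scenarios above follow the usual rule of thumb for the number of standby redo log groups. As a sketch (the "(groups + 1) per thread" formula is general Data Guard guidance, and the function name is mine):

```python
def recommended_standby_groups(primary_log_groups, threads=1):
    """Rule of thumb: standby redo log groups =
    (online redo log groups + 1) for each redo thread."""
    return (primary_log_groups + 1) * threads

print(recommended_standby_groups(4))             # 5, as in both scenarios above
print(recommended_standby_groups(4, threads=2))  # 10 for a two-instance RAC primary
```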

  • Hoping for a quick response : EXP and Archived REDO log files

    I apologize in advance if this question has been asked and answered 100 times. I admit I didn't search, I don't have time. I'm leaving on vacation tomorrow, and I need to know if I'm correct about something to do with backup / restore.
    we have 10g R2 running a single instance on a single server. The application vendor has "embedded" oracle with their application. The vendor's backup is a batch file using EXP - thus:
    exp system/xpwdxx@db full=y file=D:\Orant\admin\db\EXP\db_full.dmp log=D:\Orant\admin\db\EXP\db_full.txt direct=y compress=y
    This command is executed nightly at midnight. The files are then backed up by our nightly backup to offsite storage media.
    The database is running in autoarchive mode. The problem is, the archived redo files filled the drive they were being stored on, which is also the drive the database is on. I used OS commands to move 136G of archived redo logs onto other storage media to free the drive.
    My question: since the EXP runs at midnight, when there is likely NO activity, do I need to run in autoarchive mode? From what I have read, you cannot even apply archived redo log files to this type of backup strategy (IMP). Is that true? We are OK losing changes since our last EXP. I have read a lot about restoring consistent vs. inconsistent backups, and just need to know: if my disk fails and I have to start with a clean install of Oracle and nothing else, can I IMP this EXP and get back up and running as of the last EXP? Or do I need the archived redo log files back to July 2009 (136G of them)?
    Hoping for a quick response
    Best Regards, and thanks in advance
    Bruce Davis

    Bruce Davis wrote:
    Amardeep Sidhu
    Thank you for your quick reply. I am reading in the other responses that since I am using EXP without consistent=y, I might not even have a backup. The application vendor said that with this dmp file they can restore us to the most recent backup. I don't really care for this strategy as it is untested. I asked them to verify that they could restore us and they said they tested the dmp file and it was OK.
    Thank you for taking the time to reply.
    Best Regards
    Bruce
    The dump file is probably OK in the sense that it is not corrupted and can be used in an imp operation. That doesn't mean the data in it is transactionally consistent. And to use it at all, you have to have a database up and running. If the database is physically corrupted, you'll have to rebuild a new database from scratch before you can even think about using your dmp file.
    Vendors never understand databases. I once had a vendor tell me that Oracle's performance would be intolerable if there were more than 5 concurrent connections. Well, maybe in HIS product ..... Discussions terminated quickly after he made that statement.

  • SAP ISR -XI - SAP POS. File size more than 11 KB failing in Inbound

    Hi All,
    We are implementing SAP ISR- XI - POS Retail implementation using
    Standard Content Store Connectivity 2.0, GM Store Connectivity 1.0, and
    other contents.
    In our inbound scenario (File-RFC), we pick up files from an FTP server
    for sales and totals data. If the size of a sales data file in
    the format *_XI_INPUT.DAT is greater than 11 KB, it fails at the XI
    mapping level, saying "Exception occurred during XSLT mapping 'GMTLog2IXRetailPOSLog' of the application". We have tried and tested at the mapping level and found no error, since files below 11 KB are processed successfully with the same mappings. Also, this is a standard mapping delivered by SAP in the XI content Store Connectivity 2.0.
    On the XI side we have processed a file of e.g. 40 KB by splitting the record data into
    files of less than 11 KB each, and these are processed successfully, but the 40 KB file itself fails.
    XI Server: AIX server.
    There may be some memory setting missing or some Basis problem. Kindly let me know how to proceed.
    Regards,
    Surbhi Bhagat

    hi,
    It is hard to believe that such small files cannot be processed.
    Do your XI mappings work for any other flows with payloads of more than 11 KB?
    Let me know about that, and then we will know some more,
    as this is really a very small size.
    Maybe your XI was installed on a PocketPC.
    Regards,
    Michal Krawczyk
