SQLSERVER 2012 LOG SIZE INCREASES EXTENSIVELY

Hi Friends,
I want to ask: our SQL Server 2012 database is in full recovery mode, and I notice that whenever we execute the rebuild-indexes maintenance job, the log file grows extensively every time.
thank you.
regards,
asad

Asad,
Why are you posting the same thread about the SQL Server log file every time? I can see all your previous threads on the same log file issue.
This clearly shows that you don't spend even a single moment reading the links posted as answers. Why don't you go and read about SQL Server before asking *almost* the same question every time?
Yes, the log file will grow; that is the default behavior. Why are you executing an index rebuild for every index? Did you research how index rebuild works and what gets logged?
You can read here
Curious case of logging in Online and offline index rebuild
Use Ola Hallengren's maintenance solution for index rebuilds; it only rebuilds or reorganizes indexes that are actually fragmented (a rough illustration of that idea follows below).
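To illustrate the idea, here is a minimal hand-rolled sketch (this is not Ola Hallengren's script; the 30% / 1000-page thresholds and the index name in the comment are assumptions):

-- Check fragmentation first and only rebuild what needs it (current database)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30   -- assumed threshold
  AND ips.page_count > 1000;                  -- ignore tiny indexes

-- Then rebuild only the indexes returned above, for example:
-- ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REBUILD;   -- hypothetical names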
Don't just blindly post a question; spend some time searching the net and I am sure you will find a lot of articles.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP

Similar Messages

  • SQLSERVER 2012 LOG BACKUP

    Hi Friends,
    I want to ask: our SQL Server 2012 database is in full recovery mode, and whenever we take a backup using NetBackup, the log file size does not shrink or reduce. My question is: what is the default behavior of SQL Server 2012 after a log backup is taken, should the log file
    size reduce or not? The log file size keeps increasing, which is creating a problem.
    thank you.
    regards,
    asad

    Please find the figures below; we experience sudden, extensive growth while rebuilding indexes, and we shrink the log file manually.
    Name    Log Size (MB)   Log Space Used (%)   Log Space Free (MB)   Sample Time
    PROD    102775.9922     98.71157074          1324.195956           3/9/15 1:00 PM
    PROD    248184          99.48280334          1283.599347           3/9/15 2:00 PM
    PROD    248184          99.4907074           1263.982753           3/9/15 3:00 PM
    PROD    104823.9922     99.01555634          1031.93315            3/9/15 1:00 PM
    PROD    248184          99.62797546          923.3053748           3/9/15 4:00 PM
    PROD    749.8046875     4.465681076          716.3208015           3/8/15 11:00 AM
    PROD    749.8046875     5.280020714          710.2148447           3/10/15 8:00 AM
    PROD    749.8046875     6.103217125          704.0424794           3/11/15 10:00 AM
    PROD    749.8046875     7.214378834          695.7109368           3/9/15 6:00 PM
    PROD    749.8046875     8.520057678          685.9208957           3/8/15 12:00 PM
    PROD    749.8046875     9.84631443           675.9765604           3/8/15 1:00 PM
    PROD    749.8046875     9.94217205           675.2578154           3/8/15 2:00 PM
    PROD    749.8046875     10.02917385          674.6054718           3/10/15 6:00 PM
    PROD    749.8046875     10.04324055          674.4999991           3/8/15 3:00 PM
    PROD    749.8046875     10.60015583          670.3242222           3/8/15 4:00 PM
    PROD    749.8046875     13.66189098          647.3671885           3/8/15 5:00 PM
    PROD    749.8046875     13.92289639          645.4101578           3/10/15 11:00 PM
    PROD    749.8046875     14.33654594          642.308594            3/11/15 3:00 PM
    PROD    749.8046875     14.72414684          639.4023443           3/10/15 9:00 AM
    PROD    749.8046875     17.50924683          618.5195341           3/10/15 2:00 PM
    PROD    749.8046875     18.69497299          609.6289037           3/8/15 6:00 PM
    PROD    749.8046875     18.86376572          608.3632879           3/9/15 7:00 PM
    PROD    749.8046875     19.96249008          600.1250011           3/9/15 2:00 PM
    PROD    749.8046875     21.41690636          589.2197197           3/8/15 7:00 PM
    PROD    749.8046875     23.57749367          573.0195348           3/11/15 11:00 AM
    PROD    749.8046875     24.80978203          563.7797789           3/8/15 8:00 PM
    PROD    749.8046875     24.98254776          562.4843733           3/10/15 7:00 PM
    PROD    749.8046875     26.69236755          549.6640644           3/8/15 9:00 PM
    PROD    749.8046875     26.81687927          548.7304697           3/8/15 10:00 PM
    PROD    749.8046875     26.84032249          548.5546913           3/8/15 11:00 PM
    PROD    749.8046875     26.89606667          548.1367189           3/9/15 7:00 AM
    PROD    749.8046875     27.09090996          546.6757747           3/9/15 8:00 AM
    PROD    749.8046875     27.33628464          544.8359439           3/11/15 7:00 AM
    PROD    749.8046875     27.69386482          542.1547909           3/5/15 10:04 AM
    PROD    749.8046875     28.27455139          537.8007758           3/11/15 4:00 PM
    PROD    749.8046875     28.71086311          534.5292901           3/9/15 9:00 AM
    PROD    749.8046875     28.7757225           534.0429713           3/9/15 8:00 PM
    PROD    749.8046875     29.53060722          528.3828103           3/10/15 10:00 AM
    PROD    749.8046875     31.06902885          516.8476528           3/10/15 3:00 PM
    PROD    749.8046875     31.37275314          514.5703138           3/9/15 10:00 AM
    asad

  • Weblogic 8.1 Server log size increase in Production environment

    Hi,
    Issue:: One of the log file is increasing in size and exceeding beyond the size mentioned in the configuration file resulting in application outage.
    Issue description:
    We are having problems with the log size on the WebLogic 8.1 server. The FileMinSize has been specified in config.xml.
    New log files like MYsvr.log00001, MYsvr.log00002, MYsvr.log00003, MYsvr.log00004, etc. are being generated appropriately when the max file size is reached. But at the same time, one of the files keeps growing, exceeding the limit specified in the configuration file (e.g. the MYsvr.log00001 file is 800MB in size while the other files, MYsvr.log00002, MYsvr.log00003, etc., are 10MB in size).
    This increase in size of the log has been resulting in an application outage.
    More Details:
    1. Server: BEA Weblogic 8.1 server
    2. Log size is fine in other environments. This is a problem only in the production environment.
    3. The entry in the config.xml is as follows:
    <Server ListenPort="6313" Name="MYsvr" NativeIOEnabled="true" TransactionLogFilePrefix="./">
    <ServerStart Name="MYsvr"/>
    <Log FileMinSize="10000" FileName="MYsvr.log" Name="MYsvr"
    NumberOfFilesLimited="true" RotationType="bySize"/>
    <SSL Name="MYsvr"/>
    <ServerDebug Name="MYsvr"/>
    <WebServer Name="MYsvr"/>
    <ExecuteQueue Name="default" ThreadCount="15"/>
    <KernelDebug Name="MYsvr"/>
    </Server>
    Could you please help with this issue ?
    Thank you.

    Can someone please provide a solution for the issue

  • SQL Logs Size Increasing automatically

    Hi,
    I am facing a very strange issue in my SQL Server 2008 R2 Logs folder.
    Actually, every 10 seconds files named SQLDump0000, SQLDump0001, ... and so on are being created automatically in the Logs folder where SQL Server stores its logs. I don't know why this is happening.
    Due to this issue the Logs folder has grown to 160GB, and my C drive, where Windows is installed, keeps showing a low-space message; in fact it shows 0MB free.
    Please help urgently.

    Hello Zubair,
    These dumps are getting created because of some issue SQL Server is hitting; they are not SQL Server transaction log files. I guess your system is not updated to the latest service pack.
    The latest service pack for SQL Server 2008 R2 is SP2. Apply this SP and see if the dump generation subsides. If not, you need to raise a case with Microsoft to get these dumps analyzed.
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Slow inserts after log size increase

    I have an app that does 10k-50k consecutive SELECTs and INSERTs as part of a loading operation. Recently, I enlarged the redo logs from 10M to 100M, which appears about right, based on the frequency of log switches (half a dozen/day). Trouble is, now the load takes 5 times longer (20-25 min vs. 5). Any thoughts? I thought redo logs that were too small caused poor performance, b/c of increased checkpoint activity, but I can't imagine why larger redo logs would slow inserts down.
    Rob

    Logically the only difference a larger online redo log should make is that the number of checkpoints invoked due to log switches should decrease. But a larger online redo log will take longer to archive.
    Where did you place the larger online redo log and where does the archive redo log go to? Is this performance decrease consistent or does it appear when certain redo logs are in use?
    Even if your logical disks are different, it could be that the physical disks behind some of your files are now the same, where before this was not true.
    The suggested statspack is probably a good place to start, but you might also want to capture some OS I/O stats during the run as well.
    HTH -- Mark D Powell --
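    As a starting point for those placement questions, a hedged sketch of the checks (plain dictionary views, not something from this thread):

    -- Where do the online redo log members live?
    SELECT group#, member FROM v$logfile ORDER BY group#;

    -- Where is the archiver writing, and is the destination valid?
    SELECT dest_name, destination, status
    FROM   v$archive_dest
    WHERE  status = 'VALID';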

  • Why size of archive log file increasing in merge clause

    My database is running in archive log mode.
    Someone is running an Oracle MERGE statement, and it is still running.
    He will issue a commit after the operation.
    During that period the redo log is growing.
    My question is: why is the size of the archived log increasing along with the redo log?
    I understood that the archived log should only be generated after the commit (maybe that is wrong).
    Please suggest.
    Edited by: 855516 on Mar 13, 2012 11:18 AM

    855516 wrote:
    my database is running in archive log mode.
    someone is running oracle merge statement. still it is running.
    He will issue commit after the operation.
    in that period redolog file increasing now.
    my question is why size of archive log file increasing with redolog file.
    i know that after commit archive log file should generate.(may be it is wrong).
    No, this is not correct; archived logs are not generated only after the commit. A MERGE statement causes an insert (if the data is not already present) or an update (if it is). Obviously these operations will generate a lot of redo if the amount of data being processed is high.
    If you feel that this operation is causing excessive redo, then a root-cause analysis should be done.
    For that, use LogMiner (an excellent tool to provide a segment-level breakdown of redo). V$LOGMNR_CONTENTS has columns for the redo block and redo byte address associated with the current redo change.
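    A hedged sketch of such a LogMiner session (the archived log path is hypothetical; run as a suitably privileged user):

    BEGIN
      -- register one archived (or online) redo log with LogMiner
      DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_123.arc', OPTIONS => DBMS_LOGMNR.NEW);
      -- use the online catalog as the dictionary
      DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    END;
    /

    -- rough segment-level breakdown of the changes recorded in that log
    SELECT seg_owner, seg_name, operation, COUNT(*) AS change_count
    FROM   v$logmnr_contents
    GROUP  BY seg_owner, seg_name, operation
    ORDER  BY change_count DESC;

    EXEC DBMS_LOGMNR.END_LOGMNR;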
    There are some guidelines for reducing redo (which may vary by environment):
    1) Check whether there are unwanted indexes on the tables referenced in the MERGE. If yes, removing them could bring down the redo.
    2) Use global temporary tables to reduce redo (if there is a need to keep data only temporarily in a session).
    3) Use NOLOGGING if possible (but be aware of its implications).
    Hope this helps

  • SSIS 2008 Rebuild Index Task increasing Log Size

    I am testing the SSIS 2008 Rebuild Index Task on a single database (db1). I shrank db1's log file back to its initial size. I also checked the "Sort results in tempdb" box on the SSIS package.
    However, when I run the package, db1's log file grows to about 55 times its original size.
    When I run a rebuild of all the indexes on db1 with (SORT_IN_TEMPDB = ON), there is only a slight increase in the log file size (it does not even double the initial size).
    Is this an SSIS bug? Is the check box not actually sorting in tempdb?

    Arthur, can you please move this thread to the Database Engine forum? IMO it comes down to what changed in 2008: index rebuild is fully logged from SQL Server 2008 onwards, whereas previously (in 2005) it was minimally logged. Refer to the link below for more information:
    http://support.microsoft.com/kb/2407439/en-gb
    Now, about when sort in tempdb is used: the intermediate sort results used to build the index are stored in tempdb. When you rebuild without SORT_IN_TEMPDB (I guess your data and log files are on the same drive), the index uses disk space on the drive where it resides, so it might look to you as if the log has grown. Is that what happened, am I correct?
    What query did you use to measure the log file size? Are you absolutely sure it was the log file that increased?
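    For that last check, a minimal sketch (db1 is the database name from the question; the rest is generic):

    USE db1;
    -- per-file sizes; the size column is in 8 KB pages
    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM   sys.database_files;

    -- log size and percent used for every database on the instance
    DBCC SQLPERF(LOGSPACE);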
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • I am handling the logistics department in a company, handling more than 100 calls a day. But my iPhone 4 keeps only 100 numbers in its call history. How do I increase my call log size, or can you suggest a better app for storing one month of call history?

    I am handling the logistics department in a company and I handle more than 100 calls a day, but my iPhone 4 keeps only the last 100 numbers in its call history. How can I increase my call log size, or can you suggest a better app for storing one month of call history?

    Here's one:
    https://itunes.apple.com/us/app/callog/id327883585?mt=8

  • How to Increase database / log size of mirrored database

    Hi All,
    I have been asked to increase the default database and log size of two databases, both of which are mirrored.
    I have been trying to find out what else I need to do in this case other than the standard resizing.
    Are there extra steps I need to perform as these dbs are mirrored (SQL Mirroring)?
    Farren

    Hi Farren,
    Based on my test, if you choose to expand the mirrored database by increasing the size of an existing data or log file, you can directly execute the ALTER DATABASE statement on the principal server as in Uri's post, but before that please make sure the mirroring
    state is SYNCHRONIZED. You can check the mirroring configuration and partner status with the following query.
    SELECT DB_NAME(DATABASE_ID) 'DBNAME',mirroring_state,
    mirroring_state_desc,mirroring_role,mirroring_role_desc,
    mirroring_partner_name,mirroring_partner_instance
    FROM sys.database_mirroring
    WHERE mirroring_guid IS NOT NULL
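    For the simple resize case itself, a minimal sketch (the database and logical file names are hypothetical):

    -- run on the principal; the size change is redone automatically on the mirror
    ALTER DATABASE MyMirroredDb
    MODIFY FILE (NAME = MyMirroredDb_log, SIZE = 10240MB);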
    However, if you choose to expand the mirrored database by adding a new file to the database, you need to perform extra steps as follows.
       1. Check the mirroring configuration and partner status with above query.
       2. Run the below command on the principal server to break the mirror.  
    ALTER DATABASE DatabaseName SET PARTNER OFF
       3. Create your database file on the principal server.
       4. Run a log backup on the principal server.
       5. Restore this log backup on the mirrored server using the NORECOVERY and MOVE options.
       6. Re-establish mirroring between the servers for this database.
    Reference:
    Increase the Size of a Database
    How to add a database file to a mirrored SQL Server database
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Why the flashback log'size smaller than the archived log ?

    Hi all, why is the flashback log's size smaller than the archived log?

    Lonion wrote:
    Hi all, why is the flashback log's size smaller than the archived log?
    They are different things.
    Flashback log size depends on the parameter DB_FLASHBACK_RETENTION_TARGET, i.e. how much you want to keep.
    An archived log file is a dump of an online redo log file; it can be the same size as the online redo log or smaller, depending on how much of the online redo log had been used when the switch occurred.
    Some more information:-
    Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under Oracle's control. In the current Oracle environment, during normal database activity flashback log files have a size of 8200192 bytes, which is very close to the current redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup activities, and flashback log file sizes can also differ during intensive write activity.
    Source:- http://dba-blog.blogspot.in/2006/05/flashback-database-feature.html
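    If you want to see the current flashback log footprint and retention target for yourself, a small sketch:

    -- space currently taken by flashback logs, versus the retention target (in minutes)
    SELECT retention_target, flashback_size, estimated_flashback_size
    FROM   v$flashback_database_log;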
    Edited by: CKPT on Jun 14, 2012 7:34 PM

  • How to change redo log size in oracle 10g

    Hi Experts,
    Can anybody confirm how to change redo log size in oracle 10g?
    Amit

    Dear Amit,
    You can enlarge the size of the existing online redo log files by adding new groups with files of a different size (origlog$/mirrlog$) and then carefully dropping the old groups with their associated inactive files.
    Please refer SAP Note 309526 - Enlarging redo log files to perform the activity.
    Steps to perform:
    STEP-1. Analyze the existing situation and prepare an action plan.
    A. You have to ensure that no more than one log switch per minute occurs during peak times.
    It may also be necessary to increase the size of the online redo logs until they are large enough.
    Too many log switches lead to too many checkpoints, which in turn lead to a high writing load in the I/O subsystem.
    Use ST04 -> Additional Functions -> Display GV$-Views.
    There you can select:
    GV$LOG_HISTORY  --> for determining your existing log switch frequency
    GV$LOG          --> lists the status (INACTIVE/CURRENT/ACTIVE), size, and sequence number of the existing online redo log files
    GV$LOGFILE      --> lists the existing online redo log files with their storage paths
    You can document the existing online redo log configuration before enlarging the redo log files.
    This will be helpful if something goes wrong while performing the activities. A quick way to capture the switch frequency is sketched below.
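    A hedged sketch (plain dictionary views rather than ST04) for counting log switches per hour:

    -- log switches per hour; more than ~60 per hour breaks the "one switch per minute" rule above
    SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
           COUNT(*)                               AS switches
    FROM   v$log_history
    GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
    ORDER  BY 1;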
    B. Based on the above analysis, plan your new redo log groups and their members with a new, optimal size.
    e.g.
    Group No.   /oracle/<SID>/origlogA   /oracle/<SID>/mirrlogA   Size
    15          log_g15m1.dbf            log_g15m2.dbf            100 MB
    17          log_g17m1.dbf            log_g17m2.dbf            100 MB

    Group No.   /oracle/<SID>/origlogB   /oracle/<SID>/mirrlogB   Size
    16          log_g16m1.dbf            log_g16m2.dbf            100 MB
    18          log_g18m1.dbf            log_g18m2.dbf            100 MB
    Continue to next.....

  • Change SCSM 2012 Portal Size File Attachment

    Hi friends.
    I'm looking for a way to change the SCSM 2012 portal file attachment size limit, but I can't find the solution.
    These questions and answers didn't help me: SCSM 2012 Portal Size File Attachment --> http://social.technet.microsoft.com/Forums/en-US/b955d2ec-a1ad-4e50-9d4c-ad22b8a61c5d/portal-file-attachment-max-size?forum=portals
    Many Thanks for your great support.
    Regards.

    Thanks Thomas.
    The problem is in the self-service web portal (incidents), which doesn't allow uploading a file larger than 1MB, and I can't change it. I tried what is described here (the console and modifying the web.config):
    http://social.technet.microsoft.com/Forums/en-US/b955d2ec-a1ad-4e50-9d4c-ad22b8a61c5d/portal-file-attachment-max-size?forum=portals
    http://technet.microsoft.com/en-us/library/ff460924.aspx
    but it doesn't work.
    I want to increase the limit to 2MB.

  • Archived redo log size much less than online redo logs

    Hi,
    My database is around 27 GB and the redo logs are 50M each, but the archived logs are 11M or 13M, and the logs are switching every 5-10 minutes. Why?
    Regards
    Azer Imamaliyev

    Azer_OCA11g wrote:
    1) Almost all archive logs sizes are 11 or 13M.. But sometimes 30, 37M.
    2)
    select to_char(completion_time, 'dd.mm.yyyy HH24:MI:SS')
    from v$archived_log
    order by completion_time desc;
    10.02.2012 11:00:26
    10.02.2012 10:50:23
    10.02.2012 10:40:05
    10.02.2012 10:29:34
    10.02.2012 10:28:26
    10.02.2012 10:18:07
    10.02.2012 10:05:04
    10.02.2012 09:55:03
    10.02.2012 09:40:54
    10.02.2012 09:28:06
    10.02.2012 09:13:44
    10.02.2012 09:00:17
    10.02.2012 08:45:04
    10.02.2012 08:25:04
    10.02.2012 08:07:12
    10.02.2012 07:50:06
    10.02.2012 07:25:05
    10.02.2012 07:04:50
    10.02.2012 06:45:04
    10.02.2012 06:20:04
    10.02.2012 06:00:12
    3) There aren't any serious changes at the DB level; these messages have shown in the alert log almost since the DB was created.
    Two simple thoughts:
    1) Are you running with archive log compression - add the "compressed" column to the query above to see if the archived log files are compressed
    2) The difference may simply be a reflection of the number and sizes of the public and private redo threads you have enabled - when anticipating a log file switch Oracle leaves enough space to cater for threads that need to be flushed into the log file, and then doesn't necessarily have to use it.
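    For point 1, a hedged variant of the earlier v$archived_log query that adds the size and the compressed flag:

    SELECT sequence#,
           blocks * block_size / 1024 / 1024                  AS size_mb,
           compressed,
           TO_CHAR(completion_time, 'dd.mm.yyyy HH24:MI:SS')  AS completed
    FROM   v$archived_log
    ORDER  BY completion_time DESC;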
    Here's a query (if you can run as SYS) to show you your allocation of public and private threads
    select
         PTR_KCRF_PVT_STRAND           ,
         FIRST_BUF_KCRFA               ,
         LAST_BUF_KCRFA                ,
         TOTAL_BUFS_KCRFA              ,
         STRAND_SIZE_KCRFA             ,
         indx
    from
         x$kcrfstrand
    ;
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: <b><em>Oracle Core</em></b>

  • Update Multiple Columns when concerned about redo/undo log sizes.

    Hi ,
    I have update statements that update multiple columns at once if any of them has changed. What I see is that even when a column's value has not changed, the statement still increases the redo size.
    Below is a sample similar to the code I have. Basically I check whether there is a difference in any of the columns to be updated and then update all of them.
    Is there a way to reduce the redo generated without splitting the update statement into one per column? Redo/undo size is a concern for us.
      For i In 1 .. rec.Count Loop
        Update employees e
           Set e.first_name = rec(i).first_name, e.last_name = rec(i).last_name
         Where e.first_name != rec(i).first_name
            Or e.last_name != rec(i).last_name;
      End Loop;
    My database is 10g.

    Muhammed Soyer wrote:
    Redo/Undo log size is a concern for us..
    You are worried about the wrong thing.
    If you are concerned about the amount of undo and redo, you should be less concerned about the small difference between updating 1 or 3 columns, and instead remove the loop, which is contributing to a massive increase in both undo and redo.
    Re: global temporary table row order
    Name                               Run1         Run2         Diff
    STAT...undo change vector size     240,500      6,802,736    6,562,236
    STAT...redo size                   1,566,136    24,504,020   22,937,884

    Run2 shows what adding a loop to a regular SQL statement will do to undo and redo. It made the redo used 15 times greater and the undo almost 30 times greater.
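    To make "remove the loop" concrete, a hedged set-based sketch: it assumes the new values are first loaded into a hypothetical staging table new_names(employee_id, first_name, last_name), since the PL/SQL collection from the example cannot be joined from plain SQL directly.

    -- one statement touches only the rows that really changed,
    -- avoiding the per-iteration overhead of the loop
    MERGE INTO employees e
    USING new_names n                             -- hypothetical staging table
       ON (e.employee_id = n.employee_id)
    WHEN MATCHED THEN
      UPDATE SET e.first_name = n.first_name,
                 e.last_name  = n.last_name
      WHERE e.first_name <> n.first_name
         OR e.last_name  <> n.last_name;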

  • Increase of redo log size

    Hi,
    Please help me to increase the redo log size.
    The DB is Oracle 10g and the OS is SUSE Linux 10 SP2.
    SQL> SELECT * FROM v$log;

        GROUP#  THREAD#  SEQUENCE#      BYTES  MEMBERS  ARC  STATUS    FIRST_CHANGE#  FIRST_TIME
             1        1        358  157286400        2  YES  INACTIVE     2972903289  28-NOV-11
             2        1        359  157286400        2  YES  INACTIVE     2972957401  28-NOV-11
             3        1        357  157286400        2  YES  INACTIVE     2972839164  27-NOV-11
             4        1        360  157286400        2  NO   CURRENT      2973005629  29-NOV-11

    SQL> SELECT * FROM v$logfile;

        GROUP#  STATUS  TYPE    MEMBER                              IS_
             4          ONLINE  /oracle/JID/origlogB/log_g14m1.dbf  NO
             4          ONLINE  /oracle/JID/mirrlogB/log_g14m2.dbf  NO
             3          ONLINE  /oracle/JID/origlogA/log_g13m1.dbf  NO
             3          ONLINE  /oracle/JID/mirrlogA/log_g13m2.dbf  NO
             2          ONLINE  /oracle/JID/origlogB/log_g12m1.dbf  NO
             2          ONLINE  /oracle/JID/mirrlogB/log_g12m2.dbf  NO
             1          ONLINE  /oracle/JID/origlogA/log_g11m1.dbf  NO
             1          ONLINE  /oracle/JID/mirrlogA/log_g11m2.dbf  NO

    8 rows selected.
    Please help me with the commands to run, given the output above.
    Thanks,
    Hariharan

    Hello
    Complete step:
    Step 1 SQL> select a.group#, a.member, b.bytes/1024/1024 mb from v$logfile a, v$log b where a.group# = b.group#;
    This query will show current group with redo log members and their size.
    Step 2 Make the last redo log CURRENT one
    To find which group is current at this moment use following query
    SQL> select group#, status from v$log;
    GROUP#         STATUS
    1                    CURRENT
    2                    INACTIVE
    3                    INACTIVE
    4                    INACTIVE
    As you can see, the first group is marked as current, but we need to make group 4 the current one. So force group 4 to become current by switching the log file. To switch the log file, use the following command.
    SQL> alter system switch logfile;
    GROUP#         STATUS
    1                     INACTIVE
    2                    CURRENT
    3                    INACTIVE
    4                  INACTIVE
    SQL> alter system switch logfile;
    GROUP#        STATUS
    1                     INACTIVE
    2                     INACTIVE
    3                     INACTIVE
    4                    CURRENT
    Step 3 Drop the first online redo log
    After making the last online redo log file the CURRENT one, drop the first online redo log:
    SQL> alter database drop logfile group 1;
    Database altered.
    Note:
    Be aware that you cannot drop the current logfile group. In addition, attempting to drop a logfile group that still has an ACTIVE status results in the following error:
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 1
    ERROR at line 1:
    ORA-01624: log 1 needed for crash recovery of instance ORA920 (thread 1)
    ORA-00312: online log 1 thread 1: ''
    Easy problem to resolve. Simply perform a checkpoint on the database:
    SQL> ALTER SYSTEM CHECKPOINT GLOBAL;
    System altered.
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;
    Database altered.
    Step 4 You need to re-create the dropped online redo log group with a different size. Use the following command to achieve this.
    SQL> alter database add logfile group 1 ('<path>/origlogA/log_g11m1.dbf','<path>/mirrlogA/log_g11m2.dbf') size 200M reuse;
    Database altered.
    Step 5 Force another log switch
    After re-creating the online redo log group, force a log switch. The online redo log group just created should become the "CURRENT" group:
    SQL> select group#, status from v$log;
    GROUP#         STATUS
    1                     UNUSED
    2                     INACTIVE
    3                     INACTIVE
    4                    CURRENT
    SQL> alter system switch logfile;
    SQL> select group#, status from v$log;
    GROUP#         STATUS
    1                      CURRENT
    2                      INACTIVE
    3                      ACTIVE
    Step 6 Loop back to Step 3 until all logs are rebuilt
    After re-creating an online redo log group, continue to re-create (or resize) the remaining online redo log groups until all of them are rebuilt.
    Regards,
    Rajan
