Force log switch using dbmcli

Hello!
How do I force a log switch using dbmcli?
For data protection purposes I would like to do this on a daily basis.
The system has autolog=on, and I run the backups using dbmcli via crontab on a Linux box. The MaxDB version is 7.6.01.12.
Regards,
Fredrik

Hi Fredrik,
With Oracle databases you use log switches to get online redo logs copied to archive logs even when those online redo logs are not yet completely filled.
Basically you just want to make sure the latest changes end up in your backup as well.
In MaxDB there is no "switch logfile", because there is no such thing as separate logfiles. There is also no limitation like Oracle's when backing up log data.
If you want to back up all log entries that have not yet been backed up: no problem!
Deactivate the automatic log backup, perform a manual log backup and reactivate the automatic log backup.
The log segments in MaxDB are not like the online redo logfiles of Oracle. They are just markers in the log area that the autolog feature uses to know "oh, there have been some changes again - let's do a backup of that now".
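For a cron-driven setup like yours, the sequence described above might look roughly like this. This is only a minimal sketch: the database name MYSID, the DBM user dbm/secret and the log backup medium BackLog are placeholders (the medium must already exist, e.g. created via medium_put), and depending on the exact version the util_connect may be implicit.
[code]#!/bin/sh
# Daily "flush the log" job for cron: stop autolog, back up all
# not-yet-saved log entries, then switch autolog back on.
dbmcli -d MYSID -u dbm,secret << EOF
util_connect
autolog_off
backup_start BackLog LOG
autolog_on
exit
EOF[/code]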
You may also want to have a look into this note:
SAP Note #869267 - FAQ: MaxDB LOG area: https://websmp206.sap-ag.de/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=869267&_NLANG=E
KR Lars

Similar Messages

  • Forcing log switch every minute.

    Hi,
    I want to force a log switch every minute. How can I do it?
    What should be the value of fast_start_mttr_target?
    Does a checkpoint force a log switch?
    Do I need to only reduce the size of the redo logs to a small size?
    How can I make sure that a log switch will happen after a particular time period, for example 1 or 2 minutes?
    I want to force a log switch every minute because I want to send the archived redo logs to the standby database, so that no more than 1 minute of changes in the database are lost. I am using 10g R2 on Windows 2003 Server.
    I am unable to find a solution. Any help?

    Hi,
    >> I want to force a log switch every one minute, how can I do it?
    Yes, with the ARCHIVE_LAG_TARGET parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref934
    >> What should be the value of fast_start_mttr_target?
    Incremental (fast-start) checkpointing, aimed at fast instance recovery, was introduced in Oracle 8; since 9i it is controlled with the initialization parameter FAST_START_MTTR_TARGET.
    With fast_start_mttr_target set, the database writer tries to keep the number of dirty blocks in the buffer cache low enough to guarantee rapid recovery in the event of a crash. It frequently updates the file headers to reflect the fact that there are no dirty buffers older than a particular SCN.
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtunin004.htm#sthref1110
    >> Does a checkpoint force a log switch?
    A log switch forces a checkpoint; a checkpoint never forces a log switch.
    >> Do I need to only reduce the size of the redo log to a small size?
    That depends on your SLA and how much data you can afford to lose, but it will affect your database performance. The usual recommendation is to size the logs so that they fill (and switch) roughly every 20 minutes; it is a trade-off of risk versus performance.
    >> How can I make sure that a log switch will happen after a particular time period, for example 1 or 2 minutes? I want to force a log switch every minute because I want to send the archived redo log to the standby database so that no more than 1 minute of changes are lost. I am using 10g R2 on Windows 2003 Server.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref934
    Khurram
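    For completeness, setting the parameter mentioned above might look like this (a sketch; 60 seconds matches the one-minute requirement, and SCOPE=BOTH assumes an spfile is in use):
    [code]-- Force a log switch at least every 60 seconds
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 60 SCOPE = BOTH;
    -- Check the current setting
    SHOW PARAMETER archive_lag_target[/code]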

  • Continiously log switching, one node unavailable

    We run a 4-node RAC 9.2.0.4 on Solaris 9. Recently, one of the nodes crashed due to a hardware failure. Some time after this crash, I executed the command forcing a log switch within the cluster: 'Alter system archive log current'. After this command, our redo logs started switching continuously - every 5-6 seconds. The generated archived logs were almost empty - they contained only a few blocks, often only one - the header, I suppose. The problem was resolved only after the thread of the dead node was disabled with the command 'Alter database disable thread 4;'.
    I made a few experiments to investigate this problem. It seems that every command forcing a redo log switch in a RAC leads to such excessive log switching when one of the nodes in the cluster is unavailable. I tested 'Alter system archive log current', the ARCHIVE_LAG_TARGET parameter, and the command 'Alter system switch logfile' executed from an Oracle job. In all these cases the redo logs began switching continuously once the commands had been executed or it was time to switch logs per the ARCHIVE_LAG_TARGET parameter. Is there any way to force a redo log switch in all threads in a cluster without problems when one node is unavailable? We would like to switch logs every 10 minutes - to limit the amount of data that can be lost in case of a whole-cluster failure...
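    For reference, the commands involved, as a sketch (thread 4 being the failed node's redo thread in this example):
    [code]-- Stop the crashed instance's redo thread so the surviving nodes no longer
    -- try to bring it up to date on every forced switch
    ALTER DATABASE DISABLE THREAD 4;
    -- After that, a forced switch/archive behaves normally again
    ALTER SYSTEM ARCHIVE LOG CURRENT;[/code]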
    Thanks in advance
    Alexey Sergeyev
    [email protected]

    Joel, thank you for the reply. I tested forcing a log switch with one node down in different ways - within a job and without one. The command 'Alter system archive log current' was executed without any job, from the SQL*Plus command line. This triggered continuous log switching on the working nodes. The parameter ARCHIVE_LAG_TARGET doesn't produce any job - at least no such job is shown in DBA_JOBS. This parameter leads to continuous log switching on the surviving nodes when one node goes down.
    I tried creating a job, one per instance, with an 'Execute immediate ''Alter system switch logfile''' command inside. Executing such a job also triggered continuous log switching on the working nodes...
    It seems that forcing log switches in a cluster works only when everything is fine - all nodes are running. Is it an Oracle bug? Or is it expected but undocumented behaviour?
    Alexey Sergeyev
    [email protected]

  • Frequent redo log switches

    Oracle 9.2.0.1 on a W2k3 server. The redo log is switching every minute, even without any discernible database activity. It's in archivelog mode and the redo logs are 100 MB in size, so the archive logs are filling up my hard drive. I'm having a hard time figuring out why the redo logs are switching so often. There are 3 redo log groups. Thanks for any help you can give me.

    If the redo logs are defined as 100M in size and the archived redo logs are 100M in size, then the online redo logs are being filled. As suggested, Log Miner is one way to determine what is happening.
    Are the redo logs switching all the time, or only during periods of peak activity? If the rapid log switches only happen during certain time periods, like 9:30 - 10:30, or the times correspond to the running of certain batch jobs, then you should probably increase the size of your online redo logs.
    If the archived redo logs are small, then obviously something is forcing log switches before they fill. I would check the spfile settings for log_checkpoint_interval, log_checkpoint_timeout, and fast_start_mttr_target to be sure no one has made a mistake changing one of those values, before running Log Miner.
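    A quick way to check those settings from SQL*Plus (a sketch):
    [code]SHOW PARAMETER log_checkpoint_interval
    SHOW PARAMETER log_checkpoint_timeout
    SHOW PARAMETER fast_start_mttr_target[/code]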
    HTH -- Mark D Powell --

  • Want to reduce Log switch time interval !!!

    Friends ,
    I know that the standard log switch time interval is 20/30 minutes, i.e. it is better if the switch from redolog 1 to redolog 2 (or redolog 2 to redolog 3) happens every 20/30 minutes.
    But on my production server the logfile switches only every 60 minutes, even during peak hours. Now my question: how can I make my logfile switch to the next logfile every 20/30 minutes?
    Here my database configuration is :
    Oracle database 10g (10.2.0.1.0 version) in AIX 5.3 server
    AND
    SQL> show parameter fast_start_mttr_target
    NAME TYPE VALUE
    fast_start_mttr_target integer 600
    My every redolog file size is = 50 MB
    In this situation, please advise me how I can reduce my log switch time interval.

    You could either
    a. Recreate your redo log files with a smaller size --- an action I would not recommend
    OR
    b. Set the instance parameter ARCHIVE_LAG_TARGET to 1800
    ARCHIVE_LAG_TARGET specifies (in seconds) the duration after which a log switch is forced, if one has not already happened because the online redo log file filled up.
    You should be able to use ALTER SYSTEM to change this value.
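    A sketch of what that could look like, together with a check of the actual switch intervals afterwards (adjust the 1800 seconds to your own requirement):
    [code]-- Force a log switch at least every 30 minutes
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE = BOTH;
    -- Afterwards, see how far apart the switches really are
    SELECT thread#, first_time,
           ROUND((first_time - LAG(first_time) OVER
                 (PARTITION BY thread# ORDER BY first_time)) * 24 * 60, 1) AS minutes_since_previous
    FROM   v$log_history
    ORDER  BY first_time DESC;[/code]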
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Checkpoint and log switch

    I would like to know whether, when a redo log is filled, the new redo log can be written in parallel with the checkpoint.
    Or simply, what is the sequence for log switching:
    1 - redo log is filled -> checkpoint -> new log writes, or
    2 - redo log is filled -> checkpoint/new log writes (in parallel)?
    Best Regards,
    Rogerio C. Schreiner

    I'm tempted to say 'don't worry about it', because L_C_T has been deprecated since 9i: instead use FAST_START_MTTR_TARGET to control the rate of checkpointing.
    But that aside, L_C_T essentially says: "if a block was dirtied (changed) by a transaction whose redo was generated more than X seconds ago - where X is the number of seconds LCT is set to - then that block will be flushed to disk the next time DBWR wakes up on the '3-second rule'".
    L_C_INTERVAL means something similar: "If a block was dirtied by a transaction whose redo can be found in the redo log more than Y redo blocks away from the current checkpoint marker, where Y is the setting for LCI, flush it next time DBWR wakes up"
    A checkpoint does not always result in a log switch, but a log switch always results in a checkpoint.
    You can't really answer your third question, because the LCT parameter doesn't mean "Every 1800 seconds, flush everything to disk" (as it used to in version 8.0). But if a block was dirtied more than 1800 seconds ago, it will be a candidate for flushing to disk next time DBWR wakes up. But that doesn't tell us when a log switch will occur: that will only happen when the redo log fills up and we have to switch to the next log to keep going. I can't tell anything about the size of your logs or the rate at which they will fill up from the parameters you list.
    A parameter does exist which will force a log switch every so often. It's called ARCHIVE_LAG_TARGET, and it's set to a number of seconds. If that was set to 1800, I could say with confidence that you'd log switch every half hour. But that's not what you had in your question, of course!
    You might care to read this slightly more technical description of the parameters:
    http://www.jlcomp.demon.co.uk/faq/log_checkpoint.html
    (though I think Jonathan's timelines are wrong, because the change to the effect of the parameters came in with 8i).

  • Database log switch and WLS connection pool relation

    Hi,
    We have been facing WLS JDBC connection pool disablement and suspension issues very frequently in our environment, and as a workaround we have implemented a multi-datasource configuration (failover method).
    But we need to know the root cause and want to fix the issue too.
    We have tried many options, like increasing the number of processes and transactions on the database and fine-tuning the WebLogic datasource, but we still could not isolate the issue.
    Recently we have been advised to minimize the log switches on the database side and increase the redo log size. Not sure whether this will help in isolating the issue or not.
    So we are looking forward to comments and suggestions on what the relationship between a datasource and log switches would be, and whether someone has faced this issue and resolved it by tuning the database and minimizing log switches.
    We are using WLS 10.3.3.0
    -Rohit

    Turn on JDBC logging. The server log should then show the trouble WLS is having while testing connections and trying (and failing) to make replacement connections.

  • Is there a way to identify manual log switches?

    Hi!
    A while ago I upgraded a 10g database to 11.2.0.2 64 Bit Windows.
    During the upgrade we realized that the redo logs were configured really small (~10MB), which resulted in a lot of log switches (a few hundred per day). So we adjusted the redo log size to 100MB and set archive_lag_target to 1800.
    The number of log switches went down a little, but far less than we expected. After further analysing the situation we recognized that Oracle is switching logs far before reaching the 100MB log size (and also far before reaching 1800s). All the archived logs have a size of about 15MB. I know that 11g introduced something like "preemptive log switching" that switches logs roughly 20% before reaching the maximum value (if I remember correctly). But switching already at 15% of the maximum size seems strange to me...
    I couldn't find any helpful stuff on Google or Metalink about that topic but today I had a different idea: what if it's the application software that's doing manual log switches?
    (I have no idea why it should do that but I can remember that the application user does require the sysdba privilege - don't ask me why, I didn't write it, I won't defend it...)
    So I checked the alert log, but unfortunately I had to realize that there is no difference between an automatic switch and a manual one (only 'alter system archive log ...' gets an extra line).
    So my questions are:
    1) Does anybody know a way of distinguishing between an automatic log switch and a manual one? Is there a table or another logfile where this information is recorded?
    2) Has anybody experienced a similar situation where Oracle is switching the logs way before reaching the maximum size?
    Best regards,
    Marcus

    lebigmac wrote:
    1) Does anybody know a way of distinguishing between an automatic log switch and a manual one? Is there a table or another logfile where this information is recorded?
    Off the top of my head - I think the only way to do a manual log switch is to issue "alter system switch logfile", and I think that any "alter system" command is written to the alert log in your version of Oracle. (I really ought to check both statements before posting this, but I've been up since 2:30 am.)
    2) Has anybody experienced a similar situation where Oracle is switching the logs way before reaching the maximum size?
    It's very common with recent versions of Oracle when private redo threads come into play, but your example seems a little exaggerated. The log file switch has to start when there is just enough space left in the log file for all the public and all (or maybe it's only the previously used - I'll have to check my book) private redo threads. You could check x$kcrfstrand to see what these sizes look like: http://jonathanlewis.wordpress.com/?s=private+thread
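    Building on the first point, one low-tech way to look for manual switches is simply to search the alert log text for the ALTER SYSTEM statements. A sketch only: the path below is a Unix-style placeholder for the 11.2 ADR trace directory, and on the Windows box in question the equivalent search with findstr or a text editor applies.
    [code]grep -in "alter system switch logfile" /u01/app/oracle/diag/rdbms/mydb/MYDB/trace/alert_MYDB.log
    grep -in "alter system archive log"   /u01/app/oracle/diag/rdbms/mydb/MYDB/trace/alert_MYDB.log[/code]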
    Regards
    Jonathan Lewis

  • Is there a way on maxdb to force netbackup to use different initSID.utl fil

    (This thread refers to a question that was posted to a blog and where the proper location to handle it is this forum)
    Is there a way on MaxDB to force NetBackup to use different initSID.utl files? I have tried to do it with a different bsi.env file, but when I start the backup it keeps asking for the default bsi.env file in the /sapdb/data/wrk/SID directory. I have set the variable BSI_ENV to the new file, but it looks like it is ignored. It is on a Sun Solaris operating system.
    Due to security reasons the DBM server process no longer inherits environment variables from its caller, e.g. dbmcli.
    See PTS 1155045; especially in the area of external backup tools, inheriting environment settings seems to be rather common.
    The idea is to add BSI_ENV to the dbm.cfg file with a command like:
      dbmcli -d ... -u ... dbm_configset BSI_ENV
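    Spelled out a little more, that could look like the following sketch; the database name, DBM user/password and the bsi.env path are placeholders for your own values:
    [code]# Point the DBM server at the alternative bsi.env, independent of the caller's environment
    dbmcli -d MYSID -u dbm,secret dbm_configset BSI_ENV /sapdb/MYSID/config/bsi.env
    # Verify the stored value
    dbmcli -d MYSID -u dbm,secret dbm_configget BSI_ENV[/code]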
    For further information please refer to this documentation section
    or for Backint also here 
    Best regards
    Jörg

    Hi Jörg,
    Hmm. I'd guess there is some problem with which user the environment variables have been defined for...
    Anyhow, it is usually easier not to use environment variables at all, but to give the DBM server its own runtime variables via dbm_configset.
    Check the old thread archive_stage and archive_stage_repeat to different NSR_POOLs for an example application of this.
    regards,
    Lars

  • Advantage of FORCE LOGGING over NOLOGGING

    Hi,
    Can you please help me understand the advantages of using force logging mode with a standby database and its effect on indexes etc.? It would also help if you could share some thoughts on the difference between the two modes.
    Thanks,
    Jennah

    >> Can you help me with what factors would be sacrificed?
    This really depends on your system; in most cases you will not be able to see a difference. However, I did a small test:
    - drop index, restart db
    - create index with LOGGING (measure time/redo size)
    - drop index, restart db
    - create index with NOLOGGING (measure time/redo size)
    Result:
    logging - Elapsed: 00:02:40.68 / Redo size: 800 MB
    nologging - Elapsed: 00:02:20.29 / Redo size: 1.5 MB
    Here is the full test:
    [code]SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                             28304
    SQL> CREATE UNIQUE INDEX "SAPR3"."CDCLS~0" ON "SAPR3"."CDCLS"
    ("MANDANT", "OBJECTCLAS", "OBJECTID", "CHANGENR", "PAGENO")
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 65536 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "PSAPCLUI" LOGGING;
    Index created.
    Elapsed: 00:02:40.68
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                         834714816
    SQL> select segment_name, bytes/1024/1024 "Size_MB" from dba_segments where segment_name = 'CDCLS~0'
    SEGMENT_NAME            Size_MB
    CDCLS~0                     800
    drop index / db restart here
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                             28992
    SQL> CREATE UNIQUE INDEX "SAPR3"."CDCLS~0" ON "SAPR3"."CDCLS"
    ("MANDANT", "OBJECTCLAS", "OBJECTID", "CHANGENR", "PAGENO")
      PCTFREE 10 INITRANS 2 MAXTRANS 255
      STORAGE(INITIAL 65536 NEXT 65536 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
      TABLESPACE "PSAPCLUI" NOLOGGING; 
    Index created.
    Elapsed: 00:02:20.29
    SQL> select a.name, b.value from v$statname a, v$mystat b where a.statistic# = b.statistic# and a.name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                           1520824
    SQL> select segment_name, bytes/1024/1024 "Size_MB" from dba_segments where segment_name = 'CDCLS~0';
    SEGMENT_NAME            Size_MB
    CDCLS~0                     800[/code]

  • Redo logs switches too frequent after migrating the db to different server

    Dear Experts,
    A couple of days back we migrated our database (belonging to E-Business Suite) to a different server to get a performance benefit. The database is 10.2.0.4 and it was migrated from AIX 5.3 to Linux x86-64.
    Users are happy with the performance, but I am getting the errors below in the alert log:
    a) Thread 1 cannot allocate new log, sequence 498
    b) Private strand flush not complete
    c) ORACLE Instance PROD - Can not allocate log, archival required
    Oracle Support is saying the issue occurs because of too frequent log switches. I am wondering how the log switches have become so frequent on the new server. On the old server there was about 10 minutes between switches; now it is as frequent as every minute.
    Any idea what could be the reason behind this? Do you agree this issue is caused by the frequent log switches?
    Thanks
    ARS

    Kanchana Devasurendra wrote:
    >> Hi ARS,
    >> Please check the following item in your new database.
    >> 1. log_archive_max_processes - most probably the value set for this parameter is low (maybe it's set to 1). Please increase it (to 4).
    I am curious to know what makes you think 4 is the right magic number for log_archive_max_processes - after all, he's only got one archive destination.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    A general reminder about "Forum Etiquette / Reward Points": http://forums.oracle.com/forums/ann.jspa?annID=718
    If you never mark your questions as answered people will eventually decide that it's not worth trying to answer you because they will never know whether or not their answer has been of any use, or whether you even bothered to read it.
    It is also important to mark answers that you thought helpful - again it lets other people know that you appreciate their help, but it also acts as a pointer for other people when they are researching the same question, moreover it means that when you mark a bad or wrong answer as helpful someone may be prompted to tell you (and the rest of the forum) what's so bad or wrong about the answer you found helpful.

  • FORCE LOGGING

    I want to create a physical standby database. The primary database is in archivelog mode, and 50% of the data belongs to the staging area of a data warehouse. During the ETL processes the database generates many gigabytes of redo logs. These tables contain temporary data that is only used by the ETL process.
    Can I leave the database with FORCE LOGGING=N and set FORCE LOGGING=Y only for the tablespaces that contain the production (not intermediate) data?

    If you are trying to run a physical standby, for any database larger and more important than a toy, across the internet you are doomed to failure.
    Databases do not fail on predictable schedules.
    A DR site means a DR site... it does not mean someplace out there in the cloud. For all you know your redo is being shipped to the moon and back. Buy room in a data center no more than 500 km away from your current location.
    DR is for serious people with serious issues. If you do not need it then don't build it. If you do need it build it correctly.
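    For reference, the configuration the question asks about would look roughly like this (a sketch; the tablespace name is a placeholder, and note that NOLOGGING operations in the non-forced tablespaces will not be recoverable on the standby):
    [code]-- Leave the database itself out of FORCE LOGGING mode
    ALTER DATABASE NO FORCE LOGGING;
    -- Enforce logging only for the tablespaces holding production data
    ALTER TABLESPACE prod_data FORCE LOGGING;
    -- Check the per-tablespace setting
    SELECT tablespace_name, force_logging FROM dba_tablespaces;[/code]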

  • Force logging in archivelog mode

    Hi !!!
    What happens if I have the "force_logging" parameter set to true when the database is in archivelog mode?
    Thanks.

    rarain wrote:
    >> Hi Juamd,
    >> You should only use this option when it is really required, because it will forcibly generate redo for all NOLOGGING operations. That means you might find more archives, and you need to set up more space for the archive.
    >> Normally we use this option when we need to replicate data changes from one database to another, as in a standby configuration, Golden Gate replication, etc. I would suggest you monitor the amount of redo generated after enabling this option and estimate archive and backup space accordingly.
    >> Thanks...
    Ah, I don't agree with that at all. You can compromise your recovery if you happen to want to restore to a point in time when a NOLOGGING operation was going on. Fine if it's an index, but if it happens to be on a table...
    (Yes, been there, done that - with a non-production database, thankfully.)
    This is one of the 'must haves', IMO, for production - set it at the database level and it overrides any tablespace or object setting.
    Archive logs are generated for a reason. If you have a particular operation that really does benefit massively from NOLOGGING and that you are sure you can simply re-run/re-create yourself, fine. If not, by default, you really should FORCE LOGGING.
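    A sketch of the database-level setting recommended here (the query shows whether it is already enabled):
    [code]-- Enable FORCE LOGGING for the whole database; this overrides any
    -- tablespace- or object-level NOLOGGING setting
    ALTER DATABASE FORCE LOGGING;
    -- Verify
    SELECT force_logging FROM v$database;[/code]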

  • ORA-02231 error while trying to force logging

    Hi friends,
    I am trying to configure a physical standby database and I am following this documentation: http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm.
    When I try to enable force logging I get the following error:
    SQL> alter database force logging;
    alter database force logging
    ERROR at line 1:
    ORA-02231: missing or invalid option to ALTER DATABASE
    Can anyone please tell me why this is happening?
    Thanks

    >> Which database are you doing this on? Supposed to run it on the primary.
    I have only one database, i.e. the primary DB, and I am trying to set up a physical standby.
    >> Plus, which version of Oracle? I see what the document link uses, but that doesn't mean it is the version you are using.
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
    PL/SQL Release 9.0.1.1.1 - Production
    CORE 9.0.1.1.1 Production
    TNS for 32-bit Windows: Version 9.0.1.1.0 - Production
    NLSRTL Version 9.0.1.1.1 - Production
    Can you please guide me on the right way to proceed further?

  • Dataguard log switch question

    Wonder if anyone can help me with a question?
    I am new to Data Guard and only recently set up my first implementation of a primary and standby Oracle 11g database.
    It's all set up correctly, i.e. no gap sequences showing, no errors in the alert logs, and I have successfully tested a switchover and switch back.
    I wanted to re-test that the archive logs were going across to the standby database OK; unfortunately I performed an alter system switch logfile on the standby database instead of the primary.
    No errors are reported anywhere, no archive log sequence gaps or errors in the alert logs, but I am wondering if this will cause a problem the next time I have to fail over to the standby database?
    Apologies for my lack of knowledge; I am new to Data Guard, have only been a DBA for a couple of years, and have not had time to read the 500-page Data Guard book yet.
    Thanks in Advance

    First you have to know what happens when a log switch occurs, whether manually forced or automatic.
    All data and changes are first written to the online redo log files; once a log switch occurs, automatically or forced, that information from the online redo log files is dumped to the archive logs.
    Now ask yourself: where is the online redo on a standby? There is no concept of online redo data on a standby; in the case of real-time apply you only have standby redo log files, and you cannot even switch standby redo log files.
    So this command won't work on the standby; it's applicable only to online redo log files, and online redo exists/is active only on the primary.
    So there is nothing to worry about. Just make sure the environments are in sync prior to performing a switchover.
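    If you want to re-run the original test (force a switch on the primary and confirm it reaches the standby), a sketch could look like this - run the first statement on the primary and the query on the standby:
    [code]-- On the primary: archive the current online redo log
    ALTER SYSTEM ARCHIVE LOG CURRENT;
    -- On the standby: confirm the new sequence has arrived and been applied
    SELECT thread#, MAX(sequence#) AS last_applied
    FROM   v$archived_log
    WHERE  applied = 'YES'
    GROUP  BY thread#;[/code]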
    Hope this helps.
    Why are all your questions left unanswered? Please close them and keep the forum clean.
