Steps to empty SAPDB (MaxDB) log file

Hello All,
I am on Red Hat Linux with NW 7.1 CE and SAP DB (MaxDB) as the back end. I am trying to log in, but my log area is full. I want to empty the log file, but I haven't done any data backup yet. Can anybody guide me on how to proceed with this problem?
I do have some idea of what to do, like the steps below:
1. Take a data backup (but I want to skip this step if possible, since this is a QA system and we are not a production company).
2. Take a log backup using the same method as the data backup, but with backup type Log (am I right, or is there something else? See the sketch after this list.)
3. The log area is then automatically overwritten after log backups.
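For reference, here is a minimal dbmcli sketch of steps 2 and 3; the medium name, backup path, and DBM operator credentials are placeholders, and on older releases a util_connect may be needed before starting the backup:
dbmcli -d <SID> -u <dbm_user>,<password>
medium_put LogBack /sapdb/backup/<SID>_LOG FILE LOG
backup_start LogBack LOG
Once log backups are made regularly, the backed-up log segments are released for overwriting and the log area stops filling up.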
Or should I use the following as an alternative? I found it in Note 869267 - FAQ: SAP MaxDB LOG area:
Can the log area be overwritten cyclically without having to make a log backup?
Yes, the log area can be automatically overwritten without log backups. Use the DBM command
util_execute SET LOG AUTO OVERWRITE ON
to set this status. The behavior of the database corresponds to the DEMO log mode in older versions. With version 7.4.03 and above, this behavior can be set online.
Log backups are not possible after switching on automatic overwrite. The backup history is interrupted and flagged with the abbreviation HISTLOST in the backup history (dbm.knl file). The backup history is restarted when you switch off automatic overwrite without log backups using the command
util_execute SET LOG AUTO OVERWRITE OFF
and by creating a complete data backup in the ADMIN or ONLINE status.
Automatic overwrite of the log area without log backups is NOT suitable for production operation. Since no backup history exists for the following changes in the database, you cannot track transactions in the case of recovery.
Any reply will be highly appreciated.
Thanks
Mani

Hello Mani,
1. Please review the document "Using SAP MaxDB X Server Behind a Firewall" in the MaxDB library:
http://maxdb.sap.com/doc/7_7/44/bbddac91407006e10000000a155369/content.htm
"To enable access to X Server (and thus the database) behind a firewall using a client program such as Database Studio, open the necessary ports in your firewall and restrict access to these ports to only those computers that need to access the database."
Is the database server behind a firewall? If yes, the X Server port needs to be open. You could restrict access to this port to the computers of your database administrators, for example.
Is "nq2host" the name of the database server? Can you ping the server "nq2host" from your machine?
2. If the database server and your PC are in the same local area network, you can start the x_server on the database server and connect to the database using DB Studio on your PC, as Lars already told you.
See the document "Network Communication" at
http://maxdb.sap.com/doc/7_7/44/d7c3e72e6338d3e10000000a1553f7/content.htm
Thank you and best regards, Natalia Khlopina

Similar Messages

  • Steps to move Data and Log file for clustered SQL Server

    Hi guys 
    we have an active/passive SQL 2008 R2 cluster environment.
    We are looking for steps to move the data and log files of the user databases and system databases for a SQL Server clustered instance.
    Currently the data and log files reside on the same drive for both the user and system databases.
    Thanks

    Try the below link
    http://social.msdn.microsoft.com/Forums/sqlserver/en-US/468de435-3432-45c2-a50b-23519cd2686e/moving-the-system-databases-in-a-sql-cluster?forum=sqldisasterrecovery
    -Prashanth
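    As a rough sketch of the usual approach for a user database (the database name, logical file names, and target paths below are placeholders; confirm the real names with sys.master_files first, and make sure the target drive is a clustered disk resource in the same resource group as SQL Server):
    -- check current logical names and physical locations
    SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('MyUserDb');
    -- point SQL Server at the new locations
    ALTER DATABASE MyUserDb MODIFY FILE (NAME = MyUserDb_data, FILENAME = N'E:\SQLData\MyUserDb.mdf');
    ALTER DATABASE MyUserDb MODIFY FILE (NAME = MyUserDb_log,  FILENAME = N'L:\SQLLogs\MyUserDb_log.ldf');
    ALTER DATABASE MyUserDb SET OFFLINE;
    -- copy the physical files to the new locations, then bring the database back
    ALTER DATABASE MyUserDb SET ONLINE;
    The system databases (master, msdb, model, tempdb) need the additional steps covered in the linked article.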

  • Apply exact steps from History log file to another image?

    Alright. If you edit an image and the history log option is turned on, you can look at the exact steps you've made, with all the settings for the individual tools and effects used. Steps can also be recorded with the Actions tool.
    My question is: is there any way to "play" the steps listed in the history log file, as if they were an action?
    I need to apply a lot of steps to several images, but I never recorded those steps with the Actions tool. However, they are "recorded" in the history log file, and I could go in, look, and recreate all those steps manually, but that would take too much time...
    Are there any plug-ins, or is there any other way to do what I need to get done?
    Cheers
    Martin

    1. You can set your preferences to record the history to a txt file or to the file's metadata.
    I assumed that is what the OP already did, as indicated in:
    "If you edit an image and the history log option is turned on,"
    "wonder why there is no feature in Photoshop to replay all one has done, to any image?"
    As you know there is: it's Actions (as with Scripting, not all functionality records; Edit: and not all functionality is applicable to all files, because of Color Mode and Bit Depth for example). But it is up to you to employ the necessary foresight to know what to record beforehand.
    Also, a non-destructive workflow can help make many effects/compositions/etc. easily transferable or recreatable.

  • OES2 SP3 AFP How to empty AFP log file

    Hello All,
    I can find no information on how to empty the AFP log file /var/log/afptcpd/afptcp.log. It has grown to 1.2 GB, and it is very inconvenient to look for information in such a big file.
    Any ideas ?
    Thank you
    Andreas

    If you get a lot of AFP activity, you're probably best off just setting the log to rotate.
    Create a file under /etc/logrotate.d/ and name it whatever you want,
    then just enter something like this:
    /var/log/afptcpd/afptcp.log {
    compress
    dateext
    maxage 365
    rotate 99
    size=+4096k
    notifempty
    missingok
    create 644 root root
    postrotate
    /etc/init.d/novell-afptcpd reload
    endscript
    }
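    To empty the current file right away, you can also truncate it in place; a one-line sketch (run as root, path taken from the post above):
    : > /var/log/afptcpd/afptcp.log
    Unlike deleting the file, this frees the space immediately while the daemon keeps writing to the same open file handle.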

  • Problem specifying SQL Loader Log file destination using EM

    Good evening,
    I am following the example given in the 2 Day DBA document chapter 8 section 16.
    In step 5 of 7, EM does not allow me to specify the destination of the SQL Loader log file to be on a mapped network drive.
    The question: Does SQL Loader have a limitation that I am not aware of, that prevents placing the log file on a network share or am I getting this error because of something else I am inadvertently doing wrong ?
    Note: I have placed the DDL, load file data, and the steps I follow in EM at the bottom of this post to facilitate reproducing the problem (drive Z is a mapped drive).
    Thank you for your help,
    John.
    DDL (generated using SQL developer, you may want to change the space allocated to be less)
    CREATE TABLE "NICK"."PURCHASE_ORDERS"
        "PO_NUMBER"      NUMBER NOT NULL ENABLE,
        "PO_DESCRIPTION" VARCHAR2(200 BYTE),
        "PO_DATE" DATE NOT NULL ENABLE,
        "PO_VENDOR" NUMBER NOT NULL ENABLE,
        "PO_DATE_RECEIVED" DATE,
        PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE
        INITIAL 67108864
      TABLESPACE "USERS" ;
    Load.dat file contents
    1, Office Equipment, 25-MAY-2006, 1201, 13-JUN-2006
    2, Computer System, 18-JUN-2006, 1201, 27-JUN-2006
    3, Travel Expense, 26-JUN-2006, 1340, 11-JUL-2006
    Steps I am carrying out in EM
    log in, select data movement -> Load Data from User Files
    Automatically generate control file
    (enter host credentials that work on your machine)
    continue
    Step 1 of 7 ->
      Data file is located on your browser machine
      "Z:\Documentation\Oracle\2DayDBA\Scripts\Load.dat"
       click next
    step 2 of 7 ->
      Table Name
      nick.purchase_orders
      click next
    step 3 of 7 ->
      click next
    step 4 of 7 ->
      click next
    step 5 of 7 ->
      Generate log file where logging information is to be stored
      Z:\Documentation\Oracle\2DayDBA\Scripts\Load.LOG
      Validation Error
      Examine and correct the following errors, then retry the operation:
      LogFile - The directory does not exist.

    Hi John,
    I did not get any error when I tried the same thing you did.
    My Oracle version is 10.2.0.1 on Windows XP. Here is what I did, and it worked:
    1. I created one table in the scott schema:
    SCOTT@orcl> CREATE TABLE "PURCHASE_ORDERS"
      2  (
      3      "PO_NUMBER"      NUMBER NOT NULL ENABLE,
      4      "PO_DESCRIPTION" VARCHAR2(200 BYTE),
      5      "PO_DATE" DATE NOT NULL ENABLE,
      6      "PO_VENDOR" NUMBER NOT NULL ENABLE,
      7      "PO_DATE_RECEIVED" DATE,
      8      PRIMARY KEY ("PO_NUMBER") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS NOCOMPRESS LOGGING TABLESPACE "USERS" ENABLE
      9  )
    10  TABLESPACE "USERS";
    Table created.
    I logged into EM: Maintenance --> Data Movement --> Load Data from User Files --> My Host Credentials.
    Here there are a total of 3 text boxes:
    1. Server Data File: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\USERS01.DBF
    2. Data File is Located on Your Browser Machine: z:\load.dat <--- here z:\ is another machine's shared folder; I selected this option (radio button) and created the same load.dat as you mentioned.
    3. Temporary File Location: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL\ <--- I did not enter anything here.
    Step 2 of 7 Table Name : scott.PURCHASE_ORDERS
    Step 3 of 7 I just clicked Next
    Step 4 of 7 I just clicked Next
    Step 5 of 7 I just clicked Next
    Step 6 of 7 I just clicked Next
    Step 7 of 7 Here it is Control File Contents:
    LOAD DATA
    APPEND
    INTO TABLE scott.PURCHASE_ORDERS
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (
    PO_NUMBER INTEGER EXTERNAL,
    PO_DESCRIPTION CHAR,
    PO_DATE DATE,
    PO_VENDOR INTEGER EXTERNAL,
    PO_DATE_RECEIVED DATE
    )
    And i just clicked on submit job.
    Now i got all 3 rows in purchase_orders :
    SCOTT@orcl> select count(*) from purchase_orders;
      COUNT(*)
    ----------
             3
    So there is no bug; it worked for me. Please retry, and post again if you still get the error or issue.
    HTH
    Girish Sharma
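    For what it is worth, the same load can also be run with the sqlldr command line on a machine that can see the data file, writing the log to a local path instead of a mapped drive; a sketch only, with placeholder credentials, paths, and control file name:
    sqlldr userid=nick/password control=C:\loads\purchase_orders.ctl data=C:\loads\Load.dat log=C:\loads\Load.log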

  • How to reduce the size of redo log files

    Hi,
    I am using Oracle Database 9.2.0.1.0. My present redo log files are 100 MB each
    (redo01.log, redo02.log, redo03.log), which takes more time to switch the logs.
    I want to change the size to 20 MB each so that log switching will be faster.
    Please let me know the exact steps to resize the redo log files so that I can change them.
    Regards,
    Indraneel

    Technical questions cannot be answered here. Please, post in the right forum :
    General Database Discussions
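    For reference, redo logs are not resized in place; the usual approach is to add new groups of the desired size and drop the old ones. A rough sketch (paths and group numbers are placeholders; an old group can only be dropped once V$LOG shows it as INACTIVE, so switch and checkpoint as needed):
    ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 20M;
    ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/ORCL/redo05.log') SIZE 20M;
    ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/ORCL/redo06.log') SIZE 20M;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    ALTER DATABASE DROP LOGFILE GROUP 2;
    ALTER DATABASE DROP LOGFILE GROUP 3;
    The dropped files can then be removed at the OS level, since Oracle does not delete them.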

  • Moving oc4j logs files in other filesystems

    Hi,
    I want to change the path of all OC4J log files. For better performance and maintenance I want to put the logs on other filesystems. Is this a good approach, and has anyone tried it?
    I need this change because my developers want to see all OC4J logs, and I don't want to create many symbolic links or many Samba shares.
    Thanks !

    You can move all OC4J log files to other filesystems; I have heard of people doing it. The product OCS, Oracle Collaboration Suite, might provide a single global option to do that in its next release. However, I do not think there is a single global option in OC4J itself that can accomplish it in one swoop. I would suggest that you try it step by step, moving a subset of log files at a time.

  • Empty Log File - log settings will not save

    Description of Problem or Question:
    Cannot get logging to work in folder D:\Program Files\Business Objects\Dashboard and Analytics 12.0\server\log
    (empty log file is created)
    Product\Version\Service Pack\Fixpack (if applicable):
    BO Enterprise 12.0
    Relevant Environment Information (OS & version, java or .net & version, DB & version):
    Server: windows Server 2003 Enterprise SP2.
    Database Oracle 10g
    Client : Vista
    Sporadic or Consistent (if applicable):
    Consistent
    What has already been tried (where have you searched for a solution to your question/problem):
    Searched forum, SMP
    Steps to Reproduce (if applicable):
    From InfoViewApp, logged in as Admin:
    Open -> Dashboard and Analytics Setup -> Parameters -> Trace
    Check "Log to folder" and "SQL Queries", click Apply.
    Now navigate away and return to this page: "Log to folder" is unchecked, and an empty log file is created.

    Send Apple feedback. They won't answer, but at least they will know there is a problem. If enough people send feedback, it may get the problem solved sooner.
    Feedback
    Or you can use your Apple ID to register with this site and go to the Apple Bug Reporter. Supposedly you will get an answer if you submit feedback.
    Feedback via Apple Developer
    Do a backup.
    Quit the application.
    Go to Finder and select your user/home folder. With that Finder window as the front window, either select Finder/View/Show View Options or press Command-J. When the View Options panel opens, check 'Show Library Folder'. That should make your user Library folder visible in your user/home folder. Select Library, then go to Preferences/com.apple.systempreferences.plist. Move the .plist to your desktop.
    Restart, open the application and test. If it works okay, delete the plist from the desktop.
    If the application is the same, return the .plist to where you got it from, overwriting the newer one.
    Thanks to leonie for some information contained in this.
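    For reference, the plist move described above can also be done in one line from Terminal (back up first, as noted):
    mv ~/Library/Preferences/com.apple.systempreferences.plist ~/Desktop/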

  • Error in analyzer Log file (/sapdb/data/wrk/ACP/analyzer--- DBAN.err)

    Hello All,
    I am getting the following error message in the analyzer log file (/sapdb/data/wrk/ACP/analyzer/DBAN.err).
    the details are as follows:-
    =====================================================
    <i>2006-07-24 08:55:59
    ERROR 5: Cannot execute SQL statement.
    [MySQL MaxDB][LIBSQLOD SO][MaxDB] General error;-4008 POS(1) Unknown user name/password combination
    SELECT YEAR(NOW()),MONTH(NOW()),DAY(NOW()),HOUR(NOW()),MINUTE(NOW()),SECOND(NOW()) FROM DUAL
    2006-07-26 12:15:39
    ERROR 20: Database Analyzer not active in directory "/sapdb/data/wrk/ACP/analyzer".
    2006-08-03 12:33:08
    ERROR 5: Cannot execute SQL statement.
    [MySQL MaxDB][LIBSQLOD SO][MaxDB] Communication link failure;-709 CONNECT: (database not running: no request pipe)
    SELECT YEAR(NOW()),MONTH(NOW()),DAY(NOW()),HOUR(NOW()),MINUTE(NOW()),SECOND(NOW()) FROM DUAL</i>
    =====================================================
    Can you please tell me what this means for my database?
    The main problem I am facing is that I am not able to start my SAP application. When I issue startsap from the <SID>adm login, I get an error message saying it is not able to connect to the database, although the database is already up and running.
    Please help me !
    Regards,
    Premkishan chourasia

    Hi,
    well, the error -4008 denotes that the user/password combination used by the DB Analyzer for accessing the DB is incorrect. The DB Analyzer tries to issue SQL commands with the SYSDBA user.
    Do you know the user/password combination of your SYSDBA user?
    Regards,
    Roland
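    A quick way to check both symptoms from the command line; a sketch only, assuming the standard MaxDB tools are on the path and using placeholder user names and passwords:
    # is the database instance online, and do the DBM operator credentials work?
    dbmcli -d ACP -u <dbm_user>,<password> db_state
    # start the Database Analyzer by hand with the SYSDBA credentials to verify that combination
    dbanalyzer -d ACP -u <sysdba_user>,<password>
    If db_state does not report ONLINE, the -709 error above is expected; if the SYSDBA password is wrong, the -4008 error is expected.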

  • MAXDB copy log file from one server to the other server.

    Hi All,
    I want to copy a log file from one server to another server. The database is MaxDB, and I don't know how to do it. I want to do it through the command prompt (a set of commands is required); the front-end tool we are using is DBM (Database Manager). Please help.
    Regards
    M.A

    Hi,
    Basically, the process is of log shipping. Transferring logs from DC to DR system.
    For that, you can check HowTo - Standby DB log shipping - MaxDB - SCN Wiki
    This is script based. I have not done it but the idea remains the same.
    Regards,
    Divyanshu
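    The core of such a script is just a log backup to a file medium followed by a copy to the standby host; a minimal sketch with placeholder medium name, paths, hosts, and credentials:
    # define a file medium for log backups and run a log backup
    dbmcli -d <SID> -u <dbm_user>,<password> medium_put LogShip /sapdb/backup/<SID>_log FILE LOG
    dbmcli -d <SID> -u <dbm_user>,<password> backup_start LogShip LOG
    # ship the resulting backup file(s) to the standby server
    scp /sapdb/backup/<SID>_log* <user>@<standby_host>:/sapdb/backup/
    On the standby side the pieces are then applied with recover_start; the wiki linked above covers the full procedure.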

  • Data Services 4.0 Designer. Job Execution but empty log file no matter what

    Hi all,
    I am running DS 4.0. When I execute my batch job via Designer, the log window pops up but is blank, i.e. I cannot see any trace messages.
    It doesn't matter if I select "Print all trace messages" in the execution properties.
    The job server is running on a separate server. The only thing I have locally is just my Designer.
    If I log into the Data Services Management Console and select the job server, I can see the trace and error logs from the job. So I guess what I need is for this stuff to show up in my Designer?
    Did I miss a step somewhere?
    I can't find anything in the docs about this.
    thanks
    Edited by: Andrew Wangsanata on May 10, 2011 11:35 AM
    Added additional detail

    Awesome, thanks Manoj.
    I found the log file. The relevant lines in it for the last job I ran are:
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  Starting job with command line -PLocaleUTF8 -Utip_coo_ds_admin
                                                    -P+04000000001A030100100000328DE1B2EE700DEF1C33B1277BEAF1FCECF6A9E9B1DA41488E99DA88A384001AA3A9A82E94D2D9BCD2E48FE2068E59414B12E
                                                    48A70A91BCB  -ek********  -G"70dd304a_4918_4d50_bf06_f372fdbd9bb3" -r1000 -T1073745950  -ncollect_cache_stats
                                                    -nCollectCacheSize  -ClusterLevelJOB  -Cmxxx -CaDesigner -Cjxxx -Cp3500 -CtBatch  -LocaleGV
                                                    -BOESxxx.xxx.xxx.xxx -BOEAsecLDAP -BOEUi804716
                                                    -BOEP+04000000001A0301001000003F488EB2F5A1CAB2F098F72D7ED1B05E6B7C81A482A469790953383DD1CDA2C151790E451EF8DBC5241633C1CE01864D93
                                                    72DDA4D16B46E4C6AD -Sxxx.xxx.xxx -NMicrosoft_SQL_Server -Qlocal_repo  coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e" -l"C:\Program Files (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/trace_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -z"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/error_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -w"C:\Program Files
                                                    (x86)\SAP BusinessObjects\Data Services/log/js01/tip coo ds local
                                                    repo_azdzgq4dnuxbm4xeriey1_e/monitor_05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3.txt" -Dt05_11_2011_16_52_27_9
                                                    (BODI-850052)
    (14.0) 05-11-11 16:52:27 (2272:2472) JobServer:  StartJob : Job '05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3' with pid '148' is kicked off
                                                    (BODI-850048)
    (14.0) 05-11-11 16:52:28 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <4> (BODI-850170)
    (14.0) 05-11-11 16:52:28 (2272:2472) JobServer:  AddChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 17:02:32 (2272:2472) JobServer:  RemoveChangeInterest: log change interests for <05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3> from client
                                                    <inet:10.165.218.xxx:56511>. (BODI-850003)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  GetRunningJobs() success. (BODI-850058)
    (14.0) 05-11-11 19:57:45 (2272:2468) JobServer:  PutLastJobs Success.  (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2072) JobServer:  Sending notification to <inet:10.165.218.xxx:56511> with message type <5> (BODI-850170)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    (14.0) 05-11-11 19:57:45 (2272:2472) JobServer:  GetHistoricalLogStatus()  Success. 05_11_2011_16_52_27_9__70dd304a_4918_4d50_bf06_f372fdbd9bb3 (BODI-850001)
    It does not look like I have any errors with respect to connectivity (or any errors at all...).
    Please advise on what, if anything, you notice from the log file, and/or the next steps I can take.
    thanks.

  • Teradata fast load log file empty

    Hi all,
    after updating ODI 11g, the Teradata FastLoad script is not running. The error tells me to see the log file, but the log file is empty.
    Any solution?
    Naseer

    any solution please

  • Corrupt LOG File MaxDB 7.5 (Netweaver 04)

    During the installation the log went full. LOG mode = SINGLE.
    Then I added a new log volume. Nothing happened. After rebooting the machine,
    the database doesn't come up; in knldiag there is the following:
    2004-11-14 22:46:30  7579     11000 vattach  '/sapdb/XI3/sapdata/DISKD0001' devno 1 T58 succeeded
    2004-11-14 22:46:30  7520     11597 IO       Open '/sapdb/XI3/saplog/DISKL001' successfull, fd: 16
    2004-11-14 22:46:30  7633     12821 TASKING  Thread 7633 starting
    2004-11-14 22:46:30  7632     11565 startup  DEVi started
    2004-11-14 22:46:30  7520     11597 IO       Open '/sapdb/XI3/saplog/DISKL001' successfull, fd: 17
    2004-11-14 22:46:30  7634     12821 TASKING  Thread 7634 starting
    2004-11-14 22:46:30  7633     11565 startup  DEVi started
    2004-11-14 22:46:30  7579     11000 vattach  '/sapdb/XI3/saplog/DISKL001' devno 2 T58 succeeded
    2004-11-14 22:46:30  7520     11597 IO       Open '/data2/sapdb/XI3/DISKL002' successfull, fd: 19
    2004-11-14 22:46:30  7635     12821 TASKING  Thread 7635 starting
    2004-11-14 22:46:30  7634     11565 startup  DEVi started
    2004-11-14 22:46:30  7520     11597 IO       Open '/data2/sapdb/XI3/DISKL002' successfull, fd: 20
    2004-11-14 22:46:30  7636     12821 TASKING  Thread 7636 starting
    2004-11-14 22:46:30  7635     11565 startup  DEVi started
    2004-11-14 22:46:30  7579     11000 vattach  '/data2/sapdb/XI3/DISKL002' devno 3 T58 succeeded
    2004-11-14 22:46:30  7579 ERR    30 IOMan    Log volume configuration corrupted: Linkage missmatch between volume 1 and 2
    2004-11-14 22:46:30  7579     11000 vdetach  '/sapdb/XI3/sapdata/DISKD0001' devno 1 T58
    2004-11-14 22:46:30  7520     12822 TASKING  Thread 7631 joining
    2004-11-14 22:46:30  7636     11565 startup  DEVi started
    2004-11-14 22:46:30  7631     11566 stop     DEVi stopped
    2004-11-14 22:46:30  7520     12822 TASKING  Thread 7632 joining
    2004-11-14 22:46:30  7579 ERR    16 Admin    RestartFilesystem failed with 'I/O error'
    2004-11-14 22:46:30  7579 ERR     8 Admin    ERROR 'disk_not_accessibl' CAUSED EMERGENCY SHUTDOWN
    2004-11-14 22:46:30  7579     12696 DBSTATE  Change DbState to 'SHUTDOWN'(25)
    Is there any way to delete the corrupt log volumes in order to add new ones? The data volume is OK.
    Thanks
    Helmut

    The thread was also opened on the mailing list; here is the answer:
    Hi,
    in most situations an "add log volume" does not solve
    a log-full situation, because the added log volume only becomes
    accessible at the moment when the write position
    on the log reaches the end of the old log volume.
    If a LOG FULL occurs, save the log!
    All "parts" of the log volume are numbered, and each one
    has the identifying number of its predecessor and
    its successor stored in its header page. The message
         "Linkage missmatch between volume"
    means that these values no longer match.
    Did you somehow change the contents of this file/device?
    If so: can you revert the change? Then the restart should
    work again. Do a log backup and your database will run again.
    If this doesn't work for you, you could patch the
    header pages of the log volume (I assume you do not really
    want to do this).
    Another way to solve the situation is to do a data backup
    in admin mode, reinstall the database, and recover the
    data backup, because the removal of a log volume is not
    supported.
    kind regards, Martin
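    A minimal dbmcli sketch of that last option, the data backup in admin mode (medium name, backup path, and DBM operator credentials are placeholders; the reinstall and recovery steps depend on your installation and are left out):
    dbmcli -d XI3 -u <dbm_user>,<password>
    db_admin
    medium_put DataBack /sapdb/backup/XI3_data FILE DATA
    backup_start DataBack DATA
    After the reinstall, the backup is read back with the recover_start command.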

  • How to remove log file on maxdb

    Hi,
    I have installed IDES ECC 5.0 on MaxDB 7.5. I have 4 log files of 1 GB each, and I would like to remove them.
    Is it possible to remove the log files without the Complete Data Backup procedure in Database Manager?
    Would it help to switch the database to overwrite log mode?
    We use a test system with no need for recovery procedures.
    Regards,
    Aleksandar

    Hi,
    you should not delete the log volumes.
    For your case it would suffice to switch on the auto-overwrite mode of the log.
    You can do this most conveniently by using the Database Manager GUI:
    1. Put the DB instance in state ADMIN.
    2. Go to 'Configuration' -> 'Log Settings'.
    3. From the wizard select 'Overwrite Mode...' and follow the described steps.
    Kind regards,
    Roland
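    For completeness, the same switch can be made with dbmcli instead of the Database Manager GUI; a minimal sketch with placeholder DBM operator credentials (on 7.4.03 and above the setting can also be made online, see Note 869267):
    dbmcli -d <SID> -u <dbm_user>,<password>
    util_connect
    util_execute SET LOG AUTO OVERWRITE ON
    Remember that this breaks the backup history (HISTLOST), so it is only appropriate for test systems like this one.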

  • How to add a log file to an MaxDB?

    I see that we have only one log volume there. I want to add more. Please help with either the syntax or the GUI menu path. Thanks!

    Hi Linda,
    of course the topic is covered in the MaxDB documentation (e.g. here http://maxdb.sap.com/documentation/)
    [Adding Log Volumes|http://maxdb.sap.com/doc/7_7/0c/b9bc43f7b24eef9afff1968bf4fcdc/content.htm]
    However, you should be really sure that you actually need to add a volume to the log area.
    Usually this is not the case.
    Most often, people believe they should add a log volume when they should run a log backup.
    regards,
    Lars
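    If it does turn out to be necessary, the DBM command behind the documented procedure is db_addvolume; a rough sketch only (path, size in pages, and credentials are placeholders; check the linked documentation for the exact syntax on your version):
    dbmcli -d <SID> -u <dbm_user>,<password>
    db_addvolume LOG /sapdb/<SID>/saplog/DISKL002 F 64000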
