MaxL script for backup of log files

Hi All,
I want a MaxL script that takes a backup of the log files on the Essbase server and, after taking the backup, clears the log file contents on the server. After that, whenever I take a backup of the logs from the server, it should repeat the same process and replace the log backup I took earlier.
Thank you all
Regards
User2271

Which log files do you want to back up - the Essbase server and application logs, right?
If yes, after stopping the server you can copy them to any location using a file system copy command and then re-create each log as a 0 KB file; you don't need MaxL for this.
Truncating the log files in this way will also improve server and application performance.
The backup is only needed for tracking purposes, and I don't think you will need to restore the logs.

Similar Messages

  • I am using MS Office 2010 and Windows XP Professional, can I use iCloud for backup of my files and documents?

    I am using MS Office 2010 and Windows XP Professional; can I use iCloud for backup of my files and documents? I am planning to transition to a Mac, so using iCloud seems to make the most sense.

    No, it is not a backup method at all. If you delete a synced file on any device, you delete it permanently from iCloud. By its very nature that is not a backup solution: a backup solution does not automatically and simultaneously delete the file everywhere when it is deleted from the original source.
    iCloud is strictly a synchronization tool for using files on multiple internet-connected devices.

  • Do I need to create new group for standby redo log files?

    I have 10 groups of redo log files with 2 members in each group for my primary database. Do I need to create new groups of standby redo log files for the standby database?
    Group#    Members
    ==================
    1         2
    2         2
    3         2
    4         2
    5         2
    6         2
    7         2
    8         2
    9         2
    10        2
    If so, is the following statement correct or not?
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 1 ('D:\Databases\epprod\StandbyRedoLog\REDO01.LOG','D:\Databases\epprod\StandbyRedoLog\REDO01_1.LOG');
    Please correct me if I am making a mistake, because when I issue the statement I get an error message saying the group is already created.

    Thanks, John.
    I just found the answer.
    Yes, it is recommended to add new groups; for instance, if I have 10 groups numbered 1 to 10, then the standby groups should be numbered 11 to 20.
    Thanks, I found the answer.
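    As a rough illustration of that numbering convention, a standby group in the higher number range could be added with a statement like the sketch below; the file paths and the 50M size here are hypothetical and should match your own online redo log size and directory layout.
    -- Hypothetical paths and size: standby redo logs should be the same size as the online redo logs.
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
      ('D:\Databases\epprod\StandbyRedoLog\SRL11_A.LOG',
       'D:\Databases\epprod\StandbyRedoLog\SRL11_B.LOG') SIZE 50M;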

  • Need code for creating a log file in Java so that its size is limited

    Hi
    I need code for developing a log file (using threads) so that the log file size is limited:
    if the size of the log file increases above 1 MB, another log file has to be created automatically and the log has to be printed into that new file.
    Thanks in advance

    package cms.web.log;
    import java.io.*;
    import java.util.Calendar;
    import cms.web.WebUser;
    /**
     *     Log is generated by JEditor 1.0.0
     *     @Project      : cms
     *     @Version      : 1.0.0
     *     @Created date : 11:07:40 PM Thursday, 25/07/2002
     *     @Author       :
     *     @Organization :
     *     @Copyright    : (c) 2002
     *
     *     A utility class used to write information, especially error messages, to a
     *     log file so that they can be viewed at a later time by administrators.
     *     Extra information such as the date and time they occur and where they are thrown
     *     is automatically included and appended to the end of the log file.
     *     Log files will increase with the format "name_n" where n is a file counter.
     */
    public class Log implements Serializable {
        /** log entry marker */
        static final String START = "\n\0";
        /** parent directory that contains log files */
        private static File parent;
        private PrintStream out;
        private String name;
        /** counts how many log entries are in the current stream */
        int counter;
        /** maximum number of log entries for each log file */
        int max;

        public static void init(File parent) {
            if (!parent.exists())
                parent.mkdirs();
            Log.parent = parent;
        }

        public Log(String name, int max) {
            this.name = name;
            this.max = max;
            File file = getLastFile();
            counter = countLogs(file);
            out = openStream(file);
        }

        /** Append the given log message to the log file. */
        public synchronized void appendLog(String log) {
            if (log == null || log.length() == 0)
                return;
            count();
            try {
                out.println(START + counter + " | " + getCurrentTime() + " | " + log);
                out.flush();
            } catch (Exception e) {}
        }

        /** Append the given message and the user who caused it to the log file. */
        synchronized void appendLog(String msg, WebUser user) {
            if (msg == null || msg.length() == 0)
                return;
            count();
            try {
                out.println(START + counter + "----------------------------------------------------------");
                out.println(getCurrentTime());
                out.println(user != null ? "User: " + user.getFullName() : "User: public user");
                out.println(msg);
                out.println("\n----------------------------- end -----------------------------\n");
                out.flush();
            } catch (Exception e) {}
        }

        /** Append the given exception to the log file. */
        synchronized void appendLog(Throwable error, WebUser user) {
            if (error == null)
                return;
            count();
            synchronized (out) {
                try {
                    out.println(START + counter + "----------------------------------------------------------");
                    out.println("Exception occurred at " + getCurrentTime());
                    out.println(user != null ? "User: " + user.getFullName() : "User: public user");
                    error.printStackTrace(out);
                    out.println("----------------------------- end -----------------------------\n");
                    out.flush();
                } catch (Exception e) {}
            }
        }

        /** Formats the current date and time as hh:mm:ss dd/MM/yyyy. */
        private String getCurrentTime() {
            Calendar c = Calendar.getInstance();
            return
                parse(c.get(Calendar.HOUR_OF_DAY)) + ":" +      // 0 --> 23
                parse(c.get(Calendar.MINUTE)) + ":" +           // 0 --> 59
                parse(c.get(Calendar.SECOND)) + " " +           // 0 --> 59
                parse(c.get(Calendar.DAY_OF_MONTH)) + "/" +     // 1 --> 31
                parse(c.get(Calendar.MONTH) + 1) + "/" +        // 1 --> 12
                c.get(Calendar.YEAR);                           // yyyy
        }

        /** Pads a single-digit number with a leading zero. */
        private String parse(int n) {
            return n < 10 ? "0" + n : "" + n;
        }

        /** Counts one more entry and rolls over to a new file when max is exceeded. */
        private void count() {
            counter++;
            if (counter > max) {
                incrementFile();
                counter = 1;
            }
        }

        /** Opens the next unused log file (name + n + ".log") and switches output to it. */
        private void incrementFile() {
            File file = null;
            int n = 0;
            while ((file = new File(parent, name + n + ".log")).exists())
                n++;
            if (out != null)
                out.close();
            out = openStream(file);
        }

        /** Opens the file for appending if it exists, otherwise creates it. */
        private PrintStream openStream(File file) {
            try {
                if (file.exists())
                    return new PrintStream(new FileOutputStream(file.getPath(), true));
                else
                    return new PrintStream(new FileOutputStream(file.getPath()));
            } catch (IOException e) {
                throw new RuntimeException(e.getMessage());
            }
        }

        /** Counts existing log entries in a file by counting the '\0' entry markers. */
        private int countLogs(File file) {
            int count = 0;
            InputStream in = null;
            try {
                in = new FileInputStream(file);
                int n;
                while ((n = in.read()) != -1)
                    if (n == '\0')
                        count++;
            } catch (IOException e) {
            } finally {
                if (in != null)
                    try {
                        in.close();
                    } catch (IOException e) {}
            }
            return count;
        }

        /** Returns the most recently created log file, or "name0.log" if none exist yet. */
        private File getLastFile() {
            File file = new File(parent, name + "0.log");
            File curr;
            int n = 1;
            while ((curr = new File(parent, name + n + ".log")).exists()) {
                file = curr;
                n++;
            }
            return file;
        }

        protected void finalize() {
            if (out != null)
                out.close();
        }
    }

  • Where can I set the log level for the "Inbox log file" ?

    From the Siebel 8 Bookshelf, it says :
    "To set the level of the Inbox log file for troubleshooting
    *In Siebel Tools, set the Log Level for the Inbox log file (Alias = InboxLog) to 5*."
    But where exactly in Siebel Tools can I find that Log Level? Which object does the Siebel Bookshelf talk about?

    Hi,
    Log levels are not configured in Siebel Tools. You have to configure them in the Siebel client. You can find the parameter under "Administration - Server Configuration / Server / Events".
    Search for "Inbox General Log Events" and set this parameter to 5. I think this should help you.
    Cheers Andreas

  • How to restore cold backup + archived log files

    Hi,
    Suppose I take a cold backup on the 18th. After that I have four days of archived log files. If the database crashes on the 5th day, I have to restore the cold backup from the 18th plus the 4 days of archived log files. How do I restore, since it is a cold backup and I can't do incomplete recovery?
    Can I use
    RECOVER DATABASE (with the 18th cold backup) in the mount state and apply the archived logs?
    Prabhath

    The details of how you perform forward recovery using a cold backup depends on
    1- rman or manual backup
    2- using current or backup control file
    3- if rman, recovery catalog or no recovery catalog
    4- if full database recovered or only a few files
    Each of these conditions will affect what is known to Oracle and what needs to be done. For example if you restored the entire cold backup including the control file then Oracle would see a consistent database and not need to perform recovery so you would need to startup mount and tell the database to perform recovery using a backup control file. If using rman and no recovery catalog you might need to catalog some of the archived redo logs, etc....
    It is advisable to consult the Backup and Recovery manuals before attempting recovery for any new scenario.
    HTH -- Mark D Powell --
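    As a rough illustration only - assuming a manually restored cold backup, a backup control file, and no RMAN catalog - the SQL*Plus steps might look like the sketch below; the exact commands depend on the conditions listed above, so treat this as an outline rather than a recipe.
    -- Assumes the cold backup (including a backup control file) has been restored
    -- and the four days of archived logs are available in the archive destination.
    STARTUP MOUNT;
    RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
    -- apply the suggested archived logs, then enter CANCEL
    ALTER DATABASE OPEN RESETLOGS;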

  • Taking backup of Log Files....

    Hi friends, I am using the java.util.logging package to develop logging for my multithreaded socket server application. The server runs all the time, so it keeps appending data to the log file. I need to take a backup of the log files each day without stopping my server application. Does anyone know how to achieve this? Thanks in advance...

    On unix/linux:
    cp <yourlogfile> <some directory where you keep your backups>

  • Reasons for multiplexing archive log files

    hi
    I would like to try multiplexing the archive log files, but I need to explain it to my boss, who is the unofficial DBA. I want to tell her that it is necessary. Are there really convincing, compelling reasons to set up more than one archive log destination?
    Cheers.
    DA

    Archives are the thing that protect your data. If you have only one source for them, let's just keep our fingers crossed that nothing ever goes wrong with that source at the same time as something going wrong with the database, for then you will have lost data irrecoverably. So how much data loss is acceptable?
    On the other hand, if your original archive destination uses mirrored disks and if it is backed up regularly to tape, and if daily checks that that backup has taken place successfully are made... there's an at least reasonable case to be made not to bother multiplexing the archives: you already have redundancy in depth and the additional archiving destination could well slow down the whole archiving subsystem... and that can translate into problems for LGWR and hence foreground waits suffered by users.
    Nevertheless, it is true that assuming you back up your archives nightly, there is always going to be a day's worth of archives which haven't been backed up yet. Should anything nasty happen to those, you will not be in a completely recoverable position... and I have seen both halves of a mirrored drive die at about the same time, so that's no guarantee!
    If you or your colleague don't mind losing a day's work or if you are betting people and don't mind taking a punt on hardware not failing, then not multiplexing archives is a fair judgement call to make. But if you cannot afford any data loss at all and you want to make sure that it's as guaranteed as you can get that you won't, then multiplexing archives is pretty essential.
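    If you do decide to multiplex, a minimal sketch is simply a second local archive destination, as below; the directory paths are hypothetical and both must already exist and be writable by the database.
    -- Hypothetical directories: every archived log is then written to both locations.
    ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch/primary' SCOPE=BOTH;
    ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/arch/multiplex' SCOPE=BOTH;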

  • Error for Generating a log file

    Hi Cezar sanos,
    I am trying to generate a log file for ODI with details such as who logged in and what they are doing.
    For this I am executing a command like
    lagentscheduler.bat "-PORT=20910" "-NAME=localagent" "-V=2" > C:\OraHome_1\logs\agent1.log
    but I am getting an error like:
    A JDK is required to execute Web Services with OracleDI. You are currently using a JRE.
    OracleDI: Starting Scheduler Agent ...
    Starting Oracle Data Integrator Agent...
    Version : 10.1.3.5 - 10/11/2008
    DwgJv.main: Exit. Return code:-1

    Just in case,
    the following message :
    A JDK is required to execute Web Services with OracleDI. You are currently using a JRE.
    is only a warning and not an error message....

  • Process Flow ignores name and location for Control- and Log-Files

    Hi!
    Our OWB Version is 10.1.0.3.0 - DB Version 9.2.0.7.0 - OWF Version 2.6.2
    Clients and server are running on Windows. Database contains target schemas as well as OWB Design and Runtime, plus OWF repositories. The source files to load reside on the same server as the database.
    I have for example a SQL*Loader Mapping MAP_TEXT which loads one flat file "text.dat" into a table stg_text.
    The mapping MAP_TEXT is properly configured and runs perfectly, i.e. the control file "text.ctl" is generated to location LOC_CTL, the flat file "text.dat" is read from another location LOC_DATA, the bad file "text.bad" is written to LOC_BAD and the log file "text.log" is placed into LOC_LOG. All locations are registered in the runtime repository.
    When I integrate this mapping into a Workflow process PF_TEXT, only LOC_DATA and LOC_BAD are used. After deploying PF_TEXT, I execute it and find that the control and log files are placed into the directory <OWB_HOME>\owb\temp and get generic names <Mapping Name>.ctl and <Mapping Name>.log (in this case MAP_TEXT.ctl and MAP_TEXT.log).
    How can I get OWB to execute the process flow using the locations configured for the mapping placed inside it?
    Does anyone have a helpful idea?
    Thx,
    Johann.

    I didn't expect to be the only one to encounter this misbehaviour of OWB.
    Meanwhile I have found out what the problem is and had to accept that it is the way it is!
    There is no solution for it until the Paris release.
    Bug no. 3099551 on Oracle MetaLink addresses this issue.
    Regards,
    Johann Lodina.

  • Dataguard Solution for standby redo log file groups

    Respected Experts,
    My database version is 10.2.0.1.0 on a Red Hat 5 OS. I want to create a standby database using RMAN.
    Can anyone help me with the full steps? I'm also confused about the number of standby redo log file groups and members
    that need to be created.
    Thanks and Regards
    Monoj Das

    "My database version is 10.2.0.1.0 on a Red Hat 5 OS. I want to create a standby database using RMAN."
    To configure the standby you can either use DUPLICATE TARGET DATABASE FOR STANDBY, or:
    1) restore the standby controlfile
    2) mount the standby database
    3) restore the database
    and then configure the standby parameters and start MRP.
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ps.htm
    "Can anyone help me with the full steps? I'm also confused about the number of standby redo log file members that need to be created."
    It depends on which parameter you want to use. If you specify log_archive_dest_2='SERVICE=... ARCH', then there is no need to create any standby redo log file groups.
    If you use log_archive_dest_2='SERVICE=... LGWR', the transport is in terms of redo and you need standby redo log files on the standby database, which is real time (see the sketch below).
    When you use LGWR, data loss will be less if an online redo log file is lost, which is why it is recommended.
    HTH.
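    For illustration, a minimal sketch of the LGWR-based setup; the service name stby, the file paths, the group numbers and the 50M size are hypothetical and should be adapted to your own TNS alias and online redo log size.
    -- On the primary: hypothetical TNS service name 'stby'.
    ALTER SYSTEM SET log_archive_dest_2 =
      'SERVICE=stby LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stby'
      SCOPE=BOTH;
    -- On the standby: groups numbered above the online groups, same size as the online logs.
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 11 ('/u01/oradata/stby/srl11.log') SIZE 50M;
    ALTER DATABASE ADD STANDBY LOGFILE GROUP 12 ('/u01/oradata/stby/srl12.log') SIZE 50M;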

  • Shell script for oracle alert.log file

    Hi Gurus,
    I want to write a shell script to find the last 10 shutdown times of the database from the alert log file. I'm working on Oracle 9i.
    Could anyone please advice on this.
    Thanks in advance
    regards,
    Shaan

    Use awk. I don't have a 9i to hand but here is a very simple version for 10g XE
    My awk file (the line numbers are for the notes below - don't include them):
    01:BEGIN { prevline = ""; }
    02:
    03:/Completed: alter database close/ { print prevline, FS, $0; }
    04:
    05:{ prevline = $0; }
    The command line and results (from my 300k alert log):
    $ awk -f alert.awk.txt alert_xe.log
    Fri Apr 11 18:08:40 2008   Completed: alter database close normal
    Fri May 16 18:53:21 2008   Completed: alter database close normal
    Tue May 20 17:28:23 2008   Completed: alter database close normal
    Thu Jul 17 19:08:52 2008   Completed: alter database close normal
    Fri Aug 15 15:12:48 2008   Completed: alter database close normal
    Wed Nov 05 08:52:59 2008   Completed: alter database close normal
    Fri Nov 14 16:36:03 2008   Completed: alter database close normal
    Tue Dec 09 10:46:23 2008   Completed: alter database close normal
    Mon Jan 05 11:12:22 2009   Completed: alter database close normal
    What it means:
    1) the BEGIN section at line 01 defines the variable to hold the previous line
    2) the /search string/ at line 03 finds the marker in the file for a shutdown, then does the required action: print the time (which was in the previous line) and then this line, using FS (the awk field separator, normally a space) as a separator
    3) at line 05 is an instruction we do on every line - so we remember it in case it is the timestamp for the shutdown.
    Now, you can include more of the corner cases for shutdowns by adding more search patterns etc. For more information, google for awk examples.
    Awk is really good at this sort of thing!
    HTH
    Regards Nigel

  • Script to Create databases with params to support dir location for data or log files

    Script to Create databases with params to support dir location for data or log files

    DECLARE @Query VARCHAR(MAX)=''
    DECLARE @DbName VARCHAR(400) = '<DBNAME>'
    DECLARE @DbFilePath VARCHAR(400) = '<Valid DataFilePath>'
    DECLARE @DBLogFilePath VARCHAR(400)='<Valid LogFile Path>'
    SET @Query = @Query + 'CREATE DATABASE '+@DbName +' ON PRIMARY '
    SET @Query = @Query + '( NAME = '''+@DbName +''', FILENAME = '''+@DbFilePath+@DbName +'.mdf'' , SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ) '
    SET @Query = @Query + ' LOG ON '
    SET @Query = @Query + '( NAME = '''+@DbName +'_log'', FILENAME = '''+@DblogFilePath+@DbName +'_log.ldf'' , SIZE = 1024KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)'
    print @query
    exec(@query)
    --Prashanth

  • Cannot backup Transactional log file.

    Hi Experts!!!!!
    We have installed our EP7.0 System on SQL server 2005.
    We are able to take a full backup of the data files through Enterprise Manager, but we are not able to take a transaction log backup.
    Any ideas!!!!!
    Regards,
    Vamshi.

    Dear Vamsi,
    How are you trying to take the transaction log backup?
    You can schedule a transaction log backup by using a SQL Server Maintenance Plan (see the T-SQL sketch below for an alternative).
    Check the following URL:
    [SQL Server Maintenance Plan|http://www.databasejournal.com/features/mssql/article.php/3530486]
    Regards,
    Nagendra.
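    If you would rather issue the backup directly in T-SQL than through a maintenance plan, a minimal sketch is shown below; the database name EPP and the backup path are hypothetical, and the database must be in the FULL (or BULK_LOGGED) recovery model for log backups to be possible.
    -- Hypothetical database name and path; schedule this via SQL Server Agent for regular log backups.
    BACKUP LOG [EPP] TO DISK = N'E:\Backups\EPP_log.trn' WITH INIT;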

  • Best option for Backup of HDV Files

    I'm just finishing a project and will be needing my harddrive space for the next one, and I'm wondering how best to store HDV footage NOT ON MY HARDDRIVE? (Not that I don't trust them, just don't like having all my eggs in one basket).
    There's not a good intuitive answer that I've found, but I'm leaning toward taking my 720p24p footage, going MPEG-2, burning to data DVDs and storing them away. However, I am concerned whether there is a significant loss of quality between its current format (DVCPro 100) and MPEG-2.
    If I need to retrieve the footage for later edits am I going to have issues?
    Any help or ideas would be appreciated. I have a combination of P2 and OnLocation captured files, and I'd like to cut out all the "crap" and save just the takes needed... and have a backup off of the hard drive just in case.
    Thanks in advance.
    H.R. Crystal

    We were on a really tight time schedule, and so the P2 workflow got in our way. About 18% of our film landed on P2 (just exteriors where we didn't have power); everything else was done via OnLocation.
    We didn't run tape -- the format is OnLocation's version of .avi, which as best I can figure is DVCPro 100. I don't have access to the camera it was recorded on... and even if I did, would this be the best option? (Tape export?)
    Thanks.
    H.R. Crystal
