Date format inconsistent in log files

Hi All,
I have a cluster spread across 4 machines (4 different physical boxes).
On three of the machines the log format is
<17/1/2010>
while on the fourth it is
<Jan 17 >
We require the log format in the form <17/1/2010>.
Any help or suggestions?
Thanks.

Usually this is due to a localization difference between the JVMs. When WLS boots, the .log file shows detailed information about the locale in use; make sure it is consistent across the machines.
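If the locales do differ, a hedged sketch of one way to pin them (assuming a standard domain layout; the exact file and variable can differ by release) is to pass explicit locale properties to every server JVM, e.g. in setDomainEnv.sh:
# force an identical locale on every JVM in the domain (values are examples)
JAVA_OPTIONS="${JAVA_OPTIONS} -Duser.language=en -Duser.country=GB"
export JAVA_OPTIONS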

Similar Messages

  • Working Linux command to grep date range in a log file

    Linux Gurus,
Could you please help me with a command to show only those lines in a log file that fall within some date range, probably using grep.
    Our server logs are in following format <Jun 23, 2013 12:45:02 AM UTC>
    Regards,
    Varun

    Perhaps you can do the following:
    Go to Google.
    Type "working Linux command to grep date range in a log file"
See what results you might get from that search. (I did, and got more than 600,000 search results.)
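That said, a hedged sketch for the log format shown (the file name server.log is an assumption; this only works when the boundary timestamps actually occur in the file):
sed -n '/<Jun 23, 2013/,/<Jun 24, 2013/p' server.log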

  • The format of Audit log file

We have a Perl script to extract data from audit log files (Oracle Database 10g Release 10.2.0.1.0), which have the format below.
    Audit file /u03/oracle/admin/NIKKOU/adump/ora_5037.aud
    Oracle Database 10g Release 10.2.0.1.0 - Production
    ORACLE_HOME = /u01/app/oracle/product/10.2.0
    System name:     Linux
    Node name:     TOYDBSV01
    Release:     2.6.9-34.ELsmp
    Version:     #1 SMP Fri Feb 24 16:54:53 EST 2006
    Machine:     i686
    Instance name: NIKKOU
    Redo thread mounted by this instance: 1
    Oracle process number: 22
    Unix process pid: 5037, image: oracleNIKKOU@TOYDBSV01
    Sun Jul 27 03:06:34 2008
    ACTION : 'CONNECT'
    DATABASE USER: 'sys'
    PRIVILEGE : SYSDBA
    CLIENT USER: oracle
    CLIENT TERMINAL:
    STATUS: 0
After we updated the DB from Release 10.2.0.1.0 to Release 10.2.0.4.0, the format of the audit log file changed to something like the following.
    Audit file /u03/oracle/admin/NIKKOU/adump/ora_1897.aud
    Oracle Database 10g Release 10.2.0.4.0 - Production
    ORACLE_HOME = /u01/app/oracle/product/10.2.0
    System name:     Linux
    Node name:     TOYDBSV01
    Release:     2.6.9-34.ELsmp
    Version:     #1 SMP Fri Feb 24 16:54:53 EST 2006
    Machine:     i686
    Instance name: NIKKOU
    Redo thread mounted by this instance: 1
    Oracle process number: 21
    Unix process pid: 1897, image: oracle@TOYDBSV01
    Tue Oct 14 10:30:29 2008
    LENGTH : '135'
    ACTION :[7] 'CONNECT'
    DATABASE USER:[3] 'SYS'
    PRIVILEGE :[6] 'SYSDBA'
    CLIENT USER:[0] ''
    CLIENT TERMINAL:[7] 'unknown'
    STATUS:[1] '0'
Because we have to rewrite the Perl script, could anyone tell us where we can find the manual that describes the format of the audit log file?

    Oracle publishes views of the audit trail data. You can find a list of the views for the 11.1 database here:
    http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/auditing.htm#BCGIICFE
The audit trail does not really change between patchsets, as that would constitute underlying structure changes, and right now the developers are not allowed to change the underlying structure of tables in patchsets. But we can change what may be displayed in a column from patchset to patchset. For example, we are getting ready to update the comment$text field to display more information, like dblinks and program names.
I personally don't like overloading the comment$text field like that, but sometimes when you need the information, that is the only choice short of waiting for the next major release :)
As for the output of the audit log files, those can change between patchsets because of bugs that were found and some changes to support Audit Vault. My apologies to anyone out there who is reading the audit files written to the OS directly; I would recommend using the views.
    Hope that helps. Tammy
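For the parsing itself, a hedged shell sketch (not an official format description) that pulls the key/value pairs out of the new-style records shown above, assuming they all follow the NAME :[len] 'value' shape:
# print each audit attribute as NAME=value
sed -n "s/^\([A-Z_ ]*[A-Z]\) *:\[[0-9]*\] *'\(.*\)'/\1=\2/p" ora_1897.aud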

  • Problem with SQL*Loader and different date formats in the same file

DB: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
    System: AIX 5.3.0.0
    Hello,
    I'm using SQL*Loader to import semi-colon separated values into a table. The files are delivered to us by a data provider who concatenates data from different sources and this results in us having different date formats within the same file. For example:
    ...;2010-12-31;22/11/1932;...
    I load this data using the following lines in the control file:
    EXECUTIONDATE1     TIMESTAMP     NULLIF EXECUTIONDATE1=BLANKS     "TO_DATE(:EXECUTIONDATE1, 'YYYY-MM-DD')",
    DELDOB          TIMESTAMP     NULLIF DELDOB=BLANKS          "TO_DATE(:DELDOB, 'DD/MM/YYYY')",
    The relevant NLS parameters:
    NLS_LANGUAGE=FRENCH
    NLS_DATE_FORMAT=DD/MM/RR
    NLS_DATE_LANGUAGE=FRENCH
If I load this file as is, the values loaded into the table are 31 Dec 2010 and 22 Nov *2032*, even though the years are on 4 digits. If I change NLS_DATE_FORMAT to DD/MM/YYYY then the second date value is loaded correctly, but the first value is loaded as 31 Dec *2020*!!
    How can I get both date values to load correctly?
    Thanks!
    Sylvain

This is very strange. After running a few tests I realized that if the year is 19XX then it gets loaded as 2019, and if it is 20XX then it becomes 2020. I'm guessing it may have something to do with certain env variables that aren't set up properly, because I'm fairly sure my SQL*Loader control file is correct... I'll run more tests :-(
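One hedged thing to try (a sketch, not a verified fix): declare both inputs as CHAR in the control file so that only the explicit TO_DATE masks drive the conversion and the session's NLS_DATE_FORMAT cannot get involved:
EXECUTIONDATE1  CHAR  NULLIF EXECUTIONDATE1=BLANKS  "TO_DATE(:EXECUTIONDATE1, 'YYYY-MM-DD')",
DELDOB          CHAR  NULLIF DELDOB=BLANKS          "TO_DATE(:DELDOB, 'DD/MM/YYYY')",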

  • Error with the data format in the TXT file, sending as an Email attachment

    Hi all,
I have a problem with the data formatting in a TXT file sent as an email attachment via the FM "SO_DOCUMENT_SEND_API1".
    For eg:
    The data in the TXT file is looking like as follows:
    0 0 0 0 2        L O U D S P E A K R   O T H E R   3 8 W h i t e                  0 0
    0031  L O U D S P E A K R   O T H E R   3 8 Black                               0 000
    38    L O U D S P E A K R   O T H E R   3 8 Brown                                  0 00040
    L O U D S P E A K R   O T H E R   3 8 Brown                                         0 00042
    and so on
But it should come out as:
    0 0 0 0 2  L O U D S P E A K R   O T H E R   3 8 W h i t e
    0 0 0031   L O U D S P E A K R   O T H E R   3 8 Black
    0 00038    L O U D S P E A K R   O T H E R   3 8 Brown
    0 00040    L O U D S P E A K R   O T H E R   3 8 Brown
    All the internal tables are correctly filled.
    The code is as follows:
* Body text of the mail
  gwa_objtxt = 'Please find attached DATA EXTRACT Sheet'.
  APPEND gwa_objtxt TO git_objtxt.

* Document data (size of the body)
  DESCRIBE TABLE git_objtxt LINES gv_cnt.
  CLEAR git_doc_data.
  READ TABLE git_objtxt INDEX gv_cnt.
  git_doc_data-doc_size  = ( gv_cnt - 1 ) * 255 + strlen( gwa_objtxt ).
  git_doc_data-obj_langu = sy-langu.
  git_doc_data-obj_descr = lv_mtitle.
  APPEND git_doc_data.

* Packing list entry for the body
  CLEAR git_packing_list.
  REFRESH git_packing_list.
  git_packing_list-transf_bin = space.
  git_packing_list-head_start = 1.
  git_packing_list-head_num   = 0.
  git_packing_list-body_start = 1.
  git_packing_list-body_num   = gv_cnt.
  git_packing_list-doc_type   = 'RAW'.
  APPEND git_packing_list.

* Packing list entry for the TXT attachment
  CLEAR gv_cnt.
  DESCRIBE TABLE git_objbin LINES gv_cnt.
  git_packing_list-transf_bin = 'X'.
  git_packing_list-head_start = 1.
  git_packing_list-head_num   = 1.
  git_packing_list-body_start = 1.
  git_packing_list-body_num   = gv_cnt.
  git_packing_list-doc_type   = 'TXT'.
  git_packing_list-obj_descr  = 'ATTACH.TXT'.
  git_packing_list-obj_name   = 'book'.
  git_packing_list-doc_size   = gv_cnt * 255.
  APPEND git_packing_list.

* Receiver
  CLEAR git_receivers.
  REFRESH git_receivers.
  git_receivers-receiver = gv_eid.
  git_receivers-rec_type = 'U'.
  APPEND git_receivers.
  CALL FUNCTION 'SO_DOCUMENT_SEND_API1'
    EXPORTING
      DOCUMENT_DATA              = git_doc_data
      PUT_IN_OUTBOX              = 'X'
      COMMIT_WORK                = 'X'
    TABLES
      PACKING_LIST               = git_packing_list
      CONTENTS_BIN               = git_objbin
      CONTENTS_TXT               = git_objtxt
      RECEIVERS                  = git_receivers
    EXCEPTIONS
      TOO_MANY_RECEIVERS         = 1
      DOCUMENT_NOT_SENT          = 2
      DOCUMENT_TYPE_NOT_EXIST    = 3
      OPERATION_NO_AUTHORIZATION = 4
      PARAMETER_ERROR            = 5
      X_ERROR                    = 6
      ENQUEUE_ERROR              = 7
      OTHERS                     = 8.

Please post the code that populates CONTENTS_BIN = git_objbin, i.e. how it is getting filled.
0 0 0 0 2 L O U D S P E A K R O T H E R 3 8 W h i t e 0 0
0031 L O U D S P E A K R O T H E R 3 8 Black
0 000
38
From this I am not able to tell whether it is an overflow or a concatenation problem.
Why don't you append to a final table, like:
DATA: BEGIN OF itxt OCCURS 0,
        s1(132) TYPE c,
      END OF itxt.
LOOP AT itab.
  itxt-s1+0(4)  = itab-f1.
  itxt-s1+4(6)  = itab-f2.
  itxt-s1+10(8) = itab-f3.
  itxt-s1+18(4) = itab-f4.
  APPEND itxt.
  CLEAR itxt.
ENDLOOP.
and pass this table as CONTENTS_BIN to the FM.
Regards,
vijay.
Could you please mail the text file and the expected output to my mail id [email protected] so that I can check the result properly against the data provided.

  • Data format inconsistency problem on Discoverer 4i plus and Desktop 4.1

I encountered a 'data format inconsistency' problem when running the same report in Discoverer 4i Plus (web) and Desktop 4.1.
The result returned by the Desktop version is:
          Employee Count   Percent Employee Count
Mgt       202              18%
Occ       891              81%
Other     1                0%
Percent   1094             100%
The result returned by the 4i web version is:
          Employee Count   Percent Employee Count
Mgt       202              18%
Occ       891              81%
Other     1                0%
Percent   109400%          100%
Please note that the Employee Count total is 1094 in the Desktop version, whereas in the web version it is 109400%.
After exporting the data to MS Excel, I obtained this result:
          Employee Count   Percent Employee Count
Mgt       202              0.184644
Occ       891              0.814442
Other     1                0.000914
Percent   1094             1
I would like to know whether this is an Oracle Discoverer bug or a known issue?
Please advise.
Thanks in advance.

    Hi Discoverer Technical Team,
Could you please respond to this issue?
    Thanks and regards.

  • Got 'bea.jolt.ServiceException: Data conversion failed; check log file'

When I started the PIA and the login screen appeared, I got:
CHECK APPSERVER LOGS. THE SITE BOOTED WITH INTERNAL DEFAULT SETTINGS, BECAUSE OF: bea.jolt.ServiceException: Data conversion failed; check log file

    What version of PeopleTools are you using?  Have you looked at E-AS: Error "Cannot find or open field table. Maybe FIELDTBLS32 is not set properly" (Doc ID 660607.1) yet?

  • How to add a date suffix to the log file name

In Windows, I want to run certain commands and save the output to a log file every day. How do I add a date suffix to the log file name so I can distinguish which log file is for which day?
e.g. cmd >> logfile.date

AZ wrote:
In Windows, I want to run certain commands and save the output to a log file every day. How do I add a date suffix to the log file name so I can distinguish which log file is for which day?
e.g. cmd >> logfile.date

My best friend's name is "google"; refer to this url: http://stackoverflow.com/questions/203090/how-to-get-current-datetime-on-windows-command-line-in-a-suitable-format-for-usi
This is what I did:
1) Created a dummy file in the C drive.
2) Copy-pasted the lines below (you can play around more with the format):
set _my_datetime=%date%_%time%
set _my_datetime=%_my_datetime: =_%
set _my_datetime=%_my_datetime::=%
set _my_datetime=%_my_datetime:/=_%
set _my_datetime=%_my_datetime:.=_%
3) Renamed the file from DOS:
ren some.txt dummy_file_%_my_datetime%.txt
4) Here goes the output:
C:\> dir
dummy_file_Mon_09_20_2010_161347_21.txt
Most of the code I copied from the above url; you can tweak it a little based on your requirement and format.
Regards
Learner
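With the variable set, the redirect the original question asked about becomes (a hedged one-liner; mycommand is a placeholder):
mycommand >> logfile_%_my_datetime%.txt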

  • Dates appear different in log file vs. debug page for Deferred Task

    I added a deferred task and in the log file, the date appeared correctly as
    Mon Dec 15 16:34:11 PST 2008
    But when I viewed the user in the debug page, the date appeared as
    <Date>2008-12-16T00:34:11.430Z</Date>
I called an external Java class that returns a Date object:
    <Action id='0' application='com.waveset.session.WorkflowServices'>
    <Argument name='op' value='addDeferredTask'/>
    <Argument name='date'>
    <invoke name='addWeekDays' class='MyDateUtil'/>
    </Argument>
    </Action>
    Do you have any ideas?

IDM commonly stores dates in a Java or JDBC date format (which is what your debug date is) but often formats the date differently for log files and for web pages. It's annoying if you're trying to line two different outputs up. Note that the two values denote the same instant: Mon Dec 15 16:34:11 PST 2008 is 2008-12-16T00:34:11Z in UTC.

  • When using wusa.exe to install MSU update package and enabling logging using the /log switch, what format are the log files in?

    I have been installing a number of hotfixes for Windows 7 using MSU files and the wusa.exe utility included in Windows. I thought it would be a good idea to generate separate log files for each update as it was installed since wusa.exe now supports this
    option using /log:<file name>. However, the log files created do not seem to be regular text files or any other log file format that I immediately recognize. When opened in Notepad or Wordpad you can see that they contain a lot of additional binary data
    which can't be read by a regular text viewer.
    Does anyone know what format these log files are in? What tool should you use to read them?

Only Microsoft could manage to design something as stupid as this. If you start wusa from the command line, it pops up the available command line switches. For log it just says: "/log - installer will enable logging". It doesn't say that you should specify the log file name, hence not HOW you specify it ("/log:<path\filename>"). It doesn't say what extension to use for the file, let alone which file type it is ("/log:<path\filename.evtx>"). You open it in Notepad, you cannot read it. You open it in SCCM's cmtrace.exe/trace32.exe, you get nothing. IT IS NOT POSSIBLE TO IMPLEMENT THIS IN A WORSE WAY. How can something be so bad? How can Microsoft get such stupidity in-house, when you need to go through six interviews or something to get in??
I cannot believe it - unfortunately, this is seen again and again.
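Since the extension mentioned above gives it away: the file is apparently in the Windows event log (.evtx) format, so a hedged way to read it is to open it in Event Viewer (Action > Open Saved Log), or dump it as text from the command line, assuming the log was written to C:\temp\wusa.evtx:
wevtutil qe C:\temp\wusa.evtx /lf:true /f:text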

  • Data format/encoding of Transport files?

    Hello everyone,
    When an export is done, two files are generated - an Rxxxx and Kxxxx file.
The K file contains transport information (I can see it with a UTF-8 text editor) and the R file the contents of the transport. However, I cannot determine what encoding is used; it looks like some type of binary file.
Does anyone know what kind of encoding this is and how I can convert it to a non-binary format?
    I've tried the SAR utility but this does not seem to work.
    I have searched for this a bit but have come up empty.
    Any help is appreciated.
    Thanks,  John

Hi John,
the format in which the data are stored in the data file is, AFAIK, SAP's own format, so I don't think you will find any other tool that can read this file. But you don't need another tool; you have R3trans.
With the command
R3trans -l <data file> [-w <logfile>] [-v <verbose level>]
you can list the content of a data file. R3trans writes this content to the log file trans.log by default, but this can be changed with the -w option. With the -v option you can increase the verbose level (just specify a number > 1) to make R3trans write more details to the log.
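For example (a hedged invocation; the data file name R900123.XYZ is a placeholder):
R3trans -l R900123.XYZ -w /tmp/R900123.log -v 2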
    Just try it, perhaps it matches your needs...
    Best regards,
      Dirk

  • How to see data for particular date from a alert log file

Hi Experts,
I would like to know how I can see data for a particular date from alert_db.log in a Unix environment. I'm using Oracle 9i on Unix.
Right now I'm using tail -500 alert_db.log > alert.txt and then viewing the whole thing. Is there an easier way to look for a particular date or time?
Thanks
Shaan

Hi Jaffar,
Here I have to pass the exact date and time. Is there any way to see records for, let's say, Nov 23 2007? Because when I used
tail -500 alert_sid.log | grep " Nov 23 2007" > alert_date.txt
it didn't work. Here is a sample of the log file:
    Mon Nov 26 21:42:43 2007
    Thread 1 advanced to log sequence 138
    Current log# 3 seq# 138 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Mon Nov 26 21:42:43 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 137
    Mon Nov 26 21:42:43 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 137
    ARC1: Unable to archive log 1 thread 1 sequence 137
    Log actively being archived by another process
    Mon Nov 26 21:42:43 2007
    ARCH: Beginning to archive log 1 thread 1 sequence 137
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_137.dbf'
    ARCH: Completed archiving log 1 thread 1 sequence 137
    Mon Nov 26 21:42:44 2007
    Thread 1 advanced to log sequence 139
    Current log# 2 seq# 139 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Mon Nov 26 21:42:44 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 138
    ARC0: Beginning to archive log 3 thread 1 sequence 138
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_138.dbf'
    Mon Nov 26 21:42:44 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 138
    ARCH: Unable to archive log 3 thread 1 sequence 138
    Log actively being archived by another process
    Mon Nov 26 21:42:45 2007
    ARC0: Completed archiving log 3 thread 1 sequence 138
    Mon Nov 26 21:45:12 2007
    Starting control autobackup
    Mon Nov 26 21:45:56 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0033'
    handle 'c-2861328927-20071126-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Tue Nov 27 21:23:50 2007
    Starting control autobackup
    Tue Nov 27 21:30:49 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071127-00'
    Tue Nov 27 21:30:57 2007
    ARC1: Evaluating archive log 2 thread 1 sequence 139
    ARC1: Beginning to archive log 2 thread 1 sequence 139
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_139.dbf'
    Tue Nov 27 21:30:57 2007
    Thread 1 advanced to log sequence 140
    Current log# 1 seq# 140 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Tue Nov 27 21:30:57 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 139
    ARCH: Unable to archive log 2 thread 1 sequence 139
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARC1: Completed archiving log 2 thread 1 sequence 139
    Tue Nov 27 21:30:58 2007
    Thread 1 advanced to log sequence 141
    Current log# 3 seq# 141 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Tue Nov 27 21:30:58 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 140
    ARCH: Beginning to archive log 1 thread 1 sequence 140
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_140.dbf'
    Tue Nov 27 21:30:58 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 140
    ARC1: Unable to archive log 1 thread 1 sequence 140
    Log actively being archived by another process
    Tue Nov 27 21:30:58 2007
    ARCH: Completed archiving log 1 thread 1 sequence 140
    Tue Nov 27 21:33:16 2007
    Starting control autobackup
    Tue Nov 27 21:34:29 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0205'
    handle 'c-2861328927-20071127-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Wed Nov 28 21:43:31 2007
    Starting control autobackup
    Wed Nov 28 21:43:59 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-00'
    Wed Nov 28 21:44:08 2007
    Thread 1 advanced to log sequence 142
    Current log# 2 seq# 142 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Wed Nov 28 21:44:08 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 141
    ARCH: Beginning to archive log 3 thread 1 sequence 141
    Wed Nov 28 21:44:08 2007
    ARC1: Evaluating archive log 3 thread 1 sequence 141
    ARC1: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_141.dbf'
    Wed Nov 28 21:44:08 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 141
    ARC0: Unable to archive log 3 thread 1 sequence 141
    Log actively being archived by another process
    Wed Nov 28 21:44:08 2007
    ARCH: Completed archiving log 3 thread 1 sequence 141
    Wed Nov 28 21:44:09 2007
    Thread 1 advanced to log sequence 143
    Current log# 1 seq# 143 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo1.log
    Wed Nov 28 21:44:09 2007
    ARCH: Evaluating archive log 2 thread 1 sequence 142
    ARCH: Beginning to archive log 2 thread 1 sequence 142
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_142.dbf'
    Wed Nov 28 21:44:09 2007
    ARC0: Evaluating archive log 2 thread 1 sequence 142
    ARC0: Unable to archive log 2 thread 1 sequence 142
    Log actively being archived by another process
    Wed Nov 28 21:44:09 2007
    ARCH: Completed archiving log 2 thread 1 sequence 142
    Wed Nov 28 21:44:36 2007
    Starting control autobackup
    Wed Nov 28 21:45:00 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0202'
    handle 'c-2861328927-20071128-01'
    Clearing standby activation ID 2873610446 (0xab47d0ce)
    The primary database controlfile was created using the
    'MAXLOGFILES 5' clause.
    The resulting standby controlfile will not have enough
    available logfile entries to support an adequate number
    of standby redo logfiles. Consider re-creating the
    primary controlfile using 'MAXLOGFILES 8' (or larger).
    Use the following SQL commands on the standby database to create
    standby redo logfiles that match the primary database:
    ALTER DATABASE ADD STANDBY LOGFILE 'srl1.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl2.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl3.f' SIZE 10485760;
    ALTER DATABASE ADD STANDBY LOGFILE 'srl4.f' SIZE 10485760;
    Thu Nov 29 21:36:44 2007
    Starting control autobackup
    Thu Nov 29 21:42:53 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0206'
    handle 'c-2861328927-20071129-00'
    Thu Nov 29 21:43:01 2007
    Thread 1 advanced to log sequence 144
    Current log# 3 seq# 144 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo3.log
    Thu Nov 29 21:43:01 2007
    ARCH: Evaluating archive log 1 thread 1 sequence 143
    ARCH: Beginning to archive log 1 thread 1 sequence 143
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_143.dbf'
    Thu Nov 29 21:43:01 2007
    ARC1: Evaluating archive log 1 thread 1 sequence 143
    ARC1: Unable to archive log 1 thread 1 sequence 143
    Log actively being archived by another process
    Thu Nov 29 21:43:02 2007
    ARCH: Completed archiving log 1 thread 1 sequence 143
    Thu Nov 29 21:43:03 2007
    Thread 1 advanced to log sequence 145
    Current log# 2 seq# 145 mem# 0: /oracle/NEWDB/oradata/NEWDB/redo2.log
    Thu Nov 29 21:43:03 2007
    ARCH: Evaluating archive log 3 thread 1 sequence 144
    ARCH: Beginning to archive log 3 thread 1 sequence 144
Creating archive destination LOG_ARCHIVE_DEST_1: '/oracle/NEWDB/admin/arch/1_144.dbf'
    Thu Nov 29 21:43:03 2007
    ARC0: Evaluating archive log 3 thread 1 sequence 144
    ARC0: Unable to archive log 3 thread 1 sequence 144
    Log actively being archived by another process
    Thu Nov 29 21:43:03 2007
    ARCH: Completed archiving log 3 thread 1 sequence 144
    Thu Nov 29 21:49:00 2007
    Starting control autobackup
    Thu Nov 29 21:50:14 2007
    Control autobackup written to SBT_TAPE device
    comment 'API Version 2.0,MMS Version 5.0.0.0',
    media 'WP0280'
    handle 'c-2861328927-20071129-01'
    Thanks
    Shaan
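A hedged sketch of one way to do it: the alert-log date stamps carry the time between the day and the year (e.g. "Mon Nov 26 21:42:43 2007"), which is why grep " Nov 23 2007" never matches. Match around the time instead and print everything from the first Nov 23 stamp up to the first Nov 24 stamp (file name taken from the post):
awk '/Nov 23 ..:..:.. 2007/{p=1} /Nov 24 ..:..:.. 2007/{exit} p' alert_sid.log > alert_date.txt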

  • XIR2 - Date format problem in pdf file when scheduled to refresh

We have a number of Deski reports that are scheduled to refresh daily and then be saved as PDFs to a network drive. Despite the date format being correctly displayed in the reports stored in the repository, when the scheduled jobs save the reports as PDFs, the date format changes to the American format, i.e. 11/13/2007 instead of 13/11/2007. I've checked both the Date/Time and Regional Settings on the server and they are set correctly. If I log into Deski on the server, open a report and manually save it as a PDF, it saves with the UK format.
    So what do I need to change to get the scheduled job to use the UK date format?

    Hi Anne,
If you schedule reports through Infoview, please perform the steps below.
- Log in to Infoview and go to Preferences.
- Under the General tab, change "My interface locale" to "English (UK)" and "My current Timezone" to the appropriate time zone.
- Log off and log in again.
- Try to schedule the report to PDF or any other format and check the issue.
If you schedule reports through the CMC, change the Timezone setting to the appropriate timezone in Preferences and schedule the reports.
    Thanks and Regards,
    Purna.

  • Wrong date format when import CSV files

When you import a CSV file that contains fields with German date formats, these fields are displayed incorrectly.
Example: the CSV file contains "01.01.14". After importing, the corresponding cell in Numbers has the content "40178".
Reformatting the cell to a date format is not possible.
How do I get the date displayed correctly in the new version of Numbers?

    It seems that there are more than a few problems related to import/export with non-US localizations.
I, personally, don't have a solution to your problem. I started to adjust my Language & Region settings to test it, but there were several settings, I didn't get them right, and I didn't want to mess up my computer, so I set everything back to US/English.
The only workarounds I can suggest are:
Insert a new column into your table and put in it a formula that adds the number to the date 01.01.1904 (see the example after this list). Or,
Edit the CSV in TextEdit to Replace All "." with "/". This will work if "." is used for nothing else but these dates.
I recommend the second one if it will work for you. Hopefully Apple is addressing problems such as the one you are seeing.
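A hedged example of the first workaround, assuming the imported number landed in cell A2:
=DATE(1904,1,1)+A2
For the value 40178 this should yield 01.01.2014 (40178 days after 1 Jan 1904), matching the original 01.01.14.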

  • Regarding Date format in output text file

Hi Friends,
I am taking the data from the VBAK table into an internal table T_VBAK, which also contains the VDATU field. After that, using the GUI_DOWNLOAD function module, I download the internal table's data to a text file on the presentation server.
But the date appears in YYYYMMDD format in the text file, and I want it as MM/DD/YYYY. Can anybody please help me?

Hey Bala,
You can use the following conversion routine to convert the date from YYYYMMDD into MM/DD/YYYY:
DATA: l_date(10) TYPE c.
LOOP AT t_vbak INTO wa_vbak.
* VDATU is YYYYMMDD: offset +4(2) = month, +6(2) = day, +0(4) = year
  CONCATENATE wa_vbak-vdatu+4(2) wa_vbak-vdatu+6(2) wa_vbak-vdatu+0(4) INTO l_date SEPARATED BY '/'.
* note: VDATU itself is an 8-character DATS field, so the 10-character
* result is better written to a CHAR10 column of the download table
  wa_vbak-vdatu = l_date.
  MODIFY t_vbak FROM wa_vbak.
ENDLOOP.
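An alternative worth trying (a hedged sketch; WRITE ... TO supports date format options):
WRITE wa_vbak-vdatu TO l_date MM/DD/YYYY.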
    Regards,
    Chetan.
    PS: Reward points if this helps.
