T5120 - Fault Manager heavy log files in errlog

My Oracle database server, a T5120 running Solaris 10 and connected to my 3510 storage, is continuously dumping the error messages shown below. Is it a driver problem, or does a patch have to be installed?
This server is also showing I/O errors on the 3510-connected drives; is that related to the same issue?
Awaiting your valuable feedback.
Thanks
fmdump -eV gives the following output.
Aug 13 2012 03:14:57.783978047 ereport.io.ddi.fm-capability
nvlist version: 0
class = ereport.io.ddi.fm-capability
ena = 0xfc029a43af02401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /
(end detector)
dvr-name = fp
__ttl = 0x1
__tod = 0x50284701 0x2eba8e3f
Aug 13 2012 03:14:57.784554799 ereport.io.ddi.fm-capability
nvlist version: 0
class = ereport.io.ddi.fm-capability
The HBA details are as shown:
[email protected] # fcinfo hba-port
HBA Port WWN: 10000000c98e6a13
OS Device Name: /dev/cfg/c3
Manufacturer: Emulex
Model: LPe11000-S
Firmware Version: 2.80x7 (Z3D2.80X7)
FCode/BIOS Version: Boot:5.02a1 Fcode:1.50a9
Serial Number: 0999VM0-09320010MG
Driver Name: emlxs
Driver Version: 2.40s (2009.07.17.10.15)
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: 4Gb
Node WWN: 20000000c98e6a13
iostat -en
s/w h/w trn tot device
0 0 0 0 md/d10
0 0 0 0 md/d11
0 0 0 0 md/d12
0 0 0 0 md/d20
0 0 0 0 md/d21
0 0 0 0 md/d22
0 0 0 0 md/d30
0 0 0 0 md/d31
0 0 0 0 md/d32
0 0 0 0 md/d40
0 0 0 0 md/d41
0 0 0 0 md/d42
0 0 0 0 c1t0d0
0 0 0 0 c1t1d0
2 0 0 2 c0t0d0
2 40 38 80 c3t266000C0FF085963d2
2 6 1 9 c3t266000C0FF085963d1
2 15 10 27 c3t266000C0FF085963d0
2 25 21 48 c3t266000C0FF085963d4
0 0 0 0 c3t203200A0B86766CDd0
AVAILABLE DISK SELECTIONS:
0. c1t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@0,0
1. c1t1d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@0/pci@0/pci@2/scsi@0/sd@1,0
2. c3t203200A0B86766CDd0 <SUN-LCSM100_F-0735 cyl 40958 alt 2 hd 128 sec 64>
/pci@0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w203200a0b86766cd,0
3. c3t266000C0FF085963d0 <SUN-StorEdge3510-423A cyl 5118 alt 2 hd 64 sec 32> Dev-OCR
/pci@0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w266000c0ff085963,0
4. c3t266000C0FF085963d1 <SUN-StorEdge3510-423A cyl 40958 alt 2 hd 64 sec 32> Dev-Flsh
/pci@0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w266000c0ff085963,1
5. c3t266000C0FF085963d2 <SUN-StorEdge3510-423A cyl 58814 alt 2 hd 64 sec 127> Dev-Data
/pci@0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w266000c0ff085963,2
6. c3t266000C0FF085963d4 <SUN-StorEdge3510-423A cyl 35211 alt 2 hd 64 sec 127>
/pci@0/pci@0/pci@9/SUNW,emlxs@0/fp@0,0/ssd@w266000c0ff085963,4
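A quick way to see whether the flood is a single event class or a mix is to tally the class field of the one-line `fmdump -e` summaries. A sketch (the sample lines below are made up for illustration; on the live system, pipe `fmdump -e` into the awk stage instead of the printf):

```shell
# Tally fmdump event classes; the last whitespace-separated field of each
# `fmdump -e` summary line is the ereport class.
# (Sample lines are illustrative; replace the printf with `fmdump -e`
#  on the real system.)
printf '%s\n' \
  'Aug 13 03:14:57.7839 ereport.io.ddi.fm-capability' \
  'Aug 13 03:14:57.7845 ereport.io.ddi.fm-capability' \
  'Aug 13 03:15:02.1101 ereport.io.scsi.cmd.disk.tran' |
awk '{print $NF}' | sort | uniq -c | sort -rn
# prints each class with its count, most frequent first
```

The per-class counts make it easier to see at a glance which driver or device the ereports implicate before digging into the full `fmdump -eV` detail.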


Similar Messages

  • How to create and manage the log file

    Hi,
    I want to trace and debug the program process.
    I wrote code to create a log file and debug the process, but I am not getting the result.
    Please help me with how to create and manage the log file.
    Here is a sample program:
    package Src;
    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Logger;
    public class Mylog {
        public static void main(String[] args) {
            try {
                // Create an appending file handler
                boolean append = true;
                FileHandler handler = new FileHandler("debug.log", append);
                // Attach the handler to the desired logger
                Logger logger = Logger.getLogger("com.mycompany");
                logger.addHandler(handler);
                // Write a single line to debug.log
                logger.info("after creating log file");
            } catch (IOException e) {
                System.out.println("Sys Err");
            }
        }
    }
    Please give your valuable suggestion...!
    Thanks
    MerlinRoshina

    I just need to write a single line to the log file.

  • Remote management audit log file

    I've read the documentation @
    http://www.novell.com/documentation/...a/ad4zt4x.html
    which indicates that the audit file is auditlog.txt and is located in the
    system directory of the managed workstation. The problem is I can't find the
    log file in that location or anywhere else on the computer. I even looked in
    C:\Program Files\Novell\ZENworks\RemoteManagement\RMAgent but I can't find
    anything. Any ideas? Can someone point me in the right direction?
    BTW, I'm using ZDM 6.5 SP2 for both the server and the workstations.
    Jim Webb

    Just an FYI, with ZDM 6.5 HP3 the file name changed from AuditLog.txt to
    ZRMAudit.txt still located under system32 on Windows XP.
    Jim Webb
    >>> On 5/22/2006 at 3:27 PM, in message
    <[email protected]>,
    Jim Webb<[email protected]> wrote:
    > Well I found out the ZDM 6.5 HP2 fixes the problem of the log file not
    > being
    > created.
    >
    > Jim Webb
    >
    >>>> On 5/19/2006 at 8:37 AM, in message
    > <[email protected]>,
    > Jim Webb<[email protected]> wrote:
    >> Well, it does show up in the event log but not in the inventory. If I
    >> disable inventory the log file won't be deleted, correct?
    >>
    >> Jim Webb
    >>
    >>>>> On 5/18/2006 at 10:03 AM, in message
    >> <[email protected]>, Marcus
    >> Breiden<[email protected]> wrote:
    >>> Jim Webb wrote:
    >>>
    >>>> I did a search on a machine I am remote controlling, no log file. What
    >>>> next?
    >>> good question... does the session show up in the eventlog?

  • Managing Alert log files : Best practices?

    DB Version : 10.2.0.4
    Several of our DBs' alert logs have become quite large. I know that Oracle will create a brand-new alert log file if I delete the existing one.
    I just want to know how you all manage your alert log files. Do you archive them (move them to a different directory) and let Oracle recreate a brand-new alert log? I want to know if there are any best practices I could follow.

    ScottsTiger wrote:
    DB Version : 10.2.0.4
    Several of our DBs' alert logs have become quite large. I know that oracle will create a brand new alert log file if i delete the existing one.
    But i just want to know how you guys manage your alert log files. Do you guys archive (move to a different directory) and recreate a brand new alert log. Just want to know if there any best practices i could follow.
    At the end of every day (or at whatever interval suits you), archive the alert.log by moving it to another directory, then remove the original. The database instance will automatically create a new alert.log the next time it writes.
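The archive-and-recreate approach can be scripted in a few lines. A minimal sketch using a copy-then-truncate variant, so the instance keeps writing to the same open file handle (the example path below is hypothetical; point it at your own background_dump_dest):

```shell
# rotate_alert FILE: copy the alert log aside with a date stamp, then
# truncate the live file in place so the instance's open file handle
# stays valid. Returns non-zero if FILE does not exist.
rotate_alert() {
  alog=$1
  [ -f "$alog" ] || return 1
  cp "$alog" "$alog.$(date +%Y%m%d)" && : > "$alog"
}
# Example (hypothetical path; adjust to your environment):
# rotate_alert /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log
```

Run from cron nightly, this keeps the live alert.log small while the dated copies can be compressed or pruned on their own schedule.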

  • Question about how Oracle manages Redo Log Files

    Good morning,
    Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
    sort of graphically:
        GROUP A             GROUP B
          A1                  B1
          A2                  B2
    The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the archiver process be temporarily delayed until the removed disks are brought back online, or is the DBA forced to wait until the archiver has finished copying the redo log file into the archive?
    Thank you for your help,
    John.

    Hello,
    Dropping Log Groups
    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
    * You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    GROUP# ARC STATUS
    1 YES ACTIVE
    2 NO CURRENT
    3 YES INACTIVE
    4 YES INACTIVE
    Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
    When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
    When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
    Your database will not be affected, because an instance can operate with two redo log groups; the minimum is two because the LGWR (log writer) process writes to the redo log files in a circular manner. With only two groups, however, dropping one would leave the instance unable to switch and it would hang. So if you want to take one group offline, first add a third group, force a log switch so it becomes current, and then drop the group you want offline.
    Please refer to:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • Large and Many Replication Manager Database Log Files

    Hi All,
    I've recently added replication manager support to our database systems. After enabling the replication manager I end up with many log.* files of many gigabytes a piece on the master. This makes backing up the database difficult. Is there a way to purge the log files more often?
    It also seems that the replication slave never finishes synchronizing with the master.
    Thank you,
    Rob

    So I set up a debug environment on test machines, with a snapshot of the DB. We now set rep_set_limit to 5 MB.
    Now it's failing to sync, so I recompiled with --enable-diagnostic and enabled DB_VERB_REPLICATION.
    On the master we see this:
    2007-06-06 18:40:26.646768500 DBMSG: ERROR:: sendpages: 2257, page lsn [293276][4069284]
    2007-06-06 18:40:26.646775500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35e370
    2007-06-06 18:40:26.646782500 DBMSG: ERROR:: sendpages: 2257, lsn [640947][6755391]
    2007-06-06 18:40:26.646794500 DBMSG: ERROR:: sendpages: 2258, page lsn [309305][9487507]
    2007-06-06 18:40:26.646801500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb35f3b4
    2007-06-06 18:40:26.646803500 DBMSG: ERROR:: sendpages: 2258, lsn [640947][6755391]
    2007-06-06 18:40:26.646809500 DBMSG: ERROR:: send_bulk: Send 562140 (0x893dc) bulk buffer bytes
    2007-06-06 18:40:26.646816500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.647064500 DBMSG: ERROR:: wrote only 147456 bytes to site 10.0.3.235:9003
    2007-06-06 18:40:26.648559500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.648561500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.648562500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.648563500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649966500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type page_req, LSN [0][0]
    2007-06-06 18:40:26.649968500 DBMSG: ERROR:: page_req: file 0 page 2124 to 2124
    2007-06-06 18:40:26.649970500 DBMSG: ERROR:: page_req: Open 0 via mpf_open
    2007-06-06 18:40:26.649971500 DBMSG: ERROR:: sendpages: file 0 page 2124 to 2124
    2007-06-06 18:40:26.651699500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.651702500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb3d801c
    2007-06-06 18:40:26.651704500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.651705500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.651706500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:26.652858500 DBMSG: ERROR:: sendpages: 2124, page lsn [516423][7302945]
    2007-06-06 18:40:26.652860500 DBMSG: ERROR:: bulk_msg: Copying LSN [640947][6755391] of 4152 bytes to 0x2afb2d701c
    2007-06-06 18:40:26.652861500 DBMSG: ERROR:: sendpages: 2124, lsn [640947][6755391]
    2007-06-06 18:40:26.652862500 DBMSG: ERROR:: send_bulk: Send 4164 (0x1044) bulk buffer bytes
    2007-06-06 18:40:26.652864500 DBMSG: ERROR:: /ask/bloglines/db/sitedb rep_send_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391] nobuf
    2007-06-06 18:40:38.951290500 1 28888 dbnet: 0,0: MSG: ** checkpoint start **
    2007-06-06 18:40:38.951321500 1 28888 dbnet: 0,0: MSG: ** checkpoint end **
    On the slave, we see this:
    2007-06-06 18:40:26.668636500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668637500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668644500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66c1fc ep 0x2afb671344 pgrec data 0x2afb66c1fc, size 4152 (0x1038)
    2007-06-06 18:40:26.668645500 DBMSG: ERROR:: PAGE: Received page 2254 from file 0
    2007-06-06 18:40:26.668658500 DBMSG: ERROR:: PAGE: Received duplicate page 2254 from file 0
    2007-06-06 18:40:26.668664500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668666500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668672500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66d240 ep 0x2afb671344 pgrec data 0x2afb66d240, size 4152 (0x1038)
    2007-06-06 18:40:26.668674500 DBMSG: ERROR:: PAGE: Received page 2255 from file 0
    2007-06-06 18:40:26.668686500 DBMSG: ERROR:: PAGE: Received duplicate page 2255 from file 0
    2007-06-06 18:40:26.668703500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668704500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668706500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66e284 ep 0x2afb671344 pgrec data 0x2afb66e284, size 4152 (0x1038)
    2007-06-06 18:40:26.668707500 DBMSG: ERROR:: PAGE: Received page 2256 from file 0
    2007-06-06 18:40:26.668714500 DBMSG: ERROR:: PAGE: Received duplicate page 2256 from file 0
    2007-06-06 18:40:26.668715500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668722500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668723500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb66f2c8 ep 0x2afb671344 pgrec data 0x2afb66f2c8, size 4152 (0x1038)
    2007-06-06 18:40:26.668730500 DBMSG: ERROR:: PAGE: Received page 2257 from file 0
    2007-06-06 18:40:26.668743500 DBMSG: ERROR:: PAGE: Received duplicate page 2257 from file 0
    2007-06-06 18:40:26.668750500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.668752500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.668758500 DBMSG: ERROR:: rep_bulk_page: p 0x2afb67030c ep 0x2afb671344 pgrec data 0x2afb67030c, size 4152 (0x1038)
    2007-06-06 18:40:26.668760500 DBMSG: ERROR:: PAGE: Received page 2258 from file 0
    2007-06-06 18:40:26.668772500 DBMSG: ERROR:: PAGE: Received duplicate page 2258 from file 0
    2007-06-06 18:40:26.668779500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:26.690980500 DBMSG: ERROR:: /ask/bloglines/db/sitedb-slave rep_process_message: msgv = 3 logv 12 gen = 12 eid 0, type bulk_page, LSN [640947][6755391]
    2007-06-06 18:40:26.690982500 DBMSG: ERROR:: rep_bulk_page: Processing LSN [640947][6755391]
    2007-06-06 18:40:26.690983500 DBMSG: ERROR:: rep_bulk_page: p 0x736584 ep 0x7375bc pgrec data 0x736584, size 4152 (0x1038)
    2007-06-06 18:40:26.690985500 DBMSG: ERROR:: PAGE: Received page 2124 from file 0
    2007-06-06 18:40:26.690986500 DBMSG: ERROR:: PAGE: Received duplicate page 2124 from file 0
    2007-06-06 18:40:26.690992500 DBMSG: ERROR:: rep_bulk_page: rep_page ret 0
    2007-06-06 18:40:36.289310500 DBMSG: ERROR:: election thread is exiting
    I have full log files if that could help, these are just the end of those.
    Any ideas? Thanks...
    -Paul
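On the original purging question: Berkeley DB's `db_archive -d` (run with `-h` pointing at the environment home) removes log files no longer needed by the local environment, and BDB of this era can do the same automatically via a DB_CONFIG flag. A sketch, with the caveat that in a replicated environment removing logs too aggressively can force a lagging client into a full re-sync, so treat this as a starting point rather than a recommendation (verify the flag name against your release's documentation):

```
# DB_CONFIG in the environment home (BDB 4.x-era flag name; an
# assumption here -- check your release's docs before relying on it):
set_flags DB_LOG_AUTOREMOVE
```

Alternatively, running `db_archive -h /path/to/env -d` on demand is easier to coordinate with backups and with how far behind the replica is.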

  • Cisco Security manager syslog.log file problem

    Hello
    I have a problem with CSM: the file Syslog.log (C:\Program Files\CSCOpx\log\Syslog.log) grows very fast, fills the hard disk, and saturates the server. I have tried the CiscoWorks log rotation but it doesn't work. What else can I do?
    The hard drive fills in 4 hours. Thank you.

    In the CSM client, under Tools > CSM Administration > Debugging, you can change the level to something higher than debugging.
    I hope it helps.
    PK

  • Log file size in Sun Access Manager

    Does anyone have an idea of how Sun Access Manager's log file size will grow with respect to the actions performed?
    Can someone share data regarding this? If someone has a better scenario and supporting data w.r.t. log file size, that would be helpful.
    Thanks,

    I would like to back up the log files daily (for future reference).
    I need to know the following:
    1) Which log files need to be backed up? Do I need to take all am*.* files (around 3.5 GB in size)?
    2) Ideally, I believe only a few MB of data goes into these am*.* files, but I cannot store 3.5+ GB of logs every day.
    3) I observed that these am*.* files have every day's activity appended to them, so I would like to enable log rotation.
    Please let me know how I can proceed.
    Thanks

  • Logical sql in log file.

    Can someone please tell me how to see the complete SQL query in the log file? If I run the same query again, the SQL is not produced. I looked in the server log file and also the Manage Sessions log file; it just says all columns from 'Subject Area'. I want to see all the joins and filters as well. How can I see the complete SQL even for repeated queries? I set my logging level to 2.

    http://lmgtfy.com/?q=obiee+disable+query+caching
    http://catb.org/esr/faqs/smart-questions.html#homework

  • Managed server logs

    When you fire up the NodeManager, and start up managed servers from the web
              console, where do the logs of the respective servers go? I couldn't seem to
              find anything telling in my domain directory when I looked.
              Thanx!
              Regards,
              Will Hartung
              ([email protected])
              

    Will,
              You're going to actually see three different types of logs when you use the
              NM:
              1. The Node Manager log files which contain NM specific entries.
              2. The Managed Server log files which contain the managed server specific
              info such as you're used to seeing from stdout, stderr when you start the
              server from the command line. Also you'll find additional info such as the
              server's PID, and copy of the server's configuration called
              nodemanager.config
              3. Node Manager client logs, which reside on the admin server and contain
              subdirectories for each managed server issued an NM command.
              You won't see these last two log file types or their subdirectories until
              you actually use the NM to start the managed servers. All of these logs
              however, are stored in a parent subdirectory called NodeManagerLogs. By
              default this directory is created in the directory where you called
              startNodeManager[.cmd .sh] from. So if I ran . startNodeManager.sh from
              mydomain then I would see
              $BEA_HOME/user_projects/domains/mydomain/NodeManagerLogs
              The easiest way I have found to deal with this is to ensure that
              $WL_HOME/server/bin is in your PATH and simply call . startNodeManager.sh
              from the directory you want the logs to be created in, of course with the
              appropriate arguments or values in nodemanager.properties.
              But remember the NM defaults to $WL_HOME/common/nodemanager, so you may see
              the logs under there initially.
              //provide a follow up if any of that didn't make sense :)
              ~RU
              "Will Hartung" <[email protected]> wrote in message
              news:[email protected]...
              > When you fire up the NodeManager, and start up managed servers from the
              web
              > console, where do the logs of the respective servers go? I couldn't seem
              to
              > find anything telling in my domain directory when I looked.
              >
              > Thanx!
              >
              > Regards,
              >
              > Will Hartung
              > ([email protected])
              >
              >
              

  • Managing log files

    Our backup size is increasing day by day because we are not managing our log files, which are created on a daily basis.
    How do we manage the size on both nodes (application and database) on a daily basis? This is for R12.
    Edited by: 838982 on Oct 12, 2011 9:02 PM

    Backup size increasing day by day as we are not managing our log file and the files created on daily basis.
    how manage size of both nodes (applications and database) on daily basis. FOR R12
    You need to purge the files/data that you no longer need.
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Purge+AND+Strategy&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    https://forums.oracle.com/forums/search.jspa?threadID=&q=Purging&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein

  • How to fetch and display arbitrary log files from a managed host?

    We are creating a small management portal for a custom application and need a way to display, in a browser, the contents of an arbitrary log file located on an EM managed host. The effect we are looking for is similar to what you get when you show the contents of a database alert log using EM.
    I want to extend the EM agent to get the log and display the contents, but see two problems: (1) it seems like we would have to schedule the fetch of the log contents to happen on a regular basis and would not be able to do it on demand and (2) storing the results in the EM repository seems like it could consume a pile of storage, even if it only stores the logs for 24 hours.
    I am pretty sure we can bypass the repository somehow because I think the "show database alert log" EM process does so. Of course, I can't really be sure how this code is getting the alert log, but it seems reasonable to assume it is using the management agent.
    Is there some API for the EM Agent that I am missing?
    Any ideas would be appreciated.

    From within OEM it doesn't look possible. User-defined metrics (UDM) can only return a number or a string. The alert-log fetch is done via alertlogViewer.pl, which is passed a couple of parameters.
    OEM does have a preliminary/rough API available at http://www.oracle.com/technology/products/oem/emx/index.html but I haven't seen anyone make use of it yet.
    If you use just a web connection, you will probably run into security issues, as a web server can usually only view/access content under its htmldoc directory.
    It seems like some very custom, non-OEM code is what you seek.

  • Delete Log File: Correspondence in Training and Event Management ( t77vp)

    Is there a standard way of deleting the Log File: Correspondence in Training and Event Management (T77VP) from the system?
    Thanks for your help.
    Andi

    Hi Niladri,
    Please open a new discussion for this as it's a different question. Not only is this stated in the guidelines and makes it easier for other members to search for the right things, but it also increases your chances of getting the right answers, because users know you are looking at LSO rather than TEM and because many users, sadly, are driven by points primarily for giving answers and know you could not mark their answer as correct, because it's not your post.
    Please also give context info: which correspondence solution are you using (Smartforms, Adobe forms, SAPscript) and which version of LSO. 

  • Problem about space management of archived log files

    Dear friends,
    I have a problem with space management of archived log files.
    My database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web-based) to configure all the backup and recovery settings.
    I configured the Flash Recovery Area to do backup and recovery automatically. My daily backup is scheduled every night at 2:00 am, and my backup setting is "disk settings" / "compressed backup set". The following is the RMAN script:
    Daily Script:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    }
    The retention policy is the second choice, that is, "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)". The recovery window is 1 day.
    I assign enough space for the flash recovery area: my database is about 2 GB and I assign 20 GB as the flash recovery area.
    Now here is the problem. According to the Oracle online manual, Oracle can manage the flash recovery area automatically; that is, when the space is full it can delete obsolete archived log files. In fact, it never works: whenever the space fills up, the database hangs. Besides, the status of the archived log files is very strange; for example, the "obsolete" status can change from "yes" to "no" and then back again. I know Oracle usually keeps archived files somewhat longer than the retention policy requires, but I don't know why the obsolete status changes on its own. Although I could write a scheduled job to delete obsolete archived files every day, I want to understand the reason. My goal is to back everything up to disk and let Oracle manage the files automatically.
    There is also another problem related to archive mode. I have two Oracle 10g (Release 1) databases: db1 is more than 20 GB and db2 is about 2 GB. Both have the same backup and recovery policy, except that I assign more flash recovery area to db1. Both are in archivelog mode, and almost nothing accesses them except the scheduled backup job and my occasional administration through OEM. The strange thing is that the smaller database, db2, produces far more archived log files than the bigger one, and the same goes for the size of the flashback logs for point-in-time recovery. (I enabled flashback logging for fast database point-in-time recovery; the flashback retention time is 24 hours.) I also found that the smaller database's memory utilization is higher: it stays above 99% nearly all the time, while the bigger one stays around 97%. (Automatic Shared Memory Management is enabled on both.) CPU load and queue lengths on both databases are very low, and I am fairly sure nobody has hacked them. So I have no idea why the same backup and recovery policy produces such different results, especially why the smaller database generates more redo than the bigger one. Does anyone know the reason, or how I should investigate?
    By the way, I found that web-based OEM doesn't reflect the correct database status when the database shuts down abnormally. For example, if the database hangs because the flash recovery area is full, then after I assign more space and restart the database, OEM usually still shows the old status; I must restart OEM manually before it reflects the current state. Does anyone know in which situations I must restart OEM to get the correct database status?
    sorry for the long message, I just want to describe in details to easy diagnosis.
    any hint will be greatly appreciated!
    Sammy

    Thank you very much. In fact, my site's Oracle never managed the archived files automatically, although I tried my best. In the end, I made a daily job that checks the archived files and deletes them.
    Thanks again.
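The daily cleanup job described above can be as blunt as a find(1) sweep; where possible, prefer letting RMAN do the deleting (`crosscheck archivelog all; delete noprompt obsolete;`) so the control file stays consistent with what is on disk. A sketch of the cron-job approach (the example path is hypothetical):

```shell
# purge_old_logs DIR DAYS: delete regular files under DIR whose
# modification time is more than DAYS days ago. A rough stand-in for a
# daily archive-cleanup job; prefer RMAN's `delete obsolete` where
# possible so the catalog stays consistent with disk.
purge_old_logs() {
  dir=$1; days=$2
  find "$dir" -type f -mtime +"$days" -exec rm -f {} +
}
# Example (hypothetical path): purge_old_logs /u01/fra/ORCL/archivelog 1
```

Scheduled nightly, this caps the on-disk footprint at roughly the recovery window plus one day of archive generation.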

  • Log Files out of Control - How to manage size?

    This is a three-part question:
    1) Our Apache2 error log has grown to 41 GB!!! How can we clear it?
    2) Is there a way to limit log file growth?
    3) Is there an application to manage log files on a server?
    We are running Leopard Server 10.5.x.
    Thanks!

    1) How do we set up apache to rotate logs? I was checking server admin->web service for configuration options, but didn't see any (we did advanced server configuration).
    It's automatic, and AFAIK enabled by default within Mac OS X Server. If you're piling up stuff in your logs, then your server is either very busy, or there are issues or problems being reported in the logs.
    2) Where in server admin?
    Server Admin > select server > Web > Sites > Logging
    Or as an alternative approach toward learning more about Mac OS X Server and its technologies, download the PDF of the relevant [Apple manual|http://www.apple.com/server/macosx/resources/documentation.html]. Here, you can brute-force search the manual in the Preview tool. Depending on how you best learn, you can read through the various manuals for details on how to configure and operate and troubleshoot the various components, and (for more detail than is available in the Mac OS X Server manuals) for pointers to the component-specific web sites and documents, too.
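Independent of what Server Admin exposes, Apache's own piped-logging helper can cap growth of the error log directly. A sketch for httpd.conf (the rotatelogs path varies by install, so the one below is an assumption):

```
# Rotate the error log daily (86400 s) through Apache's rotatelogs
# helper, so no single file grows unbounded.
ErrorLog "|/usr/sbin/rotatelogs /var/log/apache2/error_log.%Y%m%d 86400"
```

The dated files still need pruning separately, for example with a periodic find(1) sweep over the log directory.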
