When is Internal Manager log file updated?

Hi:
I thought the SID_date.mgr file (on 12.1.3 with database 11.1.0.7, Linux) was created when you start the applications, but I saw new entries in it today. Could you please tell me when, and with what, this file is updated?
Thanks.

gbite wrote:
Apart from the head of the log saying the ICM is starting etc, I get the following every 2-3 minutes:
Process monitor session ended : 25-APR-2012 17:07:35
Process Monitor Started and Ended Every Several Seconds in ICM log file [ID 308809.1]
Many Defunct Processes Created on System [ID 461684.1]
Many Dead Processes Referencing WFMGSMS Concurrent Manager [ID 801362.1]
Concurrent Processing - Concurrent Manager Generic Platform Questions and Answers [ID 105133.1]
Thanks,
Hussein
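
In case it helps, a quick way to see when the ICM log is actually being written is to tail it and stamp each new entry with the wall-clock time. This is just a sketch; the location under $APPLCSF/$APPLLOG and the file name pattern are assumptions that depend on your setup:

cd $APPLCSF/$APPLLOG
ls -lt *.mgr | head -5               # most recently updated manager log files
tail -f SID_*.mgr | while read line; do
    echo "$(date '+%H:%M:%S') $line" # prepend arrival time to each new entry
done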

Similar Messages

  • Concurrent manager log file is not showing.

    A user is not able to view a concurrent manager log file.
    When clicking the View Log button, the error below is shown:
    "Client routine fdpvwr failed to prepare for file transfer"
    Thanks in advance for your help.

    Hi Hussain,
    I am replying to Milind's mail. As per the note ID you gave (466743.1), we checked whether any interoperability patch was missing, but in our case we did not miss the interoperability patch before applying the RUP 7 patch. After that we compiled the APPS schema and relinked f60webmx as per the resolution found on Google. Since then the error has still not been resolved; it still reports "Client routine fdpvwr failed to prepare for file transfer". Please help us find an exact resolution for this.
    Regards,
    Pavan Kumar K.

  • Excessive disk usage when I drag the log file viewer window (why)?

    When I drag the Log File Viewer window in Gnome, I get huge amounts of hard disk usage and the hard drive makes a loud rumbling noise. This happens only while dragging the Log File Viewer window and no other windows (that I've noticed so far).
    Why is this happening?
    Last edited by trusktr (2012-01-11 05:27:54)

    Elements11DRC
    What version of Premiere Elements are you working with and on what computer operating system is it running?
    Can we assume from your selected ID that the program is Premiere Elements 11?
    Pending further details, I will assume that you are working with Premiere Elements 11 on Windows 7, 8, or 8.1 64 bit.
    Where is this "My Videos" Folder - on a DVD disc being used as a DataDisc for video storage purposes?
    If so, Add Media/DVD Camera or Computer Drive/Video Importer and from there automatically into the project in Project Assets as well as on the Timeline.
    If your "My Videos" Folder is a folder on the computer hard drive, then Add Media/Files and Folders to get the video into Project Assets from where you drag the video to the Timeline.
    Now for the video that you are trying to import... what are its properties?
    Video Compression
    Audio Compression
    Frame Size
    Frame Rate
    Interlaced or Progressive
    File Extension
    Pixel Aspect Ratio
    These are probably answered most easily by knowing the brand/model/settings of the camera that recorded the video.
    Prime interest, that video compression. It could be MotionJPEG which can be problematic for Premiere Elements. It could be AVCHD.avi which cannot be imported.
    We can go into greater detail on your project details once we rule in or out any of the factors mentioned above.
    By the way, what is the destination for this project....burn to disc DVD or Blu-ray...export to file saved to the computer hard drive...other?
    More later.
    Thanks.
    ATR

  • C:\Program Files\Microsoft Configuration Manager\Logs not updating

    Hi,
    Has anybody experienced a scenario where the log files in C:\Program Files\Microsoft Configuration Manager\Logs don't update anymore?
    For example, wsyncmgr.log dates from the beginning of April; there are no recent logs there except for seven items such as smsprov, smspxe, and smsdpusage (dated today).
    Please advise.
    J.
    Jan Hoedt

    Open ConfigMgr Service Manager and check the logging options (is logging for each component enabled at all? Is it really logging to the directory you examined?). You can also open the logs using Notepad; cmtrace.exe sometimes does not display the entire content if there are malformed lines.
    Having installed ConfigMgr on the C:\ drive is not the best setup BTW.
    Torsten Meringer | http://www.mssccmfaq.de

  • Where is Catalog Manager log file stored

    Hi,
    I'm getting the error "An internal error occurred during: " when trying to create a report or use the XML search-and-replace utility. Since there is no error code or anything else that would help in understanding the reason, I'm now looking for a log file where such events would be logged in more detail.
    Is there such log file at all, and if yes - what is the path or at least log file name?
    Thanks in advance

    The Catalog Manager uses web services to connect to the Presentation Services Plug-in, so I would think you should see any errors in the Presentation Services Plug-in web application log or in the Presentation Services log in the BI directories.

  • Enterprise manager log file sizes

    Hi,
    I was wondering if there is a way of managing the size of the emdb.nohup file in Enterprise Manager. Looking at the documentation, it looks as though you can control the emoms trace and log file sizes, but I can't find anything about the nohup log.
    Ideally I would like to be able to purge the log file.
    Thanks very much!

    Hi again,
    I found the emdb.nohup file in my log directory at the location you noted. It is apparently created when I stop and restart my dbconsole (emctl start dbconsole), and it is updated each time I make a connection to that database and every time the page refreshes.
    I think the nohup suffix is probably intentional on Oracle's part, to indicate that this is a log file that is 'active'; in reality it is not a true nohup file in the sense of the Unix nohup command (at least that is what I'm thinking).
    I'm not knowledgeable enough on this to be sure of my theory, but that is what I basically theorize.
    According to man pages on nohup, it states nohup is "a utility immune to hangups".
    "nohup - run a command immune to hangups, with output to a non-tty"
    To answer your question, there is no problem purging or pruning this file.
    I just cleared it by redirecting the output of date into the file, which truncates it down to a single new entry containing the current date/timestamp.
    e.g., $ date > emdb.nohup
    Then, I reconnected to my OEM console for this database and it updated the file with new entries for the new connection. No problem....
    Wed Aug 6 09:46:53 EDT 2008
    08/08/06 09:47:07 ## oracle.sysman.db.adm.inst.SitemapController: event="doLoad"
    08/08/06 09:47:07 ## 1. newPage = /database/instance/sitemap/sitemap
    08/08/06 09:47:07 ## 2. newPage = /database/instance/sitemap/sitemap
    Ji Li
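
    A small follow-up sketch: because dbconsole keeps emdb.nohup open, truncating the file in place is safer than deleting it (the writer keeps its handle and keeps appending). The path below is an assumption; yours may differ:

    # Truncate the open log in place without removing it (path is an assumption).
    cd $ORACLE_HOME/$(hostname)_$ORACLE_SID/sysman/log
    : > emdb.nohup        # empty the file; the writer just continues appending
    # or, as above, leave a timestamp marker:
    date > emdb.nohup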

  • Why is there no error when checkpointing after db log files are removed?

    I would like to test a scenario when an application's embedded database is corrupted somehow. The simplest test I could think of was removing the database log files while the application is running. However, I can't seem to get any failure. To demonstrate, below is a code snippet that demonstrates what I am trying to do. (I am using JE 3.3.75 on Mac OS 10.5.6):
    import java.io.File;
    import java.io.FilenameFilter;

    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class FileRemovalTest {

        public static void main(String[] args) throws Exception {
            // Set up the DB environment, with the cleaner and checkpointer disabled.
            EnvironmentConfig ec = new EnvironmentConfig();
            ec.setAllowCreate(true);
            ec.setTransactional(true);
            ec.setConfigParam(EnvironmentConfig.ENV_RUN_CLEANER, "false");
            ec.setConfigParam(EnvironmentConfig.ENV_RUN_CHECKPOINTER, "false");
            ec.setConfigParam(EnvironmentConfig.CLEANER_EXPUNGE, "true");
            ec.setConfigParam("java.util.logging.FileHandler.on", "true");
            ec.setConfigParam("java.util.logging.level", "FINEST");
            Environment env = new Environment(new File("."), ec);

            // Create a database.
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "test", dbConfig);

            // Insert an entry and checkpoint the database.
            db.put(
                null,
                new DatabaseEntry("key".getBytes()),
                new DatabaseEntry("value".getBytes()));
            CheckpointConfig checkpointConfig = new CheckpointConfig();
            checkpointConfig.setForce(true);
            env.checkpoint(checkpointConfig);

            // Delete the DB log files (*.jdb) out from under the environment.
            File[] dbFiles = new File(".").listFiles(new DbFilenameFilter());
            if (dbFiles != null) {
                for (File file : dbFiles) {
                    file.delete();
                }
            }

            // Add another entry and checkpoint the database again.
            db.put(
                null,
                new DatabaseEntry("key2".getBytes()),
                new DatabaseEntry("value2".getBytes())); // Q: Why does this 'put' succeed?
            env.checkpoint(checkpointConfig);            // Q: Why does this checkpoint succeed?

            // Close the database and the environment.
            db.close();
            env.close();
        }

        private static class DbFilenameFilter implements FilenameFilter {
            public boolean accept(File dir, String name) {
                return name.endsWith(".jdb");
            }
        }
    }
    This is what I see in the logs:
    2009-03-05 12:53:30:631:CST CONFIG Recovery w/no files.
    2009-03-05 12:53:30:677:CST FINER Ins: bin=2 ln=1 lnLsn=0x0/0xe9 index=0
    2009-03-05 12:53:30:678:CST FINER Ins: bin=5 ln=4 lnLsn=0x0/0x193 index=0
    2009-03-05 12:53:30:688:CST FINE Commit:id = 1 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:690:CST FINEST size interval=0 lastCkpt=0x0/0x0 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:703:CST FINER Ins: bin=8 ln=7 lnLsn=0x0/0x48b index=0
    2009-03-05 12:53:30:704:CST CONFIG Checkpoint 1: source=recovery success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:705:CST CONFIG Recovery finished: Recovery Infonull> useMinReplicatedNodeId=0 useMaxNodeId=0 useMinReplicatedDbId=0 useMaxDbId=0 useMinReplicatedTxnId=0 useMaxTxnId=0 numMapINs=0 numOtherINs=0 numBinDeltas=0 numDuplicateINs=0 lnFound=0 lnNotFound=0 lnInserted=0 lnReplaced=0 nRepeatIteratorReads=0
    2009-03-05 12:53:30:709:CST FINEST Environment.open: name=test dbConfig=allowCreate=true
    exclusiveCreate=false
    transactional=true
    readOnly=false
    duplicatesAllowed=false
    deferredWrite=false
    temporary=false
    keyPrefixingEnabled=false
    2009-03-05 12:53:30:713:CST FINER Ins: bin=2 ln=10 lnLsn=0x0/0x7be index=1
    2009-03-05 12:53:30:714:CST FINER Ins: bin=5 ln=11 lnLsn=0x0/0x820 index=1
    2009-03-05 12:53:30:718:CST FINE Commit:id = 2 numWriteLocks=0 numReadLocks = 0
    2009-03-05 12:53:30:722:CST FINEST Database.put key=107 101 121 data=118 97 108 117 101
    2009-03-05 12:53:30:728:CST FINER Ins: bin=13 ln=12 lnLsn=0x0/0x973 index=0
    2009-03-05 12:53:30:729:CST FINE Commit:id = 3 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:729:CST FINEST size interval=0 lastCkpt=0x0/0x581 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:735:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x193 newLnLsn=0x0/0xb61
    2009-03-05 12:53:30:736:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x820 newLnLsn=0x0/0xc3a
    2009-03-05 12:53:30:737:CST FINER Ins: bin=8 ln=15 lnLsn=0x0/0xd38 index=0
    2009-03-05 12:53:30:738:CST CONFIG Checkpoint 2: source=api success=true nFullINFlushThisRun=6 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:741:CST FINEST Database.put key=107 101 121 50 data=118 97 108 117 101 50
    2009-03-05 12:53:30:742:CST FINER Ins: bin=13 ln=16 lnLsn=0x0/0xeaf index=1
    2009-03-05 12:53:30:743:CST FINE Commit:id = 4 numWriteLocks=1 numReadLocks = 0
    2009-03-05 12:53:30:744:CST FINEST size interval=0 lastCkpt=0x0/0xe32 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:746:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0xb61 newLnLsn=0x0/0x1166
    2009-03-05 12:53:30:747:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0xc3a newLnLsn=0x0/0x11e9
    2009-03-05 12:53:30:748:CST FINER Ins: bin=8 ln=17 lnLsn=0x0/0x126c index=0
    2009-03-05 12:53:30:748:CST CONFIG Checkpoint 3: source=api success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:750:CST FINEST Database.close: name=test
    2009-03-05 12:53:30:751:CST FINE Close of environment . started
    2009-03-05 12:53:30:751:CST FINEST size interval=0 lastCkpt=0x0/0x1363 time interval=0 force=true runnable=true
    2009-03-05 12:53:30:754:CST FINER Mod: bin=5 ln=4 lnIdx=0 oldLnLsn=0x0/0x1166 newLnLsn=0x0/0x14f8
    2009-03-05 12:53:30:755:CST FINER Mod: bin=5 ln=11 lnIdx=1 oldLnLsn=0x0/0x11e9 newLnLsn=0x0/0x15a9
    2009-03-05 12:53:30:756:CST FINER Ins: bin=8 ln=18 lnLsn=0x0/0x16ab index=0
    2009-03-05 12:53:30:757:CST CONFIG Checkpoint 4: source=close success=true nFullINFlushThisRun=4 nDeltaINFlushThisRun=0
    2009-03-05 12:53:30:757:CST FINE About to shutdown daemons for Env .

    Hi,
    OS X, being Unix-like, probably isn't actually deleting file 00000000.jdb since JE still has it open -- the file deletion is deferred until it is closed. JE keeps N files open, where N is configurable.
    We do corruption testing ourselves; the following test works by overwriting a file and then attempting to read back the entire database:
    test/com/sleepycat/je/util/DbScavengerTest.java
    --mark
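
    Mark's point about deferred deletion is easy to demonstrate outside JE; a minimal shell sketch showing that an unlinked-but-open file keeps its data until the last handle is closed:

    echo "some data" > demo.jdb
    tail -f demo.jdb &                 # a process holds the file open
    rm demo.jdb                        # directory entry gone, inode still live
    lsof -p $! | grep demo.jdb         # on Linux the path shows as (deleted)
    df .                               # the disk space is not reclaimed yet
    kill %1                            # once tail exits, the data is gone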

  • Bottleneck when switching the redo log files.

    Hello All,
    I am using Oracle 11.2.0.3.
    The application team reported that they are facing slowness at certain time.
    I monitored the database and found that at some redo log file switches (not always) there is slowness at the application level.
    I have 2 threads since my database is RAC; each thread has 3 redo log groups multiplexed to the FRA, 300 MB each.
    Is there any way to optimize the switching of the redo log files, given that my database is running in ARCHIVELOG mode?
    Regards,

    Hello Nikolay,
    Thanks for your input. I am sharing the information below; I have 2 instances, so I will provide the info from each.
    Instance 1:
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                4.9                0.0       0.00       0.00
           DB CPU(s):                1.1                0.0       0.00       0.00
           Redo size:        3,014,876.2            3,660.4
       Logical reads:           32,619.3               39.6
       Block changes:            7,969.0                9.7
      Physical reads:                0.2                0.0
    Physical writes:              164.0                0.2
          User calls:            7,955.4                9.7
              Parses:              288.9                0.4
         Hard parses:               96.0                0.1
    W/A MB processed:                0.2                0.0
              Logons:                0.9                0.0
            Executes:            2,909.4                3.5
           Rollbacks:                0.0                0.0            
    Instance 2:
    Load Profile              Per Second    Per Transaction   Per Exec   Per Call
    ~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
          DB Time(s):                5.5                0.0       0.00       0.00
           DB CPU(s):                1.4                0.0       0.00       0.00
           Redo size:        3,527,737.9            3,705.7
       Logical reads:           29,916.5               31.4
       Block changes:            8,893.7                9.3
      Physical reads:                0.2                0.0
    Physical writes:              194.0                0.2
          User calls:            7,742.8                8.1
              Parses:              262.7                0.3
         Hard parses:               99.5                0.1
    W/A MB processed:                0.4                0.0
              Logons:                1.0                0.0
            Executes:            2,822.5                3.0
           Rollbacks:                0.0                0.0
        Transactions:              952.0
    Instance 1:
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                            1,043          21.5
    log file sync                       815,334         915      1   18.9 Commit
    gc buffer busy acquire              323,759         600      2   12.4 Cluster
    gc current block busy               215,132         585      3   12.1 Cluster
    enq: TX - row lock contention        23,284         264     11    5.5 Applicatio
    Instance 2:
    Top 5 Timed Foreground Events
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                               Avg
                                                              wait   % DB
    Event                                 Waits     Time(s)   (ms)   time Wait Class
    DB CPU                                            1,340          24.9
    log file sync                       942,962       1,125      1   20.9 Commit
    gc buffer busy acquire              377,812         594      2   11.0 Cluster
    gc current block busy               211,270         488      2    9.1 Cluster
    enq: TX - row lock contention        30,094         299     10    5.5 Applicatio
    Instance 1:
    Operating System Statistics        Snaps: 1016-1017
    -> *TIME statistic values are diffed.
       All others display actual values.  End Value is displayed if different
    -> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
    Statistic                                  Value        End Value
    AVG_BUSY_TIME                             17,451
    AVG_IDLE_TIME                             81,268
    AVG_IOWAIT_TIME                                1
    AVG_SYS_TIME                               6,854
    AVG_USER_TIME                             10,548
    BUSY_TIME                                420,031
    IDLE_TIME                              1,951,741
    IOWAIT_TIME                                  288
    SYS_TIME                                 165,709
    USER_TIME                                254,322
    LOAD                                           3                6
    OS_CPU_WAIT_TIME                         523,000
    RSRC_MGR_CPU_WAIT_TIME                         0
    VM_IN_BYTES                              311,280
    VM_OUT_BYTES                          75,862,008
    PHYSICAL_MEMORY_BYTES             62,813,896,704
    NUM_CPUS                                      24
    NUM_CPU_CORES                                  6
    NUM_LCPUS                                     24
    NUM_VCPUS                                      6
    GLOBAL_RECEIVE_SIZE_MAX                4,194,304
    GLOBAL_SEND_SIZE_MAX                   4,194,304
    TCP_RECEIVE_SIZE_DEFAULT                  16,384
    TCP_RECEIVE_SIZE_MAX      9.2233720368547758E+18
    TCP_RECEIVE_SIZE_MIN                       4,096
    TCP_SEND_SIZE_DEFAULT                     16,384
    TCP_SEND_SIZE_MAX         9.2233720368547758E+18
    TCP_SEND_SIZE_MIN                          4,096
    Operating System Statistics - Detail  Snaps: 1016-1017
    Snap Time           Load    %busy    %user     %sys    %idle  %iowait
    22-Aug 11:33:55      2.7      N/A      N/A      N/A      N/A      N/A
    22-Aug 11:50:23      6.2     17.7     10.7      7.0     82.3      0.0
    Instance 2:
    Operating System Statistics         Snaps: 1016-1017
    -> *TIME statistic values are diffed.
       All others display actual values.  End Value is displayed if different
    -> ordered by statistic type (CPU Use, Virtual Memory, Hardware Config), Name
    Statistic                                  Value        End Value
    AVG_BUSY_TIME                             11,823
    AVG_IDLE_TIME                             86,923
    AVG_IOWAIT_TIME                                0
    AVG_SYS_TIME                               4,791
    AVG_USER_TIME                              6,991
    BUSY_TIME                                475,210
    IDLE_TIME                              3,479,382
    IOWAIT_TIME                                  410
    SYS_TIME                                 193,602
    USER_TIME                                281,608
    LOAD                                           3                6
    OS_CPU_WAIT_TIME                         615,400
    RSRC_MGR_CPU_WAIT_TIME                         0
    VM_IN_BYTES                               16,360
    VM_OUT_BYTES                          72,699,920
    PHYSICAL_MEMORY_BYTES             62,813,896,704
    NUM_CPUS                                      40
    NUM_CPU_CORES                                 10
    NUM_LCPUS                                     40
    NUM_VCPUS                                     10
    GLOBAL_RECEIVE_SIZE_MAX                4,194,304
    GLOBAL_SEND_SIZE_MAX                   4,194,304
    TCP_RECEIVE_SIZE_DEFAULT                  16,384
    TCP_RECEIVE_SIZE_MAX      9.2233720368547758E+18
    TCP_RECEIVE_SIZE_MIN                       4,096
    TCP_SEND_SIZE_DEFAULT                     16,384
    TCP_SEND_SIZE_MAX         9.2233720368547758E+18
    TCP_SEND_SIZE_MIN                          4,096
    Operating System Statistics - Detail Snaps: 1016-1017
    Snap Time           Load    %busy    %user     %sys    %idle  %iowait
    22-Aug 11:33:55      2.6      N/A      N/A      N/A      N/A      N/A
    22-Aug 11:50:23      5.6     12.0      7.1      4.9     88.0      0.0
              -------------------------------------------------------------
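
    One thing worth checking alongside the AWR data above is how often each thread actually switches logs; a rough sketch run as SYSDBA (the two-day window is an assumption):

    # Redo log switches per hour, per thread, over the last 2 days.
    echo "SELECT thread#, TO_CHAR(first_time,'YYYY-MM-DD HH24') hr, COUNT(*) switches FROM v\$log_history WHERE first_time > SYSDATE-2 GROUP BY thread#, TO_CHAR(first_time,'YYYY-MM-DD HH24') ORDER BY 2,1;" | sqlplus -s "/ as sysdba"

    As a general rule of thumb, if a thread switches many times per hour under load, larger redo log groups (several hundred MB up to a few GB) reduce the switch overhead.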

  • Data Manager log file deletion

    I believe this program (UJF_FILE_SERVICE_DLT_DM_FILES) can be used to remove log files. A couple of questions on this process:
    1) Can the DM log be deleted from the front end rather than running the program? Are there any issues with deleting the log file from the BPC front end?
    2) Is there a transaction code associated with this program? If yes, do you know what the transaction code is?
    3) Can this be scheduled so that it deletes everything except the logs created in the last 2 to 4 months?

    First, the program you mentioned does not delete DM log files; it deletes only the DM data files. There is no transaction code for that program, and you can delete these files from the front end with no problem.
    If you want to delete DM log files, you can use the program delivered with this how-to guide:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/a0e72699-6c32-2d10-5d97-a4db8321ef67
    The only problem is that this program does not handle deleting the generated folders associated with the log files.
    Cheers,
    Rich Heilman

  • When to delete Archive Log files ? and is safe to delete ?

    Hi all,
    I have a question: my DB is running in archive log mode and the archive destination is growing day by day. Should I delete archive log files? Where do I find which files are old, and how do I delete these files? For info, I also take full RMAN and export backups regularly.
    abbas

    Well, if you're not comfortable with backup and recovery principles, I'd advise you to first read the manual: Backup and Recovery Concepts (click), for your data safety's sake.
    Anyway, just to give you a glance at what could happen, imagine a situation where your database is made of 5 different datafiles (whatever they are - SYSTEM, SYSAUX, UNDO, ...), controlfiles, and online redo logs. If, for example, your backup is done on a timeline such as:
    T0 : Backup datafile F1 and F2
    T1 : Backup datafile F2 and controlfile
    T2 : Backup datafile F3, F4
    T3 : Backup datafile F5
    T0' : Backup datafile F1 and F2
    T1' : Backup datafile F2 and controlfile
    T2' : Backup datafile F3, F4
    T3' : Backup datafile F5
    ...If your server crashes (for example, it is thrown through the 5th floor window...), then in order to make a full recovery you could proceed in multiple ways:
    . Restore files from the Tx' backups. Then you'd need every archived redo log created from T0' to now in order to do a full recovery.
    . Restore files from backups T3, T0', T2' and recreate the controlfile. Then you'd need every archived redo log created from T3 (inclusive).
    . Restore Tx backup, ...
    . Restore ...
    and so on.
    What's important is that you're at least able to resynchronize a full backup set. What I mean is: if you restored "T3, T0', T2' and recreate the controlfile", then in order to open the database you'll absolutely need every archived redo log created between T3 and T2' inclusive. If one is missing, you won't be able to open it.
    That is the principle. Read the concepts book I linked to get more info.
    Regards,
    Yoann.
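
    To make the principle above concrete: the safe way to delete archived logs is through RMAN, so the repository knows what has been backed up and what is still needed. A minimal sketch, where the 7-day window is an assumption you should adapt to your backup cycle:

    # Make the RMAN repository agree with what is actually on disk, then
    # delete archived logs older than 7 days (window is an assumption --
    # keep it longer than the gap between your full backups).
    echo "CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';" | rman target /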

  • Delete concurrent manager log files

    Hi,
    Can I manually delete concurrent log files ($APPLCSF)? Please tell me the command.
    Keeping just the last 3 days of log files is enough, and the same for the out files.
    How can I remove these log and out files for this specific requirement on a Linux platform?
    regards
    dba

    What problem could occur if I delete log files at the OS level?
    There should be no issues, but as stated above this can be done by the purge concurrent request, so why bother doing it manually?
    If you delete log/out files manually from the OS for requests which have not been purged from the tables, then you will not be able to access the log/out file of those requests from the application.
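
    For the "keep only the last 3 days" requirement, a hedged sketch (run as the applmgr user; the file name patterns and the $APPLCSF/$APPLLOG and $APPLCSF/$APPLOUT locations are assumptions -- preview with -print before switching to -delete):

    # Preview request log and output files older than 3 days.
    find $APPLCSF/$APPLLOG -name 'l*.req' -mtime +3 -print
    find $APPLCSF/$APPLOUT -name 'o*.out' -mtime +3 -print
    # Once the list looks right, delete them:
    find $APPLCSF/$APPLLOG -name 'l*.req' -mtime +3 -delete
    find $APPLCSF/$APPLOUT -name 'o*.out' -mtime +3 -delete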

  • Concurrent Manager log file

    Hi:
    Does each start create a SID_0217.mgr file? And is this the only SID_0217.mgr file until the next bounce?
    In the file there are lines like the ones below. Is one line for one concurrent request? The system is very slow, but I don't see any error in the file.
    Process monitor session ended : 24-FEB-2011 14:50:37
    Process monitor session started : 24-FEB-2011 14:52:37
    Process monitor session ended : 24-FEB-2011 14:52:38
    I see hundreds of wxxxxx.mgr files. What are they?
    Thank you for your help.
    For example, on the request screen, when I click Refresh Data it takes a long time...
    Edited by: user9231603 on Feb 24, 2011 12:01 PM

    Does each start create a SID_0217.mgr file? And is this the only SID_0217.mgr file until the next bounce?
    Not necessarily.
    In the file there are lines like the ones below. Is one line for one concurrent request? The system is very slow, but I don't see any error in the file.
    Process monitor session ended : 24-FEB-2011 14:50:37
    Process monitor session started : 24-FEB-2011 14:52:37
    Process monitor session ended : 24-FEB-2011 14:52:38
    See (Concurrent Manager Questions and Answers Relating to Generic Platform [ID 105133.1] -- I hit the Restart button to start the Standard manager, but it still did not start?).
    I see hundreds of wxxxxx.mgr files. What are they?
    See these docs.
    Basic Troubleshooting of the Concurrent Managers on UNIX [ID 2069781.6] -- Where do all the files generated by the concurrent managers go?
    Concurrent Manager Questions and Answers Relating to Generic Platform [ID 105133.1] -- What are the logfile and output file naming conventions?
    Thanks,
    Hussein

  • Crash when trying to view log files

    So I noticed my system time had stopped syncing and wanted to find out why. I went into KSystemLog and tried to view errors.log. Then my system crashed. I killed all processes with the magic SysRq key and ended up in a tty. I tried to view some logs there too using vi, but nothing appeared and I had to kill all processes again. Back in KDE I checked my system monitor, tried $ vi /var/log/errors.log, and noticed that vi eats up all the CPU and RAM when doing this. I also tried with nano and it did the same. Luckily I was able to kill them before I ran out of memory.
    The irony of this: I can't find anything on Google under "system log crash", and I can't get into my logs to find out what is going on. The only thing I've changed lately is backgrounding "crond fam & kdm".
    Any ideas, fellas?
    Oh, I'm using arch-64 with KDEmod 4.2.
    Last edited by Mountainjew (2009-03-28 21:22:37)

    Try a virtual console terminal.
    Let's see what is there at all: ls -l /var/log
    You'll probably need to go root: su -
    Then use less to review the error log in question.
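
    To expand on that a bit: vi and nano try to load the whole file into memory, which is what ate the CPU and RAM; streaming tools are safer for huge (or binary-filled) logs. A small sketch:

    ls -lh /var/log/errors.log          # check the size before opening anything
    tail -n 100 /var/log/errors.log     # just the most recent entries
    less /var/log/errors.log            # pages from disk instead of loading it all
    strings /var/log/errors.log | tail  # readable text only, if the log contains binary junk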

  • Log Files out of Control - How to manage size?

    This is a three-part question:
    1) Our Apache2 error log has grown to 41 GB!!! How can we clear it?
    2) Is there a way to limit log file growth?
    3) Is there an application to manage log files on a server?
    We are running Leopard Server 10.5.x.
    Thanks!

    1) How do we set up apache to rotate logs? I was checking server admin->web service for configuration options, but didn't see any (we did advanced server configuration).
    It's automatic, and AFAIK enabled by default within Mac OS X Server. If you're piling up stuff in your logs, then your server is either very busy, or there are issues or problems being reported in the logs.
    2) Where in server admin?
    Server Admin > select server > Web > Sites > Logging
    Or as an alternative approach toward learning more about Mac OS X Server and its technologies, download the PDF of the relevant [Apple manual|http://www.apple.com/server/macosx/resources/documentation.html]. Here, you can brute-force search the manual in the Preview tool. Depending on how you best learn, you can read through the various manuals for details on how to configure and operate and troubleshoot the various components, and (for more detail than is available in the Mac OS X Server manuals) for pointers to the component-specific web sites and documents, too.
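
    And if the 41 GB error log needs to go right now, truncating it in place avoids restarting Apache; a sketch, where the path is an assumption (check the ErrorLog directive in your site's config for the real one):

    # Truncate the oversized log in place; Apache keeps writing to the same handle.
    sudo sh -c '> /var/log/apache2/error_log'
    # Growth can also be capped by piping through rotatelogs in the Apache config, e.g.:
    #   ErrorLog "|/usr/sbin/rotatelogs /var/log/apache2/error_log.%Y%m%d 86400"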

  • MS_SQL Shrinking Log Files.

    Hi Experts,
    We have checked the documentation received from SAP (SBO customer portal), based on the EarlyWatch Alert.
    As per the SAP request, we have minimized the size of the test database log file through MS SQL Management Studio (restricted file growth to 10 percent, size 10 MB).
    Initially it was 50 percent, 1000 MB.
    My doubt is:
    Will any problem occur in the future because of this change to the log files?
    Kindly help me.
    Based on your reply...
    I will update the live production database....
    By
    kart

    The risk of shrinking the log file is fairly small. Current hardware and software have much better reliability than before. When you shrink your log file, you just lose some history that nobody even knows has any value.
    On the contrary, if you keep a very large log file, it may cause more trouble than good.
    Thanks,
    Gordon
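
    For reference, the shrink itself is a one-liner; a sketch via sqlcmd, where the server, database, and logical log file names are assumptions (verify them first with sp_helpfile):

    # List the database's files to get the logical log file name (names are assumptions).
    sqlcmd -S localhost -Q "USE TestDB; EXEC sp_helpfile;"
    # Shrink the transaction log down to roughly 10 MB.
    sqlcmd -S localhost -Q "USE TestDB; DBCC SHRINKFILE (TestDB_log, 10);"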
