About Log file

hi folks,
JDev 11.1.1.5.0 - ADF BC
I configured the log file of the integrated WLS (10.3.5) to be written into my project folder, but I feel it adds some amount of time to every run of the WLS.
For every new run of the integrated WLS, a new log file is created in my project folder.
Am I on the right track, or would it be better to un-configure that log file?
The reason I ask is that log files are created in two places:
c:\some users\system folder\ default server\ default Domain log.
D:\jdev\my work\my project.
Does creating files in two places take a little extra time? I feel that it does;
before configuring this, execution looked better :)

Not sure what you are after, but you can use log levels to print more info into the log. These log messages only show up in the log if their level is equal to or higher than the current log level. You change the level when you get an error you can't otherwise find or resolve.
You'll find this in the blog John mentioned (in the other parts).
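As a minimal illustration of that level check, here is a sketch using plain java.util.logging rather than the ADF-specific logger (the logger name and the chosen levels are made up):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class LogLevelDemo {
        public static void main(String[] args) {
            Logger log = Logger.getLogger("my.app");
            log.setLevel(Level.INFO);
            log.fine("dropped: FINE is below the current level INFO");
            log.info("logged: INFO is equal to the current level");
            log.warning("logged: WARNING is higher than the current level");
        }
    }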
Timo

Similar Messages

  • Log file sync vs log file parallel write probably not bug 2669566

    This is a continuation of a previous thread about ‘log file sync’ and ‘log file parallel write’ events.
    Version : 9.2.0.8
    Platform : Solaris
    Application : Oracle Apps
    The number of commits per second ranges between 10 and 30.
    When querying statspack performance data the calculated average wait time on the event ‘log file sync’ is on average 10 times the wait time for the ‘log file parallel write’ event.
    Below just 2 samples where the ratio is even about 20.
    "snap_time"     " log file parallel write avg"     "log file sync avg"     "ratio
    11/05/2008 10:38:26      8,142     156,343     19.20
    11/05/2008 10:08:23     8,434     201,915     23.94
    So the wait time for a ‘log file sync’ is 10 times the wait time for a ‘log file parallel write’.
    First I thought that I was hitting bug 2669566.
    But then Jonathan Lewis's blog pointed me to Tanel Poder's snapper tool.
    And I think that it proves that I am NOT hitting this bug.
    Below is a sample of the output for the log writer.
    -- End of snap 3
    HEAD,SID, SNAPSHOT START ,SECONDS,TYPE,STATISTIC , DELTA, DELTA/SEC, HDELTA, HDELTA/SEC
    DATA, 4, 20081105 10:35:41, 30, STAT, messages sent , 1712, 57, 1.71k, 57.07
    DATA, 4, 20081105 10:35:41, 30, STAT, messages received , 866, 29, 866, 28.87
    DATA, 4, 20081105 10:35:41, 30, STAT, background timeouts , 10, 0, 10, .33
    DATA, 4, 20081105 10:35:41, 30, STAT, redo wastage , 212820, 7094, 212.82k, 7.09k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writer latching time , 2, 0, 2, .07
    DATA, 4, 20081105 10:35:41, 30, STAT, redo writes , 867, 29, 867, 28.9
    DATA, 4, 20081105 10:35:41, 30, STAT, redo blocks written , 33805, 1127, 33.81k, 1.13k
    DATA, 4, 20081105 10:35:41, 30, STAT, redo write time , 652, 22, 652, 21.73
    DATA, 4, 20081105 10:35:41, 30, WAIT, rdbms ipc message ,23431084, 781036, 23.43s, 781.04ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, log file parallel write , 6312957, 210432, 6.31s, 210.43ms
    DATA, 4, 20081105 10:35:41, 30, WAIT, LGWR wait for redo copy , 18749, 625, 18.75ms, 624.97us
    When adding up the DELTA/SEC values (which are in microseconds) for the wait events, they always roughly add up to a million microseconds.
    In the example above, 781036 + 210432 = 991468 microseconds.
    This is the case for all the snaps taken by snapper.
    So I think that the wait time for the ‘log file parallel write time’ must be more or less correct.
    So I still have the question “Why is the ‘log file sync’ about 10 times the time of the ‘log file parallel write’?”
    Any clues?
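    For a quick sanity check of the per-wait averages outside STATSPACK, a minimal query against v$system_event might look like this (cumulative since instance startup; the view and columns are standard, the arithmetic is mine):
    select event
    , total_waits
    , time_waited_micro
    , round(time_waited_micro / nullif(total_waits, 0)) avg_wait_us
    from v$system_event
    where event in ('log file sync', 'log file parallel write');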

    Yes that is true!
    But that is the way I calculate the average wait time = total wait time / total waits
    So the average wait time for the event 'log file sync' per wait should be near the wait time for the 'log file parallel write' event.
    I use the query below:
    select snap_id
    , snap_time
    , event
    , time_waited_micro
    , (time_waited_micro - p_time_waited_micro)/((snap_time - p_snap_time) * 24) corrected_wait_time_h
    , total_waits
    , (total_waits - p_total_waits)/((snap_time - p_snap_time) * 24) corrected_waits_h
    , trunc((time_waited_micro - p_time_waited_micro)/(total_waits - p_total_waits)) average
    from (
    select sn.snap_id, sn.snap_time, se.event, se.time_waited_micro, se.total_waits,
    lag(sn.snap_id) over (partition by se.event order by sn.snap_id) p_snap_id,
    lag(sn.snap_time) over (partition by se.event order by sn.snap_id) p_snap_time,
    lag(se.time_waited_micro) over (partition by se.event order by sn.snap_id) p_time_waited_micro,
    lag(se.total_waits) over (partition by se.event order by sn.snap_id) p_total_waits,
    row_number() over (partition by se.event order by sn.snap_id) r
    from perfstat.stats$system_event se, perfstat.stats$snapshot sn
    where se.SNAP_ID = sn.SNAP_ID
    and se.EVENT = 'log file sync'
    )
    where time_waited_micro - p_time_waited_micro > 0
    order by snap_id desc;

  • Where to find BIP log file in version 10.3.4.1

    OBIEE 10.3.4.1 on Linux. I have set the Server Configuration option Debug Level=Debug, but where can I find the error log file? Often I see the error "The report cannot be rendered because of an error, please contact the administrator." when configuring a report; where can I find the error log?
    I searched the forum and found a post about the log file location for 11g: BIEE_HOME\user_projects\domains\bifoundation_domain\servers\bi_server1\logs\bipublisher
    But that is not the case in 10.3.4.1.
    Thank you for your help.

    Hi,
    Please take a look at this post:
    http://bipconsulting.blogspot.com/2010/01/bi-publisher-logging-debugging-part-3.html
    If your question has been answered then please grant the points and close the thread
    thanks
    Jorge

  • How can I write my Adobe AIR application tracing lines into a log file

    I have a question about log files for an AIR application.
    How can I make my application write all tracing and exceptions into a log file?

    I think if you publish a -debug SWF it will log to flashlog.txt

  • Problem about space management of archived log files

    Dear friends,
    I have a problem about space management of archived log files.
    my database is Oracle 10g Release 1 running in archivelog mode. I use OEM (web based) to configure all the backup and recovery settings.
    I configured the "Flash Recovery Area" to do backup and recovery automatically. my daily backup schedule is every night at 2:00am, and my backup setting is "disk settings"--"compressed backup set". the following is the RMAN script:
    Daily Script:
    run {
    allocate channel oem_disk_backup device type disk;
    recover copy of database with tag 'ORA$OEM_LEVEL_0';
    backup incremental level 1 cumulative copies=1 for recover of copy with tag 'ORA$OEM_LEVEL_0' database;
    }
    the retention policy is the second choice, that is "Retain backups that are necessary for a recovery to any time within the specified number of days (point-in-time recovery)". the recovery window is 1 day.
    I assign enough space for flash recovery area. my database size is about 2G. I assign 20G as flash recovery area.
    now here is the problem: according to the Oracle online manual, Oracle can manage the flash recovery area automatically, that is, when the space is full it can delete the obsolete archived log files. but in fact it never works! whenever the space is full, the database hangs! besides, the status of the archived log files is very strange; for example, the "obsolete" status can change from "yes" to "no", and then from "no" to "yes". I really have no idea about this! even though I know Oracle usually keeps archived files somewhat longer than the retention policy requires, I really don't know why the obsolete status can change automatically. although I could write a scheduled job to delete obsolete archived files every day, I just want to know the reason. my goal is to back up all the files on disk and let Oracle manage them automatically.
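    To see what Oracle itself reports about recovery area space and what it considers reclaimable, a quick check against the 10g views (standard views; a sketch, output obviously varies per site):
    select space_limit, space_used, space_reclaimable, number_of_files
    from v$recovery_file_dest;
    select file_type, percent_space_used, percent_space_reclaimable
    from v$flash_recovery_area_usage;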
    also, there is another problem about archive mode. I have two Oracle 10g databases (Release 1); the size of db1 is more than 20G, the size of db2 is about 2G. both of them have the same backup and recovery policy, except that I assign more flash recovery area to db1. both of them are in archive mode. nearly nobody accesses either of them except for the scheduled backup job, and sometimes I administer them through OEM. the strange thing is that the number of archived log files of the smaller database, db2, is much bigger than that of the bigger database. the same goes for the size of the flashback logs for point-in-time recovery. (I enable flashback logging for fast database point-in-time recovery; the flashback retention time is 24 hours.) I found the memory utilization of the smaller database is higher than that of the bigger database. nearly all the time the smaller database's memory utilization stays above 99%, while the bigger one's stays around 97%. (I enable "Automatic Shared Memory Management" on both databases.) but both databases' CPU and queue are very low. I'm nearly sure no one hacked the databases. so I really have no idea why the same backup and recovery policy gives such different results, especially why the smaller database produces more redo logs than the bigger one. does anyone happen to know the reason, or how should I check for it?
    by the way, I found the web based OEM can't reflect the correct database status when the database shuts down abnormally. for example, if the database hangs because the flash recovery area is full, after I assign more flash recovery area space and restart the database, OEM usually can't reflect the correct database status. I must restart OEM manually for it to correctly reflect the current database status. does anyone know in what situations I should restart OEM to reflect the correct database status?
    sorry for the long message; I just wanted to describe things in detail to ease diagnosis.
    any hint will be greatly appreciated!
    Sammy

    thank you very much. in fact, my site's Oracle never manages the archived files automatically, although I have tried my best. in the end I made a job that runs daily to check the archived files and delete them.
    thanks again.
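    For reference, a hedged sketch of such a daily RMAN cleanup job (it assumes backups are current and the retention policy is configured as described above):
    CROSSCHECK ARCHIVELOG ALL;
    DELETE NOPROMPT OBSOLETE;
    DELETE NOPROMPT ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;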

  • I'm a bit confused about standby log files

    Hi all,
    I'm a bit confused about something and wondering if someone can explain.
    I have a Primary database that ships logs to a Logical Standby database.
    Everything appears to be working properly. If I check the v$archived_log table in the Primary and compare it to the dba_logstdby_log view in the Logical Standby, I'm seeing that logs are being applied.
    On the logical standby, I have the following configured for log_archive_dest_n parameters:
    *.log_archive_dest_1='LOCATION=/u01/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_2='LOCATION=/u02/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_3='LOCATION=/u03/oracle/archivedlogs/ORADB1
    VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_4='SERVICE=PNX8A_WDC ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PNX8A_WDC'
    *.log_archive_dest_5='LOCATION=/u01/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_6='LOCATION=/u02/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    *.log_archive_dest_7='LOCATION=/u03/oracle/standbylogs/ORADB1
    VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=PNX8A_GMD'
    Here is my confusion now. Before converting from a Physical standby database to a Logical Standby database, I was under the impression that I needed the standby logs (i.e. log_archive_dest_5, 6 and 7 above) because a Physical Standby database would receive the redo from the primary and write it into the standby logs before applying the redo in the standby logs to the Physical standby database.
    I've now converted to a Logical Standby database. What's happening is that the standby logs are accumulating in the directory pointed to by log_archive_dest_6 above (/u02/oracle/standbylogs/ORADB1). They do not appear to be getting cleaned up by the database.
    In the Logical Standby database I do have STANDBY_FILE_MANAGEMENT parameter set to AUTO. Can anyone explain to me why standby log files would continue to accumulate and how I can get the Logical Standby database to remove them after they are no longer needed on the LSB db?
    Thanks in advance.
    John S

    JSebastian wrote:
    I assume you mean in your question: why, on the standby database, am I using three standby log locations (i.e. log_archive_dest_5, 6, and 7)?
    If that is your question, my answer is that I just figured more than one location would be safer, but I could be wrong about this. Can you tell me if only one location should be sufficient for the standby logs? The more I think about it, that is probably correct, because I assume that Log Transport services will re-request the log from the Primary database if there is some kind of error at the standby location with the standby log. Is this correct?
    Configure it as simply as shown below. Why use multiple destinations for the standby?
    check notes Step by Step Guide on How to Create Logical Standby [ID 738643.1]
    >
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=boston'
    LOG_ARCHIVE_DEST_2='SERVICE=chicago LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'
    LOG_ARCHIVE_DEST_3='LOCATION=/arch2/boston/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=boston'
    The following table describes the archival processing defined by the initialization parameters shown in Example 4-2.
    LOG_ARCHIVE_DEST_1
      - When boston runs in the primary role: directs archival of redo data generated by the primary database from the local online redo log files to the local archived redo log files in /arch1/boston/.
      - When boston runs in the logical standby role: directs archival of redo data generated by the logical standby database from the local online redo log files to the local archived redo log files in /arch1/boston/.
    LOG_ARCHIVE_DEST_2
      - When boston runs in the primary role: directs transmission of redo data to the remote logical standby database chicago.
      - When boston runs in the logical standby role: is ignored; LOG_ARCHIVE_DEST_2 is valid only when boston is running in the primary role.
    LOG_ARCHIVE_DEST_3
      - When boston runs in the primary role: is ignored; LOG_ARCHIVE_DEST_3 is valid only when boston is running in the standby role.
      - When boston runs in the logical standby role: directs archival of redo data received from the primary database to the local archived redo log files in /arch2/boston/.
    >
    Source:-
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ls.htm
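    On the original question of cleanup: depending on the release, automatic deletion of applied foreign archived logs on a logical standby is controlled through DBMS_LOGSTDBY.APPLY_SET. Treat this as a hedged sketch and verify the parameter against your version's Data Guard documentation:
    -- assumption: the LOG_AUTO_DELETE apply parameter exists in your release (10.2 onward)
    EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', 'TRUE');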

  • Confused about the log files

    I have written an application that has a Primary and Secondary database. The application creates tens-of-thousands of records in the Primary database, with a 1-to-1 relationship in the Secondary. On subsequent runs it will either update existing Primary records (which should not update the secondary as that element does not change) or it will create new records.
    The application actually works correctly, with the right data, the right updates and the right logical processing. The problem is the log files.
    The input data I am testing with is originally 2Mb as a CSV file, and with a fresh database it creates almost 20Mb of data. This is about right for the way it splits the information up and indexes it. If I run the application again with exactly the same data, it should just update all the entries and create nothing new. My understanding is that the updated records will be written to the end of the logs, and the old ones in the earlier logs would become redundant and the cleaner thread would clean them up. I am explicitly cleaning as per the examples. The issue is that on each run, the data just doubles in size! Logically it is fine; physically it is taking a ridiculous amount of space. Running DbSpace shows that the logs are mostly full (over 90%) where I would expect most to be empty, or sparsely occupied as the new updates are written to new files. cleanLog() does nothing. I am at a total loss!
    Generally the processing I am doing on the primary is looking up the key, if it is there updating the entry, if not creating one. I have been using a cursor to do this, and using the putCurrent() method for existing updates, and put() for new records. I have even tried using Database.delete() and the full put() in place of putCurrent() - but no difference (except it is slower).
    Please help - it is driving me nuts!

    Let me provide a little more context for the questions I was asking. If this doesn't lead us further into understanding your log situation, perhaps we should take this offline. When log cleaning doesn't occur, the basic questions are:
    a. is the application doing anything that prohibits log cleaning? (in your case, no)
    b. has the utilization level fallen to the point where log cleaning should occur? (not on the second run, but it should on following runs)
    c. does the log utilization level match what the application expects? (no, it doesn't match what you expect).
    1) Ran DbDump with and without -r. I am expecting the data to stay consistent. So, after the first run it creates the data, and leaves 20mb in place, 3 log files near 100% used. After the second run it should update the records (which it does from the application's point of view) but I now have 40mb across 5 log files all near 100% usage.
    I think that it's accurate to say that both of us are not surprised that the second run (which updates data but does not change the number of records) creates a second 20MB of log, for a total of 40MB. What we do expect, though, is that the utilization reported by DbSpace should fall closer to 50%. Note that since JE's default minimum utilization level is 50%, we don't expect any automatic log cleaning even after the second run.
    Here's the sort of behavior we'd expect from JE if all the basics are taken care of (there are enough log files, there are no open txns, the application stays up long enough for the daemon to run, or the application does batch cleanLog calls itself, etc).
    run 1 - creates 20MB of log file, near 100% utilization, no log cleaning
    run 2 - updates every record, creates another 20MB of log file, utilization falls, maybe to around 60%. No log cleaning yet, because the utilization is still above the 50% threshold.
    run 3 - updates every record, creates another 20MB of log file, utilization falls below 50%, log cleaning starts running, either in the background by the daemon thread, or because the app calls Environment.cleanLog(), without any need to set je.cleaner.forceCleanFiles.
    So the question here is (c) from above -- you're saying that your DbSpace utilization level doesn't match what you believe your application is doing. There are three possible answers -- your application has a bug :-), or with secondaries and whatnot, JE is representing your data in a fashion you didn't expect, or JE's disk space utilization calculation is inaccurate.
    I suggested using DbDump -r as a first sanity check of what data your application holds. It will dump all the valid records in the environment (with -r the dump is not in key order; without -r the dump is slower, but is in key order). Keys and data show up on different lines, so the number of lines in the dump files should be twice the number of records in the environment. You've done this already in your application, but this is an independent way of checking. It also makes it easier to see what portion of data is in primary versus secondary databases, because the data is dumped into per-database files. You could also load the data into a new, blank environment to look at it.
    I think I asked you about the size of your records because a customer recently reported a JE disk utilization bug, which we are currently working on. It turns out that if your data records are very different in size (in this case, 4 orders of magnitude) and consistently only the larger or the smaller records are made obsolete, the utilization number gets out of whack. It doesn't really sound like your situation, because you're updating all your records, and they don't sound like they're that different in size. But nevertheless, here's a way of looking at what JE thinks your record sizes are. Run this command:
    java -jar je.jar DbPrintLog -h <envhome> -S
    and you'll see some output that talks about different types of log entries, and their sizes. Look at the lines that say LN and LN_TX at the top. These are data records. Do they match the sizes you expect? These lines do include JE's per-record headers. How large that is depends on whether your data is transactional or not. Non-transactional data records have a header of about 35 bytes, whereas transactional data records have 60 bytes added to them. If your data is small, that can be quite a large percentage. This is quite a lot more than for BDB (Core), partly because BDB (Core) doesn't have record level locking, and partly because we store a number of internal fields as 64 bit rather than 16 or 32 bit values.
    The line that's labelled "key/data" shows what portion JE thinks is the application's data. Note that DbPrintLog, unlike DbSpace, doesn't account for obsoleteness, so while you'll see a more detailed picture of what the records look like in the log, you may see more records than you expect.
    A last step we can take is to send you a development version of DbSpace that has a new feature to recalculate the utilization level. It runs more slowly than the vanilla DbSpace, but is a way of double checking the utilization level.
    In my first response, I suggested trying je.cleaner.forceCleanFiles just to make it clear that the cleaner will run, and to see if the problem is really around the question of what the utilization level should be. Setting that property lets the cleaner bypass the utilization trigger. If using it really reduced the size of your logs, it reinforces that your idea of what your application is doing is correct, and casts suspicion on the utilization calculation.
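    For reference, a minimal sketch of the batch cleanLog loop discussed above, using the standard JE calls mentioned in this thread (the wrapper class is illustrative):
    import com.sleepycat.je.CheckpointConfig;
    import com.sleepycat.je.DatabaseException;
    import com.sleepycat.je.Environment;

    public class BatchCleaner {
        // Repeat cleanLog() until a pass cleans no files, then force a
        // checkpoint so the cleaned files become eligible for deletion.
        public static void batchClean(Environment env) throws DatabaseException {
            boolean cleanedAny = false;
            while (env.cleanLog() > 0) {
                cleanedAny = true;
            }
            if (cleanedAny) {
                CheckpointConfig force = new CheckpointConfig();
                force.setForce(true);
                env.checkpoint(force);
            }
        }
    }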
    So in summary, let's try these steps
    - use DbDump and DbPrintLog to double check the amount and size of your application data
    - make a table of runs, that shows the log size in bytes, number of log files, and the utilization level reported by DbSpace
    - run a je.cleaner.forceCleanFiles cleanLog loop on one of the logs that seems to have a high utilization level, and see how much it reduces to, and what the resulting utilization level is
    If it all points to JE, we'll probably take it offline, and ask for your test case.
    Regards,
    Linda

  • Huge system.log file filled with over a million messages about NSATSTypeset

    Hi,
    I just happened to notice the system.log file on my MacBook Pro laptop was absolutely huge - over 1.2 GBytes in just one day. Almost all of it is messages like this:
    Mar 31 12:26:23 macbook quicklookd[228]: <NSATSTypesetter: 0x146b40>: Exception * table 0x1473caa0 has block 0x147b3040 rather than 0x147b4850 at index 7573 raised during typesetting layout manager <NSLayoutManager: 0x124140>\n 1 containers, text backing has 16461 characters\n selected character range {16461, 0} affinity: downstream granularity: character\n marked character range {16461, 0}\n Currently holding 16461 glyphs.\n Glyph tree contents: 16461 characters, 16461 glyphs, 4 nodes, 128 node bytes, 16384 storage bytes, 16512 total bytes, 1.00 bytes per character, 1.00 bytes per glyph\n Layout tree contents: 16461 characters, 16461 glyphs, 7573 laid glyphs, 207 laid line fragments, 3 nodes, 96 node bytes, 13656 storage bytes, 13752 total bytes, 0.84 bytes per character, 0.84 bytes per glyph, 36.58 laid glyphs per laid line fragment, 66.43 bytes per laid line fragment\n, glyph range {7573 0}. Ignoring...
    and there are over 1.3 million of these messages!!!! There is a tiny bit of variation in some of the numbers (but not all). That is just a little bit more than the total number of files I have on the hard drive in the laptop, so it would appear that quicklookd is doing something to each and every file on the computer. Any idea why all of a sudden these messages appear, and why so many? I only have about 7 versions of the system.log files and none of them are even close to this big, but the one thing I did do today that I have not done in a few weeks is reboot my laptop, because of another problem with the laptop screen not waking this morning from being put to sleep last night (it was just black, but the computer was running and I could log in to it from another computer on the LAN it is attached to).
    Any ideas why this is happening, or is this something that always happens on a reboot/boot rather than waking from sleep? Why would quicklookd be printing out so many of these messages that are almost exactly alike???
    I have only had this MacBook for a few weeks, so don't have a good feel for what is normal and what isn't yet.
    THanks...
    -Bob

    Bob,
    Thanks for your further thoughts and the additional information. My guess is that Quick Look does its file processing independently of whether or not or how recently the computer has been rebooted. The NSATSTypesetter messages filling up the log file are almost certainly error messages and should not occur with normal operation of Quick Look. I suspect that your reboot doesn't directly have anything to do with this problem. (It might have indirectly contributed in the sense that either whatever caused the need for the reboot or the reboot process itself corrupted a file, which in turn caused Quick Look to fail and generate all those error messages in the log file.)
    In the meantime I may have a solution for this problem. This morning I rebooted in single user mode and ran AppleJack in manual mode so that I could tell it to clean up all user cache files. (I'd previously downloaded AppleJack application from http://applejack.sourceforge.net/ . To boot in single user mode hold command and s keys at startup chime. ... Run the five AppleJack maintenance tasks in order. The third task will give you the option to enter numbers of users whose cache files will be cleaned. Do this cache cleaning for all users.) In the six hours since I ran AppleJack I've seen exactly two NSATSTypesetter error messages in /var/log/system.log . This compares with hundreds of thousands in the same period yesterday. I just set an iCal alarm to remind me to report back to this discussion thread in two weeks on this issue.
    Best,
    Chris.
    PS: Above you mention 7 log files. Are the older ones of the form system.log.0.bz2 ? If so they have been compressed. Just because they are small doesn't necessarily mean there are not a lot of nearly identical error messages. Uncompress to check. I haven't tried this because large files are very inconvenient to work with on my old iBook.

  • Question about how Oracle manages Redo Log Files

    Good morning,
    Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
    sort of graphically:
        GROUP A             GROUP B
          A1                  B1
          A2                  B2
    The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the Archiver process be temporarily delayed until the disks (that were removed) are brought back online, or is the DBA forced to wait until the Archiver process has finished creating a copy of the redo log file into the archive?
    Thank you for your help,
    John.

    Hello,
    Dropping Log Groups
    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
    * You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    GROUP# ARC STATUS
    ------ --- --------
         1 YES ACTIVE
         2 NO  CURRENT
         3 YES INACTIVE
         4 YES INACTIVE
    Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
    When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
    When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
    Your database won't be affected as long as at least two groups remain. The minimum number of redo log groups is two, because the LGWR (log writer) process writes to the redo log files in a circular manner; with only two groups, dropping one would hang the instance. So if you want to take one group offline, first add a third group, force a log switch so the new group becomes current, and then drop the group you want to remove.
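    A hedged sketch of that sequence (the group number, file names and size are made up for illustration):
    -- add a new group first, so at least two groups always remain
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/u05/oradata/redo05a.log', '/u06/oradata/redo05b.log') SIZE 50M;
    -- force a switch so the group to be dropped can become inactive
    ALTER SYSTEM SWITCH LOGFILE;
    -- drop it only once V$LOG shows it INACTIVE (and archived)
    ALTER DATABASE DROP LOGFILE GROUP 3;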
    Please refer to:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • About automatic disappearance of Redo log file

    I had a free Oracle9i Release 1 (9.0.1) CD and I installed Oracle9i on my PC.
    When using the Database Configuration Assistant to create a database (I chose not to create a database during the installation of 9i), after the clone of the database, when it starts the database an error comes up.
    It says there was an error writing to the redo01.log file. I checked that file: it existed in the related folder, but then it disappeared. Earlier I had the redo01, 02, 03 log files. It confused me.
    Does anyone give something on that?
    Thanks.

    Well, seriously, you need to read the basic Oracle documents.
    To give short answers to your question:
    Redo logs are required for instance and crash recovery of your system.
    You need a minimum of two redo groups with a minimum of one redo member each. They are written in a circular fashion, i.e. one after another. If you maintain your database in archivelog mode, before a filled redo group is rewritten/reused it is archived through the ARCH process to an archive file, which can be used for database recovery.
    Every database requires at least two redo groups, and you can't delete them.
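    A quick way to see the current groups and their members (standard views):
    SELECT group#, status, archived FROM v$log;
    SELECT group#, member FROM v$logfile;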
    Jaffar

  • About unable to initialize log file!

    Hi,
    I insatll the Sun Java communications suite 5 in one linux host.
    When I restart web server, I got the following error, would you please give me a hand ?
    Thank you in advance.
    # /var/opt/sun/webserver7/https-comms.swabplus.com/bin/stopserv
    server has been shutdown
    # /var/opt/sun/webserver7/https-comms.swabplus.com/bin/startserv
    Sun Java System Web Server 7.0 B12/04/2006 08:17
    warning: CORE3283: stderr: Java HotSpot(TM) Server VM warning: Can't detect initial thread stack location - find_vma failed
    info: CORE5076: Using [Java HotSpot(TM) Server VM, Version 1.5.0_09] from [Sun Microsystems Inc.]
    info: WEB0100: Loading web module in virtual server [com5.my.com] at [amserver]
    warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    info: WEB0100: Loading web module in virtual server [com5.my.com] at [ampassword]
    warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    info: WEB0100: Loading web module in virtual server [com5.my.com] at [amcommon]
    warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    info: WEB0100: Loading web module in virtual server [com5.my.com] at [amconsole]
    warning: WEB6100: locale-charset-info is deprecated, please use parameter-encoding
    info: WEB0100: Loading web module in virtual server [com5.my.com] at [da]
    info: WEB0100: Loading web module in virtual server [com5.my.com] at [commcli]
    info: url: jar:file:/opt/sun/mfwk/share/lib/mfwk_instrum_tk.jar!/com/sun/mfwk/config/MfConfig.class
    info: url: jar:file:/opt/sun/mfwk/share/lib/mfwk_instrum_tk.jar!/com/sun/mfwk/config/MfConfig.class
    info: LogFile is: //var/opt/sun/mfwk/logs/instrum.%g
    warning: Warning: unable to initialize log file!
    warning: Couldn't get lock for //var/opt/sun/mfwk/logs/instrum.%g
    info: group = 227.227.227.1, port = 54320
    info: Set Time-to-live to 0
    info: join Group /227.227.227.1
    info: Starting listening thread
    info: sends initial RESP message in SDK
    info: HTTP3072: http-listener-1: http://com5.my.com:80 ready to accept requests
    info: CORE3274: successful server startup

    This happens when you install the web server as "non-root" and start the instance as "root".
    If you avoid this, you can get rid of:
    warning: CORE3283: stderr: Java HotSpot(TM) Server VM warning: Can't detect initial thread stack location - find_vma failed
    The cause of the log file initialization problem appears to be a permission problem.
    Can you get back to us once you have taken care of what is suggested above?

  • What to do about NodeManager's .out log file?

    Hi all,
    in your production environments, how do you handle <server name>.out log file created by NodeManager?
    Do you let it run in gigabytes size? Currently our applications are a bit verbatim on stdout which in our test
    environment creates quite large .out files, since the server is not restarted for many weeks. How could
    we handle this? Reduce stdout output? Do not start the managed server through NodeManager? Create a
    separate process to delete the file every few days?
    I've check in the documentation and it says that I can't limit the size of the file.
    Thanks

    You cannot rotate the stdout file unless you are starting WebLogic as a Windows service.
    Link: http://e-docs.bea.com/wls/docs81/adminguide/winservice.html#1188735
    You can use third-party log rotation utilities on Unix platforms:
    http://linuxcommand.org/man_pages/logrotate8.html
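    For example, a minimal logrotate sketch (the path is hypothetical; copytruncate matters because the server keeps the .out file open):
    /opt/domains/mydomain/servers/ms1/logs/ms1.out {
        weekly
        rotate 4
        compress
        copytruncate
        missingok
    }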

  • Confusion about archived log file backup

    From a book, I see
    "we can not combine archived redo log files and datafiles into a single backup",
    But I do have a command
    "backup...........plus archivelog"
    They seem to contradict each other;
    why is that?

    They do not conflict with each other:
    "we can not combine archived redo log files and datafiles into a single backup" refers to backup pieces. Oracle cannot combine archived logs and, for example, a tablespace backup in a single backup piece.
    The following command just tells RMAN to perform a backup of a tablespace and the archived logs, but as a result it will create at least two backup pieces: one for the tablespace and a second for the archived redo logs.
    RMAN> backup tablespace users plus archivelog delete input skip inaccessible format "C:\%U.bkf";
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=128 RECID=142 STAMP=690573687
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0SKIOKQ3_1_1.BKF tag=TAG20090629T004553 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:02:45
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00128_0686744258.001 RECID=142 STAMP=690573687
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00129_0686744258.001 RECID=143 STAMP=690588250
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=C:\APP\MOB\ORADATA\ORCL\USERS01.DBF
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\APP\MOB\FLASH_RECOVERY_AREA\ORCL\BACKUPSET\2009_06_29\O1_MF_NNNDF_TAG20090629T004911_54HWVKFO_.BKP tag=TAG20090629T004911 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
    Finished backup at 29-JUN-09
    Starting backup at 29-JUN-09
    current log archived
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting archived log backup set
    channel ORA_DISK_1: specifying archived log(s) in backup set
    input archived log thread=1 sequence=148 RECID=162 STAMP=690770984
    channel ORA_DISK_1: starting piece 1 at 29-JUN-09
    channel ORA_DISK_1: finished piece 1 at 29-JUN-09
    piece handle=C:\0UKIOL1B_1_1.BKF tag=TAG20090629T004946 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    channel ORA_DISK_1: deleting archived log(s)
    archived log file name=C:\APP\MOB\ORADATA\ORCL\ARCH\ARC00148_0686744258.001 RECID=162 STAMP=690770984
    Finished backup at 29-JUN-09
    Starting Control File and SPFILE Autobackup at 29-JUN-09
    piece handle=C:\APP\MOB\PRODUCT\11.1.0\DB_1\DATABASE\C-1213135877-20090629-00 comment=NONE
    Finished Control File and SPFILE Autobackup at 29-JUN-09With kind regards
    Krystian Zieja

  • Question about full backup and Transaction Log file

    I had a query: will taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2. They went away when I manually took a backup of the log file. I am a bit confused: should I
    perform a transaction log backup as well as the daily full database backup to avoid such things in future? Also, until I run SHRINKFILE, the storage space on the server won't get reduced, right?

    yes, a full backup does not clear the log file; only a log backup does. once the log backup is taken, it will mark the inactive VLFs in the log file as reusable (status 0).
    you should perform log backups per your business SLA for data loss.
    Go ahead and ask this of yourself:
    If a disaster strikes and your database server is lost and your only option is to restore it from backup,
    how much data loss can your business handle?
    the answer to this question is how frequent your log backups should be:
    if the answer is 10 mins, you should have log backups at least every 10 mins.
    if the answer is 30 mins, you should have log backups at least every 30 mins.
    if the answer is 90 mins, you should have log backups at least every 90 mins.
    so, when you restore, you will restore the latest full backup + differential (the latest one taken after the restored full backup)
    and all the log backups taken since the latest (restored full or differential) backup.
    there are several resources on the web, including YouTube videos, that explain these concepts clearly. I advise you to look at them.
    to release the file space to the OS, you have to shrink the file. a log file shrink happens from the end, up to the point where it reaches an active VLF.
    if there are no inactive VLFs at the end, then no matter how many inactive VLFs the log file has at the beginning, the log file is not shrinkable.
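    A hedged T-SQL sketch of the log backup plus shrink sequence (the database and logical file names are made up):
    -- back up the log so inactive VLFs at the end can be freed
    BACKUP LOG MyDb TO DISK = N'D:\backup\MyDb.trn';
    -- then shrink the log file to a target size in MB
    USE MyDb;
    DBCC SHRINKFILE (MyDb_log, 512);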
    Hope it Helps!!
