Log file question.

Hi All,
I am running EBS R12 (12.1.1) with database 11.1.0.7 on OUL5 x64.
I have two questions and would like your help.
When applying a patch I got the error below, and when using adadmin to regenerate the form I still got it:
The following Oracle Forms objects did not generate successfully:
gl forms/ZHS GLXJEENT.fmx
Where is the log file located so I can see more detail about the error?
I looked in $APPL_TOP/admin/TEST/log and $APPL_TOP/admin/TEST/out,
but they did not have much detail about the error.
I found this note from Hussein:
Please make sure you have no invalid objects in the database before trying to generate the form again via adadmin or manually.
If the form fails to compile, please check the error log file for details -- How to Generate Form, Library and Menu for Oracle Applications [ID 130686.1]
To regenerate the form executable from the command line:
frmcmp_batch.sh module=/TEST/testappl/au/12.0/forms/US/ARXTWMAI.fmb
userid=APPS/APPS output_file=/TEST/testappl/ar/12.0/forms/US/ARXTWMAI.fmx
module_type=form compile_all=special
This form is for US; if I want to generate the form for ZHS, should I just change the locations to point to ZHS, like this?
frmcmp_batch.sh module=/TEST/testappl/au/12.0/forms/ZHS/ARXTWMAI.fmb
userid=APPS/APPS output_file=/TEST/testappl/ar/12.0/forms/ZHS/ARXTWMAI.fmx
module_type=form compile_all=special
Thanks for your help.
Regards

The following Oracle Forms objects did not generate successfully: gl forms/ZHS GLXJEENT.fmx
The adpatch log will tell you which worker was processing this fmb. Note that worker number, then in the same directory find the ad*<worker_num>*.log file and check that log file.
You can find the adpatch and adworker logs on the application tier at:
Application Tier -- adpatch log - $APPL_TOP/admin/<SID>/log/
Thanks,
JD

Similar Messages

  • Multiplexing Redo Log Files question

    If you are running RAC on ASM on a RAID system, is this required?  We are using an HP autoraid which mirrors at the block level and in the documentation about Multiplexing Redo Log Files it says that you do it to protect against media failure.  The autoraid that we are using gives us multiple levels of redundancy against media failure so I was wondering if Multiplexing would be adding more overhead than is needed.  Thanks for your input.

    ASM is quite complex and I'm not going to outline all the advantages or reasons for ASM, but under ASM you can drop and add devices to maintain your capacity needs online without losing data, which you cannot do using RAID, which requires a re-initialize, for example, regardless of redundancy. Please see the documentation. ASM, like pretty much everything Oracle, will add complexity and you will have to check your requirements. ASM is however pretty much the standard. If you use external RAID, make sure your storage is not using RAID 5 or 0. Regarding logical errors, you could for example overwrite or delete a file by mistake, in which case file redundancy does not protect you. If you are looking for reasons or ways not to use ASM, I'm sure you will find them, but what's the point?

  • Listener log file questions

    I have a few queries regarding the listener.log file.
    1) What is the use of the listener.log file?
    2) Does the listener.log file need to be purged?
    3) When is the listener.log created?

    1009230 wrote:
    I have a few queries regarding the listener.log file.
    1) What is the use of the listener.log file?
    Take a look at what is IN the log file. Imagine how having access to that information might be useful. The use of the listener log is the same as any other process log. 99% of the time you have no reason to even look at it. But that other 1%, it is invaluable.
    2) Does listener.log file need to be purged?
    Only if you need the disk space.
    3) When is the listener.log created?
    Whenever the listener starts and finds it has not already been created.

  • Log file sync question

    Metalink note 34592.1 has been mentioned several times in this forum as well as elsewhere, notably here
    http://christianbilien.wordpress.com/2008/02/12/the-%E2%80%9Clog-file-sync%E2%80%9D-wait-event-is-not-always-spent-waiting-for-an-io/
    The question I have relates to the stated breakdown of 'log file sync' wait event:
    1. Wakeup LGWR if idle
    2. LGWR gathers the redo to be written and issue the I/O
    3. Time for the log write I/O to complete
    4. LGWR I/O post processing
    5. LGWR posting the foreground/user session that the write has completed
    6. Foreground/user session wakeup
    Since the note says that the system 'redo write' statistic includes steps 2 and 3, the suggestion is that the difference between it and 'log file sync' is due to CPU-related work on steps 1, 4, 5 and 6 (or on waiting on the CPU run queue).
    Christian's article, quoted above, theorises about 'CPU storms' and the Metalink note also suggests that steps 5 and 6 could be costly.
    However, my understanding of how LGWR works is that if it is already in the process of writing out one set of blocks (let us say associated with a commit of transaction 'X', amongst others) at the time another transaction (call it transaction 'Y') commits, then LGWR will not commence the write of the commit for transaction 'Y' until the I/Os associated with the commit of transaction 'X' complete.
    So, if I have an average 'redo write' time of, say, 12ms and a 'log file sync' time of, say, 34ms (yes, of course these are real numbers :-)) then I would have thought that this 22ms delay was due at least partly to LGWR 'falling behind' in its work.
    Nonetheless, it seems to me that this extra delay could only be a maximum of 12ms, so this still leaves 10ms (34 - 12 - 12) that can only be accounted for by CPU usage.
    Clearly, my analysis contains a lot of conjecture, hence this note.
    Can anybody point me in the direction of some facts?

    Tony Hasler wrote:
    Can anybody point me in the direction of some facts?
    It depends on what you mean by facts - presumably only the people who wrote the code know what really happens; the rest of us have to guess.
    You're right about point 1 in the MOS note: it should include "or wait for current lgwr write and posts to complete".
    This means, of course, that your session could see its "log file sync" taking twice the "redo write time" because it posted lgwr just after lgwr has started to write - so you have to wait two write and post cycles. Generally the statistical effects will reduce this extreme case.
    You've been pointed to the two best bits of advice on the internet: As Kevin points out, if you have lgwr posting a lot of processes in one go it may stall as they wake up, so the batch of waiting processes has to wait extra time; and as Riyaj points out - there's always dtrace (et al.) if you want to see what's really happening. (Tanel has some similar notes, I think, on LFS).
    If you're stuck with Oracle diagnostics only then:
    (redo size) / (redo synch writes) for sessions will tell you the typical "commit size"
    (redo size + redo wastage) / (redo writes) for lgwr will tell you the typical redo write size
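    For anyone wanting to compute those two ratios, here is a minimal sketch against v$sysstat (instance-wide figures only; the per-session equivalents would come from v$sesstat):
    -- Sketch only: instance-wide averages derived from v$sysstat
    SELECT MAX(DECODE(name, 'redo size', value)) /
           NULLIF(MAX(DECODE(name, 'redo synch writes', value)), 0)  AS avg_commit_size,
           (MAX(DECODE(name, 'redo size', value)) +
            MAX(DECODE(name, 'redo wastage', value))) /
           NULLIF(MAX(DECODE(name, 'redo writes', value)), 0)        AS avg_redo_write_size
    FROM   v$sysstat
    WHERE  name IN ('redo size', 'redo wastage', 'redo writes', 'redo synch writes');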
    If you have a significant number of small "commit sizes" per write (more than the CPU count, say) then you may be looking at Kevin's storm.
    Watch out for a small number of sessions with large commit sizes running in parallel with a large number of sessions with small commit sizes - this could make all the "small" processes run at the speed of the "large" processes.
    It's always worth looking at the event histogram for the critical wait events to see if their patterns offer any insights.
    Regards
    Jonathan Lewis

  • Question about how Oracle manages Redo Log Files

    Good morning,
    Assuming a configuration that consists of 2 redo log groups (Group A and B), each group consisting of 2 disks (Disks A1 & A2 for Group A and Disks B1 and B2 for group B). Further, let's assume that each redo log file resides by itself in a disk storage device and that the device is dedicated to it. Therefore in the above scenario, there are 4 disks, one for each redo log file and, each disk contains nothing else other than a redo log file. Furthermore, let's assume that the database is in ARCHIVELOG mode and that the archive files are stored on yet another different set of devices.
    sort of graphically:
        GROUP A             GROUP B
          A1                  B1
          A2                  B2
    The question is: When the disks that comprise Group A are filled and Oracle switches to the disks in Group B, can the disks in Group A be taken offline, maybe even physically removed from the system if necessary, without affecting the proper operation of the database? Can the Archiver process be temporarily delayed until the disks (that were removed) are brought back online, or is the DBA forced to wait until the Archiver process has finished creating a copy of the redo log file into the archive?
    Thank you for your help,
    John.

    Hello,
    Dropping Log Groups
    To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:
    * An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)
    * You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
    * Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    GROUP# ARC STATUS
    1 YES ACTIVE
    2 NO CURRENT
    3 YES INACTIVE
    4 YES INACTIVE
    Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.
    The following statement drops redo log group number 3:
    ALTER DATABASE DROP LOGFILE GROUP 3;
    When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. Rather, the control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
    When using Oracle-managed files, the cleanup of operating systems files is done automatically for you.
    Your database needs at least two redo log groups because the LGWR (log writer) process writes to the redo log files in a circular manner, so with only two groups the instance will hang if one of them is taken away. If you want to take one group offline, add a third group first, force a log switch so the new group becomes current, and then drop the group you want to remove.
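    As an illustration of that sequence, a rough sketch (the group numbers, member path and size below are placeholders, not values from this thread):
    -- Add a third group first (path and size are examples only)
    ALTER DATABASE ADD LOGFILE GROUP 3 ('/u01/oradata/db/redo03.log') SIZE 100M;
    -- Switch so the group you want to remove is no longer CURRENT, and checkpoint so it can go INACTIVE
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    -- Confirm it is INACTIVE (and ARCHIVED, if archiving is enabled) before dropping
    SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;
    ALTER DATABASE DROP LOGFILE GROUP 1;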
    Please refer to:
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/onlineredo.htm#7438
    Kind regards
    Mohamed
    Oracle DBA

  • Question about full backup and Transaction Log file

    I had a query: will taking a full backup daily keep my log file from growing? After taking the full backup I still see some of the VLFs in status 2; they went away when I manually took a backup of the log file. I am a bit confused: should I
    perform both a transaction log backup and a full database backup daily to avoid this in future? Also, until I run a shrinkfile the storage space on the server won't be reduced, right?

    Yes, a full backup does not clear the log file; only a log backup does. Once the log backup is taken, it will set the inactive VLFs in the log file to status 0.
    You should perform log backups per your business SLA for data loss.
    Go ahead and ask yourself:
    If a disaster strikes and your database server is lost and your only option is to restore it from backup,
    how much data loss can your business handle?
    The answer to this question determines how frequently your log backups should be taken.
    If the answer is 10 mins, you should have log backups at least every 10 mins.
    If the answer is 30 mins, you should have log backups at least every 30 mins.
    If the answer is 90 mins, you should have log backups at least every 90 mins.
    So, when you restore, you will restore the latest full backup + the latest differential taken after that full backup,
    and all the log backups taken since that restored full or differential backup.
    There are several resources on the web, including YouTube videos, that explain these concepts clearly; I advise you to look at them.
    To release file space to the OS, you have to shrink the file. A log file shrink happens from the end of the file up to the point where it reaches an active VLF.
    If there are no inactive VLFs at the end, the log file is not shrinkable, no matter how many inactive VLFs it has at the beginning.
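    As a rough T-SQL sketch of those two steps (database name, logical log file name, backup path and target size are all placeholders):
    -- Back up the transaction log so inactive VLFs can be marked reusable
    BACKUP LOG [MyDatabase] TO DISK = N'D:\Backups\MyDatabase_log.trn';
    -- Only if you really need to return space to the OS, shrink the log file to a target size in MB
    USE [MyDatabase];
    DBCC SHRINKFILE (N'MyDatabase_log', 1024);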
    Hope it Helps!!

  • Question regarding alert log file and trace files

    What should the alert log file size be? When should it be deleted? And for how many days should user trace files be kept?
    Also, will anyone please tell me the importance of these files?
    Thanks

    This may help: http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14231/manproc.htm#sthref729
    There are a few discussions on it here:
    Re: Alert Log File
    alert log file contents viewing
    Re: how to read alert log file? is there any tool available?

  • Is the disk equal to log files and other questions?

    In the web page http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/introduction.html#dplfeatures, there is a statement, " The checkpointer is responsible for flushing database data to *disk* that was written to cache as the result of a transaction commit ".
    I wonder if the disk here means log files under the JE home directory.
    From my understanding of these documents and other web resources, the checkpointer writes records from the cache to the log files (disk), and the cleaner then reorganizes and removes unused log files. Records are brought from disk into the cache by querying the index, which is organized as a B-Tree structure, and the In-Compressor deletes some empty internal nodes of the B-Tree.
    I wonder if the above is right to describe the relations among these components: checkpointer, cleaner, B-Tree and In-Compressor.
    Thanks for your help!
    Best,
    Jiangfan

    Jiangfan Shi wrote:
    I wonder if the disk here means log files under the JE home directory.
    Yes.
    I wonder if the above is right to describe the relations among these components: checkpointer, cleaner, B-Tree and In-Compressor.
    Yes.

  • Question on redo log files at the standby

    Oracle version: 10.2.0.5
    Platform : AIX
    We have 2 node RAC primary with 2 node RAC standby
    Primary Instance1 named as cmapcp1
    Primary Instance2 named as cmapcp2
    Standby Instance1 named as cmapcp3
    Standby Instance2 named as cmapcp4
    At the standby side:
    SQL> show parameter log_file_name_convert
    NAME                 TYPE                 VALUE
    log_file_name_conver string               cmapcp1, cmapcp3, cmapcp2, cmapcp4
    Despite the value set for log_file_name_convert, I don't see any change in names of Online and Standby redo logs at the Standby site.
    -- From primary
    SQL> select member,type from v$logfile;
    MEMBER                                             TYPE
    +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf             STANDBY
    16 rows selected.
    -- From standby
    SQL> select member,type from v$logfile;
    MEMBER                                             TYPE
    +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE
    +CMAPCP_DATA01/cmapcp/cmapcp_log11.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log12.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log13.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log14.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log15.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log16.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log17.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log18.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log19.dbf             STANDBY
    +CMAPCP_DATA01/cmapcp/cmapcp_log20.dbf             STANDBY
    16 rows selected.
    Another thing I noticed: v$log doesn't list standby redo logs. This is expected behaviour, I guess.
    Below is the output from Primary and Standby (it is the same)
    set linesize 200
    set pagesize 50
    col member for a50
    break on INST SKIP PAGE on GROUP# SKIP 1
    select l.thread# inst, l.group#,lf.member, lf.type
        from v$log l , v$logfile lf
        where l.group# = lf.group#
        order by 1,2 ;
          INST     GROUP# MEMBER                                             TYPE
             1          1 +CMAPCP_DATA01/cmapcp/cmapcp_log01.dbf             ONLINE
                        2 +CMAPCP_DATA01/cmapcp/cmapcp_log02.dbf             ONLINE
                        3 +CMAPCP_DATA01/cmapcp/cmapcp_log03.dbf             ONLINE
          INST     GROUP# MEMBER                                             TYPE
             2          4 +CMAPCP_DATA01/cmapcp/cmapcp_log04.dbf             ONLINE
                        5 +CMAPCP_DATA01/cmapcp/cmapcp_log05.dbf             ONLINE
                        6 +CMAPCP_DATA01/cmapcp/cmapcp_log06.dbf             ONLINE

    John_75 wrote:
    Thank you ckpt, mseberg.
    I think log_file_name_convert is set wrongly, as you've mentioned. But if I don't want any change to the names of the online or standby redo log files on the standby, I don't have to set log_file_name_convert at all, right?
    From the same link:
    If you specify an odd number of strings (the last string has no corresponding replacement string), an error is signalled during startup. If the filename being converted matches more than one pattern in the pattern/replace string list, the first matched pattern takes effect. There is no limit on the number of pairs that you can specify in this parameter (other than the hard limit of the maximum length of multivalue parameters).
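    For reference, the parameter takes ordered pattern/replacement pairs and is static; a sketch of how it might be set (the path fragments below are purely illustrative, not taken from this configuration):
    -- Each primary path fragment is followed by its standby replacement.
    -- Static parameter: set in the spfile and restart the standby for it to take effect.
    ALTER SYSTEM SET log_file_name_convert =
      '+PRIMARY_DATA01/primdb/', '+STANDBY_DATA01/stbydb/'
      SCOPE = SPFILE SID = '*';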

  • Question regarding a 95 G config/log file: LabView_32_11.0_Lab.Admin_cur.txt

    Hi Everyone,
    One of our lab computers running LabVIEW was reported to be running out of storage and I was asked to figure out why. I rifled through some Windows folders to find the culprit, specifically the folder c:\users\Lab.Admin\AppData\Local\Temp, wherein I found a 95 G file entitled LabView_32_11.0_Lab.Admin_cur.txt. I did note that Lab.Admin is the user name and is also included in the filename, so I'm assuming this is some sort of config/log file for the current user.
    The file was too large for me to open and look at with any program I had available, so I just renamed it, restarted LabVIEW to verify that it would be recreated, and then deleted the bloated file. The newly created file has the following inside of it:
    #Date: Wed, Jun 13, 2012 2:49:00 PM
    #OSName: Windows 7 Professional
    #OSVers: 6.1
    #OSBuild: 7600
    #AppName: LabVIEW
    #Version: 11.0 32-bit
    #AppKind: FDS
    #AppModDate: 06/22/2011 18:12 GMT
    #LabVIEW Base Address: 0x00400000
    Can anyone tell me the purpose of this file and what might have caused it to grow to 95 G? I'm just interested in learning how to prevent this from happening again.
    Cheers,
    Alex
    Alexander H. | Software Developer | CLAD

    Yes it is, or rather was, a 95 Gb text file.
    I suspect you are correct that it is a crash dump/error log file. It makes sense as this computer has been running a test station for the past year that has been reported as less than stable. I'll keep an eye on that file over the next few days to see if anything is added to it while the station is running.
    Thanks for the suggestions,
    Alex
    Alexander H. | Software Developer | CLAD

  • Forms9i / Oracle 91AS question - log file enclosed

    I get the following in my log file - can anyone help me with regard to where to start looking to fix this?
    05/11/03 13:31 Started
    05/11/03 13:31 forms90web: oracle.jsp.runtimev2.JspServlet: init
    05/11/03 13:31 forms90web: 9.0.2.0.0 Started
    05/11/03 13:31 forms90web: oracle.forms.servlet.FormsServlet: init
    05/11/03 13:31 forms90web: FormsServlet init():
    configFileName: d:\OraDev9i/forms90/server/formsweb.cfg
    testMode: false
    05/11/03 13:31 forms90web: oracle.forms.servlet.ListenerServlet: init
    05/11/03 13:31 forms90web: ListenerServlet init()
    05/11/03 14:35 forms90web: Forms session <1> aborted: unable to communicate with runtime process.
    05/11/03 14:35 forms90web: Forms session <1> exception stack trace:
    java.io.InterruptedIOException: Read timed out
         at java.net.SocketInputStream.socketRead(Native Method)
         at java.net.SocketInputStream.read(SocketInputStream.java:90)
         at java.io.BufferedInputStream.fill(BufferedInputStream.java:186)
         at java.io.BufferedInputStream.read(BufferedInputStream.java:204)
         at java.io.DataInputStream.readLine(DataInputStream.java:449)
         at oracle.forms.net.HTTPHeaderTool.parseResponseHeader(Unknown Source)
         at oracle.forms.servlet.ListenerServlet.forwardResponseFromRunform(Unknown Source)
         at oracle.forms.servlet.ListenerServlet.doPost(Unknown Source)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:283)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:336)
         at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:59)
         at oracle.security.jazn.oc4j.JAZNFilter.doFilter(JAZNFilter.java:283)
         at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:523)
         at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:269)
         at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:735)
         at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].server.http.HttpRequestHandler.run(HttpRequestHandler.java:243)
         at com.evermind[Oracle9iAS (9.0.2.0.0) Containers for J2EE].util.ThreadPoolThread.run(ThreadPoolThread.java:64)
    Thanks.


  • Logical sql in log file.

    Can someone please tell me how to see the complete SQL query in the log file? If I run the same query again, the SQL is not produced. I looked in the server log file and also the Manage Sessions log; it just says all columns from 'Subject Area'. I want to see all the joins and filters as well. Even for repeated queries, how can I see the complete SQL? I set my logging level to 2.

    http://lmgtfy.com/?q=obiee+disable+query+caching
    http://catb.org/esr/faqs/smart-questions.html#homework

  • Total lock-ups with fan running - translate system.log file please!?

    Hi, all. My late 2005 2.3 GHz dual G5 has been experiencing random lock-ups for as long as I can remember. My system is up to date and I have tested each pair of the 5 GB of RAM that I have, and the system freezes with each pair. It can happen at any time, even when I am doing absolutely nothing, for example overnight. I am at my wits' end!
    Here's the system log file for the latest freezes. Can anyone tell me what's going on here??? I really need to get to the root of this problem. Thanks so so much in advance.
    Apr 12 17:32:52 Marc-Weinbergs-Computer kernel[0]: AFP_VFS afpfs_Reconnect: connect on /Volumes/Macintosh HD failed 89.
    Apr 12 17:32:52 Marc-Weinbergs-Computer kernel[0]: AFP_VFS afpfs_unmount: /Volumes/Macintosh HD, flags 524288, pid 62
    Apr 12 17:44:46 Marc-Weinbergs-Computer /Library/Application Support/FLEXnet Publisher/Service/11.03.005/FNPLicensingService: Started\n
    Apr 12 17:44:46 Marc-Weinbergs-Computer /Library/Application Support/FLEXnet Publisher/Service/11.03.005/FNPLicensingService: This service performs licensing functions on behalf of FLEXnet enabled products.\n
    Apr 12 18:01:06 Marc-Weinbergs-Computer KernelEventAgent[62]: tid 00000000 received unknown event (256)
    Apr 12 18:01:49 Marc-Weinbergs-Computer KernelEventAgent[62]: tid 00000000 received unknown event (256)
    Apr 12 18:08:29 Marc-Weinbergs-Computer diskarbitrationd[69]: SDCopy [1056]:36091 not responding.
    Apr 12 18:16:18 Marc-Weinbergs-Computer KernelEventAgent[62]: tid 00000000 received unknown event (256)
    Apr 12 18:16:53 Marc-Weinbergs-Computer KernelEventAgent[62]: tid 00000000 received unknown event (256)
    Apr 12 19:24:12 Marc-Weinbergs-Computer ntpd[191]: time reset -0.650307 s
    Apr 13 01:05:45 Marc-Weinbergs-Computer ntpd[191]: time reset -0.496917 s
    Apr 13 03:15:03 Marc-Weinbergs-Computer cp: error processing extended attributes: Operation not permitted
    Apr 13 07:15:03 Marc-Weinbergs-Computer postfix/postqueue[1778]: warning: Mail system is down -- accessing queue directly
    Apr 13 03:15:03 Marc-Weinbergs-Computer cp: error processing extended attributes: Operation not permitted
    Apr 13 15:53:53 Marc-Weinbergs-Computer KernelEventAgent[62]: tid 00000000 received unknown event (256)
    Apr 13 15:53:54 Marc-Weinbergs-Computer KernelEventAgent[62]: tid 00000000 received unknown event (256)
    Apr 13 22:15:48 localhost kernel[0]: standard timeslicing quantum is 10000 us
    Apr 13 22:15:47 localhost mDNSResponder-108.6 (Jul 19 2007 11: 33:32)[63]: starting
    Apr 13 22:15:48 localhost kernel[0]: vmpagebootstrap: 506550 free pages
    Apr 13 22:15:47 localhost memberd[70]: memberd starting up
    Apr 13 22:15:49 localhost kernel[0]: migtable_maxdispl = 70
    Apr 13 22:15:49 localhost kernel[0]: Added extension "com.firmtek.driver.FTATASil3132E" from archive.
    Apr 13 22:15:49 localhost kernel[0]: Added extension "com.firmtek.driver.Sil3112DeviceNub" from archive.
    Apr 13 22:15:49 localhost kernel[0]: Copyright (c) 1982, 1986, 1989, 1991, 1993
    Apr 13 22:15:49 localhost kernel[0]: The Regents of the University of California. All rights reserved.
    Apr 13 22:15:49 localhost kernel[0]: using 5242 buffer headers and 4096 cluster IO buffer headers
    Apr 13 22:15:49 localhost kernel[0]: AppleKauaiATA shasta-ata features enabled
    Apr 13 22:15:49 localhost kernel[0]: DART enabled
    Apr 13 22:15:47 localhost DirectoryService[75]: Launched version 2.1 (v353.6)
    Apr 13 22:15:49 localhost kernel[0]: FireWire (OHCI) Apple ID 52 built-in now active, GUID 001451ff fe1b4c7e; max speed s800.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:48 localhost lookupd[71]: lookupd (version 369.5) starting - Sun Apr 13 22:15:48 2008
    Apr 13 22:15:49 localhost kernel[0]: USBF: 20.590 OHCI driver: OHCIRootHubPortPower bit not sticking (1). Retrying.
    Apr 13 22:15:49 localhost kernel[0]: Extension "com.microsoft.driver.MicrosoftKeyboardUSB" has no kernel dependency.
    Apr 13 22:15:49 localhost kernel[0]: AppleSMUparent::clientNotifyData nobody registed for 0x40
    Apr 13 22:15:49 localhost kernel[0]: Security auditing service present
    Apr 13 22:15:49 localhost kernel[0]: BSM auditing present
    Apr 13 22:15:49 localhost kernel[0]: disabled
    Apr 13 22:15:49 localhost kernel[0]: rooting via boot-uuid from /chosen: 82827EDF-0263-3B93-BEED-4B114E820B85
    Apr 13 22:15:49 localhost kernel[0]: Waiting on <dict ID="0"><key>IOProviderClass</key><string ID="1">IOResources</string><key>IOResourceMatch</key><string ID="2">boot-uuid-media</string></dict>
    Apr 13 22:15:49 localhost kernel[0]: Got boot device = IOService:/MacRISC4PE/ht@0,f2000000/AppleMacRiscHT/pci@9/IOPCI2PCIBridge/k2-sat a-root@C/AppleK2SATARoot/k2-sata@0/AppleK2SATA/ATADeviceNub@0/IOATABlockStorageD river/IOATABlockStorageDevice/IOBlockStorageDriver/ST3320620AS Media/IOApplePartitionScheme/AppleHFS_Untitled1@10
    Apr 13 22:15:49 localhost kernel[0]: BSD root: disk0s10, major 14, minor 12
    Apr 13 22:15:49 localhost kernel[0]: jnl: replay_journal: from: 8451584 to: 11420160 (joffset 0x952000)
    Apr 13 22:15:50 localhost kernel[0]: AppleSMU -- shutdown cause = 3
    Apr 13 22:15:50 localhost kernel[0]: AppleSMU::PMU vers = 0x000d00a0, SPU vers = 0x67, SDB vers = 0x01,
    Apr 13 22:15:50 localhost kernel[0]: HFS: Removed 8 orphaned unlinked files
    Apr 13 22:15:50 localhost kernel[0]: Jettisoning kernel linker.
    Apr 13 22:15:50 localhost kernel[0]: Resetting IOCatalogue.
    Apr 13 22:15:50 localhost kernel[0]: Matching service count = 1
    Apr 13 22:15:50 localhost kernel[0]: Matching service count = 1
    Apr 13 22:15:50 localhost kernel[0]: Matching service count = 1
    Apr 13 22:15:50 localhost kernel[0]: Matching service count = 1
    Apr 13 22:15:50 localhost kernel[0]: Matching service count = 1
    Apr 13 22:15:50 localhost kernel[0]: Matching service count = 3
    Apr 13 22:15:50 localhost kernel[0]: NVDANV40HAL loaded and registered.
    Apr 13 22:15:50 localhost kernel[0]: PowerMac112ThermalProfile::start 1
    Apr 13 22:15:50 localhost kernel[0]: PowerMac112ThermalProfile::end 1
    Apr 13 22:15:50 localhost kernel[0]: SMUNeo2PlatformPlugin::initThermalProfile - entry
    Apr 13 22:15:50 localhost kernel[0]: SMUNeo2PlatformPlugin::initThermalProfile - calling adjust
    Apr 13 22:15:50 localhost kernel[0]: PowerMac112ThermalProfile::adjustThermalProfile start
    Apr 13 22:15:50 localhost kernel[0]: IPv6 packet filtering initialized, default to accept, logging disabled
    Apr 13 22:15:50 localhost kernel[0]: BCM5701Enet: Ethernet address 00:14:51:61:ee:78
    Apr 13 22:15:50 localhost kernel[0]: BCM5701Enet: Ethernet address 00:14:51:61:ee:79
    Apr 13 22:15:51 localhost lookupd[86]: lookupd (version 369.5) starting - Sun Apr 13 22:15:51 2008
    Apr 13 22:15:51 localhost kernel[0]: jnl: replay_journal: from: 21611008 to: 7857152 (joffset 0x952000)
    Apr 13 22:15:51 localhost kernel[0]: jnl: replay_journal: from: 673280 to: 24382976 (joffset 0x952000)
    Apr 13 22:15:51 localhost kernel[0]: jnl: replay_journal: from: 3890176 to: 6294016 (joffset 0x7d01000)
    Apr 13 22:15:51 localhost diskarbitrationd[69]: disk0s10 hfs 82827EDF-0263-3B93-BEED-4B114E820B85 NewestSeagate /
    Apr 13 22:15:52 localhost kernel[0]: NVDA,Display-A: vram [90020000:10000000]
    Apr 13 22:15:52 localhost mDNSResponder: Adding browse domain local.
    Apr 13 22:15:53 localhost kernel[0]: hfs mount: enabling extended security on Maxtor
    Apr 13 22:15:53 localhost diskarbitrationd[69]: disk1s3 hfs 0DBE2113-B1F5-388F-BF70-2E366A095330 Maxtor /Volumes/Maxtor
    Apr 13 22:15:54 localhost kernel[0]: NVDA,Display-B: vram [94000000:08000000]
    Apr 13 22:15:54 Marc-Weinbergs-Computer configd[67]: setting hostname to "Marc-Weinbergs-Computer.local"
    Apr 13 22:15:54 Marc-Weinbergs-Computer /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow: Login Window Application Started
    Apr 13 22:15:56 Marc-Weinbergs-Computer diskarbitrationd[69]: disk2s3 hfs 971CABB3-C211-38FC-8E91-6B4F8EA5FA20 B08-09-07 /Volumes/B08-09-07
    Apr 13 22:15:56 Marc-Weinbergs-Computer loginwindow[110]: Login Window Started Security Agent
    Apr 13 22:15:57 Marc-Weinbergs-Computer kernel[0]: AppleBCM5701Ethernet - en1 link active, 1000-Mbit, full duplex, symmetric flow control enabled
    Apr 13 22:15:57 Marc-Weinbergs-Computer configd[67]: AppleTalk startup
    Apr 13 22:15:57 Marc-Weinbergs-Computer TabletDriver[119]: #### GetFrontProcess failed to get front process (-600)
    Apr 13 22:15:59 Marc-Weinbergs-Computer configd[67]: posting notification com.apple.system.config.network_change
    Apr 13 22:16:00 Marc-Weinbergs-Computer configd[67]: posting notification com.apple.system.config.network_change
    Apr 13 22:16:00 Marc-Weinbergs-Computer configd[67]: executing /System/Library/SystemConfiguration/Kicker.bundle/Contents/Resources/enable-net work
    Apr 13 22:16:00 Marc-Weinbergs-Computer configd[67]: posting notification com.apple.system.config.network_change
    Apr 13 22:16:01 Marc-Weinbergs-Computer lookupd[123]: lookupd (version 369.5) starting - Sun Apr 13 22:16:01 2008
    Apr 13 22:16:01 Marc-Weinbergs-Computer kernel[0]: HFS: Removed 2 orphaned unlinked files
    Apr 13 22:16:01 Marc-Weinbergs-Computer diskarbitrationd[69]: disk3s3 hfs CDA8BCC5-0CE4-33E8-A910-4B0952DBC230 FullBU-09-07 /Volumes/FullBU-09-07
    Apr 13 22:16:04 Marc-Weinbergs-Computer configd[67]: target=enable-network: disabled
    Apr 13 22:16:05 Marc-Weinbergs-Computer configd[67]: AppleTalk startup complete
    Apr 13 22:16:09 Marc-Weinbergs-Computer TabletDriver[237]: #### GetFrontProcess failed to get front process (-600)
    Apr 13 22:16:09 Marc-Weinbergs-Computer launchd[241]: com.wacom.wacomtablet: exited with exit code: 253
    Apr 13 22:16:09 Marc-Weinbergs-Computer launchd[241]: com.wacom.wacomtablet: 9 more failures without living at least 60 seconds will cause job removal
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:28 EDT 2008] : ATA device 'ST3320620AS', serial number '6QF0L6LR', reports it is functioning at a temperature of 95.0F (35C) degrees.
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:28 EDT 2008] : Spare blocks for ATA device 'ST3320620AS', serial number '6QF0L6LR', appear to still be available. (Total Available: 36) (Use Attempts: 0)
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:29 EDT 2008] : ATA device 'ST3320620AS', serial number '6QF0LGS4', reports it is functioning at a temperature of 100.4F (38C) degrees.
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:29 EDT 2008] : Spare blocks for ATA device 'ST3320620AS', serial number '6QF0LGS4', appear to still be available. (Total Available: 36) (Use Attempts: 0)
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:29 EDT 2008] : ATA device 'ST3320620AS', serial number '9RV000FC', reports it is functioning at a temperature of 95.0F (35C) degrees.
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:29 EDT 2008] : Spare blocks for ATA device 'ST3320620AS', serial number '9RV000FC', appear to still be available. (Total Available: 36) (Use Attempts: 0)
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:29 EDT 2008] : ATA device 'Maxtor 6B300S0', serial number 'B6211G0H', reports it is functioning at a temperature of 89.6F (32C) degrees.
    Apr 13 22:16:29 Marc-Weinbergs-Computer /Applications/DiskWarrior.app/Contents/MacOS/DiskWarriorDaemon: [Sun Apr 13 22:16:29 EDT 2008] : Spare blocks for ATA device 'Maxtor 6B300S0', serial number 'B6211G0H', appear to still be available. (Total Available: 63) (Use Attempts: 0)
    Apr 13 22:16:54 Marc-Weinbergs-Computer /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder: _TIFFVSetField: tiff data provider: Invalid tag "Copyright" (not supported by codec).\n
    Apr 13 22:16:54 Marc-Weinbergs-Computer /System/Library/CoreServices/Finder.app/Contents/MacOS/Finder: _TIFFVSetField: tiff data provider: Invalid tag "Copyright" (not supported by codec).\n
    etc.

    Hi-
    The machine seems to be having trouble with loading certain drivers, but, as this isn't a crash log, and doesn't show the "hang-up" or freeze, it's hard to tell.
    Noted possibilities are:
    -Microsoft keyboard (possible USB power problem)
    -firmtek driver (from archive) questionable due to the "archive" annotation
    -Wacom tablet driver, causing system problems
    Running in Safe mode without freezes would help to determine if one of these drivers is the problem.
    Other possibilities are outdated drivers, or simply a need to reinstall the OS.
    If unnecessary, removing the driver(s) would be a good idea.
    External USB and FireWire devices are all suspect: disconnect them all, revert to the Apple keyboard, and test system performance. Adding one device back at a time, and testing each, will be necessary to clear each device.
    I have experienced system trouble when a Wacom tablet was not connected, but the driver was left installed.
    Disabling the driver from Startup items may be necessary to test without the Wacom tablet connected.

  • How to have a live feed from application server log file (realtime viewr )

    How to have a live feed from an application server log file (a real-time viewer for apps log files).
    Hi, thank you for reading my post.
    Is there any way to have a live feed of the application server log?
    For example, is there any application that can watch the log file and show the changes as new log items come in?
    Can someone with more experience help?

    Your question would be more suited to the Developer Forums
    http://devforums.apple.com
    but anyway...
    My goal is to develop a web application that is able to run on iPhone too, to capture the audio and video content from its camera and mic.
    Web Apps running in Safari don't have access to the camera or mic hardware.
    Or I should built a native application distributed through Apple store?
    That is your only option, although such a system already exists:
    http://itunes.apple.com/us/app/ustream-live-broadcaster/id319362690?mt=8

  • DATE fields and LOG files  in context with external tables

    I am facing two problems when dealing with the external tables feature in Oracle 9i.
    I created an external table with some fields of the DATE data type. There were no issues during the creation part, but when I query the table the DATE fields are not selected properly, even though the data is there in the files. Are there any ideas on how to deal with this?
    My next question is regarding the log files. The contents of the log file seem to keep growing when querying the external tables. Is there a way to control this behaviour?
    Suggestions / Advices on the above two issues are welcome.
    Thanks
    Lakshminarayanan

    Hi
    If you have date datatypes then:
    select
      greatest(TABCASER1.CASERRECIEVEDDATE, EVCASERS.FINALEVDATES, EVCASERS.PUBLICATIONDATE, EVCASERS.PUBLICATIONDATE, TABCASER.COMPAREACCEPDATE)
    from TABCASER, TABCASER1, EVCASERS
    where ... -- join and other conditions
    1. greatest is good enough
    2. to_date creates a date datatype from a string, using the format given by the format string ('mm/dd/yyyy')
    3. decode(a, b, c, d) is a function: if a = b then return c, else d. NULL means that there is no data in the cell of the table.
    6. to format the date for display, use the to_char function with a format model as in the to_date function.
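    Purely as an illustration of those functions used together (the table and column names below are made up):
    -- Hypothetical names; combines TO_DATE, GREATEST, TO_CHAR and DECODE
    SELECT TO_CHAR(
             GREATEST(TO_DATE(received_dt_str, 'MM/DD/YYYY'),
                      TO_DATE(accepted_dt_str, 'MM/DD/YYYY')),
             'YYYY-MM-DD')                                 AS latest_dt,
           DECODE(status_str, NULL, 'no data', status_str) AS status_txt
    FROM   some_external_table;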
    Ott Karesz
    http://www.trendo-kft.hu
