Garbled/mangled log records

I've noticed this issue for some time now; it typically occurs at a point in the boot process which, for several reasons, has been an 'interesting' one for quite a while. However, I'm going to be showing some logs soon, and the mangling is rampant at exactly the point where I'll need to provide a rational explanation of what happened.
Jan 22 07:58:22 rusty Freeing unused kernel memory: 252k freed
Jan 22 07:58:22 rusty scsi0 : pata_via
Jan 22 07:58:22 rusty scsi1 : pata_via
Jan 22 07:58:22 rusty ata1: PATA max UDMA/133 cmd 0x00000000000101f0 ctl 0x00000000000103f6 bmdma 0x000000000001fc00 irq 14
Jan 22 07:58:22 rusty ata2: PATA max UDMA/133 cmd 0x0000000000010170 ctl 0x0000000000010376 bmdma 0x000000000001fc08 irq 15
Jan 22 07:58:22 rusty ata1.00: ATA-7: WDC WD2500JB-00REA0, 20.00K20, max UDMA/100
Jan 22 07:58:22 rusty ata1.00: 488397168 sectors, multi 16: LBA48
Jan 22 07:58:22 rusty ata1.00: configured for UDMA/100
Jan 22 07:58:22 rusty ata2.00: ATAPI: TOSHIBA DVD-ROM SD-R5112, 1034, max UDMA/33
Jan 22 07:58:22 rusty ata2.01: ATAPI: HL-DT-ST DVDRAM GSA-H10N, JL10, max UDMA/33
Jan 22 07:58:22 rusty ata2.00: configured for UDMA/33
Jan 22 07:58:22 rusty ata2.01: configured for UDMA/33
Jan 22 07:58:22 rusty scsi 0:0:0:0: Direct-Access ATA WDC WD2500JB-00R 20.0 PQ: 0 ANSI: 5
Jan 22 07:58:22 rusty scsi 1:0:0:0: CD-ROM TOSHIBA DVD-ROM SD-R5112 1034 PQ: 0 ANSI: 5
Jan 22 07:58:22 rusty scsi 1:0:1:0: CD-ROM HL-DT-ST DVDRAM GSA-H10N JL10 PQ: 0 ANSI: 5
Jan 22 07:58:22 rusty usbcore: registered new interface driver usbfs
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] 488397168 512-byte hardware sectors (250059 MB)
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] Write Protect is off
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] 488397168 512-byte hardware sectors (250059 MB)
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] Write Protect is off
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 22 07:58:22 rusty sda: sda1 sda2 sda3 sda4 <<6>usbcore: registered new interface driver hub
Jan 22 07:58:22 rusty sda5<6>usbcore: registered new device driver usb
Jan 22 07:58:22 rusty sda6ACPI: PCI Interrupt Link [LNKC] enabled at IRQ 10
Jan 22 07:58:22 rusty ACPI: PCI Interrupt 0000:00:10.4[C] -> Link [LNKC] -> GSI 10 (level, low) -> IRQ 10
Jan 22 07:58:22 rusty sda7 >
Jan 22 07:58:22 rusty ehci_hcd 0000:00:10.4: EHCI Host Controller
Jan 22 07:58:22 rusty ehci_hcd 0000:00:10.4: new USB bus registered, assigned bus number 1
Jan 22 07:58:22 rusty ehci_hcd 0000:00:10.4: debug port 1
Jan 22 07:58:22 rusty ehci_hcd 0000:00:10.4: irq 10, io mem 0xfebffc00
Jan 22 07:58:22 rusty USB Universal Host Controller Interface driver v3.0
Jan 22 07:58:22 rusty sd 0:0:0:0: [sda] Attached SCSI disk
Jan 22 07:58:22 rusty ehci_hcd 0000:00:10.4: USB 2.0 started, EHCI 1.00, driver 10 Dec 2004
Jan 22 07:58:22 rusty usb usb1: configuration #1 chosen from 1 choice
The top of this log excerpt is included for reference, to show where we are in the boot process. At this point it appears (and has for a long time now) that the booting machine is initialising both the PCI(/SCSI) and USB buses simultaneously.
See the first log entry, just over half-way down, beginning with "sda5<6>"? Yeah. The "<6>", specifically. In the line right above it, ATA/SCSI is starting to enumerate the disk partitions on sda, but the line is interrupted by a message from the also-initialising USB subsystem; the "<6>" is the printk log-level prefix (KERN_INFO) of that interrupting message, which would normally have been stripped from the start of its own record. Note how 5 consecutive log entries are mangled badly enough that it's questionable whether a log parser could recover them.
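That said, the fragments are at least mechanically separable, since each interrupting message drags its own level marker along with it. Below is a minimal sketch (plain Python, purely illustrative, nothing boot-specific) that splits a mangled record wherever an embedded marker appears; stitching sda5/sda6/sda7 back into one partition list would still be manual work.

import re

# Each kernel message normally begins with a printk level marker "<0>".."<7>",
# which klogd strips. When two messages interleave, the marker of the
# interrupting message is left embedded mid-line, e.g.
#   "sda5<6>usbcore: registered new device driver usb"
MARKER = re.compile(r'<[0-7]>')

def split_mangled(line):
    """Split one mangled record into the fragments of the messages it mixes."""
    parts, last = [], 0
    for m in MARKER.finditer(line):
        parts.append(line[last:m.start()])
        last = m.end()
    parts.append(line[last:])
    return [p for p in parts if p.strip()]

print(split_mangled('sda5<6>usbcore: registered new device driver usb'))
# -> ['sda5', 'usbcore: registered new device driver usb']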
This one's been around for a while...maybe an old-timer's laughing ATM?
Blue Skies...grndrush

Similar Messages

  • SNMP trap on OutOfMemory Error Log record

    I would like to implement SNMP trap on OutOfMemory Error Log record.
    In theory SNMP LogFilter with Severity Level "Error" and Message Substring "OutOfMemory" should do the trick.
    In reality it does not work (doh) (see explanation below); I wonder if someone has managed to make it work.
    Log entry has following format:
    ----------- entry begin ----------
    ####<Nov 12, 2003 3:09:23 PM EST> <Error> <HTTP> <ustrwd2021> <local> <ExecuteThread: '14' for queue: 'default'> <> <> <101020> <[WebAppServletContext(747136,logs2,/logs2)] Servlet failed with Exception>
    java.lang.OutOfMemoryError
         <<no stack trace available>>
    ------------ entry end ------------
    Notice that the java.lang... line is NOT part of the log record; yes, it seems that the exception stack trace is not part of the log record! Thus the filter can only be applied to the "<[WebAppServletContext(747136,logs2,/logs2)] Servlet failed with Exception>" string, which is really useless.
    Here is a fragment of the trap data (I had to remove the Message Substring in order to get the Error trap to work):
    1.3.6.1.4.1.140.625.100.50: trapLogMessage: [WebAppServletContext(747136,logs2,/logs2)] Servlet failed with Exception

    Andriy,
    I don't think you can do much here: since OutOfMemory is not part of the log record, the SNMP agent cannot filter on it. I would be curious to hear if anyone got it to work using SNMP.
    sorry,
    -satya
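    Since the substring filter never sees the stack trace, one workaround is to sidestep the SNMP log filter entirely and watch the server log file itself: the java.lang.OutOfMemoryError line is not part of the ####<...> record, but it is still in the file, so a plain substring match catches it. A minimal sketch follows; the log path and the alert action are placeholder assumptions, not anything from this thread.

    import time

    LOG_PATH = '/path/to/weblogic.log'   # hypothetical; use your server's log

    def tail(path):
        """Yield lines as they are appended to the file, like `tail -f`."""
        with open(path) as f:
            f.seek(0, 2)                 # start at the current end of file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(1.0)
                    continue
                yield line

    def alert(line):
        # Placeholder: send an SNMP trap, mail, page, etc. here.
        print('ALERT: OutOfMemory seen:', line.rstrip())

    for line in tail(LOG_PATH):
        if 'java.lang.OutOfMemoryError' in line:
            alert(line)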

  • Log records for PA infotype updates

    Hi folks,
    I have a question about how logs get created when an infotype record is updated. Here is the scenario: I updated an action for an employee record in the Actions infotype through PA40, using an upload BDC program; the process took me through screens updating infotypes 0, 1, 2, 7, 8 and 19 to complete the action, and the changes are recorded in those tables. However, when I checked the log records, a log record had been created only for infotype 0. I did verify that all of these infotypes are entered in V_T585A, V_T585B and V_T585C to create logs.
    My question is: why were log records not created for the other infotypes, despite them being defined in the log maintenance tables? I am trying to understand this log creation process because most of our processes depend directly on it, and this poses problems.
    Any kind of help is really appreciated.
    Thanks in advance.
    SK

    It is a standard setting. The infotypes, their field groups and field group characteristics are defined in V_T585A, V_T585B and V_T585C. I believe the end users are missing some process step, because this started to happen only a week or so ago; the same program was picking up the records fine earlier, and it has not changed.
    I do not know what kind of process they follow. What changed now? Since I am the only SAP person here, I have to find out.
    They are using PA30/PA40 to enroll, plus the web application. Neither of these records created the log data, although the records themselves went through to SAP.
    Could there be any step they might be missing?
    Thanks for the quick reply,
    SK

  • "Could not redo log record..." error

    The following error killed the entire database:
    "Could not redo log record (21816:100853:13), for transaction ID (0:100066184), on page (1:621028), database 'MyDatabase' (database ID 5). Page: LSN = (21816:99572:9), type = 2. Log: OpCode = 2, context 4, PrevPageLSN: (21816:99584:9). Restore from a backup of the database, or repair the database."
    I have no idea what caused this.  Thank goodness, my backup worked.  Could anyone offer any clue?  Does anyone know how to repair a database under this condition?
    Thanks,
    hz

    Hello Hong,
    This seems to me as if your log file is damaged or corrupted; DBCC CHECKDB does not check consistency of the log file. You can run CHECKDB for the database and see if it comes back clean.
    Please check the error messages in the SQL Server errorlog and Event Viewer and post them here; it's important to find out why this message occurred.
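    If the check needs to be re-run regularly, it can also be scripted; here is a rough sketch using pyodbc, in which the connection string and database name are assumptions. Corruption found by CHECKDB surfaces as an error raised through the driver, so a clean run simply completes.

    import pyodbc

    # Connection string is an assumption; point it at your own server.
    CONN_STR = ('DRIVER={ODBC Driver 17 for SQL Server};'
                'SERVER=localhost;DATABASE=master;Trusted_Connection=yes')

    def check_database(name):
        """Run DBCC CHECKDB and report whether it came back clean."""
        conn = pyodbc.connect(CONN_STR, autocommit=True)
        try:
            # NO_INFOMSGS suppresses the per-object noise; any corruption
            # found is raised as an error through the driver.
            conn.cursor().execute("DBCC CHECKDB ('%s') WITH NO_INFOMSGS" % name)
            print('%s: CHECKDB came back clean' % name)
        except pyodbc.Error as exc:
            print('%s: CHECKDB reported problems -> %s' % (name, exc))
        finally:
            conn.close()

    check_database('MyDatabase')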

  • Track log records

    Can anybody please tell me how I can track log records of users in Authorware? I want to track the look-up options and the tests used by the users.

    Hello Steve, can you explain in detail how to create my own log using AppendExtFile? I am a newbie in Authorware.
    Thank you for helping.

  • Log Records Being Scanned

    We make extensive use of replication on our system (which is Sql Server 2012 SP2 Enterprise).  We have a publisher, 3 subscribers and a distributor.  It is all transactional immediate push.
    Lately our replication is running behind many mornings and here are the symptoms:
    1.  I look on the distributor and the cpus are all pegged at 100%
    2.  The log reader history shows that for hours it has been doing the following:
    "Approximately 8000000 log records have been scanned in pass #4, 0 of which where marked for replication"
    Basically, for hours the log reader has been scanning log records.  We have our publisher currently set to SIMPLE and the log file is usually "medium" in size - maybe 20-30G.  
    To solve the problem, I usually restart the services on the distributor, which sends the CPU back down to more normal levels. It will usually do a little more scanning, but not hours of it, and then replication almost instantly catches up.
    Any idea what would cause this?  This is causing production issues almost every night and so any tips on how to debug/solve this would be much appreciated.

    Hi clm2,
    According to your description, you are using transactional replication; the CPU on the distributor was pegged at 100% while the Log Reader Agent history showed it scanning a large number of log records, and the CPU returned to normal after restarting the services on the distributor.
    Firstly, I would like to give you some background on the Log Reader Agent and the distributor.
    The distributor is a server that contains the distribution database and stores metadata and history data for all types of replication. The distributor can be the same server as the publisher (a local distributor) or a separate server from the publisher (a remote distributor).
    The Log Reader Agent moves transactions marked for replication from the transaction log on the publisher to the distribution database. Each database published using transactional replication has its own Log Reader Agent that runs on the distributor and connects to the publisher (the distributor can be on the same computer as the publisher).
    Since the CPU is pegged at 100% on the distributor, we need to verify whether the publisher and distributor are on the same server. If they are, that can drive the distributor's CPU higher, and a remote distributor may be preferable. In addition, the server you select as the distributor should have adequate disk space and processor power to support replication and any other activities on that server.
    Besides that, the log reader sometimes has to do much more reading of the log, which consumes a lot of CPU. From the article Impact on Log Reader Agent after reindex operations (http://blogs.msdn.com/b/repltalk/archive/2011/03/30/impact-on-log-reader-agent-after-reindex-operations.aspx) we know that reindex transactions can increase transactional replication latency. I suggest you check your reindex maintenance plan and use reindex options that generate fewer records in the transaction log.
    Finally, because this is a performance problem, you can use Performance Monitor to troubleshoot it. For more information, please refer to Transactional Replication Conversations (http://blogs.msdn.com/b/chrissk/archive/2009/05/25/transactional-replication-conversations.aspx).
    Best regards,
    Qiuyun Yu
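    As a supplement to the reindex check, it may help to record the log reader's backlog over time rather than only noticing it in the morning. Here is a small polling sketch built on the documented sp_replcounters procedure, run against the publisher; the connection string and polling interval are assumptions.

    import time
    import pyodbc

    # Run this against the PUBLISHER; the connection string is an assumption.
    CONN_STR = ('DRIVER={ODBC Driver 17 for SQL Server};'
                'SERVER=publisher;DATABASE=master;Trusted_Connection=yes')

    def poll(interval_sec=300):
        cur = pyodbc.connect(CONN_STR, autocommit=True).cursor()
        while True:
            # sp_replcounters returns one row per published database:
            # database, pending txns, rate (txn/s), latency (s), LSNs.
            for row in cur.execute('EXEC sp_replcounters').fetchall():
                print(time.strftime('%H:%M:%S'), row[0],
                      'pending txns:', row[1], 'latency(s):', row[3])
            time.sleep(interval_sec)

    poll()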

  • Logging Record Locks in 9i

    hi everyone,
    We have a 9i database where, sometimes several times per day, a user tries to save a work order only to have the app crash with a 'record in use' error.
    I searched \bdump\alert to see if I could find any logging for locks - didn't find any.
    Are severe record locks - severe enough to cause an update to fail - logged to a file? I'm looking for a historical "hard copy", because when I've looked at dba_blockers, v$locked_object etc. after the fact, I don't see any locks at that time.
    Thanks,
    John

    From an Oracle perspective, there is no such thing as a row-level lock that would cause an application to crash. I'm also not aware of any Oracle error with the words "record in use" in it, so this is probably an application-specific error and an application-specific crash. If this is a custom app, you probably want to modify the code so that it doesn't crash when a row is locked.
    Logging every lock of a row would be a substantial burden on scalability, so I don't believe there is any option to do this. If the application waits long enough to acquire the lock, you may be able to set up a job to query DBA_BLOCKERS/ DBA_WAITERS frequently enough to catch the error. But if the application isn't waiting for the lock, that's probably not going to work.
    Justin
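    Here is a rough sketch of the polling job Justin describes, using cx_Oracle; the credentials are placeholders and the account needs SELECT on DBA_WAITERS. It snapshots waiter/blocker pairs every few seconds and appends them to a file, giving the historical "hard copy" asked about - provided the application actually waits on the lock long enough to be caught.

    import time
    import cx_Oracle

    # Placeholder credentials; needs SELECT on DBA_WAITERS (e.g. a DBA account).
    conn = cx_Oracle.connect('system/password@dbhost/ORCL')

    SQL = """SELECT waiting_session, holding_session, lock_type,
                    mode_held, mode_requested
               FROM dba_waiters"""

    with open('lock_history.log', 'a') as log:
        while True:
            for row in conn.cursor().execute(SQL):
                # One line per waiter/blocker pair seen in this snapshot.
                log.write('%s waiter=%s blocker=%s type=%s held=%s req=%s\n'
                          % ((time.strftime('%Y-%m-%d %H:%M:%S'),) + tuple(row)))
            log.flush()
            time.sleep(5)   # poll often enough to catch short waits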

  • After doing a hard reset, the log records an sms m...

    I decided to hard reset my 5530 and format the memory for the first time because it was running slow and the music library was taking too long to refresh.
    To my surprise, after doing the hard reset there was an entry in the log showing that an SMS message had been sent to an unknown number; the timestamp shows it was sent right after I set the time in the phone setup.
    Just wondering if it's a virus or something, because I bought the phone brand new about 2 months ago, and I only realized this now, after a couple of months.

    It's not a virus.
    It was the phone registering with the My Nokia service again, like it did the first time it was ever switched on with a SIM inserted.

  • Opendirectoryd.log recording many, many "triggered" entries since update.security.10.8.4.12E1009.2013.003

    How can I change logging level -or- turn off triggering of opendirectoryd to reduce log file activity?
    2013-07-11 17:33:58.903093 EDT - opendirectoryd (build 197.17.1) launched...
    2013-07-11 17:33:59.181908 EDT - Logging level limit changed to 'debug'
    2013-07-11 17:33:59.230907 EDT - Initialize trigger support
    2013-07-11 17:34:00.308131 EDT - Trigger - new node trigger watching for 'opendirectoryd:nodes;register;/Search'
    2013-07-11 17:34:00.308464 EDT - created endpoint for mach service 'com.apple.private.opendirectoryd.rpc' with work limit 10
    2013-07-11 17:34:00.308486 EDT - set default handler for RPC 'reset_cache'
    2013-07-11 17:34:00.308493 EDT - Registered RPC over XPC 'reset_cache' for service 'com.apple.private.opendirectoryd.rpc'
    2013-07-11 17:34:00.308508 EDT - set default handler for RPC 'reset_statistics'
    2013-07-11 17:34:00.308518 EDT - Registered RPC over XPC 'reset_statistics' for service 'com.apple.private.opendirectoryd.rpc'
    2013-07-11 17:34:00.308527 EDT - set default handler for RPC 'show'

    You've got an incompatible Logitech driver, and Java was incompletely uninstalled.
    You may have a problem with the Wacom driver.
    I don't know if fixing those things will help.
    There are also a few WindowServer errors, but I don't know if they are causal.
    If you can note the time of the hangs, that might help narrow it down in the logs.

  • Autonomous Transaction/Log Recording into DB table

    Hi All,
    I have to log several events happening in my DB schemas directly into a single table created for this auditing purpose.
    Up to now I have been using a generic procedure (invoked by triggers and packages) in which I use an autonomous transaction.
    Now my team is planning to create a few functions/procedures which will call an external DB (through a db-link). In that case the autonomous transaction will no longer work (for that session, of course).
    I'm asking you all for advice: does a 'special' log function exist (e.g. inside a DBMS_something package), or do I have to work around this in some way?
    I'm running an 8i version at the moment... but if solutions exist in newer Oracle releases, please advise me all the same.
    Thanks a lot,
    Marco

    As of Oracle 9i, this limitation has been removed to a limited extent and
    ORA-00164: autonomous transaction disallowed within distributed transaction
    is redefined to:
    ORA-00164 distributed autonomous transaction disallowed within migratable distributed transaction
    "Does a 'special' log function exist (ex. inside DBMS_something pkg)" ?
    You can use (in 8i too) DBMS_LOGMNR, but practically you must have archived redo logs.
    Regards,
    Zlatko Sirotic

  • Sending AAA accounting log records to multiple AAA servers

    IOS version c3640-a3jk9s-mz.123-18.bin
    aaa group server tacacs+ cciesec
    server 192.168.3.10
    aaa group server tacacs+ ccievoice
    server 192.168.3.11
    aaa authentication login VTY group cciesec local
    aaa accounting exec cciesec start-stop broadcast group cciesec group ccievoice
    aaa accounting commands 0 cciesec start-stop broadcast group cciesec group ccievoice
    aaa accounting commands 1 cciesec start-stop broadcast group cciesec group ccievoice
    aaa accounting commands 15 cciesec start-stop broadcast group cciesec group ccievoice
    tacacs-server host 192.168.3.10 key 123456
    tacacs-server host 192.168.3.11 key 123456
    C3640#sh tacacs
    Tacacs+ Server : 192.168.3.10/49
    Socket opens: 8
    Socket closes: 8
    Socket aborts: 0
    Socket errors: 0
    Socket Timeouts: 0
    Failed Connect Attempts: 0
    Total Packets Sent: 21
    Total Packets Recv: 21
    Tacacs+ Server : 192.168.3.11/49
    Socket opens: 0
    Socket closes: 0
    Socket aborts: 0
    Socket errors: 0
    Socket Timeouts: 0
    Failed Connect Attempts: 0
    Total Packets Sent: 0
    Total Packets Recv: 0
    C3640#
    As you can see, I receive AAA accounting logs on server 192.168.3.10 but I am not getting logs on 192.168.3.11. I can confirm with tcpdump on host 192.168.3.11 that no AAA packets are being sent to it.
    Anyone know why?

    http://www.cisco.com/en/US/docs/ios/12_1t/12_1t1/feature/guide/dt_aaaba.html
    It stated the following:
    "Before the introduction of the AAA Broadcast Accounting feature, Cisco IOS AAA could send accounting information to only one server at a time. This feature allows accounting information to be sent to one or more AAA servers at the same time. Service providers are thus able to simultaneously send accounting information to their own private AAA servers and to the AAA servers of their end customers. This feature also provides redundant billing information for voice applications."

  • Which log records form error information?

    After I clicked a form to review it, a summary error message was shown, but I want to know the details behind it. Who can tell me where to check? I'm using the EPM1112 version, thanks.

    Try the logs; if you are on Windows, the services logs are sometimes the best:
    MIDDLEWARE_HOME/user_projects/epmsystem1/diagnostics/logs/services
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Automatically SFC start when NC log recorded

    Hi,
    When we use the NC500 activity with an SFC in 'new' or 'in queue' status, it seems to start the SFC automatically.
    Doesn't ME allow logging an NC against a not-yet-started SFC? I think that's a possible case in customer manufacturing.
    Otherwise, is there any setting that allows logging an NC without starting the SFC?
    Best Regards,
    Takahiro

    Hi, Sergiy-san,
    Thanks for your information.
    Based on your advice, we confirmed 2 cases in ME 5.2 SP04 Patch 5 (because of the customer's version):
    (1.1) Use the Standalone Failure Tracking activity (NC540) and log an NC with NC code A, which has the COMMENT data type.
        => In this case, the SFC the NC is logged against is kept in "new" status, as you said.
    (1.2) Use SFT and try to log an NC which has the DEFECT_COUNT data type. (As you know, the NC500 activity allows partial NC logging, so we tried to use it with SFT in order to keep the SFC in new/in queue status.)
        => But it is not possible to handle a partial NC.
             I guess the reason is that the NC540 activity doesn't have ALLOW_PARTIAL.
    Is there any NC logging solution that allows both partial scrap and keeping the SFC in "new" or "in queue" status?
    That's what our customer wants to do when they lose part of an SFC between 2 operations.
    We have already developed custom functionality that avoids the automatic start for mobile devices, but some users need this function in the original SAP ME GUI.
    Your kind information will be really appreciated.
    Best Regards,
    Takahiro

  • What trigger should I use if I want to keep record amendment log?

    I am a newbie to Oracle Forms, so I would like to raise a simple question here ... hope you won't mind ... I have a form interface which accepts user input to record the attributes of an object (e.g. personal information of a user). Now, because of the importance of the data, I am required to write log records into a journal table for changes made to the data. Which trigger am I recommended to place and use?
    Thanks for any replies!

    Duncan, thanks for your reply first. However, I have two queries:
    1) Is there any generic event to indicate a change to a record, rather than 'POST-INSERT', 'POST-UPDATE' and 'POST-DELETE'?
    2) Given that I use the 'POST' events, how can I get the pre-update value? (My log record is supposed to record the before-update value.)
    Thanks for your reply!

  • DML ERROR LOGGING - how to log 1 constraint violation on record

    Hi there
    We are using DML error logging to log records which violate constraints into an error table.
    The problem is that when a record violates more than one constraint, it logs the record but details only one constraint violation. Is there a way to get it to record all constraint violations on an individual record?
    Many Thanks

    In the Netherlands several years ago a framework called CDM RuleFrame was introduced that did just this. The main idea was that it is desirable to collect all error messages from one transaction and display them all at the end of the transaction.
    Here is an article that explains the concept: http://www.dulcian.com/papers/ODTUG/2001/BR%20Symposium%202001/Boyd_BR.htm
    In short: it involves coding every single business rule as a database trigger, using the transaction management of CDM RuleFrame.
    I would not recommend it however, because I think database triggers are evil (http://rwijk.blogspot.com/2007/09/database-triggers-are-evil.html). It may appeal to first-time users of an application, though.
    Hope this helps.
    Regards,
    Rob.
    But it cannot be "turned on" by some switch: you have to design your system this way. So the short answer to your question is: no, it is not possible.
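    For readers who have not used the feature under discussion, here is a minimal sketch of DML error logging via cx_Oracle, with placeholder credentials and table names. Each rejected row lands in the ERR$_ table with exactly one ORA- message in ORA_ERR_MESG$, which is precisely the limitation asked about.

    import cx_Oracle

    conn = cx_Oracle.connect('scott/tiger@dbhost/ORCL')   # placeholders
    cur = conn.cursor()

    # Create the shadow error table ERR$_DEST for table DEST (done once).
    cur.callproc('DBMS_ERRLOG.CREATE_ERROR_LOG', ['DEST'])

    # Rows violating any constraint on DEST are diverted to ERR$_DEST
    # instead of failing the whole statement...
    cur.execute("""
        INSERT INTO dest SELECT * FROM staging
        LOG ERRORS INTO err$_dest ('nightly load') REJECT LIMIT UNLIMITED""")

    # ...but ORA_ERR_MESG$ holds only the first violation detected per row,
    # even when a row breaks several constraints at once.
    for mesg, tag in cur.execute(
            'SELECT ora_err_mesg$, ora_err_tag$ FROM err$_dest'):
        print(tag, mesg)
    conn.commit()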
