NotSerializableException / setting log writer on DataSource

Hi,
Moving from 2.2.2 to 2.2.3, two different problems arose:
With 2.2.3 I get a
java.io.NotSerializableException:
com.solarmetric.kodo.impl.jdbc.JDBCPrefsConfiguration
when binding a JDBCPersistenceManagerFactory into JNDI. Are there some
classes not implementing Serializable that are part of the
JDBCPersistenceManagerFactory in some configurations?
Another problem that arose with 2.2.3 (I think I had the same problem
in some early Kodo implementations) is this:
Constructing a JDBCPersistenceManagerFactory with a DataSource that is
part of JNDI leads to problems if JDBCPersistenceManagerFactory
.setLogWriter is invoked.
The method then calls DataSource.setLogWriter. Some pooling layer
implementations, such as the one in JBoss, throw an exception if
DataSource.setLogWriter or other resetting methods are invoked after the
first binding of the DataSource.
In my opinion the problem could easily be solved:
- No call to DataSource.setLogWriter from
JDBCPersistenceManagerFactory.setLogWriter if
DataSource.getLogWriter is not null, or maybe better
- A configurable behaviour
Best Regards,
Christian

How soon can we expect the next version to be released? Is it possible
to obtain a build of the version that has this bug fixed?
Amol.
Marc Prud'hommeaux wrote:
Christian Elsen <[email protected]> wrote:
Hi,
Moving from 2.2.2 to 2.2.3, two different problems arose:
With 2.2.3 I get a
java.io.NotSerializableException:
com.solarmetric.kodo.impl.jdbc.JDBCPrefsConfiguration
when binding a JDBCPersistenceManagerFactory into JNDI. Are there some
classes not implementing Serializable that are part of the
JDBCPersistenceManagerFactory in some configurations?
This is a known problem with 2.2.3:
https://bugzilla.solarmetric.com/show_bug.cgi?id=117
In 2.2.3, it is not possible to manually bind a PersistenceManagerFactory
into JNDI unless the application server is able to bind an object
that is Referenceable but not Serializable.
This problem has been fixed for the next release.
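For anyone unfamiliar with what the manual bind looks like, here is a minimal sketch; the JNDI name and the way the factory is obtained are placeholders, not Kodo-specific API:

import javax.jdo.PersistenceManagerFactory;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class BindPmfExample {
    // Hypothetical helper: "java:/jdo/MyPMF" is an arbitrary example name.
    public static void bind(PersistenceManagerFactory pmf) throws NamingException {
        InitialContext ctx = new InitialContext();
        // Most naming providers store the object either by serializing it or
        // by storing a javax.naming.Reference obtained from a Referenceable
        // implementation; if the server supports neither for this object,
        // bind() fails with the NotSerializableException described above.
        ctx.bind("java:/jdo/MyPMF", pmf);
    }
}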
Another problem that arose with 2.2.3 (I think I had the same problem
in some early Kodo implementations) is this:
Constructing a JDBCPersistenceManagerFactory with a DataSource that is
part of JNDI leads to problems if JDBCPersistenceManagerFactory
.setLogWriter is invoked.
The method then calls DataSource.setLogWriter. Some pooling layer
implementations, such as the one in JBoss, throw an exception if
DataSource.setLogWriter or other resetting methods are invoked after the
first binding of the DataSource.
In my opinion the problem could easily be solved:
- No call to DataSource.setLogWriter from
JDBCPersistenceManagerFactory.setLogWriter if
DataSource.getLogWriter is not null, or maybe better
- A configurable behaviour
I have made an enhancement request for this:
https://bugzilla.solarmetric.com/show_bug.cgi?id=133
It is technically a bug in JBoss's DataSource implementation, since
setLogWriter() is only supposed to throw a SQLException (which
we currently handle and ignore), but there is probably no harm
in just catching all Exceptions when setLogWriter() fails.
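As an interim workaround on the application side, something along these lines (a sketch, not Kodo code; the class and method names are made up) combines both of Christian's suggestions: only touch the pool's log writer when none is set yet, and ignore any failure:

import java.io.PrintWriter;
import javax.sql.DataSource;

public final class LogWriterUtil {
    private LogWriterUtil() {}

    // Set the DataSource's log writer only if none is configured yet, and
    // swallow any failure so a strict pool (like the JBoss one described
    // above) cannot break PersistenceManagerFactory setup.
    public static void trySetLogWriter(DataSource ds, PrintWriter writer) {
        try {
            if (ds.getLogWriter() == null) {
                ds.setLogWriter(writer);
            }
        } catch (Exception e) {
            // Some pools throw runtime exceptions rather than SQLException
            // once the DataSource has been bound; ignore either way.
        }
    }
}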
Best Regards,
Christian

Similar Messages

  • On RAID 10 - How to relieve Log Writer Slow Write Time Trace Files

    We have a Dell 8 CPU 5460 3.16GHz Xeon with a Dell OpenManage RAID 10 array
    Oracle 10g 10.2.0.4 on RedHat EL 5 with
    filesystemio_options='' (DEFAULT)
    disk_asynch_io='TRUE' (NOT DEFAULT)
    Running 2 instances 64 bit 10g 10.2.0.4 with an app that does a lot of row updates and uses BLOBs heavily.
    Our storage (RAID 10) is presented through a single mount point.
    I periodically see these messages in a lgwr trc file as follows
    Warning: log write time 560ms, size 5549KB
    *** 2010-02-25 17:22:24.574
    Warning: log write time 650ms, size 6759KB
    *** 2010-02-25 17:22:25.103
    Warning: log write time 510ms, size 73KB
    *** 2010-02-25 20:33:00.015
    Warning: log write time 540ms, size 318KB
    *** 2010-02-25 20:35:17.956
    Warning: log write time 800ms, size 5KB
    Note that most of these are larger chunks of data.
    Our log wait histogram (columns: event#, event name, wait time bucket in ms, wait count) is as follows:
    106  log file parallel write      1  465780158
    106  log file parallel write      2    5111874
    106  log file parallel write      4    5957262
    106  log file parallel write      8    2171240
    106  log file parallel write     16    1576186
    106  log file parallel write     32    1129199
    106  log file parallel write     64     852217
    106  log file parallel write    128    2092462
    106  log file parallel write    256     508494
    106  log file parallel write    512     109449
    106  log file parallel write   1024      55441
    106  log file parallel write   2048      11403
    106  log file parallel write   4096       1197
    106  log file parallel write   8192         29
    106  log file parallel write  16384          5
    In discussions with the group that builds and maintains the systems (DBAs do not), we have asked for more spindles / HBAs / mount points to address this issue. We have been advised that since the RAID 10 spreads the I/Os across multiple drives, this is not going to affect the situation.
    Our thoughts are that multiple HBAs going to separate RAID 10 devices would help relieve the pressure.
    Thank you.

    Is this an internal RAID array? Is it composed of SCSI (SAS) or SATA drives? How many drives are in the array?
    Does the RAID controller have a built in battery backed cache (some of Dell's RAID controllers have 128MB, 256MB, or 512MB of battery backed cache). If the RAID controller has a battery backed cache, consider disabling the caching of all read operations, and set the write policy to write-back (see: http://support.dell.com/support/edocs/software/svradmin/5.1/en/omss_ug/html/cntrls.html ).
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with "Spotlight". I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold."
    The servers are not in RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with Flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read-intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work with one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles and many more customers who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    # Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission-critical databases and a huge return can be made on accelerating Oracle.
    Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher storage density with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is to have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host-based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    Slower than conventional disks on sequential I/O
    Comment: Most Flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with an endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it.
    > .. and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us.

  • Changing Log Writer

    Hello,
    Our application uses TopLink with JBoss and Log4J. We are trying to route all TopLink logging into our Log4J logs (this is dynamic and could be a console logger or a file logger). When we are logging to a file, we are using a DailyRollingFileAppender, which changes the log file over every night. It appears that when Log4J closes the log file, renames it, and recreates the original, TopLink no longer has a valid Writer to the log file. We would like to dynamically set the Writer in TopLink at runtime. Can this be done? Any help would be greatly appreciated.
    Jason

    Hi,
    How exactly do you route toplink logging to log4j?
    I think the best way to do it is:
    1. write a class Log4JSessionLog that implements oracle.toplink.sessions.SessionLog (and send logging events to a log4j logger)
    2. set it as sessionLog for the toplink session.
    Log4JSessionLog log4JSessionLog = new Log4JSessionLog();
    topLinkServerSession.logMessages();
    topLinkServerSession.setSessionLog(log4JSessionLog);
    At least that's what I did.
    You should give it a try; I'm pretty sure it will solve your problem, since only log4j will be manipulating your log files and not log4j AND toplink. A rough sketch is shown below.
    Maia
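    For illustration, a rough sketch of the class Maia describes. The SessionLog/SessionLogEntry method names below are assumptions based on older TopLink releases (the interface has changed across versions), so check the javadoc of your TopLink version before copying this:

    import org.apache.log4j.Logger;
    import oracle.toplink.sessions.DefaultSessionLog;
    import oracle.toplink.sessions.SessionLogEntry;

    // Sketch only: extends the default log and forwards every entry to Log4J,
    // so Log4J alone owns the underlying files and their rotation.
    public class Log4JSessionLog extends DefaultSessionLog {
        private static final Logger log = Logger.getLogger("toplink");

        public void log(SessionLogEntry entry) {
            if (entry.hasException()) {
                log.error(entry.getMessage(), entry.getException());
            } else {
                log.info(entry.getMessage());
            }
        }
    }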

  • Apache Sling Logging Writer Configuration

    Hi,
    I'm having an issue where my custom log writer configuration is not being picked up and used by CQ5 sometimes.  I've created a custom error log writer based on the example provided at http://helpx.adobe.com/cq/kb/HowToRotateRequestAndAccessLog.html which I've installed on an author, replicated it to the publishers, and all seemed ok and working correctly.  The settings are:
    Log File: ../logs/error.log
    Number of Log Files: 5
    Log File Threshold: 200MB
    After installing this, the error logs rotated at 200MB resulting in error.log.0, error.log.1 etc as expected.
    However after rebuilds on the machines, this configuration was overwritten (as expected).  So I've installed and replicated the package again, but now the configuration is not taking effect.  I'm using the exact same config package as I was previously, but the logs don't seem to be rotating at all now (not even daily).  I've deleted the config from the Felix console on all authors and publishers, reinstalled on the authors, replicated to the publishers, then restarted the CQ5 service on all machines, but it's still not working.
    So I have a couple of questions about this:
    Is there somewhere else in CQ5 that might be overriding these log writer config settings?
    Is ../logs/error.log correct for the standard log file location?  A note in step 3 of Creating Your Own Loggers and Writers on http://dev.day.com/docs/en/cq/current/deploying/configure_logging.html states:
    Log writer paths are relative to the crx-quickstart/launchpad location.
    Therefore, a log file specified as logs/thelog.log writes to crx-quickstart/launchpad/logs/thelog.log.
    To write to the folder crx-quickstart/logs the path must be prefixed with ../ (as ../logs/thelog.log).
    So a log file specified as ../logs/thelog.log writes to crx-quickstart/logs/thelog.log.
    The configs in the log rotation example also use ../logs. However, when looking at the default/standard logging writer config, it uses logs/error.log. Which one is correct?
    Any help on what's going on here would be appreciated!!
    Thanks,
    K

    Hi,
    I am answering your last question and this one here only.
    1. Log configuration can be overridden at the project level by creating a config and overriding the factory "org.apache.sling.commons.log.LogManager.factory.config-<identifier>" and, if required, the writer "org.apache.sling.commons.log.LogManager.factory.writer-<identifier>". So please check whether you have configured logging at the project level. If you are not able to see it, recheck your log configuration in the Felix console at http://localhost:4502/system/console/slinglog, where you can see the entire log configuration for the CQ system.
    2. Both will work, but the location changes:
         i. CQ 5.5 - the log file is located under "crx-quickstart"; ../logs/<filename>.log will be created under it.
         ii. CQ 5.4 - logs are created under "crx-quickstart\logs" and also under "crx-quickstart\launchpad\logs", so if you select ../logs/ it will go under "crx-quickstart\logs" only, but you can change where you want to store it. Again, you can see this configuration info at http://localhost:4502/system/console/slinglog
    3. You can also customize your logging by creating a logger at the project level and assigning the "identifier" (with all the other configuration parameters). For more information you can refer to http://sling.apache.org/site/logging.html apart from the earlier shared links.
    I hope the above will help you to proceed. Please let me know if you need more information.
    Thanks,
    Pawan

  • Log writer taking more time to write

    Hi All,
    we have Oracle 10g (10.1.0) installed on RHEL 5.3. We are facing an issue with the log writer: it is taking more time to write to the online redo log file. The trace file shows a warning message, "log writer time 840ms". Not sure what the root cause is. Please help me out in this regard. Thanks in advance.

    imran khan wrote:
    The archived log mode is disabled for the db..The online redolog files are not in FRA instead they are kept in diskgroups (multiplexed in two disk groups)..The controlfiles and datafiles are also stored in the same location as of online redolog files.The db is having the release as 10.2.0.4.0 and all the components are in 10.2.0.4.0 as well.
    I also found that the ASM instance is having the release as 10.2.0.4.0(v$version view) but the compatibility attribute of the diskgroups is 10.1.0.0 . The compatible parameter in ASM instance shows 10.2.0.3.0 .
    Do i need to change the compatibility of the diskgroups and the compatible parameter to 10.2.0.4.0 in ASM instance?
    Not sure whether ASM compatibility makes any impact on delaying the lgwr to write to the online redolog files ?
    Note : The online redolog files are stored in ASM diskgroups
    please suggest...
    If your redo is in the same physical location as your datafiles, don't you think there might be some contention? (For my OLTP type of experience, undo is the most-hit data file, too.)
    There could also be some more fundamental thing wrong, such as misconfigured I/O, and your redo just seems small. How often are they switching? Are you using hugefiles (as opposed to configuring them and unknowingly not using them)? Do you see any actual swapping to disk going on?
    You likely have an OS I/O problem, but appear to only be looking at Oracle's view of it. What does the O/S say about it?
    Are you sure you want to be running a production database without archive logging?

  • How to reduce "Wait for Log Writer"

    Hi,
    in a production system using MaxDB 7.6.03.07 I checked the following activities about the log:
    Log Pages Written: 32.039
    Wait for Log Writer: 31.530
    The docs explain that "Wait for Log Writer" indicates how often it was necessary to wait for a log entry to be written.
    What steps must I follow to reduce this?
    thanks for any help
    Clóvis

    Hi,
    when the log I/O queue is full, all user tasks that want to insert entries into the log I/O queue have to wait until the log entries of the queue have been written to the log volume - they are waiting for the log writer.
    First you should check the size of the LOG_IO_QUEUE parameter.
    Think about increasing the parameter value.
    Second will be to check the time for write I/O to the log -> use DB-Analyzer and activate time measurement via the x_cons <DBNAME> time enable command.
    Then you will get time for write I/O on the log in the DB-Analyzer log files (expert)
    You will find more information about MaxDb Logging and Performance Analysis on maxdb.sap.com -> [training material|http://maxdb.sap.com/training] chapter logging and performance analysis.
    Regards, Christiane

  • Redo Log Writer Problem

    Hello
    What can I do when the average redo log write time is 17'300ms (averaged over 30ms)?
    I have only one redo log writer. Should I start more than one redo log writer? We have 3 redo log groups (64MB) and we work with Oracle Data Guard. It's Oracle 11.2 (Unix Solaris 10).
    The system switches redo log groups every 45 minutes.
    Thanks for your support...
    Best regards...
    Roger

    Street wrote:
    Hello
    What can I do when the average redo log write time is 17'300ms (averaged over 30ms)?
    I have only one redo log writer. Should I start more than one redo log writer? We have 3 redo log groups (64MB) and we work with Oracle Data Guard. It's Oracle 11.2 (Unix Solaris 10).
    The system switches redo log groups every 45 minutes.
    Thanks for your support...
    Best regards...
    Roger
    Why do you think that this time, 30ms, is not good enough for your database? Did you get any redo log related issues in the Statspack/AWR report?
    There is only one LGWR possible; you can't have more than one LGWR.
    Aman....

  • Oracle Performance 11g - Warning: log write elapsed time

    Hello,
    We are facing quite bad performance with our SAP cluster running Oracle 11g.
    In the Oracle alert file we are getting the constant message
    "Thread 1 cannot allocate new log, sequence xxxxxx
    Private strand flush not complete"
    However, this seems to be quite old, as we have only recently started facing the performance issue.
    Moreover, in the sid_lgwr_788.trc file we are getting warnings for log write elapsed time as follows.
    *** 2013-07-25 08:43:07.098
    Warning: log write elapsed time 722ms, size 4KB
    *** 2013-07-25 08:44:07.069
    Warning: log write elapsed time 741ms, size 32KB
    *** 2013-07-25 08:44:11.134
    Warning: log write elapsed time 1130ms, size 23KB
    *** 2013-07-25 08:44:15.508
    Warning: log write elapsed time 1161ms, size 25KB
    *** 2013-07-25 08:44:19.790
    Warning: log write elapsed time 1210ms, size 10KB
    *** 2013-07-25 08:44:20.748
    Warning: log write elapsed time 544ms, size 3KB
    *** 2013-07-25 08:44:24.396
    Warning: log write elapsed time 1104ms, size 14KB
    *** 2013-07-25 08:44:28.955
    Warning: log write elapsed time 1032ms, size 37KB
    *** 2013-07-25 08:45:13.115
    Warning: log write elapsed time 1096ms, size 3KB
    *** 2013-07-25 08:45:46.995
    Warning: log write elapsed time 539ms, size 938KB
    *** 2013-07-25 08:47:55.424
    Warning: log write elapsed time 867ms, size 566KB
    *** 2013-07-25 08:48:00.288
    Warning: log write elapsed time 871ms, size 392KB
    *** 2013-07-25 08:48:04.514
    Warning: log write elapsed time 672ms, size 2KB
    *** 2013-07-25 08:48:08.788
    Warning: log write elapsed time 745ms, size 466KB
    Please advise so we can further understand the issue.
    Regards

    Hi,
    This seems to be an I/O issue. Check the MetaLink (MOS) note:
    Intermittent Long 'log file sync' Waits, LGWR Posting Long Write Times, I/O Portion of Wait Minimal (Doc ID 1278149.1)

  • Archive log writes frequency rate (4.73 minute(s)) is too high

    hello,
    DB version=10.2.0.1.0
    OS sun Solaris 5.10
    I got one alert saying
    "Archive log writes frequency rate (4.73 minute(s)) is too high". Could someone help me understand what this error is and what is causing it to occur?
    thanks

    Raman wrote:
    please go throrgh this URL
    http://www.dba-oracle.com/t_redo_log_tuning.htm
    Well, the first step on that is wrong. The link in the step corrects it. However, both pages advocate putting redo logs on SSD, and if you google about for recent blog postings about that, you see that that is a bad idea. Even on Exadata, it only works because it is also writing to normal spinning rust, and says the write to redo is done when the first write is acknowledged. For normal non-Exadata databases, it's at best an expensive waste of time, and at worst shows up the deterioration of SSD's as redo log corruption. So, you might not want to link there.
    You should size the redo for the data rate expected at maximum, and use the parameters CKPT mentioned to switch for normal operating data rates.

  • Nightmare with Log Writer, active files please explain !

    Hi
    Can anybody explain how many redo log groups a server may need in no-archiving mode?
    The documentation from Oracle web site says:
    If you have enabled archiving, Oracle cannot re-use or overwrite an active online log file until ARCn has archived its contents. If archiving is disabled, when the last online redo log file fills, writing continues by overwriting the first available active file.
    From this it looks like only 2 groups are needed. I don't get how it is possible to overwrite an active file. I think it should be written out by DBWn and become inactive first. Is that the reason I get "checkpoint has not completed" warnings in the log and poor performance?
    Andrius

    I believe the minimum required by oracle is 2 groups.
    Obviously, this won't cut it in ARCHIVELOG mode for most databases. But then, you were referring to NOARCHIVELOG mode. I tend to go with 3 in this type of scenario.
    As for the 2nd part: only one redo log is 'ACTIVE' at a time. When a log switch occurs, it goes to the next INACTIVE redo log and starts writing to that, thus overwriting what was previously in it. DBWn doesn't write redo entries; that is handled by LGWR. As far back as Oracle8i, the only part DBWn had in redo logs was triggering log writing by LGWR.

  • More log writer contention when using JDBC Batch Updates?

    I'm already seeing log writer contention when I'm committing inserts one row at a time.
    Why would I see even more log writer contention, i.e. commit waits in Enterprise Manager, when I batch these inserts using JDBC batch updates?
    Today I observed significantly more commit waits when I used JDBC batching vs. committing one row at a time.

    Please refer
    http://www.oracle.com/technology/products/oracle9i/daily/jun07.html
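    For comparison, here is a minimal JDBC sketch of the pattern batching is meant to enable: many rows per round trip and a single commit per batch (the table and column names are placeholders). If commits are still issued per row alongside addBatch(), the redo/commit wait picture will not improve:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchInsertExample {
        public static void insertBatch(Connection conn, String[] names) throws SQLException {
            conn.setAutoCommit(false);                       // defer the commit
            PreparedStatement ps =
                conn.prepareStatement("INSERT INTO t (name) VALUES (?)");
            try {
                for (String name : names) {
                    ps.setString(1, name);
                    ps.addBatch();                           // queue the row client-side
                }
                ps.executeBatch();                           // one round trip for the batch
                conn.commit();                               // one commit -> one log sync wait
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            } finally {
                ps.close();
            }
        }
    }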

  • Not able to do a set up for 2lis_03_um datasource

    Hi all - I am trying to do a setup for the 2LIS_03_UM datasource and not even a single record is retrieved, but I get around 1000 records for movements, i.e. 2LIS_03_BF. Please let me know....
    Sabrina.

    Hello Sabrina,
    The reason why data is not coming for 2LIS_03_UM could be that there is no stock revaluation in your system. As we know, this datasource contains data from valuated revaluations in Financial Accounting.
    If you are trying to upload data from a test system, chances are that stock revaluation is not there.
    Hope it helps.
    Regards,
    Praveen

  • Why we have only one log writer in oracle

    Why do we have only one log writer in Oracle, while we have more than one DB writer and archiver?

    skvaish1 wrote:
    Was this an interview question? Looks like it to me..
    High DML allows multiple log writer processes as well by spawning multiple log writer processes.
    No - there is only one log writer process per instance.
    Don't confuse the function of I/O slaves with the function of the log writer.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan

  • Warning: log write time 7706, size 2KB

    Below are the contents of the lgwr trace file:
    Warning: log write time 4040ms, size 1KB
    *** 2010-01-06 11:28:03.962
    Warning: log write time 3680ms, size 8KB
    *** 2010-01-06 11:28:42.510
    Warning: log write time 4010ms, size 1KB
    *** 2010-01-06 11:28:50.576
    Warning: log write time 8060ms, size 18KB
    *** 2010-01-06 11:28:58.582
    Warning: log write time 8010ms, size 2KB
    *** 2010-01-06 11:29:06.609
    Warning: log write time 8020ms, size 36KB
    *** 2010-01-06 11:29:10.626
    Warning: log write time 4010ms, size 3KB
    *** 2010-01-06 11:29:18.653
    Warning: log write time 8030ms, size 7KB
    *** 2010-01-06 11:29:26.704
    Warning: log write time 8050ms, size 3KB
    *** 2010-01-06 11:29:30.737
    Warning: log write time 4030ms, size 5KB
    *** 2010-01-06 11:29:38.495
    Warning: log write time 7760ms, size 2KB
    User inserts take forever. Note 601316.1 states that this can be ignored, but the problem still persists.
    KK

    The MOS note also says to check for OS issues such as a disk that is too slow. Did you check this?
