Warning: log write time 7706ms, size 2KB

Below are the contents of the lgwr trace file:
Warning: log write time 4040ms, size 1KB
*** 2010-01-06 11:28:03.962
Warning: log write time 3680ms, size 8KB
*** 2010-01-06 11:28:42.510
Warning: log write time 4010ms, size 1KB
*** 2010-01-06 11:28:50.576
Warning: log write time 8060ms, size 18KB
*** 2010-01-06 11:28:58.582
Warning: log write time 8010ms, size 2KB
*** 2010-01-06 11:29:06.609
Warning: log write time 8020ms, size 36KB
*** 2010-01-06 11:29:10.626
Warning: log write time 4010ms, size 3KB
*** 2010-01-06 11:29:18.653
Warning: log write time 8030ms, size 7KB
*** 2010-01-06 11:29:26.704
Warning: log write time 8050ms, size 3KB
*** 2010-01-06 11:29:30.737
Warning: log write time 4030ms, size 5KB
*** 2010-01-06 11:29:38.495
Warning: log write time 7760ms, size 2KB
User inserts take forever. Note 601316.1 states that this warning can be ignored, but the problem still persists.
KK

The MOS note also says to check for OS-level issues, such as disks being too slow. Did you check this?
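One quick database-side cross-check of the disk-speed angle is to look at what LGWR itself reports for its write event; a minimal SQL*Plus sketch against the standard v$system_event view (the column aliases and the interpretation note are only illustrative):

SQL> SELECT event,
            total_waits,
            ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_wait_ms
       FROM v$system_event
      WHERE event IN ('log file parallel write', 'log file sync');

-- avg_wait_ms for 'log file parallel write' is the time LGWR spends on the physical
-- redo write; multi-second averages, like the times in the trace above, point at the
-- I/O path (disk, controller, multipathing) rather than at the database itself.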

Similar Messages

  • Oracle Performance 11g - Warning: log write elapsed time

    Hello,
    We are facing quite bad performance with our SAP cluster running Oracle 11g.
    In the Oracle alert log we are constantly getting the message:
    "Thread 1 cannot allocate new log, sequence xxxxxx
    Private strand flush not complete"
    However, these messages have been appearing for a long time, whereas the performance issue started only recently.
    Moreover, in the sid_lgwr_788.trc file we are getting warnings about log write elapsed time, as follows.
    *** 2013-07-25 08:43:07.098
    Warning: log write elapsed time 722ms, size 4KB
    *** 2013-07-25 08:44:07.069
    Warning: log write elapsed time 741ms, size 32KB
    *** 2013-07-25 08:44:11.134
    Warning: log write elapsed time 1130ms, size 23KB
    *** 2013-07-25 08:44:15.508
    Warning: log write elapsed time 1161ms, size 25KB
    *** 2013-07-25 08:44:19.790
    Warning: log write elapsed time 1210ms, size 10KB
    *** 2013-07-25 08:44:20.748
    Warning: log write elapsed time 544ms, size 3KB
    *** 2013-07-25 08:44:24.396
    Warning: log write elapsed time 1104ms, size 14KB
    *** 2013-07-25 08:44:28.955
    Warning: log write elapsed time 1032ms, size 37KB
    *** 2013-07-25 08:45:13.115
    Warning: log write elapsed time 1096ms, size 3KB
    *** 2013-07-25 08:45:46.995
    Warning: log write elapsed time 539ms, size 938KB
    *** 2013-07-25 08:47:55.424
    Warning: log write elapsed time 867ms, size 566KB
    *** 2013-07-25 08:48:00.288
    Warning: log write elapsed time 871ms, size 392KB
    *** 2013-07-25 08:48:04.514
    Warning: log write elapsed time 672ms, size 2KB
    *** 2013-07-25 08:48:08.788
    Warning: log write elapsed time 745ms, size 466KB
    Please advise on how to further understand the issue.
    Regards

    Hi,
    This seems to be an I/O issue. Check the MOS note:
    Intermittent Long 'log file sync' Waits, LGWR Posting Long Write Times, I/O Portion of Wait Minimal (Doc ID 1278149.1)

  • Mail: postfix/pipe : warning: pipe_command_write: write time limit ..

    Anyone run into the following situation:
    1) "postfix/pipe : warning: pipecommandwrite: write time limit exceeded" error messages start showing up in system.log
    2) The postfix queue starts filling up with mail to be delivered for a particular user. User reports that mail is not being delivered.
    3) The user has a hung IMAPd process that refuses to die. kill -9 <pid> does not work for the process, and a hard reset of the server is needed (push the button, since shutdown hangs due to the hung imapd process) to get things back in order. Also, a reconstruct of the user's mailbox seems to smooth things out after the reboot.
    The postfix/pipe message seems to indicate that postfix is unable to deliver the mail, which is why it's sitting in the queue. I'm guessing that the hung IMAPd process has locked out the user's mailbox, or something of that nature.
    Right now, I have a script that tails the system.log, greps for these messages and sends me an email with the contents of the postfix queue. This helps alert me to the situation, but it would be nice to either fix it or get a better idea of what's going on.
    The user usually affected by this is using Mail.app on 10.4.11.
    Thanks in Advance,
    Flatrack

    YESSSSS!!! I am having this same problem on a server that I'm administering. Talk about incredibly frustrating. I'm seeing the exact same behavior:
    One user's IMAP process gets hung up and gradually takes down the entire service. No amount of Force Quitting or "kill"-ing will work...that process just keeps on staying hung up. And soft restarting is ineffective.
    In my case, it is the same user every time. I had it happen once in January (while I was out of town, of course), then not again until 3-4 times this past Thursday-Saturday (yep, the server knew I was out of town again). End user told me today that he'd had to force quit Mail several times recently...during the same timeframe as the hangups. I have done some major cleaning up, overhauling, and mailbfr-ing of this user's account, and will be watching it like a hawk for some time to come, but the main problem I see is:
    *SO WHAT* if a lone user's account gets corrupted?!?! Or if the end user jerks the rug out from under Mail by Force Quitting?!?! In NO WAY should that cause such a problem that the server won't even soft restart or allow that IMAP process to be killed. This is causing me a large Mac OS X Server credibility headache right now. My tech liaison and I have been gradually transitioning from PC + Exchange Server to Mac + Leopard Server, and having a single user's fouled up email account bring down the whole works is NOT acceptable.
    What can we do to further fix/debug/repair this? Of course the user account can be rebuilt, but this should not be taking the entire IMAP service down and then not allowing the server to restart or kill off the process.
    Fred

  • On RAID 10 - How to relieve Log Writer Slow Write Time Trace Files

    We have a Dell 8-CPU 5460 3.16GHz Xeon with a Dell OpenManage RAID 10 array
    Oracle 10g 10.2.0.4 on RedHat EL 5 with
    filesystemio_options='' (DEFAULT)
    disk_asynch_io='TRUE' ( NOT DEFAULT)
    Running 2 instances 64 bit 10g 10.2.0.4 with an app that does a lot of row updates and uses BLOBs heavily.
    Our storage (RAID 10) is presented through a single mount point.
    I periodically see these messages in the lgwr trace file, as follows:
    Warning: log write time 560ms, size 5549KB
    *** 2010-02-25 17:22:24.574
    Warning: log write time 650ms, size 6759KB
    *** 2010-02-25 17:22:25.103
    Warning: log write time 510ms, size 73KB
    *** 2010-02-25 20:33:00.015
    Warning: log write time 540ms, size 318KB
    *** 2010-02-25 20:35:17.956
    Warning: log write time 800ms, size 5KB
    Note that most of these are larger chunks of data.
    Our log wait histogram is as follows:
    EVENT#  EVENT                    WAIT_TIME_MILLI  WAIT_COUNT
    106     log file parallel write                1   465780158
    106     log file parallel write                2     5111874
    106     log file parallel write                4     5957262
    106     log file parallel write                8     2171240
    106     log file parallel write               16     1576186
    106     log file parallel write               32     1129199
    106     log file parallel write               64      852217
    106     log file parallel write              128     2092462
    106     log file parallel write              256      508494
    106     log file parallel write              512      109449
    106     log file parallel write             1024       55441
    106     log file parallel write             2048       11403
    106     log file parallel write             4096        1197
    106     log file parallel write             8192          29
    106     log file parallel write            16384           5
    In discussions with the group that builds and maintains the systems (the DBAs do not), we have asked for more spindles / HBAs / mount points to address this issue. We have been advised that, since RAID 10 spreads the I/Os across multiple drives, this is not going to affect the situation.
    Our thoughts are that multiple HBAs going to separate RAID 10 devices would help relieve the pressure.
    Thank you.
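    For reference, a histogram in that shape can be pulled directly from the database; a minimal sketch against the standard v$event_histogram view (the WHERE clause assumes the event name shown above):
    SQL> SELECT event#, event, wait_time_milli, wait_count
           FROM v$event_histogram
          WHERE event = 'log file parallel write'
          ORDER BY wait_time_milli;
    -- wait_time_milli is the upper bound of each latency bucket, so the rows in the
    -- larger buckets (512ms and above) correspond to the slow writes reported in the
    -- lgwr trace file.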

    Is this an internal RAID array? Is it composed of SCSI (SAS) or SATA drives? How many drives are in the array?
    Does the RAID controller have a built in battery backed cache (some of Dell's RAID controllers have 128MB, 256MB, or 512MB of battery backed cache). If the RAID controller has a battery backed cache, consider disabling the caching of all read operations, and set the write policy to write-back (see: http://support.dell.com/support/edocs/software/svradmin/5.1/en/omss_ug/html/cntrls.html ).
    Charles Hooper
    Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
    http://hoopercharles.wordpress.com/
    IT Manager/Oracle DBA
    K&M Machine-Fabricating, Inc.

  • Log writer taking more time to write

    Hi All,
    We have Oracle 10g (10.1.0) installed on RHEL 5.3. We are facing an issue with the log writer: it is taking more time to write to the online redo log files. The trace file shows a warning message such as "log write time 840ms". Not sure what the root cause is. Please help me out in this regard. Thanks in advance.

    imran khan wrote:
    Archive log mode is disabled for the DB. The online redo log files are not in the FRA; instead they are kept in ASM disk groups (multiplexed in two disk groups). The control files and datafiles are also stored in the same location as the online redo log files. The DB release is 10.2.0.4.0 and all the components are at 10.2.0.4.0 as well.
    I also found that the ASM instance release is 10.2.0.4.0 (v$version view), but the compatibility attribute of the disk groups is 10.1.0.0. The compatible parameter in the ASM instance shows 10.2.0.3.0.
    Do I need to change the compatibility of the disk groups and the compatible parameter to 10.2.0.4.0 in the ASM instance?
    Not sure whether ASM compatibility has any impact on delaying LGWR's writes to the online redo log files?
    Note: The online redo log files are stored in ASM disk groups.
    please suggest...
    If your redo is in the same physical location as your datafiles, don't you think there might be some contention? (In my OLTP experience, undo is the most heavily hit data file, too.)
    There could also be some more fundamental thing wrong, such as misconfigured I/O, and your redo just seems small. How often are they switching? Are you using hugefiles (as opposed to configuring them and unknowingly not using them)? Do you see any actual swapping to disk going on?
    You likely have an OS I/O problem, but appear to only be looking at Oracle's view of it. What does the O/S say about it?
    Are you sure you want to be running a production database without archive logging?

  • Redo Log Writer Problem

    Hello
    What can I do when the average redo log write time is 17'300ms (averaged over 30ms)?
    I have only one redo log writer. Should I start more than one redo log writer? We have 3 redo log groups (64MB) and we work with Oracle Data Guard. It's Oracle 11.2 (Unix Solaris 10).
    The system switches redo log groups every 45 minutes.
    Thanks for your support...
    Best regards...
    Roger

    Street wrote:
    Hello
    What can I do when the average redo log write time is 17'300ms (averaged over 30ms)?
    I have only one redo log writer. Should I start more than one redo log writer? We have 3 redo log groups (64MB) and we work with Oracle Data Guard. It's Oracle 11.2 (Unix Solaris 10).
    The system switches redo log groups every 45 minutes.
    Thanks for your support...
    Best regards...
    Roger
    Why do you think that this time, 30ms, is not good enough for your database? Did you see any redo log related issues in the Statspack/AWR report?
    There is only one LGWR process possible; you can't have more than one LGWR.
    Aman....

  • How to reduce "Wait for Log Writer"

    Hi,
    In a production system using MaxDB 7.6.03.07 I checked the following activity figures for the log:
    Log Pages Written: 32.039
    Wait for Log Writer: 31.530
    The docs explain that "Wait for Log Writer" indicates how often it was necessary to wait for a log entry to be written.
    What steps must I follow to reduce this?
    thanks for any help
    Clóvis

    Hi,
    When the log IO queue is full, all user tasks that want to insert entries into the log IO queue have to wait until the log entries in the queue have been written to the log volume - they are waiting for the log writer.
    First, you should check the size of the LOG_IO_QUEUE parameter.
    Think about increasing the parameter value.
    Second, check the time for write I/O to the log -> use DB-Analyzer and activate time measurement via the x_cons <DBNAME> time enable command.
    Then you will get the time for write I/O on the log in the DB-Analyzer log files (expert).
    You will find more information about MaxDB logging and performance analysis on maxdb.sap.com -> [training material|http://maxdb.sap.com/training], chapter Logging and Performance Analysis.
    Regards, Christiane

  • Archive log writes frequency rate (4.73 minute(s)) is too high

    hello,
    DB version=10.2.0.1.0
    OS sun Solaris 5.10
    I got an alert saying:
    Archive log writes frequency rate (4.73 minute(s)) is too high. Could someone help me understand what this alert means and what is causing it to occur?
    thanks

    Raman wrote:
    please go through this URL
    http://www.dba-oracle.com/t_redo_log_tuning.htm
    Well, the first step on that page is wrong. The link in the step corrects it. However, both pages advocate putting redo logs on SSD, and if you google for recent blog postings about that, you will see that it is a bad idea. Even on Exadata, it only works because it is also writing to normal spinning rust, and the write to redo is considered done when the first write is acknowledged. For normal non-Exadata databases, it's at best an expensive waste of time, and at worst shows up the deterioration of SSDs as redo log corruption. So, you might not want to link there.
    You should size the redo for the data rate expected at maximum, and use the parameters CKPT mentioned to switch for normal operating data rates.
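    Before resizing anything, it helps to see how often the logs are actually switching; a minimal sketch against the standard v$log_history view (the one-day window and hourly bucketing are only illustrative):
    SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
                COUNT(*)                               AS log_switches
           FROM v$log_history
          WHERE first_time > SYSDATE - 1
          GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
          ORDER BY 1;
    -- Consistently more than a few switches per hour at peak usually means the online
    -- redo logs are undersized for the redo generation rate.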

  • User log in & log out time in SAP

    Dear Experts,
    I hope everyone is fine.
    How can I check users' log in & log out times, daily & weekly, in SAP?
    Thanks in advance
    Regards
    Dilip Pasila

    Hi,
    You can see the details about users' last logon & logoff in different ways.
    1) Run the tcode SUIM -> User -> By Logon Date and Password Change -> for all users' logon details just execute (F8); if you want a particular user's logon details, enter the user name and then execute.
    2) Run the tcode SE16 -> enter the table name USR02 -> execute; you will be able to see the users' logon details.
    3) With the help of CCMS you can also see the last logon details:
    RZ20 -> expand CCMS Monitor Templates -> Entire System -> instance no. -> expand System Configuration -> Named User.

  • HotSpot(TM) 64-Bit Server VM warning: (benign) Hit CMSMarkStack max size limit

    Hi, I am hitting the "CMSMarkStack max size limit" warning. Could anyone explain why I am getting it?
    61083.003: [GC 61083.003: [ParNew: 523392K->0K(523840K), 0.1802670 secs] 2866168K->2364464K(4193856K), 0.1804250 secs]
    61087.107: [GC 61087.107: [ParNew: 523392K->0K(523840K), 0.1970010 secs] 2887856K->2396761K(4193856K), 0.1971990 secs]
    61087.349: [GC [1 CMS-initial-mark: 2396761K(3670016K)] 2408426K(4193856K), 0.0330660 secs]
    61087.382: [CMS-concurrent-mark-start]
    61089.382: [CMS-concurrent-mark: 2.000/2.000 secs]
    61089.382: [CMS-concurrent-preclean-start]
    61089.637: [CMS-concurrent-preclean: 0.253/0.255 secs]
    61089.637: [CMS-concurrent-abortable-preclean-start]
    CMS: abort preclean due to time 61090.703: [CMS-concurrent-abortable-preclean: 0.224/1.067 secs]
    61090.721: [GC[YG occupancy: 336074 K (523840 K)]61090.721: [Rescan (parallel) , 0.4475020 secs]61091.169: [weak refs processing, 1.8464740 secs]Java HotSpot(TM) 64-Bit Server VM warning: (benign) Hit CMSMarkStack max size limit
    [1 CMS-remark: 2396761K(3670016K)] 2732836K(4193856K), 2.5285500 secs]
    61095.521: [CMS-concurrent-sweep-start]
    61104.793: [CMS-concurrent-sweep: 9.271/9.271 secs]
    61104.793: [CMS-concurrent-reset-start]
    61104.821: [CMS-concurrent-reset: 0.029/0.029 secs]
    61110.338: [GC 61110.338: [ParNew: 523392K->0K(523840K), 0.6183310 secs] 2133391K->1628588K(4193856K), 0.6184950 secs]
    61162.032: [GC 61162.032: [ParNew: 523392K->0K(523840K), 0.2259220 secs] 2151980K->1662904K(4193856K), 0.2261040 secs]
    61171.154: [GC 61171.155: [ParNew: 523392K->0K(523840K), 0.1890640 secs] 2186296K->1686907K(4193856K), 0.1892200 secs]
    regards
    R.Sriram

    "errno = 28" is an error code from the OS which means "No space left on device" this could indicate you don't have enough swap space.
    I suspect you are using too much memory even when you don't think you are.
    I would start with a simple "hello world" program and increase the memory until you get an error. If you can't run even a hello world program you have a serious system error.

  • Improving redo log writer performance

    I have a database on RAC (2 nodes)
    Oracle 10g
    Linux 3
    2 servers PowerEdge 2850
    I'm tuning my database with Spotlight. I already have this alert:
    "The Average Redo Write Time alarm is activated when the time taken to write redo log entries exceeds a threshold. "
    The servers are not in RAID 5.
    How can I improve redo log writer performance?
    Unlike most other Oracle write I/Os, Oracle sessions must wait for redo log writes to complete before they can continue processing.
    Therefore, redo log devices should be placed on fast devices.
    Most modern disks should be able to process a redo log write in less than 20 milliseconds, and often much lower.
    To reduce redo write time see Improving redo log writer performance.
    See Also:
    Tuning Contention - Redo Log Files
    Tuning Disk I/O - Archive Writer

    Some comments on the section that was pulled from Wikipedia. There is some confusion in the market, as there are different types of solid state disks with different pros and cons. The first major point is that the quote pulled from Wikipedia addresses issues with flash hard disk drives. Flash disks are one type of solid state disk that would be a bad solution for redo acceleration (as I will attempt to describe below), though they could be useful for accelerating read intensive applications. The type of solid state disk used for redo logs uses DDR RAM as the storage media. You may decide to discount my advice because I work for one of these SSD manufacturers, but I think if you do enough research you will see the point. There are many articles, and many more customers, who have used SSD to accelerate Oracle.
    > Assuming that you are not CPU constrained, moving the online redo to high-speed solid-state disk can make a huge difference.
    > Do you honestly think this is practical and usable advice Don? There is a HUGE price difference between SSD and normal hard disks. Never mind the following disadvantages. Quoting (http://en.wikipedia.org/wiki/Solid_state_disk):
    > Price - As of early 2007, flash memory prices are still considerably higher per gigabyte than those of comparable conventional hard drives - around $10 per GB compared to about $0.25 for mechanical drives.
    Comment: Prices for DDR RAM based systems are actually higher than this, with a typical list price around $1000 per GB. Your concern, however, is not price per capacity but price for performance. How many spindles will you have to spread your redo log across to get the performance that you need? How much impact are the redo logs having on your RAID cache effectiveness? Our system is obviously geared to the enterprise, where Oracle is supporting mission critical databases and a huge return can be made on accelerating Oracle.
    > Capacity - The capacity of SSDs tends to be significantly smaller than the capacity of HDDs.
    Comment: This statement is true. Per hard disk drive versus per individual solid state disk system, you can typically get higher density of storage with a hard disk drive. However, if your goal is redo log acceleration, storage capacity is not your bottleneck. Write performance, however, can be. Keep in mind that, just as with any storage media, you can deploy an array of solid state disks that provide terabytes of capacity (with either DDR or flash).
    > Lower recoverability - After mechanical failure the data is completely lost as the cell is destroyed, while if a normal HDD suffers mechanical failure the data is often recoverable using expert help.
    Comment: If you lose a hard drive for your redo log, the last thing you are likely to do is to have a disk restoration company partially restore your data. You ought to be getting data from your mirror or RAID to rebuild the failed disk. Similarly, with solid state disks (flash or DDR) we recommend host based mirroring to provide enterprise levels of reliability. In our experience, a DDR based solid state disk has a failure rate equal to the odds of losing two hard disk drives in a RAID set.
    > Vulnerability against certain types of effects, including abrupt power loss (especially DRAM based SSDs), magnetic fields and electric/static charges compared to normal HDDs (which store the data inside a Faraday cage).
    Comment: This statement is all FUD. For example, our DDR RAM based systems have redundant power supplies, N+1 redundant batteries, and four RAID protected "hard disk drives" for data backup. The memory is ECC protected and Chipkill protected.
    > Slower than conventional disks on sequential I/O
    Comment: Most flash drives will be slower on sequential I/O than a hard disk drive (to really understand this you should know there are different kinds of flash memory that also impact flash performance). DDR RAM based systems, however, offer enormous performance benefits versus hard disk or flash based systems for sequential or random writes. DDR RAM systems can handle over 400,000 random write I/Os per second (the number is slightly higher for sequential access). We would be happy to share with you some Oracle ORION benchmark data to make the point. For redo logs on a heavily transactional system, the latency of the redo log storage can be the ultimate limit on the database.
    > Limited write cycles. Typical Flash storage will typically wear out after 100,000-300,000 write cycles, while high endurance Flash storage is often marketed with endurance of 1-5 million write cycles (many log files, file allocation tables, and other commonly used parts of the file system exceed this over the lifetime of a computer). Special file systems or firmware designs can mitigate this problem by spreading writes over the entire device, rather than rewriting files in place.
    Comment: This statement is mostly accurate but refers only to flash drives. DDR RAM based systems, such as those Don's books refer to, do not have this limitation.
    > Looking at many of your postings to Oracle Forums thus far Don, it seems to me that you are less interested in providing actual practical help, and more interested in self-promotion - of your company and the Oracle books produced by it... and that is not a very nice approach when people post real problems wanting real world practical advice and suggestions.
    Comment: Contact us and we will see if we can prove to you that Don, and any number of other reputable Oracle consultants, recommend using DDR based solid state disk to solve redo log performance issues. In fact, if it looks like your system can see a serious performance increase, we would be happy to put you on our evaluation program to try it out so that you can do it at no cost from us.

  • Nightmare with Log Writer and active files, please explain!

    Hi
    Can anybody explain how many redo log groups a server may need in NOARCHIVELOG mode?
    The documentation on the Oracle web site says:
    If you have enabled archiving, Oracle cannot re-use or overwrite an active online log file until ARCn has archived its contents. If archiving is disabled, when the last online redo log file fills, writing continues by overwriting the first available active file.
    From this it looks like only 2 groups are needed. I don't get how it is possible to overwrite an active file. I think it should first be written out by DBWn and become inactive. Is that the reason I get "checkpoint has not completed" warnings in the log and poor performance?
    Andrius

    I believe the minimum required by Oracle is 2 groups.
    Obviously, this won't cut it in ARCHIVELOG mode for most databases. But then, you were referring to NOARCHIVELOG mode. I tend to go with 3 in this type of scenario.
    As for the 2nd part: only one redo log is 'ACTIVE' (in the sense of currently being written to, i.e. CURRENT) at a time. When a log switch occurs, LGWR moves to the next INACTIVE redo log and starts writing to that, thus overwriting what was previously in it. DBWn doesn't write redo entries; that is handled by LGWR. As far back as Oracle8i, the only part DBWn played with respect to redo logs was triggering log writing by LGWR.
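    To watch this in practice, a minimal sketch against the standard v$log view (the status values CURRENT / ACTIVE / INACTIVE are Oracle's own):
    SQL> SELECT group#, sequence#, bytes/1024/1024 AS size_mb, archived, status
           FROM v$log
          ORDER BY group#;
    -- CURRENT  = the group LGWR is writing to right now
    -- ACTIVE   = no longer written to, but still needed for instance recovery
    -- INACTIVE = checkpoint complete, safe to reuse/overwrite at a future switch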

  • More log writer contention when using JDBC Batch Updates?

    So I'm already seeing log writer contention when I'm committing inserts one row at a time.
    Why would I see even more log writer contention, i.e. commit waits in Enterprise Manager, when I batch these inserts using JDBC batch updates?
    Today I observed significantly more commit waits when I used JDBC batching vs committing one row at a time.

    Please refer
    http://www.oracle.com/technology/products/oracle9i/daily/jun07.html
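    One way to make the comparison concrete is to measure commit waits per commit for each test run; a minimal sketch using the standard v$sysstat and v$system_event views (take a snapshot before and after each run and diff the values):
    SQL> SELECT s.value       AS user_commits,
                e.total_waits AS log_file_sync_waits,
                ROUND(e.time_waited_micro / NULLIF(s.value, 0) / 1000, 2) AS sync_ms_per_commit
           FROM v$sysstat s, v$system_event e
          WHERE s.name  = 'user commits'
            AND e.event = 'log file sync';
    -- Batching pushes more redo into each commit, so each 'log file sync' wait can get
    -- longer even though the total number of commits goes down.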

  • Want to reduce Log switch time interval !!!

    Friends,
    I know that the recommended log switch time interval is 20-30 minutes, i.e., it is better for redolog 1 to switch to redolog 2 (or redolog 2 to redolog 3) within 20-30 minutes.
    But on my production server the logfile switches only about every 60 minutes, even during the peak hour. My question: how can I make my logfile switch to another logfile every 20-30 minutes?
    Here is my database configuration:
    Oracle database 10g (10.2.0.1.0 version) in AIX 5.3 server
    AND
    SQL> show parameter fast_start_mttr_target
    NAME TYPE VALUE
    fast_start_mttr_target integer 600
    Each of my redo log files is 50 MB in size.
    In this situation, please advise how I can reduce my log switch time interval.

    You could either
    a. Recreate your RedoLog files with a smaller size --- which action I would not recommend
    OR
    b. Set the instance parameter ARCHIVE_LAG_TARGET to 1800
    ARCHIVE_LAG_TARGET specifies (in seconds) the duration at which a log switch would be forced, if it hasn't been done so by the online redo log file being full.
    You should be able to use ALTER SYSTEM to change this value.
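    A minimal SQL*Plus sketch of option (b), assuming an spfile is in use (with a plain pfile, use SCOPE=MEMORY and edit the pfile as well):
    SQL> ALTER SYSTEM SET archive_lag_target = 1800 SCOPE=BOTH;
    SQL> show parameter archive_lag_target
    -- With a 1800-second lag target, the gap between log switches will not exceed
    -- 30 minutes, even when the 50 MB online redo log has not yet filled.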
    Hemant K Chitale
    http://hemantoracledba.blogspot.com

  • Semantic Logging custom sink logs the event entry multiple times

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;
    using Microsoft.Practices.EnterpriseLibrary.SemanticLogging.Formatters;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;
    using System.Globalization;
    using System.IO;
    using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;
    using Microsoft.WindowsAzure.Storage.RetryPolicies;
    namespace SemanticLogging.CustomSink
    {
        public class AzureBlobSink : IObserver<EventEntry>
        {
            private readonly IEventTextFormatter _formatter;
            private string ConnectionString { get; set; }
            private string ContainerName { get; set; }
            private string BlobName { get; set; }

            public AzureBlobSink() : base()
            {
                System.Net.WebRequest.DefaultWebProxy.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
            }

            public AzureBlobSink(string connectionString, string containerName, string blobname) : base()
            {
                this.ConnectionString = connectionString;
                this.ContainerName = containerName;
                this.BlobName = blobname;
                _formatter = new EventTextFormatter();
            }

            public void OnCompleted()
            {
                //throw new NotImplementedException();
            }

            public void OnError(Exception error)
            {
                //throw new NotImplementedException();
                SemanticLoggingEventSource.Log.CustomSinkUnhandledFault("Exception: " + error.Message + Environment.NewLine + "Stack trace:" + error.StackTrace + Environment.NewLine + "Inner Exception" + error.InnerException + Environment.NewLine);
            }

            public void OnNext(EventEntry value)
            {
                if (value != null)
                {
                    using (var writer = new StringWriter())
                    {
                        _formatter.WriteEvent(value, writer);
                        Postdata(Convert.ToString(writer), BlobName);
                    }
                }
            }

            /// <summary>
            /// create container and upsert block blob content
            /// </summary>
            /// <param name="content"></param>
            private void Postdata(string content, string blobname)
            {
                List<string> blockIds = new List<string>();
                var bytesToUpload = Encoding.UTF8.GetBytes(content);
                //to set default proxy
                System.Net.WebRequest.DefaultWebProxy.Credentials = System.Net.CredentialCache.DefaultNetworkCredentials;
                try
                {
                    using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(content), false))
                    {
                        CloudStorageAccount account = CloudStorageAccount.Parse(ConnectionString);
                        CloudBlobClient blobClient = account.CreateCloudBlobClient();
                        //linear Retry Policy create a blob container 10 times. The backoff duration is 2 seconds
                        IRetryPolicy linearRetryPolicy = new LinearRetry(TimeSpan.FromSeconds(2), 10);
                        //exponential Retry Policy which retries the code to create a blob container 10 times.
                        IRetryPolicy exponentialRetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 10);
                        blobClient.RetryPolicy = linearRetryPolicy;
                        CloudBlobContainer container = blobClient.GetContainerReference(ContainerName);
                        container.CreateIfNotExists();
                        CloudBlockBlob blob = container.GetBlockBlobReference(blobname + System.DateTime.UtcNow.ToString("MMddyyyy") + ".log");
                        //stream.Seek(0, SeekOrigin.Begin);
                        if (!blob.Exists())
                        {
                            using (var insertempty = new MemoryStream(Encoding.UTF8.GetBytes("")))
                            {
                                blob.UploadFromStream(insertempty);
                            }
                        }
                        blockIds.AddRange(blob.DownloadBlockList(BlockListingFilter.Committed).Select(b => b.Name));
                        var newId = Convert.ToBase64String(Encoding.UTF8.GetBytes(blockIds.Count.ToString(CultureInfo.InvariantCulture).PadLeft(64, '0')), Base64FormattingOptions.None);
                        //var newId = Convert.ToBase64String(Encoding.Default.GetBytes(blockIds.Count.ToString()));
                        blob.PutBlock(newId, new MemoryStream(bytesToUpload), null);
                        blockIds.Add(newId);
                        blob.PutBlockList(blockIds);
                    }
                }
                catch (Exception ex)
                {
                    SemanticLoggingEventSource.Log.CustomSinkUnhandledFault("Exception: " + ex.Message + Environment.NewLine + "Stack trace:" + ex.StackTrace + Environment.NewLine + "Inner Exception" + ex.InnerException + Environment.NewLine);
                }
            }
        }
    }
    I have created a custom sink for logging to an Azure blob.
    The code below is giving an issue.
    When I execute this method,
    the first time it writes the log entry to the blob once,
    the second time it writes it twice,
    and the n-th time it writes n entries to the blob.
    Actually, the code below ends up being executed n times.
    public void OnNext(EventEntry value)
    {
        if (value != null)
        {
            using (var writer = new StringWriter())
            {
                _formatter.WriteEvent(value, writer);
                Postdata(Convert.ToString(writer), BlobName);
            }
        }
    }
    Help me!
    Thanks, SaravanaBharathi.A

    Hi Saravanabharathi,
    Thank you for posting in here.
    We are looking into this and will get back to you at the earliest. Your patience is greatly appreciated.
    Regards,
    Manu Rekhar
