Real-time logging: best practices and questions

I have 4 pairs of DS 5.2p6 servers running in MMR (multi-master replication) mode on Windows 2003.
Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" (enabled), and the log files are stored on a local file system, then archived centrally later by a log-sender daemon.
I now have a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory server access logs in real time.
At first glance, each directory server generates about 1.1 MB of access log per second.
1) I'd like to know if there are any known best practices or experiences for this kind of setup.
2) Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk subsystem (SAN, NAS, ...)?
3) In DS 5.2, what is the default access log buffering policy: is there a maximum buffer size and/or a time limit before flushing to disk? Is it configurable?

Log buffering should usually be enabled; I don't know of any customers who turn it off, and even if you do, it should be after careful evaluation in your environment. AFAIK, there is no configurable buffer size or time limit before the buffer is committed to disk.
Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of to disk. Let's say the ramdisk is 2 GB max in size, you receive about 1 MB/sec in writes, and max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from the ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data also disappears after a reboot, so you'll need to manage that as well. Sounds like fun to try, but it may not be practical; a rough sketch of the copy job follows the links below.
Ramdisk on Windows
[http://msdn.microsoft.com/en-us/library/dd163312.aspx]
Ramdisk on Solaris
[http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
[http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
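If you want to experiment with that idea, here is a minimal sketch of the one-minute copy job in Java. The paths, the schedule, and the rotated-file naming pattern ("access.<timestamp>") are all assumptions to adapt to your environment; this is an illustration of the approach, not a tested tool.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class RamdiskLogShipper {

        // Assumed locations: a ramdisk holding the DS access logs, and a
        // directory on persistent disk that the log-sender daemon watches.
        static final Path RAMDISK = Paths.get("R:/ds-logs");
        static final Path ARCHIVE = Paths.get("D:/ds-log-archive");

        public static void main(String[] args) {
            // Copy rotated logs off the ramdisk once a minute; a crash or
            // reboot therefore loses at most roughly a minute of logs.
            Executors.newSingleThreadScheduledExecutor()
                     .scheduleAtFixedRate(RamdiskLogShipper::ship, 1, 1, TimeUnit.MINUTES);
        }

        static void ship() {
            // Rotated files are assumed to be named "access.<timestamp>";
            // the live file is plain "access" and is deliberately skipped.
            try (DirectoryStream<Path> rotated = Files.newDirectoryStream(RAMDISK, "access.*")) {
                for (Path log : rotated) {
                    Files.copy(log, ARCHIVE.resolve(log.getFileName()),
                               StandardCopyOption.REPLACE_EXISTING);
                    Files.delete(log); // free ramdisk space once the copy is safe
                }
            } catch (IOException e) {
                e.printStackTrace(); // a real job should alert on failures
            }
        }
    }

The copy-then-delete order matters: a file leaves the ramdisk only after it exists on persistent disk, so a failed copy costs nothing but ramdisk space.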
I should ask, though: how real-time does this log correlation need to be?

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method to achieving this.  What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to having a primary and failover NIC, i.e. FT, iSCSI and all the others failover (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT for example I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs even at the expense of reduced resiliency should the absolute worst happen?  Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • Best Practices and Usage of StreamWork?

    Hi All -
    Is this Forum a good place to inquire about best practices and use of Streamwork? I am not a developer working with the APIs, but rather have setup a Streamwork Activity for my team to collaborate on our activities.
    We are thinking about creating a sort of FAQ on our team activity and I was thinking of using either a table or a collection for this. I want it to be easy for team members to enter the question and the answer (our team gets a lot of questions from many groups and over time I would like to build up a sort of knowledge base).
    Does anyone have any suggestions for such a concept in StreamWork? Has anyone done something like this and can share experiences?
    Please let me know if I should post this question in another place.
    Thanks and regards,
    Rob Stevenson

    Activities have a limit of 200 items that can be included. If this is the venue you wish to use, it might be better to use a table rather than individual notes/discussions.

  • Coherence Best Practices and Performance

    I'm starting to use Coherence and I'd like to know if someone could point me to some docs on best practices and performance optimization when using it.
    BTW, I haven't had the time to go through the entire Oracle documentation.
    Regards

    Hi
    If you are new to Coherence (or even for people who are not that new), one of the best things you can do is read this book: http://www.packtpub.com/oracle-coherence-35/book. I know it says Coherence 3.5 and we are currently on 3.7, but it is still very relevant.
    You don't need to go through all the documentation, but at least try the introductions and try out some of the examples. You need to know the basics; otherwise it makes it harder for people to either understand what you want or give you detailed enough answers to your questions.
    For performance optimization, it depends a lot on your use cases and what you are doing; there are a number of things you can do with Coherence to help performance, but as with anything there are trade-offs. Coherence on the server side is a Java process, so when tuning or sorting out performance issues I spend a lot of time with the usual Java tools such as VisualVM (or JConsole), tuning GC, and looking at thread dumps and stack traces.
    Finally, there are plenty of people on these forums happy to answer your questions in return for a few forum points, so just ask.
    JK

  • What is the real-world use of implicit and explicit cursors in PL/SQL

    What is the real-world use of implicit and explicit cursors in PL/SQL? Please tell me.

    You can check the following link ->
    http://www.smart-soft.co.uk/Oracle/oracle-plsql-tutorial-part5.htm
    But I have a question ->
    Are you a student?
    Regards.
    Satyaki De.

  • What is the best practice and Microsoft-recommended procedure for placing FSMO roles on a Primary Domain Controller (PDC) and an Additional Domain Controller (ADC)?

    Hi,
    I have Windows Server 2008 Enterprise and 2 Domain Controllers in my company:
    Primary Domain Controller (PDC)
    Additional Domain Controller (ADC)
    My PDC went down due to hardware failure, but somehow I got a chance to bring it up and transferred the 5 FSMO roles from the PDC to the ADC.
    Now my PDC is repaired and up with the same configuration and settings (I did not install a new OS or Domain Controller on the existing PDC server).
    Finally, I want to move the FSMO roles back from the ADC to the PDC so that my PDC is up and operational as primary.
    (Before the disaster my PDC held all 5 FSMO roles.)
    Here I want to know the best practice and Microsoft-recommended procedure for the placement of FSMO roles on the PDC and the ADC.
    If the primary DC fails, the additional DC should take over automatically without any problem in a live environment.
    For example, what should the FSMO role distribution between both servers be?
    Primary Domain Controller (PDC) should contain:
    Schema Master
    Domain Naming Master
    Additional Domain Controller (ADC) should contain:
    RID Master
    PDC Emulator
    Infrastructure Master
    Please let me know the best practice and Microsoft-recommended procedure for the placement of FSMO roles.
    I will be waiting for your valuable comments.
    Regards,
    Muhammad Daud

    Here I want to know the best practice and Microsoft-recommended procedure for the placement of FSMO roles on the PDC and the ADC.
    There is a good article I would like to share with you: http://oreilly.com/pub/a/windows/2004/06/15/fsmo.html
    For me, I do not really see a need to have FSMO roles on multiple servers in your case. I would recommend keeping it simple and having a single DC hold all the FSMO roles.
    If the primary DC fails, the additional DC should take over automatically without any problem in a live environment.
    No, this is not true. Each FSMO role is unique, and if a DC fails, its FSMO roles will not be transferred automatically.
    There are two approaches that can be followed when an FSMO role holder is down:
    1. If the DC can be recovered quickly, I would recommend taking no action.
    2. If the DC will be down for a long time or cannot be recovered, I would recommend that you seize the FSMO roles and do a metadata cleanup.
    Attention! For (2), the old FSMO holder should never be brought up and online again once the FSMO roles have been seized. Otherwise, your AD may face huge impacts and side effects.

  • PI best practice and integration design ...

    I'm currently on a site that has a separate PI instance for each region, and the question of inter-region integration has been raised. My initial impression is that each PI will be in charge of integration for its regional landscape, and inter-region communications will be conducted through a PI-to-PI interface. I haven't come across any best practice in this regard and have never been involved with a multiple-PI landscape ...
    Any thoughts? Or links to best practice for this kind of landscape? ...
    To summarise:
    I think this is the best way to set it up; although numerous other combinations are possible, this seems to be the best way to avoid any significant system coupling. When talking about ECC-to-ECC inter-region communications:
    AUS ECC -> AUS PI -> USA PI -> USA ECC

    abhishek salvi wrote:
    I need to get data from my local ECC to the USA ECC; do I send the data to their PI, my PI, or directly to their ECC? All will work, all are valid.
    If LocalECC --> one PI --> USA ECC is valid, then you don't have to go for another PI in between... why increase the processing time? It seems to be a good option to bet on.
    The issues are:
    1. Which PI system should any given piece of data be routed through, and how do you manage the subsequent spider web of interfaces resulting from PI AUS talking to ECC US, ECC AU, BI US and BI AU, and the reverse for the PI USA system?
    2. The increased processing time from Integration Engine to Integration Engine should be minimal, and a single route means a consistent set of interfaces for support and debugging, not to mention the simplification of the SLD contents in each PI system.
    I tend to think of it like network routing: the PI system is the default gateway for any data not bound for a local system; you send the data and let PI figure out what the next step is.
    abhishek salvi wrote:
    But then what about this statement (is it a restriction or a business requirement)?
    Presently the directive is that each PI will manage communications with its own landscape only.
    When talking about multiple landscapes (dev/test/qa/prod), each landscape generally has its own PI system; this is an extension of the same idea, except that here both systems are productive. From an interface and customisation point of view, given the geographical remoteness of each system, local interface development for local systems and support makes sense. While not limited to this kind of interaction, interfaces for a given business function at a given location (location-specific logic) would typically be developed in concert with that interface, and as such have no real place on the remote PI system.
    To answer your question: there is no rule, it just makes sense.

  • Music on Hold: Best Practice and site assignment

    Hi guys,
    I have a client with multiple sites, a large number of remote workers (on and off domain) and Lync Phone Edition devices.
    We want to deploy a custom music on hold file. What's the best way of doing this? I'm thinking of:
    1. Placing the file on a share on one of the Lync servers. However, this would mean (I assume) that clients will always try to contact the UNC path every time a call is placed on hold, which would result in site B connecting to site A for its MoH file. This is very inefficient and adds delay to placing a call on hold. If accessing the file from a central share is best practice, how could I do this per site? The site policies I've tried haven't worked very well. For example, if a file is on \\serverB\MoH\file.wma for a site called "London Site", what commands do I need to run to create a policy that will force clients located at that site to use that UNC path? Also, how do clients know what site they are in?
    2. Alternatively, pushing the WMA file out to local devices via a Group Policy, and then setting Lync globally to point to %systemdrive%\MoH\file.wma. Again, how do I go about doing this? Also, what would happen to LPE devices that wouldn't have the file (as they wouldn't get the GPO)?
    Any help with this would be appreciated, particularly around how users are assigned to sites and the syntax used to create a site policy for the first option. Any best practice guidance would be great!
    Thanks - Steve

    Hi StevehootMITS,
    If Lync Phone Edition or another device doesn't provide endpoint MoH, you can use PSTN gateways to provide music on hold. For more information about Music on Hold, you can check
    http://windowspbx.blogspot.in/2011/07/questions-about-microsoft-lync-server.html
    Note: Microsoft is providing this information as a convenience to you. The sites are not controlled by Microsoft. Microsoft cannot make any representations regarding the quality, safety, or suitability of any software or information found there. Please make sure that you completely understand the risk before retrieving any suggestions from the above link.
    Best regards,
    Eric

  • Redo log best practice for performance - ASM 11g

    Hi All,
    I am new to the ASM world... can you please give your valuable suggestions on the topic below?
    What is the best practice for online redo log files? I read somewhere that Oracle recommends having only two disk groups: one for all the DB files, control files and online redo log files, and another disk group for recovery files, like multiplexed redo files, archive log files etc.
    Will there be any performance improvement in making a different disk group for online redo logs (separate from datafiles, control files etc.)?
    I am looking for an Oracle document on best practices for redo logs (performance) on ASM.
    Please share your valuable views on this.
    Regards.

    ASM is only a filesystem.
    What really counts for I/O performance is the storage design, i.e. the RAID level used, array sharing, hard disk RPM, and so on.
    ASM itself is only a layer for reading and writing on ASM disks, which means that if your storage design is OK, ASM will be OK. (Of course there are best practices for ASM, but storage must come first.)
    e.g. Is there any performance improvement in making different diskgroups?
    It depends... and that leads to another question:
    Are the LUNs on the same array?
    If yes, the performance will end up at the same point, no matter whether you have one diskgroup or many.
    Comment added:
    Comparing ASM to Filesystem in benchmarks (Doc ID 1153664.1)
    Your main concern should be how many IOPS, and what latency and throughput, the storage can give you. Based on these values you will know if your redo logs will be fine.

  • Time Machine best practices after Lion to Mountain Lion upgrade

    I've made the upgrade from Lion to Mountain Lion and everything seems to be OK.
    I have been using Time Machine for backups since I deployed my first and, so far, only Mac (Mac Mini running Lion) in 2011.  I run my TM backups manually.  Since upgrading to Mountain Lion, I have not yet kicked off a TM backup, so my questions involve best practices with TM after an upgrade from Lion to Mountain Lion:
    Can I simply use the same drive as I use currently, do what I've always done, start the backup manually, and TM handles gracefully the new backup from the new OS?  
    At this point, since I have only backups of the Lion system, what I see when I double-click on the Time Machine drive is a folder called “Backups.backupdb”, then a subfolder called “My Mac mini”, and then all the backup events.  Nothing else.  What will I see once I do a backup now, after the Mountain Lion upgrade?
    If I for some reason needed to boot to my old Lion system (I cloned the startup disk prior to upgrading to ML) and access my old Lion backups with TM, would I be successful?  In other words does the system know that I'm booted to Lion, so give me access to the TM backups created under Lion?   Conversely when booted to the new Mountain Lion system, will I have access only to the backups created since the upgrade to Mountain Lion?
    Any other best practices steps I should take prior to my first ML backup?
    Time Machine is a great, straightforward system to use (although I have to say I’ve not (yet) needed to depend on it for recovery... I trust that will go well when needed), but I don't want to make any assumptions as to how it works after a major OS upgrade.
    Thank you for reading.

    1. Correct. If you want to downgrade to OS X Lion, your Mac will still keep backups created with OS X Lion, so just start into Internet Recovery and select one of the backups made with OS X Lion. If you don't want Time Machine to back up automatically, you may want to use TimeMachineEditor.
    2. After making a backup with Mountain Lion, it will be the same, but with a new folder that belongs to the new backup you have created.
    3. See my first answer.
    4. One piece of advice: when your Time Machine drive gets full, Time Machine deletes old backups, so it may eventually remove all your OS X Lion backups. However, I don't think you will need to go back to OS X Lion.
    If you have any questions apart from those, see Pondini's website > http://pondini.org

  • Semantic Logging - RollingFlatFileSink - Real-Time Logging

    We're currently using semantic logging and are able to save both to rolling flat files and to a SQL database.
    A difference we're seeing is that the SQL sink inserts every x seconds, while the rolling flat file sink doesn't write data until the file rolls over or the service stops.
    Is there a way to have writes happen in real time for the rollingFlatFileSink?

    Please post questions related to this product in their forums at CodePlex.

  • IronPort best practices and configuration guide

    Hi there,
    I manage a Cisco IronPort ESA appliance for my organisation and made a quick blog post last night about things I thought should be best practices for a new ESA appliance.
    The reason I wrote this is because some of these things are not configured from the start or are configured poorly by default.
    Take a look and let me know what you think - I plan to make a part 2 because there are some things I did not have time to go through and it was quite long already!
    Remember that your environment will be different from mine so you should understand the things I say before blindly implementing them!
    http://emtunc.org/blog/06/2014/cisco-ironport-e-mail-security-appliance-best-practices-part-1/

    First of all, I think your question is related to the WebCenter (Framework) as such, not just OUCSS.
    As for JDev. vs. run-time, this question is well discussed in Yannick Ongena's tutorial: http://www.yonaweb.be/webcenter_tutorial/part1_configure_webcenter_portal_application
    "Let me first talk a bit about the architecture of WebCenter and the runtime customizations. ADF (and WebCenter) has an additional component since 11g called the MDS (MetaDataServices). The MDS is a repository that stores all the customizations. The page we just created at runtime is not stored in the project folder of JDeveloper but is instead stored in the MDS."
    I guess the answer when to use which methods depends on the situation what page you want to create.
    I am surprised, however, that you state that:
    "Pages created in JDeveloper are not searchable online. It is possible to link it to a Navigation Model but the path needs to be manually entered."
    Could you elaborate on your use case?
    As for navigation models, you can check another tutorial: http://docs.oracle.com/cd/E21764_01/webcenter.1111/e10148/jpsdg_navigation.htm#BABJHFCE
    Maybe what you are looking for is how to create a navigation model according to your needs?

  • Transferring film to video best practice, and combining PAL and NTSC

    Could anyone help me with the following 2 questions that I was asked in our small school video lab? I don't really have much experience with negative film and NTSC. Thank you so much.
    1. "I may be going back to the film negative to cut it, based on the FCP EDL. This means that Final Cut has to maintain perfect synch. I know that with AVID, it's more reliable to transfer the film to video at 25 fps rather than 24 fps. Do you have any idea whether this is also the case with Final Cut??"
    2. "Some of my source materials is on PAL and some is on NTSC. Is that going to be a nightmare?? Will I be able to convert from one to the other when I import?? Or will I need to get the NTSC miniDV tapes transfered to PAL so that your PAL deck can read them? "
    we normally use PAL (In UK).

    1. This is where Cinema Tools comes into play. It can conform your edit list from FCP back to film.
    There is a wealth of information in the Cinema Tools handbook and Help menu item.
    Someone else might be able to contribute more information, my experience with CT is very limited.
    2. Some decks are switchable between PAL and NTSC. If yours can do this, then you can capture your footage in a preliminary project and convert it for free with [JES Deinterlacer|http://www.xs4all.nl/~jeschot/home.html], which does a decent job, or for $100 with [Natress Standards Conversion|http://www.nattress.com/Products/standardsconversion/standardsconversion.htm], which does a very good job. Both will take some time; it is best to capture only what you really need.
    The best possible conversion is done with dedicated hardware solutions such as those offered by Snell & Wilcox. Real time with excellent results. This would be the way to go if you have a lot of material or if your deck is not PAL - NTSC switchable.

  • Best Practices needed -- question regarding global support success stories

    My customer has a series of Go Lives scheduled throughout the year and is now concerned about an October EAI (Europe, Asia, International) go live.  They wish to discuss the benefits of separating a European go Live from an Asia/International go live in terms of support capabilities and best practices.  The European business is definitely larger and more important than the Asia/International business and the split would allow more targeted focus on Europe.  My customer does not have a large number of resources to spare and is starting to think that supporting the combined go live may be too much (i.e., too much risk to the businesses) to handle.
    The question for SAP is regarding success stories and best practices.
    From a global perspective, do we recommend this split?  Do most of our global customers split a go live in Europe from a go live in Asia/International (which is Australia, etc.)?  Can I reference any of these customers?  If the EAI go live is not split, what is absolutely necessary for success, etc., etc.?  For example, if a core team member plus local support is required in each location, then this may not be possible with the resources they have...
    I would appreciate any insights/best practices/success stories/or "war" stories you might be aware of.
    Thank you in advance and best regards,
    Barbara

    Hi, this is purely based on customer requirements.
    I have a friend in an organization which went live in 38 centers at the same time.
    With the latest technologies in networking, distance does not make any difference.
    The organization where I currently work has global business locations, and its go live was in phases. They went live first in the region with the most business, because this region was their largest and most important as far as revenue was concerned. Then, after stabilizing this region, a group of consultants went to the rest of the regions for their go lives.
    Both of the companies referred to above are running SAP successfully and are leading SAP partners. Unfortunately I am not authorized to give you the names of the organizations as references, as you requested.
    But in your case, if you have a shortage of manpower, you can go live in phases: first in the European market, and then in phases in the other regions.
    Warm Regards

  • Logging Best Practices in J2EE

    Hi,
    I've been struggling with Apache Commons Logging and Log4J class-loading problems between module deployments in the Sun App Server. I've also had the same problems with other app servers.
    What is the best practice for logging in J2EE?
    I think it may be java.util.logging. But what is the best practice for providing a different logging config (i.e. levels for classes and output) for each deployed module, and how would you structure that in the EAR?
    Thanks in advance.
    Graham

    I find that java.util.logging works fine. For configuration of the log levels I use a LifeCycle module that sets up all my levels and handlers. That way I can set up the server.policy to allow only the LifeCycle module jar to configure logging (with a codebase grant), but no other normal modules can.
    The LifeCycle module gets its properties as event data with the INIT event and configures the logging on the STARTUP event.
    Hope this helps.
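    To make that concrete, here is a minimal, self-contained sketch of the kind of java.util.logging setup such a startup module would perform. The logger names, levels and file pattern are made-up placeholders, and a real LifeCycle module would read them from its INIT-event properties rather than hard-coding them:

        import java.io.IOException;
        import java.util.logging.FileHandler;
        import java.util.logging.Level;
        import java.util.logging.Logger;
        import java.util.logging.SimpleFormatter;

        public class LoggingSetup {

            // Hold strong references: java.util.logging keeps loggers weakly,
            // so configured levels can be lost if nothing references the Logger.
            static final Logger MODULE_A = Logger.getLogger("com.example.moduleA");
            static final Logger MODULE_B = Logger.getLogger("com.example.moduleB");

            public static void configure() throws IOException {
                // Per-module levels: one logger namespace per deployed module.
                MODULE_A.setLevel(Level.FINE);
                MODULE_B.setLevel(Level.WARNING);

                // Give module A its own rolling output file (5 files x 10 MB).
                FileHandler handler = new FileHandler("moduleA-%g.log", 10000000, 5, true);
                handler.setFormatter(new SimpleFormatter());
                handler.setLevel(Level.ALL);
                MODULE_A.addHandler(handler);
            }

            public static void main(String[] args) throws IOException {
                configure();
                MODULE_A.fine("written to moduleA-0.log");     // passes the FINE level
                MODULE_B.info("dropped: below WARNING level"); // filtered at the logger
            }
        }

    The same idea scales to one logger namespace (and, if needed, one handler) per EAR module, with only the startup module granted permission to change the configuration.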
