CSM Logs growing at abnormal rate

We installed Cisco Security Manager 3.2.1.
Log files such as jrm keep growing to 65 GB every month, and we have to delete them manually and restart the CSM server.
Please let us know of any known issues and workarounds.

Hi Reza,
Thanks for your reply. We reset the pct_increase parameter to zero (earlier it was 50). From this point on, we are monitoring the growth of the next_extent parameter.
Thanks & regards
Shekhar

Similar Messages

  • Transactional log grows abnormally to 160GB

    Dear All
    I am new to SQL databases. On my production server the transaction log grew up to 160 GB, and only 7 GB is left in the file system.
    I don't want to add any space to the file system, and we have a full backup scheduled every day.
    Is there a permanent solution? I want to take proactive action before my database hangs.
    I have also checked note 421644.

    Hi Tarik!
    Your TLOG is probably so large because you did not back it up at all in the beginning, or did not back it up for some time. This does not mean that its allocated space is full; in fact, if you define a proper backup policy you can shrink it to a more reasonable value, as explained in SAP note 363018 (section E.b).
    Now, to avoid having it grow so much again, make sure that you perform TLOG backups regularly (the frequency depends on your business needs; draw your own conclusions from your system activity). Also check the autogrow settings, in case a single growth step means a huge increase (imagine you set 120 GB!).
    If you still have problems, e.g. the TLOG keeps growing even though you back it up regularly, check whether there is an open transaction preventing the log from being cleared, as described in note 421644, which you already know.
    Good luck! 
    -Jesú

  • Backup log is terminating abnormally

    Hi Experts
    I am unable to take the backup to tape; I am getting the error "sqlstate 42000: operation on device R3DUMP exceeded retry count. Backup log is terminating abnormally."
    Thanks in advance
    Regards
    Venkat

  • SCCMReporting.log growing under %AppData%\Local\Temp

    I tried to look this up on the net but couldn't find anything. The problem is that the Reporting Services account on the reporting services point has a logfile called SCCMReporting.log under %AppData%\Local\Temp, and it consumes 30 GB of disk
    space. Any idea where this all comes from, or how to disable this log entirely? The reports themselves are all working fine.

    I already tried searching the registry; no go.
    After stopping the Reporting Services service I could delete the file and reclaim the used space, but when the service is started again the logfile starts growing again.
    Grab a copy of the logfile whilst it's still small, so you can analyse it.
    Don

  • Ons.log growing enormously in 10g on RHEL3

    This log is growing by between 5 GB and 10 GB per week.
    The entries are as follows:
    05/02/05 12:42:26 [4] Local connection 0,127.0.0.1,6100 missing form factor
    05/02/05 12:42:26 [4] Local connection 0,127.0.0.1,6100 missing form factor
    05/02/05 12:42:26 [4] Local connection 0,127.0.0.1,6100 missing form factor
    On my system the logs are located here:
    /oracle/product/10.1.0/ccd/opmn/logs
    Thanks
    George

    I've run into this problem myself and made the suggested change. The TNS listener is no longer running wild and adding to the log. The original Oracle instance seems to be behaving as well.
    Thank you very much for the tip.
    Follow-up question: I'm not well versed in this type of installation. I installed 10g under ...../product/10.2.0/Db_1 and then tried to install the HTTP server. The docs indicated that it wants its own ORACLE_HOME. So be it; it's in ...../product/10.2.0/Db_2. Now the questions arise:
    1) What's common between the two?
    2) Do I need to switch from one set of env vars to the other for startup/shutdown?
    3) /etc/init.d/*ora now do not look like they will work.
    Which doc covers this type of installation - and is this not recommended?
    My "play DBA" days were over several years ago.
    thanks again.

  • Csm log problem

    The log below is generated by the CSM.
    The server is correct (normal).
    The CSM is correct (normal).
    The service is correct (normal).
    So why does the CSM keep generating the log below?
    Mar 7 05:20:13: %CSM_SLB-6-RSERVERSTATE: Module 3 server state changed: SLB-NETMGT: Got different MAC address from server 100.8.50.34 in response to ARP
    Mar 7 05:20:13: %CSM_SLB-6-RSERVERSTATE: Module 3 server state changed: SLB-NETMGT: Got different MAC address from server 100.6.50.34 in response to ARP

    The message is just informing you that the CSM is getting a different MAC address each time it does an ARP request.
    So you have either a duplicate IP, a device doing proxy ARP, or something similar.
    Gilles.
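The diagnosis above can be cross-checked from the logs themselves. A minimal Python sketch (the message format is assumed from the two sample lines in this thread, not from any official CSM log grammar) that counts, per server IP, how often the CSM reported a changed MAC; an IP with a high count is a candidate duplicate IP or proxy-ARP victim:

```python
import re
from collections import Counter

# Assumed format, taken from the RSERVERSTATE samples in this thread.
MAC_CHANGE = re.compile(
    r"%CSM_SLB-6-RSERVERSTATE: .* Got different MAC address from server "
    r"(?P<ip>\d+\.\d+\.\d+\.\d+) in response to ARP"
)

def mac_conflict_counts(lines):
    """Count, per server IP, how many 'different MAC address' events appear."""
    counts = Counter()
    for line in lines:
        m = MAC_CHANGE.search(line)
        if m:
            counts[m.group("ip")] += 1
    return dict(counts)
```

Feeding it the syslog makes it obvious which real servers are affected and how often, which narrows down where to look for the duplicate IP.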

  • Redo log tuning - improving insert rate

    Dear experts!
    We have an OLTP system which produces a large amount of data. After each record written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit only every 10th record).
    So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
    Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
    Furthermore, I have heard about tuning the redo latches parameter. Does anyone have information about this approach?
    I would be grateful for any information!
    Thanks
    Markus

    > After each record written to our 11.2 database (Standard Edition) a commit is performed (the system architecture can't be changed - for example, to commit only every 10th record).
    Doing a commit after each insert (or other DML command) doesn't mean that the DB writer process actually writes the data to the data files immediately.
    The DBWriter process uses an internal algorithm to decide when to apply changes to the data files. You can adjust the write frequency with the "fast_start_mttr_target" parameter.
    > So how can we speed up the insert process? As the database in front of the system gets "mirrored" to our data warehouse system, it is running in NOARCHIVELOG mode. I've already tried placing the redo log files on SSD disks, which sped up the insert process.
    Placing the redo log files on SSD disks is indeed a good move. You can also check the buffer cache hit rate and size, and striping for the filesystems where the redo files reside should be taken into account.
    > Another idea is putting the table in a separate tablespace with the NOLOGGING option. What do you think about this?
    That is an extremely bad idea. The NOLOGGING option for a tablespace leads to an unrecoverable tablespace and, as stated in the first answer above, will not increase the insert speed.
    > Furthermore, I have heard about tuning the redo latches parameter.
    I don't think you need this.
    Better to check the indexes associated with the tables you insert into: are they analyzed regularly, and are all of them actually used? Many indexes are created for particular queries and are later left unused, yet every DML still updates all of them.

  • SNMP clogHistoryTable does not return CSM logs

    Good afternoon
    I'm trying to retrieve some specific logs through SNMP, and I can extract some logs from the clogHistoryTable MIB. But I get no output for the logs I'm trying to retrieve (from the CSM), which look like this:
    May 17 04:13:12: %CSM_SLB-6-RSERVERSTATE: Module 3 server state changed: SLB-NETMGT: HTTP health probe re-activated server 192.XXX.XXX.XXX:80 in serverfarm 'TEST-HTTP'
    These are the ones I need, and I can't find any OID that returns these logs.
    Does anyone know if there's an OID to extract them?
    Best regards

    Ajay
    Thank you for your reply.
    I've tried the OID you sent me, but it's not quite what I need. That query only returns a truth value indicating whether ciscoSlbRealStateChange is enabled or not.
    In my case I get an integer=2 output.
    What I really need is a query that returns a history of how many times and when (date) a specific probe succeeded or failed when testing a real. Basically I need a history of when a real was in service in a serverfarm and when it was not, and for this specific purpose I cannot set up a syslog server.
    I was trying to find an OID that returns the log; I would then parse the entries I need.
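The parse-the-log approach mentioned above can be sketched in Python. The line format is assumed from the single RSERVERSTATE sample in this thread, not from any official CSM syslog grammar, so the regex may need adjusting against real logs:

```python
import re

# Assumed format, based on the one sample RSERVERSTATE message above.
LINE = re.compile(
    r"(?P<ts>\w+\s+\d+\s+[\d:]+): %CSM_SLB-6-RSERVERSTATE: "
    r"Module \d+ server state changed: SLB-NETMGT: "
    r"(?P<event>.+?) server (?P<server>\S+) in serverfarm '(?P<farm>[^']+)'"
)

def probe_history(lines):
    """Extract (timestamp, server, farm, event) tuples from CSM syslog lines."""
    history = []
    for line in lines:
        m = LINE.search(line)
        if m:
            history.append(
                (m.group("ts"), m.group("server"), m.group("farm"), m.group("event"))
            )
    return history
```

Each matching syslog line yields a (timestamp, real server, serverfarm, event) tuple, which can then be aggregated into the in-service/out-of-service history the poster describes.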

  • Read and Log Data at Different Rates

    Hi all,
    I'm certain this is a fairly easy task with the right training, but I just can't seem to figure it out. What I would like to do is read from a source and display the values at one rate, while simultaneously storing the data, to be analyzed later, at a different rate. For example, I might read a thermocouple and display the readings every 3 seconds on screen, but only log the data every 10 seconds. What's the most efficient way to accomplish this? I've come up with approaches that gave me the results I want, but I know they weren't good programming practice. So, in an effort to get better, does anyone have any suggestions?
    Thanks.
    LabVIEW 2012 - Windows 7
    CLAD

    You could try something like the attached vi.
    Cheers!
    Attachments:
    Untitled 3.vi ‏11 KB
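The attached VI isn't visible here, but the underlying pattern - one acquisition loop with display and logging fired at independent periods - can be sketched outside LabVIEW. Below is a Python sketch with illustrative names; the clock and sleep functions are injectable so the timing logic can be exercised without real waiting:

```python
import time

def run(read_sample, display, log, duration,
        display_period=3.0, log_period=10.0, poll=0.5,
        now=time.monotonic, sleep=time.sleep):
    """Read continuously; fire display and log at independent rates."""
    start = now()
    next_display = next_log = start
    while now() - start < duration:
        value = read_sample()            # always acquire at the poll rate
        t = now()
        if t >= next_display:            # e.g. update the screen every 3 s
            display(value)
            next_display += display_period
        if t >= next_log:                # e.g. write to disk every 10 s
            log(value)
            next_log += log_period
        sleep(poll)
```

In LabVIEW terms the same idea is usually expressed as a producer/consumer design: one loop acquires, and elapsed-time checks (or two consumer loops fed by queues) gate the 3-second display and the 10-second logging independently.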

  • JRM.log growing back rapidly

    I have scheduled logrot for the jrm.log file to run every day, but jrm.log still grows back to 10 MB every day. Even as a test, I ran logrot, which reduced the size of jrm.log to 0 KB, but after a few seconds it grew back to 10 MB. Could someone please help me with this?

    The jrm.log can only be rotated in offline mode. That is, you need to run logrot.pl -s so that Daemon Manager is shut down first.

  • EM Application Log and Web Access Log growing too large on Redwood Server

    Hi,
    We have a storage space issue on our Redwood SAP CPS Oracle servers and have found that the two log files above are the main culprits. These files are continually updated, and I need to know what they are and whether they can be purged or reduced in size.
    They have existed since the system was installed. I have tried to open them, but they are too large. I have also tried taking the cluster group offline to see whether the files stop being updated, but they continue to be updated.
    Could anyone shed any light on this and on what can be done to resolve it?
    Thanks in advance for any help.
    Jason

    Hi David,
    The file names are:
    em-application.log and web access.log
    The File path is:
    D:\oracle\product\10.2.0\db_1\oc4j\j2ee\OC4J_DBConsole_brsapprdbmp01.britvic.BSDDRINKS.NET_SAPCPSPR\log
    Redwood/CPS version is 6.0.2.7
    Thanks for your help.
    Kind Regards,
    Jason

  • SQL Server Database - Transaction logs growing largely with Simple Recovery model

    Hello,
    There is a SQL Server database on the client side, in a production environment, with huge transaction logs.
    Requirement :
    1. Take database backup
    2. Transaction log backups are not required - so the database is set to the Simple recovery model.
    I am aware that the Simple recovery model grows the transaction log just as the Full recovery model does, as explained at the link below.
    http://realsqlguy.com/origins-no-simple-mode-doesnt-disable-the-transaction-log/
    Last week this transaction log grew to 1 TB in size and blocked everything on the database server.
    How can we overcome this situation?
    PS: There are huge bulk uploads to the database tables.
    Current Configuration :
    1. Simple Recovery model
    2. Target Recovery time : 3 Sec
    3. Recovery interval : 0
    4. No SQL Agent job scheduled to shrink the database.
    5. No other checkpoints created except automatic ones.
    Can anyone please guide me to the correct configuration for the client's production SQL Server?
    Please let me know if any other details are required from the server.
    Thank you,
    Mittal.

    @dave_gona,
    Thank you for your response.
    Can you please explain this in more detail:
    What do you mean by one batch?
    1. The number of rows inserted at a time?
    2. Or does the size of the data in one cell matter here?
    In my case, I am clubbing all the data together into one XML (on the C# side) and inserting it as a single record. The data is large, but only one record is inserted.
    Is it a good idea to shrink the transaction log periodically, since it does not shrink by itself in the simple recovery model?
    Hi Mittal,
    Shrinking is a bad practice; you should not shrink log files regularly. In rare cases where you need to recover space, you may do it.
    Use manual checkpoints in the bulk insert operation.
    I cannot say upfront what the batch size should be, but you can start with 1/4 of what you are currently inserting.
    Most importantly, what does the query below return for the database?
    select log_reuse_wait_desc from sys.databases where name='db_name'
    The value it returns is what is stopping the log from being cleared and reused.
    Which version and edition of SQL Server are we talking about? What is the output of
    select @@version
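The batching advice above can be sketched as follows. `insert_batch` is a hypothetical placeholder for whatever call performs the actual bulk insert (e.g. via pyodbc); the point is only that splitting one giant operation into smaller committed batches gives a Simple-recovery log a chance to truncate at the checkpoints between them, instead of having to hold the whole operation at once:

```python
def batches(rows, batch_size):
    """Yield successive batch_size-sized slices of rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def bulk_load(rows, insert_batch, batch_size):
    """Insert rows in independent batches, one committed transaction per call."""
    for batch in batches(rows, batch_size):
        insert_batch(batch)  # hypothetical bulk-insert-and-commit call
```

Following the reply above, a first experiment would be a batch_size of roughly a quarter of the current single-shot insert, then tuning from there while watching log_reuse_wait_desc.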

  • System Events Log growing in size

    My iMac is 4 months old and the System Events log is already 10 MB in size. I have a feeling that because of it the iMac's startup time has slightly increased (I switch it off at night).
    I tried utilities like Maintenance and Onyx to delete the file, without success. Is there a way to delete it? What will happen if I trash it manually?
    I'd appreciate any suggestions.

    OS X will automatically compress and discard old logs, IF you let it run during the night, or if you put it to sleep during the night. If you do a cold restart every morning, OS X won't automatically run the cleanup scripts.
    If you like to power it off when not in use, do the following:
    - on an admin account, open Terminal
    - enter 'sudo periodic daily weekly monthly', without the quotes, then press return. You will be asked for your admin password.
    - wait a few minutes until the command completes
    This will force the OS X maintenance scripts to run. It's not critical how often you do this, but once per month will be OK.
    Alternatively, there is a way to reschedule the daily, weekly and monthly scripts to run during the day.

  • DFM.log grows up to 3 Giga in 3 Weeks

    Hello,
    I'm using CiscoWorks on a Windows server system. Every 3 or 4 weeks my backup fails because of "no disk space".
    Searching for the reason, I found that DFM.log is too big. After removing this file the backup runs properly.
    Removing this file requires a stop/start of crmdmgtd.
    Looking into DFM.log, I see many lines of discovery output, and I suppose it is the output of a debug command.
    In DFM > Configuration > Logging, no logging level is configured.
    Months ago I had a problem with DFM and got help from Cisco through an open case. I remember that during the work on this case I was instructed to set up a few commands like:
    dmctl -s DFM put ICF_TopologyManager::ICF-TopologyManager::DebugEnabled TRUE
    Maybe I have to reverse these commands, but I am not able to use dmctl commands.

    Sorry - when I wrote "I am not able", I meant that I am not familiar with the usage of the dmctl command. In my last posting I gave only an example of a command.
    The full list of the commands I ran is this:
    1.      Enable the Discovery Logs.
    Go to NMSROOT/objects/smarts/bin then execute the following command.
           dmctl -s DFM put ICF_TopologyManager::ICF-TopologyManager::DebugEnabled TRUE
           dmctl -s DFM put ICF_TopologyManager::ICF-TopologyManager::LogDiscoveryProgress TRUE
           dmctl -s DFM put ICF_TopologyManager::ICF-TopologyManager::TraceRule TRUE
           dmctl -s DFM1 put ICF_TopologyManager::ICF-TopologyManager::DebugEnabled TRUE
           dmctl -s DFM1 put ICF_TopologyManager::ICF-TopologyManager::LogDiscoveryProgress TRUE
           dmctl -s DFM1 put ICF_TopologyManager::ICF-TopologyManager::TraceRule TRUE
    Is it possible to set them all to FALSE without stopping the normal work of the TopologyManager?
    thank you for your Help
    Eberhard
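If the six TRUE commands listed above are what enabled the extra discovery logging, the obvious candidate fix is to issue the same six commands with FALSE. Whether that is safe while the TopologyManager is running is exactly the open question here, so treat this as a sketch, not a confirmed procedure. A small Python wrapper (the command runner is injectable, so the generated commands can be inspected without dmctl installed):

```python
import subprocess

PROPS = ("DebugEnabled", "LogDiscoveryProgress", "TraceRule")

def disable_discovery_logs(servers=("DFM", "DFM1"), run=subprocess.run):
    """Issue the dmctl commands that set the three debug flags back to FALSE.

    Mirrors the TRUE commands listed in the thread; run defaults to
    subprocess.run but can be swapped for a recorder when testing.
    """
    issued = []
    for server in servers:
        for prop in PROPS:
            cmd = ["dmctl", "-s", server, "put",
                   f"ICF_TopologyManager::ICF-TopologyManager::{prop}", "FALSE"]
            run(cmd)
            issued.append(cmd)
    return issued
```

As with the original TRUE commands, this would be run from NMSROOT/objects/smarts/bin on the CiscoWorks server.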

  • Archive log growing rapidly

    Hi,
    In our database we are suddenly seeing rapid growth of the archive logs.
    How can we find which session is causing the growth?
    Thanks in advance.
    Regards
    JM

    Found the answer in the thread "Re: How to check user/session generating a lot of archivelogs?"
