Cisco LMS - logs exceed limit

Hi friends,
First of all, let me introduce myself since I'm new to the Cisco community. I'm Syafiq, currently working as a network administrator in Malaysia.
I need guidance on Cisco LMS as I'm really new to the LMS system. Currently I'm having an issue where the log files are exceeding the size limit set by the previous administrator. Can anyone help me with how to back up or remove the log files and move them to a new location, i.e. a new drive?
Thanks in advance!

Hi,
Welcome to CSC.
CiscoWorks has many features, and each of those features has dedicated processes. All of those processes, and many CiscoWorks agents, write their output (logging) to *.log files. Most of them are under NMSROOT/logs.
These files are usually not important (except, mostly, syslog.log) unless there is an issue with some LMS function that needs to be troubleshot.
These log files should not be deleted, nor does their data need to be preserved for long, unless your organisation's policy says so.
Because processes write to these log files constantly, they can grow and sometimes become extravagantly huge, so their size needs to be controlled. This is why the Logrot feature was included.
Logrot is a log rotation program that lets you control the growth of the log files. It helps you to:
•Rotate log files while CiscoWorks is running.
•Optionally archive and compress rotated logs.
•Rotate log files only when they have reached a particular size.
You can configure it from :
Common Services > Server > Admin > Log Rotation
Please check the Maintaining Log Files section of the user guide for more details.
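If you need to reclaim space once before Logrot takes over, here is a minimal sketch, assuming a Linux/Solaris-style install under /opt/CSCOpx with the new drive mounted at /newdrive (the daemon manager init script name and the paths are assumptions; on Windows the equivalent stop/start is "net stop crmdmgtd" / "net start crmdmgtd"):

/etc/init.d/dmgtd stop                            # stop the CiscoWorks daemon manager so nothing holds the log files
mkdir -p /newdrive/lms-log-archive
tar czf /newdrive/lms-log-archive/lms-logs-$(date +%F).tar.gz /opt/CSCOpx/log
for f in /opt/CSCOpx/log/*.log; do : > "$f"; done # truncate (do not delete) each log file in place
/etc/init.d/dmgtd start

After that, configure Logrot as described above so the files are archived and kept under a size limit automatically.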
-Thanks
Vinod

Similar Messages

  • Cisco LMS 3.2 SYSLOG not storing after 10 days

    Hi ,
    I'm facing an issue with Cisco LMS 3.2.
    Issue: syslogs are generated for only 10 days, and after that I am not able to see the logs. I have not made any configuration changes; the only change I made was to completely reinstall LMS. I have done multiple rounds of troubleshooting but am not able to resolve this issue. It would be great if someone is able to help me with it. Thanks.
    Regards,
    Juliet

    Dear Vinod,
    Thanks for your response; the problem has been resolved.
    The purge policy was set to 60 days only; the problem was in the report viewing settings.
    The Syslog folder under LMS stores syslog reports for both devices and applications up to a defined folder size, which in our case was 1 MB (this can be viewed under the log generator option). The older reports get deleted from the folder when the limit is reached.
    The only way to view device syslogs is via the following option: Reports -> Report Generator in the LMS GUI, where you have to choose Syslog with the desired attributes.
    Regards,
    Juliet

  • Cisco LMS 4.0 Graph some time not coming

    I have installed Cisco LMS 4.0 and added 10 devices. Previously it was working fine, but for the last few days the syslog and graphs sometimes do not appear. After every reboot of the server it starts working again; this happens daily. Please help me with a permanent solution.
    I have also manually added one 7609-S router, but I am not able to see it in CiscoView.
    Error:-
    Cannot find applicable device package for 10.133.224.131.
    This error could be due to one of the following:
    - The device package for this device type is not installed.
    - Device support for this device type is not available.
    - You are trying to open a component inside a device.
    To correct the problem, either install a device package for the device type, or open the parent device to manage the component.
    In the device attributes it shows as an 868 integrated router. I have tried deleting and re-adding it, but the problem is still the same.
    Windows 2008 R2 Standard
    RAM: 16 GB, swap: 8096 MB

    Everything looks good. Are you able to access LMS on the server itself? Try installing another browser on the server and logging in.
    Try both :
    http://x.x.x.x:1741
    https://x.x.x.x:443
    Share NMSROOT/MDC/tomcat/logs/stdout.log and stderr.log.
    -Thanks

  • Cisco lms - backup out of the box

    When using the Cisco LMS 4.2.5 Linux version, what would be the best way to back up the config files off the box?
    What I'm looking for is a way to copy the shadow folder to another box for disaster recovery.

    Hi, I found this solution to work very well for exporting files off the box:
    As user root:
    1) On the cisco-lms server, install sshfs, which lets you mount a folder on a remote box through SFTP.
    Install in the following order:
    rpm -ivh fuse-libs-2.7.4-8.el5.x86_64.rpm
    rpm -ivh fuse-2.7.4-8.el5.x86_64.rpm
    rpm -ivh fuse-sshfs-2.5-1.el5.rf.x86_64.rpm
    2) mount 
    mkdir /mnt/backup
    sshfs root@<remote-linux-box-ip>:<remote-folder> /mnt/backup -o allow_other
    3) rsync with cron
    00 1 * * * rsync -r -v /var/adm/CSCOpx/files/rme/dcma/shadow/* /mnt/backup/ >> /root/rsync.log 2>&1
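    One small hedged addition: if the sshfs mount ever drops, the cron rsync above would silently copy into the local /mnt/backup directory instead of the remote box. A sketch of a wrapper script that checks the mount first (same paths as in the example above; mountpoint ships with util-linux/sysvinit-tools on most distributions):
    #!/bin/sh
    # run this from cron instead of calling rsync directly; sync only if /mnt/backup is really mounted
    if mountpoint -q /mnt/backup; then
        rsync -r -v /var/adm/CSCOpx/files/rme/dcma/shadow/ /mnt/backup/ >> /root/rsync.log 2>&1
    else
        echo "$(date): /mnt/backup not mounted, skipping sync" >> /root/rsync.log
    fi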

  • ORA-19566: exceeded limit of 0 corrupt blocks

    Hi All,
    We have been encountering some issues with RMAN backups; they keep failing with the same error (maximum corrupt blocks exceeded). I ran dbverify against the affected files and found that indexes are failing. When I tried to locate those indexes through the extent views, I was unable to find them. It looks like these blocks are in free space, which is where I found them, and the V$BACKUP_CORRUPTION view also shows the logical corruption.
    Waiting for your suggestions....
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bi
    PL/SQL Release 10.2.0.3.0 - Production
    CORE 10.2.0.3.0 Production
    TNS for HPUX: Version 10.2.0.3.0 - Production
    NLSRTL Version 10.2.0.3.0 - Production
    RMAN LOG:
    channel a3: starting piece 1 at 14-DEC-09
    RMAN-03009: failure of backup command on a2 channel at 12/14/2009 05:43:42
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub834/oradata/TERP/applsysd142.dbf
    continuing other job steps, job failed will not be re-run
    channel a2: starting incremental level 0 datafile backupset
    channel a2: specifying datafile(s) in backupset
    including current control file in backupset
    channel a2: starting piece 1 at 14-DEC-09
    channel a1: finished piece 1 at 14-DEC-09
    piece handle=TERP_1769708180_level0_292_1_1_20091213065437.rmn tag=TAG20091213T065459 comment=API Version 2.0,MMS Version 5.0.0.0
    channel a1: backup set complete, elapsed time: 01:14:45
    channel a2: finished piece 1 at 14-DEC-09
    piece handle=TERP_1769708180_level0_296_1_1_20091213065437.rmn tag=TAG20091213T065459 comment=API Version 2.0,MMS Version 5.0.0.0
    channel a2: backup set complete, elapsed time: 00:24:54
    RMAN-03009: failure of backup command on a4 channel at 12/14/2009 06:14:33
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub834/oradata/TERP/applsysd143.dbf
    continuing other job steps, job failed will not be re-run
    released channel: a1
    released channel: a2
    released channel: a3
    released channel: a4
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on a3 channel at 12/14/2009 06:41:00
    ORA-19566: exceeded limit of 0 corrupt blocks for file /ub806/oradata/TERP/icxd01.dbf
    Recovery Manager complete.
    Thanks,
    Vimlendu
    Edited by: Vimlendu on Dec 20, 2009 10:27 AM

    dbv file=/ora/oradata/binadb/RAT_TRANS_IDX01.dbf blocksize=8192
    The result:
    DBVERIFY: Release 10.2.0.3.0 - Production on Thu Nov 20 11:14:01 2003
    (c) Copyright 2000 Oracle Corporation. All rights reserved.
    DBVERIFY - Verification starting : FILE =
    /ora/oradata/binadb/RAT_TRANS_IDX01.dbf
    Block Checking: DBA = 75520968, Block Type = KTB-managed data block
    **** row 80: key out of order
    ---- end index block validation
    Page 23496 failed with check code 6401
    DBVERIFY - Verification complete
    Total Pages Examined : 34560
    Total Pages Processed (Data) : 1
    Total Pages Failing (Data) : 0
    Total Pages Processed (Index): 31084
    Total Pages Failing (Index): 1
    Total Pages Processed (Other): 191
    Total Pages Empty : 3284
    Total Pages Marked Corrupt : 0
    Total Pages Influx : 0
    It seems I have one page failing. I tried running this query:
    select segment_type, segment_name, owner
    from sys.dba_extents
    where file_id = 18 and 23496 between block_id
    and block_id + blocks - 1;
    No rows returned.
    Then I tried running this query:
    Select tablespace_name, file_id, block_id, bytes
    from dba_free_space
    where file_id = 18
    and 23496 between block_id and block_id + blocks - 1
    This returned one row.
    So it seems the possibly corrupt block is in unused space.
    Edited by: Vimlendu on Dec 20, 2009 2:30 PM
    Edited by: Vimlendu on Dec 20, 2009 2:41 PM
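    Since the corruption appears to sit in free space, a commonly used follow-up (a hedged sketch, not a definitive fix) is to re-validate the file and, if the backup must complete before the block gets reformatted by reuse, let RMAN tolerate a limited number of corrupt blocks in that one file. The file number 18 comes from the queries above; the maxcorrupt value of 10 is only an example:

    # 1) re-check the file and repopulate v$database_block_corruption
    echo "backup validate check logical datafile 18;" | rman target /
    # 2) if the only corrupt blocks are the free-space ones, allow the backup to tolerate a few of them
    echo "run { set maxcorrupt for datafile 18 to 10; backup incremental level 0 datafile 18; }" | rman target /

    Afterwards, check v$database_block_corruption (and the dba_free_space query above) to confirm the corrupt blocks really are only in free space.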

  • Alerts from Cisco LMS

    Dear all, I have some concerns about SNMP and Cisco LMS:
    Our network device sent a LINK DOWN trap to Cisco LMS followed by a LINK UP trap. I can see these messages in the device logs. My questions are:
    1 - If the device is configured correctly to send SNMP traps, and the syslog shows both messages (LINK DOWN, LINK UP), can we say the device is sending the SNMP traps?
    2 - Why is the LINK UP trap missing in Cisco LMS? What should we check to make sure LMS is sending and receiving these alerts properly?
    Thanks in advance.

    Hi,
    1 - If the device is configured correctly to send SNMP traps, and the syslog shows both messages (LINK DOWN, LINK UP), can we say the device is sending the SNMP traps?
    Syslogs are different from SNMP traps. Seeing syslog messages for link-up and link-down does not mean the device is sending the corresponding SNMP traps as well.
    2 - Why is the LINK UP trap missing in Cisco LMS? What should we check to make sure LMS is sending and receiving these alerts properly?
    You can verify the SNMP traps with a packet capture.
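    For example, a quick check from the LMS server itself (a hedged sketch: it assumes a Linux install, that traps arrive on the default UDP port 162, and that eth0 is the receiving interface):

    tcpdump -n -vv -i eth0 udp port 162    # shut/no shut the interface and watch for linkDown/linkUp traps arriving

    If nothing arrives, also confirm on the device that "snmp-server enable traps snmp linkup linkdown" is configured and that an "snmp-server host" entry points at the LMS server.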
    Thanks-
    Afroz
    ****Ratings Encourages Contributors ****

  • FullOffline Backup - ORA-19566: exceeded limit of 0 corrupt blocks for file

    Dear SAP gurus,
    I am getting an error from the DBA Planning Calendar every time the "Full Offline backup" job runs, and as you can see from the log it always fails on the same file, /oracle/SHD/sapdata4/sr3_16/sr3.data16.
    The Oracle error is the following:
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    I found SAP Note 969192 - RMAN Backup of SYSTEM tablespace terminates with ORA-19566,
    but it does not apply because it is for the SYSTEM tablespace, not PSAPSR3.
    Please find below the log:
    BR0051I BRBACKUP 7.00 (46)
    BR0055I Start of database backup: begomwsv.ffd 2011-08-17 10.01.37
    BR0484I BRBACKUP log file: /oracle/SHD/sapbackup/begomwsv.ffd
    BR0477I Oracle pfile /oracle/SHD/102_64/dbs/initSHD.ora created from spfile /oracle/SHD/102_64/dbs/spfileSHD.ora
    BR0101I Parameters
    Name                           Value
    oracle_sid                     SHD
    oracle_home                    /oracle/SHD/102_64
    oracle_profile                 /oracle/SHD/102_64/dbs/initSHD.ora
    sapdata_home                   /oracle/SHD
    sap_profile                    /oracle/SHD/102_64/dbs/initSHD.sap
    backup_mode                    FULL
    backup_type                    offline_force
    backup_dev_type                disk
    backup_root_dir                /mnt/backup/oracle/SHD
    compress                       no
    disk_copy_cmd                  rman
    cpio_disk_flags                -pdcu
    exec_parallel                  0
    rman_compress                  no
    system_info                    shdadm/orashd eccdev01 Linux 2.6.16.60-0.87.1-smp #1 SMP Wed May 11 11:48:12 UTC 2011 x86_64
    oracle_info                    SHD 10.2.0.4.0 8192 17654 1114483454 eccdev01 UTF8 UTF8
    sap_info                       700 SAPSR3 0002LK0003SHD0011Y01548735220015Maintenance_ORA
    make_info                      linuxx86_64 OCI_102 Jan 29 2010
    command_line                   brbackup -u / -jid FLLOF20110817100136 -c force -t offline_force -m full -p initSHD.sap
    BR0116I ARCHIVE LOG LIST before backup for database instance SHD
    Parameter                      Value
    Database log mode              No Archive Mode
    Automatic archival             Disabled
    Archive destination            /oracle/SHD/oraarch/SHDarch
    Archive format                 %t_%s_%r.dbf
    Oldest online log sequence     17651
    Next log sequence to archive   17654
    Current log sequence           17654            SCN: 1114483454
    Database block size            8192             Thread: 1
    Current system change number   1114501246       ResetId: 664011854
    BR0118I Tablespaces and data files
    BR0202I Saving /oracle/SHD/sapdata3/sr3_15/sr3.data15
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data15 ...
    #FILE..... /oracle/SHD/sapdata3/sr3_15/sr3.data15
    #SAVED.... /mnt/backup/oracle/SHD/begomwsv/sr3.data15  #1/15
    BR0280I BRBACKUP time stamp: 2011-08-17 10.28.42
    BR0063I 15 of 48 files processed - 44100.117 of 121180.346 MB done
    BR0204I Percentage done: 36.39%, estimated end time: 11:15
    BR0001I ******************________________________________
    BR0202I Saving /oracle/SHD/sapdata4/sr3_16/sr3.data16
    BR0203I to /mnt/backup/oracle/SHD/begomwsv/sr3.data16 ...
    BR0278E Command output of 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog':
    Recovery Manager: Release 10.2.0.4.0 - Production on Wed Aug 17 10:28:42 2011
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    RMAN>
    RMAN> connect target *
    connected to target database: SHD (DBID=1683093070, not open)
    using target database control file instead of recovery catalog
    RMAN> *end-of-file*
    RMAN>
    host command complete
    RMAN> 2> 3> 4> 5> 6>
    allocated channel: dsk
    channel dsk: sid=223 devtype=DISK
    executing command: SET NOCFAU
    Starting backup at 17-AUG-11
    channel dsk: starting datafile copy
    input datafile fno=00019 name=/oracle/SHD/sapdata4/sr3_16/sr3.data16
    released channel: dsk
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of backup command on dsk channel at 08/17/2011 10:30:30
    ORA-19566: exceeded limit of 0 corrupt blocks for file /oracle/SHD/sapdata4/sr3_16/sr3.data16
    RMAN>
    Recovery Manager complete.
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0279E Return code from 'SHELL=/bin/sh /oracle/SHD/102_64/bin/rman nocatalog': 1
    BR0536E RMAN call for database instance SHD failed
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.30
    BR0506E Full database backup (level 0) using RMAN failed
    BR0222E Copying /oracle/SHD/sapdata4/sr3_16/sr3.data16 to/from /mnt/backup/oracle/SHD/begomwsv failed due to previous errors
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0307I Shutting down database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0308I Shutdown of database instance SHD successful
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.34
    BR0304I Starting and opening database instance SHD ...
    BR0280I BRBACKUP time stamp: 2011-08-17 10.30.47
    BR0305I Start and open of database instance SHD successful
    Do you guys have any idea how to solve this issue?
    Thanks in advance, Marc

    Hi,
    I am getting an error from the DBA Planning Calendar every time the job ...
    So when was your last successful backup of this datafile? Check whether it is still available.
    If that was some time ago, and you are possibly without any usable backup at the moment, take a backup without RMAN right away,
    so that you at least have something to work with in case you run into additional errors now.
    Then you need to find out which object is affected. You are on the right track already: you need the statement
    that goes against dba_extents to check which object the block belongs to.
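    For reference, a hedged sketch of that lookup run from the shell (file 19 is the fno reported for sr3.data16 in the RMAN output above; the block number 12345 is only a placeholder, substitute the one from the ORA-19566/dbv output):

    echo "select owner, segment_name, segment_type from dba_extents where file_id = 19 and 12345 between block_id and block_id + blocks - 1;" | sqlplus -s "/ as sysdba"

    If it returns no rows, run the same predicate against dba_free_space; a block that only shows up there is in free space and is generally reformatted when that space is reused.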
    Has the DB been recovered recently, so that the block might belong to an index created with nologging?
    (This can be the case on BW systems.)
    If the last good backup of that file is still available, together with the redo logs from that backup up to the current time, you could try to recover that file. But I would do this only after a good non-RMAN backup, and without destroying the original file.
    If the last good backup was an RMAN backup, you can do a verify restore of that datafile in advance, to check that the corruption is really not inside the file to be restored.
    Check out the -w (verify) option of brrestore first, to understand how it works.
    (I am not sure whether this is already available in version 7.00; you may need to switch to 7.10 or 7.20.)
    brrestore -c -m /oracle/SHD/sapdata4/sr3_16/sr3.data16  -b xxxxxxxx.ffr -w only_rmv
    You should run a dbv check of that file as well, to see whether it gives more information, i.e. whether more blocks are
    affected. RMAN stops right after the first corruption, but usually you have several of them in a row, especially if they are
    zeroed ones. (This one also works with the version 7.00 brtools.)
    brbackup -c -u / -t online -m /oracle/SHD/sapdata4/sr3_16/sr3.data16 -w only_dbv
    Good luck.
    Volker

  • Cisco lms - config collection

    Hi, I'm looking for a way to collect configs from specific devices periodically. Is this possible?
    For example, collect configs from the firewalls every day at midnight, and collect the router configs once a week.
    Firewalls: about 3 devices, config changes every day.
    Routers: about 800 devices, configs rarely change.
    The only config collection schedule I have found so far is too simple:
    Admin > Collection Settings > Config > Config Collection Settings > Periodic Collection
    Second question: is it possible for Cisco LMS to send the diff of the config pulled from devices via e-mail? I mean the specific commands that have appeared in the new configuration file.
    Regards, Ignacio

    Unfortunately, the LMS syslog mechanism is very minimalistic and doesn't have a lot of options.
    However, the feature you're requesting doesn't really depend on LMS. CiscoWorks depends on the kind of syslog message it receives from the device; based on that, it matches on certain text to send a notification as an automated action.
    So usually it is the device that won't send much information about which change was made by which user in its normal IOS syslog messages.
    But to a certain extent, you can configure your devices with the configuration-change logger to get details on what changes were made by users and check them in the syslog report, or configure AAA on all or the important devices.
    You can enable a configuration logger to keep track of configuration changes made with the command-line interface (CLI). When you enter the logging enable configuration-change logger configuration command, the log records the session, the user, and the command that was entered to change the configuration. You can configure the size of the configuration log from 1 to 1000 entries (the default is 100). You can clear the log at any time by entering the no logging enable command followed by the logging enable command to disable and reenable logging.
    Use the show archive log config {all | number [end-number] | user username [session number] number [end-number] | statistics} [provisioning] privileged EXEC command to display the complete configuration log or the log for specified parameters.
    This example shows how to enable the configuration-change logger and to set the number of entries in the log to 500:
     Switch(config)# archive 
     Switch(config-archive)# log config
     Switch(config-archive-log-cfg)# logging enable
     Switch(config-archive-log-cfg)# logging size 500
     Switch(config-archive-log-cfg)# end
    So, all in all, it depends on the device and the kind of syslogs it sends for LMS to react to.
    -Thanks
    Vinod
    **Encourage Contributors. RATE Them.**

  • My iTunes Match won't load past step 1; All songs in my library say "Exceeded Limit", even the ones that were uploaded before.

    My iTunes Match hasn't been working for a couple of days. It said I had exceeded the song limit, so I was going through my library and clearing out some songs. When I deleted the songs, the box that asked if I wanted to delete them from the cloud wasn't showing up. I decided to update my iTunes Match before deleting anything else. When I attempted the update, Step 1 moved incredibly slowly, then stopped working before reaching Step 2. When this has happened in the past, I have been advised to try several troubleshooting tactics from the support forum and from writing to Apple support in the past. The first one is to turn off iTunes Match, restart the computer, and turn on iTunes Match again. I did that. The same problem occurred. I tried the second trouble shooting tactic, which was to turn off iTunes Match, close iTunes, find the iTunes Library Genius.itdb and iTunes Library Extras.itdb and move them to the trash, then run a Repair Disk Permissions using Disk Utility, then turn iTunes Match back on. I tried that. The same problem occurred. I went to my iTunes library to see the iCloud status of my songs, and every single song was listed as not being in the cloud yet and had the "Exceeded Limit" status, even the songs that have been in my cloud for over a year. I checked a few of the forum topics here on the support site. One suggested opening up iTunes while holding down the Option button, creating a new library, then turning on iTunes Match and deleting songs from the cloud that way. Then open up the original library and turn on iTunes Match. I did that. The status of my songs hasn't changed, nor can iTunes Match load past Step 1. Does anyone know what I can do to fix this?

    Maybe I didn't explain it properly, but that's one of the things I described trying in my first post. It helped get songs out of my cloud that I didn't need there, but iTunes Match still couldn't get past the first step. Last night I deleted the iTunes Library Extras.itdb, iTunes Library Genius.itdb, iTunes Library.itl, and iTunes Library.xml files from my computer, entered Time Machine, and recovered those same files that had been backed up after the last time I added music (the time the problem began). After recovering those files I opened iTunes and turned Match back on. For some reason it worked, even though those were the files that existed when the problem initially occurred. The combination of having songs removed from the cloud as well as restoring those original files must have done something, but I have no idea why that fixed the problem.

  • ORA-19566: exceeded limit of 999 corrupt blocks for file

    Hi All,
    I am new to Oracle RMAN and RAC administration. I am looking for your support to solve the issue below.
    We have two disk groups, +ETDATA and +ETFLASH, in our 3-node RAC environment, in which RMAN is configured on node 2 to take backups. We do not have an RMAN catalog, so RMAN fetches its information from the control file.
    Recently, the backup failed with the error ORA-19566: exceeded limit of 999 corrupt blocks for file +ETFLASH/datafile/users.6187.802328091.
    We found that datafiles are present in both disk groups, and from the control file info we learned that the datafiles in +ETDATA are currently in use while +ETFLASH holds old datafiles.
    RMAN> show all;
    RMAN configuration parameters for database with db_unique_name LABWRKT are:
    CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 3 DAYS;
    CONFIGURE BACKUP OPTIMIZATION ON;
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP ON;
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/CONTROLFILE/snapcf_LABWRKT.f';
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+ETFLASH/controlfile/snapcf_labwrkt.f';
    The above configuration shows that the snapshot controlfile points to +ETFLASH, so I changed the configuration so that the snapshot controlfile points to '+ETDATA/controlfile/snapcf_labwrkt.f'. At the end of the backup, the snapshot file was created in +ETDATA, and I expected it to be a copy of the control file in use, which has the datafiles located in +ETDATA. But the backup was still pointing to the old datafiles in +ETFLASH. Since we don't have an RMAN catalog, a resync is not possible either.
    When I ran it manually, it was successful without any error and pointed to the existing datafiles.
    RMAN> backup database plus archivelog all;
    I hope the issue will get resolved if RMAN points only to the datafiles present in +ETDATA. If I am correct, please let me know how I can make that happen. Also, please explain why the newly created snapshot file does not reflect the existing control file info.

  • Cisco LMS 4.0.1 to Prime Infrastructure 2.x Upgrade

    Hello,
    We are using Cisco LMS and want to upgrade to Cisco Prime Infrastructure.
    We have now received the following upgrade license: L-L-PI2X-100-U - LMS to Prime Infrastructure 2.x Upgrade, 100 Devices,
    and I wonder what the best way to upgrade is.
    After a long search I found the following document on cisco.com:
    http://www.cisco.com/c/en/us/products/collateral/cloud-systems-management/prime-infrastructure/guide-c07-729990.html#wp9000727
    There, only the upgrade from LMS 4.2 to Prime is discussed.
    The problem is that we cannot update to version 4.2 because of an error. That is one reason why we need to upgrade to Prime.
    Is the only possibility now to export the databases?
    Thanks in advance

    Hi Heiko,
    To upgrade from LMS 4.0.1 to LMS 4.2 you need a new license. Is that the error you are getting?
    Upgrading from LMS to PI is almost like a fresh installation of PI, because the only thing you can import from LMS into PI is the device list.
    You can download the software from the link below:
    http://software.cisco.com/download/release.html?mdfid=284422771&flowid=45323&softwareid=284272932&release=2.0.0&relind=AVAILABLE&rellifecycle=&reltype=all
    Installation guide:
    http://www.cisco.com/c/en/us/td/docs/net_mgmt/prime/infrastructure/2-0/install/guide/Cisco_PI_Hardware_Appliance_Installation_Guide/cpi_higmain.html
    Hope it helps.
    Thanks-
    Afroz
    [Do rate the useful post]
    ****Ratings Encourages Contributors ****

  • Error in Cisco LMS

    Hi,
    I get this error when I run some actions in Cisco LMS 3.2, e.g. if I click Campus Manager - Start Data Collection it gives the error "you are not access to request this action".
    What could the problem be?
    Please find the attachment for your reference.
    Please suggest; waiting for your prompt reply.
    Thanks

    To fix the database issue, you need to reinitialize the ANI database:
    1) Stop the daemon manager:
    "net stop crmdmgtd"
    2) Go to NMSROOT\bin and run:
     NMSROOT\bin\perl.exe NMSROOT\bin\dbRestoreOrig.pl dsn=ani dmprefix=ANI
    For Linux:
    /opt/CSCOpx/bin/dbRestoreOrig.pl dsn=ani dmprefix=ANI
     3) Wait a few minutes before starting the CiscoWorks daemons again:
     "net start crmdmgtd"   (wait at least 5 minutes before using CiscoWorks again)
    Thanks-
    Afroz
    ***Ratings Encourages Contributors ***

  • Cisco LMS 4.2 Appliance on VMware vSphere 4.0

    Hi all,
    I'm currently trying to install the Cisco LMS 4.2 Appliance on a VMware vSphere 4.0 environment.
    I'm following the http://www.cisco.com/en/US/docs/net_mgmt/ciscoworks_lan_management_solution/4.2/install/guide/instl.html#wp1689675
    guide.
    I downloaded the Cisco_Prime_LAN_Management_Solution_4_2.iso and I started the server.
    I get this screen and I choose option 1:
    A Welcome message appears:
    Welcome to Cisco Prime LAN Management Solution 4.2
    To boot from hard disk, press <Enter>
    The following options appears:
    •[1] Cisco Prime LAN Management Solution 4.2 Installation
    This option allows you to perform installation of LMS 4.2.
    •[2] Recover administrator password
    This option allows you to recover the administration password. Follow Step 3 to Step 17 of Recovering Admin Password on Soft Appliance Installed on VMware, to recover the administration password.
    •<Enter> Boot existing OS from Hard Disk
    This allows you to boot up the existing OS available in the hard disk.
    However, unlike the installation on my laptop (using VMware Workstation) I cannot get to this part:
    Enter the following configuration details of the server:
    •Hostname
    •IP Address
    •Subnetmask
    •Gateway
    •DNS Domain
    et cetera
    What follows looks like an installation to me.
    When this stops I get a
    localhost login:
    prompt. I cannot enter setup, as I get a password prompt afterwards.
    Can somebody please help me, as I have no idea what went wrong.
    Many thanks in advance.

    Thanks for your help, Marvin.
    We went ahead with the .ova file and it worked fine.
    We also did a little test, and it seems the .iso file does work on some VMware platforms. Our test environment had VMware vSphere 4.1; when we started a VM with the same specs and the ISO file mounted, the installation succeeded.

  • Listener.log size limit on Linux 64-bit

    Hi!
    We have a listener.log file that grows very fast because of a very active database. Every month or two I truncate the file to free up disk space, but this time I forgot to do it for a while.
    The file grew to 4294967352 bytes and stopped at that size. Everything is working as it should with the listener service; only the listener.log file is no longer being updated.
    I've tried to search for more information about the listener.log size limit but haven't found an answer that satisfies me.
    Where can I find more information on why my listener.log file is limited to 4294967352 bytes?
    I suppose this is some OS limit, but how can I check that?
    It is a 64-bit Linux OS with Oracle 10.2.0.4.
    Thanks for possible answers and best regards,
    Marko Sutic

    Ah, yes... thanks, Sybrand, for the reminder; my brain just stopped working :)
    I just resolved my problem:
    LSNRCTL> set current_listener LISTENER_DB
    Current Listener is LISTENER_DB
    LSNRCTL> set log_file listener_db1
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.2.10.40)(PORT=1521)))
    LISTENER_DB parameter "log_file" set to listener_db1.log
    The command completed successfully
    LSNRCTL> set log_file listener_db
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.2.10.40)(PORT=1521)))
    LISTENER_DB parameter "log_file" set to listener_db.log
    The command completed successfully
    LSNRCTL>
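    For future housekeeping, here is a hedged sketch of the same rotation done from cron instead of an interactive LSNRCTL session (the listener name and log path are assumptions taken from the session above; adjust to your environment). The 4294967352 bytes you hit is essentially the 4 GB mark, which is where the 10g listener stops appending to its log, so rotating before that keeps logging alive:

    # rotate listener_db.log without restarting the listener
    { echo "set current_listener LISTENER_DB"; echo "set log_status off"; } | lsnrctl
    mv $ORACLE_HOME/network/log/listener_db.log $ORACLE_HOME/network/log/listener_db.log.$(date +%Y%m%d)
    { echo "set current_listener LISTENER_DB"; echo "set log_status on"; } | lsnrctl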
    Regards,
    Marko

  • Project duration exceeds limit

    Hey,
    I made a movie in iMovie and exported it to iDVD. After the export process it says "project duration exceeds limit".
    My movie is 82 minutes long and 4.35 GB. Why can it not be burned onto a 4.7 GB DVD?
    Is there a possibility to burn it onto two different DVDs? I do not want to lose any quality.
    Thanks in advance
    Thomas

    Possibly you set the wrong encoding setting:
    iDVD encoding settings:
    http://docs.info.apple.com/article.html?path=iDVD/7.0/en/11417.html
    Short version:
    Best Performance is for videos of up to 60 minutes
    Best Quality is for videos of up to 120 minutes
    Professional Quality is also for up to 120 minutes but even higher quality (and takes much longer)
    That was for single-layer DVDs. Double these numbers for dual-layer DVDs.
    Professional Quality: The Professional Quality option uses advanced technology to encode your video, resulting in the best quality of video possible on your burned DVD. You can select this option regardless of your project’s duration (up to 2 hours of video for a single-layer disc and 4 hours for a double-layer disc). Because Professional Quality encoding is time-consuming (requiring about twice as much time to encode a project as the High Quality option, for example) choose it only if you are not concerned about the time taken.
    In both cases the maximum length includes titles, transitions and effects etc. Allow about 15 minutes for these.
    You can use the amount of video in your project as a rough determination of which method to choose. If your project has an hour or less of video (for a single-layer disc), choose Best Performance. If it has between 1 and 2 hours of video (for a single-layer disc), choose High Quality. If you want the best possible encoding quality for projects that are up to 2 hours (for a single-layer disc), choose Professional Quality. This option takes about twice as long as the High Quality option, so select it only if time is not an issue for you.
    Use the Capacity meter in the Project Info window (choose Project > Project Info) to determine how many minutes of video your project contains.
    NOTE: With the Best Performance setting, you can turn background encoding off by choosing Advanced > “Encode in Background.” The checkmark is removed to show it’s no longer selected. Turning off background encoding can help performance if your system seems sluggish.
    And whilst checking these settings in iDVD Preferences, make sure that the settings for NTSC/PAL and DV/DV Widescreen are also what you want.
    http://support.apple.com/kb/HT1502?viewlocale=en_US
