Upgrading controller cache

Is there anything software- or firmware-related that needs to be considered when upgrading controller caches? I am swapping the 128 MB caches for 512 MB ones on a G4 Xserve RAID. The hardware instructions are simple: just unclip, remove, and exchange.

Can anyone share what the RAM specs were? I have an older RAID with a 128 MB cache; these modules are described as PC100 SODIMMs. I want to go to 512 MB. I would prefer using Apple RAM, which I can get, but I'd have to order as though for an older PowerBook G4 Titanium. Has anyone used PC133?
Thanks!
Dean

Similar Messages

  • Controllable Cache Store issue?

    Hi,
    I am trying to use the controllable cache store, but I am getting the following error. Below I have described the scenario, the error, and the environment. Please help.
    Scenario:
    I have multiple caches which use the distributed scheme, and I created one cache called the control cache, also with a distributed scheme. Then I tried the sample on the following site to implement the controllable cache store:
    http://coherence.oracle.com/display/COH35UG/Sample+CacheStores
    I tried the first variant from that site, and while putting data into the cache I get the error mentioned below. I also want to know how to configure this in the Coherence-config.xml; I configured it like this:
    <cache-mapping>
        <cache-name>controllCache</cache-name>
        <scheme-name>distributedControll</scheme-name>
    </cache-mapping>
    <distributed-scheme>
        <scheme-name>distributedControll</scheme-name>
        <service-name>DistributedCache</service-name>
        <backing-map-scheme>
            <read-write-backing-map-scheme>
                <internal-cache-scheme>
                    <local-scheme>
                        <scheme-name>inMemory</scheme-name>
                    </local-scheme>
                </internal-cache-scheme>
            </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
    </distributed-scheme>
    Is this configuration correct? Also, in my Java program, before putting data into the cache I call the disable method to set the boolean value. While storing, I check the boolean variable: if it is true I write the data to the DB, and if it is false I do not write it.
    Error :
    com.tangosol.util.AssertionException: poll() is a blocking call and cannot be called on the Service thread
    at com.tangosol.coherence.Component._assertFailed(Component.CDB:12)
    at com.tangosol.coherence.Component._assert(Component.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:5)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:29)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.ensureCache(PartitionedCache.CDB:36)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache$Router(SafeCacheService.CDB:1)
    at com.tangosol.coherence.component.util.safeService.SafeCacheService.ensureCache(SafeCacheService.CDB:33)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:920)
    at com.tangosol.net.DefaultConfigurableCacheFactory.configureCache(DefaultConfigurableCacheFactory.java:1296)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureCache(DefaultConfigurableCacheFactory.java:297)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:204)
    at com.tangosol.net.CacheFactory.getCache(CacheFactory.java:181)
    at com.coherence.cache.CoherenceLinkageCacheStore.store(CoherenceLinkageCacheStore.java:81)
    at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:5677)
    at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:4763)
    at com.tangosol.net.cache.ReadWriteBackingMap.putInternal(ReadWriteBackingMap.java:1235)
    at com.tangosol.net.cache.ReadWriteBackingMap.put(ReadWriteBackingMap.java:745)
    at java.util.AbstractMap.putAll(AbstractMap.java:256)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.postPut(PartitionedCache.CDB:66)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.put(PartitionedCache.CDB:17)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onPutAllRequest(PartitionedCache.CDB:47)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$PutAllRequest.onReceived(PartitionedCache.CDB:85)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Thread.java:662)
    Environment :
    Coherence 3.7,Weblogic 10.3.5,Java 1.6

    Hi,
    the control cache must be in a different (preferably replicated) cache service from the cache with the cache store.
    Also, you seem to have only a single cache, and that is not how it is supposed to work.
    The pattern itself works by having 2 caches. One of the caches is a distributed cache with a cache store. The other is the control cache which contains information whether the cache store should be allowed to write or not.
    Best regards,
    Robert
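The two-cache layout Robert describes might look something like this in the cache configuration. This is only a sketch: the cache name, scheme name, and service name (controlCache, controlCacheScheme, ControlCacheService) are illustrative, not from the original post.

```xml
<!-- sketch: the control cache lives in its own (replicated) service,
     separate from the DistributedCache service that runs the cache store -->
<cache-mapping>
    <cache-name>controlCache</cache-name>
    <scheme-name>controlCacheScheme</scheme-name>
</cache-mapping>

<replicated-scheme>
    <scheme-name>controlCacheScheme</scheme-name>
    <!-- must differ from the service name used by the store-backed scheme -->
    <service-name>ControlCacheService</service-name>
    <backing-map-scheme>
        <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
</replicated-scheme>
```

Because the control cache is then owned by a different service, the cache store's CacheFactory.getCache() call for it no longer re-enters the same service thread, which is what triggers the poll() assertion in the stack trace above.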

  • Upgrading Controller and AP Help

    I'm planning on moving from 4.1.185 to 4.2.130 late tomorrow, but am having an issue in testing. My original plan, with three 4404s and one of them empty, was to have the 3rd running 4.2.130 and just move APs from one of the older controllers over to it until it was empty. However, that doesn't seem to be working. In WCS, my test AP is registered to Controller 1; I set its primary to Controller 3 and it doesn't do anything. How can I force APs over to the controller with the new code so they'll get updated? The reason for this is I can't take down the whole hospital for a period of time while I upgrade a controller and wait (and hope) for all of the APs to come back up.

    First, verify that the mobility group is configured with the MAC and IP addresses of all 3 WLCs. Make sure you can mping from one controller to another. Then what you can do is change the AP's primary controller name to that of the new upgraded WLC. Now you can either enable AP fallback on the WLC that the AP is on, or reset the AP, so when it boots back up it will try to join the new WLC.
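Those steps can be sketched as WLC CLI commands. The controller name (WLC3), AP name (AP0012.3456.789a), and management IP (10.10.10.3) are placeholders, and exact syntax can vary between 4.x releases:

```
(WLC1) >show mobility summary
(WLC1) >mping 10.10.10.3
(WLC1) >config ap primary-base WLC3 AP0012.3456.789a
(WLC1) >config network ap-fallback enable
(WLC1) >config ap reset AP0012.3456.789a
```

show mobility summary should list the MAC and IP of all three controllers, and the mping confirms the mobility path works before you move any APs.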

  • Upgrade from Cache 3.1.x Software to ACNS 4.2.3

    I have a problem upgrading my CE590 from Cache 3.1.x software to ACNS 4.2.3 software.
    The following error appears:
    Integrity check failed for /sw/tmp_bundle.
    Install addon "Cisco Upgrade Diamond Ruby Bundle" failed (256)
    I tried to install the deltmppatch.addon patch file, but I still have the same problem.
    Can anyone help me to solve this problem?
    content#dir
    size time of last change name
    509 Tue Jan 7 10:53:35 2003 ACNS-4.2.3-deltmppatch.addon
    105699182 Tue Jan 7 10:46:19 2003 CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon
    17192652 Wed Oct 30 13:31:41 2002 ce590-cache-311.bin
    4096 Tue Jan 7 10:29:37 2003 <DIR> errorlog
    4096 Thu Oct 31 12:36:09 2002 <DIR> logs
    16384 Wed Oct 30 13:23:56 2002 <DIR> lost+found
    3684140 Tue Jan 7 10:50:41 2003 syslog.txt
    content#
    content#install ACNS-4.2.3-deltmppatch.addon
    Cleanup Done.
    content#
    content#install /local1/CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon
    Integrity check failed for /sw/tmp_bundle.
    Install addon "Cisco Upgrade Diamond Ruby Bundle" failed (256)
    content#
    content#install ACNS-4.2.3-deltmppatch.addon
    Cleanup Done.
    content#install /local1/CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon
    Integrity check failed for /sw/tmp_bundle.
    Install addon "Cisco Upgrade Diamond Ruby Bundle" failed (256)
    content#
    content#show disk-partitions disk00
    Disk size in 512 byte blocks: 35566448
    num: type start size status
    0: SWFS 32 3145728 System Reserved
    1: SYSFS 3145760 7113289 mounted at local1
    2: NONE
    3: NONE
    Free disk space: 25307431 blocks (12357 M)
    content#show disk-partitions disk01
    Disk size in 512 byte blocks: 35566448
    num: type start size status
    0: CFS 32 35566448 mounted
    1: NONE
    2: NONE
    3: NONE
    Free disk space: 0 blocks (0 M)
    content#
    Can anyone help me?
    Thanks
    Mohamed Abdallah

    Hi Pete,
    Thanks for your reply.
    The file is CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon and its size is 105699182 bytes,
    as you can see from the dir command output below:
    content#dir
    size time of last change name
    509 Tue Jan 7 10:53:35 2003 ACNS-4.2.3-deltmppatch.addon
    105699182 Tue Jan 7 10:46:19 2003 CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon
    17192652 Wed Oct 30 13:31:41 2002 ce590-cache-311.bin
    4096 Tue Jan 7 10:29:37 2003 errorlog
    4096 Thu Oct 31 12:36:09 2002 logs
    16384 Wed Oct 30 13:23:56 2002 lost+found
    3684140 Tue Jan 7 10:50:41 2003 syslog.txt
    content#
    Is it possible to delete the file from the flash of the Content Engine and then download it to the flash again via FTP? What is the command to delete CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon from the flash?
    Can you please send me the URL again that contains the file CE-Cache-3.1.x-TO-ACNS-4.2.3-K9.addon, along with its size and MD5 checksum?
    Regards
    Mohamed Abdallah

  • Upgrading Controller and Config Files

    Hello,
    I am upgrading my 4402 controller from 4.1.171.0 to 4.2.130. It says I need to save my current config because it will get dumped when I do the upgrade. What file extension do I save the config files with? It does not state this anywhere...
    Cisco sure doesn't make this process very easy. Save the config file, upgrade the code, download the saved config file back to the controller, then hope nothing gets corrupted along the way and that everything is re-enabled and configured as it was before the upgrade? Seems like a lot can go wrong...

    Jerome,
    This is from the 4.2.112.0 release notes:
    In controller software 4.2, the controller's bootup configuration file is stored in an Extensible Markup
    Language (XML) format rather than in binary format. Therefore, you cannot download a binary
    configuration file onto a controller running software release 4.2. However, when you upgrade a
    controller from a previous software release to 4.2, the configuration file is migrated and converted to
    XML.
    This conflicts with what Cisco states earlier in their documentation. I thought that when you upgrade, it wipes everything and you lose your current config? After reading this section I gather that I only lose the running config and that it will upgrade the config stored in NVRAM? That makes a lot more sense than what they say earlier in the guide. Once I do the upgrade I can't import the old config, because it is in the wrong format.
    So, to close out this mess, my take is that I just need to upload the new software image to the controller. It will load the new software version and then convert the existing config in NVRAM into the new .xml format? Yes?

  • HBA controller name changes after upgrading to solaris 11.2 from solaris 11.1

    We have upgraded the OS on the control domain from Solaris 11.1 to Solaris 11.2. After the upgrade, the controller names of the HBAs and the NPIV WWNs changed (for example, what was c2 prior to the upgrade became c8 afterwards). Because of this, we have to redo the virtual disk mapping for all LDoms hosted on this machine. Would it be possible to preserve the HBA controller names across OS versions? We even observed this issue while upgrading an SRU on a Solaris 11.2 system. We have to upgrade around 20 systems to Solaris 11.2, and with this issue the upgrade seems to be a tedious task. Any idea if there is a fix or workaround for it?

    Are your disks only connected to one single HBA?
    We use MPXIO to MultiPath over Multiple HBAs. Controller never changed here for the MPXIO devices.
    Best regards,
    Marcel

  • Why does the upgraded Web Cache 2.0.0.2.0 not accept the OSRECV_TIMEOUT parameter?

    Hi all:
    I upgraded Web Cache 2.0.0.1.0 to 2.0.0.2.0 with a patch
    downloaded from OTN, and I am trying to set the attribute
    OSRECV_TIMEOUT according to the instructions in the readme.html file.
    The webcached daemon does not start, because OSRECV_TIMEOUT is not in the DTD
    definition.
    Is internal:///internal.dtd upgraded by this patch, or do I need
    to install a complete version of 2.0.0.2 (not the patch over 2.0.0.1)?
    Best regards, Marcelo.

    Wow, no one responded to this. Sorry about that. Did you resolve your problem? Note that the 9.0.2.0 release is out now, so that's the one you should be using.

  • WISM 7.0.235.0 post-upgrade problem?

    I upgraded one of our WISM-1 modules from 7.0.98.0 to 7.0.235.0 last night.  For some reason, APs don't join with it unless I tell them specifically to do so.  We haven't specified primary & secondary controllers on purpose, allowing the APs to determine from DHCP & the controllers' responses and decide on one based on load.  This has always worked great for us.
    After upgrading, I couldn't get an AP to join the upgraded wism even after I specified the controller in the AP config.  So then I changed my 6506 load balancing to src-dst-mac since that is a known (although seldom seen) issue with APs joining a controller for the first time.  We usually keep it set at
    src-dst-mixed-ip-port.  That worked & the AP joined the upgraded wism.  Then I reset the load-balancing algorithm on the 6506, removed the specific controller from the AP's config, rebooted the AP, and all is fine.  I thought that solved it.  Not.
    Any other AP that I reboot tries to join the upgraded wism since it has only 1 AP connected, but it fails and the AP joins the other wism running 7.0.98.0.  Even if I change the load balancing algorithm to src-dst-mac, src-dst-ip, or src-mac, it won't join unless I specify the upgraded controller, which I don't want to do.  I can see the wism responding to the join requests, but the APs still end up on the other controller.  It sounds like the years-old load balancing algorithm issue, but that doesn't seem to be the whole answer this time.
    I hope this information makes sense to those that are aware of the issues I bring up.  Any ideas why the 7.0.235.0 wism isn't getting APs to join it successfully without my specifying that controller?  The config hasn't changed, except for what new or changed defaults exist.  I suspect it might have to do with one of those....  Or could it be the two different controller versions returning confusing and different responses to the initial query?
    Thanks.
    Bill

    We're not using the built-in DHCP service on the WISMs.
    A wise decision.
    I've upgraded our WiSM-1s to this version and we don't have any issues. 
    However, in the past we used to host the DHCP server for the WAPs and the clients on a plain Linux box, and we noticed that the Linux box took a painfully long time to dish out IP addresses to around 1k WAPs. We later moved our DHCP server to InfoBlox and the problems went away. 
    Any other AP that I reboot tries to join the upgraded wism since it has only 1 AP connected, but it fails and the AP joins the other wism running 7.0.98.0. Even if I change the load balancing algorithm to src-dst-mac, src-dst-ip, or src-mac, it won't join unless I specify the upgraded controller, which I don't want to do. I can see the wism responding to the join requests, but the APs still end up on the other controller. It sounds like the years-old load balancing algorithm issue, but that doesn't seem to be the whole answer this time.
    Never saw this problem at all. 

  • Pre-loading the cache

    I'm attempting to pre-load the cache with data and have implemented controllable caches as per this document (http://wiki.tangosol.com/display/COH35UG/Sample+CacheStores). My cache stores are configured as write-behind with a 2s delay:
    <cache-config>
         <caching-scheme-mapping>
         <cache-mapping>
              <cache-name>PARTY_CACHE</cache-name>
              <scheme-name>party_cache</scheme-name>
         </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <distributed-scheme>
                <scheme-name>party_cache</scheme-name>
                <service-name>partyCacheService</service-name>
                <thread-count>5</thread-count>
                <backing-map-scheme>
                    <read-write-backing-map-scheme>
                         <write-delay>2s</write-delay>
                        <internal-cache-scheme>
                            <local-scheme/>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>spring-bean:partyCacheStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
         </caching-schemes>
    </cache-config>
    public static void enable(String storeName) {
        CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.TRUE);
    }

    public static void disable(String storeName) {
        CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).put(storeName, Boolean.FALSE);
    }

    public static boolean isEnabled(String storeName) {
        return ((Boolean) CacheFactory.getCache(CacheNameEnum.CONTROL_CACHE.name()).get(storeName)).booleanValue();
    }

    public void store(Object key, Object value) {
        if (isEnabled(getStoreName())) {
            throw new UnsupportedOperationException("Store method not currently supported");
        }
    }

    The problem I have is that the following seems to happen:
    1) bulk loading process calls disable() on the cache store
    2) cache is loaded with data
    3) bulk loading process calls enable() on the cache store ready for normal operation
    4) the service thread starts to attempt to store the data, because the check to see if the store is enabled returns true (we set it to true in step 3)
    So, is there a way of temporarily disabling the write-delay, or changing it programmatically, so that step 4 doesn't happen?

    Adding
    Thread.sleep(10000);
    after loading the data seems to solve the problem, but it feels dirty. Any better solutions?
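A slightly less arbitrary version of that workaround is to tie the wait to the configured <write-delay> rather than a magic 10 seconds, so the write-behind thread has already drained the entries queued during the load (while the store is still disabled) before you re-enable it. This is only a sketch: the class and method names are made up, and the Runnable stands in for the control-cache update.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class StoreReenabler {

    // Sleep just past the write-behind delay, then flip the store back on.
    // writeDelayMillis mirrors the <write-delay> element (2s in the config
    // above); marginMillis is extra slack for the write-behind thread to run.
    public static void enableAfterWriteDelay(long writeDelayMillis,
                                             long marginMillis,
                                             Runnable enable)
            throws InterruptedException {
        Thread.sleep(writeDelayMillis + marginMillis);
        enable.run();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean storeEnabled = new AtomicBoolean(false);
        // tiny delays here, purely to demonstrate the call
        enableAfterWriteDelay(10, 5, () -> storeEnabled.set(true));
        System.out.println("store enabled: " + storeEnabled.get());
    }
}
```

It is still a timing-based workaround, just anchored to the configuration value instead of a guess.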

  • Satellite P50t - how caching SSD is built in?

    Hi,
    I am considering buying a P50t with the 1 TB HDD and 8 GB caching SSD.
    I have searched this forum and the web, but was unable to determine how the SSD is built into the laptop.
    I know that in other laptops the caching SSD is plugged into a mini PCIe slot using mSATA, with the M.2 2242 form factor (or NGFF 42 mm).
    Can anyone confirm that Toshiba is doing the same?
    I want to know because I would like to upgrade the caching SSD to a 128 GB version.
    Thanks in advance!
    Johan Germs

    Just bought the P50t-A0EE:
    http://www.mytoshiba.co.nz/products/computers/satellite/p50/pspmha-0ee04s/specifications
    I can confirm that the HDD is a hybrid: a 1 TB HDD with 8 GB of NAND flash.
    http://www.tweaktown.com/reviews/5740/toshiba-1tb-sshd-mq01abd100h-review/index.html
    I can't say anything about the laptop itself yet, as it has to be replaced with another; there appears to be what looks like an eyelash between the glass and the screen. lol.

  • Upgrade WLC 5508 to ver 7.5

    hi,
    I need to upgrade a WLC 5508 to version 7.5. We have Aironet 1242AG and 1131AG APs, and I don't see a supported AP version for them to join.
    Please, is there an IOS image for these?
    thanks         
    RAHA
    [email protected]      

    The AP models below are supported by WLC 7.5.x software (see the release note below for details):
    Cisco 1040, 1130, 1140, 1240, 1250, 1260, 1600, 2600, 3500, 3500p, 3600, Cisco 600 Series OfficeExtend Access Points, 700 Series, AP801, and AP802
    http://www.cisco.com/en/US/docs/wireless/controller/release/notes/crn75.html
    The APs will get the updated image from the WLC once you upgrade the controller software to 7.5.x.
    HTH
    Rasika
    **** Pls rate all useful responses ****

  • Dell PowerVault MD3600f - Unable to upgrade firmware - Data validation error

    The Upgrade Controller Firmware wizard reports the following error:
    1 problem detected.
    Data validation error: The storage array cannot be upgraded as host port type representation in Controller A did not match with those of Controller B. Contact a technical support representative to resolve this issue.
    What does this mean, and how do I fix it?

    Hi,
    Yes, 100%: the error is reported by the Dell MDSM software.
    Steps:
    1. Open the MDSM application.
    2. Right-click on the PowerVault MD3600f.
    3. Select "Upgrade RAID Controller Module Firmware".
    4. The Upgrade RAID Controller Module screen appears and refreshes; the unit(s) status changes to "not upgradeable".
    5. The error appears in the details window.
    6. Press "View Log".
    [May 17, 2013 9:25:15 AM] [MD3600] [pre-upgrade tests] [pre-upgrade tests start]
    Pre-upgrade tests started
    [May 17, 2013 9:25:37 AM] [MD3600] [pre-upgrade tests] [check RAID controller module state]
    Test passed - RAID controller module 1
    [May 17, 2013 9:25:37 AM] [MD3600] [pre-upgrade tests] [check RAID controller module state]
    Test passed - RAID controller module 0
    [May 17, 2013 9:25:38 AM] [MD3600] [pre-upgrade tests] [check spm database]
    Test passed - RAID controller module 1
    [May 17, 2013 9:25:39 AM] [MD3600] [pre-upgrade tests] [check spm database]
    Test passed - RAID controller module 0
    [May 17, 2013 9:25:39 AM] [MD3600] [pre-upgrade tests] [Physical Disks Unavailable]
    Test passed
    [May 17, 2013 9:25:42 AM] [MD3600] [pre-upgrade tests] [check configuration database]
    Test passed
    [May 17, 2013 9:25:42 AM] [MD3600] [pre-upgrade tests] [check physical disks]
    Test passed
    [May 17, 2013 9:25:42 AM] [MD3600] [pre-upgrade tests] [check disk groups and disk pools]
    Test passed
    [May 17, 2013 9:25:42 AM] [MD3600] [pre-upgrade tests] [check hot spares]
    Test passed
    [May 17, 2013 9:25:42 AM] [MD3600] [pre-upgrade tests] [check current operations]
    Test passed
    [May 17, 2013 9:25:42 AM] [MD3600] [pre-upgrade tests] [check virtual disks]
    Test passed
    [May 17, 2013 9:25:43 AM] [MD3600] [pre-upgrade tests] [check internal data validity]
    Data validation error: The storage array cannot be upgraded as host port type representation in Controller A did not match with those of Controller B. Contact a technical support representative to resolve this issue.
    [May 17, 2013 9:25:44 AM] [MD3600] [pre-upgrade tests] [check event log]
    The event log was cleared within the last 24 hours (event type: 100).
    [May 17, 2013 9:25:45 AM] [MD3600] [pre-upgrade tests] [check event log]
    Test passed
    [May 17, 2013 9:25:45 AM] [MD3600] [pre-upgrade tests] [pre-upgrade tests complete] [0h 0m 30s]
    Pre-upgrade test(s) failure - the storage array cannot be upgraded in its present state

  • Performance Testing While Caching?

    Fellas,
    I am doing performance optimization tasks on oracle DB 10G R2 running on Red Hat Linux.
    The problem is that whenever I run the query twice or more, Oracle caches it, and I can no longer see the delays in execution times.
    I tried to clear the cache (this is a dev environment), but query execution times behaved as if the execution plans were still cached.
    Any insight, please?

    If you are doing performance optimization, why do you not want caching? Don't you want to optimize the access to data, whether it is cached or not? User-perceived execution time is a reason to investigate performance, but it is difficult to map a development execution time to real-world performance. That's why people say things like "minimize consistent gets" or "concentrate on logical I/O".
    So if you are investigating how to maximize getting data from a disk to the SGA, because you have infrequent unique queries, do that. Multiblock reads or not bothering with the SGA may be your friend. If you have data that is getting flushed out of the SGA even though it is being accessed moderately frequently, check the SGA advisor and consider the KEEP pool. If you don't know where the caching is occurring - it could be the SGA, OS user buffers, controller cache, SAN cache - you need to either figure it out or ignore it.
    Perhaps if you told us how exactly you are "doing performance optimization tasks" we could give better advice.
    Oracle purposefully reuses execution plans - this avoids much worse performance problems from hard parsing creating new ones. There are situations where this is bad (google the bind peeking problem). If you want a new execution plan, give a different query. Comments are useful for that, as well as being something to look for in the sql area.
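The comment trick mentioned above (forcing a new execution plan by changing the SQL text) can be sketched as a tiny helper. The class and method names and the tag format are made up for illustration:

```java
public class PlanBuster {

    // Prepending a unique comment changes the SQL text, so Oracle treats it
    // as a new statement and hard-parses a fresh execution plan instead of
    // reusing the cached one. The tag also makes each run easy to find in
    // the SQL area afterwards.
    public static String tagForHardParse(String sql, int runId) {
        return "/* perf-test run " + runId + " */ " + sql;
    }

    public static void main(String[] args) {
        System.out.println(tagForHardParse("SELECT * FROM orders WHERE id = :1", 3));
    }
}
```

Note that this only defeats plan reuse; data cached in the buffer cache (or the OS, controller, or SAN) is unaffected, as discussed above.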

  • Wireless controller ap1130 h-reap

    We are about to pull these things out; TAC cannot find the problem. The customer had AP350s and they ran fine over their WAN. We tried H-REAP with local switching and it has been nothing but a nightmare: clients disconnect from the AP yet the controller shows them associated and authenticated; remove them from the controller and the client returns. We upgraded the controller to 4.1 with the same result, changed APs and wireless cards, and nothing solves the problem. It can be one client or 10 clients; it doesn't matter. All we are doing is running a continuous ping. We have tried WPA, WPA2, and WEP with the same result.

    Yep, we were running into the following bug:
    CSCsj94675, which has to do with the a and g radios being active at the same time.

  • History not recorded anymore, no new entries by date; upgrade, cache removal etc. don't fix it

    History is not being recorded anymore: no new entries appear under dates, and days like "Today" and "Yesterday" no longer show up, just "History".
    In the toolbar the last 10 entries are recorded, but they do not appear in the day/week overview.
    Upgrading, clearing the cache, clearing history, removing cookies, etc. don't fix it.
    Newest version of Firefox, G5 Mac, OS X 10.4.11.
    Until a few days ago there were no problems at all. Maybe opening an account on Facebook or another site caused it, but that's unclear.
    How can I repair this function without deleting and reinstalling Firefox and losing all my settings? Is there a prefs file I can throw away, or something similar?
    Thanks a lot in advance for the effort of advising.

    sorry, a bit later: some more info:
    Application Basics
    Name: Firefox
    Version: 3.6.6
    Profile Folder
    Show in Finder
    Installed Plug-ins
    about:plugins
    Build Configuration
    about:buildconfig
    Extensions
    Name / Version / Enabled / ID
    British English Dictionary 1.19 true [email protected]
    ReloadEvery 3.6.3 false {888d99e7-e8b5-46a3-851e-1ec45da1e644}
    Woordenboek Nederlands 2.2.0 true [email protected]
    DownloadHelper 4.7.3 true
    Modified Preferences
    Name / Value
    browser.history_expire_days.mirror 180
    browser.history_expire_days_min 7
    browser.places.smartBookmarksVersion 2
    browser.startup.homepage_override.mstone rv:1.9.2.6
    browser.tabs.warnOnClose false
    extensions.lastAppVersion 3.6.6
    network.cookie.lifetimePolicy 1
    network.cookie.prefsMigrated true
    places.last_vacuum 1277904749
    privacy.clearOnShutdown.cookies false
    privacy.clearOnShutdown.downloads false
    privacy.clearOnShutdown.formdata false
    privacy.clearOnShutdown.history false
    privacy.clearOnShutdown.sessions false
    privacy.sanitize.migrateFx3Prefs true
    privacy.sanitize.sanitizeOnShutdown true
    privacy.sanitize.timeSpan 0
