RDM migration from Dell Compellent SAN

Hello,
I have to do a file server migration from Server 2003 to Server 2008 R2.
The problem is that a few RDMs are presented to the VM in physical mode from a Dell Compellent SAN (6.3.5.7), so I am wondering how to get these RDMs migrated to the new server (2008 R2) without any issues.
I have heard that you can use storage Replays to re-create the same RDM volumes; if that's possible, can someone tell me how I can get the latest changes from the live server over to a test VM which is on the DR site?
We are doing asynchronous replication of everything from the live Storage Center to the DR Storage Center.
Any help would be appreciated.
thanks
Joh

These steps are based on the vSphere Client, not the web client (I don't use the web client often enough to know the exact steps offhand, though they'll be similar).
- Shut down the original VM (2003)
- Right click the VM and go to edit settings
- identify the disk(s) that are the RDMs (probably not "hard disk 1")
- select the disk and click remove towards the top
- confirm by clicking OK
- edit settings on the new VM (2008 R2)
- click ADD along the top
- select to add a hard disk
- select the Raw Device Mappings option
- select the correct Compellent disk
- confirm with OK
- if the 2008 R2 VM isn't already (still) powered up, power it up now
- if the 2008 R2 VM is already/still running, go to Disk Management, then select Action and Rescan Disks
- assign/change the drive letter on the new disk to meet your needs
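If you prefer to script the swap, a rough PowerCLI sketch is below. Treat it as an illustration rather than a tested procedure: the vCenter and VM names are placeholders, and the device path used when re-adding the RDM should be verified against the Compellent LUN's canonical name.

# Placeholder names throughout; shut the 2003 VM down first, as above
Connect-VIServer vcenter.example.com

# Find the physical-mode RDM(s) on the old VM and note the backing LUN(s)
$oldVm = Get-VM "FILESRV-2003"
$rdms  = Get-HardDisk -VM $oldVm | Where-Object { $_.DiskType -eq "RawPhysical" }
$rdms  | Select-Object Name, ScsiCanonicalName, Filename

# Remove the mapping from the old VM (the data on the LUN itself is untouched)
$rdms | Remove-HardDisk -Confirm:$false

# Re-add each LUN to the new VM as a physical-mode RDM
$newVm = Get-VM "FILESRV-2008R2"
foreach ($rdm in $rdms) {
    New-HardDisk -VM $newVm -DiskType RawPhysical `
        -DeviceName "/vmfs/devices/disks/$($rdm.ScsiCanonicalName)"
}

After that, rescan disks inside the 2008 R2 guest and assign drive letters as in the manual steps.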

Similar Messages

  • SAP Data Storage Migration from HP EVA SAN to NetApp FAS3070 FMC for M5000s

    Good day all
    We need to perform a storage migration for SAP data that currently sits on 2 HP EVA SANs. We have 2 Sun M5000s, 2 Sun E2900s and a couple of V490s, which all connect to the SAN via Cisco 9506 Directors. We have recently commissioned a NetApp Fabric MetroCluster on 2 FAS3070s and need to move our SAP data from the EVAs to the new MetroCluster. Our Sun boxes are running Solaris 10. It was suggested that we use LVM to move the data, but I have no knowledge when it comes to Solaris.
    I have some questions which I hope someone can help me answer:
    - Can we perform a live transfer of this data with low risk using LVM? (Non-disruptive migration of 11 TB.)
    - Is LVM a wise choice for this task? We also have Replicator X, but have had challenges using it on another MetroCluster.
    - I would like to migrate our sandbox first as a test migration (1.5 TB) and use it to judge the speed of the data migration, then move all DEV and QA boxes across before the production data. There are multiple zones on the hardware mentioned above. Is there no simple way of cloning data from the HP to the NetApp and then re-syncing before going live on the new system?
    - Would it be best to create LUNs on the new SAN with the same sizes as the existing HP EVA LUNs, or is it equally simple to create "best practice" sized LUNs on the other side before copying the data across? Hard to believe it would be equally simple, but we would like to size the LUNs properly.
    Please assist; I can get further answers if there are any questions in this regard.
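    For what it's worth, the classic host-based way to do this online on Solaris 10 is with Solaris Volume Manager (which is usually what "LVM" means here): mirror each EVA device onto a NetApp LUN, let it resync live, then detach the EVA side. A very rough sketch follows, assuming the data already sits on an SVM mirror d10 whose current submirror is d11 on the EVA; all device names are made up, and VxVM or ZFS would use different commands entirely:

    # Build a new submirror on the NetApp LUN and attach it to the existing mirror
    metainit d12 1 1 c4t600A098038303053d0s0
    metattach d10 d12

    # The resync runs with the filesystem mounted; watch it, then drop the EVA side
    metastat d10
    metadetach d10 d11
    metaclear d11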


  • FCoE options for Cisco UCS and Compellent SAN

    Hi,
    We have a Dell Compellent SAN with iSCSI and FCoE modules in a pre-production environment.
    It is connected to new Cisco UCS infrastructure (5108 chassis with 2208 IOMs + B200 M2 blades + 6248 Fabric Interconnects) via the 10G iSCSI module (the FCoE module isn't being used at this moment).
    I reviewed the compatibility matrix for the interconnect, but the Compellent (Dell) SAN is only supported on FI NX-OS 1.3(1) and 1.4(1), and not with the 6248 and 2208 IOM, which is what we have. I'm sure some of you have a similar hardware configuration to ours, and I'd like to know whether there is any supported Cisco FC/FCoE deployment option for the Compellent. We're pretty tight on budget at the moment, so purchasing a couple of Nexus 5K switches or something equivalent for such a small number of chassis (we only have one) is not a preferred option. If additional hardware acquisition is inevitable, what would be the most cost-effective solution to support an FCoE implementation?
    Thank you in advance for your help on this.

    Unfortunately there isn't really one - with direct-attach storage there is still the requirement that an upstream MDS/N5K pushes the zoning to it.  Without an MDS to push the zoning, the configuration isn't recommended for production.
    http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.0/b_UCSM_GUI_Configuration_Guide_2_0_chapter_0101.html#concept_05717B723C2746E1A9F6AB3A3FFA2C72
    Even if you had a MDS/N5K the 6248/2208's wouldn't support the Compellent SAN - see note 9.
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/interoperability/matrix/Matrix8.pdf
    That's not to say that it won't work, it's just that we haven't tested it and don't know what it will do and thus TAC cannot troubleshoot SAN errors on the UCS.
    On the plus side, iSCSI, if set up correctly, can be very solid and can give you a great amount of throughput - just make sure to configure the QoS correctly, and if you need more throughput, just add some additional links.

  • Oracle 10g RAC Database Migration from SAN to New SAN.

    Hi All,
    Our client has implemented a two-node Oracle 10g R2 RAC on HP-UX 11i v2. The database is on ASM on an HP EVA 4000 SAN. The database size is around 1.2 TB.
    Now the requirement is to migrate the database and Clusterware files to a new SAN (EVA 6400).
    A SAN-to-SAN migration can't be done, as the customer didn't get a license for such storage-level migration.
    My immediate suggestion was to connect the new SAN, present the LUNs, add the disks from the new SAN and wait for the rebalance to complete, then drop the old disks which are on the old SAN - per "Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime" (Doc ID 837308.1).
    The client wants us to suggest alternate solutions, as they are worried that presenting LUNs from the old SAN and the new SAN at the same time may cause issues, and also that if the rebalance fails it may affect the database. They are also not able to estimate the time to rebalance a 1.2 TB database across disks from 2 different SANs. The downtime window is only 48 hours.
    One wild suggestion was to:
    1. Connect the New SAN.
    2. Create New Diskgroups on New SAN from Oracle RAC env.
    3. Backup the Production database and restore on the same Oracle RAC servers but on New Diskgroups.
    4. Start the database from new Diskgroup location by updating the spfile/pfile
    5. Make sure everything is fine then drop the current Diskgroups from Old SAN.
    Will the above idea work in a production environment? I think there is a lot of risk in doing it that way.
    The customer does not have Oracle RAC in a test environment, so there is no chance of trying out any method beforehand.
    Any suggestion is appreciated.
    Rgds,
    Thiru.

    user1983888 wrote:
    Hi All,
    Our client has implemented a Two Node Oracle 10g R2 RAC on HP-UX v2. The Database is on ASM and on HP EVA 4000 SAN. The database size in around 1.2 TB.
    Now the requirement is to migrate the Database and Clusterware files to a New SAN (EVA 6400).
    SAN to SAN migration can't be done as the customer didn't get license for such storage migration.
    My immediate suggestion was to connect the New SAN and present the LUNs and add the Disks from New SAN and wait for rebalance to complete. Then drop the Old Disks which are on Old SAN. Exact Steps To Migrate ASM Diskgroups To Another SAN Without Downtime. (Doc ID 837308.1).
    Clients wants us to suggest alternate solutions as they are worried that presenting LUNs from Old SAN and New SAN at the same time may give some issues and also if re-balance fails then it may affect the database. Also they are not able to estimate the time to re-balance a 1.2 TB database across Disks from 2 different SAN. Downtime window is ony 48 hours.
    Adding and removing LUNs online is one of the great features of ASM. The rebalance is performed while the database stays online - no downtime!
    If your customer does not trust ASM, Oracle Support can answer all their doubts.
    For any concern, contact Oracle Support to guide you on the best way to perform this work.
    >
    One wild suggestion was to:
    1. Connect the New SAN.
    2. Create New Diskgroups on New SAN from Oracle RAC env.
    3. Backup the Production database and restore on the same Oracle RAC servers but on New Diskgroups.
    4. Start the database from new Diskgroup location by updating the spfile/pfile
    5. Make sure everything is fine then drop the current Diskgroups from Old SAN.
    ASM supports many terabytes; if you needed to migrate 3 databases of 20 TB each, the way described above would be very laborious. So adding and removing LUNs online is a feature that must work.
    Get approval from Oracle Support and do this work using the ASM rebalance.
    Regards,
    Levi Pereira
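    For reference, the online add/drop that Doc ID 837308.1 describes boils down to something like the following, run in SQL*Plus against the ASM instance. Treat it as a sketch only - the diskgroup name, disk names and device paths are examples, and the exact steps in the note should be followed for the real run:

    -- On one RAC node, connected to the ASM instance (+ASM1) as SYSDBA.
    -- Add the new EVA 6400 LUNs and drop the old EVA 4000 disks in one statement,
    -- so only a single online rebalance runs.
    ALTER DISKGROUP DATA
      ADD DISK '/dev/rdisk/eva6400_lun1', '/dev/rdisk/eva6400_lun2'
      DROP DISK DATA_0000, DATA_0001
      REBALANCE POWER 8;

    -- Rebalance progress; the move is complete when this returns no rows.
    SELECT operation, state, est_minutes FROM v$asm_operation;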

  • Migrate from server core 2008 r2 hyper-v with failover cluster volumes to server core 2012 r2 hyper-v with failover cluster volumes on new san hardware

    We are getting ready to migrate from server core 2008 r2 hyper-v with failover cluster volumes on an iscsi san to server core 2012 r2 hyper-v with failover cluster volumes on a new iscsi san.
    I've been searching for a "best practices" article for this but have been coming up short.  The information I have found either pertains to migrating from 2008 r2 to 2012 r2 with failover cluster volumes on the same hardware, or migrating
    to different hardware without failover cluster volumes.
    If there is anyone out there that has completed a similar migration, it would be great to hear any feedback you may have on your experiences.
    Currently, my approach is as follows:
    1. Configure new hyper-v with failover cluster volumes on new SAN with new 2012 r2 hostnodes and 2012 r2 management server
    2. Turn off the virtual machines on old 2008 r2 hyper-v hostnodes
    3. Stop the VMMS service on the 2008 r2 hostnodes
    4. copy the virtual machine files and folders over to the new failover cluster volumes
    5. Import vm's into server 2012 r2 hyper-v.
    Any feedback on the approach I have in mind would be helpful.
    Thank you,
    Rob
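    For step 5, the PowerShell equivalent on one of the new 2012 R2 nodes would be roughly the following. It is only a sketch - the CSV path is an example; 2012 R2 can import 2008 R2 configurations in place, but test it with a single non-critical VM first:

    # Register each copied VM in place, then make it a clustered role
    $vmConfigs = Get-ChildItem "C:\ClusterStorage\Volume1" -Recurse -Filter *.xml |
                 Where-Object { $_.Directory.Name -eq "Virtual Machines" }
    foreach ($cfg in $vmConfigs) {
        $vm = Import-VM -Path $cfg.FullName        # in-place registration, no copy
        Add-ClusterVirtualMachineRole -VMName $vm.Name
    }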

    Hi Rob,
    Yes, I agree that a file copy can achieve the migration.
    You can also try the Copy Cluster Roles wizard:
    https://technet.microsoft.com/en-us/library/dn530779.aspx
    Best Regards,
    Elton Ji
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected] .

  • ActiveSync stops working after migrating from Exchange 2007 to Exchange 2013

    We have started the migration from Exchange 2007 to Exchange 2013. We've followed best practices and everything is working great except ActiveSync. I've performed Exchange migrations in the past so this is nothing new for me. I've also been referring to
    a great guide which has been a big help,
    http://www.msexchange.org/articles-tutorials/exchange-server-2013/migration-deployment/planning-and-migrating-small-organization-exchange-2007-2013-part1.html.
    Once a user is migrated from Exchange 2007 to 2013, ActiveSync stops working properly. Email can be pulled to the device (Nokia Lumia 625 running Windows Phone 8) by performing a manual sync. But DirectPush is not working. The strange part is it's not affecting
    everyone who's been migrated. Anyone who is still on Exchange 2007 is not affected.
    At first I thought it was our wildcard certificate. 99% of our users are running Outlook 2013 on Windows 7 or higher but we do have a few terminal servers still running Outlook 2010. Outlook 2010 was giving us certificate errors. I realized it was the wildcard
    certificate. Rather than making changes to the OutlookProvider I simply obtained a new SAN certificate. Although that resolved the issues for Outlook 2010 users, ActiveSync was still a problem.
    Rebooting the phones and removing the email account from the user's device and re-adding it didn't resolve the issue either.
    Then I performed an iisreset on the CAS server. This didn't help either. I didn't know it at the time, but I was getting closer...
    I tried using the cmdlet Test-ActiveSyncConnectivity but it gave me the following error:
    WARNING: Test user 'extest_0d9a45b025374' isn't accessible, so this cmdlet won't be able to test Client Access server
    connectivity.
    Could not find or sign in with user DOMAIN.com\extest_0d9a45b025374. If this task is being run without
    credentials, sign in as a Domain Administrator, and then run Scripts\new-TestCasConnectivityUser.ps1 to verify that
    the user exists on Mailbox server EX02.DOMAIN.COM
    I started reviewing how Exchange 2013 proxied information from the CAS to the mailbox server and realized the issue may in fact be on the mailbox server.
    I performed an iisreset on the mailbox server and all of a sudden ActiveSync started working again. Awesome!
    I can't explain why. The only thing I can assume is when some users were migrated from 2007 to 2013 something wasn't being triggered on the Exchange 2013 side. Resetting IIS resolved the issue. I guess I'll have to do an IIS reset after I perform a batch
    of migrations. Disabling ActiveSync and re-enabling it for the affected users didn't help - only the IISRESET resolved the issue.
    If anyone has any information as to why this happens, please chime in. Also, if anyone knows why I can't run the Test-ActiveSyncConnectivity cmdlet, I'd appreciate the help.
    Thanks.
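    Regarding the Test-ActiveSyncConnectivity error above: the extest_* account has to be created first with the script the warning points to. From the Exchange Management Shell on the Mailbox server, the sequence is roughly the following (the CAS name is a placeholder):

    # Create the synthetic test mailbox, then re-run the probe against the CAS
    cd $exscripts
    .\new-TestCasConnectivityUser.ps1
    Test-ActiveSyncConnectivity -ClientAccessServer CAS01 | Format-List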

    Hi,
    In Exchange 2013, public folders are stored in public folder mailboxes rather than in a dedicated public folder database as in Exchange 2007.
    Due to the changes in how public folders are stored, legacy Exchange mailboxes are unable to access the public folder hierarchy on Exchange 2013 servers. However, user mailboxes on Exchange 2013 servers or Exchange Online can connect to legacy
    public folders. Exchange 2013 public folders and legacy public folders can’t exist in your Exchange organization simultaneously. This effectively means that
    there’s no coexistence between versions.
    For this reason, it’s recommended that prior to migrating your public folders, you should
    first migrate all your legacy mailboxes to Exchange 2013. For more information about migrating public folders from previous versions, please refer to:
    http://technet.microsoft.com/en-us/library/jj150486(v=exchg.150).aspx
    (Please note the What do you need to know before you begin part in this link)
    Regards,
    Winnie Liang
    TechNet Community Support

  • Just got a new iMac and my iWork software won't work. I have the serial numbers, but after migrating from my old Mac running OS X Tiger it won't even give me that opportunity. I do not have any discs as this was downloaded. Help

    Just got a new iMac and my iWork software won't work. I have the serial numbers, but after migrating from my old Mac running OS X Tiger it won't even give me that opportunity. I do not have any discs as this was downloaded. Help

    Hi, hopefully you guys can help me out. I just purchased a G5 iMac to replace my aging Dell desktop, and now I'm more or less completely OS X dependent (I've had an iBook for a few months now). Anyway, I have a few questions:
    1. Does anyone know of a (free) mail notifier tool that will alert me when I receive new POP3 mail? I used to use AIM for this in Windows.
    The built in email program does sound an alert for new messages.
    2. Anyone know of a good (free, again) IRC client?
    iChat works well.
    3. I noticed earlier that my screen was flickering,
    it seems to have subsided for now, but is that normal
    to encounter in a new display? It wasn't really bad
    flicker, but I could see it.
    No idea.
    4. I leave my desktops on 24/7x365, will that be a
    problem with this iMac? Display shuts off of course
    after 30 mins
    You could set the iMac to go to sleep after non use. My Macs are
    always sleeping when not in use, waking them up takes seconds.
    Unlike windoze, never knew if it would be locked up or not.
    5. Should I get the extended warranty? I'm usually
    against them, but I am expecting this computer to
    last me at least 3 years (for $1300 it better!)
    before I upgrade, as I got that much out of a Dell
    and from what I understand, Mac's do not age nearly
    as fast as Windows pc's.
    It is well worth the cost. Never know what might happen. Also
    it is good insurance for future upgrades. I traded in my G4 tower
    purchased two years ago for 75% of what I paid for it. Apple care
    transfers, giving the buyer the remainder of your warranty.
    Thank you for any help,
    -Evan

  • Dell Equallogic SAN HIT Kit support for SAP in Linux 11.2

    Hello Everyone,
    I would like to use an Equallogic SAN as storage in SUSE Linux Enterprise Server 11.2. Is the Equallogic SAN supported in Linux when using the HIT Kit to provide MPIO and snapshot support? The article from thorsten.staerk describes Equallogic SAN + Linux + native iSCSI (no mention of the HIT Kit). I understand that there are restrictions on using third party kernel modules.
    My understanding is that the Equallogic MPIO driver will provide a faster, more efficient, connection to the SAN along with additional features.
    References:
    Playing with an Equallogic storage
    http://scn.sap.com/people/thorsten.staerk/blog/2009/02/10/playing-with-an-equallogic-storage
    Thorsten describes using Equallogic with SAP on SUSE Linux Enterprise
    Thorsten uses native iSCSI drivers
    How to configure Multipath using Equallogic Host Integration Toolkit (HIT) on Linux for Oracle
    http://en.community.dell.com/techcenter/enterprise-solutions/w/oracle_solutions/3795.how-to-configure-multipath-using-equallogic-host-integration-toolkit-hit-on-linux-for-oracle.aspx
    Describes using Equallogic multipath on Linux
    Mentions that the Equallogic mpio is a combination of user mode and kernel mode binaries
    Dell EqualLogic Host Integration Tools
    http://www.dell.com/downloads/global/products/pvaul/en/equallogic-host-software-new.pdf
    Host Integration Tools Overview from Dell
    Support for CentOS 5.7, Red Hat Enterprise Linux (RHEL) 5.7, 6.1 and 6.2, SUSE Enterprise Linux Server (SLES) 11 SP1
    Dell EqualLogic Host Integration Tools for Linux
    Installation and User's Guide Version 1.2.0
    From the documentation "Installs a compatible prebuilt kernel module, if available. Otherwise, the installation script loads the Dynamic Kernel Module Support (DKMS) package and compiles the Dell EqualLogic-supplied kernel module."
    EqualLogic Integration: Installation and Configuration of Host Integration Tools for Linux – HIT/Linux
    The Dell EqualLogic HIT/Linux MPIO software consists of two components:
    1. The EqualLogic Host Connection Manager daemon, ehcmd, which monitors the iSCSI session state and the configuration of the Linux server and PS Series group. Running in the background, the ehcmd daemon uses the Open-iSCSI management tool (iscsiadm) to add, remove, or modify iSCSI sessions to maintain an optimal number of iSCSI sessions. It also gathers information on the volume layout from the PS Series group.
    2. A loadable kernel module, dm-switch, which implements a new Device Mapper target to the multipath devices. Based on the volume layout on the array members, the dm-switch module routes each I/O to the optimal path within the PS Series group.
    SAP Note 784391 - SAP support terms and 3rd-party Linux
    kernel drivers
    The use of binary-only Linux kernel modules or drivers on the same server as SAP software is not recommended by SAP.
    Thank You,
    Kevin
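    As a side note, the sessions that ehcmd manages are ordinary Open-iSCSI sessions, so basic connectivity to the PS Series group can be sanity-checked with plain iscsiadm whether or not the HIT kit is installed (the group IP and target name below are examples):

    # Discover the group's targets, log in, then list the active sessions
    iscsiadm -m discovery -t sendtargets -p 10.10.10.10:3260
    iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-000000000-examplevol -l
    iscsiadm -m session -P 1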

    I'll take a moment to provide an update on the solution. There were some issues with the initial configuration of the in guest iSCSI volumes. If I had to start from scratch, I would run all the storage through VMware. The last round of software updates appears to have resolved a potentially serious issue with iSCSI disconnects. iSCSI has run perfectly for two months now. I'll consider the issue resolved once I have six months without any iSCSI issues.
    One remaining issue is that LVM won't initialize properly without implementing a custom init script. The init script work-around is simple and it works.
    Configuration Overview:
    VMware ESXi v5.5
    Equallogic SAN (with v.7.0.3 firmware)
    SUSE SLES v11.3
    Equallogic HIT Kit for Linux v1.3

  • Windows 2008 R2 Cluster - migrating data to new SAN storage - Detailed Steps?

    We have a project where some network storage is falling out of warranty and we need to move the data to new storage.  I have found separate guides for moving the quorum and replacing individual drives, but I can't find an all-inclusive, detailed step-by-step guide for this process, and I know this has to happen all the time.
    What I am looking for is detailed instructions on moving all the data from the current SAN storage to the new SAN storage, start to finish.  This is all server-side; there is a separate storage team that will present the new storage.  I'll then have to install the multipathing driver, and I am looking for a guide that picks up at that point.
    The real concern here is that this machine controls a critical production process and we don't have a proper setup with storage and everything to test with, so it's going to be a little nerve-racking.
    Thanks in advance for any info.

    I would ask Hitachi.  As I said, the SAN vendors often have tools to assist in this sort of thing.  After all, in order to make the sale they often have to show how to move the data with the least amount of impact to the environment.  In fact,
    I would have thought that inclusion of this type of information would have been a requirement for the purchase of the SAN in the first place.
    Within the Microsoft environment, you are going to deal with generic tools.  And using generic tools the steps will be similar to what I said. 
    1. Attach storage array to cluster and configure storage.
    2. Use robocopy or Windows Server Backup for file transfer.  Use database backup/recovery for databases.
    3. Start with applications that can take multiple interruptions as you learn what works for your environment.
    Every environment is going to be different.  To get into much more detail would require an analysis of what you are using the cluster for (which you never state in either of your posts), what sort of outages you can operate with, what sort of recovery
    plan you will put in place, etc.  You have stated that your production environment is going to be your lab because you do not have a non-production environment in which to test it.  That makes it even trickier to try to offer any sort of detailed
    steps for unknown applications/sizing/timeframes/etc.
    Lastly, the absolute best practice would be to build a new Windows Server 2012 R2 cluster and migrate to a new cluster.  Windows Server 2008 R2 is already out of mainstream support. That means that you have only five more years of support on your
    current platform, at which point in time you will have to perform another massive upgrade.  Better to perform a single upgrade that gives you a lot more length of support.
    . : | : . : | : . tim
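    For the robocopy part of step 2, a minimal invocation per data volume might look like the following (drive letters and the log path are examples; run it on the node that owns both disk resources, and do a final pass during the cutover window):

    # Initial seed copy - can be repeated while the share is still live
    robocopy E:\ F:\ /E /COPYALL /DCOPY:T /R:1 /W:1 /MT:16 /LOG:C:\Logs\migrate_E.log
    # Final incremental pass after taking the clustered application offline
    robocopy E:\ F:\ /E /COPYALL /DCOPY:T /R:1 /W:1 /MT:16 /LOG+:C:\Logs\migrate_E_final.log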

  • Problems migrating from edir 8.7.3 to 8.8

    Hi,
    I need to do a hardware replacement for a NetWare 6.5 SP8 server. This
    server has eDir 8.7.3.10. I loaded a new ProLiant server with NW 6.5 SP8
    to migrate to. The new server has eDir 8.8 on it. The Migration
    Wizard guide says there can be problems migrating on NW 6.5 from
    eDir 8.7 to 8.8.
    I tried to install the new server using the [INST: spedir] option
    specified in this article
    (http://www.novell.com/communities/no...edirectory-873)
    but the server keeps crashing.
    Can I migrate my old server's eDir 8.7.3 to the new one on 8.8 using the
    Migration Wizard? If not, is there any other way to force an install
    of eDir 8.7.3?
    thanks

    You could upgrade your eDir on the "original" server to 8.8
    I've done that before without any problems.
    (Obviously that doesn't guarantee it will go right/well.) Always have a complete backup of the .dib set (I use dsrepair -rc on NetWare) as well as a "snapshot" of the NetWare server if possible (ours is on the SAN, so I can clone the system disk before I do anything to it).

  • Migration from physical server to zone (solaris 10)

    Hello all,
    I found an old thread about the subject Migrate Solaris 10 Physical Server to Solaris 10 Zone but i have a question.
    Will the flarcreate command add all the zpools I have on the server to the flar archive? Right now we have 14 zpools.
    If I execute the command "flarcreate -n "Migration File" -S -R / -x /flash /flash/Migration_File-`date '+%m-%d-%y'`.flar", will it capture all the zpools?
    This is for migrating from an E25K server to an M9000 server.
    The E25K (physical server) is on release "Solaris 10 10/08 s10s_u6wos_07b SPARC" and the zone server (M9000) is on release "Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC" - could this be an issue?
    Thanks for any help.
    Edited by: 875571 on Dec 9, 2011 7:38 AM

    flarcreate will only include data from the root pool (typically called rpool). The odds are that this is what you actually want.
    Presumably, on a 25k you would have one pool for storing the OS and perhaps home directories, etc. This is probably from some sort of a disk tray. The other pools are likely SAN-attached and are probably quite large (terabytes, perhaps). It is quite likely that instead of creating a multi-terabyte archive, you would instead want an archive of the root pool (10's to 100's of megabytes) and would use SAN features to make the other pools visible to the M9000.
    One thing that you need to do that probably isn't obvious from the documentation is that you will need to add dataset resources to the solaris10 zone's configuration to make the other zpools visible in the solaris10 zone. Assuming that these other pools are on a SAN, the zpools are no longer imported on the 25k, and the SAN is configured to allow the M9000 to access the LUNs, you will do something like the following for each zpool:
    # zpool import -N -o zoned=on <poolname>
    # zonecfg -z <zonename> "add dataset; set name=<poolname>; end"
    In the event that you really do need to copy all of the zpools to the M9000, you can do that as well. However, I would recommend doing that using a procedure like the one described at http://docs.oracle.com/cd/E23824_01/html/821-1448/gbchx.html#gbinz. (zfs send and zfs recv can be used to send incremental streams as well. Thus, you could do the majority of the transfer ahead of time, then do an incremental transfer when you are in your cut-over window.)
    If you are going the zfs send | zfs recv route and you want to consolidate the zpools into a single zpool, you can do so, then use dataset aliasing to make the zone still see the data that exists in multiple pools as though nothing changed. See http://docs.oracle.com/cd/E23824_01/html/821-1448/gayov.html#gbbst and http://docs.oracle.com/cd/E23824_01/html/821-1460/z.admin.task-11.html#z.admin.task-12.
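    If you do end up copying some of the data pools instead of re-presenting the LUNs, the incremental send/recv flow mentioned above looks roughly like this (pool and host names are examples):

    # Bulk copy ahead of time, while the 25K is still in production
    zfs snapshot -r datapool@premig
    zfs send -R datapool@premig | ssh m9000 zfs recv -Fdu datapool

    # In the cut-over window, send only what changed since the first snapshot
    zfs snapshot -r datapool@cutover
    zfs send -R -i @premig datapool@cutover | ssh m9000 zfs recv -Fdu datapool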

  • XSan and (Dell) Compellent?

    I am looking at implementing a VMware server running various Windows and Linux VMs for a Mac network, the supplier is likely to recommend a Dell based solution.
    I would like to make this solution as Mac-friendly as possible, to allow further use of the SAN it will include, so I was wondering if people here could advise on the current state of play regarding Mac (i.e. OS X) compatibility of Dell's Compellent SAN range. Would it be possible to use this with Apple's Xsan software?
    Can the Compellent be managed at all by a Mac?
    I believe this used to be possible before Dell acquired Compellent but could not find an indication if they have maintained such Mac compatibility.

    This may help: Xsan: Compatibility of Fibre Channel storage devices - Apple Support

  • Faster way to migrate from Single byte to Multi byte

    Hello,
    We are in the process of migrating from a 9i single-byte DB to a 10g multibyte DB. The size of our DB is roughly 125 GB. We have fixed everything in the source database (9i) in terms of seamlessly migrating from a single-byte to a multibyte DB. The only issue is the migration window - currently we are doing an export/import, since there is a character set migration involved, and it's taking about 20+ hours to do the import into 10g. Management wants to cut this down to less than 10 hours if possible. I know the time an import takes depends on many factors, like the system/OS configuration, SAN, etc., but I wanted to know what, in theory, is considered the fastest method of migrating a database from single byte to multibyte.
    Have anybody here gone through this before?
    Thanks,
    Shaji

    If the percentage of user tables containing some convertible data (I am assuming you will not have any truncation or lossy data) is low, you can export only those tables, truncate them, and rescan the database. This should report no convertible data, except some CLOBs in Data Dictionary. Such database can be migrated to AL32UTF8 using csalter.plb. After the migration, you import only the previously exported subset of tables.
    Note, for this process to work, no convertible VARCHAR2, nor CHAR, nor LONG data can be present in the Data Dictionary.
    The process should be refined by dropping and recreating indexes on the exported tables, as recreating an index is faster than updating it during import. You should also disable triggers so that they do not interfere with the migration (for example, they should not update any "last_updated" timestamp columns).
    If the number and size of affected tables is low compared to the overall size of the database, the time saved may be significant.
    There may also be tables that require even more sophisticated approach. Let's say you have a multi-gigabyte table that stores pictures or documents in a BLOB column. The table also has a single text column that keeps some non-ASCII descriptions of the stored entities. Exporting/truncating/importing such table may be still very expensive. A possible optimization is to offload the description column to an auxiliary table (together with ROWIDs), update the original column to NULL, export the auxiliary table, drop it, rescan the database, migrate with csalter.plb, re-import the auxiliary table, and restore the original column. If pictures alone occupy, for example, 30% of the whole database, such approach should yield significant time saving.
    -- Sergiusz
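    At a very high level, the partial export/csalter flow described above looks like the following. The schema/table names and parameter values are placeholders, and the real procedure (csminst.sql setup, restricted mode, the scan-must-be-clean rule) is in the Character Set Scanner and csalter documentation:

    # 1. Export just the tables that hold convertible data, then truncate them in SQL*Plus
    exp system tables=APP.DOCS,APP.NOTES file=conv_tables.dmp
    #    (in SQL*Plus:  TRUNCATE TABLE app.docs;  TRUNCATE TABLE app.notes;)

    # 2. Rescan the whole database; apart from CLOBs in the data dictionary it should
    #    now report no convertible data, which is what csalter.plb requires
    csscan system FULL=Y TOCHAR=AL32UTF8 ARRAY=1024000 PROCESS=4

    # 3. Convert the database character set (run with the database in restricted mode)
    sqlplus / as sysdba @?/rdbms/admin/csalter.plb

    # 4. Re-import the previously exported tables into the now-AL32UTF8 database
    imp system file=conv_tables.dmp ignore=y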

  • Migrate from StorageTek D280 to EMC-CX4 Clariion with RHEL 4U4 RDAC Kernel

    Hi all,
    I was wondering if anyone's had the opportunity to migrate a RHEL 4 server running a modified Linux kernel that includes Engenio's RDAC in the initrd image from a StorageTek SAN to EMC PowerPath on Clariion?
    EMC has nothing up on their PowerLink site that has any value, and I have some concerns as I can't Google anything on a proven working migration process (which tells me no one's really done this and documented it). Looking for confirmation of any of the following scenarios:
    1. Reboot into vanilla, non-RDACized kernel, update Qlogic drivers (QLA2340) install EMC PowerPath, use "inq" utility to locate identical LUNs on STK, add EMC disk to the LVM2 volume group, use pvmove to migrate data from STK to EMC LUN device, remove old STK from volume group.
    2. Leave RDAC modified kernel on system, try installing EMC PowerPath (!), add EMC disk to volume group, pvmove data from STK to Clariion, remove old STK from volume group.
    For option 2, I hope to soon have a test system (that is currently a production system going through a data migration to another server) available that I could try these options upon. However, I have concerns as I hear PowerPath and RDAC do not get along (same holds true for PP and Linux MPIO).
    Questions I have:
    If I go with option 1 and boot into a vanilla kernel, sans RDAC, I "believe" I would see multiple paths (hence the need to identify which disks are identical with EMC's "inq" utility). Does anyone have any thoughts on this?
    If I go with option 2, and leave the modified RDAC kernel alone (having PowerPath and RDAC on the same server at once), does anyone portend data corruption / hung server / end of the world and general doominess?
    There's also option 3, which I probably have to consider as well:
    3. Based on EMC's Grab utility's recommendation, upgrade the Qlogic driver, install new kernel, make new initrd, install RDAC (or not), and generally enter into new unknown territory on an unknown kernel (the kernel could be solidly proven to someone else, but not yet in my shop).
    But option 3 is really simply a rehash of options 1 & 2 and understanding what to expect during the migration. Any advice is appreciated.
    System details:
    [root@prltec01 ~]# lsmod | grep ql
    qla2300 129792 0
    qla2xxx 307576 1 qla2300
    qla2xxx_conf 305924 1
    scsi_mod 116941 10 mppVhba,qla2xxx,libata,mptsas,mptspi,mptfc,mptscsi,mppUpper,sg,sd_mod
    [root@prltec01 ~]# lsmod | grep mpp
    mppVhba 103424 13
    mppUpper 84512 1 mppVhba
    scsi_mod 116941 10 mppVhba,qla2xxx,libata,mptsas,mptspi,mptfc,mptscsi,mppUpper,sg,sd_mod
    [root@prltec01 ~]# cat /etc/modprobe.conf
    alias eth0 tg3
    alias eth1 tg3
    alias scsi_hostadapter mptbase
    alias scsi_hostadapter1 mptscsih
    alias scsi_hostadapter2 ata_piix
    alias scsi_hostadapter3 qla2xxx_conf
    alias scsi_hostadapter4 qla2300
    alias usb-controller ehci-hcd
    alias usb-controller1 uhci-hcd
    options scsi_mod max_scsi_luns=255 scsi_allow_ghost_devices=1
    install qla2xxx /sbin/modprobe qla2xxx_conf; /sbin/modprobe --ignore-install qla2xxx
    remove qla2xxx /sbin/modprobe -r --first-time --ignore-remove qla2xxx && { /sbin/modprobe -r --ignore-remove qla2xxx_conf; }
    ### BEGIN MPP Driver Comments ###
    # Additional config info can be found in /opt/mpp/modprobe.conf.mppappend.
    # The Above config info is needed if you want to make mkinitrd manually.
    # Please read the Readme file that came with MPP driver for building RamDisk manually.
    # Edit the '/etc/modprobe.conf' file and run 'mppUpdate' to create Ramdisk dynamically.
    ### END MPP Driver Comments ###
    alias bond0 bonding
    options bonding miimon=100 mode=1
    ############### >>>>>>>>>>> We use mppUdate to create new initrd files <<<<<<<<<<<<<<<###################
    [root@prltec01 ~]# cat /etc/r*release
    Red Hat Enterprise Linux ES release 4 (Nahant Update 4)
    [root@prltec01 ~]# mppUtil -V
    Linux MPP Driver Version: 09.01.B2.01
    [root@prltec01 ~]# modinfo qla2300
    filename: /lib/modules/2.6.9-34.ELsmp/kernel/drivers/scsi/qla2xxx/qla2300.ko
    author: QLogic Corporation
    description: QLogic ISP23xx FC-SCSI Host Bus Adapter driver
    license: GPL
    version: 8.01.05 204868D2F9BA2C32F657B45
    vermagic: 2.6.9-34.ELsmp SMP 686 REGPARM 4KSTACKS gcc-3.4
    depends: qla2xxx
    alias: pci:v00001077d00002300sv*sd*bc*sc*i*
    alias: pci:v00001077d00002312sv*sd*bc*sc*i*
    alias: pci:v00001077d00006312sv*sd*bc*sc*i*
    [root@prltec01 ~]# uname -a
    Linux prltec01.lawson.com 2.6.9-34.ELsmp #1 SMP Fri Feb 24 16:54:53 EST 2006 i686 i686 i386 GNU/Linux
    Cheers,
    Andrea
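    For the LVM2 part of option 1, the online move itself is simple once the Clariion device is visible; device and VG names below are examples, and the real open question is the RDAC/PowerPath coexistence rather than these commands:

    # Add the new EMC pseudo-device to the volume group, drain the old STK PV, remove it
    pvcreate /dev/emcpowera
    vgextend vg_data /dev/emcpowera
    pvmove /dev/sdb /dev/emcpowera      # moves all extents off the old STK LUN, online
    vgreduce vg_data /dev/sdb
    pvremove /dev/sdb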

    Jan 6 13:36:01 servername kernel: [ 62.574033] sd 3:0:0:16384: emc: Invalid failover mode 2
    I would try failover mode 4 (ALUA) instead. Failover mode 2 is not supported. This is selected on the array where one registers the connections.

  • LYNC 2010 : Steps to migrate from one pool to another pool

    Hi,
    We have Lync 2010 deployed in 2 sites (Site1 & Site2).
    Currently all users are homed on the Site1 pool, and the CMS is in Site1.
    We also have the Mediation role and Edge role in both sites.
    We have dial-in conferencing in Site1.
    Currently the public DNS records (for the Edge and for the reverse proxy) point to Site1.
    The Certificate Authority for Site1 and Site2 is different; however, the SAN entries and subject name are the same for all certificates, and both CAs are public CAs.
    We have to move all users to the Site2 pool, move the CMS to Site2, and decommission Site1.
    What is the step-by-step process to move the users, CMS, Edge server traffic, Mediation server and reverse proxy?
    Thanks
    jitender

    Hi,
    The migration process from one pool to another is similar to migrating from Lync Server 2010 to Lync Server 2013.
    You can refer to the link below of “Migration from Lync Server 2010 to Lync Server 2013”
    https://technet.microsoft.com/en-us/library/jj205369.aspx
    Best Regards,
    Eason Huang
    Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected]
    Eason Huang
    TechNet Community Support
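    The user-move part, at least, is done with Move-CsUser from the Lync Server Management Shell, regardless of which pool you target (the pool FQDNs below are examples; moving the CMS and re-pointing the Edge/reverse proxy are separate steps covered in the linked documentation):

    # Move everyone homed on the Site1 pool to the Site2 pool
    Get-CsUser -Filter {RegistrarPool -eq "pool01.site1.contoso.com"} |
        Move-CsUser -Target "pool01.site2.contoso.com" -Confirm:$false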
