Migrating a Physical Volume

I have a Sun V240 server connected to an IBM Shark and to an HP EVA8000 via a McDATA SAN director. I need to decommission the Shark and keep only the EVA.
How can I migrate a LUN from the Shark to the EVA? Is there something like migratepv for Solaris 9?
Thanks
N2IX

Similar Messages

  • I am trying to stop encryption in FileVault, but I keep getting the message, "The target disk isn't eligible for reversion because it wasn't created by conversion or it is not part of a simple setup of exactly one logical and one physical volume."

    I am trying to stop encryption in FileVault on Mac OS X Lion 10.7.4.
    I keep getting the message:
    "The target disk isn't eligible for reversion because it wasn't created by conversion or it is not part of a simple setup of exactly one logical and one physical volume."
    Can someone please advise how to disable it?
    Thank you.

    Are you using Boot Camp? See this discussion at MacRumors.
    Clinton

  • Change all the software RAID partitions to LVM (physical volume)

    My PC has an RHEL 5.0 system and four other partitions (software RAID type). Now I want to change all the software RAID partitions to LVM physical volume partitions, and I do not want to reinstall the RHEL 5.0 system.
    How do I do that?
    Can the fdisk tool do it? Which tool is easiest?

    Hans,
    I am using ASMLib and no raw partitions. All the partitions must be separate partitions, not mounted by any system. For a partition to be used by ASMLib, it must still be known to the machine.
    From the ASMLib pages:
    "Every disk that ASMLib is going to be accessing needs to be made available. This is accomplished by creating an ASM disk. The /etc/init.d/oracleasm script is again used for this task: "

  • Anybody know of a free way to copy/migrate a physical RDM in VMware?

    I need to migrate a physical RDM in an MSCS cluster from one SAN to another SAN. VMDK isn't an option, as my MS cluster has two nodes on two different physical boxes. Is there an easy/cheap/free way to do this?

    I truly wish I could. We have two Compellent arrays but we're not licensed for SAN replication, so I'm stuck trying to move several TBs of information without the benefit of the SAN helping me out. >:-(
    This topic first appeared in the Spiceworks Community

  • Migrating a physical database server to a virtual server

    We are migrating a physical database server to a virtual server. What are the concerns?
    The server name and IP address will stay the same.

    A database server on a virtual server sounds like a bad idea to me.
    Databases are huge resource consumers; they require dedicated memory, CPU, I/O, and network resources.
    In a virtual environment all of this is shared.
    So I predict that in a virtual environment your performance and throughput will suffer greatly.
    You can google the pros and cons of virtualized servers for more details.
    However, as far as the BO product is concerned, all we need is connectivity: if the DB connection info doesn't change, we won't see any difference (apart from the performance point above).

  • [SOLVED] Grub2 and LVM -- "Couldn't find physical volume `pv1'"

    Hello Folks
    I'm trying to upgrade from grub-legacy to grub2, following the instructions at https://wiki.archlinux.org/index.php/GRUB2
    I've installed grub-bios, and run this without problem:
    # modprobe dm-mod
    # grub-install --recheck /dev/sda
    But this command
    # grub-mkconfig -o /boot/grub/grub.cfg
    gives this:
    Generating grub.cfg ...
    /usr/sbin/grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
    /usr/sbin/grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
    /usr/sbin/grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
    /usr/sbin/grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
    /usr/sbin/grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
    Found linux image: /boot/vmlinuz-linux
    Found initrd image: /boot/initramfs-linux.img
    /usr/sbin/grub-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core image..
    done
    So now I'm reluctant to try to reboot the system because it seems likely to be broken.  Should I ignore the warnings, or fix something?
    I'm using LVM2 as you can see.  /boot is on a separate non-LVM partition (/dev/sdc1).  root is on LVM.  This is all on a recently-updated 64-bit Arch installation using systemd.
    Here's a load of information -- I hope it's relevant.
    # fdisk -lu
    Disk /dev/sdb: 250.1 GB, 250059350016 bytes, 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xea22bb30
    Device Boot Start End Blocks Id System
    /dev/sdb1 63 488392064 244196001 83 Linux
    Disk /dev/sda: 250.0 GB, 250000000000 bytes, 488281250 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000080
    Device Boot Start End Blocks Id System
    /dev/sda1 2048 488281249 244139601 8e Linux LVM
    Disk /dev/sdc: 500.1 GB, 500107862016 bytes, 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    Device Boot Start End Blocks Id System
    /dev/sdc1 * 63 481949 240943+ 83 Linux
    /dev/sdc2 481950 12482504 6000277+ 82 Linux swap / Solaris
    /dev/sdc3 12482505 976773167 482145331+ 8e Linux LVM
    Disk /dev/mapper/vg1-root: 64.4 GB, 64424509440 bytes, 125829120 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk /dev/mapper/vg1-home: 583.0 GB, 583008256000 bytes, 1138688000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    # pvdisplay
    --- Physical volume ---
    PV Name /dev/sdc3
    VG Name vg1
    PV Size 459.81 GiB / not usable 1.05 MiB
    Allocatable yes (but full)
    PE Size 4.00 MiB
    Total PE 117711
    Free PE 0
    Allocated PE 117711
    PV UUID zaLJiO-1LCH-TGi6-hwBr-OyNs-Sjlm-HggrMo
    --- Physical volume ---
    PV Name /dev/sda1
    VG Name vg1
    PV Size 232.83 GiB / not usable 1.58 MiB
    Allocatable yes
    PE Size 4.00 MiB
    Total PE 59604
    Free PE 22955
    Allocated PE 36649
    PV UUID P05c2d-1d2i-bf0M-u6BX-EEq0-fvZW-VkTLhY
    # lvdisplay
    --- Logical volume ---
    LV Path /dev/vg1/root
    LV Name root
    VG Name vg1
    LV UUID Z68H3p-VvbC-ZNau-7Ds7-GptS-Hpl0-VZNjo4
    LV Write Access read/write
    LV Creation host, time ,
    LV Status available
    # open 1
    LV Size 60.00 GiB
    Current LE 15360
    Segments 1
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:0
    --- Logical volume ---
    LV Path /dev/vg1/home
    LV Name home
    VG Name vg1
    LV UUID uUfmS9-C4CK-Vw3V-cmwD-hEC1-VcwD-90yAyO
    LV Write Access read/write
    LV Creation host, time ,
    LV Status available
    # open 1
    LV Size 542.97 GiB
    Current LE 139000
    Segments 2
    Allocation inherit
    Read ahead sectors auto
    - currently set to 256
    Block device 254:1
    Last edited by Chris Dennis (2013-04-03 19:04:58)

    Chris Dennis wrote:
    Oh well, I took a punt on the word 'Warning' in the message, and rebooted anyway.
    It worked!
    I've just completed a series of experiments involving LVM and GRUB2. The short story is that such warnings are innocuous and arise from extending a volume group.
    Now in some detail, here's what happens (all of which was performed in VirtualBox with the current Arch rolling release just to make it easy to add and remove disk devices):
    a). pvcreate /dev/sde1 /dev/sdf1
    * Use partitions of type 8e, spanning the whole drive, for BOTH devices that will make up the volume
    group, to prove that partitioning is irrelevant to the matter.
    b). vgcreate vg_x /dev/sde1
    * Start with just one device in the volume group.
    c). lvcreate --extents 100%VG --name boot vg_x
    d). mkfs.ext4 /dev/vg_x/boot && mount /dev/vg_x/boot /mnt/other
    e). grub2-install --boot-directory=/mnt/other /dev/sde
    Installation finished. No error reported.
    All is well...but now let's extend the vg_x volume group with the pre-allocated device, /dev/sdf1:
    f). vgextend vg_x /dev/sdf1
    g). grub2-install --boot-directory=/mnt/other /dev/sde
    /usr/sbin/grub2-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core
    /usr/sbin/grub2-probe: warning: Couldn't find physical volume `pv1'. Some modules may be missing from core
    Installation finished. No error reported.
    ...and boom goes the dynamite. As Chris Dennis stated, GRUB2 installs fine and the system is bootable in spite of the warning. The grub-2.00 source where the warning arises is in ./grub-core/disk/diskfilter.c and has this comment:
    /* TRANSLATORS: This message kicks in during the detection of
    which modules needs to be included in core image. This happens
    in the case of degraded RAID and means that autodetection may
    fail to include some of modules. It's an installation time
    message, not runtime message. */
    I haven't tried to hack the GRUB code but, based upon my experimentation and the ease of replicating the problem, my guess is that a volume group extended in the manner shown above is somehow mishandled by GRUB. It's arguably a bug, IMHO, since a volume group, even when extended, is still a valid entity.

  • [SOLVED] LVM physical volume not available during early boot

    Note to self:
    Of course it's not available - it's in an encrypted LVM container!
    Hi,
    I'm trying to get hibernate working, but I'm stuck on the resume hook not seeing my swap partition.
    The swap partition is on an LVM logical volume (LV) which is on a physical volume (PV) created on a different hard drive than the root partition. During bootup I get something similar to "ERROR: hibernation partition '/dev/mapper/volgrp-swap' not found".
    Further debugging showed that for some reason the early userspace has no access to the LVM physical volume. I managed to run a shell using the break=premount boot parameter. From there I found that "lvm pvscan" shows no physical volumes at all. I suspect that mkinitcpio ignores the non-root hard drive, though I was unable to confirm that, and neither was I able to find any other reason for this behaviour.
    Relevant lines from mkinitcpio.conf:
    MODULES="dm_mod btrfs"
    HOOKS="base udev block lvm2 autodetect keyboard encrypt resume filesystems fsck"
    Any help would be appreciated, including any tips on where to get more debug data.
    cheers,
    -miky
    Last edited by mr.MikyMaus (2013-09-25 21:32:32)
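    Given the note-to-self above (the PV sits inside an encrypted container), a likely fix is hook ordering: encrypt has to run before lvm2 so the container is unlocked before LVM scans for physical volumes. A sketch of the assumed corrected line:
    HOOKS="base udev autodetect keyboard block encrypt lvm2 resume filesystems fsck"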

    Fackamato wrote:
    Do you have the correct "root=" line in your bootloader's kernel line? Is it in fstab?
    Perhaps try with rootdelay=5 as well?
    Yes, I have root=/dev/mapper/hdgroup0-volroot, and it's in fstab (not that fstab is even being read).
    rootdelay wouldn't help at all, because after it drops me to a busybox shell I can't do a manual mount.

  • Partition failed: A problem occurred while resizing Core Storage physical volume structures

    Hi everybody,
    I have just installed Yosemite on my iMac and everything works fine.
    iMac 27" late 2012
    3.4 GHz Intel Core i7
    32 GB 1600 MHz DDR3
    3 TB Fusion Drive
    Now I need to create a bootable partition to install a copy of Mavericks, because some FCPX plugins do not work on Yosemite.
    To create the new partition I use Disk Utility, but each time I try to create it the Mac shows me this error:
    "Partition failed: A problem occurred while resizing Core Storage physical volume structures"
    Why?
    What can I do?
    P.S. FCPX does not work on a virtual machine, so I have to install it on a new partition
    Thanks

    Ditto here.
    Same problem: I have an Apple_CoreStorage partition that can't be unlocked.
    You have to run this command to free up the Core Storage partition:
    diskutil coreStorage delete logicalGroupUUID
    In your case, it'll be:
    diskutil coreStorage delete E2CDC5FC-DA6E-4F43-B524-8925ABB23883
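    If you don't know the logical volume group UUID, you can list it first (output varies by machine):
    diskutil coreStorage list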

  • Migrate DST Shadow Volumes

    Hi,
    we have to migrate a server with shadow volumes to new hardware.
    Assuming the following situation:
    Source Server:
    - SLES11 SP1 OES11
    Pool DATA 500 GB with Volume DATA 500 GB almost full (Primary Volume)
    Pool SH_DATA 500 GB with Volume SH_DATA 500 GB almost full (Shadow Volume)
    No more space left on the Server
    Target Server:
    - SLES11 SP3 OES11 SP2
    approx. 2 TB free space
    What I found in the documentation:
    Migrating with the Shadow Volume Relationship: Only 1 GB of data from the source server can be migrated to the primary volume Vol4 of the target server. If you need the data on all the volumes of the source server to be migrated to the target server, perform the following:
    NOTE: You must stop the DST policies temporarily before performing the migration.
    1. Stop the existing DST policies.
    2. Create a project to migrate the data less than or equal to 1 GB from the source server to the target server.
    3. Perform the migration.
    4. (Conditional) If some files or folders were open on the source server and did not get migrated to the target server, perform synchronization.
    Synchronization must be performed before performing the next step.
    5. Configure a DST policy on the target server to move the migrated data from the primary volume to the secondary volume.
    As a result, there is space available on the primary volume of the target server to migrate additional data from the source server.
    6. Stop the DST policy after the required data is moved from the primary volume Vol4 to the secondary volume Vol5.
    7. Repeat Step 2 to Step 6 until the entire data is migrated.
    Following the documentation, I would have to perform 1,000 migrations for 1 TB of data?
    Is there a smoother way to do the migration of the volumes?
    Best regards
    Nico

    Originally Posted by ndecken
    Hi Kevin,
    sorry, I didn't see that you were writing another post.
    We haven't performed the migration yet. A step-by-step would be nice; I'll PM you my email address.
    But here's another scenario ;-)
    One of our customers has an FC SAN with several LUNs on it; some of these are DST volumes.
    Now he is getting a low-performance iSCSI storage system and wants to move the DST volumes off the FC SAN to the iSCSI storage.
    Both storages are visible by the Vsphere hosts, where the fileservers reside on.
    My plan is:
    1. create new volume on the iSCSI storage and name it DST_VOL2
    2. turn off any DST policies
    3. migrate the old DST Volume named DST_VOL1 to DST_VOL2
    4. dismount and delete DST_VOL1
    5. rename DST_VOL2 to DST_VOL1
    Will this work, or is there another way to do it?
    Best regards
    Nico
    This may be changing since I was intending to refer to this when I said Pool Move:
    https://www.novell.com/documentation...move_pool.html
    So hopefully the link Hans posted is only for DFS moves (which I'm not referring to).
    IF the same rules apply then:
    Well since we have some product limitations with the pool move at this point, I'm going to assume you're in a non-clustered setup?
    If so, then I guess the only ways I can think of:
    1) If you're on OES11, I believe the max size for pools is 9 TB now? Maybe you can expand the Primary pool to be large enough, re-shift the Shadow pool back, THEN pool move, and then re-shadow it? Depending upon your VMware version, you may need to use RAW disks (I think in 5.5 you're still limited to 2 TB disks?)
    2) If you're going to use your above method, I think you'll need some significant downtime (I'm assuming you're using the same server, just diff. disks), because you'll need to break the DST shadow relationship, (not just stop the policies, literally de-shadow the volumes) which means users won't have access to their shadowed data and then do the copy/migrate as you mentioned.
    3) IF you were going to another/different server, then you could use the miggui with my unsupported method (or maybe it's supported now) which only involves a brief break of the shadow (just long enough to build the migration project so it sees 2 diff. volumes) and then away you go.
    Unless someone has another/better way to move storage items and retain the shadow relationship? I'm not a VMware expert, but MAYBE (not sure) there's an option in VMware to migrate a vdisk from one to another (or the vmdk files??) while the system is online? But I don't think so. I know there's a storage migration, but usually that's the entire VM Guest itself, not just a specific disk.
    But, IF Vmware has something nifty, you could have a VM Storage that's on the iSCSI and maybe "move" the Virtual disk to that new iSCSI storage, but again, not sure how that'd be handled or what would happen in the underlying OS. I suppose it's possible to have the VM LUN/Storage be on iSCSI, yet be presented to the Guest as regular SCSI storage since it's virtualized??
    I know Massimo does lots with iSCSI and such.

  • Split Time Machine migration over 2 volumes?

    Hi,
    I've just bought a shiny new MBP with a 128 GB SSD, which I am migrating to from an iMac.  I have my Time Machine backup of the iMac (which I've sold).  The iMac had just under 1 TB used all-in.  My intention is to keep the OS and apps on the SSD and keep my photo, music, and video libraries on the HDD.  My question is: how do I split out my apps and settings vs. the user data in my Time Machine backup and migrate them to the right volumes?  I have an external FireWire hard drive and also an optibay HDD kit on its way.  If I need to temporarily restore the whole thing to one volume and manually move things around, I have the scope to do that also.
    Thanks.
    EQLZR

    Found this on Macworld, which looks pretty straightforward.  I'll give this a go:
    A brief tutorial on symbolic links 
    Nov 06, '01 10:29:06AM • Contributed by: jmil
    OS X's file structure mounts all partitions under the "/Volumes" directory at the root level of the filesystem. However, when navigating the filesystem with "cd" and other commands, it can be annoying to type "/Volumes/volume_name" each time you want to access a different partition. To learn about symbolic links and use them to add shortcuts at the root level of your filesystem, read the rest of this article. This assumes you are moderately comfortable in the Terminal, and that you have administrative privileges.
    What is a Symbolic Link?
    If you've ever made an "Alias" to a file in classic Mac OS, or a "Shortcut" to a file in Windows, you will easily be able to understand the UNIX equivalent (where "aliases" and "shortcuts" came from in the first place!), called a "Symbolic Link". The easiest definition to understand is directly from the man page, "[A Link] is useful for maintaining multiple copies of a file in many places at once without using up storage for the 'copies'; instead, a link 'points' to the original copy." (If you want to read more about it, type man ln in the Terminal.)
    Let's say you have two partitions or drives - "X" containing OS X and "Classic" containing OS 9. To navigate to "X", you simply type
    cd /
    To navigate to "Classic", though, you'd have to type
    cd /Volumes/Classic
    If you have many partitions and/or drives, and you are trying to manage many files across them, it can get annoying to type "/Volumes" every time. Moreover, you cannot simply create an Alias to the drive/partition to accomplish this because the command line utility "cd" does not handle Mac OS aliases properly. However, a symbolic link will solve the problem.
    To create the link, simply type the following:
    cd /
    ln -s /Volumes/Classic/ Classic
    That's it. ln -s makes a new file called Classic which points to /Volumes/Classic/. To see that this worked, you can simply type
    ls -la | more
    (note the "|" character is called "pipe" and is found above the "Return" key). You should see a line with the following text:
    Classic -> /Volumes/Classic/
    The arrow shows you exactly what you did... A file named Classic points to /Volumes/Classic.
    Links have many possibilities around the filesystem, and I encourage you to read more about them in the man page. While command line utilities cannot recognize Mac OS aliases, the Mac OS Finder will recognize symbolic links you construct in the terminal, or via Third-Party Utilities such as SNAX so you can use links just like aliases in the Finder.
    Jordan

  • Migration from physical servers to Oracle Solaris Containers

    We are in the process of migrating our Oracle databases from physical Sun SPARC servers to Oracle Solaris Containers.
    Do we need any extra settings on the Oracle side or the server side in a virtualized environment to run Oracle databases?
    Any comment will be of help.
    Thanks
    Abdul

    Oracle databases work fine in zones.
    Some planning is required to find out whether you need to use dedicated CPUs or other resource management options.
    Use projects to set the appropriate memory settings.
    Depending on the version of the database, the installer may choke and complain about missing entries in /etc/system (like shminfo_shmmax), but those warnings can be safely ignored.
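    For example, a resource-control sketch (the project name, user, and limit are illustrative, not from the thread):
    # projadd -U oracle -K "project.max-shm-memory=(priv,8G,deny)" user.oracle
    # projects -l user.oracle
    The first command creates a project for the oracle user with a shared-memory cap; the second displays the project so you can verify the attribute.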

  • Migration from physical server to zone (solaris 10)

    Hello all,
    I found an old thread about the subject (Migrate Solaris 10 Physical Server to Solaris 10 Zone), but I have a question.
    Using the flarcreate command, will all the zpools I have on the server be added to the flar archive? Right now we have 14 zpools.
    If I execute this command, "flarcreate -n "Migration File" -S -R / -x /flash /flash/Migration_File-`date '+%m-%d-%y'`.flar", will it capture all the zpools?
    This is for migrating from an E25K server to an M9000 server.
    The E25K (physical server) has release "Solaris 10 10/08 s10s_u6wos_07b SPARC" and the zone server (M9000) has release "Oracle Solaris 10 8/11 s10s_u10wos_17b SPARC". Could this be an issue?
    Thanks for any help.
    Edited by: 875571 on Dec 9, 2011 7:38 AM

    flarcreate will only include data from the root pool (typically called rpool). The odds are that this is what you actually want.
    Presumably, on a 25k you would have one pool for storing the OS and perhaps home directories, etc. This is probably from some sort of a disk tray. The other pools are likely SAN-attached and are probably quite large (terabytes, perhaps). It is quite likely that instead of creating a multi-terabyte archive, you would instead want an archive of the root pool (10's to 100's of megabytes) and would use SAN features to make the other pools visible to the M9000.
    One thing that you need to do that probably isn't obvious from the documentation is that you will need to add dataset resources to the solaris10 zone's configuration to make the other zpools visible in the solaris10 zone. Assuming that these other pools are on a SAN, the zpools are no longer imported on the 25k, and the SAN is configured to allow the M9000 to access the LUNs, you will do something like the following for each zpool:
    # zpool import -N -o zoned=on +poolname+
    # zonecfg -z +zonename+ "add dataset; set name=+poolname+; end"
    In the event that you really do need to copy all of the zpools to the M9000, you can do that as well. However, I would recommend doing that using a procedure like the one described at http://docs.oracle.com/cd/E23824_01/html/821-1448/gbchx.html#gbinz. (zfs send and zfs recv can be used to send incremental streams as well. Thus, you could do the majority of the transfer ahead of time, then do an incremental transfer when you are in your cut-over window.)
    If you are going the zfs send | zfs recv route and you want to consolidate the zpools into a single zpool, you can do so, then use dataset aliasing to make the zone still see the data that exists in multiple pools as though nothing changed. See http://docs.oracle.com/cd/E23824_01/html/821-1448/gayov.html#gbbst and http://docs.oracle.com/cd/E23824_01/html/821-1460/z.admin.task-11.html#z.admin.task-12.
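    A sketch of that incremental send/recv flow (the pool, snapshot, and host names are placeholders, not from the thread):
    # zfs snapshot -r datapool@migrate1
    # zfs send -R datapool@migrate1 | ssh m9000 zfs recv -Fd datapool
    ...later, inside the cut-over window...
    # zfs snapshot -r datapool@migrate2
    # zfs send -R -i datapool@migrate1 datapool@migrate2 | ssh m9000 zfs recv -Fd datapool
    The bulk transfer happens ahead of time; the second send only carries the blocks changed since the first snapshot.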

  • NetWare Server Migration Utility fills volume on SOURCE srv

    I am trying to migrate our primary NetWare 6.5 w/SP8 file server to VMware. One of the NSS volumes on the source server is nearly 2 TB in size with ~60 GB free. During the migration, that volume on the source server filled up to 100%, however, the migration continued to run (over the weekend), so it wasn't a big deal. However, at ~95% completed, the Migration Utility crashed on my computer (some Visual C++ error) and closed. Now, the volume on the source server is still at 100% usage, and I can't seem to "clean it" up. My questions are:
    #1) Is the Migration Utility using source server disk space to "cache" the files it is going to copy? If so, can I stop it from doing that?
    #2) What must I do to reclaim that free space on the source server?
    Thank you.

    Hi.
    You obviously have compression enabled on the source, but not on the
    target volume. In that case the migration has to uncompress compressed
    files on the source before it migrates them.
    The "fix" for the full volume now is unfortunately to wait until
    compression kicks in again for those files.
    CU,
    Massimo Rosen
    Novell Knowledge Partner
    No emails please!
    http://www.cfc-it.de

  • SAP ECC Migration from Physical to ESX

    Dear All,
    My company is moving from physical servers to a virtual platform (VMware ESX). We are using the 7.2 kernel on RHEL 5.3 64-bit for SAP HCM. My plan is:
    1) Shut down all services
    2) Clone the physical server into a VM using vCenter Converter
    3) Change all DNS-based information and update the IP address
    4) Restart the services on the new server with the new IP address
    My take is I need to request a new license for the new hardware. Please suggest what else I need to consider during this process.
    Thanks.

    Hi,
    My take is I need to request a new license for the new hardware
    Yes, you need to request a new license since your hardware is changing.
    Apart from the steps mentioned by you:
    1) You need to readjust the RFCs
    2) Adjust the TMS configuration
    3) Adjust the SAP Logon pad entries for end users
    Regards,
    Deepak Kori

  • Disk Migration under Solaris Volume Manager.

    Hi all,
    I need to plan a disk migration.
    Currently I have 2x 36 GB disks in RAID 0 under SVM on a V240.
    I need to replace the two disks with 72 GB disks.
    How can I do that with the shortest downtime?
    Thanks in advance for your help.

    Put the new disks into the machine.
    Partition them so they have partitions the same size as the original partitions.
    Create mirror metadevices and add the original disks to the mirror.
    Reboot so you're running off the mirror devices.
    Add the new disks into the mirror.
    Wait for the mirroring to complete.
    Remove the original devices from the mirror.
    You can then remove the original disks and you're done; a command sketch follows below.
    Downtime: about a minute for the reboot.
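    A sketch of those steps in commands, assuming a single mirrored slice (the metadevice numbers and c#t#d# device names are illustrative, not from the thread):
    # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2   (copy the label to a new disk)
    # metainit -f d11 1 1 c1t0d0s0   (submirror on the original slice; -f because it is in use)
    # metainit d12 1 1 c1t2d0s0      (submirror on the new 72 GB slice)
    # metainit d10 -m d11            (one-way mirror over the original)
    # metaroot d10                   (only if this is the root slice; updates /etc/vfstab and /etc/system)
    # reboot
    # metattach d10 d12              (attach the new submirror; the resync starts)
    # metastat d10                   (wait until the resync completes)
    # metadetach d10 d11             (drop the original submirror)
    # metaclear d11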
