NFS vs iSCSI IOPS differ - why?

Hello -
I recently set up an environment with 8 IO Analyzers, each accessing either an iSCSI LUN or an NFS share (but not both at the same time). The secondary virtual disk was set to 30GB.
For the iSCSI tests, we ran a 50/50 0% random workload and our total IOPS reached 4086.11.
When we created the second disk on an NFS datastore and ran the same test as above, our total IOPS only reached 583.57 for the same time period (2 hours).  Additionally, latency doubled.
I checked IOMeter on the guests, and it appeared they were pushing fewer IOPS as well.
Any ideas as to why we couldn't push as many IOPS using NFS? I would have thought the IOPS would be roughly the same across tests, regardless of the backend.
Thanks in advance.

For those wondering about ZFS and its implications for performance, this post helped me understand what is going on: https://pthree.org/2013/04/19/zfs-administration-appendix-a-visualizing-the-zfs-intent-log/
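In case it helps anyone who hits the same gap: a quick way to see whether synchronous writes over NFS are the limiting factor is to check the ZFS sync/log settings on the filer and rerun a comparable workload from a Linux guest. A rough sketch, assuming a ZFS backend and hypothetical names (tank/nfs_datastore for the dataset, /dev/sdb for the 30GB test disk; the 4k block size is also an assumption):

    # On the ZFS box backing the NFS share (dataset name is hypothetical):
    zfs get sync,logbias tank/nfs_datastore   # sync=standard means every NFS O_SYNC write lands in the ZIL
    zpool iostat -v tank 5                    # watch whether a dedicated log (SLOG) device is absorbing those writes

    # Inside a Linux guest, roughly the same 50/50 read/write, 0% random workload:
    fio --name=seq5050 --filename=/dev/sdb --rw=rw --rwmixread=50 \
        --bs=4k --ioengine=libaio --iodepth=16 --direct=1 --runtime=300 --time_based

If IOPS jump when sync is temporarily relaxed for a test (not something to leave on in production), the NFS gap is the intent log doing its job rather than anything wrong with the network.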

Similar Messages

  • NFS vs ISCSI for Storage Repositories

    Does anyone have good guidance on using NFS vs iSCSI for larger production deployments of OVM 3?
    My testing has been pretty positive with NFS, but other than the documented "it's not as fast as block storage" and the fact that there are no instant clones (no OCFS2), has anyone else weighed the two for OVM? If so, what did you choose and why?
    Currently we are testing with NFS that's presented from a Solaris HA cluster servicing a ZFS pool (basically mimicking the ZFS 73xx and 74xx appliances), but I don't know how the same setup would perform if the ZFS pool grew to 10TB of running virtual disk images.
    Any feedback?
    Thanks
    Dave

    Dave wrote:
    Would you personally recommend against using one giant NFS mount to store VM disk images?
    I don't recommend against it; it's just most often the slowest possible storage solution in comparison to other mechanisms. NFS cannot take advantage of any of the OCFS2 reflinking, so guests must be fully copied from the template, which is time consuming. Loop-mounting a disk image on NFS is less efficient than loop-mounting it via iSCSI or attaching it directly in the guest. FC SAN is usually the most efficient storage, but bonded 10Gbps interfaces for NFS or iSCSI may now be faster. If you have dual 8Gbps FC HBAs vs dual 1Gbps NICs for NFS/iSCSI, the FC SAN will win.
    Essentially, you have to evaluate what your critical success factors are and then make storage decisions based on that. As you have a majority of Windows guests, you need to present the block devices via Oracle VM, so you need to use either virtual disk images (which are the slowest, but easiest to manage) or FC/iSCSI LUNs presented to the guest (which are much faster, but more difficult to manage).
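    To make the reflink point concrete, this is roughly the per-clone difference (the paths are hypothetical examples, and reflink(1) is the utility shipped with the OCFS2 tooling):

        # Block-backed (OCFS2) repository: a clone is a copy-on-write reflink, near-instant
        reflink /OVS/Repositories/repo1/Templates/ol6-template.img \
                /OVS/Repositories/repo1/VirtualDisks/guest01.img

        # NFS-backed repository: no reflink support, so the same clone is a full byte-for-byte copy
        cp /nfs/repo1/Templates/ol6-template.img /nfs/repo1/VirtualDisks/guest01.img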

  • Fault Tolerance of NFS and iSCSI

    Hello,
    I'm currently designing a new datacenter core environment. In this case there are also Nexus 5548s with FEXs involved, and on these FEXs there are some servers which speak NFS and iSCSI.
    While changing the core components there will be a disruption between the servers.
    What is the maximum timeout the NFS or iSCSI protocol can handle while the components are being changed? The disruption should last at most 1 second.
    Regards
    Udo
    Sent from Cisco Technical Support iPad App
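    For reference, the client-side timeouts are what decide whether a short disruption is survivable. On a Linux host they look roughly like this (a sketch only; the export, mount point and IQN are made-up examples, and defaults vary by distribution):

        # NFS: timeo is in tenths of a second, retrans is the retry count; 'hard' means I/O
        # stalls and retries rather than erroring out while the path is down
        mount -o timeo=600,retrans=2,hard nfs-server:/export /mnt/nfs

        # open-iscsi: how long I/O is queued before the session is declared failed
        iscsiadm -m node -T iqn.2012-01.com.example:target1 \
                 -o update -n node.session.timeo.replacement_timeout -v 30

    A one-second outage is well inside both windows, so in practice the I/O should simply stall briefly and then resume.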

    JDW1: In case you haven't received the ISO document yet, the relevant section of the cited ISO 11898-2:2003 you want to look at is section 7.6 "Bus failure management", and specifically Table 12 - "Bus failure detection" and Figure 19 - "Possible failures of bus lines".

  • NFS and ISCSI using ip hash load balance policy

    As far as I know, the best practice for iSCSI is a single active NIC with one standby and "Route based on originating virtual port ID". But I have seen at a client site that NFS and iSCSI are configured to use "Route based on IP hash" with multiple NICs, and it has been working all this time; I cannot see that iSCSI does any multipathing there. The sysadmin told me it is OK to use that since both protocols are configured on the same storage and it does not make sense to separate them; his explanation was that if we want separate policies we should use separate storage, one array for NFS and another for iSCSI. I don't buy that, but I might be wrong. He pointed to the link below saying that you can use IP hash: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalI.... Is it OK to use "Route based on IP hash" for iSCSI as in that link?
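    One way to ground the discussion is to look at what the hosts are actually doing. On ESXi 5.x, something like this shows the teaming policy in force and whether iSCSI port binding is in play at all (the vSwitch and port group names are just examples):

        # Teaming/load-balancing policy on the vSwitch and on the storage port group
        esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup policy failover get --portgroup-name=iSCSI-A

        # vmkernel ports bound to the software iSCSI adapter (if any)
        esxcli iscsi networkportal list

    If port binding is in use, each bound vmkernel port is expected to have a single active uplink, which is where an "IP hash everywhere" setup usually falls down.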

    When you create your uplink port profile you simply use the auto channel command in your config:
    channel-group auto mode on
    This will create a static etherchannel when two or more ports are added to the uplink port profile from the same host.  Assuming your upstream switch config is still set to "mode on" for the etherchannel config, there's nothing to change.
    Regards,
    Robert

  • Use of NFS or iSCSI datastores

    A few queries..
    1. Is it possible to make use of NFS or iSCSI storage as permanent datastores for an EVO:RAIL appliance?
    2. What are the choices for VSAN data protection when creating VMs inside EVO:RAIL?

    >>1. Is it possible to make use of NFS or iSCSI storage as permanent datastores for an EVO:RAIL appliance?
    Yes, but they have to be manually configured on each one of the nodes (just like you would have to do on a regular vSphere deployment).
    >>2. What are the choices for VSAN data protection when creating VMs inside EVO:RAIL?
    Any vSphere solution products (such as VDP) which are supported with vSphere 5.5 and VSAN 1.0 are supported with EVO:RAIL. The same applies to 3rd-party solutions, as long as they are deemed supported with vSphere 5.5 and VSAN 1.0.
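    For completeness, "manually configured on each one of the nodes" is just the normal per-host datastore mount. For NFS it would look something like this on each ESXi host (the NAS hostname and export path are placeholders):

        esxcli storage nfs add --host=nas01.example.com --share=/vol/nfs_ds1 --volume-name=nfs_ds1
        esxcli storage nfs list   # confirm the datastore is mounted on this node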

  • Unable to use device as an iSCSI target

    My intended purpose is to have iSCSI targets for a VirtualBox setup at home, where block devices are used for the systems and existing data on a large RAID partition is exported as well. I was able to successfully export the file-backed block devices created with dd by adding them as backing-stores in the targets.conf file:
    include /etc/tgt/temp/*.conf
    default-driver iscsi

    # File-backed stores created with dd -- these export fine
    <target iqn.2012-09.net.domain:vm.fsrv>
        backing-store /srv/vm/disks/iscsi-disk-fsrv
    </target>
    <target iqn.2012-09.net.domain:vm.wsrv>
        backing-store /srv/vm/disks/iscsi-disk-wsrv
    </target>

    # RAID device -- only the controller (LUN 0) appears, no disk LUN
    <target iqn.2012-09.net.domain:lan.storage>
        backing-store /dev/md0
    </target>
    but the last one, with /dev/md0, only creates the controller and not the disk LUN.
    The RAID device is mounted; I don't know whether that matters, and unfortunately I can't try it unmounted yet because it is in use. I've tried all permutations of backing-store and direct-store with md0, as well as another device (sda) with and without the partition number, all with the same result.
    If anyone has been successful exporting a device (specifically an MD RAID device) I'd be really interested in knowing how. Also, if anyone knows how, or whether it's even possible, to use a directory as the backing/direct store, I'd like to know that too; my attempts there have been unsuccessful as well.
    I will preempt anyone asking why I'm not using some other technology, e.g. NFS, CIFS, ZFS, etc., by saying that this is largely academic. I want to compare the performance of a virtualized file server that receives its content over both NFS and iSCSI, and the NFS part is easy.
    Thanks.
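    In case it is useful to anyone debugging the same thing, dumping the live target state shows whether tgtd really attached a LUN behind each controller (this assumes the stock scsi-target-utils tooling):

        # list every target with its LUNs and backing stores;
        # a target with only the controller (LUN 0) means tgtd could not attach the backing store
        tgtadm --lld iscsi --mode target --op show

        # if tgt-admin is installed, re-apply targets.conf and pretty-print the result
        tgt-admin --update ALL
        tgt-admin --show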

    Mass storage only looks at the memory expansion.
    Did you have a micro SD card in it?
    What OS on the PC are you running?

  • Slow ISCSI perfomance on 7310

    We just got a Sun 7310 cluster, the 10TB configuration with 2x write SSD and 1x read SSD. We configured the 7310 as a single stripe (for testing only; we will later change to mirror) and ran several NFS and iSCSI tests to find peak performance. All tests were done on Solaris 10 clients. While the NFS tests were great, peaking at around 115MB/s (GigE speed), we were unable to get iSCSI performance greater than 88MB/s peak. We tried playing with the iSCSI settings on the 7310, like WCE, etc., but were unable to get better results.
    I know we could get better performance, as seen with the NFS tests. We were going to buy 10GigE interfaces, but if we can't push iSCSI above 88MB/s per client it won't make sense to buy them. I would really appreciate it if someone could point us in the right direction as to what could be changed to get better iSCSI performance.
    Eli

    The iSCSI LUNs are set up in a mixed mode, some 2k/4k and some 8k. The reason for such a small block size (and correct me if I am wrong) is that all the ZFS tuning guides mention trying to match the DB block size, and these LUNs are going to be used by an Informix database which has some 2k/4k/8k DB spaces, so I was trying to match the DB block size. (But for restores this might slow things down?)
    After testing all kinds of OS/Solaris 10 tunings, the only thing that improved performance was changing the sessions to "4" by running "iscsiadm modify initiator-node -c 4".
    We are using the 4 built-in NICs: 1 and 2 are set up in an LACP group with VLAN tags (jumbo frames are disabled), and 3 and 4 are used for management on each cluster node. We were wondering: if we add a dual 10GigE card, will the iSCSI performance be better/faster, and what is the best performance we can expect on a single client with 10GigE? Why a single client? Because we need to speed up the DB restore (we are using NetBackup), which only runs on a single client at a time.
    With the sessions now changed to "4" we get around 120-130MB/s; since it's only a 1GigE link we are not expecting any better speeds.
    Thanks for your help.
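    For anyone landing here later, the session change referred to above is made on the Solaris 10 initiator roughly like this (the extra sessions only help in parallel if MPxIO is enabled):

        iscsiadm list initiator-node          # shows the configured session count among other settings
        iscsiadm modify initiator-node -c 4   # four sessions per target; MPxIO then spreads I/O across them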

  • Server 2012 NFS - 10 minutes to resume after failover?

    I've got a Server 2012 Core cluster running an HA file server role with the new NFS service. The role has two associated clustered disks. When I fail the role between nodes, it takes 10-12 minutes for the NFS service to come back online - it'll sit in 'Online Pending' for several minutes, then transition to 'Failed', then finally come 'Online'. I've looked at the NFS event logs from the time period of failover and they look slightly odd. For example, in a failover at 18:41, I see this in the Admin log:
    Time      EventID  Description
    18:41:59  1076     Server for NFS successfully started virtual server {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}
    18:50:47  2000     A new NFS share was created. Path:Y:\msd_build, Alias:msd_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
    18:50:47  2000     A new NFS share was created. Path:Z:\eas_build, Alias:eas_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
    18:50:47  2002     A previously shared NFS folder was unshared. Path:Y:\msd_build, Alias:msd_build
    18:50:47  2002     A previously shared NFS folder was unshared. Path:Z:\eas_build, Alias:eas_build
    18:50:47  1078     NFS virtual server {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd} is stopped
    18:51:47  1076     Server for NFS successfully started virtual server {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}
    18:51:47  2000     A new NFS share was created. Path:Y:\msd_build, Alias:msd_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
    18:51:47  2000     A new NFS share was created. Path:Z:\eas_build, Alias:eas_build, ShareFlags:0xC0AE00, Encoding:7, SecurityFlavorFlags:0x2, UnmappedUid:4294967294, UnmappedGid:4294967294
    In the Operational log, I see this:
    Time      EventID  Description
    18:41:51  1108     Server for NFS received an arrival notification for volume \Device\HarddiskVolume11.
    18:41:51  1079     NFS virtual server successfully created volume \Device\HarddiskVolume11 (ResolvedPath \Device\HarddiskVolume11\, VolumeId {69d0efca-c067-11e1-bbc5-005056925169}).
    18:41:58  1108     Server for NFS received an arrival notification for volume \Device\HarddiskVolume9.
    18:41:58  1079     NFS virtual server successfully created volume \Device\HarddiskVolume9 (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}).
    18:41:59  1079     NFS virtual server successfully created volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
    18:41:59  1105     Server for NFS started volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
    18:41:59  1079     NFS virtual server successfully created volume \DosDevices\Y:\ (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}).
    18:44:06  1116     Server for NFS discovered volume Z: (ResolvedPath \Device\HarddiskVolume11\, VolumeId {69d0efca-c067-11e1-bbc5-005056925169}) and added it to the known volume table.
    18:50:47  1116     Server for NFS discovered volume Y: (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}) and added it to the known volume table.
    18:50:47  1081     NFS virtual server successfully destroyed volume \DosDevices\Y:\.
    18:50:47  1105     Server for NFS started volume Y: (ResolvedPath \Device\HarddiskVolume9\, VolumeId {c5014a4a-d0b8-11e1-bbcb-005056925167}).
    18:50:47  1105     Server for NFS started volume Z: (ResolvedPath \Device\HarddiskVolume11\, VolumeId {69d0efca-c067-11e1-bbc5-005056925169}).
    18:50:48  1106     Server for NFS stopped volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
    18:50:48  1081     NFS virtual server successfully destroyed volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\.
    18:51:47  1079     NFS virtual server successfully created volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
    18:51:47  1105     Server for NFS started volume \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\ (ResolvedPath \Pfs\Volume{fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}\, VolumeId {fc4bf5c0-c2c9-430f-8c44-4220ff6655bd}).
    From this, I'm not sure what's going on between 18:41:59 and 18:44:06, between 18:44:06 and 18:50:47 or between 18:50:48 and 18:51:47. What's the NFS volume discovery doing and why does it take so long?
    Does anyone have any thoughts as to where I could start looking to work out what's happening here? Is there any tracing that can be enabled for the NFS services to indicate what's going on?
    Thanks in advance!

    I was able to get some downtime this afternoon, so I tried:
    - deleting the NFS share in question and recreating it
    - deleting all the NFS shares on the clustered file server (thus removing the NFS Server resource) and recreating them
    - deleting all the NFS shares and the ._nfs folder from all the associated drives and recreating them
    - deleting the clustered file server altogether, shutting down the cluster, starting it back up and recreating the file server.
    None of these made any difference - this particular NFS resource still took about 10 minutes to return to service. I'm therefore supposing it's some aspect of the disk or the data on it that NFS is taking a long time to enumerate, but it's annoying that I don't have any visibility into what's going on. I might try asking this same question in the MS partner forums to see if I get any answers there...

  • ISCSI

    One alternative our NAS provider offers to NFS is iSCSI (which is SCSI encapsulated in TCP/IP). It requires an iSCSI initiator in the client environment (Solaris); apparently several initiators are open source. It has been described as a poor man's fibre-channel SAN.
    Is it possible to use an iSCSI target as the iMS mail store? Would this be better than using NFS - which isn't supported on my device?
    The reason I ask is because our NAS (snap 4200) is redundant RAID 5 with lots of space, etc. and I would feel better about having the mailstore in a failover environment.
    Thanks for any insights,
    s7

    Well, I can tell you that no testing has been done with iSCSI. That doesn't mean it won't work, but . . .
    We have had one report that, "it works fine" in this forum.
    You might want to post your query in the ims-ms list, found at arnold.com. That's where our developers hang out......
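    For what it's worth, the initiator side on Solaris 10 is only a handful of commands (the portal address below is a placeholder for the Snap 4200's iSCSI portal); whether iMS is then supported on the resulting filesystem is the untested part mentioned above:

        iscsiadm add discovery-address 192.168.10.20:3260   # point at the NAS iSCSI portal
        iscsiadm modify discovery --sendtargets enable      # turn on SendTargets discovery
        devfsadm -i iscsi                                   # create device nodes for the discovered LUNs
        format                                              # the new LUN(s) show up as ordinary disks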

  • Verifying NFS

    Hello all,
    Hopefully someone can help me out, as I don't know Linux at all and need to verify some information.
    I have MARS configured to back up to an NFS server. I recently received a notification from MARS stating that one of the backups failed.
    Because I do not have access to the NFS server (Managed Services is why), what other ways can I verify that the NFS backup is succeeding? Whenever I attempt to use the GUI to pull raw messages from the NFS server it essentially times out.
    Is anything available on the CLI? Any and all help/ideas are appreciated.
    MARS is running 4.2.6 (2458). Yes, old, but the customer absolutely refuses to give us the time of day to upgrade it.

    Also, there is a 'pnexp' command available on the CLI. Even though it is meant to export the 4.x database before importing it into a new 5.x box (Generation 1), it can be used for 'immediate backups', as per the Cisco engineer in this thread:
    http://forum.cisco.com/eforum/servlet/NetProf?page=netprof&forum=Expert%20Archive&topic=Security&topicID=.ee7f99a&CommCmd=MB%3Fcmd%3Dpass_through%26location%3Doutline%40%5E1%40%40.2cbeaa98/18#selected_message
    Once you enter the pnexp CLI
    Do the following:
    export config 10.1.1.1:/archive.
    Then enter the following command:
    pnexp> status
    Data exporting process is currently running, please use command 'log {all|recent}' to view running logs and/or progress.
    pnexp> log
    Aug 25 09:44:47.192 2008@LM_INFO@Thread 1024:Number of events exported: 1227522, ~0.18% completed, overall speed = 30688.05 eps
    Aug 25 09:44:47.192 2008@LM_INFO@Thread 1024:Estimated-Time-To-Complete: 6 hours 18 minutes
    CTRL-C
    As you can see I'm doing a full backup here (config + event/report data etc.); don't run that just to 'test' NFS. It's better to use the 'config' option :)
    Regards
    Farrukh
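    If any other box on that network segment is permitted to mount the export, a quick manual check of the archive is also possible (the server address matches the example above; the mount point is arbitrary):

        showmount -e 10.1.1.1                        # does the MARS archive export show up?
        mkdir -p /mnt/archive
        mount -t nfs 10.1.1.1:/archive /mnt/archive
        ls -lt /mnt/archive                          # recent files/directories mean the archive job is writing
        umount /mnt/archive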

  • ORA-15041: diskgroup

    Hi,
    Db :11.2.0.1
    Os :Aix 6
    We are refreshing a DB from production to development.
    The production DB name is PROD23 and its size is 3TB. We created a pfile with instance name PROD23 on the target (development) machine, with the DB name also PROD23.
    We started the instance and the restore completed. We got an error during recovery:
    channel t4: reading from backup piece q5nfe9jj_1_1
    channel t3: ORA-19870: error while restoring backup piece q3nfe6f3_1_1
    ORA-19504: failed to create file "+P0DEV_ARCHGROUP01/P0DEV/arch/arch_1_20113_774871139.arc"
    ORA-17502: ksfdcre:4 Failed to create file +P0DEV_ARCHGROUP01/P0DEV/arch/arch_1_20113_774871139.arc
    ORA-15041: diskgroup "P0DEV_ARCHGROUP01" space exhausted
    failover to previous backup
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '+P0DEV_DATAGROUP01/P0DEV/system01.dbf'
    There is 2GB of free space available in the archive group. I don't understand why it's throwing an error.
    I checked the space in asmcmd, which shows:
    +P0DEV_ARCHGROUP01*
    ASMCMD> ls -ltr
    WARNING:option 'r' is deprecated for 'ls'
    please use 'reverse'
    Type Redund Striped Time Sys Name
    Y PROD23/
    N PP0DEV/
    Can I delete archive logs from PROD23?
    Usually we complete recovery and then change the DB name from PROD23 to PP0DEV, so that is why I am asking the above question.
    Thanks & Regards,
    VN

    user3266490 wrote:
    Hi,
    Thanks for your reply.
    NAME STATE TYPE TOTAL_MB FREE_MB
    ASM_DISKGROUP01 MOUNTED EXTERN 102400 102341
    PP0DEV_ARCHGROUP01 CONNECTED EXTERN 921600 144857
    PP0DEV_DATAGROUP01 CONNECTED EXTERN 3686400 1000468
    PP0DEV_REDOGROUP01 CONNECTED EXTERN 409600 409486
    The redo log size is 8GB; the DB has 8 redo groups with 2 members each.
    Interesting that your status for the PP0DEV groups is CONNECTED ... using NFS or iSCSI? I typically see MOUNTED.
    What are your settings for DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE? The error stated that you had exhausted space, but if I read this correctly, I see 144857MB (~141GB) free in your ARCHGROUP01; make sure your DEST_SIZE matches the 900GB TOTAL_MB of the archive group.
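    As a quick sanity check before retrying, asmcmd shows both the real headroom per diskgroup and which database's archived logs are consuming it (the paths follow the ls output quoted earlier; run as the ASM/grid owner with the ASM environment set):

        asmcmd lsdg                                  # TOTAL_MB / FREE_MB per diskgroup
        asmcmd ls -l +P0DEV_ARCHGROUP01/PROD23/      # archived logs left over from the old PROD23 name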

  • Sun 7110 + Esx 4.1

    Hi ,
    we're setting up a VMware HA cluster to serve some virtual machines from one Sun 7110 (4.2TB), but iSCSI and NFS performance is really, really slow. We can't go into production with such a slow system. We have tried both NFS and iSCSI, obtaining throughputs inside a Linux VM ranging from 14MB/s up to 100MB/s, while a simple $6k SAS external storage array reaches 220MB/s of write throughput inside a VM.
    Why does Oracle sell the Sun 7110 for virtualization, as written in www.techdirt.com/iti/resources/midsize_vmware_intel.pdf (written by Oracle)?

    Hi Scott,
    Welcome to the VMware Community. I saw the error message posted in the previous thread; I suspect it is an issue with the build version. Can you share the build version of your ESX host?
    Try a rescan of the storage adapter:
    VMware KB: Performing a rescan of the storage on an ESX/ESXi host
    Based on the error message, I found a few KBs below:
    VMware KB: Booting the ESX host fails with this message in the console: Restoring S/W iSCSI volumes
    VMware KB: VMware ESX 4.1 Patch ESX410-201104406-BG: Updates mptsas, mptspi device drivers
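    On ESX 4.1 the rescan suggested above can be done from the service console roughly like this (the adapter name and the array IP are examples; check yours first):

        esxcfg-scsidevs -a        # list the HBAs, including the software iSCSI adapter (often vmhba3x)
        esxcfg-rescan vmhba33     # rescan that adapter
        esxcfg-scsidevs -m        # list the VMFS datastores seen after the rescan
        vmkping 192.168.20.10     # replace with the 7110's data IP; checks the vmkernel network path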

  • Assets not capture in Sub-Ledger

    Hi,
    I would appreciate it if someone could advise on the matter stated below.
    A user created an AUC asset in 2006 for the amount of IDR31,425,739,33. The amount was still IDR31,425,739,33 at the end of 2006 after settlement from CO to assets.
    In year 2007, the following transaction happened to the asset:
    1. IDR19,893,913,959 - Retirmt transfer of prior-yr acquis. frm cap.asset
    2. IDR5,885,387,305 - Retirmt transfer of curr-yr acquis.
    3. IDR11,421,825,372 - Retmt transfer of prior-yr acquis f. AuC, line itm
    4. IDR5,898,507,610 - Settlement from CO to assets
    At end of 2007, the asset balance for the AUC asset is IDR123,120,305.
    On 1.12.2007, the AUC asset was transferred via intra-company transfer (transaction code ABUMN) to an existing asset. The amount for the asset was IDR186,011,000; it was capitalised on 1.12.2008 and depreciated in 12/2008.
    It was mentioned that the user is unable to view the asset transaction details in Asset Explorer.
    Please advise why the final AUC amount and the new asset transfer amount differ, and why the details of the asset are not visible in Asset Explorer.
    In advance, I would like to thank anyone who can advise me on this matter.

    Hi,
    If you change the line item display for Customers/Vendors you can get Segment values.
    Financial Accounting> Accounts Receivable and Accounts Payable> Customer Accounts> Line Items> Display Line Items> Define Additional Fields for Line Item Display
    Add table BSEG and field Segment. Read the IMG activity documentation before you do this.
    Thanks
    VK

  • Table inconsistencies after EHP4 Upgrade

    Hi Experts,
    We're setting up ALE from Production to Development, as well as some manual copying of table content.
    We have completed the ALE setup, but we have an issue with the table content copy.
    Some SAP tables have changed after Development was updated with EHP4.
    To be more specific, we have identified some tables which will need to be exported from Production (content only) via a transport and then imported into Development.
    If we just try to export/import these tables, the transport will most likely fail, and if it doesn't, there may be data inconsistencies.
    If we just copy the data as-is, some fields would be left unpopulated, potentially causing inconsistencies.
    I need your suggestions on the best course of action.

    Hi,
    Are your development and production systems on different support pack levels? If yes, then this is normal and the table structures will differ.
    Why do you need to export tables from production to development?
    Thanks
    Sunny

  • Sun server datacenter migration

    Our company is planning to migrate old Solaris (8/9/10) servers to a new datacenter; we have to physically relocate a few servers after shutting them down...
    How can I make sure everything will come up fine after relocation? What if something goes wrong? What backups should I have, and how do I take them and restore/recover from them? Any other suggestions to help with preparation would be welcome...

    DBA2011 wrote:
    Our company is planning to migrate old Solaris (8/9/10) servers to a new datacenter; we have to physically relocate a few servers after shutting them down...
    Cool.
    How can I make sure everything will come up fine after relocation?
    Not dropping them is good. Moving them very gently is even better. Remembering/marking how to plug them in is good also.
    What if something goes wrong?
    Consider it a challenge.
    What backups should I have, and how do I take them and restore/recover from them?
    You should have this in place anyway. And a DR plan.
    Any other suggestions to help with preparation would be welcome...
    You should probably hire specialist help. And consider whether you should really be buying new servers and migrating onto those, especially if your maintenance has lapsed.
    These answers seem trite, but giving meaningful responses requires a lot more detail about your cost/risk/benefit situation. An NFS or iSCSI NAS solution might be part of your strategy if you have nothing else, but it all really requires planning, and there is probably no way to impart those skills to yourself quickly. You possibly need specialist help.
