Treasury - Mirroring Internal Forex Trades
Hi
I am trying to mirror Internal Forex Forward (using T-code TX31) contracts and I am getting an error message
"Customizing mirror mapping incomplete. Transaction is not mirrored"
Can someone help me by sending the correct configuration steps for this?
Thanks
Kalyan
Hi
There are customizing settings under General Settings → Transaction Management → Distribution of Mirror Transactions; they are easy to locate in SPRO.
Please let me know if this works!
Malolan , SAP Treasury
Similar Messages
-
Hi All,
Request you to share your views on Internal forex trading (Mirror Transaction)
I am working on internal forex trading. When I enter the details of the transaction in T-code TX31 for transaction close and fill in the required details, the system pops up an error message saying:
"Rate/price cannot be requested via datafeed".
There is no online data feed maintained here. Is there any way I can maintain the foreign currency rates manually for internal forex trading and also deactivate the online data feed?
The data is getting updated in the table VTB_MARKET.
Rgds,
Basheer.
Well, it depends: where did you get the forex trading platform you are talking about, and are you sure it is a version that is compatible with the phone?
-
Running forex platform on parallels any info welcome
I am a forex trader and need information on running windows based forex platform on a MacBook Air
I have been using Wine however this is not that good.
I am thinking Parallels looks better
Please let me know if you have experience running Windows based programs on Parallels
Many thanks
Windows programs run in Parallels just as they run in Windows, because they are running in Windows inside Parallels. The exception is that resource-intensive games will run slower because of the demand on shared (OS X and Windows) resources. Less resource-heavy programs like Office, forex platforms, etc. will run fine.
If you are worried about shared resources you could install Windows using Boot Camp Assistant to run Windows natively. This method will allow Windows or OS X to independently use all your computer's resources rather than sharing them as with Parallels. -
Very poor Random IO performance of 24 HDD Mirror-2 CSV on Win2012 R2 (Solved)
We already have 2 separate Win2012 Mirror-2 Storage Clusters (in production) and had noted that performance was very poor.
We were looking to upgrade to Win2012R2 and storage tiering, and wanted to understand why the storage systems were performing so poorly. So we have now built out a new storage cluster, with 2 nodes, 3 JBODs and 24 x 2TB Seagate ES2 SAS drives.
We have run IOMeter to measure IOPs and the numbers are very poor, regardless of the 'geometries' we choose. Our Write IOPs results for a 16KB block (this is our typical DB page size) are:
2 columns, 64KB Interleave = 1043
2 columns, 256KB Interleave = 1176
4 columns, 64KB Interleave = 1424
4 columns, 256KB Interleave = 1688
8 columns, 256KB Interleave = 1677
We only tested Write IO because this should be the slowest/worst-case IOPs value.
We also tested each HDD individually and they range between 270 and 330 IOPs.
By our math, the IOPs values should be in the range of 3000-3600 (24 HDDs x IOPs/HDD, divided by 2 for mirror-2), but the tests are showing values which are barely 50% of this.
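That arithmetic can be sketched quickly (the per-drive figures are the measured 270-330 IOPs range from above; purely illustrative, ignoring controller and interleave effects):

```python
# Back-of-envelope expected write IOPs for a 2-way mirror: each logical
# write costs one physical write on both mirror copies, halving the
# aggregate drive IOPs available for writes.
hdd_count = 24
iops_per_hdd_low, iops_per_hdd_high = 270, 330  # measured per-drive range
mirror_copies = 2

expected_low = hdd_count * iops_per_hdd_low // mirror_copies    # 3240
expected_high = hdd_count * iops_per_hdd_high // mirror_copies  # 3960

measured = 1688  # best observed result (4 columns, 256KB interleave)
print(f"expected: {expected_low}-{expected_high} write IOPs")
print(f"measured: {measured} ({measured / expected_low:.0%} of the low estimate)")
```

With the measured per-drive range the expectation is actually 3240-3960, so the observed 1688 is only about half of even the pessimistic estimate.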
Any assistance/tips would be appreciated.
16KB I/Os never touch the whole RAID stripe, so you see virtually no performance increase (I'm surprised you actually see any). Try using a bigger request size to see whether it makes any difference. You need multiple workers and deep I/O queues.
What are your IOMeter settings for the test, apart from write size?
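The stripe-geometry point can be illustrated with a small sketch (simplified; real Storage Spaces layout details differ): a request only spreads across columns when it spans more than one interleave-sized chunk, so a 16KB write with a 64KB or 256KB interleave lands on a single column no matter how many columns exist.

```python
# How many data columns a single I/O touches in a striped layout.
# A request benefits from multiple spindles only when it crosses
# interleave-chunk boundaries.
def columns_touched(io_size_kb, interleave_kb, columns, offset_kb=0):
    start_chunk = offset_kb // interleave_kb
    end_chunk = (offset_kb + io_size_kb - 1) // interleave_kb
    return min(end_chunk - start_chunk + 1, columns)

print(columns_touched(16, 64, 4))     # 1 -> no striping benefit
print(columns_touched(16, 256, 4))    # 1 -> still one column
print(columns_touched(1024, 256, 4))  # 4 -> full stripe engaged
```

This is why the answer suggests larger request sizes plus multiple workers and deep queues: small I/Os can only use all spindles through concurrency, not through striping.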
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
USER Exits FOR TM_55 Tcode
Hi All,
I am working on an enhancement for the TM_55 transaction, which is for the reversal of interest rates.
I want to do some enhancement when the user presses the Save button.
Can anybody give me any exits, BAdIs, or enhancement points that will trigger when the user clicks the Save button in T-code TM_55?
thanks in advance.
Vinod.
Hi,
There are 2 exits in T-code TM_55:
RFTBCOEX
RFTBCOMO Treasury: Correspondence Monitor
The BAdI list is as below:
FTR_BAV Open TR-TM: BAV regulatory reporting
FTR_CORR_INC_100 CFM: Inbound confirmation via IDoc
FTR_CORR_OUT_100 CFM: Outgoing Confirmation via IDoc
FTR_CUSTOMER_EXTENT Open TR-TM: Enhancements for Customers
FTR_FINANCIAL_OBJECT Open TR-TM: Finance object connector
FTR_FX_INT_EXIT Enhancements for Internal Forex Trading
FTR_HEDGE_MGMT Open TR-TM: Hedge Management
FTR_HEDGE_MGMT_USER User Exit for Hedge Management Subscreen
FTR_MA_LAUNCH Settings for the MiniApp Launcher
FTR_MA_LAUNCH_1STP Control Start Page for MiniApp Launcher (FTR_MA_LAUNCH)
FTR_MA_LAUNCH_CUSTOM Settings for the MiniApp Launcher
FTR_MIRROR_DEALS Open TR-TM: Connection to Mirror Transactions
FTR_PARTNER_ASSIGN Open TR-TM: Partner Assignments
FTR_SE_DEFAULTS Add-In: Default Issue Structure Data for Sec. Transaction
FTR_TRACA_STATREPORT Posting of TR Transactions: Enhancements for Reporting
FTR_TR_EXTENTION Enhancements to TR-TM
FTR_TR_FACILITY Open TR-TM: Connection to Facilities
FTR_TR_GENERIC Open TR-TM: Generic Connection to Transaction Management
FTR_TR_POSMON BADI Position Monitor
FTR_TR_TBB1_EXIT Enhancements of (Operat.) Posting Interface (Obsolete!!!)
IBS_FS_LIST_OPTIONS Manipulate Field List for Display in MiniApp
SMOD_RFTBCOEX CFM: Enhancement of Transaction Confirmations/Dealing Slips
Thanks -
OK I'm about to give up and abandon my prejudice for surround sound. I suppose I can still listen in stereo and at least the kids have the option of using it when I'm not there.
Someone has told me there is a Harman Kardon receiver in Cash Converters for just over £100; no idea what model, and I don't suppose it's the latest, but it does sound quite cheap and is in 'as new' condition, so I don't suppose it will hurt too much.
Not being all that interested in this area I really don't know what I'm buying. Does anyone know whether this is a good brand, or even better, know of any issues with it? Can't say I've heard of it before or seen it mentioned on these forums, unlike Yamaha and Onkyo. So either no one here has this brand or there aren't any reported issues.
I realise without a model number it's difficult to comment on the receiver, but I'm more interested in a comment on the brand at this time. Whilst I haven't had any interest in receivers to date, I'm not completely blind to the technical side and should be able to decide from the specs exactly what it does.
Alley_Cat wrote:
Many high-end audio manufacturers to my mind though have lost out with the digital 'revolution', as reasonable quality goods are available for often a fraction of the cost of similar items 20 years ago.
Just to expand on that, before widespread DVD and home cinema options were available, specialist hi-fi stores could still do well from high-end stereo audio kit.
These days though consumers want kit that is a jack of all trades and the perception of value for money factors highly. 'True hi-fi' products have been marginalised by the home cinema factor, and many hi-fi stores who used to turn their noses up at anything other than pure stereo have had to adopt cheaper AV kit simply to stay afloat.
As with all purchases there's often a law of diminishing returns with hi-fi. The £5000 separates system will be far better than the £500 all-in-one mini system, but not 10x as good.
How do you attract the average consumer to spend a few hundred pounds on a 'quality' DVD player when they're £15 in Argos or Asda - it has to do a heck of a lot more to justify the price tag, whereas as an early adopter I was happy to pay nearly £600 for my first DVD player some 10 years ago. I would be very reluctant to do so now.
High-end manufacturers also sometimes seem completely out of touch with reality in terms of pricing - for example I've just had an invite from the local hi-fi dealer to go to an evening demo of a Naim hard drive based player.
The spec includes 2x400GB mirrored internal drives, USB and network functionality, and undoubtedly very high quality audio circuitry. It can output/playback non-DRM FLAC at up to 24-bit 192kHz, which admittedly AppleTV cannot, but even though I'd love to have one for its sonic prowess, I have to wonder how much better it can sound compared to the AppleTV when it costs £4,500 vs £200! The interface also looks clunky - onboard TFT display - you may be able to connect it to a TV but it looks like it's composite or S-video only! You appear to be able to connect a keyboard/mouse and control it from a PC.
Now I'm sure the Naim device will sound very good, and if it includes a high-end CD player anyway might be more justifiable, but in terms of VFM AppleTV blows it out of the water.
I guess there are plenty of people out there though that have no desire to use a computer and would be happy to pay for a high-end all in one solution.
Sorry to digress - more of a blog type comment!
http://www.naim-audio.com/products/hdx.html
http://crave.cnet.com/8301-1_105-9932321-1.html
http://www.pinkfishmedia.net/forum/showthread.php?t=47322
AC -
Disk caching on host or guest?
OK, this is probably a noob question, but if we have 64GB RAM on our HyperV (2008R2) host, and we are running disk intensive software, do we:
a) Allocate the 'minimum' RAM to the guest, and leave the rest for the host to use for disk caching, or
b) Allocate the maximum RAM to the guest (leaving 1GB for the host), and let the guest use it for disk caching?
Allocating half & half would seem to be a waste as they will probably both end up caching the same data (will they?), but it's not clear whether we're best letting the host or the guest do the caching. Or does it actually matter at all?
I've had a good look around and haven't been able to find any relevant recommendations.
More Info - the 'disk intensive' software is mainly a PostgreSQL server. We'll give that about 8GB for its shared buffers, but it seems to be recommended to use OS disk caching beyond that. There is a 1GB BBWC P420i RAID controller so write caching is performed
on that. Currently, our biggest performance bottleneck seems to be due to uncached reads, so we are increasing the host RAM from 16GB to 64GB (and adding an SSD for index storage), but just want to know whether it's best to increase the guest RAM allocation,
or leave it 'spare' on the host.
With Windows Server 2008 R2 / Hyper-V 2.0 you don't have that many options, as VHD access is not cached by the host. At all... So you'd better allocate more VM memory, as I/O would be cached inside the VM. Windows Server 2012 R2 / Hyper-V 3.0 would give you more caching options, including Read-Only CSV Cache and the Flash-based Write-Back Cache coming with Tiering; SMB access is also extensively cached on both client and server sides. See:
CSV Cache
http://blogs.msdn.com/b/clustering/archive/2013/07/19/10286676.aspx
Write Back Cache
http://technet.microsoft.com/en-us/library/dn387076.aspx
Hyper-V over SMB
http://technet.microsoft.com/en-us/library/jj134187.aspx
So it could be a good idea to upgrade to Windows Server 2012 R2 now :)
You may deploy third-party software to do RAM and flash caching, but you need to think twice as it could be simply dangerous - on an unclean reboot you may lose gigabytes of your transactions...
Hope this helped a bit :)
-
Advice Requested - High Availability WITHOUT Failover Clustering
We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2. My question is: Can we accomplish high availability WITHOUT using failover clustering?
So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover. Here's what I mean:
In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment. In other words, there is at least a domain
controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring. The SQL Server VM on each host has about 75% of the
physical memory resources dedicated to it (for performance reasons). We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
So now, to high availability. The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
(we are using an iSCSI SAN for storage).
BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted. With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
or so only in the event of a major failure, rather than running at 50% ALL the time.
Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability. I guess
I'm looking for validation on my thinking.
So what do you think? What am I missing or forgetting? What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
Thanks in advance for your thoughts!
Udo -
Yes your responses are very helpful.
Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access? Or can that not run on the same physical box as the Hyper-V host? I guess if the physical box goes down
the LUN would go down anyway, huh? Or can I cluster that role (iSCSI target) as well? If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
- Morgan
That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached at the server side). So if you really have decided to use dedicated hardware for storage (maybe you have a reason I don't know...) and if you're fine with your storage being a single point of failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at least does cache I/O on both client and server sides, and you can also use Storage Spaces as its back end (non-clustered), so read "write-back flash cache for cheap". See:
What's new in iSCSI target with Windows Server 2012 R2
http://technet.microsoft.com/en-us/library/dn305893.aspx
Improved optimization to allow disk-level caching
Updated
iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/O’s. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
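The cost of that cache-bypass flag can be felt with a rough illustration, using the POSIX O_SYNC open flag as an analogy for FUA (Force Unit Access): each write must reach stable storage before the call returns, so every I/O pays the full device round trip. (Illustrative only; the real FUA bit is set per SCSI command, not via open flags.)

```python
# Time one 4KB write with and without forced write-through (O_SYNC).
# The buffered write may complete from the OS page cache; the synced
# write must be pushed to the device first - the same penalty that
# always-on FUA imposed on every iSCSI target I/O.
import os
import tempfile
import time

path = os.path.join(tempfile.gettempdir(), "fua_demo.bin")
payload = b"x" * 4096

def timed_write(flags):
    """Open, write one 4KB block, and time just the write() call."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | flags, 0o600)
    t0 = time.perf_counter()
    os.write(fd, payload)
    elapsed = time.perf_counter() - t0
    os.close(fd)
    return elapsed

buffered = timed_write(0)          # may complete from the page cache
synced = timed_write(os.O_SYNC)    # forced through to the device
print(f"buffered: {buffered * 1e6:.0f} us, synced: {synced * 1e6:.0f} us")
os.remove(path)
```

On rotating media the synced write is typically orders of magnitude slower, which is why honoring FUA only when the initiator asks for it can improve performance so noticeably.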
Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there is only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it would require shared storage for the Windows cluster. What for? The scenario was usable when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present, so the scenario is useless: just export your existing shared storage without any Microsoft iSCSI target and you'll be happy. For references see:
MSFT iSCSI Target in HA mode
http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
Cluster MSFT iSCSI Target with SAS back end
http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
Guest VM Cluster Storage Options
http://technet.microsoft.com/en-us/library/dn440540.aspx
Storage options
The following table lists the storage types that you can use to provide shared storage for a guest cluster.
Storage Type
Description
Shared virtual hard disk
New in Windows Server 2012 R2, you can configure multiple virtual machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine can access the virtual hard disk just like servers
would connect to the same LUN in a storage area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual Hard Disk.
Virtual Fibre Channel
Introduced in Windows Server 2012, virtual Fibre Channel enables you to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V
Virtual Fibre Channel Overview.
iSCSI
The iSCSI initiator inside a virtual machine enables you to connect over the network to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the blog post Introduction of iSCSI Target in Windows Server 2012.
Storage requirements depend on the clustered roles that run on the cluster. Most clustered roles use clustered storage, where the storage is available on any cluster node that runs a clustered
role. Examples of clustered storage include Physical Disk resources and Cluster Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For example, you can configure Microsoft SQL Server to use availability groups that replicate
the data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network File System (NFS) shares as data stores that any cluster node can access.
Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
StarWind VSAN [Virtual SAN] for Hyper-V
http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
There are other guys doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
Hope this helped a bit :)
-
File systems available on Windows Server 2012 R2?
What are the supported file systems in Windows Server 2012 R2? I mean the complete list. I know you can create, read and write on FAT32, NTFS and ReFS. What about non-Microsoft file systems, like EXT4 or HFS+? If I create a VM with a Linux OS, will I be able to access the virtual hard disk natively from WS 2012 R2, or will I need a third-party tool, like the one from Paragon? If I have a drive formatted in EXT4 or HFS+, will I be able to access it from Windows without any third-party tool? By access I mean both read and write. I know that on the client OS, Windows 8.1, this is not possible natively, which is why I am asking here; I guess it is very possible for the server OS to have built-in support for accessing those file systems. If Hyper-V has been optimised to run not just Windows VMs but also Linux VMs, it would make sense to me for file systems like those from Linux or OS X to be available via a built-in feature. I have tried to mount the VHD from a Linux VM I created in Hyper-V; Windows Explorer could not read the hard drive.
Installed Paragon ExtFS free. With it loaded, I tried to mount in Windows Explorer an ext4-formatted VHD created on a Linux Hyper-V VM; it failed, and Paragon ExtFS crashed. Uninstalled Paragon ExtFS. The free version was not supported on WS 2012 R2 by Paragon; if Windows has no built-in support for ext4, this means this free software has not messed anything up in the OS, I guess.
Don't mess with third-party kernel-mode file systems, as it's basically begging for trouble: a crash inside them will BSOD the whole system, and third-party FS are typically buggy... because a) FS development for Windows is VERY complex and b) there are very few external adopters, so not that many people actually test them. What you can do, however:
1) Spawn an OS with a supported FS inside a VM and configure loopback connectivity (even over SMB) with your host. So you'll read and write your volume inside the VM and copy content to/from the host.
(I personally use this approach in the reverse direction: my primary OS is Mac OS X, but I read/write NTFS-formatted disks from inside a Windows 7 VM I run on VMware Fusion.)
2) Use a user-mode file system explorer (see sample links below; I'm NOT affiliated with those companies). So you'll copy content from the volume as if through some sort of shell extension.
Crashes in 1) and 2) would not affect the stability of your whole OS.
HFS Explorer for Windows
http://www.heise.de/download/hfsexplorer.html
Ext2Read
http://sourceforge.net/projects/ext2read/
(both are user-land applications, for HFS(+) and EXT2/3/4 respectively)
Hope this helped :)
-
Why does my 10Gb iSCSI setup see such high latency and how can I fix it?
I have an iSCSI server setup with the following configuration:
Dell R510
Perc H700 Raid controller
Windows Server 2012 R2
Intel Ethernet X520 10Gb
12 near line SAS drives
I have tried both StarWind and the built-in Server 2012 iSCSI software but see similar results. I am currently running the latest version of StarWind's free iSCSI server.
I have connected it to an HP 8212 10Gb port, which is also connected via 10Gb to our VMware servers. I have a dedicated VLAN just for iSCSI and have enabled jumbo frames on the VLAN.
I frequently see very high latency on my iSCSI storage - so much so that it can time out or hang VMware. I am not sure why; I can run IOMeter and get some pretty decent results.
I am trying to determine why I see such high latency (100+ ms). It doesn't always happen, but several times throughout the day VMware complains about the latency of the datastore. I have a 10Gb iSCSI connection between the servers, and I wouldn't expect the disks to be able to max that out; the highest I could see when running IOMeter was around 5Gb. I also don't see much load at all on the iSCSI server when I see the high latency. It seems network related, but I am not sure what settings I could check. The 10Gb connection should be plenty, as I said, and it is nowhere near maxed out.
Any thoughts on configuration changes I could make to my VMware environment or network card settings, or any ideas on where I can troubleshoot this? I am not able to find what is causing it. I referenced this document for changes to my iSCSI settings:
http://en.community.dell.com/techcenter/extras/m/white_papers/20403565.aspx
Thank you for your time.
If both the StarWind and MSFT targets show the same numbers, I would guess it's a network configuration issue. Anything higher than 30 ms is a nightmare :( Did you properly tune your network stacks? What numbers (throughput and latency) do you get for raw TCP (NTttcp and iperf are handy to show this)?
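A quick way to separate network latency from storage latency is to measure raw TCP round trips. NTttcp and iperf do this properly; a minimal sketch of the idea (loopback here; in practice you would point HOST at the iSCSI server's address) looks like:

```python
# Minimal TCP round-trip latency probe: an echo server plus a client that
# times many small send/recv round trips and reports median and p99.
import socket
import threading
import time

HOST, PORT, ROUNDS = "127.0.0.1", 15001, 200

def echo_server():
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

with socket.create_connection((HOST, PORT)) as s:
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no batching
    samples = []
    for _ in range(ROUNDS):
        t0 = time.perf_counter()
        s.sendall(b"ping")
        s.recv(4096)
        samples.append((time.perf_counter() - t0) * 1000)  # ms

samples.sort()
print(f"median RTT: {samples[ROUNDS // 2]:.3f} ms, "
      f"p99: {samples[int(ROUNDS * 0.99)]:.3f} ms")
```

If the p99 of raw TCP round trips over the iSCSI VLAN already spikes into tens of milliseconds, the problem is in the network path (NIC offloads, flow control, switch buffering) rather than the disks.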
-
Best practices for setting up virtual servers on Windows Server 2012 R2
I am creating a web server from scratch with Windows Server 2012 R2. I expect to have a host server and then 3 virtual servers: one that runs all of the web apps as a web server, another as a database server, and one for session state. I expect to use Windows Server 2012 R2 for the web server and database server, but Windows 7 for the session state.
I have a SATA2 Intel SROMBSASMR RAID card with battery backup, to which I am attaching a small SSD that I expect to use for session state, and an IBM ServeRAID M1015 SATA3 card running Intel 520 Series SSDs that I expect to use for the web server and database server.
I have some questions. I am considering using the internal USB port with a flash drive to boot the host from, then using two small SSDs in RAID 0 for the web server (the theory being that if something goes wrong, session state is on a different drive), and then 2 more for the database server in a RAID 1 configuration.
please feel free to poke holes in this and tell me of a better way to do it.
I am assuming that having the host running on a slow internal USB drive has no effect on the virtual servers once the host and the virtual servers are booted up?
DCSSR
There are two issues about RAID0:
1) It's not as fast as people think. With a general-purpose file system like NTFS or ReFS (the choice on Windows is limited) you're not going to see any great benefit, as there is very little chance the whole RAID stripe would be updated at the same time (I/Os need to touch all SSDs in the set, so 256KB+ in real life). A web server workload is quite far from sequential reads or writes, so RAID0 is not going to shine here. A log-structured file system (or at least an FS with logging capabilities, think ZFS with ZIL enabled) *will* benefit from SSDs properly assigned in RAID0.
2) RAID0 is dangerous. One lost SSD renders the whole RAID set useless. So unless you build a network RAID1-over-RAID0 (mirroring RAID sets between multiple hosts with a virtual SAN or a synchronous replication solution), you'll be sitting on a time bomb.
Not good :)
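The "time bomb" point is just compounded probability: a stripe without redundancy survives only if every member survives, so the chance of losing the set grows with each drive added. A sketch, with an assumed (purely illustrative) 2% annual failure rate per SSD:

```python
# Probability of losing a RAID0 set within a year: the set fails if ANY
# member fails, so P(loss) = 1 - P(all survive) = 1 - (1 - AFR)^N.
annual_failure_rate = 0.02  # assumed 2% AFR per SSD (illustrative)

def raid0_annual_loss_probability(drives, afr=annual_failure_rate):
    return 1 - (1 - afr) ** drives

for n in (1, 2, 4, 8):
    p = raid0_annual_loss_probability(n)
    print(f"{n} drive(s): {p:.1%} chance of losing the whole set per year")
```

Even at two drives the risk roughly doubles versus a single disk, which is why the advice above is to mirror the stripe across hosts if RAID0 is used at all.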
-
Server 2012 Failover Cluster No Disks available / iSCSI
Hi All,
I am testing out the Failover Clustering on Windows Server 2012 with hopes of winding up with a clustered File Server once I am done.
I am starting with a single node in the cluster for testing purposes; I have connected to this cluster a single iSCSI LUN that is 100GB in size.
When I right-click on Storage -> Disks and then click 'Add Disk', I get "No disks suitable for cluster disks were found."
I get this even if I add a second server to the cluster and connect it to the iSCSI drive as well.
Any ideas?
For testing purposes you'd be better off spawning a set of VMs on a single physical Hyper-V host and using a shared VHDX as the backing clustered storage. That would be both much easier and much faster than what you are doing. Plus it would be trivial to move one of the VMs to another physical
host and the shared VHDX to a CSV on shared storage, and go from test & development to production :) See:
Shared VHDX
http://blogs.technet.com/b/storageserver/archive/2013/11/25/shared-vhdx-files-my-favorite-new-feature-in-windows-server-2012-r2.aspx
Virtual File Server with Shared VHDX
http://www.aidanfinn.com/?p=15145
Guest VM Cluster with Shared VHDX
http://technet.microsoft.com/en-us/library/dn265980.aspx
For a pure iSCSI scenario you may try this step-by-step guide (just skip the StarWind configuration, as you already have shared storage on your SAN). See:
Configuring HA File Server on Windows Server 2012 for SMB NAS
http://www.starwindsoftware.com/configuring-ha-file-server-on-windows-server-2012-for-smb-nas
Hope this helped a bit :)
-
Best Practice for General User File Server HA/Failover
Hi All,
Looking for some general advice or documentation on recommended approaches to file storage. If you were in our position, how would you approach adding more robustness to our setup?
We currently run a single 2012 R2 VM with around 6 TB of user files and data. We deduplicate the volume and use quotas.
We need a solution that provides better redundancy than a single VM: if that VM goes offline, how do we maintain user access to the files?
We use DFS to publish file shares to users and machines.
Solutions I have researched, with potential drawbacks:
Create a guest VM cluster and use a Continuously Available File Share (not SOFS)
- This would leave us without support for deduplication. (We get around 50% savings at the moment, and space is tight.)
Create a second VM, add it as a secondary DFS folder target, and configure replication between the two servers
- Is this the preferred enterprise approach to share availability? How will hosting user shares (documents etc...) cope in a replication environment?
Note: we have run a physical clustered file server in the past with great results, except for the ~5 minutes of downtime when a failover occurs.
Any thoughts on where I should be focusing my efforts?
Thanks
If you care about performance and real failover transparency, then a guest VM cluster is the way to go (compared to DFS, of course). I don't get your point about "no deduplication": you can still use dedupe inside your VM; just make sure you "shrink" the VHDX
from time to time to give space back to the host file system. See:
Using Guest Clustering for High Availability
http://technet.microsoft.com/en-us/library/dn440540.aspx
Super-fast Failovers with VM Guest Clustering in Windows Server 2012 Hyper-V
http://blogs.technet.com/b/keithmayer/archive/2013/03/21/virtual-machine-guest-clustering-with-windows-server-2012-become-a-virtualization-expert-in-20-days-part-14-of-20.aspx
Can't shrink VHDX file after applying deduplication
http://social.technet.microsoft.com/Forums/windowsserver/en-US/533aac39-b08d-4a67-b3d4-e2a90167081b/cant-shrink-vhdx-file-after-applying-deduplication?forum=winserver8gen
Hope this helped :)
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Hi All,
Operating System: Windows 2008 R2 Enterprise SP1
H/w: VMware virtual machine vmx 09
Installed: File server role with "Services for network file system"
The Server for NFS and Client for NFS services are up and running,
but when I try to create an NFS share using the folder properties, the NFS Sharing tab is missing.
I tried to provision an NFS share using Server Manager: "New NFS share folder cannot be created".
I noticed Event ID 1015 from the NFS server: "Server for NFS was unable to validate licensing information at this time, the server will be nonfunctional until this information can be validated".
I tried to create the NFS share using the command line, but that too failed.
Request all to kindly assist me in isolating and fixing this issue.
Thank you so much
Shaji P.K.
The Windows NFS server is weak (especially on 2008 R2), and keeping in mind you run all of this hosted on VMware ESXi anyway, the best thing you can do is get rid of Windows as an NFS server completely and spawn a FreeBSD or Linux VM with a decent and recent version
of Samba.
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts. -
Hyper-V Disk Layout and RAID for Essentials Server 2012 R2 with Exchange
Hi, would a RAID mirror be a good enough configuration on a single server to run both Essentials Server 2012 and Exchange 2013? I'm new to Exchange and looking for input and suggestions.
User load is very light, just 5 users.
Boot disk: a 120 GB SSD on its own controller, running 2012 with Hyper-V installed.
2 x 3 TB in RAID 1 on an LSI 1064e RAID controller for the Essentials Server 2012 VM; disks are fixed-size VHDX, two each.
2 x 2 TB in RAID 1 on an LSI 1064e RAID controller for the 2012 R2 server VM; disks are fixed-size VHDX, two each.
System specs: 2 x AMD Opteron 4122 with 32 GB of RAM, 4 cores to each OS.
Andy A
1) Boot Hyper-V from a cheap SATA disk or even a USB stick (see link below). No point in wasting an SSD for that. Completely agree with Eric. See:
Run Hyper-V from USB Flash
http://technet.microsoft.com/en-us/library/jj733589.aspx
2) Don't use the RAID controllers in RAID mode; rather (as you already have them) stick with HBA mode, passing the disks through as-is, add some more SSDs, and configure Storage Spaces in a RAID10-equivalent mode with the SSDs as a flash cache. See:
Storage Spaces Overview
http://technet.microsoft.com/en-us/library/hh831739.aspx
A single pool touching all spindles will give you better IOPS than creating "islands of storage", which waste performance and are a management headache.
3) Come up with IOPS requirements for your workload (no idea from the above), keeping in mind RAID10 provides ALL its IOPS for reads but only half for writes (because of mirroring). Since a single SATA disk can do maybe 120-150 IOPS and a single SAS disk up to 200 (you
don't provide any model names, so we have to guess), you can calculate how many IOPS your configuration would give in the best- and worst-case scenarios (the write-back cache from above will help, but you should always plan from the WORST case). See the calculator link below.
IOPS Calculator
http://www.wmarow.com/strcalc/
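As a rough back-of-the-envelope alternative to the calculator above, the RAID10 write penalty can be sketched like this (the per-disk IOPS figures and the 70/30 read/write mix are assumptions for the example, not numbers from the thread):

```python
def raid10_effective_iops(disks, iops_per_disk, read_fraction):
    """Front-end IOPS a RAID10 set can sustain.

    Reads are served by every member disk; each front-end write costs
    two back-end I/Os (one per mirror half), i.e. a write penalty of 2.
    """
    raw = disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + 2 * write_fraction)

# Worst case: 4 SATA disks at ~120 IOPS each, 70/30 read/write mix.
print(round(raid10_effective_iops(4, 120, 0.7)))  # 369
# Best case: same spindles, 100% reads, so all raw IOPS are available.
print(round(raid10_effective_iops(4, 120, 1.0)))  # 480
```

The gap between the two numbers is exactly why you size from the worst (write-heavy) case rather than the best one.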
Hope this helped a bit :)