Regarding storage outside JVM

Hi,
I am a novice developer. I am trying to implement a Java application on Tomcat that puts data in a cache and gets it replicated to other cluster nodes. In this case I want to store the data off-heap, i.e. outside the JVM heap. Can anyone suggest how I could implement this? Are there any issues with storing the data outside the JVM? Does it affect cache replication in any way?
Thanks and Regards.
PS

Hi user12216297,
If your goal is to offload data storage from the Tomcat instances, then I would suggest setting up a cache client/cache server environment where the Tomcat instances are "storage disabled" (i.e. started with -Dtangosol.coherence.distributed.localstorage=false) and running a few cache server instances (see $COHERENCE_HOME/bin/cache-server.sh/.cmd).
If you truly want off-heap storage, take a look at the [Partitioned Backing Maps documentation|http://coherence.oracle.com/display/COH35UG/Storage+and+Backing+Map].
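For background, "off-heap" here usually means serializing values into direct NIO buffers, which live in native memory outside the garbage-collected heap (this is the mechanism Coherence's NIO backing maps build on). A minimal, Coherence-free sketch of the idea, with the class and value names invented for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OffHeapSketch {
    // Serialize a value into a direct (off-heap) buffer and read it back.
    static String roundTrip(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        // allocateDirect reserves native memory outside the Java heap,
        // so the bytes are not moved or collected by the GC.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(bytes.length);
        offHeap.put(bytes);
        offHeap.flip();
        byte[] out = new byte[offHeap.remaining()];
        offHeap.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("cached-value")); // prints "cached-value"
    }
}
```

Note that the backing-map choice is local to each node: values are serialized when shipped between nodes either way, so replication itself should not change; only where each node parks its copy does.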
Rob
:Coherence Team:

Similar Messages

  • WLS 6.1 sp2 crash - exception outside JVM

    In the directory wlserver6.1 we get files named hs_err_pidNNN.log for each server
    crash. Those files look like the one presented below. Any suggestion why we get
    those crashes? Maybe memory is too low (512 MB), or the load too high. Could this
    bug be in our application, or is it a WebLogic or JVM fault?
    An unexpected exception has been detected in native code outside the VM.
    Unexpected Signal : EXCEPTION_ACCESS_VIOLATION occurred at PC=0x77fca927
    Function name=RtlFreeHeap
    Library=C:\WINNT\System32\ntdll.dll
    Current Java thread:
         at java.io.FileInputStream.readBytes(Native Method)
         at java.io.FileInputStream.read(FileInputStream.java:183)
         at weblogic.utils.classloaders.FileSource.getBytes(FileSource.java:43)
         at weblogic.utils.classloaders.GenericClassLoader.findLocalClass(GenericClassLoader.java:271)
         at weblogic.utils.classloaders.GenericClassLoader.findClass(GenericClassLoader.java:156)
         at weblogic.servlet.jsp.JspClassLoader.findClass(JspClassLoader.java:36)

    Sounds like a JVM bug; which JDK are you using?
    cheers
    mbg
    "Jerzy Krawczuk" <[email protected]> wrote in message
    news:3ed75842$[email protected]..

  • Query regarding Storage type

    Hi,
    Can we change an existing storage type (e.g. from geo-redundant to locally-redundant storage)?
    If yes, then kindly describe how.
    Regards,
    Arvind

    Hi,
    In addition to "Azure-Amjad"'s reply:
    If you are trying to change the replication type from "GeoRedundant" to "Locally Redundant" through PowerShell, then you may try the command below:
    Set-AzureStorageAccount -StorageAccountName <storageaccountname> -GeoReplicationEnabled $false -Label "disabled geo replication"
    Before you execute the above command, check whether geo-replication is enabled or disabled with the following command:
    Get-AzureStorageAccount -StorageAccountName <storageaccountname>
    Regards,
    Manu Rekhar

  • Regarding Storage Cost Indicators

    Hello PP SAPpers,
    Can anyone guide me through storage cost indicators with a detailed example?
    What is a storage cost indicator?
    Why do we need to use the storage cost indicator?
    What is the purpose of maintaining this configuration (OMI4)?
    What will be the impact after maintaining this configuration?
    What kind of negative impact could be faced after maintaining this configuration?
    Requesting you to brief me with your valuable points.
    Cheers,
    Kumar.S

    Hi,
    The storage cost indicator takes costs into account in proportion to the quantity stored and the unit price. It refers to the average stock value,
    and it is constant for the duration of the stocking-up period. The usual values lie between 15 and 35%.
    Use
    The storage costs percentage is only used for calculating the lot size in the optimizing lot-sizing procedures.
    e.g.
    If you are incurring storage costs for particular items, the system uses this percentage to decide what lot-size quantity is suitable to optimize those storage costs.
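    To make the trade-off concrete: the optimizing lot-sizing procedures balance ordering costs against holding costs, where the holding cost comes from the unit price times the storage cost percentage. A back-of-envelope sketch using the classic EOQ formula (all figures are invented for illustration, not SAP defaults):

```java
public class LotSizeSketch {
    // Classic economic order quantity:
    // lot = sqrt(2 * demand * orderCost / (unitPrice * storagePct))
    static double optimalLotSize(double annualDemand, double orderCost,
                                 double unitPrice, double storagePct) {
        double holdingCostPerUnit = unitPrice * storagePct;
        return Math.sqrt(2 * annualDemand * orderCost / holdingCostPerUnit);
    }

    public static void main(String[] args) {
        // 12,000 units/year, 50 per order, unit price 4, storage cost 25%
        double lot = optimalLotSize(12000, 50.0, 4.0, 0.25);
        System.out.printf("optimal lot size ~ %.0f units%n", lot);
    }
}
```

    A higher storage cost indicator shrinks the optimal lot size, since holding stock becomes relatively more expensive than ordering more often.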
    Also check below link:
    http://help.sap.com/saphelp_40b/helpdata/fr/7d/c27639454011d182b40000e829fbfe/content.htm
    Regards,
    Alok Tiwari
    Edited by: Alok Kumar Tiwari on Feb 29, 2012 3:28 PM

  • Question regarding Applet and JVM

    Hi all!
    I'm working on an applet now and it's been working quite fine, except that when I run the same applet in different tabs of a single browser window, I get an error.
    But if I run the applets in different windows, it's fine.
    So I'd like to know how does JVM handle the execution of applet?
    What is the difference between:
    - how JVM handles multiple applet in different-tab-in-single-browser and
    - how JVM handles multiple applet in different browser?
    Any help is greatly appreciated :)
    Thanks in advance ^^

    Sounds like you're using static fields. Not a good idea in applets because...
    What is the difference between:
    - how JVM handles multiple applet in different-tab-in-single-browser and
    - how JVM handles multiple applet in different browser?
    ...that's entirely up to the browser. Actually, your question's slightly misconstrued. What you should really ask is,
    What is the difference between:
    - how the browser spawns JVMs in different-tab-in-single-browser and
    - how the browser spawns JVMs in different browser?
    Either way, it's out of your hands. Which is why you're going to have to be very careful about using statics: if you use them for state information then another applet can trash them; if you use them for inter-applet communication you might not reach one applet from another.
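    A small, self-contained illustration of the point about statics (plain classes standing in for applets; the names are mine): when the browser loads two applet instances in the same JVM and class loader, a static field is shared between them, while instance fields are not.

```java
public class StaticSharingDemo {
    // Stands in for an applet class loaded once per JVM/classloader.
    static class MiniApplet {
        static int sharedStarts = 0; // one copy for ALL instances
        int myStarts = 0;            // one copy PER instance

        void start() {
            sharedStarts++;
            myStarts++;
        }
    }

    public static void main(String[] args) {
        MiniApplet tab1 = new MiniApplet(); // "applet in tab 1"
        MiniApplet tab2 = new MiniApplet(); // "applet in tab 2"
        tab1.start();
        tab2.start();
        // Each "tab" sees the other's increment through the static field:
        System.out.println("shared=" + MiniApplet.sharedStarts); // 2
        System.out.println("tab1 own=" + tab1.myStarts);         // 1
    }
}
```

    In separate browser processes (or separate JVMs), each applet would get its own copy of `sharedStarts`, which is exactly why behaviour can differ between tabs and windows.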

  • An Adobe Flash Player message regarding storage has popped up and although I can click "deny", it won't go away; how do I stop this?

    Whenever this happens, I have been able to click "deny" and it has gone away. However, when trying to play a game on Facebook, it will not go away and allow me to play the game. How do I get rid of this little message? Looks like I'm being hacked as the message is asking me to allow more storage, thereby gaining access to what's on my computer.
    Thank you!

    hello, please refer to http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager.html#117152

  • Regarding storage on macbook

    As you all know, we can view storage by going to the Apple icon in the top left corner > About This Mac > More Info > Storage. I have included the screenshot.
    As you can see, there is 10.82 GB of movies, 732.74 MB of photos and 29.16 GB of apps.
    My problems -
    1. I have NOT stored any movies on my Mac. How can I find the location of the movies that are showing up here?
    2. I have roughly 60 wallpapers apart from the default wallpapers on my Mac, which I am sure will not be 732 MB. How can I find the location of the photos showing up here?
    3. I have installed exactly 9 apps apart from the default apps. All the apps I have installed are nothing more than utilities and 1 browser. None of the apps is above 200 MB. So why does it show 29.16 GB occupied by apps?
    If any one can help me with this i will be really grateful
    Thanks in advance

    Empty the Trash if you haven't already done so. If you use iPhoto, empty its internal Trash first:
    iPhoto ▹ Empty Trash
    Do the same in other applications, such as Aperture, that have an internal Trash feature.
    When Time Machine backs up a portable Mac, some of the free space will be used to make local snapshots, which are backup copies of recently deleted files. The space occupied by local snapshots is reported as available by the Finder, and should be considered as such. In the Storage display of System Information, local snapshots are shown as Backups. The snapshots are automatically deleted when they expire or when free space falls below a certain level. You ordinarily don't need to, and should not, delete local snapshots yourself. If you followed bad advice to disable local snapshots by running a shell command, you may have ended up with a lot of data in the Other category. Reboot and it should go away.
    See this support article for some simple ways to free up storage space.
    You can more effectively use a tool such as OmniDiskSweeper (ODS) to explore the volume and find out what's taking up the space. You can also delete files with it, but don't do that unless you're sure that you know what you're deleting and that all data is safely backed up. That means you have multiple backups, not just one.
    Deleting files inside an iPhoto or Aperture library will corrupt the library. Any changes to a photo library must be made from within the application that created it. The same goes for Mail files.
    Proceed further only if the problem isn't solved by the above steps.
    ODS can't see the whole filesystem when you run it just by double-clicking; it only sees files that you have permission to read. To see everything, you have to run it as root.
    Back up all data now.
    If you have more than one user account, make sure you're logged in as an administrator. The administrator account is the one that was created automatically when you first set up the computer.
    Install ODS in the Applications folder as usual. Quit it if it's running.
    Triple-click anywhere in the line of text below on this page to select it, then copy the selected text to the Clipboard by pressing the key combination command-C:
    sudo /Applications/OmniDiskSweeper.app/Contents/MacOS/OmniDiskSweeper
    Launch the built-in Terminal application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Terminal in the icon grid.
    Paste into the Terminal window (command-V). You'll be prompted for your login password, which won't be displayed when you type it. You may get a one-time warning to be careful. If you see a message that your username "is not in the sudoers file," then you're not logged in as an administrator.
    The application window will open, eventually showing all files in all folders, sorted by size with the largest at the top. It may take a few minutes for ODS to finish scanning.
    I don't recommend that you make a habit of doing this. Don't delete anything while running ODS as root. If something needs to be deleted, make sure you know what it is and how it got there, and then delete it by other, safer, means. When in doubt, leave it alone or ask for guidance.
    When you're done with ODS, quit it and also quit Terminal.

  • Regarding  storage category in Document creation

    Hi DMS gurus, I am not getting the storage category in CV01N. Please guide me where I am wrong.
    Also, I am unable to define the data carriers.
    Please help me

    Hi,
    To view your storage category in CV01N, you need to check the "Use KPro" button in DC10 for your document type.
    Sandhya..

  • Regarding the Exception JVM INSTR ret 24

    Hi!
    I need help from all of you. I am a fresher and I have been given a project to analyse. In it I found errors at lines containing
    [  JVM INSTR ret 24   ]
    and another error at the line containing
    [   Exception exception;
    exception; ]
    The data inside the brackets is the code.
    I am using Eclipse 3.3 and JDK 1.6.03. Please explain what the solution to this problem is.
    Thanks

    > can anybody help me what this problem is
    The problem is that the code you wrote is not Java.
    > and how to resolve.
    Do one of the following:
    1. Write Java.
    2. Get a compiler/interpreter that takes the code that you are writing and compiles/interprets it.
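    For what it's worth, `JVM INSTR ret 24` is typically how a decompiler (e.g. Jad) renders the old `jsr`/`ret` subroutine bytecodes that javac once emitted for `finally` blocks; it is decompiler output, not valid Java source. The original source probably contained an ordinary try/finally, something like this sketch (names invented):

```java
public class FinallyDemo {
    // A try/finally block; old compilers emitted jsr/ret bytecodes for
    // the finally clause, which decompilers show as "JVM INSTR ret".
    static int readWithCleanup() {
        try {
            return 42; // stand-in for the real work
        } finally {
            // runs even after the return above
            System.out.println("cleanup ran");
        }
    }

    public static void main(String[] args) {
        System.out.println(readWithCleanup());
    }
}
```

    So the fix is not to compile the decompiled text as-is, but to rewrite such fragments back into try/finally (and fragments like `Exception exception; exception;` back into a normal catch block).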

  • What does "other" mean regarding storage

    Hi. I am trying to download the new iOS 7 and I don't have enough space on my hard drive to download it. I checked what is being stored on my phone, and about 4.8 GB is stored under the category "Other". What's "Other"? I want to know what is stored on my phone so I can manage it and delete what I don't want.
    thanks for your help!

    See Here...
    maclife.com/how_remove_other_data_your_iphone
    More Info about ‘Other’ in this Discussion
    https://discussions.apple.com/message/19958116

  • Data storage outside applet?

    I've got an applet which stores quite a lot of information in a series of fields. If I need to redesign this applet, though, I obviously need to remove it from the card and load the new applet, but by doing so I will also remove all of the associated data.
    Has anyone attempted a design whereby the applet and data are separated on the card, so that changing the applet doesn't affect the data? What I mean is, is there an entity on the card where the data can be written to and read by the applet?
    Thanks
    Tony

    Try creating another applet that implements the Shareable interface. When you want to persist the data, pass it to the shareable interface and the data will be owned by that context. Now you can remove your applet and that data will still reside in EEPROM because it belongs to that different context.

  • Regarding storage of messages

    I have a Nokia 5310 with around 1350 messages in my inbox and 470 messages saved. Now I wish to change my handset and also want to have a backup of all these messages. Please advise some technique so I can restore all these messages on my other handset. I have not finalized my other handset yet.

    If your next handset is a Nokia then you'll be able to transfer all your messages using the "Switch" application on it. It'll simply pull all the data off the old phone and store it. Or you can use PC Suite to make a backup on your (Windows) computer and then restore it to the new phone.
    If your next phone is not a Nokia then you will have problems.

  • Privacy Policy and Personal Data Storage

    "By using this product, you consent to the storage of your IM, voicemail, and video message communications as described above...Your instant messaging (IM), voicemail, and video message content (collectively “messages”) may be stored by Skype (a) to convey and synchronize your messages and (b) to enable you to retrieve the messages and history where possible. Depending on the message type, messages are generally stored by Skype for a maximum of between 30 and 90 days unless otherwise permitted or required by law. This storage facilitates delivery of messages when a user is offline and to help sync messages between user devices."
    In regards to the privacy policy above, it seems that even if I delete my chat history from my computer I have already consented to have my "messages" stored by Skype for a maximum of between 30 and 90 days. Also, what does Skype mean by "video message content"? Are these video messages? I see no options to control your message storage outside your computer.

    DITTO. I'm gone too. I suggest everyone read the new terms, in particular:
    3.3 Information Stored on Your Mobile Device
    With your permission, we may collect information stored on your mobile device, such as contacts, photos, or media files. Local law may require that you seek the consent of your contacts to provide their personal information to Spotify, which may use that information for the purposes specified in this Privacy Policy.
    "If you don't agree with the terms of this Privacy Policy, then please don't use the Service." – Spotify
    So, since I can't not give Spotify permission, I'll not be using the "service" anymore. They simply don't need access to my contacts or photos; that is way too invasive and just plain creepy.

  • InternalError: Not enough storage is available to process this command

    Hi,
    I have a program where the user can define some actions with Jython scripts in a setup file. For each of these actions, Java delivers an object to Jython. The complete program works on a standard PC with Win XP or Win NT, or on a dual-core machine also with Win XP, with as many Jython actions as I want. But I get the following failure if I define more than 5 Jython actions on a four-core machine:
    java.lang.InternalError: Not enough storage is available to process this command
    The storage for the JVM on the dual-core machine is much higher than on the four-core machine, but I can't set the storage of the four-core machine.
    Does anyone have an idea, or can anyone help me with this failure?

    Had the pleasure of this one myself, due to (excessive) thread creation by JCIFS during AD logons. This is an out-of-memory error: not in the heap, but in the native memory used by the JVM process. I solved the issue by enabling the /3GB switch on Windows. Alternatively, profile the memory usage of your application to minimize its usage of native memory. Look at these links for more inspiration:
    http://cn.forums.oracle.com/forums/thread.jspa?threadID=1062107
    http://blogs.oracle.com/jrockit/2008/09/how_to_get_almost_3_gb_heap_on_windows.html
    http://forums.sun.com/thread.jspa?threadID=5343135
    http://stackoverflow.com/questions/2640239/java-lang-error-not-enough-storage-is-available-to-process-this-command-when-g
    http://stackoverflow.com/questions/507853/system-error-code-8-not-enough-storage-is-available-to-process-this-command
    http://www.microsoft.com/downloads/details.aspx?familyid=5cfc9b74-97aa-4510-b4b9-b2dc98c8ed8b&displaylang=en
    http://support.microsoft.com/kb/126962/
    Regards,
    Allan
    Edited by: Allan Andersen on 2011-01-03 03:27
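    As a back-of-envelope illustration of how thread stacks alone can eat the native address space of a 32-bit process (all figures are invented; substitute your own -Xmx/-Xss values and thread counts):

```java
public class NativeStackBudget {
    // MB reserved outside the heap just for thread stacks.
    static long stackReservationMb(long threads, long stackKbPerThread) {
        return threads * stackKbPerThread / 1024;
    }

    public static void main(String[] args) {
        long userSpaceMb = 2048; // 32-bit Windows default user address space
        long heapMb      = 1024; // e.g. -Xmx1g
        long stacksMb    = stackReservationMb(1500, 512); // 1500 threads, -Xss512k

        System.out.println("thread stacks: " + stacksMb + " MB");
        System.out.println("left for JIT/native libs: "
                + (userSpaceMb - heapMb - stacksMb) + " MB");
    }
}
```

    With numbers like these, the remaining native headroom is only a few hundred MB, which is why reducing thread counts, shrinking -Xss, or enabling /3GB (raising the user address space to roughly 3 GB) makes the error go away.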

  • Upgrading a 3-node Hyper-V cluster's storage for £10k and getting the most bang for our money.

    Hi all, looking for some discussion and advice on a few questions I have regarding storage for our next cluster upgrade cycle.
    Our current system for a bit of background:
    3x clustered Hyper-V servers running Server 2008 R2 (72 GB RAM, dual CPU, etc.)
    1x Dell MD3220i iSCSI with dual 1GB connections to each server (24x 146GB 15k SAS drives in RAID 10) - Tier 1 storage
    1x Dell MD1200 Expansion Array with 12x 2TB 7.2K drives in RAID 10 - Tier 2 storage, large vm's, files etc...
    ~25 VM's running all manner of workloads, SQL, Exchange, WSUS, Linux web servers etc....
    1x DPM 2012 SP1 Backup server with its own storage.
    Reasons for upgrading:
    Storage throughput is becoming an issue as we only get around 125 MB/s over the dual 1 Gb iSCSI connections to each physical server (I've tried everything under the sun to improve bandwidth, but I suspect the MD3220i RAID is the bottleneck here).
    Backup times for VMs (once every night) are now in the 5-6 hour range.
    Storage performance suffers during backups and large file synchronisations (DPM).
    Tier 1 storage is running out of capacity and we would like to build in more IOPS for future expansion.
    Tier 2 storage is massively underused (6 TB of 12 TB RAID 10 space).
    Migrating to 10 GbE server links.
    Total budget for the upgrade is in the region of £10k so I have to make sure we get absolutely the most bang for our buck.  
    Current Plan:
    Upgrade the cluster to Server 2012 R2
    Install a dual-port 10 GbE NIC team in each server and virtualise cluster, live-migration, VM and management traffic (with QoS of course).
    Purchase a new JBOD SAS array and leverage the new Storage Spaces and SSD caching/tiering capabilities. Use our existing 2 TB drives for capacity and purchase sufficient SSDs to replace the 15k SAS disks.
    On to the questions:
    Is it supported to use Storage Spaces directly connected to a Hyper-V cluster? I have seen that for our setup we are on the verge of requiring a separate SOFS for storage, but the extra costs and complexity are out of our reach (RDMA, extra 10 GbE NICs, etc.).
    When using a storage space in a cluster, I have seen various articles suggesting that each CSV will be active/passive within the cluster, causing redirected IO for all cluster nodes not currently active?
    If CSVs are active/passive, it's suggested that you should have a CSV for each node in your cluster? How, in production, do you balance VMs across 3 CSVs without manually moving them to keep 1/3 of the load on each CSV? Ideally I would like just a single active/active CSV for all VMs to sit on (ease of management, etc.).
    If the CSV is active/active, am I correct in assuming that DPM will back up VMs without causing any redirected IO?
    Will DPM backups of VMs be incremental in terms of data transferred from the cluster to the backup server?
    Thanks in advance for anyone who can be bothered to read through all that and help me out!  I'm sure there are more questions I've forgotten but those will certainly get us started.
    Also lastly, does anyone else have a better suggestion for how we should proceed?
    Thanks

    1) You can use a direct SAS connection with a 3-node cluster of course (4-node, 5-node, etc.). It would be much faster than running with an additional SOFS layer: with SAS fed directly to your Hyper-V cluster nodes, all reads and writes are local, travelling down the SAS fabric; with an SOFS layer added, you have the same amount of I/Os targeting SAS plus Ethernet, with its huge latency compared to SAS, sitting between the requestor and your data on the SAS spindles (I/Os wrapped into SMB-over-TCP-over-IP-over-Ethernet requests at the hypervisor-SOFS layer). The reason SOFS is recommended is that the final SOFS-based solution is cheaper, as SAS-only is a pain to scale beyond basic 2-node configs. Instead of getting SAS switches, adding redundant SAS controllers to every hypervisor node and/or looking for expensive multi-port SAS JBODs, you'll have a pair (at least) of SOFS boxes doing a file-level proxy in front of a SAS-controlled back end. So you'll compromise performance in favour of cost. See:
    http://davidzi.com/windows-server-2012/hyper-v-and-scale-out-file-cluster-home-lab-design/
    The interconnect diagram used in this design would actually scale beyond 2 hosts. But you'll have to get a SAS switch (actually at least two of them for redundancy, as you don't want any component to become a single point of failure, do you?)
    2) With 2012 R2, all I/O from multiple hypervisor nodes is done through the storage fabric (in your case, SAS), and only metadata updates go through the coordinator node over Ethernet. Redirected I/O is used in two cases only: a) no SAS connectivity from the hypervisor node (but Ethernet still present), and b) broken-by-implementation backup software keeping access to the CSV via the snapshot mechanism for too long. In a nutshell: you'll be fine :) See for references:
    http://www.petri.co.il/redirected-io-windows-server-2012r2-cluster-shared-volumes.htm
    http://www.aidanfinn.com/?p=12844
    3) These are independent things. CSV is not active/passive (see 2), so with the interconnection design you'll be using there's virtually no point in having one CSV per hypervisor. There are cases where you'd still do this. For example, if you had both all-flash and combined spindle/flash LUNs and you know for sure you want some VMs to sit on flash and others (not so I/O-hungry) to stay on "spinning rust". Another case is a many-node cluster: with it, multiple nodes basically fight for a single LUN and a lot of time is wasted resolving SCSI reservation conflicts (ODX has no reservation offload like VAAI has, so even if ODX is present it's not going to help). Again, this is a place where SOFS "helps": having an intermediate proxy level turns block I/O into file I/O, triggering SCSI reservation conflicts for only the two SOFS nodes instead of every node in the hypervisor cluster. One more good example is when you have a mix of local I/O (SAS) and Ethernet with a Virtual SAN product. A Virtual SAN runs directly as part of the hypervisor and emulates a high-performance SAN using cheap DAS. To increase performance, it DOES make sense to create a concept of a "local LUN" (and thus "local CSV"), as reads targeting this LUN/CSV are passed down the local storage stack instead of hitting the wire (Ethernet) and going to partner hypervisor nodes to fetch the VM data. See:
    http://www.starwindsoftware.com/starwind-native-san-on-two-physical-servers
    http://www.starwindsoftware.com/sw-configuring-ha-shared-storage-on-scale-out-file-servers
    (feeding basically DAS to Hyper-V and SOFS, avoiding expensive SAS JBODs and SAS spindles). This is the same thing VMware is doing with their VSAN on vSphere. But again, that's NOT your case, so it DOES NOT make sense to keep many CSVs with only 3 nodes present or SOFS possibly used.
    4) DPM is going to put your cluster into redirected mode only for a very short period of time; Microsoft says never (on 2012 R2). See:
    http://technet.microsoft.com/en-us/library/hh758090.aspx
    Direct and Redirect I/O
    Each Hyper-V host has a direct path (direct I/O) to the CSV storage Logical Unit Number (LUN). However, in Windows Server 2008 R2 there are a couple of limitations:
    For some actions, including DPM backup, the CSV coordinator takes control of the volume and uses redirected instead of direct I/O. With redirection, storage operations are no longer through a host’s direct SAN connection, but are instead routed
    through the CSV coordinator. This has a direct impact on performance.
    CSV backup is serialized, so that only one virtual machine on a CSV is backed up at a time.
    In Windows Server 2012, these limitations were removed:
    Redirection is no longer used. 
    CSV backup is now parallel and not serialized.
    5) Yes, VSS and CBT would be used, so the data transfer would be incremental after the first initial "seed" backup. See:
    http://technet.microsoft.com/en-us/library/ff399619.aspx
    http://itsalllegit.wordpress.com/2013/08/05/dpm-2012-sp1-manually-copy-large-volume-to-secondary-dpm-server/
    I'd also look at some other options. There are a few good discussions you may want to read. See:
    http://arstechnica.com/civis/viewtopic.php?f=10&t=1209963
    http://community.spiceworks.com/topic/316868-server-2012-2-node-cluster-without-san
    Good luck :)
    StarWind iSCSI SAN & NAS
