Coherence needed for SOASuite cluster?

Hello Folks,
We are setting up a two-node cluster for Oracle SOA Suite. Could you confirm whether Coherence must be installed along with WebLogic?
Best Regards,
Kartheek

Oracle SOA Suite 11g uses an embedded Coherence cache to coordinate several cluster-wide activities, including composite deployment. See:
https://blogs.oracle.com/ateamsoab2b/entry/coherence_in_soa_suite_11g

Similar Messages

  • Special authorization needed to read a cluster table??

    In one report, I use following coding to read information from RFDT table:
    form get_f110_parm .
      f110id-laufd   = p_laufd.
      f110id-laufi   = p_laufi.
      f110versionpar = space.
      clear:   buktab, fkttab, slktab, sldtab, trctab, usrtab,
               faetab, jobtab, f110v, f110c,  trcopt, f110versionpar.
      import buktab fkttab slktab sldtab trctab usrtab
             faetab jobtab f110c trcopt f110versionpar
             from database rfdt(fb) id f110id
             accepting padding.
    endform.                    " GET_F110_PARM
    I can fill f110c, trcopt and f110versionpar with this program, but there are no entries in tables like fkttab and usrtab.
    Is any authorization needed to read a cluster table?
    Thanks in advance.
    Edited by: Amy Xie on Dec 21, 2010 10:41 AM

    Hello,
    After you run your code, check transaction SU53 to see if any authorization check failed.

  • JVM for OSCAR cluster

    I have access to an OSCAR cluster, and I am wondering whether there is a special JVM I would need for this cluster or whether any Linux JVM would do the job. I'm sort of a complete rookie when it comes to distributed programming, or clusters for that matter, so this might be a stupid question (I don't know).
    Please help, or point me in the right direction please,
    L-

    Nice! (and thanks for the duke, btw, I'll cherish it always). I've been unable to find any 100 Mbps nics for you, but I think there's a box of old token ring cards around here somewhere, maybe we can figure something out there. I don't know if anyone's writing massively parallel Java out there - resources might be a little hard to come by. Maybe we could scale down the initial project to something more like that appointment tracking app that has been giving people around here such trouble?

  • Need for administration server in a cluster

    Hello everybody,
              I'm working on a project where we intend to have a WebLogic cluster with an undetermined number of nodes. When deploying to the production environment, all WebLogic domain configuration is handled using templates and homemade scripts. The WebLogic instances run on different servers, and each of them has all the domain configuration installed, including the applications to deploy. Now, on the initial startup of the system, it seems that none of the cluster nodes will start the first time without the administration server running, even though the msi-config.xml file exists for each node. I get an error indicating that it cannot perform authentication using boot.properties (non-encrypted username/password at this point). If I start the admin server, then all nodes can start and will subsequently start without the admin server running. Is there no way to configure a WebLogic cluster so that it is able to start up the first time without the admin server running?
              In this project, monitoring of each WebLogic server instance is performed by an in-house product. The need for the administration server at first startup complicates things for us. I should mention that the domain configuration works fine when not running in a cluster.
              We are using Weblogic 8.1.4.
              Best regards,
              Anders

    The API is not public.
              This non-RMI object living in JNDI sounds like out-of-band data.
              If this object doesn't contain any state you can bind it from all the
              servers but don't replicate the bindings.
              Hope this helps.
              --- Prasad
              Mario Briggs wrote:
              > Hi,
              > Looks like this question was asked indirectly earlier.
              >
              > I see that when one server in a cluster goes down, the other servers get
              > a 'weblogic.rjvm.PeerGoneException'.
              > Is there a way by which i can subscribe to the
              > 'weblogic.rjvm.PeerGoneEvent' using 'EventRegistrationDef'.
              >
              > I am using 5.1 and looking for a way to solve the issue of WebLogic
              > removing my non-RMI object from all other servers' JNDI trees when the
              > host server goes down.
              >
              > Thanks
              > Mario
              

  • Need for an additional enterprise Oracle license in a cluster

    Hello,
    I would like to know about the oracle licensing (*enterprise 11g*) of the following structure:
    # Cluster of an active-passive operating system with only one active server. The secondary node will take over the services only in case of failure of the primary.
    # A secondary data center with one more node of the cluster and with data replication from the primary storage.
    I know that the cluster itself requires licensing for only one server.
    I wonder whether one more Oracle license is needed for the second structure, where the passive server would only be used if the whole cluster failed.
    Thanks in advance!

    You really, really, really want to have this conversation with Oracle Sales. Nothing anyone says here is in any way authoritative-- if there is ever an audit, you really don't want to explain that some guy on the internet with a playing card logo next to his name said it was OK. The answer may depend on the country you're in, your existing license agreements, your ability to negotiate with Oracle, your desire to buy other Oracle products, how much your sales rep likes you, the phase of the moon, how close your rep is to making their quota, your astrological symbol, and dozens of other factors. In other words, any answer would need to be taken with a few hundred grains of salt.
    You'll want to refer to the Software Investment Guide, specifically the Backup/Failover/Standby/Remote Mirroring section. In general, you can have one failover node so long as you do not fail over for more than 10 calendar days per year and immediately go back to the primary once it is repaired. In general, you would need to fully license the failover data center. But, as I said, this is all subject to negotiation.
    Justin

  • Is Veritas or Sun Cluster needed for RAC in a Solaris environment?

    Is a Veritas or Sun Cluster needed for RAC in a Solaris environment?
    Does anyone know, when OCFS will be available for Solaris?

    You don't need Veritas Cluster File System, but until OCFS comes out for Solaris you need to think about backups. If you haven't got a backup solution that can integrate with RMAN through an SBT device, backups become trickier.
    If you use ASM, then you can take a backup to a "cluster filesystem" (although ASM uses raw partitions, you can think of it as a cluster filesystem) that both nodes can see. But you then need to get these to tape somehow; unless you've got NetBackup et al. that support RMAN and can back up direct to tape, you're stuck.
    Too many people don't think about this. You could create an NFS mount and back up to it from the nodes.

  • Shared data for a cluster

    Hi,
    What is the best way to have some shared data within a cluster? This data needs to be in memory only; no need for DB persistence. But it can be updated (in memory only), and updates should be visible to any of the servers within the cluster.
    For example, I want to create a hashtable to be shared within the cluster, and the hashtable can be modified from time to time.
    One option is to use entity bean, but this data is not really required to persist
    in DB. Is there any other option?
    thanks
    - saurabh

    Hi Saurabh,
    What is the best way to have some shared data
    within a cluster. This data needs to be in memory
    only, no need for DB persistence. But it can be
    updated (in memory only) and updates should be
    visible to any of the servers within the cluster.
    For example - I want to create a hashtable to be
    shared within cluster and the hashtable can be
    modified from time to time.
    Tangosol Coherence does just that:
    :: http://www.tangosol.com/coherence.jsp
    It even supports the same Hashtable API!
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
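    The key idea behind a clustered cache is that the application keeps using a plain Map/Hashtable API while every update is made visible on all nodes. As a rough single-process illustration only (plain Java; the ReplicatedMap class below is invented for this sketch, and real Coherence replicates across JVMs rather than within one process):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy sketch of what a replicated cache does: every put is pushed to all
// peers' local maps, so each "node" sees the same data with local reads.
class ReplicatedMap {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final List<ReplicatedMap> peers = new ArrayList<>();

    void join(ReplicatedMap peer) {      // wire two nodes together
        peers.add(peer);
        peer.peers.add(this);
    }

    void put(String key, String value) { // update locally, then replicate
        local.put(key, value);
        for (ReplicatedMap p : peers) {
            p.local.put(key, value);
        }
    }

    String get(String key) {             // reads are always local
        return local.get(key);
    }
}

public class SharedDataDemo {
    public static void main(String[] args) {
        ReplicatedMap node1 = new ReplicatedMap();
        ReplicatedMap node2 = new ReplicatedMap();
        node1.join(node2);

        node1.put("config", "v1");               // update on node 1...
        System.out.println(node2.get("config")); // ...visible on node 2: v1
    }
}
```

    With the real product you would instead obtain the cache from the cluster (e.g. via Coherence's CacheFactory) and the replication, failover, and locking are handled for you.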

  • Shared disk for weblogic cluster

    Hi Gurus,
    I've googled about the subject but didn't find anything specific.
    Could somebody explain,
    Is it possible/certified to create weblogic domain on a clustered file system (i.e. OCFS2)?
    I would like to build a cluster with common domain home.
    Regards,
    Mikhail


  • Is it possible for a cluster to be set to allow variable data input?

    I posted this question at the bottom of a different thread, but it looks like that one has died out, plus this is really a separate question. So I hope no one minds me starting a new thread here.
    I made the program Convert2Matfile.vi, which I intend to use as a subVI in other programs. The way I want this program to work is: I would like to bundle up any quantity of 1D/2D arrays and/or numeric values and pass them to this tool, which will then convert whatever it is sent to an appropriate variable in a MATLAB file, using the element name as the MATLAB variable name.
    The issue I am having is that currently I have a 1D array, a 2D array, and a numeric value sitting in the cluster definition on the front panel of Convert2Matfile.vi. Because of this, the code expects every cluster to be structured exactly like the one on the front panel. How do I make that code more generic? See the photo below of the broken wire. The error message is as follows:
    You have connected two arrays of different dimensions.
    The dimension of input cluster -> array #2 is 1.
    The dimension of Cluster In -> array 2 is 2.
    So clearly it expects the cluster to be exactly like the one it is looking for. Is there any way to do this more generically so it can accept any type of cluster with any quantity of values, so long as they meet my criteria?
    Attachments:
    conv2mat.vi ‏49 KB
    Convert2Matfile.vi ‏58 KB
    Conv2Matlab Cluster Issue.png ‏33 KB

    Well, it looks like I will likely need to treat this as a variant. Unfortunately for me, I hate dealing with variants in LabVIEW, likely more as a result of a lack of understanding of how to deal with them than anything else. My strategy, I would think, should be as follows: figure out how many items in my variant are either a numeric, a 1D array, or a 2D array. Then go to each of those values one by one. If, say, it is a numeric, get the numerical value, get the label name, and send it over to my convert-to-MATLAB program one by one.
    So that would be the theory behind how I would try to do this. Now how the heck do I actually do anything along those lines? I can't even figure out how to get the number of elements in my variant. I would have thought the "Get Variant Attribute" function would be a great place to start. So I made up the exploreVariant.vi below. See the images of the code plus the front panel after execution. I followed the instructions from the following location:
    http://zone.ni.com/reference/en-XX/help/371361J-01​/lvhowto/retrievingattribsfromvard/
    Add the Get Variant Attribute function to the block diagram.
    Wire variant data to the variant input of the Get Variant Attribute function.
    Leave the name input and the default value input of the Get Variant Attribute function unwired.
    Right-click the names output and select Create»Indicator from the shortcut menu.
    Right-click the values output of the Get Variant Attribute function and select Create»Indicator from the shortcut menu to retrieve the values associated with each attribute.
    Run the VI.
    Even after following the instructions above, nothing shows up in either the names or values column. So how the heck do I actually do what I want to do? I just can't seem to figure out how to index through a variant at all.
    Attachments:
    exploreVariant.vi ‏10 KB
    variantcode.png ‏35 KB
    VariantFrontPanel.png ‏73 KB

  • Maintenance view creation for a cluster table

    Hi friends,
    I have created a maintenance view for the cluster table BSEG and activated it; it is working fine.
    Now, my problem: I can create a maintenance view for multiple tables if the other tables are linked with the primary table using a foreign-key relationship.
    Can anyone tell me whether it is possible to create a maintenance view for a cluster table with multiple tables? I need a table linked with BSEG, which I am not able to get; i.e., when I click on the relationship tab, I am not getting any linked tables for BSEG.
    So can I create a maintenance view with two linked cluster tables? If so, can anyone tell me how to create it?
    SAP says we can create a projection view for cluster and pooled tables, and that we cannot create a database view for cluster and pooled tables, but it does not say that for maintenance views...
    I assume we can do it: since I am able to create a maintenance view with a single cluster table, it should allow me to create one for multiple linked cluster tables. And is it mandatory to maintain a TMG for this maintenance view?
    Regards
    KUMAR

    Yes, you're right: inserting values into a cluster table other than through standard SAP transactions is dangerous.
    But SAP did not mention anywhere that we cannot maintain data for cluster tables using a maintenance view, which it did say for database views: pooled and cluster tables cannot be used in a database view.
    Regards
    Kumar

  • Hyper-V cluster backup causes virtual machine reboots for common Cluster Shared Volume members

    I am having a problem where my VMs reboot while other VMs that share the same CSV are being backed up. I have provided all the information that I have gathered to this point below. If I have missed anything, please let me know.
    My HyperV Cluster configuration:
    5 Node Cluster running 2008R2 Core DataCenter w/SP1. All updates as released by WSUS that will install on a Core installation
    Each Node has 8 NICs configured as follows:
     NIC1 - Management/Campus access (26.x VLAN)
     NIC2 - iSCSI dedicated (22.x VLAN)
     NIC3 - Live Migration (28.x VLAN)
     NIC4 - Heartbeat (20.x VLAN)
     NIC5 - VSwitch (26.x VLAN)
     NIC6 - VSwitch (18.x VLAN)
     NIC7 - VSwitch (27.x VLAN)
     NIC8 - VSwitch (22.x VLAN)
    Following hotfixes additional installed by MS guidance (either while build or when troubleshooting stability issue in Jan 2013)
     KB2531907 - Was installed during original building of cluster
     KB2705759 - Installed during troubleshooting in early Jan2013
     KB2684681 - Installed during troubleshooting in early Jan2013
     KB2685891 - Installed during troubleshooting in early Jan2013
     KB2639032 - Installed during troubleshooting in early Jan2013
    Original cluster build was two hosts with quorum drive. Initial two hosts were HST1 and HST5
    Next host added was HST3, then HST6 and finally HST2.
    NOTE: HST4 hardware was used in different project and HST6 will eventually become HST4
    Validation of cluster comes with warning for following things:
     Updates inconsistent across hosts
      I have tried to manually install "missing" updates and they were not applicable
      Most likely cause is different build times for each machine in cluster
       HST1 and HST5 are both the same level because they were built at same time
       HST3 was not rebuilt from scratch due to time constraints and it actually goes back to Pre-SP1 and has a larger list of updates that others are lacking and hence the inconsistency
       HST6 was built from scratch but has more updates missing than 1 or 5 (10 missing instead of 7)
       HST2 was most recently built and it has the most missing updates (15)
     Storage - List Potential Cluster Disks
      It says there are Persistent Reservations on all 14 of my CSV volumes and thinks they are from another cluster.
      They are removed from the validation set for this reason. These iSCSI volumes/disks were all created new for
      this cluster and have never been a part of any other cluster.
     When I run the Cluster Validation wizard, I get a slew of Event ID 5120 from FailoverClustering. Wording of error:
      Cluster Shared Volume 'Volume12' ('Cluster Disk 13') is no longer available on this node because of
      'STATUS_MEDIA_WRITE_PROTECTED(c00000a2)'. All I/O will temporarily be queued until a path to the
      volume is reestablished.
     Under Storage and Cluster Shared Volumes in Failover Cluster Manager, all disks show online and there is no negative effect from the errors.
    Cluster Shared Volumes
     We have 14 CSVs that are all iSCSI attached to all 5 hosts. They are housed on an HP P4500G2 (LeftHand) SAN.
     I have limited the number of VMs to no more than 7 per CSV as per best practices documentation from HP/Lefthand
     VMs in each CSV are spread out amongst all 5 hosts (as you would expect)
    Backup software we use is BackupChain from BackupChain.com.
    Problem we are having:
     When backup kicks off for a VM, all VMs on the same CSV reboot without warning. This normally happens within seconds of the backup starting.
    What I have done to troubleshoot this:
     We have tried rebalancing our backups
      Originally, I had backup jobs scheduled to kick off on Friday or Saturday evening after 9pm
      2 or 3 hosts would be backing up VMs (Serially; one VM per host at a time) each night.
      I changed my backup scheduled so that of my 90 VMs, only one per CSV is backing up at the same time
       I mapped out my Hosts and CSVs and scheduled my backups to run on week nights where each night, there
       is only one VM backed up per CSV. All VMs can be backed up over 5 nights (there are some VMs that don't
       get backed up). I also staggered the start times for each Host so that only one Host would be starting
       in the same timeframe. There was some overlap for Hosts that had backups that ran longer than 1 hour.
      Testing this new schedule did not fix my problem; it only made it clearer. As each backup timeframe
      started, whichever CSV the first VM to start was on would have all of its VMs reboot and come back up.
     I then thought maybe I was still overloading the network, so I decided to disable all of the scheduled backups
     and run them manually. Kicking off a backup on a single VM will, in most cases, cause the reboot of common
     CSV members.
     Ok, maybe there is something wrong with my backup software.
      Downloaded a Demo of Veeam and installed it onto my cluster.
      Did a test backup of one VM and I had no problems.
      Did a test backup of a second VM and I had the same problem. All VMs on same CSV rebooted
     Ok, it is not my backup software. Apparently it is VSS. I have looked through various websites. The best troubleshooting
     site I have found for VSS in one place is on BackupChain.com (http://backupchain.com/hyper-v-backup/Troubleshooting.html)
     I have tested almost every process on there list and I will lay out results below:
      1. I have rebooted HST6 and problems still persist
      2. When I run VSSADMIN delete shadows /all, I have no shadows to delete on any of my 5 nodes
       When I run VSSADMIN list writers, I have no error messages on any writers on any node...
      3. When I check the listed registry key, I only have the build in MS VSS writer listed (I am using software VSS)
      4. When I run VSSADMIN Resize ShadowStorge command, there is no shadow storage on any node
      5. I have completed the registration and service cycling on HST6 as laid out here and most of the stuff "errors"
       Only a few of the DLL's actually register.
      6. HyperV Integration Services were reconciled when I worked with MS in early January and I have no indication of
       further issue here.
      7. I did not complete the step to delete the Subscriptions because, again, I have no error messages when I list writers
      8. I removed the Veeam software that I had installed to test (it hadn't added any VSS Writer anyway though)
      9. I can't realistically uninstall my HyperV and test VSS
      10. Already have latest SPs and Updates
      11. This is part of step 5, so I already did this. It seems to be a rehash of various other strategies.
     I have used the VSS Troubleshooter that is part of BackupChain (Ctrl-T) and I get the following error:
      ERROR: Selected writer 'Microsoft Hyper-V VSS Writer' is in failed state!
      - Status: 8 (VSS_WS_FAILED_AT_PREPARE_SNAPSHOT)
      - Writer Failure code: 0x800423f0 (<Unknown error code>)
      - Writer ID: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
      - Instance ID: {d55b6934-1c8d-46ab-a43f-4f997f18dc71}
      VSS snapshot creation failed with result: 8000FFFF
    VSS errors in event viewer. Below are representative errors I have received from various Nodes of my cluster:
    I have various of the below spread out over all hosts except for HST6
    Source: VolSnap, Event ID 10, The shadow copy of volume took too long to install
    Source: VolSnap, Event ID 16, The shadow copies of volume x were aborted because volume y, which contains shadow copy storage for this shadow copy, was force dismounted.
    Source: VolSnap, Event ID 27, The shadow copies of volume x were aborted during detection because a critical control file could not be opened.
    I only have one instance of each of these and both of the below are from HST3
    Source: VSS, Event ID 12293, Volume Shadow Copy Service error: Error calling a routine on a Shadow Copy Provider {b5946137-7b9f-4925-af80-51abd60b20d5}. Routine details RevertToSnapshot [hr = 0x80042302, A Volume Shadow Copy Service component encountered an unexpected error.]
    Source: VSS, Event ID 8193, Volume Shadow Copy Service error: Unexpected error calling routine GetOverlappedResult.  hr = 0x80070057, The parameter is incorrect.
    So, basically, everything I have tried has resulted in no success towards solving this problem.
    I would appreciate any assistance that can be provided.
    Thanks,
    Charles J. Palmer
    Wright Flood

    Tim,
    Thanks for the reply. I ran the first two commands and got this:
    Name                                 Role  Metric
    Cluster Network 1                       3   10000
    Cluster Network 2 - HeartBeat           1    1300
    Cluster Network 3 - iSCSI               0   10100
    Cluster Network 4 - LiveMigration       1    1200
    When you look at the properties of each network, this is how I have it configured:
    Cluster Network 1 - Allow cluster network communications on this network and Allow clients to connect through this network (26.x subnet)
    Cluster Network 2 - Allow cluster network communications on this network. New network added while working with Microsoft support last month. (28.x subnet)
    Cluster Network 3 - Do not allow cluster network communications on this network. (22.x subnet)
    Cluster Network 4 - Allow cluster network communications on this network. Existing but not configured to be used by VMs for Live Migration until MS corrected. (20.x subnet)
    Should I modify my metrics further, or are the current values sufficient?
    I worked with an MS support rep because my cluster (once I added the 5th host) stopped being able to live migrate VMs and I had VMs host jumping on startup. It was a mess for a couple of days. They had me add the Heartbeat network as part of the solution
    to my problem. There doesn't seem to be anywhere to configure a network specifically for CSV so I would assume it would use (based on my metrics above) Cluster Network 4 and then Cluster Network 2 for CSV communications and would fail back to the Cluster Network
    1 if both 2 and 4 were down/inaccessible.
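    The preference the poster is assuming (CSV traffic uses the lowest-metric network that allows cluster communication, failing back to the next one) can be sketched with the numbers from the metric listing (plain Java illustration of the selection logic only, not actual cluster code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of the CSV network preference described in the thread: among
// networks that allow cluster communication, the lowest metric wins,
// and the rest form the failback order.
public class CsvNetworkPick {
    record Net(String name, boolean clusterUse, int metric) {}

    static List<Net> preferenceOrder(List<Net> nets) {
        List<Net> eligible = new ArrayList<>();
        for (Net n : nets) {
            if (n.clusterUse()) eligible.add(n); // e.g. the iSCSI net is excluded
        }
        eligible.sort(Comparator.comparingInt(Net::metric));
        return eligible;
    }

    public static void main(String[] args) {
        List<Net> order = preferenceOrder(List.of(
            new Net("Cluster Network 1", true, 10000),
            new Net("Cluster Network 2 - HeartBeat", true, 1300),
            new Net("Cluster Network 3 - iSCSI", false, 10100),
            new Net("Cluster Network 4 - LiveMigration", true, 1200)));
        for (Net n : order) System.out.println(n.name());
        // Cluster Network 4 - LiveMigration
        // Cluster Network 2 - HeartBeat
        // Cluster Network 1
    }
}
```

    Given the metrics above, Cluster Network 4 would carry CSV traffic first, with 2 and then 1 as failbacks, matching the poster's reading.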
    As to the iSCSI getting a second NIC, I would love to but management wants separation of our VMs by subnet and role and hence why I need the 4 VSwitch NICs. I would have to look at adding an additional quad port NIC to my servers and I would be having to
    use half height cards for 2 of my 5 servers for that to work.
    But, on that note, it doesn't appear to actually be a bandwidth issue. I can run a backup for a single VM and get nothing on the network card (It caused the reboots before any real data has even started to pass apparently) and still the problem occurs.
    As to BackupChain, I have been working with the vendor and they are telling me the issue is with VSS. They also say they support CSV. If you go to this page (http://backupchain.com/Hyper-V-Backup-Software.html)
    they say they support CSVs. Their tech support has been very helpful, but unfortunately nothing has fixed the problem.
    What is annoying is that every backup doesn't cause a problem. I have a daily backup of one of our machines that runs fine without initiating any additional reboots. But most every other backup job will trigger the VMs on the common CSV to reboot.
    I understood about the updates, but I had to "prove" it to the MS tech I was on the phone with, hence I brought it up. I understand on the storage as well. Why give a warning for something that is working, though? I think it is just a poor indicator, and the report doesn't explain that.
    At a loss for what else I can do,
    Charles J. Palmer

  • Need for multiple ASM disk groups on a SAN with RAID5??

    Hello all,
    I've successfully installed Clusterware and ASM on a 5-node system. I'm trying to use asmca (11gR2 on RHEL5) to configure the disk groups.
    I have a SAN which was previously used for a 10g ASM RAC setup, so I am reusing the candidate volumes that ASM has found.
    I had noticed on the previous incarnation....that several disk groups had been created, for example:
    ASMCMD> ls
    DATADG/
    INDEXDG/
    LOGDG1/
    LOGDG2/
    LOGDG3/
    LOGDG4/
    RECOVERYDG/
    Now....this is all on a SAN....which basically has two pools of drives set up each in a RAID5 configuration. Pool 1 contains ASM volumes named ASM1 - ASM32. Each of these logical volumes is about 65 GB.
    Pool #2...has ASM33 - ASM48 volumes....each of which is about 16GB in size.
    I used ASM33 from pool#2...by itself to contain my cluster voting disk and OCR.
    My question is: with this type of setup, would creating as many disk groups as listed above really do any good for performance? I was thinking that with all of this on a SAN, with logical volumes on top of a couple of sets of RAID5 disks, would divisions at the disk group level with external redundancy actually achieve anything?
    I was thinking of starting with about half of the ASM1-ASM31 'disks'...to create one large DATADG disk group, which would house all of the database instances data, indexes....etc. I'd keep the remaining large candidate disks as needed for later growth.
    I was going to start with the pool of the smaller disks (except the 1 already dedicated to cluster needs) to basically serve as a decently sized RECOVERYDG...to house logs, flashback area...etc. It appears this pool is separate from pool #1...so, possibly some speed benefits there.
    But really...is there any need to separate the diskgroups, based on a SAN with two pools of RAID5 logical volumes?
    If so, can someone give me some ideas why...links on this info...etc.
    Thank you in advance,
    cayenne

    The best practice is to use two disk groups, one for data and the other for the flash/fast recovery area. There really is no need to have a disk group for each type of file; in fact, the more disks in a disk group (to a point, in what I've seen) the better for performance and space management. However, there are times when multiple disk groups are appropriate (not saying this is one of them, only FYI), such as backup/recovery and life-cycle management. Typically you will still get benefit from double striping, i.e. having a SAN with RAID groups presenting multiple LUNs to ASM, and then having ASM use those LUNs in disk groups. I saw this in my own testing. Start off with a minimum of 4 LUNs per disk group, and add in pairs, as this will provide optimal performance (at least it did in my testing). You should also have a set of standard LUN sizes to present to ASM so things are consistent across your enterprise; the sizing is typically done based on your database size. For example:
    300GB LUN: database > 10TB
    150GB LUN: database 1TB to 10 TB
    50GB LUN: database < 1TB
    As databases grow beyond the threshold the larger LUNs are swapped in and the previous ones are swapped out. With thin provisioning it is a little different since you only need to resize the ASM LUNs. I'd also recommend having at least 2 of each standard sized LUNs ready to go in case you need space in an emergency. Even with capacity management you never know when something just consumes space too quickly.
    ASM is all about space savings, performance, and management :-).
    Hope this helps.
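    The standard-LUN sizing rule above is simple enough to express as a tiny helper. A minimal sketch in Java (the function name and the exact boundary handling at 1 TB and 10 TB are my own assumptions; the thresholds are the ones quoted in the reply):

```java
public class LunSizing {
    // Standard LUN size (GB) for a given database size (GB), per the
    // thresholds quoted above: <1TB -> 50GB, 1-10TB -> 150GB, >10TB -> 300GB.
    // Boundary handling (inclusive at 1TB and 10TB) is an assumption.
    static int standardLunGb(long databaseGb) {
        if (databaseGb < 1_000L) return 50;
        if (databaseGb <= 10_000L) return 150;
        return 300;
    }

    public static void main(String[] args) {
        System.out.println(standardLunGb(500));    // 50
        System.out.println(standardLunGb(5_000));  // 150
        System.out.println(standardLunGb(20_000)); // 300
    }
}
```

    As the reply notes, when a database crosses a threshold you swap in the larger standard LUNs and retire the smaller ones, which is what keeps LUN sizes consistent across the enterprise.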

  • Authorizations for View Cluster

    Hello all,
    I need to maintain authorization for View cluster.
    Example : I have a view cluster say 'VC_TEST' , now I should have an authorization where other employees can only display it.
    Steps which I have followed:
    1. Assigned an authorization object to the views of the view cluster, then created a role for the object S_TABU_DIS and assigned all the activities (Create, Change, Display).
    2. Assigned user to the role
    But still other people can edit or maintain the view cluster.
    So could you please guide me.
    Thanks and regards,
    Anil

    Hi,
    Regarding the authority-check object:
    http://www.techrepublic.com/article/comprehend-the-sap-authorization-concept-with-these-code-samples/5110893
    Ram.

  • "ITAB_NON_NUMERIC_COMPONENT" in the "generation of scrambling programs for non-cluster tables"

    Dear Experts,
    We are experiencing issues in standalone scrambling: we are getting the error "ITAB_NON_NUMERIC_COMPONENT" in the "generation of scrambling programs for non-cluster tables".
    Please help urgently.
    Thanks

    Hello,
    Implement the following SAP Note in the execution system and re-execute the activity 'Generation of scrambling programs for non-cluster tables' to solve this issue:
    SAP NOTE : '1915906 - SAP LT V2 SP06: Dump during program generation'
    Regards,
    Jerrin Francis

  • Event Trace Session missing for Failover Cluster

    I have a failover cluster set up, managed via a Windows domain controller for the failover cluster network. I am troubleshooting a potential issue with the failover cluster, and the recommendation is to go into Event Viewer, Applications and Services Logs, Microsoft, Windows, and look for the failover clustering Diagnostic and Operational logs, and these do not exist.
    It appears they are created by an event trace session associated with Windows failover clustering, but apparently it wasn't created when the cluster was created for some reason. I am wondering how to create the proper event trace session in order to get these additional failover cluster logs created.

    Hello,
    The following forum may be the better one for your needs:
    https://social.technet.microsoft.com/Forums/windowsserver/en-US/home?forum=winserverClustering
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.
