2 node Sun Cluster 3.2, resource groups not failing over.

Hello,
I am currently running two V490s connected to a Sun StorageTek 6540 array. After attempting to install the latest OS patches, the cluster seems nearly destroyed. I backed out the patches, and right now only one node can run the resource groups properly. The other node appears to take over the Veritas disk groups but will not mount them automatically. I have been working on this for over a month; I have learned a lot and fixed a lot of other issues that came up, but the cluster is still not working properly. Here is some output.
bash-3.00# clresourcegroup switch -n coins01 DataWatch-rg
clresourcegroup: (C776397) Request failed because node coins01 is not a potential primary for resource group DataWatch-rg. Ensure that when a zone is intended, it is explicitly specified by using the node:zonename format.
bash-3.00# clresourcegroup switch -z zcoins01 -n coins01 DataWatch-rg
clresourcegroup: (C298182) Cannot use node coins01:zcoins01 because it is not currently in the cluster membership.
clresourcegroup: (C916474) Request failed because none of the specified nodes are usable.
bash-3.00# clresource status
=== Cluster Resources ===
Resource Name        Node Name          State     Status Message
-------------        ---------          -----     --------------
ftp-rs               coins01:zftp01     Offline   Offline
                     coins02:zftp01     Offline   Offline - LogicalHostname offline.
xprcoins             coins01:zcoins01   Offline   Offline
                     coins02:zcoins01   Offline   Offline - LogicalHostname offline.
xprcoins-rs          coins01:zcoins01   Offline   Offline
                     coins02:zcoins01   Offline   Offline - LogicalHostname offline.
DataWatch-hasp-rs    coins01:zcoins01   Offline   Offline
                     coins02:zcoins01   Offline   Offline
BDSarchive-res       coins01:zcoins01   Offline   Offline
                     coins02:zcoins01   Offline   Offline
I am really at a loss here. Any help appreciated.
Thanks

My advice is to open a service call, provided you have a service contract with Oracle. There is much more information required to understand that specific configuration and to analyse the various log files. This is beyond what can be done in this forum.
From your description I guess that you want to fail over a resource group between non-global zones, and the zone coins01:zcoins01 is reported as not being in the cluster membership.
Obviously node coins01 needs to be a cluster member. If it is reported as online and has joined the cluster, then you need to verify if the zone zcoins01 is really properly up and running.
Specifically, you need to verify that it reached the multi-user milestone and that all cluster-related SMF services are running correctly (i.e. verify "svcs -x" in the non-global zone).
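A minimal sketch of those checks, run from the global zone on coins01 (zone name taken from the post; adjust to your configuration):

```shell
# Global zone: is coins01 actually a cluster member?
clnode status

# Is the non-global zone really up and running?
zoneadm list -cv

# Inside the zone: are any SMF services in maintenance, and did it reach multi-user?
zlogin zcoins01 svcs -x
zlogin zcoins01 svcs svc:/milestone/multi-user-server:default
```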
You mention Veritas disk groups. Note that VxVM disk groups are handled at the global cluster level (i.e. in the global zone); a VxVM disk group is not imported for a non-global zone. However, with SUNW.HAStoragePlus you can ensure that file systems on top of VxVM disk groups are mounted into a non-global zone. But again, more information would be required to see how you configured things and why they don't work as you expect.
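To cross-check that, something like the following (resource name from the status output above; the mount point is a made-up example):

```shell
# Which file systems does the HAStoragePlus resource manage, and for which zone?
clresource show -v DataWatch-hasp-rs

# Global zone: is the VxVM disk group actually imported on this node?
vxdg list

# Try the mount by hand to surface the real error message
# (replace /zones/zcoins01/root/data with the configured mount point)
mount /zones/zcoins01/root/data
```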
Regards
Thorsten

Similar Messages

  • Cluster resource 'SQL Server' in Resource Group 'MSSQL' failed.

    Hi All,
    Last week we faced a problem on a SQL Server 2005 cluster.
    The SQL cluster went down with the issue below:
    Event 1069 : Cluster resource 'SQL Server' in Resource Group 'MSSQL' failed.
    Event 19019 : [sqsrvres] CheckServiceAlive: Service is dead
    [sqsrvres] OnlineThread: service stopped while waiting for QP.
    [sqsrvres] OnlineThread: Error 1 bringing resource online
    Could anyone kindly provide a resolution for the above issue?

    I checked Event Viewer. On the Application side, the errors are:
    Event 19019 : [sqsrvres]
    CheckServiceAlive: Service is dead
    [sqsrvres] OnlineThread: service stopped while waiting for QP.
    [sqsrvres] OnlineThread: Error 1 bringing resource online
    System error :
    Event 1069 : Cluster resource 'SQL Server' in Resource
    Group 'MSSQL' failed.
    Before this, there were no errors in Event Viewer.

  • NIC not failing Over in Cluster

    Hi there... I have configured a 2-node cluster with the SoFS role, for VM clustering and HA, using Windows Server 2012 Datacenter. In the current setup each host server has 3 NICs (2 with a default gateway, 192.x.x.x; the 3rd NIC is for the heartbeat, 10.x.x.x). CSV is configured
    (I can also see the shortcut in C:\). I am planning to set up a few VMs pointing to disks on the 2 separate storage servers (1 NIC in 192.x.x.x, and also 2 NICs in the 10.x.x.x network). I am able to install a VM and point its disk to the share on cluster volume 1.
    I have created 2 VM switches for the 2 separate host servers (using Hyper-V Manager). When I test the functionality by taking node 2 down, I can see the disk owner node change to node 1, but VM NIC 2 is not failing over automatically to VM NIC 1 (though I can
    see VM NIC 1 showing up unselected in the VM settings). When I go to the VM settings > Network Adapter, I get this error:
    "An error occurred for resource VM 'VM Name'. Select the 'information details' action to view events for this resource. The network adapter is configured to a switch which no longer exists or a resource
    pool that has been deleted or renamed" (with a configuration error in the "Virtual Switch" drop-down menu).
    Can you please let me know any resolution to fix this issue... Hoping to hear from you.
    VT

    Hi,
    From your description “My another thing I would like to test is...I also would like to bring a disk down (right now, I have 2 disk - CSV and one Quorum disk) for that 2 node
    cluster. I was testing by bringing a csv disk down, the VM didnt failover” Are you trying to test the failover cluster now? If so, please refer the following related KB:
    Test the Failover of a Clustered Service or Application
    http://technet.microsoft.com/en-us/library/cc754577.aspx
    Hope this helps.

  • VIP is not failed over to surviving nodes in oracle 11.2.0.2 grid infra

    Hi ,
    It is an 8-node 11.2.0.2 grid infrastructure.
    When pulling both cables from the public NIC, the VIP does not fail over to the surviving nodes on 2 of the nodes, but on the remaining nodes the VIP does fail over to a surviving node in the same cluster. Please help me with this.
    If we remove power from these servers, the VIP does fail over to the surviving nodes.
    The public NICs are bonded.
    grdoradr105:/apps/grid/grdhome/sh:+ASM5> ./crsstat.sh |grep -i vip |grep -i 101
    ora.grdoradr101.vip ONLINE OFFLINE
    grdoradr101:/apps/grid/grdhome:+ASM1> cat /proc/net/bonding/bond0
    Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
    Bonding Mode: fault-tolerance (active-backup)
    Primary Slave: None
    Currently Active Slave: eth0
    MII Status: up
    MII Polling Interval (ms): 100
    Up Delay (ms): 0
    Down Delay (ms): 0
    Slave Interface: eth0
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 84:2b:2b:51:3f:1e
    Slave Interface: eth1
    MII Status: up
    Speed: 100 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 84:2b:2b:51:3f:20
    Thanks
    Bala

    Please check below MOS note for this issue.
    1276737.1
    HTH
    Edited by: krishan on Jul 28, 2011 2:49 AM
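While working through the MOS note, it can also help to watch the VIP and the bond during the cable-pull test; a rough sketch (node name from the post):

```shell
# Cluster-wide state of the VIP resources
crsctl stat res -t | grep -i vip

# Status and current host of one node's VIP (run as the grid user)
srvctl status vip -n grdoradr101

# OS-level view of the bond while the cables are pulled
watch -n 1 'grep "Currently Active Slave" /proc/net/bonding/bond0'
```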

  • Sun cluster 3.2 - resource hasstorageplus taking too much time to start

    I have a disk resource called "data" that takes too much time to start up when performing a switchover. Any idea what may control this?
    Jan 28 20:28:01 hnmdb02 Cluster.Framework: [ID 801593 daemon.notice] stdout: becoming primary for data
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 350207 daemon.notice] 24 fe_rpc_command: cmd_type(enum):<3>:cmd=<null>:tag=<hnmdb.data.10>: Calling security_clnt_connect(..., host=<hnmdb02>, sec_type {0:WEAK, 1:STRONG, 2:DES} =<0>, ...)
    Jan 28 20:28:02 hnmdb02 Cluster.RGM.rgmd: [ID 316625 daemon.notice] Timeout monitoring on method tag <hnmdb.data.10> has been resumed.
    Jan 28 20:34:57 hnmdb02 Cluster.RGM.rgmd: [ID 515159 daemon.notice] method <hastorageplus_prenet_start> completed successfully for resource <data>, resource group <hnmdb>, node <hnmdb02>, time used: 23% of timeout <1800 seconds>

    heldermo wrote:
    > I have a disk resource called "data" that takes too much time to start up when performing a switchover. Any idea what may control this?

    I'm not sure how this is supposed to be related to Messaging Server. I suggest you ask your question in the Cluster forum:
    http://forums.sun.com/forum.jspa?forumID=842
    Regards,
    Shane.
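For what it's worth, hastorageplus_prenet_start mostly spends its time importing the device group and mounting/checking the file systems, so timing those steps by hand can show where the minutes go. A sketch with hypothetical names:

```shell
# Time the device group import (device group name is hypothetical)
time cldevicegroup switch -n hnmdb02 data-dg

# Time the mount itself; for UFS without logging, fsck can dominate here
time mount /data
```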

  • Cluster Network name resource could not be updated in domain

    Hi,
    I have the following error and need clarification on the below solution in bold:
    Event Id 1206 -
    The computer object associated with the cluster network name resource 'Cluster Name' could not be updated in domain 'xxxxx.xxxx'. The error code was 'Password change'. The cluster identity 'ClusterGroup1$' may lack permissions required to update the object.
    1 - Move CNO back to the Computers Container
    2- Give the Cluster Node Computer Accounts Change Password permission on the CNO
    3 - Take the Cluster Name Resource offline
    4 - Repair Cluster Name Resource
    5 - Bring Cluster Name Resource back online
    Firstly, is there any implication?
    Secondly if ClusterGroup1$ is my CNO (Cluster Name Object) then is my
    cluster node computer account the server nodes in the cluster? E.g. ClusterGroup1$ is the cluster, the nodes are hyperv01 and hyperv02, do I add hyperv01 and hyperv02 to clustergroup1 -- properties -- security?
    Thank you.

    Hi SPD IT,
    If you have pre-staged your cluster object in a specific OU, you needn't move it back to the default Computers container. With your current error information, this is most likely caused by a lack of
    permissions for the cluster node computer accounts, or by some GPO applied to the cluster nodes; please exclude the cluster from any GPO, then test this issue again.
    If it does not work please follow this repair action then monitor this issue:
    1. Open Failover Cluster manager
    2. Select Clustername
    3. Under Cluster Core Resources right click on Cluster Name and select Take offline
    4. Right click on Cluster Name and select repair.
    I’m glad to be of help to you!

  • How to remove a node from 4 node sun cluster 3.1

    Dear All,
    We have four nodes in a cluster.
    Could anyone please guide me on how to remove a single node from a 4-node cluster?
    What procedure and steps do I have to follow?
    Thanks in advance.
    Veera.

    Google is pretty good at finding the right pages in our docs quickly. I tried >how to remove a node Solaris Cluster< and it came up with
    http://docs.sun.com/app/docs/doc/819-2971/gcfso?a=view
    Tim
    ---
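In outline, the procedure in that doc for Sun Cluster 3.1 boils down to something like the following (run from a surviving node; "node4" is a placeholder). Do follow the full doc, since quorum devices and device groups usually need adjusting first:

```shell
# Evacuate all resource groups and device groups from the node being removed
scswitch -S -h node4

# Remove the node from the cluster configuration
scconf -r -h node=node4

# Verify the remaining membership
scstat -n
```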

  • Sun Cluster and EMC AX150 does not work

    Hi All,
    I tried to set up a Sun Cluster 3.2 (2x T2000) with an EMC AX150 storage array. The device paths are highly unstable and break down time and again. I tried installing EMC's PowerPath software, but then the DID devices are not recognized by Sun Cluster.
    Any help will be appreciated.
    P.S.: Does anybody know whether an MSA 1000 will work with Sun Cluster 3.2?
    Regards

    Looking at the EMC link on http://www.sun.com/software/cluster/osp/ it doesn't look like these storage systems have been tested by EMC and so may or may not work.
    You could try using Sun's MPxIO rather than PowerPath. MPxIO is included in Solaris for free. Again, it may or may not work. You are unlikely to get support on either configuration until EMC has done the prerequisite engineering testing to resolve the sort of issues you are currently seeing.
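If you do try MPxIO instead of PowerPath, enabling it on Solaris 10 is roughly this (a sketch only; whether the AX150 behaves with it is exactly the open question here):

```shell
# Enable MPxIO on the FC controller ports (prompts for a reboot)
stmsboot -e

# After the reboot: verify the multipathed LUNs
mpathadm list lu

# Then rebuild the DID namespace so Sun Cluster 3.2 picks up the new paths
cldevice populate
```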
    Regards,
    Tim
    ---

  • Problems with Oracle FailSafe - Primary node not failing over the DB to the

    I am using 11.1.0.7 on a Windows 64-bit OS, with two nodes clustered at the OS level. The cluster is working fine at the Windows level and the shared drive fails over. However, the database does not fail over when the primary node is shut down or restarted.
    The Oracle software is on local drive on each box. The Oracle DB files and Logs are on shared drive.

    Is the database listed in your cluster group that you are failing over?

  • Http cluster servlet not failing over when no answer received from server

              I am using weblogic 510 sp9. I have a weblogic server proxying all requests to
              a weblogic cluster using the httpclusterservlet.
              When I kill the weblogic process servicing my request, I see the next request
              get failed over to the secondary server and all my session information has been
              replicated. In short I see the behavior I expect.
              However, when I either disconnect the primary server from the network or just
              switch this server off, I just get a message back
              to the browser - "unable to connect to servers".
              I don't really understand why the behaviour should be different . I would expect
              both to failover in the same manner. Does the cluster servlet only handle tcp
              reset failures?
              Has anybody else experience this or have any ideas.
              Thanks
              


  • SQL 2005 cluster rejects SQL logins when in failed over state

    When a SQL 2005 SP4 on Windows 2003 server cluster is failed over from Server_A to Server_B, it rejects all SQL Server logins; domain logins are OK. The message is "user is not associated with a trusted server connection", followed by the IP of the
    client. This is error 18452. Does anyone know how to fix this? Logins should work fine from both servers. We think this started just after installing SP4.
    DaveK

    Hello,
    The connection string is good, you're definitely using sql auth.
    LoginMode on Server_B is REG_DWORD 0x00000001 (1)
    LoginMode on Server_A is REG_DWORD 0x00000002 (2)
    Looks like you are on to something. I will schedule another test failover. I assume a 2 is mixed mode? If so, why would SQL allow two different modes on each side of a cluster?
    You definitely have a registry replication issue, or at the very least a registry that isn't in sync across the cluster. This could happen for various reasons, none of which we'll probably find out about now, but nevertheless...
    A good test would be to set it to windows only on Node A, wait a minute and then set it to Windows Auth and see if that replicates the registry setting across nodes correctly - this is actually the windows level and doesn't have anything to do with SQL Server.
    SQL Server reads this value from the registry and it is not stored inside any databases (read, nothing stored in the master database) as such it's a per machine setting. Since it's not set correctly on Node B, when SQL server starts up it correctly reads
    that registry key and acts on it as it should. The culprit isn't SQL Server, it's Windows Clustering.
    Hopefully this makes a little more sense now. You can actually just edit the registry setting to match Node A and fail over to B, everything should work correctly. It doesn't help you with a root cause analysis which definitely needs to be done as who knows
    what else may not be correctly in sync.
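To compare the setting on both nodes without touching SQL Server, you can read it straight from the registry; for a default SQL 2005 instance the value usually lives under a path like this (the instance ID MSSQL.1 may differ on your servers):

```shell
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\MSSQLServer" /v LoginMode
```

A value of 1 means Windows Authentication only; 2 means Mixed Mode. Run it on both nodes and compare, before and after a test failover.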
    Sean Gallardy | Blog |
    Twitter

  • 2012 R2 iSCSI CSV not failing over when storage NICs disabled (no redirected access)

    We have a couple of simple two node Hyper-V clusters. They are fresh installs with 2012R2 (running on Cisco UCS blades).
    They are configured with dedicated NIC for Management, 2x dedicated NICs for storage (using MPIO and NetApp DSM) and then a trunk for VM traffic with virtual adapters for CSV, Live Migration and Heartbeat. Binding orders all set and priorities.
    With storage, we have a 1GB Quorum disk and then a temporary 500GB CSV.
    All is healthy and happy, I can move VMs around, move the CSV around, fail hosts etc and all works fine.
    HOWEVER..... if I disable BOTH of the iSCSI NICs on one of the hosts (the host that currently owns the CSV), then all hell breaks loose. I would have expected the CSV to go into redirected mode and use the connection from the other node. Instead, the CSV disappears
    from FCM temporarily, then comes back and goes red (Offline). It doesn't even try to fail over to the other node. If I manually move it over to the other node, the CSV comes straight back online.
    Watching Disk Manager on both nodes, I can see on the affected host that the volumes do not disappear once it loses the iSCSI connection. I'm pretty sure that with iSCSI disconnected (iscsicpl showing a "reconnecting" state) those disks
    should disappear? But perhaps that is my problem here.
    Is this the expected behavior, or does it sound wrong? If so, any ideas?
    Also - I've noticed that in FCM, my cluster networks all go to a state of showing a red question mark over them with the exception of the management NIC. It feels like the cluster is having a fit and failing to communicate properly once I disable the iSCSI
    NICs.
    Any input greatly appreciated!

    I think I might have found the answer......
    The AD objects for the clusters had been moved from the Computers OU into a newly created OU. I'm suspecting that the cluster node computer objects didn't have perms to the cluster object within that OU and that was causing the issue. I know I've seen cluster
    object issues before when moving to a new OU.
    All has started working again for the moment so I now just need to investigate what permissions I need on the new OU so that I can move the cluster object in.

  • Thin Client connection not failing over

    I'm using the following thin-client connection and the sessions do not fail over. Testing with SQL*Plus, the sessions do fail over. One difference I see between the two connections is that the thin connection shows NONE for failover_method and failover_type, while the SQL*Plus connection shows BASIC for failover_method and SELECT for failover_type.
    Are there any known issues with the thin client? The version is 10.2.0.3.
    jdbc:oracle:thin:@(description=(address_list=(load_balance=YES)(address=(protocol=tcp)(host=crpu306-vip.wm.com)(port=1521))(address=(protocol=tcp)(host=crpu307-vip.wm.com)(port=1521)))(connect_data=(service_name=ocsqat02)(failover_mode=(type=select)(method=basic)(DELAY=5)(RETRIES=180))))

    You have to set (FAILOVER=on) in the JDBC URL as well.
    http://download.oracle.com/docs/cd/B19306_01/network.102/b14212/advcfg.htm#sthref1292
    Example: TAF with Connect-Time Failover and Client Load Balancing
    Implement TAF with connect-time failover and client load balancing for multiple addresses. In the following example, Oracle Net connects randomly to one of the protocol addresses on sales1-server or sales2-server. If the instance fails after the connection, the TAF application fails over to the other node's listener, preserving any SELECT statements in progress.

    sales.us.acme.com=
     (DESCRIPTION=
      (LOAD_BALANCE=on)
      (FAILOVER=on)
      (ADDRESS=
       (PROTOCOL=tcp)
       (HOST=sales1-server)
       (PORT=1521))
      (ADDRESS=
       (PROTOCOL=tcp)
       (HOST=sales2-server)
       (PORT=1521))
      (CONNECT_DATA=
       (SERVICE_NAME=sales.us.acme.com)
       (FAILOVER_MODE=
        (TYPE=select)
        (METHOD=basic))))
    Example: TAF Retrying a Connection
    TAF also provides the ability to automatically retry connecting if the first connection attempt fails with the RETRIES and DELAY parameters. In the following example, Oracle Net tries to reconnect to the listener on sales1-server. If the failover connection fails, Oracle Net waits 15 seconds before trying to reconnect again. Oracle Net attempts to reconnect up to 20 times.sales.us.acme.com=
    (DESCRIPTION=
    (ADDRESS=
    (PROTOCOL=tcp)
    (HOST=sales1-server)
    (PORT=1521))
    (CONNECT_DATA=
    (SERVICE_NAME=sales.us.acme.com)
    *(FAILOVER_MODE=*
    *(TYPE=select)*
    *(METHOD=basic)*
    *(RETRIES=20)*
    *(DELAY=15))))*
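Applied to the URL from the question, that would look roughly like this (hosts and service name as in the original post). Note that the thin driver honours connect-time failover, while the TAF settings in FAILOVER_MODE are an OCI-driver feature, which may be why the thin sessions report failover_method NONE:

```
jdbc:oracle:thin:@(description=(address_list=(load_balance=yes)(failover=on)(address=(protocol=tcp)(host=crpu306-vip.wm.com)(port=1521))(address=(protocol=tcp)(host=crpu307-vip.wm.com)(port=1521)))(connect_data=(service_name=ocsqat02)(failover_mode=(type=select)(method=basic)(delay=5)(retries=180))))
```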

  • Stateful bean not failing over

              I have a cluster of two servers and an Admin server. Both servers are running NT
              4 sp6 and WLS6 sp1.
              When I stop one of the servers, the client doesn't automatically fail over to
              the other server; instead it fails, unable to contact the server that has gone down.
              My bean is configured to have its home clusterable and is a stateful bean. My
              client holds onto the remote interface, and makes calls through this. If Server
              B fails then it should automatically fail over to server A.
              I have tested my multicast address and all seems to be working fine between servers;
              my stateless beans work well, load balancing between servers nicely.
              Does anybody have any ideas, regarding what could be causing the stateful bean
              remote interface not to be providing failover info.
              Also, is it true that you can have only one JMS destination queue/topic per cluster? The
              JMS cluster targeting doesn't work at the moment, so you need to deploy to individual
              servers?
              Thanks
              

    Did you enable stateful session bean replication in the
              weblogic-ejb-jar.xml?
              -- Rob
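For reference, the descriptor entry Rob is asking about looks roughly like this in WLS 6.x (the bean name is a placeholder):

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <ejb-name>MyStatefulBean</ejb-name>
    <stateful-session-descriptor>
      <stateful-session-clustering>
        <home-is-clusterable>true</home-is-clusterable>
        <!-- replicate session state to a secondary server in the cluster -->
        <replication-type>InMemory</replication-type>
      </stateful-session-clustering>
    </stateful-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```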

  • GSLB Zone-Based DNS Payment Gw - Config Active-Active: Not Failing Over

    Hello All:
    Currently having a bit of a problem, have exhausted all resources and brain power dwindling.
    Brief:
    Two geographically diverse sites. Different AS's, different front ends. Migrated from one site with two CSS 11506's to two sites with one 11506 each.
    Flow of connection is as follows:
    Client --> FW Public Destination NAT --> CSS Private content VIP/destination NAT --> server/service --> CSS Source VIP/NAT --> FW Public Source NAT --> client.
    Using the load balancers as DNS servers, authoritative for the zones, due to the requirement for second-level domain DNS load balancing (i.e. xxxx.com AND FQDNs such as www.xxxx.com). Thus, the CSS is configured to respond as authoritative for xxxx.com, www.xxxx.com, postxx.xxxx.com, tmx.xxxx.com, etc., but of course cannot do MX records, so it is also configured with dns-forwarders, which consequently were the original DNS servers for the domains. Those DNS servers have had their zone files changed to reflect that the new DNS servers are in fact the CSSes. Domain records (i.e. NS records in the zone file) and the records at the registrar (i.e. Tucows, which I believe resells .com, .net and .org for NetSol) have been changed to reflect the same. That part of the equation has already been tested and is true to DNS workings. The reason for the forwarders is of course for things such as non-load-balanced domain names, as well as MX records, etc.
    Due to design, which unfortunately cannot be changed, dns-record configuration uses kal-ap, example:
    dns-record a www.xxxx.com 0 111.222.333.444 multiple kal-ap 10.xx.1.xx 254 sticky-enabled weightedrr 10
    So, to explain so we're absolutely clear:
    - 111.222.333.444 is the public address returned to the client.
    - multiple is configured so we return both site addresses for redundancy (unless I'm misunderstanding that configuration option)
    - kal-ap and the 10.xx.1.xx address because due to the configuration we have no other way of knowing the content rule/service is down and to stop advertising the address for said server/rule
    - sticky-enabled because we don't want to lose a payment and have it go through twice or something crazy like that
    - weighterr 10 (and on the other side weightedrr 1) because we want to keep most of the traffic on the site that is closer to where the bulk of the clients are
    So, now, the problem becomes that the clients (i.e. something like an Interac machine, RFID tags...) need to be able to fail over almost instantly to either of the sites should one lose connectivity and/or servers/services. However, this does not happen. The CSS changes its advertisement, and this has been confirmed by running nslookups/digs directly against the CSSes; however, the client does not recognize this and ends up returning a "DNS Error/Page not found".
    I'm thinking this may have something to do with "sticky-enabled" and/or the fact that DNS doesn't necessarily react very well to a TTL of 0.
    Any thoughts... comments... suggestions... experiences???
    Much appreciated in advance for any responses!!!
    Oh... should probably add:
    nslookups to some DNS servers consistently - ALWAYS the same ones - take 3 lookups before getting a reply. Other DNS servers are instant....
    Cheers,
    Ben Shellrude
    Sr. Network Analyst
    MTS AllStream Inc

    Hi Ben,
    if I got your posting right, the CSSes are doing their job and do advertise the correct IP for a DNS query, right?
    If some of your clients are having a problem, this might be related to DNS caching. Some clients cache the DNS response and do not refresh it until they fail or the timeout is gone.
    Even worse, if the request fails you sometimes have to restart the client's DNS daemon so that it requests IP addresses from scratch. I had this issue with some Unix boxes. If I remember correctly, you can configure the DNS behaviour on Unix boxes and forbid them to cache DNS responses.
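One way to see whether a client-side cache is the culprit is to compare the answer (and its TTL) from the CSS with what the client's resolver hands back; a sketch (the resolver address is a placeholder):

```shell
# Ask the CSS directly: authoritative answer with the configured TTL
dig @css-dns-vip www.xxxx.com A +noall +answer

# Ask the client's resolver: a cached entry shows a counted-down TTL
dig @192.0.2.53 www.xxxx.com A +noall +answer
```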
    Kind Regards,
    joerg
