Unable to determine the current application config directory using WL Server 6.0

We have an application that tries to read application-wide settings in a
Startup class. In previous versions of WL Server, we used
ConfigServicesDef to find the WebLogic system home; it was removed from WL
Server 6.0. I tried adding a T3 File service to the domain config, but I encounter a
Java internal error when the code calls getFileSystem() from the
Startup class. I also tried the io example that comes with the WL Server
installation; calling the method from a client class works fine. Does anyone have
an idea about this, or a suggestion for working around it?
Thanks for your input. - Yang
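Not a fix for the getFileSystem() internal error itself, but one workaround that avoids the removed ConfigServicesDef and the T3 file services entirely is to pass the config directory to the server as a -D system property and read the settings with plain java.io from the Startup class. A minimal sketch, assuming a hypothetical property name app.config.dir and file name settings.properties (adjust both to your setup):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Hypothetical helper: call load() from the Startup class instead of using
    // ConfigServicesDef or getFileSystem(). The directory comes from a -D flag,
    // e.g. -Dapp.config.dir=/weblogic/mydomain/config on the server command line.
    public class AppSettingsLoader {

        public static Properties load() throws IOException {
            String dir = System.getProperty("app.config.dir", ".");
            Properties settings = new Properties();
            FileInputStream in = new FileInputStream(dir + "/settings.properties");
            try {
                settings.load(in);
            } finally {
                in.close();
            }
            return settings;
        }

        public static void main(String[] args) throws IOException {
            // Quick sanity check outside the server.
            System.out.println(load());
        }
    }

Because it only uses the standard java.util.Properties API, the same class can be exercised from a plain client test as well as from the server.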

Hi,
I tried starting the server from the console as well as from the command prompt. Here is the command I used:
nohup ./startManagedWebLogic.sh bi_server1 t3://machineIP:portNo > bis1_startup.log &
But each time I got the same exception.
Korandla

Similar Messages

  • Unable to determine the sharedness of /dev/sda, /dev/sdb, /dev/sdc on nodes

    Hello, please, how do I fix this issue? The two scripts ran on the first node but failed on the second node. After running cluvfy, this is the output:
    using cluvfy for storage requirements
    Verifying shared storage accessibility
    Checking shared storage accessibility...
    WARNING:
    Unable to determine the sharedness of /dev/sda on nodes:
    rac2,rac1
    WARNING:
    Unable to determine the sharedness of /dev/sdb on nodes:
    rac2,rac1
    WARNING:
    Unable to determine the sharedness of /dev/sdc on nodes:
    rac2,rac1
    WARNING:
    Unable to determine the sharedness of /dev/sdd on nodes:
    rac2
    Shared storage check failed on nodes "rac2,rac1".
    Verification of shared storage accessibility was unsuccessful on all the nodes.
    using cluvfy for system requirements
    Verifying system requirement
    Checking system requirements for 'crs'...
    Check: Total memory
    Node Name Available Required Comment
    rac2 1011.13MB (1035400KB) 1GB (1048576KB) failed
    rac1 1011.13MB (1035400KB) 1GB (1048576KB) failed
    Result: Total memory check failed.
    Check: Free disk space in "/tmp" dir
    Node Name Available Required Comment
    rac2 1.07GB (1119972KB) 400MB (409600KB) passed
    rac1 1.09GB (1141784KB) 400MB (409600KB) passed
    Result: Free disk space check passed.
    Check: Swap space
    Node Name Available Required Comment
    rac2 2GB (2096472KB) 1.5GB (1572864KB) passed
    rac1 2GB (2096472KB) 1.5GB (1572864KB) passed
    Result: Swap space check passed.
    Check: System architecture
    Node Name Available Required Comment
    rac2 i686 i686 passed
    rac1 i686 i686 passed
    Result: System architecture check passed.
    Check: Kernel version
    Node Name Available Required Comment
    rac2 2.6.18-8.el5 2.6.9 passed
    rac1 2.6.18-8.el5 2.6.9 passed
    Result: Kernel version check passed.
    Check: Package existence for "make-3.81"
    Node Name Status Comment
    rac2 make-3.81-1.1 passed
    rac1 make-3.81-1.1 passed
    Result: Package existence check passed for "make-3.81".
    Check: Package existence for "binutils-2.17.50.0.6"
    Node Name Status Comment
    rac2 binutils-2.17.50.0.6-2.el5 passed
    rac1 binutils-2.17.50.0.6-2.el5 passed
    Result: Package existence check passed for "binutils-2.17.50.0.6".
    Check: Package existence for "gcc-4.1.1"
    Node Name Status Comment
    rac2 gcc-4.1.1-52.el5 passed
    rac1 gcc-4.1.1-52.el5 passed
    Result: Package existence check passed for "gcc-4.1.1".
    Check: Package existence for "libaio-0.3.106"
    Node Name Status Comment
    rac2 libaio-0.3.106-3.2 passed
    rac1 libaio-0.3.106-3.2 passed
    Result: Package existence check passed for "libaio-0.3.106".
    Check: Package existence for "libaio-devel-0.3.106"
    Node Name Status Comment
    rac2 libaio-devel-0.3.106-3.2 passed
    rac1 libaio-devel-0.3.106-3.2 passed
    Result: Package existence check passed for "libaio-devel-0.3.106".
    Check: Package existence for "libstdc++-4.1.1"
    Node Name Status Comment
    rac2 libstdc++-4.1.1-52.el5 passed
    rac1 libstdc++-4.1.1-52.el5 passed
    Result: Package existence check passed for "libstdc++-4.1.1".
    Check: Package existence for "elfutils-libelf-devel-0.125"
    Node Name Status Comment
    rac2 elfutils-libelf-devel-0.125-3.el5 passed
    rac1 elfutils-libelf-devel-0.125-3.el5 passed
    Result: Package existence check passed for "elfutils-libelf-devel-0.125".
    Check: Package existence for "sysstat-7.0.0"
    Node Name Status Comment
    rac2 sysstat-7.0.0-3.el5 passed
    rac1 sysstat-7.0.0-3.el5 passed
    Result: Package existence check passed for "sysstat-7.0.0".
    Check: Package existence for "compat-libstdc++-33-3.2.3"
    Node Name Status Comment
    rac2 missing failed
    rac1 missing failed
    Result: Package existence check failed for "compat-libstdc++-33-3.2.3".
    Check: Package existence for "libgcc-4.1.1"
    Node Name Status Comment
    rac2 libgcc-4.1.1-52.el5 passed
    rac1 libgcc-4.1.1-52.el5 passed
    Result: Package existence check passed for "libgcc-4.1.1".
    Check: Package existence for "libstdc++-devel-4.1.1"
    Node Name Status Comment
    rac2 libstdc++-devel-4.1.1-52.el5 passed
    rac1 libstdc++-devel-4.1.1-52.el5 passed
    Result: Package existence check passed for "libstdc++-devel-4.1.1".
    Check: Package existence for "unixODBC-2.2.11"
    Node Name Status Comment
    rac2 unixODBC-2.2.11-7.1 passed
    rac1 unixODBC-2.2.11-7.1 passed
    Result: Package existence check passed for "unixODBC-2.2.11".
    Check: Package existence for "unixODBC-devel-2.2.11"
    Node Name Status Comment
    rac2 unixODBC-devel-2.2.11-7.1 passed
    rac1 unixODBC-devel-2.2.11-7.1 passed
    Result: Package existence check passed for "unixODBC-devel-2.2.11".
    Check: Package existence for "glibc-2.5-12"
    Node Name Status Comment
    rac2 glibc-2.5-12 passed
    rac1 glibc-2.5-12 passed
    Result: Package existence check passed for "glibc-2.5-12".
    Check: Group existence for "dba"
    Node Name Status Comment
    rac2 exists passed
    rac1 exists passed
    Result: Group existence check passed for "dba".
    Check: Group existence for "oinstall"
    Node Name Status Comment
    rac2 exists passed
    rac1 exists passed
    Result: Group existence check passed for "oinstall".
    Check: User existence for "nobody"
    Node Name Status Comment
    rac2 exists passed
    rac1 exists passed
    Result: User existence check passed for "nobody".
    System requirement failed for 'crs'
    Verification of system requirement was unsuccessful on all the nodes.
    cluvfy check for node connectivity
    Verifying node connectivity
    Checking node connectivity...
    Interface information for node "rac2"
    Interface Name IP Address Subnet Subnet Gateway Default Gateway Hardware Address
    eth0 192.168.2.102 192.168.2.0 0.0.0.0 192.168.2.1 00:0C:29:4D:18:BA
    eth1 192.168.0.102 192.168.0.0 0.0.0.0 192.168.2.1 00:0C:29:4D:18:C4
    Interface information for node "rac1"
    Interface Name IP Address Subnet Subnet Gateway Default Gateway Hardware Address
    eth0 192.168.2.101 192.168.2.0 0.0.0.0 192.168.2.1 00:0C:29:A5:25:87
    eth1 192.168.0.101 192.168.0.0 0.0.0.0 192.168.2.1 00:0C:29:A5:25:91
    Check: Node connectivity of subnet "192.168.2.0"
    Source Destination Connected?
    rac2:eth0 rac1:eth0 yes
    Result: Node connectivity check passed for subnet "192.168.2.0" with node(s) rac2,rac1.
    Check: Node connectivity of subnet "192.168.0.0"
    Source Destination Connected?
    rac2:eth1 rac1:eth1 yes
    Result: Node connectivity check passed for subnet "192.168.0.0" with node(s) rac2,rac1.
    Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect:
    rac2 eth0:192.168.2.102
    rac1 eth0:192.168.2.101
    Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect:
    rac2 eth1:192.168.0.102
    rac1 eth1:192.168.0.101
    WARNING:
    Could not find a suitable set of interfaces for VIPs.
    Result: Node connectivity check passed.
    Verification of node connectivity was successful.
    Verifying administrative privileges
    Checking user equivalence...
    Check: User equivalence for user "oracle"
    Node Name Comment
    rac2 passed
    rac1 passed
    Result: User equivalence check passed for user "oracle".
    Verification of administrative privileges was successful.

    Hi,
    Refer to the *19. Pre-Installation Tasks for Oracle Clusterware* section at the link below:
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi_2.html
    This thread also seems to be the same as your issue:
    Clusterware (Unable to determine the sharedness of) in VMware 2.0x server
    Regards,
    Xaheer

  • The directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles.

    We are in the process of removing a child domain from the forest and are down to two DCs. These are both Server 2008 R2 SP1 servers, one physical and one virtual (the PDC emulator). When I try to remove a DC (not the PDC emulator), I get the following error:
    The operation failed because:
    Active Directory Domain Services could not transfer the remaining data in directory partition DC=DomainDnsZones,DC=mydomain,DC=local to
    Active Directory Domain Controller \\V-Svr03.mydomain.local.
    The directory service is missing mandatory configuration information, and is unable to determine the ownership of floating single-master operation roles.
    I have checked replication with repadmin /showrepl and all connections were successful. The dcdiag /test:kccEvent test on all servers passed.
    Most DCdiag tests are successful. The only failure is on NCSecDesc when running dcdiag /test:NCSecDesc
       Testing server: Default-First-Site\DC1-DEV-OFC
          Starting test: NCSecDesc
             Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have
                Replicating Directory Changes In Filtered Set
             access rights for the naming context:
             DC=ForestDnsZones,DC=hookemup,DC=local
             ......................... DC1-DEV-OFC failed test NCSecDesc
    In researching this I find "If you do not plan to add an RODC to the forest, you can disregard this error."
    We have not successfully run ADprep /rodcPrep, nor do we plan on having any read-only DCs, so I think we can ignore this error. We did try running ADprep /rodcPrep but got an LDAP error, which I can reproduce if that is important.
    Schema and Naming FSMOs are on a DC higher in the forest. RID, PDC, and Infrastructure FSMOs for the child domain are on the Virtual server (PDC).
    Any guidance on where to go from here would be greatly appreciated as I have no more hair on my head to pull.

    Ok... I ran repadmin /showreps /v again and it shows no errors
    C:\>repadmin /showreps /v
    Default-First-Site\DC1-DEV-OFC
    DSA Options: IS_GC
    Site Options: (none)
    DSA object GUID: b294c59f-8b46-4133-89c5-0f30bfd49607
    DSA invocationID: 1054285d-cffe-42b4-8074-e2d44adbb151
    ==== INBOUND NEIGHBORS ======================================
    CN=Configuration,DC=mydomain,DC=local
        Default-First-Site\HESTIA via RPC
            DSA object GUID: b464fde9-29d7-4490-9582-fe9270050d50
            Address: b464fde9-29d7-4490-9582-fe9270050d50._msdcs.mydomain.local
            DSA invocationID: afea3845-9fa8-40a6-a477-84348a206348
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 16381490/OU, 16381490/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\V-SVR03 via RPC
            DSA object GUID: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8
            Address: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8._msdcs.mydomain.local
            DSA invocationID: 45de2c10-ec8b-443d-a645-db4e0a352a23
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 114817/OU, 114817/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\V-SVR01 via RPC
            DSA object GUID: e2f794eb-9658-4bad-b695-3d8c08f46371
            Address: e2f794eb-9658-4bad-b695-3d8c08f46371._msdcs.mydomain.local
            DSA invocationID: 07bb0fe9-bca9-46d1-92ce-308d36da478d
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 66047/OU, 66047/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\ATHENA via RPC
            DSA object GUID: cb00a5b0-6dea-473c-bb42-19356dd9ed36
            Address: cb00a5b0-6dea-473c-bb42-19356dd9ed36._msdcs.mydomain.local
            DSA invocationID: 57313a9c-46a2-4b94-87cc-b3f91d54faed
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 8098197/OU, 8098197/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
    CN=Schema,CN=Configuration,DC=mydomain,DC=local
        Default-First-Site\ATHENA via RPC
            DSA object GUID: cb00a5b0-6dea-473c-bb42-19356dd9ed36
            Address: cb00a5b0-6dea-473c-bb42-19356dd9ed36._msdcs.mydomain.local
            DSA invocationID: 57313a9c-46a2-4b94-87cc-b3f91d54faed
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 8097482/OU, 8097482/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\V-SVR01 via RPC
            DSA object GUID: e2f794eb-9658-4bad-b695-3d8c08f46371
            Address: e2f794eb-9658-4bad-b695-3d8c08f46371._msdcs.mydomain.local
            DSA invocationID: 07bb0fe9-bca9-46d1-92ce-308d36da478d
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 65239/OU, 65239/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\V-SVR03 via RPC
            DSA object GUID: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8
            Address: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8._msdcs.mydomain.local
            DSA invocationID: 45de2c10-ec8b-443d-a645-db4e0a352a23
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 114149/OU, 114149/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\HESTIA via RPC
            DSA object GUID: b464fde9-29d7-4490-9582-fe9270050d50
            Address: b464fde9-29d7-4490-9582-fe9270050d50._msdcs.mydomain.local
            DSA invocationID: afea3845-9fa8-40a6-a477-84348a206348
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 16381373/OU, 16381373/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
    DC=ForestDnsZones,DC=mydomain,DC=local
        Default-First-Site\V-SVR01 via RPC
            DSA object GUID: e2f794eb-9658-4bad-b695-3d8c08f46371
            Address: e2f794eb-9658-4bad-b695-3d8c08f46371._msdcs.mydomain.local
            DSA invocationID: 07bb0fe9-bca9-46d1-92ce-308d36da478d
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 66295/OU, 66295/PU
            Last attempt @ 2012-10-29 13:57:48 was successful.
        Default-First-Site\ATHENA via RPC
            DSA object GUID: cb00a5b0-6dea-473c-bb42-19356dd9ed36
            Address: cb00a5b0-6dea-473c-bb42-19356dd9ed36._msdcs.mydomain.local
            DSA invocationID: 57313a9c-46a2-4b94-87cc-b3f91d54faed
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 8098367/OU, 8098367/PU
            Last attempt @ 2012-10-29 13:58:13 was successful.
        Default-First-Site\V-SVR03 via RPC
            DSA object GUID: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8
            Address: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8._msdcs.mydomain.local
            DSA invocationID: 45de2c10-ec8b-443d-a645-db4e0a352a23
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 115032/OU, 115032/PU
            Last attempt @ 2012-10-29 13:58:25 was successful.
        Default-First-Site\HESTIA via RPC
            DSA object GUID: b464fde9-29d7-4490-9582-fe9270050d50
            Address: b464fde9-29d7-4490-9582-fe9270050d50._msdcs.mydomain.local
            DSA invocationID: afea3845-9fa8-40a6-a477-84348a206348
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 16381653/OU, 16381653/PU
            Last attempt @ 2012-10-29 13:58:34 was successful.
    DC=mySUBdomain,DC=local
        Default-First-Site\V-SVR03 via RPC
            DSA object GUID: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8
            Address: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8._msdcs.mydomain.local
            DSA invocationID: 45de2c10-ec8b-443d-a645-db4e0a352a23
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 114871/OU, 114871/PU
            Last attempt @ 2012-10-29 13:54:02 was successful.
    DC=DomainDnsZones,DC=mySUBdomain,DC=local
        Default-First-Site\V-SVR03 via RPC
            DSA object GUID: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8
            Address: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8._msdcs.mydomain.local
            DSA invocationID: 45de2c10-ec8b-443d-a645-db4e0a352a23
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS WRITEABLE
            USNs: 114017/OU, 114017/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
    DC=mydomain,DC=local
        Default-First-Site\V-SVR03 via RPC
            DSA object GUID: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8
            Address: 53018cc4-b8c9-48ce-9a54-1b987e7b08c8._msdcs.mydomain.local
            DSA invocationID: 45de2c10-ec8b-443d-a645-db4e0a352a23
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS
            USNs: 114017/OU, 114017/PU
            Last attempt @ 2012-10-29 13:52:39 was successful.
        Default-First-Site\HESTIA via RPC
            DSA object GUID: b464fde9-29d7-4490-9582-fe9270050d50
            Address: b464fde9-29d7-4490-9582-fe9270050d50._msdcs.mydomain.local
            DSA invocationID: afea3845-9fa8-40a6-a477-84348a206348
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS
            USNs: 16381614/OU, 16381614/PU
            Last attempt @ 2012-10-29 13:56:52 was successful.
        Default-First-Site\V-SVR01 via RPC
            DSA object GUID: e2f794eb-9658-4bad-b695-3d8c08f46371
            Address: e2f794eb-9658-4bad-b695-3d8c08f46371._msdcs.mydomain.local
            DSA invocationID: 07bb0fe9-bca9-46d1-92ce-308d36da478d
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS
            USNs: 66325/OU, 66325/PU
            Last attempt @ 2012-10-29 13:58:34 was successful.
        Default-First-Site\ATHENA via RPC
            DSA object GUID: cb00a5b0-6dea-473c-bb42-19356dd9ed36
            Address: cb00a5b0-6dea-473c-bb42-19356dd9ed36._msdcs.mydomain.local
            DSA invocationID: 57313a9c-46a2-4b94-87cc-b3f91d54faed
            SYNC_ON_STARTUP DO_SCHEDULED_SYNCS
            USNs: 8098385/OU, 8098385/PU
            Last attempt @ 2012-10-29 13:58:38 was successful.

  • Unable to determine the install root path for the LabVIEW Runtime Engine

    Hi,
    I have an issue using a LabVIEW interop assembly in a .NET application. I get the exception "Unable to determine the install root path for the LabVIEW Runtime Engine" when calling the assembly.
    The little test program is attached below. It's called dotNETHost.exe. If you execute the program, a dialog with a button appears. Clicking the button should open another dialog (the LabVIEW Interop component). But the only thing I get is the exception message. The ZIP folder also contains the complete exception message (ExceptionMessange.jpg & ExceptionDetails.txt).
    The Interop Assembly was built with LabVIEW 2011. We use Visual Studio 2010 and .NET 4.0. The dotNETHost.exe.config file is prepared as mentioned in Knowledge base - Loading .NET 4.0 assemblies.
    The Interop assembly contains only one simple dialog (the loop is finished by clicking OK) and does not call any other VIs or DLLs. Because of this, there is also no support directory generated by the build process.
    I have no idea why it doesn't work. I hope someone can help me.
    Thanks in advance
    Kay
    Attachments:
    Debug.zip ‏75 KB

    This may be unrelated, but LabVIEW and .NET 4.0 don't work well together. Not yet, anyway. I had to compile my assembly for .NET 3.5 to get it to work.
    Please read the following:
    http://zone.ni.com/reference/en-XX/help/371361H-01/lvhowto/configuring_clr_version/
    http://digital.ni.com/public.nsf/allkb/32B0BA28A72AA87D8625782600737DE9
    http://digital.ni.com/public.nsf/allkb/2030D78CFB2F0ADA86257718006362A3?OpenDocument
    Beginner? Try LabVIEW Basics
    Sharing bits of code? Try Snippets or LAVA Code Capture Tool
    Have you tried Quick Drop?, Visit QD Community.

  • [SOLVED] ERROR: Unable to determine the file system type of /dev/root:

    :: Running Hook [udev]
    :: Triggering uevents...done
    Root device '804' doesn't exist.
    Creating root device /dev/root with major 8 and minor 4.
    error: /dev/root: No such device or address
    ERROR: Unable to determine the file system type of /dev/root:
    Either it contains no filesystem, an unknown filesystem,
    or more than one valid file system signature was found.
    Try adding
    rootfstype=your_filesystem_type
    to the kernel command line.
    You are now being dropped into an emergency shell.
    /bin/sh: can't access tty; job control turned off
    [ramfs /]# [ 1.376738] Refined TSC clocksource calibration: 3013.000 MHz.
    [ 1.376775] Switching to clocksource tsc
    That's what I get when I boot my Arch system. It worked fine for quite a while, but suddenly it ran into an error where the SCSI driver module was corrupt. I fixed that by reinstalling util-linux-ng and kernel26; however, I now run into this issue. http://www.pastie.org/2163181 < Link to /var/log/pacman.log for the month of July, just in case. Yes, I bought a new ATI/AMD Radeon HD 5450 this Saturday, but it seemed to work fine until this broke, and it works fine on Ubuntu too, so I suppose we can rule it out.
    Last edited by SgrA (2011-07-05 20:45:36)

    Autodetection failed on your first image, in both your previous kernel installs:
    [2011-07-04 16:14] find: `/sys/devices': No such file or directory
    Which means that sysfs was not mounted. You should be able to boot from the fallback image, which does not use autodetect. Figure out why /sys isn't mounted, as well, and fix that.
    This is also a somewhat crappy bug in mkinitcpio that lets you create an autodetect image that's useless. It'll be fixed in the next version of mkinitcpio that makes it to core.
    Last edited by falconindy (2011-07-04 17:41:19)

  • Determining the current viewId

    Hi, I need to determine the current view id, therefore I am using the following code:
    /* adf-settings.xml */
    <adf-settings xmlns="http://xmlns.oracle.com/adf/settings">
      <adfc-controller-config xmlns="http://xmlns.oracle.com/adf/controller/config">
        <lifecycle>
          <phase-listener>
            <listener-id>myPagePhaseListener</listener-id>
            <class>obasi.common.domain.myPagePhaseListener</class>
          </phase-listener>
        </lifecycle>
      </adfc-controller-config>
    </adf-settings>
    /* myPagePhaseListener */
    public void beforePhase(PagePhaseEvent pagePhaseEvent) {
        try {
            if (pagePhaseEvent.getPhaseId() == Lifecycle.PREPARE_MODEL_ID) {
                // Print the view id once the page's RichDocument is in the component tree.
                for (Object child : FacesContext.getCurrentInstance().getViewRoot().getChildren()) {
                    if (child instanceof RichDocument) {
                        System.out.println(FacesContext.getCurrentInstance().getViewRoot().getViewId());
                    }
                }
            }
        } catch (Exception ex) {
            // ignore
        }
    }
    But this doesn't seem to work accurately. Some viewIds are only "detected" after leaving a page...
    I noticed that this happens when the URL inside my browser also remains pointing to the previous page.
    Is there a way to solve this?
    Thanks in advance,
    Charles.
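    A smaller sketch that may help here: instead of walking the JSF component tree for a RichDocument, you can ask the ADF Controller for the current view port's view id. This assumes the oracle.adf.controller API (ADF 11g) is on the classpath; treat it as a sketch rather than a guaranteed fix for the timing issue described above:

        import oracle.adf.controller.ControllerContext;
        import oracle.adf.controller.ViewPortContext;

        // Sketch only: relies on the ADF Controller API being available at runtime.
        public final class CurrentViewIdUtil {

            private CurrentViewIdUtil() {
            }

            // Returns the view id of the current ADF view port, or null if none is active.
            public static String currentViewId() {
                ControllerContext cc = ControllerContext.getInstance();
                if (cc == null) {
                    return null;
                }
                ViewPortContext viewPort = cc.getCurrentViewPort();
                return (viewPort != null) ? viewPort.getViewId() : null;
            }
        }

    Calling currentViewId() from the PREPARE_MODEL branch of the phase listener would report the controller's notion of the current page rather than the JSF view root's.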

    Hi,
    Thanks for helping me out, Simon.
    I am not sure what changes should be made to my web.xml so that the resources are not reported as view ids.
    So here is my web.xml, in the hope that you would be so kind as to point out where the problem lies.
    <?xml version = '1.0' encoding = 'windows-1252'?>
    <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
        <description>Empty web.xml file for Web Application</description>
        <context-param>
            <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
            <param-value>client</param-value>
        </context-param>
        <context-param>
            <description>If this parameter is true, there will be an automatic check of the modification date of your JSPs, and saved state will be discarded when JSP's change. It will also automatically check if your skinning css files have changed without you having to restart the server. This makes development easier, but adds overhead. For this reason this parameter should be set to false when your application is deployed.</description>
            <param-name>org.apache.myfaces.trinidad.CHECK_FILE_MODIFICATION</param-name>
            <param-value>false</param-value>
        </context-param>
        <context-param>
            <param-name>RedirectToLogin</param-name>
            <param-value>login</param-value>
        </context-param>
        <context-param>
            <description>Whether the 'Generated by...' comment at the bottom of ADF Faces HTML pages should contain version number information.</description>
            <param-name>oracle.adf.view.rich.versionString.HIDDEN</param-name>
            <param-value>false</param-value>
        </context-param>
        <filter>
            <filter-name>JpsFilter</filter-name>
            <filter-class>oracle.security.jps.ee.http.JpsFilter</filter-class>
        </filter>
        <filter>
            <filter-name>trinidad</filter-name>
            <filter-class>org.apache.myfaces.trinidad.webapp.TrinidadFilter</filter-class>
        </filter>
        <filter>
            <filter-name>adfBindings</filter-name>
    <!--
            <filter-class>oracle.adf.model.servlet.ADFBindingFilter</filter-class>
    -->
            <filter-class>obasi.instel.customlogin.DynamicJDBCBindingFilter</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>JpsFilter</filter-name>
            <servlet-name>Faces Servlet</servlet-name>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>REQUEST</dispatcher>
            <dispatcher>INCLUDE</dispatcher>
        </filter-mapping>
        <filter-mapping>
            <filter-name>trinidad</filter-name>
            <servlet-name>Faces Servlet</servlet-name>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>REQUEST</dispatcher>
        </filter-mapping>
        <filter-mapping>
            <filter-name>adfBindings</filter-name>
            <servlet-name>Faces Servlet</servlet-name>
            <dispatcher>FORWARD</dispatcher>
            <dispatcher>REQUEST</dispatcher>
        </filter-mapping>
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet>
            <servlet-name>resources</servlet-name>
            <servlet-class>org.apache.myfaces.trinidad.webapp.ResourceServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>/faces/*</url-pattern>
        </servlet-mapping>
        <servlet-mapping>
            <servlet-name>resources</servlet-name>
            <url-pattern>/adf/*</url-pattern>
        </servlet-mapping>
        <servlet-mapping>
            <servlet-name>resources</servlet-name>
            <url-pattern>/afr/*</url-pattern>
        </servlet-mapping>
        <session-config>
            <session-timeout>35</session-timeout>
        </session-config>
        <mime-mapping>
            <extension>html</extension>
            <mime-type>text/html</mime-type>
        </mime-mapping>
        <mime-mapping>
            <extension>txt</extension>
            <mime-type>text/plain</mime-type>
        </mime-mapping>
    </web-app>
    Please note that I am using Steve Muench's DynamicJDBCCredentials (http://radio.weblogs.com/0118231/stories/2004/09/23/notYetDocumentedAdfSampleApplications.html #129)

  • ADADMIN Relink Applications programs Unable to determine the release ...

    Hi, Everyone -
    Red Hat Enterprise Linux Server release 5.4 Beta (Tikanga)
    ORACLE RDBMS Version 8.0.6.0.0
    Oracle Reports Version 6.0
    Oracle Applications Release 11
    The Relink Application program process is erroring out ... Please assist
    ===================================================
    OPTIONS:
    Do you wish to proceed with the relink [Yes] ? Yes
    Enter the name of your Oracle Applications environment file below.
    File name [TSTERP2_dallindr187.env] : custom_TSTERP2.env
    Reading product executable information...
    Enter list of products to link ('all' for all products) [all] :
    Generate specific executables for each selected product [No] ?
    AD Administration can relink your Oracle Applications programs with debug
    information. Oracle recommends that you do not relink your programs
    with debug information unless asked to do so by Oracle Support Services.
    Relink with debug information [No] ?
    Relinking selected modules in Application Object Library.
    You are running adrelink, version 115.28
    ==============================================================
    Error:
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1409: grep: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1410: grep: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1686: which: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1686: file: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1686: grep: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1920: awk: command not found
    ORACLE RDBMS Version 8.0.6.0.0
    Oracle Reports Version 6.0
    Oracle Applications Release
    Unable to determine the release number.
    Please make sure the file /tsterp2/appsvr/tsterp2appl/custom_TSTERP2.env is correct
    adrelink is exiting with status 1
    End of adrelink session
    Date/time is Fri Apr 16 18:04:19 CDT 2010
    Line-wrapping log file for readability ...
    Done line-wrapping log file.
    Original copy is /tsterp2/appsvr/tsterp2appl/admin/TSTERP2/log/adrelink.lsv
    New copy is /tsterp2/appsvr/tsterp2appl/admin/TSTERP2/log/adrelink.log

    Hi,
    Error:
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1409: grep: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1410: grep: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1686: which: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1686: file: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1686: grep: command not found
    /tsterp2/appsvr/tsterp2appl/ad/11.5.0/bin/adrelinknew.sh: line 1920: awk: command not found
    Did you source the correct application env file before starting the relinking (APPSORA<CONTEXT_NAME>.env file under $APPL_TOP)?
    As applmgr user, what does the following return?
    $ which grep
    $ which awk
    $ which file
    Do you have the directory where those files are located in the PATH (issue "echo $PATH" to verify)?
    Unable to determine the release number.
    Please make sure the file /tsterp2/appsvr/tsterp2appl/custom_TSTERP2.env is correct
    Does this custom env file exist? If yes, are all the entries correct in the file?
    Regards,
    Hussein

  • PB2 issue Bing Maps says 'Unable to determine your current location'

    PB2 issue Bing Maps says 'Unable to determine your current location'.
    Please help.

    That happens if you are indoors and the app cannot locate your GPS position.
    You may have to take your PB outside for a minute or so, so that it can get a fix on your location.
    Have you tried Google Maps, or the MapApp from App World that also uses Google Maps?
    They are much better than Bing Maps.
    DC-IT

  • Unable to open Windows using Boot Camp. I get the message "The bless tool was unable to set the current boot disk." Any thoughts? Thank you.

    Unable to open Windows using Boot Camp. I get the message "The bless tool was unable to set the current boot disk." I am using an iMac, the Lion operating system, and Windows 7. It worked a few days ago. Any thoughts? Thank you.

    Note that nowhere in the Boot Camp instructions does it tell you to use Disk Utility to format the Windows partition. The Boot Camp Assistant program creates the partition & sets the +partition scheme info+ of the disk as appropriate for the Windows installer but the Windows installer itself is responsible for formatting the new partition with the appropriate +file system scheme+ (NTFS for Windows 7).
    If you follow the instructions in the Boot Camp Installation & Setup Guide to the letter you should have no problems installing Windows.

  • Oracle 11g Release 2 error "Unable to get the current group"

    Hi Oracle gurus,
         I have been trying to install Oracle 11g Release 2 on HP-UX 11.31 and I am getting the following error:
    # more installActions2010-01-06_10-27-37AM.log
    oracle.install.ivw.db.driver.DBInstaller
    -scratchPath
    /u01/tmp/OraInstall2010-01-06_10-27-37AM
    -sourceLoc
    /u01/install/database/install/../stage/products.xml
    -sourceType
    network
    -timestamp
    2010-01-06_10-27-37AM
    INFO: Loading data from: jar:file:/u01/tmp/OraInstall2010-01-06_10-27-37AM/ext/jlib/installcommons_1.0.0b.jar!/oracle/install/driver/oui/resource/ConfigComma
    ndMappings.xml
    INFO: Loading beanstore from jar:file:/u01/tmp/OraInstall2010-01-06_10-27-37AM/ext/jlib/installcommons_1.0.0b.jar!/oracle/install/driver/oui/resource/ConfigC
    ommandMappings.xml
    INFO: Restoring class oracle.install.driver.oui.ConfigCmdMappings from jar:file:/u01/tmp/OraInstall2010-01-06_10-27-37AM/ext/jlib/installcommons_1.0.0b.jar!/
    oracle/install/driver/oui/resource/ConfigCommandMappings.xml
    SEVERE: [FATAL] An internal error occurred within cluster verification framework
    Unable to get the current group.
    Refer associated stacktrace #oracle.install.commons.util.exception.DefaultErrorAdvisor:11
    INFO: Advice is ABORT
    SEVERE: Unconditional Exit
    INFO: Adding ExitStatus FAILURE to the exit status set
    INFO: Finding the most appropriate exit status for the current application
    INFO: Exit Status is -1
    INFO: Shutdown Oracle Database 11g Release 2 Installer
    $
    >>
    # more oraInstall2010-01-06_10-27-37AM.err
    ---# Begin Stacktrace #---------------------------
    ID: oracle.install.commons.util.exception.DefaultErrorAdvisor:11
    oracle.cluster.verification.VerificationException: An internal error occurred within cluster verification framework
    Unable to get the current group
    at oracle.cluster.verification.ClusterVerification.<init>(ClusterVerification.java:200)
    at oracle.cluster.verification.ClusterVerification.getInstance(ClusterVerification.java:294)
    at oracle.install.driver.oui.OUISetupDriver.load(OUISetupDriver.java:407)
    at oracle.install.ivw.db.driver.DBSetupDriver.load(DBSetupDriver.java:161)
    at oracle.install.commons.base.driver.common.Installer.run(Installer.java:216)
    at oracle.install.ivw.db.driver.DBInstaller.run(DBInstaller.java:126)
    at oracle.install.commons.util.Application.startup(Application.java:869)
    at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164)
    at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181)
    at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265)
    at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:114)
    at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:132)
    ---# End Stacktrace #-----------------------------
    <<
    $ uname -a
    HP-UX rx2600 B.11.31 U ia64 <XXXXXXXX> unlimited-user license
    # swlist | grep -i oe
    HP-Caliper-PERF C.11.31.04 HP Caliper OE Bundle
    HP-WDB-DEBUGGER C.11.31.04 HP DEBUGGER OE Bundle
    HPUX11i-DC-OE B.11.31.0903 HP-UX Data Center Operating Environment
    # swlist | grep -i qpk
    QPKBASE B.11.31.0903.334a Base Quality Pack Bundle for HP-UX 11i v3, March 2009
    # swlist -l product | grep -i c++
    ACXX C.06.20 HP C/aC++ Compiler
    C-ANSI-C C.06.20 HP C/aC++ Compiler
    PHSS_37501 1.0 aC++ Runtime (IA: A.06.16, PA: A.03.76)
    PHSS_39824 1.0 HP C/aC++ Compiler (A.06.23)
    I start the installation as the oracle user, and my user and group IDs are:
    $ id
    uid=109(oracle) gid=102(oinstall) groups=101(dba),104(asmdba)
    Please let me know what else I need. We might have to use the hardware for testing another application, so I am limited in terms of time constraints.
    Any help is much appreciated. Thank you!
    Regards,
    Dasjith
    Edited by: user10247524 on Jan 6, 2010 7:52 AM

    Hi Stig Sundqvist,
    Where can I find MOS Doc 983713.1?
    You have to log in at https://support.oracle.com/CSP/ui/flash.html, and you need a CSI account. This is Oracle's site for technical documents, raising SRs, etc. For more details please check:
    What is CSI:
    Re: Installing Oracle Database 10.2.0.4
    And how do I fix this problem?
    Log in to Metalink, then find the note above and follow the document.
    Hope it helps
    Regards
    Helios

  • How to determine the mount point for directory /tmp ?

    Folks,
    Hello. I am installing Oracle 11gR2 RAC using two virtual machines (rac1 and rac2, whose OS is Oracle Linux 5.6) in VMware Player, following this tutorial:
    http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
    I am installing the Grid Infrastructure. I am on step 7 of 10 (verify Grid installation environment) and get this error:
    "Free Space: Rac2: /tmp"
    Cause: Could not determine mount point for location specified.
    Action: Ensure location specified is available.
    Expected value: n/a
    Actual value: n/a
    I have checked the free space using the command:
    [root@Rac2 /]# df -k /tmp
    Output:
    Filesystem     1k-blocks     used     Available     Use%     Mounted on
    /dev/sda1     30470144     7826952     21070432     28%     /
    As you can see above, there is enough free space, but the installer could not determine the mount point for /tmp.
    Does anyone know how to determine the mount point for the directory /tmp?
    Thanks.

    I have just checked "/home/oracle/.bash_profile". But on my computer, there is no "oracle" under the /home directory.
    Is this your first Linux and Oracle installation? I had a brief look at your referenced link. The reason you do not find an "oracle" user is that the instructions use "ora11g" instead, which, by the way, is not standard. The directories of your installation and your installation source can differ somewhat from known standards, and you will have to adjust them to your system.
    My best guess is that you have either missed something in the instructions or you need to ask the author of the blog what is wrong. The chances of finding someone here who has experience with these custom instructions are probably slim.
    I suggest you try to locate the cluster verification tool, which should be in the bin directory of your grid installation. Alternatively you might want to check the RAC, ASM & Clusterware Installation forum: RAC, ASM & Clusterware Installation
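    As an aside on the literal question (finding the mount point behind /tmp programmatically), a Linux-specific Java sketch is to take the longest /proc/mounts entry that prefixes the directory's absolute path; on the command line, df -P /tmp reports the same thing:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        // Sketch, Linux-specific: resolve the mount point backing a directory by
        // scanning /proc/mounts for the longest mount point that prefixes the path.
        public class MountPointFinder {

            public static String mountPointOf(String path) throws IOException {
                String best = "/";
                BufferedReader reader = new BufferedReader(new FileReader("/proc/mounts"));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] fields = line.split("\\s+");
                        if (fields.length < 2) {
                            continue;
                        }
                        String mount = fields[1];
                        boolean prefixes = path.equals(mount)
                                || path.startsWith(mount.endsWith("/") ? mount : mount + "/");
                        if (prefixes && mount.length() > best.length()) {
                            best = mount;
                        }
                    }
                } finally {
                    reader.close();
                }
                return best;
            }

            public static void main(String[] args) throws IOException {
                System.out.println(mountPointOf("/tmp")); // "/" on the poster's system, per the df output above
            }
        }

    This only answers the literal mount-point question; it does not explain why the installer's check fails on that node.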

  • Unable to determine the value of component  ''! Workflow error, please help

    Hi All:
    I am getting the error "Unable to determine the value of component  ''" and workflows are not getting processed. I found OSS note 879100 related to this issue, but I am new to workflow and not sure where to start! Here is the note's description:
    The workflow WS20500025 ('Service Connection Order Processing') encounters an error directly after being started. The following error messages are displayed in the workflow log:
    SWP102: Error at start of an IF branch
    WL821: Work item 000000100012: Object FLOWITEM method EXECUTE cannot be executed
    SWF_RUN535: Error '9' when calling service 'SO_OBJECT_SEND'
    SWF_RUN534: Problems occurred when generating a mail
    WL863: Notification of completion cannot be generated
    SWF_EXP_001073: Unable to determine the value of component 'QUOTATION'
    SWF_EXP_001072: Error in the evaluation of expression '<???>' for item 1
    SWF_RLS_001101: Operator 'GT': The value of the left operand cannot be determined
    SWP085: Error when starting work item 000000100012
    SWP010: Error when evaluating the IF condition for node 0000000075
    Other terms
    Service connection processing, service connection workflow
    Reason and Prerequisites
    This is due to a program error.
    The checks in the workflow Basis have been made stricter. A query of the type 'Quotation.number > 0' now causes an error if the container element 'Quotation' is not set. This is always the case if the workflow is running without quotation processing, for instance.
    Solution
    The error is corrected in the next Support Package.
    If you want to implement the corrections manually in your system, change the condition in step 000075 of workflow WS20500025 as follows:
        &IS-U: Kundenangebot& EX
    and &IS-U: Kundenangebot.Verkaufsbeleg& > 0
    Save and activate the workflow.
    I am new to workflow and not sure where to start! Could someone please help me with this? Rewards assured.
    Thanks.
    Mithun

    Any ideas? Please help if anyone knows the answer.
    Thanks.
    Mithun

  • How to determine the current update level of the system

    How can we determine the current update level of the system? uname -a shows the release, but how can we obtain the update level through a program?
    I have this sample program to display the version
    #include <iostream>
    #include <sys/utsname.h>
    #include <dirent.h>
    using namespace std;
    int main()
    {
      struct utsname osinfo;
      // Call uname to get system info, then extract strings.
      uname(&osinfo);
      if (osinfo.machine[0] != '\0') {
        cout << " Machine : " << osinfo.machine;
      }
      if (osinfo.sysname[0] != '\0') {
        cout << "\nOS Name : " << osinfo.sysname;
      }
      if (osinfo.release[0] != '\0') {
        cout << "\nRelease : " << osinfo.release;
      }
      return 0;
    }
    My aim is to check whether the Solaris box is 5.10 update 4 or not.
    Edited by: nidhish9 on Nov 27, 2007 5:11 AM

    nidhish9 wrote:
    How can we determine the current update level of the system? uname -a shows the release, but how can we obtain the update level through a program?
    My aim is to check whether the Solaris box is 5.10 update 4 or not.
    Edited by: nidhish9 on Nov 27, 2007 5:11 AM
    It's in /etc/release...
    essapd020-u004$ cat /etc/release
                           Solaris 10 8/07 s10s_u4wos_12b SPARC
               Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
                            Use is subject to license terms.
                                Assembled 16 August 2007
    essapd020-u004$
    It can be processed with some simple commands:
    essapd020-u004$ cat /etc/release | head -1
                           Solaris 10 8/07 s10s_u4wos_12b SPARC
    essapd020-u004$ cat /etc/release | head -1 | cut -f2 -d_ | cut -c1,2
    u4
    essapd020-u004$
    Best,
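    For the original question (getting the update level from a program rather than a shell pipeline), here is a rough sketch of the same idea in Java; the equivalent C++ is a straightforward translation. It assumes the first line of /etc/release looks like the output above:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        // Sketch: extract the Solaris update tag ("u4", "u6", ...) from the first line
        // of /etc/release, mirroring the "cut -f2 -d_ | cut -c1,2" pipeline above.
        public class SolarisUpdateLevel {

            public static String updateLevel() throws IOException {
                BufferedReader reader = new BufferedReader(new FileReader("/etc/release"));
                try {
                    String first = reader.readLine(); // e.g. "Solaris 10 8/07 s10s_u4wos_12b SPARC"
                    if (first == null) {
                        return null;
                    }
                    String[] parts = first.trim().split("_");
                    return (parts.length > 1 && parts[1].length() >= 2)
                            ? parts[1].substring(0, 2) : null;
                } finally {
                    reader.close();
                }
            }

            public static void main(String[] args) throws IOException {
                System.out.println("u4".equals(updateLevel())
                        ? "This box is Solaris 10 update 4"
                        : "Update level: " + updateLevel());
            }
        }

    Note that the naming scheme of /etc/release is not formally documented, so treat the parsing as a best-effort check.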

  • iWork Numbers -- how to determine the currently selected column/row?

    I am trying to write a script that grabs some text from a Numbers spreadsheet based on the selected field. I just can't figure out how to actually get the column and row in a way I can use. Here's my start:
    tell application "Numbers"
    tell document 1
    -- DETERMINE THE CURRENT SHEET
    set currentsheetindex to 0
    repeat with i from 1 to the count of sheets
    tell sheet i
    set x to the count of (tables whose selection range is not missing value)
    end tell
    if x is not 0 then
    set the currentsheetindex to i
    exit repeat
    end if
    end repeat
    if the currentsheetindex is 0 then error "No sheet has a selected table."
    -- GET THE VALUES OF THE SELECTED CELLS
    tell sheet currentsheetindex
    set the current_table to the first table whose selection range is not missing value
    tell the current_table
    set the range_values to the value of every cell of the selection range
    set therange to every cell of the selection range
    set thecol to column of first item of therange
    set therow to row of first item of therange
    end tell
    end tell
    end tell
    end tell
    It gives me a result like
    row "30" of table "table" of sheet "sheet 1" of document "Scheduler.numbers" of application "Numbers"
    but what I need is just "30" so I can use it in my scripts. I can't figure out how to convert this to a string or some other useful format.
    Can anyone help? I like Numbers, but I always run into little problems like this, something I can't figure out how to do, compared with other scriptable programs.

    Search for GetSelParams, or better, for get_SelParams in the existing threads.
    I have posted this handler more than 30 times!
    Yvan KOENIG (VALLAURIS, France) vendredi 18 mars 2011 17:03:05

  • CELL-01528: Unable to create the log file in directory

    Hi,
    We have a quarter-rack Exadata.
    I'm getting an error message on two of the storage servers when trying to access cellcli:
    CELL-01528: Unable to create the log file in directory
                             /opt/oracle/cell11.2.2.3.2_LINUX.X64_110520/cellsrv/deploy/log.
    Error: Couldn't get lock for
                             /opt/oracle/cell11.2.2.3.2_LINUX.X64_110520/cellsrv/deploy/log/cellcli.lst.
    In that directory there are too many files [cellcli.lst.0.lck...............cellcli.lst.0.99.lck],
    whereas on the third storage server, which runs smoothly, there are three files [cellcli.lst.0, cellcli.lst.0.1, cellcli.lst.0.2]
    without any file with the "lck" extension.
      ( Sorry for bad English)
    BR
    Sami

    Hello, check directory permissions
