Migration of ECC 6.0 from standalone to HA environment on Solaris 10 and Oracle 10g

Hi all,
Currently we are running our ECC 6.0 production system as a standalone
system, and we now need to migrate it to a cluster (High Availability) environment.
We are thinking of taking an export of the standalone system using sapinst (we are not
using database backup/restore, as the system has both ABAP and
Java stacks) and using that export to install the system in the cluster
environment.
Is this possible? We have noticed that sapinst has a system copy option
for cluster environments. Can we use that option? Also, how much actual
system downtime is required for such an activity?
Please also point me to any standard document for this purpose, if available.
Thanks & Regards,
Tejas

(we are not
using database backup/restore, as the system has both ABAP and
Java stacks)
You could use the backup/restore method even with an ABAP+Java installation; the difference is that you also perform the preparation/export steps for Java. Remember that the Java stack is part of the same database when you have an ABAP+Java installation (been there, done that).
Basically you just need to follow the system copy guide, which is very clear about how to do the copy. Of course you can use export/import instead if you prefer. Either way, moving from non-HA to HA is entirely possible: you just need to prepare the system for HA, and on Solaris that is a matter of preparing the cluster nodes according to the OS/hardware vendor's documentation and running the installation with the system copy option.

Similar Messages

  • Migrate to RAC from Standalone DB

    Hi ALL,
    We need to migrate from non-RAC to RAC; we have EBS R12 (12.1.2) and the database is 11gR2.
    Kindly suggest.
    Thanks,

    Hi;
    In addition to Hussein Sawwan's great post, please see our previous topics:
    Non RAC to RAC clonning
    Cloning non-RAC to RAC ?
    Regards
    Helios

  • Hi, I am trying to migrate HSC0 from version 6.1 to version 7.1. HELP!

    Hi, I am trying to migrate HSC0 from version 6.1 to version 7.1, and I would like to take this opportunity to ask about the installation. For this new version 7.1, the PARMLIB member LKEYINFO (LKEYDEF) is reported as no longer supported. I would like to confirm whether this is so, and where I should set the license key for this new version.
    SLS1511I LKEYDEF DSN('SYS3.ORA.ELS.V700.TGT.VEN1.PARMLIB(-
    SLS1511I LKEYINFO)')
    SLS4639I LKEYDEF COMMAND IS NO LONGER SUPPORTED

    Hi;
    Is it related to Configuring and Managing SMC? If yes, review:
    http://docs.oracle.com/cd/E21395_02/en/E28330_02/E28330_02.pdf
    Regards
    Helios

  • Virtual Machine Manager 2012 R2 migrated from standalone to cluster - not starting

    Hello,
    we have migrated our Virtual Machine Manager 2012 R2 UR1 installation from a standalone machine to a clustered version.
    But now the cluster instance won't start up. The steps we took:
    - uninstalled the standalone Virtual Machine Manager 2012 with the retain-database option
    - created the failover cluster
    - installed VMM into the cluster and pointed it to the existing database
    - added the second node
    The error we now get is from the report.txt:
    ------------------- Error Report -------------------
    Error report created 17.04.2014 19:38:57
    CLR is not terminating
    --------------- Bucketing Parameters ---------------
    EventType=VMM20
    P1(appName)=vmmservice.exe
    P2(appVersion)=3.2.7620.0
    P3(assemblyName)=ImgLibEngine.dll
    P4(assemblyVer)=3.2.7620.0
    P5(methodName)=Microsoft.VirtualManager.DB.Adhc.StoredCertificate.CacheVMConnectCertificate
    P6(exceptionType)=System.AggregateException
    P7(callstackHash)=7b6a
    SCVMM Version=3.2.7620.0
    SCVMM flavor=C-buddy-RTL-AMD64
    Default Assembly Version=3.2.7620.0
    Executable Name=vmmservice.exe
    Executable Version=3.2.7510.0
    Base Exception Target Site=140721289766296
    Base Exception Assembly name=mscorlib.dll
    Base Exception Method Name=System.Security.Cryptography.CryptographicException.ThrowCryptographicException
    Exception Message=One or more errors occurred.
    EIP=0x00007ffc469c43c8
    Build bit-size=64
    ------------ exceptionObject.ToString() ------------
    System.AggregateException: One or more errors occurred. ---> System.Security.Cryptography.CryptographicException: The specified network password is not correct.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.X509Certificates.X509Utils._LoadCertFromBlob(Byte[] rawData, IntPtr password, UInt32 dwFlags, Boolean persistKeySet, SafeCertContextHandle& pCertCtx)
    at System.Security.Cryptography.X509Certificates.X509Certificate.LoadCertificateFromBlob(Byte[] rawData, Object password, X509KeyStorageFlags keyStorageFlags)
    at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(Byte[] rawData, SecureString password, X509KeyStorageFlags keyStorageFlags)
    at Microsoft.VirtualManager.DB.Adhc.StoredCertificate.CacheVMConnectCertificate(StoredCertificate cert)
    at Microsoft.VirtualManager.DB.Adhc.StoredCertificate.ImportCertificates(List`1 certificates, ReportCertImportFailure ReportImportFailure)
    at Microsoft.VirtualManager.DB.Adhc.StoredCertificate.ImportAllCertificates(ReportCertImportFailure ReportImportFailure)
    at Microsoft.VirtualManager.Engine.VirtualManagerService.LoadCertificates()
    at Microsoft.VirtualManager.Engine.VirtualManagerService.TimeStartupMethod(String description, TimedStartupMethod methodToTime)
    at System.Threading.Tasks.Task.Execute()
    --- End of inner exception stack trace ---
    at System.Threading.Tasks.Task.Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken)
    at Microsoft.VirtualManager.Engine.VirtualManagerService.WaitForStartupTasks()
    at Microsoft.VirtualManager.Engine.VirtualManagerService.TimeStartupMethod(String description, TimedStartupMethod methodToTime)
    at Microsoft.VirtualManager.Engine.VirtualManagerService.ExecuteRealEngineStartup()
    at Microsoft.VirtualManager.Engine.VirtualManagerService.TryStart(Object stateInfo)
    at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    at System.Threading.TimerQueueTimer.CallCallback()
    at System.Threading.TimerQueueTimer.Fire()
    at System.Threading.TimerQueue.FireNextTimers()
    ---> (Inner Exception #0) System.Security.Cryptography.CryptographicException: The specified network password is not correct.
    at System.Security.Cryptography.CryptographicException.ThrowCryptographicException(Int32 hr)
    at System.Security.Cryptography.X509Certificates.X509Utils._LoadCertFromBlob(Byte[] rawData, IntPtr password, UInt32 dwFlags, Boolean persistKeySet, SafeCertContextHandle& pCertCtx)
    at System.Security.Cryptography.X509Certificates.X509Certificate.LoadCertificateFromBlob(Byte[] rawData, Object password, X509KeyStorageFlags keyStorageFlags)
    at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(Byte[] rawData, SecureString password, X509KeyStorageFlags keyStorageFlags)
    at Microsoft.VirtualManager.DB.Adhc.StoredCertificate.CacheVMConnectCertificate(StoredCertificate cert)
    at Microsoft.VirtualManager.DB.Adhc.StoredCertificate.ImportCertificates(List`1 certificates, ReportCertImportFailure ReportImportFailure)
    at Microsoft.VirtualManager.DB.Adhc.StoredCertificate.ImportAllCertificates(ReportCertImportFailure ReportImportFailure)
    at Microsoft.VirtualManager.Engine.VirtualManagerService.LoadCertificates()
    at Microsoft.VirtualManager.Engine.VirtualManagerService.TimeStartupMethod(String description, TimedStartupMethod methodToTime)
    at System.Threading.Tasks.Task.Execute()<---
    Maybe someone has an idea where to look.
    Best Regards,
    Marcus
    Marcus Lehmann

    Hi,
    nope, we solved it.
    It seems that this behavior occurs only under special circumstances.
    The scenario is:
    1. An RDP Gateway connection to our Hyper-V hosts (we need it for our Windows Azure Pack installation). A certificate is needed to encrypt the connection between the RDP Gateway and the Hyper-V host. This certificate is distributed by VMM to the Hyper-V hosts together with the private key.
    2. Migrating from a standalone installation to High Availability, and therefore migrating from DPAPI to DKM.
    It looks like the key or the password used to decrypt the private key, which is stored in the database, is itself stored via DPAPI/DKM.
    So when you try to bring the migrated service online, the error report in my first post is generated.
    What we did was fiddle around in the database. Make sure you have a backup!!
    You need two tables:
    tbl_ADHC_Host
    tbl_VMM_CertificateStore
    1. Go to the table "tbl_ADHC_Host" and edit the key fk_ADHC_Host_VMM_CertificateStore, setting "Enforce Foreign Key Constraint" to "No".
    2. Get the value "VMConnectCertificateID" from the table "tbl_ADHC_Host".
    3. Go to the table "tbl_VMM_CertificateStore" and delete the certificate whose ID equals the "VMConnectCertificateID". Note: the corresponding certificate in the certificate store table should have something in the column "PrivateKeyPassword" and an ObjectType of 6.
    4. The service should now start, and you can reconfigure the certificate used for the encryption between your RDP Gateway and Hyper-V hosts, as you did before.
    5. Go to the table "tbl_ADHC_Host" and edit the key fk_ADHC_Host_VMM_CertificateStore, setting "Enforce Foreign Key Constraint" back to "Yes".
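    The pattern in steps 1-5 (relax the foreign key, delete the referenced certificate row, re-enable the constraint) can be sketched in miniature. This illustration uses Python's built-in sqlite3 with hypothetical table and column names standing in for the real SQL Server tables; it demonstrates only the constraint behaviour, not the actual VMM schema:

```python
import sqlite3

# Autocommit mode so PRAGMA foreign_keys takes effect immediately.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("PRAGMA foreign_keys = ON")

# Stand-ins for tbl_VMM_CertificateStore and tbl_ADHC_Host (hypothetical schema).
con.execute("CREATE TABLE cert_store (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE host (id INTEGER PRIMARY KEY,"
            " cert_id INTEGER REFERENCES cert_store(id))")
con.execute("INSERT INTO cert_store VALUES (1)")
con.execute("INSERT INTO host VALUES (1, 1)")

# With the constraint enforced, deleting the referenced certificate fails.
try:
    con.execute("DELETE FROM cert_store WHERE id = 1")
    delete_blocked = False
except sqlite3.IntegrityError:
    delete_blocked = True

# Step 1: stop enforcing the constraint; step 3: delete the certificate row.
con.execute("PRAGMA foreign_keys = OFF")
con.execute("DELETE FROM cert_store WHERE id = 1")

# Step 5: re-enable enforcement once the cleanup is done.
con.execute("PRAGMA foreign_keys = ON")
remaining = con.execute("SELECT COUNT(*) FROM cert_store").fetchone()[0]
```

    Note that the host row keeps its now-dangling cert_id, which is why the original steps have you reconfigure the certificate afterwards.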
    Hope that helps. ;)
    Best Regards,
    Marcus
    Marcus Lehmann

  • Migration from Standalone 10.2 with ASM to RAC 11.2

    Hi,
    I have the following situation:
    + My 10.2 Standalone Database "DB" + ASM (Version 11.2) runs on RH 5.4/64 (Node 1),
    + My 11.2 RAC-DB "RAC" run also on Node 1.
    Now I want to migrate the 10.2 Standalone Database "DB" on the RAC-Environment to a 11.2 RAC-Database "DB".
    Can anyone tell me the optimal Upgrade/Migration Path ?
    Thanks in advance
    hqt200475

    Hi,
    Normally, if you have a single-instance database in one version and you want it converted to RAC in another version, you do the following:
    1. Back up the database.
    2. Install the binaries of the new version for the single-instance database.
    -- install 11gR2 as stand-alone (ASM is already there; you just create the RDBMS binaries home)
    3. Upgrade the stand-alone database from the old version to the new version.
    -- upgrade from 10.2 to 11.2
    Check Note 251.1 for more details about the upgrade path and checklist.
    4. After the upgrade, convert from single instance to RAC using the following link:
    http://download.oracle.com/docs/cd/E11882_01/install.112/e10813/cvrt2rac.htm (for Linux)
    -- of course, check the link for your platform
    Then you will have your database upgraded and converted to RAC in the new version.
    Hope this gives you good guidance. Good luck.
    Regards,
    Amr Ibrahim

  • Migrating from standalone server to Open LDAP

    We have a 10.4.10 server that serves DNS, mail and web only. It is set up as a standalone server and we would like to migrate (convert) it to an LDAP server, a master, I would assume.
    Is there a best-practices, step-by-step guide for doing this?
    Or any caveats from others that have made this conversion?
    Thanks in advance,
    Mike

    As far as what LDAP gives me, we need the directory to provide users to a spam appliance. From there, we're implementing Kerberos for the organization to unify access and passwords.
    I did somewhat successfully change the server from standalone to OD. I say somewhat because this is a test machine and the DNS was a little off. Once I corrected the issues, though, and did a bit of reading, I changed it from standalone back to OD again, and it works well.
    However, and this has to do with the user import and export: I understand that exporting from the local node will not preserve passwords. Fine, I can re-enter the passwords. But when I re-import the users into the LDAP directory node, choosing to OVERWRITE the existing users, it does not delete them from the local node. This may be because the overwrite applies to the current node (though they still have the same ID).
    So then, I assume I can delete the local-node users, since they are no longer needed and will confuse the search tree (local node, then LDAP, ...). But doing that deletes any mail they had. No matter whether you turn off mail for that user or turn off the mail services, the mail gets deleted when the user gets deleted.
    Obviously this is a problem, since the same user is now using the mail store from the same user account in the LDAP node.
    So how do I get the users' mail to stay put once the user is in the LDAP directory node? I am experimenting with backing up and restoring the mail to the directory, but I have 35 GB of it. And Cyrus gets a little funky when its databases are out of sync...
    Thanks...

  • Migrating from standalone ITS to internal ITS – copy existing templates

    Dear Sirs,
    I am in the process of migrating from the standalone ITS to the internal one. I have found several documents on the issue, as well as the help section for the migration: http://help.sap.com/saphelp_nw04/helpdata/en/cd/8a424089ff2571e10000000a155106/content.htm
    Currently I am using a standalone ITS server, 6.20 PL13, to access and run IAC components on the ERP2004 server ZXY. In the new setup I want to run the internal ITS on server ZXY instead.
    I have, however, a question regarding the process. Step one in the process is to copy the existing templates.
    If I enter transaction SE80, I am not able to see all of the IAC component names which are referenced in my portal iViews. I therefore assume that the ones which do not show up have not been converted properly.
    I therefore go into transaction SIAC1 to look for them there (as described in "ITS – New storage structures as of SAP Web AS 6.40", Note 678904), but I am unable to find them there either.
    Do I have to go to the current ITS server and somehow copy the files/IACs manually (save them to disk), and then upload them with SIAC_UPLOAD (from SE38) onto the R/3 system? Or do they already exist on my R/3 system and are just not shown?
    Could a possible explanation be that the name of the IAC component in the portal iView differs from the name actually used in the R/3 system?
    Any pointers or hints on this issue are appreciated.
    Best regards,
    Jørgen

    Deepak Sharma,
    Thank you for your reply. But I have already published the services internally and have actually used the 'webgui' service successfully. What I am trying to do now is copy the two custom services from my standalone ITS server to the integrated ITS.

  • Question on Migrating SAP server from one system to another

    Hello SAP Gurus
    I need your expert advice with the following issue.
    We have ECC 6.0 and Enterprise Portal installed on a Windows 2003 server with 4 GB of memory for demo work (pre-sales).
    Lately we are seeing lots of performance issues, so we would like to migrate to a faster server. Keeping the budget in mind,
    we would like to go for Windows 7 64-bit so that we can have at least 8 GB of memory and about 250 GB of disk space.
    How easy is it to migrate from the old server to the new server without losing any data that we have created (lots of custom BAPIs, tables, etc.)?
    Is it as simple as installing the new server, creating a transport request package of the BAPIs and tables on the old system, and importing it into the new server? Or is there anything else I need to look out for? Also, is Windows 7 64-bit a supported platform for SAP?
    I am sure lots of you have already done this and might have acquired good knowledge. I would appreciate if you can please share that with me.
    Thanks
    Ram

    Hi Prasad,
    I am not sure whether Windows 7 64-bit is a supported platform. Please check the PAM (Product Availability Matrix).
    Apart from that, what you are trying to achieve is not a very complex activity. You can achieve it by means of a System Copy: building a new system on the new platform from an existing offline database while keeping the SID the same. This retains all of your data (custom BAPIs, tables, and so on) and saves you the tedious work of creating and importing innumerable transport requests.
    Please visit the SAP Service Marketplace for the System Copy Guide. By the way, can you let us know whether the current platform is also 64-bit?
    Let us know if you have any more concerns.
    Regards
    Sourabh Majumdar

  • Moving VMs from Standalone Hyper-V to Hyper-v Failover Cluster

    Hello all. I need to move virtual machines from a standalone Hyper-V host based on Windows Server 2012 R2 to a Hyper-V failover cluster based on Windows Server 2012 R2.
    - The VMs' OS is Windows Server 2012 R2
    - Applications installed on the VMs are SCCM, SCOM, SharePoint, etc.
    1. I am looking for a checklist that I can run through to move them, to make sure I do not miss anything.
    2. Would downtime be required in this scenario while a VM is being moved?
    3. What requires special attention in this scenario?

    Thanks for the reply.
    So I do not need to do anything special: just add both the cluster and the standalone host into VMM and then use the Move option. Please confirm that I have understood correctly.
    Yes, but be careful. First shut down your VMs
    if possible and configure them so they don't start automatically. If you install the SCVMM agent, some Hyper-V related services are restarted; if some VMs start during that process, your system may stall. I have seen this multiple times.
    There is another dirty option that worked perfectly for us. Export the VMs, then import them to a local disk on one Hyper-V server. Refresh the Hyper-V server and the virtual machines on it, and give it a few minutes. Then do
    a live migration while at the same time making the VM "Highly Available" by moving it to your CSV. Works flawlessly. DO NOT import the VMs directly to your CSV, otherwise you can't change them to "Highly Available" unless you have
    two CSVs. Importing straight to a CSV does not make them Highly Available.
    Boudewijn Plomp | BPMi Infrastructure & Security

  • Migrate Application Role from uat to prod in 11.1.1.6.10

    Hi All,
    We have to migrate the UAT application roles to the Prod instance. I followed the Rittman Mead policy store migration guide (the servers are on Linux):
    http://www.rittmanmead.com/2011/04/oracle-bi-ee-11g-migrating-security-policy-store-part-2/
    But at the MigrateSecurityStore step, the wlst script is throwing the error below:
    wls:/offline> migrateSecurityStore(type="appPolicies",srcApp="obi",configFile="/usr/app/MW/SecurityMigration/jps-config-policy.xml",src="sourceFileStore",dst="targetFileStore",overWrite="false")
    Oct 17, 2013 11:41:27 AM oracle.security.jps.internal.config.xml.XmlConfigurationFactory initDefaultConfiguration
    SEVERE: org.xml.sax.SAXParseException: The XML declaration must end with "?>".
    Command FAILED, Reason: The XML declaration must end with "?>".
    Traceback (innermost last):
      File "<console>", line 1, in ?
      File "/usr/app/MW/oracle_common/common/wlst/jpsWlstCmd.py", line 955, in migrateSecurityStore
      File "/usr/app/MW/oracle_common/common/wlst/jpsWlstCmd.py", line 927, in migrateSecurityStoreImpl
            at oracle.security.jps.internal.tools.utility.source.JpsInitializerSource.getSources(JpsInitializerSource.java:155)
            at oracle.security.jps.internal.tools.utility.JpsUtility.<init>(JpsUtilty.java:62)
            at oracle.security.jps.internal.tools.utility.JpsUtilMigrationPolicyImpl.migrateAppPolicyData(JpsUtilMigrationPolicyImpl.java:151)
            at oracle.security.jps.tools.utility.JpsUtilMigrationTool.executeCommand(JpsUtilMigrationTool.java:231)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
    oracle.security.jps.JpsException: oracle.security.jps.JpsException: The XML declaration must end with "?>".
    This is the config.xml file:
    <?xml version='1.0' encoding='utf-8'? standalone='yes'?>
    <jpsConfig xmlns="http://xmlns.oracle.com/oracleas/schema/11/jps-config-11_1.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.oracle.com/oracleas/schema/11/jps-config-11_1.xsd">
       <property name="oracle.security.jps.jaas.mode" value="Off"/>
       <propertySets>
    <propertySet name="sam1.trusted.issuers.1">
    <property name="name" value="www.oracle.com" />
    </propertySet>
    </propertySets>
       <serviceProviders>
          <serviceProvider type="POLICY_STORE" name="policystore.xml.provider" class="oracle.security.jps.internal.policystore.xml.XmlPolicyStoreProvider">
             <description>XML-based PolicyStore Provider</description>
          </serviceProvider>
       </serviceProviders>
       <serviceInstance name="srcpolicystore.xml" provider="policystore.xml.provider" location="/usr/app/MW/SecurityMigration/uat/system-jazn-data.xml">           
      <description>File Based Policy Store Service Instance</description>       
      </serviceInstance>
      <serviceInstance name="policystore.xml" provider="policystore.xml.provider" location="/usr/app/MW/SecurityMigration/prod/system-jazn-data.xml">           
    <description>File Based Policy Store Service Instance</description>       
    </serviceInstance>
       </serviceInstances>
        <jpsContexts default="default">       
    <!-- This is the default JPS context. All the mendatory services and Login Modules must be configured in this default context -->       
    <jpsContext name="sourceFileStore">           
    <serviceInstanceRef ref="srcpolicystore.xml"/>       
    </jpsContext> <jpsContext name="targetFileStore">           
    <serviceInstanceRef ref="policystore.xml"/>     
    </jpsContext>   
    </jpsContexts>
    </jpsConfig>
    Please let me know if I need to provide further inputs. I appreciate your help.

    Make sure you are running wlst.sh from this path: /MWHOME/Oracle_BI1/common/bin/wlst.sh
    You can also take a look at: Migrating Security Policies from Development to Standalone WLS 11g
    http://ssssupport.blogspot.com/2013/02/obiee-11g-application-role-migration.html
    Obiee11g: Migrating application role from DEV to Prod server in obiee11g
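    Incidentally, the SAXParseException message ("The XML declaration must end with \"?>\"") points at the first line of the posted config file: <?xml version='1.0' encoding='utf-8'? standalone='yes'?> has a stray ? after 'utf-8', which terminates the declaration before the standalone attribute. Moving standalone='yes' inside the declaration should satisfy the parser. A small sketch of the difference, using Python's built-in xml.etree rather than wlst:

```python
import xml.etree.ElementTree as ET

# The declaration as posted: a stray '?' ends it before 'standalone'.
broken = "<?xml version='1.0' encoding='utf-8'? standalone='yes'?><jpsConfig/>"
# The declaration with 'standalone' inside it.
fixed = "<?xml version='1.0' encoding='utf-8' standalone='yes'?><jpsConfig/>"

# The broken declaration is rejected by the parser:
try:
    ET.fromstring(broken)
    broken_parses = True
except ET.ParseError:
    broken_parses = False

# The fixed declaration parses fine:
root = ET.fromstring(fixed)
```

    After correcting that first line of jps-config-policy.xml, rerun the migrateSecurityStore command.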

  • Migrate essbase app from 6 to 9

    We are planning to migrate an app from Essbase 6 to Essbase 9, onto a whole new server.
    Are there any issues with that? Any doc on migration from version 6 would be useful.
    It's urgent.

    I migrated from 6.6 to 9.3.1 and it went quite smoothly. What I did:
    Installed the server.
    Used the config tool and removed the Shared Services registration.
    Copied essbase.cfg to the new installation.
    Had to add the environment variables manually on AIX from the hyperionenv.doc.
    Installed the EAS server.
    Used the config tool for configuration and deployment of EAS with standard settings.
    Installed the EAS client and used the migration tool to migrate some databases to the new server from a 6.6 installation (6.6 running on a different machine than 9.3).
    Tested our batch processing for data load, security transfers, data staging, etc., which uses some API interfaces plus ESSCMD and MaxL scripts. All without any problems. All settings were kept the same as on the old system (essbase.cfg, DB settings, app settings).

  • Migration of content from Subversion repository

    Hi.  I'm trying to migrate some content from some subversion (svn) repositories into SharePoint document libraries.  Since I don't currently have .NET-based svn-reading capabilities, I've got my content in another application that was then calling into the SharePoint copy web service.  With this I can get almost everything I need, but not quite everything.  I want to be able to retain the check-in author, check-in date, and check-in comment from my svn metadata, but I don't see how to force that information into my document library at the time that I upload my document.
    So, how can I create a document in a versioned document library that will allow me to supply my own specific author, modified-by user, and check-in comment?  I am also not opposed to just buying a standalone product that does this, but I can't seem to find anything like it.
    thanks,
    Chris

    Interesting topic.
    We use TortoiseSVN heavily for our team development.
    We use it for maintaining all kinds of code: VBS, C#, PowerShell, IPF, AU3, etc.
    It's so easy to work with, but the backend is either a file share or an Apache web server.
    Currently there does not seem to exist an IIS backend or even a SharePoint solution.
    We have been toying with embedding SVN in IIS/MOSS 2007, since versioning is built in there.
    The problem is, IMHO, that the SVN protocol would have to be implemented in a special MOSS feature, and you would also need to access the backend through a web part in order to upload/download or subscribe to a repository.
    Supplying the mandatory comment attributes can be done in MOSS in the document library settings by adding columns and making them required.
    We do that kind of thing in combination with a special autonumbering feature in order to create unique ID numbers for each document.
    I've googled a while about this subject, but there is no real solution in IIS/SharePoint as far as I know.
    Marcel

  • NameNotFoundException when looking up datasource JNDI from a standalone client

    Hi,
    I'm trying to look up a datasource JNDI name from a standalone client, but I always get exceptions.
    I configured an Oracle datasource with the JNDI name "oracleDataSource". When looking it up in a servlet, I can get a connection.
    In order to test it from a standalone client, I created the following code:
    import java.sql.Connection;
    import java.util.Properties;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class DataSourceTest {
        /** Attempt to look up the datasource and get a connection. */
        public static void main(String[] args) {
            String datasource = null;
            if (args.length == 1) {
                datasource = args[0];
                System.out.println("datasource = " + datasource);
            }
            if (datasource == null) {
                datasource = "oracleDataSource";
            }
            try {
                Properties env = new Properties();
                env.put("java.naming.factory.initial", "com.sun.jndi.cosnaming.CNCtxFactory");
                env.put("java.naming.provider.url", "iiop://localhost:3700");
                Context initial = new InitialContext(env);
                DataSource ds = (DataSource) initial.lookup(datasource);
                Connection conn = ds.getConnection();
                if (conn != null) {
                    System.out.println("datasource is gotten.");
                    conn.close();
                } else {
                    System.out.println("datasource is error.");
                }
                System.exit(0);
            } catch (Exception ex) {
                System.err.println("Caught an unexpected exception!");
                ex.printStackTrace();
            }
        }
    }
    When running it, I get the following exception:
    [java] datasource = oracleDataSource
    [java] Caught an unexpected exception!
    [java] javax.naming.NameNotFoundException. Root exception is org.omg.CosNaming.NamingContextPackage.NotFound
    [java] at org.omg.CosNaming.NamingContextPackage.NotFoundHelper.read(NotFoundHelper.java:34)
    [java] at org.omg.CosNaming._NamingContextExtStub.resolve(_NamingContextExtStub.java:402)
    [java] at com.sun.jndi.cosnaming.CNCtx.callResolve(CNCtx.java:368)
    [java] at com.sun.jndi.cosnaming.CNCtx.lookup(CNCtx.java:417)
    [java] at com.sun.jndi.cosnaming.CNCtx.lookup(CNCtx.java:395)
    [java] at javax.naming.InitialContext.lookup(InitialContext.java:350)
    [java] at com.tbcn.ceap.test.cilent.DataSourceTest.main(DataSourceTest.java:42)
    I also tried many other methods but always got exceptions. What's wrong with it? How can I look up the datasource JNDI name from a standalone client?
    Thanks in advance!

    Thanks, Tuan!
    I tried that. When running, the server reads security.properties and ejb.properties. I don't use EJBs and didn't know how to configure ejb.properties, so I left it empty. The security.properties is as follows:
    client.sendpassword=true
    server.trustedhosts=*
    interop.ssl.required=false
    interop.authRequired.enabled=false
    interop.nameservice.ssl.required=false
    The result is:
    [java] javax.naming.CommunicationException: Can't find SerialContextProvider
    [java] at com.sun.enterprise.naming.SerialContext.getProvider(SerialContext.java:63)
    [java] at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:120)
    [java] at javax.naming.InitialContext.lookup(InitialContext.java:347)
    [java] at com.tbcn.ceap.test.cilent.DataSourceTest.main(DataSourceTest.java:42)
    Also, I tried it with the ACC. In the Sun sample ConverterClient.java under rmi-iiop/simple, I added the following code and ran it both with and without the ACC. With the ACC, I can get the connection; without the ACC, I can't.
    try {
        DataSource ds = (DataSource) initial.lookup("oracleDataSource");
        Connection conn = ds.getConnection();
        if (conn != null) {
            System.out.println("datasource oracleDataSource gotten.");
            conn.close();
        } else {
            System.out.println("oracleDataSource is error.");
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    Does that mean we must look up the datasource JNDI name with the ACC?

  • Migrate the EJBs from JVM to OC4J...?

    We plan to migrate our EJB modules from Oracle8i JServer to OC4J.
    Until now we have developed the EJBs using JDeveloper under the following
    conditions:
    - JVM: Oracle8i JServer (8.1.6.3.0)
    - EJBs: session beans only
    - IDE tool: JDeveloper 3.2
    Now we want to migrate or redeploy them to OC4J. Are there any good
    suggestions or ideas on how to do it? I would also like to know what to consider.
    Thanks in advance for any advice.
    Phyllis

    It would be good for you to migrate to OC4J, as performance in OC4J is a lot better. Here is a little migration write-up I did some time back. It is a work in progress; hope this helps.
    Migrating EJB components from Database EJB container to OC4J
    Author: Ashok Banerjee [[email protected]]
    Oracle recommends the new J2EE containers in Oracle9iAS (OC4J) for deploying J2EE components. The goal of this paper is to recommend ways to migrate existing EJB applications, from database EJB container, to OC4J.
    The database EJB container runs on the aurora session based Java Virtual Machine, whereas the new container runs on standard JDK (version 1.2.2 onward) .
    The database EJB container runs in a database session. The database is itself the middle tier, in such a configuration. Users therefore have a logically three tiered architecture, that runs physically on two tiers. The database serves as both the middle tier and the backend. In such situations users would use the "kprb" driver (used for requests made from within the database). No further authentication is needed if the backend database is the same as the middle tier. Users can also use one database as the middle tier and another database as the backend.
    In OC4J, the middle tier does not require a database installation, which helps keep it lightweight. OC4J is also significantly faster than the EJB container inside the database. Since OC4J runs on a regular JDK, the user can run OC4J on the Java Virtual Machine best suited to his/her specific needs. OC4J, however, has a limitation in that it does not support pure CORBA clients. Currently OC4J does not support RMI over IIOP either.
    To migrate existing EJB applications from inside the database to OC4J the set of changes is minimal.
    Code changes
    1. Change all references in the code and in any configuration files that use JDBC URLs from the "kprb" driver to either the thin or OCI driver. The "kprb" driver is only for applications running inside the database; the new OC4J runs outside the database and therefore must use the thin or OCI driver.
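As a minimal illustration of this change (the host, port and SID in the thin URL are placeholders for your own database):

```java
// Before: the server-side "kprb" driver, valid only inside the database JVM.
// After: the thin driver, usable from OC4J on the middle tier.
public class DriverUrlChange {
    public static void main(String[] args) {
        String oldUrl = "jdbc:oracle:kprb:";
        // placeholder host/port/SID - substitute your own database
        String newUrl = "jdbc:oracle:thin:@dbhost:1521:ORCL";
        System.out.println(oldUrl + "  ->  " + newUrl);
    }
}
```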
    2. The database EJB container's client-side JNDI lookups looked like this:
    env.put (Context.URL_PKG_PREFIXES, "oracle.aurora.jndi");
    env.put (Context.SECURITY_PRINCIPAL, user);
    env.put (Context.SECURITY_CREDENTIALS, password);
    env.put (Context.SECURITY_AUTHENTICATION, ServiceCtx.NON_SSL_LOGIN);
    Context ic = new InitialContext (env);
    while in the OC4J container the lookup code should read:
    env.setProperty("java.naming.factory.initial","com.evermind.server.ApplicationClientInitialContextFactory");
    env.setProperty("java.naming.provider.url","ormi://localhost");
    env.setProperty("java.naming.security.principal","username");
    env.setProperty("java.naming.security.credentials","password");
    Context ic = new InitialContext (env);
    3. Remove all references to oracle.aurora.* classes; they will not work with OC4J. These classes are specific to the container in the database.
    4. Check for client demarcation of bean-managed transactions, and remember that transactions in OC4J cannot be propagated across tiers. In other words, a client-demarcated transaction will work provided the client is another EJB or a servlet in the same OC4J instance. This is also listed in the "Current Limitations" section.
    Build changes
    1. For EJBs in the database, loadjava was used to load classes into the database; this step is no longer necessary for middle-tier EJBs. The user merely needs to ensure that the classes are on the classpath.
    2. Similarly the concept of resolver specification in the database EJB container ceases to be useful in OC4J. To make classes "findable" the user sets the classpath. In OC4J any jars in the $OC4J_HOME/lib directory are on the server side classpath of the application classloader. The user can also include classes in their own ear files before they deploy the ear. [If you do not know about the resolver specification, then you have not used it and do not need to know about it].
    3. The database J2EE container did not support .ear files. The OC4J container supports J2EE-standard .ear files. Both makefiles and ANT files are available in the code samples on OTN, which show how to build EAR files and also provide samples of their structure.
    The EAR file contains:
        META-INF/application.xml
        META-INF/orion-application.xml [optional]
        applicationEjb.jar
        applicationWeb.war
    applicationEjb.jar contains:
        META-INF/ejb-jar.xml
        META-INF/orion-ejb-jar.xml
        EJB classes
    applicationWeb.war contains:
        WEB-INF/web.xml
        WEB-INF/orion-web.xml
        WEB-INF/lib - jar files needed
        WEB-INF/classes - class files needed for JSPs and servlets
    For further information on how to write the xml files etc. please refer to their respective DTDs.
    Both web.xml and ejb-jar.xml should be unaffected by the migration.
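An EAR is an ordinary zip archive, so the layout described above can be illustrated with the JDK's zip classes. This sketch only creates empty placeholder entries using the names from the listing (a real build would put compiled classes and deployment descriptors inside, typically via jar or ANT):

```java
import java.io.*;
import java.util.zip.*;

public class EarLayout {
    // Build a small zip in memory containing only empty placeholder entries.
    static byte[] zipOf(String... entries) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(bytes)) {
            for (String entry : entries) {
                zip.putNextEntry(new ZipEntry(entry));
                zip.closeEntry();                  // empty placeholder entry
            }
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Inner module archives (names taken from the listing above).
        byte[] ejbJar = zipOf("META-INF/ejb-jar.xml", "META-INF/orion-ejb-jar.xml");
        byte[] war = zipOf("WEB-INF/web.xml", "WEB-INF/orion-web.xml");

        // Assemble the outer EAR: application.xml plus the two module archives.
        ByteArrayOutputStream earBytes = new ByteArrayOutputStream();
        try (ZipOutputStream ear = new ZipOutputStream(earBytes)) {
            ear.putNextEntry(new ZipEntry("META-INF/application.xml"));
            ear.closeEntry();
            ear.putNextEntry(new ZipEntry("applicationEjb.jar"));
            ear.write(ejbJar);
            ear.closeEntry();
            ear.putNextEntry(new ZipEntry("applicationWeb.war"));
            ear.write(war);
            ear.closeEntry();
        }

        // List the top-level entries back to confirm the structure.
        ZipInputStream in = new ZipInputStream(
                new ByteArrayInputStream(earBytes.toByteArray()));
        for (ZipEntry e; (e = in.getNextEntry()) != null; )
            System.out.println(e.getName());
    }
}
```

Running this prints the three top-level EAR entries in order.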
    4. The deployejb tool used to deploy EJB to the database has been deprecated. In OC4J EJBs are deployed as shown below:
    % java -jar admin.jar ormi://localhost:<rmi-port> userName password -deploy -file ejbdemo.ear -deploymentName jndiName
    5. The ApplicationClientContainer does not need aurora_client.jar in the classpath; instead it needs oc4j.jar.
    Configuration Changes:
    1. In the database EJB container the configuration exists in database tables, but in OC4J it exists in industry-standard XML files. By default these files are located in the $OC4J_HOME/config directory. Users can start their OC4J instance with a different configuration by using the -config switch to specify a different server.xml file.
    %java -jar oc4j.jar -config config1/server1.xml
    2. In the database EJB container, scalability was achieved using the MTS architecture and the session-based Java Virtual Machine. In OC4J, scalability is achieved by clustering and using the load balancer. For more details on the load balancer please read <link to loadbalancer.doc>
    3. The bindds tool used to bind data sources to the JNDI namespace of the database EJB container is deprecated. In OC4J, datasources are bound to the namespace using data-sources.xml in $OC4J_HOME/config. Non-Oracle data sources can also be bound to the namespace. A typical entry in data-sources.xml could look like:
    <data-source
    class="com.evermind.sql.DriverManagerDataSource"
    name="OracleDS"
    location="jdbc/OracleCoreDS"
    xa-location="jdbc/xa/OracleXADS"
    ejb-location="jdbc/OracleDS"
    connection-driver="oracle.jdbc.driver.OracleDriver"
    username="scott"
    password="tiger"
    url="jdbc:oracle:thin:@dlsun1688:5521:jis2"
    inactivity-timeout="30"
    />
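An EJB deployed in OC4J would then obtain connections through the ejb-location name of the entry above. A sketch follows; the lookup itself only succeeds inside a running container, so main merely prints the JNDI name the bean code would use:

```java
import java.sql.Connection;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookup {
    // Inside OC4J no environment properties are needed; the default
    // InitialContext sees the names bound from data-sources.xml.
    // "jdbc/OracleDS" matches the ejb-location attribute above.
    static Connection getConnection() throws Exception {
        Context ic = new InitialContext();
        DataSource ds = (DataSource) ic.lookup("jdbc/OracleDS");
        return ds.getConnection();                 // pooled connection
    }

    public static void main(String[] args) {
        // Outside a running container the lookup cannot succeed,
        // so we only show the JNDI name the code would use.
        System.out.println("jdbc/OracleDS");
    }
}
```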
    4. The database EJB container used database authentication (database roles and database users) to authenticate users to the middle tier. OC4J uses the principals.xml file to list groups and users and perform authentication. One can also implement UserManagers for OC4J to perform more complex authentication.
    5. The ports used by OC4J for RMI, JMS and HTTP are all specified in rmi.xml, jms.xml, web-site.xml.
    6. Both OC4J and the database container support SSL (client authentication as well as server authentication).
    7. The database EJB container could only be debugged remotely through a jdb-like interface, by starting a debugagent on the server and a debugproxy on the client. OC4J containers, on the other hand, run on the JDK and can be debugged with any debugger. This is very useful for debugging EJB/servlet applications running in the server.
    8. The database EJB container comes with a database and hence has AQ (Advanced Queuing) available for JMS. OC4J ships with a default in-memory JMS implementation. OC4J server programs can also connect to other JMS implementations (including AQ) using jms.xml. However, a JNDI context may need to be implemented for AQ.
    Current Limitations
    1. The OC4J container at present does not support 2 phase commit.
    2. The OC4J container at present does not support transaction context propagation across tiers. Hence client-demarcated transactions are not supported in OC4J except where the client demarcating the transaction is in the same container, for example an EJB having its transaction demarcated by another EJB or servlet in the same container.
    3. In the classic container, Oracle did not support RMI-IIOP tunneling through HTTP; in OC4J, RMI over HTTP is supported.

  • We migrated our servers from a domain to the cloud. All the web applications work fine locally, but from outside they are accessible yet editing gives a 404 with a request ID.

    hi all,
    We recently migrated our MOSS 2007 farm from a domain to a cloud environment. Everything went well and all the applications work fine locally.
    But when users try from outside, they first get a prompt for basic auth, which is enabled, so that is fine. The servers then serve the first page smoothly, and browsing all the applications works. But when authorized users try to edit a web page and click
    anything (save, submit for publication, publish), they get a 404 with a request ID. On the other hand, if they add a new page to the app and then make changes to that page, they have no issue.
    Please help if anyone has resolved this kind of issue.
    Thanks in advance.
    SJ

    Hi Sumanpreet,
    Thanks for posting your issue. Kindly check out the below URLs for a fix for this issue:
    http://www.sharepointdiary.com/2011/02/404-page-not-found-error-sharepoint-2010-migration.html#ixzz2jHyVIFcX
    http://vasya10.wordpress.com/category/sharepoint-2010/sharepoint-2007-to-2010-migration/
    https://social.technet.microsoft.com/Forums/en-US/d80e7495-e889-4209-9a0a-82a852c944b1/after-migration-from-moss-2007-to-sharepoint-2010-i-tried-to-check-the-site-with-site-url-getting?forum=sharepointadmin
    I hope this is helpful to you; if so, please mark it as Helpful.
    If this works, please mark it as Answered.
    Regards,
    Dharmendra Singh (MCPD-EA | MCTS)
    Blog : http://sharepoint-community.net/profile/DharmendraSingh
