Deployment Process for a single namespace

Hello Friends,
I have an issue: how can we deploy one namespace from one server to another?
Currently I have created a scenario in the sandbox under one software component.
Now I want to deploy this scenario (namespace) in the client's system under a different software component.
I exported this namespace from the sandbox as 'All objects of individual namespace'.
How can I import this namespace into the client system under a different software component name?
Regards,
Narendra

Check this link:
Can't see the imported objects!

Similar Messages

  • How to export a whole process as a single image.

    Good morning.
    I would like to know how I can export a whole process as a single image. Is it possible? That way I can print it in a form that looks better.
    Thanks.

    Hi Raghavendra,
    Thanks for answering my query. I have created a transport request and exported the folder I need to take a backup of. But when I tried testing the import of the same request, I got the following error. Do you have any idea what the cause of this error is?
    com.sap.security.core.server.destinations.itsam.DestinationRuntimeException: com.sap.security.core.server.destinations.api.DestinationException: [_DestinationServiceAuthorization1005] Code-based destination service access denied to component sap.com/cafeugpuiadmin. Access to security-relevant internal destination properties (e.g. passwords, tickets, etc.) is restricted to few selected engine components and not generally available to any service or application.
    Kind Regards,
    Urvashi

  • Master data Reorganization Process for a single InfoObject

    Hello Experts,
    Maybe someone else has had this problem before us?
    SAP BW production: master data reorganization process for a single (custom) InfoObject; other InfoObjects worked fine.
    SAP BW 7.0 SP20 or 21
    This element has daily attributes and the Q-table currently has 180 million entries. We started the reorganization with 250 million entries.
    Because of the daily loading, we cancelled the batch process in the evening and started it again in the morning. This worked fine for about 5 days; we eliminated 80 million entries.
    Our target is to end up with 80 million entries.
    Whenever we run this feature now, in a process chain, no entries are deleted even after 30 hours of running, and there are no counting statistics in SM50 for:
         Sequential Read
         Insert
         Update
         Delete
    During the five days before, we saw millions of entries in these statistics.
    We have checked the Q-table and there are a number of records that have the same attribute values across several records, and the date-from/date-to values of these records are contiguous.
    RSRV is fine, and we checked Note 1234411 (Master data reorganization - Performance improvement).
    No other notes have helped us.
    Please give us an idea for a solution.
    Thanks
    Santra

    Hi Santra,
    Could you please provide the solution for this?
    Thanks
    Vinod

  • Deployment process for Large ALSB/OSB project

    Hi,
    I am searching for some details or an article about the deployment of ALSB/OSB projects (a number of projects) as part of a build process.
    How are projects managed when a number of different ALSB projects need to be built, customized and imported into different environments?
    We are starting a new release where developers will be working on different projects. I have some ANT scripts which use a WLST script to export a single project, create a jar file and place it in a specific location.
    For import, the script looks at a specific location, reads config.jar from that location, and imports that config into the target environment. Now with multiple projects, each developer will deliver their own config.jar. How do I handle that in WLST? I am not sure the current approach I am using is any good for handling multi-project deployment (a sketch follows the reply below).
    Could anyone give some pointers or helpful hints?
    We are on version 10.0 and won't be moving to 10gR3, where we could use ANT to build config.jar from .metadata, so I am not sure.
    I also want to know what other projects are doing!
    Many thanks in advance!
    sal

    It seems you already have a certain level of insight into the deployment process, but maybe you will find this document interesting:
    http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/pdf/deploybestprac.pdf
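    For the multiple-config.jar case above, one simple pattern is to keep the existing single-project WLST import script and drive it once per jar found in a drop directory. Below is a small illustrative sketch in plain Java; the script name import_osb_config.py, the drop directory layout and the java weblogic.WLST invocation are assumptions to adapt to the real scripts in use:

    import java.io.File;

    // Hypothetical driver: run the existing per-project WLST import once per config.jar
    // that the developers have delivered into a drop directory.
    public class MultiProjectImport {
        public static void main(String[] args) throws Exception {
            File dropDir = new File(args.length > 0 ? args[0] : "drop");        // where each team drops its jar
            File[] jars = dropDir.listFiles((dir, name) -> name.endsWith(".jar"));
            if (jars == null || jars.length == 0) {
                System.out.println("Nothing to import in " + dropDir.getAbsolutePath());
                return;
            }
            for (File jar : jars) {
                System.out.println("Importing " + jar.getName());
                ProcessBuilder pb = new ProcessBuilder(
                        "java", "weblogic.WLST", "import_osb_config.py", jar.getAbsolutePath());
                pb.inheritIO();                                                  // show WLST output in this console
                int rc = pb.start().waitFor();
                if (rc != 0) {
                    throw new IllegalStateException("Import failed for " + jar.getName() + " (exit " + rc + ")");
                }
            }
        }
    }

    Failing fast on the first non-zero exit code keeps a half-imported release from going unnoticed; the same loop can equally be written in the batch or ANT layer you already have.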

  • Testing Process for Gathering Single Object stats.

    Hello Oracle Experts,
    I work on a critical system and, because the stakes are high, every change is very heavily scrutinized here, whatever the level. One such change currently under scrutiny is gathering object stats for single objects. For background, it is an Oracle eBusiness site, so FND_STATS is used instead of the usual DBMS_STATS, and we have an in-house job that, depending on the staleness of the objects, gathers stats on them using FND_STATS (RDBMS 10.2.0.4, Apps Release 12i).
    Now, we've seen that occasionally it misses some of the objects that should ideally be gathered, so they need to be gathered individually, and our senior technical management wants a process around it, for gathering these single-object stats (I know!). I should mention explicitly that this need to gather stale object stats has emerged because one of the plans has gone pretty poor (from 2 ms to 90 mins); the SQL tuning task states that stats are stale, and in our PROD copy environment (where the issue exists) gathering stats reverts to the original good plan. So we are not gathering just because they are stale, but because that staleness is actually causing a real-time problem.
    Anyway, my point is that stats have been gathered multiple times in the past on that object, and they might get gathered at any time by that automatic job (run nightly). Their arguments are:
    i. There may be several hundred SQL plans depending on that object; we never know how many, or to what, those plans will change, and they can change for the worse, causing unexpected issues in the service.
    ii. There may be related objects whose stats have gone stale as well (for example, sales and inventory tables both see a related amount of change on column stock_level); if we gather stats on only one of them, and since those two could be highly correlated (in queries etc.), that may mess up join cardinality and hence the plans.
    Now, you see, they know Oracle as well!
    My Oracle (and optimizer) knowledge clearly suggests to me that these arguments are baseless, BUT I want to keep an open mind. So my questions are:
    i. Do the risks highlighted above stand on any ground, and what probability do you think there is of any of the above happening?
    ii. Are there any other points I can make to convince management?
    iii. Or, if those guys are right, do you use or recommend any testing strategy/process that you can suggest to us, please?
    Another interesting point is that they are not even very clear at this stage how they are going to 'test' this whole thing, as a 'cost' option like RAT (Real Application Testing) is out of the question, and developing an in-house testing tool still needs to be analyzed in terms of effort, worth and reliability.
    Finally, can I request the top experts from the 'Oak Table' network to comment, so that I can take their backing? I am hoping they'll back me up, but that may not necessarily be the case, and I obviously want an honest expert assessment of the situation, not merely my backing.
    Thanks so much in advance!

    >
    I work on a critical system and, because the stakes are high, every change is very heavily scrutinized here, whatever the level.
    Another interesting point is that they are not even very clear at this stage how they are going to 'test' this whole thing, as a 'cost' option like RAT (Real Application Testing) is out of the question, and developing an in-house testing tool still needs to be analyzed in terms of effort, worth and reliability.
    Unfortunately your management's opinion of their system, as expressed in the first paragraph, is not consistent with the opinion expressed in the second paragraph.
    Getting a stable strategy for statistics is not easy, requires careful analysis, and takes a lot of effort for complex systems.
    >
    Finally, can I request the top experts from the 'Oak Table' network to comment, so that I can take their backing? I am hoping they'll back me up, but that may not necessarily be the case, and I obviously want an honest expert assessment of the situation, not merely my backing.
    The ideal with stats collection is to do something simple to start with, and then build on the complex bits that are needed - something along the lines suggested by Dan Morgan works: a table driven approach to deal with the special cases which are usually: the extreme indexes, the flag columns, the time-based/sequential columns, the occasional histogram, and new partitions. Unfortunately you can't get from where you are to where you need to be without some risk (after all, you don't know which bits of your current strategy are causing problems).
    You may have to progress by letting mistakes happen - in other words, when some very bad plans show up, work out WHY they were bad (missing histogram, excess histogram, out of date high values) to work out the minimum necessary fix. Put a defensive measure in place (add it to the table of special cases) and run with it.
    As a direction to aim at - I avoid histograms unless really necessary, I like introducing function-based indexes where possible, and I'm perfectly happy to write small programs to fix column stats (low/high/distinct) or index stats (clustering_factor/blevel/distinct_keys) and create static histograms.
    Remember that Oracle saves old statistics when you create new ones, so any new stats that cause problems can be reversed out very promptly.
    Regards
    Jonathan Lewis
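    To illustrate the last point about reversing out problem statistics: from 10g onwards the old stats are retained in the statistics history, and DBMS_STATS.RESTORE_TABLE_STATS can put them back as of a given timestamp. A minimal JDBC sketch follows; the connection details, schema and table names are placeholders, and on an eBusiness system you would normally wrap this in whatever FND_STATS-based process the DBAs sign off on:

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Timestamp;

    // Sketch: restore a table's optimizer statistics to what they were at a given point
    // in time, using the statistics history Oracle keeps automatically (10g+).
    public class RestoreStats {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/PRODCOPY", "scott", "tiger")) {   // placeholder connection
                Timestamp beforeGather = Timestamp.valueOf("2011-07-28 08:00:00");     // point in time to restore to
                try (CallableStatement cs = conn.prepareCall(
                        "{ call dbms_stats.restore_table_stats(?, ?, ?) }")) {
                    cs.setString(1, "INV");            // placeholder schema owner
                    cs.setString(2, "STOCK_LEVELS");   // placeholder table name
                    cs.setTimestamp(3, beforeGather);
                    cs.execute();
                }
            }
        }
    }

    The available restore window is governed by DBMS_STATS.GET_STATS_HISTORY_RETENTION, and past gather operations can be listed from DBA_OPTSTAT_OPERATIONS.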

  • Parallel Processing for a single Package

    Hi,
    I have Pkg1, which has a mixture of For Each Loop containers, DFTs and Sequence containers, and I want to run more than one thread for this package so I can process data in parallel.
    Please let me know how I can do this using SSIS 2012.
    Thanks,

    Hi,
    The DFTs are connected by precedence constraints, and I want to run this package more than once (multiple threads) at a given point in time. Is this possible? If yes, please let me know how I can achieve this.
    Thanks..
    If the DFTs are connected then there will be no parallel processing between them. Running the same package in parallel will most likely result in a lock. It depends on how it is architected, but with an RDBMS in a default installation, or with files, it is not going to fly.
    When you have DFTs with, say, OLE DB destinations, each using its own connection, and they are not connected, then each connection gets opened independently, allowing you to ingest data simultaneously.
    Arthur My Blog
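    The same principle applies outside SSIS: only units of work with no dependency between them (and each with its own connection) can actually overlap. A small generic Java sketch of the idea, with a hypothetical loadPartition standing in for a data flow:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Independent "data flows", each with its own connection/resource, can be submitted
    // together and run concurrently; a step that depends on them has to wait, exactly like
    // DFTs joined by precedence constraints.
    public class ParallelLoads {
        static void loadPartition(String name) {
            // stand-in for "open your own connection and push rows into the destination"
            System.out.println(Thread.currentThread().getName() + " loading " + name);
            try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            for (String partition : new String[] {"sales_2010", "sales_2011", "sales_2012"}) {
                pool.submit(() -> loadPartition(partition));    // independent loads: run in parallel
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            loadPartition("aggregates");                        // dependent step: runs only after the loads finish
        }
    }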

  • Application Deployment Process

    Hi,
    I am looking for a definition of a standard application deployment process for J2EE servers. Maybe I missed this information in the specifications; if so, please point me to the right chapter.
    To be a bit more concrete, I am looking for information on whether there is a specification of what happens at deployment/startup, i.e., where the entry points are at which objects of an application are created, and hence whether the application can be made aware of being "started" by the application server.
    If there is no such general process, maybe someone could help out with specific information on either Sun's server or Tomcat.
    Thanks.
    Stefan

    Well, maybe it's in JSR 88. Unfortunately it is not accessible at the moment. Thanks for all the other links; they are quite interesting, too.
    My general problem is that I mostly find the typical information on how to use servlets etc. However, I do not want to start my application on the first invocation of a servlet but at deployment time (i.e., when my application gets deployed to and started by the Java application server), and I wondered whether there is a standard process besides simply loading the application's classes and making the server aware of new URLs.
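    For exactly this case, the servlet specification does provide a deployment-time entry point: a ServletContextListener is notified when the web application is started by the container, before any request arrives, and again when it is stopped (a load-on-startup servlet is the older alternative). A minimal sketch, using the standard javax.servlet API:

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Runs when the web application is deployed/started by the container,
    // before any servlet is invoked, and again when the application is stopped.
    public class AppLifecycleListener implements ServletContextListener {

        public void contextInitialized(ServletContextEvent sce) {
            // application "start" hook: open pools, start background work, etc.
            sce.getServletContext().log("Application started by the container");
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // application "stop" hook: release whatever contextInitialized acquired
            sce.getServletContext().log("Application stopped");
        }
    }

    It is registered in web.xml with a <listener> element (or, on Servlet 3.0+ containers, with the @WebListener annotation).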

  • Splitting a Message to reuse an Integration Process made for a single one.

    Hi,
    I have an Integration Process that works well for an XML message of the type:
    <EMESSAGE>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
    </EMESSAGE>
    My BPM processes the data for the Person correctly.
    Now I want to be able to handle more than one person per input message, something like:
    <EMESSAGE>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
    </EMESSAGE>
    I built a 1:n Interface Mapping that creates the following structure:
    <Messages>
    <Message1>
    <EMESSAGE>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
    </EMESSAGE>
    </Message1>
    <Message2>
    <EMESSAGE>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
    </EMESSAGE>
    </Message2>
    <Message3>
    <EMESSAGE>
      <PERSON>
        <TAG_1>
        <TAG_2>
      </PERSON>
    </EMESSAGE>
    </Message3>
    </Messages>
    But when I use a ForEach block in my new BPM, it doesn't work if there is more than one person in the input. The error message I get is:
    <?xml version="1.0" encoding="utf-8" ?>
    - <MappingTrace>
      <Trace level="1" type="T">Mapping-Namespace:http://domain.com/xi/domain_4</Trace>
      <Trace level="1" type="T">Mapping-Name:IM_CPM_AbsSync_to_N_CPM_AbsSync</Trace>
      <Trace level="1" type="T">Mapping-SWCV:3E235261F43111DDB40AC952C0A80C15</Trace>
      <Trace level="1" type="T">Mapping-Step:1</Trace>
      <Trace level="1" type="T">Mapping-Type:XSLT</Trace>
      <Trace level="1" type="T">Mapping-Program:CPM_to_N_ContextPersonMessage</Trace>
      <Trace level="3" type="T">Mapping has one input message.</Trace>
      <Trace level="3" type="T">Dynamic Configuration Is Empty</Trace>
      <Trace level="3" type="T">Multi mapping required.</Trace>
      <Trace level="3" type="T">Creating XSLT mapping CPM_to_N_ContextPersonMessage.</Trace>
      <Trace level="3" type="T">Load 3e235261-f431-11dd-b40a-c952c0a80c15, http://domain.com/xi/domain_4, -1, CPM_to_N_ContextPersonMessage.xsl.</Trace>
      <Trace level="3" type="T">Search CPM_to_N_ContextPersonMessage.xsl (http://domain.com/xi/domain_4, -1) in swcv 3e235261-f431-11dd-b40a-c952c0a80c15.</Trace>
      <Trace level="2" type="T">Call XSLT processor with stylsheet CPM_to_N_ContextPersonMessage.xsl.</Trace>
      <Trace level="2" type="T">Returned form XSLT processor.</Trace>
      <Trace level="3" type="T">XSLT transformation: CPM_to_N_ContextPersonMessage.xsl completed with 0 warning(s).</Trace>
      <Trace level="3" type="T">Dynamic Configuration Is Empty</Trace>
      <Trace level="1" type="T">Content Type application/xml</Trace>
      <Trace level="1" type="T">No interface specified for parameter 2</Trace>
      </MappingTrace>
    Does someone have any idea or a suggestion on how to reuse my BPM that works for a single person?
    Thanks in advance for your suggestions.
    greg

    OK, no success for the moment, so I will try to describe my BPM more precisely:
    The DT I use can contain up to 1,000 PERSON records inside a single EMESSAGE record (the root element).
    My containers: (all are of type Abstract Interface of my DT)
    Input (Process)
    requestList(Process) Multiline
    request(block)
    response(block)
    Receive Request Step
    Message: input
    Start Process: Yes
    Mode : Async.
    Split (Transformation) Step
    IM: my Interface Mapping that turns one message with N PERSON records into N messages with one PERSON record each
    Create new transaction: Yes
    Source: input
    Target: requestList
    Block Step
    Mode: ForEach
    Block Start: New transaction
    Block End: New transaction
    Multiline_Element: RequestList
    CurrentLine: request
    No end condition
    Inside the block
    My Sync Send call to a BAPI
    Source: request
    Target: response
    A final Async Send step
    Message: response.
    The workflow stops at the Transformation step, as if the fact that many messages come out as a result could not be handled. Here are the details of the Interface Mapping used in the transformation step:
    Source: MyDataTypeAbstractAsynchroneInterface
    Occurrence: 1
    Destination:
    Occurrences: 0:unbounded
    Mapping program: the following XSLT:
    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
         <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
         <xsl:template match="EPERSON">
         <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
                   <xsl:for-each select="CONTEXTPERSON">
                   <xsl:element name="ns0:Message{position()}">
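                    <!-- Note: in the SplitAndMerge format, ns0:Message1, ns0:Message2, ... correspond to the target interfaces declared in the interface mapping, not to instances of one interface; with a single 0..unbounded target, all records would normally go inside one ns0:Message1, which is likely why the trace reports "No interface specified for parameter 2". -->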
                   <EPERSON xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="ContextPersonMessage">
                        <xsl:element name="CONTEXTPERSON">
                             <xsl:attribute name="personid"><xsl:value-of select="@personid"/></xsl:attribute>
                        </xsl:element>
                        <!--End of CONTEXTPERSON> -->
                   </EPERSON>
                   </xsl:element>
              </xsl:for-each>
         </ns0:Messages>
         </xsl:template>
    </xsl:stylesheet>
    Any further suggestions?
    The trace of the error is the same as before, so I won't copy and paste it again.

  • OIM: Error while deploying Custom Approval Process for Self-Register

    While deploying the Custom Approval Process for Self-Register, I am getting the following error in the scac.log file:
    Nov 16, 2011 2:48:58 PM oracle.fabric.common.wsdl.SchemaManager isIncrementalBuildSupported
    INFO: XMLSchema incremental build enabled.
    Nov 16, 2011 2:48:58 PM com.collaxa.cube.CubeLogger info
    INFO: validating "ApprovalProcess.bpel" ...
    oracle.jrf.UnknownPlatformException: JRF is unable to determine the current application server platform.
         at oracle.jrf.ServerPlatformSupportFactory.getInstance(ServerPlatformSupportFactory.java:79)
         at oracle.integration.platform.blocks.WLSPlatformConfigurationProvider.<clinit>(WLSPlatformConfigurationProvider.java:44)
         at oracle.integration.platform.blocks.FabricConfigManager.<clinit>(FabricConfigManager.java:155)
         at oracle.integration.platform.blocks.xpath.FabricXPathFunctionResolver.loadXpathFunctions(FabricXPathFunctionResolver.java:271)
         at oracle.integration.platform.blocks.xpath.FabricXPathFunctionResolver.loadXPathConfigFile(FabricXPathFunctionResolver.java:153)
         at oracle.integration.platform.blocks.xpath.FabricXPathFunctionResolver.init(FabricXPathFunctionResolver.java:51)
         at com.collaxa.cube.xml.xpath.BPELXPathFunctionNameResolver.loadFabricXpathFunctions(BPELXPathFunctionNameResolver.java:57)
         at com.collaxa.cube.xml.xpath.BPELXPathFunctionNameResolver.<init>(BPELXPathFunctionNameResolver.java:48)
         at com.collaxa.cube.xml.xpath.BPELXPathFunctionNameResolver.<clinit>(BPELXPathFunctionNameResolver.java:44)
         at com.collaxa.cube.lang.compiler.bpel.XPathExprValidatorVisitor.<init>(XPathExprValidatorVisitor.java:122)
         at com.collaxa.cube.lang.compiler.bpel.AssignValidator.<init>(AssignValidator.java:89)
         at com.collaxa.cube.lang.compiler.bpel.BpelParser.<init>(BpelParser.java:452)
         at com.collaxa.cube.lang.compiler.bpel.BPELValidator.validate(BPELValidator.java:60)
         at com.collaxa.cube.lang.compiler.BPEL1Processor.validate(BPEL1Processor.java:329)
         at com.collaxa.cube.lang.compiler.BPEL1Processor.process(BPEL1Processor.java:153)
         at com.collaxa.cube.lang.compiler.CubeParserHelper.compile(CubeParserHelper.java:47)
         at oracle.fabric.bpel.bpelc.BPELComponentValidator.validate(BPELComponentValidator.java:40)
         at oracle.soa.scac.ValidateComposite.validateComponentTypeServicesReferences(ValidateComposite.java:1117)
         at oracle.soa.scac.ValidateComposite.doValidation(ValidateComposite.java:500)
         at oracle.soa.scac.ValidateComposite.run(ValidateComposite.java:150)
         at oracle.soa.scac.ValidateComposite.main(ValidateComposite.java:135)
    Nov 16, 2011 2:49:00 PM CubeProcessGenerator compile
    WARNING: classpath is: D:\JDev11g\Middleware\jdeveloper\jdev\extensions\oracle.sca.modeler.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\fabric-runtime.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.mgmt_11.1.1\soa-infra-mgmt.jar;D:\JDev11g\Middleware\oracle_common\modules\oracle.fabriccommon_11.1.1\fabric-common.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\orabpel.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.mediator_11.1.1\mediator_client.jar;D:\JDev11g\Middleware\oracle_common\modules\oracle.mds_11.1.1\mdsrt.jar;D:\OIMPS1\Middleware\oracle_common\modules\oracle.jps_11.1.1\jps-manifest.jar;;D:\OIMPS1\Middleware\Oracle_IDM1\server\workflows\new-workflow\process-template\SelfRegistrationApprovalApp\SelfRegistrationApproval\SCA-INF\classes;D:\OIMPS1\Middleware\Oracle_IDM1\server\workflows\new-workflow\process-template\SelfRegistrationApprovalApp\SelfRegistrationApproval\SCA-INF\classes;D:\OIMPS1\Middleware\Oracle_IDM1\server\workflows\new-workflow\process-template\SelfRegistrationApprovalApp\SelfRegistrationApproval\SCA-INF\gen-classes;D:\OIMPS1\Middleware\Oracle_IDM1\server\workflows\new-workflow\process-template\SelfRegistrationApprovalApp\SelfRegistrationApproval\SCA-INF\lib\oimclient.jar;D:\JDev11g\Middleware\oracle_common\modules\commonj.sdo_2.1.0.jar;D:\JDev11g\Middleware\oracle_common\modules\oracle.fabriccommon_11.1.1\fabric-common.jar;D:\JDev11g\Middleware\oracle_common\modules\oracle.xdk_11.1.0\xmlparserv2.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\bpel1-1-xbeans.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\orabpel-common.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\orabpel.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\bpel_coherence_config.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\orabpel-exts.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\thirdparty.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\bpm-analytics.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\orabpel-thirdparty.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\wsif-binding.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\orabpel-validator.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\monitor-rt-xbean.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.bpel_11.1.1\oracle.soa.bpmn.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\user-patch.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.thirdparty.jar;D:\JDev11g\Middleware\jdeveloper\uddi\lib\oracle.soa.uddi.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\bpm-infra.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\testfwk-xbeans.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\fabric-ext.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\soa-infra-scheduler.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\xmlunit-1.1.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\fabric-runtime.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\soa-infra-tools.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\soa-xpath-exts.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\oracle-soa-
client-api.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.wls.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\fabric-client.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\fabric-runtime-ext-was.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\fabric-runtime-ext-wls.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.fabric_11.1.1\oracle.soa.fabric.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.workflow_11.1.1\bpm-services.jar;D:\JDev11g\Middleware\jdeveloper\soa\modules\oracle.soa.ext_11.1.1\classes
    In the scac_out.xml file, the following is the error message:
    <Fault>
    <severity>error</severity>
    <loc>/ns:composite</loc>
    <line/>
    <col/>
    <file/>
    <msg>
    <![CDATA[SCAC-50012]]>
    </msg>
    </Fault>

    Hi,
    I ran into the same problem with SOA version 11.1.1.5. In my case, after fixing the following two errors it seems to work fine.
    If you have followed the guide, you will have picked up two errors.
    First, the Java code, if copied from the guide, contains an extra line break:
    Instead of:
    "try {
    System.out.println("Prototype for invoking an OIM API from a SOA Composite");
    System.out.println("RTM Usecase: Self Registration Approval by Organization
    Administrator");"
    Use the following:
    "try {
    System.out.println("Prototype for invoking an OIM API from a SOA Composite");
    System.out.println("RTM Usecase: Self Registration Approval by Organization Administrator");"
    The other error is that in JDeveloper you should not use <BEAHOME>/oracle_common/modules/oracle.jps_11.1.1/jps-manifest.jar but <BEAHOME>/oracle_common/modules/oracle.jps_11.1.1/jps-api.jar. After these changes the deployment to the application server works fine for me.

  • The deployment process seems to die or get stuck, iAS 6 SP3 for Solaris 8?

    When I deploy my application in iAS 6 SP3 for Solaris 8, the deployment process seems to die or get stuck. In the shell window I get the messages:
    iasdeploy for iPlanet Application Server 6.0 SP3
    Connected to LDAP server on sstu15.auto.com port 389
    iPlanet Application Server is running in international mode
    sstu15:null
    sstu15:10 kas> deployment action ''J2EEInstallEar'' (/u02/home/iplanet/JAR/SSS.ear) running.
    sstu15:10 kas> deployment action ''J2EEInstallEar'' (/u02/home/iplanet/JAR/SSS.ear) running.
    After this nothing happens for a really long time.
    When starting the deployment, I get two entries at the beginning of kas.log:
    ADMIN-168: kas> deployment get log ''J2EEInstallEar''
    GDS-007: finished a registry load
    but suddenly the second line disappears and the only message I get is the first entry. I need to break the deployment process and kill the processes manually.
    When starting the application server I get two error messages:
    Connected to LDAP server on sstu15.auto.com port 389
    iPlanet Application Server is running in international mode
    Connected to LDAP server on sstu15.auto.com port 389
    iPlanet Application Server is running in international mode
    iPlanet Administrative Server
    Version 6.0 SP3, Build 20010704
    Copyright (c) 1996-1997 KIVA Software Corporation.
    Copyright (c) 1998-1999 Netscape Communications Corporation.
    Copyright (c) 2000-2001 Sun Microsystems, Inc. Some preexisting portions Copyright (c) 2000 Netscape Communications Corp
    . All rights reserved.
    Use of this software is governed by the terms of the executed license agreement between you and iPlanet E-Commerce Solutions.
    [14/Feb/2002 11:02:12:7] error: ADMIN-071: kas> error: failed to either start up or connect to engine ''0'' (CCS0)
    [14/Feb/2002 11:03:08:8] error: ADMIN-071: kas> error: failed to either start up or connect to engine ''1'' (CCS0)
    I think all the processes start OK: I get one for .kas and one for kas, one for .kxs and one for kxs, and finally one for .kjs and one for kjs.
    Is someone familiar with this/these problem(s)?

    It seems you are deploying a very big application. Try to deploy this application with the following command:
    j2eeappreg <filename>
    It should work fine. There was a bug in the iasdeploy command in iAS SP3 which was fixed in iAS SP4.

  • BPEL deployment fails for all processes that have a revision other than 1.0.

    Using: Release 10.1.3.3.1
    Hello All,
    BPEL deployment fails for all processes that have a revision other than 1.0.
    We have been attempting to deploy several BPEL projects via an ANT script to a target environment and are encountering deployment failures for every project that isn't revision 1.0. We get the following error whenever we try to deploy a process with a revision other than 1.0:
    D:\TJ_AutoDeploy\BPEL_AutoDeploy_BETA\build.xml:65: BPEL archive doesnt exist in directory "{0}"
         at com.collaxa.cube.ant.taskdefs.DeployRemote.getJarFile(DeployRemote.java:254)
         at com.collaxa.cube.ant.taskdefs.DeployRemote.deployProcess(DeployRemote.java:409)
         at com.collaxa.cube.ant.taskdefs.DeployRemote.execute(DeployRemote.java:211)
         at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
         at org.apache.tools.ant.Task.perform(Task.java:364)
         at org.apache.tools.ant.Target.execute(Target.java:341)
         at org.apache.tools.ant.Target.performTasks(Target.java:369)
         at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
         at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
         at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:40)
         at org.apache.tools.ant.Project.executeTargets(Project.java:1068)
         at org.apache.tools.ant.Main.runBuild(Main.java:668)
         at org.apache.tools.ant.Main.startAnt(Main.java:187)
         at org.apache.tools.ant.launch.Launcher.run(Launcher.java:246)
         at org.apache.tools.ant.launch.Launcher.main(Launcher.java:67)
    The structure of our automated deployment script is as follows:
    First, a batch script calls Jdeveloper_BPEL_Prompt.bat in order to set all the environment variables ANT needs, i.e. ORACLE_HOME, BPEL_HOME, ANT_HOME, etc.
    Next, the script lists every .jar file within the directory into an .ini file called BPEL_List.ini. Furthermore, the BPEL_DIR, ADMIN_USER and ADMIN_PSWD variables are set and initialized respectively to:
    -     "." - points to the directory the script is running from, because all the BPEL processes are located here
    -     "oc4jadmin"
    -     "*********" (whatever the password for our environment is)
    We've developed a method to have the script prompt the user to select the target environment to deploy to. Once the user selects the appropriate environment, the script goes through the BPEL_List.ini file, and a loop runs the following for every BPEL process listed:
    DO ant
    -Dprocess.name=%%b
    -Drev= !Rev!
    -Dpath=%BPEL_DIR%
    -Ddomain=default
    -Dadmin.user=%ADMIN_USER%
    -Dadmin.password=%ADMIN_PWD%
    -Dhttp.hostname=%HOST%
    -Dhttp.port=%PORT%
    -Dverbose=true
    (What's happening is that the variables in the batch file are being passed on to the ANT script, where %%b is the process name, !Rev! is the revision number, and so on...)
    The loop goes through each line in BPEL_List.ini and tokenizes the BPEL process name into three parts (%%a, %%b, and %%c), but we only extract two parts: %%b (the process name) and %%c, which becomes !Rev! (the revision number).
    Example:
    Sample BPEL process:
    bpel_ThisIsProcess1_1.0.jar
    bpel_ThisIsProcess2_SOAv2.19.0.001B.jar
    After tokenizing:
    %%a     %%b     %%c
    bpel     ThisIsProcess1     1.0.jar
    bpel     ThisIsProcess2     SOAv2.19.0.001B.jar
    We use !Rev! and not %%c because %%c returns the revision number plus the ".jar" file extension, as illustrated above. To circumvent this, we parse %%c so that the last 4 characters are stripped. This is done like this:
    set RevN=%%c
    set RevN=!RevN:~0,-4!
    Hence, the usage of !Rev!.
    Below is the ANT build.xml that goes with our script:
    <!--<?xml version="1.0"?>-->
    <!--BUILD.XML-->
    <project name="bpel.deploy" default="deployProcess" basedir=".">
         <!--
         This ant build file was generated by JDev to deploy the BPEL process.
         DONOT EDIT THIS JDEV GENERATED FILE. Any customization should be done
         in default target in user created pre-build.xml or post-build.xml
         -->
         <property name="process.dir" value="${basedir}" />
              <!-- Set BPEL process name -->
              <!--
              <xmlproperty file="${process.dir}/bpel/bpel.xml"/>
              <property name="process.name" value="${BPELSuitcase.BPELProcess(id)}"/>
              <property name="rev" value="${BPELSuitcase(rev)}"/>
              -->
         <property environment="env"/>
         <!-- Set bpel.home from developer prompt's environment variable BPEL_HOME -->
              <condition property="bpel.home" value="${env.BPEL_HOME}">
                   <available file="${env.BPEL_HOME}/utilities/ant-orabpel.xml" />
              </condition>
         <!-- show that both bpel and oracle.home are located (TESTING purposes ONLY) -->
         <!-- <echo>HERE:${env.BPEL_HOME} ${env.ORACLE_HOME}</echo> -->
         <!-- END TESTING -->
         <!--If bpel.home is not yet using env.BPEL_HOME, set it for JDev -->
         <property name="oracle.home" value="${env.ORACLE_HOME}" />
         <property name="bpel.home" value="${oracle.home}/bpel" />
         <!--First override from build.properties in process.dir, if available-->
         <property file="${process.dir}/build.properties"/>
         <!--import custom ant tasks for the BPEL PM-->
         <import file="${bpel.home}/utilities/ant-orabpel.xml" />
         <!--Use deployment related default properties-->
         <property file="${bpel.home}/utilities/ant-orabpel.properties" />
         <!-- *************************************************************************************** -->
         <target name="deployProcess">
              <tstamp>
                   <format property="timestamp" pattern="MM-dd-yyyy HH:mm:ss" />
              </tstamp>
              <!-- WRITE TO LOG FILE #tjas -->
              <record name="build_verbose.log" loglevel="verbose" append="true" />
              <record name="build_debug.log" loglevel="debug" append="true" />
              <echo></echo>
              <echo>####################################################################</echo>
              <echo>BPEL_AutoDeploy initiated @ ${timestamp}</echo>
              <echo>--------------------------------------------------------------------</echo>
              <echo>Deploying ${process.name} on ${http.hostname} port ${http.port} </echo>
              <echo>--------------------------------------------------------------------</echo>
              <deployProcess
                   user="${admin.user}"
                   password="${admin.password}"
                   domain="${domain}"
                   process="${process.name}"
                   rev="${rev}"
                   dir="${process.dir}/${path}"
                   hostname="${http.hostname}"
                   httpport="${http.port}"
                   verbose="${verbose}" />
              <sleep seconds="30" />
              <!--<echo message="${process.name} deployment logged to ${build_verbose.log}"/>
              <echo message="${process.name} deployment logged to ${build.log}"/> -->
         </target>
         <!-- *************************************************************************************** -->
    </project>
    SUMMARY OF ISSUE AT HAND:
    ~ Every BPEL process with revision 1.0 deploys with no problems.
    ~ At first I would get an invalid character error, most likely due to the "!" preceding "Rev", so I decided to set rev="false" in the build.xml file. That didn't work well. In another attempt, I left the -Drev= attribute in the batch script blank. That still only let the 1.0s go through. My next thought was to deploy something other than a 1.0, such as 1.2 or 2.0, and that's when I realized that if it wasn't a 1.0, it refused to go through.
    QUESTIONS:
    1.     Is there a way to have ANT look into the BPEL process and pull the revision ID? (See the sketch below for one filename-based approach.)
    2.     What are we doing wrong? Are we missing anything?
    3.     Did we go too far? Meaning, is there a much easier way that we overlooked, forgot about, or don't know exists?
    Edited by: 793292 on Jul 28, 2011 12:38 PM
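    On question 1, one option hinted at by the commented-out <xmlproperty> block in the generated build.xml is to read the rev attribute of BPELSuitcase from bpel.xml inside the suitcase; another is simply to parse the archive file name, since the jars follow the bpel_<process>_<revision>.jar convention. The following is a small illustrative Java sketch of the latter (not part of the Oracle tooling); a regex that splits on the last underscore keeps revisions such as SOAv2.19.0.001B intact:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Parses archive names of the form bpel_<ProcessName>_<Revision>.jar, e.g.
    // bpel_ThisIsProcess1_1.0.jar or bpel_ThisIsProcess2_SOAv2.19.0.001B.jar,
    // splitting on the LAST underscore instead of stripping a fixed number of characters.
    public class BpelArchiveName {
        private static final Pattern NAME = Pattern.compile("^bpel_(.+)_([^_]+)\\.jar$");

        public static void main(String[] args) {
            String[] samples = {"bpel_ThisIsProcess1_1.0.jar", "bpel_ThisIsProcess2_SOAv2.19.0.001B.jar"};
            for (String file : samples) {
                Matcher m = NAME.matcher(file);
                if (m.matches()) {
                    System.out.println("process=" + m.group(1) + "  rev=" + m.group(2));
                } else {
                    System.out.println("unexpected archive name: " + file);
                }
            }
        }
    }

    The same last-underscore rule can also be expressed in the batch tokenizer, which avoids the fixed "strip the last 4 characters" step.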

    The only thing I can think of is that instead of using a MAC ACL, you could just use the default class:
    Policy Map Test
    class class-default
    police 56000 8000 exceed-action drop
    Class Map match-any class-default (id 0)
    Match any
    You would be saving a MAC-ACL ;-).

  • Two STOs for a single process order

    Hi All,
    Can anyone explain to me whether there is any problem in getting two STOs for a single process order?
    If it is a problem, how do I generate only one STO for one process order?

    Hi Mallika,
    Can you explain your scenario? I am not able to understand your requirement.
    Regards,
    Venkat.

  • Deploy() and undeploy() for a single managed server do not work properly

    I tried undeploying an application using the command undeploy('app_name','managed_server1'), which according to the WebLogic documentation should undeploy the application from that single managed server, but it untargets the application from ALL managed servers.
    Is the command being invoked properly for a single-managed-server undeployment?
    For the deploy() command, the same issue exists for deployments: it deploys to a single VM but cannot later be updated to be re-targeted.

    hi,
    Are the managed servers in a cluster, or are they stand-alone servers? Please check this properly; if they are in a cluster, you cannot undeploy the application from just one server, and trying to undeploy from one server will affect all of them.
    regards,
    abhi

  • Error: 1056603 Unable to spawn process for application [AppName]

    Hi,
    After a fresh installation of Essbase Server (11.1.1.4), I'm experiencing the following error when attempting to perform basically any action on any Application (Start, Stop, Delete) through EAS:
    Error: 1056603 Unable to spawn process for application [Demo]. Please ensure that adequate memory is available.
    System memory seems abundant, at around 5% usage with all relevant services started, but I even doubled it (since it's a VM) to 16 GB just to make sure; the issue persists.
    Having a look at the Essbase server log, I didn't find any additional info that could be helpful:
    [Thu Oct 11 15:07:03 2012]Local/ESSBASE0///2412/Info(1051160)
    Received Validate Login Session request
    [Thu Oct 11 15:07:03 2012]Local/ESSBASE0///2380/Info(1051001)
    Received client request: Get App and Database Status (from user [admin])
    [Thu Oct 11 15:07:03 2012]Local/ESSBASE0///568/Info(1051001)
    Received client request: MaxL: Execute (from user [admin])
    [Thu Oct 11 15:07:03 2012]Local/ESSBASE0///568/Error(1056603)
    Unable to spawn process for application [Demo]. Please ensure that adequate memory is available.
    [Thu Oct 11 15:07:03 2012]Local/ESSBASE0///568/Error(1054001)
    Cannot load application Demo with error number [1056603] - see server log file
    [Thu Oct 11 15:07:03 2012]Local/ESSBASE0///568/Warning(1051003)
    Error 1054001 processing request [MaxL: Execute] - disconnecting
    And when the Essbase service is started, essbase.log also seems to be pretty normal:
    [Thu Oct 11 15:18:13 2012]Local/ESSBASE0///2376/Info(1051001)
    Received client request: Logout (from user [admin])
    [Thu Oct 11 15:18:13 2012]Local/ESSBASE0///2376/Info(1051037)
    Logging out user [admin], active for 29 minutes
    [Thu Oct 11 15:18:19 2012]Local/ESSBASE0///1236/Info(1051243)
    Exclusive operation security file compaction started. This may take a while
    [Thu Oct 11 15:18:19 2012]Local/ESSBASE0///1236/Info(1051244)
    Security file compaction completed
    [Thu Oct 11 15:18:19 2012]Local/ESSBASE0///1236/Info(1051052)
    Essbase Server - finished
    [Thu Oct 11 15:18:23 2012]Local/ESSBASE0///2620/Info(1051283)
    Retrieving License Information Please Wait...
    [Thu Oct 11 15:18:23 2012]Local/ESSBASE0///2620/Info(1051286)
    License information retrieved.
    [Thu Oct 11 15:18:29 2012]Local/ESSBASE0///2620/Info(1051199)
    Single Sign-On Initialization Succeeded !
    [Thu Oct 11 15:18:29 2012]Local/ESSBASE0///2620/Info(1051232)
    Using English_UnitedStates.Latin1@Binary as the Essbase Locale
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2620/Info(1051134)
    External Authentication Module: [Single Sign-On] enabled
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2620/Info(1051051)
    Essbase Server - started
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2620/Info(1051243)
    Exclusive operation security file compaction started. This may take a while
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2620/Info(1051244)
    Security file compaction completed
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2620/Info(1051052)
    Essbase Server - finished
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2120/Info(1051283)
    Retrieving License Information Please Wait...
    [Thu Oct 11 15:18:34 2012]Local/ESSBASE0///2120/Info(1051286)
    License information retrieved.
    [Thu Oct 11 15:18:37 2012]Local/ESSBASE0///2120/Info(1051199)
    Single Sign-On Initialization Succeeded !
    [Thu Oct 11 15:18:37 2012]Local/ESSBASE0///2120/Info(1051232)
    Using English_UnitedStates.Latin1@Binary as the Essbase Locale
    Any ideas on what may be causing this?
    Thanks in advance!
    n

    Hi Nelson,
    Did you try increasing the heap size for EAS? Try the following and check whether you still get the error.
    To increase the Java heap settings in the EAS script on a Unix platform that was configured using the automatic deployment method, do the following:
    1. Navigate to the $HYPERION_HOME/deployments/{appserver}/bin directory.
    2. Edit the setCustomParamseas.sh(.bat) script.
    3. Modify or add the -Xms and -Xmx settings to the JAVA_OPTIONS variable. For example:
    JAVA_OPTIONS="-Xms256m -Xmx1024m -DComponentName=eas -DcomponentId=1e3bf92b2d9bb0493bcd3380127b0ca49ee7f50 -Dsun.net.inetaddr.ttl=0 -DHYPERION_HOME=/usr/local/oracle/hyperion -Dhyperion.home=/usr/local/oracle/hyperion -DEAS_HOME=/usr/local/oracle/hyperion/products/Essbase/eas -DESS_ES_HOME=/usr/local/oracle/hyperion/products/Essbase/eas/server -DEAS_LOG_LEVEL=5000 -DEAS_LOG_LOCATION=/usr/local/oracle/hyperion/logs/eas/easserver.log -DCLIENT_SERVER_DIFF_MC=true -Dweblogic.j2ee.application.tmpDir=/usr/local/oracle/hyperion/deployments/temp -Djava.io.tmpdir=/usr/local/oracle/hyperion/tmp ${JAVA_OPTIONS} "
    Note: For Tomcat deployments, the setting to modify is JAVA_OPTS:
    JAVA_OPTS="-Xms256m -Xmx1024m"
    4. Stop and restart EAS for the setting to take effect.
    Hope it helps....
    KosuruS

  • How to publish your own deployed process? Via WSDL?

    Hi There,
    I'm fairly new to BPM and I have some questions. I've gone through some tutorials and created my own process. I built and deployed it, started it through the NWA and tested it (mostly human activities and mappings). Everything works fine, but I'm facing some unsolved problems:
    1. Is it possible to get the source of the deployed process from the repository? I've already downloaded the WSDL file from "Single Service Administration" and want to get the process itself too.
    2. Within the "Services Registry" I published the deployed process. I imported this published WSDL in the Process Composer (import via Service Registry) and linked it in an automated activity, but it seems that the process can't invoke itself. Even when I use the imported interface at the end event, it doesn't "restart". My question here is: can a process not invoke itself, or does it just not work with the default WSDL? It would be interesting to me if a process could invoke another process when it ends.
    3. Is there any good tutorial on how to create your own WSDL file with the Process Composer? I haven't seen one yet.
    Any information would be helpful!
    With kind regards,
    Markus
    Edited by: Markus Alfers on Jan 14, 2009 3:44 PM

    Hi Fazal,
    Thanks for your answer. I had already found the process in the Web Service Navigator. I tested it there and it was OK. But I wasn't able to find it in the UDDI registry, so I downloaded the WSDL file from "Single Service Administration" and registered it in the UDDI registry. I imported this published WSDL file from the UDDI registry in the Process Composer to be able to test whether a process can invoke itself, but it didn't work. Then I tried a new process which only has an automated activity that tries to invoke my first process, but this doesn't seem to work either. I need to know how I can invoke another process from an automated activity or at the end event of a process. I think this could be helpful if I want to create sub-processes and route input and output data through them.
    I also need to know how I can create my own WSDL file for a process I want to build with the Process Composer, because not every process I want to define has an empty start or an empty end. Is there any good tutorial for this? Somebody also mentioned that they created their own WSDL file with the PI server, but I don't know how to do that. It is also confusing when you have around ten or more default services in the Web Service Navigator.
    With kind regards
    Markus
