No Resulting DCs for Deployment

Dear SDN,
I am developing a Web Dynpro DC which has a reference to a public DC (a JAR DC). An activity for this public DC was created and built. It was then checked in, activated, and released. The activation log didn't show any errors.
When the activity is under consolidation, the following info comes up:
20091224141450 Info   :Starting Step SDM-deployment-notification at 2009-12-24 14:14:50.0591 +5:00
20091224141450 Info   :Deployment is performed asynchronously.
20091224141450 Info   :Following DCs are marked for deployment (buildspace = EPD_AXCMS_C):
20091224141450 Info   :
20091224141450 Info   :RequestId: 585
20091224141450 Info   :==> no resulting DCs for deployment
20091224141450 Info   :Follow-up requests:
20091224141450 Info   :
20091224141450 Info   :
20091224141450 Info   :Step SDM-deployment-notification ended with result 'success' at 2009-12-24 14:14:50.0591 +5:00
Please advise...
Regards,
Ganesh N

Hi Ganesh N
Check which types of Development Component are included in the activity. Not all DC types produce archives for deployment (EAR, SDA); for example, Web Module DCs and EJB Module DCs do not. It might be that the activity includes only such DCs.
Check that CBS rebuilt all EAR or Web Dynpro components that depend on the "Jar DC".
Check that the consolidation request contains some EAR or Web Dynpro components. Only those components can be deployed.
BR, Siarhei

Similar Messages

  • How can we determine the order of DCs getting deployed in a SCA

    Hi,
    I have a set of DCs to be deployed on to the server. I create a SCA out of them, and then try to deploy it through the SDM or directly deploy the DCs with the IDE.
    The DCs are as below:
    tc/mdm/srmcat/uiprod
    tc/mdm/srmcat/uisearch
    tc/srmcat/custom/pr/convert
    Now the last DC, 'convert', is used in the DC 'uiprod' mentioned above (it is a dependent DC of uiprod).
    So when I try to deploy the SCA (or from the IDE) for the first time on my J2EE engine, I get a warning saying that uiprod is not deployed properly because the dependent DC 'convert' is not present. But when I deploy it a second time the message vanishes, as 'convert' is already present on the J2EE engine now. So can I avoid this double deployment by altering the order and deploying 'convert' before the other DCs? Can I do this when I am forming the SCA, or is there an option in the IDE to set this?
    Generally during deployment the dependencies are checked and the DCs are deployed in the proper order. I feel the issue is occurring here because of the different package structures, one being tc/mdm/srmcat/.. and the other tc/srmcat/custom/pr/..
    Please do let me know your suggestions on the same.
    Thanks,
    Prakash

    Hi Siarhei,

    Thanks for your reply.

    Yes, all these DCs do belong to the same SCA. I tried adding the deploy-time dependency (by changing the DC definition file) as you had mentioned, but the issue is still there. Below is the attached log.

    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53553sap.comtcmdmsrmcatconfig.sda
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53554sap.comtcmdmsrmcatimport.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53555sap.comtcmdmsrmcatredirect.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53556sap.comtcmdmsrmcatsortutility.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53557sap.comtcmdmsrmcatuiconf.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53558sap.comtcmdmsrmcatuiprod.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53559sap.comtcmdmsrmcatuisearch.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53560sap.comtcmdmsrmcatuiutil.ear
    URL to deploy : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53561sap.comtcsrmcatcustompr~convert.ear

    Result
    => successfully deployed : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53553sap.comtcmdmsrmcatconfig.sda
    => successfully deployed : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53554sap.comtcmdmsrmcatimport.ear
    => successfully deployed : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53555sap.comtcmdmsrmcatredirect.ear
    => successfully deployed : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53556sap.comtcmdmsrmcatsortutility.ear
    => successfully deployed : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53557sap.comtcmdmsrmcatuiconf.ear
    deployed with warning : file:/C:/DOCUME1/I054568/LOCALS1/Temp/temp53558sap.comtcmdmsrmcatuiprod.ear
    Finished with warnings: development component 'tc/mdm/srmcat/uiprod'/'sap.com'/'MAIN_SMDM3VAL_D'/'20091215181625'/'0':
    Caught exception during application startup from SAP J2EE Engine's deploy service:
    java.rmi.RemoteException: Error occurred while starting application sap.com/tc/mdm/srmcat/uiprod and wait. Reason: Clusterwide exception: server ID 301385150:com.sap.engine.services.deploy.container.DeploymentException: Clusterwide exception: Failed to prepare application ''sap.com/tcmdmsrmcatuiprod'' for startup. Reason=Clusterwide exception: Failed to start application ''sap.com/tcmdmsrmcatuiprod'': The referenced application ''sap.com/tcsrmcatcustomprconvert'' can''t be started. Check the causing exception for details. Hint: Is the referenced application deployed correctly on the server?

    I do understand the issue here: while deploying uiprod it searches for pr~convert. But I am not sure why the SCA gets deployed in this order. Even from the IDE and directly in SDM I get this issue. When I do the same deployment again, the warnings are gone.

    So is there a possibility that pr~convert gets deployed before uiprod? Is that in our control?

    Thanks,
    Prakash
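    For reference, a deploy-time dependency in the DC definition file is typically declared along the lines below. This is a sketch only: the exact element names vary by NWDI release, so check them against your own .dcdef; the DC name and vendor are taken from the thread.

    ```xml
    <!-- Sketch; verify element names against your NWDI version. -->
    <dependency>
      <dc-ref>
        <name>tc/srmcat/custom/pr/convert</name>
        <vendor>sap.com</vendor>
      </dc-ref>
      <at-build-time/>
      <!-- the deploy-time flag is what is intended to make SDM
           deploy convert before the DCs that depend on it -->
      <at-deploy-time/>
    </dependency>
    ```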

  • Using linked flv content in swf file -for deployment on Breeze server

    Can anyone tell me the procedure for creating a swf and linked
    flv file with Sorenson Squeeze for deployment on the Breeze server
    or in Breeze Presenter? All Adobe will share is how to do this
    with Flash. I feel certain this can be done, but all my attempts
    have failed to have the swf show the referred flv link. To
    interpret the Flash procedure, it seems that all one has to do is
    add /output "file name" to the swf player's flv linked URL, but the
    published flv file only shows a white screen. It does not seem to
    load the linked flv file - even if I have allowed public viewing of
    the flv content.

    The procedure may be the same - I'm pretty familiar with
    that by now - but the results are not. I think the key difference
    is that the linked file URL addendum "/output/filename", required
    by Breeze when creating the linked flv URL call-out from the swf
    file, does not work as advertised in Sorenson Squeeze. Before
    attempting to insert this linked swf file in Presenter, I have
    attempted to elicit the same interactivity just by uploading the
    two files to the Breeze server. Sorenson does not seem to be able
    to recognize that additional linked URL with the additional "output
    / file name" addendum that Breeze requires. I have replicated the
    same procedure in Flash 8 and the results work perfectly. I just
    don't like being forced to use a 900 lb gorilla like Flash 8 for
    this simple task when a less costly program like Sorenson
    Squeeze should work correctly. Frankly, I think a lot of other
    Breeze authors are not as adept with this video encoding
    process, or don't want to be using a developer tool like Flash to
    create their video clips. My experience with our users is that
    most shy away from using video in Breeze because it is way too
    cumbersome - something Adobe should consider before more prospects
    move over to another similar authoring program that has actually
    dedicated itself to advancing its authoring software
    capabilities.

  • Slower/worse performance when publishing for deployment vs testing

    When I package my app for deployment (ad hoc right now) the performance of the game drops dramatically.  It's almost unplayable.  In debugging it works much better.  What's different about this build process?  Why does it take longer to publish and give worse results?
    I've tried with iphone packager and with ADT 2.6.

    What about the disk layout? Do you have the same or fewer spindles? Are the disk units' ratings (average IO, sustained IO) the same or better? What about the total load on the machine? The network card?
    There are many factors that have to be considered. Getting back to just Oracle itself:
    Were any database parameters changed?
    Were the statistics recalculated after the move? In fact, how was the migration accomplished: copied files vs exp/imp?
    Were system statistics in use and if so were they regathered on the new machine?
    Looking at each of the above may lead you to the problem.
    HTH -- Mark D Powell --

  • DistributedManagementException,  null BasicDeploymentMBean for deployment

    Hi All,
    During a wladmin BATCHUPDATE, I got the following error message. My question is: what is the possible reason? What causes a BasicDeploymentMBean to be null? I did a Google search and found similar error messages, but no solution. (WebLogic 10.3, on AIX)
    Any idea is welcome.
    Best regards, József
    On the screen:
    Connection pool "lionPool" created successfully.
    Executing command: SET -mbean listenOnline:Name=lionPool,Type=JDBCConnectionPool -property Targets "listenOnline:Name=ejb,Type=Cluster"
    weblogic.management.DistributedManagementException : Distributed Management [1 exceptions]
    [Deployer:149189]Attempt to operate 'remove' on null BasicDeploymentMBean for deployment CP-lionPool.
    Operation can not be performed until server is restarted.
    in servers/adminServer/logs/adminServer.log:
    ####<Jul 28, 2009 9:15:35 AM CEST> <Info> <DeploymentService> <abgpp091> <adminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1248765335652> <BEA-290063> <commit for request '1,248,765,335,146' will not proceed further since its requires restart flag is set.>
    ####<Jul 28, 2009 9:15:35 AM CEST> <Info> <Deployer> <abgpp091> <adminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <jhalasz53> <> <> <1248765335878> <BEA-149038> <Initiating Task for CP-lionPool : [Deployer:149026]remove application CP-lionPool on adminServer.>
    ####<Jul 28, 2009 9:15:35 AM CEST> <Warning> <Deployer> <abgpp091> <adminServer> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1248765335926> <BEA-149189> <Attempt to operate 'remove' on null BasicDeploymentMBean for deployment CP-lionPool. Operation can not be performed until server is restarted.>
    ####<Jul 28, 2009 9:15:35 AM CEST> <Warning> <Deployer> <abgpp091> <adminServer> <[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1248765335946> <BEA-149004> <Failures were detected while initiating remove task for application 'CP-lionPool'.>
    ####<Jul 28, 2009 9:15:35 AM CEST> <Warning> <Deployer> <abgpp091> <adminServer> <[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1248765335946> <BEA-149078> <Stack trace for message 149004
    weblogic.management.DeploymentException: [Deployer:149189]Attempt to operate 'remove' on null BasicDeploymentMBean for deployment CP-lionPool. Operation can not be performed until server is restarted.
    at weblogic.deploy.internal.targetserver.DeploymentManager.assertDeploymentMBeanIsNonNull(DeploymentManager.java:1285)
    at weblogic.deploy.internal.targetserver.DeploymentManager.findDeploymentMBean(DeploymentManager.java:1326)
    at weblogic.deploy.internal.targetserver.DeploymentManager.createOperation(DeploymentManager.java:1098)
    at weblogic.deploy.internal.targetserver.DeploymentManager.createOperations(DeploymentManager.java:1372)
    at weblogic.deploy.internal.targetserver.DeploymentManager.handleUpdateDeploymentContext(DeploymentManager.java:160)
    at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.updateDeploymentContext(DeploymentServiceDispatcher.java:155)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doUpdateDeploymentContextCallback(DeploymentReceiverCallbackDeliverer.java:133)
    at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.updateDeploymentContext(DeploymentReceiverCallbackDeliverer.java:27)
    at weblogic.deploy.service.internal.statemachines.targetserver.ReceivedPrepare.callDeploymentReceivers(ReceivedPrepare.java:203)
    at weblogic.deploy.service.internal.statemachines.targetserver.ReceivedPrepare.handlePrepare(ReceivedPrepare.java:112)
    at weblogic.deploy.service.internal.statemachines.targetserver.ReceivedPrepare.receivedPrepare(ReceivedPrepare.java:52)
    at weblogic.deploy.service.internal.targetserver.TargetRequestImpl.run(TargetRequestImpl.java:211)
    at weblogic.deploy.service.internal.transport.CommonMessageReceiver$1.run(CommonMessageReceiver.java:410)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    >

    I am not so sure any more that it is a JDBC problem. Though it is possible, because I could not find any ojdbc6.jar for Oracle 10.2.0.4. [http://www.oracle.com/technology/software/tech/java/sqlj_jdbc/htdocs/jdbc_10201.html]
    We use WebLogic 10.3, and it requires Java *6*. That is why I thought it would be good to find an ojdbc6.jar for this Oracle version.
    I tried to run the application with ojdbc14.jar and ojdbc6.jar. Interestingly, the results were the same. (I had to use the ojdbc6.jar that was released for Oracle 11.1.0.7.0.)
    Yes, Vijay, I do think that I should contact the WebLogic support team. Anyway, we use AIX LPARs (logical machines on a PowerPC). The application is installed in my home directory. What is more, the same home directory is mounted on all LPARs. So the application is always mounted through a kind of logical network. But the WebLogic Server is not in my home directory. So WebLogic and the application are not in the same file system.
    József

  • How to manage transfer to OLTP system for deployment

    hello experts,
    Please could you suggest me how to solve this problem:
    The rule is that only TLB results must be transferred to ECC.
    However, some plants are not using TLB, and I need to get the results of deployment sent to ECC for those plants.
    I have set the checkbox "TLB immediate transfer" in SPRO (Configure Transfer to OLTP System). I have also maintained all the plants in SPRO under "Maintain Distribution Definition" with external procurement. For deployment I ticked "no transfer".
    I don't know whether it is possible to send results of deployment only for a couple of selected plants.
    Is it possible without using Change Pointers?
    Thank you for your answers,
    Best regards
    EL

    Hi,
    this whole activity needs to be handled in two steps.
    Step 1 - Run TLB only for those plants which need to have STOs. This can be done as a standard APO requirement. In the variant, exclude the plants for which you do not want to run TLB.
    Step 2 - Once step 1 is completed, publish deployment orders (STRs) from APO to ECC using transaction /sapapo/c5. Here in the variant select the correct order type and the locations for which you want to send STRs.
    Please run these as batch jobs, ensuring the sequence is maintained, to avoid confusion.
    Thanks,
    Harsh

  • Step-by-Step Help Needed for Deploying Some Adobe Software

    We are a K-12 educational institution.  I am currently working at setting up a computer lab with about 16 MacMini computers.
    I'm VERY new to the realm of servers, deployment, using terminal, etc.
    I've done lots of research to make it to where I am, but just can't wrap my head around what needs to be done next with the Adobe products.
    I am using Server OSX (v3) and DeployStudio (v 1.6.3) to try and manage the computers.
    Both my server and clients are running Mavericks 10.9.
    I deployed my core images successfully and would like to now send Adobe products only the computers I choose since I have limited licenses.
    We have volume licenses of Adobe InDesign CS6 and Photoshop Elements 12 that we would like to deploy.
    I have been referencing the following sites, but just can't seem to wrap my head around the EXACT steps that need to be taken.
    http://blogs.adobe.com/oobe/2010/10/adobe-provisioning-toolkit-enterprise-edition.html
    http://helpx.adobe.com/creative-cloud/packager/provisioning-toolkit-enterprise.html
    http://helpx.adobe.com/photoshop-elements/kb/silent-install-instructions-photoshop-element s2.html
    http://forums.adobe.com/message/5781663
    For Adobe InDesign CS6 - I've done the following:
    Created an "Installation Package" using Adobe Application Manager Enterprise Edition (AAMEE v6.2.112.0) using the following instructions.
    http://wwwimages.adobe.com/www.adobe.com/content/dam/Adobe/en/devnet/creativesuite/pdfs/Ad obeApplicationManagerEnterpriseEditionDeploymentGuide_v_3_1.pdf
    If I log into the client computer, put the package ON the client computer, and run it... it takes me through the install and works wonderfully. It's not automated like I wanted/thought I set it up to do; I have to click through the steps of the installer.
    If I try to deploy it using the "Package install" workflow in DeployStudio... it loads it during the workflow, then skips through it, acts like it installed it, but it isn't actually installed. Is there a setting that I need to possibly change somewhere?
    For Adobe Photoshop Elements 12 - I'm a bit lost and mostly just have questions:
    I know that I can't use AAMEE and need to use APTEE (Adobe Provisioning Toolkit Enterprise Edition).
    How do I deploy the software? Create my own Package using PackageMaker?
    Which computer do I run APTEE on? All the instructions I can find just say, "do this" - but don't specify where.
    Do I have to open terminal and run the commands on each client computer AFTER installing PE12?
    OR
    Do I run the commands BEFORE deployment on my package stored on my server that is ready for deployment?
    It really comes down to me just not 100% understanding how APTEE works.
    I hope that portrays my dilemma. I've tried to explain as best I can.  If you have any questions - let me know!
    Thanks in advance for any insight you can give!


  • Limitation on Service size for deploying

    Is there any limitation on service size for deployment? The size of my service is approx. 5 MB, and I am not able to migrate it through the Catalog Deployer or by exporting and importing the file.

    There is no limit as such. The only limit imposed depends on the heap allocated for the JVM.
    Nitesh wrote:
    > Hi.
    >
    > Is there a limitation on the session size for a clustered environment. i'm not
    > sure whether its true or not. can anyone please clarify. Also is it for the entire
    > session object or per user.
    >
    > Thanks
    >
    > Nitesh
    Rajesh Mirchandani
    Developer Relations Engineer
    BEA Support

  • Result analysis for sub-items on SD-order

    Hello,
    I would like to know whether the document flow for sub-items is also checked while executing results analysis.
    Example:
    We have an SD order, a higher-level item, and a sub-item.
    Both of them have the same results analysis key assigned. Within this key it is defined in customizing (OKG3) that the Sales Order Structure is A - "Summarize Plan and Actual Data of Subitems on Highest Item"... so the results analysis for the sub-item is performed via the higher-level item.
    The higher-level item is already completed - the delivery is made and the invoice is also created.
    The sub-item is not completed - neither the delivery nor the invoice is made.
    While executing the results analysis for the higher-level item, the status FNBL (final billing) is recognized and all costs are settled from WIP to CO-PA.
    As far as I know, the document flow should be checked within the results analysis and based on this the status should be recognized.
    Therefore it seems that only the document flow of the higher-level item is being checked. If the document flow of the sub-item were also checked, there would be no way to get the status FNBL, as the sub-item is not complete.
    Is this SAP standard delivered behaviour?
    Thanks in advance
    Peter

    Thanks Waman,
    I have tried to delete the results analysis key from the sub-item, but still the same result in results analysis...
    Could you tell me the consequences of the 1A setup from the note? The note advises not to use the 1A setup, but we use it and it works fine. There are just some messages during the results analysis that it could not be performed for sub-items, but this has no influence on the values.
    Regards
    Peter
    Edited by: Peter Jankech on Jun 9, 2010 8:57 AM

  • How to get total number of result count for particular key on cluster

    Hi-
    My application requirement is that the client side needs only a limited number of records for a 'search key' out of the total records found in the cluster. I also need the total result count for that key present on the cluster.
    To get the subset of records I'm using an IndexAwareFilter and returning only a limited set from each individual node. Though I get the total number of records present on each individual node, it is not possible to return this count to the client from the IndexAwareFilter (the filter returns only a Binary set).
    Is there any way I can get this number (total result size) on the client side without returning the whole chunk of data?
    Thanks in advance.
    Prashant

    user11100190 wrote:
    Hi,
    Thanks for suggesting a solution, it works well.
    But apart from the count (cardinality), the client also expects the actual results. In this case it seems that the filter will be executed twice (once for counting, then once again for generating the actual result set).
    Actually, we need to perform paging. In order to achieve paging in an efficient manner, we need the filter to return only PAGESIZE records and also the total count that meets the criteria.
    If you want to do paging, you can use the LimitFilter class.
    If you want to have paging AND the total number of results, then at the moment you have to use two passes if you want to use out-of-the-box features, because LimitFilter does not return the total number of results (which, by the way, may change between two page retrievals).
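    In plain Java (deliberately not the Coherence API, so the control flow is visible), the two-pass idea above can be sketched as follows; the class and method names are hypothetical, and with Coherence you would run a count aggregation plus a LimitFilter-wrapped query instead:

    ```java
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    // Hypothetical helper: pass 1 counts all matches, pass 2 returns one page.
    class PagedQuery {
        // First pass: total number of entries matching the filter.
        static int totalCount(List<Integer> data, Predicate<Integer> filter) {
            return (int) data.stream().filter(filter).count();
        }

        // Second pass: only the requested page of matches, in sorted order.
        static List<Integer> page(List<Integer> data, Predicate<Integer> filter,
                                  int pageIndex, int pageSize) {
            List<Integer> matches = data.stream().filter(filter).sorted()
                                        .collect(Collectors.toList());
            int from = Math.min(pageIndex * pageSize, matches.size());
            int to = Math.min(from + pageSize, matches.size());
            return matches.subList(from, to);
        }
    }
    ```

    As the reply notes, the count from pass 1 may no longer match pass 2 if the data changes between the two calls.
    
    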
    What we currently do is: the filter puts the total count in a static variable but returns only the first N records. The aggregator then clubs this info into a single list and returns it to the client. (The list returned by the aggregator contains a special entry representing the count.)
    This is not really a good idea, because if you have more than one user doing this operation then you will have problems storing more than one value in a single static variable if you used a cache service with a thread pool (thread count set to larger than one).
    We assume that the aggregator will execute immediately after the filter on the same node; this way the aggregator will always read the count set by the filter.
    You can't assume this if you have multiple client threads doing the same kind of filtering operation and you have a thread pool configured for the cache service.
    Please tell us if our approach will always work, and whether it will be as efficient as using the Count class, which requires executing the filter twice.
    No, it won't if you used a thread pool. Also, it might happen that Coherence will execute the filtering and the aggregation from the same client thread multiple times on the same node if some partitions were newly moved to a node which already executed the filtering + aggregation once. I don't know of anything which would even prevent this being executed on a separate thread concurrently.
    The following solution may work, but I can't fully recommend it, as it may leak memory depending on how exactly the filtering and aggregation are implemented (if it is possible that a filtering pass is done but the corresponding aggregation is not executed on the node because some partitions moved away).
    When sending the cache.aggregate(Filter, EntryAggregator) call you should specify a unique key for each such filtering operation to both the filter and the aggregator.
    On the storage node you should have a static HashMap.
    The filter should do the following two steps while synchronized on the HashMap:
    1. Ensure that a ConcurrentLinkedQueue object exists in the HashMap keyed by that unique key, and
    2. Enqueue the total count you want to pass to the aggregator into that queue.
    The parallel aggregator should do the following two steps while synchronized on the HashMap:
    1. Dequeue a single element from the queue, and
    2. If the queue is now empty, remove it from the HashMap.
    The parallel aggregator should return the popped number as a partial total count as part of the partial result.
    The client side of the parallel-aware aggregator should sum the total counts in the partial results.
    Since the enqueueing and dequeueing may be interleaved from multiple threads, it may be possible that the partial total count returned in a result does not correspond to the data in the partial result, so you should not base anything on that assumption.
    Once again, this approach may leak memory based on how Coherence is internally implemented, so I can't recommend it, but it may work.
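    The per-request queue handshake described above might be sketched like this in plain Java. The class and method names are hypothetical, and the actual Coherence Filter/EntryAggregator wiring is omitted; this only shows the synchronized publish/consume pattern:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    // Hypothetical helper: the "filter" side enqueues a partial count under a
    // per-request key, and the "aggregator" side dequeues it on the same node.
    class PartialCountExchange {
        private static final Map<String, Queue<Integer>> COUNTS = new HashMap<>();

        // Called by the filter after counting matches on this node.
        static void publish(String requestKey, int partialCount) {
            synchronized (COUNTS) {
                COUNTS.computeIfAbsent(requestKey, k -> new ConcurrentLinkedQueue<>())
                      .add(partialCount);
            }
        }

        // Called by the aggregator; removes the queue once it is drained.
        static int consume(String requestKey) {
            synchronized (COUNTS) {
                Queue<Integer> q = COUNTS.get(requestKey);
                Integer count = (q == null) ? null : q.poll();
                if (q != null && q.isEmpty()) {
                    COUNTS.remove(requestKey);
                }
                return count == null ? 0 : count;
            }
        }
    }
    ```

    The memory-leak caveat above applies directly: if a publish is never matched by a consume, its queue entry stays in the map forever.
    
    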
    Another thought: since returning entire cached values from an aggregation is more expensive than filtering (you have to deserialize and reserialize objects), you may still be better off running a separate count and filter pass from the client, since for that you may not need to deserialize entries at all, so the cost on the server may be lower.
    Best regards,
    Robert

  • Need new result file for each iteration of a loop

    I am using TestStand 2010 SP1.  I have a main sequence that essentially does the following:
    Initialize the test equipment and set up the test environment [Sequence Call]
    Start Loop
    Run Tests [Sequence Call]
    End Loop
    Because testing can continue for hours, the resultant report file is enormous and difficult to evaluate. I need to create a new result file for each loop iteration.  I know that starting a new execution of “Run Tests” will create a result file for each iteration of the loop, but the new execution will not have access to the handles to the test equipment that were made during initialization.  The testing is time critical, therefore initializing the test equipment and setting up the test environment must occur outside of the loop. 
    How can I programmatically create a new result file at the beginning of the loop and close the result file at the end of the loop?  I am open to any other suggestions.  Thank you in advance for your help!

    Hi,
    You could modify your process model by making a copy of the Test UUTs entry point. Then make the loop that usually tests multiple UUTs into your loop. Take the loop and init out of your sequence. You can init in PreUUTLoop or the other pre-loop sequence, and perhaps store your references in RunState.Root.Locals and pass them to MainSequence. Then you can use Report Options to set it for separate report files.
    cc

  • Error while deploying BPM : DC bpm_0/bl/ddic does not contain any archives for deployment

    Hi All,
    I'm using SAP PO 7.31 single stack. I've created a simple BPM in NWDS. I'm able to successfully build the BPM which I created, but when I "Deploy" it throws the errors below.
    DC bpm_0/bl/caf/metadata does not contain any archives for deployment
    DC bpm_0/bl/ddic does not contain any archives for deployment
    DC bpm_0/bl/caf/dictionary does not contain any archives for deployment
    I'm not sure what/where to check to fix the issue. Can you please help me fix it?
    Thanks
    Raj.

    Dear Raj,
    I am looking into this.
    In the meantime can you try this also.
    xxx/pr/pm: Deployment error, GD | ABAP, SAP, benX AG, benXBrain, ...
    Thanks & Regards,
    Patralekha

  • How to make jar files available for deployed EJBs

    Hi,
    I'm interested in how to make jar files available for deployed EJBs.
    My EJB is packed in an EAR. It uses a util jar. Right now I just add the jar to the
    classpath, but I don't think that's the right way. Is there something in the admin
    console to make jars available, or do I have to put it inside the EAR file? And
    if so, where do I have to place it?
    Thanks
    Claudia

    Put util.jar in the EAR with your EJB jars, at the same level (i.e. in
    the root), but do not list it as a module in application.xml.
    Also, each EJB jar that refers to util.jar must have util.jar on the Class-Path
    entry of its internal manifest.
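    A sketch of the layout described above; the archive names (my-app.ear, my-ejbs.jar) are illustrative, not from the original post:

    my-app.ear
    ├── META-INF/
    │   └── application.xml      <- lists my-ejbs.jar as an <ejb> module; util.jar is NOT listed
    ├── my-ejbs.jar              <- EJB module
    └── util.jar                 <- shared library at the EAR root

    Inside my-ejbs.jar, the META-INF/MANIFEST.MF then declares the dependency (the attribute value is relative to the jar's location in the EAR, and the manifest must end with a newline):

    Class-Path: util.jar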
    "Claudia" <[email protected]> wrote in message
    news:3d537db5$[email protected]..
    >
    Hi,
    I'm interested on how to make jar files availabe for deployed EJBs.
    My EJB is packed in an ear. It uses a util jar. I now just add the jar tothe
    classpath, but I think that shouldn't be the way. Is there somthing in theadmin
    console to make jars available or do I have to insert it in the ear file?And
    if so, where do I hve to place it?
    Thanks
    Claudia

  • Maven scripts for deploying WAR into Weblogic

    Hello Folks,
    I need some help deploying an EAR/WAR into WebLogic Server.
    I already have Maven scripts for building but not for deploying; your help is appreciated.
    Thanks.

    Hi Ank2cool,
    Please see my findings; I have tested the same build.xml again. My AdminServer
    username is weblogic and the password is weblogic:
    <project name="webservices-hello_world" default="deploy">
      <property name="wls.username" value="weblogic" />
      <property name="wls.password" value="weblogic" />
      <property name="wls.hostname" value="localhost" />
      <property name="wls.port" value="7001" />
      <property name="wls.server.name" value="AdminServer" />
      <target name="deploy">
        <wldeploy action="deploy" name="PlanDemoEAR" source="PlanDemoEAR" user="${wls.username}"
            password="${wls.password}" verbose="true" adminurl="t3://${wls.hostname}:${wls.port}" targets="${wls.server.name}" />
      </target>
    </project>
    Now when I run the Ant task ant deploy:
    OUTPUT:
    Buildfile: build.xml
    deploy:
    [wldeploy] weblogic.Deployer -verbose -noexit -name PlanDemoEAR -source C:\JavaTest\PlanDemo\PlanDemoEAR -targets Admin
    Server -adminurl t3://localhost:7001 -user weblogic -password ******** -deploy
    [wldeploy] weblogic.Deployer invoked with options: -verbose -noexit -name PlanDemoEAR -source C:\JavaTest\PlanDemo\Pla
    nDemoEAR -targets AdminServer -adminurl t3://localhost:7001 -user weblogic -deploy
    [wldeploy] <Dec 21, 2009 2:08:33 PM IST> <Info> <J2EE Deployment SPI> <BEA-260121> <Initiating deploy operation for app
    lication, PlanDemoEAR [archive: C:\JavaTest\PlanDemo\PlanDemoEAR], to AdminServer .>
    [wldeploy] Task 1 initiated: [Deployer:149026]deploy application PlanDemoEAR on AdminServer.
    [wldeploy] Task 1 completed: [Deployer:149026]deploy application PlanDemoEAR on AdminServer.
    [wldeploy] Target state: deploy completed on Server AdminServer
    [wldeploy]
    [wldeploy] Target Assignments:
    [wldeploy] + PlanDemoEAR AdminServer
    BUILD SUCCESSFUL
    ======================== TO REPRODUCE YOUR ISSUE, I changed the password in the Ant script from weblogic to "*weblogic1*" or "*weblogic *" (with a single trailing space), while the server's actual password is still "weblogic" ========
    OUTPUT:
    [wldeploy] Caused by: java.lang.SecurityException: User: weblogic, failed to be authenticated.
    [wldeploy] at weblogic.common.internal.RMIBootServiceImpl.authenticate(RMIBootServiceImpl.java:116)
    [wldeploy] at weblogic.common.internal.RMIBootServiceImpl_WLSkel.invoke(Unknown Source)
    [wldeploy] at weblogic.rmi.internal.BasicServerRef.invoke(BasicServerRef.java:589)
    [wldeploy] at weblogic.rmi.internal.BasicServerRef$1.run(BasicServerRef.java:477)
    [wldeploy] at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:363)
    [wldeploy] at weblogic.security.service.SecurityManager.runAs(Unknown Source)
    [wldeploy] at weblogic.rmi.internal.BasicServerRef.handleRequest(BasicServerRef.java:473)
    [wldeploy] at weblogic.rmi.internal.wls.WLSExecuteRequest.run(WLSExecuteRequest.java:118)
    [wldeploy] at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
    [wldeploy] at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
    BUILD FAILED
    C:\JavaTest\PlanDemo\build.xml:12: weblogic.Deployer$DeployerException: weblogic.deploy.api.tools.deployer.DeployerExcep
    tion: Unable to connect to 't3://localhost:7001': User: weblogic, failed to be authenticated.. Ensure the url represents
    a running admin server and that the credentials are correct. If using http protocol, tunneling must be enabled on the admin server.
    Total time: 0 seconds
    Above is exactly the same error you are getting.
    So please recheck that the password provided in build.xml is correct, and that no space has been added before or after it.
    Thanks
    Jay SenSharma
    http://jaysensharma.wordpress.com (WebLogic Wonders Are Here)
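    For reference, the same deployment can also be run without Ant via WebLogic's weblogic.Deployer command-line tool; the options below simply mirror the [wldeploy] invocation shown in the log above (the WL_HOME path and the credentials are placeholders, and weblogic.jar must be on the classpath):

    java -cp %WL_HOME%\server\lib\weblogic.jar weblogic.Deployer ^
         -adminurl t3://localhost:7001 -user weblogic -password weblogic ^
         -deploy -name PlanDemoEAR -source C:\JavaTest\PlanDemo\PlanDemoEAR -targets AdminServer

    A wrong password here fails with the same "failed to be authenticated" message, which makes it a quick way to verify credentials independently of the build script.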

  • HT5487 So we just got a new Macbook and I also installed the apple configurator tool. I'm using it to prepare the iPads for deployment and it won't let me prepare them? It gets an error stating "retrieving iOS info from apple" then it stops and says "inte

    So we just got a new MacBook and I also installed the Apple Configurator tool. I'm using it to prepare the iPads for deployment, but it won't let me prepare them. It gets an error stating "retrieving iOS info from apple", then it stops and says "internet error". My Internet connection is fine on the MacBook. It shows the iPad listed under the Prepare logo up top as 1, but under Supervise none are shown, although it does show up in iTunes. Also, the profile I created is fresh and has no errors. We have tried nearly everything I can think of, and online forums are not giving us much info on this error.

    A wag at this.  A port issue?
    "Apple Push Notification network setup
    When MDM servers and iOS devices are behind a firewall, some network configuration may need to take place in order for the MDM service to function properly. To send notifications from an MDM server to Apple Push Notification service, TCP port 2195 needs to be open. To reach the feedback service, TCP port 2196 will need to be open as well. For devices connecting to the push service over Wi-Fi, TCP port 5223 should
    be open."
    http://images.apple.com/ipad/business/docs/iOS_MDM.pdf
    google: ports ios configure ipad
    Try it on your home network where there isn't a lot of 'controls' -- network filtering , firewalls, etc.
    Robert
