Deployed incorrectly under a multi-instance environment

Hi Experts,
We have 4 instances: 1 CI and 1 DI on one server, and the other 2 DIs on another server.
I developed a Java application in SAP NetWeaver Developer Studio and it deployed correctly.
But after that I could not run the application on one of the instances.
It said that the application is not deployed on the server.
What is the problem, and how can I solve it without restarting the server?
Regards,
Sidney

Hi,
What is the exact message you get from this instance?
Are you able to see this application under
...serverX\apps\sap.com\irj\servlet_jsp\irj\root\WEB-INF\deployment\pcd or temp on this instance?
Gilad.

Similar Messages

  • .ear files are not deployed correctly under a multi-instance environment

    We have 1 CI and 3 DIs.
    On one server there are 1 CI and 1 DI,
    and on each of the other 2 servers there is 1 DI.
    I developed a program with Web Dynpro Java and it deployed correctly.
    But after that I could run the application only on specific servers.
    For example, I could run the application on servers A and B but not on server C.
    What's the problem?

    Thanks for the reply.
    I know it will be solved by a restart,
    but it's not easy to restart a DI during working hours.
    Isn't there any other way to solve this problem?

  • QUESTION: Multi-instance memory requirements on Linux

    Hi, all.
    I've been out of the loop on Oracle technical details for a while, and need to re-educate myself on a few things. I'm hoping someone can point me to a book in the online docs which discusses my question.
    Oracle DB 10.2.0.2 on Red Hat Linux (kernel 2.6.9-67.0.0.0.1). This server is a virtual machine on a VMware ESX host.
    My question concerns the utilization of memory resources in a multi-instance environment.
    I currently have 2 instances/dbs on this server. Each was configured with an SGA_TARGET of approximately 900MB. java_pool_size, large_pool_size and shared_pool_size are also assigned values in the pfile, which I believe supersedes SGA_TARGET.
    I am tasked with determining if the server can handle a third instance. It's unclear how much load the database will see, so I don't yet know how much memory I will want to allocate to the shared pool, the buffer cache, etc.
    I wanted to see how much memory was being used by the existing instances, so on the server I attempted to capture memory usage information both before, and after, the startup of the second instance.
    I used 'top' for this, and found that the server has a total of 3.12GB of physical memory. Currently there's about 100MB free physical memory.
    The information from 'top' also indicated that physical memory utilization had actually decreased after I started the second instance:
    Before second instance was started:
    Mem: 3115208k total, 3012172k used, 103036k free, 46664k buffers
    Swap: 2031608k total, 77328k used, 1954280k free, 2391148k cached
    After second instance was started:
    Mem: 3115208k total, 2989244k used, 125964k free, 47144k buffers
    Swap: 2031608k total, 89696k used, 1941912k free, 2320184k cached
    Logging into the instance, I ran a 'show SGA', and got an SGA size of about 900MB (as expected). But before I started the instance, there wasn't anywhere near that amount of physical memory available.
    The question I need to answer is whether this server can accommodate a third instance. I gather that the actual amount of memory listed in SGA_TARGET won't be allocated until needed, and I also understand that virtual memory will be used if needed.
    So rather than just asking for 'the answer', I'm hoping someone can point me to a resource which will help me better understand *NIX memory usage behavior in a multi-instance environment...
    Thanks!!
    DW

    > Each was configured with an SGA_TARGET of approximately 900MB. java_pool_size, large_pool_size and shared_pool_size are also assigned values in the pfile, which I believe supersedes SGA_TARGET.
    Not quite. If you set non-zero values for those parameters as well as setting SGA_TARGET, then they act as minimum values that have to be maintained before extra free memory is distributed automatically amongst all auto-tuned memory pools. If you've set them as well as SGA_TARGET, you've possibly got a mish-mash of memory settings that aren't doing what you expected. If it was me, I'd stick either to the old settings, or to the new, and try not to mix them (unless your application is very strange and causes the auto-allocate mechanism to apportion memory in ways you know are wrong, in which case setting a floor below which memory allocations cannot go might be useful).
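    A minimal sketch of the "stick to the new settings" option described above might look like the following (this assumes an spfile rather than the pfile mentioned in the post, and reuses the 900MB figure purely as an example):
    -- Sketch only: rely on automatic SGA management
    ALTER SYSTEM SET sga_target = 900M SCOPE = SPFILE;
    -- Zeroing the individual pools removes the hard floors so they are fully auto-tuned
    ALTER SYSTEM SET shared_pool_size = 0 SCOPE = SPFILE;
    ALTER SYSTEM SET large_pool_size = 0 SCOPE = SPFILE;
    ALTER SYSTEM SET java_pool_size = 0 SCOPE = SPFILE;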
    3GB of physical memory is not much these days. The general rule is that your total SGAs should not make up more than about 50% of physical memory, because you probably need most of the other 50% for PGA use. But if your users aren't going to be doing strenuous sorting (for example), then you can shrink the PGA requirement and nudge the SGA allowance up in its place.
    At 900MB per SGA, you can have two SGAs and not much user activity. That's 1800MB SGA plus, say, 200MB each PGA = 2200MB, leaving about 800MB for contingencies and Linux itself. That's quite tight and I personally wouldn't try to squeeze another instance of the same size into that, not if you want performance to be meaningful.
    Your top figures seem to me to suggest you're already paging memory out to swap, which can't be good!
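    As a side note, rather than relying only on 'top', each instance can report what it has actually allocated; queries against the standard dictionary views along these lines (a rough sketch, run in each instance) give a clearer picture:
    -- What this instance has actually carved out for the SGA
    SELECT name, ROUND(bytes/1024/1024) AS mb FROM v$sgainfo ORDER BY bytes DESC;
    -- PGA actually allocated versus the aggregate target
    SELECT name, ROUND(value/1024/1024) AS mb FROM v$pgastat
     WHERE name IN ('aggregate PGA target parameter', 'total PGA allocated');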

  • Workflow custom activity deployment in a multi-server environment

    I have been working on a project that involves developing a custom workflow activity for SharePoint 2013. I am developing it in a single-server environment using HTTP.
    My problem occurs when deploying to a multi-server environment with HTTPS (WFE, APP). My question is how to deploy my solution to the new environment.
    The steps:
    Create a project - C# activity library
    Add a workflow activity, add .xml
    Deploy the .dll and .xml of the project to:
    "C:\Program Files\Workflow Manager\1.0\Workflow\Artifacts" and "C:\Program Files\Workflow Manager\1.0\Workflow\WFWebRoot\bin"
    net stop "Workflow Manager Backend"
    net start "Workflow Manager Backend"
    Deploy .DLL to GAC
        - Created MSI using install shield in VS2010 to add .DLL to GAC
        - Verify .DLL in GAC by going to c:\windows\assembly and %windir%\Microsoft.NET\assembly
    iisreset
    Deploy WSP to SharePoint, activate feature, open SharePoint Designer 2013 and choose the custom action that now appears when creating a 2013 workflow
    To recap, we have Workflow Manager on the APP server and the workflow client on the WFE. We deployed the .DLL and .XML to the Workflow Manager (APP) only. The .DLL is deployed
    to/in the GAC on the WFE and the APP. We are able to see and create the activity in Designer 2013 and we deploy the workflow to a normal SharePoint list. When we run the workflow we do not get any errors in the ULS logs, event viewer or Workflow Manager Debug
    Logs (event viewer also). The site is not created though. We believe the issue is that the custom C# (.DLL) is not being run.
    This all works fine and dandy in my single-server environment. The workflow is working like a charm. How can we troubleshoot what the issue is if we are not finding any errors?
    Is there a step that we missed or some other place we need to look for logs? Would the ULS logs show the site creation or show running our custom code? Currently it does not show anything when we run the workflow.
    Let me know if this is unclear or if anyone needs more information. Thanks

    Hi,
    Here is a workaround for your reference:
    We can develop a custom WCF service instead of the custom activity in SharePoint, and then call the service from the workflow. This uses a separate dedicated server for the workflow, without any reference to SharePoint DLLs from inside the workflow.
    Here is a similar thread for your reference:
    https://social.technet.microsoft.com/Forums/systemcenter/en-US/d462ca07-9861-4133-948a-fc9771306cb1/custom-workflow-how-to-go-from-single-server-to-multiple?forum=sharepointdevelopment
    Thanks,
    Dennis Guo
    TechNet Community Support

  • Best Practices for CS6 - Multi-instance (setup, deployment and LBQ)

    Hi everyone,
    We recently upgraded from CS5.5 to CS6 and migrated to a multi-instance server from a single-instance. Our current applications are .NET-based (C#, MVC) and are using SOAP to connect to the InDesign server. All in all it is working quite well.
    Now that we have CS6 (multi-instance) we are looking at migrating our applications to use the LBQ features to help balance the workload on the INDS server(s). Where can I find some best practices for code deployment/configuration, etc., for a .NET-based platform to talk to InDesign?
    We will be using the LBQ to help with load management for sure.
    Thanks for any thoughts and direction you can point me to.
    ~Allen

    Please see if the below MetaLink note guides you:
    Symmetrical Network Acceleration with Oracle E-Business Suite Release 12 [ID 967992.1]
    Thanks,
    JD

  • Deploying 11i in a grid environment

    Hi,
    I'm curious about the challenges a DBA faces when deploying 11i in a grid environment.
    Once I get a proper mental handle on what the challenges are, I'd obviously like to find some solutions.
    So, assume I have 10 hosts running Linux and that I have the latest version of 11i and 10g.
    Let's assume I decide on this physical deployment:
    DB Tier:
    3 linux nodes running 10g RAC
    Apps Tier:
    7 linux nodes running 11i
    Question 0:
    How sensitive is 11i (11.5.9) to the version of the underlying database?
    Will it run fine on 8i, 9i, 10g?
    Question 1:
    During 11i installation, when I run rapidwiz,
    is it possible to point rapidwiz at a database service rather than a specific Oracle instance?
    q2: (a generalization of q1)
    Is rapidwiz RAC aware?
    q3:
    Ignoring the answers to q1,2, is it possible to configure 11i so that it is unaffected by a crash of any instance in the 10g RAC?
    q4:
    What is the best way to install the 11i software on the 7 Linux nodes?
    Should I run rapidwiz 7 times such that I have 7 copies of the 11i software on the local disk drives of the 7 Linux nodes? If I do this, will the 11i software on node2 be identical to the 11i software on say node7?
    If there is a difference, what is it and how complicated is it?
    q5: (a variation of q4)
    Suppose I don't want to run rapidwiz 7 times;
    could I run rapidwiz once on a central file server and then serve (using NFS maybe) the 11i software to the 7 Linux nodes? If I do this, do I need to worry that the deployed software will function only on its original node? If it does function only on the original node, would it be possible to use soft links, environment variables, and properly edited configuration files to properly serve the 11i software to the 7 Linux nodes so that it functions on the 7 Linux nodes?
    q6:
    Assume I can figure out how to properly serve 11i software to the 7 nodes using NFS; what is the best way to protect the application from crash of the file server?
    q7:
    Let's assume that I have figured out an elegant way to deploy the 11i software and I have it properly configured to function with 10g RAC; what is the best way to deploy services across the 7 nodes?
    Should I maybe put the Forms server on nodes1,2,3 and the SelfService server on nodes4,5,6 and the CM on node 7?
    Or, should I put all the services on all 7 nodes so that each node is identical?
    q8:
    Assume I have a good solution to q7, how do I protect the application from a crash of one of the 7 nodes? For example, if a user is interacting with some business object (entering an order for example), and this user's session is connected to node5 and node5 then crashes, is it possible for 11i to be configured so that this user session would be seamlessly failed over to another node?
    Sorry about posting so many questions; I've not found any obvious answers in the documentation.
    -moi

    answers for
    q0:
    11.5.9 ships with a 9i Oracle database server, I believe.
    q1:
    No, I don't think so. Rapidwiz is shipped with one Oracle database dump -- two, actually: PROD and VIS. You have to use one of those databases, and the same goes for the Oracle home that comes with Rapidwiz. In practice, Rapidwiz dumps the Oracle software onto your server, then copies the datafiles and creates the controlfile according to your database name specifications.
    q2:
    As I said earlier, Rapidwiz dumps the Oracle home and datafiles onto your server; there is no concept of installing the Oracle server or creating the database yourself. So you will have to make changes after installation to convert your single-node database into a multi-node database.
    q3:
    You can use clustered machines so that failover can happen at any point in time irrespective of the software version; this is at the hardware level. Veritas Cluster is one example.
    You can also migrate your database to Oracle 10g and then recreate the controlfile so that it works as RAC.
    q4:
    I don't think a 7-node configuration is possible; the maximum would be a 5-node configuration: database, forms, reports, web and admin server. You will have to choose one of these servers for each node, and depending on the node configuration the files will be different on each node. For example, the database node will have just the database, while the forms node will have the APPL_TOP for all the products with the $PROD_TOP/forms directory structure.
    q5:
    No, it's not possible to copy from one node to another. Each node has a configuration file, and the entries in that file vary depending on the kind of server the node is configured for.
    q6:
    A clustered machine is the only way to protect the middle-tier servers from a crash; that is at the hardware level.
    q8:
    Again, hardware-level clustering can protect you.
    One important thing to note here: the Apps installation is full of bugs. Even a simple single-node installation gives scores of problems, so the kind of setup you are dreaming of will need a lot of effort. I'd expect hundreds of bugs here :d

  • Task executions in multi server environment

    Hi All,
    I have a question regarding the task execution in a multi-server environment.
    Below is the scenario:
    We have two SIM applications: one for intranet users (employees and contractors) and one for extranet users (customers). Both of these applications point to the same database repository.
    Intranet : idm-intra.ear ( this is deployed on IntraServer1 and IntraServer2)
    Extranet: idm-extra.ear ( this is deployed on ExtraServer1 and ExtraServer2)
    And both share same repository WAVESET.
    Now we are deploying some request workflows for intranet users. In our case even the extranet application can see these tasks.
    My question is: if I trigger a request workflow from the intranet application and it is pending approval, is there a chance that when it times out this workflow is executed by the "extranet" application?
    Thanks,
    kIDMan.

    The answer to your question is yes. If you go to the Configure >> Servers tab in the admin console on any of these instances of IdM, you should see all four instances. An easy way to test that this is happening is to enable workflow tracing and launch a couple of requests from IntraServer1. IdM will try to distribute the work among all the servers it knows about, so you'll see portions of the flow within the workflow traces on the individual servers. Well, maybe it's not such an easy thing to test because it's a timing issue.
    I had an issue recently in which the guy doing the build to the QA environment messed up and pointed the QA IdM instances to the DEV repository. We were doing some testing of some flows on DEV and they were being executed on the QA IdM instances and bombing out with ClassNotFound errors because the QA environment was not built out completely/correctly. So even though the requests were being launched from the DEV instances, they were being executed on the QA instances.
    There is a feature that is supposed to let you restrict which workflows run on which servers. If you go to the admin console, click on Configure >> Servers, choose a server, click on the 'Scheduler' tab and then check the 'Task Restrictions' checkbox. From there you can restrict which workflows run on which servers. But my suspicion is that this doesn't work correctly (based on some testing I was doing on 7.1), so you might want to test it out. Hope this helps.

  • Setting up a multi-server environment in SQL Server 2012 - Enlist Failed error

    I am trying to configure a master server / target server (multi-server) environment in SQL Server 2012.
    I changed:
     - `MsxEncryptChannelOptions` - changed from 2 to 0
     - `AllowDownloadedJobsToMatchProxyName` - changed from 0 to 1 on the target
    When I run the wizard I get the error below:
    >MSX Enlist failed for Job Server 'MasterServerName'
    >The enlist operation failed (Reason: SQLServerAgent Error: Unable to connect to MSX 'MasterServerName'.) (Microsoft SQL Server, Error: 22026)
    Both servers' SQL Agents are running under the same Windows service account.
    Any Suggestions on how to fix this?
    **Adding the Log:**
    Enlist TSX Progress
    - Create MSXOperator (Success)
    Checking for an existing MSXOperator. 
    Updating existing MSXOperator. 
    Successfully updated MSXOperator. 
    - Make sure the Agent service for 'Test3' is running (Success)
    The service 'SQLSERVERAGENT' is running. 
    - Ensure the agent startup account for 'Test4' has rights to login as a target server (Success)
    Checking to see if the startup account for 'Test4' already exists. 
    Login exists on server. 
    Checking to see if login has rights to msdb. 
    Login has rights to msdb. 
    Checking to see if user is a member of the TargetServersRole. 
    User is a member of the TargetServersRole. 
    - Enlist 'Test4' into 'Test3' (Error)
    Enlisting target server 'Test4' with master server 'Test3'. 
    Using new enlistment method. 
    Messages
    MSX enlist failed for JobServer 'Test4'.  (Microsoft.SqlServer.Smo)
    ADDITIONAL INFORMATION:
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    The enlist operation failed (reason: SQLServerAgent Error: Unable to connect to MSX 'TEST3'.) (Microsoft SQL Server, Error: 22026)

    Hi SmilingLily,
    You can try running the SQL Server Agent under a domain account.
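    In case it helps, the two Agent settings mentioned in the original post can also be changed from T-SQL and the enlistment retried without the wizard. This is only a rough sketch (run on the target server; 'Test3' stands in for the master server from the log, and the Agent service must be restarted after changing its registry values):
    -- Relax the Agent's channel encryption requirement on the target (value 0)
    EXEC master.dbo.xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
        N'MsxEncryptChannelOptions', N'REG_DWORD', 0;
    -- Allow downloaded jobs to match the proxy name
    EXEC master.dbo.xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
        N'AllowDownloadedJobsToMatchProxyName', N'REG_DWORD', 1;
    -- After restarting the SQL Server Agent service, retry the enlistment
    EXEC msdb.dbo.sp_msx_enlist N'Test3';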

  • Web Form Validation Message Language Setting at Runtime when working in a multilingual environment

    Business Catalyst uses the default culture language to display web form validation messages.
    When we are in a multilingual environment and are not using subdomains to handle the multilingual sites, we found that the validation messages still appeared in the default culture setting. To make this work, we need to add the below script to our template.
    <script type="text/javascript">
    $(document).ready(function(){
        // Load the Business Catalyst validation messages for the desired language
        var head = document.getElementsByTagName('head')[0];
        var script = document.createElement('script');
        // Change the lang parameter to match the language of the template (FR = French)
        script.src = '/BcJsLang/ValidationFunctions.aspx?lang=FR';
        script.charset = 'utf-8';
        script.type = 'text/javascript';
        head.appendChild(script);
    });
    </script>
    Assuming the template is in French, you can change the lang parameter in the script according to your language.

    After user 1 submits the page, it might not even be committed, so there is no way to have the pending data from user1 seen by user2.
    However, we do have a new feature in ADF 11g TP4 that I plan to blog more about called Auto-Refresh view objects. This feature allows a view object instance in a shared application module to refresh its data when it receives the Oracle 11g database change notification that a row that would affect the results of the query has been changed.
    The minimum requirements in 11g TP4 to experiment with this feature which I just tested are the following:
    1. Must use Database 11g
    2. Database must have its COMPATIBLE parameter set to '11.0.0.0.0' at least
    3. Set the "AutoRefresh" property of the VO to true (on the Tuning panel)
    4. Add an instance of that VO to an application module (e.g. LOVModule)
    5. Configure that LOVModule as an application-level shared AM in the project properties
    6. Define an LOV based on a view accessor that references the shared AM's VO instance
    7. DBA must have performed a 'GRANT CHANGE NOTIFICATION TO YOURUSER'
    8. Build an ADF Form for the VO that defined the LOV above and run the web page
    9. In SQLPlus, go modify a row of the table on which the shared AM VO is based and commit
    When the Database delivers the change notification, the shared AM VO instance will requery itself.
    However that notification does not arrive all the way out to the web page, so you won't see the change until the next time you repaint the list.
    Perhaps there is some way to take it even farther with the active data feature PaKo mentions, but I'm not familiar enough with that myself to say whether it would work for you here.
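    As a minimal sketch of the database-side prerequisites above (steps 2 and 7), assuming a DBA connection and an application user literally named YOURUSER:
    -- Step 2: check that COMPATIBLE is at least 11.0.0.0.0
    SELECT value FROM v$parameter WHERE name = 'compatible';
    -- Step 7: grant the change-notification privilege to the application user
    GRANT CHANGE NOTIFICATION TO YOURUSER;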

  • Virtualised Multi-Instance SQL Server Cluster - Processor Resource Management

    Hi - We're in the process of implementing a multi-instance SQL 2014 guest cluster on Windows 2012 R2. To our dismay, it seems that Windows System Resource Manager (WSRM) is deprecated in Windows 2012 R2, so we're now stuck on how best to manage CPU usage between SQL instances.
    As far as I can see, I'm left with two options, but both of these have problems:
    1) Use SQL processor affinity within the guest cluster, with each SQL instance assigned to dedicated v-CPUs. However, I'm not certain that setting SQL processor affinity within a VM will actually have the desired effect...
    - When there is physical CPU capacity available, I'd hope Hyper-V would provide it to whichever v-CPU is demanding it.  
    - When VM processor demand exceeds the physical CPU capacity, I'd hope the SQL instances would receive a proportion of the physical CPU time according to the number of v-CPU(s) assigned through the affinity settings.
    2) Use a VM (actually 2, because it's a 2-node guest cluster) per SQL instance. This is not ideal, as we need multiple SQL instances and it would result in administrative and performance overhead.
    Does anyone have any information or thoughts on this?  How can we manage a virtualised multi-instance SQL deployment now that WSRM has been deprecated?  Help me please!
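    (For reference, the per-instance affinity in option 1 would be set with T-SQL along the lines below; this is only a sketch, and the 4+4 v-CPU split is a made-up example.)
    -- Run against instance A: pin it to v-CPUs 0-3
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;
    -- Run against instance B: pin it to v-CPUs 4-7
    ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 4 TO 7;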

    I'm not sure what the requirements are for each of the 2 VMs in the SQL guest cluster.
    I'm assuming the guest cluster resides on a Hyper-V CSV with at least 2 Hyper-V hosts, and the 2 VMs are configured with Anti-affinity to ensure they never reside on the same Hyper-V host.
    I've been able to configure CPU resources to VMs from the standard controls in Hyper-V Manager:
    See this blog post
    What edition of SQL 2014 are you using?
    This matters because of these limitations.
    Also consider running SQL Server with Hyper-V Dynamic Memory - see Best Practices and Considerations
    Hyper-V performance tuning - CPU
    Hyper-V 2012 Best Practices
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA - http://superwidgets.wordpress.com

  • UPK11 - multi-user environment - manage author permissions - help

    We are configuring our UPK 11 multi-user environment.
    As we have it set, the "everyone" group has modify access to all folders.
    I would like to have a few admins have modify access to the system folder and everyone else be read only.
    Here's my dilemma -
    My admin profile is also a part of the everyone group - so if I set the permissions on the everyone group to read only, that affects the admin login as well.
    The documentation says that if I set up an author with explicit folder permissions, they are not included in the everyone group, but every time I try, they are added (irrevocably) to the everyone group before I can even assign an explicit permission.
    I tried to set the everyone group to read only and nearly hobbled my admin account.
    Can someone who has managed authors in a UPK 11 multi-user environment send out some tips?
    I'm not understanding the documentation well enough to make headway.
    [email protected]

    Hi Paul,
    I recently performed an implementation of UPK 11 at a client and had to show one of their employees how to administer the security surrounding profiles and authors.
    The group "Everyone" is default, and each person will belong to that group. It seems that when a person is assigned to another group/with author-specific permissions, it will override that which is set to the "Everyone" group.
    What I normally suggest (and implement) is that you modularise your security (separate the authors into their respective departments), create those departments as groups, and then assign permissions as required. Let's use an example - it assumes you are logged in as an admin user:
    I create 3 folders (where 4 and 5 are defaults):
    1. HR
    2. FIN
    3. CS
    4. Administrators
    5. Everyone
    Each author is defaulted to number 5, and all administrators should be included in group 4.
    Now the goal is to prevent HR and FIN from modifying CS, and vice-versa for the other two groups.
    You will see in your library that the root is defined as "/". Firstly, ensure that the Administrators group has Modify access to this folder (Administration --> Manage Groups --> Permissions). I think you need to click on edit, then click on the "Add" button (under permissions), and choose the root folder "/" - ensure it has modify - by default this value should already exist. This now ensures that the security is filtered down to all subordinate folders that exist under the root folder - you do not need to manually assign "modify" permissions to each folder for the Administrator group, as you have already done so at root level.
    You will also notice that there is a "Members/Users" tab (can't remember off the top of my head which), and this allows you to select the profiles/authors that have Administrator group privileges - ensure the correct users are selected.
    I normally leave the "Everyone" group as it is - do this for now, ensure everyone is included in it, and only modify after you have implemented and tested your security structure if needs be - normally the other group permissions override the "Everyone" group permissions.
    Next step is to assign department-specific permissions to the 3 folders we created. Navigate to Administration --> Manage Groups, and ensure that you begin by editing the first department group (HR) - create the group if not already done, then edit (little pencil logo). You would like all HR users to have full modify access to their folder only, but they must not be able to mess around with the other two departments or the "System", "Getting started with UPK" and "/" folders. Under permissions, click on the Add button, and ensure that the folder "HR" has the modify privilege. Add the remaining 4 folders (CS, FIN, System, Getting started with UPK), and assign either Read or Read/View permission - this will ensure all other folders cannot be modified by HR users. Add the relevant HR users to the group under the "Members/Users" tab.
    Apply the same for FIN and CS respectively.
    Now you could get to a point where you may need cross-departmental access for a few users (FIN may need to access some HR content). I recommend that you do this at the user level, and not the group level, as you will limit who can access other departments. Navigate to Administration --> Manage Authors. Click on the specific user that will require access to another department. Edit his profile and, under the permissions tab, ensure that he has the required access to the other department's folder (or sub-folders only) by adding and assigning the privilege. This should override the group security set for that specific department - but only for this user!
    Test by attempting to create content in an un-authorised folder (when logged in as a normal user). Test admin account by creating content in each folder. If it works - you will no longer need to modify "Everyone", as your permissions are defined under the other groups. I hope this all makes sense?
    That's it in a nutshell, really.
    I hope that this helps, and feel free to get back to me if you are still unsure?
    Regards,
    Greig

  • CQ5 Search Trends in multi publisher environment

    Hello,
    I've got a question here from one of our team. We've used the Search component in /libs/foundation/search as a base for a search component and have used the search trends feature which writes stats via /bin/statistics/tracker/query to /var/statistics. However, in a multi-publisher environment, we'd get different results based on which publish server is hit by the user. Is there any configuration mechanism or combination of mechanisms that can easily synchronize this data? I'm wondering if we can write this data back to author (by a custom servlet or reverse replication) and then publish it back out?
    Any insights or recommendations would be greatly appreciated.
    Thank you.
    Sarwar

    Hi Sarwar,
       Using an Adobe Marketing Cloud integration such as SiteCatalyst would be the right approach for this use case. If you are looking to stay within CQ, you might try the approach followed in the "Page View Statistics" feature [1] of earlier CQ versions, which is effectively deprecated now. The approach was to develop a bundle with a configurable tracking URL pointing to the author instance, so that statistics get written to the author directly, and then use a workflow launcher to call an activate action so that the publish instances stay in sync.
        [1]    http://<host>:<port>/system/console/configMgr/com.day.cq.wcm.core.stats.PageViewStatistics
    Thanks,
    Sham

  • Is Modeling Multi-Instance Loops of Activities available in CE 7.1?

    Hi,
    I just want to know if Modeling Multi-Instance Loops of Activities is available in CE 7.1, because I don't see the property for Looping in my CE 7.1 version.

    No, the parallel looping capability is only added with SAP NetWeaver Composition Environment 7.2.

  • PS Default Install under Admin instance

    Is the default install really under the Admin instance for the App Server? I find this hard to believe, since if I have to restart the portal server (to accept new authless anonymous users), I have to do it from the command line.
    Why was this decision made?
    I would think you would have the portal under its own instance, and admin under another?
    We tried to get it working under another instance and it wouldn't work. Any ideas or comments on this?

    The Sun installers in general are not robust. They assume that you want to install everything in the same instance (the DAS instance at that!). I recently attempted to install Sun Portal 7 into a separate instance using Sun Application Server 8. The installer asks for the location of the instance root and docroot. (It didn't even bother to ask if I wanted to deploy across a cluster.) The install completed. I logged into the DAS console just to find that the target of the war files was 'server' (the DAS instance) and all JVM configs were applied to that instance and not the instance I set up. However, the server policy and docroot were updated on the one I set up and not the DAS instance. Very disappointing.

  • Upgrading OA 11i from OA 10.7 with Multi-Instances Schemas

    From an Oracle Applications 10.7 with multi-instance schemas (Multiple Set of Books Architecture; schemas: APPS, APPS2, APPS3, PO, PO2, PO3, AP, AP2, AP3, AR, AR2, AR3,...), we want to upgrade to 11.5.1.
    At the beginning of the AutoUpgrade step, we get an error:
    "Not able to process an MSOBA, you must pass in the Multi-Org Architecture."
    Could you give me some information about how to resolve this problem:
    * Must we install and initialize Multi-Org in the 10.7 environment?
    * Can we handle this during the upgrade step, and how?
    * Is there a way to bypass this problem?
    Thank you for your help!

    Below are the points I have read about the R11i migration:
    a) MSOBA clients should migrate/convert to Multi-Org before migrating to 11i.
    b) AutoUpgrade will fail if it detects MSOBA.
    c) Attachments and Workflow will not work with MSOBA.
    For more details, refer to Oracle AppsNet.
