0x00002EFE Occasionally on existing extended replication

I have a problem I can't find an answer to elsewhere. First, my environment: I have two primary hypervisors replicating data via HTTP over port 80 to another hypervisor. This is all at the same site, with physical connections directly to the
device. That machine is in turn replicating data via extended replication to a separate machine off-site, connected via direct linked fiber. Replication on-site works fine, and extended replication to the off-site machine works fine as well about 99%
of the time. Over the weekend I took a look in the event logs of the machine sending data to the off-site machine, and to my surprise there were two 0x00002EFE events in the logs, for two separate VMs, at the same time. It looks like it was a temporary
issue, as no further errors occurred, and viewing the replication health of the machines shows all subsequent replication was successful. Looking back a little further, I noticed a different machine had the same issue about a week ago, and again it resumed
replicating shortly after the error. Some google-fu on the error turns up lots of folks having the problem when first setting replication up, but I can't find anyone hitting the error once replication has been set up successfully, let alone on an HTTP
channel instead of HTTPS. The only thing I can think of that might be an issue is that I have my replication traffic throttled down to 15 MB/s via a PowerShell New-NetQosPolicy, but during the time of the errors virtually no data was changing, so I have
my doubts about it being a traffic-related error, especially since I can review the logs at times when I know there was very heavy traffic and I don't see this error repeated. Anyone have any ideas?
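For reference, the throttle described above would normally be created with New-NetQosPolicy; a minimal sketch of what such a policy might look like (the policy name and exact rate are assumptions, since the original command isn't shown):

# Hypothetical reconstruction of the QoS throttle (name and rate assumed).
# Matches traffic destined for TCP port 80 (the HTTP replication channel)
# and caps it at roughly 15 MB/s (~120 Mbit/s).
New-NetQosPolicy -Name "HVReplicaThrottle" -IPProtocolMatchCondition TCP -IPDstPortMatchCondition 80 -ThrottleRateActionBitsPerSecond 120000000

# Verify the policy is in place.
Get-NetQosPolicy -Name "HVReplicaThrottle"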

Hi,
" I have my replication traffic throttled down to 15MB via a PowerShell new-netqospolicy, "
Did you try to set more lower bandwidth via QOS to see if the issue can be  reproduced ?
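For example, you could lower the rate on the existing policy in place; a sketch, assuming the hypothetical policy name from above:

# Drop the assumed policy to ~5 MB/s (40 Mbit/s) to try to reproduce the error.
Set-NetQosPolicy -Name "HVReplicaThrottle" -ThrottleRateActionBitsPerSecond 40000000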
Best Regards
Elton Ji

Similar Messages

  • Azure site recovery with existing extended replication

    I'm wondering if this is possible. I have two non-clustered hypervisors, which both replicate to the same primary replication server, which in turn performs extended replication to another off-site replication server. I cannot set more than a single
    replication target on the primary hypervisors or on the primary replication server, and you cannot extend replication past an already-extended replication server. In order to use Azure Site Recovery, would I have to stop my existing extended
    replication and just replicate to Azure? How does a private cloud work with 2 non-clustered primary hypervisors and primary/secondary private clouds?

    Hi
    I am not sure I got the nuance of the question, but as of today, Azure Site Recovery does not support the notion or workflow of extended replication.
    This applies both to replicating VMs from pointA -> pointB -> pointC (replace "point" with "server" or "cloud") and to pointA -> pointB -> Azure (same as before, replace "point" with "server" or "cloud").
    Let's assume you have replication set up for a VM between two VMM clouds: as of today, we do not have a workflow that allows you to "extend" the replica VM in the secondary cloud to Azure. Hope that helps.
    Praveen

  • Production moves to another server with existing streams replication

    Hi All,
    We have Oracle bidirectional Streams replication set up between two servers (US and UK). Everything is working as expected.
    However, we are going to move the UK server (e.g., from cam19 to cam29).
    Will it impact the replication setup?
    We will restore the new server via cold backup, and we will stop all processes on both sides before the cold backup.
    Can you please suggest what we should do before the move, and how we can avoid any problems while moving the server, so it will not impact replication?
    Thanks in advance!
    Many Thanks
    nick

    Thanks a lot, Anurag.
    I have one small question, not on the same topic; it's regarding adding a table to an existing replication setup.
    When we add a new table to an existing replication setup and for some reason the table is not replicating between the two databases, we have to remove the rules for that particular table and set it up again. Sometimes we get the error "queue has errors" (I don't know the ORA number).
    In that case the apply process is ABORTED, and when we try to start the process it gives the same error and ABORTS again.
    Then we have to remove the whole replication manually and set it up again. It's very horrible...
    Could you please advise: before dropping the rules for a particular table, what should we do? Do we need to unschedule the propagation process and then drop the rules? I read on Metalink that negative rules are dropped while the propagation process is using the same rule set.
    Please suggest!
    Many Thanks

  • How to enable an existing Extended Events Session or add a new Session?

    Hi,
    I am using Management Studio version 12.0.2000.8 and I have noticed that Extended Events for Azure SQL DB have been added to it.
    However, I can't add a new session because the option is grayed out in the dropdown menu. Also, the existing sessions return an error when I attempt to start them. Here is one example of starting
    azure_xe_query session:
    Failed to start event session 'azure_xe_query' because required credential for writing session output to Azure blob is missing. (Microsoft SQL Server, Error: 25739)
    I have an automated export assigned to my Azure database, so the database must be using a (blob) storage account. Is this the same storage account that Extended Events is trying to reach? So far, my best guess is that a storage key should be provided
    when starting the session; this, obviously, can't be done using the GUI.
    The only thing I managed to google about this topic is this blog post from May.
    Has anybody else tried to profile their Azure DB using Extended Events? Is this feature still in development?
    Thank you,
    Filip

    Hi Bob, thank you for replying.
    You mentioned 12 pre-configured event sessions. May I ask, which version of Management Studio were you using? For some reason, I only see 5 of them:
    azure_xe_errors_warnings
    azure_xe_object_ddl
    azure_xe_query
    azure_xe_query_detail
    azure_xe_waits
    And each fires an error just like the one in the first post. At the moment, viewing live data from 'xe_query' session would be enough for me. If I could only start it somehow ...
    I should probably mention I have tried to do this on a Basic and a Standard service tier. Not yet with a Premium.
    Since there is probably nothing we can do at the moment, I'm going to mark your reply as an answer. Thanks again.

  • Inter-site replication and KCC, when migrating from 2008 SP1 to 2012 R2

    Hello,
    We have a very simple AD infrastructure, two sites.  I have built out a 2012 R2 server, extended the AD schema and promoted it to a domain controller.  Reviewing the existing AD replication topology, we currently have a 2008 SP1 server as a DC
    designated as the source/destination for inter-site replication, which appears to have been decided by KCC.
    I will be decommissioning this 2008 SP1 server, having migrated the FSMO roles off to a few other DCs.
    Will KCC automatically know to adjust the replication topology when the 2008 SP1 DC is offline?  Should I demote it after migrating off the FSMO roles?
    Thanks,
    Matt

    Hi,
    Thanks for your post.
    By default, the KCC reviews and makes modifications to the Active Directory replication topology every 15 minutes to ensure propagation of data, either directly or transitively, by creating and deleting connection objects as needed. The KCC recognizes
    changes that occur in the environment and ensures that domain controllers are not orphaned in the replication topology.
    For more detail information, you could refer to:
    http://technet.microsoft.com/en-us/library/cc961781.aspx
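    If you want to confirm the topology has been recalculated once the old DC is gone, a quick check along these lines may help (a sketch; repadmin and the ActiveDirectory PowerShell module are assumed to be available):
    # Ask the KCC to recalculate the replication topology immediately
    # instead of waiting for the 15-minute cycle (runs against the local DC).
    repadmin /kcc
    # List current connection objects to confirm none still reference
    # the decommissioned DC (requires the ActiveDirectory module).
    Get-ADReplicationConnection -Filter * | Select-Object Name, ReplicateFromDirectoryServer, ReplicateToDirectoryServer
    # Summarize replication health across all DCs.
    repadmin /replsummary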
    For the migration, you could transfer the Flexible Single Master Operations (FSMO) Role and remove the Windows 2008 R2 domain controller, please refer to:
    Step-By-Step: Active Directory Migration from Windows Server 2008 R2 to Windows Server 2012 R2
    http://blogs.technet.com/b/canitpro/archive/2014/05/28/step-by-step-active-directory-migration-from-windows-server-2008-r2-to-windows-server-2012.aspx
    Regards.

  • Is It Possible to Add a Fileserver to a DFS Replication Group Without Connectivity to FSMO Roles Holder DC But Connectivity to Site DC???

    I apologize in advance for the rambling novella, but I tried to include as many details ahead of time as I could.
    I guess like most issues, this one's been evolving for a while, it started out with us trying to add a new member 
    to a replication group that's on a subnet without connectivity to the FSMO roles holder. I'll try to describe the 
    layout as best as I can up front.
    The AD only has one domain & both the forest & domain are at 2008R2 function level. We've got two sites defined in 
    Sites & Services, Site A is an off-site datacenter with one associated subnet & Site B with 6 associated subnets, A-F. 
    The two sites are connected by a WAN link from a cable provider. Subnets E & F at Site B have no connectivity to Site A 
    across that WAN, only what's available through the front side of the datacenter through the public Internet. The network 
    engineering group involved refuses to route that WAN traffic to those two subnets & we've got no recourse against that 
    decision; so I'm trying to find a way to accomplish this without that if possible.
    The FSMO roles holder is located at Site A. I know that I can define a Site C, add Subnets E & F to that site, & then 
    configure an SMTP site link between Sites A & C, but that only handles AD replication, correct? That still wouldn't allow me, for example, 
    to enumerate DFS namespaces from subnets E & F, or to add a fileserver on either of those subnets as a member to an existing
    DFS replication group, right? Also, root scalability is enabled on all the namespace shares.
    Is there a way to accomplish both of these things without transferring the FSMO roles from the original DC at Site A to, say, 
    the bridgehead DC at Site B? 
    When the infrastructure was originally setup by a former analyst, the topology was much more simple & everything was left
    under the Default First Site & no sites/subnets were setup until fairly recently to resolve authentication issues on 
    Subnets E & F... I bring this up just to say, the FSMO roles holder has held them throughout the build out & addition of 
    all sorts of systems & I'm honestly not sure what, if anything, the transfer of those roles will break. 
    I definitely don't claim to be an expert in any of this, I'll be the first to say that I'm a work-in-progress on this AD design stuff, 
    I'm all for R'ing the FM, but frankly I'm dragging bottom at this point in finding the right FM. I've been digging around
    on Google, forums, & TechNet for the past week or so as this has evolved, but no resolution yet. 
    On VMs & machines on subnets E & F when I go to DFS Management -> Namespace -> Add Namespaces to Display..., none show up 
    automatically & when I click Show Namespaces, after a few seconds I get "The namespaces on DOMAIN cannot be enumerated. The 
    specified domain either does not exist or could not be contacted". If I run a dfsutil /pktinfo, nothing shows except \sysvol 
    but I can access the domain-based DFS shares through Windows Explorer with the UNC path \\DOMAIN-FQDN\Share-Name then when 
    I run a dfsutil /pktinfo it shows all the shares that I've accessed so far.
    So either I'm doing something wrong, or, for some random large, multinational company, every subnet & fileserver one wants 
    to add to a DFS namespace has to be able to contact the FSMO roles holder? Or are those ADs broken down with a child domain 
    for each site, with the FSMO roles holder for that child domain located in each site?

    Hi,
    A DC in Site B should help. I still haven't seen any article saying that a DFS client has to connect to the PDC every time it tries to access a domain-based DFS namespace.
    Please see following article. I pasted a part of it below:
    http://technet.microsoft.com/en-us/library/cc782417(v=ws.10).aspx
    Domain controllers play numerous roles in DFS:
    Domain controllers store DFS metadata in Active Directory about domain-based namespaces. DFS metadata consists of information about the entire namespace, including the root, root targets, links, link targets, and settings. By default, root servers
    that host domain-based namespaces periodically poll the domain controller acting as the primary domain controller (PDC) emulator master to obtain an updated version of the DFS metadata and store this metadata in memory.
    So other DCs need to contact the PDC for updated metadata.
    Whenever an administrator makes a change to a domain-based namespace, the change is made on the domain controller acting as the PDC emulator master and is then replicated (via Active Directory replication) to other domain controllers in the domain.
    Domain Name Referral Cache
    A domain name referral contains the NetBIOS and DNS names of the local domain, all trusted domains in the forest, and domains in trusted forests. A
    DFS client requests a domain name referral from a domain controller to determine the domains in which the clients can access domain-based namespaces.
    Domain Controller Referral Cache
    A domain controller referral contains the NetBIOS and DNS names of the domain controllers for the list of domains it has cached. A DFS client requests a domain controller referral from a domain controller (in the client’s domain)
    to determine which domain controllers can provide a referral for a domain-based namespace.
    Domain-based Root Referral Cache
    The domain-based root referrals in this memory cache do not store targets in any particular order. The targets are sorted according to the target selection method only when requested from the client. Also, these referrals are based on DFS metadata stored
    on the local domain controller, not the PDC emulator master.
    Thus it seems acceptable to have a short disconnect between sites while the cache is still working at Site B.
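    Since root scalability changes which DC the namespace servers poll, it may also be worth confirming how each root is configured; a sketch using the DFSN PowerShell module on Windows Server 2012+ (the namespace path is a placeholder):
    # Show whether root scalability is enabled for a given namespace root.
    Get-DfsnRoot -Path "\\contoso.com\Public" | Select-Object Path, Flags, State
    # Enable root scalability so namespace servers poll the closest DC
    # instead of the PDC emulator.
    Set-DfsnRoot -Path "\\contoso.com\Public" -EnableRootScalability $true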

  • Transactional Replication: Alter view changes are not reflect on Subscription database

    Hi All,
    We have configured transactional replication in our environment on SQL Server 2008 R2. Yesterday I altered a view on the publisher database; the view is also present in the replicated articles, but unfortunately the change was not reflected in the subscription. I have already
    checked the 'Replicate schema changes' option in the subscription options, and it is set to true. There is no latency showing in Replication Monitor, and I have checked for blocking on the subscription and publication. One more thing: I tested changes on a replicated table and it is
    working fine.
    Please help me fix the issue.
    Regards,
    Pawan Singh
    Thanks

    Hi Pawan,
    According to your description, the ALTER on the view in the publication is not reflected in the subscription database. Based on my analysis, the issue could be caused by the distribution agent job not running after you alter the view.
    I made a test on my computer and set up transactional replication to replicate tables and views. First, when creating the subscription, I set the distribution agent job to 'Run continuously', altered the view in the publication database,
    and the change was successfully reflected in the corresponding view in the subscription database.
    However, I also made another test with the distribution agent job set to 'Run on demand only', and found that the change is not reflected in the subscription database unless I run the distribution agent job manually.
    The distribution agent reads the updated transactions written to the distribution database and applies the changes to the subscription database, so please check whether your distribution agent job runs after you alter the view. If not, please run the
    job and check if the issue still occurs.
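    If you prefer to check and start the job from PowerShell, something like this sketch could work (the SqlServer module and SMO are assumed to be available; the server name is a placeholder):
    # Requires the SqlServer module (Install-Module SqlServer).
    Import-Module SqlServer
    # Find agent jobs on the distributor whose category marks them
    # as distribution agents.
    $jobs = Get-SqlAgentJob -ServerInstance "DISTRIBUTOR01" | Where-Object { $_.Category -eq 'REPL-Distribution' }
    $jobs | Select-Object Name, IsEnabled, LastRunDate, LastRunOutcome
    # Start the first matching job; SMO's Job.Start() queues it to run.
    $jobs[0].Start()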
    Regards,
    Michelle Li

  • Extend sharepoint site to port 80

    Hi All,
    I have a SharePoint site that needs to be extended to port 80 with anonymous access. There are two more sites existing/extended on port 80 which are not anonymous.
    Can someone please guide me through the steps to extend it to port 80 with anonymous access without affecting the other sites? And if anything goes wrong, how can I remove/delete the anonymous extended site?
    MercuryMan

    Create a host entry in AD DNS and point it to the SharePoint server.
    Now from SharePoint Central Administration > Manage web applications > select the web application > Extend.
    In the host header, put the host name you created in DNS.
    Provide all the information and select the respective zone; make sure you select Allow Anonymous.
    Now in the CA again > Manage web applications > select the web application > Authentication Providers.
    It will show you the zones for that application.
    Select the appropriate zone and change authentication to anonymous.
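    The same extension can be scripted; a hedged sketch using the SharePoint cmdlets (the URL, name, and host header below are placeholders):
    # Load the SharePoint snap-in from a SharePoint Management Shell.
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Extend the existing web application to port 80 with anonymous access.
    New-SPWebApplicationExtension -Identity "http://sharepoint:8080" -Name "Anonymous Extranet" -Zone Internet -HostHeader "extranet.contoso.com" -Port 80 -AllowAnonymousAccess
    # If anything goes wrong, remove just the extension (not the whole
    # web application) by targeting its zone.
    Remove-SPWebApplication -Identity "http://sharepoint:8080" -Zone Internet -DeleteIISSite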
    Regards,
    Pratik Vyas | SharePoint Consultant |
    http://sharepointpratik.blogspot.com

  • Extending an Apple wireless network beyond reach of the base station

    Hello; I have an Apple network that consists of one AirPort Extreme with two AirPort Expresses acting as Wi-Fi extenders for the Extreme. I need to extend my wireless network even further, beyond the distance at which another AirPort Express could connect to the Extreme. Could I connect, via Ethernet cable, a third AirPort Express to one of the two existing AirPort Expresses to accomplish this? If so, would I be using the WAN or LAN port? Thanks!

    Could I connect, via Ethernet cable, a third AirPort Express to one of the two existing AirPort Expresses to accomplish this? If so, would I be using the WAN or LAN port? Thanks!
    Yes. You would configure the third Express as part of a roaming network alongside the existing extended network. Since this Express will be in bridge mode, it really doesn't matter which port you use. However, for consistency, I would recommend that you use the WAN port for the connection.

  • Is it Possible to Promote DC on a Subnet With Connectivity to a Site DC But Not DC with FSMO Roles???

    I apologize in advance for the rambling novella, but I tried to include as many details ahead of time as I could.
    I guess like most issues, this one's been evolving for a while, it started out with us trying to add a new member 
    to a replication group that's on a subnet without connectivity to the FSMO roles holder. I'll try to describe the 
    layout as best as I can up front.
    The AD only has one domain & both the forest & domain are at 2008R2 function level. We've got two sites defined in 
    Sites & Services, Site A is an off-site datacenter with one associated subnet & Site B with 6 associated subnets, A-F. 
    The two sites are connected by a WAN link from a cable provider. Subnets E & F at Site B have no connectivity to Site A 
    across that WAN, only what's available through the front side of the datacenter through the public Internet. The network 
    engineering group involved refuses to route that WAN traffic to those two subnets & we've got no recourse against that 
    decision; so I'm trying to find a way to accomplish this without that if possible.
    The FSMO roles holder is located at Site A. I know that I can define a Site C, add Subnets E & F to that site, & then 
    configure an SMTP site link between Sites A & C, but that only handles AD replication, correct? That still wouldn't allow me, for example, 
    to enumerate DFS namespaces from subnets E & F, or to add a fileserver on either of those subnets as a member to an existing
    DFS replication group, right? Also, root scalability is enabled on all the namespace shares.
    Is there a way to accomplish both of these things without transferring the FSMO roles from the original DC at Site A to, say, 
    the bridgehead DC at Site B? 
    When the infrastructure was originally setup by a former analyst, the topology was much more simple & everything was left
    under the Default First Site & no sites/subnets were setup until fairly recently to resolve authentication issues on 
    Subnets E & F... I bring this up just to say, the FSMO roles holder has held them throughout the build out & addition of 
    all sorts of systems & I'm honestly not sure what, if anything, the transfer of those roles will break. 
    I definitely don't claim to be an expert in any of this, I'll be the first to say that I'm a work-in-progress on this AD design stuff, 
    I'm all for R'ing the FM, but frankly I'm dragging bottom at this point in finding the right FM. I've been digging around
    on Google, forums, & TechNet for the past week or so as this has evolved, but no resolution yet. 
    On VMs & machines on subnets E & F when I go to DFS Management -> Namespace -> Add Namespaces to Display..., none show up 
    automatically & when I click Show Namespaces, after a few seconds I get "The namespaces on DOMAIN cannot be enumerated. The 
    specified domain either does not exist or could not be contacted". If I run a dfsutil /pktinfo, nothing shows except \sysvol 
    but I can access the domain-based DFS shares through Windows Explorer with the UNC path \\DOMAIN-FQDN\Share-Name then when 
    I run a dfsutil /pktinfo it shows all the shares that I've accessed so far.
    So either I'm doing something wrong, or, for some random large, multinational company, every subnet & fileserver one wants 
    to add to a DFS namespace has to be able to contact the FSMO roles holder? Or are those ADs broken down with a child domain 
    for each site, with the FSMO roles holder for that child domain located in each site?

    Hi Matthew,
    Unfortunately a lot of the intricacies of DFS leave my head as soon as I’m done with a particular design or troubleshooting situation but from memory, having direct connectivity to the PDC emulator for a particular domain is the key to managing domain based
    DFS.
    Have a read of this article for the differences between “Optimize for consistency” vs “Optimize for scalability”:
    http://technet.microsoft.com/en-us/library/cc737400(v=ws.10).aspx
    In brief, I’d say they mean:
    In consistency mode the namespace servers always poll the PDCe for the latest and greatest information on the namespaces they are hosting.
    In scalability mode the namespace servers should poll the closest DC for information on the namespaces they are hosting.
    The key piece of information in that article about scalability mode is: “Updates are still made to the namespace object in Active Directory on the PDC emulator, but namespace servers do not discover those changes until the updated namespace object replicates
    (using Active Directory replication) to the closest domain controller for each namespace server.”
    I read that as saying you can have a server running DFS-N as long as it has connectivity to a DC but if you want to make changes, do them from a box that has direct connectivity to the PDCe. Then let AD replication float those changes out to your other DCs
    where the remote DFS-N server will eventually pick them up. Give it a try and see how you get on.
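    One way to follow that advice is to locate the PDC emulator first and make namespace changes from a session that can reach it; a sketch (the ActiveDirectory module is assumed, and port 445 is just an illustrative reachability check):
    # Find the DC holding the PDC emulator role for the current domain.
    $pdce = (Get-ADDomain).PDCEmulator
    # Check that the PDCe is reachable before making namespace changes.
    Test-NetConnection -ComputerName $pdce -Port 445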
    That being said, you may want to double check that you have configured the most appropriate FSMO role placement in your environment's AD design:
    http://technet.microsoft.com/en-us/library/cc754889(v=ws.10).aspx
    And a DFS response probably wouldn’t be complete without an AskDS link:
    http://blogs.technet.com/b/askds/archive/2012/07/24/common-dfsn-configuration-mistakes-and-oversights.aspx
    These links may also help:
    http://blogs.technet.com/b/filecab/archive/2012/08/26/dfs-namespace-scalability-considerations.aspx
    http://blogs.technet.com/b/josebda/archive/2009/12/30/windows-server-dfs-namespaces-reference.aspx
    http://blogs.technet.com/b/josebda/archive/2009/06/26/how-many-dfs-n-namespaces-servers-do-you-need.aspx
    I hope this helps,
    Mark

  • Hyper-V Replica Doesn't Seem to Replicate Expanded VHDX File?

    I'm running Microsoft Windows Server 2012 R2 Essentials as a VM on Microsoft Hyper-V Server 2012 R2 (Server Core).  I am using Hyper-V replica and extended replication to copy the Essentials VM to two backup Hyper-V servers.  This has been working
    for months.
    This evening my 2 TB VHDX used by the Essentials VM was nearly out of space. I took down the VM and expanded the VHDX to 4 TB. Then I restarted Essentials and used Disk Manager to allocate the new 2 TB, giving Essentials 4 TB of space to use.
     The primary VM seems to be working normally.
    But Hyper-V replication doesn't seem to be functioning correctly.  I'm wondering if I'll need to delete the replication and start over, since the size of the VHDX increased compared to the original configuration?
    When I try to get replication to resume (from primary to secondary) I am prompted to resync, which I have attempted many times. The primary creates a snapshot, and after a few moments the snapshot is merged back into the primary VM. But the VHDX on the
    secondary still shows 2 TB in size, so the resynchronization doesn't seem to have completed correctly? Plus I have the option to "Resume Replication" from the primary, which also implies the prior resynchronization failed.
    I haven't found any error messages in Server Manager to explain why the resync seems to fail.
    At one point I found I was able to resync from the secondary to the tertiary server. I don't know what that actually accomplished, but both still have a 2 TB VHDX instead of the 4 TB that should now be available.
    Any ideas on how to get replica and extended replica to work without having to delete and start replication over from the beginning?
    Thanks for any assistance.
    Theokrat

    There was an error message that I'd overlooked, and unfortunately it tells me I'll have to delete and restart replication from the beginning.
    Cannot perform operation for virtual machine 'Windows Essentials 2012 R2' as virtual size of one or more virtual hard disks are different between primary and Replica servers. Delete and re-enable replication. (Virtual machine ID xxxxxxx)
    Theokrat
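    For anyone scripting the delete-and-re-enable, the general shape looks like this sketch (the VM name matches the error message above; the server name and port are placeholders):
    # On the primary server: tear down the broken replication relationship.
    Remove-VMReplication -VMName "Windows Essentials 2012 R2"
    # Re-enable replication to the secondary over Kerberos/HTTP (port 80).
    Enable-VMReplication -VMName "Windows Essentials 2012 R2" -ReplicaServerName "secondary.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
    # Kick off the initial copy immediately.
    Start-VMInitialReplication -VMName "Windows Essentials 2012 R2"
    # Extended replication to the tertiary server is then re-enabled the
    # same way, by running Enable-VMReplication on the secondary against
    # the replica copy of the VM.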

  • Hyper-V Replica for multiple sites

    Hi
    I support 3 sites and would like two of them running Hyper-V to replicate back to the same replica site and server. Is it possible to have two Hyper-V sites replicate back to the same server based at another location, or do they each require their
    own individual replica server?
    Thanks in advance
    Shane

    Let us take 3 sites: A, B, C.
    1) A -> C and B -> C:  this is a supported scenario. C is the replica site for primary site A and primary site B. This is typically a hoster scenario, where C is the hoster with A and B being the customers replicating to the hoster.
    This is supported in Windows Server 2012 and Windows Server 2012 R2.
    2) A -> B -> C:   this is also supported, and is called "Extended Replication". Both B and C are replica sites. This is supported only if B and C are on Windows Server 2012 R2 (site A can be on Windows Server 2012).
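    In PowerShell terms, scenario 2 might be wired up like this sketch (all names and the port are placeholders; the second Enable-VMReplication runs on server B against the replica copy of the VM):
    # On primary server A: replicate VM1 to replica server B.
    Enable-VMReplication -VMName "VM1" -ReplicaServerName "serverB.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "VM1"
    # On replica server B: extend the replica of VM1 on to server C.
    # This is what makes it "Extended Replication"; B and C must be 2012 R2.
    Enable-VMReplication -VMName "VM1" -ReplicaServerName "serverC.contoso.com" -ReplicaServerPort 80 -AuthenticationType Kerberos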

  • Urgent - API to update billing rate

    Hi,
    I need to update billing rates for contracts and coverages. Can anyone please let me know the API that I can use to do so? I need to get this done real soon, so a faster response is greatly appreciated (don't mean to be pushy :-))
    Thanks,
    Alka.

    Hi Arka,
    Thanks for the headsup, however i have another problem with one of our client using Service contracts for Extended warranty.
    They want to create a credit memo from the existing Extended warranty line, but the line is fully billed. This scenario is applicable when the customer request a new billing procedure or discount after they had receive the invoice. Then they will create again a new line on the same contract to rebill the negotiated amount.
    Is this a right practice?
    One more thing, when query any Invoiced Extended warranty contracts and open for updated, when I click the reprice button, it will create a new billing schedule for the covered product billing, even if i'm not changing any price attribute. We are using 11.5.9.
    Thanks,
    Ether.

  • Replace Advanced Replication with Streams

    I posted this in the Streams forum but got no opinions there, so I'm hoping for some insights here.
    We have an existing multi-master replication application with 6 worldwide locations running on 9i. All sites have all data, and we have existing rules, via our business process, for conflict detection and resolution. The system works pretty well, with no major issues considering the complexity of the architecture. We are considering moving to 10g or 11g Streams. I know Oracle has said Streams is the future, not Advanced Replication, but my question is: is Streams up to the task as a replacement for this scenario? I have heard some rumblings that this many sites is too many and Streams won't work (I heard the same thing years ago, prior to the successful deployment of the current architecture). We really do need this number of sites, so I would be interested in any thoughts on doing this. Also, is anyone out there in production with 3 or more Streams sites in a multi-master type setup?


  • Error accessing the page of the SOA Composer

    Hello friends,
    I have a problem when I try to load the SOA Composer page.
    I tried changing the target of the application (to the Admin Server); after making this change, the SOA Composer page loads, but when I enter credentials and click the login button, the page comes back with the message:
    Error 500 - Internal Server Error
    From RFC 2068 Hypertext Transfer Protocol - HTTP/1.1:
    10.5.1 500 Internal Server Error
    The server encountered an unexpected condition Which it prevented us from fulfilling the request.
    - When the application is deployed to the cluster and you try to access the SOA Composer page, the page shows:
    Error 500 - Internal Server Error
    From RFC 2068 Hypertext Transfer Protocol - HTTP/1.1:
    10.5.1 500 Internal Server Error
    The server encountered an unexpected condition Which it prevented us from fulfilling the request.
    Log:
    <May 22, 2013 9:38:41 AM CDT> <Error> <Cluster> <BEA-003111> <No channel exists for replication calls for cluster soa_Cluster>
    <May 22, 2013 9:38:41 AM CDT> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: weblogic.cluster.replication.ReplicationManagerServerRef@
    10, implementation: 'weblogic.cluster.replication.ReplicationManager@32101bc1', oid: '16', implementationClassName: 'weblogic.cluster.replication.Replicatio
    nManager'
    java.lang.SecurityException: Incorrect channel used for replication.
    java.lang.SecurityException: Incorrect channel used for replication
    at weblogic.cluster.replication.ReplicationManagerServerRef.checkChannelPrivileges(ReplicationManagerServerRef.java:108)
    at weblogic.cluster.replication.ReplicationManagerServerRef.checkPriviledges(ReplicationManagerServerRef.java:83)
    at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:314)
    at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:944)
    at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:1021)
    Truncated. see log file for complete stacktrace
    >
    <May 22, 2013 9:38:41 AM CDT> <Error> <Cluster> <BEA-003111> <No channel exists for replication calls for cluster soa_Cluster>
    <May 22, 2013 9:38:41 AM CDT> <Warning> <RMI> <BEA-080003> <RuntimeException thrown by rmi server: weblogic.cluster.replication.ReplicationManagerServerRef@
    10, implementation: 'weblogic.cluster.replication.ReplicationManager@32101bc1', oid: '16', implementationClassName: 'weblogic.cluster.replication.Replicatio
    nManager'
    java.lang.SecurityException: Incorrect channel used for replication.
    java.lang.SecurityException: Incorrect channel used for replication
    at weblogic.cluster.replication.ReplicationManagerServerRef.checkChannelPrivileges(ReplicationManagerServerRef.java:108)
    at weblogic.cluster.replication.ReplicationManagerServerRef.checkPriviledges(ReplicationManagerServerRef.java:83)
    at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:314)
    at weblogic.rmi.internal.BasicServerRef.dispatch(BasicServerRef.java:944)
    at weblogic.rjvm.RJVMImpl.dispatch(RJVMImpl.java:1021)
    Truncated. see log file for complete stacktrace
    Any recommendation to fix this problem? I am trying to create and retrieve the DVMs that I have in my SOA 11g (11.1.1.3) environment.
    thanks

    Actually, that's exactly what we're doing.
    Later parts of the code have to look at the fill color of each item, check its type (gradient vs. spot) and compare it to other colors. In order to avoid a series of 'if' statements every time, we have made a function that takes each page item and returns a path item from it, so we can work with that within the rest of the code (it's easier to write all of these comparisons if you're always expecting to get a path item, and the first path item of every compound path has properties that apply to the entire compound path). SO... that function looks like this:
    function getLeadItem(glPage) { // glPage is the page item
        if (glPage.typename == 'CompoundPathItem') {
            // Return the first path item of the compound path (...but this does not work)
            return glPage.pathItems[0];
        } else {
            // A normal page item is returned as-is
            return glPage;
        }
    }
    For all of the page items that are regular path items, the code later on looks like this:
    if (item.fillColor == '[GradientColor]') {
        selectedColor = doc.activeLayer.pageItems[y].fillColor.gradient;
    }
    But any of the compound path ones break, which is why we made the function earlier, so that 'item' would ALWAYS be a path item, and not a compound one.
