Return to Parent retaining Cache

Hello,
I have a requirement: when I click a link, it should open a new page while keeping the current page as it is (I have done this using "Target Frame = _blank").
In the second page I run a query and fetch one value; it is always a single value.
This is similar to an LOV (on click, open a new page, run the query, and return the value to an item on the parent page).
Problem:
Now I want to return this value to the first page (the parent page from which the second page was called).
If I use forwardImmediately it opens a new page; instead I want it to return to the first page.
The IMPORTANT point is that if the first page reloads, I will lose the data that has already been entered.
How can I do this?
Please help.

Keerthi,
I'm facing a problem.
1. PAGE A --> PAGE B
on click "find" button, page A calls page B. some query is done, value is fetched.
page A is retained and page B opens in new tab.(using Target Frame = _blank)
2. PAGE B --> PAGE A
On click of "select" button, value needs to be sent to page A.
Problem:
If I fill in the Destination URI and send the data, the problem is that it opens page A again (so there are two copies of page A), and the TextInputBean value is lost in the new page.
How can I send the value to the actual page A (the one that called page B)?
Please help.
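A rough sketch (not from this thread) of one common workaround: since page B was opened with Target Frame = _blank, it can hand the value back to its opener with a little JavaScript and then close itself, so page A is never re-rendered and keeps whatever was typed into it. The controller, button, item and field names below ("PageBCO", "SelectBtn", "ResultItem", "ParentItem") and the package are hypothetical, and putJavaScriptFunction is only one way to get the script onto the page; treat this as a sketch, not the definitive OAF answer.

    package xxcust.oracle.apps.xx.webui;  // hypothetical package

    import oracle.apps.fnd.framework.webui.OAControllerImpl;
    import oracle.apps.fnd.framework.webui.OAPageContext;
    import oracle.apps.fnd.framework.webui.beans.OAWebBean;

    public class PageBCO extends OAControllerImpl
    {
        public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
        {
            super.processFormRequest(pageContext, webBean);
            if (pageContext.getParameter("SelectBtn") != null)
            {
                // the single value fetched by the query on page B
                String value = pageContext.getParameter("ResultItem");

                // register a script that copies the value into the opener (page A)
                // and closes page B; page A itself is never reloaded, so the data
                // already entered there is preserved
                // (sketch: assumes the value needs no JavaScript escaping)
                String js = "function returnToParent() {"
                          + "  window.opener.document.getElementById('ParentItem').value = '" + value + "';"
                          + "  window.close();"
                          + "}";
                pageContext.putJavaScriptFunction("returnToParent", js);
                // invoke returnToParent() from the rendered page, e.g. via the body onLoad
            }
        }
    }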

Similar Messages

  • How can I exit a collection window and return to parent folder?

    I would "love" to have a place where I could ask the simple questions like this that come up and seem so simple to those who know, they don't mention the details I need like how to close a working collection window and move back to the parent folder of that collection.
    I've been stuck for two days (it happened before) in a window (library module) with a collection of 8 photos.  I cannot find any way to close that window or go back to the library collection of hundreds of photos from which it was chosen.  It's very frustrating to be stuck in this window, perhaps forever?
    How do I close these windows.  Isn't there a "back" button?
    I started in Photoshop 3, then 5, then 7, then stopped renewing due to the difficulty of learning PS until Adobe CC became available with its many tutorials. It was great learning how to make a collection, but the tutorials didn't mention how to get back to the main folder from which the collection was chosen. So again, I'm stuck at an even lower level of functioning than with Photoshop alone.
    I had to copy a photo and open it separately in Photoshop just to be able to work on it. Yesterday I could "Edit in Photoshop CC" with a right-click. Today, it won't even do that.
    I suspect now two things are working to prevent me from working properly on my photos but have no clue how to fix this.

    In the Library module, left-hand panel, there are different sections or tabs. Collections is one of those sections near the bottom of that panel. If you scroll up in that panel you should see other sections, one of which is "Folders". If that isn't showing, then right-click on the Collections section heading and make sure there are checkmarks next to the sections that you want displayed. To return to the folder you need to go to the Folders section and find the folder in your list.
    Another way to return to a folder is to click in the area in this illustration:
    Doing so will present a list of folders that you have recently been inside of. You can click on one of those folders and return directly to it.

  • SubCommunity Navigation (return to parent)

    (Version: Foundation G6.) I have created a customized header using adaptive tags (using the provided header.html as a guide). One thing that seems to be missing from that guide is the button that shows up in the default navigation and allows the user to return to the parent community (it appears next to the "Related Communities" tab and has an image of an upward-pointing arrow).
    What do I have to do to add that functionality to my custom navigation?
    Thanks.

    I see what you mean. Unfortunately this is not possible without customizing the tag code or some JavaScript hacking. Each menu tab has a unique ID, so you could add an event handler on the parent community tag and set the action to redirect to the parent community.
    The goal for the adaptive navigation tags was to be generic navigation building blocks, while the horizontal menu in 5.x was made for specific use cases; in the conversion, the go-to-parent-community button was removed.

  • ExecuteWithParams is not returning the rows from cache

    Hi,
    We have two VOs based on the same entity. Whenever a new row is inserted in one VO and we do executeWithParams on the other VO, it does not return the uncommitted rows.
    I replaced executeWithParams with a custom method in the AM in which I execute the query after applying the view criteria and setting the query mode. That does return the uncommitted rows.
    The JDeveloper version I am using is 11.1.1.7.0.
    ExecuteWithParams used to work with uncommitted data in 11.1.1.6.0.
    Do we need to raise a bug against the framework, or am I missing something?
    Edited by: sharavnkumar_malla on Oct 18, 2012 1:02 AM

    Still the same answer, even after you updated the question. We (the public) don't have 11.1.1.7, know nothing about 11.1.1.7, and cannot help you with 11.1.1.7. Questions about internal builds belong on internal forums.
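    For reference, a minimal sketch (not from the thread) of the kind of application-module workaround the poster describes: apply the view criteria, widen the query mode so uncommitted entity rows are scanned alongside the database result, then execute. The VO instance, criteria and bind-variable names are hypothetical.
        import oracle.jbo.ViewCriteria;
        import oracle.jbo.ViewObject;
        import oracle.jbo.server.ApplicationModuleImpl;
        import oracle.jbo.server.ViewObjectImpl;

        public class MyAMImpl extends ApplicationModuleImpl
        {
            public void queryIncludingNewRows(String deptId)
            {
                ViewObjectImpl vo = (ViewObjectImpl) findViewObject("EmpVO2");                // hypothetical VO instance
                ViewCriteria vc = vo.getViewCriteriaManager().getViewCriteria("ByDeptVC");    // hypothetical criteria
                vo.applyViewCriteria(vc);
                vo.setNamedWhereClauseParam("pDeptId", deptId);                               // hypothetical bind variable

                // scan both the database tables and not-yet-committed entity rows
                vo.setQueryMode(ViewObject.QUERY_MODE_SCAN_DATABASE_TABLES
                                | ViewObject.QUERY_MODE_SCAN_ENTITY_ROWS);
                vo.executeQuery();
            }
        }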

  • Return to parent page in oaf

    Hi All,
    I have created a custom OAF page and linked it with 3 standard OAF pages (let's say Standard-Page1, Standard-Page2, Standard-Page3) in such a way that whenever I click the GO button in any of those standard pages it navigates to my custom page.
    As of now, when I click the BACK button in the custom page it always navigates back to Standard-Page1 (because I have given the Page1 path in pageContext.forwardImmediately), even if I opened my custom page from Standard-Page2 or 3.
    I need to understand what I should do to navigate back to the exact standard page from which the custom page was opened.
    Any help on this would be greatly appreciated.
    Many thanks in advance!
    Kind Regards,
    Myvizhi

    Please ask OAF questions on the {forum:id=210}
    Timo

  • ADF 11g Context - the pageFlowScope cache is not clearing

    Hi!
    I just noticed one behavior that I find unexplainable to me:
    1. Using ADF Controller
    2. Having one page (p1) on unbounded taskflow and
    3. One bounded taskflow (btf1) with one page and return activity with default property settings.
    Now, entering into bounded taskflow creates "oracle.adf.controller.pageFlowScope.xxxxxxxxxx_y" entries in SessionScope (observable via ADF Data panel when breakpoint is on the page / bounded taskflow).
    The problem is that the PageFlowScope of the bounded taskflow is not released/cleared from SessionScope on return to the parent (unbounded) taskflow (note: a regular return activity is used). Thus, after intensive application usage, the SessionScope fills with orphaned PageFlowScope entries. The PageFlowScope map (with user-defined taskflow parameters/variables) is emptied on return, but the PageFlowScope object itself is still preserved.
    Is this a bug or "by design" behavior? In cases of very long and intensive user activity, I noticed a very long list of orphaned PageFlowScopes, which in turn produces memory as well as CPU overhead in managing the user session on the server. I stress-tested a simple app, and the performance hit for a thousand concurrent users with 300 taskflow entry-return cycles is significant. Memory rises linearly, but CPU time rises exponentially (my guess is that searching the large PageFlowScope cache is taking the toll).
    Has anyone else had a similar experience? Or an explanation? Or a pointer to where it is possible to find more info on the internal ADF Controller design?
    Regards,
    PaKo

    Hi,
    if the pageFlowScope map is preserved on task flow exit, then this is a bug. Can you file it?
    Frank

  • Backing Map Access : Cross-cache joins

    Hi,
    I have been experimenting with cross-cache joins using Entry Processors in Coherence 3.7.1.
    (I have already sent a query to Dave Felcey regarding this - I will post any response from him here - but I just wondered if anyone else has had the same problem)
    h3. Scenario
    A simplified version of the problem is:
    We have two NamedCaches, one called "PARENT_CACHE" and one called "CHILD_CACHE"
    Each cache stores the following types of object respectively...
    class Parent implements PortableObject {
        long id;
        String description;
    }

    class Child implements PortableObject {
        long id;
        long parentId;
        String description;
    }
    I want an entry processor that I can send to an instance of "Parent" and that will return the "Parent" object together with its associated "Child" objects. The reason is that I do not want to make two out-of-process calls (one to get the parent and one to get its children - in the real world we will need to get several different child types and we do not want to make one call per type), for example...
    class ExampleEntryProcessor extends AbstractProcessor implements PortableObject {
        public ParentAndChildrenResult process(Entry entry) { ... }
    }
    I wrote an implementation of this based on Ben Stopfold's blog (see here - particularly the post by "Jonathan Knight")
    So I thought I needed something like this...
         public ParentAndChildrenResult process(Entry entry) {
              ParentAndChildrenResult result = new ParentAndChildrenResult();
              Parent parent = (Parent) entry.getValue();
              result.setParent(parent);
              Filter parentIdFilter = new EqualsFilter(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), parent.getId());
              BinaryEntry binaryEntry = (BinaryEntry) entry;
              Set<java.util.Map.Entry> childEntrySet = queryBackingMap("CHILD_CACHE", parentIdFilter, binaryEntry);
              Converter valueUpConverter = binaryEntry.getContext().getValueFromInternalConverter();
              for (java.util.Map.Entry childEntry : childEntrySet) {
                   result.addChild((Child) valueUpConverter.convert(childEntry.getValue()));
              }
              return result;
         }

         public Set<Map.Entry> queryBackingMap(String nameOfCacheToSearch, Filter filter, BinaryEntry entry) {
              BackingMapContext backingMapContext = entry.getContext().getBackingMapContext(nameOfCacheToSearch);
              Map indexMap = backingMapContext.getIndexMap();
              return InvocableMapHelper.query(backingMapContext.getBackingMap(), indexMap, filter, true, false, null);
         }
    I set up key association so I can ensure that the child objects are on the same node as the parent; the keys for each cache look like this...
    class Key implements KeyAssociation, PortableObject {
        private long id;
        private long associatedId;

        public Key() {
        }

        public Key(Parent parent) {
            this.id = parent.getId();
            this.associatedId = parent.getId();
        }

        public Key(Child child) {
            this.id = child.getId();
            this.associatedId = child.getParentId();
        }

        public Object getAssociatedKey() {
            return associatedId;
        }
    }
    When I send this entry processor to a parent object, I am getting the following exception when the "InvocableMapHelper.query" method is called...
    +"Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry"+
    I was not expecting this as our cluster is POF enabled and I thought that the backing maps always contained BinaryEntries.
    Has anyone else had a similar problem? Has anyone found any simple examples of how to do this anywhere on the web that work?
    Once I figure out how to get this to work, I want to post the solution somewhere (probably here) because there are bound to be other people who want to do something similar to this.
    Thanks in advance,
    -Bret

    Forgot the output...
    STORAGE NODE OUTPUT
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
    2012-01-12 18:38:09.860/0.766 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-01-12 18:38:10.016/0.922 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:38:10.344/1.250 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
    2012-01-12 18:38:23.610/14.516 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8088 using SystemSocketProvider
    2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "LOCAL" with Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) UID=0xAC17001A00000134D2FFC167532F1F98
    2012-01-12 18:38:54.454/45.360 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
    WellKnownAddressList(Size=1,
    WKA{Address=172.23.0.26, Port=8088}
    MasterMemberSet(
    ThisMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-01-12 18:38:54.454|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[]}
    IpMonitor{AddressListSize=0}
    2012-01-12 18:38:54.501/45.407 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:38:54.516/45.422 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
    2012-01-12 18:38:54.579/45.485 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
    2012-01-12 18:38:54.876/45.782 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
    2012-01-12 18:38:54.891/45.797 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:InvocationService, member=1): Service InvocationService joined the cluster with senior service member 1
    2012-01-12 18:38:54.907/45.813 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=1):
    Services
    ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1}
    InvocationService{Name=Management, State=(SERVICE_STARTED), Id=1, Version=3.1, OldestMemberId=1}
    PartitionedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    InvocationService{Name=InvocationService, State=(SERVICE_STARTED), Id=3, Version=3.1, OldestMemberId=1}
    Started DefaultCacheServer...
    2012-01-12 18:39:03.438/54.344 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) joined Cluster with senior member 1
    2012-01-12 18:39:03.610/54.516 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service Management with senior member 1
    2012-01-12 18:39:03.907/54.813 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 joined Service DistributedCache with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): TcpRing disconnected from Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) due to a peer departure; removing the member.
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service Management with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member 2 left service DistributedCache with senior member 1
    2012-01-12 18:39:04.032/54.938 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=1): Member(Id=2, Timestamp=2012-01-12 18:39:04.032, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest) left Cluster with senior member 1
    PROCESS NODE OUTPUT
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/tangosol-coherence.xml"
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/tangosol-coherence-override.xml"
    2012-01-12 18:39:02.266/0.328 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
    Oracle Coherence Version 3.7.1.0 Build 27797
    Grid Edition: Development mode
    Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
    2012-01-12 18:39:02.407/0.469 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/coherence-cache-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:39:02.501/0.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded Reporter configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/reports/report-group.xml"
    2012-01-12 18:39:03.063/1.125 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /172.23.0.26:8090 using SystemSocketProvider
    2012-01-12 18:39:03.438/1.500 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "LOCAL" with senior Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Member 1 joined Service InvocationService with senior member 1
    2012-01-12 18:39:03.485/1.547 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=LOCAL
    WellKnownAddressList(Size=1,
    WKA{Address=172.23.0.26, Port=8088}
    MasterMemberSet(
    ThisMember=Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
    OldestMember=Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    ActualMemberSet=MemberSet(Size=2
    Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)
    Member(Id=2, Timestamp=2012-01-12 18:39:03.274, Address=172.23.0.26:8090, MachineId=21295, Location=site:,machine:J67LQ2J,process:14408, Role=TestBackingmapMainRunTheTest)
    MemberId|ServiceVersion|ServiceJoined|MemberState
    1|3.7.1|2012-01-12 18:38:23.719|JOINED,
    2|3.7.1|2012-01-12 18:39:03.477|JOINED
    RecycleMillis=1200000
    RecycleSet=MemberSet(Size=0
    TcpRing{Connections=[1]}
    IpMonitor{AddressListSize=0}
    2012-01-12 18:39:03.501/1.563 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded POF configuration from "file:/C:/dev2/tech-trading-workspace/backing-map-access/bin/pof-config.xml"; this document does not refer to any schema definition and has not been validated.
    2012-01-12 18:39:03.532/1.594 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=2): Loaded included POF configuration from "jar:file:/C:/users/vclondon/.m2/repository/com/tangosol/coherence/3.7.1.0b27797/coherence-3.7.1.0b27797.jar!/coherence-pof-config.xml"
    2012-01-12 18:39:03.594/1.656 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
    2012-01-12 18:39:03.891/1.953 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
    Exception in thread "main" Portable(com.tangosol.util.WrapperException): (Wrapped: Failed request execution for DistributedCache service on Member(Id=1, Timestamp=2012-01-12 18:38:23.719, Address=172.23.0.26:8088, MachineId=21295, Location=site:,machine:J67LQ2J,process:13600, Role=TestBackingmapStartStorageNode)) PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.tagException(Grid.CDB:36)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:68)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    Caused by: Portable(java.lang.UnsupportedOperationException): PofExtractor must be used with POF-encoded Binary entries; the Map Entry is not a BinaryEntry
    at com.tangosol.util.extractor.PofExtractor.extractInternal(PofExtractor.java:175)
    at com.tangosol.util.extractor.PofExtractor.extractFromEntry(PofExtractor.java:146)
    at com.tangosol.util.InvocableMapHelper.extractFromEntry(InvocableMapHelper.java:315)
    at com.tangosol.util.SimpleMapEntry.extract(SimpleMapEntry.java:168)
    at com.tangosol.util.filter.ExtractorFilter.evaluateEntry(ExtractorFilter.java:93)
    at com.tangosol.util.InvocableMapHelper.evaluateEntry(InvocableMapHelper.java:262)
    at com.tangosol.util.InvocableMapHelper.query(InvocableMapHelper.java:452)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.queryBackingMap(GetParentAndChildrenEntryProcessor.java:60)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:47)
    at test.backingmap.main.GetParentAndChildrenEntryProcessor.process(GetParentAndChildrenEntryProcessor.java:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$Storage.invoke(PartitionedCache.CDB:10)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onInvokeRequest(PartitionedCache.CDB:52)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$InvokeRequest.run(PartitionedCache.CDB:1)
    at com.tangosol.coherence.component.net.message.requestMessage.DistributedCacheKeyRequest.onReceived(DistributedCacheKeyRequest.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    at <process boundary>
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:57)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.PortableException.readExternal(PortableException.java:150)
    at com.tangosol.io.pof.ThrowablePofSerializer.deserialize(ThrowablePofSerializer.java:59)
    at com.tangosol.io.pof.PofBufferReader.readAsObject(PofBufferReader.java:3316)
    at com.tangosol.io.pof.PofBufferReader.readObject(PofBufferReader.java:2604)
    at com.tangosol.io.pof.ConfigurablePofContext.deserialize(ConfigurablePofContext.java:368)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.net.message.SimpleResponse.read(SimpleResponse.CDB:6)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:19)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.PartitionedService.onNotify(PartitionedService.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache.onNotify(PartitionedCache.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
    2012-01-12 18:39:04.016/2.078 Oracle Coherence GE 3.7.1.0 <D4> (thread=ShutdownHook, member=2): ShutdownHook: stopping cluster node
    Ta,
    -Bret
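    An aside, not from the thread: judging from the stack trace (ExtractorFilter.evaluateEntry -> SimpleMapEntry.extract), InvocableMapHelper.query appears to be evaluating the filter against plain SimpleMapEntry objects rather than satisfying it from the index map, which would explain why the PofExtractor complains about non-Binary entries. One thing worth trying (an assumption, not a confirmed fix) is to add a matching index on the child cache so the EqualsFilter can be answered from the index instead. A minimal sketch, reusing the Child class and field constant from the post:
        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.extractor.PofExtractor;

        public class AddChildParentIdIndex
        {
            public static void main(String[] args)
            {
                // index CHILD_CACHE on parentId with the same PofExtractor used in the filter,
                // so the backing-map query can be resolved via the index map
                NamedCache childCache = CacheFactory.getCache("CHILD_CACHE");
                childCache.addIndex(new PofExtractor(Long.class, Child.PARENT_ID_FIELD), false, null);
            }
        }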

  • Textarea formatting for carriage returns

    Hi all,
    I've created a websheet that includes a textarea column and have found that while any text entered in edit mode displays its carriage returns, as soon as the data has been saved and the field returns to read-only mode the carriage returns are not retained . . . all of the content is displayed in one continuous string. Is there a way to preserve the line breaks in read-only mode?
    Note: Using APEX v4
    Thanks.
    Edited by: arrowjf on Jan 6, 2012 1:18 PM

    * The TextArea control uses UNIX-style line endings, which
    means that text data containing Windows-style carriage-return
    line-feed (that is, \r\n) formatting for new lines contain extra
    line breaks. You can use String.replace() with a regular expression
    to convert the text to UNIX-style line endings, as the following
    example shows:
    private static const windowsCRLF:RegExp = /\r\n/gm;
    myTextString = myTextString.replace(windowsCRLF, "\n");
    http://www.adobe.com/support/documentation/en/flex/2/releasenotes_flex2_sdk.html

  • How can I figure out the parent of a thread???

    Hey guys!!!
    I have a tiny question!!
    I have a thread which could be run by 1 of 2 different parents.
    So my idea was to set a variable, for example int a, and initialize it with 1 for the first parent and with 2 for the second; then in the thread I do an if...else, and depending on whether a is 1 or 2 I know who the parent is.
    So what I wanna know is:
    Is there a method that can figure out which of those two parents started the thread, without using a variable?
    cheers,
    Tom

    You could do that, or, if both parents implement some
    interface (call it parentInterface), have them pass themselves in and have the thread store that as part of its context.
    Then provide a method, call it getParent, which returns that parent object - and even a getParentName.
    interface parentInterface
    {
      public String getName();
    } // parentInterface

    class myThread extends Thread
    {
      parentInterface pInterface = null;

      public myThread(parentInterface pi)
      {
        pInterface = pi;
      } // myThread

      public String getParentName()
      {
        return pInterface.getName();
      } // getParentName
    } // myThread

    class parent1 implements parentInterface
    {
      public String getName() { return "parent1"; }
    } // parent1

    class parent2 implements parentInterface
    {
      public String getName() { return "parent2"; }
    } // parent2
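    A small usage sketch (the Main class is hypothetical) following the pattern above: the parent hands itself to the thread, and the thread can later say who created it without any shared flag variable.
    public class Main
    {
      public static void main(String[] args)
      {
        parentInterface p = new parent1();
        myThread t = new myThread(p);              // the thread remembers who created it
        t.start();
        System.out.println("started by " + t.getParentName());
      }
    }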

  • URGENT Bridge and cache error photo corrupted NEED HELP

    We were going to edit a photo, and upon opening it an error came up that said: "Bridge encountered a problem and is unable to read the cache. Please try purging the central cache in cache preferences to correct the situation."
    We solved the cache problem by purging it, but the photo is corrupted! Furthermore, the photo has turned red and 1/3 of it has mysteriously turned black! It only briefly returns to normal after the cache problem in Adobe Bridge - it looks normal for just a split second! WE NEED THIS PHOTO RESTORED TO ITS ORIGINAL CONDITION BEFORE THE CACHE ERROR.
    Thank you for help

    Your photo may be lost, I'm sorry to say.
    Check your Windows System Event log and look for disk errors.  Disks do fail, and if there have been unrecoverable errors you'll see them in the event log.
    A failing disk is not something to be left alone.  I hope you have a recent backup - if not, make one right away, then replace any failing disk drive ASAP, or you chance losing more data.  I always recommend high-reliability enterprise class disk drives such as Western Digital RE4.  They're not that much more expensive, and lost data can be worth a lot.
    -Noel

  • HTTP Receiver interface returns with error code 110

    Hi All,
    We are posting a document from XI to an external server as an HTTPS request.
    We are able to successfully post the request to the external server using HTTP destination as the address type, but not able to post successfully with URL as the address type.
    Here are the details -
    We are able to post the HTTPS request successfully to the external server using HTTP destination as the address type in the HTTP receiver adapter setup.
    When we set the address type to HTTP destination, we need to provide the following details -
    In SM59, Connection type G
    Target Host : host name (without "https://" as prefix)
    Service No : 443
    Path prefix : query string
    SSL : Active
    Certificate : Select the certificate from the client certificate list.
    We can post the request to the external server using URL as the address type in the HTTP receiver adapter setup.
    When we set the address type to URL, we need to provide the following details -
    Address type : URL Address
    Target Host : host name (without "https://" as prefix)
    Service Number : 443 ( HTTPS port setup on XI)
    Path : query string.
    When we post the same request as we did with HTTP destination as the address type, we get a failure HTTP response code 110.
    Please find the details about the return code -
    If a cache returns a stale response, either because of a max-stale directive on a request, or because the cache is configured to override the expiration time of a response, the cache MUST attach a Warning header to the stale response, using Warning 110 (Response is stale).
    110 Response is stale
    MUST be included whenever the returned response is stale.
    Please find the error message from SXMB_MONI
    <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Call Adapter
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>XIAdapter</SAP:Category>
      <SAP:Code area="PLAINHTTP_ADAPTER">ATTRIBUTE_CLIENT</SAP:Code>
      <SAP:P1>110</SAP:P1>
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:ApplicationFaultMessage namespace="" />
      <SAP:Stack>HTTP client code 110 reason</SAP:Stack>
      <SAP:Retry>N</SAP:Retry>
      </SAP:Error>
    Please let me know if someone has faced this issue.
    Regards,
    Reddy
    Edited by: Nanda kishore Reddy Narapu Reddy on Mar 11, 2008 12:35 PM

    Hi All,
    Can someone confirm that we can use HTTPS with URL address as the address type in the HTTP receiver adapter setup?
    I can confirm that using HTTP destination as the address type in the HTTP receiver adapter setup we can achieve HTTPS communication with the external server.
    If someone has tried HTTPS communication through the HTTP receiver adapter with URL address as the address type, please guide me on what steps need to be done.
    Address type is a parameter in the HTTP adapter setup.
    Regards,
    Reddy

  • Bridge CS6 constantly re-caching?

    In Bridge CS6 (64-bit on Windows 7) I noticed that each time I visit a folder I see "activity" in the lower left corner of the window. I see the spinning circle, and reports that thumbnails and previews are being extracted.
    This happens on folders that have been previously visited and fully cached. In fact, it happens every time you return to the Bridge window after using a different window (Explorer, Browser, etc.), even though Bridge was left pointing at the same folder. If you use Bridge to open a file in Photoshop and then return to Bridge, re-caching occurs.
    So I did some testing and found the cause. It appears that Bridge will constantly re-cache layered tif files. Flat tif files, DNG files, CR2 files, jpegs are not re-cached. Strangely, layered PSD files are not re-cached either. Only layered tif files. And more strangely, not all layered tif files, but most. I haven't figured out that pattern yet. Re-caching of layered tif files occurs regardless of whether they are 8bit or 16bit, compressed or not compressed.
    This is a little bothersome when you have folders with a large number of layered tif files. The constant re-caching process can take a lot of time, and is using up CPU and I/O bandwidth for no apparent reason. It's generating the same cache thumbnails over and over, both the 1024 and 256 versions.
    I can't find any setting within Bridge that controls this (make it stop). So, questions are:
    1. Why is this? Is it a feature or a bug?
    2. Any way to make it stop?
    3. Can others verify it happens on their system (Windows or Mac)?

    I suspect it may be unintentional, but short of being a real bug.
    If it worked before in CS5 it might well qualify as a bug. Anyway, since you discovered this behavior you should report it as a bug.
    Personally I only use TIFF for saving in ACR when needed for other applications that can't handle DNG and PSD very well.
    Having a bunch of TIFFs for this purpose (I did not try creating layers in them), enabling and disabling TIFF support in the Camera Raw prefs starts the caching on my machine for those files. So be prepared for re-caching (not ongoing, just until it finishes) when switching the support on and off!
    When it comes to layers I only use PSD; in fact, I save every file as a PSD as the basis for my archive. I'm not saying TIFF is disappearing, but it is not as popular a format as it used to be in the early days, especially when it comes to layers.
    And if you have enabled TIFF support and open the files in ACR choosing the option to open as a Smart Object, they can be saved in a layered PSD file. As long as they are a Smart Object they can get back to ACR.

  • XSLT and Java lookup cache

    Hi,
    I'm trying the "Easy RFC lookup from XSLT mappings using a Java helper class" article and I'm getting a weird problem.
    The result of the RFC lookup called inside the Java class is kept in a kind of cache, and I always get the same result regardless of the parameters I use in subsequent calls.
    Only after running a Complete Cache Refresh (SXI_CACHE) do I get a new result from the lookup.
    If I run it with the Interface Mapping Test option it works fine. However, when I call it from my scenario (SOAP Adapter Sender), the first result of the lookup keeps being returned until a forced cache refresh.
    Any ideas?
    Thank you,
    Fabiano.

    Hello Fabiano,
    I had the same problem as you.
    The main problem is that with the example code the request variable is created as a NodeList object. In XSLT a variable is a kind of constant and can't be changed. As the request object is empty after the first request, the program fails at the following line:
    Source source = new DOMSource(request.item(0));
    So I've created a workaround for this problem.
    In the template call I pass the request in as a parameter object:
    <xsl:with-param name="req">
    <rfc:PLM_EXPLORE_BILL_OF_MATERIAL xmlns:rfc="urn:sap-com:document:sap:rfc:functions">
      <APPLICATION>Z001</APPLICATION>
      <FLAG_NEW_EXPLOSION>X</FLAG_NEW_EXPLOSION>
      <MATERIALNUMBER><xsl:value-of select="value"/></MATERIALNUMBER>
      <PLANT>FSD0</PLANT>
      <VALIDFROM><xsl:value-of select="//Recordset/Row[name='DTM-031']/value"/></VALIDFROM>
      <BOMITEM_DATA/>
    </rfc:PLM_EXPLORE_BILL_OF_MATERIAL>
    </xsl:with-param>
    With this change the request will be provided as a String object and not as a NodeList object.
    Afterwards the RfcLookup.java has to be changed to the following:
    package com.franke.mappings;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import java.util.Map;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.Source;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import org.w3c.dom.Node;
    import org.w3c.dom.NodeList;
    import com.sap.aii.mapping.lookup.Channel;
    import com.sap.aii.mapping.api.StreamTransformationConstants;
    import com.sap.aii.mapping.api.AbstractTrace;
    import com.sap.aii.mapping.lookup.RfcAccessor;
    import com.sap.aii.mapping.lookup.LookupService;
    import com.sap.aii.mapping.lookup.XmlPayload;
    /**
     * @author Thorsten Nordholm Søbirk, AppliCon A/S
     *
     * Helper class for using the XI Lookup API with XSLT mappings for calling RFCs.
     * The class is generic in that it can be used to call any remote-enabled
     * function module in R/3. Generation of the XML request document and parsing of
     * the XML response is left to the stylesheet, where this can be done in a very
     * natural manner.
     *
     * TD:
     * Changed the class so that the request is sent as a String, because of an
     * IndexOutOfBounds exception when sending multiple requests in one XSLT mapping.
     */
    public class RfcLookup {
         /**
          * Execute RFC lookup.
          * @param request RFC request - TD: changed to String
          * @param service name of service
          * @param channelName name of communication channel
          * @param inputParam mapping parameters
          * @return Node containing RFC response
          */
         public static Node execute( String request,
                 String service,
                 String channelName,
                  Map inputParam) {
              AbstractTrace trace = (AbstractTrace) inputParam.get(StreamTransformationConstants.MAPPING_TRACE);
              Node responseNode = null;
              try {
                  // Get channel and accessor
                  Channel channel = LookupService.getChannel(service, channelName);
                  RfcAccessor accessor = LookupService.getRfcAccessor(channel);
                   // Serialise request NodeList - TD: Not needed anymore as request is String
                   /*TransformerFactory factory = TransformerFactory.newInstance();
                   Transformer transformer = factory.newTransformer();
                   Source source = new DOMSource(request.item(0));
                   ByteArrayOutputStream baos = new ByteArrayOutputStream();
                   StreamResult streamResult = new StreamResult(baos);
                   transformer.transform(source, streamResult);*/
                    // TD: Add xml header and remove linefeeds for the request string
                    request = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"+request.replaceAll("[\r\n]+", ""); 
                    // TD: Get byte Array from request String to send afterwards
                    byte[] requestBytes = request.getBytes();
                   // TD: Not used anymore as request is String
                    //byte[] requestBytes = baos.toByteArray();
                    trace.addDebugMessage("RFC Request: " + new String(requestBytes));
                    // Create input stream representing the function module request message
                    InputStream inputStream = new ByteArrayInputStream(requestBytes);
                    // Create XmlPayload
                     XmlPayload requestPayload = LookupService.getXmlPayload(inputStream);
                    // Execute lookup
                    XmlPayload responsePayload = accessor.call(requestPayload);
                    InputStream responseStream = responsePayload.getContent();
                    TeeInputStream tee = new TeeInputStream(responseStream);
                    // Create DOM tree for response
                    DocumentBuilder docBuilder =DocumentBuilderFactory.newInstance().newDocumentBuilder();
                    Document document = docBuilder.parse(tee);
                    trace.addDebugMessage("RFC Response: " + tee.getStringContent());
                    responseNode = document.getFirstChild();
              } catch (Throwable t) {
                   StringWriter sw = new StringWriter();
                   t.printStackTrace(new PrintWriter(sw));
                    trace.addWarning(sw.toString());
               }
               return responseNode;
          }
          /**
           * Helper class which collects stream input while reading.
           */
         static class TeeInputStream extends InputStream {
               private ByteArrayOutputStream baos;
               private InputStream wrappedInputStream;
               TeeInputStream(InputStream inputStream) {
                    baos = new ByteArrayOutputStream();
                     wrappedInputStream = inputStream;
                }
                /**
                 * @return stream content as String
                 */
               String getStringContent() {
                     return baos.toString();
                }
               /* (non-Javadoc)
               * @see java.io.InputStream#read()
               */
              public int read() throws IOException {
                    int r = wrappedInputStream.read();
                    baos.write(r);
                    return r;
               }
          }
     }
    Then you need to compile and upload this class and it should work.
    I hope that this helps you.
    Best regards
    Till

  • Parent-Child Unbound Transformation from Linear Structure

    Hi ,
    We need to convert a linear people structure into a parent-child relation of unbounded depth using XQuery. In detail, we have this XML schema:
    <?xml version="1.0" encoding="UTF-8"?>
    <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.org/ParentChild" xmlns:tns="http://www.example.org/ParentChild" elementFormDefault="qualified">
    <complexType name="People">
    <sequence>
    <element name="id" type="string" />
    <element name="name" type="string" />
    <element name="age" type="string" />
    <element name="parentId" type="string" minOccurs="0" />
    </sequence>
    </complexType>
    <complexType name="Peoples">
    <sequence>
    <element name="people" type="tns:People" maxOccurs="unbounded"></element>
    </sequence>
    </complexType>
    <element name="Peoples" type="tns:Peoples"></element>
    <element name="People" type="tns:People"></element>
    <complexType name="Parent">
    <sequence>
    <element name="id" type="string" />
    <element name="name" type="string" />
    <element name="age" type="string" />
    <element name="child" type="tns:Parent" minOccurs="0" maxOccurs="unbounded" />
    </sequence>
    </complexType>
    <complexType name="Parents">
    <sequence>
    <element name="Parent" type="tns:Parent" maxOccurs="unbounded" ></element>
    </sequence>
    </complexType>
    <element name="Parents" type="tns:Parents"></element>
    <element name="Parent" type="tns:Parent"></element>
    </schema>
    The input structure can appear as
    <?xml version="1.0"?>
    <ns0:Peoples xmlns:ns0="http://www.example.org/ParentChild">
    <ns0:people>
    <ns0:id>1</ns0:id>
    <ns0:name>x</ns0:name>
    <ns0:age>11</ns0:age>
    <ns0:parentId>2</ns0:parentId>
    </ns0:people>
    <ns0:people>
    <ns0:id>2</ns0:id>
    <ns0:name>y</ns0:name>
    <ns0:age>11</ns0:age>
    <ns0:parentId>3</ns0:parentId>
    </ns0:people>
    <ns0:people>
    <ns0:id>3</ns0:id>
    <ns0:name>z</ns0:name>
    <ns0:age>11</ns0:age>
    </ns0:people>
    <ns0:people>
    <ns0:id>5</ns0:id>
    <ns0:name>a</ns0:name>
    <ns0:age>11</ns0:age>
    </ns0:people>
    </ns0:Peoples>
    The response should be like below
    <?xml version="1.0"?>
    <ns0:Parents xmlns:ns0="http://www.example.org/ParentChild">
    <ns0:Parent>
    <ns0:id>3</ns0:id>
    <ns0:name>z</ns0:name>
    <ns0:age>11</ns0:age>
    <ns0:child>
    <ns0:id>2</ns0:id>
    <ns0:name>y</ns0:name>
    <ns0:age>11</ns0:age>
    <ns0:child>
    <ns0:id>1</ns0:id>
    <ns0:name>x</ns0:name>
    <ns0:age>11</ns0:age>
    </ns0:child>
    </ns0:child>
    </ns0:Parent>
    <ns0:Parent>
    <ns0:id>5</ns0:id>
    <ns0:name>a</ns0:name>
    <ns0:age>11</ns0:age>
    </ns0:Parent>
    </ns0:Parents>
    We tried the XQuery below, but it does not produce the expected result.
    (:: pragma bea:global-element-parameter parameter="$peoples" element="ns0:Peoples" location="ParentChild.xsd" ::)
    (:: pragma bea:global-element-return element="ns0:Parents" location="ParentChild.xsd" ::)
    declare namespace ns0 = "http://www.example.org/ParentChild";
    declare namespace xf = "http://tempuri.org/RecursiveParentChild/ParentChild/";
    declare function xf:ParentChild($peoples as element(ns0:Peoples))
    as element(ns0:Parents) {
    <ns0:Parents>
    let $results := <a>{
         for $people1 in $peoples/ns0:people where (not (exists ($people1/*:parentId)))
         return
         <ns0:Parent>
                                  <ns0:id>{ data($people1/ns0:id) }</ns0:id>
                        <ns0:name>{ data($people1/ns0:name) }</ns0:name>
                        <ns0:age>{ data($people1/ns0:age) }</ns0:age>
                             </ns0:Parent>
                   </a>
                   let $result1 :=
                   for $people1 in $peoples/ns0:people where (exists ($people1/*:parentId))
                   return
                        if (data($people1/ns0:parentId) = data($results/ns0:id)) then
                             <ns0:child>
                                  $people1/*
                             </ns0:child>
                        else()
                   return $result1      
    </ns0:Parents>
    declare variable $peoples as element(ns0:Peoples) external;
    xf:ParentChild($peoples)
    Has anyone tried a similar kind of XQuery? There is also the possibility that a child node appears before its parent node - how can we handle that as well?
    Any help on this is appreciated
    Regards
    Venkata Madhu

    You need a recursive function in this situation:
    declare namespace ns0 = "http://www.example.org/ParentChild";
    declare namespace xf = "http://tempuri.org/RecursiveParentChild/ParentChild/";
    declare variable $peoples as element(ns0:Peoples) external;
    declare function xf:getChildren($p as element(ns0:people)) as element(ns0:child)*
    {
      for $c in $peoples/ns0:people[ns0:parentId = $p/ns0:id]
      return <ns0:child>
               <ns0:id>{ data($c/ns0:id) }</ns0:id>
               <ns0:name>{ data($c/ns0:name) }</ns0:name>
               <ns0:age>{ data($c/ns0:age) }</ns0:age>
               { xf:getChildren($c) }
             </ns0:child>
    };
    <ns0:Parents>
    {
      for $p in $peoples/ns0:people[not(ns0:parentId)]
      return <ns0:Parent>
               <ns0:id>{ data($p/ns0:id) }</ns0:id>
               <ns0:name>{ data($p/ns0:name) }</ns0:name>
               <ns0:age>{ data($p/ns0:age) }</ns0:age>
               { xf:getChildren($p) }
             </ns0:Parent>
    }
    </ns0:Parents>

  • Filteredtreemodel, how to hide parent nodes if no children

    Hi. I have surfed and read a lot of posts and topics here regarding FilteredTreeModel. Some I found hard to understand, so I chose to use NiceGuy1's code from long ago since it's the easiest to understand.
    package test;
    import javax.swing.tree.DefaultMutableTreeNode;
    import javax.swing.tree.DefaultTreeModel;
    import javax.swing.tree.TreeNode;
    import java.util.ArrayList;
    import java.util.Enumeration;
    public class FilteredTreeModel extends DefaultTreeModel {

        private String filterText;
        private DefaultMutableTreeNode orig_root;

        public FilteredTreeModel(DefaultMutableTreeNode root) {
            super(root);
            orig_root = root;
        }

        public void setFilterText(String s) {
            this.filterText = s;
            reload();
        }

        public String getFilterText() {
            return this.filterText;
        }

        @Override
        public Object getChild(Object parent, int index) {
            return getFilteredChildren(parent).get(index);
        }

        @Override
        public int getChildCount(Object parent) {
            return getFilteredChildCount(parent);
        }

        @Override
        public int getIndexOfChild(Object parent, Object child) {
            return getFilteredChildren(parent).indexOf(child);
        }

        private int getFilteredChildCount(Object parent) {
            return getFilteredChildren(parent).size();
        }

        private ArrayList<TreeNode> getFilteredChildren(Object parent) {
            ArrayList<TreeNode> filteredChildren = new ArrayList<TreeNode>();
            DefaultMutableTreeNode parentNode = (DefaultMutableTreeNode) parent;
            for (Enumeration e = parentNode.children(); e.hasMoreElements();) {
                DefaultMutableTreeNode nextNode = (DefaultMutableTreeNode) e.nextElement();
                if (nextNode.children().hasMoreElements()) {
                    // branch node
                    filteredChildren.add(nextNode);
                } else {
                    // leaf node
                    if (filterText == null) {
                        // add right away if no filter
                        filteredChildren.add(nextNode);
                    } else {
                        // if the filter matches this leaf node, add it. If not, how do I
                        // check whether parent nodes end up with no children and hide them too?
                    }
                }
            }
            return filteredChildren;
        }
    }
    I haven't had any luck the whole day trying to make this work. It seems that once filteredChildren.add() is called, the node gets rendered by the TreeCellRenderer right away, so if I want to remove the parents (in case they have no children left under the filter), the parent nodes stay put in the JTree. Any workaround or ideas? I don't want to use a different DefaultTreeModel for this; I want to avoid that because I have lots of nodes in my JTree.

    Looks like you need this:
    for (Enumeration e = parentNode.getFilteredChildren(); e.hasMoreElements();)
    Edit: okay, that was just a quick guess. Actually you're going to get into trouble if you try to recursively suppress nodes with no children. First you suppress all the leaf nodes (because they naturally don't have any children). Then you suppress all of their parents, because now they don't have any unsuppressed children. Then you suppress all of the parents' parents, because now they don't have any unsuppressed children... and so on until you only have the root left. So you need to refine that requirement.
    But I get what you're saying. I had to do a similar thing, only I didn't use a FilteredTreeModel, I just built the TreeModel based on a selection from another tree. It's ridiculously hard to do when you don't get the nodes in order from the database, too, but that's a different problem.
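    A rough sketch (not from the thread) of one way to refine that requirement: keep a branch node only if some descendant leaf actually matches the filter, instead of hiding every node that ends up childless. The method below could be dropped into the FilteredTreeModel class above and consulted from getFilteredChildren() before adding a branch node; it assumes the filter is matched against the node's toString(), which may not be how your nodes work.
        private boolean isVisible(DefaultMutableTreeNode node) {
            if (node.isLeaf()) {
                // a leaf is visible if there is no filter or its text matches the filter
                return filterText == null
                    || node.toString().toLowerCase().contains(filterText.toLowerCase());
            }
            for (Enumeration e = node.children(); e.hasMoreElements();) {
                if (isVisible((DefaultMutableTreeNode) e.nextElement())) {
                    return true;   // at least one matching leaf somewhere below
                }
            }
            return false;          // no matching descendants, so hide this branch as well
        }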

Maybe you are looking for

  • How to count number of words in a string?

    Is it only possible by counting the number of white spaces appearing in the string?
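    For what it's worth, a minimal sketch (not from this thread) that splits on runs of whitespace instead of counting spaces by hand; the class and method names are illustrative:
        public class WordCount {
            static int countWords(String s) {
                String trimmed = s.trim();
                // an empty or blank string contains no words
                return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
            }

            public static void main(String[] args) {
                System.out.println(countWords("  how  many words\nare here? "));  // prints 5
            }
        }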

  • Hiding submit button from an Infopath form in a Sharepoint Library document

    I have a Sharepoint document library that is setup to collect submissions from a form that faces the public.  Because it faces the public it needs to have the standard/familiar submit button.  The problem is that when the form is submitted to the doc

  • Same Party as Vendor & Customer

    Dear all, Is it possible to have the same party as both vendor & Customer with a single code or some kind of interlinking? Basically i want to adjust the payments to the same party against the simultaneous sale & Purchase. This is to avoid unnecessar

  • Adding new infoobject

    Hi, I am adding a new InfoObject to an ODS and a cube that already contain data. How can I do this? I want to delete the data in both the ODS and the cube - if so, how do I delete the data in the ODS?

  • 10.5.6 update break Apache

    I have my imac at home configured as a small web-server for testing and development purposes. I run a bunch of virtual hosts, using PHP,mySQL, Apache2, the extra/httpd-hosts.conf file and the /etc/hosts file. This has all been working fine for a whil