Handling collections in OBR facts

Is it possible to pass a collection of data as input facts to OBR? How can we access individual elements in the collection when defining the rules?
For example, I want to send a list of employees to OBR and define a rule that iterates over the employees and returns something.

Hi,
It is possible to pass a collection of data to OBR using an XSD and JAXB classes.
Create an XSD with the following elements:
create an employee structure
create an employee collection structure with an employee element of type employee structure
create a message element which contains an employeeCollection element of type employee collection structure.
Here I am just giving an idea of how the XSD will look:
<xs:element name="message">
<xs:complexType>
<xs:sequence>
<xs:element name="employeeCollection" type="employeeCollectionType"/>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:complexType name="employeeCollectionType">
<xs:sequence>
<xs:element name="employee" type="employeeStructure" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="employeeStructure">
<xs:sequence>
<xs:element name="id" type="xs:int"/>
<xs:element name="name" type="xs:string"/>
</xs:sequence>
</xs:complexType>
Provide the completed XSD above in Business Rules; it will then create the JAXB classes.
1. You can get the employee collection object in BR functions like this:
java.util.List employees = message.getEmployeeCollection().getEmployee();
2. Iterate through the list above and access each individual employee's data as an object.
3. Once you integrate the business rules in BPEL, use a transform to map the employees' data into the employeeCollection element.
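The iteration in step 2 could look like the sketch below. Note this is only an illustration: the Employee class here is a hypothetical stand-in for the JAXB-generated class, and the rule condition is made up; the real class and accessor names depend on your schema.

```java
import java.util.ArrayList;
import java.util.List;

public class EmployeeRuleDemo {

    // Hypothetical stand-in for the JAXB class generated from employeeStructure
    public static class Employee {
        private final int id;
        private final String name;
        public Employee(int id, String name) { this.id = id; this.name = name; }
        public int getId() { return id; }
        public String getName() { return name; }
    }

    // Iterate over the employee collection and apply a trivial example rule
    public static List<String> namesWithIdAbove(List<Employee> employees, int threshold) {
        List<String> matches = new ArrayList<>();
        for (Employee e : employees) {   // step 2: iterate the list from the JAXB object
            if (e.getId() > threshold) { // example rule condition
                matches.add(e.getName());
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        List<Employee> employees = new ArrayList<>();
        employees.add(new Employee(1, "Alice"));
        employees.add(new Employee(2, "Bob"));
        System.out.println(namesWithIdAbove(employees, 1)); // prints [Bob]
    }
}
```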
Thanks
Seshagiri.Rayala
http://soabpel.wordpress.com

Similar Messages

  • Handling collections in Coherence cache

    hi,
    In a multithreaded environment where the large majority of method calls are read-only rather than structural changes, how does the Coherence cache handle a collection like HashMap?
    Are the read calls non-synchronized by Coherence? Is there any mechanism to handle the write calls differently from the read calls?
    Thanks.
    suvasis

    Hi Suvasis,
    Coherence caches are coherent, and use a minimal (or zero if possible) amount of synchronization for read-access.
    Coherence does support double-checked locking for read-heavy access:
    <tt>
    Object value = cache.get(key);
    if (value == null) {
        cache.lock(key, -1);
        try {
            value = cache.get(key);
            if (value == null) {
                value = {something}; // compute the missing value
                cache.put(key, value);
            }
        } finally {
            cache.unlock(key);
        }
    }
    // read-access to value
    Object x = value.getSomeAttribute();
    </tt>
    It should be noted that Coherence does not "observe" objects outside of the Coherence API calls (get/put/lock/unlock/etc). So once you "get" a local object instance from Coherence, Coherence doesn't pay attention to that local object until you explicitly "put" the modified object back into the cache.
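    For comparison only (this is plain java.util.concurrent, not the Coherence API): the same "initialize the value once, read-mostly afterwards" pattern can be expressed locally with ConcurrentHashMap, which performs the compute-and-insert atomically:

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ComputeIfAbsentDemo {
        public static void main(String[] args) {
            Map<String, String> cache = new ConcurrentHashMap<>();
            // Atomically compute and insert the value only if the key is absent,
            // so concurrent readers never see a half-initialized entry
            String value = cache.computeIfAbsent("key", k -> "computed-for-" + k);
            System.out.println(value); // prints computed-for-key
            // A second call finds the cached value and does not recompute
            String again = cache.computeIfAbsent("key", k -> "never-evaluated");
            System.out.println(again); // prints computed-for-key
        }
    }
    ```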
    Jon Purdy
    Tangosol, Inc.

  • How to handle Collections

    Hi all. I am exporting some methods as web services and found out that JAX-RPC does not have collection serializers yet. So, because I don't want to write my own, I thought I should be able to change "on the fly" (on demand), let's say, an ArrayList to an Array (which is supported in JAX-RPC) in the following way:
    private ArrayList allCriteria = new ArrayList(); // the collection I manage
    private Criteria[] criteria; // the auxiliary structure I need to use jax-rpc
    public void setCriteria(Criteria[] criteria) {
        allCriteria = new ArrayList(Arrays.asList(criteria)); // copy the array to the ArrayList
        this.criteria = criteria; // set the array
    }
    public Criteria[] getCriteria() {
        criteria = new Criteria[allCriteria.size()]; // create an array the same size as the ArrayList
        Iterator iterator = allCriteria.iterator();
        for (int i = 0; iterator.hasNext(); i++) { // iterate over the ArrayList
            criteria[i] = (Criteria) iterator.next(); // fill the array
        }
        return criteria; // return the array filled from the ArrayList
    }
    I supposed that this strategy would lead me to a solution: criteria is a valid bean, and I do some special processing inside the get/set methods to exchange info with the ArrayList. But in fact the arrays are created with the right size, yet with all elements empty!
    I don't know what's going wrong. If you have the answer, or any ideas on how to solve the problem around web services and collections, please let me know.
    Thanks in advance,
    Leonardo

    This is just a thought, but the problem lies with using Arrays.asList(Object[]). The list returned by this method actually references the input array, i.e. the list does not get a copy of the values in the input array but references them. This feature was designed so that if the underlying array is changed then these changes are reflected in the list representation.
    The problem is in the first line of your setCriteria. You are using asList with a local instance, i.e. the parameter passed in to setCriteria. The value of the local instance (the parameter) is the reference to the array, not the actual array itself. As you are maintaining a reference, the array it points to may be changed or removed somewhere else in the program, and as a result you will not get the same values from the array when you use it in getCriteria as when setCriteria was called. So you have a number of options:
    1. Use System.arraycopy() to create a copy of criteria and keep a reference in the class which has setCriteria. You then use the copied array to convert it into an ArrayList.
    private Criteria[] criteria;
    private ArrayList allCriteria = new ArrayList();
    e.g.
    public void setCriteria(Criteria[] inputCriteria) {
        this.criteria = new Criteria[inputCriteria.length];
        System.arraycopy(inputCriteria, 0, criteria, 0, inputCriteria.length);
        // At this point criteria has all the values of inputCriteria
        allCriteria = new ArrayList(Arrays.asList(criteria));
    }
    NB: I have NOT compiled this code but the general idea should work.
    OR 2. In your getCriteria you are iterating through the values in allCriteria and returning the Criteria array; could you not do this in your setCriteria instead, so that you would not have to do a System.arraycopy() but could use the input Criteria array directly?
    HTH.
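    A self-contained sketch of option 1, the copy-on-set pattern, is below. Criteria here is a minimal hypothetical bean, just enough to make the example runnable; the point is that the holder's state cannot be changed through the caller's array after setCriteria returns.

    ```java
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class CriteriaHolder {

        // Minimal stand-in for the real Criteria bean
        public static class Criteria {
            private final String name;
            public Criteria(String name) { this.name = name; }
            public String getName() { return name; }
        }

        private List<Criteria> allCriteria = new ArrayList<>();

        // Copy the incoming array so later changes to the caller's array
        // cannot affect our internal state
        public void setCriteria(Criteria[] input) {
            Criteria[] copy = new Criteria[input.length];
            System.arraycopy(input, 0, copy, 0, input.length);
            allCriteria = new ArrayList<>(Arrays.asList(copy));
        }

        // Build the array view on demand from the internal list
        public Criteria[] getCriteria() {
            return allCriteria.toArray(new Criteria[0]);
        }

        public static void main(String[] args) {
            CriteriaHolder holder = new CriteriaHolder();
            Criteria[] in = { new Criteria("a"), new Criteria("b") };
            holder.setCriteria(in);
            in[0] = null; // mutating the caller's array does not affect the holder
            System.out.println(holder.getCriteria().length);       // prints 2
            System.out.println(holder.getCriteria()[0].getName()); // prints a
        }
    }
    ```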

  • Handling Collection return values best practice

    hi;
    Over 50% of my code returns Collection implementations as return values, and those Collections contain proprietary objects that have nested Collections inside them.
    What is the best practice for dealing with such a problem when trying to expose these methods as web services?
    I could not work out from searching the forum whether I can return a Collection or not (there are mixed posts on that).
    Any help would be great.
    P.S: I am using JBoss application server + Axis

    Your method seems to return an entity with semantic meaning, otherwise the values wouldn't belong together. So what's wrong with an object? Especially if you're doing further calculations with the data - they should be performed by this object then.
    They are not particularly well related... apart from the fact that they are returned together.
    I have several different cases that I am currently working with. In one instance (c, n_t, s_f_0, s_f_1, s_vPrime) are returned, they are all BigIntegers but that is where the similarities end. c is a MessageDigest (converted to BigInteger), n_t is a random number. The remaining values have something in common s_x := r_x + c * x where x = {s_f_0, s_f_1, s_vPrime}.
    The variable names are taken straight from a research paper... hence their nonsensical meaning. I have opted to keep them throughout for ease of development (I don't fancy having a lookup table....)

  • What is the best way to handle collections that contain different object types

    Hi
    Suppose I have two classes as below:
    class Parent {
        private String name;
    }
    class Child extends Parent {
        private int childAge;
    }
    I have a list that can contain both Child and Parent objects, but in my JSP I want to display the childAge attribute (preferably without using scriptlets).
    What is the best way to achieve this?

    Having a collection containing different object types is already a bad start.
    How are parent and child related to each other? Shouldn't Child be a property of Parent? E.g.
    public class Parent {
        private Child child;
    }
    In any case, you could in theory just check Object#getClass()#getName(), but this is a nasty design.
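    If you do keep the mixed list, an instanceof check (rather than comparing class names) is the usual way to pick out the children before the view renders them. A minimal sketch, with hypothetical Parent/Child classes matching the question:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class MixedListDemo {

        public static class Parent {
            private final String name;
            public Parent(String name) { this.name = name; }
            public String getName() { return name; }
        }

        public static class Child extends Parent {
            private final int childAge;
            public Child(String name, int childAge) { super(name); this.childAge = childAge; }
            public int getChildAge() { return childAge; }
        }

        // Extract only the Child elements so the view layer can iterate a
        // homogeneous list and access childAge without scriptlets
        public static List<Child> childrenOf(List<Parent> mixed) {
            List<Child> children = new ArrayList<>();
            for (Parent p : mixed) {
                if (p instanceof Child) {
                    children.add((Child) p);
                }
            }
            return children;
        }

        public static void main(String[] args) {
            List<Parent> mixed = new ArrayList<>();
            mixed.add(new Parent("pat"));
            mixed.add(new Child("kim", 7));
            System.out.println(childrenOf(mixed).size());                 // prints 1
            System.out.println(childrenOf(mixed).get(0).getChildAge());   // prints 7
        }
    }
    ```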

  • Fact tables are compatible with the query request

    Hi,
    I am using 11g. In 10g this worked fine without any error, but after migrating from 10g to 11g the error below is returned. What is the problem and how do we fix it? Could you please let me know. Thanks
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14020] None of the fact tables are compatible with the query request FACT_AGENT_TIME_STATISTICS.FKEY. (HY000)
    SQL Issued: SELECT s_0, s_1, s_2, s_3, s_4, s_5, s_6, s_7, s_8, s_9, s_10 FROM ( SELECT 0 s_0, "Team Performance"."DIMENSION - LOCATION"."COUNTRY CODE" s_1, "Team Performance"."DIMENSION - TIME"."BUSINESS DATE" s_2, CASE WHEN "Team Performance"."DIMENSION - TEAM"."ROLE" ='TEAM LEADER' THEN 'Team Leader' ELSE "Team Performance"."DIMENSION - TEAM"."TEAM" END s_3, CASE WHEN '30 Mins Interval' ='15 Mins Interval' THEN "Team Performance"."DIMENSION - TIME"."15 Mins Interval" ELSE "Team Performance"."DIMENSION - TIME"."30 Mins Interval" END s_4, ((SUM("Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - ACD CALL HANDLING"+"Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - AFTER CALL WORK (ACW)")+SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."AVAILABLE TIME"))/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") s_5, (SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."AUX3 - TRAINING"+"Team Performance"."FACT - AGENT TIME STATISTICS"."AUX4 - MEETING"+"Team Performance"."FACT - AGENT TIME STATISTICS"."AUX5 - PROJECT"+"Team Performance"."FACT - AGENT TIME STATISTICS"."AUX6 - COACHING")/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") s_6, (SUM("Team Performance"."FACT - AGENT TIME STATISTICS"."STAFFED TIME")/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") s_7, COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE" s_8, MIN("Team Performance"."DIMENSION - TIME"."FULL DATE TIME") s_9, REPORT_AGGREGATE(((SUM("Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - ACD CALL HANDLING"+"Team Performance"."FACT - AGENT CALL STATISTICS"."TIME - AFTER CALL WORK (ACW)")+SUM("Team 
Performance"."FACT - AGENT TIME STATISTICS"."AVAILABLE TIME"))/60)/(COUNT(DISTINCT "Team Performance"."FACT - AGENT TIME STATISTICS"."DATE ID")*"Team Performance"."FACT - AGENT TIME STATISTICS"."INTERVAL TYPE") BY CASE WHEN '30 Mins Interval' ='15 Mins Interval' THEN "Team Performance"."DIMENSION - TIME"."15 Mins Interval" ELSE "Team Performance"."DIMENSION - TIME"."30 Mins Interval" END, CASE WHEN "Team Performance"."DIMENSION - TEAM"."ROLE" ='TEAM LEADER' THEN 'Team Leader' ELSE "Team Performance"."DIMENSION - TEAM"."TEAM" END) s_10 FROM "Team Performance" WHERE (("DIMENSION - TIME"."BUSINESS DATE" BETWEEN timestamp '2012-11-23 00:00:00' AND timestamp '2012-11-23 00:00:00') AND ("DIMENSION - LOCATION"."COUNTRY CODE" = 'US') AND ("DIMENSION - LOCATION".DEPARTMENT = 'CSG')) ) djm FETCH FIRST 65001 ROWS ONLY

    Back up your repository before following these steps:
    1. Reduce the problem report to the minimum number of columns required to generate the error. This will usually identify which dimension and which fact are incompatible.
    2. Open the repository in the Administration tool and verify that the dimension and fact table join at the physical layer of the repository.
    3. Verify that there is a complex join between the dimension and the fact in the business layer.
    4. Check the logical table sources for the fact table. At least one of them must have the Content tab set to a level in the hierarchy that represents the problem dimension. This is usually the detailed level.
    5. Check the logical table source Content tab for the dimension table. Unless there is a valid reason, this should be set to blank.
    6. Save any changes to the repository.

  • Loading data into Fact/Cube with surrogate keys from SCD2

    We have created 2 dimensions, CUSTOMER & PRODUCT with surrogate keys to reflect SCD Type 2.
    We now have the transactional data that we need to load.
    The data has a customer id that relates to the natural key of the customer dimension and a product id that relates to the natural key of the product dimension.
    Can anyone point us in the direction of some documentation that explains the steps necessary to populate our fact table with the appropriate surrogate key?
    We assume that we need a lookup table between the current version of the customer and the incoming transaction data - but we are not sure how to go about this.
    Thanks in advance for your help.
    Laura

    Hi Laura
    There is another way of handling SCDs and changing facts: use a different table for the history. Let me explain.
    The standard approach has these three steps:
    1. Determine if a change has occurred
    2. End Date the existing record
    3. Insert a new record into the same table with a new Start Date and dummy End Date, using a new surrogate key
    The modified approach also has three steps:
    1. Determine if a change has occurred
    2. Copy the existing record to a history table, setting the appropriate End Date en route
    3. Update the existing record with the changed information giving the record a new Start Date, but retaining the original surrogate key
    What else do you need to do?
    The current table can use the surrogate key as the primary key, with the natural key being a unique key. The history table has the surrogate key and the end date in the primary key, with a unique key on the natural key and the end date. For end-user queries, which more than 90% of the time go against current data, this method is much faster because only current records are in the main table and no filters are needed on dates. If a user wants to query history and current combined, then a view which uses a union of the main and historical data can be used. One more thing to note is that if you adopt this approach for your dimension tables then they always keep the same surrogate key for the index. This means that if you follow a strict Kimball approach to make the primary key of the fact table be a composite key made up of the foreign keys from each dimension, you NEVER have to rework this primary key. It always points to the correct dimension, thereby eliminating the need for a surrogate key on the fact table!
    I am using this technique to great effect in my current contract and performance is excellent. The splitter at the end of the map splits the data into three sets. Set one is for an insert into the main table when there is no match on the natural key. Set two is when there is a match on the natural key and the delta comparison has determined that a change occurred. In this case the current row needs to be copied into history, setting the End Date to the system date en route. Set three is also when there is a match on the natural key and the delta comparison has determined that a change occurred. In this case the main record is simply updated with the Start Date being reset to the system date.
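    The three steps of the modified approach could be sketched as below. This is an illustrative in-memory sketch only (real implementations live in SQL/ETL mappings); the record fields and method names are assumptions.

    ```java
    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Scd2Demo {

        // A simplified dimension record: the surrogate key stays fixed per natural key
        public static class DimRecord {
            public final long surrogateKey;
            public final String naturalKey;
            public String attributes;
            public LocalDate startDate;
            public LocalDate endDate; // null while the record is current

            DimRecord(long sk, String nk, String attrs, LocalDate start) {
                this.surrogateKey = sk; this.naturalKey = nk;
                this.attributes = attrs; this.startDate = start;
            }
            DimRecord copy() {
                DimRecord c = new DimRecord(surrogateKey, naturalKey, attributes, startDate);
                c.endDate = endDate;
                return c;
            }
        }

        public final Map<String, DimRecord> current = new HashMap<>(); // main table
        public final List<DimRecord> history = new ArrayList<>();      // history table
        private long nextKey = 1;

        public void apply(String naturalKey, String attrs, LocalDate today) {
            DimRecord existing = current.get(naturalKey);
            if (existing == null) {
                // no match on natural key: plain insert
                current.put(naturalKey, new DimRecord(nextKey++, naturalKey, attrs, today));
            } else if (!existing.attributes.equals(attrs)) {
                // step 1: a change has occurred
                // step 2: copy the existing record to history, end-dating it en route
                DimRecord old = existing.copy();
                old.endDate = today;
                history.add(old);
                // step 3: update in place with a new start date,
                // retaining the original surrogate key
                existing.attributes = attrs;
                existing.startDate = today;
            }
        }

        public static void main(String[] args) {
            Scd2Demo d = new Scd2Demo();
            d.apply("E1", "Sales", LocalDate.of(2020, 1, 1));
            d.apply("E1", "Marketing", LocalDate.of(2020, 6, 1));
            System.out.println(d.current.get("E1").surrogateKey); // prints 1
            System.out.println(d.history.size());                 // prints 1
        }
    }
    ```

    Note how the surrogate key in the current table never changes, which is exactly why fact rows pointing at it never need rework.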
    By the way, I intend to put a white paper together on this approach if anyone is interested.
    Hope this helps
    Regards
    Michael

  • Will Oracle OLAP handle our case(s).

    We are about to build a cube to roughly handle following dimensions and facts:
    15 dimensions ranging from a couple of members to 40,000+ members.
    A fact table holding 200,000,000+ rows
    So my question is: does anybody have a sense of whether OLAP has a chance of handling this data? We are pretty certain that the data will be sparse.
    A second item relates to whether Oracle OLAP cubes give us the ability to compute what we refer to as "Industry" data. We serve a number of companies and we compute metrics that apply to their data. In order to allow these companies to see how they are doing against the other companies, we provide the metrics for every other company; these metrics are considered Industry. So my question is: do OLAP cubes have any structure or mechanism that allows us to compute these metrics within the same cube, or do we have to create a separate cube to hold Industry metrics?
    Thanks,
    Thomas

    Thomas,
    I cannot advise you for or against based on the small amount of information I have. I will not deny that at 15 dimensions you are at the upper limit of what you can achieve using the current (11.1) OLAP technology, but I have seen cubes of this size built and queried, so I know it is possible.
    The answer would depend on many things: hardware, query tools, expectations for query and build performance, and whether you have a viable alternative technology (which will determine how hard you will work to get past problems). It even depends on your project time frames, since release 11.2 is currently in beta and will, we hope, handle greater volumes of data than 11.1 or 10g.
    One important factor is how you partition your cube. At what level do you load the data (e.g. DAY or WEEK)? What is your partition level (e.g. MONTH or QUARTER)? A partition that loads, say, 10 million rows, is going to be much easier to build than a partition with 50 million rows. To decide this you need to know where your users will typically issue queries, since queries that cross partitions (e.g. ask for data at the YEAR level but are partitioned by WEEK) are significantly slower than those that do not cross partitions (e.g. ask for data at WEEK when you are partitioned by MONTH).
    Cube-based MVs can offer advantages for cubes of this size even if you define a single, 15-dimensional, cube. One nice trick is to only aggregate the cube up to the partition level. Suppose, for example, that you load data at the DAY level and partition by QUARTER. Then you would make QUARTER be the top level in your dimension instead of YEAR or ALL_YEARS. The trick is to make YEAR be an ordinary attribute of QUARTER so that it appears in the GROUP BY clause of the cube MV. Queries that ask for YEAR will still rewrite against the cube, but the QUARTERS will be summed up to YEAR using standard SQL. The result will generally be faster than asking the cube to do the same calculation. This technique allows you to lower your partition level (so that there are fewer rows per partition) without sacrificing query performance.
    Cube-based MVs also allow you to raise the lowest level of any dimension (to PRODUCT_TYPE instead of PRODUCT_SKU say). Queries that go against the upper levels (PRODUCT_TYPE and above) will rewrite against the cube, and those that go against the detail level (PRODUCT_SKU) will go direct to the base tables. This compromise is worth making if your users only rarely query against the detail level and are willing to accept slower response times when they do. It can reduce the size of the cube considerably and is definitely worth considering.
    David

  • Best place in publishServiceProvider to create collection sets on the server?

    I'm developing a LR publishing plugin that talks to a custom-built back-end.  Collections on the server are created as needed within the publishServiceProvider.processRenderedPhotos function, which is working fine.
    Looking towards supporting collection sets, I don't think it's a good approach to try to manage these on the server at publish-time. 
    I haven't found any examples on creating collection sets on the server.  Is the following function a better place to create the collection sets on the server?
    publishServiceProvider.updateCollectionSetSettings
    If so, would it work to talk to the server at this point, get the collection set id, and if that fails, to throw an error?  I'd hope that the collection set creation would then be aborted within LR.
    Any advice is greatly appreciated!

    I've gone ahead and handled the collection set creation and update at the server from within updateCollectionSetSettings, likewise for handling collection creation and updating within updateCollectionSettings.  Failure at the server is correctly aborting the adding/updating of the collection/set within Lightroom.  All is good.

  • Bug/Bad docs trying to bind SelectItem to bean property of type Collection

    Hi,
    I've been trying for the best part of the morning to use a selectOneMenu tag and bind the selectItems value to a backing bean property.
    The tutorial states that:
    "The advantages of using the selectItems tag are as follows:
    You can represent the items using different data structures, including Array, Map, List, and Collection. The data structure is composed of SelectItem instances or SelectItemGroup instances."
    Well, I've used Map and List (which are both covered by Collection anyway...) and they don't work. You go silently to the renderResponse phase.
    In all cases the content of my Collection typed property were javax.faces.model.SelectItem instances.
    I changed the backing bean property to be of type SelectItem[] instead and suddenly it works fine.
    It's as if the binding mechanism isn't able to handle collections?
    Is the tutorial wrong or does this just not work?
    I'm using 1.0.
    Neil

    Hi
    I have the same problem: my backing bean's attachments for a client are implemented as a HashSet. If this is changed to a List type, say ArrayList, everything works correctly with no errors.
    client.attachments -> contains a HashSet of attachments
    Page goes to display this list and if a Set is used then error below is thrown.
    Has anyone experienced this problem and found a solution? It would help very much.
    Thanks
    Glenn
    Exception :
    com.sun.facelets.tag.TagAttributeException: /C:/development/42508/apmweb/pages/client/client_notes_list.xhtml @70,53 test="${not empty note.attachments}" /C:/development/42508/apmweb/pages/client/client_notes_list.xhtml @70,53 test="${not empty note.attachments}": Bean: org.hibernate.collection.PersistentSortedSet, property: 0
    at com.sun.facelets.tag.TagAttribute.getObject(TagAttribute.java:235)
    at com.sun.facelets.tag.TagAttribute.getBoolean(TagAttribute.java:79)
    at com.sun.facelets.tag.jstl.core.IfHandler.apply(IfHandler.java:49)
    at com.sun.facelets.tag.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:47)
    at com.sun.facelets.tag.jstl.core.ForEachHandler.apply(ForEachHandler.java:168)
    at com.sun.facelets.tag.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:47)
    at com.sun.facelets.tag.jsf.ComponentHandler.apply(ComponentHandler.java:147)
    at com.sun.facelets.tag.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:47)
    at com.sun.facelets.tag.jsf.ComponentHandler.apply(ComponentHandler.java:147)
    at com.sun.facelets.tag.jsf.ComponentHandler.apply(ComponentHandler.java:147)
    at com.sun.facelets.tag.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:47)
    at com.sun.facelets.tag.jsf.core.ViewHandler.apply(ViewHandler.java:94)
    at com.sun.facelets.tag.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:47)
    at com.sun.facelets.compiler.NamespaceHandler.apply(NamespaceHandler.java:49)
    at com.sun.facelets.tag.CompositeFaceletHandler.apply(CompositeFaceletHandler.java:47)
    at com.sun.facelets.impl.DefaultFacelet.apply(DefaultFacelet.java:95)
    at com.sun.facelets.FaceletViewHandler.buildView(FaceletViewHandler.java:400)
    at com.sun.facelets.FaceletViewHandler.renderView(FaceletViewHandler.java:434)
    at org.apache.myfaces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:352)
    at javax.faces.webapp.FacesServlet.service(FacesServlet.java:107)
    at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:214)
    at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:120)
    at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:272)
    at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at org.apache.myfaces.component.html.util.ExtensionsFilter.doFilter(ExtensionsFilter.java:122)
    at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:42)
    at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3020)
    at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
    at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
    at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:1925)
    at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:1848)
    at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1288)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:207)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:179)
    Caused by: javax.el.PropertyNotFoundException: /C:/development/42508/apmweb/pages/client/client_notes_list.xhtml @70,53 test="${not empty note.attachments}": Bean: org.hibernate.collection.PersistentSortedSet, property: 0
    at com.sun.facelets.el.TagValueExpression.getValue(TagValueExpression.java:73)
    at com.sun.facelets.tag.TagAttribute.getObject(TagAttribute.jav
    Code :
    <c:forEach items="#{WriteableClient.notes}" var="note">
        <c:if test="#{not empty note.attachments}">
            ...
        </c:if>
    </c:forEach>

  • Folder & collection metadata

    I have a basic but functional plugin that allows you to add text notes to sources - collections, publishing services, folders. It's something I've always wanted but never seen anyone else do. I use it to store client information, shoot details, post processing information and rudimentary task lists at the source level. I'm finding it increasingly useful even in this limited form.
    At the moment, when run, it pops up a dialog with a decent-sized editable text field containing persistable text attached to the source, but I've more or less generalised the code to make it possible to add other data types - buttons, checkboxes, dropdowns...
    The reasons for posting here:
    Is anyone else doing this? Am I duplicating effort?
    Does anyone else even think it's useful?
    To promote discussion: how could this be made more useful, and how could it be extended?
    I look forward to hearing thoughts and comments.

    My plugin handles collections & collectionsets, both published and otherwise, and folders. It stores the data as an xml file in the catalogue's root folder.  It currently only displays and allows editing of text data, although I'm not sure it needs to do much else.
    If anyone would like to play they're welcome, post here or pm me.

  • POF Extractor and Collections (review my code)

    I have the following requirement which I don't think the current POF extractors can meet:
    I have Class-A implementing PortableObject, which has a number of fields. One of those fields (let's say POF index 103) is a Collection of Class-B, which is also a PortableObject with a number of fields.
    Now... if I want to just extract the value of the Collection from Class-A, I can use a normal POF extractor like this:
    new PofExtractor(java.util.List.class, new SimplePofPath(103));
    But say I want back all the instances of field 100 of Class-B from the Collection of Class-B instances in Class-A (returned as a Collection).
    As far as I know I cannot do this with SimplePofPath, so I have written my own PofNavigator that works in the way I require.
    My code will navigate down a tree of PortableObject instances, handling Collections as it goes.
    So, in my previous example above, if the field I wanted from Class-B was itself a PortableObject, I could navigate further down to get values from that class (just like a normal SimplePofPath).
    So if anyone would like to review my code, it is below.
    From testing it looks like it works, but I would be grateful for any pointers to make it more efficient, or for corrections if I have made any mistakes - being a hard-core developer I have jumped straight into code rather than reading the POF spec from cover to cover :-)
    Cheers,
    JK
    P.S. Apologies for how wide this looks on the screen but my code appears to have stretched the formatting a little.
    /**
     * This is a version of a POF Navigator that
     * can deal with POF fields that are Collections.
     * <p/>
     * If any of the fields in the path of POF indexes
     * supplied to the constructor are Collections
     * then the result will be a Collection of all
     * of the results of the rest of the POF
     * tree navigation.
     * <p/>
     * @author Jonathan Knight
     */
    public class PofCollectionPath extends AbstractPofPath {

        /** The POF indexes to navigate */
        private int[] elements;

        /**
         * Default constructor necessary for PortableObject interface
         * @see com.tangosol.io.pof.PortableObject
         */
        public PofCollectionPath() {
        }

        /**
         * Construct a PofCollectionPath using an array of indices as a path.
         * <p/>
         * @param indices the list of POF indices to navigate.
         */
        public PofCollectionPath(int... indices) {
            this.elements = indices;
        }

        /**
         * Locate the PofValue identified by this PofNavigator within the passed PofValue.
         * @param pofValue the origin from which navigation starts
         * @return the resulting extracted PofValue.
         * @see AbstractPofPath#navigate(PofValue)
         */
        @Override
        public PofValue navigate(PofValue pofValue) {
            return navigate(pofValue, 0);
        }

        /**
         * Recursively navigates down the POF tree of values starting with the
         * specified PofValue and from the specified index in the array of POF
         * indices for this PofNavigator.
         * @param pofValue the origin from which navigation starts
         * @param idx      the index number to start navigating from
         * @return the resulting extracted PofValue.
         */
        PofValue navigate(PofValue pofValue, int idx) {
            PofValue result = null;
            int lastIndex = elements.length - 1;
            // Is the value a POF Collection?
            if (!(pofValue instanceof PofArray)) {
                // No, we have a simple value, so extract the field
                PofValue child = pofValue.getChild(elements[idx]);
                if (idx != lastIndex) {
                    result = navigate(child, idx + 1);
                } else {
                    result = child;
                }
            } else {
                // Yes, we have a collection
                try {
                    // Create the byte stream to hold the serialized results.
                    // We make it as big as the current object as I don't think it
                    // could be bigger than this
                    ByteArrayOutputStream outStream;
                    outStream = new ByteArrayOutputStream(((PofArray) pofValue).getSize());
                    DataOutputStream dos = new DataOutputStream(outStream);
                    // Write the POF indicator for a Collection
                    dos.writeByte(0x55);
                    // Write the length of the collection as a POF integer
                    int length = ((PofArray) pofValue).getLength();
                    writePofInt(dos, length);
                    // Iterate over the collection
                    for (int i = 0; i < length; i++) {
                        // Get the child
                        PofValue child = pofValue.getChild(i);
                        // Get the value from the child for the POF index we require
                        PofValue value = child.getChild(elements[idx]);
                        // If we are not on the last index navigate down further
                        if (idx != lastIndex) {
                            value = navigate(value, idx + 1);
                        }
                        // Write the result to the byte stream
                        dos.write(((SimplePofValue) value).getSerializedValue().toByteArray());
                    }
                    // Convert the byte stream's bytes to a POF value
                    result = PofValueParser.parse(new Binary(outStream.toByteArray()),
                                                  ((PofArray) pofValue).getPofContext());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
            return result;
        }

        /**
         * This method writes an Integer to a POF stream and is taken
         * verbatim from the POF documentation.
         */
        private void writePofInt(DataOutputStream dos, int n) throws IOException {
            int b = 0;
            if (n < 0) {
                b = 0x40;
                n = ~n;
            }
            b |= (byte) (n & 0x3F);
            n >>>= 6;
            while (n != 0) {
                b |= 0x80;
                dos.writeByte(b);
                b = (n & 0x7F);
                n >>>= 7;
            }
            dos.writeByte(b);
        }

        /**
         * Return a collection of path elements.
         * @return a collection of path elements
         * @see AbstractPofPath#getPathElements()
        @Override
        protected int[] getPathElements() {
            return elements;
         * Restore the contents of this PofCollectionPath instance by reading its state
         * using the specified PofReader object.
         * <p/>
         * @param pofReader the PofReader from which to read the object's state
         * @throws IOException if an I/O error occurs
         * @see com.tangosol.io.pof.PortableObject
        @Override
        public void readExternal(PofReader pofReader) throws IOException {
            elements = pofReader.readIntArray(101);
         * Save the contents of a POF user type instance by writing its state
         * using the specified PofWriter object.
         * <p/>
         * @param pofWriter the PofWriter to which to write the object's state
         * @throws IOException if an I/O error occurs
         * @see com.tangosol.io.pof.PortableObject
        @Override
        public void writeExternal(PofWriter pofWriter) throws IOException {
            pofWriter.writeIntArray(101, elements);
         * Compare the PofCollectionPath with another object to determine equality.
         * Two PofCollectionPath objects are considered equal if their indices are equal.
         * <p/>
         * @param o the Object to compare this PofCollectionPath to.
         * @return true if this PofCollectionPath and the passed object are equivalent
        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            if (o instanceof PofCollectionPath) {
                PofCollectionPath that = (PofCollectionPath) o;
                return Arrays.equals(this.elements, that.elements);
            return false;
         * Determine a hash value for the SimplePofPath object according to
         * the general Object.hashCode() contract.
         * <p/>
         * @return an integer hash value for this PofCollectionPath object
        @Override
        public int hashCode() {
            int[] ai = this.elements;
            return HashHelper.hash(ai, ai.length);
         * Return a human-readable description for this PofCollectionPath.
         * <p/>
         * @return a String description of the PofCollectionPath
        @Override
        public String toString() {
            return ClassHelper.getSimpleName(super.getClass())
                    + "(indices=" + toDelimitedString(this.elements, ".") + ')';
    }
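The variable-length integer encoding used by `writePofInt` above can be exercised in isolation. The sketch below pairs that exact encoder with a hand-written decoder to show the byte layout (sign bit `0x40` and six value bits in the first byte, then seven value bits per continuation byte flagged with `0x80`). The decoder and the class name are my own illustration, not part of Coherence.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class PofIntDemo {

    /** Encode an int with POF's variable-length scheme (same logic as writePofInt above). */
    static void writePofInt(DataOutputStream dos, int n) throws IOException {
        int b = 0;
        if (n < 0) {
            b = 0x40;            // sign bit lives in the first byte
            n = ~n;
        }
        b |= (byte) (n & 0x3F);  // low six value bits
        n >>>= 6;
        while (n != 0) {
            b |= 0x80;           // continuation bit: more bytes follow
            dos.writeByte(b);
            b = (n & 0x7F);      // next seven value bits
            n >>>= 7;
        }
        dos.writeByte(b);
    }

    /** Hypothetical decoder (my own, not Coherence's): the exact inverse of the above. */
    static int readPofInt(DataInputStream dis) throws IOException {
        int b = dis.readUnsignedByte();
        boolean negative = (b & 0x40) != 0;
        int n = b & 0x3F;
        int shift = 6;
        while ((b & 0x80) != 0) {
            b = dis.readUnsignedByte();
            n |= (b & 0x7F) << shift;
            shift += 7;
        }
        return negative ? ~n : n;
    }

    /** Encode then decode a value, returning what comes back out. */
    static int roundTrip(int n) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            writePofInt(new DataOutputStream(out), n);
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(out.toByteArray()));
            return readPofInt(in);
        } catch (IOException e) {
            throw new RuntimeException(e);  // cannot happen with in-memory streams
        }
    }

    public static void main(String[] args) {
        int[] samples = { 0, 1, -1, 63, 64, 100, -5, Integer.MAX_VALUE, Integer.MIN_VALUE };
        for (int n : samples) {
            if (roundTrip(n) != n) {
                throw new AssertionError("round trip failed for " + n);
            }
        }
    }
}
```

Small values such as a typical collection length fit in a single byte, which is why the format is used for the length prefix written before the collection elements.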

    Hi Nitin,
    1. Actually having Java objects on the server side is not really relevant in this case as using POF extractors is much more efficient. If we wanted to do something really complicated we would have the option of deserializing the classes in the cluster, but we don't need to do that now.
    2. The example I gave above will return what you want.
    3. The 101 index is just the POF index of the indices field in the extractor. You can use any integer you want for the indexes; you do not have to start from zero, so I just happen to use bigger numbers.
    You have to use a ContainsFilter for your query. This is because the PofCollectionPath class returns a List of values. In your case you have two extractors:
    PofExtractor nameExtractor = new PofExtractor(null, namePath);
    PofExtractor addressExtractor = new PofExtractor(null, addressPath);
    ...both of these return a list.
    For example, if you had an instance of A that contained a list of B values like this:
    1. B.Name = "ABC" B.Address = "NY"
    2. B.Name = "DEF" B.Address = "WA"
    3. B.Name = "GHI" B.Address = "IL"
    4. B.Name = "JKL" B.Address = "IN"
    The nameExtractor would return a list of names: {"ABC", "DEF", "GHI", "JKL"}
    The addressExtractor would return a list of addresses: {"NY", "WA", "IL", "IN"}
    A ContainsFilter matches lists that contain a specific value. So if we have a nameFilter like this:
    ContainsFilter nameFilter = new ContainsFilter(nameExtractor, "ABC");
    ...then it will match our list of names, as the list contains the value "ABC". If instead we have a nameFilter like this:
    ContainsFilter nameFilter = new ContainsFilter(nameExtractor, "STU");
    ...then it will not match, as our name list does not contain "STU".
    The same then applies to the address filter.
    If you try to use any of the other filters in Coherence, they would be applied to the list of names or addresses in different ways.
    An EqualsFilter would apply to the whole List returned by the Extractor, so for example:
    EqualsFilter nameFilter = new EqualsFilter(nameExtractor, "ABC");
    ...would not match, because it would try to compare the String "ABC" with the List {"ABC", "DEF", "GHI", "JKL"}, which obviously does not match as they are not even the same type of Object. But...
    You could apply an EqualsFilter to the whole List returned by the Extractor, for example:
    List<String> names = new ArrayList<String>();
    names.add("ABC");
    names.add("DEF");
    names.add("GHI");
    names.add("JKL");
    EqualsFilter nameFilter = new EqualsFilter(nameExtractor, names);
    ...this EqualsFilter would match, because it is comparing a list with the same set of names in it as our extractor has extracted from the instance of A.
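The matching semantics described above can be sketched without a running Coherence cluster. The two helper methods below are hypothetical stand-ins (my own, not Coherence API) that mimic how ContainsFilter and EqualsFilter each compare an extracted value against their target:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class FilterSemanticsDemo {

    /** ContainsFilter-style matching: does the extracted collection contain the value? */
    static boolean containsMatch(List<?> extracted, Object value) {
        return extracted.contains(value);
    }

    /** EqualsFilter-style matching: is the extracted result, as a whole, equal to the value? */
    static boolean equalsMatch(Object extracted, Object value) {
        return Objects.equals(extracted, value);
    }

    public static void main(String[] args) {
        // The list a nameExtractor would pull out of an instance of A
        List<String> names = Arrays.asList("ABC", "DEF", "GHI", "JKL");

        // ContainsFilter: matches because the extracted list contains "ABC"...
        if (!containsMatch(names, "ABC")) throw new AssertionError();
        // ...but not "STU", which is not in the list
        if (containsMatch(names, "STU")) throw new AssertionError();

        // EqualsFilter: a single name is never equal to the whole list...
        if (equalsMatch(names, "ABC")) throw new AssertionError();
        // ...but an equal list matches
        if (!equalsMatch(names, Arrays.asList("ABC", "DEF", "GHI", "JKL"))) throw new AssertionError();
    }
}
```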
    I hope that helps
    JK

  • Collection Processes in Process Chains

    Hi All,
    I want to know the real importance of using the AND, OR and XOR processes in process chains. I have gone through all the material available on SDN, but I could not clearly understand what real purpose these processes serve when used in process chains.
    We have process chains in our system in which planned data as well as actual data are loaded in parallel, using different InfoPackages. Every InfoPackage, if it is successful, is connected to an AND process, and in case of failure it is connected to an OR process.
    So if it is successful it goes to the AND process; otherwise it is connected to the OR process, which triggers an ERROR event for that particular process chain.
    So please do provide a good answer, as I have to implement certain changes to my process chains. I have added 3 more InfoPackages to my process chain, and now I want to know whether I should connect them to the OR process in case a failure occurs, and what the consequences will be if I connect (or do not connect) these new packages to the OR process when a failure occurs.
    I hope you understand this problem; please reply as it is very urgent.
    Points will be assigned.
    Regards,
    samir

    Hi
    Definition
    A collection process collects several chain strings to form one string in the process chain maintenance.
    Use
    Process chain management handles collection processes in a particular way. The system makes the variant names consistent and guarantees that all processes of the same name that have been scheduled more than once trigger the same event. This enables several chain strings to be collected into one, and also makes multiple scheduling of the actual application processes unnecessary.
    The following collection processes are available in the process chain maintenance:
    And Process (Last)
    This process does not start until all events of the predecessor processes, including the last event it has waited for, have been successfully triggered.
    Use this collection process when you want to combine processes and when further processing is dependent on all these predecessors.
    Or Process (Every)
    The application process starts every time a predecessor process event has been successfully triggered.
    Use this collection process when you want to avoid multi-scheduling the actual application process.
    XOR Process (First)
    The application process starts when the first event in one of the predecessor processes has been successfully triggered.
    Use this collection process when you want to process processes in parallel and schedule further independent processes after these ones.
    Owing to the time component, the collection processes are not logical gates in the normal sense, because the system cannot distinguish whether a 0 entry (no event received) means that the event was never received or that it simply has not been received yet. The reason for this is that the checks do not run continuously, but only take place when an event is received.
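    The three gate types can be summarised in a toy model (my own sketch for illustration, not SAP code): given how many predecessor events have fired so far, how many times has the successor process been started?

```java
public class CollectionProcessDemo {

    /** AND (Last): starts once, only after ALL predecessor events have fired. */
    static int andStarts(int predecessors, int fired) {
        return fired >= predecessors ? 1 : 0;
    }

    /** OR (Every): starts every time any predecessor event fires. */
    static int orStarts(int fired) {
        return fired;
    }

    /** XOR (First): starts on the first predecessor event only. */
    static int xorStarts(int fired) {
        return fired > 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        // Three InfoPackages feeding a collection process; two have fired so far.
        if (andStarts(3, 2) != 0) throw new AssertionError(); // AND is still waiting
        if (orStarts(2) != 2) throw new AssertionError();     // OR has started twice
        if (xorStarts(2) != 1) throw new AssertionError();    // XOR started on the first event
        // After the third event fires, AND finally starts once.
        if (andStarts(3, 3) != 1) throw new AssertionError();
    }
}
```

    This is why an AND process suits "load further only after all loads succeeded", while an OR process, which fires on every event, suits triggering a shared error handler from whichever load fails first.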

  • How to set up basic POP email on N8

    I've been reading up on all the email discussions about the N8, and it seems like some things work fine, while others need more development.
    In my case, I would like to set up email using a basic POP email account setup.  I have no need for "push" email right now - as I see it, receiving an email every half hour or so on my phone is an improvement over waiting until I get to my PC.
    Ideally, I would like to leave the messages on my email server, so they will later on be downloaded to my PC.
    I've noticed the following in one of the email discussions here:
    "Cant believe no one else has responded with this before given the time this thread spans. I had this problem until I deleted and re-created my email (POP) accounts and then NOT accept the TOS at the end of the setup process. If you say YES and accept the terms, then your email accts get setup thru the Nokia messaging service and sent emails may get saved somewhere, but certainly not on the phone. You do not have to say yes to the TOS, just decline and finish setting up the email account."
    Since I'm perfectly happy with the way email works on my laptop, using my existing servers, I think I should decline the TOS as described above.
    I've found lots of guides on how to set up email on the N8.  Can anyone here recommend one for how to do (only) what I need, in order to manually set up POP email on a N8, using the identical server information and settings that I now have on my laptop?  Or, if this doesn't exist, can someone post the necessary steps here?
    Thanks!!!

    I have had a little play with this, but will just outline what seems to happen, because I did not test it thoroughly. Others' experiences may be different.
    If, when setting up a new account, one selects one of the listed email service providers rather than "Other", then it sets up an IMAP account and gives the choices that Rayhipkiss describes, together with the terms and conditions acceptance request. This can be seen after setting up by looking in the settings for the server (which often, maybe always?, is a nokia.supl.imap one). If one deletes the server and server ports and replaces them with the same email service provider's POP and SMTP servers and ports, the same setup choices that Ray mentions still remain. Once the account is set up you can always go back and change the server and port entries in the N8 setup for that account (see below for a difference).
    However, even though one has set the POP ports, it still seems to make an IMAP connection. This can be tested by setting up a Gmail account, where one can turn IMAP server connections on and off in one's Gmail account settings on the webmail site. One finds that disallowing IMAP connections but allowing POP connections blocks the N8, even if one has entered the POP servers and ports on the N8.
    Now, if when setting up the account on the N8 one selects "Other", it normally (see below) asks one to enter the address and password for the account and then goes straight to letting you enter the servers and ports. If one enters these, it then takes one straight to the client to collect and send email. But if you go back into the settings for that account, one gets the setting choices that I outlined, i.e. the menu that includes "Don't sync to server". If set up as a POP account it seems to behave as a POP account. For example, if one turns on the IMAP servers at Gmail while denying POP connections, then when one tries to collect mail Gmail comes back with a certificate error (the certificate is for IMAP, not POP). Unlike the settings menu giving the other settings Ray outlines, this settings menu does not allow one to nominate a reply-to address different from the address sent from, nor does it allow one to go back and change the server and ports. To change the server and ports when set up this way, as far as I can see, one has to set up the account again from the start.
    Even stranger, what I found is that if I set up an account using the offered service providers (not the "Other" choice), then deleted that account, and then set up the same account again using the "Other" choice, it then behaved as Ray outlines, including the terms and conditions acceptance request, and also behaved as an IMAP account even if one entered the POP servers and ports. So the "Other" account setup choice seems to behave differently according to whether one has previously set up that account using one of the non-"Other" service provider choices.
    I don't know if anyone wants to experiment further and discover more or clarify the behaviours. I just did some quick playing around, so did not analyse it much; however, the above is the general gist of what I found. It is some time since I set up email accounts on my E52, but from memory it behaved much the same way as the N8 does, and is generally weak at handling POP accounts. In fact I would suggest that many who think they are using POP servers will find that behind the scenes it is really using IMAP ones.
    A real downer for me is that for a POP account there seems to be no way to set a reply-to address in sent emails that is different from the address sent from. I need this because of the way I manage emails across several machines.
    I'm off to have a beer

  • General feedback, comments and questions

    Background: Written many web applications over the years (Java [cocoon, struts, home-grown frameworks], ISAPI, cgi-bin), and also worked with RAD tools like Delphi doing client/server apps. Consequently, JSF looks very interesting to me, especially when you consider .NET and Web Forms. Here are my comments/questions:
    -     The name "Request Events" is confusing. Should be "UI State Change Events" or something.
    -     Why isn't there a subclass of FacesEvent for "Request Events"?
    -     Are there any standard events that all components respond to? (i.e. page lifecycle events)
    -     Is there any way to explicitly state that a component responds to specific types of events (like the way Validators tell you what types of attributes they respond to)? What I'd like to see is the ability to generate an Events tab in the development tool, like most modern RAD environments.
    -     In order to take advantage of a central dispatching mechanism (ApplicationHandler), it would make sense for all pages to flow through the FacesServlet. Is there a pass-through mechanism that doesn't require a response tree, or would this just be some type of application event? In other words, how does FacesServlet process requests that aren't mapped to a component tree?
    -     Can you give us an idea of how the model references will work, and what the standard way will be for the application handler (and its delegates) to get a handle to a model reference? (i.e. a session parameter)
    -     Why use UIParameters instead of attributes?
    -     I understand that grid and table display functionality is handled by standard Renderers. I still think that JSF needs to have a more complete standard suite of components as well, including grid and table components that can be bound directly to collections and ResultSets. (I know this isn't EJB-friendly, but in reality many servlet-based applications don't use EJB.) Just requiring a UIPanel alone and the appropriate Renderers isn't enough. I think that in order for JSF to be successful, it must have at least the standard components that .NET does.
    -     This is a minor detail, but shouldn't there be a simple addMessage(Message message) method on the FacesContext instead of requiring you to send in null?
    -     You should be able to declaratively associate an ApplicationHandler; you shouldn't have to add a ServletContextListener to do this.
    -     You should be able to specify a RenderKit for an entire application declaratively. This brings up the question of whether or not an application-wide context is needed (or if the JSF implementation should just subclass ServletContext).
    -     You should also be able to specify the RenderKit for a given session as well as a specific Tree.
    -     Currently, I don't see any support for modifying the render kit in the JSP implementation. Shouldn't this be an attribute of the <faces:usefaces> tag?
    -     I'm worried about the performance implications of this design for more complex user interfaces with many controls on a given page. I've seen others raise this concern as well, especially when you consider the overhead of copying component values and constantly traversing the component tree. I recognize that there are a lot of places where object pooling can be used (Renderers, the components themselves, LifeCycle objects, etc.), but does anyone have any specific comments on how this can be achieved with minimal overhead, especially when you consider that a given application may have filters and additional logic in the ApplicationHandler (that may in turn talk to the EIS tier)?
    I'm glad to see that for events, encoding, and decoding, you can either handle it directly in the component or delegate it to the proper handler (either a RequestEventHandler or a Renderer). This addresses the fact that in-house controls will likely be bound to a specific client device (at least originally).
    In general, I think this is a great move forward, and it's one thing that we've been lacking for a while (there have been many non-Sun efforts on this front, but we really need a standard if Java is going to remain the preferred platform for rapidly building web apps).
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Kito D. Mann
    [email protected]
    Virtua, Inc.

    In regards to my third point:
    - Is there any way to explicitly state that a component responds to specific types of events (like the way Validators tell you what types of attributes they respond to)? What I'd like to see is the ability to generate an Events tab in the development tool, like most modern RAD environments.
    I just realized that this is handled implicitly by the fact that all UIComponents are JavaBeans. JavaBeans supports this via EventSetDescriptors. This is what happens when you haven't worked with any GUI toolkits (like Swing) for years :-). It'll be nice to see this functionality in the servlet world.
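    That JavaBeans mechanism can be seen with java.beans.Introspector: given a bean exposing the standard addXxxListener/removeXxxListener pair, getEventSetDescriptors() reports the event sets a tool could show on an Events tab. The bean and listener below are made up for illustration; only the java.beans API is real.

```java
import java.beans.BeanInfo;
import java.beans.EventSetDescriptor;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.util.ArrayList;
import java.util.EventListener;
import java.util.List;

public class EventIntrospectionDemo {

    /** A hypothetical listener interface; it must extend EventListener to be discovered. */
    public interface ClickListener extends EventListener {
        void clicked();
    }

    /** A minimal hypothetical bean exposing a click event via the standard add/remove pattern. */
    public static class ButtonBean {
        public void addClickListener(ClickListener l) { }
        public void removeClickListener(ClickListener l) { }
    }

    /** Returns the bean's event set names, as a RAD tool's Events tab would list them. */
    static List<String> eventSetNames(Class<?> beanClass) {
        try {
            BeanInfo info = Introspector.getBeanInfo(beanClass);
            List<String> names = new ArrayList<String>();
            for (EventSetDescriptor esd : info.getEventSetDescriptors()) {
                names.add(esd.getName());
            }
            return names;
        } catch (IntrospectionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The introspector derives the event set name from the listener type name.
        System.out.println(eventSetNames(ButtonBean.class));
    }
}
```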
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Kito D. Mann
    [email protected]
    Virtua, Inc.
