Coarse-grained atomic actions

I am trying to understand the difference between fine-grained and coarse-grained atomic actions, and what mechanisms guarantee that these actions are atomic.
Would, let's say, a method setBalance() be a coarse-grained atomic action?
Thanks

Googling coarse-grained atomic:
From http://www.cs.arizona.edu/people/greg/mpdbook/glossary.html
Atomic Action
A sequence of one or more statements that appears to execute as a single, indivisible
action. A fine-grained atomic action is one that can be implemented directly by a single
machine instruction. A coarse-grained atomic action is implemented using critical
section protocols.

I know that some atomic classes were added in 1.5.0
http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/atomic/package-summary.html
I assume that under the covers they are coarse-grained, but you can probably treat them as fine-grained from the perspective of your use.
I'd be interested in seeing what the real gurus have to say on this topic.
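Per the glossary definition above, whether a method like setBalance() is coarse-grained depends on how it is implemented. Here is a minimal sketch of the two cases (the class and member names are invented for illustration): a single hardware-level atomic operation is fine-grained, while a multi-statement critical section made indivisible by a lock is coarse-grained.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class GranularityDemo {
    // Fine-grained: incrementAndGet() maps to a single CAS-style
    // machine instruction on most platforms.
    private final AtomicInteger hits = new AtomicInteger();

    public int recordHit() {
        return hits.incrementAndGet();
    }

    // Coarse-grained: several statements made indivisible by a
    // critical section protocol (here, the intrinsic lock on "this").
    private int balance;

    public synchronized void setBalance(int newBalance) {
        int old = balance;    // read
        balance = newBalance; // write
        System.out.println("balance " + old + " -> " + newBalance);
    }

    public synchronized int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        GranularityDemo d = new GranularityDemo();
        d.recordHit();
        d.setBalance(100);
        System.out.println("hits=" + d.recordHit() + " balance=" + d.getBalance());
    }
}
```

A setBalance() like the one above is coarse-grained in the glossary's sense: it appears atomic only because of the lock, not because of any single machine instruction.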

Similar Messages

  • Coarse-grained object Design for Performance

    Hi,
    I’ve several 1-1 mapping between one coarse-grained object with several fine grained objects. In the DB all the fine-grained objects are stored in separate table. In my application the relationship between coarse-grained object and fine-grained objects will not be modified. In this scenario, for good performance which solution is better?
    Maintaining the 1-1 mapping between coarse-grained object and fine-grained objects OR adding all the attribute from the fine-grained objects into the coarse-grained object.
    Thanks
    -Mani

    Mani,
    The answer depends upon the data usage in your application.
If these fine-grained read-only objects are shared between your coarse-grained objects then it may be more optimal to keep them as separate classes with 1:1 relationships. If these read-only classes are also of fixed and reasonable quantity (reference data) you could also pre-fetch them all into a full cache (identity map). Then as each coarse-grained object is read, the 1:1 relationships can be resolved in memory without additional queries.
If you combine these objects all into one coarse-grained object then you will have to pay the cost of a more complex multi-table join on each read. This may be more efficient if the fine-grained objects are never or rarely shared between coarse-grained objects. To be sure, you will want to check with your DBA to see what they can measure as being more optimal for the given schema.
    If these are shared and/or reference data then you may also want to consider marking them as read-only in the project. That way they will not be considered for change tracking in the UnitOfWork. This should also provide an additional performance boost.
    Doug

  • What does "coarse-grained" and "fine-grained" mean?

    I've seen these terms used in almost every pattern article, but I've never seen them defined. Can someone provide me the definitions for coarse- and fine-grained? Also, what are the pros and cons of each.

It's an analogy to sand or other particles. You can have sand that is very fine (small grains) or sand that is very coarse (large grains).
A fine-grained approach is one that uses lots of little objects, and a coarse-grained approach uses a smaller number of larger objects. The reasoning for using a coarse-grained approach is that it limits the 'chattiness' of the application. That is, you don't have lots of little messages going back and forth, each one with its own overhead.
In practice, I've found the coarse-grained approach to not be that great. Chattiness is not that big of a deal anymore, and big objects take longer to load. Your application basically does squat while it waits for the big ball of wax, and then it has to disassemble it. If you use lots of small objects, the app can start working on them as soon as it starts receiving them.
This isn't to say the coarse-grained approach doesn't have its place; it's just not for every application.
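The chattiness trade-off can be sketched in a few lines. All names here are invented, and a simple counter stands in for remote round trips; nothing actually crosses a network.

```java
public class ChattinessDemo {
    static int callCount = 0;

    // Fine-grained service: one (potentially remote) call per attribute.
    static String getName(long id)  { callCount++; return "Alice"; }
    static String getEmail(long id) { callCount++; return "a@x.org"; }
    static String getPhone(long id) { callCount++; return "555-0100"; }

    // Coarse-grained service: one call returns a transfer object.
    static final class CustomerDto {
        final String name, email, phone;
        CustomerDto(String n, String e, String p) { name = n; email = e; phone = p; }
    }

    static CustomerDto getCustomer(long id) {
        callCount++;
        return new CustomerDto("Alice", "a@x.org", "555-0100");
    }

    public static void main(String[] args) {
        getName(1); getEmail(1); getPhone(1);
        System.out.println("fine-grained calls:   " + callCount);   // 3
        callCount = 0;
        getCustomer(1);
        System.out.println("coarse-grained calls: " + callCount);   // 1
    }
}
```

The coarse-grained variant pays one round trip instead of three, at the cost of moving a bigger object; which one wins depends on call latency and object size, as the reply above argues.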

  • Coarse grained and fine grained

    I want to now the difference b/wn coarse grained and fine grained entity bean, please explain me with some example

I think the name is self-explanatory. However, since you do not seem to be very clear, I will try to explain. Suppose you have a Person entity bean. There are essentially two ways to model the bean.
Essentially you need to identify the sub-parts of this Person bean. It would have a first name, last name, address, phone number, SSN and so on. If you were to model this as a fine-grained bean, you would have all of these as parameters and would be passing the required parameters in the corresponding create method. However, the address portion can comprise a state, city, street, house number and PIN. If you were to make this a separate entity that presents a local interface and uses a value object to create and return it, this design would make it a coarse-grained bean. Hope it makes some sense; if you require any other clarification I would oblige.
    cheers
    sicilian

  • Coarse-grained Web Service

    Hi all,
I noticed that the services in the ES Workplace have small and clear interfaces. But if I want to develop a complex UI, let's say where the object hierarchy contains many levels, how will it be implemented as a Web Service? One big Web Service?
If I have a fine-grained Web Service, then I must always save data in the DB, as the Web Service may require an ID. But that is not the way for a UI.
This restriction is based on the fact that Web Services must be stateless.
    Regards
    Paul

    well,
    probably things are just a matter of taste.
    I am from time to time building dynamic UIs in a completely SAP agnostic environment by combining calls to several SAP (custom made ones) and non SAP services.
Personally I did not like BAPIs at all. Having to call a commit explicitly seemed very anachronistic to me.
Also, I never felt a need to test-run an operation. To my knowledge no end user wants to test whether e.g. a purchase order could be created or not. To me the usual way is to have the actual data (availability, pricing, etc.) checked already (which is determined not by a test run but via Get services) and then try to create that order, with two outcomes: either the order is created successfully and a receipt returned, or the order is not created and a meaningful error condition is returned, hinting at how to overcome that condition.
To me, the possibility to automatically create service consumers (clients) in almost every language (programming environment), get the data type mappings and the transport layer stuff done automatically, and have an interface and service description (WSDL) which is (at least in theory) understandable by the SAP-agnostic, puts web services so far ahead of BAPIs that IMHO they are not comparable at all.
    regards, anton

  • Lease-granularity and what it really means for atomicity, concurrency

    Hi,
My goals are to avoid silently rejected updates, and to preserve the atomicity of EntryProcessors (EPs) across a cluster. These goals should be met regardless of whether, for a particular key, (1) multiple threads on multiple storage-enabled nodes invoke EPs, or (2) multiple clients connected to multiple Extend proxies invoke EPs. Hope that makes sense?
    I dropped a breakpoint into my custom EP and started up several cluster nodes (VMs) in my debugger. Somewhat unexpectedly the breakpoint was hit by two threads simultaneously on the same node. If I step through, both threads are apparently successful in setting the entry value, which was previously unset. Each EP instance returns a different value. The second value is the one retained by the cache.
    I'm using Coherence 3.5.3 on Java 1.6.0_17. I am not using TransactionMaps, nor do I wish to. The cluster has several storage-disabled nodes that serve as Extend proxies, and several storage-enabled nodes. The cache is replicated, with lease-granularity=member on all nodes.
    I found this forum thread lease-granularity locking problem which explains how my chosen lease-granularity may result in a lock being lost. Another forum thread lease-granularity locking problem explains how a cached object's lease may result in cache updates being rejected.
    If I set lease-granularity=thread, I am subsequently unable to reproduce the "issue". Based upon the forum posts and User Guide it makes sense to me that lease-granularity=member essentially enables coarse-grained locking that restricts access to a key to a single node within the cluster, and that lease-granularity=thread restricts access to a key to a single thread within the cluster. In the former case, threads on the node may steal locks from one another.
    Some questions, then:
    First of all, have I understood the semantics of lease-granularity correctly?
    Why are both threads' invocations on my custom EP apparently both successful when lease-granularity=member?
    Should an Extend proxy node use a cache configuration that is distinct from the storage node cache configurations? I.e. should the proxy node use lease-granularity=member, whereas storage enabled nodes use lease-granularity=thread? In this configuration, are Extend clients guaranteed to retain a lock across a conversation, regardless of other client connections competing for the same lock?
    Any advice appreciated.
    Thanks in advance,
    Simon
    Edited by: user8850969 on 22-Feb-2010 05:46

    Hello Simon,
    I will attempt to answer your questions below:
    First of all, have I understood the semantics of lease-granularity correctly?
    Yes, the lease-granularity semantics are:
lease-granularity=thread -- locks obtained by a thread can only be released by that thread.
lease-granularity=member -- locks are obtained by the cluster node, and any thread on that node can release the lock.
    Why are both threads' invocations on my custom EP apparently both successful when lease-granularity=member?
EntryProcessors (EPs) are executed on the entry owner and execute atomically. EPs eliminate the need for an explicit lock. Therefore each invocation of an EP is expected to complete successfully and atomically regardless of the lease-granularity setting. If the EP's breakpoint was triggered simultaneously for the same key then that is a problem; if it was for different keys then this is expected behavior.
    Should an Extend proxy node use a cache configuration that is distinct from the storage node cache configurations? I.e. should the proxy node use lease-granularity=member, whereas storage enabled nodes use lease-granularity=thread? In this configuration, are Extend clients guaranteed to retain a lock across a conversation, regardless of other client connections competing for the same lock?
Are you using explicit locking in addition to EPs? Explicit locking in Extend clients is disabled by default; if you enable it, there are concerns that need to be addressed. You can avoid that complexity entirely by using EntryProcessors.
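This is not Coherence code, but the per-key atomicity guarantee described for EntryProcessors can be illustrated locally with ConcurrentHashMap.compute(), which likewise runs the update function atomically for a given key without the caller taking an explicit lock:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PerKeyAtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
        Runnable bump = () -> {
            for (int i = 0; i < 10_000; i++) {
                // The remapping function executes atomically per key,
                // much like an EntryProcessor executing on the entry
                // owner, so concurrent increments are never lost.
                cache.compute("counter", (k, v) -> v == null ? 1 : v + 1);
            }
        };
        Thread t1 = new Thread(bump), t2 = new Thread(bump);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(cache.get("counter")); // 20000
    }
}
```

Two threads perform 10,000 atomic updates each and none are lost, with no lock or lease visible to the caller; that is the behavior the answer above describes for EPs on the entry owner.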

  • Atomic operation?

    When i am going through java docs i have encountered the following statement about Atomic action
    " An atomic action cannot stop in the middle: it either happens completely, or it doesn't happen at all. No side effects of an atomic action are visible until the action is complete."
So I understand this as: a thread does not give up (does not move out of the running state) until it completes the atomic action (a sequence of steps), so an atomic action done by one thread is visible to all threads. I mean to say that no other thread executes until the thread executing the atomic action completes, so any modification done by the atomic action will be visible to other threads.
Is my understanding right?
    Edited by: Muralidhar on Feb 16, 2013 5:37 PM

    teasp wrote:
Atomic operation is a totally different thing from visibility. When a thread is executing an atomic operation, no other thread has a chance to disturb the action, but that does not guarantee other threads can "see" the result of the operation immediately.

Well, if it isn't Mr. Concurrency Expert.
    Based on the ignorance and hostility you demonstrated in your other thread, I wouldn't start dishing out answers to people just yet.
    Offense very much intended.
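Hostility aside, the atomicity point itself can be shown in a few lines. This is my own sketch, not from the thread: the AtomicInteger increment is an indivisible read-modify-write (and, as a bonus, has volatile visibility semantics), while the plain int increment next to it is three separate steps and can lose updates under contention.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicVsPlainDemo {
    static final AtomicInteger atomicCount = new AtomicInteger();
    static int plainCount; // unsynchronized: increments can be lost

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                atomicCount.incrementAndGet(); // atomic read-modify-write
                plainCount++;                  // read, add, write: divisible
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("atomic: " + atomicCount.get()); // always 200000
        System.out.println("plain:  " + plainCount);        // often less
    }
}
```

After both joins the atomic counter is always exactly 200,000; the plain counter usually comes up short, which is exactly the difference between an atomic action and an ordinary compound statement.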

  • Volatile and object atomicity

    Hi,
I'm reading the Sun concurrency tutorials and am unable to understand how 'atomicity' is enforced on an object when the volatile modifier is applied to it. A sample program suggests that two threads can be in two different methods of a 'volatile' object at the same time and modify the 'state' of the object.
With this observation I don't understand how the 'volatile' keyword forces atomicity for an object.
    Here is the sample code I've written to verify this along with the result when executed on my machine here.
    In the code that follows -
    VolatileClass - A single volatile instance of this class would be used by two threads.
    IncrementThread - Increments the state of the VolatileClass instance.
    DecrementThread - Decrements the state of the VolatileClass instance.
My expectation is that since the VolatileClass object is declared as a volatile variable, two threads should not be inside the same object at once. I understand that the methods are not synchronized, but I thought the volatile keyword should guarantee atomicity.
    Any thoughts?
    public class Main {
        private static class VolatileClass {
            int nonVolatileInt = 0;

            public int increment() {
                try {
                    System.out.println("Increment slept...");
                    Thread.sleep(500L);
                    System.out.println("Increment woke");
                } catch (final InterruptedException e) {
                    System.out.println("Increment resuming...");
                }
                return ++nonVolatileInt;
            }

            public int decrement() {
                try {
                    System.out.println("Decrement slept...");
                    Thread.sleep(100L);
                    System.out.println("Decrement woke");
                } catch (final InterruptedException e) {
                    System.out.println("Decrement resuming...");
                }
                return --nonVolatileInt;
            }
        } // VolatileClass

        private static class IncrementThread implements Runnable {
            VolatileClass memberVolatileClass;

            public IncrementThread(final VolatileClass volatileClass) {
                memberVolatileClass = volatileClass;
            }

            @Override
            public void run() {
                for (int i = 0; i < 10; i++) {
                    System.out.println("Increment: "
                            + memberVolatileClass.increment());
                }
            }
        } // IncrementThread

        private static class DecrementThread implements Runnable {
            VolatileClass memberVolatileClass;

            public DecrementThread(final VolatileClass volatileClass) {
                memberVolatileClass = volatileClass;
            }

            @Override
            public void run() {
                for (int i = 0; i < 10; i++) {
                    System.out.println("Decrement: "
                            + memberVolatileClass.decrement());
                }
            }
        } // DecrementThread

        static volatile VolatileClass volatileClass = new VolatileClass();

        public static void main(final String[] args) {
            final Thread incrementThread = new Thread(new IncrementThread(
                    volatileClass));
            final Thread decrementThread = new Thread(new DecrementThread(
                    volatileClass));
            incrementThread.start();
            decrementThread.start();
        } // main
    }
    The result of the code is
    Increment slept...
    Decrement slept...
    Decrement woke
    Decrement: -1
    Decrement slept...
    Decrement woke
    Decrement: -2
    Decrement slept...
    Decrement woke
    Decrement: -3
    Decrement slept...
    Decrement woke
    Decrement: -4
    Decrement slept...
    Increment woke
    Increment: -3
    Increment slept...
    Decrement woke
    Decrement: -4
    Decrement slept...
    Decrement woke
    Decrement: -5
    Decrement slept...
    Decrement woke
    Decrement: -6
    Decrement slept...
    Decrement woke
    Decrement: -7
    Decrement slept...
    Decrement woke
    Decrement: -8
    Increment woke
    Decrement slept...
    Increment: -7
    Increment slept...
    Decrement woke
    Decrement: -8
    Increment woke
    Increment: -7
    Increment slept...
    Increment woke
    Increment: -6
    Increment slept...
    Increment woke
    Increment: -5
    Increment slept...
    Increment woke
    Increment: -4
    Increment slept...
    Increment woke
    Increment: -3
    Increment slept...
    Increment woke
    Increment: -2
    Increment slept...
    Increment woke
    Increment: -1
    Increment slept...
    Increment woke
    Increment: 0

    I underlined the critical parts that you missed, namely that all the non-atomic actions are on non-volatile longs and doubles.
    Or perhaps you missed my use of the word volatile in the part that you quoted and said you disagree with.
    kilyas wrote:
    >
    2. Every read and write of a volatile variable is guaranteed atomic, including doubles and longs, which are 64 bits. If the underlying hardware does that in two 32-bit steps, the JVM must ensure that the executing code sees only the before or after value, not the intermediate half-and-half.
    [http://java.sun.com/docs/books/jls/third_edition/html/memory.html#17.7]
    [http://java.sun.com/docs/books/jvms/second_edition/html/Threads.doc.html#22258]
    [http://java.sun.com/docs/books/jvms/second_edition/html/Threads.doc.html]
    The bold part is what I am differing with. In his book "Java Concurrency in Practice" Brian Goetz writes in Section 3.1.2
"the JVM is permitted to treat a 64-bit read or write as two separate 32-bit operations. If the reads and writes occur in different threads, it is therefore possible to read a nonvolatile long and get back the high 32 bits of one value and the low 32 bits of another"
    Something similar is mentioned here @ http://java.sun.com/docs/books/jvms/second_edition/html/Threads.doc.html
8.4 Nonatomic Treatment of double and long Variables
If a double or long variable is not declared volatile, then for the purposes of load, store, read, and write operations it is treated as if it were two variables of 32 bits each; wherever the rules require one of these operations, two such operations are performed, one for each 32-bit half. The manner in which the 64 bits of a double or long variable are encoded into two 32-bit quantities and the order of the operations on the halves of the variables are not defined by The Java Language Specification.
This matters only because a read or write of a double or long variable may be handled by an actual main memory as two 32-bit read or write operations that may be separated in time, with other operations coming between them. Consequently, if two threads concurrently assign distinct values to the same shared non-volatile double or long variable, a subsequent use of that variable may obtain a value that is not equal to either of the assigned values, but rather some implementation-dependent mixture of the two values.
    An implementation is free to implement load, store, read, and write operations for double and long values as atomic 64-bit operations; in fact, this is strongly encouraged. The model divides them into 32-bit halves for the sake of currently popular microprocessors that fail to provide efficient atomic memory transactions on 64-bit quantities. It would have been simpler for the Java virtual machine to define all memory transactions on single variables as atomic; this more complex definition is a pragmatic concession to current hardware practice. In the future this concession may be eliminated. Meanwhile, programmers are cautioned to explicitly synchronize access to shared double and long variables.
    According to the Java Language Specification [JLS 2005], Section 17.7, "Non-atomic Treatment of double and long"
... this behavior is implementation specific; Java virtual machines are free to perform writes to long and double values atomically or in two parts. For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64 bit value from one write, and the second 32 bits from another write.
    Edited by: jverd on Aug 9, 2010 11:48 AM
    Edited by: jverd on Aug 9, 2010 11:49 AM
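The JLS rules quoted above can be sketched as follows. This demo is my own, not from the thread: the writer alternates a 64-bit field between 0L and -1L, so every bit flips between writes. Because the field is volatile, a reader may only ever observe one of those two values; remove volatile and, on a JVM that splits long accesses into 32-bit halves, a torn half-and-half mixture becomes possible.

```java
public class LongTearingDemo {
    static volatile long value;   // drop "volatile" to permit word tearing
    static volatile boolean stop;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            while (!stop) {
                value = 0L;
                value = -1L; // all 64 bits differ between the two writes
            }
        });
        writer.start();
        for (int i = 0; i < 1_000_000; i++) {
            long seen = value;
            // With volatile, this branch can never be taken (JLS 17.7).
            if (seen != 0L && seen != -1L) {
                System.out.println("torn read: " + Long.toHexString(seen));
            }
        }
        stop = true;
        writer.join();
        System.out.println("reader finished");
    }
}
```

Note that even a torn read would not reproduce on every machine: as the quoted text says, implementations are free (and encouraged) to make long accesses atomic anyway, so the volatile keyword is what makes the guarantee portable.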

  • Using the BusinessDelegate pattern at lower levels

    I was hoping to elicit some feedback on general J2EE architecture...
    We have an application that we separate into three general tiers: web/client tier, service tier (with 'services' that implement business logic) and a data tier (with 'controllers' which basically implement CRUD (create, read, update, delete) for a single domain object or table). We have this thing called a 'manager' that straddles the service and data tier which also implements CRUD, but at a more coarse grained level. Managers allow us to work with big objects that are actually a composition of small objects.
    Our services and managers are almost always implemented as session beans (stateless) so the client tier uses a "Business Delegate" as per Sun's J2EE patterns to hide the implementation details of those services and managers. So a given call in the system would look like:
    Struts action class -> business delegate -> manager or service -> controller -> entity bean
    Managers and services, when they need to work with persistent data then use controllers to get at that data. Most controllers are currently session beans (stateless), but we are starting to add controllers that implement the DAO pattern.
    I'd like to hide the implementation details of those controllers from the managers and services that use them, though. I hate seeing code in managers/services where we use ServiceLocator to look up a home interface, create the EJB controller and then do method calls.
    My question is whether it's possible and appropriate to use a BusinessDelegate between two session beans such as our 'managers' and 'controllers'. Doing so would make a given call in the system would look like:
Struts action class -> business delegate -> manager or service -> business delegate -> controller -> entity bean (if used). Would you lose the transaction between managers and controllers? (Managers always use Required for each method and controllers use Supports; as well, controllers only ever expose a local interface.)
Thanks for any advice/opinions you might have.
    Donald

    In your framework, does each delegate become a single
    InvocationHandler instance
    Yes, exactly.
    with (for instance) a
    constructor that does the EJB lookup and an invoke()
    method?
    EJB lookup is being done by a ServiceLocator.
    Do you have ServiceLocator return the proxy object (I
    know that traditionally ServiceLocators merely return
    EJB home interfaces,
Mine does the same. But it's used by EJBDelegateFactory, which creates EJBDelegates.
    EJBDelegates implement InvocationHandler and support InvocationDecorators.
    but I have used them to return
    POJO implementations of interfaces so it becomes a
    kind of implementation factory).
    Nice idea. I have POJOLocator for that.
    Do you have your Proxy object implement the EJB
    interface, rather than creating a new one.
    Yes. The only drawback is that client code needs to catch EJB specific and remote exceptions.
    However it's not a big problem, because there is a decorator which catches all EJB exceptions and throws BusinessDelegateException with a user-friendly error message inside. This means that EJB interfaces need to declare BusinessDelegateException in throws clauses, but it doesnt cause any problems.
    So client code is something like:
    try {
        foo.invokeSomeMethod();
    } catch (BusinessDelegateException ex) {
        displayErrorMessage(ex.getMsg());
    } catch (Exception ex) {
        // can be empty
    }
    I have
    been thinking about creating a new one and then seeing
    if I can change xdoclet's templates to have the EJB
    interface extend this interface.
I have used this idea, too. I think it's even called a pattern. You can make the EJB bean implement this interface, and then you have compile-time checking of EJB compliance.
    I remember I had problems with that when some CORBA client stub generator got confused.
Thanks for any little advice you can provide.
Maybe I will even publish my framework as open source on the web, if my client allows me to.
    best regards,
    Maris Orbidans
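The delegate-as-InvocationHandler idea discussed above can be sketched with java.lang.reflect.Proxy. The names here (AccountService, createDelegate) are invented for illustration, and BusinessDelegateException is modeled after the exception mentioned in the reply; a real delegate would obtain its target via a ServiceLocator rather than taking it as a parameter.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class DelegateDemo {
    interface AccountService {
        int getBalance(String accountId);
    }

    static class BusinessDelegateException extends RuntimeException {
        BusinessDelegateException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    // Builds a dynamic proxy that implements the business interface and
    // translates any implementation failure into a single delegate
    // exception, so callers never see EJB/remote exception types.
    static AccountService createDelegate(AccountService target) {
        InvocationHandler handler = (proxy, method, args) -> {
            try {
                return method.invoke(target, args);
            } catch (Exception e) {
                throw new BusinessDelegateException(
                        "call to " + method.getName() + " failed", e);
            }
        };
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                handler);
    }

    public static void main(String[] args) {
        AccountService delegate = createDelegate(id -> 42);
        System.out.println(delegate.getBalance("A-1")); // 42
    }
}
```

One handler class then serves every delegated interface, which is the main attraction of the Proxy-based approach over hand-writing a delegate per session bean.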

  • How to create 'service' tier with functionality multiple struts apps

    I am looking for some advice regarding a future project. I have 3 struts webapps (running on Tomcat 4.1.3) each with some overlapping functionality. I would like to create a 'service tier' to accommodate common functionality which can be shared between the 3 struts web apps.
    What framework would best achieve this? Are EJBs the way to go? My struts apps all use DTOs so are not tied to the Action Form object. I have been looking at Spring but am not sure exactly how this will fit? Will tomcat serve my needs or do I need to migrate to something with full J2EE support. I have also considered JBOSS with Seam linking JSF to EJBs. Is this a viable option?
    The service layer can be on the same machine as the struts apps although I would like to have the flexibility to move it to a separate server in the future if necessary.
    I will be happy to provide more details to any who can offer me some guidance.

    Hi Saish,
    Thanks for your help.
    I have considered both options. The main goal is to
    eliminate duplicate code. However the presentation
    tiers of the applications are quite different. I
    will look into Martins book.
    It's outstanding. One of my top five favorite books.
Would creating a jar file which is included in each
.war end up becoming a maintenance nightmare?
Depends on what you mean by 'maintenance'. In terms of ongoing development with your Java source, in a modern IDE, no. In terms of deployment, it does add a minor bit of complexity. However, if you are using an automated build and deployment tool such as Ant or Maven, the amount of extra work is trivial.
    In your opinion what would be the advantages of
    implementing the common functionality as a seperate
    tier (with SPRING/EJB) vs using a JAR file and
    distributing it with each app?They are not mutually exclusive. If you want to remove duplicate code, then simple refactoring is the start. As part of that refactoring, you may decide enough classes are providing similar functionality (such as transaction management, a coarse-grained public API, etc.) and create a tier. You mentioned in particular a service tier. This is a design and architectural decision. You could still completely refactor common functionality into a better design without the introduction of a new tier.
Whether to create a tier is, IMO, more art than science. Adding a tier adds complexity. However, the net effect should be to reduce system complexity. It is more 'work' to implement a true persistence tier than to simply code JDBC in model objects (or use Hibernate/JDO objects). However, as overall system complexity grows, the addition of a persistence tier adds many benefits. Business objects concern themselves solely with business logic, whereas data access objects concern themselves with persistence. You can even have different developers with different skills specialize within a tier.
So, tiering really is a big topic. Fortunately, there are many architecture templates and design patterns to guide your decisions. "Patterns of Enterprise Application Architecture" (also by Fowler) compares and contrasts the more common ones. There is lively debate as to the pros and cons of different strategies. In the end, there is no cookie-cutter architecture. You will need (sometimes through the painful process of making a mistake) to see what works best for your actual system.
Finally, about remoting. Remember the first law of distributed objects: "Don't distribute your objects." There is always a performance penalty compared to an in-JVM local call. Remoting has its uses, but these should be careful architecture-level decisions. (I should concede that even this paragraph is sometimes contentious and debated.)
    - Saish

  • Service Granularity in SAP

I want to know if SAP Banking supports service granularity.
Are the services fine-grained or coarse-grained?
Please provide a pointer or document about SOA and web services for SAP / SAP Banking.

    Hi
    I picked this up, hope this is of some help.
    Service Granularity Challenge with SAP's ESA
    SAP uses its Netweaver based development infrastructure to define services in a SOA.
    The business services are defined in Netweaver Developer Studio (NWDS) which is also known as the CAF (Composite Application Framework) Core. They have three main constructs namely an external service, an entity service and an application service.
    An external service allows for the linkage to services defined outside the CAF Core environment sandbox.
An entity service essentially models an operation on a business object (BO) as a service. Typically these map to existing BAPIs that work on business objects stored in the SAP Business Warehouse (BW). Using operations on business objects as a basis for defining services is, to me, a very parochial approach to service definition; it often becomes too fine-grained and leads to a service-proliferation syndrome.
Application Services are defined as coarser-grained services that combine or use multiple entity services with some applied business logic to tie them together. Even here, it is more of a bottom-up approach to service definition. I fear that, using this approach, we are missing the key issue of aligning services to business processes. The concept of business-aligned services seems to be lost in the mechanism somewhere!
    There is another tool, more of a somewhere-inbetween-a design-and-a-runtime mechanism to define orchestrations of services, based on the SAP Guided Procedures (GP). GP has concepts of actions, blocks and callable objects. Actions and blocks can be simulated as a composition of services together - ala service composition. But then, these service composites, are not pushed down to the business service layer - which to me is an issue.
    The Enterprise Services which SAP defines are done in an altogether differnt way. There the services are conceptualized from a business point of view only and defined and orchestred in SAP Exchange Infrastructure, while implementing proxy remains in the backend business system like ERP, CRM, etc. These Enterprise Services represents a well-defined business functionality only.
    The services that are defined in CAF Core are rather composite services which are predominantly based on the Enterprise Services. Though at present we use BAPI (which are rather technical API for business objects) to define the dependency of the entity service with the existing assests, but in future these will be replaced by Enterprise Services as day by day more and more Enterprise Services are getting published by SAP and added to the UDDI service registry. So in that sense we can define entity and external services based on the Enterprise Services only and in the Application Service we can model the business logic of the composite application, which in turn becomes a composite service.
    Rgds
    suresh

  • Webflow/Pipeline looping

    I've been having some discussions with team members for a project I'm currently
    working on about looping in webflows and pipelines. I'm convinced that this is
    just not the best idea in general, especially when a loop involves webflow AND
    pipeline components. I believe that this type of thing should be handled either
    completely in a webflow component or elsewhere (an EJB for example).
    Here is one case. There is a webflow with a presentation node index.jsp,
    InputProcessors BeginIP and DecisionIP and a Pipeline component WorkPC.
    index.jsp calls BeginIP which calls WorkPC which calls DecisionIP which may call
    BeginIP. The diagram might look something like
    index.jsp ------> BeginIP -----------> WorkPC ---------> DecisionIP
    ^--------------------------------------------'
    (hopefully everyone can see that diagram, but the link on the bottom goes from
    DecisionIP to BeginIP).
    DecisionIP is responsible for determining if the looping should continue or if
    it should end. This could be done by verifying that a PipelineSession parameter
    was set, or whatever. This setup causes the business logic to span two IPs and
    it also forces the IPs to control the looping by returning the correct actions.
    In addition, it also requires that the IPs be programmed so that they can
    determine the current state (i.e. start of the loop, within the loop, outside
    the loop, etc.)
    This type of looping could also be accomplished by removing the DecisionIP and
    having WorkPC call BeginIP. This would centralize the work into a single IP that
    is now responsible for both handling the loop control, and the business logic.
    Of course, this could simplify the situation or complicate it depending on the
    circumstances.
    My solution to this problem is simple: don't use Webflow/Pipeline to do looping.
    Besides the performance ramifications, doing this sort of thing makes the IPs
    too heavy with flow-control logic, which should really be left to the
    WebflowExecutor. Likewise, it makes the IPs more and more difficult to
    maintain, extend and reuse.
    Now here is some rough (really rough) pseudocode of the two solutions.
    Webflow/Pipeline looping:
    BeginIP.java
    if not in loop {
        do business logic
    } else if in loop {
        do different logic
        prepare for WorkPC call
    }
    return "success";
    DecisionIP.java
    unpack from WorkPC call
    if WorkPC did not complete work {
        return "continue";
    }
    return "finished";
    Direct looping:
    BeginIP.java
    do {
        business logic
        call Work
        business logic
    } while (Work not finished);
    One thing to notice about these two is that the second one did not use WorkPC
    because it is simply making a call to some Work that could be stored in a normal
    EJB (non-pipeline EJB that is).
    A few last notes and then I'm done. One thing I like about the direct looping
    method is that it is easy to see what is going on: you can tell that it is
    looping just by looking at the code. With the Webflow/Pipeline loop, you'll need
    to either open the EBCC or look in the .wf file in order to fully understand the
    looping. That hurts both maintenance and readability, for the original developer
    and for a new developer alike. Lastly, although not the biggest concern, the
    Webflow/Pipeline looping does much more work. It could potentially cause BeginIP
    and DecisionIP to be instantiated on each iteration (depending on the
    WebflowExecutor's implementation). It forces two JNDI lookups and two remote
    calls each iteration (one for the Pipeline EJB and a second for the WorkPC EJB).
    It requires that the IPs marshal and unmarshal data into and out of the
    PipelineSession (the only communication mechanism for Pipeline components). The
    direct looping does no reflection, no instantiation, one JNDI lookup and one
    remote call per iteration, and requires no marshalling and unmarshalling via the
    PipelineSession.
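    A minimal sketch of the direct-looping approach described above, assuming a plain session-bean-style work unit behind a hypothetical Work interface (all names here are mine for illustration, not a real WebLogic API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical work unit: returns true once all the work is finished.
interface Work {
    boolean process(Map<String, Object> state);
}

public class BeginIP {
    // Loop control and business logic live in one place, instead of being
    // split across BeginIP, DecisionIP and the Webflow graph.
    public String process(Work work, Map<String, Object> state) {
        boolean finished;
        do {
            // pre-call business logic would go here
            finished = work.process(state);
            // post-call business logic would go here
        } while (!finished);
        return "success";
    }

    public static void main(String[] args) {
        Map<String, Object> state = new HashMap<>();
        state.put("remaining", 3);
        // A toy Work that finishes after its counter reaches zero.
        Work work = s -> {
            int r = (Integer) s.get("remaining") - 1;
            s.put("remaining", r);
            return r == 0;
        };
        System.out.println(new BeginIP().process(work, state));
    }
}
```

    The point is only structural: the `do/while` makes the looping visible in one method, whereas the Webflow variant scatters it across two IPs and the graph definition.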
    What does everyone else think?
    Brian Pontarelli

    Brian,
    I agree with Peter. This loop will definitely break.
    You should consider having both an IP and a PC: let the IP do form validation
    and put parameters for the pipeline execution on the pipeline session. The
    Pipeline will do the looping for you, which means you need to implement the
    loop within the PC. Also, if this loop contains DB access, you might want to
    consider caching in order to avoid multiple trips to the DB.
    Regards,
    Michael Goldverg
    "Peter Laird" <[email protected]> wrote in message
    news:[email protected]...
    Hi Brian,
    Your conclusion to not use Webflow for this purpose is correct in my opinion.
    Here are some more reasons:
    1) Webflow node traversal is done via recursion (for reasons too lengthy to explain
    here). Therefore, if your loop iterates, say, 10,000 times, you will get a stack
    overflow before it completes.
    2) Ideally, Webflow components should not depend on other components being in
    place in the Webflow graph. They should only depend on items in the HttpRequest
    or PipelineSession. Doing otherwise makes the Webflow very fragile. I know that in
    practice this is a lofty goal that is hard to achieve, but it's the right philosophy.
    So, building a construct like the one you diagrammed makes the graph difficult to maintain.
    3) I was on the design team for Webflow, and this was not a use case we tried
    to support. Consider a more coarse-grained approach to the business logic, meaning
    the looping should be contained within a single IP or PC.
    Good luck with your project Brian!
    PJL

  • Should this statement be within synchronized block

    I am trying to generate a unique ID as follows:
    uniqueID = String.valueOf( System.currentTimeMillis() + (long) Math.random() )
    Is this statement thread-safe? Should this statement be enclosed within a 'synchronized' block?
    Thanks

    Sorry, I missed the issue with casting to a long. That
    certainly makes it easy to get duplicates.
    The problem with (long) Math.random() is that you always get 0:
    public class LongTest {
        public static void main(String[] args) {
            int failed = 0;
            for (int i = 0; i < 1000000; i++) {
                if (((long) Math.random()) > 0) {
                    failed++;
                }
            }
            // Math.random() returns a double in [0, 1), so the cast to
            // long truncates every value to 0 and this always prints 0.
            System.out.println(failed);
        }
    }
    With the bug in place, and a single thread, the code
    requires that calls to this piece of code occur
    further apart than the resolution of
    currentTimeMillis(), which as I said is often as
    coarse as 10ms (but can be 15ms on some Windows
    platforms). If you now throw multiple threads in, then
    the chance of duplicates is very real.

    Yes, since with that call you effectively use only System.currentTimeMillis(), which, as you said, is coarse-grained.
    There are easier ways to get unique IDs: just see
    java.util.UUID. Or just use an atomic counter.

    Yes.
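    To make the two suggestions concrete, here is a minimal sketch of both alternatives (the class and method names are mine, not from the thread):

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class UniqueIds {
    // Process-wide counter; incrementAndGet() is atomic, so no
    // synchronized block is needed around it.
    private static final AtomicLong COUNTER = new AtomicLong();

    // Globally unique; no coordination between JVMs required.
    static String randomId() {
        return UUID.randomUUID().toString();
    }

    // Unique within this JVM; cheap and strictly increasing.
    static String counterId() {
        return String.valueOf(COUNTER.incrementAndGet());
    }

    public static void main(String[] args) {
        System.out.println(randomId());   // 36-character UUID string
        System.out.println(counterId());
        System.out.println(counterId());
    }
}
```

    Either option avoids the currentTimeMillis() resolution problem entirely: UUIDs need no shared state at all, and the AtomicLong pushes the synchronization down into a single lock-free machine-level operation.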

  • Aggregator Service

    Hi,
    I have a query related to the Aggregator Service.
    We recently made use of aggregator service. The Hourly Report from this service
    looks like this,
    Start Reporting      21 May 2007 02:00:00 GMT          
    End Reporting      22 May 2007 02:00:00 GMT          
    Reporting Interval      Hourly          
    Day      Date/Time                                      Named        Anonymous
    Mon      21. May 2007 02:00 (GMT)      18       27
    Mon      21. May 2007 03:00 (GMT)      21       41
    Mon      21. May 2007 04:00 (GMT)      19       42
    Mon      21. May 2007 05:00 (GMT)      6       20
    Mon      21. May 2007 06:00 (GMT)      4       9
    Mon      21. May 2007 07:00 (GMT)      4       25
    I would like to know if these numbers are cumulative.
    Meaning, if a user logged in between 2:00 and 3:00 and is active till 5:00, will
    the user be counted in the next hour's count too (3:00 - 4:00)?
    Thanks,
    Vikas

    I am not sure about the Aggregator Service in the Order to Bill PIP, but we introduced the aggregator programming model for a specific purpose.
    We had this use case:
         In the MDM Customer project, the Siebel application has create/update triggers defined at the database level (as opposed to from the UI frames), so any update/create action can potentially lead to multiple events getting raised for integration. Therefore, there is a need to aggregate these events and process them in batches instead of processing each fine-grained event by itself.
         The events can be raised on the following business entities: Account, Contact and Address.
         It is also required to maintain the relationships between the above entities when doing the aggregation: an Account can have one or more Contacts/Addresses attached to it. Similarly, a Contact can have one or more Addresses attached to it. Also, Contacts and Addresses can be shared across multiple Accounts in Siebel.
    The Event Aggregation Programming Model provides a comprehensive methodology for the business use case where the events / entity / messages aggregation is needed.
         Event Aggregation is needed for the following reasons:
    1.     Multiple events are raised before the completion of a business message, and each incomplete message triggers an event, which causes a business event in the integration layer.
    2.     To have a holistic view of an entity.
    3.     To synchronize entities so as to have a single view of each entity.
    4.     To increase performance.
    5.     To consolidate several fine-grained events into a single coarse-grained event.
    6.     To merge duplicates of the same event.
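    As an illustration of points 5 and 6, here is a sketch of collapsing a batch of fine-grained events into one coarse-grained event per entity (the Event type and the grouping key are my own invention, not part of the PIP's actual model):

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class EventAggregator {
    // A fine-grained event: which entity it concerns and which field changed.
    record Event(String entityId, String field) {}

    // Group a batch of fine-grained events into one coarse-grained event
    // per entity, dropping duplicate (entity, field) pairs along the way.
    static Map<String, Set<String>> aggregate(List<Event> events) {
        Map<String, Set<String>> byEntity = new LinkedHashMap<>();
        for (Event e : events) {
            byEntity.computeIfAbsent(e.entityId(), k -> new LinkedHashSet<>())
                    .add(e.field());
        }
        return byEntity;
    }
}
```

    For example, three updates to the same Account would be processed as one consolidated event rather than three round trips through the integration layer.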
    Thanks

  • Easy way to add a single keyword to an image?

    I just organized a bunch of photos I took at a birthday party, and ran into some awkwardness when keywording. Hopefully it's just user ignorance, but if not, maybe someone can offer a workaround.

    I did the import with no keywords, because I wasn't thinking. Once I'd done that, I selected all images in the "Last Import", hit Ctrl-K, and typed Birthday (which is one of my standard keywords). Then I deselected all images and started doing selective keywording. That's where it started getting awkward.

    I had several shots that had my son in them, so I selected all of them and hit Ctrl-K. This put my cursor in the keyword box in the right-side panel, with Birthday highlighted. I hit <End>, then typed comma and MySonsName. No problem so far.

    I deselected the shots and selected the ones that had my daughter in them. The right-side keyword panel now showed something like "Birthday*, MySonsName*". I wasn't sure whether I could just add MyDaughtersName to the images without actually adding MySonsName to all selected images (even those without my son in them). That's the awkward part.

    What I found was that in order to add a single keyword to a bunch of images at once, I had to enable Keyword Stamping and use that. An alternative was to "Set Keyword Shortcut", enter the keyword, select the images I wanted, and then use "K" to add the shortcutted keyword to the images.

    Is there any way to simply say "add this keyword to all selected images" in one atomic action? In other words, hit a keystroke, type the keyword, hit Enter, and it's now applied to all selected images.

    --
    Rob Freundlich
    "Males are biologically driven to go out and hunt giraffes." - Newt Gingrich
    "Some folks you don't have to satirize, you just quote 'em." - Tom Paxton

    <[email protected]> wrote in message news:[email protected]...
    > In the right hand keyword pane the list should show something like "birthday, *sonsname". The * means that the keyword isn't
    > applied to all the images.
    >
    > To add your daughter's name to the selected images, just add it to the end of the list and press return. As long as you don't
    > remove the * you won't apply your son's name to all the images. If you do remove the *, then you will.

    Sweet. I'm a programmer, so I thought the * meant multiple images, not "not all images", so I didn't even think of trying that. Works like a charm, though.

    Thanks!

    --
    Rob Freundlich
    "Males are biologically driven to go out and hunt giraffes." - Newt Gingrich
    "Some folks you don't have to satirize, you just quote 'em." - Tom Paxton
