Concept and situations in which to use a Serializable object?

Hi guys,
I have read through the Java API specs about the Serializable interface.
I would like to know in what circumstances we need to serialize an object / implement the Serializable interface.
Can anyone give me a real-world analogy?
And what will happen if we don't make our class/object serializable?
Thanks in advance,
Regards,
Willy

You might think of serialization as an easy way to send an object over a stream.
Examples: saving an object to disk, sending it over the network, passing it between different VMs, etc.
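As a minimal sketch of that idea (the Settings class and the file name are made up for illustration), the object below is written to a file and read back:

import java.io.*;

// A class must implement Serializable for ObjectOutputStream to accept it.
class Settings implements Serializable {
    private static final long serialVersionUID = 1L;
    String userName = "willy";
    int volume = 7;
}

public class SerializationDemo {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        Settings original = new Settings();

        // Serialize: the object's state is converted to a byte stream and written to disk.
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("settings.ser"))) {
            out.writeObject(original);
        }

        // Deserialize: the byte stream is read back and an equivalent object is reconstructed.
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream("settings.ser"))) {
            Settings restored = (Settings) in.readObject();
            System.out.println(restored.userName + " / " + restored.volume);
        }
    }
}

If Settings did not implement Serializable, writeObject would throw a NotSerializableException, which answers the last part of the question.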

Similar Messages

  • Photoshop CS6: Pros and Cons of Using Smart Objects

    I haven't had Photoshop CS6 for that long, and have only just got past feeling uncomfortable with using Curves, now that I've learnt how to use them properly.
    My concern is - I am currently learning about Smart Objects. The concept, at first, seemed like 'the best thing since sliced bread', being able to non-destructively use filters, Shadows/Highlights command, Unsharp Mask, endlessly scale using Free-Transform etc etc, without harming pixels at all.
    However, the more articles I read about their use in Photoshop, the more I am afraid to start using them in my workflow.
    I understand that when you convert to a Smart Object, this process is non-destructive, i.e. I can perform as many readjustments to a filter, for example, and Photoshop will always work from the embedded container file (which has had no filter adjustment made to it) to adjust the filter to your most recently adjusted settings. If you later decide you don't want to use a filter at all, and rasterize the Smart Object back into a regular layer again, is this process non-destructive as well?
    Then there is this article, which I struggle to understand properly:
    http://bjango.com/articles/smartobjects/
    Please see the part 'Smart Objects Created in Photoshop'. It seems to say I can't scale with a Smart Object without causing interpolation and blurry edges. Please can somebody clarify what the writer of this article is trying to get across, because it is well documented that Smart Objects can be endlessly rescaled non-destructively.
    Please understand I use Photoshop primarily for editing photographs.

    There is much modern focus on "non-destructive" editing, but keep in mind if you don't overwrite or destroy the original file there is no destruction at the highest level.  Put in layman's terms, you could always start over with the raw file.
    That thought segues into my next one:  Non-destructive editing makes sense if you need to use the same information for a variety of somewhat related purposes, or if the work product may need to change (e.g., to suit the whims of a fickle client).
    But at another extreme, if you're editing for a particular purpose - say creating the best possible print from an exposure - sprinting right for the finish line by changing pixel values directly and being done with it can be an extremely effective approach.  This requires that you get things right the first time, and that takes practice.
    Some folks do their Photoshop work by building up layer after layer and using smart objects, smart filters, etc., and this can be effective but no computer has yet been built that can composite all that stuff in real time with a big image.  So there IS a cost to doing it.  What you might gain by being able to re-do things, you might not have needed to gain if your control responses were instantaneous and you could tweak the intermediate result at every step very easily.  Note the number of posts about how slow Photoshop CS6 is/was at editing deep documents, some by people using 2012 computers.
    As with most things, it's horses for courses.  It's good that Photoshop gives us rich tools and choices for how to work.
    Regarding your specific question, bear in mind that what's communicated to the parent document from each of its embedded Smart Objects is a flat, rasterized image.  Think of the embedded smart object kind of like going off and opening another document, making the changes you want, saving the document, then flattening it and pasting the pixels into your parent document.
    In the very first example in the linked article, they show how the smart-object-rasterized image of a vector circle, subsequently scaled by resampling the parent document in which the Smart Object is used, becomes fuzzy as it is scaled up.  Once you understand this you realize that of course you could scale up the smart object itself, e.g., to a size equal to or larger than what's ultimately needed by the parent document, and then it could be crisp in the parent document where it's used.
    Of course, having all your smart objects at a size larger than you need takes up even more resources.
    -Noel

  • Doubt about concepts and the scenarios where they are used

    I have doubts about differentiating the items below and the scenarios in which each is used:
    statistics vs. document update      -  I already know the definitions of V1/V2/V3 updates, please don't give definition links
    conditions vs. global filters in QD     and scenarios
    B-tree vs. bitmap index         and scenarios
    reconstruction vs. repair   and scenarios
    I already searched the forums and didn't get a satisfactory answer.
    Please suggest.
    Edited by: Swetha N on Jan 20, 2012 10:10 AM

    Conditions - If you want to get the top 10 customers, you can use a condition for that. It is applied during execution, before displaying the data; it is the last step in report execution.
    Filters - You can restrict to a particular plant, customer, etc. This is applied before query execution; it is essentially the starting point for query execution.
    B-tree / bitmap - in addition to what Venkatesh said, a bitmap index is read-optimized and a B-tree index is write-optimized.
    Reconstruction - used in the 3.x flow. If a request got deleted, you can retrieve it: go to the reconstruction tab, select the request, and run reconstruction.
    Repair - I guess you are asking about a repair full request. If so, it is a way to pull data from R/3 with a selection that can be defined at the InfoPackage. For example, if you find one purchase order with incorrect data, you selectively delete that purchase order from the system and run a repair full request. You can define the same at the InfoProvider level.
    Document update - when you change a purchase order in ME22N, the document gets updated in the R/3 backend tables EKKO, EKPO, etc. A statistics update is from the R/3 reporting-table perspective; these are the LIS tables.

  • Can I use Serialization if I have to update my class files?

    Hi,
    I'm writing a game right now, and I'm using serialization to implement saving maps in it. It's saving me a whole bunch of time right now, but I'm worried that if I make even the slightest adjustment to my class files, old saved maps won't work anymore.
    Is there any way to work around this?
    Thank you for any suggestions.
    -Cuppo

    Yes there is; the description of all that can be found in the API docs for the
    Serializable interface. In short: you have to write/read the members of
    your classes yourself by implementing two special methods; that's all.

    You don't even have to do that. As long as you provide a serialVersionUID
    member and obey the versioning constraints in the Serialization specification,
    you don't have to do any extra programming.

    You're right; your scenario is even simpler. I discovered something funny though:
    my 1.4.2 API docs lack all documentation of the serialVersionUID final member in
    the description of the Serializable interface. It is present in the 1.5 docs, strange ...
    kind regards,
    Jos
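
    As a minimal sketch of the second suggestion (with a made-up GameMap class), declaring a fixed serialVersionUID lets newer versions of the class read maps saved by older, compatible versions:

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical saved-map class; the explicit serialVersionUID keeps old save files
    // readable after compatible changes (such as adding a field), because the JVM
    // no longer recomputes the UID from the class's current shape.
    public class GameMap implements Serializable {
        private static final long serialVersionUID = 42L;

        private List<String> tiles = new ArrayList<>();
        // A field added in a later version simply deserializes to null/0 from old files.
        private String author;
    }

    Incompatible changes listed in the Serialization specification (such as changing a field's declared type or the class hierarchy) still break old files, so the custom writeObject/readObject approach from the first reply remains the fallback for those cases.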

  • Should I use the object returned from em.merge()

    I am still confused how to use em.merge() correctly. I need to know if I should use the result of em.merge().
    PersistentObject obj = new PersistentObject();
    obj.setSomething("Value#1");
    // Should I do this and continue to use the object returned from merge().
    obj = em.merge(obj);
    obj.setSomething("Value#2");

    Chris,
    First, thanks for your and your team's awesome support.
    The reason I always get confused with merge() is my misunderstanding of how clones and caching are handled. Let me give you two scenarios; could you tell me what the problems are, if any?
    Scenario #1: (using the return from merge)
    -- Start of Transaction #1
    EntityObject obj = new EntityObject();
    obj.setName("chris");
    obj = em.merge(obj);
    -- End of Transaction #1
    -- Start of Transaction #2
    obj.setName("scott");
    obj = em.merge(obj);
    -- End of Transaction #2
    Is this ok, will I lose data, are the clones and cache ok?
    Scenario #2: (NOT using the return from merge)
    -- Start of Transaction #1
    EntityObject obj = new EntityObject();
    obj.setName("chris");
    em.merge(obj);
    -- End of Transaction #1
    -- Start of Transaction #2
    obj.setName("scott");
    em.merge(obj);
    -- End of Transaction #2
    Is this ok?
    These two scenarios differ only in that one uses the return value from merge and one does not. Which one is correct, and what would the problem be with the wrong one?
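
    The usual guidance here, sketched below with the hypothetical EntityObject entity from the question and transaction handling elided, is that merge() copies the state of the passed-in instance into a managed instance and returns that managed copy; later changes should therefore be made on the returned object (Scenario #1), while changes on the original detached instance (Scenario #2) are only picked up if merge() is called on it again before commit.

    // A minimal sketch, assuming an EntityManager em and an open transaction.
    EntityObject detached = new EntityObject();
    detached.setName("chris");

    // merge() returns the managed copy; 'detached' itself stays unmanaged.
    EntityObject managed = em.merge(detached);

    // Changes on the managed copy are tracked and flushed at commit...
    managed.setName("scott");

    // ...whereas changes on 'detached' are invisible to the persistence context
    // unless it is merged again:
    detached.setName("ignored unless merged again");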

  • JNDI and serializable objects

    Hi,
    I am attempting to create a cache on the web server to store
    frequently accessed reference data. I do so by running a series of
    queries in a startup class. The data retrieved from the result set of
    each query is stored in a custom class and bound to the server context.
    My understanding is that when serializable objects are bound, they are
    written to disk (so as not to waste valuable heap space, I assume). As a
    test I made the startup query return a large amount of data and I
    expected to see the amount of free disk space decrease as the objects
    were bound however I did not see this occurring. I examined the memory
    usage of the java process with the NT task manager, and the memory usage
    was increasing pretty dramatically as the query results were performed
    and new objects created to store this data.
    Based on these observations I assume that the objects I created and
    bound are stored in the Java Heap and not written to disk. Would
    Weblogic at some point write these to disk if memory became tight or is
    my understanding that binding an object serializes it incorrect?
    If it turns out that what I am attempting here consumes a lot of heap
    space, I assume that server performance will suffer which is
    unacceptable. Would using read-only entity beans be a better solution?
    The container could manage this memory more effectively but it would
    seem to add a lot of overhead for a simple read only data cache.
    Thanks,
    Steve Snodgrass

    We never write JNDI data to disk. So if you add more objects, you are
    going to take up more heap space. You should be looking at other
    alternatives to implement this.
    -- Prasad
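
    For reference, "binding" in this discussion just means the plain JNDI API sketched below; nothing in it implies the object is written to disk. The name "cache/referenceData" and the List payload are made up for this sketch.

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import java.util.List;

    // Illustrative only: bind a (serializable) reference-data object into the server's
    // JNDI tree at startup and look it up later from application code.
    public class ReferenceDataBinder {
        public static void bind(List<String> referenceData) throws NamingException {
            Context ctx = new InitialContext();
            ctx.rebind("cache/referenceData", referenceData);
        }

        @SuppressWarnings("unchecked")
        public static List<String> lookup() throws NamingException {
            Context ctx = new InitialContext();
            return (List<String>) ctx.lookup("cache/referenceData");
        }
    }

    As the reply notes, on this server the bound object lives on the heap, so a large cache bound this way costs heap memory just like any other in-memory structure.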

  • A problem using serialization and/or not overwritten variables

    I have a problem when writing objects to an ObjectOutputStream:
    Here is a simplified version of the program :
    class InDData implements Serializable {
        private Vector shapeVector = new Vector();

        public InDData(Vector shapeV) {
            this.shapeVector = shapeV;
        }

        public int getShapeVectorSize() {
            return this.shapeVector.size();
        }
    }

    class InDShape implements Serializable {
        private Vector points = new Vector();
    }

    // client side
    ObjectOutputStream p = new ObjectOutputStream(new BufferedOutputStream(connection.getOutputStream()));
    InDData objectData = (InDData) vectorObjectsToBeSentThroughNetwork.remove(0);
    System.out.println(objectData.getShapeVectorSize()); // print 1
    p.writeObject(objectData);
    p.flush();

    // server side
    ObjectInputStream in = new ObjectInputStream(new BufferedInputStream(connection.getInputStream()));
    Object oTemp = in.readObject();
    if (oTemp instanceof InDData) {
        InDData objectData2 = (InDData) oTemp;
        System.out.println(objectData2.getShapeVectorSize()); // print 2
    }
    Some explanations before the main dish :)
    I am writing a client that allows you to draw a figure and send it over the network. The drawing is composed of shapes, and each shape (class InDShape) is composed of points. For the drawing to be sent over the network, I add the shapeVector (== drawing) to the class named InDData (this class allows me to add some more information about the client and the object sent, not shown here) and then I write the InDData object to the ObjectOutputStream.
    Before writing InDData to the ObjectOutputStream, I test whether it has a good shapeVector by drawing the shapeVector on the screen. This always shows the same copy as the last drawn panel.
    We suppose that the drawing is sent to the network after each drawn shape
    (mousePressed -> mousseDragged -> mousseReleased)
    (<------------------------------- shape ------------------------------->)
    now the problem ;)
    When i start drawing, the first shape is sent through the network without any problem.
    As soon as i add a second shape to the drawing (shapeVector.size() == 2) things get weird.
    The drawing sent to the network is made only of the first shape, nothing more.
         output of program after the 2nd shape was drawn
         client print 1 : size is 2
         server print 2 : size is 1
    Alright, seems like the shapeVector is truncated...
    Now I tried something else to see whether it's only the Vector that is truncated or something else.
    After adding a second shape to the drawing, I delete the first shape from it:
         representation of the shapeVector:
         ([shape1])
         ([shape1][shape2]) // added the 2nd shape
         ([shape2]) // deleted the first shape
         ([shape2][shape3]) // added a third shape. Vector sent to the network via InDData
         output of program with the vector shown above
         client print 1 : size is 2
         server print 2 : size is 1
    Additionally, you might expect me to say that the first element of the shapeVector inside both InDData instances (client and server) is the same, but unfortunately it is not.
    The shapeVector received by the server via InDData is the same as when I drew the first shape :((
    Here is the problem (!) :(
    I think that I have a variable that is not overwritten somewhere, but I can't see where, because:
    objectData is overwritten each time a message is sent to the server and has the correct values inside.
    objectData2 is overwritten each time a message is received from clients.
    Sorry for the huge post, but I believe the explanations are necessary ;)
    I am using the 1.4.2 JVM (not tested on others) with Xcode (Apple PowerBook G4 12").
    Thank you all :)

    Update :)
    In trying to keep my program "simple" I forgot an important point on the client side:
    // client side
    ObjectOutputStream p = new ObjectOutputStream(new BufferedOutputStream(connection.getOutputStream()));
    while (connectionNotEnded) {
        synchronized (waitingVector) {
            try {
                // the only purpose of this wait() is to block the thread until it is
                // notified/interrupted to send the next InDData
                waitingVector.wait();
            } catch (InterruptedException ie) {
                System.out.println("Thread interrupted");
            }
        }
        InDData objectData = (InDData) vectorObjectsToBeSentThroughNetwork.remove(0);
        System.out.println(objectData.getShapeVectorSize()); // print 1
        p.writeObject(objectData);
        p.flush();
    }
    I need it to explain the solution of my problem.
    When I create a client, a thread is created with the above code. It creates the ObjectOutputStream and then waits patiently until it is told to proceed (via an interrupt).
    I do not close the ObjectOutputStream during the program's running time.
    So whenever I write an object to the stream, the stream "sees" whether the object was written before. The ObjectOutputStream keeps a kind of memory of previously written objects.
    So when I send the first InDData, the ObjectOutputStream's memory is "empty", hence the correct sending (and serialization) of InDData.
    But whenever I write another object of the same type InDData containing approximately the same data (shapeVector), the ObjectOutputStream consults its "memory" and looks for the object among those already written. And in my case it finds it! That's why, whatever I put in the shapeVector, what arrives over the network is always the first shapeVector that was sent. (I gather the stream identifies previously written objects by reference, so an object that has merely been mutated is sent as a back-reference to its earlier contents rather than re-serialized.)
    I tried the different ObjectOutputStream writing methods:
    instead of p.writeObject(objectData) I put p.writeUnshared(objectData).
    But as it is said in the docs : " While writing an object via writeUnshared does not in itself guarantee a unique reference to the object when it is deserialized, it allows a single object to be defined multiple times in a stream, so that multiple calls to readUnshared by the receiver will not conflict. Note that the rules described above only apply to the base-level object written with writeUnshared, and not to any transitively referenced sub-objects in the object graph to be serialized."
    And that is exactly my case !
    So I had to take it to the next level :)
    Instead of trying to make each written object unique, I simply reset the stream each time it is flushed. That allows me to keep the stream open and as fresh as new ;) I assume the cost of resetting the stream is higher than writeUnshared but lower than closing and creating a new stream each time, otherwise it would not have been implemented ;)
    Here is the final code for the client side; the server side remains unchanged:
    // client side
    ObjectOutputStream p = new ObjectOutputStream(new BufferedOutputStream(connection.getOutputStream()));
    while (connectionNotEnded) {
        synchronized (waitingVector) {
            try {
                // block until notified/interrupted to send the next InDData
                waitingVector.wait();
            } catch (InterruptedException ie) {
                System.out.println("Thread interrupted");
            }
        }
        InDData objectData = (InDData) vectorObjectsToBeSentThroughNetwork.remove(0);
        System.out.println(objectData.getShapeVectorSize()); // print 1
        p.writeObject(objectData);
        p.flush();
        p.reset();   // forget previously written objects so the next write sends fresh state
    }
    And that solves my problem :)
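
    For anyone hitting the same thing, here is a minimal, self-contained sketch (made-up list contents) of the behaviour described above: without reset(), an object that is written again comes back with its first-written state.

    import java.io.*;
    import java.util.Vector;

    public class ResetDemo {
        public static void main(String[] args) throws Exception {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);

            Vector<String> shapes = new Vector<>();
            shapes.add("shape1");
            out.writeObject(shapes);       // first write: full contents ("shape1")

            shapes.add("shape2");
            // out.reset();                // uncomment: forget the back-reference table so the
                                           // next write re-serializes the current contents
            out.writeObject(shapes);       // without reset(), only a back-reference is written
            out.flush();

            ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            System.out.println(((Vector<?>) in.readObject()).size()); // 1
            System.out.println(((Vector<?>) in.readObject()).size()); // still 1 without reset()
        }
    }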

  • Testing Object Equality using Serialization

    Hey everyone! I was wondering if somebody could help me figure out how to compare two objects using serialization.
    I have two objects that I'm trying to compare. Both of these objects extend a common "Model" class that has a method getSerialized() that returns a serialized form of an instance, shown below:
              // Serialize the object to a byte array
              ByteArrayOutputStream baos = new ByteArrayOutputStream(1000);
              ObjectOutputStream oos;
              try {
                  oos = new ObjectOutputStream(baos);
                  oos.writeObject(this);
                  oos.close();
              } catch (IOException e) {
                  e.printStackTrace();
              }
              // Return the serialized form as a byte array
              return baos.toByteArray();
    This Model class also has an equals(Model obj) method that allows the current model object to be compared to a model object that is passed in:
            // Store both models' serialized forms in byte arrays
            byte[] thisClass = this.getSerialized();
            byte[] otherClass = obj.getSerialized();
    This is where things get a little funny. The byte arrays aren't equal - one array is a byte larger than the other. If a byte-by-byte comparison is done, the arrays are equal for the first 15-20% and then differ for the rest. If I deserialize the byte arrays back into Models and do a toString() on those models, I find that they are equal.
    I have a feeling there's something about the serialization process that I don't fully comprehend. Is there a way to properly implement object comparison using serialization?
    Thanks in advance!

    When you serialize an object, you also serialize the entire tree of references based on that object (except for transient variables). That tree is the complicated business you described there. Serialization stores all the objects in the tree, along with data that explains which objects refer to which other objects. Furthermore if the tree is actually a graph, and there are multiple ways to get to an object, it still only stores each object once. I don't see any reason to believe that all that relationship data would be encoded identically for a pair of trees that you deemed to be equal. And your experiment shows that indeed it isn't.
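
    The answer above explains why comparing the raw byte arrays is unreliable; a more conventional route, sketched below for a hypothetical Model with two fields, is to override equals() (and hashCode()) to compare the fields you actually care about instead of comparing serialized forms.

    import java.util.Objects;

    // Hypothetical Model with two fields; field-by-field comparison avoids depending on
    // the byte layout the serialization machinery happens to produce.
    public class Model {
        private String name;
        private int version;

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Model)) return false;
            Model other = (Model) o;
            return version == other.version && Objects.equals(name, other.name);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, version);
        }
    }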

  • Retrieving more than one object from an object stream using serialization

    I have written a number of objects into a file using an ObjectOutputStream. Each object was added separately with a button event. Now when I retrieve them, only one object is displayed; the others are not. The code for inserting and retrieval is given below. Please do help me.
    Code for inserting is as follows (veh1 is a Vehicle, and the Vehicle class implements Serializable):
    veh1.vehNum=tf1.getText();
              veh1.vehMake=tf2.getText();
              veh1.vehModel=tf3.getText();
              veh1.driveClass=tf4.getText();
              veh1.vehCapacity=tf5.getText();          
              FileOutputStream out = new FileOutputStream("vehicle.txt",true);
              ObjectOutputStream s = new ObjectOutputStream(out);
              s.writeObject(veh1);
    retrieval
    FileInputStream out = new FileInputStream("vehicle.txt");
              String str1,str2;
              str1=str2=" ";
              Vehicle veh=new Vehicle();
              ObjectInputStream s = new ObjectInputStream(out);
              try {
                  Vehicle veh1 = (Vehicle) s.readObject();
                  s.close();
                  int i = 0;
                  str1 = veh1.vehNum;
                  str2 += str1 + "\t";
                  str1 = veh1.vehMake;
                  str2 += str1 + "\t";
                  str1 = veh1.vehModel;
                  str2 += str1 + "\t";
                  str1 = veh1.driveClass;
                  str2 += str1 + "\t";
                  str1 = veh1.vehCapacity;
                  str2 += str1 + "\t\n";
                  ta1.append(str2);
              } catch (Exception e) {
                  e.printStackTrace();
              }
    Please give me the code for moving through the objects until it reaches the end of the file.

    You can read objects from the stream one by one. So, what you need is an endless loop like this:
    // Suppose you have an ObjectInputStream called objIn.
    // Here is the loop which reads objects from the stream:
    Object inObj;
    while (true) {
        try {
            inObj = objIn.readObject();
            // Do something with the object we got
            parse_the_object(inObj);
        } catch (EOFException ex) {
            // EOFException is thrown when we reach the end of the file,
            // so here we break out of our lovely infinite cycle
            break;
        } catch (Exception ex) {
            ex.printStackTrace();
            // Here you may decide what to do...
            // Probably the processing will end here, too. For now,
            // we keep going, hoping there is still something to read
        }
    }
    objIn.close();
    // ...

  • Is there a way to create an object using Trapcode Form and then use that object as a particle in Trapcode Particular?

    I made some objects in Cinema 4D and I tried to import them into After Effects CC 2014.  I get an error saying 'Cannot find Adobe Premiere Pro Dynamic Link'.  So I tried it in AE CC and that worked.  However, I want to create a form object and then use that as a particle for Trapcode Particular.  Is this even possible?
    Thank you for any help!

    There are also size limitations or rather suggestions for particle size. If I were creating an animated 3D shape using Form or C4D and wanted to use it as a particle I would keep the size of the particle about 1/6 to 1/8 of the comp size. You would create a new comp for your particle, animate it, then nest the particle comp in your main comp, turn off visibility, and then use it as a particle in Particular.

  • Implicit and explicit Type conversion using Type object in heap

    Hi,
    I am surprised at how implicit and explicit type conversion works using the Type object in the heap. For example, when an implicit type conversion occurs, what pointer to the object does it return, and similarly for an explicit type conversion?

    Hello,
    >> I am surprised how Implicit and explicit Type conversion works using Type object in heap.
    For implicit conversions: typical examples are conversions from smaller to larger integral types, and conversions from derived classes to base classes. For the first kind, the reference would be different, which means a different pointer to a new object is returned. For the reference-type case, it actually points to the same memory location; you can use object.ReferenceEquals() to check this.
    For explicit conversions (casts): typical examples include numeric conversion to a type that has less precision or a smaller range, and conversion of a base-class instance to a derived class. The first kind behaves the same as the implicit conversions above. As for converting a base-class instance to a derived class, there is actually no built-in way to do that conversion.
    Regards.

  • I am trying to create a simple animated gif in Photoshop. I've set up my frames and want to use the tween to make the transitions less jerky. When I tween between frame 1 and frame 2 the object in frame two goes out of position, appearing in a different place than where it is on frame 2

    I am trying to create a simple animated gif in Photoshop. I've set up my frames and want to use the tween to make the transitions less jerky. When I tween between frame 1 and frame 2 the object in frame two goes out of position, appearing in a different place than where it is on frame 2. Confused!

    Hi Melissa - thanks for your interest. Here's the first frame, the second frame, and the tween frame. I don't understand why the tween is changing the position of the object in frame 2; I was expecting it to just fade from one frame to the next.

  • Re: [SunONE-JATO] Re: Using an object to store and display data

    Personally, I think there is little or no value to creating a "domain"
    object that itself relies on a JATO QueryModel internally, but hides that
    fact and requires use of BeanAdapterModel.
    It would be more appropriate (and much less work, and more scalable) to just
    derive a QueryModel subclass and add the domain-specific behavior to the
    model. In other words, what's the point of creating an object that hides
    JATO inside it when you're running in JATO to begin with? Now if the domain
    object were doing plain JDBC, and thus trying to be JATO independent, that
    would be different. However, you could still implement the Model interface
    on the object (or use BeanAdapterModel) to integrate it seamlessly with the
    View tier.
    Todd
    ----- Original Message -----
    From: "grschroeder" <grschroeder@y...>
    Sent: Wednesday, July 31, 2002 12:00 PM
    Subject: [SunONE-JATO] Re: Using an object to store and display data
    Craig,
    I think it all finally makes sense. First, your assumption is
    correct regarding the process flow. The ViewBean will interact with
    a custom Javabean which will then in turn interact with a SQL Model
    to access the database. So now let me make sure I understand what I
    need to do. Basically the custom Javabean will have a method to get
    the SQLModel. I would then invoke the setValue method on the
    SQLModel and call the appropriate execute method( e.g.,
    executeUpdate, etc. ) just like I would do from a ViewBean. Does
    this sound correct?
    Thanks,
    Greg
    --- In SunONE-JATO@y..., "Craig V. Conover" <craig.conover@s...>
    wrote:
    Greg,
    see below...
    grschroeder wrote:
    Thanks for the help Craig. I looked at the sample code that makes
    use of the BeanAdapterModel. Basically it looks like it allows a
    view to interact with a bean the same way it would interact with any
    other model. That part I think I understand.
    This is correct.
    The part I'm a little unclear on still is how to interface this
    BeanAdapterModel (which is a very basic model) that I now have with a
    query model to actually interact with the database.
    Not sure what you mean by "interface this BeanAdapterModel ... with a
    query model". Does this mean that you have a ViewBean that interacts with
    a custom JavaBean via the BeanAdapterModel wrapper, and from the
    JavaBean you are interacting with the SQL Model?
    So it looks like this: ViewBean > BeanAdapterModel (custom JavaBean) >
    SQL Model > RDBMS
    That's what I am reading anyway. Please explain this to me.
    Would I need to make direct JDBC calls
    from my custom model instead of letting JATO dynamically create the
    calls for me?
    If my assumptions above are correct, then the custom JavaBean can simply
    use the SQL Model in exactly the same manner as the ViewBean; otherwise,
    if no SQL Model is involved, then yes, you need to handle JDBC directly,
    which is fine, if you do it "right" (connection pooling, result set
    handling, etc.).
    Thanks,
    Greg
    --- In SunONE-JATO@y..., "Craig V. Conover" <craig.conover@s...>
    wrote:
    I think the best approach would be to treat your Domain Objects (DO) as
    the Database (the enterprise tier) from JATO's perspective. You could
    create custom models that interface with your DO's and then the JATO
    Views could easily bind to the custom DO models just like any other
    model. This should eliminate the need for pushing data/objects from
    view to model to database.
    There is a JATO class called BeanAdapterModel that might be just what
    you need; however, I am not experienced with using it. Maybe someone
    else on my team or in the community could better explain how to use
    this class.
    craig
    grschroeder wrote:
    Venki,
    Thanks for the response. Actually, I'm not sure if I can answer all
    of your questions because those are some of the same questions that
    we're trying to answer ourselves. Basically, what we're trying to do
    is incorporate our domain object model into the JATO framework. My
    thinking was that one way we could accomplish this was by storing a
    Javabean object in the HTTPSession to represent an object in the
    domain model, and that the Viewbean and JATO Model could get and set
    data from there. If you have a better suggestion of how to
    accomplish this, I'm definitely open to hearing it.
    Thanks,
    Greg
    --- In SunONE-JATO@y..., Venki <heyvenki@y...> wrote:
    OK grschroeder, first let me get this straight:
    0. Are you going to pass this object to the model, or are you going
    to pass the session object to the model?
    1. Is there any specific reason for this approach you are taking?
    2. Will there be a notification from the object you have mentioned to
    have the model persist the data?
    3. What about the reverse case: when the model refreshes the data,
    how are you going to refresh your object?
    4. The JavaBean object that you are talking about, is it your own
    object or is it an implementation of an existing JATO interface?
    ~Venki
    grschroeder wrote:
    I'm fairly new to JATO, but I think I now have a basic understanding
    of how Viewbeans and Models interact. However, I want to incorporate
    the use of a Javabean object that will be stored in the HTTP session
    and will act as the connection between the Viewbean and Model instead
    of having the Viewbean and Model interact directly with each other.
    Basically I would like to have the Model set data in the object
    stored in the session so that the Viewbean can pull it out to display
    it. And vice versa, once the user modifies the data and is ready to
    persist it, I would like to have the Viewbean set data in the object
    stored in the session so that the Model can pull it out to store it
    in the database. I'm not sure what the best approach would be to
    accomplish this. Any help you could give would be greatly
    appreciated.
    Thanks,
    Greg
    Venki
    IT Solutions
    #6, Pycrofts Garden Road, Nugambakkam, Chennai - 600 006
    91-44-4925740(Home) 91-44-8212877(Work)
    * Luck is what happens when Preparation meets Opportunity.
    To download the latest version of JATO, please visit:
    http://www.sun.com/software/download/developer/5102.html
    For more information about JATO, please visit:
    http://developer.iplanet.com/tech/appserver/framework/index.jsp


  • Why and how to use events in ABAP Objects

    Dear all,
      Please explain to me why and how to use events in ABAP Objects, with a real-time example.
    Regards,
    Pankaj Giri

    Hi Pankaj,
    I will try to explain why to use events... How to use is a different topic.. which others have already answered...
    This is same from your prev. post...
    Events :
    Technically speaking :
    " Events are notifications an object receives from, or transmits to, other objects or applications. Events allow objects to perform actions whenever a specific occurrence takes place. Microsoft Windows is an event-driven operating system, events can come from other objects, applications, or user input such as mouse clicks or key presses. "
    Let's say you have an ALV - an editable one...
    Let's say that once you press some button, you want some kind of validation to be done.
    How to do this?
    Raise an event, which is handled by a method, and write the validation code there.
    Now you might argue that you could do it this way: capture the function code and call the validate method.
    Yes, in this case it can be done. But let's say you change a field in the ALV and you want the validation to be done as soon as you are done typing.
    Where is the function code here? There is no function code... But there is an event here - the data changed event.
    So you can raise a data changed event that can be handled and will do the validation.
    It is not user friendly to ask the user to press a button (to get the function code) for validation each time they enter data.
    Events can be raised by the system, or by a program. So in this case the data changed event is raised by the system, and you can handle it.
    Also, let's say on a particular action you want some code to trigger (you can take the same example of the validation code). In this case the code to trigger is in a separate class, the object of which is not available here at this moment. (This case happens very frequently.)
    Advantage of events: event handlers can be in a separate class.
    E.g.: in the middle of some business logic you encounter an error. You want to send this information to the UI (to the user, in the form of a pop-up) and then continue with some processing.
    In many cases a direct method call to trigger the pop-up is not done, because (ideally) the engine must not interact with the UI directly - the UI could be some other application, like a Windows UI, while the error comes from an SAP program.
    So an event is raised from the engine, it is handled in the UI, and a pop-up is triggered.
    Here I would have different classes (let's say for different operating systems), and all these classes must register for the ERROR event raised in the application.
    And these different classes for different operating systems will have different code to raise a pop-up.
    Now you can imagine: if you coded a pop-up for Windows (in your application logic), it will not work for Mac or Linux. But if you raise an event that is handled separately by different UI classes for Windows, Linux or Mac, they will catch this event and process it accordingly.
    Maybe I complicated this explanation, but I couldn't think of a simpler, concrete example.
    Cheers.
    Varun.
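
    The thread is about ABAP Objects, but the underlying idea is plain observer-style eventing. As a rough illustration (sketched here in Java with made-up class names, not ABAP syntax), the error-pop-up example above boils down to the engine raising an event and separately registered UI handlers reacting to it:

    import java.util.ArrayList;
    import java.util.List;

    // Handlers register for the "error" event; the engine knows nothing about them.
    interface ErrorHandler {
        void onError(String message);
    }

    class Engine {
        private final List<ErrorHandler> handlers = new ArrayList<>();

        void register(ErrorHandler h) { handlers.add(h); }

        void doBusinessLogic() {
            // ... some processing ...
            raiseError("Posting failed");   // raise the event instead of calling a UI directly
            // ... processing continues ...
        }

        private void raiseError(String message) {
            for (ErrorHandler h : handlers) h.onError(message);
        }
    }

    class WindowsUi implements ErrorHandler {
        public void onError(String message) { System.out.println("[Windows pop-up] " + message); }
    }

    class LinuxUi implements ErrorHandler {
        public void onError(String message) { System.out.println("[Linux dialog] " + message); }
    }

    public class EventDemo {
        public static void main(String[] args) {
            Engine engine = new Engine();
            engine.register(new WindowsUi());   // swap in whichever UI class applies
            engine.doBusinessLogic();
        }
    }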

  • How to extract users' permissions on files and folders in SharePoint 2010 using the client object model

    How to extract users' permissions on files and folders in SharePoint 2010 using the client object model?

    Hello,
    This is sample code to get item-level permissions (just written in Notepad, so it is not tested):
    public void ItemLevelPermission()
    {
        // Use the Id of the file or folder item.
        ListItem curItem = ctx.Web.Lists.GetByTitle("LibraryName").GetItemById(itemId);
        SecurableObject curObj = curItem;   // a ListItem is a SecurableObject

        IEnumerable roles = ctx.LoadQuery(
            curObj.RoleAssignments.Include(
                roleAsg => roleAsg.Member,
                roleAsg => roleAsg.RoleDefinitionBindings.Include(
                    roleDef => roleDef.Name,        // for each role definition, include its Name
                    roleDef => roleDef.Description)));
        ctx.ExecuteQuery();
    }
    Hope it helps.
    Hemendra
