Transactions documentation and a difficult(?) use-case...

I would like detailed information about how TransactionMap works with different isolation levels, i.e. how changes performed both by the application holding the map and in the distributed cache are propagated between them.
More detailed information about TransactionMap.Validator would also be very much appreciated.
We also have one specific "use case" I would like advice about - it goes like this:
We use one type of main object that has a very tight coupling to a varying number (0 to a few hundred in the extreme case) small detail objects. All the detail objects are always required as soon as the main object is used. A given detail object is never referenced from more than one main object. We have (for performance reasons) decided to treat the detail objects as "part of" the main object. The main objects are stored in the cache.
Users can make changes to the main objects themselves or to their detail objects. A user should be able to perform many changes to many main objects (and their detail objects) and "commit" them all at once by pressing a button.
Now to the problem:
We would like to allow users to make "non-conflicting" changes to a main object's detail objects - i.e. if two users have changed different detail objects, we want to merge the changes instead of refusing the modification at commit. To be able to do this, we intend to keep version numbers not only on the main object but also on the detail objects.
We would like to use "transactions" to handle the requirement that all of a user's changes should be "committed" at once - either all introduced or none at all (in the event of a hardware failure during the update, for instance!). But the default behavior of Transaction, as I understand it (so far I have only read about it, not played around with it much!), is to compare the "whole object" for equality in the prepare (and commit?) steps. We also need exact information about WHICH object(s) were concurrently modified when a commit can't be performed, allowing the user to "refresh" only the relevant detail objects and retry committing his changes.
How would we be able to implement our "use case" in a good and reasonably efficient way given Coherence's features? Would it, for instance, be possible (with reasonable effort) to create our own transaction validation that could merge "non-conflicting" changes to the same object, and if so, how should we go about it?
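To make the merge idea concrete, here is a rough sketch in plain Java of the per-detail version check we have in mind - all names are hypothetical and nothing here is Coherence-specific:

```java
import java.util.*;

// Hypothetical sketch of per-detail conflict detection: two edits conflict
// only if they touch a detail whose stored version no longer matches the
// version that was read when the edit started.
public class DetailMerge {
    // Returns the ids of details that were modified concurrently, i.e. details
    // we changed whose stored version differs from the version we read.
    // An empty result means our changes can be merged in safely.
    public static Set<String> conflictingDetails(Map<String, Integer> readVersions,
                                                 Map<String, Integer> storedVersions,
                                                 Set<String> changedByUs) {
        Set<String> conflicts = new TreeSet<>();
        for (String id : changedByUs) {
            Integer read = readVersions.get(id);
            Integer stored = storedVersions.get(id);
            if (!Objects.equals(read, stored)) {
                conflicts.add(id); // someone else bumped this detail's version
            }
        }
        return conflicts;
    }
}
```

The returned set is exactly the "WHAT was concurrently modified" information mentioned above: the user would refresh only those details and retry.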
Best Regards
Magnus

Hi Magnus,
Our entry processor functionality is your best solution, but unfortunately it is not fully supported within a transactional context.
I would suggest using a combination of explicit locking (as opposed to implicit transactions) and our entry processor functionality (new in 3.1).
Using explicit locking, you can enforce atomic access to cache entries. Using the entry processor you can perform partial updates locally on the server (allowing you to send only changes).
So the sequence would be:
* lock all "main objects"
* if necessary, validate the main objects (see below)
* use entry processors to perform "delta updates" against those main objects
* unlock the main objects
The locking is only required for atomicity (ensuring that updates don't overlap), and it does require that all modifiers follow the same locking pattern. You can either design your objects so that you know the delta updates will complete successfully, or verify that the updates will succeed before actually executing them.
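A minimal sketch of that sequence, using a plain map with per-key locks standing in for a real NamedCache (the class and method names here are illustrative assumptions, not Coherence API - in Coherence you would use NamedCache.lock()/unlock() and an entry processor instead):

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.locks.*;
import java.util.function.*;

// Sketch of: lock all main objects -> validate -> delta update -> unlock.
// A ConcurrentMap with per-key locks simulates the cache so the sequence
// itself is runnable; it is not a Coherence implementation.
public class DeltaUpdateSketch {
    private final ConcurrentMap<String, Map<String, String>> cache = new ConcurrentHashMap<>();
    private final ConcurrentMap<String, Lock> locks = new ConcurrentHashMap<>();

    private Lock lockFor(String key) {
        return locks.computeIfAbsent(key, k -> new ReentrantLock());
    }

    // Applies one delta per main object, all under locks, so the updates
    // either all run or (if validation fails) none do.
    public boolean commit(Map<String, UnaryOperator<Map<String, String>>> deltas) {
        List<String> keys = new ArrayList<>(deltas.keySet());
        Collections.sort(keys);               // fixed lock order avoids deadlock
        keys.forEach(k -> lockFor(k).lock()); // * lock all "main objects"
        try {
            for (String k : keys) {           // * validate before updating
                if (!cache.containsKey(k)) return false;
            }
            for (String k : keys) {           // * perform the "delta updates"
                cache.compute(k, (key, v) -> deltas.get(key).apply(v));
            }
            return true;
        } finally {
            keys.forEach(k -> lockFor(k).unlock()); // * unlock the main objects
        }
    }

    public void put(String key, Map<String, String> value) { cache.put(key, new HashMap<>(value)); }
    public Map<String, String> get(String key) { return cache.get(key); }
}
```

Note that all modifiers must go through commit(); a writer that bypasses the locks breaks the atomicity guarantee, which is the same caveat as with explicit locking in the cache.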
Jon Purdy
Tangosol, Inc.

Similar Messages

  • What is WAN compression on AOS 6.4.3.0 and its use case?

    Q: What is WAN compression on AOS 6.4.3.0 and its use case?
    A: This feature is supported on AOS 6.4.3.0 and above.
    The 7000 Series controllers contain a Compression/Decompression Engine (CDE) that compresses raw IP payload data and decompresses compressed payload data.
    Deflation & Inflation
    The CDE compression process is called Deflation; the decompression process is called Inflation.
    The XLP 4xx and XLP 2xx Packet Processor Card is a high-performance network processor PCI Express card designed for use in PCI Express compliant systems.
    It features the latest Broadcom XLP 4xx series processor with up to 2.5 Gbps per CDE.
    This processor is well suited to both data-plane applications, which are inherently sensitive to memory latencies, and control-plane applications, where it delivers best-in-class processing performance.
    Advantages
     Four CDE channels on the XLP 4xx processor and one CDE channel on the XLP 2xx processor
     2.5 Gbps per CDE (Deflation, Inflation, or a combination of both)
     Deflation context save and restore (at block boundaries)
     Inflation context save and restore (at arbitrary file position)
     Load balancing of input messages across all CDEs
    The Compression/Decompression Engine feature is enabled by default. However, the packets are compressed only if the IP Payload Compression Protocol (IPComp) is successfully negotiated via the Internet Key Exchange (IKE) protocol.
    Use-case
    Data compression reduces the size of data frames transmitted over a network link, thereby reducing the time required to transmit each frame across the network. IP payload compression is one of the key features of the WAN bandwidth optimization solution, which comprises the following elements:
    IP Payload Compression
    Traffic Management and QoS
    Caching
    You can split a file or data into blocks, and each block can use the mode of compression that suits it best. In this case, it is packet data and there will be only one block.
    Recommendation
    A BOC (branch office controller) can have traffic to destinations other than HQ on the same link; the preferred method is to enable payload compression on the IPsec tunnel between the branch controller and the master controller.
    IP payload compression needs to be enabled only between Aruba devices.
    Notes
    When this hardware-based compression feature is enabled, the quality of unencrypted traffic (such as Lync or Voice traffic) is not compromised through increased latency or decreased throughput.


  • What transaction code and entries do we use to post intercompany transactions

    Hi,
    I know OBYA is used to configure intercompany postings.
    1. Can you explain what accounting entries we post - an example?
    2. What t-code do we use to post the intercompany transaction?
    3. Please email any relevant documents to [email protected]
    Thanks,
    Kiranmayi

    We have 2 theories after showing the exception trace to folks who are more adept at managed code.
    The first is related to the fact that our 3rd party DLLs (I think Entity Framework is included in these) are older versions. I don't want to discount this theory, but we already have some evidence that it might not be true.
    I hope I can do justice to the 2nd theory, but will make it clearer and clearer as I get a better understanding. I believe this is what Arthur was saying, and I applaud his instincts. They explained that .NET is so "smart" that it detected a change in namespace (i.e. context, as Arthur said) and purposely threw an exception to save us from ourselves. The workarounds discussed were a bit over my head, but I will continue trying to better understand them. The fact that many of the methods we call after reflection are now merged into one assembly seemed relevant to the discussion and possible workarounds.
    This link came up in their discussion, and I believe the bottom line is that by qualifying assembly names further (in config?), a workaround is possible.
    http://msdn.microsoft.com/en-us/library/system.type.assemblyqualifiedname(v=vs.110).aspx
    This link came up as well and has something to do with ILMerge and workarounds to ILMerge.
    http://elegantcode.com/2011/04/02/dynamically-load-embedded-assemblies-because-ilmerge-appeared-to-be-out/
    Finally, this link came up and seems to have something to do with embedding your DLLs in one assembly without them losing their identity.
    http://blogs.msdn.com/b/microsoft_press/archive/2010/02/03/jeffrey-richter-excerpt-2-from-clr-via-c-third-edition.aspx
    I'll post more here as we muddle through this.

  • How many use cases

    This is an excerpt from "UML Distilled" by Martin Fowler & Kendall Scott (page 47):
    "How many use cases should you have? During a recent OOPSLA panel discussion, several use case experts said that for a 10-person-year project, they would expect around a dozen use cases. These are base use cases; each use case would have many scenarios and many variant use cases. I've also seen projects of similar size with more than a hundred separate use cases. (If you count the variant use cases for a dozen use cases, the numbers end up about the same.) As ever, use what works for you."
    EXCERPT END
    So far, my understanding of a variant use case is that it represents functionality that is repeated across many use cases. The above excerpt does not seem to fit with this understanding.
    During the initial phases of a project, if a high-level use case representation has to be conveyed to a domain expert, making use cases that bundle a great deal of similar requirements is a good idea, as the whole system can be represented in a very simple manner.
    For example, a use case like "manage accounts" can be further subdivided into open, deposit, withdraw, and close account in later stages of development. But the issue is: could these be called use case variants? And how are these to be correlated with the original use case "manage accounts"?
    thanking you,
    sprasad

    Be careful with RUP - it's very documentation centric, and I have yet to see a case where all of the required documentation is useful to the successful development of the project.
    The hierarchical decomposition of a problem domain is best illustrated through dual mechanisms - both the Use Case diagrams and the actual Use Case documentation. The diagrams can be developed in a hierarchical fashion (i.e. a single use case on a high-level diagram decomposes into several more detailed use cases on a separate diagram).
    Where you stop the decomposition is really part of the art of use case analysis, and will be impacted by the methodology that your team practices. The teams that I work with practice various "Agile" methodologies (XP, Scrum, Crystal, DSDM, etc.), so what I look for in a "detailed" use case is 1) can a developer build the needed functionality within a single development iteration? and 2) What other functional areas (if any) are similar enough to encourage the development of more generic functionality to address multiple requirements. My guess is that this is equivalent to the "variant" use cases mentioned above (bear in mind I haven't read the book).
    So it really depends on how your project team works. If you are practicing a "heavy" methodology like RUP or Waterfall, where all of the analysis is done up front, it is important to define all of the use cases in advance. This doesn't change the issue of how to establish the functional requirements hierarchy, but it does change when you will put in the effort to identify this hierarchy.
    If, on the other hand, you are practicing an Agile methodology, you will still need to identify the hierarchy, but you do it in stages. For instance, at the beginning of a project I will identify the major functional needs (i.e. an accounting system needs AR, AP, journals, GL, etc.), but then I will concentrate on the detailed analysis of only one aspect of the application. From there the team will design and build it. Then we move on to the next aspect of the application. At the end you still have a detailed analysis of the application (hierarchical use case information), but you can much more readily adapt to changing business requirements, and you tend to produce a lot less meaningless documentation.

  • Business Process Management use cases within an SAP Environment

    Check out the [BPM use case wiki |http://wiki.sdn.sap.com/wiki/display/BPX/BusinessProcessManagementUseCases] to learn how many SAP customers are profoundly transforming their companies by leveraging the discipline of Business Process Management to optimize, monitor, and measure their business operations. Join us as we survey over 20 industry and cross-industry use cases where BPM methodologies and tools were applied to help align business goals with IT implementation and rapidly achieve measurable business improvements. See how other companies got started with BPM, and get ideas for how you can begin delivering business value rapidly with a BPM approach in your own company.
    Become part of this effort by providing your feedback in this forum, or add your insight and help grow the knowledge base by becoming a contributor to the BPM use case wiki - send a request to the wiki owners.

    Hi,
    Thanks a Ton for the info. Just to let you know that the link has been changed...
    Here is the new link...
    http://wiki.sdn.sap.com/wiki/display/BPMUC/BusinessProcessManagementUseCases
    Regards,
    SrinivaS

  • DBMS_LDAP package documentation and samples

    Where can I find the DBMS_LDAP package documentation and samples, to use it to connect to OID from PL/SQL blocks?
    TIA,
    Nishant

    I have been successful using the PL/SQL DBMS_LDAP utilities and enhancing the included examples to bulk-create portal users, as well as adding them to a default portal group as outlined in the DBMS_LDAP demo examples (search.sql, trigger.sql, empdata.sql).
    Using this PL/SQL trigger on the EMP table, I can add, delete or modify various user entries. However, while I can add a user to a default portal group, I have been unsuccessful in deleting a user from a group, as well as in modifying a user's default group by deleting their "uniquemember" entry from one group and adding it to another using the DBMS_LDAP procedures.
    Has anyone deleted a user from an existing group, and how is this done programmatically using the DBMS_LDAP utilities? Also, but less important, is there a way to programmatically move a user from one portal group to another?
    I don't necessarily want the code - just the method of doing this. Do I have to read in all of the 'uniquemember' attributes from the group (via DBMS_LDAP.populate_mod_array, for example), then manipulate the list and write it back? Or is there a function that will allow me to delete or modify an entry in the 'uniquemember' attribute?
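    For what it's worth, the underlying LDAP modify operation can delete a single value of 'uniquemember' directly - there is no need to read the whole list and write it back. In DBMS_LDAP the equivalent route would be populate_mod_array with MOD_DELETE followed by modify_s; the sketch below shows the same single-value delete with the JDK's JNDI API (the group and member DNs are made up):

```java
import javax.naming.directory.*;

// Sketch: an LDAP "modify: delete" carrying a concrete attribute value
// removes only that value, not the whole attribute. The directory server
// updates the member list itself.
public class RemoveMember {
    public static ModificationItem removeUniqueMember(String memberDn) {
        // REMOVE_ATTRIBUTE plus a value = delete just that one value
        return new ModificationItem(DirContext.REMOVE_ATTRIBUTE,
                new BasicAttribute("uniquemember", memberDn));
    }
}
// Usage against a live directory (ctx is a connected DirContext):
//   ctx.modifyAttributes(groupDn,
//       new ModificationItem[] { removeUniqueMember(userDn) });
```

    Moving a user between groups is then just two such modifications: a delete against the old group's DN and an add (ADD_ATTRIBUTE / MOD_ADD) against the new one.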
    Regards,
    Edward Girard

  • Checking for condition types using a CASE statement

    hi folks,
    I have a lot of condition types that I have to check for, and I am using a CASE statement to do that. The code goes like this:
    case wac-kschl.
            when 'ZRAT' OR 'ZAGR' OR 'ZRCR' OR
                  'Y098' OR 'Y007' OR 'ZREW' OR 'Y106'        OR 'ZTSR' OR 'Y127' OR 'Y125' OR 'Y126' OR 'Y124' OR 'Y157' OR 'Y092' OR 'Y085' OR 'Y090' OR 'ZMZD'
    OR 'Y215' OR 'Y214' OR 'Y111' OR 'ZC$D' OR 'ZAUD'.
    Up till here it works, but when I add a few more condition types to the CASE statement it throws an error.
    I have to check for all the condition types here.
    How can I correct it? Is there a better way to do it?
    thanks
    Santhosh

    Hi Santhosh,
    I think your CASE statement has a flaw: one of its lines is too long. You need to insert a carriage return to shorten it (or press the 'Pretty Printer' button).
    The code would look nicer like this:
    [code]
      CASE wac-kschl.
        WHEN 'ZRAT' OR 'ZAGR' OR 'ZRCR' OR 'Y098' OR 'Y007' OR 'ZREW'
          OR 'Y106' OR 'ZTSR' OR 'Y127' OR 'Y125' OR 'Y126' OR 'Y124'
          OR 'Y157' OR 'Y092' OR 'Y085' OR 'Y090' OR 'ZMZD' OR 'Y215'
          OR 'Y214' OR 'Y111' OR 'ZC$D' OR 'ZAUD' OR 'Z001' OR 'Z002'
          OR 'Z003' OR 'Z004' OR 'Z005' OR 'Z006' OR 'Z007' OR 'Z008'
          OR 'Z009' OR 'Z010' OR 'Z011' OR 'Z012' OR 'Z013' OR 'Z014'.
    *     Do your thing here
          WRITE: / 'OK'.
        WHEN OTHERS.
          WRITE: / 'NOT OK'.
      ENDCASE.
    [/code]
    If this will not work for you, you could try a different approach:
    [code]
    * Local definition
      DATA:
        var_list(1024).
    * Build variable string for checking
      CONCATENATE 'ZRAT ZAGR ZRCR Y098'
                  'Y007 ZREW Y106 ZTSR'
                  'Y127 Y125 Y126 Y124'
                  'Y157 Y092 Y085 Y090'
                  'ZMZD Y215 Y214 Y111'
                  'ZC$D ZAUD'
             INTO var_list
        SEPARATED BY space.
    * Check if the correct value is supplied
    * (note: CS is a substring match, so a shorter value could also match
    *  part of a longer one in the list)
      IF var_list CS wac-kschl.
    *   Do your thing here
        WRITE: / 'OK'.
      ENDIF.
    [/code]
    Hope this helps you a bit.
    Regards,
    Rob.
    Regards,
    Rob.

  • How to create documentation for report programs and how to use it

    How to create documentation for report programs, and how to use it in the selection screen by placing an icon in the application toolbar? If I click this icon, the help documentation has to be displayed.
      Note: Example - the selection screen of program <b>RSTXSCRP</b>

    Hi
    1. Go to transaction SE38 and enter the program name.
    2. Click on the Documentation radio button and then press Change.
    3. Write your PURPOSE, PREREQUISITES etc. details.
    4. Save it and activate it.
    The icon will appear automatically on the selection screen.
    Thanks
    Sandeep
    Reward if useful

  • SQ01 and SQ02 transaction documentation

    Hi All,
    Is there any documentation on transactions SQ01 and SQ02 available anywhere, so as to use those two transactions effectively to get ad hoc reports?
    Thanks for any information.

    Hi,
    SQ01 is used to create a query from the infoset you have created. Here you can select fields for the selection screen and for list generation using check boxes.
    You can generate different types of report using a basic list, ranked list, or statistics, with output in graphical form.
    SQ02 is used to create the infoset. You need to specify the data source for your infoset, such as a table join, direct read, or data retrieval using a program. You can also add extra fields, structures and tables using the Extras button, and can code your own SELECT query for a field or structure.
    After generating the infoset, you have to assign it to the user group you already created.
    go through this link...
    http://help.sap.com/saphelp_47x200/helpdata/en/bf/1d4645bf3211d296000000e82de14a/frameset.htm
    Refer this thread
    Re: sq01
    -Shreya

  • How to use the CASE function and FILTER in a column formula?

    Hello All,
    I am using the CASE function and would also like to filter which values populate.
    The formula below is showing an error:
    case
    when '@{Time}' = 'Year' then "Time"."Fiscal Year"
    when '@{Time}' = 'Quarter' then "Time"."Fiscal Quarter"
    when '@{Time}' = 'Month' then FILTER ("Time"."Fiscal Period" USING "Time"."Fiscal Period" NOT LIKE 'A%')
    else ifnull('@{Time}','Selection Failed') end
    Thanks, AK

    when '@{Time}' = 'Month' then FILTER ("Time"."Fiscal Period" USING "Time"."Fiscal Period" NOT LIKE 'A%')
    I don't think FILTER works here, or with any data type other than number.
    Try the option Column -> Filter -> Advanced -> Convert this filter to SQL.
    If this helps, mark it as helpful.

  • Windows 8.1 PC, using Reader, when searching a folder containing approx. 100 docs: if I search for a word, no results are returned. Only the doc names can be found, but nothing from within the docs. This is a new problem and was not the case before.

    Windows 8.1 PC, using Reader, when searching a folder containing approx. 100 docs: if I search for a word, no results are returned. Only the doc names can be found, but nothing from within the docs.
    This is a new problem and was not the case before.

    Works perfectly fine for me with the latest Reader version (11.0.09).
    You write that it worked "before"; before what?  An update?  Update from what version to what version?

  • Using Case and Joins in update statement

    Hi all,
    I need to update one column in my table, but need to use CASE and joins. I wrote the query below and it is not working.
    I am getting an error message saying the SQL command was not properly ended.
    I am not that good at SQL. Please help!
    update t1 a
    set a.name2=
    (case
    when b.msg2 in ('bingo') then '1'
    when b.msg2 in ('andrew') then '2'
    when b.msg2 in ('sam') then '3'
    else '4'
    end )
    from t1 a left outer join t2 b
    on a.name1 = b.msg1 ;
    Waiting for help on this...!
    Thanks in Advance... :)

    Another approach is to update an inline view defining the join:
    update
    ( select a.name2, b.msg2
      from   t1 a
      join   t2 b on b.msg1 = a.name1 ) q
    set q.name2 =
        case
          when q.msg2 = 'bingo' then '1'
          when q.msg2 = 'andrew' then '2'
          when q.msg2 = 'sam' then '3'
          else '4'
        end;
    which could also be rewritten as
    update
    ( select a.name2
           , case b.msg2
                when 'bingo'  then '1'
                when 'andrew' then '2'
                when 'sam'    then '3'
                else '4'
             end as new_name
      from   t1 a
      join   t2 b on b.msg1 = a.name1 ) q
    set name2 = new_name;
    The restriction is that the lookup key (in this case, t2.msg1) has to be declared unique, via either a primary or unique key or a unique index.
    (You don't strictly need to give the view an alias, but I used 'q' in case you tried 'a' or 'b' and wondered why they weren't recognised outside the view.)

  • Optical drive on iMac at work, barely one year old, won't read CDs... Disk Utility shows the disc unavailable! Rarely used and out of warranty - what a rip-off!! I have been on Macs since the 80s and things are getting more and more difficult

    Optical drive on iMac at work, barely one year old, won't read CDs... Disk Utility shows the disc unavailable! Rarely used and out of warranty - what a rip-off!! I have been on Macs since the 80s and things are getting more and more difficult to deal with. Why did it quit after barely being used? Any help from Apple?

    It depends on what you mean by "barely one year old". If the iMac is only a couple of days out of warranty, Apple's been known to make exceptions and extend the warranty, though it's by no means guaranteed. If it's several days or more out of warranty, then Apple will fix it, if it's a hardware problem, but the repair won't be free. This is no different now than it would have been with a Mac purchased back in the 80s.
    Why the drive failed, if indeed it has, is impossible to say. Could have been a static shock, a weak component that failed when power was applied, or a mechanical fault in one of the tiny parts used in optical drives and hard drives that finally broke.
    Regards.

  • Photostream, iCloud Photo Sharing, iPhoto, my phone, and use cases all around

    I have a few pretty basic use case questions about Photostream.
    I take a lot of pictures on my iPhone. They magically appear in iPhoto on my Mac. I'm told that by having the import feature in iPhoto turned on, those photos will stay in iPhoto on my iMac forever - or at least after I pass the 30 days/1,000 most recent photos limits.
    So that leaves me to ask: what am I supposed to do with the photos on my phone? Just leave them there to eat up storage? Or delete them at some point? Delete all the photos on my phone after I import those that Photostream hadn't already captured? Is there any benefit to importing a duplicate photo if I have Photostream importing turned on? And are they truly, really duplicate photos? Are the iPhone photos in my Photostream duplicated on my phone until they fall out of Photostream? Are iPhone photos duplicated if they're in a shared stream?
    I just don't know what I should do with the photos on my iPhone once they're on my iMac, and I'm not truly confident that they're on my iMac for good or that they're truly the same file as the original.

    On the Mac, in iPhoto, move photos in the Photo Stream group into some other album.
    foatbttpo1567, in iPhoto on a Mac you need to download the photos to an event - not an album. An album would only reference them in the photo stream, but not store the photos in the iPhoto library. Turning on "Automatic Import", as Old Toad suggested, will do that and create monthly events.
    I'm told that by having the import feature in iPhoto turned on, those photos will stay in iPhoto on my iMac forever - or at least after I pass the 30 days/1,000 most recent photos limits.
    The 30 days/1000 photos limit applies to the temporary storage in iCloud - the time you have to grab them and to import them. Once they are in an event, you have them safe.
    So that leaves me to ask: what am I supposed to do with the photos on my phone? Just leave them there to eat up storage? Or delete them at some point? Delete all the photos on my phone after I import those that Photostream hadn't already captured?
    Photo Stream is a handy feature for transmitting photos, but don't rely on it for permanent storage. If you ever have to reset your iPhone, reinstall your Mac, or reset the photo stream, your photos from the stream may be gone. Always keep a copy of your photos either in the Camera Roll on your phone or in iPhoto events on your Mac, and make sure that these copies are regularly backed up.
    As for the "truly duplicates" - Photo stream will send optimized versions to the devices, but to a mac the full original version. You may want to read the FAQ:  iCloud: Photo Stream FAQ

  • Do the iPhone 5 and iPhone 5s use the same case?

    Do the iPhone 5 and iPhone 5s use the same case?

    Twice now you've given this useless answer.  What's the point in even replying?
    Case dimensions look to be identical, but some cases will hinder the fingerprint scanner. My Griffin Survivor will fit my brother's 5s easily, but as the home button is entirely covered, it's useless to him.
