Master-Worker Implementation

Hi all,
We would like to implement the Master-Worker pattern with Coherence. To do so, our master loads all the computation descriptions from the database at startup and puts them into the distributed cache. Because local storage is disabled in the master's configuration file, this ensures that all our computations are fairly distributed over the workers. At runtime, all incoming data for the entire platform goes through the master, which dispatches it to the corresponding worker(s). The master must therefore always know which member is responsible for a specific computation, so that it can minimize network traffic by sending only the relevant data to each concerned worker. To keep the master aware of the cache distribution, we maintain our own Map<ComputationKey, Member>, loading it at startup and reloading it whenever a member joins or leaves the cluster. To load the map, we use an InvocationService to execute an agent on each worker that reads the content of its local backing map. To send data to a specific worker, we use another InvocationService to execute an agent that hands the data to the worker's local data manager. The worker processes the data and then sends the computed result back to the master through a last InvocationService.
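To make the setup concrete, here is a minimal sketch of building such an ownership map with the standard key-based ownership API rather than a backing-map agent (the cache name "computations" is an illustrative assumption, and the map still needs refreshing on membership changes):

import java.util.HashMap;
import java.util.Map;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

public class OwnershipMapSketch
{
    // Builds a key -> owning-member map for every entry in the cache.
    // Note: keySet() pulls every key to the caller, which is fine for a
    // bounded set of computation descriptions but not for huge caches.
    public static Map loadOwnershipMap()
    {
        NamedCache cache = CacheFactory.getCache("computations");
        PartitionedService service =
                (PartitionedService) cache.getCacheService();

        Map mapOwners = new HashMap();
        for (Object key : cache.keySet())
        {
            // getKeyOwner() consults the local partition assignment;
            // it does not involve a network round-trip
            mapOwners.put(key, service.getKeyOwner(key));
        }
        return mapOwners;
    }
}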
Is there a better way to do this? If not, is there a way to know, after a MemberEvent has been raised, when the redistribution is fully complete? Currently we launch the agent that gives us the cache distribution too soon, so our map becomes invalid because it no longer contains the right amount of data.
Thanks for your answer,
Nicolas

Hi Nicolas,
If a computation is bound to a single entry, the best approach would be to use the NamedCache.invoke() API, which provides a once-and-only-once execution guarantee. If a computation is bound to multiple entries residing in a single partition (using a custom key association), that approach can still be used.
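For instance, a minimal sketch (the Computation value type and the "computations" cache name are hypothetical stand-ins for the computation descriptions above):

import java.io.Serializable;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Hypothetical value type standing in for a stored computation description.
interface Computation extends Serializable
{
    Object compute();
}

public class ComputationProcessor extends AbstractProcessor
        implements Serializable
{
    // Executed on the member that owns the entry, with the
    // once-and-only-once guarantee, even across fail-over.
    public Object process(InvocableMap.Entry entry)
    {
        Computation computation = (Computation) entry.getValue();
        Object result = computation.compute();
        entry.setValue(computation); // persist any state the computation changed
        return result;
    }

    // Caller side: Coherence routes the call to the owning member,
    // so no ownership bookkeeping is needed at all.
    public static Object run(Object computationKey)
    {
        NamedCache cache = CacheFactory.getCache("computations");
        return cache.invoke(computationKey, new ComputationProcessor());
    }
}

With a custom key association the related entries land in the same partition, and invokeAll() over the associated keys gives the same locality.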
Moving forward, you should not assume that re-distribution can only be triggered by fail-over or fail-back events. We are currently working on dynamic weight-based distribution algorithms that factor in the amount of actual data stored in a partition as well as the CPU utilization related to that data. As a result, a change in the data usage pattern could cause some re-distribution, though in either case (today or tomorrow) distribution events will be very rare, and in general partition ownership can be counted on as being very stable.
Another point I'd like to make is that checking the ownership information is a very efficient call that does not require any network communication. In addition to the currently exposed key-based ownership API, Coherence 3.4 introduces a member-based ownership PartitionedService request:
public PartitionSet getOwnedPartitions(Member member);
If you choose to use the InvocationService to execute the computation agents, you could use the ownership check to quickly determine the stability of the distribution before and after a computational call (or just in cases when the computational call fails to return reliable data).
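A sketch of that stability check around an invocation (illustrative names; it assumes the cache service is a PartitionedService and that PartitionSet instances can be compared with equals()):

import java.util.Map;
import java.util.Set;

import com.tangosol.net.Invocable;
import com.tangosol.net.InvocationService;
import com.tangosol.net.Member;
import com.tangosol.net.PartitionedService;
import com.tangosol.net.partition.PartitionSet;

public class StableInvocation
{
    // Snapshots the member's partition ownership before and after the
    // invocation; both snapshots are local, inexpensive calls.
    public static Object queryStable(PartitionedService cacheService,
                                     InvocationService invocationService,
                                     Invocable agent,
                                     Member member,
                                     Set setMembers) // typically just {member}
    {
        PartitionSet before = cacheService.getOwnedPartitions(member);
        Map mapResults = invocationService.query(agent, setMembers);
        PartitionSet after = cacheService.getOwnedPartitions(member);

        if (!before.equals(after))
        {
            // ownership changed mid-call: the agent may have seen a partial
            // view of the data, so rebuild the ownership map and retry
            throw new IllegalStateException("distribution changed during call");
        }
        return mapResults.get(member);
    }
}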
Regards,
Gene

Similar Messages

  • Snapshot/master site implementation in one single system

    How do I set up a Snapshot/Master site implementation on a single system? Can I get sample code? The Oracle 8 documentation has some problems.

    I know what you said (that is understood).
    To get some practice with a replication scenario, can I do it on a single system by creating two databases, one database as the master site and the other as the snapshot site? Is it possible on a single system? I'm able to create the snapshot and master sites in two databases, but when I try
    DBMS_REPCAT.CREATE_SNAPSHOT_REPOBJECT() I get a problem using the source given in the Oracle 8i documentation.
    Thank you

  • ERPI master/work repositories

    Hi,
    I was going through the tutorial given at http://www.oracle.com/technetwork/middleware/financial-management/tutorials/configerpi-093532.html
    My question is: if ERPI is installed on the same server as ODI,
    does ERPI need its own separate repositories (master/work repository), or can it use the repository already created for ODI?
    Any pointers will be helpful.
    Thanks.

    It can use the master/work repository created by the RCU, or a different one if required; the choice is yours.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • I finally got the TV@nywhere master working

    After installing winamp 5 and this plugin:
    http://winamptv.zapto.org/
    I didn't install MSIPVS.
    I finally got my TV@nywhere master working under XP SP1.
    I've got all the channels, and good video and sound quality.
    I didn't install the new drivers; the ones that came with the card are working fine.
    I hope anyone with problems can solve them the same way...
    TV@nywhere is real fun now that it works...

    I have that plugin and it works wonderfully, although you cannot actually record television from it, and that's one big loss. I'm still having trouble with mine blue-screening when shutting down, restarting, hibernating, or going into standby [see this thread and help me!]. I use the card to do a lot of recording and it's just annoying now.

  • Master-detail implemented by two reports

    Does anyone have an example of a master-detail page implemented with two reports? Or can you explain to me how to do it?
    When clicking on a row of the master report (or a link in the record, e.g.), the second report should show the details for the id of the master row.
    As an extra, the detail table should refresh in a partial page refresh.
    I know how to get it to work with a full page submit (a column in the master row links to the same page, filling a hidden item; the detail select looks at the hidden item),
    but just refreshing the detail table is something my brain doesn't seem to get.
    It should be simple...
    no?
    (I'm using 4.1, of course)

    Edwin,
    Check out this Patrick Wolf blog:
    Resetting pagination of a Master-Detail Report in Oracle APEX: http://www.inside-oracle-apex.com/resetting-pagination-of-master-detail-report-in-oracle-apex/
    Jeff

  • Scripts for Exporting Master/Work Rep

    Hi,
    I want to automate the export process, and so I was looking for any scripts for exporting the whole ODI work/master repository. I checked the CLASS objects and we do not have any CLASS names for repository exports.
    Thanks in advance for any answers..!

    Created for 10g: http://odiexperts.com/automize-the-backup-using-odi-exports
    but it should work for 11g too :)

  • Master & work repository Export/Import

    Hi all,
    I have installed ODI on the production server.
    I have to import the repositories on this prod server from the development server.
    1. I do not have a separate schema for the master repository in prod, and I have to use the same master repository that was created for the development server, so there is no import & export??
    The question is:
    Can I refer to the same master repository schema from the production server? That means all the connection details would link to the same repository that was there for DEV. Can I do that?
    2. I have a separate schema for the work repository in prod, but I have to use the work repository of the DEV server.
    Do I have to do an import & export here?
    Please guide
    Thanks
    Sourabh

    How are you trying to do the import/export?
    If you are importing the objects in Designer, it will use the credentials you specified when you connected.
    Craig

  • Change/removal of Unit of Issue in material master Work Scheduling view

    Hi,
    Is there a way to remove the Unit of Issue from the Work Scheduling view? It gives the error message "this is being used in BOM". Even if we set the deletion indicator, it shows the same error message.
    Regards,
    R. Srinivasan

    Do a CS15 for the material in question, then go into the BOM, select the material, and click delete/remove; you can then do this.
    I have tested it and it works.
    Regards
    Adeel

  • Help with Master-Detail implementation

    I'd like to create one table with summary info, with each row linked to the details about the items in that row. How can the application figure out which master table row the user selected and make the corresponding details available for display?
    I tried the example provided in the "Accessing DB with databound components" tutorial and it was OK, but it used a dropdown list as the master table. I need to use a "real" table with a checkbox to select the row and a button to submit the request.
    Has anyone ever dealt with this issue?
    Thanks,
    Marco

    Yep, the example application mayagiri is referencing should show you what you are describing... There is also a new tutorial you might want to check out:
    http://devservices.sun.com/session/login.jsp?goto=/premium/jscreator/standard/learning/tutorials/inserts_updates_deletes.pdf
    Please be sure to get the update to the product as well for these examples - see the readme:
    http://developers.sun.com/prodtech/javatools/jscreator/reference/docs/updateREADME.pdf
    v

  • Validation for PO text in the material master: BADI implementation

    Hi experts,
    I am trying to implement a validation on the Purchase Order text (long text), which must always be filled in languages EN and DE.
    Thus if a user edits a material in MM02 or creates a new one in MM01, this long text box (PO text) must be filled; otherwise an error should be raised.
    I created a BADI, and inside it I call the FM READ_TEXT, but it only checks for existing text already saved in the database.
    What about text entered at runtime (e.g. while creating a material)?
    And what if the user purposely deletes an existing PO long text and then tries to save (the database might already have a PO long text saved for that material)?
    Kindly help me if there is an FM that can be called to check at runtime whether the PO long text is filled, or if we can do some customizing to make the PO long text mandatory.
    Please note we need this field mandatory for only 3 plants; that's where the problem comes in.
    Many, many thanks in advance.
    Rittik.

    Hello Rittik,
    Please use exit 'EXIT_SAPLMGMU_001' for your requirement. I checked, and the code below (READ_TEXT) picks up the latest text from the transaction currently being processed, so it can be used to raise an error. Let me know if you face any issues.
    DATA: lv_name TYPE thead-tdname,            " text name = material number
          lt_line TYPE STANDARD TABLE OF tline. " text lines returned by READ_TEXT

    * Plant values are left as placeholders, as in the original post; each
    * value needs its own comparison ("or '...'" alone is not valid ABAP).
    IF wmarc-werks = '...........' OR wmarc-werks = '...'.
      lv_name = wmara-matnr.
      CALL FUNCTION 'READ_TEXT'
        EXPORTING
    *     client                  = sy-mandt
          id                      = 'BEST'      " PO text id in the material master
          language                = 'E'
          name                    = lv_name
          object                  = 'MATERIAL'
    *     archive_handle          = 0
    *     local_cat               = ' '
    *   IMPORTING
    *     header                  =
        TABLES
          lines                   = lt_line
        EXCEPTIONS
          id                      = 1
          language                = 2
          name                    = 3
          not_found               = 4
          object                  = 5
          reference_check         = 6
          wrong_access_to_archive = 7
          OTHERS                  = 8.          " the CALL FUNCTION statement needs this period
      IF sy-subrc <> 0.
    *   no PO text found: raise the validation error here, e.g.
    *   MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
    *           WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      ENDIF.
    ENDIF.
    Regards,
    Diwakar

  • Master Repo, Work Repo and ODI installation

    Hi,
    I have the following doubts for the Windows platform.
    I am integrating two application systems, say A (source) and B (target), whose data is located on different Oracle DB servers.
    1) Should the ODI server be installed on the source server, the target server, or another machine?
    2) Where should I create the master & work repositories: on the source or the target server?
    3) If my target is a remote host, where can I install and run agents?
    4) While creating physical schemas we can select a different work schema; does this work schema act like a staging area? While designing the interfaces, should I select this work schema as the staging area? What is the benefit of having this work schema separate from my source/target schema?
    5) I read that ODI creates temporary tables each time it executes ODI objects, and that these tables are junk data. Should I drop all these temp tables in the work schema before or after executing an interface?
    Please clarify,
    Thanks.
    MNK

    Hi,
    Find my answers below.
    1) Should the ODI server be installed on the source server, the target server, or another machine? It is always recommended to install ODI on your TARGET server for good performance.
    2) Where should I create the master & work repositories? On the TARGET server, and make sure you have dedicated schemas for the work and master repositories.
    3) If my target is a remote host, where can I install and run agents? Again on the TARGET host; you can also install the ODI runtime agent on separate servers, have a look at the ODI installation guide.
    4) Does the work schema act like a staging area? Yes, ODI uses the work schema to create temporary tables ($ tables), and it acts as a staging area.
    Should I select this work schema as the staging area while designing interfaces? There is no need to select the work schema as such; you only need to select the respective LOGICAL schema, which implicitly creates the $ tables in the work schema you selected in the PHYSICAL schema.
    What is the benefit of having this work schema separate from my source/target schema? You will not need a dedicated "staging area" to consolidate your data from one or more sources.
    5) Should I drop all these temp tables in the work schema before or after executing an interface? No need; ODI takes care of DROPPING and CREATING the $ tables on the fly.
    For a simple data integration, two tables are created at runtime, C$ and I$, which in turn are dropped after loading into the TARGET table.
    Makes sense?
    P.S.: Experts' comments are welcome.
    Thanks,
    Guru

  • Minimum things required to take master and work repository backup.

    How do I back up the master and work repositories?

    Hi,
    In 11g, under Topology you can click the top-right icon and select Export; you'll be able to export the master and the work repositories.
    You can also use the OdiExportMaster and OdiExportWork tools in a package/procedure/bash script to schedule a daily backup.
    A last alternative is to back up the database schema(s) containing your master/work repositories.
    Hope it helps,
    JeromeFr

  • Problem in export/import work repo !!!

    Hi All,
    Good Morning.
    I have a problem with exporting and importing repositories. The description of the problem is given below.
    I have a remote machine on which Sunopsis is installed, and all the development, implementation, and testing happen there.
    Now I need to replicate the same environment on my local desktop (ODI is installed and the master and work repos are created).
    When I export the work repo from my remote machine and try to import it on my local machine (from the Designer-level Import -> Work Repository), after a while it fails with a "snp_lschema does not exist" error.
    Does anyone have an idea why this is happening?
    Thanks,
    Guru

    Hi Julien,
    Thanks for your input; it is really helpful for me.
    I need some more clarification.
    Actually, I exported my master and work repos from my remote machine (as a zip file) and saved the file on my local drive (say D:/downloads/deploy.zip...).
    So when I tried to import the master repo from Topology (browsing to the zip file and clicking OK), it does not do anything; I mean nothing happens after I click OK.
    Should I copy this master repo zip into the Sunopsis installation folder (in the IMPEXP directory) and then import it? Am I doing this right?
    Please advise.
    Thanks,
    Guru

  • Create Separate Master Rep for DEV, QA & PROD or just use Contexts

    In line with a previous question I posed on the usage of Contexts for the "ODI touted" single set of code across environments, I'm wondering how many architects have, in actual implementations, used Contexts in place of different environments (and hence code sets) for DEV, QA, and PROD.
    For example, I could have an ODI Master/Work for a DEV environment, then another ODI Master/Work when I move to QA/TEST, and another ODI Master/Work for PROD. This way I have true change management, and ODI object code is locked from the moment it leaves the DEV environment: export from the DEV Master/Work and import into the QA Master/Work, and likewise import from QA into the PROD Master/Work.
    I've been in an ODI application environment where we did just that, in order to ensure the integrity of the code set (i.e. locked after DEV).
    The other approach is to have just one Master/Work (or multiple Works, for execution and such) and then use Contexts in the Topology to point to different physical servers. Although this allows for one set of code (in the previous approach, when you export/import from DEV to QA you still have to change the physical data server in QA), there is less integrity in the code set.
    So, long story short, I'm just wondering what you all have done from an architecture standpoint, pros/cons, etc. Thanks for the input.

    1. One master for DEV & QA (multiple work repositories), one master for PROD (execution work repository).
    Merits:
    -Easier to maintain.
    -If you need the complete project in QA without export/import of each object, a solution is the best way to implement it: just restore the solution in the QA environment.
    -No need to take special care about the logical schemas, as DEV and QA use the same master repository.
    -It takes less time to test each object.
    -Once the testing is done, promote scenarios to production.
    -The very common issue of id conflicts will not occur frequently.
    Demerits:
    -Sharing the same topology between DEV & QA creates unnecessary issues, e.g. a developer can have the privilege to change the physical data server details. Even if you restrict editing/deleting, the number of logical schemas will keep increasing day by day, so you need a dedicated admin to take care of the topology part. (If it were me, though, I would give full privileges to the developers in the DEV environment.)
    -If the QA environment is a work repository, then you have to restrict all testers from modifying the objects, or else only promote scenarios to QA.
    -If you have a different physical connection for the QA environment, security concerns may arise, as the connection details are shared within one master repository.
    2. Multiple masters and works, one per environment.
    Merits:
    -All teams are independent, as you have different environments.
    -There is nothing to be concerned about regarding security, as you have a different topology for each environment.
    -Once the DEV team is done with the project, promote only scenarios/load plans to PROD through QA.
    -Easier to maintain versions, as you have separate environments.
    Demerits:
    -Chances of getting id conflict issues.
    -More difficult to maintain.
    -Special care must be taken to keep the logical schema names the same across all environments.
    Other experts can also share their experience, but the common point for each of us is to keep production completely separate in terms of work and master repository.
    Bhabani
    http://dwteam.in

  • Pass data between customer master enhancement screen ALV and BADI

    Hello
    I currently have my custom screen on the customer master working. I have an ALV that I display on the custom tab I created on the customer master. It's an editable ALV, so the user can change values on it; then, when they hit "Save" on the customer master, I need these values transferred over to an external system using RFC-JCo communication. I got all of this to work, but I still don't know how to pass values from my ALV screen to the GET_DATA method of the CUSTOMER_ADD_DATA_CS BADI, or for that matter to any of the methods of the BADIs involved.
    Right now, to pass values from the BADI to my custom screen, I simply use SET PARAMETER/GET PARAMETER combinations, but I cannot keep using that when I have over 1000 values to pass.
    I am looking for this so that I can perform data validations on my ALV fields when the user hits Enter after typing the values, and also to check whether anything was changed, in order to set (or not set) the 'fields changed' flag in the BADI, etc.
    Any help would be appreciated; I will award points for useful info.
    Thanks,
    Kushal

    Are these details not available on the standard customer screen? Or do you want some additional data? BTW, in the method GET_TAXI_SCREEN you won't have access to the customer data; for that you have to implement the method(s) GET_DATA and/or SET_DATA.
    BR,
    Suhas
