General architecture question.

So I'm back looking at clustering technologies.
I'm writing an application from scratch.
It's quite simple: it's a server application that receives a message over the network, processes it, logs it, and forwards it to a 3rd party.
The logging is where the bottleneck will be. The log must be saved to a "database" and must be searchable, highly available, and durable. Basically it's an audit log and I can't lose the logs.
And of course the application must support failover, load balancing, etc...
I definitely see a pattern here, and write-behind as well?

Hi,
While this isn't really a question for the Incubator Forum, using Coherence essentially as a highly resilient scale-out buffer is something we occasionally see customers do. OK... it may be a pattern, but it's probably not a common one.
You're right in thinking write-behind with Coherence may be useful (in which case you need something like Enterprise Edition), especially to reduce the load on the back-end database as writes are "batched". Ultimately however I think the format of the logs and how they are stored in the database will determine how searchable they will be.
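The batching benefit described above can be sketched without any Coherence classes at all. The buffer below is purely illustrative (the class and method names are invented for this example); in real Coherence the write-behind queue does this internally and hands each batch of ripe entries to your CacheStore.storeAll(Map) implementation.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: Coherence's write-behind queue does this internally,
// eventually calling CacheStore.storeAll(Map) with a batch of ripe entries.
public class WriteBehindBuffer {
    private final Map<String, String> pending = new LinkedHashMap<>();
    private final int batchSize;
    private final List<Map<String, String>> flushed = new ArrayList<>();

    public WriteBehindBuffer(int batchSize) {
        this.batchSize = batchSize;
    }

    // Cache writes land in the buffer immediately; the database sees them later.
    public void put(String key, String value) {
        pending.put(key, value);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    // One batched call replaces many individual INSERTs, which is where
    // write-behind reduces the load on the back-end database.
    public void flush() {
        if (!pending.isEmpty()) {
            flushed.add(new LinkedHashMap<>(pending)); // stand-in for storeAll()
            pending.clear();
        }
    }

    public int batchesWritten() { return flushed.size(); }

    public int pendingCount() { return pending.size(); }
}
```

The key property for the audit-log scenario is that writes are absorbed by the (in Coherence's case, partitioned and backed-up) buffer and reach the database as coalesced batches.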
I'm thinking the Coherence part may be the easiest piece. What and how you'll configure the back-end database will probably be much harder.
Regards
-- Brian
Brian Oliver | Architect | Oracle Coherence Engineering
Oracle Fusion Middleware

Similar Messages

  • Can I use coherence as follows? (General architecture question)

    I'm working on creating a "web service" that will typically
    - Receive a request and log it.
    - Process
    - Reply to "client" and log it.
    The idea is to put the logs into the grid and finally to SOR?
    Of course the logs need to be parsed, searchable etc...
    Does the above make sense?
    And some general questions...
    If Coherence is configured as partitioned, do you really need an SOR? Can it replace the SOR?
    Can I have Coherence running on separate machines and the web service on other servers?

    It makes perfect sense.
    You can use a CacheStore to write your log entries asynchronously ("write-behind") to the system of record. You can implement a LogEntry class which would wrap the log messages and expose getters for the attributes you wish to query by. You can configure an eviction policy to determine how long LogEntry instances should remain in Coherence.
    So long as you have sufficient memory for the amount of data you wish to store (and backups, if necessary), yes Coherence can replace your SOR.
    Yes, Coherence can run on different servers. Your web service layer can be configured as a "storage disabled" cluster member, or connect to the cache servers using Extend.
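    A minimal sketch of the LogEntry wrapper described above; the field names are assumptions for illustration. In a real deployment the class would typically also implement a Coherence serialization interface such as PortableObject, and the getters are what index extractors and filters would query by.

```java
import java.io.Serializable;

// Illustrative LogEntry wrapper; the field names are assumptions.
// Expose a getter for every attribute you want to query or index by.
public class LogEntry implements Serializable {
    private final String requestId;
    private final String clientId;
    private final long timestampMillis;
    private final String payload;

    public LogEntry(String requestId, String clientId,
                    long timestampMillis, String payload) {
        this.requestId = requestId;
        this.clientId = clientId;
        this.timestampMillis = timestampMillis;
        this.payload = payload;
    }

    public String getRequestId() { return requestId; }

    public String getClientId() { return clientId; }

    public long getTimestampMillis() { return timestampMillis; }

    public String getPayload() { return payload; }
}
```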

  • General Architecture question - EJB

    I am working on defining an architecture to use for an application that will be providing maintenance for database tables (inserts/updates/deletes). My desires are to have the application distributed using Java Web Start.
    I am struggling with suggesting an EJB implementation for the database access or servlets that will access the database and issue the appropriate queries. One issue surrounding the database is that depending upon the user's login, they will be connecting to different databases. So, something would have to exist to determine which database to connect to and issue the queries.
    It sounds like the more object oriented approach would be to use an EJB architecture. If that is the case, since Java Web Start is what would be invoking the application, should a servlet invoke the EJB rather than directly from the application?
    Does the J2EE provide all that is needed to implement such an architecture? If so, what benefits do something like JBoss provide over Tomcat?
    I want to define the architecture before we start developing and any answers to the above questions or other suggestions are greatly appreciated.
    TIA

    So, something would have to exist to determine which database to connect to and issue the queries.
    Okay, this could be a piece of logic after the user has provided a username and password.
    Login page = JSP;
    Logic page = Servlet;
    It sounds like the more object oriented approach would be to use an EJB architecture. If that is the case, since Java Web Start is what would be invoking the application, should a servlet invoke the EJB rather than directly from the application?
    I haven't used Java Web Start, but the approach you use should depend on your product's requirements for the services provided by the container.
    An EJB container provides a massive advance over a servlet container in terms of scalability, security, etc., and if that is necessary for your app then you should go with an EJB container. If not, just go for some plain old servlets doing the SQL database access for you.
    Object orientation is the power of the java language, not specifically the power of the architecture. The architecture is based on some best practices. Do a search on the MVC pattern to learn more.
    Does the J2EE provide all that is needed to implement such an architecture? If so, what benefits do something like JBoss provide over Tomcat?
    Yes. JBoss provides the EJB container; Tomcat is the servlet container. They can be downloaded together from JBoss.
    I want to define the architecture before we start developing and any answers to the above questions or other suggestions are greatly appreciated.
    Good, that's the best thing to do. Design first. Take a look at Model-View-Controller.
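    The MVC split mentioned above can be sketched as a tiny, self-contained example. In the J2EE setup being discussed, the view would be a JSP, the controller a servlet, and the model plain Java classes; the names here are purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class MvcSketch {
    // Model: holds application state, knows nothing about presentation.
    public static class TaskModel {
        private final List<String> tasks = new ArrayList<>();
        public void addTask(String task) { tasks.add(task); }
        public List<String> getTasks() { return tasks; }
    }

    // View: renders the model; a JSP would do this with markup instead.
    public static class TaskView {
        public String render(TaskModel model) {
            return "Tasks: " + String.join(", ", model.getTasks());
        }
    }

    // Controller: interprets requests and updates the model; a servlet's
    // doGet/doPost methods would play this role.
    public static class TaskController {
        private final TaskModel model;
        private final TaskView view;

        public TaskController(TaskModel model, TaskView view) {
            this.model = model;
            this.view = view;
        }

        public String handle(String newTask) {
            model.addTask(newTask);
            return view.render(model);
        }
    }
}
```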
    Regards
    T

  • General architecture questions

    Hello,
    I am developing a web application and could use some architectural advice. I've done lots of reading already, but could use some direction from those who have more experience in multi-tier development and administration than I. You'll find my proposed solution listed below and then I have some questions at the bottom. I think my architecture is fairly standard and simple to understand--I probably wrote more than necessary for you to understand it. I'd really appreciate some feedback and practical insights. Here is a description of the system:
    Presentation Layer
    So far, the presentation tier consists of an Apache Tomcat Server to run Servlets and generate one HTML page. The HTML page contains an embedded MDI style Applet with inner frames, etc.; hence, the solution is Applet-centric rather than HTML-centric. The low volume of HTML is why I decided against JSPs for now.
    Business Tier
    I am planning to use the J2EE 1.4 Application Server that is included with the J2EE distribution. All database transactions would be handled by Entity Beans and for computations I'll use Session Beans. The most resource intensive computational process will be a linear optimization program that can compute large matrices.
    Enterprise Tier
    I'll probably use MySQL, although we have an Oracle 8 database at our disposal. A disadvantage of MySQL is that it won't have triggers until the next release, but maybe I can find a work-around for now. An advantage is that an eventual migration to Linux will be easier on the wallet.
    Additional Information
    We plan to use the system within our company at first, with probably about 5 or fewer simultaneous users. Our field engineer will also have access from his laptop. That means he'll download the Applet-embedded HTML page from our server via the Internet. Once loaded, all navigation will be Applet-centered. Data transfer from the Applet to the Servlet will be via standard HTTP.
    Eventually we would like to give access of our system to a client firm. In other words, we would be acting as an application service provider and they would access our application via the Internet. The Applet-embedded HTML page would load onto their system. The volume would be low--5 simultaneous users max. All users are well-defined in advance. Again, low volume HTML generation--Applet-centric.
    My Questions
    1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7 ? Or should I forget the application server concept completely?
    2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application--perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called "DMZ" with Tomcat? Should it be on the same physical server machine as Tomcat?
    5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate it if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    Regards,
    Itchy

    Hi Itchy,
    Sounds like a great problem. You did an excellent job of describing it, too. A refreshing change.
    Here are my opinions on your questions:
    >
    My Questions
    1). Is the J2EE 1.4 Application Server a good
    production solution for the conditions that I
    described above? Or is it better to invest in a
    commercial product like Sun Java System Application
    Server 7 ? Or should I forget the application server
    concept completely?
    It always depends on your wallet, of course. I haven't used the Sun app server. My earlier impression was that it wasn't quite up to production grade, but that was a while ago. You can always consider JBoss, another free J2EE app server. It's gotten a lot of traction in the marketplace.
    2). If I use the J2EE Application Server, is this a
    good platform for running computational programs (via
    Session Beans)? Or is it too slow for that? How
    would it compare with using a standalone Java
    application--perhaps accessed from the Servlet via
    RMI? I guess using JNI with C++ in a standalone
    application would be the fastest, though a bit more
    complex to develop. I know it is a difficult
    question, but what is the most practical solution that
    strikes a balance between ease-of-programming and
    speed?
    People sometimes forget that you can do J2EE with a servlet/JSP engine, JDBC, and POJOs. (Plain Old Java Objects). You can use an object/relational mapping layer like Hibernate to persist objects without having to write JDBC code yourself. It allows transactions if you need them. I think it can be a good alternative.
    The advantage, of course, is that all those POJOs are working objects. Now you have your choice as to how to package and deploy them. RMI? EJB? Servlet? Just have the container instantiate one of your working POJOs and delegate to it. You can defer the deployment choice until later. Or do all of them at once. Your call.
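    The "working POJO plus thin wrapper" idea above can be sketched as follows. The names are invented for illustration; the point is that the computational logic has no container dependency, so the packaging choice (RMI, EJB, servlet) can be deferred.

```java
public class PojoSketch {
    // The working object: plain Java, no container dependency.
    // Stands in for the linear-optimization computation mentioned above.
    public static class MatrixOptimizer {
        public double[] optimize(double[] input, double factor) {
            double[] out = new double[input.length];
            for (int i = 0; i < input.length; i++) {
                out[i] = input[i] * factor;
            }
            return out;
        }
    }

    // A servlet, session bean, or RMI object would simply hold the POJO and
    // delegate to it, so the deployment choice can change without touching
    // the logic.
    public static class OptimizerFacade {
        private final MatrixOptimizer delegate = new MatrixOptimizer();

        public double[] handleRequest(double[] input) {
            return delegate.optimize(input, 2.0);
        }
    }
}
```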
    3). Can the J2EE 1.4 Application Server be used for
    running the presentation tier (Servlets and HTML)
    internally on our intranet? According to my testing,
    it seems to work, but is it a practical solution to
    use it this way?
    I think so. A J2EE app server has both an HTTP server and a servlet/JSP engine built in. It might even be Tomcat in this case, because it's Sun's reference implementation.
    4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called "DMZ" with Tomcat? Should it be on the same physical server machine as Tomcat?
    I'd have Tomcat running in the DMZ, authenticating users, and forwarding requests to the J2EE app server running inside the second firewall. They should be on separate servers.
    >
    5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    Tomcat's performance isn't so bad, so it should be able to handle the load.
    The bigger consideration is that the DMZ Tomcat has to listen on port 80 in order to be seen from the outside without opening another hole in your outer firewall. If you piggyback it on top of Apache you can just have those requests forwarded. If you give port 80 to the Tomcat listener, nothing else will be able to use it.
    >
    So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate it if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    Regards,
    Itchy
    There are smarter folks than me on this forum. Perhaps they'll weigh in. Looks to me like you're doing a pretty good job, Itchy. - MOD

  • General architectural question....

    Say you have a Person object like this:
    class Person {
        private String name, address, phone;
        public String getPhone() {
            return phone;
        }
        public void setPhone(String phone) {
            this.phone = phone;
        }
    }
    Now you have some "client" application which creates Persons and puts them in the cache:
    Person joe = new Person();
    cache.put("joe", joe);
    This application is maintaining a reference to joe. Perhaps it's even passing joe around to other classes. You want to make sure that when some class in this client app calls "joe.getPhone()" it's getting a correct result, one that reflects changes made to the cached value by other JVMs. You could do this a few ways:
    - By maintaining a local hashmap of keys to objects and having that local map kept up to date by a CQC. In that case you'd not pass around object references in your app, just object keys. This doesn't work well for applications which were already written assuming a single-JVM architecture since all your code is passing around objects and calling getter/setter methods on those objects.
    - You could have the object implement MapListener and listen for updates to itself. Every time a MapEvent showed up it would have to repopulate its member variables with the values from the MapEvent. That's not great either because if your object has many member variables, you're doing a lot of unnecessary serializing/deserializing/repopulating every time one variable is updated. Also, if a CQC maintains a reference to a Person implementing MapListener, that person will never be garbage collected (right?).
    - You could change your getter/setter methods to interact with the cached values instead of the local member variable values, but that's not good either because in cases of storage-disabled nodes you're hitting the network every time you do a getXXX() even if the value has not changed. Also, doesn't Coherence's usage of reflection depend on the presence of typical getXXX/setXXX methods which look like the ones above?
    There has got to be a better solution that I'm missing, right? I'd been running apps with storage disabled and CQCs keeping a local hashmap updated. Maybe it makes more sense to enable storage on each of the client app JVMs. I'd wanted to have many small storage disabled apps running on a single machine with just one or two storage enabled JVMs on it.
    Thanks,
    Andrew

    Hi Andrew,
    snidely_whiplash wrote:
    Say you have a Person object like this:
    class Person {
        private String name, address, phone;
        public String getPhone() {
            return phone;
        }
        public void setPhone(String phone) {
            this.phone = phone;
        }
    }
    Now you have some "client" application which creates Persons and puts them in the cache:
    Person joe = new Person();
    cache.put("joe", joe);
    This application is maintaining a reference to joe. Perhaps it's even passing joe around to other classes. You want to make sure that when some class in this client app calls "joe.getPhone()"
    That is not really safe; in particular it is not thread-safe, and you can't always expect it to work.
    that it's getting a correct result, one that reflects changes made to the cached value by other JVMs.
    And this will never work with Coherence, because any changes arriving from other nodes will always be deserialized into a new object instance created by Coherence, never into an existing one.
    You could do this a few ways:
    - By maintaining a local hashmap of keys to objects and having that local map kept up to date by a CQC. In that case you'd not pass around object references in your app, just object keys. This doesn't work well for applications which were already written assuming a single-JVM architecture since all your code is passing around objects and calling getter/setter methods on those objects.
    Nonetheless, passing on only keys and accessing the cache by key is the way it is intended to be used.
    - You could have the object implement MapListener and listen for updates to itself. Every time a MapEvent showed up it would have to repopulate its member variables with the values from the MapEvent. That's not great either because if your object has many member variables, you're doing a lot of unnecessary serializing/deserializing/repopulating every time one variable is updated. Also, if a CQC maintains a reference to a Person implementing MapListener, that person will never be garbage collected (right?).
    Also, Coherence will not know that that object is supposed to be the cached object; therefore whenever you do a CQC.get() with the same key, you would get a different object, and poof, you don't have a single object per identity anymore.
    >
    - You could change your getter/setter methods to interact with the cached values instead of the local member variable values, but that's not good either because in cases of storage-disabled nodes you're hitting the network every time you do a getXXX() even if the value has not changed. Also, doesn't Coherence's usage of reflection depend on the presence of typical getXXX/setXXX methods which look like the ones above?
    I don't really see what you refer to here.
    >
    There has got to be a better solution that I'm missing, right? I'd been running apps with storage disabled and CQCs keeping a local hashmap updated. Maybe it makes more sense to enable storage on each of the client app JVMs. I'd wanted to have many small storage-disabled apps running on a single machine with just one or two storage-enabled JVMs on it.
    I don't really see where you are trying to get to. Even in the case of storage-enabled JVMs, the majority of the data in a partitioned cache (where storage-disabled has a meaning) will not be local, just as in storage-disabled JVMs, where none of the data is local.
    Altogether, Coherence was never intended and is not designed to be used as a pass-by-reference map. You should expect your objects to be passed by value. Coherence only guarantees that data which you get from cache.get or cache.entrySet or similar calls is up-to-date (instead of stale data successfully overwritten by another entry) at the point in time of getting it, but you should not rely on special cases where your objects happen to be stored in Java object form.
    If you try to reproduce functionality such as a single object per identity being kept up-to-date, you would have to write your own repository of data objects and keep it up-to-date with a non-lite listener.
    Also, you would still have to
    - either resolve race conditions related to putting such an object back into the cache (the object being written to the network is vulnerable to modifications by the listener writing incoming changes to the object),
    - or copy state to your own object before putting an object into the cache, but then comes the problem of having multiple identities for the object at that point, and it is problematic to ensure that the object you keep holds the correct state. All in all, it exposes the problem of maintaining many synchronized instances of data without distributed locking (or having a very slow and not really reliable solution using distributed locking), and therefore having to reconcile changes from multiple directions (similar to the problems tackled by active-active push replication).
    Because of the above things, this is probably a bad idea.
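    For contrast, the key-based access pattern recommended above (pass keys around and re-read the cache on every access) can be sketched with a plain ConcurrentMap standing in for the Coherence NamedCache; the class below is illustrative, not Coherence API.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of key-based access: code passes keys, not object references, and
// re-reads the cache on each access, so it always observes the latest value
// instead of holding a stale local copy. A ConcurrentMap stands in for the
// Coherence NamedCache here.
public class KeyBasedAccess {
    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();

    public void put(String key, String value) {
        cache.put(key, value);
    }

    // Always fetch by key; never hold on to the returned object long-term.
    public String phoneOf(String personKey) {
        return cache.get(personKey);
    }
}
```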
    Best regards,
    Robert

  • Oracle VM Server for SPARC - network multipathing architecture question

    This is a general architecture question about how to best setup network multipathing
    I am reading the "Oracle VM Server for SPARC 2.2 Administration Guide" but I can't find what I am looking for.
    From reading the document it appears it is possible to:
    (a) Configure IPMP in the Service Domain (pg. 155)
    - This protects against link level failure but won't protect against the failure of an entire Service LDOM?
    (b) Configure IPMP in the Guest Domain (pg. 154)
    - This will protect against Service LDOM failure but moves the complexity to the Guest Domain
    - This means there are two (2) VNICs in the guest, though?
    In AIX, "Shared Ethernet Adapter (SEA) Failover" presents a single NIC to the guest but can tolerate failure of a single VIOS (~Service LDOM) as well as link-level failure in each VIO Server.
    https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/shared_ethernet_adapter_sea_failover_with_load_balancing198?lang=en
    Is there not a way to do something similar in Oracle VM Server for SPARC that provides the following:
    (1) Two (2) Service Domains
    (2) Network Redundancy within the Service Domain
    (3) Service Domain Redundancy
    (4) Simplify the Guest Domain (ie single virtual NIC) with no IPMP in the Guest
    Virtual Disk Multipathing appears to work as one would expect (at least according to the documentation, pg. 120). I don't need to set up mpxio in the guest, so I'm not sure why I would need to set up IPMP in the guest.
    Edited by: 905243 on Aug 23, 2012 1:27 PM

    Hi,
    there's link-based and probe-based IPMP. We use link-based IPMP (in the primary domain and in the guest LDOMs).
    For the guest LDOMs you have to set the phys-state linkprop on the vnets if you want to use link-based IPMP:
    ldm set-vnet linkprop=phys-state vnetX ldom-name
    If you want to use IPMP with vsw interfaces in the primary domain, you have to set the phys-state linkprop in the vswitch:
    ldm set-vswitch linkprop=phys-state net-dev=<phys_iface_e.g._igb0> <vswitch-name>
    Bye,
    Alexander.

  • Can FCS be set up in multiple offices - Would that be one database or can we synchronize several - I need general architecture concept

    Can FCS be set up in multiple offices/locations - Would that be one database or can we synchronize several databases - I need general architecture concept

    If you want to link two separated locations which are too far from each other to connect via Ethernet or FC, you can't. What you can do is build another FCS with a completely independent DB and link both with XML and scripting (or if you have very good DB knowledge). Otherwise, you can put the FCS DB in one location and make the clients in the other location connect to the first one. But if the issue is ingesting media from both locations into the same DB, then you had better have a nice, big Ethernet connection between the locations.
    Hope this helps

  • Architecture question, global VDI deployment

    I have an architecture question regarding the use of VDI in a global organization.
    We have a pilot VDI Core w/remote mysql setup with 2 hypervisor hosts. We want to bring up 2 more Hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would need to connect desktops hosted from their physical location. What we don't want is to need to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from one pane of glass, having multiple Desktop Provider groups to represent the geographical locations.
    Is it possible to just setup VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind individual interfaces for each domain on your web server,
    one for each.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
    We want to host several applications which will be accessed as:
    www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
    Is it possible to have a separate Weblogic domain for each application,all listening
    to ports 80 and 443?
    Thanks,
    Paul

  • Running MII on a Wintel virtual environment + hybrid architecture questions

    Hi, I have two MII Technical Architecture questions (MII 12.0.4).
    Question 1: Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
    Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris. Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants which require more horsepower. While we have a preference for Solaris UNIX-based technologies in our main data center, where our central MII instance will run, in our plants the preference seems to be for Wintel technologies. Does anybody know of any caveats, watch-outs, or anything else around running MII in a hybrid architecture, with a Solaris UNIX-based head and the legs running on Wintel?
    Thanks for your help
    Michel

    This is a great source for the ins/outs of SAP Virtualization:  https://www.sdn.sap.com/irj/sdn/virtualization

  • Architectural question

    A little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...

    A little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    There actually is such a method: if you look at the FacesBean base class, there is a beforeRenderResponse() method that is called before the corresponding page is actually rendered.
    >
    The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...
    This is definitely a valid concern. In Creator releases prior to Update 6 of the Reef release, however, there were use cases where the beforeRenderResponse method would not actually get called (the most important one being when you navigated to a new page, which is a VERY common use case :-).
    If you are using Update 6 or later, as a side effect of other bug fixes that were included, the beforeRenderResponse method is reliably called every time, so you can put your pre-rendering logic in this method instead of in the constructor. However, there is still a wrinkle to be aware of -- if you navigate from one page to another, the beforeRenderResponse of both the "from" and "to" pages will be executed. You will need to add some conditional logic to ensure that you only perform your setup work if this is the page that is actually going to be rendered (hint: call FacesContext.getCurrentInstance().getViewRoot().getViewId() to get the context relative path to the page that will actually be displayed).
    One might argue, of course, that this is the sort of detail that an application should not need to worry about, and one would be absolutely correct. This usability issue will be dealt with in an upcoming Creator release.
    Craig McClanahan

  • BPEL/ESB - Architecture question

    Folks,
    I would like to ask a simple architecture question;
    We have to invoke partner web services, which are rpc/encoded, from SOA Suite 10.1.3.3. Here the role of the SOA Suite is simply to facilitate communication between an internal application and the partner services. As a result the SOA Suite doesn't have any processing logic. The flow is simply:
    1) Internal application invokes SOA suite service (wrapper around partner service) and result is processed.
    2) SOA suite translates the incoming message and communicates with partner service and returns response to internal application.
    Please note that at this point there is no plan to move all processing logic from the internal application to the SOA Suite. Based on the above details, I would like to get a recommendation on which technology/solution from the SOA Suite is most efficient to facilitate this communication.
    Thanks in advance,
    Ranjith

    You can go through the design pattern called Channel Adapter.
    Here is how you should design it: the processing logic remains in the application; however, you have to design and build a channel adapter as a BPEL process. The channel adapter does the transformation of your input into the web-service-specific format and invokes the endpoint. You need this channel adapter if your internal application doesn't have the capability to make web service calls.
    Hope this helps.
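    The Channel Adapter idea can be sketched in plain Java as follows. The interfaces and names are illustrative only; in the scenario above the adapter would be realized as a BPEL process rather than hand-written code, but the responsibilities are the same: transform outbound, invoke the partner endpoint, transform the reply back.

```java
import java.util.function.Function;

// Illustrative Channel Adapter: owns the message transformation and the
// endpoint invocation, so the internal application never sees the partner's
// wire format.
public class ChannelAdapter {
    public interface PartnerEndpoint {
        String invoke(String partnerFormatMessage);
    }

    private final Function<String, String> toPartnerFormat;
    private final Function<String, String> toInternalFormat;
    private final PartnerEndpoint endpoint;

    public ChannelAdapter(Function<String, String> toPartnerFormat,
                          Function<String, String> toInternalFormat,
                          PartnerEndpoint endpoint) {
        this.toPartnerFormat = toPartnerFormat;
        this.toInternalFormat = toInternalFormat;
        this.endpoint = endpoint;
    }

    // Transform outbound, call the partner, transform the reply back.
    public String send(String internalMessage) {
        String reply = endpoint.invoke(toPartnerFormat.apply(internalMessage));
        return toInternalFormat.apply(reply);
    }
}
```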

  • General dreamwevaer question

    Hello Dreamweavers,
    I'm a newbie going to start using Dreamweaver from next week onward, so I'd like to ask the following:
    Should I design the website in Photoshop and then import it into Dreamweaver in order to code it?
    Is Dreamweaver flexible from a design point of view, or is it mostly about taking finished designs such as headers, footers, and Flash banners, and then building up the site from them?
    Thank you.

    Hello Nancy,
    Seems like an informative website; I can see it is easy to understand the basics.
    On 23 March 2012, 4:29 a.m., Nancy O. <[email protected]> wrote:
    Re: general Dreamweaver question, created by Nancy O. in Dreamweaver - View the full discussion: http://forums.adobe.com/message/4283588#4283588
  • Architecture question... brain teasing!

    Hi,
    I have an architecture question about Grid Control. So far Oracle Support hasn't been able to figure it out.
    I have two management servers, M1 and M2,
    two VIPs (virtual IPs), V1 and V2,
    and two agents, A1 and A2.
    The scenario:
    M1 ----> M2
     |        |
     V1       V2
     |        |
     A1       A2
    The repository at M1 is configured as primary and ships archive logs to M2. On failover, I have it set up to make M2 the primary repository, and all works well!
    Under normal conditions, A1 talks to M1 through V1 and A2 talks to M2 through V2. No problem so far!
    If M1 dies, V1 forwards A1 to M2; or
    if M2 dies, V2 forwards A2 to M1.
    How would this work?
    I think (haven't tried it yet) that I could configure the OMSes with the same username and registration passwords, copy all the wallets from M1 to M2
    and A1 to A2, and just change V1 to V2. Would this work?
    Please advise!

    An SLB is not an option for us here!
    Can we just repoint A1 to M2 using a DNS CNAME change?

  • Inheritance architecture question

    Hello,
    I have an architecture question.
    We have different types of users in our system: normal users, company "users", and some others.
    In theory they all extend the normal user, but I've read a lot about performance issues with join-based inheritance mapping.
    How would you suggest designing this?
    Expected are around 15k normal users, a few hundred company users, and a few hundred of each other user type.
    Inheritance mapping? Which type?
    No inheritance, appending all attributes to one class (and leaving those not used by the user type null)?
    Other ways?
    thanks
    Dirk

    Sorry dude, but there is only one way you are going to answer your question: research it. And that means try it out. Create a simple prototype setup where you have your inheritance structure, generate 15k of user data in it, then see what the performance is like with some simple test cases. Your prototype could be promoted to be the basis of the end product if the results are satisfying. If you know what you are doing this should only be a couple of hours of work, and it is very much worth your time because it potentially saves you many refactoring hours later on.
    You may also want to experiment with different persistence providers (Hibernate, TopLink, EclipseLink, etc.); each has its own way of implementing the same spec, and it may well be that one is more optimal than another for your specific problem domain.
    Remember: you are looking for a solution where the performance is acceptable - don't waste your time trying to find the solution that has the BEST performance.
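    In the spirit of the answer above, one cheap way to prototype the two layouts before committing to a JPA strategy is to build both table shapes directly and compare the queries. Below is a hedged sketch using Python's stdlib `sqlite3`; all table and column names are made up, and the row counts are scaled down (raise them to ~15k for a realistic test).

    ```python
    # Hypothetical prototype of the two mapping strategies using sqlite3,
    # so you can load realistic data volumes and time the queries yourself.
    # Table and column names are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    # Joined-table inheritance: shared columns in "users",
    # subclass-specific columns joined in from "company_users".
    cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, type TEXT)")
    cur.execute("CREATE TABLE company_users "
                "(id INTEGER PRIMARY KEY REFERENCES users(id), company TEXT)")

    # Single-table inheritance: every attribute in one table, unused ones NULL.
    cur.execute("CREATE TABLE users_flat "
                "(id INTEGER PRIMARY KEY, name TEXT, type TEXT, company TEXT)")

    for i in range(1000):  # normal users (scale up to 15k for a real test)
        cur.execute("INSERT INTO users VALUES (?, ?, 'normal')", (i, f"user{i}"))
        cur.execute("INSERT INTO users_flat VALUES (?, ?, 'normal', NULL)",
                    (i, f"user{i}"))
    for i in range(1000, 1100):  # company users
        cur.execute("INSERT INTO users VALUES (?, ?, 'company')", (i, f"corp{i}"))
        cur.execute("INSERT INTO company_users VALUES (?, ?)", (i, f"Company {i}"))
        cur.execute("INSERT INTO users_flat VALUES (?, ?, 'company', ?)",
                    (i, f"corp{i}", f"Company {i}"))

    # Loading a company user needs a JOIN in the first layout...
    joined = cur.execute(
        "SELECT u.name, c.company FROM users u "
        "JOIN company_users c ON u.id = c.id"
    ).fetchall()
    # ...but only a plain scan with a type filter in the second.
    flat = cur.execute(
        "SELECT name, company FROM users_flat WHERE type = 'company'"
    ).fetchall()

    assert joined == flat  # both layouts return the same logical data
    print(len(joined))  # 100
    ```

    Wrapping the two queries in a timer at realistic volumes gives you the kind of evidence the answer recommends gathering before picking an inheritance strategy.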

  • Hand - Off - Auto / Remote - SCADA Architecture Question *All You Super Users Read This*

    I am new to the forum. I have searched the net for almost six months on this and haven't really found a good response.
    I have been around the block a few times with various control systems (Arduino, LabVIEW, Allen-Bradley, general SCADA) and am trying to figure out how to build a "proper" control system on the cheap. I used a lot of LabVIEW in school, so that is how I ended up here.
    What I want to do is have hand control at the machine (an Arduino control system, or something similar), be able to switch it to auto/remote (LabVIEW or some other language), and then have that talk to SCADA (ThingSpeak, SQL; I haven't figured this part out yet).
    My question is: how would you do it? I am a pretty good programmer and know just about every language, so nothing is off the table. Seriously, I would like to see some creative action.
    I don't think LIFA is the best fit, because I need LOCAL control of all the variables. I would like to have PID loops etc. running on the Arduino. I need to be controlling serially attached machines; I have some RS-485/232 going to VFDs and a dSPIN as well.
    However, when I want to control it remotely, I would like to be able to pass setpoints in auto mode or, even better, override auto and drill down into each attached component to control it by hand remotely.
    I know I could do this by building everything in VIs, then linking through LIFA and using the VI as my hand/auto switch. However, I do not trust the link between LIFA and the Arduino that much, and I do not know whether LIFA can handle the RS-485 and serial data to the dSPIN. I need to be on Ethernet/Wi-Fi, and if the controller goes down we may have a legitimate safety problem; hence why I want local control as my backup.
    Furthermore, how would you do the SCADA? I don't want to spend $4K on a full LabVIEW license; I would prefer just using the iPad app and some free online services. I will have a server running all the time as well.
    If LIFA can be used to do some of this, please let me know. Any links to past projects would be a huge help.
    My plan right now is:
    Machine data/control sent to the Arduino over RS-485, 4-20 mA, and SPI.
    Local control on the Arduino; a locally connected LCD displays key parameters.
    Send the main local variables (setpoint, run, stop, key data, etc.) over serial or hacked LIFA to LabVIEW over Ethernet or Wi-Fi.
    LabVIEW acts as the main supervisory "go-between" and provides all the main in-house HMI support.
    Real-time processing happens in LabVIEW and is displayed on LabVIEW VIs; real-time data is also sent out remotely over a service for worldwide access.
    LabVIEW also sends all real-time data to a historical service of some kind, either web based or locally hosted.
    Access historical data remotely by some means (API/web service).
    Remote worldwide access to LabVIEW, and thus the machine, through the LabVIEW iPad app or a VPN into the house.
    Worldwide historical access through an API or web page.
    That is where I am at in my thinking. Please blow it up and make it better.
    I appreciate y'all's help. I posted this in the Arduino section and they told me to post here.
    ~Colin
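    The hand/auto/remote arbitration described above can be sketched as a small state machine. This is a hypothetical Python sketch only: every name is invented, and in the real system the HAND logic would run locally on the Arduino so control survives a network loss, with the supervisor merely pushing setpoints and overrides.

    ```python
    # Hypothetical sketch of hand/auto/remote mode arbitration.
    # Everything is simulated in one process; in practice the HAND
    # path lives on the local controller (Arduino) as the safety fallback.
    from enum import Enum


    class Mode(Enum):
        HAND = "hand"      # local operator control at the machine
        AUTO = "auto"      # follow remote setpoints from the supervisor
        REMOTE = "remote"  # remote operator overrides auto per component


    class Controller:
        def __init__(self):
            self.mode = Mode.HAND
            self.local_setpoint = 0.0
            self.remote_setpoint = None   # None models a lost link
            self.remote_override = None   # remote "drill-down" value

        def active_setpoint(self):
            # HAND always wins; a dead link also degrades to local control.
            if self.mode is Mode.HAND or self.remote_setpoint is None:
                return self.local_setpoint
            if self.mode is Mode.REMOTE and self.remote_override is not None:
                return self.remote_override
            return self.remote_setpoint


    ctl = Controller()
    ctl.local_setpoint = 50.0
    assert ctl.active_setpoint() == 50.0   # hand mode: local value

    ctl.mode = Mode.AUTO
    ctl.remote_setpoint = 72.5             # setpoint passed from supervisor
    assert ctl.active_setpoint() == 72.5

    ctl.mode = Mode.REMOTE
    ctl.remote_override = 10.0             # remote operator drills down
    assert ctl.active_setpoint() == 10.0

    ctl.remote_setpoint = None             # simulate link loss
    assert ctl.active_setpoint() == 50.0   # safely back to local control
    ```

    The key design choice sketched here is that the arbitration defaults to local control on any ambiguity, which matches the safety requirement of keeping the machine controllable when the network goes down.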

    Hello Colin,
    Unfortunately I am not familiar with the overall architecture, but if you need to build a SCADA-type system, you can use the LabVIEW Datalogging and Supervisory Control (DSC) Module. It's an add-on to LabVIEW and a lower-cost alternative to other SCADA programs out there.
    The LabVIEW software is not a free tool, but it will save you time developing your project. Also, if you use one provider, your support requests will be handled from one source, making your development process more efficient.
    Please refer to the following links in case you are interested in evaluating the LabVIEW Software and LabVIEW Datalogging and Supervisory Control (DSC) Module.
    http://www.ni.com/trylabview/
    http://sine.ni.com/nips/cds/view/p/lang/en/nid/210561
    Regards
    Luis S
    Application Engineer
    National Instruments
