MultiProcessor Clustering Architecture Question
Comments inline.
Cheers - Wei
Jim Zhou <[email protected]> wrote in message
news:[email protected]...
> Wei,
> When a servlet calls an EJB, if that EJB is deployed on four WLS servers,
> then because of round-robin there is only a 25% chance that the servlet
> and EJB are in the same JVM. Am I right on this?
If your servlet and ejb are deployed on different clusters, the answer is
yes.
If they are in the same cluster, WLAS will call the ejb on the same local
machine; only if that call fails does the load-balancing algorithm come
into the picture.
Hope it helps.
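To make the 25% figure concrete, here is a toy sketch (the class name and numbers are mine, not from the thread): with plain round-robin over four replicas, one call in four lands back on the caller's own server.

```java
// Toy model of round-robin collocation: with N replicas, a servlet on a
// given server is paired with its local EJB replica 1/N of the time.
public class RoundRobinMath {
    // Fraction of `calls` requests that round-robin over `n` replicas
    // sends to the caller's own server, `self`.
    static double localFraction(int self, int n, int calls) {
        int local = 0;
        for (int i = 0; i < calls; i++) {
            if (i % n == self) local++; // round-robin picks server i % n
        }
        return (double) local / calls;
    }

    public static void main(String[] args) {
        System.out.println(localFraction(0, 4, 100)); // prints 0.25
    }
}
```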
> What do you mean by "WebLogic rjvm will try to call local remote
> objects as much as possible"? How do you achieve load balancing if the
> servlet always gets the local ejb to serve its request?
> I have a client going to use 50 low-end Sun boxes as the JSP/Servlet
> cluster, and 4~6 high-end Sun boxes as the EJB cluster. Just curious
> how much performance penalty I will get. Thanks.
>
> Jim Zhou
> BEA Professional Service.
>
> Wei Guan wrote:
>
> > If I were you, I would create one cluster with four WebLogic
> > servers. Every server will serve both servlets and ejbs. The benefits:
> > 1) High availability and utilization.
> > 2) RMI calls between servlet and ejb will be optimized. WebLogic rjvm
> > will try to call local remote objects as much as possible, saving
> > serialization/deserialization overhead and communication cost.
> > 3) Better load balancing.
> >
> > My 2 cents
> > --
> > Cheers - Wei
> > bahar <[email protected]> wrote in message news:[email protected]...
> > > Hello,
> > >
> > > I have a client who needs to cluster two Sun E420R (4 processor)
> > > machines.
> > >
> > > I have presented the following architecture for a cluster of two
> > > machines running 5.1.0 and JDK 1.2.2:
> > >
> > > Three Apache WebServers are load balanced via a director. On the
> > > Application Servers, I have created two clusters (on each machine)
> > > for the servlets and EJBs. So, basically, each machine has two
> > > clusters (web and ejb) with two JVMs. So, there are two members of
> > > the web cluster and two members of the EJB cluster. The apache
> > > plugin contains a comma-delimited list of the IP addresses of the
> > > machines in the web cluster.
> > >
> > > I have added a Multi-IP URL in the JNDI ProviderURL for the calls
> > > in the Web Cluster to point to the list of servers in the EJB
> > > cluster (comma-delimited list).
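A Multi-IP ProviderURL like the one described above might be assembled as follows (a sketch: the hostnames are made up, and the actual InitialContext creation is omitted because it requires the WebLogic client classes at runtime).

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch of a JNDI environment whose provider URL lists every member of
// the EJB cluster, comma-delimited, for failover.
public class MultiIpJndiEnv {
    static Hashtable<String, String> ejbClusterEnv() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // Hypothetical EJB-cluster members; replace with the real list.
        env.put(Context.PROVIDER_URL, "t3://ejb1:7001,ejb2:7001");
        return env;
    }

    public static void main(String[] args) {
        // new InitialContext(env) would go here in a real client.
        System.out.println(ejbClusterEnv().get(Context.PROVIDER_URL));
    }
}
```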
> > >
> > > The Http cluster uses In-Memory replication for HttpSessions.
> > >
> > > Is this a valid architecture that will support failover and
> > > clusterable service load balancing? If not, what suggestions
> > > should I make?
> > >
> > > Thanks in advance.
> > >
>
Similar Messages
-
Oracle VM Server for SPARC - network multipathing architecture question
This is a general architecture question about how best to set up network multipathing.
I am reading the "Oracle VM Server for SPARC 2.2 Administration Guide" but I can't find what I am looking for.
From reading the document it appears it is possible to:
(a) Configure IPMP in the Service Domain (pg. 155)
- This protects against link-level failure but won't protect against the failure of an entire Service LDOM?
(b) Configure IPMP in the Guest Domain (pg. 154)
- This will protect against Service LDOM failure but moves the complexity to the Guest Domain
- This means there are two (2) VNICs in the guest though?
In AIX, "Shared Ethernet Adapter (SEA) Failover" presents a single NIC to the guest but can tolerate failure of a single VIOS (~Service LDOM) as well as link-level failure in each VIO Server.
https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/shared_ethernet_adapter_sea_failover_with_load_balancing198?lang=en
Is there not a way to do something similar in Oracle VM Server for SPARC that provides the following:
(1) Two (2) Service Domains
(2) Network Redundancy within the Service Domain
(3) Service Domain Redundancy
(4) Simplify the Guest Domain (ie single virtual NIC) with no IPMP in the Guest
Virtual Disk Multipathing appears to work as one would expect (at least according to the documentation, pg. 120). I don't need to set up mpxio in the guest, so I'm not sure why I would need to set up IPMP in the guest.
Edited by: 905243 on Aug 23, 2012 1:27 PM
Hi,
there's link-based and probe-based IPMP. We use link-based IPMP (in the primary domain and in the guest LDOMs).
For the guest LDOMs you have to set the phys-state linkprop on the vnets if you want to use link-based IPMP:
ldm set-vnet linkprop=phys-state vnetX ldom-name
If you want to use IPMP with vsw interfaces in the primary domain, you have to set the phys-state linkprop in the vswitch:
ldm set-vswitch linkprop=phys-state net-dev=<phys_iface_e.g._igb0> <vswitch-name>
Bye,
Alexander. -
Architecture question, global VDI deployment
I have an architecture question regarding the use of VDI in a global organization.
We have a pilot VDI Core with a remote MySQL setup and 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would connect to desktops hosted at their physical location. What we don't want is to have to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from one pane of glass, with multiple Desktop Provider groups representing the geographical locations.
Is it possible to just setup VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
Thanks
Yes, simply bind individual interfaces for each domain on your web server,
one for each.
Ensure the appropriate web servers are listening on the appropriate
interfaces and it will work fine.
"Paul S." <[email protected]> wrote in message
news:407c68a1$[email protected]..
>
Hi,
We want to host several applications which will be accessed as:
www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
Is it possible to have a separate WebLogic domain for each application, all listening
to ports 80 and 443?
Thanks,
Paul -
Running MII on a Wintel virtual environment + hybrid architecture questions
Hi, I have two MII Technical Architecture questions (MII 12.0.4).
Question1: Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris. Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants that require more horsepower. While we have a preference for Solaris UNIX-based technologies in our main data center, where our central MII instance will run, in our plants the preference seems to be for Wintel technologies. Does anybody know of any caveats or watch-outs around running MII in a hybrid architecture, with a Solaris UNIX-based head and the legs running on Wintel?
Thanks for your help
Michel
This is a great source for the ins/outs of SAP virtualization: https://www.sdn.sap.com/irj/sdn/virtualization
-
Little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
Reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...
Little architectural question: why is all the stuff
that is needed to render a page put into the
constructor of a backing bean? Why is there no
beforeRender method, analogous to the
afterRenderResponse method? That method
can then be called if and only if a page has to be
rendered. It seems to me that an awful lot of
resources are wasted this way.
There actually is such a method ... if you look at the FacesBean base class, there is a beforeRenderResponse() method that is called before the corresponding page is actually rendered.
>
Reason I bring up this question is that I have to do
a query in the constructor in a page backing bean.
Every time the backing bean is created the query is
executed, including when the page will not be
rendered in the browser...
This is definitely a valid concern. In Creator releases prior to Update 6 of the Reef release, however, there were use cases where the beforeRenderResponse method would not actually get called (the most important one being when you navigated to a new page, which is a VERY common use case :-).
If you are using Update 6 or later, as a side effect of other bug fixes that were included, the beforeRenderResponse method is reliably called every time, so you can put your pre-rendering logic in this method instead of in the constructor. However, there is still a wrinkle to be aware of -- if you navigate from one page to another, the beforeRenderResponse of both the "from" and "to" pages will be executed. You will need to add some conditional logic to ensure that you only perform your setup work if this is the page that is actually going to be rendered (hint: call FacesContext.getCurrentInstance().getViewRoot().getViewId() to get the context relative path to the page that will actually be displayed).
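The conditional logic Craig suggests might be sketched in plain Java like this (the names are mine; in a real backing bean the current view id would come from FacesContext.getCurrentInstance().getViewRoot().getViewId()):

```java
// Sketch of the pre-render guard: beforeRenderResponse fires on both the
// "from" and "to" pages during navigation, so only run setup work when
// this page is the one actually being rendered.
public class PreRenderGuard {
    static boolean shouldRunSetup(String currentViewId, String myViewId) {
        return myViewId.equals(currentViewId);
    }

    public static void main(String[] args) {
        // Navigating from /from.jsp to /to.jsp: both beans' guards run,
        // but only the "to" page should execute its query.
        System.out.println(shouldRunSetup("/to.jsp", "/from.jsp")); // false
        System.out.println(shouldRunSetup("/to.jsp", "/to.jsp"));   // true
    }
}
```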
One might argue, of course, that this is the sort of detail that an application should not need to worry about, and one would be absolutely correct. This usability issue will be dealt with in an upcoming Creator release.
Craig McClanahan -
BPEL/ESB - Architecture question
Folks,
I would like to ask a simple architecture question;
We have to invoke a partner web services which are rpc/encoded from SOA suite 10.1.3.3. Here the role of SOA suite is simply to facilitate communication between an internal application and partner services. As a result SOA suite doesn't have any processing logic. The flow is simply:
1) The internal application invokes the SOA suite service (a wrapper around the partner service) and the result is processed.
2) The SOA suite translates the incoming message, communicates with the partner service, and returns the response to the internal application.
Please note that at this point there is no plan to move all processing logic from the internal application to the SOA suite. Based on the above details, I would like to get some recommendations on which technology/solution from the SOA suite is most efficient to facilitate this communication.
Thanks in advance,
Ranjith
You can go through the design pattern called Channel Adapter.
Here is how you should design it: the processing logic remains in the application; however, you design and build a channel adapter as a BPEL process. The channel adapter does the transformation of your input into the web-service-specific format and invokes the endpoint. You need this channel adapter if your internal application doesn't have the capability to make web service calls.
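In code, the shape of such an adapter might look like this (a sketch with stand-in methods; the real adapter would be a BPEL process doing an XSLT transformation and a partner-link invoke):

```java
// Hypothetical Channel Adapter: it owns the translation into the
// partner's wire format and the endpoint call, so the internal
// application never speaks the web-service protocol itself.
public class ChannelAdapter {
    // Translate the internal message into the partner-specific format.
    static String toPartnerFormat(String internalMsg) {
        return "<request>" + internalMsg + "</request>"; // stand-in mapping
    }

    // Invoke the partner endpoint; a stand-in that echoes the payload.
    static String invokePartner(String partnerMsg) {
        return "<response>" + partnerMsg + "</response>";
    }

    // The only entry point the internal application sees.
    static String handle(String internalMsg) {
        return invokePartner(toPartnerFormat(internalMsg));
    }

    public static void main(String[] args) {
        System.out.println(handle("balanceQuery"));
    }
}
```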
Hope this helps. -
Hi,
I am working for a bank and we are going to implement infrastructure for SIT, UAT, DR and PROD environments; we are still in the designing phase.
We have provisioned two physical boxes. Please suggest the most suitable clustering architecture for our banking app, with which we can achieve HA as well as the load-balancing features of a WLS cluster.
Regards,
Jaya Rathod.
You can go for a horizontal clustering environment: create a cluster whose members reside on different machines, so in case of failure of either machine the request can be directed to another server running on the other machine.
Please respond if this looks good to you.
Architecture question... brain teaser!
Hi,
I have an architecture question in Grid Control. So far Oracle Support hasn't been able to figure it out.
I have two management servers, M1 and M2,
two VIPs (virtual IPs), V1 and V2,
and two agents, A1 and A2.
The scenario:
M1 ----> M2
 |        |
V1       V2
 |        |
A1       A2
The repository at M1 is configured as primary and sends archive logs to M2. On failover, I have it set up to make M2 the primary repository, and all works well!
Under normal conditions, A1 talks to M1 through V1 and A2 talks to M2 through V2. No problem so far!
If M1 dies, V1 forwards A1 to M2; or if M2 dies, V2 forwards A2 to M1.
How would this work?
I think (haven't tried it yet): what if I configure the OMSes with the same username and registration passwords, copy all the wallets from M1 to M2 and from A1 to A2, and just change V1 to V2? Would this work????
Please advise!!
An SLB is not an option for us here!
Can we just repoint A1 to M2 using a DNS CNAME change??
Inheritance architecture question
Hello,
I have an architecture question.
We have different types of users in our system: normal users, company "users", and some others.
In theory they all extend the normal user. But I've read a lot about performance issues using join-based inheritance mapping.
How would you suggest to design this?
Expected are around 15k normal users, a few hundred company users, and even a few hundred of each other user type.
Inheritance mapping? Which type?
No inheritance, appending all attributes to one class (and leaving those not used by the user type null)?
Other ways?
thanks
Dirk
Sorry dude, but there is only one way you are going to answer your question: research it. And that means try it out. Create a simple prototype setup where you have your inheritance structure and generate 15k of user data in it - then see what the performance is like with some simple test cases. Your prototype could be promoted to be the basis of the end product if the results are satisfying. If you know what you are doing this should only be a couple of hours of work - very much worth your time, because it is potentially going to save you many refactoring hours later on.
You may also want to experiment with different persistence providers, by the way (Hibernate, TopLink, EclipseLink, etc.) - each has its own way to implement the same spec, and it may well be that one is more optimal than another for your specific problem domain.
Remember: you are looking for a solution where the performance is acceptable - don't waste your time trying to find the solution that has the BEST performance. -
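The prototype-and-measure advice above can be reduced to a small harness (a sketch; the Runnable stands in for running your real query against the generated 15k users under each mapping strategy):

```java
// Minimal timing harness: run a workload once per mapping strategy and
// compare the elapsed milliseconds.
public class MappingBenchmark {
    static long elapsedMillis(Runnable workload, int repetitions) {
        long start = System.nanoTime();
        for (int i = 0; i < repetitions; i++) {
            workload.run();
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        // Stand-in workloads; swap in the real query per strategy.
        long joined = elapsedMillis(() -> { }, 1_000);
        long singleTable = elapsedMillis(() -> { }, 1_000);
        System.out.println("joined=" + joined + "ms single-table=" + singleTable + "ms");
    }
}
```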
Scalability and Architecture Question
I am currently working on an app that will generate a resume
from a set of user defined input into several different formats
from an XML file (MS Word, PDF, TXT, HR-XML, and HTML). We are
thinking that we will write all the files once at publish time and
then store them (not sure where yet). We are doing this because we
will be hosting the online version of the resume as a CFM file with
access to all the other formats of the resume from their online
resume. We are assuming that there will be many more reads than
there will be writes over the life of the resume. So we don't want
to compile these each time a user requests one (that is a Word,
PDF, HTML, or HR-XML version).
The question I have now is should we store the files in the
database or the webserver.
I would think that it makes sense to store them on the
webserver. But as this will need to be in a clustered environment
then I will need to replicate these across the farm as each new
user creates a resume. So does anyone know if the penalty for
replicating these across the farm is higher than calling them from the
database, assuming that the average file size is 50K and on average
50 files will be called over the life of the resume. Thoughts?
Originally posted by: fappel.innoopract.com
Hi,
RAP doesn't support session switch over at the moment, that's true. But
it supports load-balancing by using multiple workers. But once a session
is opened at one worker all requests of that session are dispatched to
this worker.
Ciao
Frank
-----Original Message-----
From: Mike Wrighton [mailto:[email protected]]
Posted: Friday, 22 August 2008 11:35
Posted in: eclipse.technology.rap
Conversation: Will RAP work in a load-balanced system?
Subject: Will RAP work in a load-balanced system?
Hi,
Some of my colleagues were reviewing scalability in our web architecture
and the question was raised about RAP scalability, in particular the
issue that since session data is stored in memory and not in a central
database, RAP sessions would not survive a server switch-over by a load
balancer. Hope that makes sense?
I was just wondering if anyone had come across this issue before and
found a decent solution? It may just be a case of configuring the load
balancer properly.
Thanks,
Mike -
Enterprise Manager 11g Sybase Plugin architecture question
Hi,
I have successfully installed and configured Grid 11g on Red Hat Enterprise 5.5, and deployed and configured agents to Solaris and Linux environments... so far so good.
However, we're going to test the Sybase ASE plugin to monitor ASE with EM. My question is a simple one, and I think I know the answer, but I'd like to see what you guys think of this.
We'd like to go with a single centralised agent rather than one agent/plugin per Sybase machine, at least for the tests. No doubt there are cons to this approach (the first clearly being a single point of failure - well, we can live with that for now). My instinct is to install the Oracle agent/plugin on a machine other than the grid machines themselves, but the question arose - why not install the ASE plugin on the grid infrastructure machines' agents themselves? Pros and cons?
The architecture we currently have: a repository database configured to fail over between 2 Red Hat boxes; 2 OMSes, one running on each of these boxes, configured behind an SLB using an NFS-based shared upload directory; and one 'physical agent' running on each box. Simple for now. But I have the feeling that, since the Sybase servers will communicate with (or be interrogated via) the Sybase plugin directly on the grid infrastructure machines, this places load on them and in case of problems might interfere with the healthy running of the grid. Or am I being over-cautious?
John
Edited by: user1746618 on 12-Jan-2011 09:01
Well, I have followed the common-sense approach and avoided the potential problem by installing on a remote server and configuring the plugin there.
Seems to be working fine and keeps the install base clean.. -
Windows Clustering Networks question...
Hi all;
This is my scenario:
I have installed Windows Server 2012 on two servers and then enabled the Windows Clustering feature. The shared storage is based on Fibre Channel technology. Each server has 4 NICs and I have split them as follows:
One NIC for remote management of the servers, in the range 172.16.105.0/24.
One NIC dedicated to heartbeat communication.
Two NICs bundled together with the NIC Teaming feature of the operating system.
But as you see in the following figure there are 4 cluster network links:
Is it normal?
Thanks
Hi,
Just want to confirm the current situation.
Please feel free to let us know if you need further assistance.
Regards.
-
Three tier architecture questions
Hello,
My question is in regard to using TopLink in a three-tier architecture. If I wish to send an object A which has a collection of Bs, and B has a collection of Cs (a nested object structure with two or more levels of indirection), is the best solution to have the named query be part of a unit of work, so that even if somebody on the client side unknowingly modifies one of the entity objects (a POJO), the shared session cache would not be affected?
This is assuming the client side HTTP layer and the RMI/EJB layer are on different JVMs.
Some of the other suggestions I have heard are to retrieve it from the shared session cache directly, and if I need to modify one or more of the objects, do a named-query lookup on that object alone, then register that object in a unit of work and commit the changes.
Also, the indirection would have to be triggered before the data objects are sent to the servlet layer, I presume? (That is, if I do a.getAllOfBObjects() on the servlet side I would get a NullPointerException unless all of the Bs were already instantiated on the server side.) Also, when the objects are sent back to the server, do I do a registerObject on all the ones that have changed and then a deepMergeClone() before the uow.commit()?
Thanks,
Aswin.
Aswin,
If your client is remote to the EJB tier then all persistent entities are detached through serialization. In this architecture you do not need to worry about reading and modifying the shared instance, as it is never the one being changed on the client (due to serialization).
Yes, you do need to ensure that all required indirect relationships are instantiated on the server prior to returning them from the EJB call.
Yes, you do need to merge the changes of the detached instance when returned to the server. I would also recommend first doing a read for the entity being merged (by primary key) on the new UnitOfWork prior to the merge. This will handle the case where you are merging into a different node of the cluster than where you read, as well as allowing you to check for the case where the entity no longer exists in the database (if the read returns null then the merge will result in an INSERT, which may not be desired).
Here is an example test case that does this:
public void test() throws Exception {
    Employee detachedEmp = getDetachedEmployee("Jill", "May");
    assertNotNull(detachedEmp);
    // Remove the first phone number
    PhoneNumber phone = detachedEmp.getPhoneNumber("Work");
    assertNotNull("Employee does not have a Work Phone Number", phone);
    detachedEmp.removePhoneNumber(phone);
    UnitOfWork uow = session.acquireUnitOfWork();
    Employee empWC = (Employee) uow.readObject(detachedEmp);
    if (empWC == null) { // Deleted
        throw new RuntimeException("Could not update deleted employee: " + detachedEmp);
    }
    uow.deepMergeClone(detachedEmp);
    uow.commit();
}

/**
 * Return a detached Employee found by the provided first name and last name.
 * Its phone number relationship is instantiated.
 */
public Employee getDetachedEmployee(String firstName, String lastName) {
    ReadObjectQuery roq = new ReadObjectQuery(Employee.class);
    ExpressionBuilder builder = roq.getExpressionBuilder();
    roq.setSelectionCriteria((builder.get("firstName").equal(firstName)).and(builder.get("lastName").equal(lastName)));
    Employee employee = (Employee) session.executeQuery(roq);
    employee.getPhoneNumbers().size(); // trigger indirection before serializing
    return (Employee) SerializationHelper.serialize(employee);
}
One other note: in these types of applications optimistic locking is very important. You should also make sure that the locking field(s) are mapped into the object and not stored only in the TopLink cache. This will ensure the locking semantics are maintained across the detachment to the client and the merge back.
Doug -
Architecture question...where to put the code
Newbie here, so please be gentle and explicit (no detail is
too much to give or insulting to me).
I'm hoping one of you architecture/design gurus can help me
with this. I am trying to use good principles of design and not
have code scattered all over the place and also use OO as much as
possible. Therefore I would appreciate very much some advice on
best practices/good design for the following situation.
On my main timeline I have a frame where I instantiate all my
objects. These objects refer to movieClips and textFields etc. that
are on a content frame on that timeline. I have all the
instantiation code in a function called initialize() which I call
from the content frame. All this works just fine. One of the
objects on the content frame is a movieClip which I allow the user
to go forward and backward in using some navigation controls.
Again, the object that manages all that is instantiated on the main
timeline in the initialize() function and works fine too. So here's
my question. I would like to add some interactive objects on some
of the frames of the movieClip I allow the user to navigate forward
and backward in (lets call it NavClip) . For example on frame 1 I
might have a button, on frame 2 and 3 nothing, on frame 4 maybe a
clip I allow the user to drag around etc. So I thought I would add
a layer to NavClip where I will have key frames and put the various
interactive assets on the appropriate key frames. So now I don't
know where to put the code that instantiates these objects (i.e.
the objects that know how to deal with the events and such for each
of these interactive assets). I tried putting the code on my main
timeline, but realized that I can't address the interactive assets
until the NavClip is on the frame that holds the particular asset.
I'm trying not to sprinkle code all over the place, so what do I
do? I thought I might be able to address the assets by just
providing a name for the asset and not a reference to the asset
itself, and then address the asset that way (i.e.
NavClip["interactive_mc"] instead of NavClip.interactive_mc), but
then I thought that's not good since I think there is no type
checking when you use the NavClip["interactive_mc"] form.
I hope I'm not being too dim a bulb on this and have missed
something really obvious. Thanks in advance to anyone who can help
me use a best practice.
1. First of all, the code should be:
var myDraggable:Draggable=new Draggable(myClip_mc);
myDraggable.initDrag();
Where initDrag() is defined in the Draggable class. When you
start coding functions on the timeline... that's asking for
problems.
>>Do I wind up with another object each time this
function is called
Well, no, but. That would totally depend on the code in the
(Draggable) class. Let's say you would have a private static var
counter (private static, so a class property instead of an instance
property) and you would increment that counter using a
setInterval(). The second time you enter the frame and create a new
Draggable object... the counter starts at the last value of the
'old' object. So, you don't get another object with your function
literal but you still end up with a faulty program. And the same
goes for listener objects that are not removed, tweens that are
running and so on.
The destroy() method in a custom class (=object, I can't
stress that enough...) needs to do the cleanup, removing anything
you don't need anymore.
2. if myDraggable != undefined
You shouldn't be using that, period. If you don't need the
asset anymore, delete it using the destroy() method. Again, if you
want to make sure only one instance of a custom object is alive,
use the Singleton design pattern. To elaborate on inheritance:
define the Draggable class (class Draggable extends MovieClip) and
connect it to the myClip_mc using the linkage identifier in the
library). In the Draggable class you can define a function onUnload
(an event fired when myClip_mc is removed using
myClip_mc.removeMovieClip()...) and do the cleanup there.
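For comparison, here is the Singleton-plus-destroy() idea sketched in plain Java (the names are mine; in the ActionScript version the cleanup would hang off the onUnload event as described):

```java
// One live instance at a time, with an explicit destroy() for cleanup
// instead of scattering "!= undefined" checks around the timeline.
public class DraggableSingleton {
    private static DraggableSingleton instance;

    private DraggableSingleton() { }

    static DraggableSingleton getInstance() {
        if (instance == null) {
            instance = new DraggableSingleton();
        }
        return instance;
    }

    // Cleanup hook: release listeners, stop intervals, drop the instance.
    static void destroy() {
        instance = null;
    }

    public static void main(String[] args) {
        DraggableSingleton a = getInstance();
        DraggableSingleton b = getInstance();
        System.out.println(a == b); // true: only one instance alive
    }
}
```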
3. A destroy() method performs a cleanup of any assets we
don't need anymore to make sure we don't end up with all kinds of
stuff hanging around in the memory. When you extend the MovieClip
Class you can (additionally) use the onUnload event. And with the
code you posted, no it wouldn't delete the myClip_mc unless you
program it to do so. -
Architectural question for CCM failover WAN best practices
I have a client that has a large CCM cluster in Texas. Approx 2000 phones register there over the WAN from branch offices, HQ, etc. In Milwaukee, there is a call center going in for about 200 agents with 24/7 operations. We are looking at the architecture of this design and are wondering if it would be wise to set up another cluster in Milwaukee just for the call center, then use intercluster trunking between the two clusters.
Or, could we just place (2) subscribers at the Milwaukee location for DR between the two sites (Texas and Milwaukee)?
The WAN backbone is MPLS, so we could configure multiple T's back to the data center, etc. The problem we see is: if the Texas CCM cluster falls down... what happens then? Will the call center be able to function?
Any good advice; simple is better.
Thanks much!
What type of call center is it? IPCC Enterprise or Express?