Interesting WL clustering anomaly

I have noticed (on a previous project, and the one I am currently on) that
when we deploy without clustering, we don't have to deploy the stubs with
the clients, but when we implement clustering (and have more than one
server running) we have to send those out with the client.

Is there a way around this, or is it just something we have to do?

Our configuration:
wl 403
jdk 1.1.8
1 sun 450 (2 proc) 2 GB RAM
1 sun ?? (1 proc) 512 MB RAM

Chris
          

We do not have to deliver the stub files with the client until we deploy the
server clustered. Is this intentional, or just something that happens with
clustering?

Prasad Peddada wrote:
> Not quite sure what you are trying to do.
>
> chris humphrey wrote:
>
> > I have noticed (on a previous project, and the one I am currently on) that
> > when we deploy without clustering, we don't have to deploy the stubs with
> > the clients, but when we implement clustering (and have more than one
> > server running) we have to send those out with the client.
> >
> > Is there a way around this, or is it just something we have to do?
> >
> > Our configuration:
> > wl 403
> > jdk 1.1.8
> > 1 sun 450 (2 proc) 2 GB RAM
> > 1 sun ?? (1 proc) 512 MB RAM
> >
> > Chris
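
For reference, this is the shape of client lookup involved. A minimal sketch, assuming the
classic WebLogic T3 client setup (the cluster URL and JNDI name below are placeholders, not
Chris's actual values): when the target is a cluster, the home returned here is a
replica-aware stub generated by the container, which is why the stub classes have to ship
with the client.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class StubLookupSketch {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            // WLInitialContextFactory is the standard WebLogic JNDI entry point;
            // the URL below is a placeholder for the cluster address list.
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://server1,server2:7001");

            Context ctx = new InitialContext(env);
            // Against a cluster this returns a replica-aware home stub, so the
            // generated stub classes must be on the client's classpath.
            Object home = ctx.lookup("device.StartHome");
            System.out.println("Home stub class: " + home.getClass().getName());
        }
    }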
          

Similar Messages

  • Load balancing not happening but failover is for read-only entity beans

    The following is our configuration:
    Two NT servers with WL5.1 SP9 hosting only EJBs (read-only entity beans).
    One client with WL5.1 SP9 running a servlet/Java application as the EJB client.
    I am trying to make a call like findByPrimaryKey on one of the entity beans.
    I can see that requests are always directed to only one of the servers.
    When I bring that server down, failover to the other server happens.
    Here are the settings I have in the ejb-jar.xml:
                        <entity>
                             <ejb-name>device.StartHome</ejb-name>
                             <home>com.wl.api.device.StartHome</home>
                             <remote>com.wl.api.device.StartRemote</remote>
                             <ejb-class>com.wl.server.device.StartImpl</ejb-class>
                             <persistence-type>Bean</persistence-type>
                             <prim-key-class>java.lang.Long</prim-key-class>
                             <reentrant>False</reentrant>
                             <resource-ref>
                                  <res-ref-name>jdbc/wlPool</res-ref-name>
                                  <res-type>javax.sql.DataSource</res-type>
                                  <res-auth>Container</res-auth>
                             </resource-ref>
                        </entity>
              Here are the settings I have in the weblogic-ejb-jar.xml.
              <weblogic-enterprise-bean>
                        <ejb-name>device.StartHome</ejb-name>
                        <caching-descriptor>
                             <max-beans-in-cache>50</max-beans-in-cache>
                             <cache-strategy>Read-Only</cache-strategy>
                             <read-timeout-seconds>900</read-timeout-seconds>
                        </caching-descriptor>
                        <reference-descriptor>
                             <resource-description>
                                  <res-ref-name>jdbc/wlPool</res-ref-name>
                                  <jndi-name>weblogic.jdbc.pool.wlPool</jndi-name>
                             </resource-description>
                        </reference-descriptor>
                        <enable-call-by-reference>False</enable-call-by-reference>
                        <jndi-name>device.StartHome</jndi-name>
                   </weblogic-enterprise-bean>
    Am I making any mistake in this?
    Anyone's help is appreciated.
    Thanks
    Suresh
              

    we are using 5.1

    "Gene Chuang" <[email protected]> wrote in message news:[email protected]...
    > Colocation optimization occurs if your client resides in the same container
    > (and also, for 6.0, in the same EAR) as your EJBs.
    >
    > Gene
    >
    > "Suresh" <[email protected]> wrote in message news:[email protected]...
    > > OK... the enable-call-by-reference property set to true was making the
    > > calls go to one server only. I am not sure why that is. I removed the
    > > property and it works.
    > > Also I have one question: in our production environment, when I cache the
    > > EJB home it is not doing the load balancing. Can anyone help me with that?
    > > Thanks
    > >
    > > Mike,
    > > From the sample program I sent, even calls from a single client get load
    > > balanced.
    > >
    > > Suresh
    > >
    > > "Gene Chuang" <[email protected]> wrote in message news:[email protected]...
    > > > In WL, load balancing will ONLY WORK if you reuse your EJBHome! Take your
    > > > StartEndPointHome lookup out of your for loop and see if this fixes your
    > > > problem.
    > > >
    > > > I've seen this discussion in ejb-interest, and some other vendor (Borland,
    > > > I believe it is) brings up an interesting point: clustering and load
    > > > balancing are not in the J2EE specs, hence the implementation is totally
    > > > up to the vendor. WebLogic load balances from the remote interfaces
    > > > (EJBObject, EJBHome, etc.), while Borland load balances from the JNDI
    > > > Context lookup.
    > > >
    > > > Let me suggest a third implementation: load balance from BOTH Context
    > > > lookup as well as stub method invocation! Or create a smart replica-aware
    > > > list manager which persists on the client thread (ThreadLocal) and is
    > > > aware of lookup/invocation history. Hence if I do the following in a
    > > > client hitting a 3-node cluster, I'll still get perfect round-robining
    > > > regardless of what I do on the client side:
    > > >
    > > > InitialContext ctxt = new InitialContext();
    > > > EJBHome myHome = (EJBHome) ctxt.lookup(MY_BEAN);
    > > > myHome.findByPrimaryKey(pk); <== hits Node #1
    > > > myHome = (EJBHome) ctxt.lookup(MY_BEAN);
    > > > myHome.findByPrimaryKey(pk); <== hits Node #2
    > > > myHome.findByPrimaryKey(pk); <== hits Node #3
    > > > myHome = (EJBHome) ctxt.lookup(MY_BEAN);
    > > > myHome.findByPrimaryKey(pk); <== hits Node #1
    > > > ...
    > > >
    > > > Gene
    > > >
    > > > "Suresh" <[email protected]> wrote in message news:[email protected]...
    > > > > Mike,
    > > > >
    > > > > Do you have any reason for the total number of machines to be 10?
    > > > >
    > > > > I tried with 7 machines.
    > > > >
    > > > > Here is my sample client Java application, run individually on each of
    > > > > the seven machines:
    > > > >
    > > > > StartEndPointHome =
    > > > >     (StartEndPointHome) ctx.lookup("dev.StartEndPointHome");
    > > > > for (;;)
    > > > > {
    > > > >     // logMsg(" --in loop " + currentTime);
    > > > >     if (currentTime > nextRefereshTime)
    > > > >     {
    > > > >         logMsg("****- going to call");
    > > > >         currentTime = getSystemTime();
    > > > >         nextRefereshTime = currentTime + timeInterval;
    > > > >         StartEndPointHome =
    > > > >             (StartEndPointHome) ctx.lookup("dev.StartEndPointHome");
    > > > >         long rndno = (long) (Math.random() * 10) + range;
    > > > >         logMsg(" going to call remote stub " + rndno);
    > > > >         retVal = ((StartEndPointHome) getStartHome())
    > > > >             .findByNumber("pe" + rndno + "_mportal_dsk36.mportal.com");
    > > > >         logMsg("**++- called stub");
    > > > >     }
    > > > > }
    > > > >
    > > > > The range value is different for each of the machines in the cluster.
    > > > >
    > > > > If the first request starts at srv1, all requests keep hitting the same
    > > > > server. If the first request starts at srv2, all requests keep hitting
    > > > > the same server.
    > > > >
    > > > > I have the following url, user and pwd values for the context:
    > > > >
    > > > > public static String url="t3://10.11.12.14,10.11.12.117:8000";
    > > > > public static String user="guest";
    > > > > public static String password="guest";
    > > > >
    > > > > It would be great if you could help me.
    > > > >
    > > > > Thanks
    > > > > suresh
    > > > >
    > > > > "Mike Reiche" <[email protected]> wrote in message news:[email protected]...
    > > > > >
    > > > > > If you have only one client, don't be surprised if you only hit one
    > > > > > server. Try running ten different clients and see if they hit the
    > > > > > same server.
    > > > > >
    > > > > > Mike
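
    To make Gene's point above concrete, here is a minimal standalone sketch of a client that
    looks the home up once and reuses it. The StartEndPointHome interface and JNDI name are
    the ones from the thread and are assumed to be on the client's classpath; the URL and
    credentials are the values Suresh posted.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class ReuseHomeClient {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://10.11.12.14,10.11.12.117:8000");
            env.put(Context.SECURITY_PRINCIPAL, "guest");
            env.put(Context.SECURITY_CREDENTIALS, "guest");
            Context ctx = new InitialContext(env);

            // Look the home up ONCE, outside the loop, so the replica-aware
            // home stub can round-robin the calls across the cluster.
            // (With RMI-IIOP you would use PortableRemoteObject.narrow here.)
            StartEndPointHome home =
                (StartEndPointHome) ctx.lookup("dev.StartEndPointHome");

            for (int i = 0; i < 10; i++) {
                home.findByNumber("pe" + i + "_mportal_dsk36.mportal.com");
            }
        }
    }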
              

  • Welcome to the BEA WebLogic Server Version 6.0 Beta Program!

    Welcome to the BEA WebLogic Server Version 6.0 Beta Program!
    We are very excited about this beta program and appreciate your
    participation. In the past, our public betas have been very well received
    by our developer community. So, we have once again organized a public beta
    program to enable everyone to preview our latest release.
    We do ask that you follow a few guidelines:
    -- There will be no voice, e-mail, or fax support for this beta through the
    technical support organization. All questions, bug reports, or comments on
    the beta program should be directed to the WebLogic beta newsgroups at
    news://newsgroups.bea.com. These newsgroups are:
    weblogic.developer.interest.60beta.transaction
    weblogic.support.install.60beta
    weblogic.developer.interest.60beta.ejb
    weblogic.developer.interest.60beta.clustering
    weblogic.developer.interest.60beta.security
    weblogic.developer.interest.60beta.jdbc
    weblogic.developer.interest.60beta.jms
    weblogic.developer.interest.60beta.performance
    weblogic.developer.interest.60beta.misc
    weblogic.developer.interest.60beta.servlet
    weblogic.developer.interest.60beta.jsp
    weblogic.developer.interest.60beta.tools
    weblogic.developer.interest.60beta.rmi-iiop
    weblogic.developer.interest.60beta.management
    weblogic.developer.interest.60beta.management.console
    weblogic.developer.interest.60beta.management.general_and_jmx
    weblogic.developer.interest.60beta.internationalization
    weblogic.developer.interest.60beta.xml
    weblogic.developer.interest.60beta.jndi
    weblogic.developer.interest.60beta.documentation
    weblogic.developer.interest.60beta.javamail
    -- Please remember that this release is currently beta code. This means that
    it should not be put into production deployments until the final release
    occurs.
    -- It is very likely that this release will contain bugs and errors. This is
    the nature of beta code. Please be patient with us as we do our best to fix
    any problems that we find. We will do our absolute best to make sure that
    your issues are addressed as soon as possible.
    -- Please do not post any issues relevant to the beta on the standard
    newsgroups also available at news://newsgroups.bea.com.
    -- Please use the newsgroup for all communication and do not contact any BEA
    employees directly. They have been instructed to direct you to comment only
    via the newsgroup.
    -- Please review previous posts in the newsgroups before posting. If you
    locate a bug or need to ask a question, it is very likely that it will have
    been asked before.
    -- Please do not post on the newsgroup using hostile or profane language.
    Inappropriate posts will be removed and offenders will be blocked from the
    beta program.
    Thank you again for your support and participation. We very much appreciate
    all that you will be doing to make this release of the BEA WebLogic Server
    as great as possible.
    Michael Girdley
    BEA Systems Inc


  • WLS 5.1

    I'm setting up a PeopleSoft V8 Student Administration web server.
    The PeopleSoft admin wants me to set up two separate servers on one
    physical box.
    I set up the generic /weblogic/myserver fine. I now want to duplicate
    that and run another site. The only documentation I have is what is
    on http://localhost.
    I'm not interested in clustering these, but basically in making them
    virtual hosts.
    If any of you have done this before, or have any documentation that
    will get me through this process, I'd be extremely grateful.
    What I've done so far:
    Copied /weblogic/myserver to /weblogic/secondserver. I changed
    weblogic.properties in /secondserver to set listenAddr to the second IP
    for the box and also gave it the path to /weblogic/secondsite.
    We're running this on Win2K and I'm using the InstallAsService script.
    I cannot get the second service to start.
    Thanks!
    Billy

    Hi.
    Sounds like you have the right approach. Try starting both servers from the
    command line, look for error messages on stdout or in the log file for your
    secondserver, and post them here. Does secondserver start OK when myserver
    is not running? Once you get them running from a command prompt, it
    shouldn't be too difficult to get them running as a service.
    Michael
    "Billy" <[email protected]> wrote in message
    news:[email protected]..
    I'm setting up a PeopleSoft V8. Student Administration web server.
    The PeopleSoft Admin wants me to setup two seperate servers on one
    physical box.
    I setup the generic /weblogic/myserver fine. I now want to duplicate
    that and run another site. The only documentation I have is what is
    on http://locahost.
    I'm not interested in clustering these, but basically making them
    virtual hosts.
    If any of you have done this before, or have any documentation that
    will get me through this process, I'd be extremely grateful.
    What I've done so far:
    Copied /weblogic/myserver to /weblogic/secondserver. I changed the
    weblogic.properties in the /secondserver to listenAddr=second IP for
    box and also gave it path to the /weblogic/secondsite.
    We're running this on WIN2k and I'm using the InstallasService Script.
    I cannot get the second service to start.
    Thanks!
    Billy

  • Interesting performance anomaly during video encode

    I just noticed an interesting anomaly while my iDVD6 project was encoding video. I found that if I cover up the 'Creating Your DVD' progress bar and preview window with a Finder window, that video encoding seems to move a whole lot faster. I could tell by listening to the disk activity which seemed to be occurring about twice as fast. When I moved the Finder window away and exposed progress bar/preview window, then disk activity slowed down.
    I opened Activity Monitor and looked at CPU%, which showed about 70-80% with the progress bar/preview window exposed. After covering them up, the CPU% increased to 125-150%. So apparently exposing the progress bar/preview window slows things down quite a bit during video encode. I tried the same trick during audio encode, and it made no difference. CPU% was about 7-8% during audio encode.
    I was wondering if anyone in this discussion group noticed this performance anomaly previously.
    Paul

    Hi Paul
    Yes, I have noted some related phenomena during the rendering process in
    Final Cut Express, where just selecting the Finder seemed to speed things up.
    Second: please don't rely on the Activity Monitor, most of all when it says
    iDVD isn't responding. That's just rubbish. iDVD keeps on working - just wait and see.
    Yours Bengt W

  • Advice Requested - High Availability WITHOUT Failover Clustering

    We're creating an entirely new Hyper-V virtualized environment on Server 2012 R2.  My question is:  Can we accomplish high availability WITHOUT using failover clustering?
    So, I don't really have anything AGAINST failover clustering, and we will happily use it if it's the right solution for us, but to be honest, we really don't want ANYTHING to happen automatically when it comes to failover.  Here's what I mean:
    In this new environment, we have architected 2 identical, very capable Hyper-V physical hosts, each of which will run several VMs comprising the equivalent of a scaled-back version of our entire environment.  In other words, there is at least a domain
    controller, multiple web servers, and a (mirrored/HA/AlwaysOn) SQL Server 2012 VM running on each host, along with a few other miscellaneous one-off worker-bee VMs doing things like system monitoring.  The SQL Server VM on each host has about 75% of the
    physical memory resources dedicated to it (for performance reasons).  We need pretty much the full horsepower of both machines up and going at all times under normal conditions.
    So now, to high availability.  The standard approach is to use failover clustering, but I am concerned that if these hosts are clustered, we'll have the equivalent of just 50% hardware capacity going at all times, with full failover in place of course
    (we are using an iSCSI SAN for storage).
    BUT, if these hosts are NOT clustered, and one of them is suddenly switched off, experiences some kind of catastrophic failure, or simply needs to be rebooted while applying WSUS patches, the SQL Server HA will fail over (so all databases will remain up
    and going on the surviving VM), and the environment would continue functioning at somewhat reduced capacity until the failed host is restarted.  With this approach, it seems to me that we would be running at 100% for the most part, and running at 50%
    or so only in the event of a major failure, rather than running at 50% ALL the time.
    Of course, in the event of a catastrophic failure, I'm also thinking that the one-off worker-bee VMs could be replicated to the alternate host so they could be started on the surviving host if needed during a long-term outage.
    So basically, I am very interested in the thoughts of others with experience regarding taking this approach to Hyper-V architecture, as it seems as if failover clustering is almost a given when it comes to best practices and high availability.  I guess
    I'm looking for validation on my thinking.
    So what do you think?  What am I missing or forgetting?  What will we LOSE if we go with a NON-clustered high-availability environment as I've described it?
    Thanks in advance for your thoughts!

    Udo -
    Yes your responses are very helpful.
    Can we use the built-in Server 2012 iSCSI Target Server role to convert the local RAID disks into an iSCSI LUN that the VMs could access?  Or can that not run on the same physical box as the Hyper-V host?  I guess if the physical box goes down
    the LUN would go down anyway, huh?  Or can I cluster that role (iSCSI target) as well?  If not, do you have any other specific product suggestions I can research, or do I just end up wasting this 12TB of local disk storage?
    - Morgan
    That's a bad idea. First of all, the Microsoft iSCSI target is slow (it's non-cached at the
    server side). So if you really have decided to use dedicated hardware for storage (maybe you do
    have a reason I don't know...) and if you're fine with your storage being a single point of
    failure (OK, maybe your RTOs and RPOs are fair enough), then at least use an SMB share. SMB at
    least does cache I/O on both client and server sides, and you can also use Storage Spaces as a
    back end for it (non-clustered), so read "write-back flash cache for cheap". See:
    What's new in iSCSI target with Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn305893.aspx
    Improved optimization to allow disk-level caching (updated):
    iSCSI Target Server now sets the disk cache bypass flag on a hosting disk I/O, through Force Unit Access (FUA), only when the issuing initiator explicitly requests it. This change can potentially improve performance.
    Previously, iSCSI Target Server would always set the disk cache bypass flag on all I/Os. System cache bypass functionality remains unchanged in iSCSI Target Server; for instance, the file system cache on the target server is always bypassed.
    Yes, you can cluster the iSCSI target from Microsoft, but a) it would be SLOW, as there would
    be only an active-passive I/O model (no real use of MPIO between multiple hosts), and b) it
    would require shared storage for the Windows cluster. What for? The scenario used to make sense
    when a) there was no virtual FC, so a guest VM cluster could not use FC LUNs, and b) there was
    no shared VHDX, so SAS could not be used for a guest VM cluster either. Now both are present,
    so the scenario is pointless: just export your existing shared storage without any Microsoft
    iSCSI target and you'll be happy. For references see:
    MSFT iSCSI Target in HA mode
    http://technet.microsoft.com/en-us/library/gg232621(v=ws.10).aspx
    Cluster MSFT iSCSI Target with SAS back end
    http://techontip.wordpress.com/2011/05/03/microsoft-iscsi-target-cluster-building-walkthrough/
    Guest
    VM Cluster Storage Options
    http://technet.microsoft.com/en-us/library/dn440540.aspx
    Storage options
    The following table lists the storage types that you can use to provide shared storage for a guest cluster.
    - Shared virtual hard disk: New in Windows Server 2012 R2, you can configure multiple virtual
      machines to connect to and use a single virtual hard disk (.vhdx) file. Each virtual machine
      can access the virtual hard disk just like servers would connect to the same LUN in a storage
      area network (SAN). For more information, see Deploy a Guest Cluster Using a Shared Virtual
      Hard Disk.
    - Virtual Fibre Channel: Introduced in Windows Server 2012, virtual Fibre Channel enables you
      to connect virtual machines to LUNs on a Fibre Channel SAN. For more information, see Hyper-V
      Virtual Fibre Channel Overview.
    - iSCSI: The iSCSI initiator inside a virtual machine enables you to connect over the network
      to an iSCSI target. For more information, see iSCSI Target Block Storage Overview and the
      blog post Introduction of iSCSI Target in Windows Server 2012.
    Storage requirements depend on the clustered roles that run on the cluster. Most clustered
    roles use clustered storage, where the storage is available on any cluster node that runs a
    clustered role. Examples of clustered storage include Physical Disk resources and Cluster
    Shared Volumes (CSV). Some roles do not require storage that is managed by the cluster. For
    example, you can configure Microsoft SQL Server to use availability groups that replicate the
    data between nodes. Other clustered roles may use Server Message Block (SMB) shares or Network
    File System (NFS) shares as data stores that any cluster node can access.
    Sure you can use third-party software to replicate 12TB of your storage between just a pair of nodes to create a fully fault-tolerant cluster. See (there's also a free offering):
    StarWind VSAN [Virtual SAN] for Hyper-V
    http://www.starwindsoftware.com/native-san-for-hyper-v-free-edition
    The product is similar to what VMware has just released for ESXi, except it has been selling for ~2 years, so it is mature :)
    There are other guys doing this, say DataCore (more focused on Windows-based FC) and SteelEye (more about geo-clustering & replication). You may want to give them a try.
    Hope this helped a bit :) 
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • How can I do "pkgupdate" and/or create private patch clusters?

    I hope this is the right forum for this post :)
    I maintain a number of Solaris 8-10 systems which are automated by my scripts
    (i.e. for dumping data to a backup server, watchdog the services, start up portal
    services, etc.)
    Long ago I made my life easier by creating such scripts; then I made it easier by
    packaging these scripts into thematic Solaris packages (as in pkgadd/pkgrm).
    Since this concerns maintaining older systems, references to the new OpenSolaris
    Indiana Packaging System are not interesting.
    Now I want to simplify maintenance and update of such packages by building a
    patch cluster of them. In short, there should be one archive to copy to all these
    servers and one command to run (i.e. patchadd), and it should reinstall only(!)
    those of my script packages which are already installed.
    An interim solution would be to come up with a sort of "pkgupdate" utility which
    REinstalls only the existing package instances which are older than the one I
    currently provide (perhaps someone can share a pkgadd admin file with options
    to force such behavior?)
    I guess I can script up another solution for this automation as well, but I wonder
    if this wheel is already invented? It actually is, with the Sun-provided patch clusters
    proving that, but I've not seen any tools to create such clusters myself :)
    This is particularly important while maintaining a Solaris 10 system with lots of
    Solaris zones. Having different tasks, these zones are configured with different
    sets of my packages. Simply running pkgrm/pkgadd in the global zone would
    result in all zones having all of these packages, and I don't want that since I
    believe in specialization and minimization. And checking each zone is rather
    tedious, there's lots of them.
    Looking forward to constructive suggestions,
    //Jim
    Edited by: JimKlimov on Dec 2, 2008 7:47 PM


  • Render Benchmarks: GPU vs Quicktime X vs Compressor Quick Clusters vs Compressor distributed

    I've been using Final Cut since the early days when Apple acquired the original technology and began bundling apps to create Final Cut Studio. Along the way, I have used Compressor and now have the latest version installed. I don't use Compressor much because I've always felt its performance and interface lagged behind third-party apps and, now, FCPX's GPU rendering.
    I've been producing wedding videos for a while.  Due to the long form nature of the ceremonies and the various formats I provide (web/h264, DVD, Blu-ray), I'm even more eager to optimize my rendering times.
    I have tried setting up QMaster clusters multiple times over the years but found they were characteristically unreliable and didn't want to spend a lot of time troubleshooting and nursing them.  With the addition of quick clusters in Compressor, I decided to give it a go again this past week. 
    I have 3 Macs on a gigabit LAN: 2008 Mac Pro 2.8 (8 threads), 2011 iMac i7 3.4 (8 threads), 2011 Mac Mini i5 (4 threads).  All have the same/latest version of ML, FCPX and Compressor.  The Mac Pro is my primary workstation: 24GB RAM, SSD boot drive, RAID0 Media drive for Projects and Events, ATI 5770 GPU.
    I have a 50 min 1080 30p timeline with some multicam clips, titles, multiple audio tracks, CC, NR, etc. After allowing FCPX to background render the entire timeline (original media/high quality playback), I chose the Share button and sent a 3 min segment to a PR422 1080 30p Master destination.
    From FCPX shared to PR422 Master: 1:09
    Then the same segment:
    From FCPX timeline shared to the default Compressor 1080p Apple Devices (10Mb) destination (not opening Compressor): 14:52
    From FCPX timeline shared to Apple Devices 1080p Best Quality destination: 9:21
    From Finder using the 3Min PR422 Master Segment file and the "Encode Selected Files" Service at 1080p: 3:50
    From Quicktime X using the 3min PR422 Master segment file and 1080p Output format: 3:46
    From Quicktime 7 Pro using the 3min PR422 Master segment file and 1080p Output format: 10:44
    From Compressor using Apple Devices 1080p (10Mbps): 7:19
    Adobe Media Encoder CS6 using the 3Min PR422 Master segment file w/H264 2Pass VBR 10Target/12Max m192Kb Audio 1080 30p: 6:54
    This segment was too short to get a reliable test using Compressor with my "This Computer+" distributed rendering but my tests showed that even with all available threads, the distributed rendering is much slower than GPU rendering or QTX encoding.  I have included screen composites here of the various output files, settings, CPU loads and utilized threads: https://www.dropbox.com/sh/6tgamkjs5z2r6zv/Zg6zcyn3Ya
    I realize this IS NOT scientific in that most of the tests have small variations in compression bit rates and 32 vs 64bit processes.  Regardless, I'm confident that for most circumstances, Compressor is slower than most other workflows.  I couldn't find a situation where I would want to use it. 
    Fastest hands-on workflow: Output a Master PR file then right click the file in Finder and encode to 1080p, about 5min total.
    As a side note: Despite setting the quick clusters up by the book, I could never get the nodes to start rendering.  Interesting because I had no problems with "This Computer+" using the available nodes on my  LAN.
    Observations? Recommendations?

  • SAP note 709354 - DB Clustering

    An extract from the sap note:
    --->>>
    With SAP Enterprise Portal 6.0 on Web AS 6.40, DB clustering will not be officially supported. This applies to Microsoft SQL as well as to Oracle database cluster implementations. The technical feasibility of active/active DB cluster solutions (like Oracle RAC or MS SQL in active/active cluster mode) in conjunction with SAP Enterprise Portal 6.0 on Web AS 6.40 Support Pack Stack 01 is under investigation, but there are currently no plans to release it within the NetWeaver '04 timeframe. Since some customer projects have shown that EP 6.0 SP2 is able to work on top of an active/passive DB cluster solution, SAP is willing to help customers on a project basis to get DB-cluster-related problems solved (in the case of Oracle this is true for failover mechanisms, but not for Real Application Clusters (RAC)). If you are interested in setting up DB clustering at your own risk and on a project basis, please contact SAP NetWeaver Product Management via your SAP contact in sales or consulting.
    <<<---
    Would someone from SAP like to comment on this? Does it mean that active/passive clustering is already supported, or that ANY mode of clustering will be unsupported? And within the timeframe of NW04?? Isn't that until around 2013?
    Has anyone had any experience with active/passive clustering, especially Oracle on Windows, with EP6 SP3 and higher?
    Although the note indicates that it 'can (and probably will) be changed without notice', it remains a slightly worrying note, as I'm sure we all want to cluster our production portal central instance and database, don't we?


  • Error in accessing Processes in a Clustered WLS Environment

    Hi all,
    We are running a clustered deployment of OBPM in WLS. The cluster is behaving properly except that published and deployed projects display the following error when accessing the workspace:
    The Process '/FooRequestProcess#Default-1.0' is not available.
    The Process '/FooCenterRequestProcess#Default-1.0' is not available. Caused by: Process '/FooRequestProcess#Default-1.0' not available. Caused by: Cannot reach engine 'bpmQAengine' at URL: ''. Caused by: While trying to lookup 'engines.bpmQAengine' didn't find subcontext 'engines'. Resolved ''
    Now, there was an issue with the JDBC data source in WLS which has since been fixed; otherwise neither the engine nor the workspace would have come up. I have rebuilt and redeployed the engine, and I have undeployed and unpublished the project, then republished and redeployed it, yet I still see this error.
    The XA definitions are correct. Does anyone have a clue as to the cause of this?
    TIA,
    IGS

    Hi,
    I am getting the same error:
    The Process '/ExpenseReport#Default-1.0' is not available.
    The Process '/ExpenseReport#Default-1.0' is not available. Caused by: Process '/ExpenseReport#Default-1.0' not available. Caused by: Cannot reach engine 'Engine3' at URL: ''. Caused by: While trying to lookup 'engines.Engine3' didn't find subcontext 'engines'. Resolved ''
    When I tried checking --reload from the directory, I am getting the following error:
    Error Caused by: JMX connection for J2EE_WEBLOGIC could not be established. URL: service:jmx:t3://cdi-server1.apac.apl.com:8101/jndi/weblogic.management.mbeanservers.runtime. Caused by: User: weblogic, failed to be authenticated.
    I am not sure about this error... please advise how to rectify it. I have given the host and port correctly (pointed to one of the managed servers).
    The most interesting part is that I have configured the cluster in a distributed environment. I have deployed the project to a cluster with two managed servers; one managed server is configured on another physical server.
    When I try the workspace with server1:9001\workspace I am able to see the deployed processes without any errors/exceptions,
    but when I try the workspace with server2:9002\workspace I am getting the process unavailable error.
    server1:9001 -- managed server 1 of the cluster to which the process is deployed
    server2:9002 -- managed server 2 of the cluster to which the process is deployed
    Kindly help
    Thanks,
    Charan
    Edited by: charan27 on May 26, 2010 5:24 AM
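
    For what it's worth, the failing lookup can be reproduced outside the workspace with a few
    lines of standalone client code. This is a diagnostic sketch, not OBPM's actual code; the
    engine name is taken from the post above and the server URL is a placeholder for the managed
    server that reports the error.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class EngineLookupCheck {
        public static void main(String[] args) throws Exception {
            Hashtable env = new Hashtable();
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "weblogic.jndi.WLInitialContextFactory");
            // Placeholder: point at the managed server that shows the error.
            env.put(Context.PROVIDER_URL, "t3://server2:9002");
            Context ctx = new InitialContext(env);
            // Mirrors the failing lookup: if the 'engines' subcontext was never
            // bound on this server, the same "didn't find subcontext 'engines'"
            // naming exception should appear here.
            Object engine = ctx.lookup("engines.Engine3");
            System.out.println("Found: " + engine);
        }
    }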

  • Getting clusters from LabVIEW ActiveX Server with Excel/VBA

    Hello,
    my colleague and I are trying to control LabVIEW from Excel (VBA) via ActiveX.
    That is, we do something like:
    Dim LV As Object, VI As Object
    Dim ParamNames(2) As String
    Dim ParamValues(2) As Variant

    Set LV = CreateObject("LabVIEW.Application")
    Set VI = LV.GetVIReference("Path_to_VI")
    ParamNames(0) = "Input1"
    ParamNames(1) = "Input2"
    ParamNames(2) = "Output"
    ParamValues(0) = 1
    ParamValues(1) = 3.1415
    ' VI.Call sends the named controls in and returns indicator values in ParamValues
    Call VI.Call(ParamNames, ParamValues)
    MsgBox ("output = " & ParamValues(2))
    This works perfectly for simple data types (int, double, float, string, etc.).
    Now we need to transfer more complex structures, which are originally LV clusters,
    but we did not find any clue on how to do that (especially how to receive clusters) in the help or on the internet.
    Is there any chance to succeed???
    TIA,
    Thomas

    Actually, working with clusters is really really easy. Through the magic of - well, something - a cluster in LV comes out in the VBA environment as an array of variants. There was an ActiveX example that shipped with V7.1 that showed this very thing. I couldn't find it in V8, so here is the 7.1 stuff.
    Check out the macros in the Excel spreadsheet... This shows running the VI in the development environment, but if this looks interesting I can fill you in on how to make it work in an executable.
    Mike...
    Certified Professional Instructor
    Certified LabVIEW Architect
    LabVIEW Champion
    "... after all, He's not a tame lion..."
    Be thinking ahead and mark your dance card for NI Week 2015 now: TS 6139 - Object Oriented First Steps
    Attachments:
    freqresp.xls ‏49 KB
    Frequency Response.llb ‏155 KB

  • Clustering of Oracle 10gAS (Forms and Reports Standalone edition)

    Hi,
    Does anyone have experience with clustering Forms/Reports in a 10g environment with the Forms and Reports standalone edition?
    Clustering at Application Server level?
    Clustering at hardware level (load balancing \SAN for code storage)?
    Clustering at OS level (MS Clustering)?
    I am interested to know about others experiences\thoughts.
    John

    Since the 10g version (904) you can only cluster the infrastructure (Cold Failover Cluster and Active Failover Cluster); HA for Forms and Reports is achieved only by front-ending with a load balancer, and I'd suggest not using WebCache as this front end, because Forms gets a lot of problems.
    Regards.

  • Simon Greener's Morton Key Clustering in Oracle Spatial

    Hi folks,
    Apologies for the rambling.  With mattyschell heading for greener open source big apple pastures I am looking for new folks to bounce ideas and code off.  I was thinking this week about the discussion last autumn over spatial clustering.
    https://community.oracle.com/thread/3617887
    During the course of the thread we all kind of pooh-poohed spatial clustering as not much of a solution, myself being one of the primary poohers.  Yet the concept certainly remains as something to consider regardless of our opinions.  The yellow book, the Greener/Ravada book, Simon's recent treatise (http://download.oracle.com/otndocs/products/spatial/pdf/biwa_2015/biwa2015_uc_comparativeperformance_greener.pdf), they all put forward clustering such that at the very least we should consider it a technique we as professionals should be able to do - a tool in the toolbox whether or not it is always the right answer.  I am mildly (very mildly) curious to see if Kothuri, Godfrind and Beinat will recycle their section on spatial clustering with the locked-down MD.HHENCODE into their 12c revision out this summer.  If they don't, then what is the replacement for this technique?  If they do, then we return to all of our griping about this ancient routine that Simon implies may date back to the CHS and their hhcode indexes - at least it's not written in Java! 
    Anyhow, so I've been in the midst this month of refreshing some of the datasets I manage and considering clustering the larger tables whilst I am at it.  Do I really expect to see huge performance gains?   Well... not really.  But it does seem like something that should be easy to accomplish, certainly something that "doesn't hurt" and shows that I am on top of things (e.g. "checks the box").  But returning to the discussion from last fall, just what is the best way to do this in Oracle Spatial?
    So if we agree to ignore poor old MD.HHENCODE, then what?  Hilbert curves look nifty but no one seems to be stepping up with the code for them.  And this reroutes us back around to Simon and his Morton key code.
    http://www.spatialdbadvisor.com/oracle_spatial_tips_tricks/138/spatial-sorting-of-data-via-morton-key
    So who all is using Simon's code currently?  If you read that discussion from last fall there does not seem to be anyone doing so and we never heard back from Cat Person on either what he decided to do or what his name is.
    I thought I could take a stab at streamlining Simon's process somewhat to make things easier for myself to roll this onto many tables.  I put together the following small package
    https://github.com/pauldzy/DZ_SDO_CLUSTER/tree/master/Packages
    In particular I wanted to bundle up the side issues of how to convert your lines and polygons into points, automate things somewhat, and provide a little verification function to see what the results look like.  So again, nothing that Simon does not already walk through on his webpage, just making it a bit easier to bang out on your tables without writing a separate long SQL process for each one.
    So for example, to use Simon's Morton key logic, you need to know the extent envelope of the data (in order to define a proper grid).  So if it's a large table, you'd want to stash the envelope info in the metadata.  You can do this with the update_metadata_envelope procedure or just suffer through the sdo_aggr_mbr each time if you don't want to go that route (I have one table of small watershed polygons that takes about 9 hours to run sdo_aggr_mbr upon).  So just run things at the SQL prompt:
    SELECT
    DZ_SDO_CLUSTER.MORTON_UPDATE(
        p_table_name  => 'CATCHMENT_NP21'
       ,p_column_name => 'SHAPE'
       ,p_grid_size   => 1000
    )
    FROM dual;
    This will return the update clause populated with the values to use with the morton_key wrapper function, e.g. "morton_key(SHAPE,160.247133275879,-17.673722530871,.0956820001136141,.0352063207508021)".  So then just paste that into an update statement
    UPDATE foo
    SET my_morton_key = dz_sdo_cluster.morton_key(
        SHAPE
       ,160.247133275879
       ,-17.673722530871
       ,.0956820001136141
       ,.0352063207508021
    );
    Then rebuild your table sorting on the morton_key.  I just use the TOAD rebuild table tool and manually add the order by clause to the rebuild script.  I let TOAD do all the work of moving the indexes, constraints and grants to the new table.  I imagine there are other ways to do this.
    The final function is meant to be popped into Oracle mapviewer or something similar to show your family and friends the results.
    SELECT
    dz_sdo_cluster.morton_visualize(
        'NHDPLUS'
       ,'NHDFLOWLINE_NP21_ACU'
       ,'SHAPE'
       ,'OBJECTID'
       ,'100'
       ,10000
       ,'MORTON_KEY'
    )
    FROM dual;
    Look Mom, there it is!
    So anyhow, this is a first stab at things and I am interested in feedback or suggestions for improvement.  Did I get the logic correct?  Don't spare my feelings if I botched something.  Note that, like Simon, I passed on the matter of just how to determine the proper grid size.  I've been using 1000 for the continental US + Hawaii/PR/VI, and sitting here this morning I think that probably is too large.  Of course it depends on the size of the geometries and thus the density of the resulting points.  With water features this can vary a lot from place to place, so perhaps 1000 is okay.  What would the algorithm be to determine a decent grid size?  It occurs to me I could tell you the average feature count per morton key value; okay, well, it's about 10.  That seems small to me.  So I could see another function in this package that returns some kind of summary on the results of the keying to tell you if your grid size estimate was reasonable.
    Cheers and Happy Saturday,
    Paul
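
    As an aside for readers who want to see the key itself: a Morton (Z-order) key just snaps
    x,y to a grid and interleaves the bits of the two cell indices. Here is a minimal Java
    sketch of that idea; the grid origin and cell sizes play the same role as the parameters
    to the morton_key wrapper above, but the values in main are placeholders, not the ones
    from the update clause, and this is a generic illustration rather than Simon's PL/SQL.

    public class MortonSketch {
        // Interleave the low 16 bits of col and row into a 32-bit Z-order key:
        // col bit i lands at key bit 2i, row bit i at key bit 2i+1.
        static long interleave(int col, int row) {
            long key = 0;
            for (int i = 0; i < 16; i++) {
                key |= (long) ((col >> i) & 1) << (2 * i);
                key |= (long) ((row >> i) & 1) << (2 * i + 1);
            }
            return key;
        }

        // Placeholder grid parameters: origin (minX, minY) and cell sizes.
        static long mortonKey(double x, double y,
                              double minX, double minY,
                              double cellW, double cellH) {
            int col = (int) Math.floor((x - minX) / cellW);
            int row = (int) Math.floor((y - minY) / cellH);
            return interleave(col, row);
        }

        public static void main(String[] args) {
            // Two nearby points get nearby keys - the clustering effect.
            System.out.println(mortonKey(-88.5, 42.1, -125.0, 17.0, 0.1, 0.05));
            System.out.println(mortonKey(-88.4, 42.1, -125.0, 17.0, 0.1, 0.05));
        }
    }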

    I've done some spatial clustering testing this week.
    Firstly, to reiterate the purpose of spatial clustering as I see it:  spatial clustering can be of benefit in situations where frequent window based spatial queries are made.  In particular it can be very useful in web mapping scenarios where a map server is requesting data using SDO_FILTER or SDO_ANYINTERACT and there is a need to return the data as quickly as possible.  If the data required to satisfy the query can be squeezed into as few blocks as possible, then the IO overhead is clearly reduced.
    As Bryan mentioned above, once the data is in the buffer cache, then the advantage of spatial clustering is reduced.  However it is not always possible to get/keep enough of the data in the buffer cache, so I believe spatial clustering still has merits, particularly if it can be implemented alongside spatial partitioning.
    I ran the tests using an 11.2.0.4 database on my laptop.  I have a hard disk rather than SSD, so the effects of excessive IO are exaggerated.  The database is configured with the default 8kb block size.
    Initially, I created a table PARCELS:
    create table parcels (
    id            integer,
    created_date  date,
    x            number,
    y            number,
    val1          varchar2(20),
    val2          varchar2(100),
    val3          varchar2(200),
    geometry      mdsys.sdo_geometry,
    hilbert_key  number);
    I inserted 2.8 million polygons into this table.  The CREATED_DATE is the actual date the polygons were captured.  I populated val1, val2 and val3 with string values to pad the rows out to simulate some business data sitting alongside the sdo_geometry.
    I set X,Y to the first ordinate of the polygon and then set hilbert_key = sdo_pc_pkg.hilbert_xy2d(power(2,31), x, y).
    I then created 4 tables to base the tests upon:
    PARCELS_RANDOM:  Ordered by dbms_random.random - an absolute worst case scenario.  Unrealistic, but worthwhile as a benchmark.
    PARCELS_BASE_DATE:  Ordered by CREATED_DATE.  This is probably pretty close to how the original source data is structured on disk.
    PARCELS_RTREE:  Ordered by RTree.  Achieved by inserting based on an SDO_FILTER query
    PARCELS_HILBERT:  Ordered by the hilbert_key attribute
    As a first test, I counted the number of blocks required to satisfy an SDO_FILTER query.  E.g.
    select count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from parcels_rtree
    where sdo_filter(geometry,
                    sdo_geometry(2003, 2157, null, sdo_elem_info_array(1, 1003, 3),
                                    sdo_ordinate_array(644232,773809, 651523,780200))) = 'TRUE';
    I'm assuming dbms_rowid.rowid_block_number(rowid) is suitable for this.
    I ran this on each table and repeated it over three windows.
    Results:
    So straight off we can see that the random ordering gave pretty horrific results as the data required to satisfy the query is spread over a large number of blocks.  The natural date based clustering was far better. RTree and Hilbert based clustering reduced this by a further 50% with Hilbert just nosing out RTree.
    Since web mapping is the use case I am most likely to target, I then setup a test case as follows:
    Setup layers in GeoServer for each of the tables
    Used a script to generate 1,000 random squares over the extent of the data, ranging from 200m to 500m in width and height.
    Used JMeter to make a WMS request for a png of each of the 1,000 windows.  JMeter was run sequentially with just one thread, so it waited for each request to complete before starting the next.  I ran these tests 3 times to balance out the results, flushing the buffer cache before each run.
    Results:
    Again the random ordering performed woefully bad - somewhat exacerbated by the quality of the disk on my laptop.  The natural date based clustering performed far better.  RTree and hilbert based clustering further reduced the time by more than half.
    In summary, the results suggest that spatial clustering is worth the effort if:
    the data is not already reasonably well clustered
    you've got a decent quantity of data
    you're expecting a lot of window based queries which need to be returned as quickly as possible
    you don’t expect to be able to fit all the data in the buffer cache
    When it comes to deciding between RTree and Hilbert (or Morton/z-order or any other space filling curve method).... I found that the RTree method can be a bit slow on large datasets, although this may not matter as a one off task.  Plus it requires a spatial index on the source table to start off with.  The key based methods are based on an xy, so for lines and polygons there is an intermediate step to extract an xy.  I would tend to recommend this approach if you also partition the data based on a subset of the cluster key.
    Scripts are available here: https://github.com/john-otoole/oracle_spatial_cluster_test
    John
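
    As a footnote to the comparison: the Hilbert index keyed on above can be computed with the
    classic xy2d algorithm, shown here as a minimal Java sketch (n must be a power of two,
    e.g. 2^31 as in the sdo_pc_pkg.hilbert_xy2d call). This is a generic transcription of the
    textbook algorithm, not Oracle's implementation.

    public class HilbertSketch {
        // Convert (x, y) in an n x n grid (n a power of two) to a Hilbert index.
        static long xy2d(long n, long x, long y) {
            long d = 0;
            for (long s = n / 2; s > 0; s /= 2) {
                long rx = ((x & s) > 0) ? 1 : 0;
                long ry = ((y & s) > 0) ? 1 : 0;
                d += s * s * ((3 * rx) ^ ry);
                // Rotate/flip the quadrant so the curve stays continuous.
                if (ry == 0) {
                    if (rx == 1) {
                        x = n - 1 - x;
                        y = n - 1 - y;
                    }
                    long t = x; x = y; y = t;
                }
            }
            return d;
        }

        public static void main(String[] args) {
            // Neighbouring cells get close indices, which is what the
            // hilbert_key column exploits when the table is rebuilt in key order.
            System.out.println(xy2d(1L << 31, 123456L, 654321L));
            System.out.println(xy2d(1L << 31, 123457L, 654321L));
        }
    }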

  • WL6.0 clustering error

    Hi,
    I have two WebLogic servers on different machines in a cluster. When a request
    goes to one server it shows the following error. What could be the reason?
              weblogic.cluster.replication.NotFoundException: Lost 2 updates of -525890432560839199
              at weblogic.rmi.internal.AbstractOutboundRequest.sendReceive(AbstractOutboundRequest.java:90)
              at weblogic.cluster.replication.ReplicationManager_WLStub.update(ReplicationManager_WLStub.java:316)
              at weblogic.cluster.replication.ReplicationManager.updateSecondary(ReplicationManager.java:426)
              at weblogic.servlet.internal.session.ReplicatedSessionData.sync(ReplicatedSessionData.java:398)
              at weblogic.servlet.internal.session.ReplicatedSessionContext.sync(ReplicatedSessionContext.java:147)
              at weblogic.servlet.internal.ServletRequestImpl.syncSession(ServletRequestImpl.java:1526)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:1310)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:1622)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              --------------- nested within: ------------------
              weblogic.utils.NestedError: Could not find secondary on remote server - with nested
              exception:
              [weblogic.cluster.replication.NotFoundException: Lost 2 updates of -525890432560839199]
              at weblogic.servlet.internal.session.ReplicatedSessionData.sync(ReplicatedSessionData.java:405)
              at weblogic.servlet.internal.session.ReplicatedSessionContext.sync(ReplicatedSessionContext.java:147)
              at weblogic.servlet.internal.ServletRequestImpl.syncSession(ServletRequestImpl.java:1526)
              at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:1310)
              at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:1622)
              at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              Thanks
              Venkat
              

              Hi Patrizia,
              I have been in touch with BEA support for the last two weeks. They are trying to
              find out the reason for this error.
              I posted this question a while back but nobody gave me an answer.
              I don't think anybody has done clustering successfully on WL 6.0 SP1 yet.
              If I find the reason I will let you know.
              Thanks
              Venkat
              Patrizia MB <[email protected]> wrote:
              >Hi Venkat,
              >I've just enabled in-memory session replication and I've encountered EXACTLY
              >your problem, which repeats whenever I
              >invoke my servlet. I've two WLS6SP1 instances running in a cluster.
              >Have you understood why WebLogic throws those kinds of exceptions?
              >I've looked in other newsgroups (weblogic.developer.interest.clustering.in-memory-replication)
              >and the problem
              >seems very frequent... what is wrong in our cluster configuration???
              >
              >Thanks
              >
              >Patrizia
              >
              >
              >Venkat wrote:
              >
              >> Hi,
              >>
              >> I have two weblogic servers on different m/cs in a cluster.When the
              >request is going
              >> to one server it is showing the following error.What could be the reason?
              >>
              >> weblogic.cluster.replication.NotFoundException: Lost 2 updates of -525890432560839199
              >> at weblogic.rmi.internal.AbstractOutboundRequest.sendReceive(AbstractOutboundRequest.java:90)
              >> at weblogic.cluster.replication.ReplicationManager_WLStub.update(ReplicationManager_WLStub.java:316)
              >> at weblogic.cluster.replication.ReplicationManager.updateSecondary(ReplicationManager.java:426)
              >> at weblogic.servlet.internal.session.ReplicatedSessionData.sync(ReplicatedSessionData.java:398)
              >> at weblogic.servlet.internal.session.ReplicatedSessionContext.sync(ReplicatedSessionContext.java:147)
              >> at weblogic.servlet.internal.ServletRequestImpl.syncSession(ServletRequestImpl.java:1526)
              >> at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:1310)
              >> at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:1622)
              >> at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137)
              >> at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              >> --------------- nested within: ------------------
              >> weblogic.utils.NestedError: Could not find secondary on remote server
              >- with nested
              >> exception:
              >> [weblogic.cluster.replication.NotFoundException: Lost 2 updates of
              >-525890432560839199]
              >> at weblogic.servlet.internal.session.ReplicatedSessionData.sync(ReplicatedSessionData.java:405)
              >> at weblogic.servlet.internal.session.ReplicatedSessionContext.sync(ReplicatedSessionContext.java:147)
              >> at weblogic.servlet.internal.ServletRequestImpl.syncSession(ServletRequestImpl.java:1526)
              >> at weblogic.servlet.internal.WebAppServletContext.invokeServlet(WebAppServletContext.java:1310)
              >> at weblogic.servlet.internal.ServletRequestImpl.execute(ServletRequestImpl.java:1622)
              >> at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:137)
              >> at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:120)
              >> >
              >>
              >> Thanks
              >> Venkat
              >
              

  • Anomaly detection using ODM

    I was asked the following question:
    "My question is very simply, we are doing a monitoring system for a
    website that helps the admin to mine on specific data (using ODM to
    produce Web mining) so we want to apply the anomaly detection. We dont
    know what we should do and what we should produce as a results."
    A couple of suggestions come to mind:
    1) For an overall discussion of intrusion detection in general using the Oracle RDBMS as an analytical platform the following paper might be useful:
    http://www.oracle.com/technology/products/bi/odm/pdf/odm_based_intrusion_detection_paper_1205.pdf
    2) A couple of things to think about and do:
    (a) Define the "mining case", that is, the object that defines the concept you want to mine. For example, in web mining you may want to detect anomalous session activity. This can be defined over the whole activity of a session or over time windows. In the first case each session defines a mining case (it will be a row in the training data). In the second case each session generates many mining cases, one per time window. Let's assume for the sake of discussion that the goal is to identify anomalous session activity. Then the training data will consist of the session activities (e.g., clicks, pages visited, and/or information from forms; or, more generally, HTTP requests). There will be one row per session in the training data. If we know beforehand that some of those sessions were intrusions or anomalous in some sense, we can also capture this as a target for supervised modeling.
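    As an illustration only (the table and column names below are invented, not from the question), a session-level mining case might be assembled as a view over a raw request log, one row per session:

        -- Hypothetical raw log: WEB_REQUESTS(SESSION_ID, URL, HTTP_STATUS, REQUEST_TIME DATE)
        CREATE OR REPLACE VIEW session_cases AS
        SELECT session_id,
               COUNT(*)                                            AS n_requests,
               COUNT(DISTINCT url)                                 AS n_distinct_pages,
               SUM(CASE WHEN http_status >= 400 THEN 1 ELSE 0 END) AS n_errors,
               (MAX(request_time) - MIN(request_time)) * 86400     AS duration_secs
        FROM   web_requests
        GROUP  BY session_id;

    Each aggregate becomes a predictor attribute; a known intrusion flag, if available, would be joined in as the target column.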
    (b) Decide what modeling to do. Two types of modeling can be performed (see the paper above for examples):
    (i) Supervised modeling - in case there are examples of anomalous cases as well as normal cases.
    This can be done by building a classifier on the training data. It is also possible to measure the quality of the classifier on a held-aside sample.
    (ii) Unsupervised modeling - this should be done as well, even if we can create a supervised model.
    Unsupervised approaches don't provide a measure that indicates how good the model is at predicting anomalous events. These models are better at ranking cases by how anomalous the model believes they are.
    Two common unsupervised techniques for anomaly detection are clustering and One-Class SVM. The latter is considered state of the art in many problem domains and is the one implemented by ODM. ODM also has clustering, but it does not return the distance of a row to the center of its cluster, information that is necessary in order to use clustering for anomaly detection. If one wants to use clustering, the Oracle Data Mining blog has a post that can help compute the distance from rows to centroids:
    http://oracledmt.blogspot.com/2006/07/finding-most-typical-record-in-group.html
    It is important to note that the method described in the post doesn't support nested column attributes.
    When building unsupervised models, only the data for normal cases should be used to train the models. The unsupervised model can be seen as defining what is normal; it will flag something as anomalous when it does not match the definition of normality learned from the data.
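    A hedged sketch of the One-Class SVM approach just described, using the PL/SQL API and assuming a view NORMAL_SESSIONS that holds only known-normal rows (all names are illustrative): passing a NULL target with the SVM algorithm is what makes DBMS_DATA_MINING build a one-class model.

        CREATE TABLE ocsvm_settings (
          setting_name  VARCHAR2(30),
          setting_value VARCHAR2(4000));

        BEGIN
          -- Select the SVM algorithm; with no target this yields One-Class SVM.
          INSERT INTO ocsvm_settings VALUES
            (DBMS_DATA_MINING.ALGO_NAME, DBMS_DATA_MINING.ALGO_SUPPORT_VECTOR_MACHINES);

          DBMS_DATA_MINING.CREATE_MODEL(
            model_name          => 'OCSVM_SESSIONS',
            mining_function     => DBMS_DATA_MINING.CLASSIFICATION,
            data_table_name     => 'NORMAL_SESSIONS',
            case_id_column_name => 'SESSION_ID',
            target_column_name  => NULL,            -- no target: learn "normal" only
            settings_table_name => 'OCSVM_SETTINGS');
        END;
        /

    The supervised variant in (i) would be the same call with target_column_name pointing at the known intrusion label.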
    (c) Use ODMR (Oracle Data Miner) to help with modeling
    (d) As new session information is gathered, it is possible to score the session in real time to detect anomalous behavior. One should score with both the supervised model (if label information was available) and the unsupervised model. See the above paper for some discussion of this.
    The supervised model will indicate whether a case is anomalous based on known types of anomalous behavior. One should use ROC tuning in ODMR to find a good operating point for the model; this is necessary because the number of anomalous cases is usually small compared to normal ones.
    The unsupervised model (One-Class SVM) will provide a ranking. The higher the probability of belonging to class 1, the more normal the case. A 0.5 probability for class 1 nominally marks the boundary between normal and not normal; in reality it marks a boundary where normality dominates - there can be some anomalous cases with probability higher than 0.5 and some normal cases with probabilities below 0.5. If working in batch mode we can rank the probabilities in ascending order and select the first K rows for investigation.
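    For example, a batch ranking query along those lines might look like this (model and table names carried over from the sketch above, still illustrative):

        -- Score new sessions and surface the :k most anomalous
        -- (lowest probability of the "normal" class 1).
        SELECT *
        FROM  (SELECT s.session_id,
                      PREDICTION_PROBABILITY(ocsvm_sessions, 1 USING s.*) AS p_normal
               FROM   new_sessions s
               ORDER  BY p_normal)
        WHERE  rownum <= :k;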
    --Marcos

    A suggestion to speed up the process: provide more information about your data (e.g., schema) and how you are invoking the algorithm (GUI, API, settings). In case you are using the APIs, have you tried the sample programs for anomaly detection?
    Regarding the Apriori algorithm: it does not support timestamp and date columns. In fact, none of the algorithms in ODM do (see the Oracle Data Mining documentation for the supported column data types); the DBMS_PREDICTIVE_ANALYTICS package does. Are you trying to do sequential association rules, or just plain association rules using data from a date column? ODM does not support the former. The latter can be done by converting the date column to a VARCHAR2 or NUMBER column, as shown below.
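    A minimal sketch of that conversion (column names invented): bucket the date to a month label with TO_CHAR before feeding it to the association model.

        SELECT t.txn_id,
               t.item,
               TO_CHAR(t.txn_date, 'YYYY-MM') AS txn_month   -- VARCHAR2 bucket replaces the DATE
        FROM   transactions t;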
    --Marcos

Maybe you are looking for

  • Sourcing of product categories

    Hi, We are running on SRM 5.0 and ECC 6.0. In our scenario, if the shopping cart is a catalog item, a purchase order is created automatically. If the shopping cart is created using the 'describe requirement' link, the shopping cart is sent to sourcing. But in th

  • Applescript: Need help to write a script that inputs an excel formula into multiple workbooks.

    Don't know if I can do this with Automator or if I need to write a script. I have a folder which contains probably 20 Excel workbooks, and each workbook needs the formula "=CONCATENATE(E17, " ",TEXT(N11, "mm-dd-yy"))" in cell T18 in every sheet of eac

  • Re-downloading program purchase

    My Mac just died on me and I just bought a new one today. I am trying to get the programs I bought (the Design Standard package) to download on my new Mac. How do I do this?

  • HT5622 I would like to use another credit card

    I want to use another credit card

  • I testing GG on my Laptop

    Hi, I have Oracle 11g installed on a laptop with Windows 7. I have installed two GG instances in separate directories. The Managers are running on ports 7809 and 7810. I have only one database with two schemas, Ravi and DAMS. My source schema is DAMS. The E