About clusters

Hi all,
Could you please explain clusters to me?
What are clusters, and how is data stored in them?

Hi Devender,
Here are a few definitions. I hope they help your understanding.
<b>Cluster table</b>
Cluster tables contain continuous text, for example documentation. Several cluster tables can be combined to form a table cluster: several logical rows from different tables are combined into one physical record in this table type, which permits object-by-object storage and object-by-object access. To combine tables in a cluster, at least parts of their keys must agree. Several cluster tables are stored in one corresponding table in the database.
<b>What is a table cluster?</b>
A table cluster combines several logical tables in the ABAP/4 Dictionary. Several logical rows from different cluster tables are brought together in a single physical record. The records from the cluster tables assigned to a cluster are thus stored in a single common table in the database.
<b>What are DATA CLUSTERS ?</b>
You can group any complex internal data objects of an ABAP/4 program together in data clusters and store them temporarily in ABAP/4 memory, or for longer periods in databases. You can store data clusters in special databases of the ABAP/4 Dictionary. These databases are known as ABAP/4 cluster databases and have a predefined structure. Storing a data cluster is specific to ABAP/4: although you can also access cluster databases using SQL statements, only ABAP/4 statements are able to decode the structure of the stored data cluster.
http://help.sap.com/saphelp_nw04/helpdata/en/cf/21f0b7446011d189700000e8322d00/content.htm
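The "data cluster" idea above — several named program objects packed into one keyed, opaque record — can be sketched outside ABAP. A minimal Python analogy (the `store` dict and function names are hypothetical stand-ins for an INDX-like cluster database, not SAP APIs):

```python
import pickle

# A "cluster database": one table keyed by (area, id), where each row
# holds an opaque serialized blob containing several named objects.
store = {}

def export_cluster(area, key, **objects):
    """Pack several named objects into one record (like ABAP EXPORT ... TO DATABASE)."""
    store[(area, key)] = pickle.dumps(objects)

def import_cluster(area, key):
    """Decode the packed record back into its objects (like ABAP IMPORT ... FROM DATABASE)."""
    return pickle.loads(store[(area, key)])

# Several unrelated objects stored and retrieved under a single key:
export_cluster("AR", "DOC001", header={"title": "Notes"}, lines=["a", "b"], count=2)
data = import_cluster("AR", "DOC001")
print(data["count"])   # 2 -- the whole cluster decodes as one unit
```

As the definition above notes, plain SQL could read the blob column but not interpret its contents; only a program that knows the packing format can decode it.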
I suggest searching SDN with the keyword 'Cluster'.
You will find a lot of helpful discussions.
<b>Reward points if this helps.
Manish</b>

Similar Messages

  • Plz help me about clustering and loadbalancing

    Hi all,
    Please help me with clustering and load balancing.
    How do I do clustering and load balancing in Tomcat?

    I used GridBagLayout for alignment of all buttons and text fields. Use multiple nested layout managers to get the desired layout.
    Give it another try.
    If you need further help then you need to create a [Short, Self Contained, Compilable and Executable, Example Program (SSCCE)|http://homepage1.nifty.com/algafield/sscce.html], that demonstrates the incorrect behaviour.
    Don't forget to use the Code Formatting Tags so the posted code retains its original formatting. That is done by selecting the code and then clicking on the "Code" button above the question input area.

  • Query about clustering unrelated large amounts of data together vs. keeping it separate.

    I would like to ask the talented enthusiasts who frequent the developer network to tell me if I have understood how LabVIEW deals with clusters. A generic description of a situation involving clusters, and what I believe LabVIEW does, is given below. An example of this type of situation, generating the Fibonacci sequence, is attached to illustrate what I am saying.
    A description of the general situation:
    A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use and then the modified values bundled back into the cluster for later use.
    What I think LabVIEW does:
    As the original cluster goes into the unbundle (to get the original variable values) and the bundle (to update the stored variable values), a duplicate of the entire cluster is made before the individual values chosen to be unbundled are picked out. This means that if the cluster also contains a large amount of unrelated data, processor time is wasted duplicating that data.
    If, on the other hand, this large amount of data is kept separate, this does not happen and no processor time is wasted.
    In the attached file the good method does have the array (large amount of unrelated data) within the cluster and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead, I believe at least one duplicate is made.
    Am I correct in thinking that this is the behaviour LabVIEW uses with clusters? (I expected LabVIEW to duplicate only the variable values chosen in the unbundle node. As this choice is fixed at compile time, it would seem to me that the compiler should be able to recognise that the other cluster variables are never used.)
    Is there a way of keeping the efficiency of using many separate variables (potentially ~50) whilst keeping the ease of use of a single cluster variable?
    The attachment:
    A VI that generates the Fibonacci sequence is attached (the I32 used wraps at about the 44th value, so values from that point on are wrong). The calculation is iterative, using a for loop. Two variables are needed to perform the iteration; they are stored in a cluster and passed from iteration to iteration within it. To provide the large amount of unrelated data, a large array of reasonably sized strings is included.
    The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to have the array separate from the other pieces of data, even if it passes through the for loop (no massive overhead).
    Try replacing the array shift registers with tunnels in the good case and see if you can repeat my observation that using tunnels causes overhead in comparison to shift registers whenever there is no other reason to duplicate the array.
    I am running LabVIEW 7 on Windows 2000 with sufficient memory that the page file is not used in this example.
    Thank you all very much for your time and for sharing your LabVIEW experience,
    Richard Dwan
    Attachments:
    Fibonacci_test.vi ‏71 KB

    > That is an interesting observation you have made, and it seems to me to be
    > quite inexplicable. The trick is interesting but not practical for me
    > to use in developing a large piece of software. Thanks for your input
    > - I think I'll be contacting technical support for an explanation,
    > along with some other anomalies involving large arrays that I have
    > spotted.
    >
    The deal here is that the bundle and unbundle nodes must be very careful
    when they are swapping elements around. This used to make copies in the
    normal cases, but that has been improved. The reason the sequence
    affects it is that it changes the order of the element movement, so that
    the algorithm succeeds in avoiding a copy. Another, more obvious, way is
    to use a regular bundle and unbundle, not the named variety. These tend
    to have an easier time in the algorithm as well.
    Technically, I'd report the diagram to tech support to see if the named
    bundle/unbundle case can be handled as well. In the meantime, you can
    leave the data unbundled, as in the faster version.
    Greg McKaskle

  • Clustering of Oracle AS 10.1.3.0 nodes in active-active topology

    Hi,
    The following requirements for clustering in our environment exist:
    * Active-active topology
    * External hardware load-balancer
    * Fail-over
    * Application session-state replication
    We use multiple Oracle AS 10.1.3.0 nodes. Applications are deployed on 2 or more OC4J instances on those nodes.
    I've read the Oracle docs on clustering and I have some questions regarding the clustering configuration to use:
    1. When an external hardware load-balancer is used, is configuring an Oracle AS cluster needed then? The docs talk about clusters, but in some cases I'm not sure whether an Oracle AS cluster or a hardware cluster is meant.
    2. It is not entirely clear what type of application clustering to use in combination with an Oracle AS cluster: multicast or peer-to-peer? If I'm correct, the Oracle docs suggest using dynamic peer-to-peer application clustering when an Oracle AS cluster is used, and multicast for standalone OC4J. Is that correct?
    Thanks,
    Ronald

    1. Well, the idea is to have all clients route their HTTP requests to the physical load-balancer, which will forward each request to one of the Oracle AS nodes based on the load on those machines. If application state is synchronized between the application(s) on the OC4J instances, would it still be necessary to configure an Oracle AS cluster, or are there advantages to having such a cluster? One of the pros I can think of is that groups are created automatically, so deployment and management of the application(s) becomes easier. Are there any others? Or is this configuration without an Oracle AS cluster not a good idea?
    2. Clear.
    3. JMS, thanks for the tip.
    4. Yes we use Oracle RAC. Does that impose constraints on the Oracle AS clustering and/or application clustering?
    Ronald

  • Regarding the clusters  in HR - ABAP

    Hi All,
                  Does anybody have documents regarding clusters? If so, can you please share them with me?
    Thanks & Regards,
      Suresh

    Hi Suresh,
    Go through the following link. It will give you a good overview of clusters in HR:
    www.hrexpertonline.com/downloads/12-04.doc
    Regards,
    Deepthi Reddy

  • Multiple stored procedure run across clusters

    Hi there,
    Currently we are having a single Oracle 11g instance. All our stored procedures are run on this database either directly from within the database (DBMS_JOB) or called externally from front-end java web apps.
    The question is that we now have to cater for scenarios where many more avenues (other Java apps) will be calling the stored procedures, and we want to provision for this without impacting the performance or agreed-upon throughput back to the calling applications.
    One option I read about was clustering (RAC) and how it can be configured at the database level to cater transparently to a huge volume of stored procedure calls without requiring changes on the calling entities. The Java front-end apps would still refer to a single database, but Oracle RAC would handle the heavy load seamlessly and transparently across the cluster.
    We don't want to split the execution of a single stored procedure run into multiple processes for performance; we have that part covered by optimizing the queries in the stored procedures.
    Rather, we want to provision for a scenario where multiple apps can spawn calls to the stored procedures simultaneously, and the database handles these parallel invocations efficiently without being overwhelmed by the volume and degrading runtime/response time.
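One common way to provision for such bursts, independent of RAC, is to cap the number of stored-procedure calls in flight and queue the rest. A rough Python sketch of the idea (the names and the limit are hypothetical; a real system would use a connection pool in front of Oracle):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Cap concurrent "stored procedure" executions; extra callers wait their
# turn instead of overwhelming the database server.
MAX_CONCURRENT = 4
gate = threading.Semaphore(MAX_CONCURRENT)

def call_procedure(proc_name, arg):
    """Stand-in for invoking a stored procedure over a pooled connection."""
    with gate:  # blocks while MAX_CONCURRENT calls are already in flight
        # a real implementation would borrow a database connection here
        return f"{proc_name}({arg}) done"

# Many front-end apps spawning calls simultaneously:
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(lambda i: call_procedure("calc_totals", i), range(16)))
print(len(results))   # 16 -- all calls complete, at most 4 at a time
```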
    Please provide your thoughts.

    If those stored procedures are making DML calls against more or less the same data, you will introduce contention.
    In a single instance (non-RAC) contention is within the SGA (buffer_cache, shared_pool, latches, enqueues). If the application is not scalable within the single instance, you will likely make performance worse when running it in RAC.
    So, you must first evaluate how it works (or would work) in a single instance database -- find out if it is scalable merely by adding hardware. If it is scalable but your current hardware is limited, you can consider RAC. If it is not scalable and you have serialisation or contention, performance would be worse in RAC.
    Hemant K Chitale

  • Help on Clustering of tables

    Hi,
    Below is an example of 3 tables that I want to cluster.
    I have the following structure of the 3 tables:
    Table A Primary key -> referenced by Table B
    Table A & B Primary key -> referenced by Table C
    I have done the following:
    1. This is the cluster created for the table A primary key ano:
    create cluster a_cl
    (ano number(10));
    2. This creates table A with the cluster clause:
    create table a
    (ano number(10) primary key,
    aname varchar2(20))
    cluster a_cl(ano);
    3. This creates another cluster for table B, as it is referenced by table C (I have no idea if this has to be done):
    create cluster b_cl
    (bno number(10));
    4. This is the command I used to create a table with 2 cluster clauses in it - the cluster of table A and of table B:
    create table b
    (bno number(10) primary key,
    bname varchar2(20),
    ano number(10),
    foreign key(ano) references a(ano))
    cluster a_cl(ano), -- of table A
    cluster b_cl(bno); -- of table B
    The above command gave me the following error:
    ORA-01769: duplicate CLUSTER option specifications.
    I could not proceed further for the clustering of table C due to the above error.
    How do I use the cluster command for a table that is related to 2 or more tables?
    I have gone through many examples of clustering, but almost all of them involve only one related table.
    Can anybody help me in this regard?
    Thanks in advance for any help.
    Regards,
    Rajashree.

    Rajashree,
    I understood the relationships between them. It's just strange, because a many-to-many relationship includes many-to-one. If you have a reason, that's fine.
    Note that a table can be stored in only one cluster, which is why the second CLUSTER clause raises ORA-01769; if you want several related tables clustered together, place them all in a single cluster keyed on the column they share.
    Before creating a cluster, try to find out why you need it. Don't create one just because you heard that clusters can improve performance dramatically. They may, but they are useful only in certain cases, whereas in others they may cause very poor performance. To find useful information about clusters, see the Oracle documentation: Oracle8 Concepts.
    Read my first answer carefully. It shows one possible cluster organization. How you want to organize your cluster depends on what you want. If you don't understand how a cluster may benefit you, don't use one. I recommend reading the Oracle docs carefully to learn about the benefits of clusters.
    Aleksandr

  • Problem in installing iAS6.5 (under clustering) on solaris 8

    I installed iAS 6.5 on Solaris 8. The system consists of one web server and 2 application servers. The directory server is installed on only one application server, and the other application server uses it. Of course, those servers are in the same clustering group.
    But while installing the 2nd server, an error occurred. After I input the application server's admin id and password, the following error message is shown.
    ******* Error Message *******************
    Invalid entry it may be due to :
    1)Suffix which is the root of your user directory tree does not match with Administration Domain of configuration directory server.
    2)The iAS-AT Username entered (admin) already exists in the Directory Server, with a DIFFERENT Password.
    Please press ENTER, and then re-enter the Username and Password
    I installed the servers following the install guide about clustering on Solaris (http://developer.iplanet.com/appserver/samples/cluster/docs/unix-cluster.html).
    Please help me resolve this problem.
    Thank you.

    I resolved this problem. I think it is an iPlanet installation bug.
    I tried to register the application server's admin id in the Directory Server during setup, but the install program did not work normally. After much trial and error, I checked the setup log file, ldapmod.log, which exists during installation and is deleted after setup. It showed that the install program tried to add the entry without a suffix like dc=sun, dc=com.
    So I attempted to enter the admin id including the suffix. An error message like "Bad Command" appeared, but the installer moved on to the next page, and I completed the installation successfully. Of course, some error messages were shown while configuring the iAS Administration Tool, but they did not matter. After setup, I tested the cluster and fail-over, and it worked well.
    Thank you for your concern.

  • IronPort Clustering questions

    Hello all,
    I have some questions about clustering in Ironport:
    Actually I have one IronPort C150 in "standalone mode" with an IP address that takes the mail flow (192.168.1.34).
    We received a second IronPort to set up a cluster configuration between them.
    My questions are:
    1) What happens to the mail flow if the first IronPort (192.168.1.34) moves to a cluster configuration?
    Do I have to configure a virtual address to be the same as the original mail-flow IP address (192.168.1.34), or does the cluster take the original configuration of the first IronPort?
    2) If one IronPort fails, does the second IronPort automatically take the mail, or do I have to reconfigure the IP address manually?
    Thanks for your help.
    PS: Sorry for my english

    I agree with your thoughts on MX records. The biggest benefit of using a load balancer is the management. Once you start getting a large number of hosts in an MX record, you start running into problems with senders correctly resolving your MX records due to improper DNS configuration on the internet (UDP vs TCP). Standing up a large number of hosts behind some load balancers is one potential solution. This of course comes with its own set of challenges.
    I'm still using MX records, but at some point will need to look at having multiple machines behind each host in my MX records to cut down on the size of the returned record.
    I just wish I could get all of my application developers to write their apps to understand MX records. Load balancers have worked well for my outbound environment where most applications are pointing at a host name instead of an MX record.
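    The "understand MX records" point boils down to: resolve the MX set for the domain, then try hosts in ascending preference order. A minimal Python sketch (static records stand in for a real DNS query; the names are illustrative):

```python
# An MX lookup returns (preference, host) pairs; a sending application
# should try the lowest-preference host first and fall back down the list.
def order_mx(records):
    """Return hosts in the order a sender should attempt delivery."""
    return [host for preference, host in sorted(records)]

mx = [(20, "mx2.example.com"), (10, "mx1.example.com"), (30, "backup.example.com")]
print(order_mx(mx))   # ['mx1.example.com', 'mx2.example.com', 'backup.example.com']
```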
    Joe

  • Clustering with wl6.1 - Problems

    Hi,
              After reading a bit about clustering with WebLogic 6.1 (and thanks to
              this list), I have done the following:
              1. Configured machines - two boxes (Solaris and Linux).
              2. Configured servers - WebLogic 6.1 running on both at port 7001.
              3. The Administration server is Windows XP. Here is the snippet of
              config.xml on the Administration Server:
              <Server Cluster="MyCluster" ListenAddress="192.168.1.239"
              Machine="dummy239" Name="wls239">
              <Log Name="wls239"/>
              <SSL Name="wls239"/>
              <ServerDebug Name="wls239"/>
              <KernelDebug Name="wls239"/>
              <ServerStart Name="wls239"/>
              <WebServer Name="wls239"/>
              </Server>
              <Server Cluster="MyCluster" ListenAddress="192.168.1.131"
              Machine="dummy131" Name="wls131">
              <Log Name="wls131"/>
              <SSL Name="wls131"/>
              <ServerDebug Name="wls131"/>
              <KernelDebug Name="wls131"/>
              <ServerStart Name="wls131"
              OutputFile="C:\bea\wlserver6.1\.\config\NodeManagerClientLogs\wls131\startserver_1029504698175.log"/>
              <WebServer Name="wls131"/>
              </Server>
              Problems:
              1. I can't figure out how I set the "OutputFile" parameter for the
              server "wls131".
              2. I have NodeManager started on 131 listening on port 5555. But when
              I try to start server "wls131" from the Administration Server, I get
              the following error:
              <Aug 16, 2002 6:56:58 AM PDT> <Error> <NodeManager> <Could not start
              server 'wls131' via Node Manager - reason: '[SecureCommandInvoker:
              Could not create a socket to the NodeManager running on host
              '192.168.1.131:5555' to execute command 'online null', reason:
              Connection refused: connect. Ensure that the NodeManager on host
              '192.168.1.131' is configured to listen on port '5555' and that it is
              actively listening]'
              Any help will be greatly appreciated.
              TIA,
              

    I have made some progress:
              1. The environment settings on 131 were missing. I executed setEnv.sh
              to setup the required environment variables.
              2. nodemanager.hosts (on 131) had the following entries earlier:
                   # more nodemanager.hosts
                   127.0.0.1
                   localhost
                   192.168.1.135
              I changed it to:
                   #more nodemanager.hosts
                   192.168.1.135
              3. The Administration Server (135) did not have any listen Address
              defined (since it was working without it), but since one of the errors
              thrown by NodeManager on 131 was - "could not connect to
              localhost:70001 via HTTP", I changed the listen Address to
              192.168.1.135 (instead of the null).
              4. I deleted all the logs (NodeManagerInternal logs on 131) and all
              log files on NodeManagerClientLogs on 135.
              5. Restarted Admin Server. Restarted NodeManager on 131.
              NodeManagerInternalLogs on 131 has:
                    [root@]# more NodeManagerInternal_1029567030003
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting listenAddress to '192.168.1.131'>
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting listenPort to '5555'>
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting WebLogic home to '/home/weblogic/bea/wlserver6.1'>
                    <Aug 17, 2002 12:20:30 PM IST> <Info> <NodeManager> <Setting java home to '/home/weblogic/jdk1.3.1_03'>
                    <Aug 17, 2002 12:20:33 PM IST> <Info> <[email protected]:5555> <SecureSocketListener: Enabled Ciphers >
                    <Aug 17, 2002 12:20:33 PM IST> <Info> <[email protected]:5555> <TLS_RSA_EXPORT_WITH_RC4_40_MD5>
                    <Aug 17, 2002 12:20:33 PM IST> <Info> <[email protected]:5555> <SecureSocketListener: listening on 192.168.1.131:5555>
                   And the wls131 logs contain:
                   [root@dummy131 wls131]# more config
                   #Saved configuration for wls131
                   #Sat Aug 17 12:24:42 IST 2002
                   processId=18437
                   savedLogsDirectory=/home/weblogic/bea/wlserver6.1/NodeManagerLogs
                   classpath=NULL
                   nodemanager.debugEnabled=false
                   TimeStamp=1029567282621
                   command=online
                   java.security.policy=NULL
                   bea.home=NULL
                   weblogic.Domain=domain
                   serverStartArgs=NULL
                   weblogic.management.server=192.168.1.135\:7001
                   RootDirectory=NULL
                   nodemanager.sslEnabled=true
                   weblogic.Name=wls131
                   The error generated for the client (131) was:
                   [root@dummy131 wls131]# more wls131_error.log
                   The WebLogic Server did not start up properly.
                   Exception raised:
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.
                   DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   Reason: Fatal initialization exception
                   and the output on the admin server (135) is:
                    <Aug 17, 2002 12:24:42 PM IST> <Info> <[email protected]:5555> <BaseProcessControl: saving process id of Weblogic Managed server 'wls131', pid: 18437>
                    Starting WebLogic Server ....
                    Connecting to http://192.168.1.135:7001...
                    <Aug 17, 2002 12:24:50 PM IST> <Emergency> <Configuration Management> <Errors detected attempting to connect to admin server at 192.168.1.135:7001 during initialization of managed server (192.168.1.131:7001). The reported error was: < weblogic.security.acl.DefaultUserInfoImpl > This condition generally results when the managed and admin servers are using the same listen address and port.>
                   <Aug 17, 2002 12:24:50 PM IST> <Emergency> <Server> <Unable to
              initialize the server: 'Fatal initialization exception
                   Throwable: weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   '>
                   The WebLogic Server did not start up properly.
                   Exception raised:
                   java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl
                   <<no stack trace available>>
                   --------------- nested within: ------------------
                   weblogic.management.configuration.ConfigurationException:
              weblogic.security.acl.DefaultUserInfoImpl - with nested exception:
                   [java.lang.ClassCastException:
              weblogic.security.acl.DefaultUserInfoImpl]
                   at weblogic.management.Admin.initializeRemoteAdminHome(Admin.java:1042)
                   at weblogic.management.Admin.start(Admin.java:381)
                   at weblogic.t3.srvr.T3Srvr.initialize(T3Srvr.java:373)
                   at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:206)
                   at weblogic.Server.main(Server.java:35)
                   Reason: Fatal initialization exception
               6. Now, from the client (131) error, I thought it was something to do
               with security. So I tried to start WebLogic manually (connected as the
               same user). Curiously enough, it does start (it threw some errors for
               some EJBs, but I got the final message):
                   <Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
              <ListenThread listening on port 7001>
                   <Aug 17, 2002 12:30:39 PM IST> <Notice> <WebLogicServer>
              <SSLListenThread listening on port 7002>
                   <Aug 17, 2002 12:30:40 PM IST> <Notice> <WebLogicServer> <Started
              WebLogic Admin Server "myserver" for domain "mydomain" running in
              Production Mode>
               7. As you can see, the domain on the client (131) is "mydomain". But
               shouldn't the Admin server be 192.168.1.135, since this is what I have
               configured for the NodeManager? Or does the error occur because the
               Admin server that Node Manager is configured to work with is 135, while
               in the default scripts the admin server is itself? I'm confused :-)
               Help, anyone?
              

  • 2003 Terminal Server Session Directory Clustering

    Hi everybody ;
    I have question about clustering TS Session Directory Service.
    In my environment I have four terminal servers running Windows Server 2003 R2. We have now bought two new servers that we plan to use as file servers with Windows Server 2008 R2. I installed Windows Server 2008 R2 on these new servers, configured them as file servers, and clustered the two of them. But I also want to cluster my terminal servers' Session Directory service in this cluster environment. My terminal servers run WS2003, but the cluster environment is WS2008. Is this version difference a problem?
    Thanks.

    I don't believe so. 2003 Terminal Services requires at least 2x 2003 servers with the Session Directory service running on them to be clustered in MSCS. 2008 has a whole new session directory which will not serve 2003 remote desktop hosts:
    http://support.microsoft.com/kb/301923
    http://support.microsoft.com/kb/301926
    http://download.microsoft.com/download/8/6/2/8624174c-8587-4a37-8722-00139613a5bc/TS_Session_Directory.doc
    You may want to consider upgrading your old 2003 terminal server environment to 2008, though.

  • Simple question about segments

    Good afternoon,
    I am reading the Oracle Database Concepts and I want to make sure that I've understood correctly.
    A segment may only contain physical data for ONE and ONLY ONE schema object. In other words, there cannot be a segment that would have 1 or more extents that are part of table 1 and other extents that are part of table 2.
    Is this above statement correct in all cases ?
    Thank you for your help,
    John.

    440bx - 11gR2 wrote:
    I haven't gotten to the part about clusters yet, so I don't fully understand your reply, because I thought that my sentences meant exactly the same thing; therefore both should be true or both false.
    You need to be very careful about the terminology you use. ;)
    So far I've covered the relationships of Tablespace -> Segment -> Extent -> Block
    I knew the following to be true:
    1. a block can only contain one part of a schema object (be it part of a table, index, or other)
    Again, it depends on what you mean by "part of a schema object".
    2. an extent is made up of multiple blocks and, being the minimum logical unit of storage allocation, it must therefore store one and only one schema object.
    Correct.
    3. A segment is made up of multiple extents. This one I was not certain about, and it caused my original question, which could be rephrased as: can a segment contain extents that are part of table1 (for instance), another extent that is part of an index for table88, and another extent that is a piece of table52?
    No. Aman has already described it very well.
    Is the answer to (3) above ALWAYS NO? If it isn't, then I have misunderstood something.
    Basically, you started your question with the following statement:
    "A segment may only contain physical data for ONE and ONLY ONE schema object"
    And that is certainly not correct. You are probably interchanging the words "extents" and "data" with each other. They cannot be. Replace "data" with "extents" and your understanding so far is correct.
    You may want to re-read your question after you read about clusters. The whole idea of clusters is being able to store related data close to each other, especially when it comes from different tables. Basically, clusters are segments in their own right (with extents of blocks assigned to them), and any tables created in a cluster are only schema objects without storage attributes of their own. This makes it possible for a block in a segment (the cluster) to contain data from multiple tables (which are schema objects).
    Hope this helps.
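    The ownership rules discussed above can be sketched as a small, purely illustrative Java model. This is not Oracle internals, just the invariants from the thread: every extent belongs to exactly one segment, and a cluster is a single segment whose blocks serve several logical tables. All class and table names here are made up.

```java
import java.util.*;

// Illustrative model (not Oracle internals): a Segment owns its Extents
// exclusively; a cluster is one Segment shared by several logical tables.
class Extent {
    final String ownerSegment;          // an extent belongs to exactly one segment
    Extent(String ownerSegment) { this.ownerSegment = ownerSegment; }
}

class Segment {
    final String name;
    final List<Extent> extents = new ArrayList<>();
    // Logical tables mapped onto this segment (1 for a plain table, N for a cluster).
    final Set<String> tables = new LinkedHashSet<>();
    Segment(String name) { this.name = name; }
    Extent allocateExtent() {
        Extent e = new Extent(name);    // every extent is stamped with its owning segment
        extents.add(e);
        return e;
    }
}

public class SegmentModel {
    public static void main(String[] args) {
        // A plain table: one schema object, one segment.
        Segment emp = new Segment("EMP");
        emp.tables.add("EMP");
        emp.allocateExtent();

        // A cluster: ONE segment, but rows of several tables share its blocks.
        Segment cluster = new Segment("EMP_DEPT_CLUSTER");
        cluster.tables.add("EMP2");
        cluster.tables.add("DEPT");
        cluster.allocateExtent();
        cluster.allocateExtent();

        for (Segment s : List.of(emp, cluster)) {
            for (Extent e : s.extents) {
                // Invariant from the thread: no extent is shared between segments.
                assert e.ownerSegment.equals(s.name);
            }
            System.out.println(s.name + ": " + s.extents.size()
                    + " extent(s), tables=" + s.tables);
        }
    }
}
```

    Note how the cluster segment lists two tables while still being one segment with its own extents, which is exactly why a single block inside it can hold rows from both tables.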

  • Weblogic7/examples/clustering/ejb: Automatic failover for idempotent methods?

    This one should be easy since it is from the examples folder of BEA 7 about clustering.
    Ref: \bea7\weblogic007\samples\server\src\examples\cluster\ejb
    I am referring to the cluster example provided with WebLogic Server 7.0 on Windows 2000.
    I deployed the admin server and 2 managed servers as described in the document. Everything works fine as shown by the example. I get both load balancing and failover. Too good.
    Client.java uses a while loop to manage the failover, so on an exception it will go through the loop again.
    I understand from the documentation that a stateless session EJB will provide automatic failover for idempotent stateless beans.
    Case Failover Idempotent (Automatic):
    If methods are written in such a way that repeated calls to the same method do not cause duplicate updates, the method is said to be "idempotent." For idempotent methods, WebLogic Server provides the stateless-bean-methods-are-idempotent deployment property. If you set this property to "true" in weblogic-ejb-jar.xml, WebLogic Server assumes that the method is idempotent and will provide failover services for the EJB method, even if a failure occurs during a method call.
    Now I made 2 changes to the code.
    1. I added the following to the weblogic-ejb-jar.xml of the Teller stateless EJB:
    <stateless-clustering>
    <stateless-bean-is-clusterable>true</stateless-bean-is-clusterable>
    <stateless-bean-load-algorithm>random</stateless-bean-load-algorithm>
    <stateless-bean-methods-are-idempotent>true</stateless-bean-methods-are-idempotent>
    </stateless-clustering>
    So I should get the automatic failover.
    2. I also added a break statement in the catch block around line 230 in Client.java:
    catch (RemoteException re) {
        System.out.println(" Error: " + re);
        // Replace teller, in case that's the problem
        teller = null;
        invoke = false;
        break;
    }
    so that the client program does not loop again and again.
    Now I compile and restart all three of my servers and redeploy the application (just to be sure).
    I start my client and I get automatic load balancing between the servers, which makes me happy.
    But failover...?
    I kill one of the managed application servers in the cluster at a particular test fail point.
    I expect the exception to be taken care of automatically by the error/failover handler in the home/remote stub, but the client program fails and terminates.
    1. What is wrong with the code?
    2. Does automatic failover with idempotent methods also have to be handled by coding a similar while loop for the stateless EJB?
    Your help will be appreciated ASAP.
    Let me know if you need anything more from my system. But I am sure this will be very easy as it is from the sample code...
    Thanks
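    For reference, the retry pattern the client loop implements can be sketched generically. This is a self-contained illustration, not the WebLogic API: the `callIdempotent` helper, the simulated "servers", and the retry count are all invented. The point is only that an idempotent method can be safely re-invoked after a failure, because a duplicate execution cannot corrupt state.

```java
import java.util.List;
import java.util.concurrent.Callable;

// Generic sketch of client-side retry for an idempotent remote call.
public class IdempotentRetry {

    // Retry the call up to maxAttempts times; rethrow if all attempts fail.
    static <T> T callIdempotent(Callable<T> call, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();            // idempotent: safe to repeat
            } catch (Exception e) {
                last = e;                      // e.g. a RemoteException when a node dies
                // In a real client you would re-look-up the stub here,
                // so the next attempt can be routed to a surviving server.
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated cluster: the first "server" always fails, the second succeeds.
        List<Callable<String>> servers = List.of(
                () -> { throw new Exception("server A is down"); },
                () -> "balance=100");

        final int[] next = {0};                // naive round-robin cursor
        String result = callIdempotent(
                () -> servers.get(next[0]++ % servers.size()).call(), 3);

        System.out.println("call succeeded: " + result);
    }
}
```

    Here the first attempt fails and the second succeeds, which is the behavior the automatic failover should provide transparently when stateless-bean-methods-are-idempotent is honored.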
              

    Sorry, I meant to send this to the EJB newsgroup.
    dan

    dan benanav wrote:
    > Do any vendors provide clustering with automatic failover of entity
    > beans? I know that WLS does not. How about Gemstone? If not, is there
    > a reason why it is not possible?
    >
    > It seems to me that EJB servers should be capable of automatic failover
    > of entity beans.
    >
    > dan

  • Clustering Identity Manager

    I am interested in knowing more about clustering IDM. We have IDM running on Tomcat with an Oracle DB.
    Please let me know your thoughts on this and the best way to do it. Feel free to point me to more reference material on this.
    Thanks

    Yes, you install GlassFish on an admin node and on several app nodes, deploy the app via the admin node, and it automatically deploys to the app nodes. The app nodes have a load balancer in front of them. If one app node goes down, the load balancer stops sending web traffic to it.
    Within each individual resource configuration, you can also tell the resource adapter which app node to perform reconciliation on. For example:

    ================== LB ==================
      |       |       |       |       |
    app-a   app-b   app-c   app-d   admin
      |       |       |       |       |
    ----------------------------------------
      admin network (ldap, AD, DB traffic)

    app-a and app-b are "web nodes" that handle user traffic 50/50 (password changes, admin web interface, etc).
    app-c and app-d are "processing nodes" that handle ActiveSync and reconciliation.
    If a node dies, everything fails over and keeps running. If you need more nodes, just create a new node on another server and add it into the mix.
    GlassFish handles the clustering, the load balancer handles the user-visible HA, and IDM handles distributing the processing workload to the app nodes.
    OK, I hope my ASCII art above looks OK....
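    The load-balancer behavior described above (stop sending traffic to a dead node) can be sketched as a toy round-robin dispatcher. Everything here, including the node names and the `up` flag, is invented for illustration; a real deployment would use a hardware or software load balancer with health checks.

```java
import java.util.*;

// Toy round-robin load balancer that skips nodes marked down,
// illustrating the user-visible HA in front of the app nodes.
public class RoundRobinLB {
    static class Node {
        final String name;
        boolean up = true;               // flipped by a health check in real life
        Node(String name) { this.name = name; }
    }

    private final List<Node> nodes;
    private int cursor = 0;

    RoundRobinLB(List<Node> nodes) { this.nodes = nodes; }

    // Pick the next healthy node, or null if every node is down.
    Node next() {
        for (int i = 0; i < nodes.size(); i++) {
            Node n = nodes.get(cursor);
            cursor = (cursor + 1) % nodes.size();
            if (n.up) return n;          // healthy node: route traffic here
        }
        return null;                     // whole pool is down
    }

    public static void main(String[] args) {
        Node a = new Node("app-a"), b = new Node("app-b");
        RoundRobinLB lb = new RoundRobinLB(List.of(a, b));

        System.out.println(lb.next().name);  // alternates while both are up
        System.out.println(lb.next().name);

        a.up = false;                        // app-a dies...
        System.out.println(lb.next().name);  // ...and is skipped from now on
        System.out.println(lb.next().name);
    }
}
```

    While both nodes are up, requests alternate app-a/app-b; once app-a is marked down, every request goes to app-b, which is the 50/50-then-failover behavior described for the web nodes.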

  • Does OBIEE clustering have restrictions on the number of servers?

    Hi,
    I want to set up OBIEE on 4 RedHat AS 5 Linux servers (10t, 20t, 30t and 40t). I haven't separated the presentation and BI servers. I made 10t the primary and 20t the secondary cluster controller. I understand the schedulers can be on only two servers, one active and one inactive.
    I have some concerns about clustering:
    1. Is BI clustering only for BI servers, or does it include the presentation services as well?
    2. I gave the list of 4 servers in NQSCluster.INI for the parameter "SERVERS". I started the cluster services on 10t and 20t. When I start the services on 30t and 40t I get the following error:
    Copyright (c) 1997-2006 Oracle Corporation, All rights reserved
    2008-05-22 13:53:24
    [71004] This machine (30t) is not specified as either the Primary or Secondary Cluster Controller.
    2008-05-22 13:53:24
    [71002] The Cluster Controller is terminating due to errors in the cluster configuration.
    2008-05-22 13:53:24
    [71026] Cluster Controller exiting.
    2008-05-22 19:31:15
    [71004] This machine (30t) is not specified as either the Primary or Secondary Cluster Controller.
    2008-05-22 19:31:15
    [71002] The Cluster Controller is terminating due to errors in the cluster configuration.
    2008-05-22 19:31:15
    [71026] Cluster Controller exiting.
    BI Gurus, Please advise.
    Thanks,
    Cherrish

    Update: the exception is caused by the mysql-connector package. What I tried next was to sign the mysql-connector .jar file before putting it into my application .jar file. Then I signed my own .jar again, but it still doesn't work and the same error occurs.

Maybe you are looking for

  • This computer has previously been synced with an iphone

    I've got an iphone 3g & everytime I connect it to my computer it says "this computer has previously been synced with an iphone". It then says set up as a new phone or restore from the back up of iphone. This is the only iphone I've had & I've always

  • SOAP/HTTP adapter for ABAP proxy

    Hi, Is it possible to send data to a SAP system using an inbound proxy and  SOAP/ HTTP adapter. As per my knowledge , XI adapter is generally used for Proxy communication. If SOAP/HTTP can be used then what should be the message protocol. There are 2

  • New Issues with Flash builder - Second time you do search from imported Tutorial file it breaks with AS error

    Okay, so I had an issue with the Design View that was broke because of the <s:text> tag inside of the <s:RichText> tag. Adobe fiexed that in one of the next released builds (discussion here in fourms). So I took the tutorial file (again) and started

  • Audio freezes during photos transitions

    Hello, after burning DVD, when playing the disc on a regular dvd player, the music jumps during the photos transitions in the diaporama. When playing the disc on the macbook, everything is fine. My wish is to send the disc to other people so they can

  • Can't edit contacts in address book under Lion

    Instead of the "Edit" and "Share" buttons that normally appear at the bottom of my listed contacts, I only have "Add Contact" and "Share."  What gives?  Why can't I edit any of my contacts?