Questions about entity bean caching/pooling

We have a large J2EE app running on WebLogic 6.1 SP4. We are using entity beans
with CMP/CMR. We have about 200 entity beans, and they are accessed quite heavily. We are
struggling to find the right settings for max-beans-in-cache and idle-time-out.
The current max heap setting is 2GB. With the current settings (the default
max-beans-in-cache of 1000, with a few exceptions to take care of CacheFullExceptions)
we run into extended GC after about 4 hours. The amount of memory freed gradually
decreases over time and hovers around the 30% mark after about 4 hours of running at
the expected load. In relation to this we have the following questions:
1. What does caching mean?
a. If a bean with primary key 100 exists in the cache, what is the expected behaviour
when the following is done:
i. findByPrimaryKey(100)
ii. findBySomeOtherKey(xyz), which results in loading the bean with primary key 100
iii. CMR access to the bean with primary key 100
Is the instance in the cache reused at all between transactions?
If there is minimal reuse of the beans in the cache, is it fair to assume that caching
can only help with loading beans within a transaction? If this is the case, is there
any reason to increase max-beans-in-cache other than to avoid CacheFullExceptions?
In other words, is it wrong to say that max-beans-in-cache should be set to the
minimum value that avoids CacheFullExceptions?
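To make the scenario concrete, here is a rough sketch of the three access patterns;
the OrderLocalHome/OrderLocal names, the findByCustomer() finder and the CMR accessors
are made up purely for illustration:

import java.util.Collection;
import javax.naming.Context;
import javax.naming.InitialContext;

// Hypothetical CMP 2.0 access inside one container-managed transaction.
public void illustrateCacheQuestion() throws Exception {
    Context ctx = new InitialContext();
    OrderLocalHome home =
        (OrderLocalHome) ctx.lookup("java:comp/env/ejb/OrderLocalHome");

    // i.   direct lookup by primary key
    OrderLocal o1 = home.findByPrimaryKey(new Integer(100));

    // ii.  a different finder that happens to return the bean with PK 100
    Collection byCustomer = home.findByCustomer("xyz");

    // iii. CMR navigation that also ends up at the bean with PK 100
    OrderLocal o3 = o1.getCustomer().getLastOrder();   // made-up CMR accessors

    // The question: within this transaction, and across later transactions,
    // which of these accesses reuse the cached instance, and which ones
    // call ejbLoad() / hit the database again?
}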
2. Again, what is the driver for setting idle-time-out to a particular value? (We currently
have it at 30 secs.) Part of the answer to this question again comes back to
how much reuse is made of the cache. Is it right to say that it should be
set to a very low value? (Why is the default 10 minutes?)
3. Can you point us to any documentation that explains how all this works
in more detail, particularly for entity beans? We have already read
the WebLogic documentation as is. Is there anything that gives more explicit detail?
Any tools that could be of use?
4. What is the right parameter (among the values the WebLogic console
shows) to watch when optimizing?
Thanks in advance for your help
Cheers
Arun

The behaviour changes according to these descriptor settings: concurrency-strategy,
db-is-shared and include-updates.
1. If concurrency-strategy is Database, then the database is used to provide locking
and db-is-shared is ignored. A bean's ejbLoad() is called once per transaction,
and the 'cache' is really a per-transaction pool. A findByPrimaryKey() always
initially hits the db, but can use the cache if called again in the same txn (although
in practice you'd just pass a reference around). A findByAnythingElse() always hits
the db.
2. If concurrency-strategy is ReadOnly then the cache is longer-term: ejbLoad()
is only called when the bean is activated; thereafter, the number of times ejbLoad()
is called is influenced by the setting of read-timeout-seconds. A findByPrimaryKey()
can use the cache. A findByAnythingElse() can't.
3. If concurrency-strategy is Exclusive then db-is-shared influences how many
times ejbLoad() is called. If db-is-shared is false (i.e. the container has exclusive
use of the underlying table), then the ejbLoad() behaviour is more like ReadOnly
(2. above), and the cache is longer-term. If db-is-shared is true, then the ejbLoad()
behaviour is like Database (1. above).
Exclusive concurrency reduces the number of ejbLoad() calls and increases the effectiveness
of the cache, but it can reduce application concurrency, as only one instance of a given
entity bean can exist inside the server and access to it is serialised at the txn level.
You can't use db-is-shared = false in a cluster, which makes Exclusive mode less useful.
That's when you think long and hard about Tangosol Coherence (http://www.tangosol.com).
4. If include-updates is true, then the cache is flushed to the db before every
non-findByPrimaryKey() finder call so the finder (which always hits the db) will
get the latest bean values. This overrides a true setting of delay-updates-until-end-of-tx.
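As a rough sketch of what that means in code (the OrderLocalHome/OrderLocal names and the
findByStatus() finder are made up):

import java.util.Collection;
import javax.ejb.FinderException;

// Runs inside one container-managed transaction, with include-updates = true.
public Collection illustrateIncludeUpdates(OrderLocalHome home) throws FinderException {
    OrderLocal order = home.findByPrimaryKey(new Integer(100)); // initial hit on the db
    order.setStatus("SHIPPED");             // change is held in the cache for now

    // Because include-updates is true, the container flushes the pending
    // setStatus() change to the db before running this finder's SQL, so the
    // finder sees "SHIPPED" even though the transaction hasn't committed yet.
    return home.findByStatus("SHIPPED");
}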
The max-beans-in-cache setting refers to the maximum number of active beans (really
beans that have been returned by a finder in a txn that hasn't committed). This
wasn't checked in SP2 (we have an app that accidentally loads 30,000 beans in a
txn with a max-beans-in-cache of 3,000. Slow, but it works, showing 3,000 active
beans, and 27,000 passivated ones...).
This setting is checked in SP5, but I don't know about SP4. So you do need to
size appropriately.
In summary:
- The cache isn't nearly as useful as you'd like. You get far more db activity
with entity beans than you'd like (too many ejbLoad() calls). This is disappointing.
- findByPrimaryKey() finders can use the cache. How long the cache is kept around
depends on concurrency-strategy.
- findByAnythingElse() finders always hit the db.
WebLogic 8 tidies all this up a bit with a cache-between-transactions setting
and optimistic locking. But I believe findByAnythingElse() finders still have
to hit the db - EJB QL is never run against the cache; it is always converted
to SQL and run against the db.
Hope this is of some help - feel free to email me at simon-dot-spruzen-at-rbos-dot-com
(you get the idea!)
simon.

Similar Messages

  • Basic question about Entity Bean.

    Hi:
    Let's suppose that we have code like the one below:
         home = (ProductHome) javax.rmi.PortableRemoteObject.narrow(
             ctx.lookup("ProductHome"), ProductHome.class);
         home.create("mouse");
         home.create("keyboard");
         home.create("monitor");
         home.create("mainboard");
    Q1: Does WebLogic hold the four instances of the Product bean after running this code?
    Q2: When will the instances be returned to the pool or destroyed?
    Q3: Will WebLogic load all the beans if a client executes home.findByAll()
    (findByAll returns all products)? That would consume a lot of memory if a
    mass of clients do that.
    Regards!
    Eric Temel

    "Michael Jouravlev" <[email protected]> wrote in message
    news:[email protected]..
    >
    "Eric Temel" <[email protected]> wrote in message
    news:[email protected]..
    > Q3: Will WebLogic load all the beans if a client executes home.findByAll()
    > (findByAll returns all products)? That would consume a lot of memory if a
    > mass of clients do that.
    WL has a setting which allows you to find a bean without loading it. Search the docs.
    ah, good point. To be redundant: even though the beans are found and not
    loaded (via the finders-load-bean DD setting, which I think is what we're referring to),
    the 'found' but unloaded beans will still take up some room in the entity
    bean cache.
    Something to keep in mind if memory is an issue.
    -thorick
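    As a rough sketch of what that looks like from the client side (Product/ProductHome are
    the beans from the question; the getName() accessor is a made-up illustration):
         import java.util.Collection;
         import java.util.Iterator;
         import javax.rmi.PortableRemoteObject;

         public void listProducts(ProductHome home) throws Exception {
             Collection all = home.findByAll();  // the finder runs one query against the db
             for (Iterator it = all.iterator(); it.hasNext();) {
                 Product p = (Product) PortableRemoteObject.narrow(it.next(), Product.class);
                 // With finders-load-bean set to false, the finder does not load the bean
                 // state; ejbLoad() is deferred until a business method like this is invoked.
                 // Either way, each 'found' bean occupies a slot in the entity bean cache.
                 System.out.println(p.getName());
             }
         }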

  • Very Basic Question about Entity Beans !!!  Need your help.

    Hi,
    I have the following requirement:-
    ==============================
    There is an application A, whose multiple instances can run
    at the same time. There is some data/variable which is to be
    globally shared (i.e. by all the instances). I have thought of using
    entity beans and putting that data in a single record in the DB.
    Approach A:-
    ~~~~~~~~~~~
    Instance 1 of A (with its own Entity Bean) -> Database (only 1 row exists)
    Instance 2 of A (with its own Entity Bean) -> same Database row
    Approach B:-
    ~~~~~~~~~~~
    Instance 1 of A -> shared Entity Bean -> Database (only 1 row exists)
    Instance 2 of A -> shared Entity Bean
    My query is:
    1) In Approach A, both instances of the application
    have their own Entity Bean (running in the same JVM as them,
    packaged with the application). Now both entity bean instances
    represent 1 row in the database. At any one time only 1 entity bean
    will be performing the operation (read/write; the other will be
    disallowed).
    2) In Approach B, both instances of the application (or client) use
    the same Entity Bean, which represents 1 row of the database.
    Which is correct? I have read somewhere that an instance of an Entity Bean
    corresponds to 1 row of the database. If that is the case, Approach
    A would be wrong.
    Please help.

    One entity bean per row is not strictly true; an entity bean can also represent data from multiple tables. The more accurate statement is one entity bean per result set.
    So in Approach A, where you have 2 instances of the application running, it should not be an issue.

  • Entity Beans Cache

    Hi,
    We are running our application on WL Platform 7.0. We have a number of entity beans (about
    30-40) which are container managed and also use CMR.
    The max-beans-in-cache is at its default of 1000. We reach this limit of 1000
    for about 10-15 of these beans within a day or two after a restart of the server.
    (This is a production server; we restart it occasionally for maintenance.) The
    memory usage of the server keeps increasing, and once the entity cache limit is
    reached we see that passivation keeps occurring and the heap usage is always at
    about 80%-95% of the maximum (total heap size is 1.5GB). We assume this could
    be due to the entity beans that are cached by WebLogic. We also occasionally see
    performance problems that are probably due to the GC or passivation.
    We want to lower our memory usage and also get rid of the occasional slow response
    times. To do this, is there any way to flush beans that are no longer used out of the
    EntityCache? WebLogic doesn't seem to flush the cache, only passivate
    beans as and when new beans are required. Is there any setting to change this behaviour?
    Cheers
    Raja V.

    Thanks Thorick,
    We are using Database concurrency and non-read-only beans, hence I believe this
    patch should help us.
    Secondly, are you aware of any way to find out the memory usage of the default
    WLS entity bean cache?
    Cheers
    Raja
    "thorick" <[email protected]> wrote:
    >
    Hi,
    If you are using 'Database' concurrency, then support for an idle-timeout-seconds
    on this cache will be coming in release 7.0sp5. This feature is intended to ease
    heap usage for entity beans using Database/Optimistic/ReadOnly concurrency (but
    NOT Exclusive or read-only!). One sets max-beans-in-cache to be large enough to
    handle periodic or occasional peak loads, and idle-timeout-seconds is set to free
    the cache of unused beans during periods of low demand.
    If you cannot wait for sp5 and are willing to run a patch, there are patches
    available for 7.0sp2 and 7.0sp3. You'll have to contact your support representative
    about these.
    Refer to 'CR110440' courtesy of yours truly !
    Hope this helps
    -thorick

  • Programmatic Invalidation of Entity Bean Cache

    Hi,
    I wonder if there is a way to trigger invalidation of the WAS's entity bean cache from a Java program.
    Does anybody know anything regarding this issue?
    Regards,
    Heiko

    Hi Heiko,
    the SAP EJB container does not support this kind of caching. We support a kind of "read only" entity bean, but you can mark a bean as "read only" only if it will never be updated; any attempt to update it will produce an exception. By comparison, the JBoss "read only" option can be set for beans that are rarely updated.
    This also explains why there is no invalidation command in SAP Web AS.
    We plan to implement a similar entity bean cache in the next releases.
    I hope this is not a showstopper for your porting project. This kind of caching is usually used for performance reasons, so the fall-back would be to define the entity beans as "regular" instead of "read only".
    HTH
    Regards,
    Svetoslav

  • Entity beans caching non-persistent data between transactions

    Some of the properties in our entity bean implementation classes are not declared
    in our descriptor files, and therefore, are non-persistent (we are using container-managed
    persistence); I will refer to these properties as "non-persistent properties".
    In WebLogic 5.1, we've noticed that the non-persistent properties are cached in
    between transactions. For instance, I ask for a particular Person (Person(James)),
    and I set one of the non-persistent properties (Property(X)) inside Transaction(A).
    In Transaction(B) (which is exclusive of Transaction(A)), I access Property(X)
    and find that it is the same value as I had set in Transaction(A)- this gives
    the appearance that non-persistent entity properties are being cached in between
    transactions.
    The same appears to hold true in WebLogic 7 SP1, however, we must use the "Exclusive"
    concurrency-strategy to maintain this consistency.
    I am worried that this assumption we are making of non-persistent properties is
    not valid in all cases, and the documentation does not promise anything in the
    way of such an assumption. I am worried that the container could kill the Person(James)
    entity implementation instance in the pool after Transaction(A), and create a
    new Person(James) instance to serve Transaction(B)- once that happens our assumption
    fails.
    "Database" concurrency strategy seems to fail our assumption on a regular basis,
    but that makes sense, since the documentation states that the "database will maintain
    the cache", and the container seems more willing to kill instances when they are
    finished with, or create new instances for new transactions.
    So my question is this: What is exactly guaranteed by the "Exclusive" concurrency-strategy?
    Will the assumption that we've made above ever fail under this strategy?
    Thanks in advance for any help.
    Regards,
    James

    It simply means that there is only one entity bean instance per PK in the
    server, and the transaction which uses it locks it exclusively.
    James DeFelice <[email protected]> wrote:
    Thank you for the suggestion. I have considered taking this path, but before I
    make a final decision, I was hoping to get a clear answer to the question that
    I stated below:
    What EXACTLY is guaranteed by the "Exclusive" concurrency-strategy? Maybe someone
    from BEA knows?
    "Cameron Purdy" <[email protected]> wrote:
    To be safe: you should clear those values before ejbLoad() or set them
    after it (or both).
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
    "James DeFelice" <[email protected]> wrote in message
    news:[email protected]...
    Some of the properties in our entity bean implementation classes arenot
    declared
    in our descriptor files, and therefore, are non-persistent (we areusing
    container-managed
    persistence); I will refer to these properties as "non-persistentproperties".
    In WebLogic 5.1, we've noticed that the non-persistent properties arecached in
    between transactions. For instance, I ask for a particular Person(Person(James)),
    and I set one of the non-persistent properties (Property(X)) insideTransaction(A).
    In Transaction(B) (which is exclusive of Transaction(A)), I accessProperty(X)
    and find that it is the same value as I had set in Transaction(A)-this
    gives
    the appearance that non-persistent entity properties are being cachedin
    between
    transactions.
    The same appears to hold true in WebLogic 7 SP1, however, we must usethe
    "Exclusive"
    concurrency-strategy to maintain this consistency.
    I am worried that this assumption we are making of non-persistentproperties is
    not valid in all cases, and the documentation does not promise anythingin
    the
    way of such an assumption. I am worried that the container could killthe
    Person(James)
    entity implementation instance in the pool after Transaction(A), andcreate a
    new Person(James) instance to serve Transaction(B)- once that happensour
    assumption
    fails.
    "Database" concurrency strategy seems to fail our assumption on a regularbasis,
    but that makes sense, since the documentation states that the "databasewill maintain
    the cache", and the container seems more willing to kill instanceswhen
    they are
    finished with, or create new instances for new transactions.
    So my question is this: What is exactly guaranteed by the "Exclusive"concurrency-strategy?
    Will the assumption that we've made above ever fail under this strategy?
    Thanks in advance for any help.
    Regards,
    James
    Dimitri
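    In code, Cameron's suggestion amounts to something like the sketch below (PersonBean
    and propertyX are hypothetical names; only the non-persistent field handling is shown):
         // Sketch: reset non-persistent state in ejbLoad() so a recycled instance
         // never exposes values left over from a previous transaction.
         public abstract class PersonBean implements javax.ejb.EntityBean {

             // CMP fields declared in the descriptor use abstract accessors.
             public abstract String getName();
             public abstract void setName(String name);

             // Non-persistent field: not declared in the descriptor, lives only
             // in this bean instance.
             private transient Object propertyX;

             public void ejbLoad() {
                 // Clear the non-persistent value whenever fresh state is loaded.
                 propertyX = null;
             }

             public Object getPropertyX() { return propertyX; }
             public void setPropertyX(Object value) { propertyX = value; }

             // The other EntityBean callbacks are no-ops in this sketch.
             public void ejbStore() {}
             public void ejbActivate() {}
             public void ejbPassivate() {}
             public void ejbRemove() {}
             public void setEntityContext(javax.ejb.EntityContext ctx) {}
             public void unsetEntityContext() {}
         }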

  • Important conceptual question about Application Module, Maximum Pool Size

    Hello everyone,
    We have a critical question about the Application Module default settings (taking the DB connections from a DataSource)
    I know that on the Web it is generally suggested that each request must end with either a commit or rollback when executing PL/SQL blocks "directly" on the DB without the framework BC/ViewObject/Entity service intervention.
    Now, for various reasons, we started to develop our applications on the assumption that each Web Session would reference exactly one DB session (opened by any instance taken from the AM pool) for the whole duration of the session, so that the changes made by each Web session to its DB session would never interfere with the changes made by "other" Web Sessions to "other" DB sessions.
    In other words, because of that conviction we often implemented a sort of "transaction" that opens and closes (with either commit or rollback) each DB session not in/after a single HTTP request, but across many HTTP requests.
    As a concrete example think of this scenario:
    1. the user presses the "Insert" button. An HTTP request is fired. The action listener is executed and ends up with inserting rows in a table via a PL SQL block (not via the ViewObjects API).
    2. no commit or rollback after the above PL/SQL block is done yet.
    3. finally the user presses a "Commit" or "Rollback" button, firing the call to the appropriate AM method.
    Those three requests make up what I called a "transaction".
    From the documentation it's clear that there is no guarantee that the AM instance + DB session pair stays the same during all the requests.
    This means that, during step 2, it's possible that another user might reference the same "pending" AM/DB session for his needs and somehow "steal" the work done via PL/SQL after step 1. (This happens because sessions taken from the pool are always rolled back by default.)
    Now my question is:
    Suppose we set the "Maximum Pool Size" parameter to a very large number (though always lower than the maximum number of concurrent users):
    Is there any guarantee that all the requests will be isolated in that case?
    I hope the problem is clear.
    Let me know if you want more details.

    Thanks for the answers.
    If I understand correctly from all your answers about resource availability, this means that even supposing the framework is always able to give us the same AM instance back from the AM pool (by following the session-affinity criteria), there is still no "connection affinity" with the connections from the DataSource.
    The "same AM instance" might take a new DB connection from the DataSource's connection pool if necessary. If that happens, it can give us the same problems as taking a new AM instance (i.e. not following session affinity) in the first place: each time a new connection is taken (either via a new AM instance, or via the same AM instance plus a new DB connection), the corresponding DB session is rolled back by default.
    That clears any pending transactions we might have performed earlier with direct PL/SQL calls that bypass the AM services during the life cycle of our application, so the new HTTP request starts with a clean DB session.

  • Some problems about entity bean

    From what I have learnt from books, a client uses the primary key to get an instance of an entity bean. However, I want to create some entity beans for read-only purposes; that is, the bean will not be instantiated from the client side directly. What form should that object take in my system? As the data is persistent, it will not be a value object. Could you give me some suggestions?

    If you want entity-bean instances in order to read data from the database, you should use a find method instead of a create method. If you want to ADD a new row to the table, then use a create method.
    If you only want to read, it is often better to use a session bean and get a RowSet from a JDBC ResultSet (but please don't update the data in the RowSet: doing so will update the database but not the entity bean).
    I hope I have understood your question correctly and that you understand my words too.
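    As a rough sketch of the difference (ProductLocalHome/ProductLocal and getDescription() are made-up names):
         // Sketch only; exception handling kept minimal.
         public void readVersusCreate(ProductLocalHome productHome)
                 throws javax.ejb.FinderException, javax.ejb.CreateException {
             // Reading existing data: use a finder, not create().
             ProductLocal existing = productHome.findByPrimaryKey("mouse");
             String description = existing.getDescription();

             // Adding a new row: this is what create() is for.
             ProductLocal added = productHome.create("trackball");
         }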

  • SUP - 2 questions about the CDB (cache database)

    Hi,
    I have 2 questions about the cache database and the cache groups:
    1 - How exactly does the "On demand" cache group policy work? I know that an Online cache group stores no data in the CDB and makes direct requests from the device to the backend, that DCN is based on updates pushed from the backend, and that Scheduled is based on a time period, but I don't understand how "On demand" exactly works, and why it has a time period too.
    2 - Is it possible to query the cache database table to check the data that SUP has stored? How can I do this?
    Thank you!

    I posted a similar question in SUP Apps project not too long ago and  Paul Horan provided this useful reply:
    Create a "Sybase ASA v12.x for Unwired Server" connection profile in the Enterprise Explorer.  I named mine CDB.
    : Host = localhost (or whatever the machine name is)
    : Port = 5200
    : Database name = "default"
    : User Name = "dba"
    : Password = "sql"
    Obviously, change the userid/password to match, if you changed them during install time.
    Connect, and you'll see the "default" database displayed.
    Navigate down through the Tables folder, and the first subfolder is labeled something like [#should_delete_sk ...]  Start there.
    You'll see a bunch of tables with the naming convention "D1" + package name + package version + MBO name.  These are the cache tables for the MBOs.

  • Need help about entity bean relationship

    I have two entity beans: userBean(userid, username, password, email) and subscriptionBean(email, subtopic), so the email column is a FK, right?
    But neither the email nor the subtopic field in subscriptionBean is set to be unique; I mean, there can be many rows with the same email address and different subtopic subscriptions.
    So there is NO primaryKey in my second entity bean. Is that OK?

    Hello,
    Your EJB MUST have a primary key, so add a field (maybe subscriptionId) to your bean that will be unique.
    Regards,
    Sebastien Degardin.
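    For example, the bean class could expose the new key alongside the existing fields (a sketch;
    SubscriptionBean and the field names follow the question, subscriptionId is the suggested addition):
         // Sketch of the suggestion: give subscriptionBean its own unique key field.
         public abstract class SubscriptionBean implements javax.ejb.EntityBean {

             // New surrogate primary key (e.g. filled from a sequence).
             public abstract Integer getSubscriptionId();
             public abstract void setSubscriptionId(Integer id);

             // Foreign key back to userBean, plus the subscription topic;
             // neither needs to be unique on its own.
             public abstract String getEmail();
             public abstract void setEmail(String email);

             public abstract String getSubtopic();
             public abstract void setSubtopic(String subtopic);

             // EntityBean callbacks omitted from this sketch.
         }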

  • About entity bean

    Hi, do entity beans exist in CE 7.1 SR5? I cannot find them.

    Snehal,
    Not quite correct. EJB 3.0 still supports entity beans; they are not removed and not even officially "deprecated". But you're right to some extent - JPA is now the preferred and recommended persistence technology for Java EE (and also Java SE).
    BTW: No JEE - it's Java EE
    Cheers,
    Vladimir
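    For comparison, a JPA entity is just an annotated class rather than an entity bean (a minimal sketch;
    the Customer class and its fields are invented):
         import javax.persistence.Entity;
         import javax.persistence.Id;

         // Minimal JPA entity, shown only as a point of comparison with EJB entity beans.
         @Entity
         public class Customer {

             @Id
             private Long id;

             private String name;

             public Long getId() { return id; }
             public String getName() { return name; }
             public void setName(String name) { this.name = name; }
         }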

  • A question about entity manager in stateless session bean.

    JSR 220 ejbcore, page 47 : stateless session bean: All business object references of the same interface type for the same stateless session bean have the "same object identity", which is assigned by the container.
    So, if we have two session beans in client code...
    @EJB Cart cart1;
    @EJB Cart cart2;
    then cart1.equals(cart2)==true
    If we declare entity manager in stateless session bean:
    @PersistenceContext(unitName = "ds", type = PersistenceContextType.TRANSACTION)
    private EntityManager em;
    If cart1 and cart2 are the same reference, do we have any problem when using the same reference (maybe the same em?) to get data from db?

    > If cart1 and cart2 are the same reference, do we have any problem when using
    > the same reference (maybe the same em?) to get data from db?
    No. In EJB, there is a distinction between the EJB reference and the bean instance.
    Each time you make an invocation on an EJB reference for a stateless session bean,
    the container can choose any instance of that bean's bean class to process the
    invocation. That's true whether you invoke the same reference multiple times or
    two different references to the same bean.
    Each bean instance is guaranteed to be single-threaded.
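    As a rough illustration (assuming Cart is the business interface from the question; everything else here is invented):
         import javax.ejb.Stateless;
         import javax.persistence.EntityManager;
         import javax.persistence.PersistenceContext;
         import javax.persistence.PersistenceContextType;

         // Sketch only: CartBean is an invented implementation of the question's Cart interface.
         @Stateless
         public class CartBean implements Cart {

             // Each bean instance gets its own injected EntityManager proxy; the underlying
             // persistence context is per-transaction (PersistenceContextType.TRANSACTION).
             @PersistenceContext(unitName = "ds", type = PersistenceContextType.TRANSACTION)
             private EntityManager em;

             public void addItem(String item) {
                 // Logging the instance shows that calls made through "equal" references
                 // (cart1.equals(cart2) == true) may still be served by different instances.
                 System.out.println("served by instance " + System.identityHashCode(this)
                         + ", em open: " + em.isOpen());
                 // ... use em here to read or write entities within this transaction ...
             }
         }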

  • Newbie question about entity and view objects

    Hi everyone,
    My first ADF application in JDeveloper is off to a difficult start. Having come from a Forms background, I know that it is necessary to avoid using post-query type lookups in order to have full filtering using F11/Ctrl+F11. This means creating a CRUDable view and getting as much of the lookup data as possible into the view without losing the ability to modify the data. In JDeveloper, I do not know how to build my data model to support that. My thought was to start with a robust updateable view as my main CRUD EO and then create a VO on top of that with additional EOs or VOs. But I have found that I cannot add VOs to another VO. However, if I link the VOs then I have a master-detail, which will not work for me.
    For example, I have two joins to CSI_INST_EXTEND_ATTRIB_V shown in the queries below and need to have that show in the table displaying the CRUD VO’s data. It seemed that the best way to do this is to create a CSI_INST_EXTEND_ATTRIB_V entity object and view object. The view object would have two parameters, P_INSTANCE_ID and P_ATTRIBUTE name. Both the building and the unit are needed on the same record, and this is not a master-detail even though it might look that way. (A master-detail data control will not work for me because I need all this data to appear on the same line in the table.) So, I need help figuring out the best way to link these to the main (CRUD) view; I want as much of this data as possible to be filterable.
    select
    cieav.attribute_value
    from
    csi_inst_extend_attrib_v cieav
    where cieav.instance_id = p_instance_id
    and cieav.attribute_code = 'BUILDING NAME'
    select
    cieav.attribute_value
    from
    csi_inst_extend_attrib_v cieav
    where cieav.instance_id = p_instance_id
    and cieav.attribute_code = 'UNIT NAME'
    Ultimately, I need to display a ton of data in each record displayed in the UI table, so a ton of joins will be needed. And, I need to be able to add records using the UI table also.
    James

    Hi Alejandro,
    Sorry if I caused confusion with my first post. What I had in mind assumed that I could have a single CSI_INST_EXTEND_ATTRIB_V EO with a BuildingVO and UnitVO on top of it. So, I wrote the queries individually to show how I would invoke each view. When I realized that confused the issue, I rewrote the query to explain things better.
    Now, having seen your 2 queries: you need to create 2 EOs, one for each table. Then create an association between the 2 EOs (this will be the join you are talking about). Then you need to create a VO based on one of the EOs, and afterwards you can modify it and add the second EO (this is where you select the join type).
    After you're done with this, you'll have 1 VO that supports CRUD on both tables.
    There were three tables in the query: CIEAV_BUILDING, CIEAV_UNIT, and T -- the main CRUD table. When you say that I should create two EOs, do you mean that they are to be for CIEAV_BUILDING and CIEAV_UNIT? Or, CIEAV and T? Assuming CIEAV and T, that sounds like it would allow me to show my building or unit on the record but not both.
    By the way, everything is a reference except for the main CRUD table.
    Look forward to hearing from you. Thanks for your help (and patience).

  • Question about connection between cache engine and cat6k

    Dear sir,
    Here is the problem description, please give me some help, thank you so much:
    The Catalyst 6509 is enabled for WCCP v2. The CE 7320 also has WCCP v2 enabled. WCCP service 91 is configured on the 6509. Service-number 91 and port-list 1 (with port number 8080) are also configured on the CE 7320. WCCP communicates well for service number 91,
    but browsing web pages on port 8080 always fails.
    1.6509 wccp configuration:
    ip wccp web-cache redirect-list 30
    ip wccp 91
    interface Vlan10
    ip address 211.162.224.2 255.255.255.240
    ip wccp web-cache redirect out
    ip wccp 91 redirect out
    2.ce7320 wccp configuration:
    wccp router-list 1 211.161.1.49
    wccp port-list 1 8080
    wccp web-cache router-list-num 1
    wccp service-number 91 router-list-num 1 port-list-num 1 application cache
    wccp version 2
    3.show info. from 6509 and ce 7320:
    gwbn7320#sh wccp content-engines
    Content Engine List for Service: Web Cache
    IP address = 211.161.1.50
    Routers seeing this Content Engine(1)
    211.162.224.2
    Content Engine List for Service: WCCPv2 Service 91
    IP address = 211.161.1.50
    Routers seeing this Content Engine(1)
    211.162.224.2
    gwbn7320#sh statistics http savings
    Statistics - Savings
    Requests Bytes
    Total: 90685 460066803
    Hits: 936 162710
    Miss: 89749 459904093
    Savings: 1.0 % 0.0 %
    6509-left#sh ip wccp
    Global WCCP information:
    Router information:
    Router Identifier: 211.162.224.2
    Protocol Version: 2.0
    Service Identifier: web-cache
    Number of Cache Engines: 1
    Number of routers: 1
    Total Packets Redirected: 2525
    Redirect access-list: 30
    Total Packets Denied Redirect: 0
    Total Packets Unassigned: 146
    Group access-list: -none-
    Total Messages Denied to Group: 0
    Total Authentication failures: 0
    Service Identifier: 91
    Number of Cache Engines: 1
    Number of routers: 1
    Total Packets Redirected: 0
    Redirect access-list: -none-
    Total Packets Denied Redirect: 0
    Total Packets Unassigned: 0
    Group access-list: -none-
    Total Messages Denied to Group: 0
    Total Authentication failures: 0
    Regards,
    Sha

    Gilles,
    Thank you!
    Here is the result:
    6509-left#sh ip wccp 91 detail
    WCCP Cache-Engine information:
    IP Address: 211.161.1.50
    Protocol Version: 2.0
    State: Usable
    Redirection: GRE
    Initial Hash Info: 00000000000000000000000000000000
    00000000000000000000000000000000
    Assigned Hash Info: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    Hash Allotment: 256 (100.00%)
    Packets Redirected: 180
    Connect Time: 00:07:06
    Regards,
    Sha

  • Question about using ZFS root pool for whole disk?

    I played around with the newest version of Solaris 10/08 over my vacation by loading it onto a V210 with dual 72GB drives. I used the ZFS root partition configuration and all seemed to go well.
    After I was done, I wondered whether using the whole disk for the zpool was a good idea or not. I did some looking around but I didn't see anything that suggested it was good or bad.
    I already know about some of the flash archive issues and will be playing with those shortly, but I am curious how others are setting up their root ZFS pools.
    Would it be smarter to set up, say, a 9GB partition on both drives, create the root zpool on that and mirror it to the other drive, and then create another ZFS pool from the remaining disk space?

    route1 wrote:
    > Just a word of caution when using ZFS as your boot disk. There are tons of bugs in ZFS boot that can make the system un-bootable and un-recoverable.
    Can you expand upon that statement with supporting evidence (BugIDs and such)? I have a number of local zones (sparse and full) on three Sol10u6 SPARC machines and they've been booting fine. I am having problems LiveUpgrading (lucreate) that I'm scratching my head to resolve. But I haven't had any ZFS boot/root corruption.
