Database overhead?

I'm considering using BDB to store fairly large numbers of keys (mostly RECNO, so 4 bytes) with very simple data (probably 4-8 bytes at most). I expect to have around 200k nodes, and was wondering how much space overhead the btree structure would add to the file. With 200k nodes and ~12 bytes per record (key + value), the basic content of the database would be around 2.3 MB. Would insertion order affect the overhead (i.e. fragmentation, if such a concept is relevant) if I allowed duplicates? I would be adding the records in recno-sequential order, so I don't need to worry about allocation of intermediate keys.
Thanks,
Daniel Peebles

Hi Daniel,
I'm considering using BDB to store fairly large
numbers of keys (mostly RECNO, so 4 bytes)...
how much space overhead the btree structure would
give the file....
I would be adding the records in recno sequential
order...
Can you please clarify for me which access method you are using, Recno or Btree? You can paste here the flags with which you're opening the environment and the database.
Thanks,
Bogdan Coman
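For illustration, a minimal sketch of the kind of open flags being asked about, using the Berkeley DB Java binding (com.sleepycat.db); the home directory, database name, and settings below are assumptions for the example, not details from the post above:

import java.io.File;
import com.sleepycat.db.Database;
import com.sleepycat.db.DatabaseConfig;
import com.sleepycat.db.DatabaseType;
import com.sleepycat.db.Environment;
import com.sleepycat.db.EnvironmentConfig;

public class RecnoOpenSketch {
    public static void main(String[] args) throws Exception {
        // ~200k records x ~12 bytes (4-byte recno key + 4-8 bytes of data) is
        // roughly 2.3 MB of raw content; the access method then adds per-page
        // and per-item bookkeeping on top of that, depending on page fill.
        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);       // DB_CREATE
        envConfig.setInitializeCache(true);   // DB_INIT_MPOOL
        Environment env = new Environment(new File("./dbhome"), envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);        // DB_CREATE
        dbConfig.setType(DatabaseType.RECNO); // DB_RECNO access method
        Database db = env.openDatabase(null, "nodes.db", null, dbConfig);

        // ... records would be appended here in recno-sequential order ...

        db.close();
        env.close();
    }
}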

Similar Messages

  • Database overhead after migrate from BPEL 10.1.3.4 to BPEL 10.1.3.5

    Hi,
    After the migration from BPEL 10.1.3.4 to BPEL 10.1.3.5, access to the database is much more aggressive. Can anybody help?
    Below are the explain plans for the SAME query on BPEL 10.1.3.4 and on BPEL 10.1.3.5:
    Query A
    SELECT /*+ INDEX ( dm dm_conversation ) INDEX ( ddmr doc_dlv_msg_ref_pk ) */ dm.conv_id,
    dm.conv_type, dm.message_guid, dm.domain_ref, dm.process_id,
    dm.revision_tag, dm.operation_name, dm.receive_date, dm.state, dm.res_process_guid,
    dm.res_subscriber, dm.properties, dm.headers_ref_id, ddmr.dockey, ddmr.message_guid,
    ddmr.part_name, ddmr.domain_ref, ddmr.message_type
    FROM dlv_message dm,
    document_dlv_msg_ref ddmr
    WHERE dm.conv_id = :1
    AND dm.domain_ref = :2
    AND dm.state =:"SYS_B_0"
    AND ddmr.message_type = :"SYS_B_1"
    AND dm.message_guid = ddmr.message_guid
    ORDER BY dm.message_guid;
    -- 10.1.3.5
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 10M| 6208M| | 9475K (1)| 31:35:10 |
    | 1 | MERGE JOIN | | 10M| 6208M| | 9475K (1)| 31:35:10 |
    | 2 | SORT JOIN | | 118K| 57M| 123M| 570K (1)| 01:54:12 |
    |* 3 | TABLE ACCESS BY INDEX ROWID| DLV_MESSAGE | 118K| 57M| | 558K (1)| 01:51:38 |
    |* 4 | INDEX RANGE SCAN | DM_CONVERSATION | 711K| | | 17156 (1)| 00:03:26 |
    |* 5 | SORT JOIN | | 10M| 1017M| 2381M| 8904K (1)| 29:40:59 |
    |* 6 | TABLE ACCESS BY INDEX ROWID| DOCUMENT_DLV_MSG_REF | 10M| 1017M| | 8662K (1)| 28:52:30 |
    | 7 | INDEX FULL SCAN | DOC_DLV_MSG_REF_PK | 10M| | | 141K (1)| 00:28:14 |
    -- 10.1.3.4
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 10M| 6208M| | 4314K (1)| 14:22:53 |
    | 1 | SORT ORDER BY | | 10M| 6208M| 12G| 4314K (1)| 14:22:53 |
    |* 2 | HASH JOIN | | 10M| 6208M| 59M| 2940K (1)| 09:48:12 |
    |* 3 | TABLE ACCESS BY INDEX ROWID| DLV_MESSAGE | 118K| 57M| | 174K (1)| 00:34:53 |
    |* 4 | INDEX RANGE SCAN | DM_CONVERSATION | 711K| | | 5361 (1)| 00:01:05 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| DOCUMENT_DLV_MSG_REF | 10M| 1017M| | 2707K (1)| 09:01:25 |
    | 6 | INDEX FULL SCAN | DOC_DLV_MSG_REF_PK | 10M| | | 44110 (1)| 00:08:50 |
    What can I do for better performance?
    Thanks!

    Hi,
    The auditLevel is set to production.
    I rolled back to version 10.1.3.4, and after the rollback the environment throughput is much better.
    My feeling is that database utilization on version 10.1.3.5 is much higher compared to version 10.1.3.4. The cost of the queries is higher.
    Tks

  • Two database in same machine

    Hi
    I want to create two databases on the same machine. I searched Google but with no luck.
    Is it possible in Oracle 9i R2?
    If so, can anybody please help with that?

    jey84 wrote:
    I want to create two databases on the same machine.
    You can have 100 Oracle databases on the same machine. The issue, though, is why?
    Why do you want to duplicate database overheads? Why two sets of system and process monitoring processes? Log writers? Database writers? Etc.
    Why two SGAs? Why two system tablespaces? Two temp tablespaces, undo tablespaces? Two sets of redo logs? Etc.
    You now have two database footprints. Instead of a single large SGA with a properly sized buffer cache, shared pool, large pool and so on, you now have two smaller SGAs and smaller caches and pools that are less capable and less scalable.
    Why would you want to do this?
    There are no sound technical reasons for running multiple database instances on a server, unless it is something like a 32-CPU/multicore server with 256 GB of memory. But then why not use this as a cloud-type server on which you can run multiple server VMs, one VM for each database, as that provides more flexibility than multiple databases on a single server (physical or VM)?
    In today's world of cluster and cloud computing, running 2 databases instances on the same server does not make much sense. And needs to be backed up by reason and logic that justify such an approach.

  • Running APEX in its own database instance

    Hi,
    I'm currently in the process of wanting to upgrade from APEX 2.0 (or thereabouts) to a current version. I have been thinking, though, that it might be worth actually spinning up APEX in its own instance (most likely just running under OE) and connecting to our core database.
    The advantage this would have would be easier upgrades (very easy to roll back if something goes wrong), plus I imagine APEX would get a significant speed boost from running on 10g, as opposed to 9i, which our core database is.
    Any thoughts on this? Would it be worth the hassle? Am I just going to move the speed issues from slow PL/SQL to remote database overhead?

    It's really not a good idea to install APEX in a separate DB. Performance will be TERRIBLE over DB links. You have to add views or synonyms for all of the remote objects so APEX can see them. The OP is talking about 2 different versions of the database, one of which (9i) isn't even supported anymore under standard support. With 2 different versions of the database, you essentially get a union of all possible bugs in each database.
    Tyler

  • APEX vs Forms 6i - Processor/System/Network Overhead

    We have been developing and deploying applications using Forms 6i for some years, have moved to web forms and are now developing in APEX. The IT department of a client of ours has asked us to provide the relative performance merits and impact on CPU performance for each of the three technologies, with particular focus on the server on which the Oracle database is running, in order to determine a basis for charging.
    Assuming that the application is the same, i.e. there is a common set of PL/SQL commands across the deployment technologies: would it be true to say that the load would be relatively the same for Forms and Web Forms, since these are generally deployed with separate forms or application servers, but would be higher for APEX, since APEX PL/SQL commands are required to build web pages before they are sent (in this case) to the Oracle HTTP server? If so, are there any figures available to substantiate this case?
    Taking this one step further: given that there is a network overhead for each of the deployments (in addition to the database overhead), has anyone conducted an analysis of the relative efficiencies of the three in presenting the same content? Or any insight as to what that might be? This could potentially be offset against an increase in database server cycles, if the former is true.
    Thanks very much for your help.
    Regards, Malcolm

    This will be hard to quantify without running your own tests, but based on feedback from other customers, the server resources required for APEX are somewhere in the neighborhood of 1/3 to 1/10 of those required for Forms. This is especially true for memory, since every Forms client requires a dedicated server connection whereas APEX uses connection pooling. So, let's say you have 1,000 Forms users with an average memory requirement of 5 MB per client (just guessing here): that's 4.8 GB of RAM just for client connections. The typical number of sessions in an APEX deployment of that size is 10-20, i.e. 50-100 MB of RAM for client connections. The CPU impact of rendering APEX pages is VERY insignificant compared to the CPU required for most of the queries your developers will write. One of the busiest internal APEX instances has over 200,000 page views per day and is a 4-processor machine.
    Regarding network traffic, I'm not sure, but you could measure the Forms traffic with Wireshark. You can probably estimate the average page view for an APEX application to be somewhere between 35 and 50 KB, excluding CSS, JavaScript, and images, which should only need to load on the first page view. I highly doubt either client-server Forms or web Forms is less than that.
    Thanks,
    Tyler

  • Object Reference Column in a SqlCe Data Table

    This may be a strange question...
    I've defined a table in a SqlCe database (using VS C#2010 Express) that will never have any data persisted in it.  I did this so I can instantiate versions of it using the designer generated strongly typed table definition without ever directly using
    the one that's in the database.  (That may be obtuse, but it seems to work quite well, I don't need to worry about the database overhead, and I get all the documentation and code creation the designer provides).
    I've now run into a situation where I'd like one of the columns in the instantiated tables to contain references to C# objects (always requiring casting of course to use them).  Can I do that and, if so, what SqlCe datatype would I use to get a column
    that can contain an object reference?
    Thx.  Steve

    Erik -
    I'll admit to trying to cut some corners, which I'd like to do if there's a way that doesn't cross some .Net, Visual Studio  or C# line.
    As you noted, I do want to store a reference to the object ONLY during runtime.  I can certainly use a List or a DataTable created for that purpose, which is really what I want to do.  I'd like to use a DataTable that I already have set up
    to store other variables, so that as I traverse it (e.g. using foreach DataRow), I can reference the object as well as the other variables using the foreach DataRow reference. 
    The trick I want to use is to define the DataTable using the VS database designer rather than manually creating the code that defines it.  The designer is visual, easy to use, self-documenting, and of special note makes the code that uses
    the DataTable easy to read because the resulting DataTable is strongly-typed.  I should have more clearly exhibited in the sample code how I use the strong-typing in the line that sets myDataRow.  It should read: "MyDatabaseDataSet.MyTableDataRow
    myDataRow = myDataTable.Rows[I];", where "MyDatabaseDataSet.MyTableDataRow" refers to the VS-generated strongly-typed DataRow definition.  Note that the code does not store data in the database's
    DataTable itself.  It merely uses the database's DataTable DEFINITIONS to instantiate other tables.
    I have used this approach when all the data in the DataTable is of one or more native C# types (int, decimal, etc.) for which there are corresponding SqlCe types.  However, I'd like to use it with a custom type (MyObject), if that's possible, and am
    looking for what SqlCe type might work for that purpose, if any.
    I admit that this may be an unusual way to create a DataTable definition.  I'm not sure where I came up with the idea.  I may have read it somewhere or just cooked it up myself.  But I really like what the strong-typing provides (without having
    to manually create it myself).
    Is there any way to include custom types using this idea?

  • OO Design Vs. RDBMS functionality

    Hi All,
    I am new to the Java world; I have been an Oracle DBA so far, and I am dealing with the following situation and need your input/comments on it:
    We have a business entity called Button and the button can be one of the following types:
    HTMLButton, MenuButton, SearchButton etc.
    So in our UML Diagram, we defined Button as a parent class (having members that are common to all the derived buttons like buttonId, Name, Description etc.) and the above 3 as the derived classes containing members that are specific to this particular entity. e.g., HTMLButton has a member called HTMLContents and SearchButton has a member called SearchURL etc.
    On the RDBMS side, we are using a single table to store all different kinds of buttons with an additional column "Button_Type". This table contains all the members from all the parent and derived classes. The buttonId will be unique across all the button types, hence a primary key for this table.
    Now, we are writing a few methods for these classes that will be used by the business logic, like remove(), update() and find().
    There are 2 ways to code these methods that I could think of.
    First Way:
    The parent class Button implements all 3 of the methods, but in the find and update methods it only handles the members that are visible to this class. So in the update method, it has SQL to update only the Button members/columns based upon the primary key buttonId. And in the find method, it returns a Button object based upon a given buttonId.
    All the derived classes override these methods by first calling super.update or super.find and then adding their own code to handle their own specific members.
    e.g.
    public class Button {
        public Button find(String buttonId) {
            Button button = new Button();
            // select button_id, button_name, button_desc from table_name where button_id = ?
            // then read the result set (getString, ...) and call the setters on button
            return button;
        }
    }
    public class HTMLButton extends Button {
        public HTMLButton find(String buttonId) {
            HTMLButton htmlButton = new HTMLButton();
            Button button = super.find(buttonId);
            // call the getters on the parent to set the common members first
            htmlButton.setName(button.getName()); // repeat for the other common members
            // now get the HTMLButton-specific members from the database and call the setters on htmlButton:
            // "select html_content, html_btn_height from tab_name where button_id = ?"
            // e.g. htmlButton.setHtmlContent(rs.getString("HTML_CONTENT"));
            return htmlButton;
        }
    }
    So in a nutshell, the first way I described above splits the table into two parts: the first part is handled by the parent and the second part is handled by the derived class.
    Second Way:
    I treat the Button class as abstract, or maybe as a class that just declares the method signatures, while the actual implementation is done at the derived-class level. So when writing a find method on a derived class, it executes SQL like:
    "Select button_id, button_name, button_desc, html_content, html_btn_height from tab_name
    where button_id = buttonId"
    and then calls the setters and returns the object back.
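    As a rough sketch (not from the original post), the second way on the derived class might look like this; the Connection parameter, the JDBC wiring, and the setters setDescription/setHtmlContent are assumptions, while tab_name and the column list are the ones used above:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    public class HTMLButton extends Button {
        // Second way: one SQL statement loads the common and the HTML-specific columns together.
        public HTMLButton find(Connection conn, String buttonId) throws SQLException {
            String sql = "SELECT button_id, button_name, button_desc, html_content, html_btn_height"
                       + " FROM tab_name WHERE button_id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, buttonId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        setName(rs.getString("BUTTON_NAME"));         // common member
                        setDescription(rs.getString("BUTTON_DESC"));  // common member (assumed setter)
                        setHtmlContent(rs.getString("HTML_CONTENT")); // HTMLButton-specific (assumed setter)
                    }
                }
            }
            return this;
        }
    }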
    Now the problem:
    So I am trying to figure out which one is the better way to implement.
    The first approach is better OO, but at the same time it doubles my database access, as each update or find is executed in two steps (partially by the parent class and partially by the child class).
    The second approach makes me write duplicate code but avoids poor performance on the database side.
    I can tell you one thing: a Button will never exist by itself; it always belongs to some type of button.
    So I will really appreciate responses on this forum, and I am sure this will help me understand the OO approach better while at the same time giving me efficient performance.
    Thanks in advance

    I feel Tim is incorrect; the correct OO approach would be the "First Way", as behaviour common to the subclasses can be captured in the Button class, even if the class itself is abstract.
    However, the original poster has indicated that this would bring additional overhead to the database. I would still try to do it this way, and implement some clever scheme to avoid the database overhead. Perhaps the method in Button can start the construction of the database query, but allow subclasses to extend it by overriding other methods, something like:
    public abstract class Button {
        public Button find(String buttonId) { // could also be a static factory
            StringBuffer query = new StringBuffer("SELECT buttons_tbl.button_id, buttons_tbl.button_name");
            query.append(getExtraSelectFields());
            query.append(" FROM buttons_tbl ");
            query.append(getFromExpression());
            query.append(" WHERE buttons_tbl.button_id = '");
            query.append(buttonId);
            query.append("'");
            query.append(getWhereExpression());
            // fire off the query, capture the result and populate this object
            return this;
        }
        protected abstract String getExtraSelectFields();
        protected abstract String getFromExpression();
        protected abstract String getWhereExpression();
    }
    HtmlButton would then implement just the abstract methods, e.g.:
    public class HtmlButton extends Button {
        public String getExtraSelectFields() {
            return ", htmlbuttons_tbl.html_content, htmlbuttons_tbl.html_btn_height";
        }
        public String getFromExpression() {
            return " LEFT JOIN htmlbuttons_tbl ON htmlbuttons_tbl.button_id = buttons_tbl.button_id";
        }
        public String getWhereExpression() {
            return "";
        }
    }
    Well, it needs to be a bit smarter than this, but I hope you get the point.
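    One design note on the sketch above: it splices buttonId straight into the SQL string; a variant that binds it as a parameter (shown here with JDBC's PreparedStatement; the class and method names are hypothetical) avoids quoting problems:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    public abstract class ButtonQuery {
        protected abstract String getExtraSelectFields();
        protected abstract String getFromExpression();
        // Build the per-subclass query text once, but bind the id as a parameter
        // instead of concatenating it into the SQL.
        public void find(Connection conn, String buttonId) throws SQLException {
            String sql = "SELECT buttons_tbl.button_id, buttons_tbl.button_name"
                       + getExtraSelectFields()
                       + " FROM buttons_tbl" + getFromExpression()
                       + " WHERE buttons_tbl.button_id = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, buttonId);
                try (ResultSet rs = ps.executeQuery()) {
                    // populate the button's fields from rs here
                }
            }
        }
    }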

  • Re: Foreign key-type constraints

    The methodology my company has defined and uses to attack this problem is
    based upon looking for large grain 'business components' within the
    business model.
    When translating the functionality of the 'business component' to a
    physical design we end up with a component object which usually consists of
    a major entity(table) and several subsidiary entities and the services
    which operate on and maintain those entities i.e. a component object.
    We would then remove the referential integrity constraints only between the
    components - to be managed by a component reference object - but internally
    to the component leave the database referential integrity rules in place.
    I believe this maintains the idea of encapsulation, as the only way to
    communicate with the component is through a defined public service
    interface. It also lessens the impact of database changes, as they are
    usually confined to one component and the public service interface to any
    other is left intact. It makes use of the database functionality without
    dramatically affecting maintenance and performance by writing it all
    yourself and/or defining every relationship with the reference manager.
    It also leads very much to the definition of large grain reusable
    components which can be used in many applications, important to a company
    such as mine which develops software for others.
    Unfortunately it is not always as simple as it sounds, but the methodology helps.
    Good database management systems with declarative referential integrity
    will usually prevent you from defining circular references, so you could
    test for this by attempting to create the database before you remove the
    inter component links. But circular references are much less likely with
    the component technique properly applied.
    Keith Matthews
    Caro System Inc.
    www.carosys.com
    At 02:07 PM 10/23/97 +0100, John Challis wrote:
    We've been pondering the issue of how database integrity should be
    represented within a Forte/OO app. We're thinking, in particular, about
    foreign key-type constraints.
    First of all, we're not sure whether these constraints should be on the
    database, because some would say that this represents business knowledge
    which should only be in the app. Also, if constraints are on the
    database, the errors you receive if they are violated may not be very
    useful; i.e. we're using Oracle, and we'd have to map constraint names
    in error messages to some more meaningful message to present to a user.
    If foreign key-type constraints aren't on the database, what other
    options do we have?
    Let's say there's associations between objects X, Y and Z, whereby X and
    Y both know about and use Z - we don't want to delete Z while X and Y
    exist. I accept that Z should know how to delete itself, from
    persistence, but how does it check for the existence of X and Y? If Z
    asks objects of types X and Y to check whether they exist in the
    database, you can end up with a circular reference. If you do the check
    yourself, i.e. by having SQL checking existence of X and Y within the
    delete method for Z, then I reckon you've blown encapsulation, and
    you've also got a problem in relation to impact if the shape of your
    database changes.
    We're toying with the idea of having a central integrity manager, which
    will tell Z whether it can go ahead with the delete, thus centralising
    the integrity constraint knowledge within the app. and minimising impact
    of changes to the shape of the database.
    I'd be interested to know what others have done to address this issue,
    and any thoughts you may have.
    Thanks,
    John Challis
    PanCredit
    Leeds, UK

    At 02:07 PM 10/23/97 +0100, you wrote:
    ...>First of all, we're not sure whether these constraints should be on the
    database, because some would say that this represents business knowledge
    which should only be in the app.
    This is a long-winded response, but I tried to relate it to a real-world
    example, so bear with me...
    Purists may argue with me here, but I must take issue with the notion that
    your database cannot have any business knowledge. As soon as you define a
    table, you have implicitly given the database business knowledge.
    For example, suppose you define a database table Person, with columns Name,
    ID, and BirthDate. You are specifically telling the database that there
    exists a business "something" called Person which can (or must!) have
    values called Name, ID, and Birthdate. You are probably also telling the
    database about certain business rules: The value called ID can be used to
    uniquely identify a Person; The value Name contains text, and has a maximum
    length; Birthdate must conform to the format rules for something of type
    Date; etc. Need I go on?
    So, to me the argument cannot be that your database should not have any
    business knowledge, but rather, what type of business knowledge should be
    given to the database?
    On the other side of the coin, I also take exception to the argument that
    business knowledge belongs only in the Application. In fact, if your
    discussion centers around whether business knowledge belongs in the
    Application vs. the Database, then maybe both sides are still thinking in
    two tiers, and you need to take a step back and think about your business
    classes some more.
    In our oversimplified example above, we set a limit on the length of the
    Name attribute. This is a business rule, and so "belongs" to the business
    class. However, our application class needs to have knowledge of that rule
    so that it can set a limit on the length of data that it allows to be
    entered. Likewise, the persistent storage class must have knowledge of
    that rule to effectively store and retrieve the data.
    We also have an attribute that is a Date, and a date by definition must
    follow certain rules about format and value. The application class and the
    storage class will both do their job more effectively if they know that the
    attribute is a Date.
    Does it break the rules of encapsulation if you allow the application class
    or the storage class to have knowledge of certain rules that are defined in
    the business class? If it does, then we might as well throw encapsulation
    out the door, because it is a totally useless concept in the real world.
    Now, let's think about the referential constraints. Suppose you want to
    create a business class Employee which inherits from the class Person, and
    adds attributes HireDate and Department. When you physically store the
    Employee information in your Relational database, you might actually store
    two tables, with the ID as a foreign key between them. In this case, the
    foreign key relationship would clearly belong to the storage class and the
    database. The business class should not know or care whether the Employee
    information is physically stored in one table, or two, or twelve.
    Now, let's add another business rule, that Employee Department must be a
    valid department. To support this rule, you will create a business Class,
    Department. For the sake of argument, let us say that the persistent data
    for this business class will be stored in a database table, also called
    Department.
    We have said that there is a relationship between Employee and Department.
    Which business class will contain the rule that defines the relationship?
    Clearly, it is not Department. Department has no reason to know about
    Employee, or any other class that might contain a reference to it. Since
    Employee is the one that contains a reference to Department, you could
    argue that the rule belongs there. That works fine, until you want to
    delete a Department object. Obviously, you would not go to the Employee
    class for that. So it seems that the relationship does not belong in
    either class.
    Someone has suggested that you have an integrity manager or some similar
    class for that purpose. The integrity manager would have knowledge of the
    rules that define the relationships between your business objects. This
    allows you to keep your OO design more "pure" from the standpoint of
    encapsulation. Conceptually, this makes good sense, since the relationship
    between two classes does not belong to either of the individual classes.
    Let's hold that thought for a minute.
    Now let's think about your physical database design. I am betting that
    there is a high degree of correlation between your database tables and your
    business objects. It won't be 100%, because, among other things,
    relational databases do not deal well with the concept of inheritance. But
    if there is a very wide divergence, then I would need to question the
    validity of your design. With that in mind, I am going to propose that you
    already have an Integrity Manager, and that is your relational DBMS.
    My position is this, that it is ok, even necessary, for the data storage
    class to have knowledge of the structure and relationships of the data. It
    needs this information to effectively do its job. From my point of view,
    saying that you cannot tell the database that there is a relationship
    between Employee and Department is just as pointless as saying that you
    cannot tell the database that a certain column contains a date, or that
    another column contains a unique key which should be used as an index.
    Would you argue that an index implies business knowledge, and therefore
    does not belong in the database? On the other hand, you could argue that
    referential constraints always belong to the physical storage classes,
    since they describe under what circumstances data can be stored or deleted.
    Now, for performance or other reasons, you might choose not to implement
    the Employee-Department relationship in your physical database, and that's
    ok, too. Maybe you have decided that since you do not delete departments
    very often, that you do not want to incur the database overhead to maintain
    the foreign key relationship. Or maybe you have determined that the
    Department data will be stored somewhere else other than the database.
    Perhaps you would create an Integrity Manager instead, that would only be
    invoked when you wanted to delete a Department object. The point is, if
    you create an Integrity Manager, be sure you do it for the right reason,
    and not because someone has mistakenly decreed that a database cannot have
    any business knowledge.
    This brings us to the other question, which is: What do you do with the
    error if the constraint is violated? Consider this as an option: Create a
    User-defined exception named DeleteFailed or something like that. Then it
    does not matter if the error comes from the database manager or a separate
    Integrity manager. In either case, you fill the exception object with
    whatever meaningful data is appropriate, and raise the exception. The
    application, which knows what it was trying to do, can listen for the
    exception and deal with it appropriately. (btw, this is a good way to deal
    with other predictable database exceptions as well, such as DuplicateKey,
    or NotFound - your application need not listen for a particular SQL Code or
    SQL State, which might tie it to a particular database or storage format.)
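    A minimal sketch of that option in Java (the exception name DeleteFailed comes from the suggestion above; the table name and the JDBC wiring are assumptions):
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    // The application listens for this exception, not for a particular SQL code or SQL state.
    class DeleteFailed extends Exception {
        DeleteFailed(String message, Throwable cause) {
            super(message, cause);
        }
    }
    class DepartmentStore {
        void delete(Connection conn, String departmentId) throws DeleteFailed {
            try (PreparedStatement ps =
                     conn.prepareStatement("DELETE FROM department WHERE department_id = ?")) {
                ps.setString(1, departmentId);
                ps.executeUpdate();
            } catch (SQLException e) {
                // Whether the constraint was enforced by the DBMS or by a separate
                // integrity manager, the caller sees the same meaningful exception.
                throw new DeleteFailed("Department " + departmentId + " is still referenced", e);
            }
        }
    }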
    I do not see a problem with using the DBMS to define relational
    constraints. That is, after all, what a Relational database does. You do
    not need an integrity manager for OO-purity, but you can use one if it
    makes sense for other reasons. You should be able to change the method of
    enforcing the relationships, or even change the entire DBMS without having
    any impact on the application classes or the business classes. If you can
    meet that test, then as far as I am concerned, you have not violated any
    rules of encapsulation.
    Any rebuttals?
    =========================================
    Jeanne Hesler <[email protected]>
    MSF&W, Springfield, Illinois
    (217) 698-3535 ext 207
    =========================================

  • Append,Append_values

    Hi Guys,
    Can you people distinguish between conventional insert and direct path insert?
    It may help me to understand Oracle hints.

    Hi,
    what you are asking can easily be found in the documentation: Conventional and Direct Path Loads
    Just have the patience to read it carefully
    In summary:
    A conventional path load executes SQL INSERT statements to populate tables in an Oracle database. A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. A direct load does not compete with other users for database resources, so it can usually load data at near disk speed. Considerations inherent to direct path loads, such as restrictions, security, and backup implications, are discussed in this chapter.
    Continue reading the link provided and come back with questions if there is something you don't understand.
    Regards.
    Al

  • Any suggestions for optimal configuration of cache on mulitple, high volume proxy servers?

    I am trying to optimize the cache on 12 proxy servers running Sun ONE Proxy 3.6 on Solaris 8. They are not set in an array at this time. Forward proxying only. I have 3 x 36 GB drives available per box for cache. Traffic volume: approximately 50,000 users.


  • Any suggestions for Cisco LocalDirector?

    I was searching the BEA site for any tips or cautions when using Cisco
              LocalDirector with WebLogic Server, but was surprised to only see one
              mention of it in a whitepaper on clustering. What kinds of do's, dont's
              would you suggest for the following project configuration:
              o 2 WebLogic 4.5.1 servers w/ cluster licenses on Solaris SPARC
              o 2 Cisco LocalDirectors
              o A J2EE Blueprints architecture application, using a single servlet,
              in-memory replication of servlet sessions, stateless/stateful/entity
              beans
              o Entity bean caching preferred to reduce database overhead on reads
              over time
              The clustering configuration and Cisco LocalDirectors are initially
              meant to offer reliability and failover, rather than load balancing.
              This is due to the local user count but high availability needs of the
              project.
              Any advice would be appreciated.
              Regards,
              James
              

    Hi James,
              The typical configuration is to only use LocalDirector to balance the load
              across the web servers. Since the web servers are using our plugins to
              route the requests to the cluster based on information encoded in the
              session id, you do not need to (and should not try to) use Local Director
              between the web servers and the app servers.
              Hope this helps,
              Robert
              James Higginbotham wrote:
              > I was searching the BEA site for any tips or cautions when using Cisco
              > LocalDirector with WebLogic Server, but was surprised to only see one
              > mention of it in a whitepaper on clustering. What kinds of do's, dont's
              > would you suggest for the following project configuration:
              >
              > o 2 WebLogic 4.5.1 servers w/ cluster licenses on Solaris SPARC
              > o 2 Cisco LocalDirectors
              > o A J2EE Blueprints architecture application, using a single servlet,
              > in-memory replication of servlet sessions, stateless/stateful/entity
              > beans
              > o Entity bean caching preferred to reduce database overhead on reads
              > over time
              >
              > The clustering configuration and Cisco LocalDirectors are initially
              > meant to offer reliability and failover, rather than load balancing.
              > This is due to the local user count but high availability needs of the
              > project.
              >
              > Any advice would be appreciated.
              >
              > Regards,
              > James
              

  • Moving a 32 bit Oracle 9i database to 64 bit on a different server

    Hello,
    We have a 24 GB database with Oracle 9.2.0.7 (32-bit). As the hardware of this server is getting obsolete, it is planned to move this instance to another server, which has 64-bit Oracle software of the same version (9.2.0.7). In this scenario, what is the best way to move the instance?
    Is it only a full export from the 32-bit server and an import into the 64-bit server (after creating the instance there)?
    Since this is a 24 GB database, and the target server has 8 GB of RAM, any pointers on how long the import process can take?
    There is documentation on changing the word size; I can run utlirp.sql as suggested here:
    http://www.orafaq.com/forum/?t=rview&goto=258668#msg_258668
    But I have some doubts as I mentioned in that post. Can you please share your suggestions?
    Thanks,
    Nirav

    Hi
    Is there some document or steps to follow when creating the instance on the new server?
    The database move is easy, and here is one way to move the schema, fast:
    http://www.dba-oracle.com/oracle_tips_db_copy.htm
    And then, you just run the script (utlirp.sql) to change the word size for 64-bit.
    Also, after your migration, watch out for common performance issues:
    http://www.dba-oracle.com/t_bad_poor_performance_upgrade_migration_32_64_bit.htm
    Also, note that Oracle has changed the optimizer costing model from "IO" to "CPU" in 10g, and shops that combine an upgrade to 64-bit servers with a 10g migration may want to look at changing the new default for _optimizer_cost_model.
    Going 64-bit means that you can now allocate very large RAM data buffers and increase your shared_pool_size above two gigabytes. However, it is important to remember that there are downsides to having a super-large db_cache_size. While direct access to data is done with hashing, there are times when the database must examine all of the blocks in the RAM cache. These types of databases may not always benefit from an upgrade to a 64-bit server:
    Systems with high invalidations: Whenever a program truncates a table, uses temporary tables, or runs a large data purge, Oracle must sweep all of the blocks in the db_cache_size to remove dirty blocks. This can cause excessive overhead for systems with a db_cache_size greater than 10 gigabytes.
    High Update Systems: The database writer (DBWR) process must sweep all of the blocks in db_cache_size when performing an asynchronous write. Having a huge db_cache_size can cause excessive work for the database writer. Some shops dedicate a separate, smaller data buffer (of a different blocksize) for high-update objects.
    RAC systems: Oracle RAC and Grid do not perform optimally with the super-large data buffer RAM available on 64-bit systems. You may experience high cross-instance calls when using a large db_cache_size in multiple RAC instances. This inter-instance "pinging" can cause excessive overhead, and that is why RAC DBAs try to segregate RAC instances to access specific areas of the database. This is why Oracle 10g grid server blades generally contain only 4 GB of RAM.
    Hope this helps. . .
    Don Burleson
    Oracle Press author
    Author of “Oracle Tuning: The Definitive Reference”
    http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm

  • Logical Database and Logical Thing

    Hi,
    I want to access KONV, which is a cluster table, and the field is KWERT.
    The thing is that I want to access it by taking customers from KNVV and giving them to VBRK (the sales table). Now in VBRK I want to have a selection on FKDAT to get a list of customers stored in the field called KUNAG.
    On the basis of VBRK-KUNAG I want to access KONV-KWERT.
    If I run the queries directly, the system stops responding because there is a lot of overhead. So I tried to use the Logical Database called VFV.
    If using the LDB is the best solution, then how do I use it? Can anyone help me with this? I tried it via a function module, but it shows all data without considering the selection criteria.
    If anyone can help me, please answer or refer me to a web site so that I can figure this out. If anyone has a good book on this, please feel free to mail me.
    Thanks,
    Muhammad Usman Malik
    ABAP Consultant
    Siemens
    [email protected]
    +92-333-2700972

    Thanks Shibba, that was very helpful. I applied it, but the system overhead was still very high.
    Can you help me with the dynamic selection code?
    I used FREE_SELECTION_INIT, FREE_SELECTION_DIALOG and then FREE_SELECTIONS_RANGE_2_WHERE to get the selections in one table.
    If you want me to send you the code, I can do that; I am getting frustrated that this work is not done yet.
    The scenario here is that we want to take BILLED customers with VKORG as Industrial Billing, then pass all these customers to VBRK and apply a selection on an FKDAT range.
    After that the data should be collected from KONV-KWERT, and I want to perform some calculations on it. I am using VFV (the Logical Database) for this because I know it would be much faster than my own queries.
    If you can mail me any book on Logical Databases and dynamic selections, it will be very helpful.
    Thanks once again for being so helpful.
    Muhammad Usman Malik
    SAP Consultant
    [email protected]
    +92-333-2700972

  • Logical Database Plz Urgent

    Hi,
    I want to access KONV, which is a cluster table, and the field is KWERT.
    The thing is that I want to access it by taking customers from KNVV and giving them to VBRK (the sales table). Now in VBRK I want to have a selection on FKDAT to get a list of customers stored in the field called KUNAG.
    On the basis of VBRK-KUNAG I want to access KONV-KWERT.
    If I run the queries directly, the system stops responding because there is a lot of overhead. So I tried to use the Logical Database called VFV.
    If using the LDB is the best solution, then how do I use it? Can anyone help me with this? I tried it via a function module, but it shows all data without considering the selection criteria.
    If anyone can help me, please answer or refer me to a web site so that I can figure this out. If anyone has a good book on this, please feel free to mail me.
    Thanks,
    Muhammad Usman Malik
    ABAP Consultant
    Siemens
    [email protected]
    +92-333-2700972

    Write the entire logic between the GET event and END-OF-SELECTION, and call your Smart Form in the END-OF-SELECTION event:
    START-OF-SELECTION.
    GET pernr.
    * your logic ...
    END-OF-SELECTION.
      CALL FUNCTION 'yoursmartform'.

  • Steps to create LOGICAL DATABASE in sap

    Hi guys,
    I have gone through many documents about LDBs, but I didn't get the steps to create an LDB.
    Please provide me with the steps to be followed to create an LDB.
    Thanks,
    shivaa.

    Hi Shiva,
    This might help you!
    Logical database structures
    There are three defining entities in an SAP logical database. You must be clear on all three in order to create and use one.
    Table structure: Your logical database includes data from specified tables in SAP. There is a hierarchy among these tables defined by their foreign keys (all known to SAP), and you are going to define a customized relationship between select tables. This structure is unique and must be defined and saved.
    Data selection: You may not want or need every item in the referenced tables that contributes to your customized database. There is a selection screen that permits you to pick and choose.
    Database access programming: Once you've defined your logical database, SAP will generate the access subroutines needed to pull the data in the way you want it pulled.
    Creating your own logical database
    ABAP/4 (Advanced Business Application Programming language, version 4) is the language created by SAP for implementation and customization of its R/3 system. ABAP/4 comes loaded with many predefined logical databases that can construct and table just about any conventional business objects you might need in any canned SAP application. However, you can also create your own logical databases to construct any custom objects you care to define, as your application requires in ABAP/4. Here's a step-by-step guide:
    1. Call up transaction SLDB (or transaction SE36). The path you want is Tools | ABAP Workbench | Development | Programming Environment | Logical Databases. This screen is called Logical Database Builder.
    2. Enter an appropriate name in the logical database name field. You have three options on this screen: Create, Display, and Change. Choose Create.
    3. You'll be prompted for a short text description of your new logical database. Enter one. You'll then be prompted to specify a development class.
    4. Now comes the fun part! You must specify a root node, or a parent table, as the basis of your logical database structure. You can now place subsequent tables under the root table as needed to assemble the data object you want. You can access this tree from this point forward, to add additional tables, by selecting that root node and following the path Edit | Node | Create. Once you've saved the structure you define in this step, the system will generate the programming necessary to access your logical database. The best part is you don't have to write a single line of code.
    Watch out!
    The use of very large tables will degrade the performance of a logical database, so be aware of that trade-off. Remember that some tables in SAP are very complex, so they will be problematic in any user-defined logical database.
    Declaring a logical database
    Here's another surprising feature of logical databases: You do not assign them in your ABAP/4 code. Instead, the system requires that you specify logical databases as attributes. So when you are creating a report, have your logical database identifier (the name you gave it) on hand when you are defining its attributes on the Program Attributes screen. The Attributes section of the screen (the lower half) will include a Logical database field, where you can declare your logical database.
    Logical databases for increasing efficiency
    Why else would you want to create a logical database? Consider that the logical databases already available to you begin with a root node and proceed downward from there. If the data object you wish to construct consists of items that are all below the root node, you can use an existing logical database program to extract the data, then trim away what you don't want using SELECT statements, or you can increase the speed of the logical database program considerably by redefining the logical database for your object and starting with a table down in the chain. Either way, you'll eliminate a great deal of overhead.
    Reward if useful.
    Thankyou,
    Regards.
