Database Design Problem

Hi All,
I would like to know how we can store the Account, Inventory, Customer and Vendor details of a company with 30-odd divisions, maintaining them separately, in a single database running Oracle Standard Edition on an HP-UX machine.
Currently I have made a set of tables which I copy for each company into a separate tablespace named after the company, and this works. But by doing this I have to code all my procedures dynamically, to find out which company the user has logged into before executing them.
i.e. the SQL looks like
var := 'SELECT * FROM ' || COMPID || '_CUSTOMERS WHERE CUSTID = ' || CHR(39) || VCUSTID || CHR(39);
and I execute it using Oracle 8i's built-in EXECUTE IMMEDIATE to get the result. Generating and executing SQL this way leaves no scope for SQL tuning.
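A common alternative to one set of tables per company is a single set of tables with a company id column in every key, so procedures can use static SQL (with bind variables) instead of EXECUTE IMMEDIATE. A toy in-memory sketch of the idea (all names here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: one CUSTOMERS "table" keyed by (companyId, custId) instead of one
// table per company. All names are hypothetical illustrations.
public class SingleTableSketch {
    // row key = companyId + "|" + custId, mimicking a composite primary key
    static final Map<String, String> CUSTOMERS = new HashMap<>();

    static void insert(String compId, String custId, String name) {
        CUSTOMERS.put(compId + "|" + custId, name);
    }

    // analogue of: SELECT name FROM customers WHERE comp_id = :1 AND cust_id = :2
    static String lookup(String compId, String custId) {
        return CUSTOMERS.get(compId + "|" + custId);
    }

    public static void main(String[] args) {
        insert("COMP1", "C42", "Acme Ltd");
        insert("COMP2", "C42", "Globex");
        System.out.println(lookup("COMP1", "C42")); // Acme Ltd
    }
}
```

In the real schema this would be a composite primary key (comp_id, cust_id), and the query becomes a single static statement with two bind variables, which Oracle can parse once and share across all companies.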
Help in designing will be highly appreciated.
Thanks to all,
Please write to [email protected]
Regards
ravi

Hi Ravi,
I think you'd be better off tracking users with Oracle's audit feature instead of Log Miner, or perhaps a trigger, depending on what you want to know about your users. I don't think you want to know ALL that your users have done...
About your question "how can we know which company a user wants to work in":
If you have one schema per company, it's easy to grant access on a schema to certain users and not to all.
Example :
You have 2 schemas: comp1 and comp2
And 2 users: bob and bill
You just have to set up permissions so that bob can access the tables in schema comp1 and bill those in schema comp2; each user will then only have access to one company.
Fred
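Fred's grant-per-schema idea can be pictured as a simple lookup from user to the one schema they are allowed to touch. A toy Java sketch (names hypothetical):

```java
import java.util.Map;

// Sketch of per-user schema grants: each user resolves table names only
// against the single company schema granted to them. Hypothetical names.
public class SchemaGrants {
    static final Map<String, String> GRANTS = Map.of("bob", "comp1", "bill", "comp2");

    // analogue of bob's unqualified "CUSTOMERS" resolving to comp1.CUSTOMERS
    static String resolve(String user, String table) {
        String schema = GRANTS.get(user);
        if (schema == null) throw new IllegalStateException("no grant for " + user);
        return schema + "." + table;
    }

    public static void main(String[] args) {
        System.out.println(resolve("bob", "CUSTOMERS"));  // comp1.CUSTOMERS
        System.out.println(resolve("bill", "CUSTOMERS")); // comp2.CUSTOMERS
    }
}
```

In Oracle itself this is done with GRANT statements plus synonyms or the user's current schema, so the application code never needs to build table names dynamically.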

Similar Messages

  • SIMPLE Database Design Problem !

    Mapping is a big problem for many complex applications.
    So what happens if we put all the tables into one table called ENTITY?
    I have more than 300 attributeTypes, and there will be lots of null values in the records of that single table, as every entityType uses the same table.
    Other than wasting space, if I put a clustered index on the entityType column in that table, what kind of performance penalties do I get?
    Definition of the table
    ENTITY
    EntityID > uniqueidentifier
    EntityType > Tells the entityTypeName
    Name >
    LastName >
    CompanyName > ... (300 attributeTypes in all)
    OppurtunityPeriod >
    PS:There is also another table called RELATION that points the relations between entities.

    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    check the column with WHERE _entityType='PERSON';
    as there is a clustered index on entityType... there
    is NO performance decrease.
    there is also a clustered index on the RELATION table on
    relationType.
    when we say WHERE _entityType='PERSON' or
    WHERE relationType='CONTACTMECHANISM',
    it scans the clustered index first. It acts like a
    table as it is physically ordered.
    I was thinking in terms of using several conditions in the same select, such as
    WHERE _entityType ='PERSON'
      AND LastName LIKE 'A%'
    In your case you have to use at least two indices, and since your clustered index comes first ...
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    Have you ever thought of using constraints in your
    modell? How would you realize those?
    ...in fact we did. We have arranged the generic object
    model in an object database. The knowledge information
    is held in the object database.
    So your relational database is used only as "simple" storage; everything has to go through your object database.
    But the data schema is held in the RDBMS, with code
    generation that creates a schema to hold the data.
    If you think that this approach makes sense, why not.
    But to have an efficient mapping and good
    performance we thought about building only one
    table. The problem is we know we are losing some space,
    but the thing is hard disk is much cheaper than RAM
    and CPU, so our trade-off concentrated on the storage
    cost. But I still wonder if there is a point that I
    have missed in terms of performance?
    Just test your approach with sufficient data - only you know how many records you have to store in your model.
    PS: it is not wise or effective to use generic object
    models in object databases either, as the CPU cost is high
    when you are holding the data.
    I don't know if I'd have taken your approach - using two database systems to hold data and business logic.
    PS2: an RDBMS is a value-based system whereas object
    databases are identity-based. We are trying to be in
    the gray area of both worlds.
    Like I wrote: if your approach works and scales to the required size, why not? I would assume that you did a load test with your approach.
    What I would question though is that you are discussing a "SIMPLE Database Design" problem. I don't see anything simple in your approach when it comes to implementation.
    C.
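The single-ENTITY-table model being debated can be reduced to a toy in-memory sketch (names hypothetical). It makes the sparsity trade-off concrete: every entity type shares one row shape, attributes that do not apply are simply absent (null in the real table), and queries filter on the entityType column.

```java
import java.util.*;

// Toy sketch of a generic ENTITY table: one row shape for all entity types,
// with a type discriminator column. All names are hypothetical.
public class GenericEntitySketch {
    static class Entity {
        final String entityType;
        final Map<String, Object> attrs; // only the attributes that apply
        Entity(String entityType, Map<String, Object> attrs) {
            this.entityType = entityType;
            this.attrs = attrs;
        }
    }

    // analogue of: SELECT * FROM ENTITY WHERE entityType = ?
    static List<Entity> byType(List<Entity> table, String type) {
        List<Entity> out = new ArrayList<>();
        for (Entity e : table) {
            if (e.entityType.equals(type)) out.add(e);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Entity> table = Arrays.asList(
            new Entity("PERSON", Collections.singletonMap("LastName", (Object) "Adams")),
            new Entity("COMPANY", Collections.singletonMap("CompanyName", (Object) "Acme")));
        System.out.println(byType(table, "PERSON").size()); // 1
    }
}
```

The clustered index discussed above plays the role of the type filter here: it keeps rows of one entityType physically together, which is why the single-type scans perform acceptably.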

  • OO / database design problem.

    We are building an OO system with a database back end and I seem to come across the same problem with design a number of times.
    Say we have an object 'Transaction'. Each Transaction can be one of three types, Debit, Credit, Receipt.
    The database has a table of TransTypes, with three entries. There is also a table of transactions, and the transaction table has a foreign key to the TransType table to indicate which type each transaction is.
    I define a TransType class, and a Transaction class that holds a reference to a TransType object to indicate which type the transaction is.
    The problem is how to perform different processing depending on the type of the transaction. E.g. in pseudo code:
    If the type of the transaction is Credit, then add the amount to the balance. If it is a Debit or Receipt, then subtract the amount.
    How would this be done in code? How can you test which type object the transaction has? The only thing that differentiates the three types in the database is the name (a string), and saying
    If Trans.TransType.Name = "Credit" then
    (add balance)
    else
    (subtract balance)
    end if
    is no good at all.
    There is something else to note: all the database IDs are GUIDs, and the database is cleaned out and rebuilt regularly, so there's no use remembering the ID of each type.
    According to OO principles, the type class should encapsulate the behaviour.
    I can see two solutions.
    1. Add a boolean flag to the Type table named 'AddsToBalance' or similar. Also add a boolean attribute to the type class. Then the test becomes
    If Trans.TransType.AddsToBalance then
    (add balance)
    else
    (subtract balance)
    end if
    But this is a fairly limited approach.
    2. Define a base class of TransType, say 'TransTypeBase'. Then create three subclasses: TransTypeCredit, TransTypeDebit, TransTypeReceipt.
    Then, I can either test like this:
    If Typeof(Trans.TransType) Is TransTypeCredit then
    (add balance)
    else
    (subtract balance)
    end if
    or, I can provide an overridable Property AddsToBalance in TransTypeBase that each subclass overrides to return the correct value, and do:
    If Trans.TransType.AddsToBalance then
    (add balance)
    else
    (subtract balance)
    end if
    which is the same as the previous solution, except that the AddsToBalance property is not saved to the database at all but is implemented in the code that defines the class.
    Problem with Solution 2:
    When I retrieve the TransTypes from the database and create the TransType objects, how do I know whether to create a Credit, Debit, or receipt TransType object?
    I could add a field to the TransType table called "TypeID", an integer (1 = Credit, 2 = Debit, 3 = Receipt), and then perform a select case. I don't really mind having a select case here because it's only when retrieving data - DBs are not OO, so there's always a clash somewhere.
    Anyway if you've read this far thanks for sticking with it, I hope I've explained the problem well enough.
    What do people think of these solutions?
    Does anyone know the 'proper' way to do this?
    Thanks in advance
    Lindsay

    Someone else already answered this, but I think maybe that answer could be clarified.
    This is an example of where the OO model differs from the data model. From the OO perspective, since you have three different types of transactions, you probably want to have three transaction classes with a common superclass/interface:
    abstract class Transaction {
        abstract Balance updateBalance(Balance in);
    }
    class Credit extends Transaction { ... }
    class Debit extends Transaction { ... }
    class Receipt extends Transaction { ... }
    Whether these should be interfaces or classes is really up to what you want out of the design. I would go with classes because a transaction, in my eyes, is an atomic entity (you probably wouldn't have a class that implements Transaction and something else). As a side note, updateBalance deals with Balance objects so that we don't get into a debate over whether we should be using double or BigDecimal or whatever :-).
    To get transactions -- and translate from database representation to the object model -- you'd have a TransactionFactory class. This might look something like the following:
    public class TransactionFactory {
        public List<Transaction> getTransactions(Account acct) { ... }
    }
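Putting the two pieces together - the subclass hierarchy and a factory keyed by the TypeID column Lindsay proposed - could look like the following runnable sketch. The names are hypothetical and the balance is a plain long for brevity; the point is that the type code meets the class hierarchy in exactly one place, after which polymorphism replaces string comparisons.

```java
import java.util.List;

// Sketch: a database type code selects which Transaction subclass to build;
// the subclass, not a string comparison, decides how the balance moves.
public class TransactionDemo {
    abstract static class Transaction {
        final long amount;
        Transaction(long amount) { this.amount = amount; }
        abstract long updateBalance(long balance);
    }
    static class Credit extends Transaction {
        Credit(long a) { super(a); }
        long updateBalance(long b) { return b + amount; }
    }
    static class Debit extends Transaction {
        Debit(long a) { super(a); }
        long updateBalance(long b) { return b - amount; }
    }
    static class Receipt extends Transaction {
        Receipt(long a) { super(a); }
        long updateBalance(long b) { return b - amount; }
    }

    // the single place where the stored TypeID meets the class hierarchy
    static Transaction fromRow(int typeId, long amount) {
        switch (typeId) {
            case 1: return new Credit(amount);
            case 2: return new Debit(amount);
            case 3: return new Receipt(amount);
            default: throw new IllegalArgumentException("unknown type " + typeId);
        }
    }

    public static void main(String[] args) {
        long balance = 100;
        for (Transaction t : List.of(fromRow(1, 50), fromRow(2, 30))) {
            balance = t.updateBalance(balance);
        }
        System.out.println(balance); // 120
    }
}
```

The select-case Lindsay was worried about survives, but only inside the factory, which is generally considered acceptable at the object/relational boundary.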

  • Database design problem for multiple language application

    Hi All,
    We are working on a travel portal, and for a travel portal its content and details are the heart. We are planning to have it in multiple locales, so this means we need to handle dynamic data for each locale.
    currently we have following tables
    Destination
    Transport
    Places of Interests
    user comments etc.
    each table contains a lot of fields like
    Name
    ShortDescription
    LongDescription
    and many other fields which can contain a lot of data, and we need to handle the data in a locale-specific way.
    I am not sure how best we can design the application to handle this case. One thing that came to my mind is putting an extra column for each locale in each table, but that means that for a new locale things need to be changed from the database level to the code level, and that is not good at all.
    Since each table contains a lot of columns whose data is eligible for internationalization, my question is what might be the best way to handle this case.
    After doing some analysis and some googling, one approach that came to my mind is as below.
    I am planning to create a translation table for each table. For Destination I have the following design:
    table languages
    -- uuid (varchar)
    -- language_id(varchar)
    -- name (varchar)
    table Destination
    --uuid (varchar)
    other fields which are not part of internationalization.
    table Destination_translation
    -- id (int)
    -- destination_id (int)
    -- language_id (int)
    -- name (text)
    -- description(text)
    Any valuable suggestions for the above approach are most welcome...
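The proposed Destination / Destination_translation split can be pictured with an in-memory sketch (hypothetical names): language-neutral fields live on the entity, translated fields in a side table keyed by (entity id, language id). The sketch also shows a fallback to a default language, which this schema makes easy and the column-per-locale design does not.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a translation side table with default-language fallback.
// All names are hypothetical.
public class TranslationLookup {
    static final String DEFAULT_LANG = "en";
    // key: destinationId + "|" + languageId -> translated name
    static final Map<String, String> TRANSLATIONS = new HashMap<>();

    static void put(int destinationId, String languageId, String name) {
        TRANSLATIONS.put(destinationId + "|" + languageId, name);
    }

    // analogue of joining Destination to Destination_translation on
    // (destination_id, language_id), falling back to the default language
    static String name(int destinationId, String languageId) {
        String hit = TRANSLATIONS.get(destinationId + "|" + languageId);
        return hit != null ? hit : TRANSLATIONS.get(destinationId + "|" + DEFAULT_LANG);
    }

    public static void main(String[] args) {
        put(1, "en", "Venice");
        put(1, "de", "Venedig");
        System.out.println(name(1, "de")); // Venedig
        System.out.println(name(1, "fr")); // Venice (falls back to en)
    }
}
```

Adding a new locale is then just new rows in the translation table, with no schema or code change.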

    This approach sounds reasonable - it is the same approach used by Oracle Applications (Oracle ERP software). It de-normalizes information into two tables for every key object - one contains information that is not language sensitive and the other contains information that is language sensitive. The two tables are joined by a common internal id. Views are then created for each language that join these tables based on the common id and the language id column in the second table.
    HTH
    Srini

  • Web Page and Database Design Problem

    Ok, so I'm trying to develop an app, but am having a few problems and need some advice/help.
    I have a webpage made up of JSP pages. These pages will contain forms that will either list info from the database, or allow users to enter data to submit to the DB.
    So I will have servlets that will process the form information.
    I also have written DAO interfaces for the different tables. For example I have a config table which holds keys and their values.
    This information will only ever be displayed, so I have an interface with getAll() and get(String key).
    I want to avoid putting code like below going into the DAO
    ctx = new InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup(dataSource);
    conn = ds.getConnection();
    PreparedStatement stmt = conn.prepareStatement(query);
    ResultSet records = stmt.executeQuery();
    I'd prefer to make calls from my DAO getAll() method to another class which would create the connection, query the DB, and return the ResultSet, so that I can store/manipulate it any way I wish before passing the results back to the servlet.
    Problem is, the records seem to come back null!

    ctx = new InitialContext();
    javax.sql.DataSource ds = (javax.sql.DataSource) ctx.lookup(dataSource);
    conn = ds.getConnection();
    You should have a connection pool to create a connection and hand it out to whoever asks for it.
    Check out Hibernate.
    Hibernate can manage your connections (you need to set up the XML configuration first). It's another layer to encapsulate your JDBC connections, requests, etc.
    Also check out the Spring Framework. Spring makes transactions much easier to implement (using aspects).
    It provides a handful of useful APIs for you to work with, like JdbcTemplate and HibernateTemplate.
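One likely cause of the null records in the original question is returning a raw ResultSet from the helper: once the Statement or Connection that produced it is closed, the ResultSet is no longer usable. The safe pattern is to copy the rows into a plain collection before the helper closes up. This is sketched with a stand-in class so the example stays self-contained without a JDBC driver; all names are hypothetical.

```java
import java.util.*;

// Sketch: materialize rows while the resource is open, return a plain List.
public class DaoHelperSketch {
    // Stand-in for a JDBC ResultSet that dies when its connection closes.
    static class FakeResultSet implements AutoCloseable {
        private final List<Map<String, Object>> rows;
        private boolean open = true;
        FakeResultSet(List<Map<String, Object>> rows) { this.rows = rows; }
        List<Map<String, Object>> fetchAll() {
            if (!open) throw new IllegalStateException("result set is closed");
            return rows;
        }
        @Override public void close() { open = false; }
    }

    // Copy each row into a plain List *before* closing, so the caller never
    // touches a result set whose connection has already been returned.
    static List<Map<String, Object>> querySafe(FakeResultSet rs) {
        try (FakeResultSet r = rs) {
            return new ArrayList<>(r.fetchAll());
        }
    }

    public static void main(String[] args) {
        FakeResultSet rs = new FakeResultSet(
            Collections.singletonList(Collections.singletonMap("key", (Object) "timeout")));
        System.out.println(querySafe(rs).size()); // 1
    }
}
```

With real JDBC the same shape applies: map each ResultSet row into a value object inside the helper's try-with-resources block, and return the List.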

  • Database design problem. Need Help?

    I have to display records in a table in a UI where users are allowed to move records to specify priority for the records.
    Here is an example, a db table has 4 records:
    RecordId Ranking
    A 1
    B 2
    C 3
    D 4
    Now if a user tries to set the Ranking of the record with RecordId "D" to say 1, I now have to update all records in this table. So the result should be the following:
    RecordId Ranking
    D 1
    A 2
    B 3
    C 4
    This means that if a user wants to change the Ranking of the last record to be that of the first record, I have to update all records in the table. I am trying to figure out an algorithm that can do this better than deleting all records and adding new records or updating all records. Any help will be greatly appreciated. Thank you in advance.
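One way to avoid rewriting every row: when a record moves from oldRank to newRank, only the records whose rank lies between the two positions need to shift by one, so at most |oldRank - newRank| + 1 rows change. A runnable sketch of the algorithm (names hypothetical; in SQL this becomes one range UPDATE plus one UPDATE for the moved row):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: move one record to a new rank, shifting only the rows in between.
public class RankingMove {
    // ranks maps recordId -> ranking (1-based, contiguous)
    static void move(Map<String, Integer> ranks, String recordId, int newRank) {
        int oldRank = ranks.get(recordId);
        for (Map.Entry<String, Integer> e : ranks.entrySet()) {
            int r = e.getValue();
            if (oldRank < newRank && r > oldRank && r <= newRank) {
                e.setValue(r - 1);        // rows between the positions shift up
            } else if (newRank < oldRank && r >= newRank && r < oldRank) {
                e.setValue(r + 1);        // rows between the positions shift down
            }
        }
        ranks.put(recordId, newRank);
    }

    public static void main(String[] args) {
        Map<String, Integer> ranks = new HashMap<>();
        ranks.put("A", 1); ranks.put("B", 2); ranks.put("C", 3); ranks.put("D", 4);
        move(ranks, "D", 1); // the example from the question
        System.out.println(ranks.get("D") + "," + ranks.get("A") + "," + ranks.get("C")); // 1,2,4
    }
}
```

Moving the last record to the top still touches every row (as in the example), but moving a record by one or two positions touches only two or three rows instead of the whole table.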

    Well, does your computer actually support the required features? Is OpenGL enabled in PS? If not, you can't do anything with 3D. OpenGL support is mandatory. If it doesn't work you can try to update your graphics driver, but otherwise there is no way to enforce it. Also, did you uninstall your Design Standard before installing Premium? If it's still using the old serial number, then the features wouldn't show up in PS, either.
    Mylenium

  • Re: (forte-users) Round-trip database design

    We have used Erwin quite sucessfully, but it's not cheap.
    "Rottier, Pascal" <Rottier.Pascalpmintl.ch> on 02/15/2001 04:51:01 AM
    To: 'Forte Users' <forte-userslists.xpedior.com>
    cc:
    Subject: (forte-users) Round-trip database design
    Hi,
    Maybe not 100% the right mailing list but it's worth a try.
    Does anyone use tools to automatically update the structure of an existing
    database?
    For example, you have a full database model (Power Designer) and you've
    created a script to create all these tables in a new and empty database.
    You've been using this database and filling tables with data for a while.
    Now you want to do some marginal modifications on these tables. Add a
    column, remove a column, rename a column, etc.
    Is there a way to automatically change the database without losing data and
    without having to do it manually (except the manual changes in the (Power
    Designer) model).
    Thanks
    Pascal Rottier
    Atos Origin Nederland (BAS/West End User Computing)
    Tel. +31 (0)10-2661223
    Fax. +31 (0)10-2661199
    E-mail: Pascal.Rottiernl.origin-it.com
    ++++++++++++++++++++++++++++
    Philip Morris (Afd. MIS)
    Tel. +31 (0)164-295149
    Fax. +31 (0)164-294444
    E-mail: Rottier.Pascalpmintl.ch
    For the archives, go to: http://lists.xpedior.com/forte-users and use
    the login: forte and the password: archive. To unsubscribe, send in a new
    email the word: 'Unsubscribe' to: forte-users-requestlists.xpedior.com

    Hello Pascal,
    Forte has classes which can scan the database structure
    (DBColumnDesc, etc.). Express uses these classes to determine what the
    BusinessClass looks like. We use Forte to create the tables, indexes and
    constraints. Our problem is that the classes described above are only
    readable, not writable. The solution for us will be to create our own
    classes in the same manner as the existing classes, so we will be able to
    make updates to the database structure and maybe change the database
    tables with tool code. Another reason for us to have the database
    structure in the application is the ability to see the table structure
    the Forte code works on, always up to date with the code. You are always
    able to compare the structure of the database with your business classes,
    and to convert a wrong structure to the correct structure with maybe just
    a little piece of code.
    Hope this helps
    Joseph Mirwald

  • Time-series / temporal database - design advice for DWH/OLAP???

    I am faced with the task of designing a DWH as effectively as possible, for time-series data analysis - are there any special design advices or best practices available? Or can ordinary DWH/OLAP design concepts be used? I ask this because I have seen the term 'time series database' in the academic literature (but without further references), and I have also heard the term 'temporal database' (as far as I have heard, it is not just a matter of logging data changes etc.)
    So - it would be very nice if someone could give me some hints about this type of design problem.

    Hi Frank,
    Thanks for that - after 8 years of working with Oracle Forms and afterwards the same again with ADF, I still find it hard sometimes when using ADF to understand the best approach to a particular problem - there are so many different ways of doing things/where to put the code/how to call it etc...! Things seemed so much simpler back in the Forms days!
    Chandra - thanks for the information but this doesn't suit my requirements - I originally went down that path thinking/expecting it to be the holy grail but ran into all sorts of problems as it means that the dates are always being converted into users timezone regardless of whether or not they are creating the transaction or viewing an earlier one. I need the correct "date" to be stored in the database when a user creates/updates a record (for example in California) and this needs to be preserved for other users in different timezones. For example, when a management user in London views that record, the date has got to remain the date that the user entered, and not what the date was in London at the time (eg user entered 14th Feb (23:00) - when London user views it, it must still say 14th Feb even though it was the 15th in London at the time). Global settings like you are using in the adf-config file made this difficult. This is why I went back to stripping all timezone settings back out of the ADF application and relied on database session timezones instead - and when displaying a default date to the user, use the timestamp from the database to ensure the users "date" is displayed.
    Cheers,
    Brent
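Brent's point - preserve the calendar date the user entered, not the instant it corresponds to - can be shown with java.time (a sketch; the sample values come from his 14 Feb example, the method names are hypothetical):

```java
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Sketch: an instant-based timestamp shifts when viewed from another zone,
// while a zone-free LocalDate preserves exactly what the user typed.
public class DatePreservationDemo {
    // the calendar date the entered moment falls on in some viewer's zone
    static LocalDate asSeenIn(ZonedDateTime entered, String viewerZone) {
        return entered.withZoneSameInstant(ZoneId.of(viewerZone)).toLocalDate();
    }

    // the zone-free date to store if the entered date must be preserved
    static LocalDate asEntered(ZonedDateTime entered) {
        return entered.toLocalDate();
    }

    public static void main(String[] args) {
        // user in California enters 14 Feb, 23:00
        ZonedDateTime entered = ZonedDateTime.of(2020, 2, 14, 23, 0, 0, 0,
                ZoneId.of("America/Los_Angeles"));
        System.out.println(asSeenIn(entered, "Europe/London")); // 2020-02-15
        System.out.println(asEntered(entered));                 // 2020-02-14
    }
}
```

This is exactly the behaviour Brent describes: converting via the instant shows London the 15th, while storing the zone-free date keeps the 14th for every viewer.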

  • Urgent help database design

    Hi,
    I need urgent help with the design of a database. We already have a database in production, but we are facing a problem with extensibility.
    The client information is variable, that is:
    1) the number of fields is different for each client
    2) a client may ask at any time in the future to add another field to the table
    Please provide your views with practical implications (advantages and disadvantages), or any resource where I can find information.....
    Help appreciated.....

    Hi,
    Database design is an art & science by itself - as far as I know, there aren't any rigid rules.
    I would suggest that you have a look at the discussions in these two threads for a few general ideas :
    Database Design
    conversion from number to character
    If your client requirements keep changing, I would suggest that you keep 8-10 "spare" columns in your tables - just call them spare1, spare2, etc. The only purpose of these columns is to allow flexibility in the design - i.e., in future you can always extend the table to accommodate more fields.
    I have used it a couple of times & found it to be useful - again, this is only a suggestion.
    Regards,
    Sandeep
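An alternative to spare columns, often used when fields genuinely differ per client, is a name/value side table: adding a field for one client becomes an INSERT rather than an ALTER TABLE. A toy sketch (hypothetical names; the trade-off is that such values lose typing and declarative constraints):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a CLIENT_EXTRA(client_id, field_name, field_value) side table.
public class ExtensibleClientSketch {
    // clientId -> (fieldName -> value)
    static final Map<String, Map<String, String>> EXTRA = new HashMap<>();

    // analogue of: INSERT INTO client_extra VALUES (:client, :field, :value)
    static void set(String clientId, String field, String value) {
        EXTRA.computeIfAbsent(clientId, k -> new HashMap<>()).put(field, value);
    }

    // analogue of: SELECT field_value FROM client_extra WHERE ...
    static String get(String clientId, String field) {
        Map<String, String> fields = EXTRA.get(clientId);
        return fields == null ? null : fields.get(field);
    }

    public static void main(String[] args) {
        set("C1", "tax_code", "VAT-22");   // a field only this client needs
        System.out.println(get("C1", "tax_code")); // VAT-22
        System.out.println(get("C2", "tax_code")); // null
    }
}
```

Spare columns are simpler and keep everything in one row; the side table scales to any number of client-specific fields. Which fits depends on how often the requirements actually change.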

  • Database design for share market

    Hi One and All,
    I have a query regarding the design of a new database....
    I have just joined as a database administrator, as a fresher, and my superior has given me an assignment: I have to create a sample database for the share market. As per his requirements the tables should be an Issuer table, Security table, Broker table, Investor table, Account table, Order table and Trading table. He said that I have to prepare the fields for these tables, the relationships and the whole database structure.... I can prepare the relationships and database structure, but the problem is I don't know how the stock market really works. If anybody can help me with this issue I will be very thankful.
    I just need the table fields; if I get those, the rest of the job I will do by studying the share market.
    Thank You

    Hi,
    As per Hemant, this forum is not appropriate for this question. However, you have to analyze the system through meetings with stock brokers at the stock exchange, or with the client for whom you are designing the system. Ask your superior to arrange a meeting with the client, and then ask him questions so that you can produce a database design for them.
    Regards,
    Abbasi

  • Database Design/Application architecture question

    I'm working on a Java web app that includes creating a database from scratch. The UI needs to model a mostly static set of choices that led to other choices that lead to other choices..... I'm trying to figure out how to model this in a table or set of tables and how to design the UI and servlet interaction. Here's an example, in Petstore lingo:
    Do you need a dog house?
    For a small dog?
    Red?
    Blue?
    How many?
    For a big dog?
    Brown?
    Yellow?
    How many?
    Side windows?
    Do you need a bird cage?
    How many water dispensers?
    For big bird?
    This sort of multiple dependent options seems hard to model in a database. I'm trying to do this at the database level so that I can have a dynamic front end, in which I get the collection of options and then use JSTL to populate the UI with the choices for whatever level or step I'm at. Any suggestions? I haven't had to solve a design problem precisely like this, where some choices may have dependent choices but others do not, and some have dependent choices which have very specific dependent choices that have very specific dependent choices. Thanks.
    Ken

    "I'm working on a Java web app that includes creating a database from scratch."
    You mean the UI drives creating the database? Why?
    That is often driven by developer rather than business requirements and is often a bad idea. It is often done solely so the developer doesn't have to type as much. And not typing very much is solved by code generation, while using meta-data solutions for it is usually a bad maintenance idea.
    But if you need to, then you create meta data.
    Table TheValues
    - Field: Value Id.
    - Field: Value Name
    - Field: Value Value
    Table: Collection
    - Field: Collection Id
    - Field: Collection Name
    Table: Link table
    - Field: Collection Id
    - Field: Id type
    - Field: Id (Collection or Value)
    Display values can be kept in another table or used directly.
    Notice that a collection can contain another collection.
    You can combine the first two tables as well.
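The value/collection/link-table meta data above amounts to a tree whose nodes are either leaf values or collections of further nodes, which is exactly what the dependent-choices UI needs: render one level, then descend on the user's pick. A minimal in-memory sketch (names hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a choice tree where every node can carry dependent child choices.
// In the meta-data tables, a Node row lives in Collection/TheValues and each
// parent-child edge is a row in the link table.
public class ChoiceTreeSketch {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        Node(String name) { this.name = name; }
        Node add(Node child) { children.add(child); return this; }
    }

    public static void main(String[] args) {
        Node dogHouse = new Node("Do you need a dog house?")
            .add(new Node("For a small dog?")
                .add(new Node("Red?"))
                .add(new Node("Blue?")))
            .add(new Node("For a big dog?")
                .add(new Node("Side windows?")));
        // a servlet would render one level, then descend on the user's pick
        System.out.println(dogHouse.children.get(0).children.size()); // 2
    }
}
```

Because children are just more nodes, choices with deep dependent chains and choices with none coexist in the same structure, matching the Petstore example above.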

  • Requirements for database design and installation

    Hi,
    As a database administrator, how do I find out whether the database is designed and installed properly?
    Can you please say what requirements should be considered in the database design, from the application developer's point of view?
    thanks heaps !!!!

    Mohamed ELAzab wrote:
    "Regarding the number of executions being the main thing that affects performance: I said that already above ('the application executed it 30 000') but you didn't read my answer correctly." I did not respond to that "answer" of yours, as it was not part of your posting that I responded to. The response I quoted talked about non-sharable SQL retrieving 20 rows and, after 3 years, retrieving a million rows. This has no bearing on whether the SQL is sharable or not.
    "I don't agree with you that the design is not done with performance bottlenecks in mind." So you decide up front what the bottlenecks are, and then use these as database design considerations? I fail to see any logic or merit in such an approach.
    "I want to let you know that we in the telecoms environment have many problems in our databases because the people who designed those applications didn't take performance into consideration." I understand too well - and it is not that they did not take performance into consideration when designing the database; it is that the design is just plain wrong from the start.
    You do not need to consider amount of memory available, number and speed of CPUs, bandwidth and speed of the I/O system, in order to design a database. These have no relevance at all during the design phase. Especially as the h/w that will run the design in production in a year's time can be drastically different from the h/w that will be used today.
    No, instead you use a proper and correct design methodology and data modeling approach. Why? Because such a design by its very nature will make optimal use of h/w resources and will provide data integrity, scalability and performance.
    "Again I think the design of the database application must take performance bottlenecks into consideration - for example, an application which doesn't use bind variables; if they had taken care to avoid that, it would help the DBA in the future, but unfortunately most people don't." And as I said - using bind variables or not has absolutely nothing to do with the basic question asked in this thread: "what are the requirements of database design".
    How does using/not using bind variables influence the design of a table? Determine whether an entity is in 3NF? What the unique identifiers are for an entity? These are the design considerations for a database... not bind variables.
    Yes, SQLs not using bind variables can cause performance problems. Not paying the electricity bill can cause a power outage for the database server. So what? These issues have no relevance to database design.
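For readers following the bind-variable tangent, the sharability point can be made concrete: with literals, every distinct value yields a distinct SQL text (a separate hard parse in Oracle), while a bind placeholder keeps the text identical so one cached statement is reused. A sketch (hypothetical table and column names):

```java
// Sketch: why bind variables make SQL sharable. Only the statement text is
// built here; no database is involved.
public class BindVariableDemo {
    // literal value embedded in the text: a new statement per value
    static String withLiteral(String custId) {
        return "SELECT * FROM customers WHERE cust_id = '" + custId + "'";
    }

    // bind placeholder: the text is the same for every value
    static String withBind() {
        return "SELECT * FROM customers WHERE cust_id = :1";
    }

    public static void main(String[] args) {
        System.out.println(withLiteral("C1").equals(withLiteral("C2"))); // false
        System.out.println(withBind().equals(withBind()));               // true
    }
}
```

Which, as the poster above argues, is a coding and tuning concern, not a schema design one: the same table design serves both styles.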

  • Database design to support parameterised interface with MS Excel

    Hi, I am a novice user of SQL Server and would like some advice on how to solve a problem I have. (I hope I have chosen the correct forum to post this question)
    I have created a SQL Server 2012 database that comprises approx 10 base tables, with a further 40+ views that either summarise the base table data in various ways, or build upon other views to create more complex data sets (upto 4 levels of view).
    I then use EXCEL to create a dashboard that has multiple pivot table data connections to the various views.
    The users can then use standard excel features - slicers etc to interrogate the various metrics.
    The underlying database holds a single days worth of information, but I would like to extend this to cover multiple days worth of data, with the excel spreadsheet having a cell that defines the date for which information is to
    be retrieved.(The underlying data tables would need to be extended to have a date field)
    I can see how the excel connection string can be modified to filter the results such that a column value matches the date field,
    but how can this date value be passed down through all the views to ensure that information from base tables is restricted for the specied date, rather than the final results set being passed back to excel - I would rather not have the server resolve the views
    for the complete data set.
    I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe that stored procedures allow result sets to be used as pseudo tables.
    What other options do I have? Or have I failed to grasp the way SQL Server creates its execution plans - will simply having the filter at the top level ensure the result set is minimised at the lower level? (I don't really want the time taken for the dashboard
    refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)
    As an example of 3 of the views, 
    Table A has a row per system event (30,000+ per day), each event having an identity, a TYPE eg Arrival or Departure, with a time of event, and a planned time for the event (a specified identity will have a sequence of Arrival and Departure events)
    View A compares separate rows to determine how long between the Arrival and Departure events for an identity
    View B compares separate rows to determine how long between planned Arrival and Departure events for an identity
    View C uses View A and view B to provide the variance between actual and planned
    Excel dashboard has graphs showing information retrieved from Views A, B and C. The dashboard is only likely to need to query a single days worth of information.
    Thanks for your time.

    You are posting in the database design forum, but it seems to me that you have 2 separate but highly dependent issues - neither of which is really database design related at this point.  Rather, you have a user interface issue and a database programmability
    issue.  Those I cannot really address, since much of that discussion requires knowledge of your users, how they interface with the database, what they use the data for, etc.  In addition, it seems that Excel is the primary interface for your users
    - so it may be that you should post your question to an Excel forum.
    However, I do have some comments.  First, views based on views is generally a bad approach.  Absent the intention of indexing (i.e., materializing) the views, the db engine does nothing different for a view than it does for any ad-hoc query. 
    Unfortunately, the additional layering of logic can impede the effectiveness of the optimizer.  The more complex your views become and the deeper the layering, the greater the chance that you befuddle the optimizer. 
    I would rather not have the server resolve the views for the complete data set
    I don't understand the above statement but it scares me.  IMO, you DO want the server to do as much work as possible since it is closest to the data and has (or should have) the resources to access and manipulate the data and generate the desired
    results.  You DON'T want to move all the raw data involved in a query over the network and into the client machine's storage (memory or disk) and then attempt to compute the desired values. 
    I considered parameterisation of views, but I dont believe views support parameters, I also considered stored procedures, but I dont believe that stored procedures allow result sets to be used as pseudo tables.
    Correct on the first point, though there is such a thing as a TVF (table-valued function), which is similar in effect. Before you go down that path, let's address the second statement. I don't understand that last bit about "used as pseudo tables", but that sounds more
    like an Excel issue (or maybe an assumption). You can execute a stored procedure and use/access the resultset of this procedure in Excel, so I'm not certain what your concern is. User simplicity perhaps? Maybe just a terminology issue? Stored
    procedures are something I would highly encourage for a number of reasons. Since you refer to pivoting specifically, I'll point out that SQL Server natively supports that function (though perhaps not in the same way/degree Excel does). It
    is rather complex tsql, and this is one more reason to advocate for stored procedures: separate the structure of the raw data from the user.
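    As a sketch only (the table and column names here are invented for illustration, not taken from your post), a stored procedure that pivots server-side might look like the following, so that the client receives only the small, finished resultset:

    ```sql
    -- Hypothetical sketch: count events per location, spread by event type.
    -- Note that PIVOT aggregates one column (TagNumber) while spreading on
    -- another (EventType); the two cannot be the same column.
    CREATE PROCEDURE dbo.usp_EventSummary
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT Location,
               [arrival]   AS Arrivals,
               [departure] AS Departures
        FROM (SELECT Location, EventType, TagNumber
              FROM dbo.TableA) AS src
        PIVOT (COUNT(TagNumber)
               FOR EventType IN ([arrival], [departure])) AS p;
    END;
    ```

    Excel can then execute the procedure directly and display the resultset, with no client-side pivoting at all.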
    (I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)
    DTA has its limitations. What it doesn't do is evaluate the "model", which is where you might have more significant issues. Tuning your queries and indexing your tables will only go so far to compensate for a poorly designed schema (not that
    yours is; just a generalization). I did want to point out that your refresh process involves many factors: the time to generate a resultset in the server (including plan compilation, loading the data from disk, etc.), transmitting that data over the
    network, receiving and storing the resultset in the client application, manipulating the resultset into the desired form/format, and then updating the display. Given that, you need to know how much time is spent in each part of that process; no sense
    wasting time optimizing the smallest time consumer.
    So now to your sample table, Table A. First, I'll give you my opinion of a flawed approach. Your table records separate facts about an entity as multiple rows. Such an approach is generally a schema issue for a number of reasons.
    It requires that you outer join in some fashion to get all the information about one thing into a single row; that is why you have a view to compare rows and generate a time interval between arrival and departure. I'll take this a step further and assume
    that your schema/code likely has an assumption built into it, specifically that a "thing" will have no more than 2 rows and that there will only be one row with type "arrival" and one row with type "departure". Violate that assumption and things begin to
    fall apart. If you have control over this schema, then I suggest you consider changing it: store all the facts about a single entity in a single row. Given the frequency that I see this pattern, I'll guess that you
    cannot. So let's move on.
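    To make the contrast concrete (again, all names here are invented, since your actual columns weren't posted), compare the join the multi-row shape forces with the single-row alternative:

    ```sql
    -- Multi-row shape: one row per event, so the interval needs a self-join,
    -- and the LEFT JOIN is required because departure may not exist yet.
    SELECT a.TagNumber,
           DATEDIFF(minute, a.EventTime, d.EventTime) AS MinutesOnSite
    FROM dbo.TableA AS a
    LEFT JOIN dbo.TableA AS d
           ON d.TagNumber = a.TagNumber
          AND d.EventType = 'departure'
    WHERE a.EventType = 'arrival';

    -- Single-row shape: both facts live in one row, no join needed, and the
    -- "at most one arrival and one departure" assumption is enforced by design.
    CREATE TABLE dbo.Visit (
        TagNumber     varchar(20) NOT NULL PRIMARY KEY,
        ArrivalTime   datetime    NOT NULL,
        DepartureTime datetime    NULL   -- NULL until the entity departs
    );
    ```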
    30 thousand rows is tiny, so your current volume is negligible. You still need to optimize your tables based on usage, so you need to address that first. How is the data populated currently? Is it done once as a batch? Is it
    done throughout the day, and in what fashion (inserts vs. updates vs. deletes)? You only store one day of data, so how do you accomplish that specifically? Do you purge all data overnight and re-populate? What indexes
    have you defined? Do all tables have a clustered index, or are some (most?) of them heaps? OTOH, I'm going to guess that the database is at most a minimal issue now and that most of your concerns are better addressed at the user interface
    and how it accesses your database. Perhaps now is a good time to step back and reconsider your approach to providing information to the users. Perhaps there is a better solution, but that requires an understanding of your users, the skillset of
    everyone involved, what you have to work with, etc. Maybe just some advanced Excel training? I can't really say, and it might be a better question for a different forum.
    One last comment: "identity" has a special meaning in SQL Server (and most database engines, I'm guessing). So when you refer to identity, do you refer to an identity column or the logical identity (i.e., natural key) for the "thing" that Table A is
    attempting to model?
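    For clarity, the two meanings look like this in a table definition (a minimal, hypothetical sketch; the names are invented):

    ```sql
    -- "Identity" the column property vs. identity as the natural key.
    CREATE TABLE dbo.TableA_example (
        RowID     int IDENTITY(1,1) PRIMARY KEY,  -- surrogate key, generated by the engine
        TagNumber varchar(20) NOT NULL,           -- the natural identity of the "thing"
        EventType varchar(10) NOT NULL,           -- 'arrival' or 'departure'
        EventTime datetime    NOT NULL
    );
    ```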

  • Oracle Designer Problem Please help me

    Sir,
    1) I created a database
    2) Run>cd d:\Oracle_home\repadm61\admin\@ckqa
    @ckparams.txt
    @ ckvalqa
    @ ckcreate
    3)Opened Repository Administration Utility
    Log in as 'repos_manager/repos_manager@orcltest'
    Installed Repository.
    4) Opened Oracle 9i Designer. I am able to connect
    as 'repos_manager/repos_manager@orcltest'
    But I am not able to log on as any other user in the same database, nor as any other user in a different database. Why?
    Please help me.
    regards
    Mathew

    duplicate thread, see this one -> Re: Oracle Designer Problem Please help me

  • Clustering and database design question

    Hi,
              One of the things we are seeing with running clusters is that, reasonably,
              as we get more users (due to increased capacity of the cluster), we get more
              hits on the database. As these are now coming from multiple servers it is
              more likely that they are in separate transactions and so are less coordinated.
              As a result, what we see is more deadlocks at the database (Sybase 11.9.2).
              We have introduced row-level locking and this helps; stored procedures have
              also shown some improvement on really over-used tables, but I can't help
              thinking this is a general design concern.
              The way I have been intending to scale our cluster is to have identical
              server deployments of all beans and use load-balancing for the client
              connections to the cluster. However, all our cluster members will be using
              the same database so the chance of deadlocks must increase for each server I
              add.
              Is this a general design problem or are there design solutions I can adopt
              to help with this? Can anyone give some good advice, highlight specific
              weblogic features, discuss similar designs or point to some good documents
              on such scalability issues? Perhaps I should add the majority of our
              database access is by entity beans using bean managed persistence.
              Thanks
              Sioux
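              One common mitigation for the deadlock side of this (independent of WebLogic, and with invented table names, so treat it as a sketch rather than a prescription for your schema) is to make every transaction that touches the same set of tables lock them in the same order:

              ```sql
              -- Two transactions that both update Accounts first and Orders second
              -- cannot deadlock against each other on those two tables, because
              -- neither can hold the second lock while waiting for the first.
              BEGIN TRANSACTION
                  UPDATE Accounts SET balance = balance - 10 WHERE acct_id  = 1
                  UPDATE Orders   SET status  = 'PAID'       WHERE order_id = 42
              COMMIT TRANSACTION
              ```

              Establishing one canonical table ordering across all beans and stored procedures tends to matter more as you add cluster members, since each new server adds uncoordinated transactions.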
              

              Give TopLink a try:
              Clustering - Refresh Yes
              Ability to refresh cached beans and objects between nodes of a cluster of application
              servers
              Clustering - Synchronization Yes*
              TopLink WebLogic contains support for synchronous or asynchronous cache synchronization
              between nodes in a cluster. TopLink WebSphere cache synchronization will be supported
              in 01Q1.
              Cocobase has a similar distributed caching scheme. But WebGain should be a better
              bet because they are 40% owned by BEA (better integration with WebLogic)
              Jim Zhou.
              "Mike Reiche" <[email protected]> wrote:
              >
              >
              ><whine>
              >I haven't seen a solution to this yet. Clustering prevents the application server
              >from caching database data - which is the reason we use applications in the first
              >place. All data is read from the db at the beginning of a transaction and written
              >back at the end of a transaction.
              ></whine>
              >
              >We are trying to avoid this by pinning entity bean instances to a single WL
              >instance; then we can set db-is-shared=false and the WL instance can cache the
              >entity bean. Stateful session beans seem more suited to scaling than entity
              >beans - there is only ever one instance of a session bean; if you access it from
              >a different WL instance, it is accessed via RMI. For entity beans, multiple
              >copies exist on the different servers.
              >
              >Read-only entity beans are ok if your data is truly read-only. Still, each WL
              >instance will need to read the data from the DB, and each copy of the same
              >entity bean instance on each WL instance takes up memory. From the db's point
              >of view, the more you scale, the worse things get. The only thing that WL
              >clustering scales is CPU. WL clustering does not really help to scale the
              >number of entity bean instances.
              >
              >The Read-Mostly pattern exists - but I find this gives me the worst of both
              >worlds: my beans are not always up-to-date and I have all these duplicate
              >instances.
              >
              >Said that, I would love for someone from BEA to contradict me - and tell me how
              >I can make entity beans scale in a WL cluster.
              >
              >- Mike
              >
              
