Database Design - multiple schemas

Hi!
We're currently designing a DB for an AUTHENTICATION SYSTEM where users from around 40 different companies will have to be authenticated -- connected to Oracle. Fast authentication and fast recovery are important.
Recovery/Backup
An issue was raised: if the schema encounters a problem, then of course you have to recover the entire data set from backup. So we are considering using multiple schemas.
One Company = One Schema
That way, if one schema is down, the other schemas will not be affected, and recovery is faster.
Actually, we're quite hesitant to use multiple schemas because of maintainability -- managing many different schemas would be too much of a burden for our developers.
Will the idea of having multiple schemas be advantageous to what we want to achieve?
Is this a good design or any other idea to handle this kind of situation?
Can Partitioning do the same?
Thanks a lot

Advantages of multiple schemas:
- each schema is entirely separate
- you can maintain at different times/dates for different companies
- different schemas could be on different databases / servers
Disadvantages:
- any 'shared' data may have to be duplicated (but you can always use a shared schema for reference data)
- yes, you have to maintain each schema separately (but that would be by scripts, and at least they'd be well tested!)
- the dictionary (SYS tables) will be somewhat larger (40 copies of table, index and PL/SQL definitions)
- you'll have 40 identical sets of SQL cached; they all look the same, but relate to different schemas. So you need a bigger SGA.
> Can Partitioning do the same?
No - partitioning is a solution to a physical problem, not a security problem.
> Is this a good design or any other idea to handle this kind of situation?
I think either way works - it depends on size, number of users, whether you are using a third tier, etc.
Or, with a single schema, you can use VPD - virtual private database (otherwise known as FGAC - fine grained access control or RLS - row level security).
See eg http://builder.com.com/5100-6388_14-5062064.html and also Ask Tom http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:70287097313911 which refers to the documents.
You can also implement a kind of VPD on the cheap by using user defined namespaces and the SYS_CONTEXT function, combined with application logic and clever view definitions.
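Here is a minimal sketch of that cheap approach (every name - APP_CTX, APP_SEC_PKG, ACCOUNTS, COMPANY_ID - is hypothetical, and it assumes each table carries a COMPANY_ID column):

CREATE OR REPLACE CONTEXT app_ctx USING app_sec_pkg;

CREATE OR REPLACE PACKAGE app_sec_pkg AS
  PROCEDURE set_company(p_company_id IN NUMBER);
END app_sec_pkg;
/
CREATE OR REPLACE PACKAGE BODY app_sec_pkg AS
  PROCEDURE set_company(p_company_id IN NUMBER) IS
  BEGIN
    -- only this package may write APP_CTX; call it once after authentication
    DBMS_SESSION.SET_CONTEXT('APP_CTX', 'COMPANY_ID', p_company_id);
  END set_company;
END app_sec_pkg;
/
-- all application access goes through views like this one
CREATE OR REPLACE VIEW v_accounts AS
SELECT *
FROM   accounts
WHERE  company_id = TO_NUMBER(SYS_CONTEXT('APP_CTX', 'COMPANY_ID'));

Full VPD does the same thing more robustly: DBMS_RLS.ADD_POLICY attaches the predicate to the base table itself, so it is enforced no matter how the table is reached.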
HTH
Regards Nigel

Similar Messages

  • Difference between database design and schema design

Hi, I have visited many database websites and I found many people saying "we can design a database for you". Are schema design and database design the same thing? In many places I find people saying you have to design the database first in order to create a physical database, so I am a little bit confused. Are they the same? And also, what is the difference between a data model and a schema?

> the definition i found for logical data model, physical data model and the definition you gave for logical database design, physical database design are the same.
Not correct. The physical design is the implementation of the logical design. These two designs are at different levels. Also, the logical design will be the same irrespective of the RDBMS product used.
What is incorrect is a designer/architect creating a logical design specifically for Oracle, or specifically for SQL-Server. A logical design has nothing to do with the RDBMS product (or the h/w platforms, app servers, web servers and operating systems used).
    So the logical design will always be the same - it is RDBMS independent.
    The physical design is fully dependent on the RDBMS product used. The same logical design will be implemented as different physical designs for Oracle and for SQL-Server.

  • Database Design - Multiple Customer Tables

    Scenario:
    - Let's say there are various customer "types"
    - Each one has a set of attributes which is only applicable to that customer type.
    - You have a table for orders that is for all customers
    - Want to treat it as one customer table for reports, updates, etc...
The solution we've come up with is basically to have a lookup table which allows you to look up the table the customer record exists in. The order table would have a "customer type" column to key off of. You then use a view to fill in the missing columns from each table for, say, a global query.
    I'm assuming this is a common business problem so thought I would ask the experts if there's a "best practice" for solving this problem.
    Thanks

    Think vertically ... not horizontally.
    An infinite number of attributes can be stored in finite space if you do. For example:
CREATE TABLE horizontal (
customer_id    NUMBER,
attribute_val1 VARCHAR2(20),
attribute_val2 VARCHAR2(20),
attribute_val3 VARCHAR2(20));

CREATE TABLE vertical (
customer_id     NUMBER,
attribute_name  VARCHAR2(20),
attribute_value VARCHAR2(20));

The first example requires you to add a column and break everything every time you add a new attribute. The second keeps right on working.
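To make that concrete, here is a small usage sketch against the vertical table above (the attribute names are invented for illustration):

-- a brand-new attribute is just another row: no DDL, nothing breaks
INSERT INTO vertical VALUES (100, 'LOYALTY_TIER', 'GOLD');

-- fold chosen attributes back into columns for reporting
SELECT customer_id,
       MAX(CASE WHEN attribute_name = 'LOYALTY_TIER' THEN attribute_value END) AS loyalty_tier,
       MAX(CASE WHEN attribute_name = 'REGION'       THEN attribute_value END) AS region
FROM   vertical
GROUP BY customer_id;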

  • Help With Multiple Schemas In Multiple Environments

    Dear Oracle Forum:
    We have a bit of controversy around the office and I was hoping we could get some expert input to get us on the right track.
    For the purposes of this discussion, we have two machines, development and production. Currently, on each machine, we have one database with multiple schemas, say, one for sales data and another for inventory. The sales data has maybe 200 tables and the inventory has another 50. About 12 times a year, once a month, we have a release and move code from dev to prod. The database is accessed by several hundred Pro*C and Pro*Cobol programs for online transaction processing.
    The problem comes up when we need to have multiple development environments. If I need to work on something for May that requires the customer address field to be 50 characters and somebody else is working on something for July that requires the customer address field to be 100 characters, we can’t both function in the same schema. We have a method of configuring running programs to attach to a given schema/database. Currently, everything connects to the same place. We were told that we should not have the programs running as the owners of the schemas for some reason so we set up additional users. The SALES schema is accessed with the connect string: SALES_USER/[email protected]. (I don’t know where we got dot world from but that is not the current discussion.)
    One of the guys said that we should have 12 copies of the database running, which is kind of painful to think about in my opinion. Oracle is not a lightweight product and there are any number of ancillary processes that would have to be duplicated 12 times.
    My recommendation is that we have 12 schemas each for sales and inventory with 12 users each to access them. We would have something like JAN_SALES_USER, FEB_SALES_USER, etc. Each user would have synonyms set up for each of the tables it is interested in. When my program connects as MAY_SALES_USER, I could select from the customer table and I would get my 50 character address field. When the other user connects as JUL_SALES_USER, he would get his 100 character address field. Both of us would not know anything different.
    Another idea that came up is to have a logon trigger that would set the current schema for that user to the appropriate base schema. When JUL_SALES_USER logs in, the current schema would be set to JUL_SALES, etc. This would simplify things by allowing us to avoid having something like 2400 synonyms to maintain (which could be automated without too much difficulty) but it would complicate things by requiring a trigger.
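For what it's worth, a minimal sketch of that logon-trigger idea (the trigger name is invented; the mapping follows the JUL_SALES_USER -> JUL_SALES convention described above):

CREATE OR REPLACE TRIGGER set_dev_schema
AFTER LOGON ON DATABASE
BEGIN
  -- e.g. MAY_SALES_USER resolves names against MAY_SALES
  IF USER LIKE '%\_SALES\_USER' ESCAPE '\' THEN
    EXECUTE IMMEDIATE
      'ALTER SESSION SET CURRENT_SCHEMA = ' || REPLACE(USER, '_USER');
  END IF;
END;
/

Note that CURRENT_SCHEMA only changes name resolution; each *_SALES_USER would still need object privileges on its base schema's tables.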
    There are probably other ways to go about this we have not considered as yet. Any input you can give will be appreciated.
    Regards,
    /Bob Bryan

    Hans Forbrich wrote:
    I'd rather see you with 12 schemas than with 12 databases. Unless you have lots of CPUs to spare ... and lots of cash to pay for those extra CPU licenses.
Then again, I'd take it one step further and ask to investigate the base design. There should be little reason to change the schema based on time. Indeed, from what little I know of your app, I'd have to ask whether adding a 'date' column and appropriate views or properly coded SQL statements might simplify things.

Interesting. If we were to have one big Customer table with views for each month, how would we handle the case where the May people have to see a 50-character address field and the July people a 100-character one? I guess we could have MAY_ADDRESS VARCHAR2(50) and JULY_ADDRESS VARCHAR2(100) and take care to make sure that people connecting as May can only see the May columns, etc. Is this simpler than multiple schemas?
    I may have overly simplified things in my effort to get something down that would not require too much explanation. The big thing is that multiple people are doing development and they have to be independent of each other. If we were to drop a column for July, the May people will have trouble compiling if we don’t keep things separate. It is not a case of making the data available. The data in development is something we cook up to allow us to test. The other part is the code we compile now will be released to production one of these times. In production, there is only a need for one database.
    We are moving from another database product where multiple databases are effectively different sets of files. We have lots of disk space so multiple databases were no problem. Oracle is such a powerful product; I can’t believe there is not some way to set up something similar.

  • Tools for capture database design based on multiple  schema

    My database is 11.2.0.1
Please suggest a tool to capture the database design.
I tested Toad, but that tool only captures on a per-table basis!

SQL Developer Data Modeler.
    http://www.oracle.com/technetwork/developer-tools/datamodeler/index.html

  • Database Design for Multiple function site

    Hi
I am working on one project which involves a multi-function site: a Company Product Catalog, a Customer Support Forum, a Document Exchange Engine, etc.
Normally we would combine ALL TABLEs into one DATABASE.
My questions are:
1) If I break them into individual DATABASEs, will it perform better? That means the Product Catalog and the Forum would have different DATABASEs, but they would use the same DOMAIN NAME.
2) I am worried about breakdown and corruption of the DATABASE, so I have the idea to separate them out. Is my idea correct or wrong?
3) I am looking for a better DATABASE DESIGN, because I know the database will become huge in the future. I would welcome your ideas and opinions.
Thank you very much.

    Creating views: not an option I think. It would involve a lot of programming with 'instead of' triggers etc.
Separate databases: a good way if the locations are completely independent and do not share information. This involves more DBA work.
Separate schemas in one database: this would make public synonyms impossible, and is probably not a good option.
Adding a location id to tables: the best way I think, and flexible. You can easily add another location, and locations can easily share information (see the sketch below).
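As a rough sketch of that last option (the table, column and view names are invented):

-- every shared table carries the owning location
CREATE TABLE product (
  product_id  NUMBER PRIMARY KEY,
  location_id NUMBER NOT NULL,
  name        VARCHAR2(100));

-- each site reads through a view restricted to its own location
CREATE VIEW catalog_site_product AS
SELECT product_id, name
FROM   product
WHERE  location_id = 1;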

  • Is ADF meant for database designed with vertical schema ?

    Hi,
I want to know whether ADF 11g is meant for a database designed with a vertical schema, where even the column names are rows in a generic table.
    Thanks in advance.

    The short answer is no...
What you are calling a "vertical schema" is what others have called an entity-attribute-value or universal data model schema. I'd advise you to do some open-minded research about these types of schemas before you proceed, if you are developing a new application. They sound great in theory, but have quite inherent usability and scalability issues. This might be a good place to start: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2314483800346542969
    John

  • Multiple Import into Multiple schemas in a single Database server ?

Is it possible to run multiple imports into multiple schemas in a single DB server?

    Hi,
Yes, you can do it with multiple export files. It may cause some performance issues, so check your server configuration before doing that.
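For instance, with Data Pump it could look something like this (the directory, dump file and schema names are hypothetical; the runs can also be launched concurrently if the server can handle it):

impdp system DIRECTORY=dp_dir DUMPFILE=sales.dmp SCHEMAS=SALES
impdp system DIRECTORY=dp_dir DUMPFILE=hr.dmp SCHEMAS=HR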

  • Oracle Streams setup for multiple schemas in a same database

We are on 11.1.0.7 and will be using Oracle 11g Streams to replicate data in real time for two schemas, from a source set of schemas to a target set within the same database. We will be doing DDL as well as DML replication.
I created the following plan and want your input. After implementing it, I created a table in SCOTT but it didn't get replicated to RPT_SCOTT; later I tried inserting a row into the table created under SCOTT, but that too didn't get replicated to RPT_SCOTT.
    Here are the steps that I used to set up my STREAMS -
Database instance: TESTDB
Schemas:
Source: SCOTT, HR
Target: RPT_SCOTT, RPT_HR
    Configuring Streams:
1. Database is in archive log mode.
2. Set up the Streams administrator.
create user STRMADMIN identified by STRMADMIN default tablespace USERS temporary tablespace temp;
grant resource, dba, AQ_ADMINISTRATOR_ROLE to STRMADMIN;
BEGIN
  DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE(
    grantee          => 'STRMADMIN',
    grant_privileges => TRUE);
END;
/
3. Set up the Streams queue.
CONNECT STRMADMIN/****
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_name  => 'STREAMS_QUEUE',
    queue_table => 'STREAMS_QUETAB',
    queue_user  => 'STRMADMIN');
END;
/
4. Add the apply rules.
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'RPT_SCOTT',
    streams_type    => 'APPLY',
    streams_name    => 'APPLY_CC_STREAM',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule  => true,
    source_database => 'TESTDB');
END;
/
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'RPT_HR',
    streams_type    => 'APPLY',
    streams_name    => 'APPLY_AB_STREAM',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule  => true,
    source_database => 'TESTDB');
END;
/
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_CC_STREAM',
    apply_user => 'STRMADMIN');
END;
/
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_AB_STREAM',
    apply_user => 'STRMADMIN');
END;
/
5. Add the capture rules.
CONNECT STRMADMIN/*****
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'SCOTT',
    streams_type    => 'CAPTURE',
    streams_name    => 'CAPTURE_CC_STREAM',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule  => true,
    source_database => 'TESTDB');
END;
/
BEGIN
  DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
    schema_name     => 'HR',
    streams_type    => 'CAPTURE',
    streams_name    => 'CAPTURE_AB_STREAM',
    queue_name      => 'STRMADMIN.STREAMS_QUEUE',
    include_dml     => true,
    include_ddl     => true,
    inclusion_rule  => true,
    source_database => 'TESTDB');
END;
/
6. Set the instantiation system change number (SCN).
CONNECT STRMADMIN/******
DECLARE
  source_scn NUMBER;
BEGIN
  source_scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
    source_schema_name   => 'SCOTT',
    source_database_name => 'TESTDB',
    instantiation_scn    => source_scn);
END;
/
7. Start the apply processes.
CONNECT STRMADMIN/******
BEGIN
  DBMS_APPLY_ADM.START_APPLY('APPLY_CC_STREAM');
END;
/
BEGIN
  DBMS_APPLY_ADM.START_APPLY('APPLY_AB_STREAM');
END;
/
8. Start the capture processes.
CONNECT STRMADMIN/******
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_CC_STREAM');
END;
/
BEGIN
  DBMS_CAPTURE_ADM.START_CAPTURE('CAPTURE_AB_STREAM');
END;
/
    Waiting for your inputs!

If I understand your code correctly, you want to do this within the same DB:
SCOTT --> RPT_SCOTT
HR --> RPT_HR
So there is a schema transformation; where is it coded?
    General info : http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_transform.htm
    More specific on schema rename : http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/strms_mtransform.htm#CHDGDHDE
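For reference, a declarative schema rename attached at apply time might look roughly like this (the rule name is a placeholder - look up the actual schema rule name of your apply process in DBA_STREAMS_RULES):

BEGIN
  -- attach a SCOTT -> RPT_SCOTT rename to the apply process's schema rule
  DBMS_STREAMS_ADM.RENAME_SCHEMA(
    rule_name        => 'STRMADMIN.RPT_SCOTT_RULE',  -- placeholder rule name
    from_schema_name => 'SCOTT',
    to_schema_name   => 'RPT_SCOTT');
END;
/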
Next: where is the initialisation of both capture schemas?
Missing:
execute DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'SCOTT');
execute DBMS_CAPTURE_ADM.PREPARE_SCHEMA_INSTANTIATION(schema_name => 'HR');
This tells Streams from where to capture the SCN.
Also, there is a (WRONG) instantiation of SCOTT for the apply, which tells the apply to consider as valid candidates all LCRs after source_scn - but where is the corresponding code for RPT_HR? Alas, you also put 'SCOTT' as the APPLY target schema while it should have been 'RPT_SCOTT'.
Whether that is correct or false depends on where you put the schema transformation. If you put the transformation at apply time, then use the SOURCE schema names (SCOTT, HR), for the LCRs will contain those names. If you put the transformation at capture time, then put the TARGET schema names (RPT_SCOTT, RPT_HR), for the LCRs will contain those names.
Let's say you put the schema transformation at capture time. Then:
Missing:
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
  source_schema_name   => 'RPT_HR',
  source_database_name => 'TESTDB',
  instantiation_scn    => source_scn);
If you attach the transformation to the apply process, then the code is:
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
  source_schema_name   => 'HR',
  source_database_name => 'TESTDB',
  instantiation_scn    => source_scn);
And this is useless:
-- useless code
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_CC_STREAM',
    apply_user => 'STRMADMIN');
END;
/
BEGIN
  DBMS_APPLY_ADM.ALTER_APPLY(
    apply_name => 'APPLY_AB_STREAM',
    apply_user => 'STRMADMIN');
END;
/
Last: you are using the same queue for two separate capture/apply streams.
Do yourself a favor and give each capture/transform/apply pair its own queue.

  • Combine multiple oracle 10g db as one 11g database with different schemas

We have three 10g databases on a Solaris machine.
We are planning to migrate them and upgrade to 11g Release 2 on a Linux machine, and finally merge them into one database as different schemas.
Has anyone got good ideas on the best strategy to do that?
Thanks

> has anyone got good ideas of the best strategy to do that
expdp/impdp
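A rough sketch of that route (the directory, dump file and schema names are hypothetical; REMAP_SCHEMA is only needed where schema names collide across the source databases):

# on each source database
expdp system DIRECTORY=dp_dir DUMPFILE=db1.dmp SCHEMAS=APP1

# on the consolidated 11gR2 database
impdp system DIRECTORY=dp_dir DUMPFILE=db1.dmp REMAP_SCHEMA=APP1:DB1_APP1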

  • Single schema or multiple schemas

    Hello
    A few years ago I worked on a greenfield project where we were building a system to serve 20 or so different departments with 1200+ users. The approach the existing DBA had taken was to use a single schema for all objects and for all apps. The big advantage I found with this approach was for development, we could have one database and each developer could develop whatever they wanted in their own schema. Name resolution meant that they could override the main copy of whichever object they were working on just by having it in their schema. Unfortunately the project went nowhere and I left after 3 months so I never got to see what issues were raised with the system in production. So I'm wondering, has anyone else taken this approach and if so what would you say are the main things to be wary of? Especially things that aren't a problem when the objects are distributed between multiple schemas.
    Thank you in advance
    David

    user12065404 wrote:
    Hi Ed
    Thank you for your reply. I think I need to clarify what I meant a little.
On a number of sites I've been to there have been multiple applications spread out over multiple schemas for the same business unit. To draw on the emp example, it would be like having the EMP schema containing the emp table and the DEPT schema containing the dept table, with a single HR application (for a single business unit/department) selecting from both. If you scale this up to thousands of tables and 30-odd schemas, it's a bit of a nightmare from the perspective of code location and permissions, because each department needs to look across schemas to get all of the data to do its job. Effectively, tables and code had become segregated into separate schemas by business "function", and it means that the applications are so intertwined with the different schemas that to support more than one business unit you either need a completely separate database or you have to look at VPD.
App         SCHEMA      Table
HR          emp         emp
            dept        dept
Invoicing   emp         emp
            invoicing   inv
            contracts   contract
            orders      ORDER
Ordering    emp         emp
            contracts   contract
            orders      ORDER
            orders      orderitem
...
The advantage I saw with a single schema for all the data for a single business unit is the one you have mentioned - i.e. you can very easily set up a new business unit that is totally separate from the others by having a new schema and pointing the same version of the same application at it. Or, failing that, using VPD. And on top of that, security is simplified - no need for direct grants from one schema to the other, and especially no need for WITH GRANT OPTION in the case of a view that pulls data from various schemas where rights to select from the view have to be granted to a role.
App         SCHEMA      Table
HR          abc_ltd     emp
            abc_ltd     dept
Invoicing   abc_ltd     emp
            abc_ltd     inv
            abc_ltd     contract
            abc_ltd     ORDER
Ordering    abc_ltd     emp
            abc_ltd     contract
            abc_ltd     ORDER
            abc_ltd     orderitem
...
The other big advantage I saw was from a development and testing perspective. You can have one testing database supporting lots of developers working on lots of different projects against the same core data set and code. The multiple-schema setup I described above means that effectively you need to have one testing database per project - which becomes unmanageable when databases start growing to the TB range.
    David
OK, now it is a lot more clear.
    What you have here is a problem of some data (employee) being enterprise data, and other data being application data.
I take it your "invoicing" app is used by the sales side of the business, and "ordering" is used by the purchasing side (my last assignment when I was an apps analyst was purchasing/inventory control for an auto manufacturer).
With what I know at this point, I'd have an HR schema, an INVOICING schema, and an ORDERING schema. (Well, actually, I'd have SALES and PURCHASING, because INVOICING and ORDERING are going to have to integrate with other stuff in those areas.) Even though invoicing and ordering both have a CONTRACT table, I'd think the properties of a contract for one would be different from the other, so the table design itself would be different. How about tables SALES_CONTRACT and PURCHASING_CONTRACT? Same concept for ORDERS, and both sales and purchasing would need header and details: SALES_ORDER_HEADER, SALES_ORDER_DETAIL, PURCH_ORDER_HEADER, PURCH_ORDER_DETAIL.
The employee data these two apps need should be pulled directly from the HR emp table, with access limited to stored procedures that control what data is exposed - or from materialized views. Never try to maintain the same data in two places at once; it will never stay consistent.
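As a hedged illustration of the materialized-view option (the names follow the classic emp example; a fast-refreshable version would also need a materialized view log on hr.emp):

-- in the SALES schema: a read-only, periodically refreshed copy of HR data
CREATE MATERIALIZED VIEW emp_mv
  REFRESH COMPLETE ON DEMAND
AS
SELECT empno, ename, deptno
FROM   hr.emp;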

  • Separating a database onto multiple servers

    We are working on a product that uses a set of SQL Server databases that are currently all in one instance and on one server. Now that we are growing, we need more processing power/space. So we want to move the databases onto multiple servers.
So that we would not need to "shard", we created separate databases (with the same schema) for each company that purchased the product. We could readily put different company databases onto the different servers.
    The question is this, how to we handle the routing to the different servers?
If we had "sharded" the data in one database with different keys, SQL Server tools would have handled the routing for us. Because we actually separated the databases, do we have to do the routing manually? (That is, do we need to figure out that when company A logs in they need to go to database server aa.aa.aa while company B needs to go to server bb.bb.bb?) Or is there a product that can help us with the routing?
    Thanks!
www.insteptech.com
    msmvps.com/blogs/deborahk
    We are volunteers and ask only that if we are able to help you, that you mark our reply as your answer. THANKS!

    Hello,
Based on your description, it seems that you want to copy the schema from a "shared" database onto other servers and separate the data into the tables based on the company name.
If I understand correctly, you can try to use the SQL Server Import and Export Data Wizard to copy data from a table or view to the destination server, specifying a query to restrict the data to copy. Or you can use a linked server and a SELECT INTO statement with a condition to move the data.
As for "handling routing" and balancing capability: in order to meet your requirement, you have to load balance through your application's design.
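One common pattern for that application-level routing (a sketch only; the table and column names are invented) is a small shared directory database the application queries at login:

-- in a central directory database every app instance can reach
CREATE TABLE dbo.CompanyRouting (
    CompanyId    INT           NOT NULL PRIMARY KEY,
    ServerName   NVARCHAR(128) NOT NULL,  -- e.g. 'bb.bb.bb'
    DatabaseName NVARCHAR(128) NOT NULL
);

-- at login, look up where this company's data lives,
-- then open the connection against that server/database
SELECT ServerName, DatabaseName
FROM   dbo.CompanyRouting
WHERE  CompanyId = @CompanyId;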
    Regards,
    Fanny Liu
    TechNet Community Support

  • Database design to support parameterised interface with MS Excel

    Hi, I am a novice user of SQL Server and would like some advice on how to solve a problem I have. (I hope I have chosen the correct forum to post this question)
I have created a SQL Server 2012 database that comprises approx 10 base tables, with a further 40+ views that either summarise the base table data in various ways, or build upon other views to create more complex data sets (up to 4 levels of view).
    I then use EXCEL to create a dashboard that has multiple pivot table data connections to the various views.
    The users can then use standard excel features - slicers etc to interrogate the various metrics.
The underlying database holds a single day's worth of information, but I would like to extend this to cover multiple days' worth of data, with the Excel spreadsheet having a cell that defines the date for which information is to be retrieved. (The underlying data tables would need to be extended to have a date field.)
I can see how the Excel connection string can be modified to filter the results such that a column value matches the date field, but how can this date value be passed down through all the views to ensure that information from the base tables is restricted to the specified date, rather than the final result set being filtered as it is passed back to Excel? I would rather not have the server resolve the views for the complete data set.
I considered parameterisation of views, but I don't believe views support parameters; I also considered stored procedures, but I don't believe that stored procedures allow result sets to be used as pseudo-tables.
What other options do I have? Or have I failed to grasp the way SQL Server creates its execution plans - will simply having the filter at the top level ensure the result set is minimised at the lower level? (I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations.)
As an example, here are 3 of the views:
Table A has a row per system event (30,000+ per day), each event having an identity, a TYPE (e.g. Arrival or Departure), a time of event, and a planned time for the event (a given identity will have a sequence of Arrival and Departure events).
View A compares separate rows to determine how long passed between the Arrival and Departure events for an identity.
View B compares separate rows to determine how long was planned between the Arrival and Departure events for an identity.
View C uses View A and View B to provide the variance between actual and planned.
The Excel dashboard has graphs showing information retrieved from Views A, B and C. The dashboard is only likely to need to query a single day's worth of information.
    Thanks for your time.

You are posting in the database design forum, but it seems to me that you have 2 separate but highly dependent issues - neither of which is really database design related at this point. Rather, you have a user interface issue and a database programmability issue. Those I cannot really address, since much of that discussion requires knowledge of your users, how they interface with the database, what they use the data for, etc. In addition, it seems that Excel is the primary interface for your users - so it may be that you should post your question to an Excel forum.
However, I do have some comments. First, views based on views are generally a bad approach. Absent the intention of indexing (i.e., materializing) the views, the db engine does nothing different for a view than it does for any ad-hoc query. Unfortunately, the additional layering of logic can impede the effectiveness of the optimizer. The more complex your views become and the deeper the layering, the greater the chance that you befuddle the optimizer.
> I would rather not have the server resolve the views for the complete data set
I don't understand the above statement, but it scares me. IMO, you DO want the server to do as much work as possible, since it is closest to the data and has (or should have) the resources to access and manipulate the data and generate the desired results. You DON'T want to move all the raw data involved in a query over the network and into the client machine's storage (memory or disk) and then attempt to compute the desired values.
> I considered parameterisation of views, but I dont believe views support parameters, I also considered stored procedures, but I dont believe that stored procedures allow result sets to be used as pseudo tables.
Correct on the first point, though there is such a thing as a TVF (table-valued function), which is similar in effect. Before you go down that path, let's address the second statement. I don't understand that last bit about "used as pseudo tables", but that sounds more like an Excel issue (or maybe an assumption). You can execute a stored procedure and use/access the resultset of this procedure in Excel, so I'm not certain what your concern is. User simplicity perhaps? Maybe just a terminology issue? Stored procedures are something I would highly encourage for a number of reasons. Since you refer to pivoting specifically, I'll point out that SQL Server natively supports that function (though perhaps not in the same way/degree Excel does). It is rather complex T-SQL - and this is one reason to advocate for stored procedures. Separate the structure of the raw data from the user.
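To illustrate the TVF idea, an inline table-valued function behaves like a parameterised view (a sketch; the table and column names are invented):

-- an inline TVF: effectively a view with a date parameter
CREATE FUNCTION dbo.EventsForDay (@day DATE)
RETURNS TABLE
AS
RETURN
    SELECT EventId, EventType, EventTime, PlannedTime
    FROM   dbo.TableA
    WHERE  EventDate = @day;
GO

-- used like a table, e.g. by higher-level queries or from Excel:
-- SELECT * FROM dbo.EventsForDay('2014-06-01');

Because an inline TVF is expanded into the calling query, the date predicate is pushed down to the base table rather than filtering a fully resolved view.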
> (I dont really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)
DTA has its limitations. What it doesn't do is evaluate the "model" - which is where you might have more significant issues. Tuning your queries and indexing your tables will only go so far to compensate for a poorly designed schema (not that yours is - just a generalization). I did want to point out that your refresh process involves many factors - the time to generate a resultset on the server (including plan compilation, loading the data from disk, etc.), transmitting that data over the network, receiving and storing the resultset in the client application, manipulating the resultset into the desired form/format, and then updating the display. Given that, you need to know how much time is spent in each part of that process - no sense wasting time optimizing the smallest time consumer.
So now to your sample table - Table A. First, I'll give you my opinion of a flawed approach. Your table records separate facts about an entity as multiple rows. Such an approach is generally a schema issue for a number of reasons. It requires that you outer join in some fashion to get all the information about one thing into a single row - that is why you have a view to compare rows and generate a time interval between arrival and departure. I'll take this a step further and assume that your schema/code likely has an assumption built into it - specifically, that a "thing" will have no more than 2 rows and that there will only be one row with type "arrival" and one row with type "departure". Violate that assumption and things begin to fall apart. If you have control over this schema, then I suggest you consider changing it. Store all the facts about a single entity in a single row. Given the frequency with which I see this pattern, I'll guess that you cannot. So let's move on.
30 thousand rows is tiny, so your current volume is negligible. You still need to optimize your tables based on usage, so you need to address that first. How is the data populated currently? Is it done once as a batch? Is it done throughout the day - and in what fashion (inserts vs updates vs deletes)? You only store one day of data - so how do you accomplish that specifically? Do you purge all data overnight and re-populate? What indexes have you defined? Do all tables have a clustered index, or are some (most?) of them heaps? OTOH, I'm going to guess that the database is at most a minimal issue now and that most of your concerns are better addressed at the user interface and how it accesses your database. Perhaps now is a good time to step back and reconsider your approach to providing information to the users. Perhaps there is a better solution - but that requires an understanding of your users, the skillset of everyone involved, what you have to work with, etc. Maybe just some advanced Excel training? I can't really say, and it might be a better question for a different forum.
One last comment - "identity" has a special meaning in SQL Server (and most database engines, I'm guessing). So when you refer to identity, do you mean an identity column, or the logical identity (i.e., natural key) of the "thing" that Table A is attempting to model?

  • Multiple schema used on apex?

hi all,
I posted a thread here asking for comments and suggestions on database design. I just wanted to ask for comments from APEX developers as well: is it recommended to have more than one schema if I use Oracle Application Express as the front-end tool?
Considering that having multiple schemas would mean multiple workspaces, will there be features I won't be able to use, such as single sign-on for all applications?
What I want to know is: are there any ways to have a user sign in to one application and be authenticated in all applications, even though those applications are on different workspaces?
    thanks
    allen

    Allen,
One workspace doesn't mean one schema. You can have multiple schemas assigned to one workspace.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1

  • Multiple Schemas under one user account with XE 10g

    Hi,
I am using (learning) XE 10g. I would like to know if it is possible to have multiple schemas under one user account and have the schemas logically separated. As of right now, I have three schemas that I am working with, each one under a different user account. This is inconvenient, because I have to log out of one user account and log in to another simply to be able to work with another schema.
    Thanks

It isn't possible to have multiple schemas under one database user account. It is of course possible to grant rights to other database users and/or roles in order to allow access to the tables/data from other accounts. In Oracle there is a one-to-one mapping between schema and user.
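That said, you can avoid the logout/login cycle by switching name resolution to another schema (a small sketch; SCHEMA_B stands in for one of your other schemas, and you still need object privileges on its tables):

-- resolve unqualified names against SCHEMA_B for the rest of this session
ALTER SESSION SET CURRENT_SCHEMA = SCHEMA_B;

-- this now reads SCHEMA_B.some_table without logging in as SCHEMA_B
SELECT COUNT(*) FROM some_table;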
    Niall Litchfield
    http://www.orawin.info/
