Database design - Better approach than XML/XSD

Hi, I am designing a web-based application. The scenario I am working on is this: I have around 50+ distinct objects. They have a few common fields, but the other fields differ per object (say, Employee, Customers, Assets, etc.). We are also providing a facility where the customer can add / rename / delete table columns from the UI. We are planning to use XML with XSD/XQuery, storing all objects in one table only, to accommodate the dynamic-schema requirement. But it has a performance impact. Is there any other, better way to handle this situation?

Basically, it really depends on what you are doing with the data. XML isn't so bad when you are just attaching some data to an object, but when you need to use it in queries over a lot of rows, it gets really cumbersome.
My favorite way is to just add columns to the table, probably using sparse columns. That gives you column-level access like normal (including indexes and constraints), plus a column set that lets you read and write the sparse data using XML-style access.
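For example, something like this (a minimal sketch - the table and column names are invented):

CREATE TABLE dbo.Asset
(
    AssetID        int IDENTITY PRIMARY KEY,
    Name           nvarchar(100) NOT NULL,
    -- customer-defined fields as sparse columns; NULLs cost almost no space
    WarrantyMonths int SPARSE NULL,
    SerialNumber   nvarchar(50) SPARSE NULL,
    -- the column set exposes all sparse columns as a single XML value
    ExtraFields    xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);

-- normal column-level access still works, and can be indexed:
SELECT AssetID, SerialNumber FROM dbo.Asset WHERE WarrantyMonths > 12;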
Entity Attribute Value (http://en.wikipedia.org/wiki/Entity-attribute-value_model) style works pretty well too. Basically you store the key of an object, the name of an attribute, and
the value (and perhaps the name of the entity, but ideally I would suggest one table per object you are extending.) It is more cumbersome to query, but works really naturally for front end tools.
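A sketch of the one-table-per-extended-object flavor (names invented; it assumes an existing dbo.Employee table):

CREATE TABLE dbo.EmployeeAttribute
(
    EmployeeID     int         NOT NULL REFERENCES dbo.Employee(EmployeeID),
    AttributeName  sysname     NOT NULL,
    AttributeValue sql_variant NULL,
    PRIMARY KEY (EmployeeID, AttributeName)
);

-- "custom fields for employee 42"; pivoting to columns is left to the front end
SELECT AttributeName, AttributeValue
FROM dbo.EmployeeAttribute
WHERE EmployeeID = 42;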
However... one thing I think I hear in your requirements is that you are shooting for 1 table. I would strongly suggest against that approach in SQL Server. SQL Server's primary strength is as a relational engine. Sure, it is more work to adjust your schema when requirements change, but if you can get the base schema down, having these flexible elements in the implementation is not so damaging to performance.
For example, say you are building a database of students. Students, teachers, classrooms, subjects, etc., are all pretty much known. But there are a lot of attributes that may differ between usages, so adding a flexible element is very useful for the end users. If you really want a tool with a fully flexible back end (something I am very much against unless you keep the rowcount low), look at how ORM tools create their data storage using tables. SQL Server itself is really and truly a flexible data-storage platform: using CREATE TABLE, ALTER TABLE, etc., you can fashion almost any structure on the fly...
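That is also the mechanism your add/rename/delete-column UI could drive directly - a hedged sketch (invented names; real code should whitelist the identifiers before building dynamic DDL):

ALTER TABLE dbo.Customer ADD LoyaltyTier nvarchar(20) SPARSE NULL;      -- user adds a field
EXEC sp_rename 'dbo.Customer.LoyaltyTier', 'MembershipTier', 'COLUMN';  -- user renames it
ALTER TABLE dbo.Customer DROP COLUMN MembershipTier;                    -- user deletes it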
Louis
Without good requirements, my advice is only guesswork. Please don't hold it against me if my answer addresses only my interpretation of your question.

Similar Messages

  • Database design vs Application/UI design, pls help..

    Hi All,
    I have to define a discount that can be given on combinations of the following attributes: CustomerGroup, CustomerType, Customer, Region, District, Area, ProductGroup, ProductBrand, Product.
    From the DBA I get this table:
    CREATE TABLE Promotions
    (PromotionNo int NOT NULL,
    PromotionName varchar(80) NOT NULL,
    LimitAmount NUMBER(15,2) NOT NULL,
    Status VARCHAR(15) NOT NULL,
    Discount numeric(5,3) NOT NULL,
    CustGroup int NULL,
    CustType int NULL,
    Customer int NULL,
    Region int NULL,
    District int NULL,
    Area int NULL,
    ProductGroup int NULL,
    ProductBrand int NULL,
    Product int NULL);
    From the UI perspective, the requirement is to make data entry master-detail, with one master and three details, like this hierarchy:
    PromotionMaster
    (PromotionNo int,
    PromotionName varchar(80),
    LimitAmount NUMBER(15,2),
    Status VARCHAR(15))
    PromotionDetailByCustomer
    (PromotionNo int ,
    CustGroup int ,
    CustType int ,
    Customer int )
    PromotionDetailByArea
    (PromotionNo int ,
    Region int ,
    District int ,
    Area int );
    PromotionDetailByProduct
    (PromotionNo int ,
    ProductGroup int ,
    ProductBrand int ,
    Product int );
    So here is the issue: to keep the Promotions table up to date, once the user presses SAVE in the UI, the application must INSERT the following rows into the Promotions table:
    INSERT INTO Promotions
    SELECT ..... FROM PromotionMaster JOIN
    PromotionDetailByCustomer CROSS JOIN
    PromotionDetailByArea CROSS JOIN
    PromotionDetailByProduct
    Is there any better approach than this?
    Thank you for your help,
    xtanto

    "Isn't it normalized?" In your previous post you said:
    "From the dba I get this table: CREATE TABLE Promotions ..."
    That table is not normalised.
    "Then I need a mechanism to make a cross join from all detail tables to produce details in the PROMOTIONS table."
    Why? What purpose does that serve? If you already have actual tables representing PromotionMaster, PromotionDetailByCustomer, PromotionDetailByArea and PromotionDetailByProduct, then the PROMOTIONS table is redundant. If you don't actually have those four tables in your database schema, then that's bad: you should split the PROMOTIONS table into its constituent parts.
    "Why? To make it easier for me to query which discounts a SalesOrder will get."
    If the PROMOTIONS table is intended to assist in that query, then I think a table is the wrong answer. It is both an unnecessary waste of disk space and a source of update anomalies. A view would be better, although three views might be better still (one joining PromotionMaster with PromotionDetailByCustomer, one joining PromotionMaster with PromotionDetailByArea and one joining PromotionMaster with PromotionDetailByProduct). But I think the best of all would be a straight query on PromotionMaster with correlated sub-queries against PromotionDetailByCustomer, PromotionDetailByArea and PromotionDetailByProduct.
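    Something along these lines (a hedged sketch - the bind variables and matching rules are invented placeholders):
    SELECT m.PromotionNo, m.PromotionName, m.LimitAmount, m.Status
    FROM   PromotionMaster m
    WHERE  EXISTS (SELECT 1 FROM PromotionDetailByCustomer c
                   WHERE c.PromotionNo = m.PromotionNo
                   AND   c.Customer    = :order_customer)
    AND    EXISTS (SELECT 1 FROM PromotionDetailByArea a
                   WHERE a.PromotionNo = m.PromotionNo
                   AND   a.Area        = :order_area)
    AND    EXISTS (SELECT 1 FROM PromotionDetailByProduct p
                   WHERE p.PromotionNo = m.PromotionNo
                   AND   p.Product     = :order_product);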
    Still, without knowing more about the business rules and other sundry details (usage, table sizes, etc) it's difficult for me to be prescriptive.
    Cheers, APC

  • Database design to support parameterised interface with MS Excel

    Hi, I am a novice user of SQL Server and would like some advice on how to solve a problem I have. (I hope I have chosen the correct forum to post this question.)
    I have created a SQL Server 2012 database that comprises approx 10 base tables, with a further 40+ views that either summarise the base-table data in various ways, or build upon other views to create more complex data sets (up to 4 levels of views).
    I then use Excel to create a dashboard that has multiple pivot-table data connections to the various views.
    The users can then use standard Excel features - slicers etc. - to interrogate the various metrics.
    The underlying database holds a single day's worth of information, but I would like to extend this to cover multiple days' worth of data, with the Excel spreadsheet having a cell that defines the date for which information is to be retrieved. (The underlying data tables would need to be extended to have a date field.)
    I can see how the Excel connection string can be modified to filter the results so that a column value matches the date field, but how can this date value be passed down through all the views to ensure that information from the base tables is restricted to the specified date, rather than the final result set being filtered as it is passed back to Excel? I would rather not have the server resolve the views for the complete data set.
    I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe stored procedures allow result sets to be used as pseudo-tables.
    What other options do I have? Or have I failed to grasp the way SQL Server creates its execution plans - will simply having the filter at the top level ensure the result set is minimised at the lower levels? (I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds, following SQL Server Database Engine Tuning Advisor recommendations.)
    As an example of 3 of the views:
    Table A has a row per system event (30,000+ per day), each event having an identity, a TYPE (e.g. Arrival or Departure), a time of event, and a planned time for the event (a given identity will have a sequence of Arrival and Departure events).
    View A compares separate rows to determine how long between the Arrival and Departure events for an identity.
    View B compares separate rows to determine how long between the planned Arrival and Departure events for an identity.
    View C uses View A and View B to provide the variance between actual and planned.
    The Excel dashboard has graphs showing information retrieved from Views A, B and C. The dashboard is only likely to need to query a single day's worth of information.
    Thanks for your time.

    You are posting in the database design forum, but it seems to me that you have 2 separate but highly dependent issues - neither of which is really database-design related at this point. Rather, you have a user-interface issue and a database-programmability issue. Those I cannot really address, since much of that discussion requires knowledge of your users, how they interface with the database, what they use the data for, etc. In addition, it seems that Excel is the primary interface for your users - so it may be that you should post your question to an Excel forum.
    However, I do have some comments. First, views based on views is generally a bad approach. Absent the intention of indexing (i.e., materializing) the views, the db engine does nothing different for a view than it does for any ad-hoc query. Unfortunately, the additional layering of logic can impede the effectiveness of the optimizer. The more complex your views become and the deeper the layering, the greater the chance that you befuddle the optimizer.
    "I would rather not have the server resolve the views for the complete data set"
    I don't understand the above statement, but it scares me. IMO, you DO want the server to do as much work as possible, since it is closest to the data and has (or should have) the resources to access and manipulate the data and generate the desired results. You DON'T want to move all the raw data involved in a query over the network and into the client machine's storage (memory or disk) and then attempt to compute the desired values.
    "I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe stored procedures allow result sets to be used as pseudo-tables."
    Correct on the first point, though there is such a thing as a TVF (table-valued function), which is similar in effect. Before you go down that path, let's address the second statement. I don't understand that last bit about "used as pseudo tables", but it sounds more like an Excel issue (or maybe an assumption). You can execute a stored procedure and use/access the result set of that procedure in Excel, so I'm not certain what your concern is. User simplicity perhaps? Maybe just a terminology issue? Stored procedures are something I would highly encourage for a number of reasons. Since you refer to pivoting specifically, I'll point out that SQL Server natively supports that function (though perhaps not in the same way/degree Excel does). It is rather complex T-SQL - and this is one reason to advocate for stored procedures: separate the structure of the raw data from the user.
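    To make the inline-TVF option concrete - a minimal sketch with invented names (it assumes your Table A grows the event_date column you mentioned):

    -- an inline table-valued function behaves like a parameterised view
    CREATE FUNCTION dbo.EventDurations (@report_date date)
    RETURNS TABLE
    AS RETURN
    (
        SELECT event_identity,
               DATEDIFF(minute,
                        MIN(CASE WHEN event_type = 'Arrival'   THEN event_time END),
                        MAX(CASE WHEN event_type = 'Departure' THEN event_time END)) AS actual_minutes
        FROM   dbo.TableA
        WHERE  event_date = @report_date    -- the filter is applied at the lowest level
        GROUP  BY event_identity
    );

    -- Excel (or anything else) then queries it like a table:
    -- SELECT * FROM dbo.EventDurations('2014-01-20');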
    "(I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)"
    DTA has its limitations. What it doesn't do is evaluate the "model" - which is where you might have more significant issues. Tuning your queries and indexing your tables will only go so far to compensate for a poorly designed schema (not that yours is - just a generalization). I did want to point out that your refresh process involves many factors - the time to generate a result set on the server (including plan compilation, loading the data from disk, etc.), transmitting that data over the network, receiving and storing the result set in the client application, manipulating the result set into the desired form/format, and then updating the display. Given that, you need to know how much time is spent in each part of that process - no sense wasting time optimizing the smallest time consumer.
    So now to your sample table - Table A. First, I'll give you my opinion of a flawed approach: your table records separate facts about an entity as multiple rows. Such an approach is generally a schema issue for a number of reasons. It requires that you outer join in some fashion to get all the information about one thing into a single row - that is why you have a view to compare rows and generate a time interval between arrival and departure. I'll take this a step further and assume that your schema/code likely has an assumption built into it - specifically, that a "thing" will have no more than 2 rows, and that there will be only one row with type "arrival" and one row with type "departure". Violate that assumption and things begin to fall apart. If you have control over this schema, then I suggest you consider changing it: store all the facts about a single entity in a single row. Given the frequency with which I see this pattern, I'll guess that you cannot. So let's move on.
    30 thousand rows is tiny, so your current volume is negligible. You still need to optimize your tables based on usage, so you need to address that first. How is the data populated currently? Is it done once as a batch? Is it done throughout the day - and in what fashion (inserts vs. updates vs. deletes)? You only store one day of data - so how do you accomplish that, specifically? Do you purge all data overnight and re-populate? What indexes have you defined? Do all tables have a clustered index, or are some (most?) of them heaps? OTOH, I'm going to guess that the database is at most a minimal issue now, and that most of your concerns are better addressed at the user interface and how it accesses your database. Perhaps now is a good time to step back and reconsider your approach to providing information to the users. Perhaps there is a better solution - but that requires an understanding of your users, the skillset of everyone involved, what you have to work with, etc. Maybe just some advanced Excel training? I can't really say, and it might be a better question for a different forum.
    One last comment - "identity" has a special meaning in SQL Server (and most database engines, I'm guessing). So when you refer to identity, do you mean an identity column, or the logical identity (i.e., natural key) of the "thing" that Table A is attempting to model?

  • Database design for Help Desk application.

    Hello, does anyone know of a database design for a Help Desk application - ERD, business rules and features?

    The best way to approach a database design is to write a specification for the application. Document what processes the help-desk technicians will do. In the process, identify what pieces of information they work with. When you have the complete specification written, you can then begin grouping the pieces of information they work with together. For example, a ticket may have a number, status, priority and a person to whom it is assigned. The person to whom the ticket is assigned will have a name, phone number, e-mail address and a list of technical skills.
    So in this overly simplified example, we could have a table that contains ticket information, a table that has information about technicians, and a table of skills. Then ask yourself questions like "Can one ticket be handled by more than one technician?", "Can one technician handle more than one ticket?", "Can a technician have more than one skill?" In this way, you can begin seeing the one-to-one, one-to-many and many-to-many relationships that exist.
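    To illustrate the many-to-many case - if the answer to the first two questions is "yes", the design needs a junction table (a sketch with invented names):

    CREATE TABLE ticket (
        ticket_no  int PRIMARY KEY,
        status     varchar(20) NOT NULL,
        priority   int NOT NULL
    );
    CREATE TABLE technician (
        tech_id    int PRIMARY KEY,
        name       varchar(80) NOT NULL
    );
    -- many-to-many: one ticket can have many technicians, and vice versa
    CREATE TABLE ticket_assignment (
        ticket_no  int REFERENCES ticket(ticket_no),
        tech_id    int REFERENCES technician(tech_id),
        PRIMARY KEY (ticket_no, tech_id)
    );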

  • SIMPLE Database Design Problem !

    Mapping is a big problem for many complex applications.
    So what happens if we put all the tables into one table called ENTITY?
    I have more than 300 attribute types, and there will be lots of null values in the records of that single table, as every entity type uses the same table.
    Other than wasting space, what kind of performance penalties do I get if I put a clustered index on the entityType column of that table?
    Definition of the table:
    ENTITY
        EntityID           uniqueidentifier
        EntityType         (tells the entity type name)
        Name
        LastName
        CompanyName        ...and so on, 300 attribute types in all
        OpportunityPeriod
    PS: There is also another table called RELATION that records the relations between entities.

    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    "Check the column with WHERE _entityType='PERSON'; as there is a clustered index on entityType, there is NO performance decrease. There is also a clustered index on the RELATION table, on relationType. When we say WHERE _entityType='PERSON' or WHERE relationType='CONTACTMECHANISM', it scans the clustered index first. It acts like a table, as it is physically ordered."
    I was thinking in terms of using several conditions in the same SELECT, such as
    WHERE _entityType = 'PERSON'
      AND LastName LIKE 'A%'
    In your case you have to use at least two indices, and since your clustered index comes first...
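    (For that sort of query, a hypothetical composite index could serve both conditions with a single range seek:)
    CREATE INDEX ix_entity_type_lastname ON ENTITY (EntityType, LastName);
    -- a seek on (EntityType, LastName) now satisfies:
    -- WHERE EntityType = 'PERSON' AND LastName LIKE 'A%'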
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
    "Have you ever thought of using constraints in your model? How would you realize those?
    ...in fact we did. We have arranged the generic object model in an object database. The knowledge information is held in the object database."
    So your relational database is used only as "simple" storage; everything has to go through your object database.
    "But the data schema is held in the RDBMS, with code generation that creates a schema to hold the data."
    If you think that this approach makes sense, why not.
    "But to have an efficient mapping and good performance we have thought about building only one table. The problem is, we know we are losing some space, but the thing is, hard disk is much cheaper than RAM and CPU, so our trade-off concentrated on the storage cost. But I still wonder if there is a point that I have missed in terms of performance?"
    Just test your approach using sufficient data - only you know how many records you have to store in your model.
    "PS: It is not wise or effective to use generic object models in object databases either, as the CPU cost is high when you are holding the data."
    I don't know if I'd have taken your approach - using two database systems to hold data and business logic.
    "PS2: RDBMS is a value-based system whereas object databases are identity-based. We are trying to be in the gray area of both worlds."
    Like I wrote: if your approach works and scales to the required size, why not? I would assume that you did a load test with your approach.
    What I would question, though, is your calling this a "SIMPLE Database Design" problem. I don't see anything simple in your approach when it comes to implementation.
    C.

  • DataBase Design For InvoiceSystems

    Sir, I am doing my final-year B.Sc. (Comp. Sci.) and I have an Oracle project to do.
    I just want a detailed database design for Invoice Systems.
    Please could you help me in terms of database design...


  • Database design for DSS

    Hello All,
    I have to design a database for a DSS. It is basically an MIS project which will be used for reporting purposes.
    Can anyone please tell me what considerations should be made while designing such a system?
    Any documents or web links would be of great help.
    Thanks in advance
    Ashwin N.


  • Convert SQL server database into SAP readable (encrypted) XML for SAP tool?

    Could anyone kindly let me know the procedure to convert a SQL Server database into SAP-readable (encrypted) XML for the SAP Authoring tool?

    So if I understood correctly, there is an existing proprietary question bank in SQL Server, and you are looking at an option to migrate all the tests and questions from the existing system to the LSO system. Right?
    I am still not clear on the XML conversion. Have you found a solution which could be achieved through an XML file? I am not aware of a way through which you could import just an XML file and create tests/questions. If you have a sample XML file, then forward it to me so that I can do some testing on my end. To my knowledge, you could do one of the following.
    1. Create the tests and questions manually in the Authoring Environment. It will be a time-consuming task; based on the number of questions you have, you might have to assemble a team of content developers to achieve this.
    2. Alternatively, you could create an Adobe Flash-based assessment. The Flash component would be the front end and would read from an XML file to display the questions and drive the functionality. This would be easier and less time-consuming than creating the assessments manually in the Authoring Environment. However, you might miss out on some of the functionality available in the Test Author of the Authoring Environment unless you have all of it replicated inside Flash. This requires a one-time effort to create the Flash template and the XML file structure; once that is created, you can create multiple assessments by just replacing the XML file. If you select this approach, then you would have to ensure the data from SQL is converted into the desired XML format required by your Flash component.
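    For that last step, SQL Server can emit the XML itself - a minimal sketch, assuming a hypothetical Questions table (the element names would have to match whatever your Flash component expects):

    SELECT  q.QuestionId    AS '@id',
            q.QuestionText  AS 'text',
            q.CorrectAnswer AS 'answer'
    FROM    dbo.Questions AS q
    FOR XML PATH('question'), ROOT('questions');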
    Please let me know if you require any further guidance or clarification regarding this.
    Regards,
    Ravi Sekhar

  • Replication better approach

    Hi All,
    I am doing transactional replication of one database and would like to know the better approach, plus some clarification.
    For example: I have database A and want to replicate it to two different locations, which creates two publications for database A under Local Publications. My question is: if it creates two publications, does it send the data twice to the distributor?
    And one more question: I want to replicate the same DB to two different locations, in the US and Kolkata. My publisher DB is also in Kolkata, and I want one replication server in Kolkata and one in the US, so I am using a remote distributor in Kolkata. This is my approach:
    A - Publisher in Kolkata
    B - Distributor in Kolkata
    C - Subscriber in Kolkata
    D - Subscriber in the US
    Please let me know whether I should use pull or push subscriptions, and whether there is any other approach I should use for sending data over the WAN.
    If any modification is required, then please help.
    Thanks in advance.

    I am still a little confused about what you are trying to do.
    It sounds like you have a single publisher/distributor and are replicating to 2 different subscribers, and you want to make this as efficient as possible.
    Your topology is the best: if data makes it to subscriber 1 but does not reach subscriber 2, you want it to also go to subscriber 2 when it comes back online. This store-and-replay method ensures that replication can pick up where it left off, and that both subscribers eventually get the same dataset, although they might not get the same data at the same time - i.e., Kolkata, being closer to the publisher, will have lower latency and be more up to date than the US server.
    Looking for a book on SQL Server 2008 Administration? http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X
    Looking for a book on SQL Server 2008 Full-Text Search? http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941

  • DataBase Design For Inventory Systems

    Sir, I am doing my final-year B.Sc. (Comp. Sci.) and I have an Oracle project to do.
    I just want a detailed database design for Inventory Systems.
    Please could you help me in terms of database design...


  • Database design and pl/sql vs external procedures

    hi,
    My project involves predicting the arrival time of a bus at a bus stop, given statistical data of traffic patterns on the previous 'n' (say 3) days, as well as the current location of the bus (latitude/longitude).
    Given the current bus location, I derive the distance until the destination bus stop, which must be translated into a time until arrival.
    I've listed below the triggers and procedures involved in making the prediction. These procedures, especially the determination of perpendicular distances, involve some complex trigonometric operations. I would like to know if my approach is correct and whether my database design is suited to the operations to be performed.
    Will it be more efficient to implement the procedures as external procedures or as PL/SQL blocks?
    This is my database design:
    LINKS (a link is the road segment between adjacent bus stops)
        LINK_ID          NUMBER   [PRIMARY KEY]
        START_LATITUDE   NUMBER
        START_LONGITUDE  NUMBER
        START_STOP_ID    NUMBER
        END_LATITUDE     NUMBER
        END_LONGITUDE    NUMBER
        END_STOP_ID      NUMBER
        LINK_LENGTH      NUMBER
    BUS_ROUTE
        ROUTE_ID         NUMBER
        LINKS_ENROUTE    VARRAY(30) OF NUMBER
        STOPS_ENROUTE    VARRAY(30) OF NUMBER
    TRACK (keeps track of the current location of each bus)
        BUS_ID           NUMBER   [PRIMARY KEY]
        ROUTE            VARCHAR2(20)
        LATITUDE         NUMBER
        LONGITUDE        NUMBER
        TS               TIMESTAMP
        LINK_ID          NUMBER
        START_STOP       NUMBER
        END_STOP         NUMBER
    ARRIVAL_TIMES (actual arrival times of the bus, updated by TRACK)
        BUS_ID           NUMBER   [PRIMARY KEY]
        BUS_ROUTE        VARCHAR2(20)
        ARRIVAL          TIMESTAMP
        STOP_ID          NUMBER
    ETA (expected time of arrival)
        BUS_ID           NUMBER
        BUS_ROUTE        VARCHAR2(20)
        BUS_STOP_ID      NUMBER
        ARR_TIME         VARRAY(5) OF TIMESTAMP
    Triggers and procedures:
    1) TRACK_TRIGGER
    On insert/update of TRACK, determine which link the bus is currently on. Invoke a procedure that calculates the perpendicular distance from the current location to every link en route (cursor on LINKS). Results are stored in a temporary table; select the link_id of the tuple whose perpendicular distance is MINIMUM - this is the link the bus is currently on. Place link_id, start_stop_id and end_stop_id in the corresponding row of TRACK.
    2) ARRIVAL_TRIGGER
    a) Update ARRIVAL_TIMES: store the start-stop id with the timestamp of the current TRACK record.
    b) Update ETA: find the bus stops that come before the START_STOP of the current TRACK record. All these rows are deleted from the ETA table, as the bus has already passed these stops.
    3) Prediction algorithm procedure
    Determine the distance until destination for each stop, up to 20 stops down from the current location.
    Determine the current average speed of the bus over a 2-hour window, by dividing the total distance travelled by the time taken.
    Calculate time until arrival T1 = distance until destination / current avg. speed.
    From the records of the previous 'n' days (say n = 3), find those buses on the same route that were near the link the bus is currently on. Again determine the average speed over a 2-hour window.
    Calculate travel time T(i) = distance until destination / speed, for i = 2, 3, 4...
    The final predicted arrival time is a weighted sum of all the T(i).
    I hope I'm not asking for too much, but the help would be greatly appreciated.
    Thank you,
    Amina

    hello,
    actually I can manage ETA without a varray, since there will be a maximum of 3-4 values of expected arrival times at each stop; this can be done with separate columns.
    though I don't quite understand how LAG() will help me... From what I understand, LAG() is for accessing values of previous rows. But in the ETA table, each element in the varray (if there is one) is going to be the expected arrival time of buses on a particular route at that particular stop, which is different from the arrival time at a previous stop (i.e. row).
    but for my other table, BUS_ROUTE, I have 2 varrays describing the links and stops en route. In quite a few procedures I have to loop through these arrays and perform some calculations in every iteration. Is a varray the best way to go, or nested tables?
    Thank you
    Amina
    As an aside, external procedures tend by their very nature to be slow - there's an overhead incurred each time we step outside the database. Therefore you really ought to avoid using a C extproc unless your calculations really cannot be done in PL/SQL or a Java stored procedure.
    Also, before you go down the VARRAY route you should consider the virtues of analytic functions, notably LAG() (http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions56a.htm#83619). I think you really ought to do some benchmarking of performance before you start adding denormalised columns like ETA. You may find the overhead of maintaining those columns exceeds their perceived benefit.
    Cheers, APC
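    (For what it's worth, a minimal sketch of LAG() against the ARRIVAL_TIMES table described above, computing the gap between consecutive recorded arrivals for each bus:)

    SELECT stop_id,
           arrival,
           arrival - LAG(arrival) OVER (PARTITION BY bus_id ORDER BY arrival)
               AS time_since_previous_stop
    FROM   arrival_times;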

  • Suggestion:  Create a Database Design Forum

    I recommend the creation of a new forum dealing exclusively with database design questions, such as setting Primary Keys, Unique constraints, Check constraints, Indexes, schema-creation scripts, etc. There is no forum devoted exclusively to this topic now and I feel it would be very helpful to the user community. It would certainly make searching for answers to design questions much easier.

    Billy Verreynne wrote:
    Prohan wrote: "I don't agree there. 1. How to create a relational model certainly IS relevant to Oracle, which is a RELATIONAL DBMS."
    Oracle also supports data warehousing (star schema designs), network/hierarchical designs, object-relational designs - or pretty much any data model that you may come up with. Calling it just a relational DBMS is incorrect.
    Prohan wrote: "2. Your point that logical models are independent of specific technology is correct. What you're missing is that if a specific technology makes use of a certain foundational body of knowledge, that knowledge is a legitimate topic for a forum whose users use that specific technology."
    That is putting the cart in front of the horse, IMO. I would rather see data modeling and logical database design being done in a way that is untainted by specific vendor implementations and the technology used. There needs to be a clear line dividing the design from the implementation. If not, then design decisions can (and will) be made based not on correct logical data-modeling principles, but on whether the design can be "handled" by the technology. A design that is tainted like that will always be less than optimal (especially as technology is continually evolving and changing).
    An OTN forum for database design will invariably be tainted with Oracle technology - and instead of learning sound data-modeling fundamentals, a warped view of data modeling will be conveyed, where doing abc will be acceptable (when it is not) because Oracle has feature xyz that can make the flawed design work (in a fashion).
    Excellent points. I think (or at least hope) such a forum would attract some number of pure theorists to straighten out the view. This might make for a lively forum, might actually influence the real products, and might even get the cart on the right side of the horse.
    Hmmm, I guess I do sound hopelessly optimistic.

  • Logical Database design and physical database implementation

    Hi
    I am an Oracle DBA, basically, and we have started a proactive server dashboard portal which reports on all aspects of our infrastructure (Dev, QA and Prod: performance, capacity, number of servers, number of CPUs, decommission dates, OS level, database patch level, etc.).
    This has to be done entirely by our DBA team, as this is not an externally funded project. Now I have been asked to do "logical database design and physical database implementation".
    Even though I know roughly what that means (like designing the whole set of tables in star-schema format), I have never done this before.
    In my mind I have a rough set of tables that could be used, but I think there is a lot of engineering involved in this area to make sure we do it properly.
    I am wondering whether you might have some recommendations on where to start. Are there any documents online, or any books on this topic? Are there any documents which explain this process with examples?
    Also, what exactly is the difference between logical database design and physical database implementation?
    Thanks and Regards

    Logical database design is the process of taking a business or conceptual data model (often described in the form of an Entity-Relationship Diagram) and transforming that into a logical representation of that model using the specific semantics of the database management system. In the case of an RDBMS such as Oracle, this representation would be in the form of definitions of relational tables, primary, unique and foreign key constraints and the appropriate column data types supported by the RDBMS.
    Physical database implementation is the process of taking the logical database design and translating that into the actual DDL statements supported by the target RDBMS that will create the database objects in a target RDBMS database. This will generally include specific physical implementation details such as the specification of tablespaces, use of specialised indexing (bitmap, clustered etc), partitioning, compression and anything else that relates to how data will actually be physically stored inside the database.
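    (A hedged illustration of the difference, with invented names - the logical design decides what the table is; the physical implementation decides how it is stored:)

    CREATE TABLE server_inventory (
        -- logical design: attributes, data types, keys and constraints
        server_id    NUMBER        PRIMARY KEY,
        hostname     VARCHAR2(64)  NOT NULL UNIQUE,
        environment  VARCHAR2(10)  NOT NULL CHECK (environment IN ('DEV','QA','PROD')),
        cpu_count    NUMBER        NOT NULL
    )
    -- physical implementation: storage decisions added to the same design
    TABLESPACE dashboard_data
    PCTFREE 10
    COMPRESS;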
    It sounds like you already have a physical implementation? If so, you can reverse engineer this implementation into a design tool such as SQL Developer Data Modeller. This will create a logical design by examining the contents of the Oracle data dictionary. Even if you don't have an existing database, Data Modeller is a good tool to use as a starting point for logical and even conceptual/business models.
    If you want to read anything about logical design, "An Introduction to Database Systems" by Date is always a good starting point. "Database Systems - A Practical Approach to Design, Implementation and Management" by Connolly & Begg is also an excellent reference.

  • Convert the Database records to a standard XML file format?

    Hi,
    I want to convert database records to a standard XML file format which includes the schema name, table name, field names and field values. I am using Oracle 8.1.7. Is there any option? Please help me.
    Thanks in advance.
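    (One candidate on 8.1.7 is the XML SQL Utility, if it is installed - a minimal, unverified sketch; XSU wraps each row and column in generated tags, and the query itself would have to supply the schema/table names:)

    DECLARE
        ctx    DBMS_XMLQUERY.ctxType;
        result CLOB;
    BEGIN
        -- XSU renders any query result as <ROWSET><ROW>...</ROW></ROWSET>
        ctx := DBMS_XMLQUERY.newContext('SELECT * FROM scott.emp');
        result := DBMS_XMLQUERY.getXml(ctx);
        DBMS_XMLQUERY.closeContext(ctx);
        -- write "result" out with UTL_FILE, or return it to the client
    END;
    /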


  • Re: (forte-users) Round-trip database design

    We have used Erwin quite successfully, but it's not cheap.
    "Rottier, Pascal" <Rottier.Pascalpmintl.ch> on 02/15/2001 04:51:01 AM
    To: 'Forte Users' <forte-userslists.xpedior.com>
    cc:
    Subject: (forte-users) Round-trip database design
    Hi,
    Maybe this is not 100% the right mailing list, but it's worth a try.
    Does anyone use tools to automatically update the structure of an existing database?
    For example: you have a full database model (Power Designer) and you've created a script to create all these tables in a new and empty database. You've been using this database and filling its tables with data for a while. Now you want to make some marginal modifications to these tables: add a column, remove a column, rename a column, etc.
    Is there a way to automatically change the database, without losing data and without having to do it manually (except for the manual changes in the (Power Designer) model)?
    Thanks
    Pascal Rottier
    Atos Origin Nederland (BAS/West End User Computing)
    Tel. +31 (0)10-2661223
    Fax. +31 (0)10-2661199
    E-mail: Pascal.Rottier@nl.origin-it.com
    ++++++++++++++++++++++++++++
    Philip Morris (Afd. MIS)
    Tel. +31 (0)164-295149
    Fax. +31 (0)164-294444
    E-mail: Rottier.Pascal@pmintl.ch

    Hello Pascal,
    Forte has classes which can scan the database structure (DBColumnDesc, etc.); Express uses these classes to determine what the BusinessClass should look like. We use Forte to create the tables, indexes and constraints. The problem we have is that the above-described classes are readable but not writable. The solution for us will be to create our own classes in the same manner as the existing ones, so that we are able to make updates to the database structure, and maybe to change the database tables with tool code. Another reason for us to hold the database structure in the application is the ability to see the table structure on which the Forte code works, always up to date with the code. You are always able to compare the structure of the database with your business classes, and to convert a wrong structure into the correct structure with maybe just a little piece of code.
    Hope this helps
    Joseph Mirwald
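    (On the schema-migration part of Pascal's question: whatever tool generates it, the result boils down to plain DDL like the following sketch - the names are invented and the rename syntax varies by RDBMS; model-comparison tools such as Erwin or Power Designer essentially diff the model against the database and emit such a script:)

    ALTER TABLE customer ADD middle_name VARCHAR(40);          -- new column; existing rows get NULL
    ALTER TABLE customer DROP COLUMN fax_number;               -- removes the column and its data
    ALTER TABLE customer RENAME COLUMN phone TO phone_number;  -- rename; vendor-specific syntax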
