Resources: database design

I'm looking for some nice resources for database design. The people I work with don't understand this stuff, and won't take my word for it ("No way, it's a GOOD idea to have Value_1, Value_2, Value_3, ..., Value_9 columns!" *smack*)... does anyone know any good online resources or books for proper data modeling?

phrakture wrote: I'm looking for some nice resources for database design. The people I work with don't understand this stuff, and won't take my word for it ("No way, it's a GOOD idea to have Value_1, Value_2, Value_3, ..., Value_9 columns!" *smack*)... does anyone know any good online resources or books for proper data modeling?
Too bad - you should have told me that at LinuxTag; I gave two courses on this topic this year. But my material is all in German ...
-neri

Similar Messages

  • Resources on database design !!

    We want to set up standards for database design, so any help on the questions below will be appreciated.
    1. We need some theoretical information, algorithms and so on. Where can we find this?
    2. Are there any named standard(s) for database design?

    I would recommend the book "Effective Oracle by Design".
    hare krishna
    Alok

  • Urgent help database design

    Hi,
    I need urgent help with the design of a database. We already have a database in production, but we are facing a problem with extensibility.
    The client information is variable, that is:
    1) the number of fields is different for each client
    2) a client may ask at any time in the future to add another field to the table
    Please provide your views with practical implications (advantages and disadvantages), or any resource where I can find information.
    Help appreciated!

    Hi,
    Database design is an art & science by itself - as far as I know, there aren't any rigid rules.
    I would suggest that you have a look at the discussions in these two threads for a few general ideas:
    Database Design
    conversion from number to character
    If your client requirements keep changing, I would suggest that you keep 8-10 "spare" columns in your tables - just call them spare1, spare2, etc. The only purpose of these columns is to allow flexibility in the design - i.e., in future you can always extend the table to accommodate more fields.
    I have used it a couple of times & found it useful - again, this is only a suggestion.
    Regards,
    Sandeep
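    A minimal sketch of the "spare" column approach Sandeep describes (Oracle SQL; table and column names are hypothetical). Note that this is essentially the pattern the original poster objects to, so weigh it against a proper attribute table:

        -- Hypothetical client table with generic spare columns for future use
        CREATE TABLE client (
            client_id   NUMBER PRIMARY KEY,
            client_name VARCHAR2(100) NOT NULL,
            spare1      VARCHAR2(4000),  -- unused until a new requirement arrives
            spare2      VARCHAR2(4000),
            spare3      VARCHAR2(4000)
        );
        -- When a new field is requested, repurpose a spare column and document it:
        COMMENT ON COLUMN client.spare1 IS 'Since 2024-01: client fax number';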

  • Resource Adapter Design

    I am fairly new to IdM and have had little training on it. I need assistance with a Resource Adapter design question.
    I need to create an adapter that will read a database table of application accounts and associated attributes, and then link those accounts to existing IdM users. I do not want IdM to provision to the resource, and I do not want the adapter to create IdM users. I just want the accounts linked to the IdM users, and then to have access in a workflow to the associated account attributes. Does this accurately describe what I should be able to do with an ActiveSync DatabaseTable adapter, or will I need to create a custom scripted adapter?
    Any suggestions or steps would be greatly appreciated.

    You can do this with an ActiveSync DatabaseTable adapter.

  • Multi-lingual Software & Database Design

    I am interested in how the industry is approaching multi-lingual software and database design. Specifically, I would like to know if there are any good resources (whitepapers, web sites, books) that get into the details of what the object model and data model of a multi-lingual design would look like and how the two fit together. I am working on a project that requires Product information to be stored in both English and French. Thank you.

    http://java.sun.com/j2se/1.3/docs/guide/intl/
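    The link above covers Java internationalization. For the data-model side, a common pattern (my sketch, not from the thread; names are hypothetical) is a language-neutral base table plus a translation table keyed by language code:

        -- Language-neutral product data
        CREATE TABLE product (
            product_id INT PRIMARY KEY,
            sku        VARCHAR(20) NOT NULL
        );
        -- One row per product per language for translatable text
        CREATE TABLE product_translation (
            product_id  INT NOT NULL REFERENCES product (product_id),
            lang_code   CHAR(2) NOT NULL,   -- e.g. 'en', 'fr'
            name        VARCHAR(200) NOT NULL,
            description VARCHAR(2000),
            PRIMARY KEY (product_id, lang_code)
        );

    The object model then asks the data layer for a (product, language) pair, falling back to a default language when a translation row is missing.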

  • SQL Server 2014 New Database Design Features

    SQL Server 2014 has three major database design related features:
    1. In-memory OLTP tables (Hekaton)
    2. Inline INDEX declaration in CREATE TABLE
    3. Updateable clustered columnstore index
    Any other new feature? Thanks.
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

    http://windowsitpro.com/sql-server-2014/top-ten-new-features-sql-server-2014
    http://www.sqlpass.org/sqlserver2014/Webinars.aspx
    Enhanced query processing for better performance without app changes.
    Buffer Pool extension to SSDs for faster paging.
    Resource Governor controls IO along with CPU and memory.
    Enhanced AlwaysOn now supports 8 secondaries for better HA (High Availability).
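    Of the three design features listed in the question, the inline INDEX declaration is the easiest to show in a few lines. A minimal sketch (T-SQL; the table is hypothetical):

        -- SQL Server 2014: indexes can be declared inside CREATE TABLE
        CREATE TABLE dbo.OrderLine (
            OrderLineID INT IDENTITY PRIMARY KEY,
            OrderID     INT NOT NULL INDEX IX_OrderLine_OrderID,       -- column-level inline index
            ProductID   INT NOT NULL,
            Qty         INT NOT NULL,
            INDEX IX_OrderLine_Product NONCLUSTERED (ProductID, Qty)   -- table-level inline index
        );

    Before 2014 these indexes required separate CREATE INDEX statements after the table was created.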

  • Requirements for database design and installation

    Hi,
    As a database administrator, how do I find out whether a database is designed and installed properly?
    Can you please tell me what requirements should be considered in database design for an application developer?
    thanks heaps !!!!

    Mohamed ELAzab wrote: regarding that the number of executions is the main thing that affects the performance, i said that already above ("the application executed it 30 000") but you didn't read my answer correctly.
    I did not respond to that "answer" of yours, as it was not part of your posting that I responded to. The response of yours that I quoted talked about non-sharable SQL retrieving 20 rows, and after 3 years retrieving a million rows. That has no bearing on whether the SQL is sharable or not.
    Mohamed ELAzab wrote: I don't agree with you that the design is not being done considering the performance bottlenecks.
    So you decide on what the bottlenecks are up front, and then use these as database design considerations? I fail to see any logic or merit in such an approach.
    Mohamed ELAzab wrote: i want to let you know that we in the telecoms environment have many problems in our databases because the people who designed those applications didn't take performance into consideration.
    I understand too well - and it is not that they did not take performance into consideration when designing the database, it is that the design is just plain wrong from the start.
    You do not need to consider the amount of memory available, the number and speed of CPUs, or the bandwidth and speed of the I/O system in order to design a database. These have no relevance at all during the design phase, especially as the h/w that will run the design in production in a year's time can be drastically different from the h/w that is used today.
    No, instead you use a proper and correct design methodology and data modeling approach. Why? Because such a design, by its very nature, will make optimal use of h/w resources and will provide data integrity, scalability and performance.
    Mohamed ELAzab wrote: Again i think design of the database application must take database performance bottlenecks into consideration, like an application which doesn't use bind variables - if they took that into consideration it would help the DBA in the future, but unfortunately most people don't do that.
    And as I said - using bind variables or not has absolutely nothing to do with the basic question asked in this thread: "what are the requirements of database design".
    How does using/not using bind variables influence the design of a table? Or determine whether an entity is in 3NF? Or what the unique identifiers are for an entity? These are the design considerations for a database... not bind variables.
    Yes, SQL not using bind variables can cause performance problems. Not paying the electricity bill can cause a power outage for the database server. So what? These issues have no relevance to database design.
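    For readers unfamiliar with the bind-variable point being argued, a minimal sketch (Oracle SQL*Plus; the table is hypothetical) of non-sharable versus sharable statements:

        -- Literal values: each statement text is distinct, so the shared pool
        -- hard-parses a new cursor for every customer ID.
        SELECT name FROM customer WHERE customer_id = 101;
        SELECT name FROM customer WHERE customer_id = 102;

        -- Bind variable: one sharable statement, parsed once and reused.
        VARIABLE cust_id NUMBER
        EXECUTE :cust_id := 101;
        SELECT name FROM customer WHERE customer_id = :cust_id;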

  • Database design to support parameterised interface with MS Excel

    Hi, I am a novice user of SQL Server and would like some advice on how to solve a problem I have. (I hope I have chosen the correct forum to post this question.)
    I have created a SQL Server 2012 database that comprises approx 10 base tables, with a further 40+ views that either summarise the base table data in various ways, or build upon other views to create more complex data sets (up to 4 levels of views).
    I then use Excel to create a dashboard that has multiple pivot table data connections to the various views.
    The users can then use standard Excel features - slicers etc. - to interrogate the various metrics.
    The underlying database holds a single day's worth of information, but I would like to extend this to cover multiple days' worth of data, with the Excel spreadsheet having a cell that defines the date for which information is to be retrieved. (The underlying data tables would need to be extended to have a date field.)
    I can see how the Excel connection string can be modified to filter the results such that a column value matches the date field, but how can this date value be passed down through all the views, to ensure that information from the base tables is restricted to the specified date rather than the final result set being filtered as it is passed back to Excel? I would rather not have the server resolve the views for the complete data set.
    I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe that stored procedures allow result sets to be used as pseudo tables.
    What other options do I have? Or have I failed to grasp the way SQL Server creates its execution plans, and simply having the filter at the top level will ensure the result set is minimised at the lower level? (I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations.)
    As an example of 3 of the views:
    Table A has a row per system event (30,000+ per day), each event having an identity, a TYPE (e.g. Arrival or Departure), a time of event, and a planned time for the event (a specified identity will have a sequence of Arrival and Departure events).
    View A compares separate rows to determine how long between the Arrival and Departure events for an identity.
    View B compares separate rows to determine how long between the planned Arrival and Departure events for an identity.
    View C uses View A and View B to provide the variance between actual and planned.
    The Excel dashboard has graphs showing information retrieved from Views A, B and C. The dashboard is only likely to need to query a single day's worth of information.
    Thanks for your time.

    You are posting in the database design forum, but it seems to me that you have 2 separate but highly dependent issues - neither of which is really database-design related at this point. Rather, you have a user interface issue and a database programmability issue. Those I cannot really address, since much of that discussion requires knowledge of your users, how they interface with the database, what they use the data for, etc. In addition, it seems that Excel is the primary interface for your users - so it may be that you should post your question to an Excel forum.
    However, I do have some comments. First, views based on views is generally a bad approach. Absent the intention of indexing (i.e., materializing) the views, the db engine does nothing different for a view than it does for any ad-hoc query. Unfortunately, the additional layering of logic can impede the effectiveness of the optimizer. The more complex your views become and the deeper the layering, the greater the chance that you befuddle the optimizer.
    "I would rather not have the server resolve the views for the complete data set"
    I don't understand the above statement, but it scares me. IMO, you DO want the server to do as much work as possible, since it is closest to the data and has (or should have) the resources to access and manipulate the data and generate the desired results. You DON'T want to move all the raw data involved in a query over the network and into the client machine's storage (memory or disk) and then attempt to compute the desired values.
    "I considered parameterisation of views, but I don't believe views support parameters. I also considered stored procedures, but I don't believe that stored procedures allow result sets to be used as pseudo tables."
    Correct on the first point, though there is such a thing as a TVF (table-valued function), which is similar in effect. Before you go down that path, let's address the second statement. I don't understand that last bit about "used as pseudo tables", but that sounds more like an Excel issue (or maybe an assumption). You can execute a stored procedure and use/access the resultset of this procedure in Excel, so I'm not certain what your concern is. User simplicity perhaps? Maybe just a terminology issue? Stored procedures are something I would highly encourage for a number of reasons. Since you refer to pivoting specifically, I'll point out that SQL Server natively supports that function (though perhaps not in the same way/degree Excel does). It is rather complex T-SQL - and this is one reason to advocate for stored procedures. Separate the structure of the raw data from the user.
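    As a sketch of the TVF idea mentioned above (my example, not the reply author's code; object names are hypothetical), an inline table-valued function behaves like a parameterised view that Excel can query with the report date:

        -- Inline TVF: a "parameterised view" over the events table
        CREATE FUNCTION dbo.fn_EventsForDay (@ReportDate DATE)
        RETURNS TABLE
        AS
        RETURN
            SELECT EventID, EventType, EventTime, PlannedTime
            FROM dbo.TableA
            WHERE EventDate = @ReportDate;   -- filter applied at the lowest level
        GO
        -- Usage: SELECT * FROM dbo.fn_EventsForDay('2014-06-01');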
    "(I don't really want the time taken for the dashboard refresh to increase - it currently takes approx 45 seconds following SQL Server Engine Tuning Advisor recommendations)"
    DTA has its limitations. What it doesn't do is evaluate the "model" - which is where you might have more significant issues. Tuning your queries and indexing your tables will only go so far to compensate for a poorly designed schema (not that yours is - just a generalization). I did want to point out that your refresh process involves many factors - the time to generate a resultset in the server (including plan compilation, loading the data from disk, etc.), transmitting that data over the network, receiving and storing the resultset in the client application, manipulating the resultset into the desired form/format, and then updating the display. Given that, you need to know how much time is spent in each part of that process - no sense wasting time optimizing the smallest time consumer.
    So now to your sample table - Table A. First, I'll give you my opinion of a flawed approach. Your table records separate facts about an entity as multiple rows. Such an approach is generally a schema issue for a number of reasons. It requires that you outer join in some fashion to get all the information about one thing into a single row - that is why you have a view to compare rows and generate a time interval between arrival and departure. I'll take this a step further and assume that your schema/code likely has an assumption built into it - specifically, that a "thing" will have no more than 2 rows, and that there will be only one row with type "arrival" and one row with type "departure". Violate that assumption and things begin to fall apart. If you have control over this schema, then I suggest you consider changing it: store all the facts about a single entity in a single row (see the sketch below). Given the frequency with which I see this pattern, I'll guess that you cannot. So let's move on.
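    A sketch of the single-row alternative described above (my example; names are hypothetical): each trip becomes one row, so the arrival/departure interval is a simple column expression instead of a self-joining view:

        CREATE TABLE dbo.TripEvent (
            TripID           INT NOT NULL PRIMARY KEY,
            ArrivalTime      DATETIME2 NOT NULL,
            PlannedArrival   DATETIME2 NOT NULL,
            DepartureTime    DATETIME2 NULL,    -- NULL until the departure occurs
            PlannedDeparture DATETIME2 NULL
        );
        -- The former View A reduces to a single expression:
        -- SELECT TripID, DATEDIFF(minute, ArrivalTime, DepartureTime) AS TurnaroundMins
        -- FROM dbo.TripEvent;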
    30 thousand rows is tiny, so your current volume is negligible. You still need to optimize your tables based on usage, so you need to address that first. How is the data populated currently? Is it done once as a batch? Is it done throughout the day - and in what fashion (inserts vs. updates vs. deletes)? You only store one day of data - so how do you accomplish that, specifically? Do you purge all data overnight and re-populate? What indexes have you defined? Do all tables have a clustered index, or are some (most?) of them heaps? OTOH, I'm going to guess that the database is at most a minimal issue now, and that most of your concerns are better addressed at the user interface and how it accesses your database. Perhaps now is a good time to step back and reconsider your approach to providing information to the users. Perhaps there is a better solution - but that requires an understanding of your users, the skillset of everyone involved, what you have to work with, etc. Maybe just some advanced Excel training? I can't really say, and it might be a better question for a different forum.
    One last comment - "identity" has a special meaning in SQL Server (and most database engines, I'm guessing). So when you refer to identity, do you refer to an identity column, or the logical identity (i.e., natural key) of the "thing" that Table A is attempting to model?

  • Database design and pl/sql vs external procedures

    hi,
    My project involves predicting the arrival time of a bus at a bus-stop, given statistical data of traffic patterns on the previous 'n' (say 3) days, as well as the current location of the bus (latitude-longitude).
    Given the current bus location, I derive the distance until the destination bus-stop, which must be translated into time until arrival.
    I've listed below the triggers and procedures involved in making the prediction. These procedures - especially the determination of perpendicular distances - involve some complex trigonometric operations. I would like to know if my approach is correct and my database design is suited to the operations to be performed.
    Will it be more efficient to implement the procedures as external procedures or as PL/SQL blocks?
    This is my database design:
    LINKS (a link is the road segment between adjacent bus-stops)
        LINK_ID          NUMBER   [PRIMARY KEY]
        START_LATITUDE   NUMBER
        START_LONGITUDE  NUMBER
        START_STOP_ID    NUMBER
        END_LATITUDE     NUMBER
        END_LONGITUDE    NUMBER
        END_STOP_ID      NUMBER
        LINK_LENGTH      NUMBER
    BUS_ROUTE
        ROUTE_ID         NUMBER
        LINKS_ENROUTE    VARRAY(30) OF NUMBER
        STOPS_ENROUTE    VARRAY(30) OF NUMBER
    TRACK (keeps track of the current location of each bus)
        BUS_ID           NUMBER   [PRIMARY KEY]
        ROUTE            VARCHAR2(20)
        LATITUDE         NUMBER
        LONGITUDE        NUMBER
        TS               TIMESTAMP
        LINK_ID          NUMBER
        START_STOP       NUMBER
        END_STOP         NUMBER
    ARRIVAL_TIMES (actual arrival times of the bus, updated by TRACK)
        BUS_ID           NUMBER   [PRIMARY KEY]
        BUS_ROUTE        VARCHAR2(20)
        ARRIVAL          TIMESTAMP
        STOP_ID          NUMBER
    ETA (expected time of arrival)
        BUS_ID           NUMBER
        BUS_ROUTE        VARCHAR2(20)
        BUS_STOP_ID      NUMBER
        ARR_TIME         VARRAY(5) OF TIMESTAMP
    Triggers and procedures:
    1) TRACK_TRIGGER
    On insert/update of TRACK, determine which link the bus is currently on.
    Invoke a procedure that calculates the perpendicular distance from the current location to all links en route (cursor on LINKS).
    Results are stored in a temporary table. Select the LINK_ID of the tuple whose perpendicular distance is MINIMUM: this is the link the bus is currently on. Place LINK_ID, START_STOP_ID and END_STOP_ID in the corresponding row of TRACK.
    2) ARRIVAL_TRIGGER
    a) Update ARRIVAL_TIMES: store the start-stop id with the timestamp of the current TRACK record.
    b) Update ETA: find the bus-stops that come before the START_STOP of the current TRACK record. All these rows are deleted from the ETA table, as the bus has already crossed these stops.
    3) Prediction Algorithm Procedure
    Determine the distance until destination for each STOP, 20 stops down from the current location.
    Determine the current avg. speed of the bus over a 2-hour window, by dividing the total distance travelled by the time taken.
    Calculate time until arrival T1 = distance until destination / current avg. speed.
    From the records of the previous 'n' days (say n=3), find those buses on the same route that were near the link the bus is currently on. Again determine the avg. speed over a 2-hour window.
    Calculate travel time T(i) = distance until destination / speed, for i = 2, 3, 4...
    The final predicted arrival time is a weighted sum of all the T(i).
    I hope I'm not asking for too much, but the help would be greatly appreciated.
    Thankyou,
    Amina

    hello,
    actually I can manage ETA without a VARRAY, since there will be a maximum of 3-4 values of expected arrival times at each stop. This can be done with separate columns.
    though I don't quite understand how LAG() will help me... from what I understand, LAG() is used to access values of previous rows. But in the ETA table, each element in the VARRAY (if there is one) is going to be the expected arrival time of buses on a particular route at that particular stop, and is different from the arrival time at a previous stop (i.e. row).
    but for my other table BUS_ROUTE, I have 2 VARRAYs describing the links and stops en route. In quite a few procedures I have to loop through these arrays and perform some calculations in every iteration. Is a VARRAY the best way to go, or nested tables?
    Thank you
    Amina
    As an aside, external procedures tend by their very nature to be slow - there's an overhead incurred each time we step outside the database. Therefore you really ought to avoid using a C extproc unless your calculations really cannot be done in PL/SQL or a Java stored procedure.
    Also, before you go down the VARRAY route, you should consider the virtues of analytic functions, notably LAG() (http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions56a.htm#83619). I think you really ought to do some benchmarking of performance before you start adding denormalised columns like ETA. You may find the overhead in maintaining those columns exceeds their perceived benefits.
    Cheers, APC
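    A minimal sketch of the LAG() idea APC suggests (my example against the ARRIVAL_TIMES table above; it assumes one row per stop event, rather than the single-row-per-bus key as posted):

        -- Time elapsed since the previous recorded arrival on the same route,
        -- computed without self-joins or denormalised columns.
        SELECT bus_id,
               bus_route,
               stop_id,
               arrival,
               arrival - LAG(arrival) OVER (PARTITION BY bus_id, bus_route
                                            ORDER BY arrival) AS time_since_prev_stop
        FROM   arrival_times;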

  • Suggestion:  Create a Database Design Forum

    I recommend the creation of a new forum dealing exclusively with database design questions, such as setting Primary Keys, Unique constraints, Check constraints, Indexes, schema-creation scripts, etc. There is no forum devoted exclusively to this topic now and I feel it would be very helpful to the user community. It would certainly make searching for answers to design questions much easier.

    Billy Verreynne wrote:
    Prohan wrote: I don't agree there.
    Prohan wrote: 1. How to create a relational model certainly IS relevant to Oracle, which is a RELATIONAL DBMS.
    Oracle also supports data warehousing (star schema designs), network/hierarchical designs, object-relational designs - or pretty much any data model that you may come up with. Calling it just a relational DBMS is incorrect.
    Prohan wrote: 2. Your point that logical models are independent of specific technology is correct. What you're missing is that if a specific technology makes use of a certain foundational body of knowledge, that knowledge is a legitimate topic for a forum whose users use that specific technology.
    That is putting the cart in front of the horse, IMO.
    I would rather see data modeling and logical database design being done in a way that is untainted by specific vendor implementations and the technology used. There needs to be a clear line dividing the design from the implementation. If not, then design decisions can (and will) be made based not on correct logical data modeling principles, but on whether the design can be "handled" by the technology. A design that is tainted like that will always be less than optimal (especially as technology is continually evolving and changing).
    An OTN forum for database design will invariably be tainted with Oracle technology - and instead of learning sound data modeling fundamentals, a warped view of data modeling will be conveyed, where doing abc will be acceptable (when it is not) because Oracle has feature xyz that can make the flawed design work (in a fashion).
    Excellent points. I think (or at least hope) such a forum would attract some number of pure theorists to straighten out the view. This might make for a lively forum, might actually influence the real products, and might even get the cart on the right side of the horse.
    Hmmm, I guess I do sound hopelessly optimistic.

  • IF Auto Update Statistics ENABLED in Database Design, Why we need to Update Statistics as a maintenance plan

    Hi Experts,
    If Auto Update Statistics is enabled in the database design, why do we need to update statistics as a daily/weekly maintenance plan?
    Vinai Kumar Gandla

    Hi Vikki,
    Many systems rely solely on SQL Server to update statistics automatically (AUTO UPDATE STATISTICS enabled). However, based on my research, large tables, tables with uneven data distributions, tables with ever-increasing keys, and tables that have significant changes in distribution often require manual statistics updates, for the following reasons.
    1. If a table is very big, then waiting for 20% of rows to change before SQL Server automatically updates the statistics could mean that millions of rows are modified, added or removed before it happens. Depending on the workload patterns and the data, this could mean the optimizer is choosing substandard execution plans long before SQL Server reaches the threshold where it invalidates statistics for a table and starts to update them automatically. In such cases, you might consider updating statistics manually for those tables on a defined schedule (while leaving AUTO UPDATE STATISTICS enabled so that SQL Server continues to maintain statistics for other tables).
    2. In cases where you know the data distribution in a column is "skewed", it may be necessary to update statistics manually with a full sample, or create a set of filtered statistics, in order to generate query plans of good quality. Remember, however, that sampling with FULLSCAN can be costly for larger tables, and must be done so as not to affect production performance.
    3. It is quite common to see an ascending key, such as an IDENTITY or date/time data type, used as the leading column in an index. In such cases, the statistic for the key rarely matches the actual data unless we update the statistic manually after every insert.
    So in the cases above, we could perform manual statistics updates by creating a maintenance plan that runs the UPDATE STATISTICS command on a regular schedule. For more information about the process, please refer to this article:
    https://www.simple-talk.com/sql/performance/managing-sql-server-statistics/
    Regards,
    Michelle Li
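    A minimal sketch of the manual update Michelle describes (T-SQL; the table name is hypothetical):

        -- Scheduled maintenance step: full-sample statistics refresh on a large,
        -- skewed table, while AUTO UPDATE STATISTICS stays enabled for the rest.
        UPDATE STATISTICS dbo.BigSkewedTable WITH FULLSCAN;
        -- Or refresh every statistics object in the database at a sampled rate:
        EXEC sp_updatestats;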

  • Logical Database design and physical database implementation

    Hi
    I am an Oracle DBA, and we have started a proactive server dashboard portal which reports on all aspects of our infrastructure (Dev, QA and Prod: performance, capacity, number of servers, number of CPUs, decommissioned date, OS level, database patch level, etc.).
    This has to be done entirely by our DBA team, as this is not an externally funded project. Now I have been asked to do "logical database design and physical database implementation".
    Even though I know roughly what that means (like designing the whole set of tables in star schema format), I have never done this before.
    In my mind I have a rough set of tables that could be used, but I think there is a lot of engineering involved in this area to make sure that we do it properly.
    I am wondering whether you might have some recommendations on where to start. Are there any documents online? Are there any books on this topic? Are there any documents which explain this process with examples?
    Also, what exactly is the difference between logical database design and physical database implementation?
    Thanks and Regards

    Logical database design is the process of taking a business or conceptual data model (often described in the form of an Entity-Relationship Diagram) and transforming that into a logical representation of that model using the specific semantics of the database management system. In the case of an RDBMS such as Oracle, this representation would be in the form of definitions of relational tables, primary, unique and foreign key constraints and the appropriate column data types supported by the RDBMS.
    Physical database implementation is the process of taking the logical database design and translating that into the actual DDL statements supported by the target RDBMS that will create the database objects in a target RDBMS database. This will generally include specific physical implementation details such as the specification of tablespaces, use of specialised indexing (bitmap, clustered etc), partitioning, compression and anything else that relates to how data will actually be physically stored inside the database.
    It sounds like you already have a physical implementation? If so, you can reverse engineer this implementation into a design tool such as SQL Developer Data Modeller. This will create a logical design by examining the contents of the Oracle data dictionary. Even if you don't have an existing database, Data Modeller is a good tool to use as a starting point for logical and even conceptual/business models.
    If you want to read anything about logical design, "An Introduction to Database Systems" by Date is always a good starting point. "Database Systems - A Practical Approach to Design, Implementation and Management" by Connolly & Begg is also an excellent reference.
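    To make the distinction concrete, here is a small sketch of my own (Oracle SQL; the entity is hypothetical). The logical design says only that SERVER has a surrogate key and a unique natural key; the physical implementation adds storage decisions such as tablespaces and partitioning:

        -- Logical design, expressed as relational DDL (no storage details):
        --   SERVER (server_id PK, host_name UNIQUE NOT NULL, os_level, decommission_date)
        CREATE TABLE server (
            server_id         NUMBER        PRIMARY KEY,
            host_name         VARCHAR2(64)  NOT NULL UNIQUE,
            os_level          VARCHAR2(32),
            decommission_date DATE
        )
        -- Physical implementation choices begin here:
        TABLESPACE dashboard_data
        PARTITION BY RANGE (decommission_date) (
            PARTITION p_old    VALUES LESS THAN (DATE '2020-01-01'),
            PARTITION p_recent VALUES LESS THAN (MAXVALUE)
        );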

  • Help with a database design for community housing project

    Talking database design
    Hi all, I have been wondering about the design of tables for a big block of residential units. There are 100+ rooms; there are about 25 houses in this complex, but the 100+ rooms are all rented out separately.
    It's just like a college campus really, but it's not a college campus - it's a little unique.
    The rents are applied as a percentage of income, so that's not common. So I included a tblRoomCost where the pre-calculated weekly cost is entered, and it's got a date field in there for when the charged rent changes. I probably need to include an income field in tblCustomer, even just as an admin reference.
    So is this looking pretty OK, and would there be any point in scrapping the database and using text files?
    So what do you think of these tables, please?
    tblCustomer
    pkCustomer, fldFirstName, fldLastname
    tblRoomAllocation
    pkRoomID, fkCustomer
    tblRoomCost
    pkRoomID, fldDate, fldRoomCost
    tblTransactionID
    pkTransactionID, fldDate, fldTransactionType, fkCustomer, fldAmount

    The naming scheme is one I learned and haven't thought past, though I do get into trouble, and your suggestion may prove useful when coding!
    I thought the tblRoomAllocation and tblRoomCost took care of changing, though I see now that tblRoomAllocation really needs a date field. And tblRoomCost has a fldRoomCost, which isn't really that good an implementation, as the rooms themselves are always priced according to the income of the resident and not because of the room. So the real-world object is getting fudgy...
    It is extremely unlikely that Admin would ever allow two rooms to be rented by an individual.
    I will have a look at your suggestion that an Accounts table be used.
    Also, I thought about having a StartDate and EndDate in tblTransaction to represent the period being paid for. It just seems like a lot of dates in a transaction table: one to record the transaction, and the other two to indicate the period actually being paid for. Or perhaps I should work that out in the runtime code? This will be a VB.Net app.
    Do you think there is a need for an Accounts table if only one room is permitted to be rented, though room changing may be common?
    And thanks for your input.
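    A hedged sketch of the tables as the thread leaves them, adding the date field the reply suggests for tblRoomAllocation (generic SQL; names follow the poster's convention, data types are my assumptions):

        CREATE TABLE tblCustomer (
            pkCustomer   INT PRIMARY KEY,
            fldFirstName VARCHAR(50),
            fldLastName  VARCHAR(50),
            fldIncome    DECIMAL(10,2)        -- admin reference; rent is a % of this
        );
        CREATE TABLE tblRoomAllocation (
            pkRoomID   INT,
            fkCustomer INT REFERENCES tblCustomer (pkCustomer),
            fldFrom    DATE,                  -- the date field the reply suggests
            PRIMARY KEY (pkRoomID, fldFrom)   -- allows room changes over time
        );
        CREATE TABLE tblRoomCost (
            pkRoomID    INT,
            fldDate     DATE,                 -- when the charged rent changed
            fldRoomCost DECIMAL(10,2),        -- pre-calculated weekly cost
            PRIMARY KEY (pkRoomID, fldDate)
        );
        CREATE TABLE tblTransaction (
            pkTransactionID    INT PRIMARY KEY,
            fldDate            DATE,
            fldTransactionType VARCHAR(20),
            fkCustomer         INT REFERENCES tblCustomer (pkCustomer),
            fldAmount          DECIMAL(10,2)
        );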

  • New database design product - ModelRight for Oracle

    Whether you are a beginner or an expert data modeler, ModelRight for Oracle is the database design tool of choice for Oracle. Here are some of the things that make ModelRight for Oracle unique:
    • Extensive support for Oracle, including advanced features: OR types, object tables, object views, materialized views, index-organized tables, clusters, partitions, function-based indexes, etc.
    • Full Forward, Reverse and Compare capabilities.
    • Unique user interface and diagrammatic elements: with our mode-less and hyperlinked user interface, navigating and editing your model is intuitive and easy.
    • Extensive use of Domains: you can create Domains for just about every type of object to propagate patterns, reusability & classification.
    • Unprecedented level of programmatic control: you can control the smallest details of the FE and Alter Script generation process.
    Please check out our website at http://www.modelright.com and download the free trial version.
    Please let me know if you have any suggestions or comments.
    Thank you,
    Tim Guinther
    Founder, ModelRight, Inc.
    [email protected]
    (215) 534 5282

    Excellent product. Pretty impressive. Gorgeous diagrams and sophisticated reports. Loved the myriad navigation options and the non-obtrusive modeless dialogs. Very easy to use!
    Keep up the good work.

  • Doubt in Database Design

    Hi,
    I have a doubt about database design. I am designing a database for inventory management, wherein I need to store the order details in a one-to-many relationship. I have designed my tables as follows (just a sample):
    -- Tbl_OrderOne
    OrderCode (PK)
    OrderDate
    CustomerInfo
    -- Tbl_OrderMany
    OrderCode (FK)
    ItemCode
    Qty
    My doubt is this: since Oracle has features like creating object types, using them as data types, and storing the data that way, can I use this feature in this case? Transactions here will be very high - does this affect performance? At present I have designed it using a foreign key reference, as shown above. If using object types to store that huge amount of data is feasible and increases performance, then I can go for the object type feature.
    Thanks in advance
    Regards
    Vinayak

    Vinayak,
    One more thing. You can check out more information about NESTED TABLE at http://otn.oracle.com/docs/products/oracle8i/doc_library/817_doc/appdev.817/a76976/adobjdes.htm#446526
    Regards,
    Geoff
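    Since Geoff's link covers nested tables, here is a minimal sketch of what that object-relational alternative to Tbl_OrderMany could look like (Oracle SQL; names are hypothetical, and for high transaction volumes the plain foreign-key design Vinayak already has is usually the safer choice):

        -- Object type for one order line
        CREATE TYPE order_line_t AS OBJECT (
            item_code NUMBER,
            qty       NUMBER
        );
        /
        CREATE TYPE order_line_tab_t AS TABLE OF order_line_t;
        /
        -- Order header with its lines embedded as a nested table
        CREATE TABLE tbl_order (
            order_code    NUMBER PRIMARY KEY,
            order_date    DATE,
            customer_info VARCHAR2(200),
            lines         order_line_tab_t
        )
        NESTED TABLE lines STORE AS tbl_order_lines;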
