Building tables - Best practices?

Hi all,
I hope this is a good place to ask a general question like this. I'm trying to improve myself as a DB designer/programmer, and I'm wondering what practices are currently used when deploying a database that must keep running at the highest possible performance (both for querying data and for keeping the database clean).
Basically, here are the specific topics of concern for me:
- table sizing
- index sizing
- Oracle parameter tuning
- maintenance work required on tables/indexes
The material I've studied was all based on Oracle 8i, and I'm wondering if much has changed for 9i and/or 10g.
Thanks.
Peter

Actually, I'm not very new to database work, but I still consider myself not quite up to speed on certain aspects of a typical DBA's job. For that reason, I'm trying to keep my questions very general, as though I'm learning these topics afresh.
It does seem that I'm asking something too broad for a forum discussion... I'll go back and do some independent study, then come back to the forum with better questions. :)
When looking through the 10g bug reports in Metalink, I became uncomfortable about some of the issues people have been running into (it's been a while since I did the initial evaluation, and I've forgotten which specific issues I looked at). I realize that Oracle 10g provides a lot of conveniences with the new web-based EM and EM Server (I'm especially interested in the new reports and built-in automation Oracle provides), and also with grid deployments for high-availability systems, but we've been held back, for several reasons, from going forward with 10g at this time. Having said that, moving to 10g is still planned for the future, so I am continuing the evaluation in several areas specific to our design, to determine what we can use and/or abandon in our existing deployment processes.
Thanks for everyone's time, best wishes.
Peter

Similar Messages

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates - getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI Server.
    For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table, either on Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    You asked: "For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level."
    It is not necessary to connect every dimension; it depends on the report you are creating. But as a best practice, when you have defined join conditions in the physical layer, you should maintain all of them at the Detail level.
    For example, for the sales fact table: if you want to report at the ProductDimension.ProductName level, you should use the Detail level; otherwise use the Total level (at the Product or Employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .NET.
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[## prefix is required.
    However, when I use this in the new report I am building, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but it returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear but looks like it.
    We had an API that could piggyback on the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a data set and then set the location to the data set. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks tempdb so that the user has exclusive rights on it.
    Thank you
    Don

  • Activate Scenarios with Solution Builder CRM Best Practices V1.2007

    Hi,
    I finished all steps in Quickguide for CRM Best Practices V1.2007 until the end.
    All worked fine without any problem.
    Now I want to activate a scenario.
    1. In the Workbench field I get a list of 15 requests/tasks, and I'm only able to select one.
    2. In the Customizing field I do not get any values.
    3. How do I maintain these fields?
    4. Do I have to create a customizing request?
    Can anybody tell me how to proceed with this step? I copied the standard solution to my favorite solution and marked seven scenarios.
    Perhaps there is documentation other than Solution_Builder_Quick_Start_V4?
    Regards
    Andreas

    Hi Andreas,
    In the same popup window, at the bottom, you will find options to create workbench and customizing requests.
    You can assign only one workbench request and one customizing request for all the activities of Solution Builder.
    If you do not have an existing customizing request, choose the option to create one.
    Regards,
    Padma

  • Building a best practice web application using ColdFusion and Java EE

    I've been tasked with rewriting an application using ColdFusion. I cannot seem to find a lot of information on best-practice development in ColdFusion. I am an experienced Java developer who has never used ColdFusion before. I want to build this application using a synergy of ColdFusion and Java EE technologies. Can someone recommend a book that outlines how to develop in ColdFusion? Ideally the book assumes the reader is an experienced developer with no exposure to ColdFusion, and the methods it outlines are still "best practice" methods.

    jaisheela wrote:
    Hello Friends,
    I am also in the same situation.
    I am building a new web application using JSF and AJAX.
    The requirement is that I need to use the IBM versions of DOJO and JSF, but I need to develop the whole application using Eclipse 3.3.2 and Tomcat 5.5.
    With the IBM versions of DOJO and JSF, will Eclipse and Tomcat help to speed up development, or do you suggest I go for Rational Application Developer and WebSphere Application Server?
    If I need to go with RAD and WAS: I am new to RAD and WAS, so is it easy to use them for this kind of application and implement the web application quickly?
    Any feedback will be a great help.
    Those don't sound like requirements of the system to me. They sound more like someone wants to improve their CV/resume.
    From what I've read recently, if it's just speed you want, look at Ruby on Rails.

  • Building Block & Best Practice

    Dear All,
    I have a query. I have been searching the net for quite some time, but I am unable to find the building block configuration and best practices for E-Recruitment.
    Can anyone please help me out with it? If SAP does not offer them, what is the next best option I have?
    Your feedback and help would be highly appreciated.
    Points will be awarded as well.
    Regards

    hi
    Just keep in mind that before you do any configuration, there has to be a transition program from the manual system to the automated system.
    Document every step in the AS-IS stage and then let them know what is and is not possible in the E-Recruitment module.
    Do not ever agree to major 'Z' (custom) development; that is the most important point.
    Regards
    Sameer

  • Join large external tables (best practice?)

    Hi there,
    I have three external tables in a master-detail relation: table A (10,000,000 rows) is the master of table B (20,000,000 rows), and table B is the master of table C (100,000,000 rows). Can you tell me the best way:
    - directly join the external tables, or
    - copy the external tables into regular database tables, create indexes, and join those?
    Which is more efficient, and why?
    Thanks for your help and ideas.
    Gerhard.

    In general, if the joins you are doing can benefit from indexes, you will want to copy the data to database tables. If the joins will end up doing full table scans anyway, it will matter far less.
    For data integrity, you will likely also want to be able to enforce foreign key constraints.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
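    A minimal sketch of the copy-and-index approach described above, assuming two hypothetical external tables EXT_ORDERS and EXT_ORDER_LINES and a plain JDBC connection; every object name and the connect string are illustrative only, not something from the original thread.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CopyAndJoin {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details - adjust to your environment.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
                 Statement stmt = con.createStatement()) {

                // 1. Copy the external table data into regular heap tables.
                stmt.execute("CREATE TABLE orders_stage AS SELECT * FROM ext_orders");
                stmt.execute("CREATE TABLE order_lines_stage AS SELECT * FROM ext_order_lines");

                // 2. Index the join column so the optimizer can use it instead of full scans.
                stmt.execute("CREATE INDEX order_lines_stage_ix ON order_lines_stage (order_id)");

                // 3. Join the copied tables rather than the external files.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT o.order_id, COUNT(*) " +
                        "FROM orders_stage o JOIN order_lines_stage l ON l.order_id = o.order_id " +
                        "GROUP BY o.order_id")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong(1) + " -> " + rs.getLong(2));
                    }
                }
            }
        }
    }

    As the reply notes, whether the index copies pay off depends on the joins: if the plan is going to full-scan everything anyway, loading and indexing buys much less.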

  • Build specifications best practice

    I am just wondering what other users do when building Installers, to find out what the most efficient method is.
    I used to include the LVRT Engine (LabVIEW Runtime) in the Installer Build
    PROS:
    1) Only one Installer is required for the Target Computer to install everything, making it much easier to install.
    CONS:
    1) The Installer is very large - hundreds of megabytes - making it harder to distribute upgrades (e.g. through e-mail).
    2) Each re-build of the Installer takes a VERY long time, because it has to include the LVRT.
    3) If you change to a different LV version, you then have to re-build using the new LVRT and the application EXE.
    So I decided to split the installation of the LVRT from the Application EXE.
    PROS:
    1) The executable is now very small - say 10 megabytes even for a large application - making it easier to send upgrades.
    2) Very quick to build a new Installer.
    3) For each version of the LVRT installed, there might be several upgrades to the EXE, making this approach more efficient.
    CONS:
    1) I now have to build & run an Installer for the Application EXE AND run the LVRT Installer to install the application on the Target Computer.
    Assuming we use the second approach, what is the best way of building and installing new and/or upgraded EXE applications?
    In my application I build a new executable for each software change and then create an Installer (with the same version number) to install the new executable, as shown below.
    However, each time I build a new Installer a new Upgrade Code is generated - {45D6510F-EB72-4DA2-A8E6-A4CB2363129A} in the Version Information window - and when I run the new Installer, if there is an existing installation present it just installs over it - BUT when you look in the ADD/REMOVE programs list there is a NEW entry each time a new Installer is run.
    Now, to overcome this, I could just use one common Installer that I upgrade each time with the latest EXE build, as follows...
    But then I don't have separate Installers for each version, and if I want to go back a version I have to build the Installer again.
    So what is the best method?
    Chris

    Norbert_B wrote:
    For long-term support, I suggest you split the installers apart: one containing all the components for executing the application (so drivers, LV RTE), another one for the application and its support files (ReadMe, config, ...). Let's call the first installer the "Framework Installer" and the second the "Application Installer".
    One correction in terminology btw:
    Since LV RT is the abbreviation for LabVIEW Real-Time, we use LV RTE (LabVIEW Run-Time Engine) as the abbreviation for the components required to execute compiled LV code.
    So why split up?
    First, the big Framework Installer is not going to change much. It has to be updated when you update LV or some of the drivers, but otherwise it can stay the same.
    Second, updates for the application can be supplied in small packages. Most developers like to use a simple "copy & replace" mechanism to do this, which works fine as long as you update only a few files; but once you have to update significant things (several files, changes in the expected directory structure), I recommend using an installer.
    Additional note on installers:
    Each installer includes something called an "upgrade code". This is the key the installer uses to register the application on the system. When another installer sharing this key is run, the OS will treat that as an update of the existing component. This is the suggested approach when you want to replace things with a newer version of the application. Manual copying and replacing might introduce errors...
    just my 5 cents,
    Norbert
    I also wonder about the upgrade code. Would you use the same code for the Framework Installer and the Application Installer? If so, wouldn't the application installer remove the framework again?
    When using different upgrade codes, the user will have 2 entries in Control Panel > Add/Remove Software. In that case, how would installing a new framework deal with the files modified by another installer (the EXE installer)?
    Also, will uninstalling both "products" leave a clean system (independent of the uninstallation order)?

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back-end. The database has not yet been converted to a higher format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period - weekly, monthly, quarterly and yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers who have their own calendars which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products per report. Each customer report is different, and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand, returns, on-hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time period, sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices or resources (books, web, etc.).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set-up tables.....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they are insistent that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as data in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.
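    As one illustration of keeping your own copies of the DDL outside the database (as the reply above recommends), here is a minimal sketch that pulls table DDL through Oracle's DBMS_METADATA.GET_DDL and writes one file per table, ready to be checked into version control; the connection string, schema and output directory are hypothetical.

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ExportDdl {
        public static void main(String[] args) throws Exception {
            Path outDir = Path.of("ddl");   // directory tracked by version control
            Files.createDirectories(outDir);

            // Hypothetical connection - point it at your development schema.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//dbhost:1521/DEVDB", "app_owner", "secret");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT table_name, DBMS_METADATA.GET_DDL('TABLE', table_name) FROM user_tables");
                 ResultSet rs = ps.executeQuery()) {

                while (rs.next()) {
                    String table = rs.getString(1);
                    String ddl = rs.getString(2);   // the DDL CLOB is read here as a String
                    Files.writeString(outDir.resolve(table + ".sql"),
                            ddl + "\n/\n", StandardCharsets.UTF_8);
                }
            }
        }
    }

    The scripts the DBAs actually run should still be the hand-reviewed ones from your repository; this only shows one way to capture the DDL so those scripts exist in the first place.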

  • Arranging fields in a table-like form: best-practice solution wanted

    Hello Experts,
    I'm wondering if there exists a 'best practice' for how to arrange fields in a table-like form.
    I know about cross-tabs, but that's not what we need. Most of the requirements I have come across are simply that certain fields should be put in a certain order in a table-like layout.
    We have tried to do this using the drawing functions (e.g. putting a box around the fields and using certain border styles), but it often happens that the lines overlap or there are breaks between the lines, so you have to do a lot of manual adjustment of the 'table'.
    Since this is a requirement I've come upon in many reports, I can't believe that this is supposed to be the best solution for it.
    I don't understand why there isn't a table-like element in Crystal Reports to use for this, e.g. put a table with x rows and y columns in the header or group header section and then just put the fields in it.
    Many thanks in advance for your help !

    Hi Frank,
    You can use the built-in templates available in the Template Expert.
    Click on the Report menu -> Template Expert.
    Select the desired template (the Table Grid template would suit best here) and click OK.
    There is no facility for inserting a table directly, as you said. You will have to do it manually using lines and boxes.
    Hope this is helpful.
    Regards

  • Best practice when deleting from different tables simultaneously

    Greetings people,
    I have two tables joined with a foreign key constraint. They are written to at the same time to keep the constraint happy, but I don't know the best way of deleting from them as far as rowsets and data models are concerned. Are there "gotchas", such as whether I should delete the row in the foreign key table first?
    I am reading thread: http://swforum.sun.com/jive/thread.jspa?forumID=123&threadID=49918
    and getting my head around it.
    Is there a tutorial which deals with this topic?
    I was wondering the best way to go.
    Many Thanks.
    Phil

    Without knowing many details about your specifics... I can suggest a few alternatives -
    You can definitely build coordination of the deletes into your application - you can automatically delete any FK-related entries prior to deleting the master, or refuse to delete the master until the user explicitly deletes the children... it just depends on how you want to manage it.
    Also, in many databases you can build the cascading delete rules into the database tables themselves, so that when you delete the master row the deletes automatically cascade to the children. This is typically something you declare when creating the FK constraint (delete cascade and update cascade rules).
    hth,
    v
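    A minimal sketch of that second alternative, declaring ON DELETE CASCADE on the child table's foreign key. It uses an in-memory H2 database (driver on the classpath assumed) purely so the example is self-contained, and the parent/child table names are invented; the same constraint clause exists in Oracle, SQL Server and most other databases.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CascadeDeleteDemo {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement stmt = con.createStatement()) {

                stmt.execute("CREATE TABLE parent (id INT PRIMARY KEY)");
                stmt.execute("CREATE TABLE child (" +
                             " id INT PRIMARY KEY," +
                             " parent_id INT NOT NULL," +
                             " CONSTRAINT fk_child_parent FOREIGN KEY (parent_id)" +
                             "   REFERENCES parent (id) ON DELETE CASCADE)");

                stmt.execute("INSERT INTO parent VALUES (1)");
                stmt.execute("INSERT INTO child VALUES (10, 1), (11, 1)");

                // Deleting the parent row removes its children automatically.
                stmt.execute("DELETE FROM parent WHERE id = 1");

                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM child")) {
                    rs.next();
                    System.out.println("child rows left: " + rs.getInt(1)); // prints 0
                }
            }
        }
    }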

  • Varying table columns, best practices

    I've been wondering about this for quite some time now. JTable is very complex, but it has a lot of functionality that hints at reusable models. The separation of TableModel and TableColumnModel seems to hint at being able to reuse a TableModel that stores some sort of objects and apply different column models to view the data in different ways, which is really cool.
    However, who is in charge of managing the columns? The default implementation is usually good enough, but it doesn't do anything special to the columns, like assigning renderers or editors. Should the column model be in charge of this? But then you have to swap out entire column models when you want to change the look of the table. What if you just want to vary the renderer on a column, or remove one column? Would you build a whole new ColumnModel for that?
    Should the JTable be in charge of setting itself up in these matters? That seems to impose the view's representation on the model. And what if you change views in some way that affects your model's structure?
    Should there be some external controller in charge of this?
    Sometimes you don't plan for these things, and it hurts you when you need to reuse models but maybe modify them in some way. What are your best practices?
    charlie

    The practice you described is what I'm doing right now, and I feel that it is cumbersome for reuse.
    What I'm wondering is whether anyone has come up with an elegant way to organize their classes' responsibilities around who populates the column model. I know I can subclass it and fill it in the subclass, but it seems that I might NOT need to subclass: I could use the default and have another class (maybe the JTable or a controller) populate the ColumnModel.
    Then I could get better reuse between TableModels and ColumnModels.
    charlie
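    A minimal sketch of that "external configurator" idea: the TableModel and the JTable's auto-created column model are left as they are, and a small helper class assigns renderers per column instead of subclassing anything. The column names, data and renderer choice here are invented for illustration.

    import javax.swing.JFrame;
    import javax.swing.JScrollPane;
    import javax.swing.JTable;
    import javax.swing.SwingConstants;
    import javax.swing.table.DefaultTableCellRenderer;
    import javax.swing.table.DefaultTableModel;
    import javax.swing.table.TableColumnModel;

    public class ColumnConfigurator {

        // Configure an existing column model without subclassing it.
        public static void applyRightAlignedNumbers(TableColumnModel columns, int... columnIndexes) {
            DefaultTableCellRenderer right = new DefaultTableCellRenderer();
            right.setHorizontalAlignment(SwingConstants.RIGHT);
            for (int i : columnIndexes) {
                columns.getColumn(i).setCellRenderer(right);
            }
        }

        public static void main(String[] args) {
            // Reusable model; the view-specific tweaks live outside of it.
            DefaultTableModel model = new DefaultTableModel(
                    new Object[][] { { "Widget", 3, 9.99 }, { "Gadget", 7, 4.50 } },
                    new Object[] { "Name", "Qty", "Price" });
            JTable table = new JTable(model);

            // The "controller" configures the auto-created column model.
            applyRightAlignedNumbers(table.getColumnModel(), 1, 2);

            // Removing a column is likewise a pure column-model operation:
            // table.getColumnModel().removeColumn(table.getColumnModel().getColumn(2));

            JFrame frame = new JFrame("demo");
            frame.add(new JScrollPane(table));
            frame.pack();
            frame.setVisible(true);
        }
    }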

  • [CS4-CS5] Table from XML: what's the best practice?

    Hi,
    I have to build a huge table (20-25 pages long...) from an XML file.
    No problem with that; I wrote an XSLT file to convert my client's XML into the "Table/Cell" structure InDesign needs, with all the style parameters.
    The problem is that it takes InDesign a long time (4-5 hours) to build the whole table.
    I wonder if this is still the best practice with such a huge amount of data (the input XML is 1.1 MB).
    I also tried to build the table using a script (JavaScript), but from some timing tests I can see the problem is even bigger that way.
    I'm currently using an iMac (Mac OS X 10.6.2) with a 3.06 GHz Intel Core 2 Duo and 8 GB of RAM; it's not exactly the worst computer in the world...
    Is there a best practice for this kind of work?
    Client is becoming a pain in the arse...
    Thanks in advance!

    First transform the XML through XSLT separately, and then import that already-transformed XML into InDesign.
    Hope it helps.
    Regards,
    Anil Yadav
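    A minimal sketch of that pre-processing step using the JDK's built-in XSLT processor, so that only the already-transformed XML is imported into InDesign; the file names are placeholders, not the poster's actual files.

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class OfflineTransform {
        public static void main(String[] args) throws Exception {
            // Placeholder file names - replace with the client's XML and your stylesheet.
            File sourceXml = new File("client-data.xml");
            File stylesheet = new File("table-structure.xsl");
            File resultXml = new File("indesign-ready.xml");

            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(stylesheet));
            transformer.transform(new StreamSource(sourceXml), new StreamResult(resultXml));

            // indesign-ready.xml can now be imported into InDesign without the
            // per-import XSLT step slowing the layout down.
            System.out.println("Wrote " + resultXml.getAbsolutePath());
        }
    }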

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any dblinks from Oracle to SQL Server or vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to sql server
    - open two connections to the oracle databases
    - for each row you read from the sql server, do the inserts into the oracle databases
    - commit
    - close all connections
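    A minimal sketch of that outline using a TimerTask and plain JDBC. The connection strings, table and column names are placeholders, only one Oracle target is shown for brevity (the second database would be handled the same way), and a real job would need more careful error handling and restart logic.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Timer;
    import java.util.TimerTask;

    public class HalfHourlyTransfer extends TimerTask {

        @Override
        public void run() {
            try (Connection src = DriverManager.getConnection(
                         "jdbc:sqlserver://mssqlhost:1433;databaseName=SRC", "user", "pwd");
                 Connection dst = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//orahost:1521/DST", "user", "pwd")) {

                dst.setAutoCommit(false);
                try (Statement read = src.createStatement();
                     ResultSet rs = read.executeQuery(
                             "SELECT id, name, amount FROM dbo.sales_feed");
                     PreparedStatement write = dst.prepareStatement(
                             "INSERT INTO sales_feed (id, name, amount) VALUES (?, ?, ?)")) {

                    // For each row read from SQL Server, queue an insert into Oracle.
                    while (rs.next()) {
                        write.setLong(1, rs.getLong("id"));
                        write.setString(2, rs.getString("name"));
                        write.setBigDecimal(3, rs.getBigDecimal("amount"));
                        write.addBatch();
                    }
                    write.executeBatch();
                    dst.commit();        // one commit per half-hourly run
                } catch (Exception e) {
                    dst.rollback();
                    throw e;
                }
            } catch (Exception e) {
                e.printStackTrace();     // log it and try again on the next run
            }
        }

        public static void main(String[] args) {
            // Fire immediately, then every 30 minutes.
            new Timer("sqlserver-to-oracle", false).scheduleAtFixedRate(
                    new HalfHourlyTransfer(), 0L, 30L * 60L * 1000L);
        }
    }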
