Varying table columns, best practices

I've been wondering about this for quite some time now. JTable is very complex, but it has a lot of functionality that hints at reusable models. The separation of TableModel and ColumnModel seems to hint at being able to reuse a TableModel that stores some sort of objects, and apply different ColumnModels to view the data in different ways. Which is really cool.
However, who is in charge of managing the columns? The default implementation is usually good enough, but it doesn't do anything special to the columns, like assigning renderers or editors. Should the column model be in charge of this? But then you have to swap entire column models out when you want to change the look of the table. What if you just want to vary the renderer on a column, or remove one column? Would you build a whole new ColumnModel for this?
Should the JTable be in charge of setting itself up in these matters? But that seems to impose the view's representation on the model. What if you change views in some way that affects your model's structure?
Should there be some external controller in charge of this?
Sometimes you don't plan for these things, and it hurts you when you need to reuse models but modify them in some way. What are your best practices?
charlie

The practice you described is what I'm doing right now, and I feel that it is cumbersome for reuse.
What I'm wondering is whether anyone has come up with a very elegant way to organize their classes' responsibilities around who populates the column model. I know I can subclass and fill it in the subclass, but it seems that I might NOT need to subclass: use the default, and have another class (maybe the JTable or a controller) populate the ColumnModel.
Then I can get better reuse between TableModels and ColumnModels.
charlie

Similar Messages

  • Table partitioning best practices

    Just looking for some ideas.
    We have a large information warehouse table that we are looking to partition on a 'service_period' id. We have been asked to partition on every month of every year, which right now will create approximately 70 partitions. The other problem is that this is a rolling or dynamic partition, meaning we will have a new partition value with each new month. I understand 11g has interval (rolling) partition functionality, but we are not there yet.
    So right now we are looking for a best practice for this scenario. We are thinking of possibly creating a partition on each year and indexing on the service period within each partition; maybe hash partitioning on the service period id (although that does not seem to group the service periods distinctly within each partition); or somehow creating the partitions dynamically via PL/SQL (creating the table with a basic partition and then running an ALTER TABLE on the data to create the proper number of partitions within a list partition).
    I am also wondering if there is a point where a table has too many partitions. I am thinking 70 may be a little extreme, but I am not sure. We are going to do some performance testing, but it would be nice to hear from the community. We have 5,000,000 rows over approximately 70 partitions, giving us roughly 70,000 records per partition. The other option would be to create the partitions based on year and then apply an index on top for the service period to reduce the number of partitions.
    Thanks in advance,
    Bruce Carson

    This is not a lot of data, so the effort of partitioning may not be worth the benefit you receive. 70 partitions is not unreasonable. Do you have performance problems? Do the majority of your queries reference service_period? Do you have a lot of full table scans?
    Partitioning strategies depend on the queries you plan to run, your data distributions, and your purge / archival strategy.
    Think about whether you should pre-create partitions for years in advance. Think about whether you should put every partition in a separate tablespace for easy purging and archival. Think about what indexing you will use (can you use local indexes, or do you need global?). Think about what data changes are happening. Are they all on the newest data? Can you compress older partitions with PCTFREE 0 to improve performance?
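    For reference, a minimal Oracle sketch of the yearly-range-partition-plus-local-index idea discussed above (table and column names are hypothetical):
    -- Hypothetical warehouse table, range partitioned by year,
    -- with a local index on the service period id inside each partition.
    CREATE TABLE service_fact (
      service_period_id   NUMBER       NOT NULL,
      service_period_date DATE         NOT NULL,
      amount              NUMBER(12,2)
    )
    PARTITION BY RANGE (service_period_date) (
      PARTITION p2008 VALUES LESS THAN (TO_DATE('2009-01-01','YYYY-MM-DD')),
      PARTITION p2009 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD')),
      PARTITION p2010 VALUES LESS THAN (TO_DATE('2011-01-01','YYYY-MM-DD'))
    );
    -- Local index: each partition carries its own index segment.
    CREATE INDEX service_fact_period_ix
      ON service_fact (service_period_id) LOCAL;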

  • Azure table design best practice

    What's the best practice for designing tables in Azure Tables to optimize query performance?

    Hi Raj,
    When we design an Azure table, we need to consider the scalability of the table, and selecting the PartitionKey is the most important factor for scalability.
    Basically, we have two options, each with its own advantages and disadvantages:
    First option: have a single partition, by using the same value of PartitionKey for all entities.
    Second option: have a unique value of PartitionKey for every entity.
    For more information about how to get the most out of Windows Azure tables, please refer to the link below:
    http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
    There is also a detailed article which explains how to design a scalable partitioning strategy for Windows Azure Storage; please refer to the link below:
    http://msdn.microsoft.com/en-us/library/hh508997.aspx
    Best Regards,
    Kevin Shen

  • External Table Authorization Best practices

    Hi,
    I am working on OBIEE external table authorization. I am able to implement it successfully for one project (catalog). The fields of the authorization table (AuthTable) are:
    Windows_ID, Employeeid, Name, EmpEMail, GroupName, Process_ID, Process_Name, Portal_Path
    Here, as per the requirement, a user should see data for only a few processes. So I added a column for Process_ID and created an INIT block in the repository with a query like:
    Select 'PROCESS_ID', AuthTable.Process_id
    From AuthTable
    WHERE upper(AuthTable.AD_ID) = upper(':USER')
    Then for user groups I applied filters on all the tables, e.g. for every Logical Table I applied the filter
    Dim_Process."Process ID" = VALUEOF(NQ_SESSION."PROCESS_ID")
    I checked the data and everything is correct. But my question is:
    We have many projects/catalogs for which the filter criteria will be different, so should we insert a new column for each criterion in the SAME AuthTable, or is there another, better way to maintain it? If we maintain one table for all the projects/catalogs it will get very messy; I would prefer to keep different tables for different projects/catalogs, as their data marts are different.
    But the problem is that while for all the other session variables we may use different INIT blocks and hence different tables, for PORTALPATH there should be only one INIT block, so do we need to keep everything in the same table just for PORTALPATH's sake?
    Tell me if I am wrong somewhere in my understanding, or if there is a better way to do it.
    Regards
    Saurabh

    Hi,
    Pls refer to this link. Kumar explained it very clearly
    http://obieeblog.wordpress.com/category/obiee/obiee-security/
    Pls award points, if helpful
    Regards,
    Sarat Nallapati

  • VC table use, best practices?

    Hi,
    I'm updating a table in the back end with an RFC. I would like to send only the rows I've modified or added on the VC client to the RFC and not the whole table. Is this possible?

    Hey Joel,
    Add a condition (say, a check box) for changed table rows (the user needs to select which rows he is modifying).
    While sending the values to the RFC, set the guard condition to be valid only when the check box is selected.
    Regards,
    Pradeep

  • What is the best practice for inserting (unique) rows into a table containing key columns constraint where source may contain duplicate (already existing) rows?

    My final data table contains a two key columns unique key constraint.  I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained
    (not unique) in the daily capture table).  I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns).  Currently, what I do is to select * into a #temp table from the join
    of daily capture and final data tables on these two key columns.  Then I delete the rows in the daily capture table which match the #temp table.  Then I insert the remaining rows from daily capture into the final data table. 
    Would it be possible to simplify this process by using an Instead Of trigger in the final table and just insert directly from the daily capture table?  How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the fist day of your RDBMS class? A table has to have a key.  You need to fix this error. What ETL tool do you use? 
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
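    A minimal T-SQL sketch of the suggested MERGE approach, with hypothetical table and column names (FinalData carries the two-column unique key, DailyCapture does not):
    MERGE INTO FinalData AS tgt
    USING (SELECT DISTINCT key_col1, key_col2, other_col   -- DISTINCT guards against exact duplicates inside the capture itself
             FROM DailyCapture) AS src
       ON tgt.key_col1 = src.key_col1
      AND tgt.key_col2 = src.key_col2
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (key_col1, key_col2, other_col)
      VALUES (src.key_col1, src.key_col2, src.other_col);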

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have an homegrown Access database originally designed in 2000 that now has an SQL back-end.  The database has not yet been converted to a higher format such as Access 2007 since at least 2 users are still on Access 2003.  It is fine if suggestions
    will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to
    set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, and by specific products or
    categories across specific weeks/months/years? We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain
    the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
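    Purely as an illustration of the direction such a redesign usually takes (hypothetical names, SQL Server dialect): a single narrow fact table keyed by customer, product and reporting period replaces the per-customer/per-period spreadsheets, and measures that only some customers report simply stay NULL:
    CREATE TABLE Customer (
        CustomerID   INT IDENTITY PRIMARY KEY,
        CustomerName NVARCHAR(100) NOT NULL
    );
    CREATE TABLE SalesReportFact (
        CustomerID         INT  NOT NULL REFERENCES Customer(CustomerID),
        ProductID          INT  NOT NULL,   -- joins to the existing product table
        PeriodStart        DATE NOT NULL,   -- supports customer-specific calendars
        PeriodEnd          DATE NOT NULL,
        OnHandUnits        INT           NULL,
        OnOrderUnits       INT           NULL,
        SellThroughUnits   INT           NULL,
        SellThroughDollars DECIMAL(12,2) NULL,
        CONSTRAINT PK_SalesReportFact
            PRIMARY KEY (CustomerID, ProductID, PeriodStart)
    );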

  • Best practice to define length for varchar field of table in sql server

    What is the best practice for defining the length of a varchar field in a table?
    For a field such as "Remarks By Person", should it be varchar(max) or varchar(4000)?
    Could it affect optimization in the future?
    Experts, please reply ...
    Dilip Patil..

    Hi Dilip,
    Varchar(n/max) is variable-length, non-Unicode character data. N defines the string length and can be a value from 1 through 8,000. Max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered
    + 2 bytes. We use varchar when the sizes of the column data entries vary considerably. If the field data size might exceed 8,000 bytes, we should use varchar(max).
    So the conclusion is, just as Uri said: whether to use varchar(max) or varchar(4000) depends on how many characters we are going to store.
    The following document about varchar in SQL Server is for your reference:
    http://technet.microsoft.com/en-us/library/ms176089.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support
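    A small illustration of the trade-off (hypothetical table, SQL Server 2005 or later):
    CREATE TABLE PersonRemarks (
        RemarksShort VARCHAR(4000),  -- up to 4,000 characters
        RemarksLong  VARCHAR(MAX)    -- up to 2^31-1 bytes (~2 GB); values over 8,000 bytes are stored off-row
    );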

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2,500 tables in our database, and they all have the same columns, differing only in table name. Each device
    sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use, direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come through in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining
    it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issues start: they are pulling their data from our server over the internet while 2,000
    units are sending data to it, and they keep getting read timeouts because the inserts hold locks that block all the select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve. The problem is that whoever wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance. 
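    On the snapshot question: snapshot isolation does not exist in SQL Server 2000, but if the instance is actually SQL Server 2005 or later, read-committed snapshot is a per-database switch, and once it is on, ordinary SELECT statements read row versions automatically with no hints needed. A sketch, using a hypothetical database name:
    ALTER DATABASE Tracking SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
    -- From now on, default READ COMMITTED selects use row versioning and are no
    -- longer blocked by the constant inserts, so the READ UNCOMMITTED workaround
    -- in the C# code can be dropped.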

  • Best Practice: how to fetch tables, views, ... names and schema

    hi,
    I am looking for the best practice for getting the catalog of a database.
    I have seen that I can run selects against system tables (or views) such as DBA_TABLES, DBA_VIEWS, or DBA_CATALOG, but is that the best way to grab this information?
    (I ask because it seems strange to me to get the table names using a simple select, but to get column info using a specialized function, OCIDescribeAny(). This does not look like a coherent API...)
    thanks for your advice
    cd

    Along the same lines, why use OCIDescribeAny() instead of an appropriate select from DBA_TAB_COLUMNS?
    cd
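    For reference, the dictionary-view route is plain SQL; the ALL_* views show what the connected user can see, while the DBA_* views need extra privileges (SCOTT/EMP below are just the familiar demo schema):
    SELECT owner, table_name FROM all_tables ORDER BY owner, table_name;
    SELECT owner, view_name  FROM all_views  ORDER BY owner, view_name;
    -- The query-based alternative to OCIDescribeAny() for column metadata.
    SELECT column_name, data_type, data_length, nullable
      FROM all_tab_columns
     WHERE owner = 'SCOTT' AND table_name = 'EMP'
     ORDER BY column_id;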

  • Arranging fields in a table-like form: best-practice solution wanted

    Hello Experts,
    I'm wondering if there is a 'best practice' for how to arrange fields in a table-like form.
    I know about cross-tabs, but that's not what we need. Most of the requirements I have come across are simply that certain fields should be put in a certain order in a table-like layout.
    We have tried to do this using the drawing functions (e.g. putting a box around the fields and using certain border styles), but it often happens that the lines overlap or there are breaks between the lines, so you have to do a lot of manual tweaking of the 'table'.
    Since this is a requirement I've come across in many reports, I can't believe that this is supposed to be the best solution.
    I don't understand why there isn't a table-like element in Crystal Reports to use for this, e.g. put a table with x rows and y columns in the header or group header section and then just put the fields in it.
    Many thanks in advance for your help!

    Hi Frank,
    You can use the built-in templates available in the Template Expert.
    Click the Report menu -> Template Expert.
    Select the desired template (the Table Grid Template would suit best here) and click OK.
    There is no facility for inserting a table directly, as you said. You will have to do it manually using lines and boxes.
    Hope this is helpful.
    Regards

  • Best Practice of Essbase Cube Join Relational Table

    Hi,
    As the topic title says, what is the best practice for joining an Essbase cube to a relational table?
    Please note that I want description columns, not measure columns, from the relational table.
    Thanks.

    As a starting point, check out what Ed put together a while ago:
    https://community.altiusconsulting.com/blogs/altiustechblog/archive/2008/10/24/are-essbase-and-oracle-bi-enterprise-edition-obiee-a-match-made-in-heaven.aspx
    http://community.altiusconsulting.com/blogs/altiustechblog/archive/2008/11/03/enhance-your-essbase-month-dimension-to-aid-sorting.aspx
    Cheers,
    C.

  • Database best practice: max number of columns

    I have two questions that I would appreciate comments on...
    We have a table named TRANSACTION with 160 columns and a view named TRANSACTIONS_VIEW with 233 columns. This was designed by someone a while ago. Is it against best practice to have this many columns in a table? I have never before seen a table with this many columns and feel that there must be a way to split the data into multiple tables to make it more manageable.
    My second question is on partitions: the TRANSACTION table above is partitioned by manually specifying partitions with max values on the transaction date, from August 2008 through January 2010 at 1-month increments. Isn't it much better to specify automatic partitioning using the interval clause?

    kev374 wrote:
    "Thanks for the response. Yes, there are many columns that violate 3NF, and that is why the column count is so high. Regarding the partition question, by 'better' I meant that by using 'interval' the partitions could be created automatically at the specified interval instead of having to create them manually."
    The key is to understand the logic behind these tables and columns: why it was designed like this. If it's a business requirement, then 200-some columns are not bad; if it's a design flaw, 20 columns could be too much. It's not necessarily always good to have a strict 3NF design; sometimes, for various reasons, you can denormalize the tables to get better performance.
    As to the partitioning question, so far you have to do the rolling window (drop/add partitions as time goes by) type of partitioning scheme manually.
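    For reference, the interval clause the original question refers to (available from 11g onward) looks like this; a hypothetical, cut-down version of the TRANSACTION table, where Oracle creates each monthly partition automatically as rows arrive:
    CREATE TABLE transaction_part (
      transaction_id   NUMBER       NOT NULL,
      transaction_date DATE         NOT NULL,
      amount           NUMBER(12,2)
    )
    PARTITION BY RANGE (transaction_date)
    INTERVAL (NUMTOYMINTERVAL(1,'MONTH')) (
      -- Only the first partition is declared; later ones are created automatically.
      PARTITION p_initial VALUES LESS THAN (TO_DATE('2008-08-01','YYYY-MM-DD'))
    );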

  • Oracle best practices for changing Byte to Char on Varchar2 columns

    Dear Team,
    The application team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    I wanted to know whether it is safe to have a mixture of BYTE and CHAR semantics columns in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Do we need to gather stats and rebuild indexes on the table after these column changes?
    Thanks in advance!
    SK

    The application team wants to change Byte to Char on Varchar2 columns to accommodate multi-byte characters on a couple of production tables.
    I wanted to know whether it is safe to have a mixture of BYTE and CHAR semantics columns in the same table; I have read in a couple of documents that it's good practice to avoid using a mixture of BYTE and CHAR semantics columns in the same table.
    No change is needed to 'accommodate Multibyte characters'. That support has NOTHING to do with whether a column is specified using BYTE or CHAR.
    In 11g the limit for a VARCHAR2 column is 4000 bytes, period. If you specify CHAR and try to insert 1001 characters that each take 4 bytes you will get an exception since that would require 4004 bytes and the limit is 4000 bytes.
    In practice the use of CHAR is mostly a convenience to the developer when defining columns for multibyte characters. For example for a NAME column you might want to make sure Oracle will allocate room for 50 characters REGARDLESS of the actual length in bytes.
    If you provide a name of 50 one byte characters then only 50 bytes will be used. Provide a name of 50 four byte characters and 200 bytes will be used.
    So if  that NAME column was defined using BYTE how would you know what length to use for the column? Fifty BYTES will seldom be long enough and 200 bytes SEEMS large since the business user wants a limit of FIFTY characters.
    That is why such columns would typically use CHAR; so that the length (fifty) defined for the column matches the logical length of the number of characters.
    What happens if we have a mixture of BYTE and CHAR semantics columns in the same table?
    Nothing happens - Oracle couldn't care less.
    Do we need to gather stats and rebuild indexes on the table after these column changes?
    No - not if by 'need' you mean simply because you made ONLY that change.
    But that begs the question: if the table already exists, has data and has been in use without there being any problems, then why bother changing things now?
    In other words: if it ain't broke, why try to fix it?
    So back to your question of 'best practices':
    Best practice is to set the length semantics at the database level when the database is first created and then to use that same setting (BYTE or CHAR) when you create new objects or make DDL changes.
    Best practice is also not to fix things that aren't broken.
    See the 'Length Semantics' section of the globalization support guide for more best practices
    http://docs.oracle.com/cd/E11882_01/server.112/e10729/ch2charset.htm#i1006683
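    A small illustration of the two length semantics being discussed (hypothetical table, assuming an AL32UTF8 character set):
    CREATE TABLE person_demo (
      name_byte VARCHAR2(50 BYTE),  -- room for 50 bytes: as few as 12 four-byte characters
      name_char VARCHAR2(50 CHAR)   -- room for 50 characters, up to the 4000-byte column limit
    );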

  • What's the best practice to partition an existing table?

    We are using Oracle 11g EE 11.2.0.3.0. We want to partition an existing table. I found some information from this link [http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php], but I am not sure whether that's the best way. Could anyone share the best practice of partitioning an existing table?
    Thanks.
    Jun

    >
    We want to partition a few related tables. A couple of tables have about 30 million rows, and a couple of other tables have about 10 million rows. The data in those tables covers ~2000 accounts, each identified by a column named "account_id". We want to partition the tables using "account_id" as the partition key to gain better performance for queries, so we would probably use hash partitioning. Please advise.
    >
    1. What evidence do you have that your performance will be any better if you partition those tables?
    2. What evidence do you have that there is anything wrong with the performance of your current queries?
    3. Have you reviewed the actual execution plans for your most important queries to see if there is even any room for improvement?
    If your query uses account_id as a predicate but only pulls a few records from a 'master' table for that key value your performance may actually be worse if you partition that table. That is because using an index range scan Oracle can easily find the ROWIDs of the rows that need to be selected and then easily retrieve them regardless of their physical location.
    If you have queries that do NOT use account_id but currently use indexes the performance of those queries may also be worse if you partition that table. Those indexes will be GLOBAL.
    There are two major implications when you use hash partitioning:
    1. Any query that does NOT include the partitioning key (account_id for your case) will, of necessity, have to use a full table scan if an appropriate index is not available.
    2. You will get NO management benefits such as being able to add/remove partitions that include old data.
    Before you decide to partition a table you should conduct tests and examine the execution plans for your important queries.
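    If testing does show a benefit, the hash-partitioned copy being evaluated might look roughly like this (hypothetical, cut-down table; migrating the existing rows would still need CTAS, partition exchange, or DBMS_REDEFINITION):
    CREATE TABLE account_fact_part (
      account_id NUMBER       NOT NULL,
      event_date DATE         NOT NULL,
      amount     NUMBER(12,2)
    )
    PARTITION BY HASH (account_id) PARTITIONS 16;
    -- Local index so queries filtered on account_id prune to a single partition.
    CREATE INDEX account_fact_part_ix ON account_fact_part (account_id) LOCAL;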
