Best index for big association table

Hi,
I have an association table:
SURROGATE_ID, CONTENT_TYPE, OWNER_ID, REF_COUNT
SURROGATE_ID is an auto-generated primary key.
There are about 5 distinct CONTENT_TYPE integer values.
There is a huge number of OWNER_ID long-integer values, including a special NOBODY_ID value.
REF_COUNT reflects the number of contents of that type with that owner.
I need a fast COUNT(*) query that, for a given CONTENT_TYPE value and a given OWNER_ID value, counts the number of rows (either zero or one) with REF_COUNT > 0.
My naive ploy would be orthogonal indices on CONTENT_TYPE, OWNER_ID, and REF_COUNT.
And then construct a reasonable query not worrying about the order of the conditions:
where CONTENT_TYPE = :mytype and OWNER_ID = :myowner and REF_COUNT > 0
Can someone tell me the wisest index to use?
Andy

user9990110 wrote:
Hi,
I have an association table:
SURROGATE_ID, CONTENT_TYPE, OWNER_ID, REF_COUNT
The SURROGATE_ID is an auto generated primary key.
There are about 5 different content type integers.
There are a huge number of OWNER_ID long integers, including a value of NOBODY_ID.
The refcount reflects the number of contents for that type with that owner.
I need a fast COUNT(*) query that, for a given CONTENT_TYPE value and a given OWNER_ID value, counts the number of rows (either zero or one) with REF_COUNT > 0.
Define fast, please.
user9990110 wrote:
My naive ploy would be orthogonal indices on CONTENT_TYPE, OWNER_ID, and REF_COUNT.
I'm not sure I understand your use of the word orthogonal in this context; can you elaborate please?
user9990110 wrote:
And then construct a reasonable query not worrying about the order of the conditions:
where CONTENT_TYPE = :mytype and OWNER_ID = :myowner and REF_COUNT > 0
Can someone tell me the wisest index to use?
Andy
Don't worry about the order of the conditions in your SQL; the cost-based optimizer will handle that for you based on a plethora of factors.
Are you always going to query by content_type, owner_id and ref_count? How many other columns does this table have? You say 'big', but are you talking row-wise, column-wise, or both?
My first instinct would be to index (in this order, and assuming this table has many other columns):
CONTENT_TYPE
OWNER_ID
REF_COUNT
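For what it's worth, a single composite index covering all three predicates is usually the simplest way to get this. A minimal sketch, assuming Oracle syntax and a hypothetical table name ASSOC:
CREATE INDEX assoc_type_owner_ref_ix
  ON assoc (content_type, owner_id, ref_count);

-- The COUNT(*) can then be answered from the index alone, with no table access:
SELECT COUNT(*)
  FROM assoc
 WHERE content_type = :mytype
   AND owner_id     = :myowner
   AND ref_count    > 0;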

Similar Messages

  • How can I create index(uniqie index) for a existed table?

    Hi guys,
    I want to create an index for an existing table, and it should be a unique index. I did it using SE11, "Indexes", and selected the radio button "Unique index", along with the fields, of course. But when I activate it, I get a warning and therefore cannot create the index. If I select the radio button "Non-unique index" instead, it works.
    However, I have to create a unique index.
    How can I do it?
    Thanks in advance
    Regards,
    Liying

    Hi Wang,
    You can create your index via SE11: enter the table name, click Change, and choose Goto > Indexes. There, create your index with the key fields that you want. For the index to be used, your SELECT statement's WHERE clause must contain the key fields of the index in the order that they appear in the index. The "optimizer" will choose the index depending on the fields of your WHERE clause.
    One thing to remember is that when you create indexes for tables, updates or inserts on these tables may have a slower response than before; I for one have never seen a big problem with this as of yet.
    These documents may help you with primary and secondary indexes:
    http://jdc.joy.com/helpdata/EN/cf/21eb20446011d189700000e8322d00/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/cf/21eb47446011d189700000e8322d00/content.htm
    If it helps, reward the points.
    Regards,
    Rk
    Message was edited by:
            Rk Pasupuleti

  • Index for a PSA table is not in the "customer" namespace

    Hi,
    While loading data through infosource 0CO_OM_WBS_1 to
    data target 0IMFA_1, the load fails, and the reason given is that the system reads data from PSA table /BIC/B0001060000 of 0IM_FA_IQ_9 and the index generated for this table supplied return code 14.
    I found no notes on the subject with the text:
    Index for a PSA table is not in the "customer" namespace.
    thanks in advance for your help.

    Hi,
    I think you need to speak to your Basis guys / DBA, as the index created on the PSA table is not in your tablespace. They should be able to help you.
    Vikash

  • Can we create secondary index for a cluster table

    Hi,
    Can we create a secondary index for a cluster table?

    Jyothsna,
    There seems to be some kind of misunderstanding here. You <i>cannot</i> create a secondary index on a cluster table. A cluster table does not exist as a separate physical table in the database; it is part of a "physical cluster". In the case of BSEG for instance, the physical cluster is RFBLG. The only fields of the cluster table that also exist as fields of the physical cluster are the leading fields of the primary key. Taking again BSEG as the example, the primary key includes the fields MANDT, BUKRS, BELNR, GJAHR, BUZEI. If you look at the structure of the RFBLG table, you will see that it has primary key fields MANDT, BUKRS, BELNR, GJAHR, PAGENO. The first four fields are those that all cluster tables inside BSEG have in common. The fifth field, PAGENO, is a "technical" field giving the sequence number of the current record in the series of cluster records sharing the same primary key.
    All the "functional" fields of the cluster table (for BSEG this is field BUZEI and everything beyond that) exist only inside a raw binary object. The database does not know about these fields, it only sees the raw object (the field VARDATA of the physical cluster). Since the field does not exist in the database, it is impossible to create a secondary index on it. If you try to create a secondary index on a cluster table in transaction SE11, you will therefore rightly get the error "Index maintenance only possible for transparent tables".
    Theoretically you could get around this by converting the cluster table to a transparent table. You can do this in the SAP dictionary. However, in practice this is almost never a good solution. The table becomes much larger (clusters are compressed) and you lose the advantage that related records are stored close to each other (the main reason for having cluster tables in the first place). Apart from the performance and disk space hit, converting a big cluster table like BSEG to transparent would take extremely long.
    In cases where "indexing" of fields of a cluster table is worthwhile, SAP has constructed "indexing tables" around the cluster. For example, around BSEG there are transparent tables like BSIS, BSAS, etc. Other clusters normally do not have this, but that simply means there is no reason for having it. I have worked with the SAP dictionary for over 12 years and I have never met a single case where it was necessary to convert a cluster to transparent.
    If you try to select on specific values of a non-transparent field in a cluster without also specifying selections for the primary key, then the database will have to do a serial read of the whole physical cluster (and the ABAP DB interface will have to decompress every single record to extract the fields). The performance of that is monstrous -- maybe that was the reason of your question. However, the solution then is (in the case of BSEG) to query via one of the index tables (where you are free to create secondary indexes since those tables are transparent).
    Hope this clarifies things,
    Mark

  • Creating index for standard SAP tables

    Hi!
    What are the advantages and disadvantages of creating additional indexes for tables with massive amounts of data (BSEG, BKPF, COEP, etc.)?
    If I create a new index, it is supposed to make table access faster, at the cost of hard disk space.
    Am I right?
    Thank you
    Tamá

    Hi,
    Primary and secondary indexes
    Index: Technical key of a database table.
    Primary index: The primary index contains the key fields of the table and a pointer to the non-key fields of the table. The primary index is created automatically when the table is created in the database.
    Secondary index: Additional indexes could be created considering the most frequently accessed dimensions of the table.
    Structure of an Index
    An index can be used to speed up the selection of data records from a table.
    An index can be considered to be a copy of a database table reduced to certain fields. The data is stored in sorted form in this copy. This sorting permits fast access to the records of the table (for example using a binary search). Not all of the fields of the table are contained in the index. The index also contains a pointer from the index entry to the corresponding table entry to permit all the field contents to be read.
    When creating indexes, please note that:
    An index can only be used up to the last specified field in the selection! The fields which are specified in the WHERE clause for a large number of selections should be in the first position.
    Only those fields whose values significantly restrict the amount of data are meaningful in an index.
    When you change a data record of a table, you must adjust the index sorting. Tables whose contents are frequently changed therefore should not have too many indexes.
    Make sure that the indexes on a table are as disjunctive as possible.
    (That is they should contain as few fields in common as possible. If two indexes on a table have a large number of common fields, this could make it more difficult for the optimizer to choose the most selective index.)
    Accessing tables using Indexes
    The database optimizer decides which index on the table should be used by the database to access data records.
    You must distinguish between the primary index and secondary indexes of a table. The primary index contains the key fields of the table. The primary index is automatically created in the database when the table is activated. If a large table is frequently accessed such that it is not possible to apply primary index sorting, you should create secondary indexes for the table.
    The indexes on a table have a three-character index ID. '0' is reserved for the primary index. Customers can create their own indexes on SAP tables; their IDs must begin with Y or Z.
    If the index fields have key function, i.e. they already uniquely identify each record of the table, an index can be called a unique index. This ensures that there are no duplicate index fields in the database.
    When you define a secondary index in the ABAP Dictionary, you can specify whether it should be created on the database when it is activated. Some indexes only result in a gain in performance for certain database systems. You can therefore specify a list of database systems when you define an index. The index is then only created on the specified database systems when activated.
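    To illustrate the point above about leading fields with plain SQL (a generic sketch; the table, columns and index name here are invented and not SAP-specific):
    -- Composite index with the most frequently restricted, selective column first
    CREATE INDEX zorders_cust_date_ix ON zorders (customer_id, order_date);
    -- Can use the index: the leading column is restricted
    SELECT * FROM zorders WHERE customer_id = 4711 AND order_date >= DATE '2024-01-01';
    -- Usually cannot use it efficiently: the leading column is not restricted
    SELECT * FROM zorders WHERE order_date >= DATE '2024-01-01';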
    Thanks and Regards
    Arun Joseph

  • Best index for Between operator

    I have a table abc with a create_date column of DATE datatype.
    My table abc has one million rows.
    My query is something like:
    select * from abc
    where create_Date between :low_date and :high_date;
    When I use this query in my report it takes 15 hrs for one month of data, and in the normal case it is doing a full table scan of abc.
    So I want to create an index. Please let me know what type of index will give me the performance improvement (expected: 4 to 5 hours).

    Alex wrote:
    Hi,
    Create a non-unique bitmap index for the create_Date column:
    CREATE INDEX Your_Index_Name ON Your_table_Name
    (create_Date )
    LOGGING
    PARALLEL ( DEGREE Default INSTANCES Default );
    Even though you have 1 million records, 15 hours is too far from the normal situation.
    So once you create the index, check the report again.
    If it still takes a long time, then execute the same query in SQL*Plus or any other SQL interface and see how long it takes to retrieve all the details.
    Sometimes this slowness can be caused by the report you use, not exactly by the SQL query.

    Probably not a great idea, since we have no idea what this table is, how it's loaded, or whether it's transactional or data-warehouse oriented.
    Your "solution" could well cause a myriad of performance problems outside of the OP's original query. And it will almost certainly not be the solution to their problem. As I stated in my post, and you have also mentioned, 15 hours to read 1 million rows is indicative of a serious problem; no single index is going to fix that.
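    For reference, the conventional alternative to a bitmap index here would be a plain B-tree index on the date column. A minimal sketch using the OP's table and column names (whether it actually helps depends on how selective the :low_date/:high_date range is):
    CREATE INDEX abc_create_date_ix ON abc (create_date);
    -- The original query is unchanged; the optimizer can now consider an index range scan:
    SELECT * FROM abc
     WHERE create_date BETWEEN :low_date AND :high_date;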

  • Best practice for existing target table

    We want to develop mappings for existing target tables.
    1. The tables are imported from the data dictionary.
    2. The tables are used in the mappings.
    Is it a good idea to set Deploy to No for these tables, to prevent them from being used by the Deployment Manager (default action: Create)?
    Thank you for a good practice.
    Stp
    Message was edited by:
    user444776

    Hi,
    Yes, you are right. For tables which already exist in the database schema, the deployment action should be "None", unless you have planned and made changes to the table structures.
    Cheers
    Mahesh

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that makes tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice to design a database in this situation? 
    When accessing database from a C# application, which is better to use, direct SQL commands or views? 
    a detailed description about what is best to do in such scenario would be great. 
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (I work as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now, and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate a report of the previous month for all their cars; some clients have 100+ cars, and some have few. This is when the real issue starts: they pull their data from our server over the internet while 2000 units are sending data to our server, and they keep getting read timeouts since SQL Server gives priority to the inserts and holds all SELECT commands. I solved it temporarily in the code using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that the one who wrote the C# app used hard-coded SQL statements,
    AND
    the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now talking about reports: there are summary reports, stops reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that SELECT statements don't get kicked out in favor of INSERT commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I had better use stored procedures and views rather than hard-coded SELECT statements; what difference will this make in terms of performance?
    Thanks in advance. 
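    Regarding the snapshot question above: once row-versioning is enabled at the database level, ordinary SELECT statements use it automatically under the default isolation level, so nothing has to change in the application. A sketch, assuming SQL Server 2005 or later and a hypothetical database name:
    -- Readers no longer block behind writers under READ COMMITTED
    ALTER DATABASE TrackingDb SET READ_COMMITTED_SNAPSHOT ON;
    -- Optional: also allow explicit SNAPSHOT isolation transactions
    ALTER DATABASE TrackingDb SET ALLOW_SNAPSHOT_ISOLATION ON;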

  • HOW TO CREATE LOCAL INDEX ON BIG PARTITION TABLE

    Dear All,
    I have one big table, 450 GB, stored in 9 partitions, and I have created the same partitions for the index. Now the problem is that when I am trying to create the local index, it has taken one and a half days and is still going on...
    Is there any shorter way to create the local index on this table easily?
    Database version is 11.2.0.1.0
    INDEX SCRIPT IS
    CREATE INDEX INDEX_SPACE0_IX_LOCAL ON FINANCE (END_TIME)
      INITRANS 2 MAXTRANS 255
      LOCAL (
        PARTITION INDEX_SPACE01 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE01
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE02 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE02
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE03 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE03
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE04 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE04
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE05 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE05
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE06 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE06
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE07 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE07
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE08 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE08
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT),
        PARTITION INDEX_SPACE09 LOGGING NOCOMPRESS TABLESPACE INDEX_SPACE09
          PCTFREE 5 INITRANS 2 MAXTRANS 255
          STORAGE (INITIAL 1M MINEXTENTS 1 MAXEXTENTS 2147483645 BUFFER_POOL DEFAULT)
      )
      NOPARALLEL;
    Thanks in advance......
    Thanks,
    Edited by: sherkhan on Aug 24, 2011 3:36 AM
    Edited by: sherkhan on Aug 24, 2011 3:49 AM

    Have you verified that 'n' index partition segments have been created so far? (They would appear as TEMPORARY segments only until the full index creation is completed.) Have you monitored the session statistics and waits and confirmed that it is not waiting on something horrible?
    A CREATE INDEX can well be NOLOGGING instead of LOGGING. It could also use PARALLEL, but I always recommend setting it back to NOPARALLEL immediately after the CREATE is completed.
    You can also "quickly" build an empty index and then gradually create (i.e. build) each partition:
    CREATE INDEX INDEX_SPACE0_IX_LOCAL  .........  UNUSABLE;
    ALTER INDEX INDEX_SPACE0_IX_LOCAL REBUILD PARTITION INDEX_SPACE01;
    ALTER INDEX INDEX_SPACE0_IX_LOCAL REBUILD PARTITION INDEX_SPACE02;
    ...
    Hemant K Chitale
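    As an illustration of the NOLOGGING/PARALLEL suggestion above, a rough sketch (the degree of parallelism here is arbitrary, and when LOCAL is used without an explicit partition list the index partitions simply inherit the table's partitioning and default tablespaces):
    CREATE INDEX INDEX_SPACE0_IX_LOCAL ON FINANCE (END_TIME)
      LOCAL NOLOGGING PARALLEL 8;
    -- Reset the attributes once the build has finished
    ALTER INDEX INDEX_SPACE0_IX_LOCAL NOPARALLEL;
    ALTER INDEX INDEX_SPACE0_IX_LOCAL LOGGING;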

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all cost. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business user, IT and even legal. The deployment documents always include recovery steps so that if something goes wrong or the deployment can't proceed, there is a documented procedure for how to restore the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • Best practice for archiving FGA_LOG$ table?

    Hi all,
    I am running Oracle 11g Release 1, and have been asked to look at how best to archive the FGA_LOG$ table. We are required to store the logs for 3 years. I ran the following query:
    select owner, segment_name, segment_type, bytes/1024/1024 "MB" from dba_segments
    where tablespace_name = 'AUDIT_TBS' AND rownum <=100
    AND bytes/1024/1024 > 1 order by bytes desc;
    and it shows that the FGA_LOG$ table is currently at 84861 MB!
    Is my best course of action to:
    1. Install the patch necessary to access the DBMS_AUDIT_MGMT package
    2. Move the table to a different tablespace
    3. Use the DataPump export utlity
    4. Cleanup the table with the DBMS_AUDIT_MGMT package?
    Thanks,
    David

    To remove the rows from the table itself:
    delete from fga_log$ where timestamp# < sysdate-(365*3);
    If you want to keep the data outside the database:
    expdp to a file called this_years_fga_data.dmp, zip it up and keep it wherever you like, then
    truncate fga_log$
    Repeat annually.
    * Edit, just to clarify: the timestamp# column may depend on your version.
    Edited by: deebee_eh on Apr 25, 2012 5:05 PM

  • Is OWB the Best Tool for Designing Warehouse Tables?

    My current site uses OWB for ETL but mandates the use of TOAD Data Modeller for design and build of the warehouse tables.
    I always thought that OWB could do this task just as well as TOAD, but I'm not very experienced with OWB.
    Can anyone advise?

    You can design tables with OWB and deploy them to the database, but it is better to create the database objects [tables, views, etc.] with SQL DDL scripts and import those objects into OWB.
    You can define the physical properties, keys, constraints, partitions, indexes, grants, etc. for one table in one SQL script.
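    For example, a single DDL script of the kind described above might look like this (a sketch; every name here is made up):
    CREATE TABLE sales_fact (
      sale_id    NUMBER        NOT NULL,
      sale_date  DATE          NOT NULL,
      amount     NUMBER(12,2),
      CONSTRAINT sales_fact_pk PRIMARY KEY (sale_id)
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    CREATE INDEX sales_fact_date_ix ON sales_fact (sale_date) LOCAL;
    GRANT SELECT ON sales_fact TO reporting_role;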

  • Best Practice for Running Number Table

    Dear All
    Thank you for your attention.
    I would like to generate a number for each order:
    AAAA150001
    AAAA is the prefix,
    15 is the year and 0001 is the sequence number.
    I propose the table as below:
    Prefix    | Year     | Number
    AAAA    | 15        | 1
    Using the SQL query below to get the latest number:
    SELECT CurrentNumber = Prefix + Year + RIGHT ('0000'+ CAST (Number+1 AS VARCHAR(4)), 4)
    FROM RunningNumber WHERE Prefix = 'AAAA'
    After the whole save process, then update the running number table:
    UPDATE RunningNumber SET Number = (Number +1) WHERE Prefix = 'AAAA' AND Year = '15'
    Is that a normal approach, and is it good for handling concurrent saving?
    Thanks.
    Best Regards
    mintssoul

    Dear Visakh16
    Each year the number will reset; the table will be as below:
    Prefix    | Year     | Number
    AAAA    | 15        | 8749
    AAAA    | 16        | 1
    I could only use option 1 from your reference.
    To use this approach, I must make sure that
    a) the number will not be duplicated or skipped, as there are multiple users using the system concurrently;
    b) the number will not increment when there is any error after getting the new number.
    Would using the following methods achieve a) and b)?
    1) .NET SqlTransaction.Rollback
    2) SQL
    ROLLBACK TRANSACTION
    To avoid repeating information, details of 1) and 2) are not listed here; please refer to my previous reply to Uri.
    Thanks.
    Best Regards,
    mintssoul
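    As a side note on the concurrency question: in SQL Server the increment and the read can be combined into a single atomic statement, which avoids the race between the SELECT and the UPDATE. A sketch using the table and column names from this thread (the OUTPUT clause requires SQL Server 2005 or later):
    DECLARE @next TABLE (Number INT);
    -- Increment and capture the new value in one statement
    UPDATE RunningNumber
       SET Number = Number + 1
    OUTPUT inserted.Number INTO @next
     WHERE Prefix = 'AAAA' AND [Year] = '15';
    SELECT 'AAAA' + '15' + RIGHT('0000' + CAST(Number AS VARCHAR(4)), 4) AS CurrentNumber
      FROM @next;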

  • JDeveloper 11.1.1.3: Best Practice for Checkboxes in Table

    Hi there,
    I'm having problems with checkboxes inside the table component.
    Can someone please fill me in as to what the best practice is to use checkboxes inside a table ?
    In the database, we are storing values Y and N.
    Thanks,
    Mark

    Hi Mark,
    I suppose you are talking about ADF Faces applications. If so, then I have two preferred approaches, tested and used in real practice:
    *1) First approach: Create a simple converter, define it in the faces-config.xml and set it in the <af:selectBooleanCheckbox> tags' "converter" attribute, for example:*
    package mypackage;
    import javax.faces.application.FacesMessage;
    import javax.faces.component.UIComponent;
    import javax.faces.context.FacesContext;
    import javax.faces.convert.Converter;
    import javax.faces.convert.ConverterException;
    public class BooleanYNConverter implements Converter {
      public BooleanYNConverter() {
      }
      public Object getAsObject(FacesContext facesContext, UIComponent uiComponent, String string) {
        if (string == null) return null;
        String s = string.trim();
        if (s.length() == 0) return null;
        if (s.equalsIgnoreCase("true")) return "Y";
        if (s.equalsIgnoreCase("false")) return "N";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + string + " to Y/N. It must be either true or false",
            "Cannot convert " + string + " to Y/N. It must be either true or false");
        throw new ConverterException(errorMessage);
      }
      public String getAsString(FacesContext facesContext, UIComponent uiComponent, Object object) {
        if (object == null) return "";
        if (object.equals("Y")) return "true";
        if (object.equals("N")) return "false";
        FacesMessage errorMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,
            "Cannot convert " + object + " to true/false. It must be either Y or N",
            "Cannot convert " + object + " to true/false. It must be either Y or N");
        throw new ConverterException(errorMessage);
      }
    }
    In faces-config.xml:
      <converter>
        <converter-id>BooleanYNConverter</converter-id>
        <converter-class>mypackage.BooleanYNConverter</converter-class>
      </converter>
    In the JSF page:
    <af:selectBooleanCheckbox ... converter="BooleanYNConverter"/>
    N.B. If you use this approach, the ViewObject attribute's Control Type should be set to "Default" instead of "Checkbox" (see the attribute's Control Hints section in the dialog box)!
    *2) Second approach: In the PageDef define a button binding for Y/N values and map the corresponding item in the table binding to this button binding. In this way you will remap VO attribute's Y/N value to true/false as how the checkbox component expects it. Neither converters nor additional configuration is necessary, but you have to do this on each checkbox field again:*
    <?xml version="1.0" encoding="UTF-8" ?>
    <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="11.1.1.56.60" id="TestPagePageDef" Package="view.pageDefs">
      <parameters/>
      <executables>
        <variableIterator id="variables"/>
        <iterator Binds="DeptViewRO" RangeSize="25" DataControl="AppModuleDataControl" id="DeptViewROIterator"/>
      </executables>
      <bindings>
        <tree IterBinding="DeptViewROIterator" id="DeptViewRO">
          <nodeDefinition DefName="model.DeptViewRO" Name="DeptViewRO0">
            <AttrNames>
              <Item Value="DeptID"/>
              <Item Value="DeptName"/>
              <Item Value="Flag" Binds="MyFlag"/>
            </AttrNames>
          </nodeDefinition>
        </tree>
        <button IterBinding="DeptViewROIterator" StaticList="true" id="MyFlag">
          <AttrNames>
            <Item Value="Flag"/>
          </AttrNames>
          <ValueList>
            <Item Value="Y"/>
            <Item Value="N"/>
          </ValueList>
        </button>
      </bindings>
    </pageDefinition>
    In the sample above the target VO attribute is called Flag. Have a look at the line <tt><Item Value="Flag" Binds="MyFlag"/></tt>. This line does the magic.
    Hope I've been a bit helpful.
    Dimitar
    Edited by: Dimitar Dimitrov on Nov 13, 2010 1:53 PM
    There was a little mistake: Instead of BooleanYNConverter I had written BooleanYNCheckbox in the <af:selectBooleanCheckbox> tag.

  • Best method for archiving a table?

    I want to archive data in a table, based on a date selection.
    My initial thought was:
    -- Force creation of blank table, if it does not already exist
    create table ifsapp.customer_order_kmarchive as select * from ifsapp.customer_order where 1=2
    -- Update table with latest data to purge
    insert into ifsapp.customer_order_kmarchive as select * from ifsapp.customer_order where order_date < sysdate - 360
    -- Remove data
    delete from ifsapp.customer_order where order_date < sysdate - 360
    But of course you have to specify values on the insert. Is there a simple way round, without explicitly naming each column?
    Is there any check I can do so that if the copy fails, the delete won't happen?
    Thanks

    Oscar
    If a proper date format (or the TRUNC function) is not used, I bet your archival process will be messed up at some stage.
    Consider for example the below scenario and run a test in your database:
    SQL> create table original_table (id number, order_date date);
    SQL> begin
           for i in 1..100 loop
             insert into original_table values (i, (sysdate - 360) + i/3600);
           end loop;
           commit;
         end;
         /
    Now run the following select statement (it's like inserting into the archive table).
    Keep running it few times and you will see how the TIME part in the date field affects your whole process.
    SQL> select count(*) from original_table where order_date < sysdate-360;
    You will see that the result returned will change as your clock moves even though the date is same.
    As a result, your INSERT into the ARCHIVE_TABLE might start at 9 am and select (and archive) all rows with order_date earlier than one year before 9 am that day. Your DELETE statement might begin at 9.30 am and will delete all rows from original_table with order_date earlier than one year before 9.30 am. Now you can see why you need a TO_DATE function or a TRUNC function to get consistent results.
    You can use a function-based index on TRUNC(ORDER_DATE) and your queries will still perform great!
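    A minimal sketch of that function-based index, using the OP's table from the question (the index name is made up):
    CREATE INDEX customer_order_trncdt_ix
      ON ifsapp.customer_order (TRUNC(order_date));
    -- The archive/delete predicates can then compare whole days and still use the index:
    SELECT COUNT(*)
      FROM ifsapp.customer_order
     WHERE TRUNC(order_date) < TRUNC(SYSDATE) - 360;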
