SELECT COUNT(x) on a table with many columns?

Hi all,
I have a table of physical measurement data with 850 (!!) columns and
~1 million rows.
The statement SELECT COUNT(cycle) FROM test_table is very, very slow. Why?
On a comparable table with, e.g., 10 columns, SELECT COUNT(cycle) FROM test_table is very fast. Why?
What does the number of columns have to do with the SELECT COUNT(cycle) ... statement?
create table test_table(
cycle number primary key,
stamp date,
sensor_1 number,
sensor_2 number,
-- ... sensor_3 through sensor_848 ...
sensor_849 number,
sensor_850 number);
on W2K Oracle 9i Enterprise Edition Release 9.2.0.4.0 Production
Can anybody help me?
Many Thanks
Achim

Hi Lennert, hi all,
many thanks for all the answers. I'm not an Oracle expert.
Sorry for my English.
Hi Lennert,
you are right. What must I do to get the index used in the
query? Can you give me a pointer in the right direction, please?
Many greetings
Achim
select count(*) from w4t.v_tfmc_3_blocktime;
COUNT(*) ==> Table with 3 columns (very fast)
306057
Execution plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_BLOCKTIME'
Statistics
0 recursive calls
0 db block gets
801 consistent gets
794 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
select count(*) from w4t.v_tfmc_3_value;
COUNT(*) ==> Table with 850 columns (very slow)
64000
Execution plan
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'V_TFMC_3_VALUE'
Statistics
1 recursive calls
1 db block gets
48410 consistent gets
38791 physical reads
13068 redo size
387 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
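A possible explanation: both counts are answered with a full table scan (see the plans above), and a row with 850 columns spans far more blocks than a 3-column row, so many more blocks must be read. Below is a hedged sketch of one way to avoid touching the wide rows at all; it assumes the test_table definition from above and that its statistics are stale. Since CYCLE is the primary key (hence NOT NULL), its index alone can answer the count.

-- Gather statistics so the cost-based optimizer can cost an index plan (9i syntax).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TEST_TABLE', cascade => TRUE);
END;
/
-- Nudge the optimizer toward an index fast full scan of the primary-key index.
SELECT /*+ INDEX_FFS(t) */ COUNT(*) FROM test_table t;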

Similar Messages

  • Sparse table with many columns

    Hi,
I have a table that contains around 800 columns. The table is sparse, such that many rows
contain up to 50 populated columns (the other columns are NULL).
    My questions are:
1. Can a table that contains many columns cause a performance problem? Is there an alternative way to
hold a table with many columns efficiently?
    2. Does a row that contains NULL values consume storage space?
    Thanks
    dyahav

    [NULLs Indicate Absence of Value|http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10743/schema.htm#sthref725]
    A null is the absence of a value in a column of a row. Nulls indicate missing, unknown, or inapplicable data. A null should not be used to imply any other value, such as zero. A column allows nulls unless a NOT NULL or PRIMARY KEY integrity constraint has been defined for the column, in which case no row can be inserted without a value for that column.
    Nulls are stored in the database if they fall between columns with data values. In these cases they require 1 byte to store the length of the column (zero).
    Trailing nulls in a row require no storage because a new row header signals that the remaining columns in the previous row are null. For example, if the last three columns of a table are null, no information is stored for those columns. In tables with many columns, the columns more likely to contain nulls should be defined last to conserve disk space.
    Most comparisons between nulls and other values are by definition neither true nor false, but unknown. To identify nulls in SQL, use the IS NULL predicate. Use the SQL function NVL to convert nulls to non-null values.
Nulls are not indexed, except when the cluster key column value is null or the index is a bitmap index.
    My guess for efficiently storing this information would be to take any columns that are almost always null and place them at the end of the table definition so they don't consume any space.
    HTH!
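For illustration, a hedged sketch of that column-ordering advice (the table and column names here are invented):

    -- Columns that are almost always NULL are declared last, so trailing NULLs
    -- in a row take no storage at all.
    CREATE TABLE sensor_sparse (
      id           NUMBER        PRIMARY KEY,
      recorded_at  DATE          NOT NULL,    -- always populated
      reading_a    NUMBER,                    -- usually populated
      reading_b    NUMBER,                    -- usually populated
      rare_attr_1  VARCHAR2(100),             -- almost always NULL
      rare_attr_2  VARCHAR2(100)              -- almost always NULL
    );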

  • Select count from large fact tables with bitmap indexes on them

    Hi..
I have several large fact tables with bitmap indexes on them, and when I do a select count(*) from these tables, I get a different result than when I do a select count(*), column_one from the table group by column_one. I don't have any null values in these columns. Is there a patch or a one-off that can rectify this?
    Thx

    You may have corruption in the index if the queries ...
    Select /*+ full(t) */ count(*) from my_table t
    ... and ...
    Select /*+ index_combine(t my_index) */ count(*) from my_table t;
    ... give different results.
    Look at metalink for patches, and in the meantime drop-and-recreate the indexes or make them unusable then rebuild them.
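A hedged sketch of the rebuild workaround (the index name is an assumption):

    -- Take the suspect bitmap index out of use, then rebuild it from the table data.
    ALTER INDEX sales_fact_cust_bmx UNUSABLE;
    ALTER INDEX sales_fact_cust_bmx REBUILD;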

  • No records in Azure databrowser viewing tables with many columns.

    Yesterday I encountered an issue while browsing a table created in Azure.
    I created a new database in Azure and in this database I created and populated several tables, containing one very big table.
    The big table has 239 columns.
    I succeeded in populating this table with our in-company table-data, means by a dtsx-package. No problem this far.
    When I query the table from SQL Server Management Studio, I get correct results.
However, the databrowser on the Azure site itself does not show any data for this table. That's a little disappointing given that there are more than 76,000 records in this table. Refreshing didn't help.
    When I browse smaller tables with less data-columns, I do get data in this data-browser.
    Is this a known issue or do you know a solution for this issue ?
    Kind regards,
    Fred Silven
    AEB Amsterdam
    The Netherlands.

    Hello,
Based on your description, you want to edit data of a large table in the Management Portal of SQL Database, but it does not return rows in the GUI design tab. Can you get the data when you select "TOP 200 rows"?
Since there are 239 columns and 76,000 rows in the table, the Portal may take a long time to load all the data in the GUI. Please try using a T-SQL statement to perform the select or update operation, and specify a condition in the WHERE clause to load only the needed data.
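A hedged sketch of that suggestion (table and column names are assumptions): fetch only the rows and columns you need rather than letting the portal grid load all 239 columns of all 76,000 rows.

    SELECT TOP (200) id, name, created_on
    FROM   dbo.BigTable
    WHERE  created_on >= '2013-01-01'
    ORDER BY id;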
    Regards,
Fanny Liu
TechNet Community Support

  • Advanced table with many columns

I have an advanced table with more or less 60 columns, and the table is larger than the header image. How can I align the header image with the advanced table?
    Thanks.
    Edited by: Vieira on 3-giu-2010 1.01

    bedabrata wrote:
    Hi,
I had a question: can we fix the size of the columns in a table region? I was unable to do so using the width property of the items. So I tried the code below:
    // Set width for all the columns
    OATableBean tableBean = (OATableBean)webBean.findIndexedChildRecursive("HeaderDetailsRN") ;
    OAMessageStyledTextBean ColumnStyle;
    ColumnStyle = (OAMessageStyledTextBean)tableBean.findIndexedChildRecursive("SblOrderNumber");
    CSSStyle cellUpperCase = new CSSStyle();
    cellUpperCase.setProperty("width","150px");
    if (ColumnStyle != null)
    ColumnStyle.setInlineStyle(cellUpperCase);
Now the width is getting set for the columns which have values (data), but it is not setting the width for columns which have a null value or no data.
Could any of you guide me on how to set the width for every column of a table region?
Thanks
Bedabrata
Open another topic for this. ;-)
    Edited by: Vieira on 3-giu-2010 9.27

  • MaxDB: Table with many LONG fields does not allow an INSERT: ...?

    Hi,
I have a table with many LONG fields (28). So far, everything works fine.
However, if I add another LONG field (making 29 LONG fields), I cannot insert a dataset
anymore.
    Does there exist a MaxDB parameter or anything else I can change to make inserts possible again?
    Thanks in advance
    Michael
    appendix:
    - Create and Insert command and error message
    - MaxDB version and its parameters
    Create and Insert command and error message
    CREATE TABLE "DBA"."AZ_Z_TEST02"
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"           Char (400) ASCII DEFAULT '',
         "ZTB_LONG_COMMENT"                LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_00"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_01"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_02"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_03"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_04"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_05"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_06"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_07"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_08"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_09"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_10"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_11"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_12"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_13"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_14"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_15"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_16"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_17"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_18"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_19"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_20"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_21"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_22"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_23"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_24"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_25"         LONG ASCII DEFAULT '',
         "ZTB_LONG_TEXTBLOCK_26"         LONG ASCII DEFAULT '',
     PRIMARY KEY ("ZTB_ID")
)
    The insert command
    INSERT INTO AZ_Z_TEST02 SET ztb_id = 87
    works fine. If I add the LONG field
    "ZTB_LONG_TEXTBLOCK_27"         LONG ASCII DEFAULT '',
    the following error occurs:
        Auto Commit: On, SQL Mode: Internal, Isolation Level: Committed
        General error;-7032 POS(1) SQL statement not allowed for column of data type LONG
        INSERT INTO AZ_Z_TEST02 SET ztb_id = 88
    MaxDB version and its parameters
    All db params given by
    dbmcli -d myDB -u dbm,dbm param_directgetall > maxdb_params.txt
    are
    KERNELVERSION                         KERNEL    7.5.0    BUILD 026-123-094-430
    INSTANCE_TYPE                         OLTP
    MCOD                                  NO
    RESTART_SHUTDOWN                      MANUAL
    SERVERDBFOR_SAP                     YES
    _UNICODE                              NO
    DEFAULT_CODE                          ASCII
    DATE_TIME_FORMAT                      INTERNAL
    CONTROLUSERID                         DBM
    CONTROLPASSWORD                       
    MAXLOGVOLUMES                         10
    MAXDATAVOLUMES                        11
    LOG_VOLUME_NAME_001                   LOG_001
    LOG_VOLUME_TYPE_001                   F
    LOG_VOLUME_SIZE_001                   64000
    DATA_VOLUME_NAME_0001                 DAT_0001
    DATA_VOLUME_TYPE_0001                 F
    DATA_VOLUME_SIZE_0001                 64000
    DATA_VOLUME_MODE_0001                 NORMAL
    DATA_VOLUME_GROUPS                    1
    LOG_BACKUP_TO_PIPE                    NO
    MAXBACKUPDEVS                         2
    BACKUP_BLOCK_CNT                      8
    LOG_MIRRORED                          NO
    MAXVOLUMES                            22
    MULTIO_BLOCK_CNT                    4
    DELAYLOGWRITER                      0
    LOG_IO_QUEUE                          50
    RESTARTTIME                         600
    MAXCPU                                1
    MAXUSERTASKS                          50
    TRANSRGNS                           8
    TABRGNS                             8
    OMSREGIONS                          0
    OMSRGNS                             25
    OMS_HEAP_LIMIT                        0
    OMS_HEAP_COUNT                        1
    OMS_HEAP_BLOCKSIZE                    10000
    OMS_HEAP_THRESHOLD                    100
    OMS_VERS_THRESHOLD                    2097152
    HEAP_CHECK_LEVEL                      0
    ROWRGNS                             8
    MINSERVER_DESC                      16
    MAXSERVERTASKS                        20
    _MAXTRANS                             288
    MAXLOCKS                              2880
    LOCKSUPPLY_BLOCK                    100
    DEADLOCK_DETECTION                    4
    SESSION_TIMEOUT                       900
    OMS_STREAM_TIMEOUT                    30
    REQUEST_TIMEOUT                       5000
    USEASYNC_IO                         YES
    IOPROCSPER_DEV                      1
    IOPROCSFOR_PRIO                     1
    USEIOPROCS_ONLY                     NO
    IOPROCSSWITCH                       2
    LRU_FOR_SCAN                          NO
    PAGESIZE                            8192
    PACKETSIZE                          36864
    MINREPLYSIZE                        4096
    MBLOCKDATA_SIZE                     32768
    MBLOCKQUAL_SIZE                     16384
    MBLOCKSTACK_SIZE                    16384
    MBLOCKSTRAT_SIZE                    8192
    WORKSTACKSIZE                       16384
    WORKDATASIZE                        8192
    CATCACHE_MINSIZE                    262144
    CAT_CACHE_SUPPLY                      1632
    INIT_ALLOCATORSIZE                    229376
    ALLOW_MULTIPLE_SERVERTASK_UKTS        NO
    TASKCLUSTER01                       tw;al;ut;2000sv,100bup;10ev,10gc;
    TASKCLUSTER02                       ti,100dw;30000us;
    TASKCLUSTER03                       compress
    MPRGN_QUEUE                         YES
    MPRGN_DIRTY_READ                    NO
    MPRGN_BUSY_WAIT                     NO
    MPDISP_LOOPS                        1
    MPDISP_PRIO                         NO
    XP_MP_RGN_LOOP                        0
    MP_RGN_LOOP                           0
    MPRGN_PRIO                          NO
    MAXRGN_REQUEST                        300
    PRIOBASE_U2U                        100
    PRIOBASE_IOC                        80
    PRIOBASE_RAV                        80
    PRIOBASE_REX                        40
    PRIOBASE_COM                        10
    PRIOFACTOR                          80
    DELAYCOMMIT                         NO
    SVP1_CONV_FLUSH                     NO
    MAXGARBAGECOLL                      0
    MAXTASKSTACK                        1024
    MAX_SERVERTASK_STACK                  100
    MAX_SPECIALTASK_STACK                 100
    DWIO_AREA_SIZE                      50
    DWIO_AREA_FLUSH                     50
    FBM_VOLUME_COMPRESSION                50
    FBM_VOLUME_BALANCE                    10
    FBMLOW_IO_RATE                      10
    CACHE_SIZE                            10000
    DWLRU_TAIL_FLUSH                    25
    XP_DATA_CACHE_RGNS                    0
    DATACACHE_RGNS                      8
    XP_CONVERTER_REGIONS                  0
    CONVERTER_REGIONS                     8
    XP_MAXPAGER                           0
    MAXPAGER                              11
    SEQUENCE_CACHE                        1
    IDXFILELIST_SIZE                    2048
    SERVERDESC_CACHE                    73
    SERVERCMD_CACHE                     21
    VOLUMENO_BIT_COUNT                    8
    OPTIM_MAX_MERGE                       500
    OPTIM_INV_ONLY                        YES
    OPTIM_CACHE                           NO
    OPTIM_JOIN_FETCH                      0
    JOIN_SEARCH_LEVEL                     0
    JOIN_MAXTAB_LEVEL4                    16
    JOIN_MAXTAB_LEVEL9                    5
    READAHEADBLOBS                      25
    RUNDIRECTORY                          E:\_mp\u_v_dbs\EVERW_C5
    _KERNELDIAGFILE                       knldiag
    KERNELDIAGSIZE                        800
    _EVENTFILE                            knldiag.evt
    _EVENTSIZE                            0
    _MAXEVENTTASKS                        1
    _MAXEVENTS                            100
    _KERNELTRACEFILE                      knltrace
    TRACE_PAGES_TI                        2
    TRACE_PAGES_GC                        0
    TRACE_PAGES_LW                        5
    TRACE_PAGES_PG                        3
    TRACE_PAGES_US                        10
    TRACE_PAGES_UT                        5
    TRACE_PAGES_SV                        5
    TRACE_PAGES_EV                        2
    TRACE_PAGES_BUP                       0
    KERNELTRACESIZE                       648
    EXTERNAL_DUMP_REQUEST                 NO
    AKDUMP_ALLOWED                      YES
    _KERNELDUMPFILE                       knldump
    _RTEDUMPFILE                          rtedump
    UTILITYPROTFILE                     dbm.utl
    UTILITY_PROTSIZE                      100
    BACKUPHISTFILE                      dbm.knl
    BACKUPMED_DEF                       dbm.mdf
    MAXMESSAGE_FILES                    0
    EVENTALIVE_CYCLE                    0
    _SHAREDDYNDATA                        10280
    _SHAREDDYNPOOL                        3607
    USE_MEM_ENHANCE                       NO
    MEM_ENHANCE_LIMIT                     0
    __PARAM_CHANGED___                    0
    __PARAM_VERIFIED__                    2008-05-13 13:47:17
    DIAG_HISTORY_NUM                      2
    DIAG_HISTORY_PATH                     E:\_mp\u_v_dbs\EVERW_C5\DIAGHISTORY
    DIAGSEM                             1
    SHOW_MAX_STACK_USE                    NO
    LOG_SEGMENT_SIZE                      21333
    SUPPRESS_CORE                         YES
    FORMATTING_MODE                       PARALLEL
    FORMAT_DATAVOLUME                     YES
    HIRES_TIMER_TYPE                      CPU
    LOAD_BALANCING_CHK                    0
    LOAD_BALANCING_DIF                    10
    LOAD_BALANCING_EQ                     5
    HS_STORAGE_DLL                        libhsscopy
    HS_SYNC_INTERVAL                      50
    USE_OPEN_DIRECT                       NO
    SYMBOL_DEMANGLING                     NO
    EXPAND_COM_TRACE                      NO
    OPTIMIZE_OPERATOR_JOIN_COSTFUNC       YES
    OPTIMIZE_JOIN_PARALLEL_SERVERS        0
    OPTIMIZE_JOIN_OPERATOR_SORT           YES
    OPTIMIZE_JOIN_OUTER                   YES
    JOIN_OPERATOR_IMPLEMENTATION          IMPROVED
    JOIN_TABLEBUFFER                      128
    OPTIMIZE_FETCH_REVERSE                YES
    SET_VOLUME_LOCK                       YES
    SHAREDSQL                             NO
    SHAREDSQL_EXPECTEDSTATEMENTCOUNT      1500
    SHAREDSQL_COMMANDCACHESIZE            32768
    MEMORY_ALLOCATION_LIMIT               0
    USE_SYSTEM_PAGE_CACHE                 YES
    USE_COROUTINES                        YES
    MIN_RETENTION_TIME                    60
    MAX_RETENTION_TIME                    480
    MAX_SINGLE_HASHTABLE_SIZE             512
    MAX_HASHTABLE_MEMORY                  5120
    HASHED_RESULTSET                      NO
    HASHED_RESULTSET_CACHESIZE            262144
    AUTO_RECREATE_BAD_INDEXES             NO
    LOCAL_REDO_LOG_BUFFER_SIZE            0
    FORBID_LOAD_BALANCING                 NO

    >
    Lars Breddemann wrote:
    > Hi Michael,
    >
    > this really looks like one of those "Find-the-5-errors-in-the-picture" riddles to me.
    > Really.
    >
    > Ok, first to your question: this seems to be a bug - I could reproduce it with my 7.5. Build 48.
    > Anyhow, when I use
    >
    > insert into "AZ_Z_TEST02"  values (87,'','','','','','','','','','','','','','','',''
    >                                           ,'','','','','','','','','','','','','','','','')
    >
    > it works fine.
It solves my problem. Thanks a lot. -- I can hardly believe that this is all that is needed to work around the bug. That may be the reason why I had not given it a try.
    >
> Since explicitly specifying all values for an insert is a good idea anyhow (you can see directly what values the new tuple will have), you may want to change your code to this.
    >
    > Now to the other errors:
    > - 28 Long values per row?
    > What the heck is wrong with the data design here?
> Honestly, you can store up to 2 GB of data in a BLOB/CLOB.
> Currently, your data design allows 56 GB per row.
> Moreover, 26 of those columns seem to belong together originally - why do you split them up at all?
    >
    > - The "ZTB_NAMEOFREPORT" looks like something the users see -
> still there is no unique constraint preventing you from getting 10,000 reports with the same name...
You are right, this table looks a bit strange. The story behind it: each Crystal Report in the application has a few text blocks which are the same for all the recipients (e.g. the persons a letter is created for). In principle, the text blocks could be added directly to the Crystal Report. However, as is often the case, these text blocks may change once in a while. So I put the texts of the text blocks into this "strange" db table (one row for each report, one field for each text block, the name of the report given by "ztb_nameofreport"), and the application offers a menu through which these text blocks can be changed. Of course, the fields in the table could be of type CHAR, but LONG has the advantage that I do not have to think about the length of the field, since sometimes the texts are short and sometimes they are really long.
(These texts would blow up the SQL select command of the Crystal Report very much if they were integrated into that select command. So it is done another way: the texts are read before the Crystal Report is loaded, then the texts are passed to the Crystal Report (as parameters), and finally the Crystal Report is loaded.)
    >
> - MaxDB 7.5 Build 26?? Where have you been the last few years?
    > Really - download the 7.6.03 Version [here|https://www.sdn.sap.com/irj/sdn/maxdb-downloads] from SDN and upgrade.
    > With 7.6. I was not able to reproduce your issue at all.
    The customer still has Win98 clients. MaxDB odbc driver 7.5.00.26 does not work for them. I got the hint to use odbc driver 7.3 (see [lists.mysql.com/maxdb/25667|lists.mysql.com/maxdb/25667]). Do MaxDB 7.6 and odbc driver 7.3 work together?
    All Win98 clients may be replaced by WinXP clients in the near future. Then, an upgrade may be reasonable.
    >
> - Are you really putting your data into the DBA schema? Don't do that, ever.
    > DBM/SUPERDBA (the sysdba-schemas) are reserved for the MaxDB system tables.
    > Create a user/schema for your application data and put your tables into that.
    >
    > KR Lars
In the first MaxDB version I used, schemas were not available, and I haven't changed it since. Is there an easy way to "move an existing table into a new schema"?
    Michael
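As a hedged sketch of the redesign Lars hints at (the schema, table, and column names below are invented), one row per text block removes both the 28-LONG limit and the fixed number of blocks per report, and allows a unique constraint on the report name:

    CREATE TABLE "MYAPP"."AZ_Z_REPORT"
    (
         "ZTB_ID"               Integer    NOT NULL,
         "ZTB_NAMEOFREPORT"     Char (400) ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID"),
         UNIQUE ("ZTB_NAMEOFREPORT")
    )

    CREATE TABLE "MYAPP"."AZ_Z_TEXTBLOCK"
    (
         "ZTB_ID"               Integer    NOT NULL,
         "BLOCK_NO"             Integer    NOT NULL,
         "BLOCK_TEXT"           LONG ASCII DEFAULT '',
         PRIMARY KEY ("ZTB_ID", "BLOCK_NO")
    )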

  • Select max date from a table with multiple records

    I need help writing an SQL to select max date from a table with multiple records.
    Here's the scenario. There are multiple SA_IDs repeated with various EFFDT (dates). I want to retrieve the most recent effective date so that the SA_ID is unique. Looks simple, but I can't figure this out. Please help.
SA_ID       CHAR_TYPE_CD  EFFDT      CHAR_VAL
0000651005  BASE          15-AUG-07  YES
0000651005  BASE          13-NOV-09  NO
0010973671  BASE          20-MAR-08  YES
0010973671  BASE          18-JUN-10  NO

    Hi,
    Welcome to the forum!
    Whenever you have a question, post a little sample data in a form that people can use to re-create the problem and test their ideas.
    For example:
    CREATE TABLE     table_x
    (     sa_id          NUMBER (10)
    ,     char_type     VARCHAR2 (10)
    ,     effdt          DATE
,     char_val     VARCHAR2 (10)
);
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0000651005, 'BASE',    TO_DATE ('15-AUG-2007', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0000651005, 'BASE',    TO_DATE ('13-NOV-2009', 'DD-MON-YYYY'), 'NO');
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0010973671, 'BASE',    TO_DATE ('20-MAR-2008', 'DD-MON-YYYY'), 'YES');
    INSERT INTO table_x (sa_id,  char_type, effdt,                          char_val)
         VALUES     (0010973671, 'BASE',    TO_DATE ('18-JUN-2010', 'DD-MON-YYYY'), 'NO');
COMMIT;
Also, post the results that you want from that data. I'm not certain, but I think you want these results:
     SA_ID LAST_EFFD
    651005 13-NOV-09
  10973671 18-JUN-10
That is, the latest effdt for each distinct sa_id.
    Here's how to get those results:
    SELECT    sa_id
    ,         MAX (effdt)    AS last_effdt
    FROM      table_x
    GROUP BY  sa_id
    ;
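A hedged variant: if you also need char_val from the most recent row of each sa_id, an analytic ROW_NUMBER() avoids joining back to the table (this reuses the table_x sample above):

    SELECT sa_id, effdt AS last_effdt, char_val
    FROM  (SELECT sa_id, effdt, char_val,
                  ROW_NUMBER() OVER (PARTITION BY sa_id ORDER BY effdt DESC) AS rn
           FROM   table_x)
    WHERE  rn = 1;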

  • Selection in table with Master Column

    Hi,
can anybody explain to me how selection works in a table with a MasterColumn (TreeByNestingTableColumn)?
I've created a table with several columns and the "master" one behaves differently from the others.
First of all, leaf elements of the master column are not clickable (i.e. I don't receive the "onLeadSelect" notification from the table when the user clicks in the cell of a leaf element of the column). With expandable elements I receive the loadChildren event and can react on selection.
I'm using TextView as the cell editor and will try LinkToAction; maybe it will resolve the situation, but that is more a workaround than a solution.
Second, getLeadSelection doesn't work anymore for the table - it works with indexes of top-level elements only.
However, the onLeadSelect action brings me the selected row number, which I somehow have to map to the tree.
I.e. when I get "25" as the selected element, I recursively traverse the context node to find the 25th element. This is a rather slow operation, because for every selection I have to reiterate the whole structure, and I have to hope that the visual appearance matches the context element order.
Maybe there is a simpler way to get the selected element by its index in the "flat" list?
And one other strange thing: the first click on the table sends me onLeadSelect "0", but the highlight points to the element the user clicked - not necessarily the first one.
The context node is "single" selection, initLeadSelect=true,
selection = 0..1 (with 1..1 setTreeSelection doesn't work - the method I want to use to select a node in the hierarchy on my own);
all quite similar to other tables I have, which work well. Just these problems with the table where a Master Column is used.
    Thank you!
    Best regards,
    Nick

>Valery's proposal was to perform a reverse traverse from the current element to the root and sum up the parent element indexes.
It seems to work:
In the method which sets the selection I applied
// we need to know the index of the element not in the whole context, but in the scope of the table-relevant node
int index = getIndexInContext(selElement, wdContext.currentXXXElement());
IContextElement curContext = wdContext.currentContextElement();
int firstVisibleRow = curContext.getTableFirstVisibleRow();
int visibleRows = curContext.getTableVisibleRows();
if (index < firstVisibleRow || index > firstVisibleRow + visibleRows) {
    curContext.setTableFirstVisibleRow(index);
}
and the method getIndexInContext looks like:
    private int getIndexInContext(IWDNodeElement element, IWDNodeElement root) {
        int index = element.index();
        if (element == root) {
            return index;
        }
        IWDNodeElement parent = element.node().getParentElement();
        if (parent == null) {
            // do whatever you like here for error diagnostics
            myMsgMgr.reportException("Internal Error: getIndexInContext - element is not under passed root", false);
            return 0;
        }
        // +1 - every level adds 1 (otherwise indexes of first children of the hierarchy would be 0)
        return index + getIndexInContext(parent, root) + 1;
    }
    Best regards,
    Nick

  • Many to many join table with different column names

Hi, I have a join table with different column names as foreign keys to the joining
tables...
E.g. I have a many-to-many relationship between Table A and Table B, and join table
C.
Both A and B have a column called pk,
and the join table C has columns called fk1 and fk2.
Does cmd require the same column name in the join table as in the joining table?
Are there any workarounds?
    thanks

Hi,
    No, the foreign key column names in the join table do not have to match the primary
    key names in the joined tables.
    -thorick
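For illustration, a hedged sketch in plain DDL (the names are invented) of a join table whose foreign-key columns are named differently from the primary keys they reference:

    CREATE TABLE a (pk NUMBER PRIMARY KEY);
    CREATE TABLE b (pk NUMBER PRIMARY KEY);
    CREATE TABLE c (
      fk1 NUMBER REFERENCES a (pk),   -- FK name need not be "pk"
      fk2 NUMBER REFERENCES b (pk),
      PRIMARY KEY (fk1, fk2)
    );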

  • ORA-00939 when creating XML table with Virtual Columns

Getting an error when trying to create a table with VIRTUAL COLUMNS:
    Error at Command Line:4 Column:31
    Error report:
    SQL Error: ORA-00939: too many arguments for function
    00939. 00000 - "too many arguments for function"
Without VIRTUAL COLUMNS it works fine.
    Where to start?
    Is it possible to add Virtual Columns after a table is created?
    CREATE TABLE TDS_XML OF XMLType
    XMLSCHEMA "http://xmlns.abc.com/tds/TDSSchemaGen2.xsd"
    ELEMENT "TDSTestData"
  VIRTUAL COLUMNS
  ( TESTID AS (
      XMLCast(
        XMLQuery('declare default element namespace "http://xmlns.abc.com/tds/TDSSchemaGen2.xsd"; /TDSTestData/TestID' PASSING OBJECT_VALUE RETURNING CONTENT) AS VARCHAR2(32)
      )
  ) );

SQL*Plus: Release 11.2.0.2.0 Production on Mon Apr 30 20:17:29 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE 11.2.0.2.0 Production
    TNS for 64-bit Windows: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL>

victor_shostak wrote:
> Figured, Virtual Columns work only for Binary XML.
They are only supported, currently, for Binary XML.
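A hedged sketch of the same virtual column declared on a binary-XML table (only the storage clause is changed to BINARY XML; schema registration and storage details are deliberately omitted):

    CREATE TABLE tds_xml OF XMLType
      XMLTYPE STORE AS BINARY XML
      VIRTUAL COLUMNS
      ( testid AS (XMLCast(
          XMLQuery('declare default element namespace "http://xmlns.abc.com/tds/TDSSchemaGen2.xsd"; /TDSTestData/TestID'
                   PASSING OBJECT_VALUE RETURNING CONTENT) AS VARCHAR2(32))) );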

  • Troubles when trying to transfer a table with hierarchical column to a tree

In WD for ABAP, I encounter many problems when trying to convert a table with a hierarchical column into an actual tree. In my UI, the tree is in the left pane and the detail for a node is in the right.
1. With the hierarchical table, I can bind the dataSource of the table directly to the context node in the view, which is mapped to a context node in the component controller that is a DDIC structure. But with a tree, is it possible to map the dataSource of the tree, or of a node in the tree, to a context node in the component controller?
2. With the hierarchical table, when the lead selection changes, the detail info in the right pane changes automatically. But with a tree, it seems this has to be implemented via an action by myself.
Can anyone give some suggestions to overcome these problems?

    Wei,
    <i>With tree table, the dataSource of the table is just bound to the context node ORG_UNIT</i>
Hmmm... You said that this node resides in the component controller, not in the view controller. Or does it?
Any recursive node that can be displayed with a tree table can be displayed with a tree as well. Period.
1. Create a tree and bind its data source to the same node as the table data source.
2. Create a tree node type and bind it to the same node again.
3. Create one additional calculated attribute in the aforementioned node (type boolean) that inverts/negates the table column attribute "hasChildren" into the tree node attribute "isLeaf".
    4. That's all.
    VS

  • Error inserting a row into a table with identity column using cfgrid on change

I got an error trying to insert a row into a table with an identity column using cfgrid onChange; see below.
Also, I would like to use cfstoredproc instead of cfquery, but which arguments do I need to pass and how do I use it? Usually I use a stored procedure like:
    update table (xxx,xxx,xxx)
    values (uu,uuu,uu)
         My component
    <!--- Edit a Media Type  --->
        <cffunction name="cfn_MediaType_Update" access="remote">
            <cfargument name="gridaction" type="string" required="yes">
            <cfargument name="gridrow" type="struct" required="yes">
            <cfargument name="gridchanged" type="struct" required="yes">
            <!--- Local variables --->
            <cfset var colname="">
            <cfset var value="">
            <!--- Process gridaction --->
            <cfswitch expression="#ARGUMENTS.gridaction#">
                <!--- Process updates --->
                <cfcase value="U">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                    <cfquery datasource="#application.dsn#">
                    UPDATE SP.MediaType
                    SET #colname# = '#value#'
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <!--- Process deletes --->
                <cfcase value="D">
                    <!--- Perform actual delete --->
                    <cfquery datasource="#application.dsn#">
                    update SP.MediaType
                    set Deleted=1
                    WHERE MediaTypeID = #ARGUMENTS.gridrow.MediaTypeID#
                    </cfquery>
                </cfcase>
                <cfcase value="I">
                    <!--- Get column name and value --->
                    <cfset colname=StructKeyList(ARGUMENTS.gridchanged)>
                    <cfset value=ARGUMENTS.gridchanged[colname]>
                    <!--- Perform actual update --->
                   <cfquery datasource="#application.dsn#">
                    insert into  SP.MediaType (#colname#)
                    Values ('#value#')              
                    </cfquery>
                </cfcase>
            </cfswitch>
        </cffunction>
    my table
    mediatype:
    mediatypeid primary key,identity
    mediatypename
    my code is
    <cfform method="post" name="GridExampleForm">
            <cfgrid format="html" name="grid_Tables2" pagesize="3"  selectmode="edit" width="800px" 
            delete="yes"
            insert="yes"
                  bind="cfc:sp3.testing.MediaType.cfn_MediaType_All
                                                                ({cfgridpage},{cfgridpagesize},{cfgridsortcolumn},{cfgridsortdirection})"
                  onchange="cfc:sp3.testing.MediaType.cfn_MediaType_Update({cfgridaction},
                                                {cfgridrow},
                                                {cfgridchanged})">
                <cfgridcolumn name="MediaTypeID" header="ID"  display="no"/>
                <cfgridcolumn name="MediaTypeName" header="Media Type" />
            </cfgrid>
    </cfform>
On insert I get the following error message (ajax logging error message):
    http: Error invoking xxxxxxx/MediaType.cfc : Element '' is undefined in a CFML structure referenced as part of an expression.
    {"gridaction":"I","gridrow":{"MEDIATYPEID":"","MEDIATYPENAME":"uuuuuu","CFGRIDROWINDEX":4} ,"gridchanged":{}}
    Thanks

    Is this with the Travel database or another database?
    If it's another database then make sure your columns
    allow nulls. To check this in the Server Navigator, expand
    your DataSource down to the column.
    Select the column and view the Is Nullable property
    in the Property Sheet
    If still no luck, check out a tutorial, like Performing Inserts, ...
    http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/index.jsp
    John
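A hedged note on the insert error itself: the grid sends an empty MEDIATYPEID, but with an identity key the INSERT should not supply the key column at all; the database generates it. A minimal sketch, using the table names from the post:

    -- Only the non-identity column is listed; mediatypeid is generated automatically.
    INSERT INTO SP.MediaType (MediaTypeName)
    VALUES ('uuuuuu');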

  • Table with frozen columns

In a recent project I've had the need for a Java Swing table with frozen columns.
After long research in the Swing forum at java.sun.org and on Google, I found out that:
1. Currently there is no proper open-source solution.
2. All the postings at Sun are good concepts, but far away from solutions.
3. The only (for me) acceptable commercial solution is the JCTable from Quest (formerly KGroup).
Since I want to have full control over the source code of the table, I decided to collect all the tips from this forum and write my own table.
Here is my first result:
http://jroller.com/resources/kriede/CoolTable.java
The main idea is to have two tables, one for the locked columns (= fixed columns = frozen columns) and one for the scrollable columns. With all the tips from this forum it was more or less a puzzle to make it work nicely. The next task will be to provide a JTable-like interface, i.e. to write delegate/adapter methods for the API of the two nested JTable instances.

The FixedColumnExample from the link that you provided doesn't work with JDK 1.3 or higher.
Anyway, putting a separate table for the fixed columns into the row header is the first step to solve the problem of fixed columns.
The next problem is to synchronize the scrolling behaviour of the two tables. The FixedColumnExample tries to do it by overwriting the valueChange method (maybe this was working with JDK 1.8), but with JDK 1.2 or higher it is recommended to use ChangeListeners. That's what I am doing, as also described here: http://www.chka.de/swing/components/JScrollPane-bugfix.html.
Another problem is navigation with the tab or arrow keys: the default actions for those events move the selection only within one JTable. It would be nicer if, e.g., the tab key moved the selection within one row across both the fixed and the scrollable columns.
    Those problems and some more are solved in the CoolTable sample:
    http://jroller.com/resources/kriede/CoolTable.java
    Please try it out and send comments.

  • Pivot table with variables columns

I need help pivoting a table with variable columns.
I have a pivot table:
    SELECT a.*
    FROM (SELECT codigo_aluno,nome_aluno , id_curso,dia FROM c_frequencia where dia like '201308%') PIVOT (sum(null)   FOR dia IN ('20130805' ,'20130812','20130819','20130826')) a
but I need to run the select with the values for dia coming from another table:
    SELECT a.*
    FROM (SELECT codigo_aluno,nome_aluno , id_curso,dia FROM c_frequencia where dia like '201308%') PIVOT (sum(null)   FOR dia IN (
    select dia from v_dia_mes )) a
    thank you

    The correct answer should be "Use the Pivoted Report Region Plugin".
    But, as far as I know, nobody has created/posted that type of APEX plugin.
    You may have to use a Basic Report (not an IR) so that you can use "Function returning SELECT" for your Source.
    You would need two functions:
    One that dynamically generates the Column Names
    One that dynamically generates the SELECT statement
These should be in a PL/SQL Package so that the latter can call the former to ensure that the column data matches the column names.
    i.e. -- no 'SELECT *'
    MK
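A hedged sketch of that two-function idea in a single anonymous block (11.2+ for LISTAGG; names follow the post, and sum(null) is replaced by COUNT(*) for clarity): build the IN-list from v_dia_mes first, then open the pivot query dynamically.

    DECLARE
      l_in_list VARCHAR2(4000);
      l_cur     SYS_REFCURSOR;
    BEGIN
      -- Build the quoted, comma-separated list of day values.
      SELECT LISTAGG('''' || dia || '''', ',') WITHIN GROUP (ORDER BY dia)
        INTO l_in_list
        FROM v_dia_mes;

      -- Splice the list into the PIVOT clause and open the cursor dynamically.
      OPEN l_cur FOR
        'SELECT a.* FROM (SELECT codigo_aluno, nome_aluno, id_curso, dia '
        || 'FROM c_frequencia WHERE dia LIKE ''201308%'') '
        || 'PIVOT (COUNT(*) FOR dia IN (' || l_in_list || ')) a';
      -- fetch from l_cur here, or hand it back to the caller
    END;
    /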

  • Table with 300 columns

While talking with a friend, he told me that he sometimes uses tables with 300 columns which will only be accessed once for information extraction. To me this is a bit strange, because 300 columns seems like a lot...
Is having tables with so many columns a common practice in data warehousing? If yes, can you explain in which situations this can happen and how often?
    Regards

We have tables with circa 150 columns.
You need to know that we are not talking about relational systems that are normalized. It's common practice when building a data warehouse. The number of columns in your tables will depend on the complexity and size of your warehouse, and on the number, size and complexity of your source systems.
The point is that even now, when the data warehouse is "finished", we still add/remove columns as the warehouse changes along with the source systems.
