Avoid fts

Hello,
I am on 11gR2 (11.2.0.3.0),
and I am trying to avoid a full table scan (FTS) because the query takes too long and the cost is too high.
This is the explain plan :
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 309652726
| Id  | Operation                | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT         |                  |  4232K|   314M|       | 66586   (2)| 00:13:20 |       |       |
|*  1 |  HASH JOIN               |                  |  4232K|   314M|    46M| 66586   (2)| 00:13:20 |       |       |
|   2 |   VIEW                   | index$_join$_002 |  1869K|    24M|       |  5741   (1)| 00:01:09 |       |       |
|*  3 |    HASH JOIN             |                  |       |       |       |            |          |       |       |
|   4 |     PARTITION LIST ALL   |                  |  1869K|    24M|       |   410   (1)| 00:00:05 |     1 |     4 |
|   5 |      INDEX FAST FULL SCAN| KLI_MBR_I        |  1869K|    24M|       |   410   (1)| 00:00:05 |     1 |     4 |
|   6 |     INDEX FAST FULL SCAN | KLI_PK           |  1869K|    24M|       |   318   (1)| 00:00:04 |       |       |
|   7 |   TABLE ACCESS FULL      | PARTIJA          |  4245K|   259M|       | 43238   (2)| 00:08:39 |       |       |
Predicate Information (identified by operation id):
   1 - access("KLI"."ID"="P"."KLI_ID")
   3 - access(ROWID=ROWID)
20 rows selected.
Am I right to assume that the FTS on the PARTIJA table is due to the lack of filter predicates (AND/WHERE conditions), since the query uses only the join condition "KLI"."ID" = "P"."KLI_ID"?
Also, when I try to force a B-tree index access on p.kli_id using a hint, the plan is even worse.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
Plan hash value: 2491098214
| Id  | Operation                    | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT             |                  |  4232K|   314M|       |   204K  (1)| 00:40:55 |       |       |
|   1 |  MERGE JOIN                  |                  |  4232K|   314M|       |   204K  (1)| 00:40:55 |       |       |
|   2 |   TABLE ACCESS BY INDEX ROWID| PARTIJA          |  4245K|   259M|       |   189K  (1)| 00:37:55 |       |       |
|   3 |    INDEX FULL SCAN           | PARTIJA_KLI_FK_I |  4209K|       |       |   616   (1)| 00:00:08 |       |       |
|*  4 |   SORT JOIN                  |                  |  1869K|    24M|    85M| 15008   (1)| 00:03:01 |       |       |
|   5 |    VIEW                      | index$_join$_002 |  1869K|    24M|       |  5741   (1)| 00:01:09 |       |       |
|*  6 |     HASH JOIN                |                  |       |       |       |            |          |       |       |
|   7 |      PARTITION LIST ALL      |                  |  1869K|    24M|       |   410   (1)| 00:00:05 |     1 |     4 |
|   8 |       INDEX FAST FULL SCAN   | KLI_MBR_I        |  1869K|    24M|       |   410   (1)| 00:00:05 |     1 |     4 |
|   9 |      INDEX FAST FULL SCAN    | KLI_PK           |  1869K|    24M|       |   318   (1)| 00:00:04 |       |       |
Predicate Information (identified by operation id):
   4 - access("KLI"."ID"="P"."KLI_ID")
       filter("KLI"."ID"="P"."KLI_ID")
   6 - access(ROWID=ROWID)
23 rows selected.
Here is the environment:
SQL> show parameter optimizer
NAME                                 TYPE        VALUE
optimizer_capture_sql_plan_baselines boolean     FALSE
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      11.2.0.3
optimizer_index_caching              integer     95
optimizer_index_cost_adj             integer     5
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
optimizer_use_invisible_indexes      boolean     FALSE
optimizer_use_pending_statistics     boolean     FALSE
optimizer_use_sql_plan_baselines     boolean     TRUE
SQL> show parameter db_file_multi
NAME                                 TYPE        VALUE
db_file_multiblock_read_count        integer     16
SQL> show parameter db_block_size
NAME                                 TYPE        VALUE
db_block_size                        integer     8192
SQL> show parameter cursor_sharin
NAME                                 TYPE        VALUE
cursor_sharing                       string      EXACT
SQL>
And this is the query, with the hint:
select /*+ index (p partija_kli_fk_i) */ p.par_sifpar as account_id
           ,  p.par_vposla ||'-'|| p.par_sifpar as account_ref_id
           ,  alc_accounts_pck.f_account_name (p.par_kredep, p.par_vposla) as account_name
           ,  kli.maticni_broj as customer_id
           ,  case
                 when     p.par_vazido = 0
                      and p.par_sporni = 0 then
                    'ACTIVE'
                 when     p.par_vazido = 0
                      and p.par_sporni > 0 then
                    'BLOCKED'
                 when par_vazido > 0 then
                    'INACTIVE'
                 else
                    null
              end
                 as account_status_code
           ,  alc_accounts_pck.f_account_type (p.par_kredep, p.par_vposla) as account_type
           ,  case
                 when p.par_kredep = 0 then 'D'
                 when p.par_kredep > 0 then 'C'
                 else null
              end
                 as credit_debit_code
           ,  alc_util_pck.f_currency_code as currency_code
           ,  case when  p.par_vaziod = 0 then  null else p.par_vaziod end as date_opened
           ,  case when  p.par_vazido = 0  then null  else p.par_vazido end as date_closed
           ,  alc_accounts_pck.f_account_balance (p.par_sifpar) as account_balance
           ,  p.par_posjed as branch_id
           ,  alc_accounts_pck.f_relation_mng_id (p.par_sektor
                                                ,    p.par_centar
                                                ,    p.par_sluzba
                                                ,    p.par_orgjed
                                                ,    p.par_siftim)as relationship_mgr_id
           ,  trim (p.par_ibanbr) as iban
           ,  'OTHER' as product_source_type_code
           ,  alc_util_pck.f_risk_account_flag (kli.id) as risk_flag
           ,  'N' as non_face_to_face_flag
           ,  alc_accounts_pck.f_trustee_flag (p.par_sifpar) as trustee_flag
        from partija p
             join klijenti kli
                on (kli.id = p.kli_id)
where 1=1
Edited by: Vili Dialis on Nov 22, 2012 1:10 AM
forgot to add query

Hi Vili,
full table scans are very efficient at retrieving a large percentage of rows from a table, for several reasons:
1) they can benefit from multiblock reads (as opposed to index range/unique scans, which use single-block reads)
2) they don't read rowids
3) they don't need to navigate through root/branch blocks, i.e. only data blocks are read
Contrary to popular belief, indexes only provide benefits in a limited number of cases:
1) a small percentage of rows is needed (index range/unique/skip scan)
2) only the column(s) contained in the index are needed (index fast full scan)
3) rows are needed in sorted order, and sorting would be expensive (index full scan)
So it's not surprising that the optimizer decides in favor of a full table scan in your case.
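As a rough sanity check, you can estimate what fraction of the big table the join actually returns; if it is a large fraction, the full scan is the expected winner. A sketch, using the table and column names from the posted query (adjust as needed):

```sql
-- Sketch: estimate the fraction of PARTIJA rows that survive the join.
-- Table/column names are taken from the posted query, not verified here.
select round(
         (select count(*)
            from partija p
           where exists (select null from klijenti kli where kli.id = p.kli_id))
         / nullif((select count(*) from partija), 0), 3) as join_fraction
  from dual;
```

If join_fraction is close to 1, indexed access on p.kli_id has to visit nearly every table block anyway, only with single-block reads, which is exactly what the worse hinted plan shows.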
Best regards,
Nikolay

Similar Messages

  • How to avoid the FTS on WSH_delivery_details.

    Hi,
    The query below was taking a long time to execute due to the FTS on WSH_delivery_details.
    Please let me know: are there any other tables to join with this table to avoid the FTS?
    SELECT b.NAME "Org", source_header_number "Order Number",
    source_line_number "Line Number",
    TRUNC (last_update_date) "Last Update Date"
    FROM wsh_delivery_details a, hr_operating_units b
    WHERE released_status = 'C'
    AND oe_interfaced_flag = 'N'
    AND TRUNC (last_update_date) BETWEEN TRUNC (SYSDATE - 10)
    AND TRUNC (SYSDATE - 1)
    AND a.org_id = b.organization_id
    ORDER BY source_header_number, source_line_number
    Thanks,
    Sathish

    I have not checked for column name errors; here are the SQLs:
    SELECT b.NAME, source_header_number, source_line_number, TRUNC (a.last_update_date)
    FROM wsh_delivery_details a, hr_operating_units b, wsh_delivery_assignments c, wsh_trip_stops d, wsh_delivery_legs e
    WHERE released_status = 'C'
    AND oe_interfaced_flag = 'N'
    AND TRUNC (d.actual_departure_date) BETWEEN TRUNC (SYSDATE - 10) AND TRUNC (SYSDATE - 1)
    AND a.org_id = b.organization_id
    AND a.delivery_detail_id = c.delivery_detail_id
    AND c.delivery_id = e.delivery_id
    AND e.pick_up_stop_id = d.stop_id
    ORDER BY source_header_number, source_line_number
    OR
    initial_pickup_date is what goes to update the order lines as actual_shipment_date and is indexed.
    SELECT b.NAME "Org", source_header_number "Order Number",
    source_line_number "Line Number",
    TRUNC (last_update_date) "Last Update Date"
    FROM wsh_delivery_details a, hr_operating_units b,wsh_delivery_assignments c,wsh_new_deliveries d
    WHERE released_status = 'C'
    AND oe_interfaced_flag = 'N'
    AND TRUNC (d.initial_pickup_date) BETWEEN TRUNC (SYSDATE - 10)
    AND TRUNC (SYSDATE - 1)
    AND a.org_id = b.organization_id
    and a.delivery_detail_Id=c.delivery_detail_id
    and c.delivery_id=d.delivery_id
    ORDER BY source_header_number, source_line_number
    Thanks
    Nagamohan

  • How to find the count of tables going for FTS (full table scan) in Oracle 10g

    Hi,
    How can I find the count of tables going for FTS (full table scan) in Oracle 10g?
    Regards

    Hi,
    Why do you want to 'find' those tables?
    Do you want to 'avoid FTS' on those tables?
    You provide little information here. (Perhaps you just migrated from 9i and are having problems with certain queries now?)
    FTS is sometimes the fastest way to retrieve data, and sometimes an index scan is.
    http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968
    There's no 'FTS view' available; if you want to know what happens on your DB you need, as Anand already said, to trace the sessions that worry you.
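    That said, you can approximate a count from the plans currently cached in the shared pool. A sketch (note that V$SQL_PLAN only reflects plans still cached, not historical activity, so tracing remains the thorough option):

    ```sql
    -- Sketch: tables that appear with a full scan in currently cached plans.
    -- Only plans still resident in the shared pool are visible here.
    select object_owner, object_name, count(*) as cached_plans_with_fts
      from v$sql_plan
     where operation = 'TABLE ACCESS'
       and options   = 'FULL'
     group by object_owner, object_name
     order by cached_plans_with_fts desc;
    ```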

  • SQL query to fetch approximately 9000 rows

    Hi,
    I am using the following SQL query in a java class. Using JDBC to connect to an Oracle database using JRUN app server connection pooling.
    SELECT PT.ACCOUNT_NUMBER,PAYMENT_REF_ID, TO_CHAR (PT.DATE_CREATED, 'MM/DD/YYYY'),
    PT.AMOUNT_PAID, DECODE(STATUS_VALUE,'Cancelled','Canceled',STATUS_VALUE)
    FROM EPAY_PAYMENT_TRANSACTIONS PT, EPAY_STATUS_LOOKUP SL
    WHERE
    PT.CLIENT_NAME = 'someclientname'
    AND TO_CHAR(DATE_PAYMENT_SCHEDULED,'MM/DD/YYYY') = '08/28/2006'
    AND PT.PAYMENT_STATUS = SL.STATUS_CODE
    ORDER BY PT.DATE_CREATED DESC
    EPAY_PAYMENT_TRANSACTIONS is a huge table consisting of thousands of rows. There are 4 indexes defined on this table but not on the columns used in the join condition of the query.
    EPAY_STATUS_LOOKUP is a much smaller table mainly used for lookup consisting of some 100 rows.
    The above query fetches approx 9000 records and takes a very long time to execute. Is there any way it can be optimized, or can we change some attributes on the tables involved to speed up the query?
    Thanks in advance,
    Nisha.
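    One thing worth checking in the query itself: the predicate TO_CHAR(DATE_PAYMENT_SCHEDULED,'MM/DD/YYYY') = '08/28/2006' wraps the column in a function, so a plain index on DATE_PAYMENT_SCHEDULED (if one were created) could not be used for that filter. A sketch of an equivalent sargable rewrite (the index itself is hypothetical):

    ```sql
    -- Sketch: express the date filter as a range on the raw column, so a
    -- (hypothetical) index on DATE_PAYMENT_SCHEDULED becomes usable.
    AND DATE_PAYMENT_SCHEDULED >= TO_DATE('08/28/2006', 'MM/DD/YYYY')
    AND DATE_PAYMENT_SCHEDULED <  TO_DATE('08/28/2006', 'MM/DD/YYYY') + 1
    ```

    Whether an index access then actually beats the full scan still depends on how selective that one day is, as the reply below demonstrates.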

    So the task is to avoid FTS on EPAY_PAYMENT_TRANSACTIONS? Well, maybe...
    But I believe that we have yet to hear more details about data distribution, execution plans and so on.
    Because it all depends...
    Consider simple testcase:
    SQL> select * from v$version where rownum = 1;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
    SQL> -- Create "small" table
    SQL>
    SQL>  create table build_t as
      2   select rownum x,
      3          rpad('*', 200, '*') padding
      4     from dual
      5  connect by level <= 100;
    Table created.
    SQL> -- Create "large" table - initially, add 9000 rows which satisfy join criteria
    SQL>
    SQL>  create table probe_t as
      2   select mod(rownum - 1, 100) + 1 x,
      3          dbms_random.value y,
      4          rpad('*', 200, '*') padding
      5     from dual
      6  connect by level <= 9000;
    Table created.
    SQL> -- Now, add some extra rows to large table - "thousands of rows"
    SQL>
    SQL>  insert into probe_t
      2   select rownum + 100,
      3          0,
      4          rpad('*', 200, '*') padding
      5     from dual
      6  connect by level <= 100000;
    100000 rows created.
    SQL> exec dbms_stats.gather_table_stats(user, 'BUILD_T')
    PL/SQL procedure successfully completed.
    SQL> exec dbms_stats.gather_table_stats(user, 'PROBE_T')
    PL/SQL procedure successfully completed.
    SQL> -- Ok, let's measure ...
    SQL>
    SQL> set autot traceonly
    SQL>
    SQL> select a.*, b.*
      2    from build_t a,
      3         probe_t b
      4   where a.x = b.x
      5   order by b.y;
    9000 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1387 Card=109 Bytes=45017)
       1    0   SORT (ORDER BY) (Cost=1387 Card=109 Bytes=45017)
       2    1     HASH JOIN (Cost=1386 Card=109 Bytes=45017)
       3    2       TABLE ACCESS (FULL) OF 'BUILD_T' (Cost=5 Card=100 Bytes=20400)
       4    2       TABLE ACCESS (FULL) OF 'PROBE_T' (Cost=1355 Card=109000 Bytes=22781000)
    Statistics
              0  recursive calls
              0  db block gets
           3280  consistent gets
              0  physical reads
              0  redo size
         360756  bytes sent via SQL*Net to client
           7096  bytes received via SQL*Net from client
            601  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
           9000  rows processed
    On my server, this query finished in less than a second and took 3280 LIOs to complete.
    Now, which index did you mean to build? I guess it's an index on the join column, isn't it?
    If so - let's build it and measure again:
    SQL> create index idx_probe_t on probe_t(x);
    Index created.
    SQL> select /*+ ORDERED USE_NL(a b) */
      2         a.*, b.*
      3    from build_t a,
      4         probe_t b
      5   where a.x = b.x
      6   order by b.y;
    9000 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=CHOOSE (Cost=211 Card=109 Bytes=45017)
       1    0   SORT (ORDER BY) (Cost=211 Card=109 Bytes=45017)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'PROBE_T' (Cost=3 Card=1 Bytes=209)
       3    2       NESTED LOOPS (Cost=210 Card=109 Bytes=45017)
       4    3         TABLE ACCESS (FULL) OF 'BUILD_T' (Cost=5 Card=100 Bytes=20400)
       5    3         INDEX (RANGE SCAN) OF 'IDX_PROBE_T' (NON-UNIQUE) (Cost=2 Card=1)
    Statistics
              0  recursive calls
              0  db block gets
           9124  consistent gets
             21  physical reads
              0  redo size
         360756  bytes sent via SQL*Net to client
           7096  bytes received via SQL*Net from client
            601  SQL*Net roundtrips to/from client
              1  sorts (memory)
              0  sorts (disk)
           9000  rows processed
    Look at this: 9124 consistent gets, almost three times more than in the hash-join case.
    Full table scans are not something to be avoided at all costs.
    Indexing - sometimes it is good,
    sometimes it's just useless,
    sometimes it only makes things worse...
    It all depends on underlying data distribution. That's why I asked user527580 to tell us more.
    Kind regards.

  • Does FrameMaker 12 have an LEID value for use with serialization via AAMEE?

    Does FrameMaker 12 have an LEID value for use with serialization via AAMEE?

    I wish life was that easy :-).
    You have a valid argument; So I had this kind of test going. I had 50,000 messages of
    type-1 first loaded into the queue. Then I had 100 messages of type-2 loaded on to the
    queue.
    At this point, I issued a conditional dequeue that skipped type-1 messages and only
    asked for type-2 messages. I saw the full table scan.
    Then I created another test case where 1 million type-1 messages had to be skipped.
    Still full table scans. I am not convinced that doing full table scans is the best query plan out there; the Cost Based Optimizer tends to disagree for some reason. Maybe it is right, but I have to avoid FTS when possible. As you know, not only the blocks used by the existing rows in the table get scanned, but also every block up to the high water mark.
    One might say that for a table with volatile data that comes and goes on a daily basis, a million is an extreme case. However, I am building a general-purpose infrastructure based on AQ and cannot rule out consumers who misbehave.
    Are there better ways to handle multiple types of consumers on a single physical queue? I need to be able to dequeue specific consumer types if necessary, and this has to be done in a performant manner.
    Other alternatives I can think of are:
    1. Using a multi-consumer queue such that there is one subscriber for each message type. Use multiple agent sets in a listen call to listen for messages, and then control which agents we pass into the listen call.
    Any pattern suggestions welcome.
    Thanks
    Vijay
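    Option 1 from the message above could be sketched roughly as follows. All object names and the payload attribute are illustrative assumptions, not taken from the poster's schema:

    ```sql
    -- Sketch of option 1: a multi-consumer queue with one subscriber per
    -- message type, each with a rule, so every consumer dequeues only its
    -- own message type instead of skipping others'. Names are illustrative.
    begin
      dbms_aqadm.create_queue_table(
        queue_table        => 'msg_qt',
        queue_payload_type => 'msg_type',   -- assumed payload object type
        multiple_consumers => true);
      dbms_aqadm.create_queue(
        queue_name  => 'msg_q',
        queue_table => 'msg_qt');
      dbms_aqadm.add_subscriber(
        queue_name => 'msg_q',
        subscriber => sys.aq$_agent('TYPE2_CONSUMER', null, null),
        rule       => 'tab.user_data.msg_kind = 2');  -- assumed attribute
      dbms_aqadm.start_queue(queue_name => 'msg_q');
    end;
    /
    ```

    With subscriber rules, the skip-over-millions-of-foreign-messages pattern (and the full scan it provokes) should not arise in the first place, since each subscriber only sees messages matching its rule.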

  • Executions Plans stored on CACHE

    Hello Friends and Oracle Gurus...
    I'm not much of an expert on execution plans, but I have a SELECT on Oracle 9.2.0.1.0 (Windows 2003 server) that has been taking too long for a few days now...
    When I look in Enterprise Manager, Session Details, it's doing a full table scan on one of my tables...
    But just above the execution plan is a message saying:
    EXECUTION PLAN STORED ON CACHE: MODE: ALL_ROWS
    However, my OPTIMIZER_MODE is CHOOSE and all tables in the select have up-to-date statistics...
    How did Oracle come to use this plan, and how can I eliminate it so that another one is used, one that respects CHOOSE and the statistics?
    Tks for everyone
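    If the goal is simply to force a re-parse so the optimizer re-evaluates the plan, one blunt option available on 9.2 is flushing the shared pool. A sketch; note the caveat in the comments:

    ```sql
    -- Sketch: invalidate cached plans so the next execution is re-optimized.
    -- This flushes *all* cached SQL, so avoid it on a busy production
    -- system; re-gathering statistics on the tables involved also
    -- invalidates the dependent cursors, which is usually the gentler route.
    alter system flush shared_pool;
    ```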

    Guys..
    here is the select I was talking about
    SELECT SUM(NOTAS_ITENS.QUANTIDADE),NOTAS_ITENS.CHAVE_NOTA, NOTAS_ITENS.NUMPED,SUM(NOTAS_ITENS.PESO_BRUTO),NOTAS_ITENS.CHAVE_ALMOXARIFADO,
    NOTAS_ITENS.VALOR_TOTAL, NOTAS_ITENS.CHAVE,
    PRODUTOS.CPROD, PRODUTOS.CODIGO, PRODUTOS.DESCRICAO, PRODUTOS.LOCACAO, PRODUTOS.VASILHAME,PRODUTOS.PESO_LIQUIDO,
    UNIDADES.UNIDADE,
    PERICULOSIDADE.DESCRICAO
    FROM NOTAS_ITENS, PRODUTOS, UNIDADES, PERICULOSIDADE
    WHERE (NOTAS_ITENS.CHAVE_PRODUTO = PRODUTOS.CPROD)
    AND (NOTAS_ITENS.QUANTIDADE > 0)
    AND (PRODUTOS.CHAVE_UNIDADE = UNIDADES.CHAVE)
    AND (PRODUTOS.CHAVE_PERICULOSIDADE = PERICULOSIDADE.CHAVE(+))
    AND ( CHAVE_NOTA IN
    (SELECT CHAVE FROM NOTAS WHERE CHAVE = CHAVE AND (NOTAS.ATIVA = 'SIM') AND (NOTAS.IMPRESSO_ROMANEIO = 'NAO')))
    GROUP BY PRODUTOS.CPROD, PRODUTOS.CODIGO, PRODUTOS.DESCRICAO, PRODUTOS.LOCACAO, PRODUTOS.VASILHAME, PRODUTOS.PESO_LIQUIDO,
    UNIDADES.UNIDADE,
    PERICULOSIDADE.DESCRICAO,
    NOTAS_ITENS.CHAVE_NOTA, NOTAS_ITENS.NUMPED, NOTAS_ITENS.CHAVE_ALMOXARIFADO, NOTAS_ITENS.CHAVE, NOTAS_ITENS.VALOR_TOTAL
    ORDER BY NOTAS_ITENS.CHAVE;
    and here is the execution plan for him..
    0 SELECT STATEMENT Optimizer=CHOOSE (Cost=10615 Card=66372 Bytes=11747844)
    1 0 SORT (GROUP BY) (Cost=10615 Card=66372 Bytes=11747844)
    2 1 HASH JOIN (Cost=8855 Card=66372 Bytes=11747844)
    3 2 TABLE ACCESS (FULL) OF 'UNIDADES' (Cost=2 Card=30 Bytes=240)
    4 2 HASH JOIN (OUTER) (Cost=8851 Card=66372 Bytes=11216868)
    5 4 HASH JOIN (Cost=8696 Card=66372 Bytes=9225708)
    6 5 TABLE ACCESS (FULL) OF 'PRODUTOS' (Cost=98 Card=1901 Bytes=171090)
    7 5 HASH JOIN (Cost=8584 Card=66387 Bytes=3252963)
    8 7 VIEW OF 'index$_join$_005' (Cost=347 Card=13193 Bytes=171509)
    9 8 HASH JOIN
    10 9 HASH JOIN
    11 10 INDEX (RANGE SCAN) OF 'NOTAS_ATIVA_IDX' (NON-UNIQUE) (Cost=140 Card=13193 Bytes=171509)
    12 10 INDEX (RANGE SCAN) OF 'NOTAS_IMPRESSO_ROMANEIO' (NON-UNIQUE) (Cost=140 Card=13193 Bytes=171509)
    13 9 INDEX (FAST FULL SCAN) OF 'NOTAS_PK' (UNIQUE) (Cost=140 Card=13193 Bytes=171509)
    14 7 TABLE ACCESS (FULL) OF 'NOTAS_ITENS' (Cost=8170 Card=265547 Bytes=9559692)
    15 4 TABLE ACCESS (FULL) OF 'PERICULOSIDADE' (Cost=2 Card=1 Bytes=30)
    Statistics
    0 recursive calls
    0 db block gets
    855476 consistent gets
    83917 physical reads
    0 redo size
    1064 bytes sent via SQL*Net to client
    368 bytes received via SQL*Net from client
    1 SQL*Net roundtrips to/from client
    1 sorts (memory)
    0 sorts (disk)
    0 rows processed
    Note that the cost for the HASH JOIN is high. Is there any way to get it lower?
    Note that Oracle performs an FTS on the NOTAS_ITENS table, which is quite big for us... I tried a lot of hints, but none of them avoided the FTS...
    Any tips?

  • Is there any way I can avoid 2x FTS on that query

    Hi,
    I'm on 10.2.0.3 and have a query which does 2x full table scans because of a UNION ALL. Can I use some equivalent so that it does only 1x FTS?
    select distinct 'Monthly spend' wsk,
    a.dt_podpisania_umowy,
    a.podzial_1,
    a.podzial_2,
    a.podzial_3,
    a.podzial_4,
    a.podzial_5,
    a.podzial_6,
    count(unique a.account_num) ilosc_rach,
    (sum(case when a.mob=0 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob0,
    (sum(case when a.mob=1 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob1,
    (sum(case when a.mob=2 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob2,
    (sum(case when a.mob=3 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob3,
    (sum(case when a.mob=4 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob4,
    (sum(case when a.mob=5 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob5,
    (sum(case when a.mob=6 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob6,
    (sum(case when a.mob=7 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob7,
    (sum(case when a.mob=8 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob8,
    (sum(case when a.mob=9 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob9,
    (sum(case when a.mob=10 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob10,
    (sum(case when a.mob=11 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob11,
    (sum(case when a.mob=12 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob12,
    (sum(case when a.mob=13 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob13,
    (sum(case when a.mob=14 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob14,
    (sum(case when a.mob=15 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob15,
    (sum(case when a.mob=16 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob16,
    (sum(case when a.mob=17 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob17,
    (sum(case when a.mob=18 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob18,
    (sum(case when a.mob=19 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob19,
    (sum(case when a.mob=20 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob20,
    (sum(case when a.mob=21 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob21,
    (sum(case when a.mob=22 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob22,
    (sum(case when a.mob=23 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob23,
    (sum(case when a.mob=24 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob24,
    (sum(case when a.mob=25 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob25,
    (sum(case when a.mob=26 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob26,
    (sum(case when a.mob=27 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob27,
    (sum(case when a.mob=28 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob28,
    (sum(case when a.mob=29 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob29,
    (sum(case when a.mob=30 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) mob30
    from TABLE a
    group by a.dt_podpisania_umowy,
    a.podzial_1,
    a.podzial_2,
    a.podzial_3,
    a.podzial_4,
    a.podzial_5,
    a.podzial_6
    UNION ALL -->                                       HERE is UNION ALL
    select distinct 'Monthly ending net receivables' wsk,
    a.dt_podpisania_umowy,
    a.podzial_1,
    a.podzial_2,
    a.podzial_3,
    a.podzial_4,
    a.podzial_5,
    a.podzial_6,
    count(unique a.account_num) ilosc_rach,
    (sum(case when a.mob=0 then nvl(a.sum_wpl,0) else 0 end)) mob0,
    (sum(case when a.mob=1 then nvl(a.sum_wpl,0) else 0 end)) mob1,
    (sum(case when a.mob=2 then nvl(a.sum_wpl,0) else 0 end)) mob2,
    (sum(case when a.mob=3 then nvl(a.sum_wpl,0) else 0 end)) mob3,
    (sum(case when a.mob=4 then nvl(a.sum_wpl,0) else 0 end)) mob4,
    (sum(case when a.mob=5 then nvl(a.sum_wpl,0) else 0 end)) mob5,
    (sum(case when a.mob=6 then nvl(a.sum_wpl,0) else 0 end)) mob6,
    (sum(case when a.mob=7 then nvl(a.sum_wpl,0) else 0 end)) mob7,
    (sum(case when a.mob=8 then nvl(a.sum_wpl,0) else 0 end)) mob8,
    (sum(case when a.mob=9 then nvl(a.sum_wpl,0) else 0 end)) mob9,
    (sum(case when a.mob=10 then nvl(a.sum_wpl,0) else 0 end)) mob10,
    (sum(case when a.mob=11 then nvl(a.sum_wpl,0) else 0 end)) mob11,
    (sum(case when a.mob=12 then nvl(a.sum_wpl,0) else 0 end)) mob12,
    (sum(case when a.mob=13 then nvl(a.sum_wpl,0) else 0 end)) mob13,
    (sum(case when a.mob=14 then nvl(a.sum_wpl,0) else 0 end)) mob14,
    (sum(case when a.mob=15 then nvl(a.sum_wpl,0) else 0 end)) mob15,
    (sum(case when a.mob=16 then nvl(a.sum_wpl,0) else 0 end)) mob16,
    (sum(case when a.mob=17 then nvl(a.sum_wpl,0) else 0 end)) mob17,
    (sum(case when a.mob=18 then nvl(a.sum_wpl,0) else 0 end)) mob18,
    (sum(case when a.mob=19 then nvl(a.sum_wpl,0) else 0 end)) mob19,
    (sum(case when a.mob=20 then nvl(a.sum_wpl,0) else 0 end)) mob20,
    (sum(case when a.mob=21 then nvl(a.sum_wpl,0) else 0 end)) mob21,
    (sum(case when a.mob=22 then nvl(a.sum_wpl,0) else 0 end)) mob22,
    (sum(case when a.mob=23 then nvl(a.sum_wpl,0) else 0 end)) mob23,
    (sum(case when a.mob=24 then nvl(a.sum_wpl,0) else 0 end)) mob24,
    (sum(case when a.mob=25 then nvl(a.sum_wpl,0) else 0 end)) mob25,
    (sum(case when a.mob=26 then nvl(a.sum_wpl,0) else 0 end)) mob26,
    (sum(case when a.mob=27 then nvl(a.sum_wpl,0) else 0 end)) mob27,
    (sum(case when a.mob=28 then nvl(a.sum_wpl,0) else 0 end)) mob28,
    (sum(case when a.mob=29 then nvl(a.sum_wpl,0) else 0 end)) mob29,
    (sum(case when a.mob=30 then nvl(a.sum_wpl,0) else 0 end)) mob30
    from TABLE a
    group by a.dt_podpisania_umowy,
    a.podzial_1,
    a.podzial_2,
    a.podzial_3,
    a.podzial_4,
    a.podzial_5,
    a.podzial_6

    Hello
    If it's the full scan that's expensive, then it may be worth using subquery factoring and doing all of the aggregation in one hit (you're grouping on the same columns, so there shouldn't be a problem there).
    Just to note, though: you don't need DISTINCT when you have GROUP BY.
    WITH agg
    AS 
    (   SELECT
            a.dt_podpisania_umowy,
            a.podzial_1,
            a.podzial_2,
            a.podzial_3,
            a.podzial_4,
            a.podzial_5,
            a.podzial_6,
            count(unique a.account_num) ilosc_rach,
            (sum(case when a.mob=0 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob0,
            (sum(case when a.mob=1 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob1,
            (sum(case when a.mob=2 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob2,
            (sum(case when a.mob=3 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob3,
            (sum(case when a.mob=4 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob4,
            (sum(case when a.mob=5 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob5,
            (sum(case when a.mob=6 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob6,
            (sum(case when a.mob=7 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob7,
            (sum(case when a.mob=8 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob8,
            (sum(case when a.mob=9 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end))  ms_mob9,
            (sum(case when a.mob=10 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob10,
            (sum(case when a.mob=11 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob11,
            (sum(case when a.mob=12 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob12,
            (sum(case when a.mob=13 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob13,
            (sum(case when a.mob=14 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob14,
            (sum(case when a.mob=15 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob15,
            (sum(case when a.mob=16 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob16,
            (sum(case when a.mob=17 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob17,
            (sum(case when a.mob=18 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob18,
            (sum(case when a.mob=19 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob19,
            (sum(case when a.mob=20 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob20,
            (sum(case when a.mob=21 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob21,
            (sum(case when a.mob=22 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob22,
            (sum(case when a.mob=23 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob23,
            (sum(case when a.mob=24 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob24,
            (sum(case when a.mob=25 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob25,
            (sum(case when a.mob=26 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob26,
            (sum(case when a.mob=27 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob27,
            (sum(case when a.mob=28 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob28,
            (sum(case when a.mob=29 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob29,
            (sum(case when a.mob=30 then nvl(a.NO_TRX_TOTAL,0)-nvl(a.NO_TRX_PPT,0) else 0 end)) ms_mob30,
            (sum(case when a.mob=0 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob0,
            (sum(case when a.mob=1 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob1,
            (sum(case when a.mob=2 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob2,
            (sum(case when a.mob=3 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob3,
            (sum(case when a.mob=4 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob4,
            (sum(case when a.mob=5 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob5,
            (sum(case when a.mob=6 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob6,
            (sum(case when a.mob=7 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob7,
            (sum(case when a.mob=8 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob8,
            (sum(case when a.mob=9 then nvl(a.sum_wpl,0) else 0 end))  mnr_mob9,
            (sum(case when a.mob=10 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob10,
            (sum(case when a.mob=11 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob11,
            (sum(case when a.mob=12 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob12,
            (sum(case when a.mob=13 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob13,
            (sum(case when a.mob=14 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob14,
            (sum(case when a.mob=15 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob15,
            (sum(case when a.mob=16 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob16,
            (sum(case when a.mob=17 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob17,
            (sum(case when a.mob=18 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob18,
            (sum(case when a.mob=19 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob19,
            (sum(case when a.mob=20 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob20,
            (sum(case when a.mob=21 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob21,
            (sum(case when a.mob=22 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob22,
            (sum(case when a.mob=23 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob23,
            (sum(case when a.mob=24 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob24,
            (sum(case when a.mob=25 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob25,
            (sum(case when a.mob=26 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob26,
            (sum(case when a.mob=27 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob27,
            (sum(case when a.mob=28 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob28,
            (sum(case when a.mob=29 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob29,
            (sum(case when a.mob=30 then nvl(a.sum_wpl,0) else 0 end)) mnr_mob30 
        FROM
            tablea a
        GROUP BY
            a.dt_podpisania_umowy,
            a.podzial_1,
            a.podzial_2,
            a.podzial_3,
            a.podzial_4,
            a.podzial_5,
            a.podzial_6   
    )
    SELECT
        'Monthly spend' wsk,
        dt_podpisania_umowy,
        podzial_1,
        podzial_2,
        podzial_3,
        podzial_4,
        podzial_5,
        podzial_6,
        ilosc_rach,       
        ms_mob0  mob0,
        ms_mob1  mob1,
        ms_mob2  mob2,
        ms_mob3  mob3,
        ms_mob4  mob4,
        ms_mob5  mob5,
        ms_mob6  mob6,
        ms_mob7  mob7,
        ms_mob8  mob8,
        ms_mob9  mob9,
        ms_mob10 mob10,
        ms_mob11 mob11,
        ms_mob12 mob12,
        ms_mob13 mob13,
        ms_mob14 mob14,
        ms_mob15 mob15,
        ms_mob16 mob16,
        ms_mob17 mob17,
        ms_mob18 mob18,
        ms_mob19 mob19,
        ms_mob20 mob20,
        ms_mob21 mob21,
        ms_mob22 mob22,
        ms_mob23 mob23,
        ms_mob24 mob24,
        ms_mob25 mob25,
        ms_mob26 mob26,
        ms_mob27 mob27,
        ms_mob28 mob28,
        ms_mob29 mob29,
        ms_mob30 mob30
    FROM
        agg
    UNION ALL       
    SELECT
        'Monthly ending net receivables' wsk,
        dt_podpisania_umowy,
        podzial_1,
        podzial_2,
        podzial_3,
        podzial_4,
        podzial_5,
        podzial_6,
        ilosc_rach,       
        mnr_mob0  mob0,
        mnr_mob1  mob1,
        mnr_mob2  mob2,
        mnr_mob3  mob3,
        mnr_mob4  mob4,
        mnr_mob5  mob5,
        mnr_mob6  mob6,
        mnr_mob7  mob7,
        mnr_mob8  mob8,
        mnr_mob9  mob9,
        mnr_mob10 mob10,
        mnr_mob11 mob11,
        mnr_mob12 mob12,
        mnr_mob13 mob13,
        mnr_mob14 mob14,
        mnr_mob15 mob15,
        mnr_mob16 mob16,
        mnr_mob17 mob17,
        mnr_mob18 mob18,
        mnr_mob19 mob19,
        mnr_mob20 mob20,
        mnr_mob21 mob21,
        mnr_mob22 mob22,
        mnr_mob23 mob23,
        mnr_mob24 mob24,
        mnr_mob25 mob25,
        mnr_mob26 mob26,
        mnr_mob27 mob27,
        mnr_mob28 mob28,
        mnr_mob29 mob29,
        mnr_mob30 mob30
    FROM
        agg
    HTH
    David
    Edited by: Bravid on Sep 7, 2011 1:21 PM
    Mistake with copy and paste
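The query above pivots per-month measures into columns with SUM(CASE WHEN ...) and then reshapes the two measure families into rows with UNION ALL. A minimal sketch of the conditional-aggregation part, run against SQLite from Python; the table `trx` and its columns (`grp`, `mob`, `no_trx_total`, `no_trx_ppt`) are stand-ins for illustration, not the poster's real schema:

```python
import sqlite3

# Miniature of the SUM(CASE WHEN mob = n THEN ... ELSE 0 END) pivot pattern.
# Table and column names here are hypothetical stand-ins.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trx (grp TEXT, mob INTEGER, no_trx_total INTEGER, no_trx_ppt INTEGER)")
conn.executemany(
    "INSERT INTO trx VALUES (?, ?, ?, ?)",
    [("A", 0, 10, 3), ("A", 1, 7, 2), ("A", 0, 5, None), ("B", 1, 4, 1)],
)
# COALESCE plays the role of Oracle's NVL; each CASE picks out one month bucket.
rows = conn.execute("""
    SELECT grp,
           SUM(CASE WHEN mob = 0 THEN COALESCE(no_trx_total, 0) - COALESCE(no_trx_ppt, 0) ELSE 0 END) AS ms_mob0,
           SUM(CASE WHEN mob = 1 THEN COALESCE(no_trx_total, 0) - COALESCE(no_trx_ppt, 0) ELSE 0 END) AS ms_mob1
    FROM trx
    GROUP BY grp
    ORDER BY grp
""").fetchall()
print(rows)  # -> [('A', 12, 5), ('B', 0, 3)]
```

Each output column is one `mob` bucket, so a single GROUP BY pass produces the whole wide row, which is why only one scan of the detail table is needed regardless of how many buckets there are.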

  • Issue with Full-Text (FTS) master merge on SQL Server 2012 SP2

    Hi,
    On my current project we have a really annoying issue with the master merge process that occurs after a full population of an FTS index.
    Let me describe our process: we have a continuous build that sets up the environment and runs unit tests after each commit to source control. For each run we create a new database (on the same SQL Server), fill it with test data, and run the unit tests. Sometimes the unit tests fail because the FTS index population cannot finish in time.
    We constantly see lots of sessions in the [sysprocesses] table with wait type FT_MASTER_MERGE that block our tests.
    In the FTS log we get the following error:
    The master merge started at the end of the full crawl of table or indexed view [TABLENAME] failed with HRESULT = '0x80000049'. Database ID is '45', table id is 706101556, catalog ID: 5.
    Here is an example of how we create the FTS catalog and add a table to it (as you can see, it's created with auto change tracking together with background index update):
    -- Create FTS catalog
    EXEC sp_fulltext_catalog 'WilcoFTSCatalog', 'create'
    EXEC sp_fulltext_table 'Users', 'create', 'WilcoFTSCatalog', 'PK_Users'
    EXEC sp_fulltext_column 'Users', 'UserId', 'add'
    EXEC sp_fulltext_column 'Users', 'Name', 'add'
    EXEC sp_fulltext_table 'Users', 'activate'
    EXEC sp_fulltext_table 'Users', 'start_change_tracking'
    EXEC sp_fulltext_table 'Users', 'start_background_updateindex'
    Does anybody know what the root cause of this issue is and what should be done to avoid it?
    Thank you in advance,
    Olena Smoliak


  • Avoiding null and duplicate values using model clause

    Hi,
    I am trying to use the MODEL clause to get a comma-separated list of data. The following is the scenario:
    testuser>select * from test1;
    ID VALUE
    1 Value1
    2 Value2
    3 Value3
    4 Value4
    5 Value4
    6
    7 value5
    8
    8 rows selected.
    the query I have is:
    testuser>with src as (
    2 select distinct id,value
    3 from test1
    4 ),
    5 t as (
    6 select distinct substr(value,2) value
    7 from src
    8 model
    9 ignore nav
    10 dimension by (id)
    11 measures (cast(value as varchar2(100)) value)
    12 rules
    13 (
    14 value[any] order by id =
    15 value[cv()-1] || ',' || value[cv()]
    16 )
    17 )
    18 select max(value) oneline
    19 from t;
    ONELINE
    Value1,Value2,Value3,Value4,Value4,,value5,
    What I find is that this query output has a duplicate value and a null (the ',,'), because the data contains nulls and duplicate values. Is there a way I can avoid the null and the duplicate values in the query output?
    thanks,
    Edited by: orausern on Feb 19, 2010 5:05 AM

    Hi,
    Try this code.
    with
     t as ( select substr(value,2) value, ind
            from test1
            model
            ignore nav
            dimension by (id)
            measures (cast(value as varchar2(100)) value, 0 ind)
            rules
            ( ind[any] = instr(value[cv()-1],value[cv()]),
              value[any] order by id = value[cv()-1] || CASE WHEN value[cv()] IS NOT NULL
                                                            and ind[cv()]=0 THEN ',' || value[cv()] END
            )
          )
    select max(value) oneline
    from t;
    SQL> select * from test1;
            ID VALUE
             1 Value1
             2 Value2
             3 Value3
             4 Value4
             5 Value4
             6
             7 value5
             8
    8 ligne(s) sélectionnée(s).
    SQL> with
      2   t as ( select substr(value,2)value,ind
      3          from test1
      4          model
      5          ignore nav
      6          dimension by (id)
      7          measures (cast(value as varchar2(100)) value, 0 ind)
      8          rules
      9          ( ind[any]=  instr(value[cv()-1],value[cv()]),
    10          value[any] order by id = value[cv()-1] || CASE WHEN value[cv()] IS NOT NULL
    11                                             and ind[cv()]=0     THEN ',' || value[cv()] END 
    12          )
    13        )
    14   select max(value) oneline
    15   from t;
    ONELINE
    Value1,Value2,Value3,Value4,value5
    SQL>
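In plain terms, the MODEL rule above appends each value to the running string only when it is non-null and not already present (the INSTR test). A plain-Python sketch of the same accumulation logic; note that this sketch deduplicates on exact values, whereas the INSTR test would also drop any value that happens to appear as a substring of the running string, and both are case-sensitive (so 'Value4' and 'value5' are distinct):

```python
# Build a comma-separated list while skipping NULLs (None) and values
# already seen - the same effect as the MODEL rule's NULL/INSTR guards.
def one_line(values):
    seen, parts = set(), []
    for v in values:
        if v is None or v in seen:
            continue
        seen.add(v)
        parts.append(v)
    return ",".join(parts)

data = ["Value1", "Value2", "Value3", "Value4", "Value4", None, "value5", None]
print(one_line(data))  # -> Value1,Value2,Value3,Value4,value5
```

On 11gR2 the same result is usually easier to get with LISTAGG over a DISTINCT, non-null row set; the MODEL approach mainly matters on releases without LISTAGG.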

  • How to set text resources avoiding automatic page update with c:set tag

    Hello everyone,
    I'm developing my web application with JDeveloper 11.1.2.3.0 in order to support two language locales (en and de). Following this guide, I've performed the following steps:
    Creation of two property files (Resources.properties and Resources_de.properties) with the key-value entries;
    Modification of the faces-config.xml file, adding these lines:
    <locale-config>
            <default-locale>en</default-locale>
            <supported-locale>de</supported-locale>
      </locale-config>
      <resource-bundle>
          <base-name>view.Resources</base-name>
          <var>res</var>
       </resource-bundle>
    In the project properties > Resources Bundle I've checked:
    Automatically Synchronize Bundle;
    Warn about Hard-coded Translatable Strings;
    Always Prompt for Description.
    In the same place I've set the default project bundle name to view.Resources.
    In a test JSP page I have an outputText with the value #{res['HELLOWORLD']}, where HELLOWORLD is the key in the property files. All works fine, and the correct string is shown based on the browser's locale settings.
    Anyway, when I use the "Select Text Resources..." menu on any text value and choose a value from the default property file, JDev automatically adds the following tag:
    <c:set var="customuiBundle" value="#{adfBundle['view.ViewControllerBundle']}"/>
    setting the value of the text in #{ViewControllerBundle.HELLOWORLD}.
    Is there a way to avoid this behavior? Can I manage the resources in a different way? I would like to choose a value from the list and get the res.KEY value instead of the ViewControllerBundle.KEY value.
    Thanks in advance for your help.
    Manuel

    Don't select it from the menu; go to the source and write it by hand. The problem with these tools is that they have a certain way of doing things. I don't think we should spend time customizing JDeveloper; rather, concentrate on the work in hand.

  • Can a BIG form be served up one page at a time to avoid long load time?

    Tricks I have read for optimizing the load time of large forms are not helping. Linearization causes the first page to render quickly, but you can't interact with the fields until the whole form finishes loading -- no help there. Is there a way to break the form into pages (without creating entirely separate forms) so the user can fill out a page, hit a Next Page button, fill out that page, etc.? Understood that this is an old school idea, but until Reader can download a 1+ MB form in less time than it takes an average user to get ticked off, old school might do the trick.
    Alternatively, is there a way to construct a form so you can start interacting with it without having to wait for it all to load? This question comes from the (uninformed) assumption that maybe there are forward references that can't be satisfied until all the bits have come over the wire. If that's right, can a multipage form be architected so as to avoid this problem?

    No, that technology does not exist yet. There are form-level events that need the entire document to be there before they can fire. Also, you would have to keep track of where you are, so that would mean some sort of session information for each user.

  • To avoid writing database code in the front end

    Hello,
    I am working on a database application using a 10g database as the backend and .NET as the front end. I wish to execute only Oracle stored procedures for all SELECT (to avoid hard parses and to use bind variables), DDL, and DML operations, just to avoid writing database code in the front end. Can anyone please give me small examples of:
    1. A SELECT query's output returned as a result set by a stored procedure.
    2. A DML example via a stored procedure.
    3. Any DDL example via a stored procedure.
    using scott.emp, so that I would just call the stored procedure rather than issuing SELECT, DML, and/or DDL commands from the front end. I have read the documentation, but clear-cut examples will help me grasp the concept as well.
    Thanks & Regards
    Girish Sharma

    Hi...
    --> Select example: return a result set through a ref cursor
    create or replace procedure get_emp(rc out sys_refcursor)
    is
    begin
      open rc for select * from emp;
    end;
    --> DML example
    create or replace procedure do_dml_emp(pempid in number,
                                           pempname varchar2,
                                           result out number)
    is
    begin
      insert into emp(empid, empname) values (pempid, pempname) returning empid into result;
    exception
      when others then
        result := -1;
    end;
    --> DDL example (note: Oracle's ALTER TABLE ... ADD takes no COLUMN keyword)
    create or replace procedure ddl_emp(colname varchar2,
                                        coltype varchar2,
                                        result out number)
    is
    begin
      result := -1;
      execute immediate 'alter table emp add ' || colname || ' ' || coltype;
      result := 1;
    end;
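On the front-end side, the benefit the poster is after comes from keeping the statement text constant and passing values as bind variables. A minimal sketch of that idea, using Python's sqlite3 as a stand-in for an Oracle client; `add_emp` and `get_emps` are hypothetical helpers, not Oracle or .NET APIs:

```python
import sqlite3

# sqlite3 stands in for the Oracle client here; the point being illustrated
# is bind variables, not any Oracle-specific API.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT)")

def add_emp(conn, empno, ename):
    # Placeholders (?) instead of string concatenation: the statement text
    # is identical on every call, so the engine can reuse the parsed plan,
    # and the inputs cannot inject SQL.
    conn.execute("INSERT INTO emp (empno, ename) VALUES (?, ?)", (empno, ename))
    return empno

def get_emps(conn):
    # Analogous to fetching from the ref cursor returned by get_emp above.
    return conn.execute("SELECT empno, ename FROM emp ORDER BY empno").fetchall()

add_emp(conn, 7369, "SMITH")
add_emp(conn, 7499, "ALLEN")
print(get_emps(conn))  # -> [(7369, 'SMITH'), (7499, 'ALLEN')]
```

With Oracle and ODP.NET the same shape applies: the client calls the procedure with OracleParameter bind values and reads the ref cursor as a data reader, so no SQL text lives in the front end.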

  • Query to populate an alert message to avoid the duplication of Reference no

    Hi Experts,
    SUB: Query to populate an alert message to avoid duplication of the BP reference no.
    In the A/R invoice, the BP reference (NumAtCard) is used to enter the sales order no. as the reference number. Through human error, a duplicate A/R invoice can be created for that particular sales order no.
    So I want a formatted search query on that BP reference field, so that when I type the sales order number and press Tab, it populates an alert message if the number already exists.
    Moreover, I do not want to block it through the stored procedure method; only a warning is required in my scenario.
    Kindly help me on this ground.
    Regards,
    Dwarak

    Hi there, I think this could work. Maybe you'll only need to configure the formatted search to fire on the document total, each time it changes:
    declare @numatcard varchar(15)
    declare @count int
    set @numatcard=(select $[oinv.numatcard])
    set @count= (select count(numatcard) from oinv where numatcard=@numatcard)
    if @count>1
    select 'There is a duplicated reference'
    select @numatcard
    hope it works
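The logic of that check can be modelled outside B1 to see it in isolation. A minimal Python/sqlite3 sketch; the `oinv` table is a loose stand-in for SAP B1's OINV, `is_duplicate` is a hypothetical helper, and the `> 1` threshold mirrors the reply's `@count>1` test, which assumes the current document's own reference is already counted:

```python
import sqlite3

# Stand-in for OINV: docentry is the invoice key, numatcard the BP reference.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oinv (docentry INTEGER PRIMARY KEY, numatcard TEXT)")
conn.executemany("INSERT INTO oinv (numatcard) VALUES (?)",
                 [("SO-1001",), ("SO-1002",), ("SO-1001",)])  # SO-1001 entered twice

def is_duplicate(conn, numatcard):
    # Warn when the reference appears on more than one invoice, i.e. on
    # some invoice other than the one currently being entered.
    (n,) = conn.execute("SELECT COUNT(*) FROM oinv WHERE numatcard = ?",
                        (numatcard,)).fetchone()
    return n > 1

print(is_duplicate(conn, "SO-1001"))  # -> True
print(is_duplicate(conn, "SO-1002"))  # -> False
```

If the formatted search runs before the current invoice is saved, the threshold would be `> 0` instead, since the reference being typed is not yet in OINV.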

  • Avoiding data memory duplication in subVI calls

    Hi,
    I am on a Quest to better understand some of the subtle ways of the LabVIEW memory manager. Overall, I want to (as much as practically possible) eliminate calls to the memory manager while the code is running.
    (I mainly do RT code that is expected to run "forever"; the more static and "quiet" the memory manager activity is, the faster and simpler it is to prove beyond reasonable doubt that your application does not have memory leaks, and that it will not run into memory fragmentation (out of memory) issues, etc. What I like to see as much as possible are near-static "used memory" and "largest contiguous block available" stats over days and weeks of deployed RT code.)
    In my first example (attached, "IPE vs non-IPE.png"), I compared IPE buffer allocation (black dots) for doing some of the operations in an IPE structure vs. "the old way". I see fewer dots the old way, and removed the IPE structure.
    Next I went from initializing an array of size x to values y to using a constant array (0 values) with an "array add" to get an array with the same values as my first version of the code. ("constant array.png")
    The length of the constant array is set to my "worst case" of 25 elements (in example). Since "replace sub-array" does not change the size of the input array even when the sub-array is "too long", this saves me from constantly creating small, variable sized arrays at run-time. (not sure what the run-time cpu/memory hit is if you tried to replace the last 4 elements with a sub-array that is 25 elements long...??)
    Once I arrived at this point, I found myself wondering how exactly the constant array is handled at run-time. Is it allocated the first time this sub-VI is called and then kept in memory until the main/top VI terminates, or is it unloaded every time the sub-VI finishes execution? (I -think- Macs could unload, while on Windows and Linux/Unix it remains in memory until the top level closes?) When thinking (and hopefully answering), consider that the code is compiled to an RTEXE running on a cRIO-9014 (VxWorks OS).
    In this case, I could make the constant array a control, place the constant on the diagram of the caller, and pipe the constant all the way up to the top-level VI, but this seems cumbersome, and I'm not convinced that the compiler would properly recognize that at the end of a long chain of sub-sub-sub-VIs all those "controls" are actually always tied off to a single constant. Another way would perhaps be to initialize a FG with this constant array and always "read it" out from the FG (using this cool trick on creating large arrays on a shift register with only one copy, which avoids the dual copy (one for the shift register, one from the "initialize array" function)).
    This is just one example of many cases where I'm trying to avoid creating memory manager activity by making LabVIEW assign memory space once, then only operate on that data "in-place" as much as possible. In another discussion on "in-place element" structures (here), I got the distinct sense that in-place very rarely adds any advantage as the compiler can pick up on and do "in-place" automatically in pretty much any situation. I find the NI documentation on IPE's lacking in that it doesn't really show good examples of when it works and when it doesn't. In particular, this already great article would vastly benefit from updates showing good/bad use of IPE's.
    I've read the following NI links to try and self-help (all links should open in new window/tab):
    cool trick on creating large arrays on a shift register with only one copy
    somewhat dated but good article on memory optimization
    IPE caveats and recommendations
    How Can I Optimize the Memory Use in My LabVIEW VI?
    Determining When and Where LabVIEW Allocates a New Buffer
    I do have the memory profiler tool, but it shows min/max/average allocations, it doesn't really tell me (or I don't know how to read it properly) how many times blocks are allocated or re-allocated.
    Thanks, and I hope to build on this thread with other examples, so that by the end of the thread everyone has found one or two neat things they can use to memory-optimize their own applications. Next on my list are probably handling of large strings, and lots of array math operations on various input arrays to create a result output array, etc.
    -Q
    QFang
    CLD LabVIEW 7.1 to 2013
    Attachments:
    IPE vs non-IPE.png ‏4 KB
    constant array.png ‏3 KB

    I sense a hint of frustration on your part. I'm not trying to be dense or difficult, but do realize that this is more towards the "philosophical" side than the "practical" side. Code clarity and practicalities are not necessarily the objectives here.
    Also, I have greatly appreciated all your time and input on this and the other thread!
    The answer to your first question is actually "yes, sort of". I had an RT application that developed a small memory leak (through a bug with the "get volume info.vi" from NI), but isolating and proving it took a very long time because the constant large allocations/deallocations would mask the leak. (Traces didn't work out either, since it was a very, very slow leak and the traces would bomb out before showing anything conclusive.) The leak is a few bytes, but combined with short-term memory oscillations and long-term (days) cyclical "saw-tooth" ramps in memory usage, this made it very hard to see. A more "static" memory landscape would possibly have made this simpler to narrow down and diagnose. Or maybe not.
    Also, you are missing my point entirely; this is not about "running out of memory" (and the size of 25 in my screenshot may or may not be what that array (and others) end up being). This is about having things allocated in memory ONCE and then not de-allocated or moved, and about how and when this is possible to accomplish. Also, this is a quest (meaning something I'm undertaking to improve and expand my knowledge; who said it has to be practical).
    You may find this document really interesting; it's the sort of thing you could end up being forced to code to. Albeit, I don't see how 100% compliance with this document would ever be possible in LabVIEW; that's not to say it's worthless: JPL Institutional Coding Standard for the C Programming Language (while it is directed at C, they have a lot of valid general points in there.)
    Yes, you are right that the IPE would grow the output if the length of my replacement array is not the same, and since I can't share the full VIs it's a bit of a stretch to expect people to infer from the small screen dump that the I32 wires on the right guarantee the lengths will match up in the IPE example.
    Once, on the recommendation of NI support, I actually did use the Request Deallocation primitive during the hunt for what was going on in that RT app I was debugging last year. At that particular time, the symptom was constant fragmentation of memory, until the largest contiguous block would be less than a couple of kB and the app would terminate with 60+ MB of free memory space (AKA a memory leak, though we could not yet prove that from diagnostic memory consumption statistics, due to the constant dynamic behavior of the program). I later removed them. Also, they would run counter to the goal of "allocate once, re-use forever" that I'm chasing. And again, I'm chasing this more as a way to learn than because all my code MUST run this way.
    I'm not sure I see what you mean by "copying data in and out of some temporary array". Previously (before the constant array), at every call to the containing sub-VI I used to "initialize array" with x elements of value y (where x depends to a large degree on a configuration parameter, and y is determined by the input data array). Since I would call "initialize" for a new array each time the code was called, and the size of the array could change, I looked for a way to get rid of the dynamic size and of dynamically creating the array from scratch on each call. What I came up with is perhaps not as clear as the old way I did it, but with some comments, I think it's clear enough. In the new way, the array is created as a constant, so I would think that would cause less "movement" in memory, as it should prevent the "source" array from (potentially) moving around in memory. Considering the alternative of always re-creating a new array, how is this adding an "extra" copy that creating new ones would not create?
    How would you accomplish the task of creating an array of "n" elements, all of value "y", without creating "extra" copies? Auto-indexing in a for loop is certainly a good option, but again, is that sure to reuse the same memory location on each call? Would that not, in a nit-picking way, use more CPU cycles, since you are building the array one element at a time instead of just using a primitive array add operation (which I have found to be wickedly fast) on a constant data structure?
    I cannot provide full VI's without further isolation, maybe down the road (once my weekends clear up a bit). Again, I appreciate your attention and your time!
    QFang
    CLD LabVIEW 7.1 to 2013

  • Avoid printing Header and Footer in the last page

    Hi,
    Could anyone please let me know how to avoid printing the header and footer on the last page?
    Note: I'm using an RTF template to publish the output.
    Looking forward for your valuable inputs/suggestions.
    Thanks in advance,
    Regards,
    Muru

    Hi,
    My report has FROM PO & TO PO parameters, and I need to print the footer only on the first page of each PO. I tried with a section, but now I am getting the first pages of all the POs continuously and then all the lines together.
    Please call me or send replies to [email protected]
