Need to improve performance of query

Hi,
I have two tables, TAB1 and TAB2. TAB1 has around 55,000,000 rows and TAB2 has around 150,000 rows.
I am inserting all the rows from TAB2 into TAB1 using the following query:
MERGE INTO TAB1 TRG
USING (SELECT COL1,COL2.....
FROM TAB2 ) SRC
ON (TRG.COL1 = SRC.COL1
AND TRG.COL2 = SRC.COL2
AND TRG.COL3 = SRC.COL3)
WHEN NOT MATCHED THEN
INSERT (TRG.COL1,TRG.COL2 ....)
VALUES (SRC.COL1,SRC.COL2 ....);
Following is the explain plan for this statement:
Plan
MERGE STATEMENT ALL_ROWS Cost: 14,380 Bytes: 81,668,417 Cardinality: 143,027                
     6 MERGE TAB2                     
          5 VIEW                
               4 HASH JOIN OUTER Cost: 14,380 Bytes: 32,324,102 Cardinality: 143,027      
                    1 TABLE ACCESS FULL TABLE TAB1 Cost: 304 Bytes: 14,588,754 Cardinality: 143,027      
                    3 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE TAB2 Cost: 12,999 Bytes: 69,025,716 Cardinality: 556,659 Partition #: 5      
                    2 INDEX RANGE SCAN INDEX IDX2 Cost: 1,551 Cardinality: 573,852
There are two indexes on TAB1:
1. IDX1 on col1 and col2
2. IDX2 on col3
There is one index on TAB2:
1. IDX1 on col1 and col2
TAB2 is truncated and populated with rows every 2-3 days.
This process takes too much time to complete.
Is there any way to improve the performance of this process (inserting data from TAB2 into TAB1)?
Or is there any approach other than MERGE that would perform better?
Thanks.

The trick here would be to optimise the hash outer join, and I have a couple of suggestions:
Firstly, you could do this by hash partitioning (or subpartitioning) on the join keys. Partition-wise joins would reduce the chances of the join spilling to disk.
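As a minimal sketch of what that first option might look like for your two tables (column names are taken from your post; the column types, the choice of col1 as the hash key and the partition count are assumptions you would need to validate):
create table tab1_hashed (col1 number, col2 number, col3 number /* ... other columns ... */)
partition by hash (col1) partitions 16;
create table tab2_hashed (col1 number, col2 number, col3 number /* ... other columns ... */)
partition by hash (col1) partitions 16;
-- With both tables hash partitioned identically on the leading join key, the
-- optimiser can use a full partition-wise join, processing one pair of small
-- partitions at a time instead of one large hash join over the whole tables.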
Secondly, you might be able to introduce a redundant predicate into the source query that can be used to partition prune on the target table.
The latter option works where the set of data you are merging in covers only a limited range of values on the join key, and a classic example would be maintaining an aggregate table in a data warehouse by merging in recent data only. The aggregate table might be range partitioned on date_of_transaction and cover ten years of values, but the new data might cover just the most recent week. In that case you would like to be able to place a predicate directly on the target table that says "I'm only going to find a join on this range of rows". You can't do that directly, but the optimiser can infer it if you place a redundant predicate in the USING clause.
Demonstration script:
create table src (col1 number, col2 number);

insert into src
select 2, rownum
from   dual
connect by rownum < 10000;

create table tgt (col1 number, col2 number)
partition by range (col1)
(partition p1 values less than (2),
 partition p2 values less than (3));

insert into tgt
select 1, rownum
from   dual
connect by rownum < 100000;

commit;

exec dbms_stats.gather_table_stats(user,'src');
exec dbms_stats.gather_table_stats(user,'tgt');

Table tgt only has values in partition P1, but the merge is only going to populate partition P2.
explain plan for
merge into tgt
using (select * from src) src
on (tgt.col1 = src.col1)
when matched then update
set col2 = src.col2
when not matched then insert
values (src.col1, src.col2);

select * from table(dbms_xplan.display);
explain plan succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 3718868795

| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | MERGE STATEMENT        |      |  9999 |   449K|    64   (5)| 00:00:01 |       |       |
|   1 |  MERGE                 | TGT  |       |       |            |          |       |       |
|   2 |   VIEW                 |      |       |       |            |          |       |       |
|*  3 |    HASH JOIN OUTER     |      |  9999 |   126K|    64   (5)| 00:00:01 |       |       |
|   4 |     TABLE ACCESS FULL  | SRC  |  9999 | 59994 |     6   (0)| 00:00:01 |       |       |
|   5 |     PARTITION RANGE ALL|      | 99999 |   683K|    56   (2)| 00:00:01 |     1 |     2 |
|   6 |      TABLE ACCESS FULL | TGT  | 99999 |   683K|    56   (2)| 00:00:01 |     1 |     2 |

Predicate Information (identified by operation id):

   3 - access("TGT"."COL1"(+)="SRC"."COL1")

18 rows selected.

So you see from the above that a full scan of both partitions of TGT is performed.
We introduce a redundant predicate into the USING clause:
explain plan for
merge into tgt
using (select * from src where col1 >= 2) src
on (tgt.col1 = src.col1)
when matched then update
set col2 = src.col2
when not matched then insert
values (src.col1, src.col2);

select * from table(dbms_xplan.display);
explain plan succeeded.
PLAN_TABLE_OUTPUT
Plan hash value: 2500172128

| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | MERGE STATEMENT           |      |  9999 |   449K|     9  (12)| 00:00:01 |       |       |
|   1 |  MERGE                    | TGT  |       |       |            |          |       |       |
|   2 |   VIEW                    |      |       |       |            |          |       |       |
|*  3 |    HASH JOIN RIGHT OUTER  |      |  9999 |   429K|     9  (12)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE SINGLE|      |     1 |    38 |     2   (0)| 00:00:01 |     2 |     2 |
|   5 |      TABLE ACCESS FULL    | TGT  |     1 |    38 |     2   (0)| 00:00:01 |     2 |     2 |
|*  6 |     TABLE ACCESS FULL     | SRC  |  9999 | 59994 |     6   (0)| 00:00:01 |       |       |

Predicate Information (identified by operation id):

   3 - access("TGT"."COL1"(+)="SRC"."COL1")
   6 - filter("COL1">=2)

19 rows selected.

You see from the plan that only one partition of TGT is now being scanned, and as that is empty in this case it will be a very fast action.
Even without partition pruning this could well be more efficient than the "vanilla" alternative, because it gives the optimiser more information about the size of the subset of TGT rows being merged into, and therefore the opportunity to choose a more efficient path for reading them.
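Applied to your TAB1/TAB2 merge, the same idea would look roughly like this (a sketch only; it assumes TAB1 is partitioned on COL3 and that the new rows in TAB2 all fall above some known boundary value, which you would need to confirm for your data):
MERGE INTO TAB1 TRG
USING (SELECT COL1, COL2, COL3 /* ... */
       FROM   TAB2
       WHERE  COL3 >= :boundary_value  -- redundant predicate on the assumed partition key
      ) SRC
ON (TRG.COL1 = SRC.COL1
AND TRG.COL2 = SRC.COL2
AND TRG.COL3 = SRC.COL3)
WHEN NOT MATCHED THEN
INSERT (TRG.COL1, TRG.COL2, TRG.COL3 /* ... */)
VALUES (SRC.COL1, SRC.COL2, SRC.COL3 /* ... */);
If the boundary lines up with a partition boundary on TAB1, the optimiser can prune the scan of TAB1 to the relevant partitions, exactly as in the TGT example above.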

Similar Messages

  • Need to improve query performance

I need to improve the following query, which takes more than 20 minutes to run...
All variables prefixed with v_ are set within the procedure based on whether the user chose to filter on that field or not.
In my basic search I only pass in equipment_index_number, a when_discovered_date range and s_class.
SELECT /*+ ORDERED INDEX (JCN, ad_jcn_trak_seq_wdd_ein_fil) USE_NL (JCN AM SH) */
    JCN.JOB_ID as JOB_ID,
    JCN.JOB_SEQ as job_seq,
    JCN.EQUIPMENT_INDEX_NUMBER as EQUIPMENT_INDEX_NUMBER,
    AM.JOB_DATE_OF_LAST_UPDATE as job_date_of_last_update,
    SH.UNIT_NAME as unit_name,
    JCN.FILTER_KEY as FILTER_KEY,
    AM.WHEN_DISCOVERED_CODE as when_discovered_code,
    AM.MAN_HOURS_OPENING as man_hours_opening,
    AM.TYPE_AVAILABILITY_CODE as type_availability_code,
    AM.WHEN_DISCOVERED_DATE as WHEN_DISCOVERED_DATE,
    AM.JCN as JCN,
    AM.STATUS_CODE as STATUS_CODE,
    AM.MAN_HOURS_REMAINING as MAN_HOURS_REMAINING,
    AM.PRIORITY_CODE as PRIORITY_CODE,
    AM.DATE_CLOSING as DATE_CLOSING,
    AM.EQUIPMENT_NOMENCLATURE as EQUIPMENT_NOMENCLATURE,
    AM.DEFERRAL_REASON_CODE as DEFERRAL_REASON_CODE,
    AM.DUE_DATE as DUE_DATE,
    AM.LOCATION as LOCATION,
    AM.ACTION_TAKEN_CODE as ACTION_TAKEN_CODE,
    AM.SAFETY_CODE as SAFETY_CODE,
    AM.ESWBS_OPENING as ESWBS_OPENING,
    AM.CSMP_NARRATIVE_SUMMARY as CSMP_NARRATIVE_SUMMARY,
    AM.INSURV_NUMBER as INSURV_NUMBER ,
    AM.INSURV_MAINTENANCE_INDICATOR as INSURV_MAINTENANCE_INDICATOR,
    AM.INSURV_MISSION_DEGRADING_CODE as INSURV_MISSION_DEGRADING_CODE,
    AM.PROBLEM_DESCRIPTION as PROBLEM_DESCRIPTION ,
    AM.RECOMMEND_SOLUTION as RECOMMEND_SOLUTION,
    AM.ACTUAL_SOLUTION as ACTUAL_SOLUTION ,
    AM.CLOSING_REMARKS as CLOSING_REMARKS ,
    AM.ADDITIONAL_NARRATIVE as ADDITIONAL_NARRATIVE
    FROM
    AD_TR JCN JOIN
    RW_ACT_MAINT AM ON
    JCN.JOB_SEQ = AM.JOB_SEQ
    LEFT JOIN
    SH_UNIT SH ON
    JCN.HULL_NUMBER = SH.HULL_NUMBER
    WHERE
    JCN.JOB_SEQ is NOT null and
    JCN.WHEN_DISCOVERED_DATE between p_from_date and p_to_date and
    (v_jcn = -1 or AM.TYPE_HULL||AM.WORK_AREA||AM.JOB_SEQUENCE_NUMBER IN (select jcn from temp_hull_wa_jsn)) and
    (v_apl = -1 or AM.APL IN (select apl from temp_apl_eic)) and
    (v_eic = -1 or AM.EIC IN (select eic from temp_apl_eic)) and
    (v_ein = -1 or JCN.EQUIPMENT_INDEX_NUMBER IN (select equipment_index_number from temp_ein)) and
    (v_filter = -1 or JCN.FILTER_KEY IN (select filter_key from temp_filter)) and
    (v_tmr = -1 or TRIM(AM.ACTION_TAKEN_CODE) = '8') and
    (v_s_class = -1 or AM.S_CLASS IN (select s_class from temp_s_class)) and
    (v_discovered = -1 or AM.WHEN_DISCOVERED_CODE IN (select when_discov_code from temp_when_discov_code)) and
    (v_action = -1 and v_atc = -1) or
    (v_action = 1 and v_atc = 1 and
    (AM.ACTION_TAKEN_CODE IS NULL or AM.ACTION_TAKEN_CODE IN (select action_code from temp_action_code))
    ) or
    (v_action = 1 and v_atc = -1 and
    AM.ACTION_TAKEN_CODE IN (select action_code from temp_action_code)
    ) or
    (v_action = -1 and v_atc = 1 and
    AM.ACTION_TAKEN_CODE IS NULL
    ) and
    (v_status = -1 or AM.STATUS_CODE IN(select status_code from temp_status_code)) and
    (v_priority = -1 or AM.PRIORITY_CODE IN(select priority_code from temp_priority_code))and
    (v_cause = -1 or AM.CAUSE_CODE IN(select cause_code from temp_cause_code))and
    (v_deferral = -1 or AM.DEFERRAL_REASON_CODE IN(select deferral_reason_code from temp_deferral_code))and
    (v_availability = -1 or AM.TYPE_AVAILABILITY_CODE IN(select type_availability_code from temp_type_avail_code)) and
    (v_narrative = -1 or UPPER(AM.CSMP_NARRATIVE_SUMMARY) LIKE v_upper_narrative or
    UPPER(AM.PROBLEM_DESCRIPTION) LIKE v_upper_narrative or UPPER(AM.RECOMMEND_SOLUTION) LIKE v_upper_narrative or
    UPPER(AM.ACTUAL_SOLUTION) LIKE v_upper_narrative or UPPER(AM.CLOSING_REMARKS) LIKE v_upper_narrative or
    UPPER(AM.ADDITIONAL_NARRATIVE) LIKE v_upper_narrative or UPPER(AM.SLR_NARRATIVE) LIKE v_upper_narrative) and
    (v_niin = -1 or EXISTS
    (select 1 from rw_issues pi
    where pi.job_seq=am.job_seq and
    pi.niin in (select niin from temp_niin))
    or EXISTS
    (select 1 from rw_demands pd
    where pd.job_seq=am.job_seq and
pd.niin in (select niin from temp_niin)))
    ORDER BY JCN.H_NUMBER, JCN.WORK_AREA, JCN.JSN, JCN.JCN_SUFFIX;
The AD_TR table has 5 million rows, RW_ACT_MAINT has 15 million rows and SH_UNIT has only 1,000 rows.
I have an index on the AD_TR table on job_seq, when_discovered_date, equipment_index_number and filter_key, which are the most frequent search selections that users make...
Any suggestions?

    SQL> show parameter optimizer
NAME                                 TYPE        VALUE
optimizer_dynamic_sampling           integer     2
optimizer_features_enable            string      10.2.0.4
optimizer_index_caching              integer     0
optimizer_index_cost_adj             integer     100
optimizer_mode                       string      ALL_ROWS
optimizer_secure_view_merging        boolean     TRUE
SQL>
SQL> show parameter db_file_multi
NAME                                 TYPE        VALUE
db_file_multiblock_read_count        integer     16
SQL>
SQL> show parameter db_block_size
NAME                                 TYPE        VALUE
db_block_size                        integer     8192
SQL>
SQL> show parameter cursor_sharing
NAME                                 TYPE        VALUE
cursor_sharing                       string      EXACT
    SQL>
    SQL> column sname format a20
    SQL> column pname format a20
    SQL> column pva12 format a20
    SQL>
    SQL> select  sname
      2            , pname
      3            , pval1
      4            , pval2
      5  from
      6          sys.aux_stats$;
SNAME                PNAME                     PVAL1 PVAL2
SYSSTATS_INFO        STATUS                          COMPLETED
SYSSTATS_INFO        DSTART                          12-05-2007 14:40
SYSSTATS_INFO        DSTOP                           12-05-2007 14:40
SYSSTATS_INFO        FLAGS                         1
SYSSTATS_MAIN        CPUSPEEDNW           1227.03273
SYSSTATS_MAIN        IOSEEKTIM                    10
SYSSTATS_MAIN        IOTFRSPEED                 4096
SYSSTATS_MAIN        SREADTIM
SYSSTATS_MAIN        MREADTIM
SYSSTATS_MAIN        CPUSPEED
SYSSTATS_MAIN        MBRC
SNAME                PNAME                     PVAL1 PVAL2
SYSSTATS_MAIN        MAXTHR
SYSSTATS_MAIN        SLAVETHR
    13 rows selected.
    SQL>
    SQL> explain plan for
    Execution Plan
Plan hash value: 2373934626

| Id  | Operation                             | Name                       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT                      |                            |   113 | 13221 |   319   (1)| 00:00:04 |       |       |
|   1 |  SORT ORDER BY                        |                            |   113 | 13221 |   319   (1)| 00:00:04 |       |       |
|*  2 |   HASH JOIN OUTER

  • Performance improvement for select query

    Hi all,
I need to improve the performance of the select query below, as it is taking a long time.
    SELECT vbeln pdstk
             FROM vbuk INTO TABLE it_vbuk1 FOR ALL ENTRIES IN it_likp
          WHERE vbeln = it_likp-vbeln       AND
                wbstk = 'C'  AND "pdstk = ' ' AND
                vbtyp IN gr_delivery AND
                ( fkstk = 'A' OR fkstk = 'B' ) OR
                ( fkivk = 'A' OR fkivk = 'B' ).
    Regards,
    Kumar

    Hi,
        Check if it_likp is sorted on vbeln.
    SELECT vbeln pdstk
    FROM vbuk INTO TABLE it_vbuk1 FOR ALL ENTRIES IN it_likp
    WHERE vbeln = it_likp-vbeln AND
    wbstk = 'C' AND
    vbtyp IN gr_delivery AND
    ( ( fkstk = 'A' OR fkstk = 'B' ) OR      <-- check this condition , if ( ) is needed ...
      ( fkivk = 'A' OR fkivk = 'B' ) ) .
    Regards,
    Srini.

  • Improving performance of query with View

    Hi ,
I'm working on a stored procedure where certain records have to be eliminated; unfortunately, the tables involved in this exception query are in a different database, which may lead to a performance issue. Is there any way in SQL Server to store this query
in a view, store its execution plan, and make it work like a stored procedure? I believe it's a kind of crazy thought, but is there any better way to improve the performance of a query that accesses tables across databases?
    Thanks,
    Vishal.

    Do not try to solve problems that you have not yet confirmed to exist.  There is no general reason why a query (regardless of whether it involves a view) that refers to a table in a different database (NB - DATABASE not INSTANCE) will perform poorly. 
As a suggestion, write a working query using a duplicate of the table in the current database. Once it is working, then worry about performance. Once that is working as efficiently as it can, change the query to use the "remote" table rather
than the duplicate. Then determine if you have an issue. If you cannot get the level of performance you desire with a local table, then you most likely have a much larger issue to address. In that case, perhaps you need to change your perspective
and approach to accomplishing your goal.
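A minimal sketch of that progression (the object and column names LocalCopy, OtherDb, dbo.ExceptionTable, record_id and reason_code are all hypothetical):
-- Step 1: develop and tune the query against a local duplicate of the remote table.
SELECT e.record_id, e.reason_code
FROM   dbo.LocalCopy AS e
WHERE  e.reason_code = 'EXCLUDE';

-- Step 2: once it performs acceptably, switch to three-part naming to reference
-- the table in the other database on the same instance, then measure again.
SELECT e.record_id, e.reason_code
FROM   OtherDb.dbo.ExceptionTable AS e
WHERE  e.reason_code = 'EXCLUDE';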

In need of a SCCM 2012 query that shows last time software was used or opened

    Hello
I am in need of an SCCM 2012 query that shows PCs that have Visio, Adobe Professional and Visual Studio, and the last time each was used or opened. I have the query below, which gives me the PC name and the product. Any assistance will be very helpful.
select distinct SMS_R_System.NetbiosName, SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "%adobe acrobat%pro%"

select distinct SMS_R_System.NetbiosName, SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like "%visio%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%viewer%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%service pack%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%security update%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%hydra%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%update%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%MUI%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%amd%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%microsoft visio%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%vision%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%add-in%"

select distinct SMS_R_System.NetbiosName, SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName
from SMS_R_System
inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName = "Microsoft Visual studio 2012 devenv"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%hotfix%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%security%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%update%"
and SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName not like "%service%"

Did you create a software metering rule for each software title? If not, you need to do that first, and it will take over a week before you see results.
    Also keep in mind that your query will only find x86 software titles.
    http://www.enhansoft.com/

  • Need a record from second query which is not a part of main query.

I have this query, which leads me to two rows of data:
    select papf.employee_number E_CODE
    ,to_char(paaf1.effective_start_date,'DD-MON-RRRR') EFFECTIVE_START_DATE
    ,DECODE(to_char(paaf1.effective_end_date,'DD-MON-RRRR'),'31-DEC-4712',NULL,to_char(paaf1.effective_end_date,'DD-MON-RRRR')) EFFECTIVE_END_DATE
    ,TRIM(SUBSTR(PAAF1.ASS_ATTRIBUTE21,INSTR(PAAF1.ASS_ATTRIBUTE21,'-')+1)) PREVIOUS_CO
    from apps.per_all_assignments_f paaf
    ,apps.per_all_people_f papf
    ,apps.per_grades pg
    ,apps.per_jobs pj
    ,apps.per_person_types ppt
    ,apps.per_person_type_usages_f pptuf
    ,apps.per_all_assignments_f paaf1
    where 1=1
    and papf.person_id = paaf.person_id
    and pptuf.person_id = papf.person_id
    and pptuf.person_type_id = ppt.person_type_id
    and ppt.user_person_type = 'Employee'
    and papf.current_employee_flag ='Y'
    and paaf.primary_flag = 'Y'
    and paaf1.primary_flag = 'Y'
    and paaf1.grade_id = pg.grade_id
    and paaf1.job_id = pj.job_id
    and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
    and trunc(sysdate) between pptuf.effective_start_date and pptuf.effective_end_date
    and papf.person_id = paaf1.person_id
    and (TRIM(UPPER(paaf1.ass_attribute24)) <> TRIM(UPPER(paaf.ass_attribute24))
    OR TRIM(UPPER(paaf1.ass_attribute21)) <> TRIM(UPPER(paaf.ass_attribute21))
    OR TRIM(UPPER(paaf1.ass_attribute22)) <> TRIM(UPPER(paaf.ass_attribute22))
    OR TRIM(UPPER(paaf1.ass_attribute25)) <> TRIM(UPPER(paaf.ass_attribute25))
    OR TRIM(UPPER(paaf1.ass_attribute23)) <> TRIM(UPPER(paaf.ass_attribute23))
    OR paaf1.grade_id <> paaf.grade_id)
    and paaf1.effective_end_date = paaf.effective_start_date - 1
    and papf.employee_number in ('10620')
    and paaf1.effective_start_date >= '01-JAN-1950'
    ---------------------------OUT PUT-----------------------------
    E_CODE     EFFECTIVE_START_DATE     EFFECTIVE_END_DATE     PREVIOUS_CO
    Row1 10620     17-SEP-2009     30-NOV-2009     CORPORATE
    Row2 10620     19-NOV-2007     31-JAN-2008     CORPORATE
The problem is that the entire output of the query is perfectly fine, but in the second row, in the effective_start_date column, instead of 19-NOV-2007 I need a value from another query. There must not be any change in the rest of the columns' data, including the first row.
i.e.:
select ORIGINAL_DATE_OF_HIRE from per_all_people_f
    where employee_number = '10620'
    and rownum < 2
    ---------------------------OUT PUT----------------------------
    15-MAY-2006
Is there any approach to get this?
    Thanks in advance
    Bachan.
    Edited by: Bachan on Sep 20, 2010 8:17 PM

Maybe a union for your second row:
    select E_CODE,
           EFFECTIVE_START_DATE,
           EFFECTIVE_END_DATE,
           PREVIOUS_CO
      from (select rownum rn,
                   papf.employee_number E_CODE
                   ,to_char(paaf1.effective_start_date,'DD-MON-RRRR') EFFECTIVE_START_DATE
                   ,DECODE(to_char(paaf1.effective_end_date,'DD-MON-RRRR'),'31-DEC-4712',NULL,to_char(paaf1.effective_end_date,'DD-MON-RRRR')) EFFECTIVE_END_DATE
                ,TRIM(SUBSTR(PAAF1.ASS_ATTRIBUTE21,INSTR(PAAF1.ASS_ATTRIBUTE21,'-')+1)) PREVIOUS_CO
              from apps.per_all_assignments_f paaf
                   ,apps.per_all_people_f papf
                   ,apps.per_grades pg
                   ,apps.per_jobs pj
                   ,apps.per_person_types ppt
                   ,apps.per_person_type_usages_f pptuf
                   ,apps.per_all_assignments_f paaf1
             where 1=1
               and papf.person_id = paaf.person_id
               and pptuf.person_id = papf.person_id
               and pptuf.person_type_id = ppt.person_type_id
               and ppt.user_person_type = 'Employee'
               and papf.current_employee_flag ='Y'
               and paaf.primary_flag = 'Y'
               and paaf1.primary_flag = 'Y'
               and paaf1.grade_id = pg.grade_id
               and paaf1.job_id = pj.job_id
               and trunc(sysdate) between papf.effective_start_date and papf.effective_end_date
               and trunc(sysdate) between pptuf.effective_start_date and pptuf.effective_end_date
               and papf.person_id = paaf1.person_id
           and (TRIM(UPPER(paaf1.ass_attribute24)) <> TRIM(UPPER(paaf.ass_attribute24))
           OR TRIM(UPPER(paaf1.ass_attribute21)) <> TRIM(UPPER(paaf.ass_attribute21))
           OR TRIM(UPPER(paaf1.ass_attribute22)) <> TRIM(UPPER(paaf.ass_attribute22))
           OR TRIM(UPPER(paaf1.ass_attribute25)) <> TRIM(UPPER(paaf.ass_attribute25))
           OR TRIM(UPPER(paaf1.ass_attribute23)) <> TRIM(UPPER(paaf.ass_attribute23))
           OR paaf1.grade_id <> paaf.grade_id)
               and paaf1.effective_end_date = paaf.effective_start_date - 1
               and papf.employee_number in ('10620')
           and paaf1.effective_start_date >= '01-JAN-1950')
      where rn = 1
    union all
    select employee_number E_CODE,
           ORIGINAL_DATE_OF_HIRE,
           EFFECTIVE_END_DATE,
       TRIM(SUBSTR(PAAF1.ASS_ATTRIBUTE21,INSTR(PAAF1.ASS_ATTRIBUTE21,'-')+1)) PREVIOUS_CO
      from per_all_people_f
    where employee_number = '10620'
   and rownum < 2;

Note: untested.

  • How to improve performance of query

    Hi all,
How can I improve the performance of a query?
Please send to:
[email protected]
Thanks in advance,
    bhaskar

Hi,
Go through the following links on performance:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    http://www.asug.com/client_files/Calendar/Upload/ASUG%205-mar-2004%20BW%20Performance%20PDF.pdf
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2

  • Any room for improvement for this query? Explain Plan attached.

Is there any room for improvement for this query? Table stats are up to date. Any suggestions (query rewrite, addition of indexes, etc.)?
select sum(case
             when (cd.actl_qty - cd.total_alloc_qty - lsd.Q < 0) then
              0
             else
              cd.actl_qty - cd.total_alloc_qty - lsd.Q
           end)
      from (select sum(reqd_qty) as Q, ITEM_ID as ITEM
              from SHIP_DTL SD
             where exists (select 1
                      from CONF_dtl
                     where CONF_nbr = '1'
                       and ITEM_id = SD.ITEM_id)
             group by ITEM_id) lsd,
           CONF_dtl cd
    where lsd.ITEM = cd.ITEM_id
   and cd.CONF_nbr = '1';

Total number of rows in the tables involved:
    select count(*) from CONF_DTL;
      COUNT(*)
       1785889
    select count(*) from shp_dtl;
      COUNT(*)
        286675
      Explain Plan
    PLAN_TABLE_OUTPUT
    Plan hash value: 2325658044
    | Id  | Operation                           | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                    |                    |     1 |    39 |     4  (25)| 00:00:01 |
    |   1 |  SORT AGGREGATE                     |                    |     1 |    39 |            |          |
    |   2 |   VIEW                              |                    |     1 |    39 |     4  (25)| 00:00:01 |
    |   3 |    HASH GROUP BY                    |                    |     1 |   117 |     4  (25)| 00:00:01 |
    |   4 |     TABLE ACCESS BY INDEX ROWID     | SHIP_DTL           |     1 |    15 |     1   (0)| 00:00:01
    |   5 |      NESTED LOOPS                   |                    |     1 |   117 |     3   (0)| 00:00:01 |
    |   6 |       MERGE JOIN CARTESIAN          |                    |     1 |   102 |     2   (0)| 00:00:01 |
    |   7 |        TABLE ACCESS BY INDEX ROWID  | CONF_DTL           |     1 |    70 |     1   (0)| 00:00:01 |
    |*  8 |         INDEX RANGE SCAN            | PK_CONF_DTL        |     1 |       |     1   (0)| 00:00:01 |
    |   9 |        BUFFER SORT                  |                    |     1 |    32 |     1   (0)| 00:00:01 |
    |  10 |         SORT UNIQUE                 |                    |     1 |    32 |     1   (0)| 00:00:01 |
    |  11 |          TABLE ACCESS BY INDEX ROWID| CONF_DTL           |     1 |    32 |     1   (0)| 00:00:01 |
    |* 12 |           INDEX RANGE SCAN          | PK_CONF_DTL        |     1 |       |     1   (0)| 00:00:01 |
    |* 13 |       INDEX RANGE SCAN              | SHIP_DTL_IND_6 |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       8 - access("CD"."CONF_NBR"='1')
      12 - access("CONF_NBR"='1')
      13 - access("ITEM_ID"="SD"."ITEM_ID")
           filter("ITEM_ID"="CD"."ITEM_ID")

    Citizen_2 wrote:
    Is there any room for improvement for this query? Table stats are up-to-date. Any suggestions Query rewrite, addition of indexes,...etc ??You say that the table stats are up-to-date, but is the following assumption of the optimizer correct:
    select count(*)
    from CONF_dtl
     where CONF_nbr = '1';
     Does this query return a count of 1? I doubt that, but that's what Oracle estimates in the EXPLAIN PLAN output. Based on that assumption you get a cartesian join between the two CONF_DTL table instances, and the result - which is still expected to be one row at most - is then joined to the SHIP_DTL table using a NESTED LOOP.
     If the above assumption is incorrect, the number of rows generated by the cartesian join can be tremendous, rendering the NESTED LOOP operation quite inefficient.
    You can verify this by using the DBMS_XPLAN.DISPLAY_CURSOR function together with the GATHER_PLAN_STATISTICS hint, if you're already on 10g or later.
    For more information regarding the DISPLAY_CURSOR function, see e.g. here: http://jonathanlewis.wordpress.com/2006/11/09/dbms_xplan-in-10g/
    It will show you the actual cardinalities compared to the estimated cardinalities.
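     For illustration, a minimal sketch of that approach, reusing the query as posted ('ALLSTATS LAST' is one common format option; in SQL*Plus keep serveroutput off so that the last cursor picked up is the query itself):
     select /*+ gather_plan_statistics */
            sum(case
                  when (cd.actl_qty - cd.total_alloc_qty - lsd.Q < 0) then 0
                  else cd.actl_qty - cd.total_alloc_qty - lsd.Q
                end)
       from (select sum(reqd_qty) as Q, ITEM_ID as ITEM
               from SHIP_DTL SD
              where exists (select 1 from CONF_dtl
                             where CONF_nbr = '1' and ITEM_id = SD.ITEM_id)
              group by ITEM_id) lsd,
            CONF_dtl cd
      where lsd.ITEM = cd.ITEM_id
        and cd.CONF_nbr = '1';
     select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
     The E-Rows and A-Rows columns of the resulting output show the estimated and actual cardinalities side by side for each plan step.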
     If the estimate of the optimizer is incorrect, you should find out why. There still might be some issues with the statistics, since this is the most obvious reason for incorrect estimates.
    Are your index statistics up-to-date?
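     As a sketch only (run as the owning schema; CASCADE also refreshes the index statistics, and the exact options should be adapted to your environment), refreshing both could look like this:
     exec dbms_stats.gather_table_stats(user, 'CONF_DTL', cascade => true)
     exec dbms_stats.gather_table_stats(user, 'SHIP_DTL', cascade => true)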
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Need Help! Ad Hoc Query in HR Tables

    Dear Friends,
     I need to create an ad hoc query for the following tables.
    PA*
    HRP1000
    HRP1001
     But I could select only the PNPCE LDB... I do not know how to define one and get the information out of the PA* and the HRP1000 and HRP1001 infotypes....
    Please help.
    Thanks,
    Joy

    Hi Joydip,
     Go to transaction SQ02 and, from the Edit menu, go to the "Change Infotype Selection" step. Go to the end of the tree structure. From "Infotypes of related objects" select the respective organizational object (e.g. org unit, position) and the respective relationship. When you click on these items and confirm your selection, they all appear in the infotype tree structure at the left part of the SQ02 screen. You can use them just like the other infotypes of the PNPCE logical DB.
    Regards,
    Dilek

  • Daily morning 8 o clock , we need to execute a select query

    Hi,
     every day at 8 o'clock in the morning, we need to execute a select query.
     How can we do this using dbms_scheduler, or is there any other way?
    db version : 10g
    Thanks,
    Kumar.
    Message was edited by:
    user548258

     Use the daily and byhour parameters as shown in the link below.
    http://www.dba-oracle.com/t_dbms_scheduler_examples.htm
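     For example, a minimal sketch of such a job; the job name and the procedure it calls are hypothetical placeholders, and a bare SELECT is normally wrapped in a procedure that does something with its result:
     begin
       dbms_scheduler.create_job(
         job_name        => 'DAILY_8AM_QUERY',                  -- hypothetical job name
         job_type        => 'PLSQL_BLOCK',
         job_action      => 'begin my_report_procedure; end;',  -- hypothetical procedure wrapping the select
         start_date      => systimestamp,
         repeat_interval => 'FREQ=DAILY; BYHOUR=8; BYMINUTE=0; BYSECOND=0',
         enabled         => true);
     end;
     /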

  • Need help improving the code

     I have this file but I need to improve the code in some methods. The methods are addNode, deleteNode, saveNode, and findLadder, and is there any way for me to eliminate the findSmallest method?
    here is the code
    import java.io.*;
    import java.util.LinkedList;
    import java.util.Stack;
    import javax.swing.*;
    public class Graph {
    private LinkedList graph;
    private BufferedReader inputFile;
    public Graph() {
    graph = new LinkedList();
    // read the words from the given file
    // create a GraphNode
    // Add the node to the graph
    public void createGraph(String fileName) throws IOException {
    inputFile = new BufferedReader(
    new InputStreamReader(new FileInputStream("word.txt")));
    // Convert the linkedlist to an array 'a'
    // sort the array 'a'
    // create a string from all elements in 'a'
    // return the string
    public String printGraph() {
    String output = new String();
    // will contain String objects
    // ... do some work with the list, adding, removing String objects
    String[] a = new String[graph.size()];
    graph.toArray(a);
    // now stringArray contains all the element from linkedList
     quickSort(a, 0, a.length - 1);
    for(int i=0;i<a.length;++i)
     output = output + a[i] + "\n";
    return output;
    private static void quickSort(Comparable[] theArray,
    int first, int last) {
    // Sorts the items in an array into ascending order.
    // Precondition: theArray[first..last] is an array.
    // Postcondition: theArray[first..last] is sorted.
    // Calls: partition.
    int pivotIndex;
    if (first < last) {
    // create the partition: S1, Pivot, S2
    pivotIndex = partition(theArray, first, last);
    // sort regions S1 and S2
    quickSort(theArray, first, pivotIndex-1);
    quickSort(theArray, pivotIndex+1, last);
    } // end if
    } // end quickSort
    private static int partition(Comparable[] theArray,
    int first, int last) {
    // Partitions an array for quicksort.
    // Precondition: theArray[first..last] is an array;
    // first <= last.
    // Postcondition: Returns the index of the pivot element of
    // theArray[first..last]. Upon completion of the method,
    // this will be the index value lastS1 such that
    // S1 = theArray[first..lastS1-1] < pivot
    // theArray[lastS1] == pivot
    // S2 = theArray[lastS1+1..last] >= pivot
    // Calls: choosePivot.
    // tempItem is used to swap elements in the array
    Comparable tempItem;
    // place pivot in theArray[first]
    //choosePivot(theArray, first, last);
    Comparable pivot = theArray[first]; // reference pivot
    // initially, everything but pivot is in unknown
    int lastS1 = first; // index of last item in S1
    // move one item at a time until unknown region is empty
    for (int firstUnknown = first + 1; firstUnknown <= last;
    ++firstUnknown) {
    // Invariant: theArray[first+1..lastS1] < pivot
    // theArray[lastS1+1..firstUnknown-1] >= pivot
    // move item from unknown to proper region
    if (theArray[firstUnknown].compareTo(pivot) < 0) {
    // item from unknown belongs in S1
    ++lastS1;
    tempItem = theArray[firstUnknown];
    theArray[firstUnknown] = theArray[lastS1];
    theArray[lastS1] = tempItem;
    } // end if
    // else item from unknown belongs in S2
    } // end for
    // place pivot in proper position and mark its location
    tempItem = theArray[first];
    theArray[first] = theArray[lastS1];
    theArray[lastS1] = tempItem;
    return lastS1;
    } // end partition
    // Given a new word, add it to the graph
    public void addNode(String word) {
    GraphNode node = new GraphNode(word);
    if(graph.contains(node)){
    JOptionPane.showMessageDialog(null,"Duplicate Word, operation terminated");
    for(int i=0; i<graph.size(); ++i) {
    if(isAnEdge((String)(((GraphNode)graph.get(i)).getVertex()),(String)(node.getVertex()))) {
    EdgeNode e1 = new EdgeNode((String)node.getVertex(),1);
    EdgeNode e2 = new EdgeNode((String)((GraphNode)graph.get(i)).getVertex(),1);
    node.addEdge(e2);
    ((GraphNode)graph.get(i)).addEdge(e1);
    graph.add(node);
    public boolean deleteNode(String word) {
    GraphNode node = new GraphNode(word);
    EdgeNode n = new EdgeNode(word,1);
    if(!graph.contains(node)) {
    return false;
    else {
    for(int i=0; i<graph.size();++i) {
    ((GraphNode)graph.get(i)).getEdgeList().remove(n);
    graph.remove(node);
    return true;
    public void save(String fileName) {
    try {
    PrintWriter output = new PrintWriter(new FileWriter("word.txt"));
    for(int i=0; i< graph.size();++i) {
    output.println(((GraphNode)graph.get(i)).getVertex());
    output.close();
    catch (IOException e) {
    // given two word, find the ladder (using dijkstra's algorithm
    // create a string for the ladder and return it
    public String findLadder(String start,String end) {
    String ladder = new String();
    GraphNode sv = new GraphNode(start);
    GraphNode ev = new GraphNode(end);
    if(!graph.contains(sv)) {
    JOptionPane.showMessageDialog(null,start + " not in graph");
    return null;
    if(!graph.contains(ev)) {
    JOptionPane.showMessageDialog(null,end + " not in graph");
    return null;
    LinkedList distance = new LinkedList(((GraphNode)graph.get(graph.indexOf(sv))).getEdgeList());
    LinkedList visited = new LinkedList();
    visited.add(start);
    LinkedList path = new LinkedList();
    path.add(new PathNode(start,"****"));
    for(int i=0; i<distance.size();++i) {
    PathNode p = new PathNode((String)((EdgeNode)distance.get(i)).getKey(),start);
    path.add(p);
    while(!visited.contains(end)) {
    EdgeNode min = findSmallest(distance,visited);
    String v = (String)min.getVertex();
    if(v.equals("****"))
    return null;
    visited.add(v);
    // for(int i=0;i<graph.size();++i) {
    // String u = (String)(((GraphNode)(graph.get(i))).getVertex());
    GraphNode temp1 = new GraphNode(v);
    int index = graph.indexOf(temp1);
    LinkedList l = new LinkedList(((GraphNode)graph.get(index)).getEdgeList());
    for(int i=0;i<l.size();++i)
    String u = (String)(((EdgeNode)(l.get(i))).getVertex());
    if(!visited.contains(u)) {
    int du=999, dv=999, avu=999;
    dv = min.getCost();
    EdgeNode edge = new EdgeNode(u,1);
    if(distance.contains(edge)) {
    du = ((EdgeNode)(distance.get(distance.indexOf(edge)))).getCost();
    GraphNode temp = new GraphNode(v);
    GraphNode node = ((GraphNode)(graph.get(graph.indexOf(temp))));
    LinkedList edges = node.getEdgeList();
    if(edges.contains(edge)) {
    avu = ((EdgeNode)(edges.get(edges.indexOf(new EdgeNode(u,1))))).getCost();
    if( du > dv+avu) {
    if(du == 999) {
    distance.add(new EdgeNode(u,dv+avu));
    path.add(new PathNode(u,v));
    else {
    ((EdgeNode)(distance.get(distance.indexOf(u)))).setCost(dv+avu);
    ((PathNode)(path.get(path.indexOf(u)))).setEnd(v);
    if(!path.contains(new PathNode(end,"")))
    return null;
    LinkedList pathList = new LinkedList();
    for(int i=0;i<path.size();++i) {
    PathNode n = (PathNode)path.get(path.indexOf(new PathNode(end,"****")));
    if(n.getEnd().compareTo("****") != 0) {
    pathList.addFirst(end);
    n = (PathNode)path.get(path.indexOf(new PathNode(n.getEnd(),"****")));
    end = n.getStart();
    pathList.addFirst(start);
    for(int i=0;i<pathList.size()-1;++i) {
    ladder = ladder + ((String)(pathList.get(i))) + " --> ";
    ladder = ladder + ((String)(pathList.get(pathList.size()-1)));
    return ladder;
    private EdgeNode findSmallest(LinkedList distance, LinkedList visited) {
    EdgeNode min = new EdgeNode("****",999);
    for(int i=0;i<distance.size();++i) {
    String node = (String)(((EdgeNode)distance.get(i)).getVertex());
    if(!visited.contains(node)) {
    if(((EdgeNode)distance.get(i)).getCost()<min.getCost()) {
    min = (EdgeNode)distance.get(i);
    return min;
    // class that represents nodes inserted into path set
    private class PathNode {
    protected String sv;
    protected String ev;
    public PathNode(String s,String e) {
    sv = s;
    ev = e;
    public String getEnd() {
    return ev;
    public String getStart() {
    return sv;
    public void setEnd(String n) {
    ev = n;
    public boolean equals(Object o) {
    return this.sv.equals(((PathNode)o).sv);
    public String toString() {
    return "("+sv+":"+ev+")";
    thank you

     Let me fix my mistake which was pointed out by someone here, and thank you for doing so because I'm new at this.
     I have this file but I need to improve the code in some methods. The methods are addNode, deleteNode, saveNode, and findLadder, and is there any way for me to eliminate the findSmallest method?

  • Need suggestions on date range query

    I have a requirement to show the amount of product remaining. There is a table that holds updated "inventory" amounts with a date and tonnage, and a series of transactional tables that detail the individual disbursements from the stockpile. The trick is that the dates for the inventory adjustments may not all be the same, meaning that I need to individually resolve the stockpiles.
    This query will give me the inventory disbursements:
    select FN_STN_KEY(j.FACTORY_ID, j.STATION_ID) as STATION,
      count(j.LOAD_JOB_ID) as LOADS,
               CASE SUM(w.SPOT_WEIGHT)
                 WHEN 0 THEN SUM(NVL(j.MAN_SPOT_WT,0))
                 ELSE SUM(w.SPOT_WEIGHT)
               END TONS
           from TC c, TC_LOAD_JOBS j, SPOT_WEIGHTS w
          where c.TC_ID = j.TC_ID
            and c.DATE_INDEX = w.DATE_INDEX and j.LOAD_RATE_ID = w.LOAD_RATE_ID
            and c.DATE_INDEX BETWEEN to_date('09/01/2009','MM/DD/YYYY') and sysdate
            and FN_STN_KEY(j.FACTORY_ID, j.STATION_ID) in (810,410)
          group by FN_STN_KEY(j.FACTORY_ID, j.STATION_ID);
     Note that the date and the list of stations in the where clause are dynamic and selected by the user. If this were only one station at a time it wouldn't be this complicated.
    This query will give me the last known inventory amount:
    select to_char(MAX(AS_OF_DT),'Mon DD, YYYY'), TONS
      from STATION_LOG
          where AS_OF_DT < sysdate and STN_KEY in (810,410) group by TONS;
     Again, the date and list of stations are selected by the user. They should be identical to those selected for the other query.
    Does anyone have any good ideas on how to combine these two statements into a single report?
    Note: FN_STN_KEY acts as a join function. You don't really want me to get into why there isn't a single unique key to reference.

    Hi,
     I'm trying to follow your description, but lots of things don't make sense to me.
    blarman74 wrote:
    Yeah. I put in some data so I could get the message back to you, then filled in the rest.
    So the user is going to pass in two parameters: The date of the report and the list of stations they want to get an inventory count on. What were the parameters that produced the output you posted before:
    STATION     INITIAL_TONS     USED_TONS     AS_OF_DATE
    810               835500        465100      09/01/2010
    410               495800        366900      09/02/2010
    550               568900        122600      08/31/2010
    What I need the report to do is
    1) take a station from the list
    2) find out the inventory tally from STATION_LOG where the date is the largest date less than the supplied date. This should give me AS_OF_DATE and my initial quantity.
    3) query the data table for all tons hauled from the AS_OF_DATE for that station.
     4) repeat for the next station.
     So this is what your existing PL/SQL code does. A non-procedural language, like SQL, won't follow the same steps, of course.
    The sample data for station_log is:
    INSERT INTO STATION_LOG (1, to_date('08/31/2010','MM/DD/YYYY'), 810, 562500);
    INSERT INTO STATION_LOG (2, to_date('09/02/2010','MM/DD/YYYY'), 410, 495500);
    INSERT INTO STATION_LOG (3, to_date('09/01/2010','MM/DD/YYYY'), 910, 832600);
     INSERT INTO STATION_LOG (4, to_date('12/31/2010','MM/DD/YYYY'), 810, 239800);
     How do you get the initial_tons in the output above from the data above? Did you mean to post some new sample data for station_log?
     I still get ORA-00928 errors from all the INSERT statements.
    As I said, I can do it inside a loop in PL/SQL, but I got completely stumped on how I could accomplish this in SQL. The trick is that if I can do it in SQL, I can allow the user to export the data to csv using built-in functionality. If I have to do it in PL/SQL, I can't provide the export as easily.
     One more thing I just thought about: I am going to need to use a BETWEEN on the dates of the data I need to grab. I obviously don't want to grab data past another inventory tally record from STATION_LOG for the same station, and I can use an NVL so it cuts off at SYSDATE. I obviously haven't hauled anything in the future ;)
     I doubt if I'll get enough information to do this for you before I leave on vacation.
    Here's an example of what you need to do using the scott.emp table instead of your station_log table:
    SELECT       job
    ,       hiredate
    ,       sal
    FROM       scott.emp
    ORDER BY  job
    ,            hiredate
    JOB       HIREDATE           SAL
    ANALYST   03-Dec-1981       3000
    ANALYST   19-Apr-1987       3000
    CLERK     17-Dec-1980        800
    CLERK     03-Dec-1981        950
    CLERK     23-Jan-1982       1300
    CLERK     23-May-1987       1100
    MANAGER   02-Apr-1981       2975
    MANAGER   01-May-1981       2850
    MANAGER   09-Jun-1981       2450
    PRESIDENT 17-Nov-1981       5000
    SALESMAN  20-Feb-1981       1600
    SALESMAN  22-Feb-1981       1250
    SALESMAN  08-Sep-1981       1500
     SALESMAN  28-Sep-1981       1250
     Say we want to find, for each job in a given list, the sal that corresponds to the last hiredate that is no later than the given report_date. (This seems pretty close to what you need: for each stn_key in a given list, the quantity that corresponds to the last row that is no later than the given report_date.)
    That is, if we're only interested in the jobs CLERK, MANAGER and PRESIDENT, and only in hiredates on or before December 31, 1981, the output would be:
    JOB       LAST_HIREDA   LAST_SAL
    CLERK     03-Dec-1981        950
    MANAGER   09-Jun-1981       2450
     PRESIDENT 17-Nov-1981       5000
     That is, we want to ignore all jobs that are not in the given list, and all rows whose hiredate is after the given report_date. Among the rows that remain, we're interested only in the last one for each job.
     Note that the last_sal for CLERK is not 1300 or 1100: those values were after the given report_date. Also, the last_sal for CLERK is not 800; that's not the last one of the remaining rows.
    Here's one way to get those results in pure SQL:
    DEFINE       jobs_wanted     = "CLERK,MANAGER,PRESIDENT"
    DEFINE       report_date     = "DATE '1981-12-31'"
    SELECT       job
    ,       MAX (hiredate)                         AS last_hiredate
    ,       MAX (sal) KEEP (DENSE_RANK LAST ORDER BY hiredate)     AS last_sal
    FROM       scott.emp
    WHERE       hiredate     <= &report_date
    AND        ',' || '&jobs_wanted' || ','     LIKE
           '%,' || job                  || ',%'
    GROUP BY  job
    ORDER BY  job
     ;
     I used substitution variables for the parameters. You could use bind variables, or hard-code the values instead.
     The WHERE clause is applied before aggregate functions are computed, so rows after &report_date don't matter.
     "MAX (sal) KEEP (DENSE_RANK LAST ORDER BY hiredate)" means the sal that is associated with the last row, in order by hiredate. If there happens to be a tie (that is, two or more rows have exactly the same hiredate, and no row is later) then the highest sal from those rows is returned; that's what MAX means here. Ties may be impossible in your data.
    You need to write a similar query using your station_log table, and join the results of that to your load_data table, including only the rows that have dates between the date in the sub-query (last_hiredate in my example) and the parameter report_date. That can be part of the join condition.
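     As a rough sketch only, adapting that pattern to the station_log table described earlier in the thread (column names are taken from the posts above; :report_date and the station list are placeholders for the user's parameters):
     SELECT    stn_key
     ,         MAX (as_of_dt)                                       AS last_as_of_dt
     ,         MAX (tons) KEEP (DENSE_RANK LAST ORDER BY as_of_dt)  AS last_tons
     FROM      station_log
     WHERE     as_of_dt <= :report_date
     AND       stn_key  IN (810, 410)
     GROUP BY  stn_key
     ;
     The result of this sub-query can then be joined to the disbursement query on the station key, restricting the transaction dates to the range between last_as_of_dt and :report_date, as described above.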

  • Need to create a BEX Query to get last 30 days data.

    Hi,
     I need to create a BEx query that, based on an input date, calculates the outstanding amounts for the last 30 days, 31-60 days, 61-90 days, 91-180 days, and greater than 180 days. Please find the format of the report. Kindly help me.
     Thanks & Regards,

     Based on those documents you can easily create it.
     1. First create a mandatory user-input variable.
     2. The posting date is available as a characteristic, so you will get it.
     3. You need to calculate the difference between those two dates; you can refer to the link below. By using replacement path variables you can convert both dates and get the difference.
     http://www.sd-solutions.com/SAP-HCM-BW-Replacement-Path-Variables.html
     4. Now create the bucketing logic formulas as per the requirement; the documents above will give you an idea.

  • Need Store DB tables to query to get the latest scanned price at register

    Need Store DB tables to query to get the latest scanned price at register.
     Please provide a sample SQL script that will help me do this. I need to have this information for several items at a time.
    Thanks,
    Edited by: user10133807 on Jan 13, 2012 9:22 AM
    Edited by: user10133807 on Jan 13, 2012 9:23 AM

    Hi,
     You can find the sale prices of multiple items scanned at the register by running the query below:
    select distinct DC_DY_BSN,ID_ITM_POS,MO_PRN_PRC from TR_LTM_SLS_RTN;
    Thanks,
    MG

  • Need to install only the query for fi account receivables standard data ?

    Hi all,
     I need to install only the queries for the standard business content for FI_AR in BW 3.5.
     I have selected the cube and then, in grouping, I have selected "data flow afterwards" only.
     And I have selected only the queries, but the cube is selected again. Should I uncheck the cube and install only the queries and transport them?
     Please guide me on this!
    Thanks
    Pooja

    Hi,
     If the required cube is already available and you are continuing with the existing design, you can uncheck the cube and install only the queries. Check all the prerequisites before installing the queries, such as the InfoObjects used and any other targets.
    Hope this helps,
    Regards,
    Rama Murthy.
