Does analyzing a table make execution faster?

Does analyzing a table make execution faster? I have written a package and it is taking 30 minutes to execute. Please help me.

- I assume you mean "Does analyzing a table improve the performance of queries?"
- What version of Oracle?
- Are you using the rule-based optimizer (RBO)? Or the cost-based optimizer (CBO)?
- Have statistics ever been gathered on the table(s) involved in your procedure? If so, are the statistics still accurate? Or has there been a substantial amount of change to the table since statistics were last gathered?
- How do you normally gather statistics?
If you are using the CBO and your statistics are missing or out of date, gathering fresh statistics should give the optimizer better information from which to generate better query plans. If your code is slow because a particular SQL statement is using a suboptimal plan caused by incorrect statistics, gathering statistics would improve the performance of your procedure. But there are many other reasons code might be slow, and we don't know whether the optimizer has made an incorrect decision, whether the existing statistics are misleading, or even whether statistics matter to the optimizer you're using, so we can't say anything more definitive than "maybe".
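For example, a minimal sketch of gathering fresh statistics with DBMS_STATS (the schema and table names below are placeholders, and AUTO_SAMPLE_SIZE assumes a reasonably recent release):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'YOUR_SCHEMA',                 -- placeholder
    tabname          => 'YOUR_TABLE',                  -- placeholder
    cascade          => TRUE,                          -- gather index statistics too
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/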
Justin

Similar Messages

  • How can I make the execution of my script faster

    Hi everyone
How can I make the execution of my script faster? It takes a lot of time to execute. The following is my script:
    DECLARE
    CURSOR C1 IS
    SELECT A.ITEM_CODE,A.STORE_CODE,ST_UNIT,SA_UNIT,CART_QTY,QUANTITY_ON_HAND
    FROM PROJ.IM_LOCATION A
    WHERE A.ITEM_CODE BETWEEN :ITEM_FRM AND :ITEM_TO
    AND A.STORE_CODE = :FRM_STORE
    ORDER BY
    A.STORE_CODE ;
    CURSOR C2 IS
    SELECT A.ITEM_CODE,A.STORE_CODE,ST_UNIT,SA_UNIT,CART_QTY,QUANTITY_ON_HAND
    FROM PROJ.IM_LOCATION A
    WHERE A.ITEM_CODE BETWEEN :ITEM_FRM AND :ITEM_TO
    AND A.STORE_CODE = :FRM_STORE
    ORDER BY
    A.STORE_CODE ;
    big_syb_qty_issue number(12,3);
    small_syb_qty_issue number(12,3);
    big_issue number(12,3);
    small_issue number(12,3);
    item_syb_code varchar2(30);
    big_syb_qty_rec number(12,3);
    small_syb_qty_rec number(12,3);
    BI_SUPP_REC number(12,3);
    SM_SUPP_REC number(12,3);
    big_syb_qty_ADJ number(12,3);
    small_syb_qty_ADJ number(12,3);
    big_ADJ number(12,3);
    small_ADJ number(12,3);
    big_syb_qty_rec_iner number(12,3);
    small_syb_qty_rec_iner number(12,3);
    BI_INTER number(12,3);
    SM_INTER number(12,3);
    cl_big_qty number(12,3);
    cl_small_qty number(12,3);
    cl_big_qty1 number(12,3);
    cl_small_qty1 number(12,3);
    BIG_QTY_OPEN number(12,3);
    SMALL_QTY_OPEN number(12,3);
    BEGIN
         IF ((:FRM_STORE IS NULL) OR (:ITEM_FRM IS NULL) OR (:ITEM_TO IS NULL) OR (:DATE_FRM IS NULL)) THEN
              SHOW_MESSAGE('You should enter the parameter values correctly. Please try again!');
              GO_FIELD('DATE_FRM');
              RAISE FORM_TRIGGER_FAILURE;
         END IF;
    DELETE FROM STOCK_AT_DATE_REP2;
    COMMIT;
    cl_big_qty := 0;
    cl_small_qty := 0;
    -- MESSAGE('Please Wait The System Calculating The Transactions !!!');
    SET_APPLICATION_PROPERTY(CURSOR_STYLE,'BUSY');
    FOR R IN C1
    LOOP
    cl_big_qty := R.CART_QTY ;
    cl_small_qty := R.QUANTITY_ON_HAND;
    -- Transfer Data 1
    BEGIN
    SELECT B.ITEM_CODE,SUM(NVL(CART_QTY,0)) ,SUM(NVL(ITEM_QUANTITY,0))
    INTO item_syb_code,big_syb_qty_issue,small_syb_qty_issue
    FROM IM_TRANS_ISSUE_HEADER A,IM_TRANS_ISSUE_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND B.DEL_STORE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND DOC_DATE > :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- SHOW_MESSAGE('ISSUED BIG'||' '||big_syb_qty_issue);
    exception
    when no_data_found then big_syb_qty_issue := 0;
    small_syb_qty_issue := 0;
    when others then MESSAGE(10,sqlerrm);
    END ;
    -- Goods Received Data From Supplier 1
    BEGIN
    SELECT B.ITEM_CODE,SUM(B.CART_QTY),SUM(ITEM_QUANTITY)
    INTO item_syb_code,big_syb_qty_rec,small_syb_qty_rec
    FROM IM_GOODS_RECIEVE_HEADER A,IM_GOODS_RECIEVE_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND STORE_CODE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND DOC_DATE > :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- SHOW_MESSAGE('RECEIVED FROM SUPPLIER BIG'||' '||big_syb_qty_rec);
    exception
    when no_data_found then big_syb_qty_rec := 0;
    small_syb_qty_rec := 0;
    when others then message(10,sqlerrm);
    END ;
    -- Adjustment Data 1
    BEGIN
    SELECT B.ITEM_CODE ,SUM(NVL(CART_QTY,0)) ADJUST_QTY,SUM(NVL(ITEM_QUANTITY,0))
    INTO item_syb_code,big_syb_qty_ADJ,small_syb_qty_ADJ
    FROM IM_ADJUST_HEADER A,IM_ADJUST_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND B.STORE_CODE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND A.DOC_DATE > :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- SHOW_MESSAGE('Adjust BIG'||' '||big_syb_qty_ADJ);
    exception
    when no_data_found then big_syb_qty_ADJ := 0;
    small_syb_qty_ADJ := 0;
    when others then message(10,sqlerrm);
    END ;
    -- Goods Received Data From Stores 1
    BEGIN
    SELECT B.ITEM_CODE,SUM(B.CART_QTY),SUM(ITEM_QUANTITY)
    INTO item_syb_code,big_syb_qty_rec_iner,small_syb_qty_rec_iner
    FROM IM_TRANS_REC_HEADER A,IM_TRANS_REC_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND REC_STORE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND DOC_DATE > :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- show_message('here');
    -- SHOW_MESSAGE('Received From Stores BIG'||' '||big_syb_qty_rec_iner);
    exception
    when no_data_found then
    big_syb_qty_rec_iner := 0;
    small_syb_qty_rec_iner := 0;
    when others then message(10,sqlerrm);
    END ;
    cl_big_qty := (NVL(cl_big_qty,0) + NVL(big_syb_qty_issue,0));
    cl_big_qty := (NVL(cl_big_qty,0) - NVL(big_syb_qty_rec,0));
    cl_big_qty := (NVL(cl_big_qty,0) - NVL(big_syb_qty_rec_iner,0));
    big_syb_qty_ADJ := -1 * NVL(big_syb_qty_ADJ,0);
    cl_big_qty := (NVL(cl_big_qty,0) + NVL(big_syb_qty_ADJ,0));
    -- srw.message(2000,'cl_small_qty'||cl_small_qty);
    cl_small_qty := (NVL(cl_small_qty,0) + NVL(small_syb_qty_issue,0));
    cl_small_qty := (NVL(cl_small_qty,0) - NVL(small_syb_qty_rec,0));
    cl_small_qty := (NVL(cl_small_qty,0) - NVL(small_syb_qty_rec_iner,0));
    small_syb_qty_ADJ := -1 * NVL(small_syb_qty_ADJ,0);
    cl_small_qty := (NVL(cl_small_qty,0) + NVL(small_syb_qty_ADJ,0));
    -- srw.message(2000,'cl_small_qty'||cl_small_qty); srw.message(2000,'cl_small_qty'||cl_small_qty);
    INSERT INTO STOCK_AT_DATE_REP2
    VALUES(R.STORE_CODE,R.ITEM_CODE,cl_big_qty,cl_small_qty,R.ST_UNIT,R.SA_UNIT,:DATE_FRM,
    :DATE_TO,0,0,0,0,0,0,0,0,cl_big_qty,cl_small_qty);
    cl_big_qty := 0;
    cl_small_qty := 0;
    END LOOP;
    COMMIT;
    FOR R IN C2
    LOOP
    -- Transfer Data 2
    BEGIN
    SELECT B.ITEM_CODE,SUM(NVL(CART_QTY,0)) ,SUM(NVL(ITEM_QUANTITY,0))
    INTO item_syb_code,big_issue,small_issue
    FROM IM_TRANS_ISSUE_HEADER A,IM_TRANS_ISSUE_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND B.DEL_STORE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND DOC_DATE BETWEEN :DATE_FRM AND :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- SHOW_MESSAGE('ISSUED BIG'||' '||big_syb_qty_issue);
    exception
    when no_data_found then
    big_issue := 0;
    small_issue := 0;
    when others then MESSAGE(10,sqlerrm);
    END ;
    -- Goods Received Data From Supplier 2
    BEGIN
    SELECT B.ITEM_CODE,SUM(NVL(B.CART_QTY,0)),SUM(NVL(ITEM_QUANTITY,0))
    INTO item_syb_code,BI_SUPP_REC,SM_SUPP_REC
    FROM IM_GOODS_RECIEVE_HEADER A,IM_GOODS_RECIEVE_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND STORE_CODE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND DOC_DATE BETWEEN :DATE_FRM AND :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- SHOW_MESSAGE('1- SM_SUPP_REC '||' '||SM_SUPP_REC );
    -- SHOW_MESSAGE('RECEIVED FROM SUPPLIER BIG'||' '||big_syb_qty_rec);
    exception
    when no_data_found then
    BI_SUPP_REC := 0;
    SM_SUPP_REC := 0;
    when others then message(10,sqlerrm);
    END ;
    -- Adjustment Data 2
    BEGIN
    SELECT B.ITEM_CODE ,SUM(NVL(CART_QTY,0)) ADJUST_QTY,SUM(NVL(ITEM_QUANTITY,0))
    INTO item_syb_code,big_ADJ,small_ADJ
    FROM IM_ADJUST_HEADER A,IM_ADJUST_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND B.STORE_CODE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND A.DOC_DATE BETWEEN :DATE_FRM AND :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- SHOW_MESSAGE('Adjust BIG'||' '||big_syb_qty_ADJ);
    exception
    when no_data_found then
    big_ADJ := 0;
    small_ADJ := 0;
    when others then message(10,sqlerrm);
    END ;
    -- Goods Received Data From Stores 2
    BEGIN
    SELECT B.ITEM_CODE,SUM(NVL(B.CART_QTY,0)),SUM(NVL(ITEM_QUANTITY,0))
    INTO item_syb_code,BI_INTER,SM_INTER
    FROM IM_TRANS_REC_HEADER A,IM_TRANS_REC_DETAILS B
    WHERE A.DOC_CODE = B.DOC_CODE
    AND REC_STORE = R.STORE_CODE
    AND ITEM_CODE = R.ITEM_CODE
    AND DOC_DATE BETWEEN :DATE_FRM AND :DATE_TO
    GROUP BY
    B.ITEM_CODE;
    -- show_message('here');
    -- SHOW_MESSAGE('Received From Stores BIG'||' '||big_syb_qty_rec_iner);
    exception
    when no_data_found then
    BI_INTER := 0;
    SM_INTER := 0;
    when others then message(10,sqlerrm);
    END ;
    BEGIN
         BIG_QTY_OPEN := 0;
    SMALL_QTY_OPEN := 0;
    BEGIN
    SELECT NVL(S_BIG_QTY_OPEN,0) ,NVL(S_SMALL_QTY_OPEN,0)
    INTO
    BIG_QTY_OPEN,SMALL_QTY_OPEN
    FROM STOCK_AT_DATE_REP2
    WHERE S_STORE_CODE = R.STORE_CODE
    AND S_ITEM_CODE = R.ITEM_CODE;
    END;
    BIG_QTY_OPEN := ((BIG_QTY_OPEN) + NVL(big_issue,0));
    BIG_QTY_OPEN := ((BIG_QTY_OPEN) - NVL(BI_SUPP_REC,0));
    big_adj := -1 * NVL(big_adj,0);
    BIG_QTY_OPEN := ((BIG_QTY_OPEN) + NVL(big_adj,0));
    BIG_QTY_OPEN := ((BIG_QTY_OPEN) - NVL(BI_INTER,0));
    SMALL_QTY_OPEN := ((SMALL_QTY_OPEN) + NVL(SMALL_issue,0));
    SMALL_QTY_OPEN := ((SMALL_QTY_OPEN) - NVL(SM_SUPP_REC ,0));
    SMALL_adj := -1 * NVL(SMALL_adj,0);
    SMALL_QTY_OPEN := ((SMALL_QTY_OPEN) + NVL(SMALL_adj,0));
    SMALL_QTY_OPEN := ((SMALL_QTY_OPEN) - NVL(SM_INTER,0));
    END;
    BEGIN
    UPDATE STOCK_AT_DATE_REP2
    SET BIG_SUP_REC = BI_SUPP_REC,
    SMALL_SUP_REC = SM_SUPP_REC,
    BIG_ISSUE_TRAN = big_issue,
    SMALL_ISSUE_TRAN = SMALL_issue,
    BIG_ADJUST = big_adj,
    SMALL_ADJUST = SMALL_adj,
    BIG_INTER_REC = BI_INTER,
    SMALL_INTER_REC = SM_INTER,
    S_BIG_QTY_OPEN = BIG_QTY_OPEN,
    S_SMALL_QTY_OPEN = SMALL_QTY_OPEN
    WHERE S_STORE_CODE = R.STORE_CODE
    AND S_ITEM_CODE = R.ITEM_CODE;
    exception
    when no_data_found then
    NULL;
    when others then
    message(10,sqlerrm);
    END;
    END LOOP;
    COMMIT;
    SET_APPLICATION_PROPERTY(CURSOR_STYLE,'default');
    SYNCHRONIZE;
    -- SHOW_MESSAGE('The Data Have Been Calculated !!!');
    END;
    declare
         pl_id ParamList;
    APPLICATION_ID VARCHAR2(20):='PRD';
              COMMAND_LINE VARCHAR2(100) :='STOCK_LEDGER';
    BEGIN
    pl_id := Get_Parameter_List('tmpdata');
    IF NOT Id_Null(pl_id) THEN
    Destroy_Parameter_List( pl_id );
    END IF;
    pl_id := Create_Parameter_List('tmpdata');
    Add_Parameter(pl_id,'DATE_FRM',TEXT_PARAMETER,:DATE_FRM);
    Add_Parameter(pl_id,'DATE_TO',TEXT_PARAMETER,:DATE_TO);
    Add_Parameter(pl_id,'ITEM_FRM',TEXT_PARAMETER,:ITEM_FRM);
    Add_Parameter(pl_id,'ITEM_TO',TEXT_PARAMETER,:ITEM_TO);
    Add_Parameter(pl_id,'FRM_STORE',TEXT_PARAMETER,:FRM_STORE);
    IF :REPORT_TYPE = 1 THEN
    Run_Product(REPORTS,'C:\INV\RDF\STOCK_LEDGER.rdf',SYNCHRONOUS,RUNTIME,
    FILESYSTEM, pl_id,NULL);
    ELSIF :REPORT_TYPE = 2 THEN
    Run_Product(REPORTS,'C:\INV\RDF\STOCK_LEDGER_2.rdf',SYNCHRONOUS,RUNTIME,
    FILESYSTEM, pl_id,NULL);
    END IF;
    END;
    Waiting for your valuable answer
    Best Regards
    Jamil Alshaibani


  • Global temporary table screwing up execution plan

    Hello
    I have a process I'm trying to optimise. The first part of the process opens a cursor, bulk collects, and then inserts the data into a table using FORALL. The next part uses another FORALL statement to insert into another table, using 4 of those collections populated by the bulk collect to control a select statement i.e.
    FORALL i IN 1..Collection1.COUNT
        INSERT INTO table
        SELECT t1.col1,
               t2.col2,
               t3.col3,
               Collection4(i)
        FROM table1 t1,
             table2 t2,
             table3 t3
        WHERE t1.pk = Collection1(i)
        AND t2.fk = t1.pk
        AND t3.fk = t1.pk;

    This statement is executed 50k+ times and takes about an hour. The execution plan for the select is fine; the time is just down to the number of iterations. Anyway, to try and make this quicker, I created a global temporary table to store the contents of the collections of interest, and then in a new cursor using the above select (replacing the collections with the columns in the temporary table), I bulk collect that into new collections and do the inserts. The problem is that the optimiser assumes the temporary table has 2 million+ rows in it for some strange reason, and this completely changes the execution plan to give a cost of 180k! I can't analyze a temporary table, so does anyone have an idea as to what I can do to make this work and bring the cost down to something more reasonable? Maybe an alternative strategy?
    Cheers

    <<I can't analyze a temporary table>>
    What version of Oracle do you use?
    SQL> create global temporary table mytable(col1 number,
      2  col2 date) on commit preserve rows;
    Table created.
    SQL> begin
      2  for i in 1..1000 loop
      3  insert into mytable values(i,sysdate);
      4  end loop;
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL> select count(1) from mytable;
      COUNT(1)
          1000
    SQL> set autot traceonly
    SQL> select count(1) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=RULE
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE'
    Statistics
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
              0  redo size
            199  bytes sent via SQL*Net to client
            277  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> ALTER SESSION SET OPTIMIZER_GOAL = ALL_ROWS;
    Session altered.
    SQL>  select count(1) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=11 Card=1)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE' (Cost=11 Card=8168)
    Statistics
              5  recursive calls
              0  db block gets
             10  consistent gets
              0  physical reads
              0  redo size
            211  bytes sent via SQL*Net to client
            277  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              2  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    SQL> ANALYZE TABLE MYTABLE COMPUTE STATISTICS;
    Table analyzed.
    SQL> select count(1) from mytable;
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1)
       1    0   SORT (AGGREGATE)
       2    1     TABLE ACCESS (FULL) OF 'MYTABLE' (Cost=2 Card=1000)
    Statistics
              0  recursive calls
              0  db block gets
              5  consistent gets
              0  physical reads
              0  redo size
            210  bytes sent via SQL*Net to client
            277  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> disconnect
    Disconnected from Oracle9i Enterprise Edition Release 9.2.0.3.0 - 64bit Production
    With the Partitioning option
    JServer Release 9.2.0.3.0 - Production
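    If ANALYZE is awkward in your environment (the data in a session-specific temporary table is only visible to the session that loaded it), another commonly used option is to set representative statistics manually so the optimizer stops guessing. A minimal sketch, using the table name from the session above and assumed row and block counts:
    BEGIN
      DBMS_STATS.SET_TABLE_STATS(
        ownname => USER,
        tabname => 'MYTABLE',
        numrows => 1000,    -- assumed typical row count
        numblks => 10);     -- assumed typical block count
    END;
    /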

  • MATERIALIZED view on two tables with Fast Refresh

    I wanted to create an MV on two tables with fast refresh on commit.
    I followed the steps below:
    create materialized view log on t1 WITH PRIMARY KEY, rowid;
    create materialized view log on t2 WITH PRIMARY KEY, rowid;
    CREATE MATERIALIZED VIEW ETL_ENTITY_DIVISION_ASSO_MV
    REFRESH fast ON commit
    ENABLE QUERY REWRITE
    AS
    select A.ROWID t1_rowid, B.ROWID t2_rowid, a.c1, DECODE(a.c1,'aaa','xxx','aaa') c2
    from t1 A
    join t2 B
    on A.c1 = B.c2;
    i am getting below error.
    Error report:
    SQL Error: ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
    12054. 00000 - "cannot set the ON COMMIT refresh attribute for the materialized view"
    *Cause:    The materialized view did not satisfy conditions for refresh at
    commit time.
    *Action:   Specify only valid options.
    Basically I want the MV to hold the records produced by joining the two tables, and if either of the base tables is updated, the change should be reflected in the materialized view.
    Please do the needful.

    Does the table support PCT? The other restrictions on joins look to be OK in your statement.
    Maybe try creating it first with ON DEMAND instead of ON COMMIT to see whether it creates.
    http://docs.oracle.com/cd/B19306_01/server.102/b14223/basicmv.htm
    >
    Materialized Views Containing Only Joins
    Some materialized views contain only joins and no aggregates, such as in Example 8-4, where a materialized view is created that joins the sales table to the times and customers tables. The advantage of creating this type of materialized view is that expensive joins will be precalculated.
    Fast refresh for a materialized view containing only joins is possible after any type of DML to the base tables (direct-path or conventional INSERT, UPDATE, or DELETE).
    A materialized view containing only joins can be defined to be refreshed ON COMMIT or ON DEMAND. If it is ON COMMIT, the refresh is performed at commit time of the transaction that does DML on the materialized view's detail table.
    If you specify REFRESH FAST, Oracle performs further verification of the query definition to ensure that fast refresh can be performed if any of the detail tables change. These additional checks are:
    A materialized view log must be present for each detail table unless the table supports PCT. Also, when a materialized view log is required, the ROWID column must be present in each materialized view log.
    The rowids of all the detail tables must appear in the SELECT list of the materialized view query definition.
    If some of these restrictions are not met, you can create the materialized view as REFRESH FORCE to take advantage of fast refresh when it is possible. If one of the tables did not meet all of the criteria, but the other tables did, the materialized view would still be fast refreshable with respect to the other tables for which all the criteria are met.
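    Putting those rules together, a corrected version of the original statement might look like the sketch below. The join condition a.c1 = b.c2 is a guess at what the garbled ON clause intended, and the logs already created WITH PRIMARY KEY, ROWID satisfy the ROWID requirement:
    CREATE MATERIALIZED VIEW ETL_ENTITY_DIVISION_ASSO_MV
    REFRESH FAST ON COMMIT
    ENABLE QUERY REWRITE
    AS
    SELECT a.ROWID AS t1_rowid,   -- rowids of all detail tables must appear in the SELECT list
           b.ROWID AS t2_rowid,
           a.c1,
           DECODE(a.c1, 'aaa', 'xxx', 'aaa') AS c2
    FROM t1 a, t2 b
    WHERE a.c1 = b.c2;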

Unable to update: when executed, the table gets locked and execution does not stop even after an hour

    The following is my query. When it is executed, the table gets locked and execution does not stop even after an hour.
    update Employees
    set Status = 'Close'
    where statusid IN (select statusid
                       from MyView
                       where DownloadedDate = '2014-07-27 00:00:00.000')
    Here Employees contains 300,000 records and the subquery returns 150,000 ids.
    I have tried various approaches but have not been able to solve this; the statusid column has no index on it. I also tried using a cursor, but it does not work.
    Please let me know how to solve this issue. Thanks in advance.

    Hello,
    You would be better off posting your question in a more closely related forum, like Transact-SQL or SQL Server Database Engine; this forum is for Samples & Community Projects.
    Have you checked the execution plan to see whether indexes are used?
    You could update the data in chunks, e.g. 10,000 rows per execution. For this you have to add a TOP clause and a filter so that you update only those rows that have not been updated yet:
    update TOP (10000) Employees
    set Status = 'Close'
    where statusid IN (select statusid
                       from MyView
                       where DownloadedDate = '2014-07-27 00:00:00.000')
    AND Status <> 'Close'
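    A sketch of how that chunked update could be driven to completion (the loop is an illustration added here, not part of the original reply; table and column names are taken from the thread):
    WHILE 1 = 1
    BEGIN
        UPDATE TOP (10000) Employees
        SET Status = 'Close'
        WHERE statusid IN (SELECT statusid
                           FROM MyView
                           WHERE DownloadedDate = '2014-07-27 00:00:00.000')
          AND Status <> 'Close';

        IF @@ROWCOUNT = 0 BREAK;   -- stop once a pass updates no rows
    END;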
    Olaf Helper
    [ Blog] [ Xing] [ MVP]

My iPhone 3GS is on iOS 5.0.1 but it is very slow; what can I do to make it fast?

    My iPhone 3GS on iOS 5.0.1 is running very, very slowly; what can I do to make it run fast?

    You will most likely have to buy a new phone, since updating a hacked or jailbroken phone to the current version of iOS often ends up permanently bricking the phone.

  • Oracle Apps - How to create a table before report execution?

    Hi,
    I'm new to Oracle Apps. I created a procedure to get some values for a report, and a query to get the remaining values. Now I need to populate those values into a temporary table, and after the report execution I have to delete the table. In Apps there is no Before Parameter Form. If anyone can help me, that would be great.
    Thanks in advance.

    Hi,
    Do you really need to create a table INSIDE the report? It doesn't look like a good idea to me. For instance, what happens if 2 users try to execute the report at the same time? I would create a global temporary table before any execution of the report, and then I would just populate the data in the table in the 'Before Report' trigger. The data in a global temporary table is only visible at session level, so you wouldn't have any problems with multiple users executing the report at the same time and the data disappears once you finish your session, so you don't need to take care of deleting the data.
    Hope it helps.
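    For reference, a global temporary table of the kind described is created once, ahead of time, rather than inside the report; a minimal sketch with placeholder names:
    CREATE GLOBAL TEMPORARY TABLE xx_report_staging (
      item_code VARCHAR2(30),
      qty       NUMBER
    ) ON COMMIT PRESERVE ROWS;   -- rows stay for the session, vanish when it ends
    The 'Before Report' trigger then only needs to INSERT into it; no cleanup is required, because each session sees only its own rows.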

Performance problems - query on the copy of a table is faster than the original

    Hi all,
    I have SQL (select) performance problems with a specific table (cost 7800) (Oracle 10.2.0.4.0, Linux).
    So I copied the table 1:1 (same structure, same data, same indexes, etc.) in the same area (database, user, tablespace, etc.) and gathered the table stats with the same parameters. The same query on this copied table is faster and the cost is just 3600.
    Why on earth is the query on this new table faster?
    I appreciate any idea.
    Thank you!
    Fibo
    Edited by: user954903 on 13.01.2010 04:23

    Could you please share more information or a link explaining the significance of the SHRINK clause? If it is so useful and can reclaim unused space, why hasn't my database architect suggested it? :) Any disadvantages?
    It moves the high-water mark back to the lowest position, so full table scans work faster in some cases. It can also reduce the number of migrated rows and the number of used blocks.
    The disadvantage is that it involves row movement, so operations that depend on rowid are not reliable while the shrink is running.
    I think it is even better to stop all operations on the table and disable all triggers when shrinking. Another problem is that this process can take a long time.
    Gurus, please correct me if I'm mistaken.
    Edited by: Oleg Gorskin on 13.01.2010 5:50
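    For context, the shrink being discussed looks like this (the table name is a placeholder; SHRINK SPACE requires row movement to be enabled first):
    ALTER TABLE my_table ENABLE ROW MOVEMENT;
    ALTER TABLE my_table SHRINK SPACE;         -- compacts the segment and lowers the high-water mark
    ALTER TABLE my_table DISABLE ROW MOVEMENT;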

Inserting into Table-in-Table after re-executing the Outer Table query

    Hi Gurus,
    I have a problem with inserting into a Table-in-Table after re-executing the Outer Table query.
    The problem:
    I have two tables; both tables are based on VOs; the tables have a VL between them.
    The Inner Table is hidden when the page loads.
    I execute the Outer Table query, open the details to see the Inner Table, and I can add rows to the Inner Table. Everything is OK.
    But after I execute the Outer Table query again, adding new rows to the Inner Table no longer works, even though I execute the Inner Table RowSet query.
    The scenario is:
    In processRequest():
    I call executeQuery() on the Outer Table only.
    (So the details executeQuery() is done by the VL, and it works fine.)
    In processFormRequest():
    The user opens the details to see the Inner Table.
    On some event, I execute the Outer Table query.
    On some other event, I programmatically insert a new row into the Inner Table, and the new row is displayed in the Inner Table as I wanted.
    In this way, one row after another, I can add as many rows as I wish, without any problem.
    (This scenario should behave the same as clicking a button of type "Add-Another-Row", just adding the row automatically.)
    The code for the inserting (as specified in the chapter "Classic Tables" --> "table-in-table"):
    OARow newRow = (OARow) innerRowSet.createRow();
    innerRowSet.insertRow(newRow);
    innerRowSet.setRangeSize(innerRowSet.getRangeSize()+ 1);
    innerRowSet.executeQuery();
    If the user does not cause a re-execution of the Outer Table query, no problem occurs.
    * Should I add something after the Outer Table VO execution?
    * Maybe something in the Inner Table insert code?
    Please help…

    Hi, thanks.
    I don't know which of the Outer Table rows is the current row.
    But only the details (the Inner Tables) that were open before the executeQuery() of the Outer Table stop acting as they should.
    All the details that were closed before the executeQuery() of the Outer Table still work fine.
    Please advise.

What is the execution-speed order of the different mapping types?

    Is the order XSLT, Java (SAX), Java (DOM), graphical (SAX), and ABAP mappings?
    Is there any graphical mapping with a DOM parser?
    Please send the order from fastest to slowest.

    Hi,
    I am not getting exactly what you are looking for;
    I think I already answered your questions in your previous thread:
    Mapping types performance
    Is there any graphical mapping with a DOM parser?
    --> Graphical mapping with a DOM parser is not possible, so you have to go for Java mapping.
    JAVA mapping
    /people/amjad-ali.khoja/blog/2006/02/07/using-dom4j-in-xi--a-more-sophisticated-option-for-xml-processing-than-sap-xml-toolkit
    Mapping Techniques
    Please let me know if there is anything specific you are looking for, so we can narrow down the analysis and answers accordingly.
    Thanks
    Swarup

HT3053 I did something wrong that makes my Mac show just a black screen and a message saying you need to restart your Mac

    I did something wrong that makes my Mac show just a black screen and a message saying you need to restart your Mac, etc. I wanted to talk to anyone from Apple, but I couldn't get my Mac serial number and could do nothing. I turned to someone who formatted the hard disk, and I tried to talk to anyone from Apple, but it seems no one cares. I spent four years saving to be able to buy this Mac, and I don't know what to do; I will never buy a Mac again. Please, someone tell me what to do. The DVD is not working; I press Option and nothing happens. What should I do? Throw it away? Somebody tell me what to do or where to go.

    Just to make certain: you inserted the original installation disc and started the MBP holding down the C key? Did the optical drive reject the disc, or did it just spin? If the former, the lens in the drive may be dirty and need cleaning. If the latter, you will have to take it to an Apple repair facility.
    Ciao.

Firefox is running very slowly on my computer; please guide me on how to make Firefox fast

    I have tried all versions of Firefox on my computer, and they all run very slowly. If there is no solution to make the Firefox browser fast on my computer, then I will remove Firefox from my computer.

    You are using beta software and shouldn't expect that everything works as smoothly as with an official release.
    Create a new profile as a test to check if your current profile is causing the problems.
    See [[Basic Troubleshooting#Make_a_new_profile|Basic Troubleshooting: Make a new profile]]
    There may be extensions and plugins installed by default in a new profile, so check that in "Tools > Add-ons > Extensions & Plugins".
    If that new profile works, then you can transfer some files from the old profile to that new profile (be careful not to copy corrupted files).
    See http://kb.mozillazine.org/Transferring_data_to_a_new_profile_-_Firefox

  • Analyze Vs Gather Table Stats

    hi!
    I have some confusions about analyze and gather table stats command. Please answers my questions to remove these confusions:
    1 - What's the major difference between analyze and gather table stats?
    2 - Which is better?
    3 - Suppose I am running the analyze/gather stats command while some transactions are being performed on the table; what will the effect on performance be?
    Hope you will support me.
    regards
    Irfan Ahmad

    [email protected] wrote:
    1 - What's the major difference between analyze and gather table stats?
    2 - Which is better?
    3 - Suppose I am running the analyze/gather stats command while some transactions are being performed on the table; what will the effect on performance be?
    1. analyze is older and probably not being extended with new functionality/support for new features.
    2. Overall, dbms_stats is better - it should support everything.
    3. Any queries running when the analyze takes place will have to use the old stats.
    Although dbms_stats is probably better, I find the analyze syntax LOTS easier to remember! The temptation for me to be lazy and just use analyze in development is very high. For advanced environments though dbms_stats will probably work better.
    There is one other minor difference between analyze and dbms_stats. There's a column in the user/all/dba_histograms view called endpoint_actual_value that I have seen analyze populate with the data value the histogram was created for but have not seen dbms_stats populate.
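    To make the comparison concrete, here is the same job done both ways against a placeholder table (EMP here is just an example name):
    -- Old ANALYZE syntax (still accepted, no longer the recommended way):
    ANALYZE TABLE emp COMPUTE STATISTICS;

    -- dbms_stats equivalent, the supported approach:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'EMP',
        cascade => TRUE);   -- gather statistics on the indexes as well
    END;
    /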

Query optimization - query is taking a long time even though there is no table scan in the execution plan

    Hi All,
    The query below takes a very long time to execute even though all required indexes are present.
    Also, there is no table scan in the execution plan. I have done a lot of research but am unable to find a solution.
    Please help; this is needed very urgently. Thanks in advance. :)
    WITH cte
    AS (
        SELECT Acc_ex1_3
        FROM Acc_ex1
        INNER JOIN Acc_ex5 ON (
            Acc_ex1.Acc_ex1_Id = Acc_ex5.Acc_ex5_Id
            AND Acc_ex1.OwnerID = Acc_ex5.OwnerID
        )
        WHERE (
            cast(Acc_ex5.Acc_ex5_92 AS DATETIME) >= '12/31/2010 18:30:00'
            AND cast(Acc_ex5.Acc_ex5_92 AS DATETIME) < '01/31/2014 18:30:00'
        )
    )
    SELECT DISTINCT R.ReportsTo AS directReportingUserId
        ,UC.UserName AS EmpName
        ,UC.EmployeeCode AS EmpCode
        ,UEx1.Use_ex1_1 AS PortfolioCode
        ,(
            SELECT TOP 1 TerritoryName
            FROM UserTerritoryLevelView
            WHERE displayOrder = 6
                AND UserId = R.ReportsTo
        ) AS BranchName
        ,GroupsNotContacted AS groupLastContact
        ,GroupCount AS groupTotal
    FROM ReportingMembers R
    INNER JOIN TeamMembers T ON (
        T.OwnerID = R.OwnerID
        AND T.MemberID = R.ReportsTo
        AND T.ReportsTo = 1
    )
    INNER JOIN UserContact UC ON (
        UC.CompanyID = R.OwnerID
        AND UC.UserID = R.ReportsTo
    )
    INNER JOIN Use_ex1 UEx1 ON (
        UEx1.OwnerId = R.OwnerID
        AND UEx1.Use_ex1_Id = R.ReportsTo
    )
    INNER JOIN (
        SELECT Accounts.AssignedTo
            ,count(DISTINCT Acc_ex1_3) AS GroupCount
        FROM Accounts
        INNER JOIN Acc_ex1 ON (
            Accounts.AccountID = Acc_ex1.Acc_ex1_Id
            AND Acc_ex1.Acc_ex1_3 > '0'
            AND Accounts.OwnerID = 109
        )
        GROUP BY Accounts.AssignedTo
    ) TotalGroups ON (TotalGroups.AssignedTo = R.ReportsTo)
    INNER JOIN (
        SELECT Accounts.AssignedTo
            ,count(DISTINCT Acc_ex1_3) AS GroupsNotContacted
        FROM Accounts
        INNER JOIN Acc_ex1 ON (
            Accounts.AccountID = Acc_ex1.Acc_ex1_Id
            AND Acc_ex1.OwnerID = Accounts.OwnerID
            AND Acc_ex1.Acc_ex1_3 > '0'
        )
        INNER JOIN Acc_ex5 ON (
            Accounts.AccountID = Acc_ex5.Acc_ex5_Id
            AND Acc_ex5.OwnerID = Accounts.OwnerID
        )
        WHERE Accounts.OwnerID = 109
            AND Acc_ex1.Acc_ex1_3 NOT IN (
                SELECT Acc_ex1_3
                FROM cte
            )
        GROUP BY Accounts.AssignedTo
    ) TotalGroupsNotContacted ON (TotalGroupsNotContacted.AssignedTo = R.ReportsTo)
    WHERE R.OwnerID = 109
    Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran

    Hi All,
    Thanks for the replies.
    I have optimized that query to make it run in a few seconds.
    Here is my final query.
    select ReportsTo as directReportingUserId,
           UserName AS EmpName,
           EmployeeCode AS EmpCode,
           Use_ex1_1 AS PortfolioCode,
           BranchName,
           GroupInfo.groupTotal,
           GroupInfo.groupLastContact,
           case when exists
                (select 1 from ReportingMembers RM
                 where RM.ReportsTo = UserInfo.ReportsTo
                 and RM.MemberID <> UserInfo.ReportsTo
                ) then 0 else UserInfo.ReportsTo end as memberid1,
           (select code from Regions where ownerid = 109 and name = UserInfo.BranchName) as BranchCode,
           ROW_NUMBER() OVER (ORDER BY directReportingUserId) AS ROWNUMBER
    FROM (
        select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode, UEx1.Use_ex1_1,
               (select top 1 TerritoryName
                from UserTerritoryLevelView
                where displayOrder = 6
                and UserId = R.ReportsTo) as BranchName,
               case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
        from ReportingMembers R
        INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
        inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
        inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
        inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
        union
        select distinct R.ReportsTo, UC.UserName, UC.EmployeeCode, UEx1.Use_ex1_1,
               (select top 1 TerritoryName
                from UserTerritoryLevelView
                where displayOrder = 6
                and UserId = R.ReportsTo) as BranchName,
               case when R.ReportsTo = Accounts.AssignedTo then Accounts.AssignedTo else 0 end as memberid1
        from ReportingMembers R
        --INNER JOIN TeamMembers T ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
        inner join UserContact UC on (UC.CompanyID = R.OwnerID and UC.UserID = R.ReportsTo)
        inner join Use_ex1 UEx1 on (UEx1.OwnerId = R.OwnerID and UEx1.Use_ex1_Id = R.ReportsTo)
        inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
        where R.MemberID = 1
    ) UserInfo
    inner join (
        select directReportingUserId, sum(Groups) as groupTotal, SUM(GroupsNotContacted) as groupLastContact
        from (
            select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
                   case when Acc_ex5.Acc_ex5_92 between GETDATE() - 365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
            FROM ReportingMembers R
            INNER JOIN TeamMembers T
                ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo AND T.ReportsTo = 1)
            inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
            inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
            inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID)
            --where TerritoryID in (select ChildRegionID RegionID from RegionWithSubRegions where OwnerID = 109 and RegionID = 729)
            union
            select distinct R.ReportsTo as directReportingUserId, Acc_ex1_3 as GroupName, 1 as Groups,
                   case when Acc_ex5.Acc_ex5_92 between GETDATE() - 365*10 and GETDATE() then 1 else 0 end as GroupsNotContacted
            FROM ReportingMembers R
            INNER JOIN TeamMembers T
                ON (T.OwnerID = R.OwnerID AND T.MemberID = R.ReportsTo)
            inner join Accounts on (Accounts.OwnerID = 109 and Accounts.AssignedTo = R.ReportsTo)
            inner join Acc_ex1 on (Acc_ex1.OwnerID = 109 and Acc_ex1.Acc_ex1_Id = Accounts.AccountID and Acc_ex1.Acc_ex1_3 > '0')
            inner join Acc_ex5 on (Acc_ex5.OwnerID = 109 and Acc_ex5.Acc_ex5_Id = Accounts.AccountID)
            --where TerritoryID in (select ChildRegionID RegionID from RegionWithSubRegions where OwnerID = 109 and RegionID = 729)
            where R.MemberID = 1
        ) GroupWiseInfo
        group by directReportingUserId
    ) GroupInfo
        on UserInfo.ReportsTo = GroupInfo.directReportingUserId
    Please mark it as an answer/helpful if you find it as useful. Thanks, Satya Prakash Jugran

  • Slow, then fast execution of same query

    I have a test query:
    dbxml> query 'count(collection()/Quots/QuotVerificationInstance/@lastModifiedDate[.="2009-09-07"])'
    With this index:
    Index: node-attribute-substring-string for node {}:lastModifiedDate
    The first time I run the query in the shell, it takes a while:
    Query - Finished query execution, time taken = 29656ms
    The second time I run the query, it's a more acceptable speed:
    Query - Finished query execution, time taken = 735ms
    Can anyone explain the difference in search speed? The slow, fast pattern is pretty consistent (I've tried fresh shell sessions a few times and get the same initial slow search followed by the quicker one.)
    What I'm trying to do is allow on-demand generation of a table showing the number of records modified per day and per month. The test date I'm searching for above is the date of insertion of records so it returns a comparatively large result set at the moment, but 29-odd seconds is way too slow to make the query comfortable. I've added a substring index because sometimes I want a partial date search (e.g. when counting records modified per month). I remember reading that a substring index doubles as an equality index. The date is stored simply as a string, not a date type.
    What am I missing? I'd like to understand the difference in execution times, but basically I need it to run more quickly for simultaneous date queries. I'd be grateful for any pointers.
    Tim

    Hi Viyacheslav
    I think it's perhaps too much trouble for us both for me to try and share the container -- my managers would want assurances about copyright (the data is part of a previously published dictionary), and I'd have to build a smaller one as the current file is > 500MB. But thanks for your interest.
    My container setup is pretty simple. It's around 45000 small XML files added to a node container for a read only database (no environment). I have 16 indices supporting a range of different searches and a couple of reports, and the only index relevant to the @lastModifiedDate search is the node-attribute-substring-string index mentioned above (when I tried other edge or equality indexes I deleted the substring index). The query script I intended would be instantiated purely to run the one query and then die each time, so I don't see there being any chance of getting past an initial slow query if that's the behaviour. Because that first query finds around 37000 matches among the 45000 dates, it's understandable that it would take a while but I'm not sure why it speeds up in the shell from the second time round. An obvious suggestion would be that some form of optimisation is taking place in the shell, but I don't understand the workings of DBXML well enough to guess how. Even if this is the case, knowing that it happens wouldn't help me speed up my query due to the life cycle of the script.
    Perhaps I should just generate these stats on a daily basis rather than generating them on-demand, or else try a different approach at getting the counts. But thanks again for the comments.
    Tim

    I have been using TIme Machine with Time Capsule for about a month. I just noticed that when i enter time machine application and I look in Finder, that "all images" and all the other items under Search are dimmed out. Any explanations? thanks