Query on GL_BALANCES Table is very slow

Dear Members,
I have written a query against the GL_BALANCES table, as follows:
SQL>
select (nvl(sum(gb.period_net_dr), 0) - nvl(sum(gb.period_net_cr), 0))
INTO V_AMT
from gl_balances gb,
gl_code_combinations gcc,
fnd_flex_value_hierarchies ffvh
where gb.set_of_books_id = 7
and gb.actual_flag = 'B'
and gb.period_year = P_YEAR --'2010'
and gb.budget_version_id = P_BUD_VER_ID--1234
and gb.code_combination_id = gcc.code_combination_id
and gcc.segment3 >= ffvh.child_flex_value_low
and gcc.segment3 <= ffvh.child_flex_value_high
and ffvh.flex_value_set_id = 1012576
and ffvh.parent_flex_value = P_FLEX_VAL;
The above query takes a long time to complete.
The execution plan is as follows:
SELECT STATEMENT, GOAL = CHOOSE     Cost=49885     Cardinality=1     Bytes=64
 SORT AGGREGATE                                    Cardinality=1     Bytes=64
  MERGE JOIN               Cost=49885     Cardinality=12     Bytes=768
   SORT JOIN               Cost=49859     Cardinality=2192     Bytes=74528
    HASH JOIN              Cost=49822     Cardinality=2192     Bytes=74528
     TABLE ACCESS FULL     Object owner=GL     Object name=GL_BALANCES     Cost=47955     Cardinality=2192     Bytes=50416
     TABLE ACCESS FULL     Object owner=GL     Object name=GL_CODE_COMBINATIONS     Cost=1854     Cardinality=526850     Bytes=5795350
   FILTER
    SORT JOIN
     INDEX RANGE SCAN      Object owner=APPLSYS     Object name=FND_FLEX_VALUE_HIERARCHIES_N1     Cost=2     Cardinality=2     Bytes=60
Can anyone please help me tune this query?
Thanks in advance.
Best regards.

As per the other replies, you've not really given enough information to go on - what are you trying to achieve, versions, etc.
On my old apps 11.5.8 system, the explain plan for your query uses GL_CODE_COMBINATIONS_U1 rather than a full scan of gl_code_combinations.
Check your stats are up to date (select table_name, num_rows, last_analyzed from dba_tables where ...)
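For example, a minimal check (the owner filter is an assumption - adjust it to your schemas), followed by a refresh if the stats turn out to be stale:
select table_name, num_rows, last_analyzed
from dba_tables
where owner = 'GL'
and table_name in ('GL_BALANCES', 'GL_CODE_COMBINATIONS');
-- if stale (on e-Business Suite, FND_STATS is generally preferred over plain DBMS_STATS):
exec dbms_stats.gather_table_stats('GL', 'GL_BALANCES');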
See if you can also use period_name and/or period_set_name (or period_num) from GL_PERIODS rather than period_year (i.e. use P_YEAR to look up the period_name/period_set_name/period_num from gl_periods). It might be faster to do each period separately and then consolidate for the whole year, as gl_balances has indexes on the period_name, period_set_name and period_num columns; see the sketch below.
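Something along these lines, perhaps - a sketch only, and the period_set_name filter uses a hypothetical calendar name you would replace with your own:
select (nvl(sum(gb.period_net_dr), 0) - nvl(sum(gb.period_net_cr), 0))
INTO V_AMT
from gl_balances gb,
gl_code_combinations gcc,
fnd_flex_value_hierarchies ffvh
where gb.set_of_books_id = 7
and gb.actual_flag = 'B'
and gb.period_name in (select gp.period_name
                       from gl_periods gp
                       where gp.period_year = P_YEAR
                       and gp.period_set_name = 'YOUR_CALENDAR') -- hypothetical calendar name
and gb.budget_version_id = P_BUD_VER_ID
and gb.code_combination_id = gcc.code_combination_id
and gcc.segment3 >= ffvh.child_flex_value_low
and gcc.segment3 <= ffvh.child_flex_value_high
and ffvh.flex_value_set_id = 1012576
and ffvh.parent_flex_value = P_FLEX_VAL;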
regards, Ivan

Similar Messages

  • Stepping through table is very slow

    LV 7.0 on Win 2K
    I use a table to display records of information. The table is filled once at entry to a user interface subprogram, and the user has the possibility to select certain records by clicking or by stepping through with the cursor buttons.
    I have also written subprograms to sort the table by the various columns. The active row is used to provide more information e.g. displaying it in an image.
    Usually you would click on a line with the mouse, but sometimes it is more convenient to step through subsequent lines with the cursor buttons.
    As it turned out the latter is sometimes very slow.
    I tried to trace that behaviour in my program, but it seems that the problem is within LabVIEW. I reduced the VI to a simple loop
    where nothing is done with the table except displaying the value. If you click on a line, the value follows instantaneously. If you click on a line at the top of the table and then step down with the cursor buttons, the value also follows very quickly. But when you do the same while being at the bottom, each step takes several seconds.
    I read several entries in the forum that describe similar problems when updating the table. Please note that this is a very static table here. The only thing that is happening, is a user interacting with it with the keyboard.
    The answers I have seen so far, suggesting to keep the table small or to switch to other indicators etc., seem to overlook the reason for using a table in the first place: to have a compact display of a lot of ordered information. If the cause really is within the LV runtime, then NI has a nice task to improve LV next time :-(
    Gabi
    7.1 -- 2013
    CLA
    Attachments:
    Tableaccess.zip ‏65 KB

    Unfortunately, I can confirm that LabVIEW 7.1 shows the same slow behavior. Actually, pressing "down arrow" on my rig is so slow it seems to lock up the PC. (W2k)
    Interestingly, if you capture the "up" and "down" arrows with a filter event, there is no slowdown! Maybe you can use the attached rough draft (LabVIEW 7.0) to improve the UI experience for the user until NI fixes the issue.
    Enjoy!
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    Tableaccess.vi ‏266 KB

  • Adding a new column to a big table is very slow

    All,
    I am trying to add a column to a table with a default value of 0. The table has about 15M rows. The ALTER TABLE statement is extremely slow.
    I am using 11.1.0.7 on Windows.
    Thanks in advance

    Please be careful when using this new 11g feature that can add columns to tables without updating each row. If you use a default value on a new NOT NULL column, update triggers that reference that column's :new value could have problems. There is a bug that has been around for at least 3 years, which Oracle claims is "not a bug", that could cause problems in your triggers.
    The issue is that triggers will still see NULL for the :new.column value even though the column has a NOT NULL default. Even though you can select from the table and the results will show the default value, which is pulled from the data dictionary, the trigger cannot handle it. It appears to pull the value from the data block, not the data dictionary, so if the row has never been updated, those two values will be out of sync. The fix, shown below, is to update each row.
    For example, in 11.2.0.2:
    1. Create a new table, insert one row, and add a new column (not null, with default value):
    create table t (c1 number, c2 number);
    insert into t values (1,1);
    alter table t add (c3 number default 0 not null);
    Select from it and everything looks fine. The value for C3 is 0, my default. Perfect.
    SQL> select * from t;
            C1         C2         C3
             1          1          0
    My new column C3 looks fine. Even though that row was never updated, my query pulled the default value of 0 from the data dictionary. Great!
    But now add a simple trigger:
    create or replace trigger t_bur_tr
    before update on t for each row
    begin
      :new.c3 := :new.c3;
    end;
    /
    I am updating C1, one of my original columns. The trigger looks at the new value of C3 anyway, but because I am not updating it, it should equal its existing value. But this is what I get:
    SQL> update t set c1 = 9999 where c1 = 1;
    update t set c1 = 9999 where c1 = 1
    ORA-01407: cannot update ("AMARTIN"."T"."C3") to NULL
    SQL>
    The workaround is to update every row. (Defeating the purpose of the cool new 11g feature that can supposedly add columns without updating each row.)
    SQL> update t set c3 = c3;
    1 row updated
    SQL> commit;
    Commit complete
    SQL> select * from t;
            C1         C2         C3
             1          1          0
    SQL> update t set c1 = 9999 where c1 = 1;
    1 row updated
    SQL> select * from t;
            C1         C2         C3
          9999          1          0
    Now it works as expected.
    Just a heads-up on this unexpected feature. Not a bug? Sure looks like one to me.
    Edit: Appears to be fixed in 11.2.0.3

  • SQL query with parallel hint running very slow

    I have a SQL query which joins three huge tables (given below):
         insert /*+ append */ into final_table (oid, rmeth, id, expdt, crddt, coupon, bitfields, processed_count)
         select /*+ full(t2) parallel(t2,31) full(t3) parallel(t3,31)*/
         seq_final_table.nextval, '200', t2.id, t3.end_date, '1/jul/2009',123,t2.bitfield, 0
         from table1 t1, table2 t2, table3 t3 where
         t1.id=t2.id and
         t2.pid=t3.pid and
         t2.vid=t3.vid and
         t3.end_date is not null and
         (trunc(t1.expiry_date) != trunc(t3.end_date) or trim(t1.expiry_date) is null);
         Below are some statistics for the three tables:
         Table_Name    RowCount      Size(MB)
         table1         36469938          532
         table2        242172205        39184
         table3        231756758        29814
         The above query ran for 30+ hours and returned with no rows inserted into final_table; I did not get any error message either.
         But when I ran the query with table1 containing just 10000 records, it completed successfully within 20 minutes.
         Can anyone please help optimize the above query?
    Edited by: jaysara on Aug 18, 2009 11:51 PM

    As a side note: you probably don't want to insert a string into a date column, do you?
    Under the assumption that crddt is of datatype DATE,
    crddt = '1/jul/2009'
    needs to be changed into
    crddt = to_date('01/07/2009', 'dd/mm/yyyy')
    This is datatype-correct and NLS-independent.

  • Deletion of data from a 56 GB table is very slow

    Hi All,
    I want to delete some old data from a table in Oracle 10g.
    Please could anyone suggest how to improve the deletion process?
    Thanks & Regards,
    Ram

    marcinp1 wrote:
    What kind of problems do you expect after re-use of that space ?
    I can see only problem with clustering factor changes after reusing space but that change can go into both direction - it can be better or worse clustering factor for indexes on that table.
    Marcin,
    A key step in getting the physical implementation of a database right is to ask: "how much data, and how is it scattered" - which is what you're on about when you mention the "clustering factor". It's important, by the way, to make sure you distinguish carefully between the index statistic "clustering_factor" and the actual pattern of data scatter: the statistic should reflect reality, but since it doesn't, you need to make sure that everyone knows which one you are thinking of.
    The data scatter is the thing that can change most significantly when you try to reclaim space - and that's why you have to think about when the re-use takes place and how it takes place. The fact that it MAY be better or MAY be worse is irrelevant - you still have to THINK about what's going to happen, and how you can affect it, before you implement a change.
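    For what it's worth, the statistic itself is easy to inspect before and after any reorganisation (a minimal sketch; the table name is a placeholder):
    select index_name, num_rows, clustering_factor
    from user_indexes
    where table_name = 'YOUR_TABLE';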
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    "Science is more than a body of knowledge; it is a way of thinking" Carl Sagan

  • Query runs very slow and sometimes freezes

    Hi Experts,
    Hi Experts,
    I have just installed SAP GUI 710, but when I open a query in the Analyzer it runs very slowly, and if the variables are set for more than 6 months it just sits there and finally times out. It takes at least 5 minutes to open the query, if it opens at all. Can you please advise me on how to resolve this issue?
    Thanks, points will be awarded.

    It is worth running transaction RSRT.
    Ravi Thothadri

  • Retrieving records from an Oracle DB very slow - please help

    Hi, I'm writing VB code to retrieve records from an Oracle database server, version 8. I'm using ADODB to retrieve the records from various tables. Unfortunately, one of the tables is very slow to respond, although it only contains around 204,900 records. The SQL statement used to retrieve the records is a simple one containing only a WHERE clause. What could make the retrieval time so slow? Could it be indexing? The Oracle driver? The hardware spec? Any suggestions to improve the performance? Thanks!

    Well, there are a few things to consider...
    First, can you try executing your query via SQL*Plus? If there are database tuning problems, your query will be slow no matter where you run it.
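    For instance, a quick way to time the query and see its plan from SQL*Plus (table name and predicate are placeholders):
    SET TIMING ON
    SET AUTOTRACE TRACEONLY EXPLAIN STATISTICS
    SELECT * FROM your_table WHERE your_column = 'some_value';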
    Second, are you retrieving significantly more rows in this query than in your other queries? It can take a significant amount of time to retrieve records to the client, even if it's quick to select them.
    Justin

  • Database is very slow

    Dear All,
    Certain queries on my database are very slow.
    One of the queries sometimes does not complete at all; it involves a big table of about 5 million records.
    Some Facts About my Database.
    OS: SUN Solaris
    DataBase: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
    RAM:32GB
    Dedicated oracle server
    Processors:16
    DB Block Size:2048
    Large pool:150994944
    log_buffer:10485760
    shared_pool_size:150994944
    There are in total 21 production databases running on the same box.
    Previously my buffer cache hit ratio was 27%,
    so I recommended an increase in DB_CACHE_SIZE from 101MB to 300MB
    and in SGA_MAX_SIZE from 600MB to 800MB.
    As a result, the buffer cache hit ratio increased to 75%,
    but the queries still run slowly.
    I even tried partitioning the big table; it didn't help.
    My question is: is the system overloaded, or will increasing db_cache_size further help?
    Regards.

    By itself the buffer cache hit ratio is a meaningless statistic. It can in fact be a misleading indicator, since it does not actually reflect application performance.
    Tune the query. Make sure it is running as well as it can.
    Then look at overall machine resources: average and peak CPU, memory, and IO loads.
    If spare resources exist, then consider giving more resources to the more important databases on the system.
    Document any performance changes that occur after each change. It is possible that the database performance problem is latching, that is, shared pool access, and you might need space in the shared pool more than in the buffers. It depends on the application and user load.
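    For example, a quick look at the top system-wide wait events can show whether the instance is more I/O-bound or latch-bound; a minimal sketch (v$system_event is available in 9i, and the filter just cuts idle network waits):
    select event, total_waits, time_waited
    from v$system_event
    where event not like 'SQL*Net%'
    order by time_waited desc;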
    Why are you using a 2K database block size? I would think 4K or 8K would probably be better, even for a true OLTP system with almost all access by index.
    To get help on the query you will need to post it, the explain plan, and information on available indexes, table row counts, and perhaps column statistics for the indexed columns and filter conditions.
    HTH -- Mark D Powell --

  • Query with parameter as Array (UDT) very slow

    Hi.
    I have the following problem. I am trying to use Oracle Instant Client 11 and ODP.NET to pass arrays as bind parameters in SELECT statements. I got it working, but it runs very, very slowly. Example:
    - Initial query:
    SELECT tbl1.field1, tbl1.field2, tbl2.field1, tbl2.field2 ... FROM tbl1
    LEFT JOIN tbl2 ON tbl1.field11=tbl2.field0
    LEFT JOIN tbl3 ON tbl2.field11=tbl3.field0 AND tbl1.field5=tbl3.field1
    ...and another LEFT JOINS
    WHERE
    tbl1.field0 IN ('id01', 'id02', 'id03'...)
    this query with 100 elements in "IN" on my database takes 3 seconds.
    - Query with Array bind:
    in Oracle I created a UDT: create or replace type myschema.mytype as table of varchar2(1000)
    then, as described in the Oracle example, I wrote a few classes (a factory and one implementing IOracleCustomType) and used the type in the query:
    instead of IN ('id01', 'id02', 'id03'...) I have tbl1.field0 IN (select column_value from table(:prmTable)), where :prmTable is the bound array.
    This query takes 190 seconds!!! Why? It works, but the HDD of the Oracle server works very hard, and it takes too long.
    Our Oracle server is 10g.
    PS: I tried using only 5 elements in the array - the same result, it also takes 190 seconds...
    Please help!

    I did something like this some time ago (it was a packaged procedure):
    Procedure p(p_one in datatype, p_two in datatype, p_dataset out sys_refcursor) is
      the_sql varchar2(32000);
    begin
      the_sql := 'WITH NOTIFICACAO AS( ' ||
                 '      SELECT ' ||
                 '       t1.cd_consultora, ' ||
                 '                               where       t1.dt_notificacao_cn >= to_date(''01/09/2006'',''dd/mm/yyyy'') ' ||  -- note the doubled quotes
                 '           where rownum <= :W_TO_REC) ' ||   -- parameter 1
                 '         where r_linha >= :W_FROM_REC ';     -- parameter 2
      open p_dataset for the_sql using p_one, p_two;  -- just by the book
    end p;
    if I remember correctly.
    Regards
    Etbin

  • Very Slow Query due to Bitmap Conversion

    I have a strange problem with the performance of a spatial query. If I perform a 'SELECT non_geom_column FROM my_table WHERE complicated_join_query' the result comes back sub-second. However, when I replace the column selected with geometry and perform 'SELECT geom_column FROM my_table WHERE same_complicated_join_query' the response takes over a minute.
    The issue is that in the second case, despite the identical where clause, the explain plan is significantly different. In the 'select geom_column' query there is a BITMAP CONVERSION (TO ROWIDS) which accounts for all of the extra time, whereas in the 'select other_column' query that conversion is replaced with TABLE ACCESS (BY INDEX ROWID), which is near-instant.
    I have tried putting in some hints, although I do not have much experience with hints, and have also tried nesting the query in various sub-selects. Whatever I try I can not persuade the explain plan to drop the bitmap conversion when I select the geometry column. The full query and an explanation of that query are below. I have run out of things to try, so any help or suggestions at all would be much appreciated.
    Regards,
    Chris
    Explanation and query
    My application allows users to select geometries from a map image through clicking, dragging a box and various other means. The image is then refreshed - highlighting geometries based on the query with which I am having trouble. The user is then able to deselect any of those highlighted geometries, or append others with additional clicks or dragged selections.
    If there are 2 (or any even number of) clicks within the same geometry then that geometry is deselected. Alternatively the geometry could have been selected through an intersection with a dragged box, and then clicked in to deselect - again an even number of selections. Any odd number of selections (i.e. selecting, deselecting, then selecting again) would result in the geometry being selected.
    The application can not know if the multiple user clicks are in the same geometry, as it simply has an image to work with, so all it does is pass all the clicks so far to the database to deal with.
    My query therefore does each spatial point or rectangle query in turn and then appends the unique key for the rows each returned to a list. After performing all of the queries it groups the list by the key and the groups with an odd total are 'selected'. To do this logic in a single where clause I have ended up with nested select statements that are joined with union all commands.
    The query is therefore..
    SELECT
    --the below column (geometry) makes it very slow...replacing it with any non-spatial column takes less than 1/100 of the time - that is my problem!
    geometry
    FROM
    my_table
    WHERE
    primary_key IN
    (
      SELECT primary_key FROM
      (
        SELECT primary_key FROM my_table WHERE
        sdo_relate(geometry, mdsys.sdo_geometry(2003, 81989, NULL, sdo_elem_info_array(1, 1003, 3), sdo_ordinate_array( rectangle co-ords )), 'mask=anyinteract') = 'TRUE'
        UNION ALL SELECT primary_key FROM my_table WHERE
        sdo_relate(geometry, mdsys.sdo_geometry(2001, 81989, sdo_point_type( point co-ords , NULL), NULL, NULL), 'mask=anyinteract') = 'TRUE'
        --potentially more 'union all select...' here
      )
      GROUP BY primary_key HAVING mod(count(*),2) = 1
    )
    AND
    --the below is the bounding rectangle of the whole image to be returned
    sdo_filter(geometry, mdsys.sdo_geometry(2003, 81989, NULL, sdo_elem_info_array(1, 1003, 3), sdo_ordinate_array( outer rectangle co-ords )), 'mask=anyinteract') = 'TRUE'

    Hi
    Thanks for the reply. After a lot more googling, it turns out this is a general Oracle problem and is not solely related to use of the GEOMETRY column. It seems that sometimes the Oracle optimiser makes an arbitrary decision to do bitmap conversion, and no amount of hints will get it to change its mind!
    One person reported a similarly negative change after table statistics collection had run.
    Why changing the columns being retrieved should change the execution path, I do not know.
    We have a numeric primary key which is always set to a positive value. When I added "AND primary_key_column > 0" (a pretty pointless clause) the optimiser changed the way it works and we got it working fast again.
    Chris

  • Query can run in Oracle 10g but very slow in 11g

    Hi,
    We've just migrated to Oracle 11g and we noticed that some of our views are very slow (they take seconds in 10g but 30 minutes in 11g), and the tables involved are local tables.
    Do any of you face the same issue?
    This is our query:
    SELECT
    A.wellbore
    ,a.depth center
    ,d.MD maxbc
    ,d.XDELT xbc
    ,d.YDELT ybc
    ,e.MD minac
    ,e.XDELT xac
    ,e.YDELT yac
    from
    table_A d, table_A e, table_B a
    where a.wellbore = d.WELLBORE (+)
    and a.wellbore = e.WELLBORE(+)
    and d.MD = (select max(MD) from table_A b where b.MD < a.depth and
    d.wellBORE = b.wellBORE)
    and e.md = (select min(md) from table_A c where c.MD > a.depth and
    e.wellBORE = c.wellBORE);

    Thanks, I will move it to the correct forum.
    Rafi,
    I built the indexes and it is still slow. I am querying a view from another database, which is a 10g instance.
    Moved: Query can run in Oracle 10g but very slow in 11g
    Edited by: 924400 on Apr 1, 2012 6:03 PM
    Edited by: 924400 on Apr 1, 2012 6:26 PM

  • Spatial query very slow

    I can execute this query, but it is very slow. I have 2 tables: A with 250,000 sites and B with 250,000 points. I want to determine how many risks are inside the sites.
    Thanks
    JGS
    SELECT B.ID, A.ID, A.GC, A.SUMA
    FROM DBG_RIESGOS_CUMULOS_SITE A, DBG_RIESGOS_CUMULOS B
    WHERE A.GC = 'PATRIMONIAL FENOMENOS SISMICOS' AND A.GC=B.GC
    AND SDO_RELATE(B.GEOMETRY, A.GEOMETRY, 'MASK=INSIDE') = 'TRUE';
    100 records in 220 seconds - slow!

    I would do two things:
    1) Ensure Oracle is patched with the latest 10.2.0.4 patches
    This is the list I've been working with:
    Patch 7003151
    Patch 6989483
    Patch 7237687
    Patch 7276032
    Patch 7307918
    2) Write the query like this
    SELECT /*+ ORDERED*/ B.ID, A.ID, A.GC, A.SUMA
    FROM DBG_RIESGOS_CUMULOS B, DBG_RIESGOS_CUMULOS_SITE A
    WHERE B.GC = 'PATRIMONIAL FENOMENOS SISMICOS'
    AND A.GC=B.GC
    AND SDO_ANYINTERACT(A.GEOMETRY, B.GEOMETRY) = 'TRUE';

  • Very Slow Query with CTE inner join

    I have 2 tables (heavily simplified here to show relevant columns):
    CREATE TABLE tblCharge
    (ChargeID int NOT NULL,
    ParentChargeID int NULL,
    ChargeName varchar(200) NULL)
    CREATE TABLE tblChargeShare
    (ChargeShareID int NOT NULL,
    ChargeID int NOT NULL,
    TotalAmount money NOT NULL,
    TaxAmount money NULL,
    DiscountAmount money NULL,
    CustomerID int NOT NULL,
    ChargeShareStatusID int NOT NULL)
    I have a very basic View to Join them:
    CREATE VIEW vwBASEChargeShareRelation as
    Select c.ChargeID, ParentChargeID, s.CustomerID, s.TotalAmount, isnull(s.TaxAmount, 0) as TaxAmount, isnull(s.DiscountAmount, 0) as DiscountAmount
    from tblCharge c inner join tblChargeShare s
    on c.ChargeID = s.ChargeID Where s.ChargeShareStatusID < 3
    GO
    I then have a view containing a CTE to get the children of the Parent Charge:
    ALTER VIEW [vwChargeShareSubCharges] AS
    WITH RCTE AS
    (
    SELECT ParentChargeId, ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) as TotalAmount, ISNULL(TaxAmount, 0) as TaxAmount,
    ISNULL(DiscountAmount, 0) as DiscountAmount, CustomerID, ChargeID as MasterChargeID
    FROM vwBASEChargeShareRelation Where ParentChargeID is NULL
    UNION ALL
    SELECT rh.ParentChargeID, rh.ChargeID, Lvl+1 AS Lvl, ISNULL(rh.TotalAmount, 0), ISNULL(rh.TaxAmount, 0), ISNULL(rh.DiscountAmount, 0)
    , rh.CustomerID, rc.MasterChargeID
    FROM vwBASEChargeShareRelation rh
    INNER JOIN RCTE rc ON rh.ParentChargeID = rc.ChargeID and rh.CustomerID = rc.CustomerID
    )
    Select MasterChargeID as ChargeID, CustomerID, Sum(TotalAmount) as TotalCharged, Sum(TaxAmount) as TotalTax, Sum(DiscountAmount) as TotalDiscount
    from RCTE
    Group by MasterChargeID, CustomerID
    GO
    So far so good, I can query this view and get the total cost for a line item including all children.
    The problem occurs when I join this table. The query:
    Select t.* from vwChargeShareSubCharges t
    inner join
    tblChargeShare s
    on t.CustomerID = s.CustomerID
    and t.MasterChargeID = s.ChargeID
    Where s.ChargeID = 1291094
    Takes around 30 ms to return a result (tblCharge and Charge Share have around 3.5 million records).
    But the query:
    Select t.* from vwChargeShareSubCharges t
    inner join
    tblChargeShare s
    on t.CustomerID = s.CustomerID
    and t.MasterChargeID = s.ChargeID
    Where InvoiceID = 1045854
    Takes around 2 minutes to return a result - even though the only charge with that InvoiceID is the same charge as the one used in the previous query.
    The same thing occurs if I do the join in the same query that the CTE is defined in.
    I ran the execution plan for each query. The first (fast) query looks like this:
    The second (slow) query looks like this:
    I am at a loss, and my skills at decoding execution plans to resolve this are lacking.
    I have separate indexes on tblCharge.ChargeID, tblCharge.ParentChargeID, tblChargeShare.ChargeID, tblChargeShare.InvoiceID, tblChargeShare.ChargeShareStatusID
    Any ideas? Tested on SQL 2008R2 and SQL 2012

    >> The database is linked [sic] to an established app and the column and table names can't be changed. <<
    Link? That is a term from pointer chains and network databases, not SQL. I will guess that means the app dates from the old pre-RDBMS days and you are screwed.
    >> I am not too worried about the money field [sic], this is used for money and money based calculations so the precision and rounding are acceptable at this level. <<
    Field is a COBOL concept; columns are totally different. MONEY is how Sybase mimics the PICTURE clause that puts currency signs, commas, period, etc in a COBOL money field. 
    Using more than one operation (multiplication or division) on money columns will produce severe rounding errors. A simple way to visualize money arithmetic is to place a ROUND() function call after every operation. For example,
    Amount = (Portion / total_amt) * gross_amt
    can be rewritten using money arithmetic as:
    Amount = ROUND(ROUND(Portion / total_amt, 4) * gross_amt, 4)
    Rounding to four decimal places might not seem an issue, until the numbers you are using are greater than 10,000.
    BEGIN
    DECLARE @gross_amt MONEY,
     @total_amt MONEY,
     @my_part MONEY,
     @money_result MONEY,
     @float_result FLOAT,
     @all_floats FLOAT;
     SET @gross_amt = 55294.72;
     SET @total_amt = 7328.75;
     SET @my_part = 1793.33;
     SET @money_result = (@my_part / @total_amt) * @gross_amt;
     SET @float_result = (@my_part / @total_amt) * @gross_amt;
     SET @all_floats = (CAST(@my_part AS FLOAT)
      / CAST(@total_amt AS FLOAT))
      * CAST(@gross_amt AS FLOAT);
     SELECT @money_result, @float_result, @all_floats;
    END;
    @money_result = 13525.09 -- incorrect
    @float_result = 13525.0885 -- incorrect
    @all_floats = 13530.5038673171 -- correct, with a -5.42 error
    >> The keys are ChargeID(int, identity) and ChargeShareID(int, identity). <<
    Sorry, but IDENTITY is not relational and cannot be a key by definition. But it sure works just like a record number in your old COBOL file system. 
    >> .. these need to be int so that they are assigned by the database and unique. <<
    No, the data type of a key is not determined by physical storage, but by logical design. IDENTITY is the number of a parking space in a garage; a VIN is how you identify the automobile. 
    >> What would you recommend I use as keys? <<
    I do not know. I have no specs and without that, I cannot pull a Kabbalah number from the hardware. Your magic numbers could identify squids, automobiles or Lady Gaga! I would ask the accounting department how they identify a charge.
    >> Charge_Share_Status_ID links [sic] to another table which contains the name, formatting [sic] and other information [sic] or a charge share's status, so it is both an Id and a status. <<
    More pointer chains! Formatting? Unh? In RDBMS, we use a tiered architecture. That means display formatting is done in a presentation layer. A properly created table has cohesion - it models one and only one thing. A status is a state of being that applies to an entity over a period of time (think employment, marriage, etc. status if that is too abstract).
    An identifier is based on the Law of Identity from formal logic: “To be is to be something in particular”, or “A is A” informally. There is no entity here! The Charge_Share_Status table should have the encoded values for a status, and perhaps a description if they are unclear. If the list of values is clear, short and static, then use a CHECK() constraint.
    On a scale from 1 to 10, what color is your favorite letter of the alphabet? Yes, this is literally that silly and wrong. 
    >> I understand what a CTE is; is there a better way to sum all children for a parent hierarchy? <<
    There are many ways to represent a tree or hierarchy in SQL.  This is called an adjacency list model and it looks like this:
    CREATE TABLE OrgChart 
    (emp_name CHAR(10) NOT NULL PRIMARY KEY, 
     boss_emp_name CHAR(10) REFERENCES OrgChart(emp_name), 
     salary_amt DECIMAL(6,2) DEFAULT 100.00 NOT NULL,
     << horrible cycle constraints >>);
    OrgChart 
    emp_name  boss_emp_name  salary_amt 
    ==============================
    'Albert'    NULL    1000.00
    'Bert'    'Albert'   900.00
    'Chuck'   'Albert'   900.00
    'Donna'   'Chuck'    800.00
    'Eddie'   'Chuck'    700.00
    'Fred'    'Chuck'    600.00
    This approach will wind up with really ugly code -- CTEs hiding recursive procedures, horrible cycle prevention code, etc.  The root of your problem is not knowing that rows are not records, that SQL uses sets and trying to fake pointer chains with some
    vague, magical non-relational "id".  
    This matches the way we did it in old file systems with pointer chains.  Non-RDBMS programmers are comfortable with it because it looks familiar -- it looks like records and not rows.  
    Another way of representing trees is to show them as nested sets. 
    Since SQL is a set oriented language, this is a better model than the usual adjacency list approach you see in most text books. Let us define a simple OrgChart table like this.
    CREATE TABLE OrgChart 
    (emp_name CHAR(10) NOT NULL PRIMARY KEY, 
     lft INTEGER NOT NULL UNIQUE CHECK (lft > 0), 
     rgt INTEGER NOT NULL UNIQUE CHECK (rgt > 1),
      CONSTRAINT order_okay CHECK (lft < rgt));
    OrgChart 
    emp_name         lft rgt 
    ======================
    'Albert'      1   12 
    'Bert'        2    3 
    'Chuck'       4   11 
    'Donna'       5    6 
    'Eddie'       7    8 
    'Fred'        9   10 
    The (lft, rgt) pairs are like tags in a mark-up language, or parens in algebra, BEGIN-END blocks in Algol-family programming languages, etc. -- they bracket a sub-set.  This is a set-oriented approach to trees in a set-oriented language. 
    The organizational chart would look like this as a directed graph:
                Albert (1, 12)
        Bert (2, 3)    Chuck (4, 11)
                       /    |   \
                     /      |     \
                   /        |       \
                 /          |         \
            Donna (5, 6) Eddie (7, 8) Fred (9, 10)
    The adjacency list table is denormalized in several ways. We are modeling both the Personnel and the Organizational chart in one table. But for the sake of saving space, pretend that the names are job titles and that we have another table which describes the Personnel that hold those positions.
    Another problem with the adjacency list model is that the boss_emp_name and employee columns are the same kind of thing (i.e. identifiers of personnel), and therefore should be shown in only one column in a normalized table. To prove that this is not normalized, assume that "Chuck" changes his name to "Charles"; you have to change his name in both columns and several places. The defining characteristic of a normalized table is that you have one fact, one place, one time.
    The final problem is that the adjacency list model does not model subordination. Authority flows downhill in a hierarchy, but if I fire Chuck, I disconnect all of his subordinates from Albert. There are situations (i.e. water pipes) where this is true, but that is not the expected situation in this case.
    To show a tree as nested sets, replace the nodes with ovals, and then nest subordinate ovals inside each other. The root will be the largest oval and will contain every other node. The leaf nodes will be the innermost ovals with nothing else inside them, and the nesting will show the hierarchical relationship. The (lft, rgt) columns (I cannot use the reserved words LEFT and RIGHT in SQL) are what show the nesting. This is like XML, HTML or parentheses.
    At this point, the boss_emp_name column is both redundant and denormalized, so it can be dropped. Also, note that the tree structure can be kept in one table and all the information about a node can be put in a second table, and they can be joined on employee number for queries.
    To convert the graph into a nested sets model, think of a little worm crawling along the tree. The worm starts at the top, the root, and makes a complete trip around the tree. When he comes to a node, he puts a number in the cell on the side that he is visiting and increments his counter. Each node will get two numbers, one for the right side and one for the left. Computer Science majors will recognize this as a modified preorder tree traversal algorithm. Finally, drop the unneeded OrgChart.boss_emp_name column which used to represent the edges of a graph.
    This has some predictable results that we can use for building queries. The root is always (left = 1, right = 2 * (SELECT COUNT(*) FROM TreeTable)); leaf nodes always have (left + 1 = right); subtrees are defined by the BETWEEN predicate; etc. Here are two common queries which can be used to build others:
    1. An employee and all their Supervisors, no matter how deep the tree.
     SELECT O2.*
       FROM OrgChart AS O1, OrgChart AS O2
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND O1.emp_name = :in_emp_name;
    2. The employee and all their subordinates. There is a nice symmetry here.
     SELECT O1.*
       FROM OrgChart AS O1, OrgChart AS O2
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND O2.emp_name = :in_emp_name;
    3. Add a GROUP BY and aggregate functions to these basic queries and you have hierarchical reports. For example, the total salaries which each employee controls:
     SELECT O2.emp_name, SUM(S1.salary_amt)
       FROM OrgChart AS O1, OrgChart AS O2,
            Salaries AS S1
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND S1.emp_name = O2.emp_name 
       GROUP BY O2.emp_name;
    4. To find the level and the size of the subtree rooted at each emp_name, so you can print the tree as an indented listing. 
    SELECT O1.emp_name, 
       SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt 
       THEN O2.salary_amt ELSE 0.00 END) AS salary_amt_tot,
       SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt 
       THEN 1 ELSE 0 END) AS subtree_size,
       SUM(CASE WHEN O1.lft BETWEEN O2.lft AND O2.rgt
       THEN 1 ELSE 0 END) AS lvl
      FROM OrgChart AS O1, OrgChart AS O2
     GROUP BY O1.emp_name;
    5. The nested set model has an implied ordering of siblings which the adjacency list model does not. To insert a new node, G1, under part G, we can insert one node at a time like this:
    BEGIN ATOMIC
    DECLARE rightmost_spread INTEGER;
    SET rightmost_spread 
        = (SELECT rgt 
             FROM Frammis 
            WHERE part = 'G');
    UPDATE Frammis
       SET lft = CASE WHEN lft > rightmost_spread
                      THEN lft + 2
                      ELSE lft END,
           rgt = CASE WHEN rgt >= rightmost_spread
                      THEN rgt + 2
                      ELSE rgt END
     WHERE rgt >= rightmost_spread;
     INSERT INTO Frammis (part, lft, rgt)
     VALUES ('G1', rightmost_spread, (rightmost_spread + 1));
     COMMIT WORK;
    END;
    The idea is to spread the (lft, rgt) numbers after the youngest child of the parent, G in this case, over by two to make room for the new addition, G1. This procedure will add the new node to the rightmost child position, which helps to preserve the idea of an age order among the siblings.
    6. To convert a nested sets model into an adjacency list model:
    SELECT B.emp_name AS boss_emp_name, E.emp_name
      FROM OrgChart AS E
           LEFT OUTER JOIN
           OrgChart AS B
           ON B.lft
              = (SELECT MAX(lft)
                   FROM OrgChart AS S
                  WHERE E.lft > S.lft
                    AND E.lft < S.rgt);
    7. To find the immediate parent of a node: 
    SELECT MAX(P2.lft), MIN(P2.rgt)
      FROM Personnel AS P1, Personnel AS P2
     WHERE P1.lft BETWEEN P2.lft AND P2.rgt 
       AND P1.emp_name = @my_emp_name;
    I have a book on TREES & HIERARCHIES IN SQL which you can get at Amazon.com right now. It has a lot of other programming idioms for nested sets, like levels, structural comparisons, re-arrangement procedures, etc. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

  • Oracle interview questions: query very slow

    Hi,
    Most of the time in interviews they ask:
    "What are the steps you take when your query is running very slowly?"
    I usually say:
    1) first I check whether the table is properly indexed
    2) next, whether it is properly normalized
    Interviewers are not fully satisfied with these answers,
    so kindly give me more suggestions.
    S

    Also, when checking the execution plan, get the actual plan using DBMS_XPLAN.DISPLAY_CURSOR rather than the predicted one (EXPLAIN PLAN FOR). If you use a configurable IDE such as SQL Developer or PL/SQL Developer, it is worth taking the time to set this up in the session browser so that you can easily capture the plan while the query is running. You might also compare the estimated vs actual cardinalities.
    While you're at it you could check v$session_longops, v$session_wait and (if you have the Diagnostic and Tuning packs licensed) v$active_session_history and the various dba_hist% views.
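    For example, a minimal way to capture the actual plan with run-time row counts (the hint and the statement are placeholders for your own query):
    select /*+ gather_plan_statistics */ count(*) from your_table where your_col = :b1;
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));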
    You might try the SQL Tuning Advisor (DBMS_SQLTUNE) which generates profiles for SQL statements (requires ADVISOR system privilege to run a tuning task, and CREATE ANY SQL PROFILE to apply a profile).
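    A minimal sketch of driving the advisor from SQL*Plus for one cursor (the sql_id value is a placeholder):
    variable tname varchar2(64)
    exec :tname := dbms_sqltune.create_tuning_task(sql_id => 'abcd123xyz456');
    exec dbms_sqltune.execute_tuning_task(:tname);
    select dbms_sqltune.report_tuning_task(:tname) from dual;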
    In 11g look at SQL Monitor.
    Tracing is all very well if you can get access to the tracefile in a reasonable timeframe, though in many sites (including my current one) it's just too much trouble unless you're a DBA.
    Edited by: William Robertson on Apr 18, 2011 11:40 PM
    Sorry Rob, should probably have replied to oraclehema rather than you.

  • User Defined Type - Array bind Query very slow

    Hi.
    I have the following problem. I am trying to use Oracle Instant Client 11 and ODP.NET to pass arrays as bind parameters in SELECT statements. I got it working, but it runs very, very slowly. Example:
    - Initial query:
    SELECT tbl1.field1, tbl1.field2, tbl2.field1, tbl2.field2 ... FROM tbl1
    LEFT JOIN tbl2 ON tbl1.field11=tbl2.field0
    LEFT JOIN tbl3 ON tbl2.field11=tbl3.field0 AND tbl1.field5=tbl3.field1
    ...and another LEFT JOINS
    WHERE
    tbl1.field0 IN ('id01', 'id02', 'id03'...)
    this query with 100 elements in "IN" on my database takes 3 seconds.
    - Query with Array bind:
    in Oracle I created a UDT: create or replace type myschema.mytype as table of varchar2(1000)
    then, as described in the Oracle example, I wrote a few classes (a factory and one implementing IOracleCustomType) and used the type in the query:
    instead of IN ('id01', 'id02', 'id03'...) I have tbl1.field0 IN (select column_value from table(:prmTable)), where :prmTable is the bound array.
    This query takes 190 seconds!!! Why? It works, but the HDD of the Oracle server works very hard, and it takes too long.
    Our Oracle server is 10g.
    PS: I tried using only 5 elements in the array - the same result, it also takes 190 seconds...
    Please help!

    I recommend you generate an explain plan for each query and post them here. Based on what you have given, the following MAY be happening:
    Your first query has as static IN list when it is submitted to the server. Therefore when Oracle generates the execution plan the CBO can accurately determine it based on a KNOWN set of input parameters. However the second query has a bind variable for this list of parameters and Oracle has no way of knowing at the time the execution plan is generated what that list contains. If it does not know what the list contains it cannot generate the most optimal execution plan. Therefore I would guess that it is probably doing some sort of full table scan (although these aren't always bad, remember that!).
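    For reference, one quick way to see the plan that was actually used: run each version of the query, then in the same session (a sketch; it needs access to the v$ views, and works on 10g):
    select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));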
    Again please post the execution plans for each.
    HTH!
