Join query getting poor performance

Hi,
This is my join query to retrieve data. I have a problem with it: it is very slow to retrieve data, and even when I used it in Report Builder, it could not build the report.
This select statement uses three tables.
Please help and suggest how to tune my query to run faster, or give me a new idea; note that I'm using Oracle 8i.
select a.customer_code customer,
       c.name name,
       c.place place,
       a.product_code product,
       b.quantity ord_qty,
       nvl(b.delivery_district_code, c.district_code) district,
       nvl(b.delivery_town_code, c.town_code) town
from order_book a, order_book_detail b, customer c
where a.region_code = b.region_code
and a.order_book_form_no = b.order_book_form_no
and a.customer_code = c.customer_code
and c.division_code = 34
and a.region_code = 10
and c.state_code = 1
and a.order_book_form_date = '18-OCT-2007'
and nvl(c.classification_code,'N') = 'S'
order by 1;
regards
venki

Why is nobody answering me?

Because you gave us nothing to investigate. Please read this thread and post tkprof output here: http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0
For an explain plan in 8i, you could "set autotrace on explain" in SQL*Plus.
Regards,
Rob.
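For reference, a minimal SQL*Plus session along those lines might look as follows. This is a sketch, not a definitive recipe: the query is abbreviated to the join and date predicates, the trace-file name is a placeholder, and the TO_DATE rewrite of the date predicate is my own suggestion, since comparing a DATE column to the string '18-OCT-2007' relies on an implicit conversion.
-- capture an execution plan (works on 8i, needs a PLAN_TABLE)
set autotrace on explain
select a.customer_code, c.name, a.product_code, b.quantity
from   order_book a, order_book_detail b, customer c
where  a.region_code = b.region_code
and    a.order_book_form_no = b.order_book_form_no
and    a.customer_code = c.customer_code
and    a.order_book_form_date = to_date('18-OCT-2007', 'DD-MON-YYYY')
order by 1;
-- for tkprof, trace the session instead:
alter session set sql_trace = true;
-- ... run the query, then ...
alter session set sql_trace = false;
-- and on the server: tkprof <tracefile>.trc report.txt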

Similar Messages

  • HT1515 I'm getting poor performance from my apple TV, will plugging in my airport express via ethernet to apple tv increase its performance (poor wi-fi signal)

    I'm getting poor performance from my apple TV, will plugging in my airport express via ethernet to apple tv increase its performance (poor wi-fi signal)

    Yes, though it is less ideal than what I suggested earlier. You would need to place your AirPort Express in a location approximately midway between your AirPort Extreme Base Station and the TV. Then, using AirPort Utility, configure it to "extend a wireless network".
    It may help. Bear in mind the signal thus "extended" by the Express can only be as good as the signal it can receive from the Extreme. That is the reason for selecting its approximate midway location.
    Read about it here: http://support.apple.com/kb/HT4145
    Scroll down to "Wirelessly Extended Network (802.11n)" for a picture.
    This solution requires an all-Apple wireless network. No third party stuff.
    You mentioned using Ethernet. That would be a better idea. Scroll down to "Roaming Network (Ethernet-connected Wi-Fi base stations)" for a picture. In your case, the longer the Ethernet cable - hence the closer you can move the Express to the TV - the better.

  • Poor performance and overheat

    Story
    I've been using arch for the past four months, dual booting with my old Windows XP. As I'm very fond of Flash games and make my own programs with a cross-platform language, I've found few problems with the migration. One of them was the Adobe Flash Player performance, which was stunningly bad. But everyone was saying that was normal, so I left it as is.
    However, one particular error always worried me: a seemingly randomly triggered siren sound coming from the motherboard speaker. Thinking it was an alarm about some fatal kernel error, I had been dealing with it mostly with reboots.
    But then it happened. While playing a graphics-intensive game on Windows shortly after rebooting from Arch, the same siren sound started. It felt like a slap across the face: it was not a kernel error, it was a motherboard overheat alarm.
    The Problem
    Since the computer was giving overheat signs, I started looking at things from another angle. I noticed that some tasks take unusually long in Arch (e.g. building things from source, Firefox / OpenOffice startup, any graphics-intensive program), especially the Flash Player.
    A great example is the game Penguinz, which runs flawlessly in Windows but is unbearably slow in Arch. So slow that it alone caused said overheat twice. And when trying to record another flash game with XVidCap, things got so bad that the game halved its FPS and started ignoring key presses.
    Tech Info
    3.2 GHz dual-core processor
    1 GB RAM
    256 MB GeForce FX 5500 video card
    Running Openbox
    Using proprietary NVIDIA driver
    TL;DR: poor performance on some tasks. Flash Player is so slow that it overheats the CPU and makes me cry. It's fine on Windows.
    Off the top of my head I can think of some reasons: a bad video driver, an unwanted background application acting up, known Flash Player performance problems, or an ActionScript bug specific to Linux/Arch.
    Where do you think is the problem?

    jwcxz wrote:Have you looked at your process table for any program with abnormal CPU usage?  That seems like the logical place to start.  You shouldn't be getting poor performance in anything with that system.  I have a 2.0GHz Core 2 Duo and an Intel GMA 965 and I've never had any problems with Flash.  It's much better than it used to be.
    Pidgin scared me for a while because it froze for no apparent reason. After fixing this, the table contains these two entries:
    %CPU
    Firefox: 80%~100%
    X: 0~20%
    This was a graphics-intensive test, so I think the X usage is normal. It might be some oddity in the Firefox+Linux+Flash combination, maybe a conflict. I'll try another browser.
    EDIT:
    Did a Javascript benchmark to test both systems and browsers.
    Windows XP + Firefox = 4361.4ms
    Arch + Firefox = 5146.0ms
    So, it's actually a lot slower without even taking Flash into account. If someone knows a platform-independent benchmark to test both systems completely, and not only the browser, feel free to point it out.
    I think something is already wrong here and the lack of power-saving measures only aggravated the problem, causing the overheating.
    EDIT2:
    Browser performance fixed: migrated to Midori. Flash is still slower than on Windows, but now it's bearable. A pretty neat browser too; it goes better with the Arch Way. It shouldn't fix the temperature, however.
    Applied B's idea, but haven't tested yet. I'm not in the mood to play flash games for two straight hours today.
    Last edited by BoppreH (2009-05-03 04:25:20)

  • Poor performance and voltage fluctuation.

    I'm running two 280Xs in CrossFire, which I upgraded to from two 6970s. When I'm playing DayZ my FPS never goes above 30. With my 6970s I was well into the 70s. I never changed my video settings when I went from the 6970s to the 280Xs. For the most part I have a lot of the graphics settings low or disabled. Borderlands 2 is a nightmare on the 280X. I drop down into the 10 fps area, and this never happened with my 6970s.
    During games my core clock fluctuates between 500MHz and 1020MHz. I have ULPS disabled as well.
    My power supply is an Antec HCG-900.
    Happened on all these drivers 14.4, 14.6 Beta, and 14.6 RC (currently installed).


  • Poor performance on reports that were migrated from 6i to 10g

    We are migrating from 6i client server to 10g reports server and getting poor performance on various reports. Reports that work in seconds in 6i are taking much longer to run or even timing out.
    Reports Server:
    Version 10.1.2.0.2
    initEngine = 1
    maxEngine = 20
    minEngine = 1
    engLife = 1
    maxIdle = 30
    The reports are being called from 10g forms with the following:
    T_repstr := '../reports/rwservlet?server=rep_aporaapp_frhome1'
    || '&report='|| T_prog_name
    || '&userid='|| T_nds_uid
    || '&destype=cache'
    || '&paramform=yes'
    || '&mode=Default'
    || '&desformat=pdf'
    || '&orientation=Landscape';
    web.show_document(T_repstr,'_blank');

    We are using these settings and not hearing many complaints:
    Init Engine 1
    Max Engine 6
    Min Engine 0
    Eng Life 10
    MaxIdle 30
    Trace Error
    Trace Replace
    I set my Report Server parameters as follows:
    CACHE SIZE = 700
    CACHE DIRECTORY = (you have to decide)
    IDLE TIMEOUT = 120
    MAX CONNECTIONS = 120
    MAX QUEUE SIZE = 4000
    TRACE OPTIONS = trace_err
    TRACE MODE = trace_replace

  • Mbpr poor performance in bootcamp, return?

    I'm still getting poor performance in Boot Camp, especially when using an external monitor, even after the updates. Is this a hardware or software issue?

    Does it work OK in Mac OS X?
    --Yes: then it's not a hardware or Mac OS X software problem.
    Does it work OK in Windows?
    --Yes: all is well.
    --No: it's a Windows software problem, since the hardware worked fine in Mac OS X.
    Have you re-installed Windows?

  • Poor performance and high number of gets on seemingly simple insert/select

    Versions & config:
    Database : 10.2.0.4.0
    Application : Oracle E-Business Suite 11.5.10.2
    2-node RAC, IBM AIX 5.3
    Here's the insert/select; I'm struggling to explain why it takes 6 seconds, and why it needs to get > 24,000 blocks:
    INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
      NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
      WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
      WIA.ITEM_TYPE = :B1
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          4           0
    Execute      2      3.44       6.36          2      24297        198          36
    Fetch        0      0.00       0.00          0          0          0           0
    total        3      3.44       6.36          2      24297        202          36
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Also from the tkprof output, the explain plan and waits - virtually zero waits:
    Rows     Execution Plan
          0  INSERT STATEMENT   MODE: ALL_ROWS
          0   TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
          0    INDEX   MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      library cache lock                             12        0.00          0.00
      gc current block 2-way                         14        0.00          0.00
      db file sequential read                         2        0.01          0.01
      row cache lock                                 24        0.00          0.01
      library cache pin                               2        0.00          0.00
      rdbms ipc reply                                 1        0.00          0.00
      gc cr block 2-way                               4        0.00          0.00
      gc current grant busy                           1        0.00          0.00
    ********************************************************************************
    The statement was executed 2 times. I know from slicing up the trc file that:
    exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
    exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
    If I run just the select portion of the statement, using the bind values from exe #2, I get a small number of gets (< 10) and < 0.1 secs elapsed.
    If I make the insert into an empty, non-partitioned table, I get :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.01       0.08          0        137         53          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.01       0.08          0        137         53          25
    and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
    This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1      0.00       0.10         10         27        136          25
    Fetch        0      0.00       0.00          0          0          0           0
    total        2      0.00       0.10         10         27        136          25
    So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, ideas welcomed.
    I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
    further info on the objects concerned:
    query source table :
    WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
    WF_Item_Attributes tbl : non-partitioned, 160 blocks
    insert destination table:
    WF_Item_Attribute_Values:
    range partitioned on Item_Type, and hash sub-partitioned on Item_Key
    both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
    WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
    Bind values:
    exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
    exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
    number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
    The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
    thanks and regards
    Ivan
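    For completeness, a sketch of how the two executions can be replayed from SQL*Plus with bind variables (bind names and declared sizes are illustrative; autotrace statistics shows the consistent gets / db block gets to compare, and the rollback keeps the test side-effect free):
    variable b1 varchar2(8)
    variable b2 varchar2(240)
    exec :b1 := 'OEOL'
    exec :b2 := '4253168'
    set autotrace traceonly statistics
    INSERT INTO wf_item_attribute_values
      (item_type, item_key, name, text_value, number_value, date_value)
    SELECT :b1, :b2, wia.name, wia.text_default, wia.number_default, wia.date_default
    FROM   wf_item_attributes wia
    WHERE  wia.item_type = :b1;
    rollback;
    Re-running with the exe #1 binds ('OEOH' / '1048671') gives the fast case for comparison.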

    hi Sven,
    Thanks for your input.
    1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
    2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
    3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
    ============= From DBA_Part_Tables : Partition Type / Count =============
    PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
    RANGE   HASH                 77 APPS_TS_TX_DATA
    1 row selected.
    ============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
    Partition Name       TS Name         High Value           High Val Len
    WF_ITEM1             APPS_TS_TX_DATA 'A1'                            4
    WF_ITEM2             APPS_TS_TX_DATA 'AM'                            4
    WF_ITEM3             APPS_TS_TX_DATA 'AP'                            4
    WF_ITEM47            APPS_TS_TX_DATA 'OB'                            4
    WF_ITEM48            APPS_TS_TX_DATA 'OE'                            4
    WF_ITEM49            APPS_TS_TX_DATA 'OF'                            4
    WF_ITEM50            APPS_TS_TX_DATA 'OK'                            4
    WF_ITEM75            APPS_TS_TX_DATA 'WI'                            4
    WF_ITEM76            APPS_TS_TX_DATA 'WS'                            4
    WF_ITEM77            APPS_TS_TX_DATA MAXVALUE                        8
    77 rows selected.
    ============= From dba_part_key_columns : Partition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_TYPE                                    1
    1 row selected.
    PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
    ============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
    Partition Name       SUBPARTITION_NAME              TS Name         High Value           High Val Len
    WF_ITEM49            SYS_SUBP3326                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3328                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3332                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3331                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3330                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3329                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3327                   APPS_TS_TX_DATA                                 0
    WF_ITEM49            SYS_SUBP3325                   APPS_TS_TX_DATA                                 0
    8 rows selected.
    ============= From DBA_SubPart_Key_Columns : SubPartition Columns =============
    NAME                           OBJEC Column Name                    COLUMN_POSITION
    WF_ITEM_ATTRIBUTE_VALUES       TABLE ITEM_KEY                                     1
    1 row selected.
    from DBA_Segments - just for partition WF_ITEM49  :
    Segment Name                        TSname       Partition Name       Segment Type     BLOCKS     Mbytes    EXTENTS Next Ext(Mb)
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3332         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3331         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3330         TblSubPart        16160     126.25       1010         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3329         TblSubPart        16112    125.875       1007         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3328         TblSubPart        16096     125.75       1006         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3327         TblSubPart        16224     126.75       1014         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3326         TblSubPart        16208    126.625       1013         .125
    WF_ITEM_ATTRIBUTE_VALUES            @TX_DATA     SYS_SUBP3325         TblSubPart        16128        126       1008         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3332         IdxSubPart        59424     464.25       3714         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3331         IdxSubPart        59296     463.25       3706         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3330         IdxSubPart        59520        465       3720         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3329         IdxSubPart        59104     461.75       3694         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3328         IdxSubPart        59456      464.5       3716         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3327         IdxSubPart        60016    468.875       3751         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3326         IdxSubPart        59616     465.75       3726         .125
    WF_ITEM_ATTRIBUTE_VALUES_PK         @TX_IDX      SYS_SUBP3325         IdxSubPart        59376    463.875       3711         .125
    sum                                                                                               4726.5
    [the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
    The tablespaces used for all subpartitions are LOCAL extent management with UNIFORM extent allocation and AUTO segment_space_management.
    regards
    Ivan
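    One more hedged check that may help isolate where the gets go on the RAC system: v$segment_statistics (present in 10.2) breaks logical reads down per subobject, so sampling it before and after one slow insert shows whether the table sub-partitions or the PK index sub-partitions absorb the I/O. A sketch, assuming the usual APPLSYS owner for the Workflow tables:
    SELECT object_name, subobject_name, value
    FROM   v$segment_statistics
    WHERE  owner = 'APPLSYS'
    AND    object_name IN ('WF_ITEM_ATTRIBUTE_VALUES', 'WF_ITEM_ATTRIBUTE_VALUES_PK')
    AND    statistic_name = 'logical reads'
    ORDER  BY value DESC;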

  • Poor performance with Oracle Spatial when spatial query invoked remotely

    Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when invoked remotely from another computer (using SQL*Plus) vs. invoking the very same query from the database server itself (also using SQL*Plus)?
    Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
    Thank you in advance for any thoughts you might share.

    OK, that's clearer.
    Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from inside the procedure and running it in SQL*Plus with
    set autotrace on
    set timing on
    SELECT ...
    If the plans and performance are the same then it may be something inside the procedure itself.
    Have you profiled the procedure? Here is an example of how to do it:
    Prompt Firstly, create PL/SQL profiler table
    @$ORACLE_HOME/rdbms/admin/proftab.sql
    Prompt Secondly, use the profiler to gather stats on execution characteristics
    DECLARE
      l_run_num PLS_INTEGER := 1;
      l_max_num PLS_INTEGER := 1;
      v_geom    mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
    BEGIN
      dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE'));  -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
      v_geom := Parallel(v_geom,10,0.05,1);  -- Put your procedure call here
      dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
    END;
    /
    SHOW ERRORS
    Prompt Finally, report activity
    COLUMN runid FORMAT 99999
    COLUMN run_comment FORMAT A40
    SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
      FROM plsql_profiler_runs
      ORDER BY runid;
    COLUMN runid       FORMAT 99999
    COLUMN unit_number FORMAT 99999
    COLUMN unit_type   FORMAT A20
    COLUMN unit_owner  FORMAT A20
    COLUMN text        FORMAT A100
    compute sum label 'Total_Time' of total_time on runid
    break on runid skip 1
    set linesize 200
    SELECT u.runid || ',' ||
           u.unit_name,
           d.line#,
           d.total_occur,
           d.total_time,
           text
    FROM   plsql_profiler_units u
           JOIN plsql_profiler_data d ON u.runid = d.runid
                                         AND
                                         u.unit_number = d.unit_number
           JOIN all_source als ON ( als.owner = 'CODESYS'
                                   AND als.type = u.unit_type
                                   AND als.name = u.unit_name
                                AND als.line = d.line# )
    WHERE  u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
    ORDER BY d.total_time desc;
    Run the profiler in both environments and see if you can see where the slowdown exists.
    regards
    Simon
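    Since the gap only appears when the query is invoked remotely, it may also be worth ruling out fetch-size and network round-trip effects before profiling. A hedged check to run in SQL*Plus on both machines - table name, geometry column, SRID and coordinates below are placeholders for your actual query:
    set arraysize 500
    set timing on
    set autotrace traceonly statistics
    SELECT count(*)
    FROM   my_spatial_table t
    WHERE  sdo_within_distance(
             t.geom,
             sdo_geometry(2001, 8307, sdo_point_type(-122.4, 37.8, NULL), NULL, NULL),
             'distance=1000') = 'TRUE';
    If "SQL*Net roundtrips to/from client" dominates the remote run, the difference is fetch behaviour over the network rather than Spatial itself.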

  • How can I perform this kind of range join query using DPL?

    How can I perform this kind of range join query using DPL?
    SELECT * from t where 1<=t.a<=2 and 3<=t.b<=5
    This PDF, http://www.oracle.com/technology/products/berkeley-db/pdf/performing%20queries%20in%20oracle%20berkeley%20db%20java%20edition.pdf,
    shows how to perform a "two equality-conditions query on a single primary database" - just like SELECT * FROM tab WHERE col1 = A AND col2 = B - using the entity join class, but it does not give a solution for the range join query.

    I'm sorry, I think I've misled you. I suggested that you perform two queries and then take the intersection of the results. You could do this, but the solution to your query is much simpler. I'll correct my previous message.
    Your query is very simple to implement. Perform the first part of the query to get a cursor on the index for 'a' for the "1<=t.a<=2" part. Then simply iterate over that cursor, and process the entities where the "3<=t.b<=5" expression is true. You don't need a second index (on 'b') or another cursor.
    This is called "filtering" because you're iterating through entities that you obtain from one index, selecting some entities for processing and discarding others. The white paper you mentioned has an example of filtering in combination with the use of an index.
    An alternative is to reverse the procedure above: use the index for 'b' to get a cursor for the "3<=t.b<=5" part of the query, then iterate and filter the results based on the "1<=t.a<=2" expression.
    If you're concerned about efficiency, you can choose the index (i.e., choose which of these two alternatives to implement) based on which part of the query you believe will return the smaller number of results. The fewer entities read, the faster the query.
    Contrary to what I said earlier, taking the intersection of two queries that are ANDed doesn't make sense - filtering is the better solution. However, taking the union of two queries does make sense, when the queries are ORed. Sorry for the confusion.
    --mark

  • How to get Hierarchical XML File from a Database Join Query !

    Hi,
    How can I get a hierarchical XML file from a database join query?
    Any join query returns repeated values, as below:
    BD17:SQL>select d.dname, e.ename, e.sal
    2 from dept d
    3 natural join
    4 emp e
    5 /
    DNAME ENAME SAL
    ACCOUNTING CLARK 2450
    ACCOUNTING KING 5000
    ACCOUNTING MILLER 1300
    RESEARCH SMITH 800
    RESEARCH ADAMS 1100
    RESEARCH FORD 3000
    RESEARCH SCOTT 3000
    RESEARCH JONES 2975
    SALES ALLEN 1600
    SALES BLAKE 2850
    SALES MARTIN 1250
    SALES JAMES 950
    SALES TURNER 1500
    SALES WARD 1250
    14 rows selected.
    We tried using DBMS_XMLQUERY to generate an XML file, but it was unable to produce the XML in a hierarchical format.
    <?xml version="1.0" encoding="ISO-8859-1" ?>
    <ROWSET>
    <ROW num="1">
    <DNAME>ACCOUNTING</DNAME>
    <ENAME>CLARK</ENAME>
    <SAL>2450</SAL>
    </ROW>
    <ROW num="2">
    <DNAME>ACCOUNTING</DNAME>
    <ENAME>KING</ENAME>
    <SAL>5000</SAL>
    </ROW>
    <ROW num="3">
    <DNAME>ACCOUNTING</DNAME>
    <ENAME>MILLER</ENAME>
    <SAL>1300</SAL>
    </ROW>
    <ROW num="4">
    <DNAME>RESEARCH</DNAME>
    <ENAME>SMITH</ENAME>
    <SAL>800</SAL>
    </ROW>
    <ROW num="5">
    <DNAME>RESEARCH</DNAME>
    <ENAME>ADAMS</ENAME>
    <SAL>1100</SAL>
    </ROW>
    <ROW num="6">
    <DNAME>RESEARCH</DNAME>
    <ENAME>FORD</ENAME>
    <SAL>3000</SAL>
    </ROW>
    <ROW num="7">
    <DNAME>RESEARCH</DNAME>
    <ENAME>SCOTT</ENAME>
    <SAL>3000</SAL>
    </ROW>
    <ROW num="8">
    <DNAME>RESEARCH</DNAME>
    <ENAME>JONES</ENAME>
    <SAL>2975</SAL>
    </ROW>
    <ROW num="9">
    <DNAME>SALES</DNAME>
    <ENAME>ALLEN</ENAME>
    <SAL>1600</SAL>
    </ROW>
    <ROW num="10">
    <DNAME>SALES</DNAME>
    <ENAME>BLAKE</ENAME>
    <SAL>2850</SAL>
    </ROW>
    <ROW num="11">
    <DNAME>SALES</DNAME>
    <ENAME>MARTIN</ENAME>
    <SAL>1250</SAL>
    </ROW>
    <ROW num="12">
    <DNAME>SALES</DNAME>
    <ENAME>JAMES</ENAME>
    <SAL>950</SAL>
    </ROW>
    <ROW num="13">
    <DNAME>SALES</DNAME>
    <ENAME>TURNER</ENAME>
    <SAL>1500</SAL>
    </ROW>
    <ROW num="14">
    <DNAME>SALES</DNAME>
    <ENAME>WARD</ENAME>
    <SAL>1250</SAL>
    </ROW>
    </ROWSET>
    Thank you for some help.
    Nelson Alberti
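    For the nesting itself, the SQL/XML operators (available from Oracle 9i onward) can aggregate the child rows under each parent element; a minimal sketch against the same DEPT/EMP schema, where XMLAGG collects one EMP element per employee inside its department:
    SELECT XMLELEMENT("DEPT",
             XMLELEMENT("DNAME", d.dname),
             XMLAGG(
               XMLELEMENT("EMP",
                 XMLELEMENT("ENAME", e.ename),
                 XMLELEMENT("SAL", e.sal))))
    FROM   dept d
           JOIN emp e ON e.deptno = d.deptno
    GROUP  BY d.dname;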

    Hi,
    I wrote a general ABAP program which can be configured to grab content from a URL and post that content as a new PI message into the integration adapter. From that point on, normal PI configuration can be used to route it anywhere.
    It can easily be scheduled as a background job to grab content on a daily basis, etc.
    Regards,
    Steven

  • Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running google maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also can anyone tell me the hierarchy of use between the Apple Maps, SIRI, and Google maps when the app is on the phone? How do you choose one over the other as the default map usage? Or better still how do you suppress SIRI from using the Apple maps app when requesting a "go to"?
    I have placed an address location into the CONTACTS list and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I have included the address, the quadrant (NE) and the ZIP code into the CONTACTS list. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS list line would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS list (NE being the accepted method of defining the USPS location quadrant), canceled the current map route and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed, but the address is one of a hospital in the center of town, and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw-up could be dangerous, if not catastrophic, to someone who was looking for a hospital location fast and did not know of these two similar locations. After all, the whole POINT of directions is not just whimsical pastime or convenience. In a pinch people need to rely on this function. OR, are my expectations set too high?
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
    Why does SIRI return an address that is NOT the correct address, nor is the returned location in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try Google voice search in the Google app.

  • Inner Join. How to improve the performance of inner join query

    Inner Join. How to improve the performance of inner join query.
    Query is :
    select f1~ablbelnr
             f1~gernr
             f1~equnr
             f1~zwnummer
             f1~adat
             f1~atim
             f1~v_zwstand
             f1~n_zwstand
             f1~aktiv
             f1~adatsoll
             f1~pruefzahl
             f1~ablstat
             f1~pruefpkt
             f1~popcode
             f1~erdat
             f1~istablart
             f2~anlage
             f2~ablesgr
             f2~abrdats
             f2~ableinh
                from eabl as f1
                inner join eablg as f2
            on f1~ablbelnr = f2~ablbelnr
                into corresponding fields of table it_list
                where f1~ablstat in s_mrstat
                %_HINTS ORACLE 'USE_NL (T_00 T_01) index(T_01 "EABLG~0")'.
    I want to modify the query, since it's taking a lot of time to load the data.
    Please suggest.
    Please treat this as very urgent.

    Hi Shyamal,
    In your program, you are using "into corresponding fields of".
    Try not to use this addition in your select query.
    Instead, just use "into table it_list".
    As an example:
    Run a normal query using "into corresponding fields of" in a program. Now go to SE30 (Runtime Analysis), give the program name and execute it.
    If you then click on the Analyze button, you can see the analysis for the query. The one shown in a "red" line informs you that you need to look for alternate methods.
    On the other hand, if you use "into table itab", it will give you an entirely different analysis.
    So try not to use "into corresponding fields" in your query.
    Regards,
    SP.

  • Performance of the query is poor

    Hi All,
    This is Prasad. I have a problem with a query: it is taking a long time to retrieve data from the cube. The query uses a variable of type customer exit. The cube is not compressed. I think the issue with the F fact table is due to the high number of table partitions (requests) that it has to select from. If I compress the cube, will the performance of the query improve or not? Is there any alternative for improving the performance of the query? Somebody suggested a result set query; I am not aware of this technique, so if you know it, please let me know.
    Thanks in advance

    Hi Prasad,
    Query performance will depend on many factors like
    1. Aggregates
    2. Compression of requests
    3. Query read mode setting
    4. Cache memory setting
    5. By Creating BI Accelerator Indexes on Infocubes
    6. Indexes
    Proposing aggregates to improve query performance:
    First execute the query in RSRT (the one on which you are considering building aggregates), check how long it takes, and decide whether it needs an aggregate at all. To get this information, go to SE11, enter table name RSDDSTAT_DM (BI 7.0) or RSDDSTAT (BW 3.x), choose Display -> Contents, enter today's date in the from/to date fields, your user name, and the query name, then execute.
    You will get a list with fields such as object name (report name), time read, InfoProvider name (MultiProvider), part provider name (cube), aggregate name, etc. If the time read is less than 100,000,000 microseconds (100 sec), it is acceptable. If the time read is more than 100 sec, it is recommended to create aggregates for that query to increase performance. Keep this time read in mind.
    Again go to RSRT, give the query name, and choose Execute+Debug.
    In the popup that appears, select the checkbox "display aggregates found" and continue. If any aggregates exist for that query they are displayed first; if you press the continue button, it displays which fields are read from which cube. Copy this list of objects, on which an aggregate can be created, into a text file.
    Then select that cube in RSA1, context menu -> Maintain Aggregates -> Create by own. Click the create-aggregate button at the top left, give a description for the aggregate, and continue. Take the first object from the list, click the find button in the aggregate creation screen, give the object name and search, then drag and drop the object into the aggregate on the right-hand side (drag and drop all the fields like this into the aggregate).
    Activate the aggregate; it will take some time. Once the activation finishes, make sure the aggregate is in switched-on mode.
    Execute the query from RSRT again, find the time read, and compare it with the first time read. If it is less than the first time read, you can propose this aggregate to increase the performance of the query.
    I hope this helps. Go through the links below to learn more about aggregates:
    http://help.sap.com/saphelp_nw04s/helpdata/en/10/244538780fc80de10000009b38f842/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/3f66ba90-0201-0010-ac8d-b61d8fd9abe9
    Follow this thread for creation of BIA Indexes:
    Re: BIA Creation
    Hope this helps.
    Regards,
    Ramki.

  • Problem getting results with no unique key in a joined query

    I created a descriptor to do a joined query, which generated a query in the log as:
    Select t0.empID,t1.empID, t1.phone from Emp t0, Phone t1
    where t0.empId=t1.empId and t0.empId=1001
    When I run it, I got the result as:
    1001,1001,9999999
    1001,1001,9999999
    The correct result should be (I copy and paste the query from log into SQL-Plus):
    1001,1001,9999999
    1001,1001,1234455
    I suspect this is caused by Toplink caching objects by primary key. I wonder if anybody has a solution for this.
    My descriptor is created using the Workbench. Emp is the primary table, with Phone as an additional table linked by the foreign key empId.
    The join is implemented by modifying the descriptor as the following:
    ExpressionBuilder builder = new ExpressionBuilder();
    descriptor.getQueryManager()
    .setMultipleTableJoinExpression(
    builder.getField("EMP.EMPID").equal(
    builder.getField("PHONE.EMPID")));
    descriptor.disableCacheHits();
    I'd really appreciate your help.

    I am implementing a people search function. The batch reading is quite expensive if TopLink does a read query for every person. A customized query requires only one database access; the other way may require hundreds.
    I don't want to use the cache either in this case, because the database is also accessed by a legacy system which can't notify TopLink of any updates.
    I opened a TAR on this and the solution I got was to use ReportQuery rather than ReadAllQuery. The only problem here is that now I have to map the query result to my class manually.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file as described above at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though the OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73,557 times while profiling the save of the database at the end, took 281 seconds. Interestingly enough, this method called the ReadFile function (Win32) 20,000 times, which took 258 seconds. The majority of the Db.put time was wasted looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.

    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval were you calling memp_trickle from that thread?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
