Parallelized CONNECT BY PUMP

Hi All,
Did anyone manage to parallelize the CONNECT BY PUMP rowsource and its corresponding join?
I'm thinking about the following steps from the execution plan:
| Id  | Operation                            | Name              | Rows  | Bytes | Cost (%CPU)| Time  | Pstart| Pstop |
|   5 |   NESTED LOOPS                       |                   |       |       |            |       |  |       |
|   6 |    CONNECT BY PUMP                   |                   |       |       |            |       |  |       |
--------------------------------------------------------------------------------------------------------------------------

Update:
Not a solution, but a kind of workaround. Maybe someone will find it useful. You can improve the performance of the CONNECT BY operation on big datasets (millions of rows) by parallelizing the source rowset, using the NO_CONNECT_BY_FILTERING hint and accessing the data with a full table scan. This way you get a parallel hash join, which for me was eight to ten times faster than other methods. BTW, you'll need a significant amount of PGA available for this process - I was using the manual workarea policy for this.
SQL> ALTER SESSION SET workarea_size_policy=MANUAL;
Session altered.
Elapsed: 00:00:00.00
SQL> ALTER SESSION SET sort_area_size=1073741824;
Session altered.
Elapsed: 00:00:00.00
SQL> INSERT /*+ APPEND */ INTO zzz_t1 SELECT /*+ NO_CONNECT_BY_FILTERING */ * FROM cp1_client START WITH client_parent_id = 1 CONNECT BY PRIOR client_id = client_parent_id;
1497838 rows created.
Elapsed: 00:00:40.04
| Id  | Operation                                | Name       | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
|   0 | INSERT STATEMENT                         |            |       |       |  2330 (100)|          |       |       |        |      |            |
|   1 |  LOAD AS SELECT                          |            |       |       |            |          |       |       |        |      |            |
|*  2 |   CONNECT BY NO FILTERING WITH START-WITH|            |       |       |            |          |       |       |        |      |            |
|   3 |    PX COORDINATOR                        |            |       |       |            |          |       |       |        |      |            |
|   4 |     PX SEND QC (RANDOM)                  | :TQ10000   |  1497K|   617M|  2330   (2)| 00:00:28 |       |       |  Q1,00 | P->S | QC (RAND)  |
|   5 |      PX BLOCK ITERATOR                   |            |  1497K|   617M|  2330   (2)| 00:00:28 |     1 |    16 |  Q1,00 | PCWC |            |
|*  6 |       MAT_VIEW ACCESS FULL               | CP1_CLIENT |  1497K|   617M|  2330   (2)| 00:00:28 |     1 |    16 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------------------------------
Thanks,
Lukasz
Edited by: Łukasz Mastalerz on Mar 26, 2012 2:11 PM
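(Side note: the INSERT above carries no PARALLEL hint, so the PX rowsources in the plan presumably come from a parallel degree already set on CP1_CLIENT. If your source has no degree set, an explicit variant along these lines should give the same plan shape; the PARALLEL/FULL hints and the degree of 8 are illustrative, not from the original post:)
SQL> INSERT /*+ APPEND */ INTO zzz_t1
  2  SELECT /*+ NO_CONNECT_BY_FILTERING PARALLEL(c 8) FULL(c) */ *
  3  FROM cp1_client c
  4  START WITH client_parent_id = 1
  5  CONNECT BY PRIOR client_id = client_parent_id;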

This is a feature of Oracle 9i and above related to hierarchical queries.
Sometimes it leads to performance problems. You can use the undocumented
parameter "_old_connect_by_enabled" to disable it and return to the "good old 8i" behaviour:
SQL> set autotrace traceonly expl
SQL> select object_id from nc_objects start with object_id = 100
  2  connect by prior object_id = parent_id
  3  /
Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=15 Bytes=345)
   1    0   CONNECT BY (WITH FILTERING)
   2    1     NESTED LOOPS
   3    2       INDEX (UNIQUE SCAN) OF 'XPKNC_OBJECTS' (UNIQUE) (Cost=2 Card=1 Bytes=12)
   4    2       TABLE ACCESS (BY USER ROWID) OF 'NC_OBJECTS'
   5    1     NESTED LOOPS
   6    5       BUFFER (SORT)
   7    6         CONNECT BY PUMP
   8    5       TABLE ACCESS (BY INDEX ROWID) OF 'NC_OBJECTS' (Cost=6 Card=15 Bytes=345)
   9    8         INDEX (RANGE SCAN) OF 'XIF25NC_OBJECTS' (NON-UNIQUE) (Cost=3 Card=15)
SQL> alter session set "_old_connect_by_enabled" = true;
Session altered.
SQL> select object_id from nc_objects start with object_id = 100
  2  connect by prior object_id = parent_id
  3  /
Execution Plan
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=15 Bytes=345)
   1    0   CONNECT BY
   2    1     INDEX (UNIQUE SCAN) OF 'XPKNC_OBJECTS' (UNIQUE) (Cost=2 Card=1 Bytes=12)
   3    1     TABLE ACCESS (BY USER ROWID) OF 'NC_OBJECTS'
   4    1     TABLE ACCESS (BY INDEX ROWID) OF 'NC_OBJECTS' (Cost=6 Card=15 Bytes=345)
   5    4       INDEX (RANGE SCAN) OF 'XIF25NC_OBJECTS' (NON-UNIQUE) (Cost=3 Card=15)
Rgds.

Similar Messages

  • JDBC Receiver - Multiple parallel connections?

    Hi,
          Does the Receiver JDBC adapter support multiple parallel connections to the database? Can the calls be made in parallel? If yes, where is this parameter (maximum connections) set?
    I was testing an RFC to JDBC sync scenario... My calls from the RFC were in parallel (after I increased the max connections in the sender RFC), but I did not find a similar parameter in JDBC... the DB team reported that only one connection was created from XI...

    Have a look at the possible performance enhancement options at JDBC receiver:
    1. You may increase the thread count for the JDBC-related queues. This has to be done in accordance with SAP Note 1084161.
    2. There is a parameter in the JDBC communication channel called Maximum Concurrency. It specifies how many connections one communication channel can make to the database. This is 1 by default and can be increased to values like 3-4.
    3. In the Visual Admin/NWA there is a parameter called queueParallelism.maxReceivers, which defines the number of parallel worker threads for one receiver channel instance. This should be done following SAP Note 1136790, and can be combined with the first point.
    Regards,
    Prateek

  • Parallel connection cables for a HP2100TN printer

    What are the right parallel connection cables for an HP 2100TN printer? I purchased a set and these turned out to be the wrong ones (#16899-USB1284-DB25).

    HP part # 2 Meter A to B C2950A
    Bidirectional ECP type-B parallel port (IEEE-1284 compliant)

  • Doubt about sys fans and connection of water cooler pump

    Hello, I have an MSI Z97 Gaming 7 motherboard and need to know which connector you recommend for connecting the Corsair H60 water cooler pump: SYS FAN or CPU FAN?
    I want to connect the pump to SYS FAN 1, 2 or 3, but I need to know whether I can set a SYS FAN header to run at full speed (100%) through some BIOS option. Is that possible? That way I can leave CPU FAN 1 and 2 free for fans.

    Quote from: madmecca on 14-May-15, 06:56:08
    I have my h100i fans on cpu 1 and 2 and pump on system fan 3 no problems
    Can you run the sys fan that drives the pump at maximum speed?

  • FTP problem: connection going idle or missing; takes a minute to refresh

    Hi, I hope I explain myself well; please forgive my lack of tech language and any redundancy:
    In my office we have a G5 dual that we turned into a server (Mac OS X Server 10.5.7) for a couple of externally used websites. I need these guys to have FTP access for some file sharing. The server is up, websites and databases are running fine, remote administration on the local and external network is flawless; everything seems fine except FTP. When I connect with Cyberduck or any other FTP app, authentication completes quickly, but then the connection goes idle (as if it didn't exist), so the folder listing takes about a minute. Then, as soon as you get the listing, if you try to put something (e.g. a big 50 MB file, over the local network), the app makes a quick login but then goes idle again (same, about a minute), and when the connection appears the transfer goes really fast; after the transfer ends the connection goes idle again for a minute before the folder listing comes back. I know the connection "goes idle" or "disappears" because I started using Little Snitch to watch it. It looks as if the FTP app weren't even trying to connect; then suddenly it's there and connects/transfers. Every operation takes this minute to get going. If I try it with a browser, it times out. This happens on both local and external connections, but every other service works fine.
    Can you give me any ideas about this matter?
    Message was edited by: rdlfo

    Ok, rather than going for the most problematic protocols known to modern networking, try with something simple.
    Don't test with ftp, test with sftp.
    I would initially suspect you're running into issues with firewalls here; beyond transmitting your credentials in cleartext, the design of ftp is inherently extremely allergic to firewalls, and particularly to transfers over connections that involve both local and remote firewalls.
    ftp is funky here in that it needs two connections between the client and the server, and the second connection tends to get blocked. An ftp active-mode connection has a back-connect from the ftp server to an ephemeral port on the client; traversing firewalls from server to client. An ftp passive-mode connection has a forward connection; a second parallel connection from the client to a specified (usually) ephemeral port on the ftp server; traversing the firewalls.
    Ignoring explicit transfer-mode selection, various clients can (transparently) try to switch between active and passive, too.
    My preferred approach is to avoid ftp. At all. Use sftp. sftp is far easier to punch through firewalls. And sftp doesn't transmit your username and password in cleartext, for that matter.
    The other part of this effort is around ensuring proper file protections and ownership in the web server directories. The web-facing file ownership should be user:www (often root:www) and the www user (the web server) should be able to read its web files, but (in general) should not be able to write to the web files or directories. This is defensive.

  • Need to print simultaneously via the network/TCP/IP & also print parallel from stand-alone PC

    1. HP LaserJet 4200 Q2427A
    2. Win7 - 32bit
    3. When I print from the networked PC (which is on the same subnet as the printer), prints come out, no problems. Then I need to print from a standalone PC (no connection to the corporate LAN, but it does have a separate DSL connection to the internet) connected via the parallel cable. Prints fine, no problems. When I then go back to the networked PC and print, the job takes 10-15 minutes to print, if it prints at all; sometimes it does not.
    4. The standalone PC was replaced with a new Windows 7 PC. The previous standalone PC was running Windows XP, and we did not experience any issues when it was XP. The PC was replaced due to its age and the hard drive going bad. The Windows 7 PC has a PCI parallel card installed for the parallel connection.
    I have also tried using a parallel-to-USB adapter, with the same results.

    That would be Wireless Distribution System (WDS). That is to say Wireless to Wireless. And if I recall correctly that option is not available to you.
    Bridge Mode is bridging between ethernet and WiFi.
    One way you could maybe do what you want without ethernet-over-powerline adaptors is to get an AirPort Express (more money) and attach it to the Pirelli via ethernet. Put the Express into Bridge Mode. Now have the Express provide the WDS server to the AirPort Extreme Base Station. Of course, you now have 3 WiFi transmitters in your home just so you can print something, but just think of the coverage.
    NOTE: If for some reason you decide to try this Express to AEBS WDS system, you can create a roaming network by giving all 3 WiFi systems the same SSID and password. When everyone is using the same SSID and password you can roam around the house and your laptops will automatically switch to the base station with the strongest signal (assuming that is worth something to you).

  • JCo Increasing maximum number of connections

    Hi,
    I have some issues with the RFC adapter. I need to increase the maximum number of connections for the RFC adapter JCo connection pool. I have increased the maximum connections in the RFC receiver adapter, but this does not seem to work.
    Any help on this is highly appreciated.
    Regards,
    Jai Shankar

    >>> are you looking for maintaining multiple parallel connections?
    By default the RFC adapter uses connection pooling to increase performance. But the problem is I am getting an error msg:
    Error while lookup Problem when calling an adapter by using communication channel CC_NSA0004_GIL_RFC_Product_RCV (Party: , Service: <BS_XXX>, Object ID: 2f929451f8d83ce8af30b7275120c14d) XI AF API call failed. Module exception: 'error while processing the request to rfc-client: com.sap.aii.af.rfc.afcommunication.RfcAFWException: error while processing message to remote system:com.sap.aii.af.rfc.core.client.RfcClientException: resource error: could not get a client from JCO.Pool: com.sap.mw.jco.JCO$Exception: (106) JCO_ERROR_RESOURCE: Connection pool RfcClient[CC_XXX]2f929451f8d83ce8af30b7275120c14d is exhausted. The current pool size limit (max connections) is 4 connections.'. Cause Exception: 'error while processing message to remote system:com.sap.aii.af.rfc.core.client.RfcClientException: resource error: could not get a client from JCO.Pool: com.sap.mw.jco.JCO$Exception: (106) JCO_ERROR_RESOURCE: Connection pool RfcClient[CC_XXX]2f929451f8d83ce8af30b7275120c14d is exhausted. The current pool size limit (max connections) is 4 connections.'
    Note: I have set the max connections to 20 in the RFC receiver adapter.
    Regards,
    Jai Shankar

  • G4 powerbook doesn't recognize epson 3000 using usb parallel printer adapter

    I downloaded and installed the driver per instructions from Epson (epson10860). My Epson 3000 does not appear in the list of printers in the Mac Printer Setup Utility (I selected Epson USB in the window after selecting "More Printers"). I'm using a USB-to-parallel printer adapter. Anyone know why it's not recognizing my printer?

    The parallel adapter is the most likely problem, and it probably will never work; from what I've seen, OS X does not support parallel connections.
    Your best bet is to buy a new printer.

  • Exporting whole database (10GB) using Data Pump export utility

    Hi,
    I have a requirement to export the whole database (10 GB) using the Data Pump export utility, because it is not possible to send the 10 GB dump on a single CD/DVD to the system vendor of our application (to analyze a few issues we have).
    Now, when I checked online, a full export is available, but I am not able to understand how it works, as we have never used this Data Pump utility; we use the normal export method. Also, will Data Pump reduce the size of the dump file so it can fit on a DVD, or can we use a parallel full DB export to split the files and spread them across DVDs; is that possible?
    Please correct me if I am wrong, and kindly help.
    Thanks for your help in advance.

    You need to create a directory object.
    sqlplus user/password
    create directory foo as '/path_here';
    grant all on directory foo to public;
    exit;
    Then run your expdp command.
    Data Pump can compress the dumpfile if you are on 11.1 and have the appropriate options. The reason for specifying FILESIZE is to limit the size of each dumpfile. If you have 10 GB, are not compressing, and the total dumpfiles come to 10 GB, then by specifying 600 MB you will get 10 GB / 600 MB = 17 dumpfiles of 600 MB each. You will have to send them 17 CDs (probably a few more, since dumpfiles don't get filled to 100% when running in parallel).
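    A sketch of such a command (directory, file names and credentials are placeholders; COMPRESSION=ALL requires 11.1 and the appropriate option/licensing):
    expdp user/password DIRECTORY=foo DUMPFILE=full_%U.dmp LOGFILE=full.log FULL=Y FILESIZE=600M COMPRESSION=ALL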
    Data Pump dumpfiles are written by the server, not the client, so the dumpfiles don't get created in the directory where the job is run.
    Dean

  • Connect by prior subquery - performance problem

    Hello,
    I have some data which is organized in a folder tree. The requirement is to be able to search from any subfolder down.
    /Documents
    ___Folder A
    ______Doc A
    ______Doc B
    ___Folder B
    ______Doc C
    ______Doc D
    The folder structure is defined in a table called CORNERS where the records (= folders) have an ID/PARENTID relationship describing the folder structure.
    Another table, called MASTER, contains the main content. Each item has a CORNERID value which defines in which subfolder the document is located.
    MASTER
    ID   CORNERID  TITLE  INDEX_URL
    100  2         Doc A  http://xxx/yy.com
    101  2         Doc B  http://xxz/yy.com
    102  3         Doc C  http://xyz/yy.com
    103  3         Doc D  http://xyz/zz.com
    CORNERS
    ID  PARENTID  NAME
    1             Documents
    2   1         Folder A
    3   1         Folder B
    MASTER table has ~50000 records
    CORNERS has ~900 records.
    Analyzed nightly, and stats are fresh.
    Indexes defined:
    CORNERS_ID_PARENT_IDX corners(id,parentid)
    CORNERS_PARENT_ID_IDX corners(parentid,id)
    MASTER_ID_CORNERID_IDX master(id,cornerid)
    MASTER_CORNERID_ID_IDX master(cornerid,id)
    Oracle Text index (URL based) on MASTER.INDEX_URL
    Foreign key defined:
    MASTER.CORNERID references CORNERS.ID
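    (For reference, a minimal DDL sketch matching the description above; the column names come from the post, the data types and lengths are assumed:)
    CREATE TABLE corners (
      id       NUMBER PRIMARY KEY,
      parentid NUMBER,
      name     VARCHAR2(100)          -- type/length assumed
    );
    CREATE TABLE master (
      id        NUMBER,
      cornerid  NUMBER REFERENCES corners(id),
      title     VARCHAR2(200),        -- type/length assumed
      index_url VARCHAR2(500)         -- type/length assumed
    );
    CREATE INDEX corners_parent_id_idx ON corners(parentid, id);
    CREATE INDEX master_cornerid_id_idx ON master(cornerid, id);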
    If I do a search without involving the hierarchy, then the search runs pretty fast:
    SQL> SELECT COUNT(*) FROM (SELECT a.id, a.cornerid FROM MASTER a WHERE (CONTAINS(title,'$ADS AND {S} AND $PARAMETER',2) > 1 OR CONTAINS(index_url,'$ADS AND {S} AND $PARAMETER',1) > 1) );
    COUNT(*)
    5125
    Elapsed: 00:00:00.14
    Execution Plan
    0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1354 Card=1 Bytes=158)
    1    0   SORT (AGGREGATE)
    2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MASTER' (Cost=1354 Card=758 Bytes=119764)
    3    2       BITMAP CONVERSION (TO ROWIDS)
    4    3         BITMAP OR
    5    4           BITMAP CONVERSION (FROM ROWIDS)
    6    5             SORT (ORDER BY)
    7    6               DOMAIN INDEX OF 'MASTER_TITLE_IDX' (Cost=470)
    8    4           BITMAP CONVERSION (FROM ROWIDS)
    9    8             SORT (ORDER BY)
   10    9               DOMAIN INDEX OF 'MASTER_IDX' (Cost=650)
    Statistics
    1462 recursive calls
    0 db block gets
    5507 consistent gets
    347 physical reads
    0 redo size
    380 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    2 sorts (memory)
    0 sorts (disk)
    1 rows processed
    SQL>
    BUT, if I add a subquery to limit the search to a certain folder tree (which includes ~200 nodes), the performance is really badly affected. The subquery itself runs fast (around 0.07 seconds), but together with the rest of the query the performance is really bad:
    SQL> SELECT COUNT(*) FROM (SELECT a.id, a.cornerid FROM MASTER a WHERE (CONTAINS(title,'$ADS AND {S} AND $PARAMETER',2) > 1 OR CONTAINS(index_url,'$ADS AND {S} AND $PARAMETER',1) > 1) AND cornerid IN ( SELECT ID FROM corners START WITH id = 2434 CONNECT BY PRIOR id = parentid) );
    COUNT(*)
    942
    Elapsed: 00:00:01.83
    Execution Plan
    0      SELECT STATEMENT Optimizer=CHOOSE (Cost=118 Card=1 Bytes=175)
    1    0   SORT (AGGREGATE)
    2    1     TABLE ACCESS (BY INDEX ROWID) OF 'MASTER' (Cost=19 Card=1 Bytes=162)
    3    2       NESTED LOOPS (Cost=118 Card=8 Bytes=1400)
    4    3         VIEW OF 'VW_NSO_1' (Cost=2 Card=6 Bytes=78)
    5    4           SORT (UNIQUE)
    6    5             CONNECT BY (WITH FILTERING)
    7    6               NESTED LOOPS
    8    7                 INDEX (UNIQUE SCAN) OF 'SYS_C002969' (UNIQUE) (Cost=1 Card=1 Bytes=4)
    9    7                 TABLE ACCESS (BY USER ROWID) OF 'CORNERS'
   10    6               NESTED LOOPS
   11   10                 BUFFER (SORT)
   12   11                   CONNECT BY PUMP
   13   10                 INDEX (RANGE SCAN) OF 'CORNERS_PARENT_ID_IDX' (NON-UNIQUE) (Cost=2 Card=6 Bytes=48)
   14    3         INDEX (RANGE SCAN) OF 'MASTER_CORNERID_ID_IDX' (NON-UNIQUE) (Cost=1 Card=38)
    Statistics
    29267 recursive calls
    0 db block gets
    55414 consistent gets
    140 physical reads
    0 redo size
    380 bytes sent via SQL*Net to client
    503 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    12 sorts (memory)
    0 sorts (disk)
    1 rows processed
    I've tried an alternative syntax, using a WITH clause instead of the IN clause, like this:
    SELECT COUNT(*) FROM (
    WITH folders AS (
    SELECT ID
    FROM CORNERS
    START WITH ID=2434
    CONNECT BY PRIOR ID= PARENTID
    )
    SELECT a.id
    FROM MASTER a, folders b
    WHERE a.cornerid = b.id
    AND CONTAINS(index_url,'$ADS AND {S} AND $PARAMETER',1) > 1);
    It does run faster, but still takes around 1 second.
    Any suggestions on how to make this run faster?
    Thanks in advance!
    -Mats

    How long does it take to complete the query?
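    (One workaround worth sketching here: materialize the folder subtree once, then probe it, so the CONNECT BY runs a single time instead of being woven into the main plan. The temporary table is my addition, not from the thread:)
    CREATE GLOBAL TEMPORARY TABLE folder_ids (
      id NUMBER PRIMARY KEY
    ) ON COMMIT PRESERVE ROWS;

    INSERT INTO folder_ids
    SELECT id FROM corners
    START WITH id = 2434
    CONNECT BY PRIOR id = parentid;

    SELECT COUNT(*)
    FROM master a
    WHERE (CONTAINS(title,'$ADS AND {S} AND $PARAMETER',2) > 1
       OR CONTAINS(index_url,'$ADS AND {S} AND $PARAMETER',1) > 1)
    AND a.cornerid IN (SELECT id FROM folder_ids);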

  • Slow connect by prior ... start with subquery in 9i

    Has anyone come across a performance problem (compared to 8i) when using hierarchical queries where the START WITH list is generated by a subquery? The culprit seems to be an extra visit to the subquery block as part of the CONNECT BY WITH FILTERING operation.
    For example, take a simple tree structure:
    CREATE TABLE tree (
    id NUMBER,
    parentid NUMBER,
    label VARCHAR2(30), -- assumed: the query below selects a label column
    CONSTRAINT tree_pk PRIMARY KEY (id)
    );
    ...and a subquery - here just a table called sample with a subset of the ids from the tree table:
    CREATE TABLE sample (
    id NUMBER,
    CONSTRAINT sample_pk PRIMARY KEY (id)
    );
    ...with which to drive the start points of the treewalk:
    SELECT parentid, id, label
    FROM tree
    CONNECT BY PRIOR parentid = id
    START WITH id IN
    (
    SELECT id FROM SAMPLE
    );
    With the tables populated and analyzed, I get this from 8i:
    Execution Plan
    0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=19)
    1    0   CONNECT BY
    2    1     NESTED LOOPS (Cost=1 Card=1280 Bytes=10240)
    3    2       INDEX (FAST FULL SCAN) OF 'ID_PK' (UNIQUE) (Cost=1 Card=1280 Bytes=5120)
    4    2       INDEX (UNIQUE SCAN) OF 'TREE_PK' (UNIQUE)
    5    1     TABLE ACCESS (BY USER ROWID) OF 'TREE'
    6    1     TABLE ACCESS (BY INDEX ROWID) OF 'TREE' (Cost=2 Card=1 Bytes=19)
    7    6       INDEX (UNIQUE SCAN) OF 'TREE_PK' (UNIQUE) (Cost=1 Card=1)
    Statistics
    0  recursive calls
    4  db block gets
    15687  consistent gets
    59  physical reads
    0  redo size
    223313  bytes sent via SQL*Net to client
    38276  bytes received via SQL*Net from client
    343  SQL*Net roundtrips to/from client
    3  sorts (memory)
    0  sorts (disk)
    5120  rows processed
    and this is 9i:
    Execution Plan
    0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=19)
    1    0   CONNECT BY (WITH FILTERING)
    2    1     NESTED LOOPS
    3    2       NESTED LOOPS (Cost=2 Card=1280 Bytes=10240)
    4    3         INDEX (FAST FULL SCAN) OF 'ID_PK' (UNIQUE) (Cost=2 Card=1280 Bytes=5120)
    5    3         INDEX (UNIQUE SCAN) OF 'TREE_PK' (UNIQUE)
    6    2       TABLE ACCESS (BY USER ROWID) OF 'TREE'
    7    1     NESTED LOOPS
    8    7       BUFFER (SORT)
    9    8         CONNECT BY PUMP
   10    7       TABLE ACCESS (BY INDEX ROWID) OF 'TREE' (Cost=2 Card=1 Bytes=19)
   11   10         INDEX (UNIQUE SCAN) OF 'TREE_PK' (UNIQUE) (Cost=1 Card=20480)
   12    1     INDEX (UNIQUE SCAN) OF 'SAMPLE_PK' (UNIQUE) (Cost=1 Card=1 Bytes=4)
    Statistics
    1  recursive calls
    1  db block gets
    20525  consistent gets
    72  physical reads
    120  redo size
    224681  bytes sent via SQL*Net to client
    38281  bytes received via SQL*Net from client
    343  SQL*Net roundtrips to/from client
    9  sorts (memory)
    0  sorts (disk)
    5120  rows processed
    ...so, about another 5000 logical reads, corresponding to the extra access of the sample table at the bottom of the query plan. So instead of just visiting the START WITH subquery once, to kick off the treewalk, I seem to be revisiting it for every row returned. Not too bad if that happens to be a unique index scan as here but that's not always the case.
    I know I've got new options for re-writing this as a join under 9i, I'm just curious about those extra lookups and why they're necessary.
    Cheers - Andrew

    There is an undocumented parameter in Oracle 9i, "_old_connect_by_enabled",
    which controls the behaviour of hierarchical queries in 9i and above.
    You can try to return to the 8i behaviour using it:
    SQL> SELECT parentid, id
      2  FROM tree
      3  CONNECT BY PRIOR parentid = id
      4  START WITH id IN
      5  (
      6  SELECT id FROM SAMPLE
      7  )
      8  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=26)
       1    0   CONNECT BY (WITH FILTERING)
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TREE' (TABLE)
       3    2       NESTED LOOPS (Cost=2 Card=1 Bytes=26)
       4    3         TABLE ACCESS (FULL) OF 'SAMPLE' (TABLE) (Cost=2 Card=1 Bytes=13)
       5    3         INDEX (UNIQUE SCAN) OF 'TREE_PK' (INDEX (UNIQUE)) (Cost=0 Card=1 Bytes=13)
       6    1     NESTED LOOPS
       7    6       BUFFER (SORT)
       8    7         CONNECT BY PUMP
       9    6       TABLE ACCESS (BY INDEX ROWID) OF 'TREE' (TABLE) (Cost=1 Card=1 Bytes=26)
      10    9         INDEX (UNIQUE SCAN) OF 'TREE_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
      11    1     TABLE ACCESS (FULL) OF 'TREE' (TABLE) (Cost=1 Card=1 Bytes=26)
      12    1     INDEX (UNIQUE SCAN) OF 'SAMPLE_PK' (INDEX (UNIQUE)) (Cost=1 Card=1 Bytes=13)
    SQL> alter session set "_old_connect_by_enabled" = TRUE;
    Session altered.
    SQL> SELECT parentid, id
      2  FROM tree
      3  CONNECT BY PRIOR parentid = id
      4  START WITH id IN
      5  (
      6  SELECT id FROM SAMPLE
      7  )
      8  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=26)
       1    0   CONNECT BY
       2    1     NESTED LOOPS (Cost=2 Card=1 Bytes=26)
       3    2       TABLE ACCESS (FULL) OF 'SAMPLE' (TABLE) (Cost=2 Card=1 Bytes=13)
       4    2       INDEX (UNIQUE SCAN) OF 'TREE_PK' (INDEX (UNIQUE)) (Cost=0 Card=1 Bytes=13)
       5    1     TABLE ACCESS (BY USER ROWID) OF 'TREE' (TABLE)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'TREE' (TABLE) (Cost=1 Card=1 Bytes=26)
       7    6       INDEX (UNIQUE SCAN) OF 'TREE_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
    Rgds.

  • Slow connect by ... start with subquery in 9i

    (The question body is identical to "Slow connect by prior ... start with subquery in 9i" above.)

    Hi Andrew,
    Just noticed your message. I have the exact same performance problem. It's just killing any other processes and runs forever.
    Could you please share your experience of how to deal with CONNECT BY in 9i, and also tell us about this option to re-write CONNECT BY as a join?
    Thank you very much,
    Victor
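    (The join rewrite Andrew mentioned isn't shown in the thread. One commonly cited workaround for the re-probing is to pin the start set with 9iR2 subquery factoring so it is evaluated once; a sketch, using the undocumented MATERIALIZE hint, whose effect is version-dependent:)
    WITH start_ids AS (
      SELECT /*+ MATERIALIZE */ id FROM sample
    )
    SELECT t.parentid, t.id
    FROM tree t
    START WITH t.id IN (SELECT id FROM start_ids)
    CONNECT BY PRIOR t.parentid = t.id;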

  • Decreasing amount of RFC-Destinations when using UD-Connect?

    Hello,
    I created an RFC destination in SM59 as well as a JCo RFC Provider and used the SAP manual for UD-Connect/JDBC to a "foreign" Sybase DB. The connection is working properly and all DB data is accessible, but our customer claims that there are more than 25 parallel RFC connections from our Web AS to his Sybase DB. My question is: how can I decrease the number of parallel connections? I don't think 25 connections to the Sybase DB are necessary. I know there is the maxConnection property in the JCo RFC Provider, but this parameter globally affects all RFC connections, not only those associated with the Sybase DB.
    Is there actually a way to customize the number of parallel connections via JDBC?
    Best regards
    Danny

    Hi,
    use transaction "smqs", select the destination, click on registration and change the max number of connections.
    Try to define an RFC server group to control connections...
    Regards
    Ben

  • Datapump - Parallelism is not working

    Hello,
    I am running 11.1.0.7 on AIX.
    I am taking an expdp of a table using the value 4 for the parameter PARALLEL:
    expdp SYSTEM TABLES=MYTEST.HISTORY DIRECTORY=EXPORT_FILES DUMPFILE=TEST_HIST_%U.EXPDP.DMP LOGFILE=TEST_HIST.EXPDP.LOG PARALLEL=4
    But I see only two dumpfiles created, and it seems like most of the data is going to only one:
    ls -ltr
    total 286757112
    -rw-r-----    1 oracle   staff         32768 Jan 17 15:38 TEST_HIST_02.EXPDP.DMP
    -rw-r-----    1 oracle   staff    19154370560 Jan 17 15:38 TEST_HIST_01.EXPDP.DMP
    Why this behaviour? I thought the data would be distributed across 4 different dumpfiles, as I set the job to run in parallel mode and I have 6 CPUs in the box.
    Thanks in advance!

    This has nothing to do with the parallelism set for the table. DO NOT CHANGE TABLE PARALLELISM for Data Pump. Sorry for the shout, but the parallelism setting on the table does not change anything that Data Pump looks at; that suggestion is wrong.
    There may be many reasons you only get two dumpfiles. First, let me explain how expdp works with parallelism. When expdp starts, the first work item to be assigned to a worker process is exporting the metadata. The first part of this request is the 'estimation' phase. This phase gets the names of the tables/partitions/subpartitions that need to be exported. This information is sent to the MCP process so it can schedule the data unload. The data unload will start right away if parallel is greater than 1. The worker process that did the estimation now starts unloading metadata, which is written to file #1. In your case, parallel=4, so the MCP will try to split up the data unload that needs to be done. The data can be broken up into 1 to n jobs. The decision on how many jobs to create is based on factors such as:
    1. Generally, exporting data using direct path is n times faster than external tables.
    2. Direct path does not support parallelism on a single table. This means that worker 2 could be assigned table1 via direct path and worker 3 could be assigned table2; parallelism is achieved by unloading 2 tables at the same time.
    3. Some attributes of tables are not supported by direct path, so if a table has those attributes, external tables must be chosen. External tables support parallelism on a single table, but some attributes prohibit single-table parallelism.
    4. If the table is not larger than x MB, the overhead of setting up an external table is not worth the parallelism, so direct path (parallel 1) is used.
    And the list goes on. From what I can see, you had one worker exporting metadata and writing it to one file, and another worker exporting the data and writing to your second file. The data for that table was exported using parallel 1. Not sure why, but because you only had 2 dump files, that is the only scenario I can come up with.
    Can you do this and post the results:
    Use your expdp command, add ESTIMATE=BLOCKS, and post the results from the estimate lines. I might be able to tell from that information why the data was exported using parallel 1.
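    For example, the original command with the estimate added (ESTIMATE=BLOCKS is a standard expdp parameter):
    expdp SYSTEM TABLES=MYTEST.HISTORY DIRECTORY=EXPORT_FILES DUMPFILE=TEST_HIST_%U.EXPDP.DMP LOGFILE=TEST_HIST.EXPDP.LOG PARALLEL=4 ESTIMATE=BLOCKS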
    Dean

  • FILESIZE parameter in DATA PUMP

    Hi All,
    As per the Data Pump syntax, if we define the FILESIZE parameter, it creates dump files capped at the specified size.
    But my question is: if I omit the FILESIZE parameter, how does Oracle determine the file size?
    I am creating the export with the following parameters. It creates dump files named SCHEMA.ENV.080410..p1%U.dmp, with %U becoming 1, 2, 3, etc.
    It creates the files with different sizes.
    JOB_NAME=SCHEMA.ENV.080410..p1
    DIRECTORY=dump_dir
    DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
    LOGFILE=SCHEMA.ENV.080410..p1.explog
    PARALLEL=16
    CONTENT=ALL
    EXCLUDE=INDEX,CONSTRAINT,TABLE_STATISTICS
    TABLES= TABLE NAMES

    user4005330 wrote:
    Hi All,
    As per the Data Pump syntax, if we define the FILESIZE parameter, it creates dump files capped at the specified size.
    But my question is: if I omit the FILESIZE parameter, how does Oracle determine the file size?
    I am creating the export with the following parameters. It creates dump files named SCHEMA.ENV.080410..p1%U.dmp, with %U becoming 1, 2, 3, etc.
    It creates the files with different sizes.
    JOB_NAME=SCHEMA.ENV.080410..p1
    DIRECTORY=dump_dir
    DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
    LOGFILE=SCHEMA.ENV.080410..p1.explog
    PARALLEL=16
    CONTENT=ALL
    EXCLUDE=INDEX,CONSTRAINT,TABLE_STATISTICS
    TABLES= TABLE NAMES
    As you defined PARALLEL=16, Data Pump will create 16 processes and each process will write to its own file; that's why you get different-sized files.
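    If evenly sized files are wanted, a FILESIZE cap can be added to the same parameter file (the 2G value is illustrative):
    JOB_NAME=SCHEMA.ENV.080410..p1
    DIRECTORY=dump_dir
    DUMPFILE=dump_dir:SCHEMA.ENV.080410..p1%U.dmp
    LOGFILE=SCHEMA.ENV.080410..p1.explog
    PARALLEL=16
    FILESIZE=2G
    CONTENT=ALL
    EXCLUDE=INDEX,CONSTRAINT,TABLE_STATISTICS
    TABLES= TABLE NAMES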
