Performance issue with Oracle Global Temporary table

Hi
Oracle version : 10.2.0.3.0 - Production
We have an application in Java / Oracle. User requests come in as XML; an Oracle parser parses each request and inserts it into global temporary tables, and then a business stored procedure picks the data up from these GTTs and does the required processing.
At the end, the required response data is inserted into response GTTs, from which the response XML is generated.
Question: does the use of global temporary tables in Oracle degrade performance, given that we have a large number of GTTs in our application (approximately 500-600 such tables)?
Regards,
Vikas Kumar

Hi All,
Here is architecture of my application:
The Java application creates XML from the screen values and then inserts that XML
into a framework table (separate DB schema). Java then calls a stored procedure in the same framework schema, and in that SP we have the following steps:
1. It fetches the XML from the XMLType table and inserts it into a screen-specific XMLType table in the framework DB schema. This table has a trigger which parses the XML and then inserts the XML values into GTTs that are created in separate product schemas.
2. It calls the product SP, which contains the business logic. The product SP does the processing and then inserts the response into a response GTT.
3. The response XML is created using an XML generation function and the response GTT.
I hope you will understand my architecture this time; now let me know whether GTTs are a good fit in this scenario or not. Also please note that I need the data in the GTTs only during execution, not after that, and I don't want to do explicit deletes, which I would have to do if I were using normal tables.
Regards,
Vikas Kumar
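
For what it's worth, the "no explicit delete" requirement is exactly what the ON COMMIT clause of a global temporary table controls. A minimal sketch, assuming a made-up table name and columns purely for illustration:

CREATE GLOBAL TEMPORARY TABLE request_gtt   -- hypothetical name, for illustration only
(
  request_id  NUMBER,
  payload     VARCHAR2(4000)
)
ON COMMIT DELETE ROWS;   -- rows disappear automatically at COMMIT; no explicit DELETE needed

-- ON COMMIT PRESERVE ROWS would instead keep the rows until the session ends;
-- either way, the data is visible only to the session that inserted it.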

Similar Messages

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I’ve been doing some performance tests with a large version-enabled partitioned table and it seems that the CBO optimizer is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    Operation                               Object Name                      Rows  Bytes  Cost  PStart  PStop
    UPDATE STATEMENT Optimizer Mode=CHOOSE                                      1           249
    UPDATE                                  SIG.SIG_QUA_IMG_LT
    NESTED LOOPS SEMI                                                           1    266    249
    PARTITION RANGE ALL                                                                           1       9
    TABLE ACCESS FULL                       SIG.SIG_QUA_IMG_LT                  1    259      2   1       9
    VIEW                                    SYS.VW_NSO_1                        1      7    247
    NESTED LOOPS                                                                1    739    247
    NESTED LOOPS                                                                1    677    247
    NESTED LOOPS                                                                1    412    246
    NESTED LOOPS                                                                1    114    244
    INDEX RANGE SCAN                        WMSYS.MODIFIED_TABLES_PK            1     62      2
    INDEX RANGE SCAN                        SIG.QIM_PK                          1     52    243
    TABLE ACCESS BY GLOBAL INDEX ROWID      SIG.SIG_QUA_IMG_LT                  1    298      2   ROWID   ROW L
    INDEX RANGE SCAN                        SIG.SIG_QUA_IMG_PKI$                1             1
    INDEX RANGE SCAN                        WMSYS.WM$NEXTVER_TABLE_NV_INDX      1    265      1
    INDEX UNIQUE SCAN                       WMSYS.MODIFIED_TABLES_PK            1     62
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */                                        
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known issues with version enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'Partition Range All' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has provided the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been recently analyzed so that the optimizer has the most current data about the table.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
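
    As an aside, the "recently analyzed" suggestion above usually means a DBMS_STATS call along these lines; a sketch only, with the schema and table taken from the plan in the question and the options left at simple defaults:
    BEGIN
       DBMS_STATS.GATHER_TABLE_STATS(
          ownname => 'SIG',               -- schema shown in the plan above
          tabname => 'SIG_QUA_IMG_LT',    -- version-enabled table shown in the plan above
          cascade => TRUE);               -- gather index statistics as well
    END;
    /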

  • Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1

    Hello,
    I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 and RedHat Linux 2.1. The processor goes berserk at 100% for long periods of time (some 5 min.), and all the RAM gets used.
    Some environment characteristics:
    Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
    OS: RedHat Linux 2.1 Enterprise.
    Oracle: Oracle Enterprise Edition 9.2.0.4
    Application: We have a small web-application with 10 users (for now) and very basic queries (all in stored procedures). Also we use the latest version of ODP.NET with default connection settings (some low pooling, etc).
    Does anyone know what could be going on?
    Is anybody else having this similar behavior?
    We changed from SQL Server, so we are not the world's experts on the matter. But we want a reliable system nonetheless.
    Please help us out; give some tips, tricks, or guides…
    Thanks to all,
    Frank

    Thank you very much and sorry I couldn’t write sooner. It seems that the administrator doesn’t see the kswap going on so much, so I don’t really know what is going on.
    We are looking at some queries and some indexing, but this is nuts; if I had some poor queries, which we don't really, the server would show a spike, right?
    But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
    So now what? They are already talking about MS-SQL; please help me out here, this is crazy!!!
    We have maybe the most powerful combination here. What is Oracle doing?
    We even killed the IIS worker process so that no one was doing anything with the database, and still those two processes keep going.
    Can some one help me?
    Thanks,
    Frank
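
    For what it's worth, one generic way to see what those two Oracle processes are actually doing is to map their OS pids to database sessions; this is only a sketch, and the pid and hash values are placeholders:
    SELECT s.sid, s.serial#, s.username, s.status, s.sql_hash_value, p.spid
      FROM v$session s, v$process p
     WHERE s.paddr = p.addr
       AND p.spid IN ('1234', '5678');   -- placeholders: the OS pids of the two busy processes
    SELECT sql_text
      FROM v$sqltext
     WHERE hash_value = 1234567890       -- placeholder: the sql_hash_value returned above
     ORDER BY piece;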

  • Deadlock with CREATE GLOBAL TEMPORARY TABLE

    I got this error
    ORA-00604: error occurred at recursive SQL level 1
    ORA-00060: deadlock detected while waiting for resource
    while trying to create a global temporary table.
    Table creation command:
    CREATE GLOBAL TEMPORARY TABLE ITUSER.T_0091FBDG ("GOD" char(4) DEFAULT (' ') NOT NULL,"UNKUM" number(10,0) DEFAULT (0) NOT NULL,[a lot of other fields]) ON COMMIT PRESERVE ROWS
    There are no outer references in the command. So does somebody know where the deadlock comes from?
    I'm using Oracle 10g.
    Edited by: LeopoldStoch on Apr 13, 2010 7:04 AM

    I have grabbed the log files, but they make me even more curious. Here they are:
    alert_itdb.log
    Thread 1 advanced to log sequence 253 (LGWR switch)
    Current log# 1 seq# 253 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\REDO01.LOG
    Tue Apr 13 10:53:09 2010
    Thread 1 advanced to log sequence 254 (LGWR switch)
    Current log# 2 seq# 254 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\REDO02.LOG
    Tue Apr 13 10:55:32 2010
    Thread 1 advanced to log sequence 255 (LGWR switch)
    Current log# 3 seq# 255 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\REDO03.LOG
    Tue Apr 13 10:55:49 2010
    ORA-00060: Deadlock detected. More info in file c:\oracle\product\10.2.0\admin\itdb\udump\itdb_ora_3868.trc.
    Tue Apr 13 11:01:58 2010
    Thread 1 advanced to log sequence 256 (LGWR switch)
    Current log# 1 seq# 256 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\REDO01.LOG
    Tue Apr 13 11:03:29 2010
    Thread 1 advanced to log sequence 257 (LGWR switch)
    Current log# 2 seq# 257 mem# 0: C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\REDO02.LOG
    Tue Apr 13 11:14:16 2010
    itdb_ora_3868.trc
    Dump file c:\oracle\product\10.2.0\admin\itdb\udump\itdb_ora_3868.trc
    Tue Apr 13 10:55:48 2010
    ORACLE V10.2.0.4.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Release 10.2.0.4.0 - Production
    Windows NT Version V5.2 Service Pack 2
    CPU : 2 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:3568M/8188M, Ph+PgF:7889M/12090M, VA:579M/2799M
    Instance name: itdb
    Redo thread mounted by this instance: 1
    Oracle process number: 18
    Windows thread id: 3868, image: ORACLE.EXE (SHAD)
    *** 2010-04-13 10:55:48.874
    *** ACTION NAME:() 2010-04-13 10:55:48.811
    *** MODULE NAME:(5436,20100413105316,82100) 2010-04-13 10:55:48.811
    *** SERVICE NAME:(ITDB) 2010-04-13 10:55:48.811
    *** SESSION ID:(145.52602) 2010-04-13 10:55:48.811
    DEADLOCK DETECTED ( ORA-00060 )
    [Transaction Deadlock]
    The following deadlock is not an ORACLE error. It is a
    deadlock due to user error in the design of an application
    or from issuing incorrect ad-hoc SQL. The following
    information may aid in determining the deadlock:
    Deadlock graph:
    ---------Blocker(s)-------- ---------Waiter(s)---------
    Resource Name process session holds waits process session holds waits
    TX-00080011-0000198d 18 145 X 17 158 S
    TX-0006001c-0000192d 17 158 X 18 145 S
    session 145: DID 0001-0012-00000164 session 158: DID 0001-0011-000005F5
    session 158: DID 0001-0011-000005F5 session 145: DID 0001-0012-00000164
    Rows waited on:
    Session 158: obj - rowid = 00000000 - D/////AACAAAKy6AAA
    (dictionary objn - 0, file - 2, block - 44218, slot - 0)
    Session 145: obj - rowid = 00000000 - D/////AACAAAABJAAA
    (dictionary objn - 0, file - 2, block - 73, slot - 0)
    Information on the OTHER waiting sessions:
    Session 158:
    pid=17 serial=2727 audsid=241067 user: 60/ITUSER
    O/S info: user: VSC03\it_appsrv, term: VSC03, ospid: 2328:2640, machine: WORKGROUP\VSC03
    program: AppServer.exe
    client info: ELIZ-041.R#046IKSANOV.RIKSANOV.VSC03:8223.2328.5-1-2600.IT_APPSR
    application name: 5436,20100413105322,82000, hash value=3107059750
    Current SQL Statement:
    CREATE GLOBAL TEMPORARY TABLE ITUSER.T_009EFBDN ("GOD" char(4) DEFAULT (' ') NOT NULL,"UNKUM" number(10,0) DEFAULT (0) NOT NULL,"CENA" number(15,5) DEFAULT (0) NOT NULL,"EDI" number(3,0) NULL,"EDI2" number(3,0) NULL,"KOL" number(14,5) DEFAULT (0) NOT NULL,"KOL2" number(14,5) DEFAULT (0) NOT NULL,"SUMMA" number(16,2) DEFAULT (0) NOT NULL,"KOL_N1" number(14,5) DEFAULT (0) NOT NULL,"KOL_N2" number(14,5) DEFAULT (0) NOT NULL,"KOL_N3" number(14,5) DEFAULT (0) NOT NULL,"KOL_N4" number(14,5) DEFAULT (0) NOT NULL,"KOL_N5" number(14,5) DEFAULT (0) NOT NULL,"KOL_N6" number(14,5) DEFAULT (0) NOT NULL,"KOL_N7" number(14,5) DEFAULT (0) NOT NULL,"KOL_N8" number(14,5) DEFAULT (0) NOT NULL,"KOL_N9" number(14,5) DEFAULT (0) NOT NULL,"KOL_N10" number(14,5) DEFAULT (0) NOT NULL,"KOL_N11" number(14,5) DEFAULT (0) NOT NULL,"KOL_N12" number(14,5) DEFAULT (0) NOT NULL,"KOL_N13" number(14,5) DEFAULT (0) NOT NULL,"KOL2N1" number(14,5) DEFAULT (0) NOT NULL,"KOL2N2" number(14,5) DEFAULT (0) NOT NULL,"KOL2N3" number(14,5) DEFAULT (0) NOT NULL,"KOL2N4" number(14,5) DEFAULT (0) NOT NULL,"KOL2N5" number(14,5) DEFAULT (0) NOT NULL,"KOL2N6" number(14,5) DEFAULT (0) NOT NULL,"KOL2N7" number(14,5) DEFAULT (0) NOT NULL,"KOL2N8" number(14,5) DEFAULT (0) NOT NULL,"KOL2N9" number(14,5) DEFAULT (0) NOT NULL,"KOL2N10" number(14,5) DEFAULT (0) NOT NULL,"KOL2N11" number(14,5) DEFAULT (0) NOT NULL,"KOL2N12" number(14,5) DEFAULT (0) NOT NULL,"KOL2N13" number(14,5) DEFAULT (0) NOT NULL,"SUM_N1" number(16,2) DEFAULT (0) NOT NULL,"SUM_N2" number(16,2) DEFAULT (0) NOT NULL,"SUM_N3" number(16,2) DEFAULT (0) NOT NULL,"SUM_N4" number(16,2) DEFAULT (0) NOT NULL,"SUM_N5" number(16,2) DEFAULT (0) NOT NULL,"SUM_N6" number(16,2) DEFAULT (0) NOT NULL,"SUM_N7" number(16,2) DEFAULT (0) NOT NULL,"SUM_N8" number(16,2) DEFAULT (0) NOT NULL,"SUM_N9" number(16,2) DEFAULT (0) NOT NULL,"SUM_N10" number(16,2) DEFAULT (0) NOT NULL,"SUM_N11" number(16,2) DEFAULT (0) NOT NULL,"SUM_N12" number(16,2) DEFAULT (0) NOT NULL,"SUM_N13" number(16,2) DEFAULT (0) NOT NULL,"DATE_REST" date NULL,"KOL_PRI" number(14,5) DEFAULT (0) NOT NULL,"KOL2PRI" number(14,5) DEFAULT (0) NOT NULL,"SUM_PRI" number(16,2) DEFAULT (0) NOT NULL,"DATE_FPRI" date NULL,"NDOC_FPRI" char(20) DEFAULT (' ') NOT NULL,"KOL_PRIG" number(14,5) DEFAULT (0) NOT NULL,"KOL2PRIG" number(14,5) DEFAULT (0) NOT NULL,"SUM_PRIG" number(16,2) DEFAULT (0) NOT NULL,"KOL_PRIT" number(14,5) DEFAULT (0) NOT NULL,"KOL2PRIT" number(14,5) DEFAULT (0) NOT NULL,"KOL_RAS" number(14,5) DEFAULT (0) NOT NULL,"KOL2RAS" number(14,5) DEFAULT (0) NOT NULL,"SUM_RAS" number(16,2) DEFAULT (0) NOT NULL,"DATE_LRAS" date NULL,"NDOC_LRAS" char(20) DEFAULT (' ') NOT NULL,"KOL_RASG" number(14,5) DEFAULT (0) NOT NULL,"KOL2RASG" number(14,5) DEFAULT (0) NOT NULL,"SUM_RASG" number(16,2) DEFAULT (0) NOT NULL,"KOL_RAST" number(14,5) DEFAULT (0) NOT NULL,"KOL2RAST" number(14,5) DEFAULT (0) NOT NULL,"KOL_PRIREZ" number(14,5) DEFAULT (0) NOT NULL,"KOL2PRIREZ" number(14,5) DEFAULT (0) NOT NULL,"SUM_PRIREZ" number(16,2) DEFAULT (0) NOT NULL,"KOL_RASREZ" number(14,5) DEFAULT (0) NOT NULL,"KOL2RASREZ" number(14,5) DEFAULT (0) NOT NULL,"SUM_RASREZ" number(16,2) DEFAULT (0) NOT NULL,"PRC_RAS" number(3,0) DEFAULT (0) NOT NULL,"KSSM" char(5) NULL,"COMM" char(40) DEFAULT (' ') NOT NULL,"KDM3" char(1) DEFAULT (' ') NOT NULL,"KDM4" char(1) DEFAULT (' ') NOT NULL,"KOL_INV" number(14,5) DEFAULT (0) NOT NULL,"KOL2INV" number(14,5) DEFAULT (0) NOT NULL,"CENA_INV" number(15,5) DEFAULT (0) NOT NULL,"SUM_INV" number(16,2) DEFAULT (0) NOT NULL,"DATE_INV" date 
NULL,"KSBG" char(3) DEFAULT (' ') NOT NULL,"KOL_C" number(14,5) DEFAULT (0) NOT NULL,"KOL2C" number(14,5) DEFAULT (0) NOT NULL,"SUM_C" number(16,2) DEFAULT (0) NOT NULL,"KBLS" char(5) DEFAULT (' ') NOT NULL,"BS_ZATR" char(10) NULL,"KAU_ZATR" char(12) DEFAULT (' ') NOT NULL,"MECEXPL" number(3,0) DEFAULT (0) NOT NULL,"SUM_IZNOS" number(14,2) DEFAULT (0) NOT NULL,"SUM_IMEC" number(14,2) DEFAULT (0) NOT NULL,"NINKAS" number(10,0) DEFAULT (0) NOT NULL,"DATE_D" date NULL,"FIO_D" char(10) DEFAULT (' ') NOT NULL,"DATE_K" date NULL,"FIO_O" char(10) DEFAULT (' ') NOT NULL,"STDCURR" char(1) DEFAULT (' ') NOT NULL) ON COMMIT PRESERVE ROWS
    End of information on OTHER waiting sessions.
    Current SQL statement for this session:
    insert into col$(obj#,name,intcol#,segcol#,type#,length,precision#,scale,null$,offset,fixedstorage,segcollength,deflength,default$,col#,property,charsetid,charsetform,spare1,spare2,spare3)values(:1,:2,:3,:4,:5,:6,decode(:5,182/*DTYIYM*/,:7,183/*DTYIDS*/,:7,decode(:7,0,null,:7)),decode(:5,2,decode(:8,-127/*MAXSB1MINAL*/,null,:8),178,:8,179,:8,180,:8,181,:8,182,:8,183,:8,231,:8,null),:9,0,:10,:11,decode(:12,0,null,:12),:13,:14,:15,:16,:17,:18,:19,:20)
    ===================================================
    PROCESS STATE
    Process global information:
    process: 5F28A1F8, call: 5F3A8E98, xact: 5DFF8B40, curses: 5F37D6A8, usrses: 5F375F38
    SO: 5F28A1F8, type: 2, owner: 00000000, flag: INIT/-/-/0x00
    (process) Oracle pid=18, calls cur/top: 5F3A8E98/5F3A75B8, flag: (0) -
    int error: 0, call error: 0, sess error: 0, txn error 0
    (post info) last post received: 0 0 117
    last post received-location: kcbzww
    last process to post me: 5f289c00 93 0
    last post sent: 0 0 117
    last post sent-location: kcbzww
    last process posted by me: 5f289c00 93 0
    (latch info) wait_event=0 bits=0
    Process Group: DEFAULT, pseudo proc: 5F2BC4EC
    O/S info: user: SYSTEM, term: VSC03, ospid: 3868
    OSD pid info: Windows thread id: 3868, image: ORACLE.EXE (SHAD)
    Dump of memory from 0x5F276E78 to 0x5F276FFC
    5F276E70 0000000B 5E1231B8 [.....1.^]
    5F276E80 00000010 000313A9 5F3A75B8 00000003 [.........u:_....]
    5F276E90 000313A9 5F4B92D4 0000000B 000313A9 [......K_........]
    5F276EA0 5F375F38 00000004 0003129D 5DE1FFA4 [8_7_...........]]
    5F276EB0 00000007 000313A9 5DE20028 00000007 [........(..]....]
    5F276EC0 000313A9 5DE200BC 00000007 000313A9 [.......]........]
    5F276ED0 5DE20140 00000007 000313A9 5DE201C4 [@..]...........]]
    5F276EE0 00000007 000313A9 5DE20248 00000007 [........H..]....]
    5F276EF0 000313A9 5DE202CC 00000007 000313A9 [.......]........]
    5F276F00 00000000 00000000 00000000 00000000 [................]
    Repeat 14 times
    5F276FF0 00000000 00000000 00000000 [............]
    (FOB) flags=2 fib=5DEEEC98 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\UNDOTBS01.DBF
    fno=2 lblksz=8192 fsiz=311040
    (FOB) flags=2 fib=5DEEE608 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\CONTROL03.CTL
    fno=2 lblksz=16384 fsiz=430
    (FOB) flags=2 fib=5DEEE2C8 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\CONTROL02.CTL
    fno=1 lblksz=16384 fsiz=430
    (FOB) flags=2 fib=5DEEDF88 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\CONTROL01.CTL
    fno=0 lblksz=16384 fsiz=430
    (FOB) flags=2 fib=5DEEF658 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\ITDATA01.DBF
    fno=5 lblksz=8192 fsiz=2109440
    (FOB) flags=2 fib=5DEEE948 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\SYSTEM01.DBF
    fno=1 lblksz=8192 fsiz=79360
    (FOB) flags=2 fib=5DEEFCE8 incno=0 pending i/o cnt=0
    fname=C:\ORACLE\PRODUCT\10.2.0\ORADATA\ITDB\TEMP01.DBF
    fno=201 lblksz=8192 fsiz=43776
    SO: 5F375F38, type: 4, owner: 5F28A1F8, flag: INIT/-/-/0x00
    (session) sid: 145 trans: 5D06D4B0, creator: 5F28A1F8, flag: (8100041) USR/- BSY/-/-/-/-/-
    DID: 0001-0012-00000164, short-term DID: 0000-0000-00000000
    txn branch: 00000000
    oct: 1, prv: 0, sql: 50B2BAC0, psql: 57554078, user: 60/ITUSER
    service name: ITDB
    O/S info: user: VSC03\it_appsrv, term: VSC03, ospid: 3668:3616, machine: WORKGROUP\VSC03
    program: AppServer.exe
    client info: ELIZ-041.R#046IKSANOV.RIKSANOV.VSC03:8223.3668.5-1-2600.IT_APPSR
    application name: 5436,20100413105316,82100, hash value=3093400541
    last wait for 'enq: TX - allocate ITL entry' blocking sess=0x5F386200 seq=4256 wait_time=2999487 seconds since wait started=2
    name|mode=54580004, usn<<16 | slot=6001c, sequence=192d
    Dumping Session Wait History
    for 'enq: TX - allocate ITL entry' count=1 wait_time=2999487
    name|mode=54580004, usn<<16 | slot=6001c, sequence=192d
    for 'buffer busy waits' count=1 wait_time=10
    file#=1, block#=f923, class#=1
    for 'buffer busy waits' count=1 wait_time=53
    file#=1, block#=d1ff, class#=1
    for 'buffer busy waits' count=1 wait_time=36
    file#=1, block#=19, class#=4
    for 'buffer busy waits' count=1 wait_time=28
    file#=1, block#=19, class#=4
    for 'buffer busy waits' count=1 wait_time=27
    file#=1, block#=f923, class#=1
    for 'buffer busy waits' count=1 wait_time=13
    file#=1, block#=ec86, class#=1
    for 'buffer busy waits' count=1 wait_time=29
    file#=1, block#=f923, class#=1
    for 'buffer busy waits' count=1 wait_time=15
    file#=1, block#=f95d, class#=1
    for 'buffer busy waits' count=1 wait_time=215
    file#=1, block#=d1ff, class#=1
    temporary object counter: 1
    UOL used : 0 locks(used=2, free=10)
    KGX Atomic Operation Log 69405330
    Mutex 00000000(0, 0) idn 0 oper NONE
    Cursor Parent uid 145 efd 5 whr 11 slp 0
    oper=NONE pt1=A4744BC4 pt2=6842B2F4 pt3=A4744B94
    pt4=00000000 u41=0 stt=0
    KGX Atomic Operation Log 69405358
    Mutex 50B2BB74(0, 1) idn 0 oper NONE
    Cursor Stat uid 145 efd 8 whr 1 slp 0
    oper=NONE pt1=50B2BAC0 pt2=00000000 pt3=00000000
    pt4=00000000 u41=0 stt=8
    KGX Atomic Operation Log 69405380
    Mutex 00000000(0, 0) idn 0 oper NONE
    Library Cache uid 145 efd 0 whr 0 slp 0
    SO: 5C5A6334, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c5a6334 handle=5e9a6868 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C5A6384[5C76A228,5C59D1D0] htb=5C59D1D0 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=CBK[0020] savepoint=0x0
    LIBRARY OBJECT HANDLE: handle=5e9a6868 mtx=5E9A691C(0) cdp=0
    namespace=CRSR flags=RON/KGHP/PN0/EXP/[10010100]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=S latch#=1 hpc=c742 hlc=c742
    lwt=5E9A68C4[5E9A68C4,5E9A68C4] ltm=5E9A68CC[5E9A68CC,5E9A68CC]
    pwt=5E9A68A8[5E9A68A8,5E9A68A8] ptm=5E9A68B0[5E9A68B0,5E9A68B0]
    ref=5E9A68E4[662DCE3C,662DCE3C] lnd=5E9A68F0[5E9A68F0,5E9A68F0]
    LIBRARY OBJECT: object=51e6451c
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    DEPENDENCIES: count=1 size=16
    AUTHORIZATIONS: count=1 size=16 minimum entrysize=18
    ACCESSES: count=1 size=16
    TRANSLATIONS: count=1 size=16
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 a1f966d4 51e645b4 I/P/A/-/- 0 NONE 00
    6 662dcce4 a24e2534 I/P/A/-/E 0 NONE 00
    KGX Atomic Operation Log 50C2014C
    Mutex 662DCC54(0, 2) idn d64ee82 oper SHRD
    Cursor Pin uid 145 efd 0 whr 3 slp 0
    opr=4 pso=5C5A6334 flg=0
    pcs=662DCC54 nxt=5B9C77F4 flg=18 cld=0 hd=5E9A6868 par=54763C50
    ct=2 hsh=0 unp=00000000 unn=0 hvl=662dcff0 nhv=0 ses=00000000
    hep=662DCCA0 flg=80 ld=1 ob=51E6451C ptr=A24E2534 fex=A24E16F8
    SO: 5C76A1D8, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c76a1d8 handle=5a67b168 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C76A228[5C59D1D0,5C5A6384] htb=5C59D1D0 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x4bc41571
    LIBRARY OBJECT HANDLE: handle=5a67b168 mtx=5A67B21C(2) cdp=2
    name=
    insert into col$(obj#,name,intcol#,segcol#,type#,length,precision#,scale,null$,offset,fixedstorage,segcollength,deflength,default$,col#,property,charsetid,charsetform,spare1,spare2,spare3)values(:1,:2,:3,:4,:5,:6,decode(:5,182/*DTYIYM*/,:7,183/*DTYIDS*/,:7,decode(:7,0,null,:7)),decode(:5,2,decode(:8,-127/*MAXSB1MINAL*/,null,:8),178,:8,179,:8,180,:8,181,:8,182,:8,183,:8,231,:8,null),:9,0,:10,:11,decode(:12,0,null,:12),:13,:14,:15,:16,:17,:18,:19,:20)
    hash=012a6293ef607cee606b82dc0d64ee82 timestamp=04-08-2010 17:06:19
    namespace=CRSR flags=RON/KGHP/TIM/PN0/LRG/KST/DBN/MTX/[100100d1]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=1 hpc=c298 hlc=c298
    lwt=5A67B1C4[5A67B1C4,5A67B1C4] ltm=5A67B1CC[5A67B1CC,5A67B1CC]
    pwt=5A67B1A8[5A67B1A8,5A67B1A8] ptm=5A67B1B0[5A67B1B0,5A67B1B0]
    ref=5A67B1E4[5A67B1E4,5A67B1E4] lnd=5A67B1F0[5A67B1F0,5A67B1F0]
    LIBRARY OBJECT: object=54763bb8
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    CHILDREN: size=16
    child# table reference handle
    0 662dd020 662dce3c 5e9a6868
    1 662dd020 5b9c7940 5eb1fdbc
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 54c98590 54763c50 I/P/A/-/- 0 NONE 00
    SO: 5C5AFBCC, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c5afbcc handle=6ad6312c mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C5AFC1C[5C61B52C,5C59D0B0] htb=5C59D0B0 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x0
    LIBRARY OBJECT HANDLE: handle=6ad6312c mtx=6AD631E0(0) cdp=0
    namespace=CRSR flags=RON/KGHP/PN0/EXP/[10010100]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=2 hpc=b9a0 hlc=b9a0
    lwt=6AD63188[6AD63188,6AD63188] ltm=6AD63190[6AD63190,6AD63190]
    pwt=6AD6316C[6AD6316C,6AD6316C] ptm=6AD63174[6AD63174,6AD63174
    SO: 5C5A5C34, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c5a5c34 handle=69731e20 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C5A5C84[5C59D4D0,5C6074B0] htb=5C59D4D0 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x4bc41571
    LIBRARY OBJECT HANDLE: handle=69731e20 mtx=69731ED4(2) cdp=2
    name=update con$ set con#=:3 where owner#=:1 and name=:2
    hash=cb0043a665029adc35682cfd8f583ce2 timestamp=04-08-2010 17:10:17
    namespace=CRSR flags=RON/KGHP/TIM/PN0/SML/KST/DBN/MTX/[120100d0]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=2 hpc=ac50 hlc=ac50
    lwt=69731E7C[69731E7C,69731E7C] ltm=69731E84[69731E84,69731E84]
    pwt=69731E60[69731E60,69731E60] ptm=69731E68[69731E68,69731E68]
    ref=69731E9C[69731E9C,69731E9C] lnd=69731EA8[69731EA8,69731EA8]
    LIBRARY OBJECT: object=66177404
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    CHILDREN: size=16
    child# table reference handle
    0 5887c454 5887c270 69641ffc
    1 5887c454 5887c400 693d29b4
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 57178070 6617749c I/P/A/-/- 0 NONE 00
    SO: 5C62D288, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c62d288 handle=5734d800 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C62D2D8[5C59D590,5C59D590] htb=5C59D590 ssga=5C59CD04
    user=5f375f38 session=5f375f38 count=0 flags=LRU/[4000] savepoint=0x17ee5af
    LIBRARY OBJECT HANDLE: handle=5734d800 mtx=5734D8B4(0) cdp=0
    name=SYS._default_auditing_options_
    hash=fab1a450ca8625c88d7aa501cb042efa timestamp=03-14-2008 18:46:51
    namespace=TABL flags=KGHP/TIM/SML/[02000000]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=1 hpc=3e1e hlc=3e1e
    lwt=5734D85C[5734D85C,5734D85C] ltm=5734D864[5734D864,5734D864]
    pwt=5734D840[5734D840,5734D840] ptm=5734D848[5734D848,5734D848]
    ref=5734D87C[5734D87C,5734D87C] lnd=5734D888[5734D888,5734D888]
    LIBRARY OBJECT: object=69e8d9e4
    type=TABL flags=EXS/LOC[0005] pflags=[0000] status=VALD load=0
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 572f55b0 69e8da7c I/-/A/-/- 0 NONE 00
    SO: 5C62CA38, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c62ca38 handle=54e2b2d0 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C62CA88[5C7875E0,5C59D458] htb=5C59D458 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x0
    LIBRARY OBJECT HANDLE: handle=54e2b2d0 mtx=54E2B384(0) cdp=0
    namespace=CRSR flags=RON/KGHP/PN0/EXP/[10010100]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=1 hpc=cb1c hlc=cb1c
    lwt=54E2B32C[54E2B32C,54E2B32C] ltm=54E2B334[54E2B334,54E2B334]
    pwt=54E2B310[54E2B310,54E2B310] ptm=54E2B318[54E2B318,54E2B318]
    ref=54E2B34C[66147E34,66147E34] lnd=54E2B358[54E2B358,54E2B358]
    LIBRARY OBJECT: object=51e651ac
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    DEPENDENCIES: count=1 size=16
    AUTHORIZATIONS: count=1 size=16 minimum entrysize=18
    ACCESSES: count=1 size=16
    TRANSLATIONS: count=1 size=16
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 6a86de6c 51e65244 I/P/A/-/- 0 NONE 00
    6 66147cdc 9dabe588 I/-/A/-/E 0 NONE 00
    SO: 5C787590, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c787590 handle=57422b64 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C7875E0[5C59D458,5C62CA88] htb=5C59D458 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x4bc41571
    LIBRARY OBJECT HANDLE: handle=57422b64 mtx=57422C18(2) cdp=2
    name=insert into obj$(owner#,name,namespace,obj#,type#,ctime,mtime,stime,status,remoteowner,linkname,subname,dataobj#,flags,oid$,spare1,spare2)values(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16, :17)
    hash=8876f3fed7222711572e6a76e623c9d3 timestamp=04-08-2010 17:06:19
    namespace=CRSR flags=RON/KGHP/TIM/PN0/MED/KST/DBN/MTX/[500100d0]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=1 hpc=60c0 hlc=60c0
    lwt=57422BC0[57422BC0,57422BC0] ltm=57422BC8[57422BC8,57422BC8]
    pwt=57422BA4[57422BA4,57422BA4] ptm=57422BAC[57422BAC,57422BAC]
    ref=57422BE0[57422BE0,57422BE0] lnd=57422BEC[57422BEC,57422BEC]
    LIBRARY OBJECT: object=6a7276e4
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    CHILDREN: size=16
    child# table reference handle
    0 66148018 66147e34 54e2b2d0
    1 66148018 5896c0ec 6af7d37c
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 696b76e0 6a72777c I/P/A/-/- 0 NONE 00
    SO: 5C78B970, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c78b970 handle=54fbd288 mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C78B9C0[5C787B90,5C59D1B0] htb=5C59D1B0 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x0
    LIBRARY OBJECT HANDLE: handle=54fbd288 mtx=54FBD33C(0) cdp=0
    namespace=CRSR flags=RON/KGHP/PN0/EXP/[10010100]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=3 hpc=f79a hlc=f79a
    lwt=54FBD2E4[54FBD2E4,54FBD2E4] ltm=54FBD2EC[54FBD2EC,54FBD2EC]
    pwt=54FBD2C8[54FBD2C8,54FBD2C8] ptm=54FBD2D0[54FBD2D0,54FBD2D0]
    ref=54FBD304[5BF92EF0,5BF92EF0] lnd=54FBD310[54FBD310,54FBD310]
    LIBRARY OBJECT: object=541e8b98
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    DEPENDENCIES: count=1 size=16
    AUTHORIZATIONS: count=1 size=16 minimum entrysize=16
    ACCESSES: count=1 size=16
    TRANSLATIONS: count=1 size=16
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 693cb454 541e8c30 I/P/A/-/- 0 NONE 00
    6 5bf92e34 a021d62c I/-/A/-/E 0 NONE 00
    SO: 5C787B40, type: 53, owner: 5F375F38, flag: INIT/-/-/0x00
    LIBRARY OBJECT LOCK: lock=5c787b40 handle=5eb09aec mode=N
    call pin=00000000 session pin=00000000 hpc=0000 hlc=0000
    htl=5C787B90[5C59D1B0,5C78B9C0] htb=5C59D1B0 ssga=5C59CD04
    user=5f375f38 session=5f37d6a8 count=1 flags=[0000] savepoint=0x4bc41571
    LIBRARY OBJECT HANDLE: handle=5eb09aec mtx=5EB09BA0(2) cdp=2
    name=select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,o.dataobj#,o.flags from obj$ o where o.obj#=:1
    hash=ae93e4a5100360375a3ff87632f4667e timestamp=04-02-2010 10:17:58
    namespace=CRSR flags=RON/KGHP/TIM/PN0/MED/KST/DBN/MTX/[500100d0]
    kkkk-dddd-llll=0000-0001-0001 lock=N pin=0 latch#=3 hpc=d23e hlc=d23e
    lwt=5EB09B48[5EB09B48,5EB09B48] ltm=5EB09B50[5EB09B50,5EB09B50]
    pwt=5EB09B2C[5EB09B2C,5EB09B2C] ptm=5EB09B34[5EB09B34,5EB09B34]
    ref=5EB09B68[5EB09B68,5EB09B68] lnd=5EB09B74[5EB09B74,5EB09B74]
    LIBRARY OBJECT: object=5bf92fd4
    type=CRSR flags=EXS[0001] pflags=[0000] status=VALD load=0
    CHILDREN: size=16
    child# table reference handle
    0 5bf92f60 5bf92d7c 5eb099a8
    1 5bf92f60 5bf92ef0 54fbd288
    2 5bf92f60 5b9aa120 576a80c8
    3 5bf92f60 5b9aa284 54f7a6d0
    DATA BLOCKS:
    data# heap pointer status pins change whr
    0 5eb09a7c 5bf9306c I/P/A/-/- 0 NONE 00

  • Problem with Create global temporary table command

    Hi,
    Following is the query I am using in one of my PL/SQL reports:
    EXECUTE IMMEDIATE 'CREATE GLOBAL TEMPORARY TABLE Billing_Report_Table ON COMMIT PRESERVE ROWS as select * from (Vc_Sql_Statement)';
    The error message I am getting when I run the report is "missing SELECT keyword".
    The variable Vc_Sql_statement contains a complex query retrieving data from different tables.
    Please help me out.
    Thanks in advance
    Shanthi

    Hi,
    SCOTT@soti_9> DECLARE
      2    Vc_Sql_Statement VARCHAR2(30) := 'DUAL';
      3  BEGIN
      4    EXECUTE IMMEDIATE
      5      'CREATE GLOBAL TEMPORARY TABLE Billing_Report_Table ON COMMIT PRESERVE ROWS AS ' ||
      6      ' select * from ' || Vc_Sql_Statement;
      7  END;
      8  /
    PL/SQL procedure successfully completed.
    SCOTT@soti_9> select * from Billing_Report_Table;
    D
    X
    Regards,
    Dima

  • View objects performance issue with oracle seeded tables

    While I am writing a view object on Oracle seeded tables like MTL_PARAMETERS, it is taking a long time to show in the OAF page. I am trying to display all of these view object columns in the detail disclosure of an advanced table. My application is taking more than two minutes to display the view columns of the query, which returns just 200 rows. Please help me improve performance when my query uses seeded tables.
    This issue is happening only in R12 view object and advanced tables.
    Edited by: vlsn on Jun 24, 2012 11:36 PM


  • Performance issue with Oracle data source

    Hi all,
    I've a rather strange problem that I'm stuck on and need some assistance with.
    I have a rules file which drags data in via an SQL data source thats an Oracle server. If I cut/paste the 3 sections of "select" "from" and "where" into SQL-Developer and run the query, it takes less than 1 second to complete. When I run the "load data" with this rule file or even use the "Retrieve" with the rules file edit, it takes up to an hour to complete/retrieve the data.
    The table in question being used has millions of rows and I'm using one of the indexed fields to retrieve the data. It's as if the Essbase/Rule file is ognoring the index, or I have a config issue with the ODBC settings on the server that is causing the problem.
    ODBC.INI file entry for the Oracle server as follows (changed any sensitive info to xxx or 999).
    [XXX]
    Driver=/opt/data01/hyperion/common/ODBC-64/Merant/5.2/lib/ARora22.so
    Description=DataDirect 5.2 Oracle Wire Protocol
    AlternateServers=
    ApplicationUsingThreads=1
    ArraySize=60000
    CachedCursorLimit=32
    CachedDescLimit=0
    CatalogIncludesSynonyms=1
    CatalogOptions=0
    ConnectionRetryCount=0
    ConnectionRetryDelay=3
    DefaultLongDataBuffLen=1024
    DescribeAtPrepare=0
    EnableDescribeParam=0
    EnableNcharSupport=0
    EnableScrollableCursors=1
    EnableStaticCursorsForLongData=0
    EnableTimestampWithTimeZone=0
    HostName=999.999.999.999
    LoadBalancing=0
    LocalTimeZoneOffset=
    LockTimeOut=-1
    LogonID=xxx
    Password=xxx
    PortNumber=1521
    ProcedureRetResults=0
    ReportCodePageConversionErrors=0
    ServiceType=0
    ServiceName=xxx
    SID=
    TimeEscapeMapping=0
    UseCurrentSchema=1
    Can anyone please advise on this lack of performance.
    Thanks in advance
    Bagpuss

    One other thing that I've seen is that if your Oracle data source and Essbase server are in different geographic locations, you can get some delay when it retrieves data over the WAN. I guess there is some handshaking going on when passing the data from Oracle to Essbase (either by record or groups of records) that is slowed WAY down over the WAN.
    Our solution to this was to remove the query from the load rule, run it via SQL*Plus on a command line at the geographic location where the Oracle database is, then ftp the resulting file to where the Essbase server is.
    With upwards of 6 million records being retrieved, it took around 4 hours in the load rule, but running the query via command line took 10 minutes, then the ftp took less than 5.
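
    In case it is useful, the command-line extract described above can be as simple as a SQL*Plus spool script run near the database; this is only a sketch, and the column list, table name and file path are placeholders:
    -- extract.sql  (run as: sqlplus user/password@db @extract.sql)
    SET HEADING OFF PAGESIZE 0 FEEDBACK OFF TERMOUT OFF TRIMSPOOL ON LINESIZE 500
    SPOOL /tmp/essbase_load.txt
    SELECT account_id || ',' || period || ',' || amount   -- placeholder columns
      FROM risk_data;                                     -- placeholder table
    SPOOL OFF
    EXIT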

  • Performance issue with Oracle Text index

    Hi Experts,
    We are on Oracle 11.2.0.3 on Solaris 10. I have implemented Oracle Text in our environment and I am facing a strange performance issue.
    One SQL with a CONTAINS clause is taking forever - more than 20 minutes and it still does not complete. This SQL has a CONTAINS clause, an EXISTS clause and a NOT EXISTS clause.
    Now if I remove the EXISTS clause and the NOT EXISTS clause it completes fast, but with those two clauses it just takes forever. It is late at night so I am not able to post the table and SQL query details and will do so tomorrow, but based on this general description, are there any pointers for me to review?
    sql query doing fine:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    --sql query that hangs forever:
    SELECT
        U.CLNT_OID, U.USR_OID, S.MAILADDR
    FROM
        access_usr U
        INNER JOIN access_sia S
            ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
        WHERE U.CLNT_OID = 'ABCX32S'
        AND CONTAINS(LAST_NAME , 'TO%' ) >0
    and exists (--one clause here with a few table joins)
    and not exists (--one clause here with a few table joins);
    --Now another strange thing I found: if instead of 'TO%' in this sql I use 'ZZ%' or 'L1%', it works fast, but for 'TO%' it goes slow with those two exists/not exists clauses!
    I will be most thankful for the inputs.
    OrauserN

    Hi Barbara,
    First of all, thanks a lot for reviewing the issue.
    Unfortunately, making the change to empty_stoplist did not work out. I am copying the entire SQL that has this issue here today and will be most thankful for more insights/pointers on what can be done.
    Here is the entire sql:
    SELECT U.CLNT_OID,
           U.USR_OID,
           S.EMAILADDRESS,
           U.FIRST_NAME,
           U.LAST_NAME,
           S.JOBCODE,
           S.LOCATION,
           S.DEPARTMENT,
           S.ASSOCIATEID,
           S.ENTERPRISECOMPANYCODE,
           S.EMPLOYEEID,
           S.PAYGROUP,
           S.PRODUCTLOCALE
      FROM    ACCESS_USR U
           INNER JOIN
              ACCESS_SIA S
           ON S.USR_OID = U.USR_OID AND S.CLNT_OID = U.CLNT_OID
    WHERE     U.CLNT_OID = 'G39NY3D25942TXDA'
           AND EXISTS
                  (SELECT 1
                     FROM ACCESS_USR_GROUP_XREF UGX
                          INNER JOIN ACCESS_GROUP RELG
                             ON     RELG.CLNT_OID = UGX.CLNT_OID
                                AND RELG.GROUP_OID = UGX.GROUP_OID
                          INNER JOIN ACCESS_GROUP G
                             ON     G.CLNT_OID = RELG.CLNT_OID
                                AND G.GROUP_TYPE_OID = RELG.GROUP_TYPE_OID
                    WHERE     UGX.CLNT_OID = U.CLNT_OID
                          AND UGX.USR_OID = U.USR_OID
                          AND G.GROUP_OID = 920512943
                          AND UGX.INCLUDED = 1)
           AND NOT EXISTS
                      (SELECT 1
                         FROM    ACCESS_USR_GROUP_XREF UGX
                              INNER JOIN
                                 ACCESS_GROUP G
                              ON     G.CLNT_OID = UGX.CLNT_OID
                                 AND G.GROUP_OID = UGX.GROUP_OID
                        WHERE     UGX.CLNT_OID = U.CLNT_OID
                              AND UGX.USR_OID = U.USR_OID
                              AND G.GROUP_OID = 920512943
                              AND UGX.INCLUDED = 1)
           AND CONTAINS (U.LAST_NAME, 'Bon%') > 0;
    Like I said before, if the EXISTS and NOT EXISTS clauses are removed it completes in under a second, but with those EXISTS and NOT EXISTS clauses it takes anywhere from 25 minutes to more than one hour.
    Note also that it was not TO% but Bon% in the CONTAINS clause that is giving the issue - sorry, that was wrong on my part.
    Also please see below the Oracle Text indexes defined on the table ACCESS_USR:
    --definition of preferences used in the index:
    SET SERVEROUTPUT ON size unlimited
    WHENEVER SQLERROR EXIT SQL.SQLCODE
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_lexer', 'BASIC_LEXER');
       ctxsys.ctx_ddl.set_attribute ('cust_lexer', 'base_letter', 'YES'); -- removes diacritics
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_LEXER with BASIC LEXER is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    DECLARE
       v_err       VARCHAR2 (1000);
       v_sqlcode   NUMBER;
       v_count     NUMBER;
    BEGIN
       ctxsys.ctx_ddl.create_preference ('cust_wl', 'BASIC_WORDLIST');
       ctxsys.ctx_ddl.set_attribute ('cust_wl', 'SUBSTRING_INDEX', 'true'); -- to improve performance
    EXCEPTION
       WHEN OTHERS
       THEN
          v_err := SQLERRM;
          v_sqlcode := SQLCODE;
          v_count := INSTR (v_err, 'DRG-10701');
          IF v_count > 0
          THEN
             DBMS_OUTPUT.put_line (
                'The required preference named CUST_WL with BASIC WORDLIST is already set up');
          ELSE
             RAISE;
          END IF;
    END;
    --now below is the code of the index:
    CREATE INDEX ACCESS_USR_IDX3 ON ACCESS_USR
    (FIRST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    CREATE INDEX ACCESS_USR_IDX4 ON ACCESS_USR
    (LAST_NAME)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS('LEXER cust_lexer WORDLIST cust_wl SYNC (ON COMMIT)');
    The strange thing is that, like I said, if I remove the exists clause the query returns very fast. Also, if I modify the query to use only one NOT EXISTS clause and remove the other EXISTS clause, it returns in less than one second. Also, if I remove the EXISTS clause and use only the NOT EXISTS clause, it returns in less than 4 seconds. But with both clauses it runs forever!
    When I tried to get dbms_xplan.display_cursor to show the query plan (for the case with both the exists and not exists clauses in the query), it said that the previous statement's sql id was 0 or something like that, so I was not able to see the query plan. I will keep trying to get this plan (it takes 25 minutes to one hour each time, but I will get this info soon). Again, any pointers are most helpful.
    Regards
    OrauserN
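
    In case it helps while chasing the plan, one way around the "previous statement" problem with DBMS_XPLAN.DISPLAY_CURSOR is to pick up the SQL_ID of the long-running statement from a second session and pass it in explicitly; a sketch only, where the SID and SQL_ID values are placeholders:
    -- from a second session, while the slow query is still running
    SELECT sql_id, sql_child_number
      FROM v$session
     WHERE sid = 123;   -- placeholder: the SID of the session running the slow query
    SELECT *
      FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcd1234efgh5', 0, 'ALLSTATS LAST'));
    -- first argument is the sql_id returned above (placeholder value shown)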

  • Scalability issue with global temporary table.

    Hi All,
    Does CREATE GLOBAL TEMPORARY TABLE lock the data dictionary like CREATE TABLE does? If so, wouldn't that be a scalability issue in a multi-user environment?
    Thanks and Regards,
    Rudra

    Billy Verreynne wrote:
    acadet wrote:
    am I correct in interpreting your response that we should be using GTT's in favour of bulk operations and collections and in-memory operations?
    No. I said collections cannot scale. This means that, because collections reside in expensive PGA memory, you cannot stuff large data volumes into them. Thus they do not make an ideal storage bin for temporary data (e.g. data loaded from a file or a web service). GTTs, on the other hand, do not suffer from the same restrictions, can be indexed, and offer vastly better scalability, and so on.
    Multiple passes are often needed using such a data structure. Or filtering to find specific data. As a GTT is SQL native, it offers a lot more flexibility and performance in this regard.
    And this makes sense - as where do we put our persistent data? Also in tables, but ones of a persistent and not a temporary kind like a GTT.
    Collections are pretty useful - but limited in size and capability.
    Rudra states:
    I want to pull out a few metrics from different tables and process them.
    If this can't be achieved in a SQL statement, unless Rudra is a master of understatement, then I would see GTT's as a waste of IO and programming effort. I agree.
    My comments however were about choices for a temporary data storage bin in PL/SQL.
    I agree with your general comments regarding temporary storage bins in Oracle, but to say that collections don't scale is putting too narrow a definition on scaling. True, collections can be resource intensive in terms of memory and CPU requirements, but their persistence will generally be much shorter than for other types of temporary storage. Given the right characteristics collections will scale, and given the wrong characteristics GTT's won't scale.
    As you say, it is all about choice. Getting back to the theme of this thread though, the original poster should be made aware that well designed and well coded applications are the ones most likely to scale. Creating tables on the fly is generally considered bad practice, and letting the database do what it does best - join tables in queries at the SQL level - is considered good practice. The rest lies somewhere in between, and knowing when to do which is why we get paid the big bucks (not). ;-)
    Regards
    Andre
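
    To make the "GTTs can be indexed" point above concrete, here is a minimal sketch of a GTT used as an indexed temporary storage bin; the table, index and column names are invented for illustration:
    CREATE GLOBAL TEMPORARY TABLE metric_stage   -- illustrative name
    (
      metric_id   NUMBER,
      metric_val  NUMBER
    )
    ON COMMIT DELETE ROWS;
    CREATE INDEX metric_stage_ix ON metric_stage (metric_id);
    -- each session sees only its own rows, and they vanish at COMMIT
    INSERT INTO metric_stage
       SELECT object_id, 0 FROM all_objects WHERE ROWNUM <= 1000;
    SELECT COUNT(*) FROM metric_stage WHERE metric_id > 500;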

  • Problem with global temporary table in Oracle 10g

    Hi All,
    I face a peculiar problem in Oracle 10g with respect to Global temporary table.
    Have Oracle 10g version in Production and 11g version in UAT.
    Table:
    create global temporary table TT_TEMPGPSMANUAL
    (
      Col_1    VARCHAR2(50),
      Col_2    VARCHAR2(500),
      Col_3    VARCHAR2(50),
      Col_4    VARCHAR2(50),
      Col_5    VARCHAR2(15),
      Col_6    VARCHAR2(20),
      Col_7    VARCHAR2(250),
      Col_8    VARCHAR2(20),
      Col_9    VARCHAR2(15),
      Col_10   VARCHAR2(20),
      Flag     NUMBER,
      Col_11   INTEGER,
      Col_12   VARCHAR2(50)
    ) on commit preserve rows;
    So this should preserve the rows inserted into this table until the session ends.
    We have a webpage in the front-end which in turn opens another page (the session is carried through), and a few rows are inserted into this table from the webpage (through a function) on submit, after which the current page is closed.
    From the parent page, if I open the sub-page, the data inserted into the temporary table is held and displayed (another function fetches the values from the global temp table).
    The problem in Oracle 10g (Production) is that this is not happening consistently. When I close and open the sub-page, I do not get the stored data every time; i.e. if I close and open the page 10 times, at least 4 times the data is missing from the page (I am not getting values from the temp table), seemingly at random.
    But this does not happen in UAT (which has Oracle 11g installed), as I get the data in the webpage consistently. After passing UAT, when we rolled out to Prod, we started getting this issue and we are unable to work out what the reason could be.
    It is very hard to debug a GTT dynamically in Prod, and it takes time to get Oracle 11g installed in Prod.
    Can anyone suggest?
    Regards
    Deep

    935195 wrote:
    Also, I am opening the sub-page from the parent page (through a hyperlink). In this case, would the session be changed from parent to sub-page? (I am not aware exactly, and have the impression that, as the second page is a child, it would take the same session.)
    I'm not sure what "sub-page" or "parent page" means to you. If you're just linking from one page to another, "parent" and "child" don't really make sense since page A links to page B and B links to A quite frequently.
    Assuming that you have to log in to access the site, it is likely that the two pages share the same middle tier application session. It is unlikely that the middle tier would hold the database session from the first request open waiting to see if the user eventually requested the second page. It is theoretically possible that you could code your middle tier this way but it is extremely unlikely that you would want to do so for a variety of reasons. So, when you say "would [the] session ... be changed", it is likely that the application session would be the same for both calls but that the database session would be different.
    Justin
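
    Justin's point about the database session can be demonstrated directly with the table from the question: rows in a GTT are visible only to the session that inserted them, so a second (pooled) connection sees an empty table. A small sketch:
    -- session 1
    INSERT INTO TT_TEMPGPSMANUAL (Col_1, Flag) VALUES ('test', 1);
    COMMIT;                                    -- rows survive the commit because of ON COMMIT PRESERVE ROWS
    SELECT COUNT(*) FROM TT_TEMPGPSMANUAL;     -- returns 1
    -- session 2 (for example, a different pooled connection)
    SELECT COUNT(*) FROM TT_TEMPGPSMANUAL;     -- returns 0: the rows belong to session 1 only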

  • Direct Path Loading Issues with Global Temporary Tables - OCI & OCILib

    I am writing some code to import data into a warehouse from a CPU grid which computes risk data. Due to the fact a computing grid is used there will be many clients which can load the data concurrently and at any point in time.
    Currently the import uses Binding in OCCI and chunking with a prepared statement to import the data into a global temporary table in a staging area after which a stored procedure is called within the same session which will process the data and load the data into a star schema.
    The GTT has the advantage that if any clients have issues no dirty data will be left and each client only sees their own instance of the data.
    I have been looking at using direct path loading to increase the performance of the load and have written some OCI code to perform the same task. I have managed to import the data into a regular heap-based table using the OCI direct path APIs. However, when I try to use the same code to import into a Global Temporary Table I get an OCI error (ORA-00600: internal error code, arguments: [6979], [16], [1], [1318528], [], [], [], [], [], [], [], [])
    I get the error when the function OCIDirPathPrepare is executed. The same issue occurs in both OCI and OCILib.
    Is it not possible to use Direct Path Loading against a Global Temporary Table? After all, you can use the /*+ APPEND */ hint and load global temporary tables this way from tools like SQL Developer / Toad, which is surely telling the SQL engine to use direct path?
    Looking at the view USER_OBJECTS I can see that for a Global Temporary Table the DATA_OBJECT_ID is null. Does this mean that it is impossible to use a direct path load into Global Temporary Tables?
    Any ideas / suggestions would be really appreciated. If this means redesigning the application then I would appreciate suggestions which would allow many clients to write quickly in a parallel fashion. If this means creating a new partition in a heap table for each writer and direct path loading into that table, then so be it.
    Thanks
    H
    Edited by: 813640 on 19-Nov-2010 11:08

    Replying to my own message in case anyone else is interested.
    I have now managed to successfully load data using direct path into a global temporary table with OCI. There appears to be no reason why this approach will not work.
    I loaded data into the temporary table and then issued a select count(*) on the table from within the session and from a new session. The results were as expected.
    The reason for the ORA-00600 error was that I had enabled table-level parallel loading,
    i.e.
    OCIAttrSet((dvoid *) context, (ub4) OCI_HTYPE_DIRPATH_CTX, (ub1) 1, (ub4) 0, (ub4) OCI_ATTR_DIRPATH_PARALLEL, errhp)
    When loading a Global Temporary Table the OCI_ATTR_DIRPATH_PARALLEL attribute needs to be zero.
    This makes sense, since the temp table does not have any partitions, so it would not be possible to write in parallel to multiple partitions.
    Edited by: 813640 on 22-Nov-2010 08:42

  • Performance issue with temporary table

    Hello oracle community,
    Oracle 11.1
    I have a problem with a global temp table (IMPO.REPCUSTOMERSLUCK24). I insert about 600,000 records into the table, do some UPDATE statements on it, and at the end run a MERGE statement to fill another table. I think the problem is that the optimizer doesn't know how many records are in the temp table (Cardinality 1), but I cannot use DBMS_STATS.GATHER_TABLE_STATS to analyze the temp table (I will lose the records if I do). Maybe I could analyze it with the ON COMMIT PRESERVE ROWS option, but I would like to avoid that. Here is the
    Plan
    UPDATE STATEMENT ALL_ROWSCost: 1 Bytes: 1.171 Cardinality: 1                                              
         15 UPDATE IMPO.REPCUSTOMERSLUCK24                                         
              14 FILTER                                    
                   2 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCUSTOMERSLUCK24 Cost: 1 Bytes: 1.171 Cardinality: 1                               
                        1 INDEX RANGE SCAN INDEX IMPO.FK_1883_REPCUSTOMERSLUCK24 Cost: 1 Cardinality: 1                          
                   13 FILTER                               
                        12 SORT GROUP BY NOSORT Cost: 0 Bytes: 2.212 Cardinality: 1                          
                             11 NESTED LOOPS                     
                                  9 NESTED LOOPS Cost: 0 Bytes: 2.212 Cardinality: 1                
                                       7 NESTED LOOPS Cost: 0 Bytes: 1.685 Cardinality: 1           
                                            4 TABLE ACCESS BY INDEX ROWID TABLE (TEMP) IMPO.REPCONTRACTSLUCK24 Cost: 0 Bytes: 1.158 Cardinality: 1      
                                                 3 INDEX FULL SCAN INDEX IMPO.FK_1875_REPCONTRACTSLUCK24 Cost: 0 Cardinality: 1
                                            6 TABLE ACCESS BY INDEX ROWID TABLE CRM2.MEDIACODE Cost: 0 Bytes: 527 Cardinality: 1      
                                                 5 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.AK_1970_MEDIACODE Cost: 0 Cardinality: 1
                                       8 INDEX UNIQUE SCAN INDEX (UNIQUE) CRM2.PK_1955_PARTNER Cost: 0 Cardinality: 1           
                                  10 TABLE ACCESS BY INDEX ROWID TABLE CRM2.PARTNER Cost: 0 Bytes: 527 Cardinality: 1                
    Any suggestions for my problem?
    Ikrischer

    hi,
    Dynamic sampling reads only a part of the table to make an estimation (generally to count the number of rows, or to get an average, provided the sample is 'large' enough for the result to be reliable), etc.
    So in your case you could evaluate the number of rows like this (the explain plans show you that the estimated cost is proportional to the size of the sample read, whether expressed in # of rows or blocks).
    SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jun 17 15:32:43 2010
    Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    With the Partitioning, Oracle Label Security, OLAP and Data Mining options
    SQL> CREATE GLOBAL TEMPORARY TABLE XTEST
      2  (
      3    NUM1  NUMBER                                  NOT NULL
      4  )
      5  ON COMMIT PRESERVE ROWS
      6  NOCACHE
      7  /
    Table created.
    SQL> INSERT INTO xtest
      2     SELECT     ROWNUM
      3     FROM       DUAL
      4     CONNECT BY ROWNUM <= 100000;
    100000 rows created.
    SQL> commit;
    Commit complete.
    SQL> EXEC dbms_stats.gather_table_stats(ownname=>user,tabname=>'XTEST');
    PL/SQL procedure successfully completed.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st1' FOR SELECT COUNT(*)*10 FROM xtest SAMPLE(10);
    Explained.
    SQL> EXPLAIN PLAN SET STATEMENT_ID = 'st2' FOR SELECT COUNT(*)*1.1 FROM xtest SAMPLE(90);
    Explained.
    SQL> set linesize 120;
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st1','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    31  (26)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 10077 | 40308 |    31  (26)| 00:00:01 |
    9 rows selected.
    SQL> SELECT PLAN_TABLE_OUTPUT FROM   TABLE(DBMS_XPLAN.DISPLAY(NULL,'st2','TYPICAL'));
    PLAN_TABLE_OUTPUT
    Plan hash value: 2221487120
    | Id  | Operation            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |       |     1 |     4 |    32  (29)| 00:00:01 |
    |   1 |  SORT AGGREGATE      |       |     1 |     4 |            |          |
    |   2 |   TABLE ACCESS SAMPLE| XTEST | 90693 |   354K|    32  (29)| 00:00:01 |
    9 rows selected.
    SQL>
    Note the difference in rows/bytes between the two samples, but be careful because the explain plan only gives you an estimation ...
    REM: If you sample by blocks, you'll get less I/O (physical or not): select count(*)*2 from mytable sample block (50) is cheaper than select count(*)*2 from mytable sample (50) ...
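    Two other options are sometimes used for the original GTT cardinality problem; both are sketched below with invented column names and values. Either let the optimizer sample the GTT at hard parse time with a DYNAMIC_SAMPLING hint, or set representative statistics once (for example at install time) with DBMS_STATS.SET_TABLE_STATS so that every later parse sees a realistic row count:
    -- Option 1: sample the GTT when the statement is parsed (alias r, hypothetical predicate):
    UPDATE /*+ DYNAMIC_SAMPLING(r 4) */ impo.repcustomersluck24 r
    SET    r.some_flag = 'Y'
    WHERE  r.some_key  = 42;
    -- Option 2: set representative statistics manually, e.g. once at install time:
    BEGIN
      DBMS_STATS.SET_TABLE_STATS(ownname => 'IMPO',
                                 tabname => 'REPCUSTOMERSLUCK24',
                                 numrows => 600000,
                                 numblks => 10000);   -- rough block count, an assumption
    END;
    /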

  • How to resolve most Oracle SQL / PL/SQL performance issues with the help of a quick checklist/guidelines?

    Please go through the checklist/guidelines below; they should help you identify the cause of most performance issues and resolve them quickly.
    Checklist for quick performance problem resolution
    ·         Gather the trace, the code and other information for the given performance case
              - Latest Code from Production env
              - Trace (sql queries, statistics, row source operations with row count, explain plan, all wait events)
              - Program parameters & their frequently used values
              - Run Frequency of the program
              - existing Run-time/response time in Production
              - Business Purpose
    ·         Identify the most time-consuming SQL (anything taking more than 60% of the program time) using trace and code analysis (see the tracing sketch after this list)
    ·         Check that all mandatory parameters/bind variables map directly to index columns of the large transaction tables, without any functions applied
    ·         Identify the most time-consuming operation(s) using the Row Source Operation section
    ·         Study the program parameter inputs that are mapped directly into the SQL
    ·         Identify all input bind parameters used in the SQL
    ·         Is the SQL query returning a large number of records for the given inputs?
    ·         Which are the large tables, and which of their columns are mapped to input parameters?
    ·         Which operation scans the highest number of records in the Row Source Operation / Explain Plan?
    ·         Is the Oracle Cost Based Optimizer using the right driving table for the given SQL?
    ·         Check the time-consuming indexes on the large tables and measure their selectivity
    ·         Study the WHERE clause to see whether the input parameters mapped to tables and columns allow correct/optimal use of an index
    ·         Is the correct index being used for all large tables?
    ·         Is there any full table scan on a large table?
    ·         Is there any unwanted table being used in the SQL?
    ·         Evaluate the join conditions on the large tables and their columns
    ·         Is a full table scan on a large table caused by the use of non-indexed columns?
    ·         Is there any implicit or explicit conversion preventing an index from being used?
    ·         Are the statistics of all large tables up to date?
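    A minimal sketch of collecting that trace evidence for the current session (identifiers and file names are illustrative); the raw trace file is then formatted with tkprof:
    ALTER SESSION SET tracefile_identifier = 'perf_case';
    -- 10046-style trace including wait events and bind values:
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
    -- ... run the slow program here ...
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
    -- then, on the database server:
    --   tkprof <trace_file>.trc perf_case.prf sys=no sort=exeela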
    Quick Resolution tips
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing (a short sketch follows this post)
    2) Use Data Caching Technique/Options to cache static data
    3) Use Pipe Line Table Functions whenever possible
    4) Use Global Temporary Table, Materialized view to process complex records
    5) Avoid a network round trip for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    6) Use an EXTERNAL TABLE to build the interface rather than creating a custom table and program to load and validate the data
    7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
    8) Follow Oracle PL/SQL Best Practices
    9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
    12) Review Join condition on existing query explain plan
    13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
    14) Avoid applying SQL functions on index columns
    15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
    Thanks
    Praful
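    Here is a minimal sketch of item 1, with invented table and column names: fetch in batches with BULK COLLECT ... LIMIT and apply the changes with FORALL instead of updating one row per loop iteration. (As the reply below notes, a single SQL statement such as a MERGE is often simpler and faster still.)
    DECLARE
      CURSOR c IS SELECT id, amount FROM orders_stage;
      TYPE t_ids  IS TABLE OF orders_stage.id%TYPE;
      TYPE t_amts IS TABLE OF orders_stage.amount%TYPE;
      l_ids  t_ids;
      l_amts t_amts;
    BEGIN
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_ids, l_amts LIMIT 1000;   -- batch of up to 1000 rows
        EXIT WHEN l_ids.COUNT = 0;
        FORALL i IN 1 .. l_ids.COUNT                          -- one bulk bind instead of 1000 single-row updates
          UPDATE orders
          SET    amount = l_amts(i)
          WHERE  id     = l_ids(i);
      END LOOP;
      CLOSE c;
    END;
    /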

    I understand you were trying to post something helpful to people, but sorry, this list is appalling.
    1) Use the bulk processing features BULK COLLECT with LIMIT and FORALL for DML instead of row-by-row processing
    No, use pure SQL.
    2) Use Data Caching Technique/Options to cache static data
    No, use pure SQL, and the database and operating system will handle caching.
    3) Use Pipe Line Table Functions whenever possible
    No, use pure SQL
    4) Use Global Temporary Table, Materialized view to process complex records
    No, use pure SQL
    5) Avoid a network round trip for every row between two databases over a dblink; use a global temporary table or set operators to reduce network trips
    No, use pure SQL
    6) Use an EXTERNAL TABLE to build the interface rather than creating a custom table and program to load and validate the data
    Makes no sense.
    7) Understand Oracle's Cost based Optimizer and Tune most expensive SQL queries with help of Explain plan
    What about using the execution trace?
    8) Follow Oracle PL/SQL Best Practices
    Which are?
    9) Review tables and their indexes being used in the SQL queries and avoid unnecessary Table scanning
    You mean design your database and queries properly?  And table scanning is not always bad.
    10) Avoid costly full table scans on big transaction tables with huge data volumes
    It depends if that is necessary or not.
    11) Use appropriate filtration condition on index columns of seeded Oracle tables directly mapped to program parameters
    No, consider that too many indexes can have an impact on overall performance and can prevent the CBO from picking the best plan.  There's far more to creating indexes than just picking every column that people are likely to search on; you have to consider the cardinality and selectivity of data, as well as the volumes of data being searched and the most common search requirements.
    12) Review Join condition on existing query explain plan
    Well, if you don't have your join conditions right then your query won't work, so that's obvious.
    13) Use Oracle hint to guide Oracle Cost based optimizer to choose best plan for your custom queries
    No.  Oracle recommends you do not use hints for query optimization (it says so in the documentation).  Only certain hints such as APPEND etc. which are more related to certain operations such as inserting data etc. are acceptable in general.  Oracle recommends you use the query optimization tools to help optimize your queries rather than use hints.
    14) Avoid applying SQL functions on index columns
    Why? If there's a need for a function-based index, then it should be used (a small sketch appears at the end of this reply).
    15) Use appropriate hint to guide Oracle CBO to choose best plan to reduce response time
    See 13.
    In short, there are no silver bullets for dealing with performance.  Each situation is different and needs to be evaluated on its own merits.
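    For item 14, a small illustration (names invented): when the predicate has to apply a function to the column, a function-based index keeps the access path indexed rather than forcing a full scan.
    CREATE INDEX emp_upper_name_ix ON employees (UPPER(last_name));
    SELECT *
    FROM   employees
    WHERE  UPPER(last_name) = 'SMITH';   -- can use emp_upper_name_ix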

  • Doubt with Global Temporary table

    Hi,
    I have created a global temporary table with the ON COMMIT DELETE ROWS option. In my function, inside a loop, I insert values into this table; after the loop closes I select some other values from the DB, and at the end I return a ref cursor that selects values from the temporary table I created.
    The problem is that I am not getting any values in the cursor.
    Later I created the table with the ON COMMIT PRESERVE ROWS option, and in that case the cursor returns values.
    Can anyone explain this behaviour? As far as I know, global temporary table values are session specific, so why am I not getting the values in the first case when I used ON COMMIT DELETE ROWS (same session)?
    Thanks
    Piyush

    Ok, here's a simple example, like the one we'd like to see from you showing it not working....
    First create a GTT with ON COMMIT DELETE ROWS...
    SQL> ed
    Wrote file afiedt.buf
      1* create global temporary table mytable (x number) on commit delete rows
    SQL> /
    Table created.
    Now a simple function that populates the GTT and returns a ref cursor to the data without doing any commits (hence the data should be there!)
    SQL> ed
    Wrote file afiedt.buf
      1  create or replace function pop_table return sys_refcursor is
      2    v_rc sys_refcursor;
      3  begin
      4    insert into mytable
      5    select rownum from dual connect by rownum <= 10;
      6    OPEN v_rc FOR SELECT x FROM mytable;
      7    RETURN v_rc;
      8* end;
    SQL> /
    Function created.
    So now we call the function and get a reference to our ref cursor...
    SQL> var v_a refcursor;
    SQL> exec :v_a := pop_table();
    PL/SQL procedure successfully completed.
    So, in principle, because no commits have been issued the ref cursor should return data...
    SQL> print v_a;
             X
             1
             2
             3
             4
             5
             6
             7
             8
             9
            10
    10 rows selected.
    ... which it does.
    Now, what happens if we do that again...
    SQL> commit;
    Commit complete.
    SQL> exec :v_a := pop_table();
    PL/SQL procedure successfully completed.
    ... but this time we commit before retrieving the data...
    SQL> commit;
    Commit complete.
    SQL> print v_a;
    ERROR:
    ORA-00600: internal error code, arguments: [kcbz_check_objd_typ_1], [0], [0], [1], [], [], [], []
    no rows selected
    SQL>
    Oracle has (correctly) lost reference to the data because of the commit.
    So show us what yours is doing.

  • Does Global Temporary Table help in performance?

    I have a large database table that is growing daily. The application I have has a page for the past day's data and another for a chosen period of time. Since I'm looking at a very large amount of data for each page (~100k rows) and building charts based on time, I have performance issues. I tried collections for each of these and found that they make everything slower, I think because the collection is large and not indexed.
    Since I don't need the data to be maintained for the session, and in fact each time I submit a page I need to get the updated data (at least for the past-day page), I wonder whether a global temporary table is a good solution for me.
    The only reason I want to store the data in a table is to avoid running similar queries for different charts and reports. Is this a valid reason at all?
    If this is a good solution, can someone give me a hint on how to do this?
    Any help is appreciated.

    It all depends on how efficient your query is. You can have a billion-row table and still get a fraction-of-a-second response if the data is indexed and the number of data blocks to be visited to retrieve the data is small. It's all about reducing the number of I/Os needed to find and retrieve your data with the query. Many aspects of the data, stats, table/index structure etc. can influence the efficiency of your query. The SQL forum would be a better place to get into query tuning, but if this test is fast, you can probably focus elsewhere for now. It will resolve your full result set and then just do a count of the result (to avoid sending 100k rows back to the client); we are trying to get an idea of how long it takes to resolve your result set. Using literals rather than item names in your SQL should be fine for this test. Avoid using V() around item names in your SQL.
    select count(*) from ( <your-query-goes-here> );
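    If you do end up staging the data once per page submit, a minimal sketch might look like the following (table, column and index names are invented for illustration). Because the rows are private to the database session, populate and read the GTT within the same page request, for example in a process that runs before the chart regions:
    CREATE GLOBAL TEMPORARY TABLE chart_stage (
      sample_time  DATE,
      metric_name  VARCHAR2(30),
      metric_value NUMBER
    ) ON COMMIT PRESERVE ROWS;
    -- Indexes on a GTT are allowed and are themselves temporary:
    CREATE INDEX chart_stage_ix ON chart_stage (metric_name, sample_time);
    -- Populate once per submit, then let every chart/report query the staged rows:
    INSERT INTO chart_stage (sample_time, metric_name, metric_value)
    SELECT sample_time, metric_name, metric_value
    FROM   big_history_table
    WHERE  sample_time >= SYSDATE - 1;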
