HASH_SJ hint in Oracle 10g

Could someone explain how the HASH_SJ hint is used to improve the performance of a SQL query that uses EXISTS in Oracle 10g? What is the purpose of the HASH_SJ hint, and how does it improve performance?

Hi Nicosa,
I think EXISTS also works in the same way. What I know is that for each row returned by the outer query, Oracle evaluates the condition written inside the EXISTS clause; if a match is found it fetches the next row from the outer query, and if no match is found yet it tries the next candidate row in the EXISTS subquery.
Please correct me if I am wrong.
Though I have come to know that this hint is deprecated in 10g, I am waiting for your reply.
Mrinmoy
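
For illustration, the HASH_SJ hint is written inside the EXISTS subquery and asks the optimizer to turn the row-by-row probing described above into a hash semi-join (build a hash table on one input, probe it once per row of the other). The table and column names below are made up, and on 10g the hint is deprecated in favour of letting the cost-based optimizer unnest the subquery itself:

SELECT d.dept_id, d.dept_name
FROM   departments d
WHERE  EXISTS (SELECT /*+ HASH_SJ */ 1
               FROM   employees e
               WHERE  e.dept_id = d.dept_id          -- equality correlation is required
               AND    e.hire_date > DATE '2005-01-01');

With the hint honoured, the execution plan shows a HASH JOIN SEMI (or HASH JOIN RIGHT SEMI) step instead of a FILTER operation; this tends to help when the subquery table has to be scanned anyway, and to hurt when an indexed, correlated probe would touch only a few rows.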

Similar Messages

  • SQL query performance difference with Index Hint in Oracle 10g

    Hi,
    I was having a problem with a SQL SELECT query that was taking around 20 seconds to return its results. By trial and error I added an INDEX hint to the query, listing indexes of the tables involved, and the results are now retrieved within 10 milliseconds. I don't understand how the INDEX hint makes this difference.
    The query without the Index Hint:
    select /*+rule*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3  where FdnTab.id=52787 and paramTab1.id= FdnTab.id  and paramTab3.id = FdnTab.id  and paramTab3.attr_value = FdnTab2.fdn  and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2  where TemplateTab2.id=FdnTab.id  and ParamTab2.id=TemplateTab2.template_id  and ParamTab2.id=FdnTab2.id  and ParamTab2.attr_name=paramTab1.attr_name)  ==> EXECUTION TIME: 20 secs
    The same query with Index Hint:
    select /*+INDEX(fdnmappingtable[PRIMARY_KY_FDNMAPPINGTABLE],parametertable[PRIMARY_KY_PARAMETERTABLE])*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3 where FdnTab.id=52787 and paramTab1.id= FdnTab.id and paramTab3.id = FdnTab.id and paramTab3.attr_value = FdnTab2.fdn and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2 where TemplateTab2.id=FdnTab.id and ParamTab2.id=TemplateTab2.template_id and ParamTab2.id=FdnTab2.id and ParamTab2.attr_name=paramTab1.attr_name) ==> EXECUTION TIME: 10 milli secs
    Can anyone suggest what the real problem could be?
    Regards,
    Purushotham

    Sorry,
    The right query and the explain plan:
    select /*+rule*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3  where FdnTab.id=52787 and paramTab1.id= FdnTab.id  and paramTab3.id = FdnTab.id  and paramTab3.attr_value = FdnTab2.fdn  and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2  where TemplateTab2.id=FdnTab.id  and ParamTab2.id=TemplateTab2.template_id  and ParamTab2.id=FdnTab2.id  and ParamTab2.attr_name=paramTab1.attr_name) 
    SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 651267974
    | Id | Operation | Name |
    | 0 | SELECT STATEMENT | |
    |* 1 | FILTER | |
    | 2 | NESTED LOOPS | |
    | 3 | NESTED LOOPS | |
    | 4 | NESTED LOOPS | |
    |* 5 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE |
    |* 6 | TABLE ACCESS BY INDEX ROWID| PARAMETERTABLE |
    |* 7 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE |
    | 8 | TABLE ACCESS BY INDEX ROWID | PARAMETERTABLE |
    |* 9 | INDEX RANGE SCAN | PRIMARY_KY_PARAMETERTABLE |
    | 10 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE |
    |* 11 | INDEX UNIQUE SCAN | SYS_C005695 |
    | 12 | NESTED LOOPS | |
    |* 13 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE |
    |* 14 | INDEX UNIQUE SCAN | PRIMARY_KEY_TRTABLE |
    Predicate Information (identified by operation id):
    1 - filter( EXISTS (SELECT 0 FROM "TEMPLATERELATIONTABLE"
    "TEMPLATETAB2","PARAMETERTABLE" "PARAMTAB2" WHERE
    "PARAMTAB2"."ATTR_NAME"=:B1 AND "PARAMTAB2"."ID"=:B2 AND
    "PARAMTAB2"."ID"="TEMPLATETAB2"."TEMPLATE_ID" AND
    "TEMPLATETAB2"."ID"=:B3))
    5 - access("FDNTAB"."ID"=52787)
    6 - filter("PARAMTAB1"."ATTR_VALUE"<>'DEFAULT')
    7 - access("PARAMTAB1"."ID"="FDNTAB"."ID" AND "PARAMTAB1"."ATTR_NAME"='harqUsersMax')
    9 - access("PARAMTAB3"."ID"="FDNTAB"."ID")
    11 - access("PARAMTAB3"."ATTR_VALUE"="FDNTAB2"."FDN")
    13 - access("PARAMTAB2"."ID"=:B1 AND "PARAMTAB2"."ATTR_NAME"=:B2)
    14 - access("TEMPLATETAB2"."ID"=:B1 AND
    "PARAMTAB2"."ID"="TEMPLATETAB2"."TEMPLATE_ID")
    Note
    - rule based optimizer used (consider using cbo)
    43 rows selected.
    WITH INDEX HINT:
    select /*+INDEX(fdnmappingtable[PRIMARY_KY_FDNMAPPINGTABLE],parametertable[PRIMARY_KY_PARAMETERTABLE])*/ FdnTab2.fdn, paramTab3.attr_name from fdnmappingtable FdnTab, fdnmappingtable FdnTab2, parametertable paramTab1 ,parametertable paramTab3 where FdnTab.id=52787 and paramTab1.id= FdnTab.id and paramTab3.id = FdnTab.id and paramTab3.attr_value = FdnTab2.fdn and paramTab1.attr_name='harqUsersMax' and paramTab1.attr_value <> 'DEFAULT' and exists ( select ParamTab2.attr_name from parametertable ParamTab2, templaterelationtable TemplateTab2 where TemplateTab2.id=FdnTab.id and ParamTab2.id=TemplateTab2.template_id and ParamTab2.id=FdnTab2.id and ParamTab2.attr_name=paramTab1.attr_name);
    SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2924316070
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 916 | 6 (0)| 00:00:01 |
    |* 1 | FILTER | | | | | |
    | 2 | NESTED LOOPS | | 1 | 916 | 4 (0)| 00:00:01 |
    | 3 | NESTED LOOPS | | 1 | 401 | 3 (0)| 00:00:01 |
    | 4 | NESTED LOOPS | | 1 | 207 | 2 (0)| 00:00:01 |
    |* 5 | TABLE ACCESS BY INDEX ROWID| PARAMETERTABLE | 1 | 194 | 1 (0)| 00:00:01 |
    |* 6 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | | 1 (0)| 00:00:01 |
    |* 7 | INDEX UNIQUE SCAN | PRIMARY_KY_FDNMAPPINGTABLE | 1 | 13 | 1 (0)| 00:00:01 |
    | 8 | TABLE ACCESS BY INDEX ROWID | PARAMETERTABLE | 1 | 194 | 1 (0)| 00:00:01 |
    |* 9 | INDEX RANGE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | | 1 (0)| 00:00:01 |
    | 10 | TABLE ACCESS BY INDEX ROWID | FDNMAPPINGTABLE | 1 | 515 | 1 (0)| 00:00:01 |
    |* 11 | INDEX UNIQUE SCAN | SYS_C005695 | 1 | | 1 (0)| 00:00:01 |
    | 12 | NESTED LOOPS | | 1 | 91 | 2 (0)| 00:00:01 |
    |* 13 | INDEX UNIQUE SCAN | PRIMARY_KEY_TRTABLE | 1 | 26 | 1 (0)| 00:00:01 |
    |* 14 | INDEX UNIQUE SCAN | PRIMARY_KY_PARAMETERTABLE | 1 | 65 | 1 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter( EXISTS (SELECT /*+ */ 0 FROM "TEMPLATERELATIONTABLE" "TEMPLATETAB2","PARAMETERTABLE" "PARAMTAB2" WHERE "PARAMTAB2"."ATTR_NAME"=:B1 AND "PARAMTAB2"."ID"=:B2 AND "TEMPLATETAB2"."TEMPLATE_ID"=:B3 AND "TEMPLATETAB2"."ID"=:B4))
    5 - filter("PARAMTAB1"."ATTR_VALUE"<>'DEFAULT')
    6 - access("PARAMTAB1"."ID"=52787 AND "PARAMTAB1"."ATTR_NAME"='harqUsersMax')
    7 - access("FDNTAB"."ID"=52787)
    9 - access("PARAMTAB3"."ID"=52787)
    11 - access("PARAMTAB3"."ATTR_VALUE"="FDNTAB2"."FDN")
    13 - access("TEMPLATETAB2"."ID"=:B1 AND "TEMPLATETAB2"."TEMPLATE_ID"=:B2)
    14 - access("PARAMTAB2"."ID"=:B1 AND "PARAMTAB2"."ATTR_NAME"=:B2)
    Note
    - dynamic sampling used for this statement
    39 rows selected.
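    An editorial aside, not part of the original thread: the documented INDEX hint syntax separates the table spec and the index name with spaces, and when the table has an alias in the FROM clause the hint must name the alias, so here it would look like INDEX(FdnTab PRIMARY_KY_FDNMAPPINGTABLE) INDEX(paramTab1 PRIMARY_KY_PARAMETERTABLE). The bracketed form above is not valid hint syntax and is ignored, so the real difference between the two runs appears to be /*+rule*/ (rule-based optimizer, first plan) versus the cost-based optimizer with dynamic sampling (second plan). A minimal, hypothetical illustration of the documented form:
    select /*+ INDEX(t t_name_idx) */ t.name
    from   some_table t                 -- hypothetical table, aliased as t
    where  t.name = 'ABC';              -- t_name_idx is a hypothetical index on (name)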

  • Fast Refresh MVs and HASH_SJ Hint

    I am building fast refresh MVs on a 3rd party database to enable faster reporting. This is an interim solution whilst we build a new ETL process using CDC.
    The source DB has no PKs, so I'm creating the MV logs with ROWID. When I refresh the MV (exec DBMS_MVIEW.REFRESH('<mview_name>')) and trace the session I notice:
    1. The query joins back to the base table - I think this is necessary as there are two base tables and the MV change could be instigated from either table independently. Therefore the changes might not be in the log.
    2. However, in this case shouldn't it be possible to just join mv_log1 to base_table2 and ignore base_table1?
    3. There is a HASH_SJ hint in this join, forcing a full table scan on the 7M row base_table1.
    4. I am doing 1 update then refreshing the MV
    5. In production this table would have many 10s of single row inserts and updates per minute
    This is an excerpt from the tkprof'd trace file (I've hidden some table/column names)
    FROM   (SELECT MAS$.ROWID RID$ 
                  ,MAS$.* 
            FROM   <base_table1> MAS$
            WHERE  ROWID IN (SELECT  /*+ HASH_SJ */ 
                                    CHARTOROWID(MAS$.M_ROW$$) RID$    
                             FROM   <mview_log1> MAS$  
                             WHERE  MAS$.SNAPTIME$$ > sysdate-1/24 --:1
           ) AS OF SNAPSHOT (:2) JV$
           ,<base_table2> AS OF SNAPSHOT (:2)  MAS$0
    WHERE   JV$.<col1>=MAS$0.<col1>
    AND     JV$.<col2>=MAS$0.<col2>
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      1     13.78     153.32     490874     551013          3           1
    Fetch        0      0.00       0.00          0          0          0           0
    total        2     13.78     153.32     490874     551013          3           1
    Misses in library cache during parse: 1
    Misses in library cache during execute: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 277  (<user>)   (recursive depth: 1)
    Rows     Row Source Operation
          1  TABLE ACCESS BY INDEX ROWID <base_table2>(cr=551010 pr=490874 pw=0 time=153321352 us)
          3   NESTED LOOPS  (cr=551009 pr=490874 pw=0 time=647 us)
          1    VIEW  (cr=551006 pr=490874 pw=0 time=153321282 us)
          1     HASH JOIN RIGHT SEMI (cr=551006 pr=490874 pw=0 time=153321234 us)
          2      TABLE ACCESS FULL <base_table1_mv_log> (cr=21 pr=0 pw=0 time=36 us)
    7194644      TABLE ACCESS FULL <base_table1>(cr=550985 pr=490874 pw=0 time=158282171 us)
          1    INDEX RANGE SCAN <base_table2_index> (cr=3 pr=0 pw=0 time=22 us)(object id 3495055)
    As you can see there are two rows in the MV log (one update: old and new values); the FTS on the base table ensures that the MV refresh is far from fast.
    I have tried this with refresh on demand and on commit, with similar results. Implementing this would make the application impossibly slow.
    I will search the knowledge base once I am given access
    SQL>select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 64-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Thank you for taking the time to read/respond.
    Ben

    Thanks for looking.
    From the Knowledge Base it appears that Bug 6456841 might be the cause. I'll play around with the settings it suggests and see what happens.
    the MV query is basically:
    SELECT ...
    FROM   base_table1
          ,base_table2
    WHERE  base_table1.col1 = base_table2.col1
    AND    base_table1.col2 = base_table2.col2
    When 1 row in base_table1 is updated there is a FTS for that table, rather than:
    1. getting the data from the MV log or
    2. a Nested loop join to base_table1 from its mv_log on rowid
    This is due to the Oracle internal code putting a HASH_SJ hint in when joining the MV log to its base table.
    Ben
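    For readers without the full context, here is a minimal sketch of the kind of setup being described; the object names are placeholders, not the real 3rd-party tables. Fast refresh of a join MV needs ROWID-based MV logs on both base tables and the ROWID of every base table in the MV's select list:
    CREATE MATERIALIZED VIEW LOG ON base_table1 WITH ROWID;
    CREATE MATERIALIZED VIEW LOG ON base_table2 WITH ROWID;
    CREATE MATERIALIZED VIEW mv_report
      REFRESH FAST ON DEMAND
    AS
    SELECT b1.ROWID AS b1_rid,          -- ROWID columns are mandatory for
           b2.ROWID AS b2_rid,          -- fast refresh of a join MV
           b1.col1, b1.col2, b2.col3
    FROM   base_table1 b1,
           base_table2 b2
    WHERE  b1.col1 = b2.col1
    AND    b1.col2 = b2.col2;
    -- one single-row update on base_table1, then the refresh that was traced above
    EXEC DBMS_MVIEW.REFRESH('MV_REPORT', 'F');
    Whether Oracle's internally generated refresh SQL then drives a rowid-based nested-loop join from the MV log or the hash semi-join with the full scan is an optimizer decision inside that generated statement, which is exactly what the bug reference above is about.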

  • Oracle 10g on HP-UX, Terrible Poor Performance!!

    Hi All,
    I setup an Oracle 10g on HP-UX 11iv1. Server is a HP 9000, 4 CPUs (750
    MHZ). It is connected to Disk System 2405 (Virtual Array 7110). Fiber
    Channels are connected at 2 GB speed.
    I installed a cluster 10g database. First I installed CRS and after
    that I installed oracle database. ( I want to test clustered database
    with one instance)
    I installed every thing line by line as oracle document wrote.
    All the things, kernel parameters, patches, are like oracle wrote in
    its document.
    I installed Golden quality package June 2004.
    I increased shmmax to 2.3 G . My SGA is 1.7 G And change some other
    parameters as Sandy Gruver wrote in Best Practices for Oracle on HPUX.
    I used oracle new storage system called ASM for this case.
    When I put the system under the load, I was monitoring the system
    carefully.
    I started gpm. When we sent some queries to the database (it is not a heavy load; I tested it with a Linux system on a ProLiant ML570 without any problem), suddenly the DISK section in gpm changed to red (critical) and I read the warning. It said "Disk bottleneck probability = 100%". I changed the output of the disk report to "Report IO by Disk".
    "DISK%" was 100% and "RAW IO RT" was about 1000 for two disks ( This
    two disks dedicated for ASM). In this situation CPU idle time was 1% or
    2% for all the CPUs but load average was about 1. Performance is not
    acceptable at all ( In comparison with Oracle that installed on Linux).
    Glance reported Disk was in Critical situation.
    I think the problem is IO or something about Disks
    I used an HP Disk System 2405. The fibre channels on both the server side and the disk array side are configured at 2 Gb and the topology is PTTOPT_FABRIC.
    Is it ok that RAW IO RT about 1000 for each LUN?
    Why Disk% in glance/report IO BY Disk/ was 100%?
    I found an error in the STM logs about I/O. It said:
    Entry type: I/O error
    Product: Fiber Channel Interface
    Logger: td
    It logged this error about 12 times during the test. Any comments?
    Regards,
    Hasan

    Sorry, I don't have a solution for your problem, but similar things happen on our installation of Oracle 10 on Solaris 5.8:
    I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/Oracle 10g. In the test environment everything was working fine. On the production system we have very poor DB performance, about 100 times slower than SQL Server 2000!
    Environment at Customer Server Side:
    Hardware: SUN Fire 4 CPU's, OS: Solaris 5.8, DB Oracle 8 and 10
    Data Storage: Em2
    DB access thru OCCI [Environment:OBJECT, Connection Pool, Create Connection]
    Because of older applications it is necessary to run Oracle 8 as well on the same server. Since we have been running the new solution, which uses Oracle 10, the listener for Oracle 8 is frequently gone (or killed by someone?). The performance of the whole Oracle 10 environment is very poor. As a result of my analysis I figured out that creating a connection in the connection pool takes up to 14 seconds. Now I am wondering whether it is a problem to run different Oracle versions on the same server. The customer has installed/created the new Oracle 10 DB with the same user account (oracle) as the older version. To run the new solution we have to change the Oracle environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
    Anton

  • Rpm package error while installing oracle 10g

    Hi ,
    I tried to install the following package on RHEL 5 as a preliminary step to installing Oracle 10g (the third file from disk 1 in the Server folder), and I am getting this error. All the other packages installed fine.
    [amjadali@rhel5 Server]$ rpm -Uvh glibc-2*
    warning: glibc-2.5-12.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
    warning: package glibc = 2.5-12 was already added, skipping glibc < 2.5-12
    error: error reading from file glibc-2.5-12.i686.rpm
    [amjadali@rhel5 Server]$
    Thanks and Regards

    AAP wrote:
    Hi ,
    I tried to install the following package on RHEL 5 as a preliminary step to installing Oracle 10g (the third file from disk 1 in the Server folder), and I am getting this error. All the other packages installed fine.
    [amjadali@rhel5 Server]$ rpm -Uvh glibc-2*
    warning: glibc-2.5-12.i386.rpm: Header V3 DSA signature: NOKEY, key ID 37017186
    warning: package glibc = 2.5-12 was already added, skipping glibc < 2.5-12
    error: error reading from file glibc-2.5-12.i686.rpm
    [amjadali@rhel5 Server]$
    Thanks and Regards
    If you use yum instead of rpm, you will get out of this package-dependency hell in one fell swoop. Oracle has a free public yum server. After configuring it properly, just
    root> yum install oracle-validated
    I leave discovering the details of configuring your system to use the yum server as an exercise for the student. Hint: Google is your friend.

  • Oracle 10g on SuSE 9 with kernel 2.6.4

    Hi,
    I have installed Oracle 10g on SuSE 9 Professional with kernel 2.4.21-192 and it works fine.
    For testing purposes I have installed a 2.6.4 kernel and get the following error on database startup:
    SQL> Connected to an idle instance
    SQL> ORA-27125: unable to create shared memory segment
    Linux Error: 1: Operation not permitted
    SQL> Disconnected
    Do you have any hints, or is kernel 2.6 not supported?
    Thanks Klaus

    /* Using strace I have narrowed this down to the shmget call for allocating SGA memory. Specifically, the 2.6 kernel generates a WAIT in this call and Oracle specifies the IPC_NOWAIT flag, hence the failure condition. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/mman.h>
    #include <errno.h>
    #define SHM_HUGETLB 04000
    #define SGA_SIZE 192937984
    #define dprintf(x) printf(x)
    #define ADDR 3285588844UL
    int main(void)
    {
        int shmid;
        int i;
        volatile char *shmaddr;

        /* First try it the way Oracle does, with IPC_NOWAIT set */
        if ((shmid = shmget(ADDR, SGA_SIZE, IPC_CREAT|IPC_NOWAIT|IPC_EXCL|0660)) < 0) {
            perror("IPC_NOWAIT Failure:\nTry without NOWAIT condition");
            /* Fall back to the same request without IPC_NOWAIT */
            if ((shmid = shmget(ADDR, SGA_SIZE, IPC_CREAT|IPC_EXCL|0660)) < 0) {
                perror("Failure:");
                exit(1);
            }
        }
        printf("shmid: 0x%x\n", shmid);

        errno = 0;
        shmaddr = (volatile char *) shmat(shmid, (void *)ADDR, SHM_RND);
        if (errno != 0) {
            perror("Shared Memory Attach Failure:");
            exit(2);
        }
        printf("shmaddr: %p\n", (void *)shmaddr);

        dprintf("Starting the writes:\n");
        for (i = 0; i < SGA_SIZE; i++) {
            shmaddr[i] = (char) i;
            if (!(i % (1024*1024))) dprintf(".");
        }
        dprintf("\n");

        dprintf("Starting the Check...");
        for (i = 0; i < SGA_SIZE; i++)
            if (shmaddr[i] != (char) i)
                printf("\nIndex %d mismatched.", i);
        dprintf("Done.\n");

        if (shmdt((const void *)shmaddr) != 0) {
            perror("Detached Failure:");
            exit(3);
        }
        return 0;
    }

  • Pivot function in Oracle 10g???

    Hello everybody,
    at the beginning of the week I had a simple problem (or so I thought), but now, after trying and trying, I can't find a solution for it. First of all, I'm working on Oracle 10g, version 10.2.0.4.0. I can't change the version; it's the standard in the whole company...
    At the beginning I have a table like the following one, but please note that the compartment, the type and the amount are flexible and can change at any time:
    comp type amount
    a1 6280 10
    a2 6280 20
    a2 4810 15
    a2 1147 12
    a3 6280 33
    Now I want the table to look like this:
    type a1 a2 a3
    1147 0 12 0
    4810 0 15 0
    6280 10 20 33
    A simple task in Excel, for example: I just use the pivot function and have it done within 10 seconds. But how can I do something like this in Oracle with plain SQL? It can also be PL/SQL, because I will use this in an APEX application.
    Can you please give me a hint or a solution? As stated before, a1, a2, a3 are just examples; it is possible that tomorrow a4, a5 and so on will appear. If necessary I can also create additional tables and views, of course!
    Thanks for your help!
    Regards
    hoge

    Hi Hoge!
    Here is your solution:
    SELECT TYPE,
           sum(a1) AS a1,
           sum(a2) AS a2,
           sum(a3) AS a3
      FROM (SELECT TYPE,
                   decode(comp, 'a1', amount, 0) AS a1,
                   decode(comp, 'a2', amount, 0) AS a2,
                   decode(comp, 'a3', amount, 0) AS a3
              FROM test)
      GROUP BY TYPE
      ORDER BY TYPE;
    And here is my test case setup:
    CREATE TABLE test
        (comp VARCHAR2(255),
         TYPE NUMBER,
         amount NUMBER);
    INSERT INTO test(comp, TYPE, amount) VALUES('a1', 6280, 10);
    INSERT INTO test(comp, TYPE, amount) VALUES('a2', 6280, 20);
    INSERT INTO test(comp, TYPE, amount) VALUES('a2', 4810, 15);
    INSERT INTO test(comp, TYPE, amount) VALUES('a2', 1147, 12);
    INSERT INTO test(comp, TYPE, amount) VALUES('a3', 6280, 33);
    commit;
    Best regards,
    Matt
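    A side note from outside the original thread: from 11g onward the same shape can be produced with the native PIVOT clause. It is not available on 10.2, so this is only for reference, and it returns NULL instead of 0 for missing combinations:
    SELECT *
      FROM (SELECT comp, TYPE, amount FROM test)
      PIVOT (SUM(amount) FOR comp IN ('a1' AS a1, 'a2' AS a2, 'a3' AS a3))
      ORDER BY TYPE;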

  • SAP BW supporting Oracle 10g

    Hello!
    Does anybody know if SAP BW 3.0b currently supports Oracle 10g?
    Moreover, does anybody know of any BW version currently supporting this Oracle 10g version, or when SAP plans to support it and with which BW version(s)?
    Thank you!
    Regards,
    Mario Vallejo

    Hi Dinesh,
    Sorry, but I couldn't find it...
    I also searched for "Oracle 10g" but no related link was found.
    Any hints on how to look for it, or a more specific URL?
    I really appreciate it!
    Regards,
    Mario

  • Oracle 10g and enterprise manager

    Hey;
    I'm obviously fairly new to the Oracle arena, having just passed through a number of courses. I seem to have run into two problems with the installation that have me perplexed. Here are the details for one of them:
    I have oracle 10g up and running on an oracle enterprise linux server:
    SQL> select instance_name, status from v$instance;
    INSTANCE_NAME STATUS
    oci1 OPEN
    The listener is up and running and the oci1 database is dynamically registered:
    $ lsnrctl status
    [[snip]]
    Service "oci1" has 1 instance(s).
    Instance "oci1", status READY, has 1 handler(s) for this service...
    [[snip]]
    I started the dbconsole w/ emctl start dbconsole and it seems to be running:
    $ emctl status dbconsole
    TZ set to US/Central
    Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0
    Copyright (c) 1996, 2005 Oracle Corporation. All rights reserved.
    http://ocidb1.olearycomputers.com:1158/em/console/aboutApplication
    Oracle Enterprise Manager 10g is running.
    When I hit the web page, though, I'm never presented with a login screen; it says my database is down and that the agent can't connect to the instance. When I examine the log that's referenced in the dbconsole status, I see a number of errors such as:
    2009-06-09 18:37:10,367 [OmsServiceDriver thread] ERROR conn.ConnectionService verifyRepositoryEx.433 - Invalid Connection Pool. ERROR = ORA-01017: invalid username/password; logon denied
    I saw errors in the emoms.log file related to the sysman user so I unlocked the account.
    SQL> select username, account_status
    2 from dba_users
    3 where username = 'SYSMAN';
    USERNAME ACCOUNT_STATUS
    SYSMAN OPEN
    That didn't do the trick because not very long after, the account is relocked:
    SQL> select username, account_status
    2 from dba_users
    3 where username = 'SYSMAN';
    USERNAME ACCOUNT_STATUS
    SYSMAN LOCKED(TIMED)
    and, instead of the wrong userid/password, I'm getting errors about
    2009-06-09 22:04:58,586 [HttpRequestHandler-22472173] ERROR eml.OMSHandshake getParameterFromDB.402 - ORA-28000: the account is locked
    Anyone have any info on what I'm messing up?
    Appreciate any info/hints/tips/suggestions (as long as they're not "go back to unix admin") :)
    Doug O'Leary

    Hey;
    Thanks again for the response. I tried dropping the repository and got a whole bunch of errors. While attempting to enable the SYSMAN account, I realized that my password contained some characters that have special meaning for Oracle, and I think that's what was throwing it off. The errors with the repository convinced me to try one more reinstall. A bit of overkill, perhaps, but that's the whole point of a practice box. Once the database was up and running, the Enterprise Manager was fully functional.
    Now that I think about it, I wonder if that password could be responsible for my other schema issue as well...
    Thanks again; I appreciate your time and patience.
    Doug O'Leary
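    For anyone who hits the same ORA-01017/ORA-28000 loop and would rather not reinstall, the usual approach (a sketch, assuming the cause really is a bad or special-character SYSMAN password) is to give SYSMAN a plain alphanumeric password and unlock the account:
    -- as SYS; the new password here is only a placeholder
    ALTER USER sysman IDENTIFIED BY NewPwd123 ACCOUNT UNLOCK;
    SELECT username, account_status
    FROM   dba_users
    WHERE  username = 'SYSMAN';
    Database Control keeps its own copy of that password, so it has to be told about the change as well (in 10.2 that is emctl setpasswd dbconsole, if memory serves); otherwise dbconsole keeps retrying with the old password and relocks the account within minutes.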

  • Installation errors with Oracle 10g Express Edition client.

    Hi,
    I want to use the Oracle 10g XE client on my Ubuntu (Lucid) laptop. I am following the installation instructions from "Installing Database Oracle XE Client". When I run the .sh file in my bash shell, I keep getting this error message:
    root@machine:/usr/lib/oracle/xe/app/oracle/product/10.2.0/client/bin# . ./oracle_env.sh
    /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/bin/nls_lang.sh: 112: [[: not found
    /usr/lib/oracle/xe/app/oracle/product/10.2.0/client/bin/nls_lang.sh: 112: [[: not found
    And when I look up nls_lang.sh at around line 112, I see this:
    # Detertmine the LANGUAGE_TERRITORY part of NLS_LANG
    # we derive it from the current locale by inspecting the LC_ALL and
    # the LANG environment variable. Other LC_* environment variables
    # are not inspected.
    if [[ -n "$LC_ALL" ]]; then
      locale=$LC_ALL
    elif [[ -n "$LANG" ]]; then
      locale=$LANG
    else
      locale=
    fi
    How do I get the client working?

    So, I have been searching around for a method to let the client installed on my laptop connect to the database. Granted, the server is not installed on my local machine. But I am simply unable to find any clues as to how I specify the database the client should connect to. The ORACLE_SID parameter needs to hold a database name; but where are the IP address and port number of the database specified? One site recommended using tnsping to check for the database's existence, but tnsping is not installed. Then there is the commonest hint that I keep bumping into: the TNSNAMES.ORA file. Since I have only the client installed, I don't even see the network/admin folder (or whatever the path is to the .ora file) under /usr/lib/oracle/xe/app/oracle/product/10.2.0.
    It can't be this hard to connect to an Oracle server with only an itsy-bitsy Oracle client installed on my local machine, hmmm?
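    In case it helps the next reader: the XE client does not ship a tnsnames.ora at all; you can either create one yourself and point the TNS_ADMIN environment variable at its directory, or skip it and use the EZCONNECT host:port/service syntax directly. The host, port and service name below are placeholders:
    # $TNS_ADMIN/tnsnames.ora - hypothetical entry
    XE_REMOTE =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = XE))
      )
    After that, either sqlplus scott/tiger@XE_REMOTE or sqlplus scott/tiger@//dbhost.example.com:1521/XE should reach the remote database.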

  • Oracle 10g as Resource in IDM 5.5

    Hi,
    We started adding Oracle 10g databases to our IdM resources.
    When provisioning users into an Oracle 10g resource we get the following exception:
    com.waveset.util.WavesetException:
    Error trying to update Oracle user 'G2814' ORA-30041:
    Cannot grant quota on the tablespace
    java.sql.SQLException: ORA-30041:
    Cannot grant quota on the tablespace
    Since we already removed the oracleTempTSQuota attribute, I assume that the adapter overwrites null values internally.
    Any hints how to solve this problem?
    Thanks in advance...
    Oracle docs: ORA-30041: Cannot grant quota on the tablespace
    Cause: User tried to grant quota on an undo or temporary tablespace
    IDM_Technical_Reference_2005Q3M1 references Oracle 10g as a supported resource.

    There is no need to give quota on a temporary tablespace, because you will never create objects in a temporary tablespace.
    It is a wrong assumption to think that your temporary space usage will be limited by a quota.
    The fact that you can "grant" quota on a temporary tablespace is rather a bug, which is fixed in 10gR2.
    Re: Cannot grant quota on tablespace
    This bug is not fixed in IDM 6.0 SP1.
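    To make the cause concrete: quotas can only be granted on permanent tablespaces, and from 10gR2 on the statement itself raises the error when aimed at a temporary (or undo) tablespace, so the adapter simply must not send any quota for the temporary tablespace attribute. The username and tablespace names below are only illustrative:
    ALTER USER g2814 QUOTA UNLIMITED ON users;   -- permanent tablespace: works
    ALTER USER g2814 QUOTA 100M ON temp;         -- temporary tablespace: ORA-30041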

  • Not a GROUP BY expression - Oracle 10g bug?

    Hi,
    I am getting an ORA-00979 "not a GROUP BY expression" error on Oracle 10g 10.2.0.4.0 - 64bit Production.
    To illustrate my problem I created the following example.
    Let's say I have a shop that sells clothes. Every time I sell something, I store this information in the database - the actual time, the clothes type (trousers, socks, ...) and the size of the piece (M, L, XL, ...).
    Now, the system computes statistics every hour. It goes through the table of sold pieces and counts the number of pieces sold per clothes type and per size from the beginning of the day. It is important to realize that it is from the beginning of the day; because of that, the number of sold pieces in the statistics table grows every hour (or at least stays at the same value as in the previous hour).
    Now, from this statistics table I need to build a new statistic: how many pieces per size I sold in each hour.
    I created this query for that:
    SELECT TIME, xSIZE, (SOLD  - NVL((SELECT SUM(S1.SOLD)
                                      FROM STATISTICS S1
                                      WHERE S1.xSIZE = S.xSIZE
                                        AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
                                        AND TO_CHAR(S1.TIME, 'HH24') != '23'
                                        AND S1.xSIZE IS NOT NULL
                                      GROUP BY TRUNC(S1.TIME, 'HH24'), S1.xSIZE),0)) SOLD
    FROM(
    SELECT TRUNC(S.TIME, 'HH24') TIME, S.xSIZE, SUM(S.SOLD) SOLD
    FROM STATISTICS S
    WHERE S.xSIZE IS NOT NULL
    GROUP BY TRUNC(S.TIME, 'HH24'), S.xSIZE
    --ORDER BY 1 DESC
    ) S
    ORDER BY TIME DESC, xSIZE ASC
    First I select the number of sold pieces per hour and per size. To get the number of pieces sold in a particular hour, I need to subtract the number of pieces sold up to the previous hour. I decided to do this with a correlated subquery...
    Running the query like this I get the "not a GROUP BY expression" error. However, if I uncomment the "ORDER BY 1 DESC" line, the query works. I am pretty sure it has something to do with this line:
    AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
    If you modify this query like this:
    SELECT TIME, xSIZE, (SOLD  - NVL((SELECT SUM(S1.SOLD)
                                      FROM STATISTICS S1
                                      WHERE S1.xSIZE = S.xSIZE
                                        --AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
                                        AND TO_CHAR(S1.TIME, 'HH24') != '23'
                                        AND S1.xSIZE IS NOT NULL
                                      GROUP BY  S1.xSIZE),0)) SOLD
    FROM(
    SELECT TRUNC(S.TIME, 'HH24') TIME, S.xSIZE, SUM(S.SOLD) SOLD
    FROM STATISTICS S
    WHERE S.xSIZE IS NOT NULL
    GROUP BY TRUNC(S.TIME, 'HH24'), S.xSIZE
    --ORDER BY 1 DESC
    ) S
    ORDER BY TIME DESC, xSIZE ASC
    With the join on the truncated time and the grouping by the truncated time removed, the query does not fail...
    And now the best part: if you run the first query on Oracle 11g (Release 11.1.0.6.0 - 64bit Production), it works.
    Does anybody know why the first query is not working on 10g? Is there some bug or limitation in this server version?
    Please don't tell me to rewrite the query in another way; I already did that, so it works on 10g as well. I am just curious why it doesn't work on 10g.
    Finally here are some data for testing.
    CREATE TABLE STATISTICS(
      TIME DATE DEFAULT SYSDATE,
      TYPE VARCHAR2(20),
      xSIZE VARCHAR2(2),
      SOLD NUMBER(5,0) DEFAULT 0
    );
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'T-Shirt', 'M', 10);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Socks', 'M', 3);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'T-Shirt', 'L', 1);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Socks', 'L', 50);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Trousers', 'XL', 7);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 2/24, 'Socks', 'XL', 3);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'T-Shirt', 'M', 13);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'Socks', 'L', 60);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'Trousers', 'XL', 15);
    INSERT INTO STATISTICS(TIME, TYPE, xSIZE, SOLD) VALUES(SYSDATE - 1/24, 'Socks', 'XL', 6);

    It is a known issue when the optimizer decides to expand (merge) the in-line view. You can add something (besides the ORDER BY you already used) to the in-line view to prevent the optimizer from expanding it. For example:
    SQL> SELECT  TIME,
      2          xSIZE,
      3          (SOLD - NVL(
      4                      (
      5                       SELECT  SUM(S1.SOLD)
      6                         FROM  STATISTICS S1
      7                         WHERE S1.xSIZE = S.xSIZE
      8                           AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
      9                           AND TO_CHAR(S1.TIME, 'HH24') != '23'
    10                           AND S1.xSIZE IS NOT NULL
    11                           GROUP BY TRUNC(S1.TIME, 'HH24'),
    12                                    S1.xSIZE
    13                      ),
    14                      0
    15                     )
    16          ) SOLD
    17    FROM  (
    18           SELECT  TRUNC(S.TIME, 'HH24') TIME,
    19                   S.xSIZE,
    20                   SUM(S.SOLD) SOLD
    21             FROM  STATISTICS S
    22             WHERE S.xSIZE IS NOT NULL
    23             GROUP BY TRUNC(S.TIME, 'HH24'),
    24                      S.xSIZE
    25           --ORDER BY 1 DESC
    26          ) S
    27    ORDER BY TIME DESC,
    28             xSIZE ASC
    29  /
             SELECT  TRUNC(S.TIME, 'HH24') TIME,
    ERROR at line 18:
    ORA-00979: not a GROUP BY expression
    SQL> SELECT  TIME,
      2          xSIZE,
      3          (SOLD - NVL(
      4                      (
      5                       SELECT  SUM(S1.SOLD)
      6                         FROM  STATISTICS S1
      7                         WHERE S1.xSIZE = S.xSIZE
      8                           AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
      9                           AND TO_CHAR(S1.TIME, 'HH24') != '23'
    10                           AND S1.xSIZE IS NOT NULL
    11                           GROUP BY TRUNC(S1.TIME, 'HH24'),
    12                                    S1.xSIZE
    13                      ),
    14                      0
    15                     )
    16          ) SOLD
    17    FROM  (
    18           SELECT  TRUNC(S.TIME, 'HH24') TIME,
    19                   S.xSIZE,
    20                   SUM(S.SOLD) SOLD,
    21                   ROW_NUMBER() OVER(ORDER BY SUM(S.SOLD)) RN
    22             FROM  STATISTICS S
    23             WHERE S.xSIZE IS NOT NULL
    24             GROUP BY TRUNC(S.TIME, 'HH24'),
    25                      S.xSIZE
    26           --ORDER BY 1 DESC
    27          ) S
    28    ORDER BY TIME DESC,
    29             xSIZE ASC
    30  /
    TIME      XS       SOLD
    20-SEP-11 L           9
    20-SEP-11 M           0
    20-SEP-11 XL         11
    20-SEP-11 L          51
    20-SEP-11 M          13
    20-SEP-11 XL         10
    6 rows selected.
    SQL>
    Or use subquery factoring (WITH clause) + the undocumented hint MATERIALIZE:
    SQL> WITH S AS (
      2             SELECT  /*+ MATERIALIZE */ TRUNC(S.TIME, 'HH24') TIME,
      3                     S.xSIZE,
      4                     SUM(S.SOLD) SOLD
      5               FROM  STATISTICS S
      6               WHERE S.xSIZE IS NOT NULL
      7               GROUP BY TRUNC(S.TIME, 'HH24'),
      8                        S.xSIZE
      9             --ORDER BY 1 DESC
    10            )
    11  SELECT  TIME,
    12          xSIZE,
    13          (SOLD - NVL(
    14                      (
    15                       SELECT  SUM(S1.SOLD)
    16                         FROM  STATISTICS S1
    17                         WHERE S1.xSIZE = S.xSIZE
    18                           AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
    19                           AND TO_CHAR(S1.TIME, 'HH24') != '23'
    20                           AND S1.xSIZE IS NOT NULL
    21                           GROUP BY TRUNC(S1.TIME, 'HH24'),
    22                                    S1.xSIZE
    23                      ),
    24                      0
    25                     )
    26          ) SOLD
    27    FROM  S
    28    ORDER BY TIME DESC,
    29             xSIZE ASC
    30  /
    TIME      XS       SOLD
    20-SEP-11 L           9
    20-SEP-11 M           0
    20-SEP-11 XL         11
    20-SEP-11 L          51
    20-SEP-11 M          13
    20-SEP-11 XL         10
    6 rows selected.
    SQL>
    SY.
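    A third variant with the same intent (keeping the optimizer from merging the in-line view) is the documented NO_MERGE hint referencing the view's alias; a sketch only, not re-tested on 10.2.0.4:
    SELECT /*+ NO_MERGE(S) */
           TIME,
           xSIZE,
           (SOLD - NVL(
                       (
                        SELECT  SUM(S1.SOLD)
                          FROM  STATISTICS S1
                          WHERE S1.xSIZE = S.xSIZE
                            AND TRUNC(S1.TIME, 'HH24') + 1/24 = S.TIME
                            AND TO_CHAR(S1.TIME, 'HH24') != '23'
                            AND S1.xSIZE IS NOT NULL
                            GROUP BY TRUNC(S1.TIME, 'HH24'),
                                     S1.xSIZE
                       ),
                       0
                      )
           ) SOLD
      FROM  (
             SELECT  TRUNC(S.TIME, 'HH24') TIME,
                     S.xSIZE,
                     SUM(S.SOLD) SOLD
               FROM  STATISTICS S
               WHERE S.xSIZE IS NOT NULL
               GROUP BY TRUNC(S.TIME, 'HH24'),
                        S.xSIZE
            ) S
      ORDER BY TIME DESC,
               xSIZE ASC;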

  • Regarding parallel queries in ABAP same as in oracle 10g

    Hi,
       Is there any way we can write parallel queries in ABAP, in the same way we do in Oracle 10g? Kindly see below:
    alter table emp parallel (degree 4);
    select degree from user_tables where table_name = 'EMP';
    select count(*) from emp;
    alter table emp noparallel;
    SELECT /*+ PARALLEL(emp,4) */ COUNT(*)
    FROM emp;
    The idea here is to distribute the load of the select query across multiple CPUs for load balancing and performance improvement.
    Kindly advise.
    Thanks:
    Gaurav

    Hi,
    >    Is there any way we can write parallel queries in ABAP, in the same way we do in oracle 10g.
    sure. Since it is just a hint...
    SELECT *
      FROM t100 INTO TABLE it100
      %_HINTS ORACLE 'PARALLEL(T100,4)'.
    will give you such an execution plan for example:
    SELECT STATEMENT ( Estimated Costs = 651 , Estimated #Rows = 924.308 )
           4 PX COORDINATOR
               3 PX SEND QC (RANDOM) :TQ10000
                 ( Estim. Costs = 651 , Estim. #Rows = 924.308 )
                 Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
                   2 PX BLOCK ITERATOR
                     ( Estim. Costs = 651 , Estim. #Rows = 924.308 )
                     Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
                       1 TABLE ACCESS FULL T100
                         ( Estim. Costs = 651 , Estim. #Rows = 924.308 )
                         Estim. CPU-Costs = 33.377.789 Estim. IO-Costs = 646
    PX = Parallel eXecution...
    But be sure that you know what you are doing with the parallel execution option... it is not scalable.
    Kind regards,
    Hermann

  • General formula in Oracle 10g

    Hi,
    I use Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod.
    I want to develop a way to deal with complex expressions using SQL. I can't use LISTAGG, so I use XMLAGG. I created a query which runs within 8-15 seconds and returns 20,000 rows. When I slightly modify the query (it should run within 80-150 seconds and return 200,000 rows), it keeps running and running... Is it because I use XML functions (I don't really understand the way XML functions work, I only use them)? I have put the two queries in question below, separated by lines. Any hints on how to improve the performance of the query would be appreciated. Any hints on how to replace the complex REPLACE expression in the query would also be great.
    Thanks,
    Gabor
    How do I make it look like code? How can I display the [B] expression (without the slashes) in the code?
    with dim_mut as (select 1 as mut_id, '([A]+)-([C]+[D])' as keplet from dual
    union select 2 as mut_id, '([A]+[B])*([C]+[D])' as keplet from dual
    ), dim_mut_komp as (
    select 1 as mut_id,1 as mut_komp_id,1 as mut_sorszam,'[A]' as mut_azonosito from dual
    union select 1,2,2,'[B]' from dual
    union select 1,3,3,'[C]' from dual
    union select 1,4,4,'[D]' from dual
    union select 2,5,1,'[A]' from dual
    union select 2,6,2,'[B]' from dual
    union select 2,7,3,'[C]' from dual
    union select 2,8,4,'[D]' from dual
    ), sp8 as (select 1 as mut_komp_id from dual
    union select 2 from dual
    union select 3 from dual
    union select 4 from dual
    union select 5 from dual
    union select 6 from dual
    union select 7 from dual
    union select 8 from dual),
    sp10 as (select 0 as sorszam from dual
    union select 1 from dual
    union select 2 from dual
    union select 3 from dual
    union select 4 from dual
    union select 5 from dual
    union select 6 from dual
    union select 7 from dual
    union select 8 from dual
    union select 9 from dual),
    dim_mut_value as
    select /*+ no_merge */s1.mut_komp_id,
    s2.sorszam+s3.sorszam*10+s4.sorszam*100+s5.sorszam*1000 as sorszam,
    ceil(dbms_random.value(1,100)) as ertek
    from sp8 s1,
    sp10 s2,
    sp10 s3,
    sp10 s4,
    sp10 s5
    select count(*) from (
    select
    replace(
    replace(
    replace(
    replace(
    keplet,
    substr(azonositok,1,instr(azonositok,';',1,1)-1),
    substr(ertekek,1,instr(ertekek,';',1,1)-1)
    substr(azonositok,instr(azonositok,';',1,1)+1,instr(azonositok,';',1,2)-instr(azonositok,';',1,1)-1),
    substr(ertekek,instr(ertekek,';',1,1)+1,instr(ertekek,';',1,2)-1-instr(ertekek,';',1,1))
    substr(azonositok,instr(azonositok,';',1,2)+1,instr(azonositok,';',1,3)-1-instr(azonositok,';',1,2)),
    substr(ertekek,instr(ertekek,';',1,2)+1,instr(ertekek,';',1,3)-1-instr(ertekek,';',1,2))
    substr(azonositok,instr(azonositok,';',1,3)+1,instr(azonositok,';',1,4)-1-instr(azonositok,';',1,3)),
    substr(ertekek,instr(ertekek,';',1,3)+1,instr(ertekek,';',1,4)-1-instr(ertekek,';',1,3))
    ) as kifejezes,
    substr(azonositok,1,instr(azonositok,';',1,1)-1) as azonosito1,
    substr(ertekek,1,instr(ertekek,';',1,1)-1) as ertek1
    substr(azonositok,instr(azonositok,';',1,1)+1,instr(azonositok,';',1,2)-instr(azonositok,';',1,1)-1) as azonosito2,
    substr(ertekek,instr(ertekek,';',1,1)+1,instr(ertekek,';',1,2)-1-instr(ertekek,';',1,1)) as ertek2
    substr(azonositok,instr(azonositok,';',1,2)+1,instr(azonositok,';',1,3)-1-instr(azonositok,';',1,2)) as azonosito3,
    substr(ertekek,instr(ertekek,';',2,1)+1,instr(ertekek,';',1,3)-1-instr(ertekek,';',1,2)) as ertek3
    substr(azonositok,instr(azonositok,';',1,3)+1,instr(azonositok,';',1,4)-1-instr(azonositok,';',1,3)) as azonosito4,
    substr(ertekek,instr(ertekek,';',1,3)+1,instr(ertekek,';',1,4)-1-instr(ertekek,';',1,3)) as ertek4,
    keplet,
    ertekek,
    azonositok,
    dbms_aw.eval_number(
    replace(
    replace(
    replace(
    replace(
    keplet,
    substr(azonositok,1,instr(azonositok,';',1,1)-1),
    substr(ertekek,1,instr(ertekek,';',1,1)-1)
    substr(azonositok,instr(azonositok,';',1,1)+1,instr(azonositok,';',1,2)-instr(azonositok,';',1,1)-1),
    substr(ertekek,instr(ertekek,';',1,1)+1,instr(ertekek,';',1,2)-1-instr(ertekek,';',1,1))
    substr(azonositok,instr(azonositok,';',1,2)+1,instr(azonositok,';',1,3)-1-instr(azonositok,';',1,2)),
    substr(ertekek,instr(ertekek,';',1,2)+1,instr(ertekek,';',1,3)-1-instr(ertekek,';',1,2))
    substr(azonositok,instr(azonositok,';',1,3)+1,instr(azonositok,';',1,4)-1-instr(azonositok,';',1,3)),
    substr(ertekek,instr(ertekek,';',1,3)+1,instr(ertekek,';',1,4)-1-instr(ertekek,';',1,3))
    ) as vegeredmeny
    from (
    select m.mut_id,
    mv.sorszam,
    xmlagg(xmlelement(e,mv.ertek||';') order by mk.mut_sorszam).extract('//text()') as ertekek,
    xmlagg(xmlelement(e,mk.mut_azonosito||';') order by mk.mut_sorszam).extract('//text()') as azonositok,
    m.keplet
    from dim_mut m,
    dim_mut_komp mk,
    dim_mut_value mv
    where m.mut_id=mk.mut_id and mk.mut_komp_id=mv.mut_komp_id
    group by m.mut_id,m.keplet, mv.sorszam
    with dim_mut as (select 1 as mut_id, '([A]+[B])-([C]+[D])' as keplet from dual
    union select 2 as mut_id, '([A]+[B])*([C]+[D])' as keplet from dual
    ), dim_mut_komp as (
    select 1 as mut_id,1 as mut_komp_id,1 as mut_sorszam,'[A]' as mut_azonosito from dual
    union select 1,2,2,'[B]' from dual
    union select 1,3,3,'[C]' from dual
    union select 1,4,4,'[D]' from dual
    union select 2,5,1,'[A]' from dual
    union select 2,6,2,'[B]' from dual
    union select 2,7,3,'[C]' from dual
    union select 2,8,4,'[D]' from dual
    ), sp8 as (select 1 as mut_komp_id from dual
    union select 2 from dual
    union select 3 from dual
    union select 4 from dual
    union select 5 from dual
    union select 6 from dual
    union select 7 from dual
    union select 8 from dual),
    sp10 as (select 0 as sorszam from dual
    union select 1 from dual
    union select 2 from dual
    union select 3 from dual
    union select 4 from dual
    union select 5 from dual
    union select 6 from dual
    union select 7 from dual
    union select 8 from dual
    union select 9 from dual),
    dim_mut_value as
    select /*+ no_merge */s1.mut_komp_id,
    s2.sorszam+s3.sorszam*10+s4.sorszam*100+s5.sorszam*1000+s6.sorszam*10000 as sorszam,
    ceil(dbms_random.value(1,100)) as ertek
    from sp8 s1,
    sp10 s2,
    sp10 s3,
    sp10 s4,
    sp10 s5,
    sp10 s6
    select count(*) from (
    select
    replace(
    replace(
    replace(
    replace(
    keplet,
    substr(azonositok,1,instr(azonositok,';',1,1)-1),
    substr(ertekek,1,instr(ertekek,';',1,1)-1)
    substr(azonositok,instr(azonositok,';',1,1)+1,instr(azonositok,';',1,2)-instr(azonositok,';',1,1)-1),
    substr(ertekek,instr(ertekek,';',1,1)+1,instr(ertekek,';',1,2)-1-instr(ertekek,';',1,1))
    substr(azonositok,instr(azonositok,';',1,2)+1,instr(azonositok,';',1,3)-1-instr(azonositok,';',1,2)),
    substr(ertekek,instr(ertekek,';',1,2)+1,instr(ertekek,';',1,3)-1-instr(ertekek,';',1,2))
    substr(azonositok,instr(azonositok,';',1,3)+1,instr(azonositok,';',1,4)-1-instr(azonositok,';',1,3)),
    substr(ertekek,instr(ertekek,';',1,3)+1,instr(ertekek,';',1,4)-1-instr(ertekek,';',1,3))
    ) as kifejezes,
    substr(azonositok,1,instr(azonositok,';',1,1)-1) as azonosito1,
    substr(ertekek,1,instr(ertekek,';',1,1)-1) as ertek1
    substr(azonositok,instr(azonositok,';',1,1)+1,instr(azonositok,';',1,2)-instr(azonositok,';',1,1)-1) as azonosito2,
    substr(ertekek,instr(ertekek,';',1,1)+1,instr(ertekek,';',1,2)-1-instr(ertekek,';',1,1)) as ertek2
    substr(azonositok,instr(azonositok,';',1,2)+1,instr(azonositok,';',1,3)-1-instr(azonositok,';',1,2)) as azonosito3,
    substr(ertekek,instr(ertekek,';',2,1)+1,instr(ertekek,';',1,3)-1-instr(ertekek,';',1,2)) as ertek3
    substr(azonositok,instr(azonositok,';',1,3)+1,instr(azonositok,';',1,4)-1-instr(azonositok,';',1,3)) as azonosito4,
    substr(ertekek,instr(ertekek,';',1,3)+1,instr(ertekek,';',1,4)-1-instr(ertekek,';',1,3)) as ertek4,
    keplet,
    ertekek,
    azonositok,
    dbms_aw.eval_number(
    replace(
    replace(
    replace(
    replace(
    keplet,
    substr(azonositok,1,instr(azonositok,';',1,1)-1),
    substr(ertekek,1,instr(ertekek,';',1,1)-1)
    substr(azonositok,instr(azonositok,';',1,1)+1,instr(azonositok,';',1,2)-instr(azonositok,';',1,1)-1),
    substr(ertekek,instr(ertekek,';',1,1)+1,instr(ertekek,';',1,2)-1-instr(ertekek,';',1,1))
    substr(azonositok,instr(azonositok,';',1,2)+1,instr(azonositok,';',1,3)-1-instr(azonositok,';',1,2)),
    substr(ertekek,instr(ertekek,';',1,2)+1,instr(ertekek,';',1,3)-1-instr(ertekek,';',1,2))
    substr(azonositok,instr(azonositok,';',1,3)+1,instr(azonositok,';',1,4)-1-instr(azonositok,';',1,3)),
    substr(ertekek,instr(ertekek,';',1,3)+1,instr(ertekek,';',1,4)-1-instr(ertekek,';',1,3))
    ) as vegeredmeny
    from (
    select m.mut_id,
    mv.sorszam,
    xmlagg(xmlelement(e,mv.ertek||';') order by mk.mut_sorszam).extract('//text()') as ertekek,
    xmlagg(xmlelement(e,mk.mut_azonosito||';') order by mk.mut_sorszam).extract('//text()') as azonositok,
    m.keplet
    from dim_mut m,
    dim_mut_komp mk,
    dim_mut_value mv
    where m.mut_id=mk.mut_id and mk.mut_komp_id=mv.mut_komp_id
    group by m.mut_id,m.keplet, mv.sorszam

    The following links describe the new Text features in Oracle 10g and 11g.
    http://download.oracle.com/docs/cd/B19306_01/text.102/b14218/whatsnew.htm#i969790
    http://download.oracle.com/docs/cd/B28359_01/text.111/b28304/whatsnew.htm#sthref6

  • Insert statement taking time on oracle 10g

    Hi,
    My procedure is taking a long time in the following statement since the database was upgraded from Oracle 9i to Oracle 10g.
    I am using Oracle version 10.2.0.4.0.
    cust_item is a materialized view that is refreshed in the procedure.
    The index on cust_item_tbl is dropped before inserting the data and re-created after the insert.
    There are almost 600,000 (6 lakh) records in the MV that are going to be inserted into the table.
    In 9i the insert statement below was taking about 1 hour; in 10g it is taking 2.30 hrs.
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
    INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl  NOLOGGING
             (SELECT /*+ PARALLEL */
                     ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
                     cust_nbr, item_nbr, lu_eff_dt,
                     0, 0, 0, lu_end_dt,
                     bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
                     '', 0, ' ',
                                   case
                                 when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
                                 THEN
                                         case
                                            when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
                                            then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
                                                          and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
                                                          and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
                                                          a.cases_per_pallet)
                                      else cases_per_pallet
                                  end
                          else cases_per_pallet
                     END cases_per_pallet,
                     cases_per_layer
                FROM cust_item a
               WHERE a.ctry_code = p_country_code ----varible passing by procedure
                 AND a.co_code = p_company_code   ----varible passing by procedure
                 AND a.ROWID =
                        (SELECT MAX (b.ROWID)
                           FROM cust_item b
                          WHERE b.ctry_code = a.ctry_code
                            AND b.co_code = a.co_code
                            AND b.ctry_code = p_country_code ----varible passing by procedure
                            AND b.co_code = p_company_code   ----varible passing by procedure
                            AND b.srce_loc_nbr = a.srce_loc_nbr
                            AND b.srce_loc_type_code = a.srce_loc_type_code
                            AND b.cust_nbr = a.cust_nbr
                            AND b.item_nbr = a.item_nbr
                             AND b.lu_eff_dt = a.lu_eff_dt));
    Explain plan on Oracle 10g:
    Plan
    INSERT STATEMENT CHOOSE  Cost: 133,310  Bytes: 248  Cardinality: 1
         5 FILTER                 
              4 HASH GROUP BY  Cost: 133,310  Bytes: 248  Cardinality: 1            
                   3 HASH JOIN  Cost: 132,424  Bytes: 1,273,090,640  Cardinality: 5,133,430       
                        1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV Cost: 10,026  Bytes: 554,410,440  Cardinality: 5,133,430 
                         2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV Cost: 24,570  Bytes: 718,680,200  Cardinality: 5,133,430
    Can you please look into the issue?
    Thanks.

    According to the execution plan you posted parallelism is not taking place - no parallel operations listed
    Check the hint syntax. In particular, "PARALLEL" does not look right.
    Running queries in parallel can either help performance, hurt performance, or do nothing for performance. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, plus a PARALLEL hint naming the table for the MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help. Something like (untested):
    select /*+ PARALLEL_INDEX(INDX_TEMP_CST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTHPERF_MV) */
    Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky.
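    For reference on the hint placement itself (an editorial sketch on hypothetical tables t_target/t_source, not the original statement): the hint comment goes immediately after the INSERT and SELECT keywords, it names the target table or its alias plus an optional degree, and NOLOGGING is a table attribute rather than something written in the DML. Parallel DML also has to be enabled in the session, and a direct-path (APPEND) insert has to be committed before the table is queried again:
    ALTER SESSION ENABLE PARALLEL DML;
    INSERT /*+ APPEND PARALLEL(t_target, 4) */ INTO t_target
    SELECT /*+ PARALLEL(s, 4) */ s.col1, s.col2
    FROM   t_source s;
    COMMIT;   -- required after a direct-path insert before t_target can be read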
