Oracle table insertion is very slow (very important)

I have an Oracle 9i database installed on Windows 2000 Advanced Server. The server has a single processor and 2 GB of RAM.
I have a table with one LONG RAW field and 4 other fields. It contains 10k records, and the table is indexed.
I have a VB application that connects to the Oracle database using ADO. I am saving a binary file into the LONG RAW field. Retrieval is very fast for me, but inserting a record is very slow: it takes 4 minutes for one record.
Please help me solve this issue.

Is it possible for you to capture the execution plan, as well as the session wait events?
If you have buffer busy waits and are not using ASSM (Automatic Segment Space Management), adjusting the freelists setting can also help.
Jaffar
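Not part of the original replies, but as a hedged sketch of what Jaffar asks for (the SID value and the table/column names are placeholders, not from the thread), the waits and the plan could be captured like this on 9i:

-- Wait events for the session doing the insert (replace 123 with its SID).
SELECT event, total_waits, time_waited
FROM v$session_event
WHERE sid = 123
ORDER BY time_waited DESC;

-- 9.2 ships DBMS_XPLAN; explain the insert and display its plan.
EXPLAIN PLAN FOR
INSERT INTO doc_table (id, doc_data) VALUES (:1, :2);
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);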

Similar Messages

  • Oracle insert very slow (very urgent)

    Hello
    I am new to this forum and also new to Oracle. I am working on a C# 3.5 desktop application.
    I am receiving data from a socket (1 message per 10 milliseconds) and saving it in a Queue<T>. A background thread then dequeues the data, performs some calculations, and builds an insert SQL statement at runtime (no stored procedure, just a simple insert query).
    For example:
    insert into Product values (0, 'computer', 125.35);
    I pass that insert query to my data layer, which creates an Oracle connection and inserts into the database. See the code below.
    using System;
    using System.Data.OracleClient;
    class db
    {
        private static OracleConnection conns = null;
        public static void conn(string dbalias, string userid, string password)
        {
            try
            {
                string connString = "server=" + dbalias + ";uid=" + userid + ";password=" + password + ";";
                conns = new OracleConnection(connString);
                conns.Open();
            }
            catch (OracleException e) { Console.WriteLine("Error: " + e); }
        }
        public static void ExecuteCommand(string sqlquery)
        {
            try
            {
                OracleCommand cmd = new OracleCommand(sqlquery, conns);
                cmd.ExecuteNonQuery();
            }
            catch (OracleException e) { Console.WriteLine("Error: " + e); }
        }
    }
    Now the problem is that insertion into the Oracle database is very slow. Please tell me how to solve this issue.

    Additionally:
    How slow? Is just one single insert slow? Or are you doing thousands of inserts that way, and they add up to being slow?
    If you're doing a bunch of inserts, wrap them inside a transaction instead of doing them one by one, which also avoids a commit for each statement.
    Or use array binding or associative arrays as indicated previously (you'd need to use Oracle's provider for that, though ~ you're using System.Data.OracleClient).
    You're using a literal hard-coded statement, per your example? Use bind variables.
    Also, this forum is for tools that plug in to VS. Problems with ODP.NET code you've written would be more appropriate in the [ODP.NET forum|http://forums.oracle.com/forums/forum.jspa?forumID=146], but that forum deals with problems with Oracle's ODP, not Microsoft's (which is in maintenance mode, by the way).
    Hope it helps,
    Greg
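    A hedged sketch, not from Greg's reply: the server-side shape of the same advice (bind variables, many rows per statement, one commit per batch) in PL/SQL, assuming a hypothetical PRODUCT(ID, NAME, PRICE) table:
    DECLARE
        TYPE t_ids    IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
        TYPE t_names  IS TABLE OF VARCHAR2(100) INDEX BY PLS_INTEGER;
        TYPE t_prices IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
        v_ids t_ids; v_names t_names; v_prices t_prices;
    BEGIN
        FOR i IN 1 .. 1000 LOOP
            v_ids(i) := i; v_names(i) := 'computer'; v_prices(i) := 125.35;
        END LOOP;
        -- One parsed INSERT with bind variables, executed for the whole batch.
        FORALL i IN 1 .. 1000
            INSERT INTO product (id, name, price) VALUES (v_ids(i), v_names(i), v_prices(i));
        COMMIT;  -- a single commit for the batch instead of one per row
    END;
    /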

  • Very slow, Very slow

    My iMac is running very slowly and I am getting quite frustrated. Any ideas?

    Your system profile is a complete blank.  Which OS version are you using?
    Please describe in detail all you have attempted to do in order to resolve the issue. 
    How large is your hard drive and how much hard drive space do you have left? 

  • PS Elements 12 very slow, very very slow!

    Hi, I started using PS Elements 12 to try it out. My past experience with Elements has been flawless speed, but with 12, for example, using the smudge tool can sometimes take up to 30+ seconds to complete. I can actually watch the progress across the picture as it transitions. Is there a setting that I can't find, or something else causing this to happen?
    Pc specs
    Win 7 ultimate
    3.8GHz i7 Haswell
    16GB 1666MHz RAM
    GTX 770 2GB card
    Ps elements and projects I'm working on are installed on a SSD.
    So with those specs it should run smooth.
    Help please!

    My guess, if you aren't waiting for the organizer to generate thumbnails, would be the auto analyzer. It's not as easy to kill in PSE 12. Go into task manager and stop it there and see if it makes a difference.

  • Data insertion from large XML document (CLOB) into relational table very slow

    Hi Everybody!
    I'm working with Oracle 9.2.0.5 on Microsoft Windows Server 2003 Enterprise Edition.
    The server (a test server) is a Pentium 4 2.8 GHz, 1GB of RAM.
    I use a procedure called PARITOP_TRAITERXMLRESULTMASSE to insert the data contained in the pXMLDOC CLOB parameter into the table pTABLENAME. (You can see the format of the XML document below.) The first step in this procedure is to verify that the XML document is not empty. If it is not, the procedure needs to add a node to the document, in every <ROW> tag. This added node is named "RST_ID"; it is the foreign key of each record. I retrieve the value of each <RST_ID> node from another table in which the data has been previously added (by the calling procedure). When each of the <ROW> elements has been processed, the PARITOP_INSERTXML procedure is called. This procedure uses DBMS_XMLSAVE.INSERTXML to insert the data contained in the XML document into the specified table.
    (Below, you can see the code of my procedures.)
    With this information, can you tell me why this processing is very, very slow with a large XML document and how I can improve it?
    Thank you for your help!
    Anne-Marie
    CREATE OR REPLACE PROCEDURE "PARITOP_TRAITERXMLRESULTMASSE" (
    pPRC_ID IN PARITOP_PARC.PRC_ID%TYPE,
    pRST_MONDE IN PARITOP_RESULTAT.RST_MONDE%TYPE,
    pXMLDOC IN CLOB,
    pTABLENAME IN VARCHAR2)
    AS
    -- Objective: insert the content of the XML passed as a parameter (pXMLDOC)
    -- into the table passed as a parameter (pTABLENAME).
    -- The table passed as a parameter must have the "RST_ID" column as a foreign key.
    -- (The RST_ID node is therefore added to every XML document. This rst_id is
    -- determined from the PARITOP_RESULTAT table using the pPRC_ID and
    -- pRstMonde supplied as parameters.)
    result_doc CLOB;
    XMLDOMDOC XDB.DBMS_XMLDOM.DOMDOCUMENT;
    NODE_ROWSET DBMS_XMLDOM.DOMNODE;
    NODE_ROW DBMS_XMLDOM.DOMNODE;
    vUE_ID PARITOP_RESULTAT.UE_ID%TYPE;
    vRST_ID PARITOP_RESULTAT.RST_ID%TYPE;
    nodeList DBMS_XMLDOM.DOMNODELIST;
    BEGIN
    BEGIN
    vUE_ID := 0;
    vRST_ID := 0;
    XMLDOMDOC := DBMS_XMLDOM.NEWDOMDOCUMENT(pXMLDOC);
    IF NOT GESTXML_PKG.FN_PARITOP_DOCUMENT_IS_NULL(XMLDOMDOC) THEN
    NODE_ROWSET := DBMS_XMLDOM.item(DBMS_XMLDOM.GETCHILDNODES (DBMS_XMLDOM.MAKENODE(XMLDOMDOC)),0);
    for i in 0..dbms_xmldom.getLength(DBMS_XMLDOM.getchildnodes(NODE_ROWSET))-1 loop
    NODE_ROW := DBMS_XMLDOM.ITEM(DBMS_XMLDOM.GETCHILDNODES(NODE_ROWSET), i) ;
    nodeList := DBMS_XMLDOM.GETELEMENTSBYTAGNAME(DBMS_XMLDOM.makeelement(NODE_ROW) , 'UE_ID');
    IF vUE_ID <> DBMS_XMLDOM.GETNODEVALUE(DBMS_XMLDOM.GETFIRSTCHILD(DBMS_XMLDOM.ITEM(nodeList, 0))) THEN
    vUE_ID := DBMS_XMLDOM.GETNODEVALUE(DBMS_XMLDOM.GETFIRSTCHILD(DBMS_XMLDOM.ITEM(nodeList, 0)));
    -- pick up the rst_id
    SELECT RST_ID INTO vRST_ID
    FROM PARITOP_RESULTAT RST
    WHERE RST.PRC_ID = pPRC_ID
    AND RST.UE_ID = vUE_ID
    AND RST.RST_MONDE = pRST_MONDE
    AND RST_A_SUPPRIMER = 0;
    END IF;
    GESTXML_PKG.PARITOP_ADDNODETOROW(XMLDOMDOC, NODE_ROW, 'RST_ID', vRST_ID);
    end loop;
    RESULT_DOC := ' '; -- keep this, so that writeToClob does not raise an error.
    dbms_xmldom.writeToClob(DBMS_XMLDOM.MAKENODE(XMLDOMDOC), RESULT_DOC);
    -- Insert the XML document into the table "tableName"
    GESTXML_PKG.PARITOP_INSERTXML(RESULT_DOC, pTABLENAME);
    DBMS_XMLDOM.FREEDOCUMENT( XMLDOMDOC);
    end if;
    EXCEPTION
    […exception treatement…]
    END;
    END;
    The format of an XML clob is:
    <ROWSET>
    <ROW>
    <PRC_ID>193</PRC_ID>
    <UE_ID>8781</UE_ID>
    <VEN_ID>6223</VEN_ID>
    <RST_MONDE>0</RST_MONDE>
    <CMP_SELMAN>0</CMP_SELMAN>
    <CMP_INDICESELECTION>92.307692307692307</CMP_INDICESELECTION>
    <CMP_PVRES>94900</CMP_PVRES>
    <CMP_PVAJUSTE>72678.017699115066</CMP_PVAJUSTE>
    <CMP_PVAJUSTEMIN>72678.017699115095</CMP_PVAJUSTEMIN>
    <CMP_PVAJUSTEMAX>72678.017699115037</CMP_PVAJUSTEMAX>
    <CMP_PV>148000</CMP_PV>
    <CMP_VALROLE>129400</CMP_VALROLE>
    <CMP_PVRESECART>4790</CMP_PVRESECART>
    <CMP_PVRHAB>101778.01769911509</CMP_PVRHAB>
    <CMP_UTILISE>1</CMP_UTILISE>
    <CMP_TVM>1</CMP_TVM>
    <CMP_PVA>148000</CMP_PVA>
    </ROW>
    <ROW>
    <PRC_ID>193</PRC_ID>
    <UE_ID>8781</UE_ID>
    <VEN_ID>6235</VEN_ID>
    <RST_MONDE>0</RST_MONDE>
    <CMP_SELMAN>0</CMP_SELMAN>
    <CMP_INDICESELECTION>76.92307692307692</CMP_INDICESELECTION>
    <CMP_PVRES>117800</CMP_PVRES>
    <CMP_PVAJUSTE>118080</CMP_PVAJUSTE>
    <CMP_PVAJUSTEMIN>118080</CMP_PVAJUSTEMIN>
    <CMP_PVAJUSTEMAX>118080</CMP_PVAJUSTEMAX>
    <CMP_PV>172000</CMP_PV>
    <CMP_VALROLE>134800</CMP_VALROLE>
    <CMP_PVRESECART>0</CMP_PVRESECART>
    <CMP_PVRHAB>147180</CMP_PVRHAB>
    <CMP_UTILISE>1</CMP_UTILISE>
    <CMP_TVM>1</CMP_TVM>
    <CMP_PVA>172000</CMP_PVA>
    </ROW>
    </ROWSET>
    PARITOP_COMPARABLE TABLE :
    RST_ID NUMBER(10) NOT NULL,
    VEN_ID NUMBER(10) NOT NULL,
    CMP_SELMAN NUMBER(1) NOT NULL,
    CMP_UTILISE NUMBER(1) NOT NULL,
    CMP_INDICESELECTION FLOAT(53) NOT NULL,
    CMP_PVRES FLOAT(53) NULL,
    CMP_PVAJUSTE FLOAT(53) NULL,
    CMP_PVRHAB FLOAT(53) NULL,
    CMP_TVM FLOAT(53) NULL
    PROCEDURE PARITOP_INSERTXML (xmlDoc IN clob, tableName IN VARCHAR2)
    AS
    insCtx DBMS_XMLSave.ctxType;
    rowss number;
    BEGIN
    -- inserts the XML fields into the table passed as a parameter.
    -- the XML fields only need to have the same names as the table columns.
    BEGIN
    insCtx := DBMS_XMLSave.newContext(tableName); -- get context handle
    DBMS_XMLSAVE.SETDATEFORMAT( insCtx, 'yyyy-MM-dd HH:mm:ss');--attention, case sensitive
    DBMS_XMLSAVE.setIgnoreCase(insCtx, 1);
    rowss := DBMS_XMLSAVE.INSERTXML(insCtx , xmlDoc);
    DBMS_XMLSave.closeContext(insCtx);
    EXCEPTION
    […]
    END;
    END;
    PROCEDURE PARITOP_ADDNODETOROW (
    XMLDOMDOC DBMS_XMLDOM.DOMDOCUMENT,
    NODE_ROW dbms_xmldom.DOMNode,
    NOM_NOEUD VARCHAR2,
    VALEUR_NOEUD VARCHAR2)
    AS
    -- ADDS A NODE WITH A SINGLE VALUE TO A ROW OF AN XML DOCUMENT.
    -- USEFUL MAINLY FOR FOREIGN KEYS
    domElemAInserer DBMS_XMLDOM.DOMELEMENT;
    NODE dbms_xmldom.DOMNode;
    NODE_TMP dbms_xmldom.DOMNode;
    BEGIN
    domElemAInserer := DBMS_XMLDOM.createElement(XMLDOMDOC, NOM_NOEUD) ;
    NODE := DBMS_XMLDOM.MAKENODE(domElemAInserer); --cast
    NODE := DBMS_XMLDOM.APPENDCHILD(NODE_ROW,NODE);
    NODE_TMP := DBMS_XMLDOM.MAKENODE(DBMS_XMLDOM.CREATETEXTNODE(XMLDOMDOC, VALEUR_NOEUD ) );
    NODE := DBMS_XMLDOM.APPENDCHILD(NODE,NODE_TMP );
    END;


  • Oracle 11G - Update is very slow on View

    I am having big trouble with an UPDATE query on Oracle 11g.
    I have a set of 5 tables of identical structure and a view that consists of a UNION ALL of the 5 tables.
    None of these tables contains more than 20,000 rows.
    Let's call the view V_INTE_NE. Each of the base tables has a PRIMARY KEY defined on 3 NUMBER(10,0) columns -> INTE_REF / NE_REF / INSTANCE.
    Now, I have 6 rows in another table and I want to update my view from the data in this small table (let's call it SMALL). This table has the 3 columns INTE_REF / NE_REF / INSTANCE.
    When I try to join the two tables :
    SELECT * FROM T_INTE_NE T2
    WHERE EXISTS ( SELECT 1 FROM SMALL T1 WHERE T2.INTE_REF = T1.INTEREF AND T2.NE_REF = T1.NEREF AND T2.INTE_INST = T1.INSTANCE )
    I get the 6 rows in 0.037 seconds.
    When I try to update the view (I have an INSTEAD OF trigger that does nothing, it just returns for testing, without modifying anything), I execute the following query:
    UPDATE T_INTE_NE T2
    SET INTE_STATE = -11 WHERE
    EXISTS ( SELECT 1 FROM SMALL T1 WHERE T2.INTE_REF = T1.INTEREF AND T2.NE_REF = T1.NEREF AND T2.INTE_INST = T1.INSTANCE )
    The 6 rows are updated (at least the TRIGGER is called) in 20 seconds.
    However, in the execution plan, I can't see where Oracle spends the time to execute the query:
    Plan hash value: 907176690
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | UPDATE STATEMENT | | 6 | 36870 | 153 (1)| 00:00:02 |
    | 1 | UPDATE | T_INTE_NE | | | | |
    |* 2 | HASH JOIN RIGHT SEMI | | 6 | 36870 | 153 (1)| 00:00:02 |
    | 3 | TABLE ACCESS FULL | SMALL | 6 | 234 | 9 (0)| 00:00:01 |
    | 4 | VIEW | T_INTE_NE | 6 | 36636 | 143 (0)| 00:00:02 |
    | 5 | VIEW | X_V_T_INTE_NE | 6 | 18636 | 143 (0)| 00:00:02 |
    | 6 | UNION-ALL | | | | | |
    | 7 | TABLE ACCESS FULL| SECNODE1_T_INTE_NE | 1 | 3106 | 60 (0)| 00:00:01 |
    | 8 | TABLE ACCESS FULL| SECNODE2_T_INTE_NE | 1 | 3106 | 60 (0)| 00:00:01 |
    | 9 | TABLE ACCESS FULL| SECNODE3_T_INTE_NE | 1 | 3106 | 2 (0)| 00:00:01 |
    | 10 | TABLE ACCESS FULL| SECNODE4_T_INTE_NE | 1 | 3106 | 2 (0)| 00:00:01 |
    | 11 | TABLE ACCESS FULL| SECNODE5_T_INTE_NE | 1 | 3106 | 2 (0)| 00:00:01 |
    | 12 | TABLE ACCESS FULL| SYS_T_INTE_NE | 1 | 3106 | 17 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - access("T2"."INTE_REF"="T1"."INTEREF" AND "T2"."NE_REF"="T1"."NEREF" AND
    "T2"."INTE_INST"="T1"."INSTANCE")
    Note
    - dynamic sampling used for this statement (level=2)
    Statistics
    3 user calls
    0 physical read total bytes
    0 physical write total bytes
    0 spare statistic 3
    0 commit cleanout failures: cannot pin
    0 TBS Extension: bytes extended
    0 total number of times SMON posted
    0 SMON posted for undo segment recovery
    0 SMON posted for dropping temp segment
    0 segment prealloc tasks
    What could explain the difference?
    I get exactly the same execution plan (when autotrace is ON).
    Furthermore, if I try to do the same update on each of the base tables, the rows are updated instantaneously.
    Is there any reason to avoid this kind of query?
    Any help would be greatly appreciated :-)
    Regards,
    Patrick

    Sorry for this, I lost myself in conjectures and I didn't think I would have to explain the whole case.
    So, I wrote a small piece of PL/SQL that reproduces the same issue.
    It seems that my issue is not due to the UPDATE but to the use of the IN predicate.
    As you can see at the end of the script, I try to join the 2 tables using different techniques.
    The first query is very fast, the second is very slow.
    I need the second one if I want to do any update.
    DROP TABLE Part1;
    DROP TABLE Part2;
    DROP TABLE Part3;
    DROP TABLE Part4;
    CREATE TABLE Part1 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 1 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part1 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE TABLE Part2 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 2 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part2 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE TABLE Part3 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 3 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part3 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE TABLE Part4 ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), PartId NUMBER(10, 0) DEFAULT( 4 ) NOT NULL, Data1 VARCHAR2(1000), X_Data2 VARCHAR2(2000) NULL, X_Data3 VARCHAR2(2000) NULL, CONSTRAINT PK_Part4 PRIMARY KEY( Key1, Key2, Key3 ) );
    CREATE OR REPLACE FUNCTION Decrypt (
    x_in IN VARCHAR2
    ) RETURN VARCHAR2
    AS
    x_out VARCHAR2(2000);
    BEGIN
    SELECT REVERSE( x_in ) INTO x_out FROM DUAL;
    RETURN ( x_out );
    END;
    CREATE OR REPLACE VIEW AllParts AS
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part1
    UNION ALL
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part2
    UNION ALL
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part3
    UNION ALL
    SELECT Key1, Key2, Key3, PartId, Data1, Decrypt( X_Data2 ) AS Data2, Decrypt( X_Data3 ) AS Data3 FROM Part4;
    DROP TABLE Small;
    CREATE TABLE Small ( Key1 NUMBER(10, 0), Key2 NUMBER(10, 0), Key3 NUMBER(10, 0), Data1 VARCHAR2(1000) );
    BEGIN
    DECLARE
    n_Key NUMBER(10, 0 ) := 0;
    BEGIN
    WHILE ( n_Key < 50000 )
    LOOP
    INSERT INTO Part1( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    INSERT INTO Part2( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    INSERT INTO Part3( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    INSERT INTO Part4( Key1, Key2, Key3 )
    VALUES( n_Key, FLOOR( n_Key / 10 ), FLOOR( n_Key / 100 ) );
    n_Key := n_Key + 1;
    END LOOP;
    INSERT INTO Small( Key1, Key2, Key3, Data1 ) VALUES ( 1000, 100, 10, 'Test 1000' );
    INSERT INTO Small( Key1, Key2, Key3, Data1 ) VALUES ( 3000, 300, 30, 'Test 3000' );
    INSERT INTO Small( Key1, Key2, Key3, Data1 ) VALUES ( 5000, 500, 50, 'Test 5000' );
    COMMIT;
    END;
    END;
    SELECT T2.*
    FROM Small T1, AllParts T2
    WHERE T2.Key1 = T1.Key1 AND T2.Key2 = T1.Key2 AND T2.Key3 = T1.Key3;
    SELECT T1.*
    FROM AllParts T1
    WHERE ( T1.Key1, T1.Key2, T1.Key3 ) IN ( SELECT T2.Key1, T2.Key2, T2.Key3 FROM Small T2 );

  • Query can run in Oracle 10g but very slow in 11g

    Hi,
    We've just migrated to Oracle 11g and have noticed that some of our views are very slow (they take seconds in 10g and 30 minutes in 11g), and the tables used are local tables.
    Do any of you face the same issue?
    This is our query:
    SELECT
    A.wellbore
    ,a.depth center
    ,d.MD maxbc
    ,d.XDELT xbc
    ,d.YDELT ybc
    ,e.MD minac
    ,e.XDELT xac
    ,e.YDELT yac
    from
    table_A d,table_A e, table_B a
    where a.wellbore = d.WELLBORE (+)
    and a.wellbore = e.WELLBORE(+)
    and d.MD = (select max(MD) from table_A b where b.MD < a.depth and
    d.wellBORE = b.wellBORE)
    and e.md = (select min(md) from table_A c where c.MD > a.depth and
    e.wellBORE = c.wellBORE);

    Thanks, I will move to the correct one.
    Rafi,
    I built the indexes and it is still slow. I am querying a view from another database, which is a 10g instance.

  • Transactional replication very slow with indexes on Subscriber table

    I have set up transactional replication for one of our databases, where one table with about 5 million records is replicated to a subscriber database. With every replication about 500,000-600,000 changed records are sent to the Subscriber.
    For the past month I have seen very strange behaviour when I add about 10 indexes to the Subscriber table. As soon as I have added the indexes, replication speed becomes extremely slow (almost 3 hours for 600k records). As soon as I remove the indexes, the replication is again very fast, about 3 minutes for the same amount of records.
    I've searched a lot on the internet to solve this issue but can't find any explanation for this strange behaviour after adding the indexes. As far as I know it shouldn't be a problem to add indexes to a Subscriber table, and it hasn't been before on another replication configuration we use.
    Some information from the Replication Log:
    With indexes on the Subscriber table
    Total Run Time (ms) : 9589938 Total Work Time : 9586782
    Total Num Trans : 3 Num Trans/Sec : 0.00
    Total Num Cmds : 616245 Num Cmds/Sec : 64.28
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 9580752 
    Time Spent on Commits (ms): 2687 Commits/Sec : 0.00
    Time to Apply Cmds (ms) : 9586782 Cmds/Sec : 64.28
    Time Cmd Queue Empty (ms) : 5499 Empty Q Waits > 10ms: 172
    Total Time Request Blk(ms): 5499 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 10378 Cmds/Sec : 59379.94
    Time Cmd Queue Full (ms) : 9577919 Full Q Waits > 10ms : 6072
    Without indexes on the Subscriber table
    Total Run Time (ms) : 89282 Total Work Time : 88891
    Total Num Trans : 3 Num Trans/Sec : 0.03
    Total Num Cmds : 437324 Num Cmds/Sec : 4919.78
    Total Idle Time : 0 
    Writer Thread Stats
    Total Number of Retries : 0 
    Time Spent on Exec : 86298 
    Time Spent on Commits (ms): 282 Commits/Sec : 0.03
    Time to Apply Cmds (ms) : 88891 Cmds/Sec : 4919.78
    Time Cmd Queue Empty (ms) : 1827 Empty Q Waits > 10ms: 113
    Total Time Request Blk(ms): 1827 
    P2P Work Time (ms) : 0 P2P Cmds Skipped : 0
    Reader Thread Stats
    Calls to Retrieve Cmds : 2 
    Time to Retrieve Cmds (ms): 2812 Cmds/Sec : 155520.63
    Time Cmd Queue Full (ms) : 86032 Full Q Waits > 10ms : 4026
    Can someone please help me with this issue? Any ideas? 
    Pim 

    Hi Megens:
    An insert statement might be slow due not only to indexes but to a few other things too:
    0) SQL DB blocking during inserts
    1) Insert triggers, if any exist
    2) Constraints, if any
    3) Index fragmentation
    4) Page splits / fill factor
    Without indexes, inserts will be fast because that extra work is skipped. With indexes, each time a new row is inserted into the table, SQL Server will:
    1) check for room on the page; if there is no room, a page split will happen so the record can be placed in the right position
    2) once the record is written, update all the indexes
    3) all this extra work can make the insert statement a bit slow.
    It's better to run index maintenance jobs frequently to avoid fragmentation.
    If everything is clear on the SQL Server side, you need to look at disk I/O, network latency between the servers, and so on.
    Thanks,
    Thanks, Satish Kumar. Please mark this post as answered if my answer helps you resolve your issue :)
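    As a hedged illustration only (the table name dbo.SubscriberTable is a placeholder, not from the thread), fragmentation on the subscriber table could be checked and addressed with something like:
    -- Check fragmentation of the indexes on the subscriber table.
    SELECT i.name, s.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SubscriberTable'), NULL, NULL, 'LIMITED') s
    JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id;
    -- Rebuild with a lower fill factor to leave room for the replicated changes.
    ALTER INDEX ALL ON dbo.SubscriberTable REBUILD WITH (FILLFACTOR = 90);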

  • Retrieving records from Oracle DB very slow... please help

    Hi, I'm writing VB code to retrieve records from an Oracle database, server version 8. I'm using ADO (ADODB) to retrieve the records from various tables. Unfortunately, one of the tables is very slow to respond; the table only contains around 204,900 records. The SQL statement used to retrieve the records is a simple statement containing only a WHERE clause. What could make the retrieval time this slow? Is it indexing? The Oracle driver? The hardware spec? And is there any solution for me to improve the performance? Thanks!

    Well, there are a few things to consider...
    First, can you try executing your query via SQL*Plus? If there are database tuning problems, your query will be slow no matter where you run it.
    Second, are you retrieving significantly more rows in this query than in your other queries? It can take a significant amount of time to retrieve records to the client, even if it's quick to select them.
    Justin
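    A minimal sketch of Justin's first suggestion (the table and WHERE clause below are placeholders for the real ones): in SQL*Plus, switch on timing and a basic plan/statistics report, then run the same statement the VB code issues:
    SET TIMING ON
    SET AUTOTRACE ON EXPLAIN STATISTICS
    SELECT *
    FROM some_table           -- placeholder for the slow table
    WHERE some_column = 'X';  -- same WHERE clause the VB code uses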

  • Very slow queries when using oracle spatial

    Hi,
    I am new to oracle, please help with me the problem that I am facing.
    I am storing spatial regions in the database. The data is very simple in form: each region is a small rectangle. The boundaries of the rectangles touch each other but do not overlap. Basically I am filling the 2D space with rectangles.
    I use an SDO_FILTER query to find regions in the database that intersect a line. The database contains a single table with one column of type SDO_GEOMETRY, and the table has around 25,000 tuples.
    The problem is that when the result of the query is large (70% of the tuples), meaning many regions intersect the line, the query takes a lot more time (about 6 times more) than doing the same thing without using the R-tree.
    I can do it without the R-tree because all my objects are rectangles: I store the bottom-left and top-right corners of each rectangle in the database and then use a simple range query.
    Is this behaviour expected? Should I use an R-tree index structure, or something else? I know that when the result is very large the R-tree search has to visit many subtrees to retrieve the data points, but should the performance be worse than a sequential scan?
    Or will indexes only work for cases where the selectivity of the query is high?
    Please reply I need help,
    Thank you,
    Nishant

    Hi,
    I am using the version of spatial that we get with Oracle9i Release 2 (9.2.0.1) or it could be
    Oracle 9i release 2 (9.2.0.2).
    I don't know how to check that.
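    (Not from the original reply: a hedged way to check the exact patch level is shown below.)
    SELECT * FROM v$version;
    -- or: SELECT * FROM product_component_version;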
    Here I provide the create script, create index statement and a few sample data items that I insert into the data base.
    Create Script:
    create table out5d(
    ID NUMBER(9),
    PID NUMBER(9),
    X NUMBER,
    Y NUMBER,
    L NUMBER,
    PL NUMBER,
    shape MDSYS.SDO_GEOMETRY,
    HEIGHT NUMBER,
    N NUMBER(9),
    d0 NUMBER,
    d1 NUMBER,
    d2 NUMBER,
    d3 NUMBER,
    d4 NUMBER,
    D0_MAX NUMBER,
    D1_MAX NUMBER,
    D2_MAX NUMBER,
    D3_MAX NUMBER,
    D4_MAX NUMBER,
    D0_MIN NUMBER,
    D1_MIN NUMBER,
    D2_MIN NUMBER,
    D3_MIN NUMBER,
    D4_MIN NUMBER);
    alter table out5d
    add constraint keyConstraint
    primary key (ID);
    Create Index:
    INSERT INTO USER_SDO_GEOM_METADATA VALUES ('out5d' , 'shape' , MDSYS.SDO_DIM_ARRAY( MDSYS.SDO_DIM_ELEMENT('X', 0, 1, 0.0000001), MDSYS.SDO_DIM_ELEMENT('Y', 0, 1, 0.0000001) ), NULL);
    CREATE INDEX out5d_rtree ON out5d(shape) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    I changed the fanout to 50, because I read in the Spatial user's guide that if you are expecting a big result set you should increase the fanout to 50 or 60.
    Also, here are some sample data items that I inserted into the table.
    insert into out5d Values (25129,17556,0.804932,0.804993,0,NULL,NULL,NULL,1,0.298039,0.5,0.243137,0.222222,0.266667,0.298039,0.5,0.243137,0.222222,0.266667,0.298039,0.5,0.243137,0.222222,0.266667);
    insert into out5d Values (25130,17556,0.804993,0.805054,0,NULL,NULL,NULL,1,0.290196,0.494681,0.235294,0.222222,0.266667,0.290196,0.494681,0.235294,0.222222,0.266667,0.290196,0.494681,0.235294,0.222222,0.266667);
    insert into out5d Values (25131,19920,0.371094,0.371155,0,NULL,NULL,NULL,1,0.337255,0.734043,0.262745,0.239316,0.357576,0.337255,0.734043,0.262745,0.239316,0.357576,0.337255,0.734043,0.262745,0.239316,0.357576);
    insert into out5d Values (25132,19920,0.371155,0.371216,0,NULL,NULL,NULL,1,0.345098,0.723404,0.270588,0.239316,0.357576,0.345098,0.723404,0.270588,0.239316,0.357576,0.345098,0.723404,0.270588,0.239316,0.357576);
    insert into out5d Values (25133,19920,0.371216,0.371277,0,NULL,NULL,NULL,1,0.352941,0.734043,0.262745,0.247863,0.363636,0.352941,0.734043,0.262745,0.247863,0.363636,0.352941,0.734043,0.262745,0.247863,0.363636);
    insert into out5d Values (25134,19920,0.371277,0.371338,0,NULL,NULL,NULL,1,0.345098,0.728723,0.262745,0.247863,0.363636,0.345098,0.728723,0.262745,0.247863,0.363636,0.345098,0.728723,0.262745,0.247863,0.363636);
    insert into out5d Values (25135,17615,0.298706,0.298767,0,NULL,NULL,NULL,1,0.231373,1,0.262745,0.589744,0.606061,0.231373,1,0.262745,0.589744,0.606061,0.231373,1,0.262745,0.589744,0.606061);
    insert into out5d Values (25136,17615,0.298767,0.298828,0,NULL,NULL,NULL,1,0.223529,1,0.270588,0.589744,0.606061,0.223529,1,0.270588,0.589744,0.606061,0.223529,1,0.270588,0.589744,0.606061);
    insert into out5d Values (25137,17615,0.298828,0.298889,0,NULL,NULL,NULL,1,0.231373,1,0.262745,0.598291,0.618182,0.231373,1,0.262745,0.598291,0.618182,0.231373,1,0.262745,0.598291,0.618182);
    insert into out5d Values (25138,17615,0.298889,0.29895,0,NULL,NULL,NULL,1,0.231373,1,0.270588,0.589744,0.612121,0.231373,1,0.270588,0.589744,0.612121,0.231373,1,0.270588,0.589744,0.612121);
    insert into out5d Values (25139,19918,0.518127,0.518188,0,NULL,NULL,NULL,1,0.584314,1,0.152941,0.282051,0.412121,0.584314,1,0.152941,0.282051,0.412121,0.584314,1,0.152941,0.282051,0.412121);
    insert into out5d Values (25140,19918,0.518188,0.51825,0,NULL,NULL,NULL,1,0.572549,1,0.160784,0.282051,0.418182,0.572549,1,0.160784,0.282051,0.418182,0.572549,1,0.160784,0.282051,0.418182);
    insert into out5d Values (25141,19918,0.51825,0.518311,0,NULL,NULL,NULL,1,0.572549,0.994681,0.152941,0.282051,0.412121,0.572549,0.994681,0.152941,0.282051,0.412121,0.572549,0.994681,0.152941,0.282051,0.412121);
    insert into out5d Values (25142,19918,0.518311,0.518372,0,NULL,NULL,NULL,1,0.576471,1,0.160784,0.282051,0.412121,0.576471,1,0.160784,0.282051,0.412121,0.576471,1,0.160784,0.282051,0.412121);
    insert into out5d Values (25143,23411,0.193237,0.193298,0,NULL,NULL,NULL,1,0.211765,1,0.145098,0.196581,0.218182,0.211765,1,0.145098,0.196581,0.218182,0.211765,1,0.145098,0.196581,0.218182);
    insert into out5d Values (25144,23411,0.193298,0.193359,0,NULL,NULL,NULL,1,0.219608,1,0.145098,0.196581,0.212121,0.219608,1,0.145098,0.196581,0.212121,0.219608,1,0.145098,0.196581,0.212121);
    insert into out5d Values (25145,23411,0.193359,0.19342,0,NULL,NULL,NULL,1,0.219608,1,0.145098,0.196581,0.206061,0.219608,1,0.145098,0.196581,0.206061,0.219608,1,0.145098,0.196581,0.206061);
    insert into out5d Values (25146,23411,0.19342,0.193481,0,NULL,NULL,NULL,1,0.219608,0.994681,0.152941,0.188034,0.218182,0.219608,0.994681,0.152941,0.188034,0.218182,0.219608,0.994681,0.152941,0.188034,0.218182);
    insert into out5d Values (25147,23411,0.193481,0.193542,0,NULL,NULL,NULL,1,0.219608,1,0.152941,0.188034,0.212121,0.219608,1,0.152941,0.188034,0.212121,0.219608,1,0.152941,0.188034,0.212121);
    insert into out5d Values (25148,10867,0.326904,0.326965,0,NULL,NULL,NULL,1,0.156863,1,0.290196,0.589744,0.612121,0.156863,1,0.290196,0.589744,0.612121,0.156863,1,0.290196,0.589744,0.612121);
    insert into out5d Values (25149,10867,0.326965,0.327026,0,NULL,NULL,NULL,1,0.152941,1,0.298039,0.581197,0.606061,0.152941,1,0.298039,0.581197,0.606061,0.152941,1,0.298039,0.581197,0.606061);
    insert into out5d Values (25150,10867,0.327026,0.327087,0,NULL,NULL,NULL,1,0.152941,1,0.290196,0.598291,0.618182,0.152941,1,0.290196,0.598291,0.618182,0.152941,1,0.290196,0.598291,0.618182);
    insert into out5d Values (25151,10867,0.327087,0.327148,0,NULL,NULL,NULL,1,0.156863,1,0.305882,0.589744,0.612121,0.156863,1,0.305882,0.589744,0.612121,0.156863,1,0.305882,0.589744,0.612121);
    insert into out5d Values (25152,10867,0.327148,0.327209,0,NULL,NULL,NULL,1,0.156863,1,0.298039,0.589744,0.612121,0.156863,1,0.298039,0.589744,0.612121,0.156863,1,0.298039,0.589744,0.612121);
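    Not part of the thread, but a hedged sketch of the two access paths Nishant describes; the window geometry is arbitrary and the corner columns in the second query (xmin/ymin/xmax/ymax on a table rect_corners) are hypothetical, since the posted table does not show them:
    -- R-tree path: primary filter through the spatial index (9i syntax).
    SELECT id
    FROM out5d
    WHERE SDO_FILTER(shape,
                     MDSYS.SDO_GEOMETRY(2003, NULL, NULL,
                         MDSYS.SDO_ELEM_INFO_ARRAY(1, 1003, 3),
                         MDSYS.SDO_ORDINATE_ARRAY(0.3, 0.3, 0.6, 0.6)),
                     'querytype=WINDOW') = 'TRUE';
    -- Plain range path on stored rectangle corners (hypothetical columns).
    SELECT id
    FROM rect_corners
    WHERE xmin <= 0.6 AND xmax >= 0.3
    AND ymin <= 0.6 AND ymax >= 0.3;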

  • A very slow select with sub selects sql statement on Oracle

    Hi,
    I'm moving an application from MySQL to Oracle. The following select was executed very efficiently in MySQL. In Oracle it is as slow as a snail.
    Does anyone have a hint on how to speed it up?
    The slow part is the four sub-selects in the select list. Removing them makes the select about 50 times faster on Oracle.
    Best Regards,
    Stephane
    select
    (select count(*) from relation rr where rr.document_id = d.id and rr.product_id = ? and (rr.relation_type_id = 'link' OR rr.relation_type_id = 'product')) as relationList,
    (select count(*) from relation rr where rr.document_id = d.id and rr.product_id = ? and rr.relation_type_id = 'number') as relationNumber,
    (select count(*) from relation rr where rr.document_id = d.id and rr.product_id = ? and rr.relation_type_id = 'title') as relationTitle,
    (select count(*) from relation rr where rr.document_id = d.id and rr.product_id = ? and rr.relation_type_id = 'content') as relationText,
    d.*
    from document d,(
    select distinct r.document_id id
    from relation r
    where
    r.product_id = ?
    ) dd
    where d.id=dd.id

    You are accessing the relation-table too many times, so a rewrite to a query like this
    SQL> select dept.deptno
      2       , dept.dname
      3       , count(decode(job,'CLERK',1)) clerk
      4       , count(decode(job,'MANAGER',1)) manager
      5       , count(decode(job,'SALESMAN',1)) salesman
      6    from dept, emp
      7   where dept.deptno = emp.deptno (+)
      8   group by dept.deptno
      9       , dept.dname
    10  /
        DEPTNO DNAME               CLERK    MANAGER   SALESMAN
            10 ACCOUNTING              1          1          0
            20 RESEARCH                2          1          0
            30 SALES                   1          1          4
            40 OPERATIONS              0          0          0
    4 rows selected.
    will be worth the effort.
    If still not satisfied, you have to do some investigation, as described [url http://forums.oracle.com/forums/thread.jspa?threadID=501834&tstart=0]here
    Regards,
    Rob.
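    A hedged sketch, not from Rob's reply: the same single-pass idea applied to the posted query, keeping the column names from the original statement and using a bind variable in place of the ? placeholders:
    SELECT d.*, dd.relationList, dd.relationNumber, dd.relationTitle, dd.relationText
    FROM document d,
         (SELECT r.document_id,
                 COUNT(CASE WHEN r.relation_type_id IN ('link', 'product') THEN 1 END) AS relationList,
                 COUNT(CASE WHEN r.relation_type_id = 'number' THEN 1 END) AS relationNumber,
                 COUNT(CASE WHEN r.relation_type_id = 'title' THEN 1 END) AS relationTitle,
                 COUNT(CASE WHEN r.relation_type_id = 'content' THEN 1 END) AS relationText
          FROM relation r
          WHERE r.product_id = :product_id
          GROUP BY r.document_id) dd
    WHERE d.id = dd.document_id;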

  • Very slow insert and query

    Dear Professionals:
    The inserts and queries have become very slow and sometimes hang. It wasn't like this before. The network people added 100 PCs to the network; most of them do not use the database, only the internet, and we are all on the same network. Can this slow down the database (Oracle 9.2.0.1.0, OS Windows 2000 Server)? And how can I tell whether the slowness comes from the network and not from the queries or inserts?
    Ahmed.

    Hi,
    >>Network people added 100 PCs to the network .. most of them not using the database, only the internet. Can this slow down the database (Oracle 9.2.0.1.0, OS W2K Server)?
    Maybe yes, maybe not, maybe a network performance problem ...
    If you try to execute these DMLs (inserts, ...) and queries directly on the server, what happens?
    Cheers

  • Update process very slow in Oracle 8 when updating bulk data

    Dear all
    I am just updating data through a SQL sub-query, but I want to get two columns from the sub-query to update my source table. The problem is that the sub-query returns just a single column while updating, and I don't want to re-type another query for the other column because of the performance cost.
    Also, the other issue is that performance is very slow; how can I make the bulk update fast?
    Please suggest,
    Thanks

    Actually, I am updating a time roster table with machine data. First I read the data from a file and insert it into Machine_table, and then
    I run a join query to update the roster table, as shown below.
    The roster table contains data from day 1 to the last day of the month for every employee.
    update roster a
    set (a.timein,a.timeout) = (select timein,timeout from machine_table mch
    where a.roster_date = mch.roster_date and a.person_id = mch.person_id);
    This query updates around 7,750 rows and it takes too much time.
    Please help, it is urgent. Thanks.
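    A hedged sketch, not from the thread: the statement above touches every roster row and sets timein/timeout to NULL where no machine record matches, so the usual first step is to restrict the update to matching rows (with indexes on the join columns roster_date and person_id):
    update roster a
    set (a.timein, a.timeout) = (select mch.timein, mch.timeout
                                 from machine_table mch
                                 where a.roster_date = mch.roster_date
                                 and a.person_id = mch.person_id)
    where exists (select 1
                  from machine_table mch
                  where a.roster_date = mch.roster_date
                  and a.person_id = mch.person_id);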

  • Splitter operator doesn't use multi-table inserts in OWB... very, very urgent

    Hi,
    I am using OWB 9i to carry out transformations. I want to get the same sequence numbers into the two target tables.
    Scenario:
    I have a source table, source_table, which is connected to a splitter, and the splitter is used to load the records into two target tables, namely target1_table and target2_table. I have a sequence which is also an input to the splitter, so that I can have the same sequence number in the two output groups of the splitter. I then map the sequence number from the two output groups to the two target tables, expecting to have the same sequence number in both target tables. But when I look at the generated code, it creates two procedures and effectively inserts sequence numbers into the target tables that are not consistent. Please help me get the same, consistent sequence numbers into both target tables.
    The above works in row-based operating mode but not in set-based mode. Please give me a valid explanation.
    The OWB PDF says that the splitter uses multi-table inserts for multiple targets. After seeing the generated code for set-based operations, I don't agree with this.
    It's very urgent.
    Thanks a lot in advance.
    -Sharat

    Hi Mark,
    You got me wrong; let me explain the problem again.
    RDBMS oracle 9.2.0.4
    OWB 9.2.0.2.8
    I have three tables T1,T2 and T3.
    T1 is the source table and the remaining two tables T2 and T3 are target tables.
    Following are the contents of table T1 -
    SQl>select * from T1;
    DEPTNAME LOCATION
    COMP PUNE
    MECH BOMBAY
    ELEC A.P
    Now I want to populate the two destination tables T2 and T3 with the records in T1.
    For this I am using the splitter operator in OWB, which is supposed to generate multi-table inserts, but unfortunately it is not doing so when I generate the SQL. There is no "insert all" command in the SQL it generates.
    What I want is, when I populate T2 and T3 I use a sequence generator and I want the same sequences for T2 and T3 eg.
    SQl>select * from T2;
    NEXT_VAL DEPTNAME LOCATION
    1 COMP PUNE
    2 MECH BOMBAY
    3 ELEC A.P
    SQl>select * from T3;
    NEXT_VAL DEPTNAME LOCATION
    1 COMP PUNE
    2 MECH BOMBAY
    3 ELEC A.P
    I am able to achieve this when I set the operating mode to ROW BASED. I am not getting the same result when I set the operating mode to SET BASED.
    Help me....
    -Sharat
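    For reference, a hedged sketch (not from the thread) of the multi-table insert shape Sharat expects OWB to generate, assuming a hypothetical sequence MY_SEQ; note that Oracle documents restrictions on sequence use in multitable inserts, so whether both INTO clauses receive the same NEXTVAL per source row should be verified on the target release:
    INSERT ALL
      INTO T2 (next_val, deptname, location) VALUES (my_seq.NEXTVAL, deptname, location)
      INTO T3 (next_val, deptname, location) VALUES (my_seq.NEXTVAL, deptname, location)
    SELECT deptname, location
    FROM T1;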

  • Large SGA issue-- insert data is very slow--Who can help me?

    I set sga_max_size to 10G and db_cache_size to 8G, but the value of db_cache_size shows as a negative number in OEM, and I also found that inserting data was very slow. I checked the OS and found no CPU consumption and no I/O consumption.
    The OS is HP-UX B11.23 ia64.
    Oracle server 9.2.0.7
    Physical memory : 64G
    CPU: 8
    (oracle server and os are all 64-bit).
    If I decrease the SGA to 3G and db_cache_size to 2G, the same data insert is very fast and everything is fine.
    So I guess there may be some OS parameters that need to be set when using large amounts of memory.
    Does anyone know about this issue or have experience using a large SGA on HP-UX?

    Sounds like you might have a configuration issue on the O/S side.
    Check that the kernel parameters are set as recommended in the installation guide.
    The first thing that came to mind after reading the problem description was that you might have too low a SHMMAX for that 10GB SGA, which would cause multiple shared memory segments to be created and thus explain the performance degradation you're experiencing.
    A quick way to check whether that's the case would be to run "ipcs -m" and see if there are multiple shared memory segments when the SGA is set to 10GB.
