Query slow using SDO_GEOM package

Hi,
I have a query that extracts this expression from a table:
SDO_GEOM.sdo_centroid(geometry, 0.5).sdo_point.x
It is very, very slow: about 15 seconds for 300 records.
If I remove this function call from the SELECT it is very fast.
Any ideas on how to tune the spatial package?
Thanks
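
For reference, here is a sketch of what the slow statement presumably looks like, reconstructed from the centroid expression above and the fast query shown further down. The extra column alias and the name of the geometry column exposed by the view are assumptions; only the fast variant's SQL text appears in the trace.

SELECT ID2 AS ID_COLUMN,
       '' AS SYMBOL,
       ICONCODE AS IMAGECODE_COLUMN,
       ID_UNICO AS TOOLTIP_COLUMN,
       SDO_GEOM.sdo_centroid(geometry, 0.5).sdo_point.x AS X_CENTROID,
       1121 AS QUERYID,
       '0' AS HASACTIONS,
       1 AS ALIGNMENT
  FROM IDT_SETPRIMARIO.V_CAA_XY
 WHERE X > 1567630
   AND X < 1872370
   AND Y > 4998933
   AND Y < 5139066
   AND ICONCODE IS NOT NULL
 ORDER BY ICONCODE;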

With the column in the select:
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.02       0.05          0        606          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       14      1.31       5.32          0         36          0         184
total       16      1.33       5.37          0        642          0         184
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 119
Rows     Row Source Operation
    184  TABLE ACCESS FULL CAA_PA (cr=129 pr=0 pw=0 time=2880603 us cost=7 size=219 card=1)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      14        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  SQL*Net message from client                    14       24.76         24.80
********************************************************************************

Without the package in the select:
SELECT ID2 AS ID_COLUMN,
         '' AS SYMBOL,
         ICONCODE AS IMAGECODE_COLUMN,
         ID_UNICO AS TOOLTIP_COLUMN,
         1121 AS QUERYID,
         '0' AS HASACTIONS,
         1 AS ALIGNMENT
    FROM IDT_SETPRIMARIO.V_CAA_XY
   WHERE     X > 1567630
         AND X < 1872370
         AND Y > 4998933
         AND Y < 5139066
         AND ICONCODE IS NOT NULL
ORDER BY ICONCODE
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.01       0.02          0        370          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       14      0.83       1.82          0        121          0         184
total       16      0.84       1.85          0        491          0         184
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 119
Rows     Row Source Operation
    184  TABLE ACCESS FULL CAA_PA (cr=121 pr=0 pw=0 time=1473516 us cost=7 size=219 card=1)
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      14        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  SQL*Net message from client                    14       20.27         20.30
********************************************************************************

Similar Messages

  • Query objectGUID using dbms_ldap package

    Hi
    I've managed to retrieve the objectGUID from Active Directory using the DBMS_LDAP package.
    It is returned in this format: 8FDD7ACDA0749648B136E0AD6847BD64
    How can I use this value in a filter for dbms_ldap.search_s?
    objectGUID=8FDD7ACDA0749648B136E0AD6847BD64 does not work,
    I've also tried escaping the value \8F\DD\... and \\8F\\DD\\...
    Anyone know what I need to do?
    Thanks

    I already have the HEX value of the GUID - the problem is the function dbms_ldap.search_s appears to be trying to match the objectGUID which has been converted to ascii.
    I can do partial matches like objectGUID=cf*
    But that is matching an ascii representation of the GUID: eg. cf¿t¿H¿6¿@¿D
    And nothing can match the upside down question marks.
    There must be another way - or another unique identifier to compare/filter by.
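
    A hedged sketch of the usual approach for binary attribute filters: escape every byte of the GUID as \xx in the filter string passed to DBMS_LDAP.search_s. The host, bind DN, base DN and attribute below are placeholders, and whether the stored hex string is already in on-the-wire byte order (or needs the first three GUID groups byte-swapped) depends on how it was extracted.

    DECLARE
      l_session DBMS_LDAP.session;
      l_attrs   DBMS_LDAP.string_collection;
      l_message DBMS_LDAP.message;
      l_retval  PLS_INTEGER;
      l_guid    VARCHAR2(32) := '8FDD7ACDA0749648B136E0AD6847BD64';
      l_filter  VARCHAR2(200);
    BEGIN
      -- Turn every hex pair of the GUID into a \xx escape for the LDAP filter.
      FOR i IN 0 .. 15 LOOP
        l_filter := l_filter || '\' || SUBSTR(l_guid, i * 2 + 1, 2);
      END LOOP;
      l_filter := '(objectGUID=' || l_filter || ')';

      l_session  := DBMS_LDAP.init('ad.example.com', 389);
      l_retval   := DBMS_LDAP.simple_bind_s(l_session, 'cn=binduser,dc=example,dc=com', 'password');
      l_attrs(1) := 'sAMAccountName';
      l_retval   := DBMS_LDAP.search_s(ld       => l_session,
                                       base     => 'dc=example,dc=com',
                                       scope    => DBMS_LDAP.SCOPE_SUBTREE,
                                       filter   => l_filter,
                                       attrs    => l_attrs,
                                       attronly => 0,
                                       res      => l_message);
      l_retval   := DBMS_LDAP.unbind_s(l_session);
    END;
    /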

  • MS SQL query slow using view column as criteria

    HI,
    I am experiencing a very frustrating problem. I have 2 tables, and I created a view
    to union these 2 tables. When I do a select on this view using a column of the
    view as criteria, it takes more than 1 minute, but the query runs fine in Query Analyzer.
    Has anybody had the same experience? Is this a problem with jdbc?

    I searched http://e-docs.bea.com/wls/docs70/index.html, and also searched the documentation
    for wls6.1 and wls5.1. As you pointed out, I searched the support site; they are all customer
    cases, not formal documentation.
    Joe Weinstein <[email protected]> wrote:
    jen wrote:
    Thanks, but a search on the table itself (the same column) is fine. Is there any db setting
    that could be tuned? So the view is the problem?
    No, it's a client decision/issue. If you defined your tables to have nvarchar columns,
    the jdbc driver's parameter values would be fine as-is.
    I searched "useVarChars" on the whole site and can't find anything.
    Which site? This is a property of the weblogic.jdbc.mssqlserver4.Driver.
    I just went to www.bea.com/support and entered useVarChars in the Ask BEA
    question panel and got hits...
    Joe
    Joe Weinstein <[email protected]> wrote:
    Jen wrote:
    Sorry, it's my bad. I am testing on wls81, but the problem is on wls70, so they
    are using different drivers.
    You are the magic man. It worked on wls81 now. I am sure it will cure the problem
    on wls70. Is there any documentation on this? Why is it not a problem on some
    database servers? Thanks
    Sure. The issue has to do with the MS client-DBMS protocol having evolved from
    an 8-bit (7-bit really) character representation. Now it has a newer 16-bit way
    to transfer NVARCHAR data. Java characters are all 16-bit, so by default a JDBC
    driver will send Java parameters over as NVARCHAR data. This is crucial for
    Japanese data etc. However, once simple ASCII data is transformed to an NVARCHAR
    format, the DBMS can't directly compare it to varchar data, or use it in index
    searches. The DBMS has to convert the VARCHAR data to NVARCHAR, but it can't
    guarantee that the converted data will retain the same ordering as the index,
    so the DBMS has to do a table scan!
    The properties I suggested are each driver's way of allowing you to say "I'm a
    simple American ;) I am using simple varchar data so please send my java strings
    that way." This allows the DBMS to use your varchar indexes in queries.
    Joe Weinstein at BEA
    Joe Weinstein <[email protected]> wrote:
    Jen wrote:
    It doesn't cure the problem. Here is my pool:
    <JDBCConnectionPool DriverName="weblogic.jdbc.sqlserver.SQLServerDriver" Name="test_pool"
    Password="{3DES}fKSovViFe5kHzl/vTs0LVQ==" Properties="user=user;PortNumber=1543;useVarChars=true;ServerName=194.20.2.10;DatabaseName=devDB"
    Targets="admin" TestTableName="SQL SELECT COUNT(*) FROM sysobjects" URL="jdbc:bea:sqlserver://194.20.2.10:1543"/>
    The strange thing is that some databases are fine.
    Oh, sorry. I thought it was the older weblogic driver. Change useVarChars=true to
    sendStringParametersAsUnicode=false and let me know... Also, I suggest changing the
    TestTableName to "SQL select 1". For MS, that will be much more efficient than
    involving a full count of sysobjects!
    Joe
    Joe Weinstein <[email protected]> wrote:
    Jen wrote:
    You are right.
    Tadaa! Am I Kreskin, or what? ;) Here's what I recommend:
    In your pool definition, for this driver add a driver property:
    useVarChars=true
    and let me know if it's all better.
    Joe
    I am using the weblogic jdbc driver weblogic.jdbc.mssqlserver4.Driver.
    Here is the code:
    getData(Connection connection, String stmt, ArrayList arguments)
        PreparedStatement pStatement = null;
        ResultSet resultSet = null;
        try {
            pStatement = connection.prepareStatement(stmt);
            for (int i = 1; i <= arguments.size(); i++) {
                pStatement.setString(i, (String) arguments.get(i - 1));
            }
            resultSet = pStatement.executeQuery(); // this statement takes more than 1 min.
    Joe Weinstein <[email protected]> wrote:
    Jen wrote:
    HI,
    I am experiencing a very frustrating problem. I have 2 tables, and I created a view
    to union these 2 tables. When I do a select on this view using a column of the
    view as criteria, it takes more than 1 minute, but the query runs fine in Query Analyzer.
    Has anybody had the same experience? Is this a problem with jdbc?
    I have suspicions... Show me the jdbc code. I'm guessing it's a PreparedStatement,
    and you send the search criterion column value as a parameter you set with a
    setString().... Let me know... (also let me know which jdbc driver you're using).
    Joe

  • Using a packaged function in a query

    Hi ,
    I have read the following statement (the correct option for the question) in the 1z0-147 questions.
    The question is: WHEN USING A PACKAGED FUNCTION IN A QUERY, which of the following is true?
    "The packaged function cannot execute DML statements against the table that is being queried."
    I tried the following to understand the above statement:
    create or replace package test_pk is
      function fn_test(i number) return number;
    end;
    /
    create or replace package body test_pk is
      function fn_test(i number) return number is
        j number := 1;
      begin
        insert into a values(200, 300);
        commit;
        return j;
      end;
    end test_pk;
    /
    Upon executing the above function in a select statement
    select TEST_PK.FN_TEST(1) from dual
    I got an error saying that DML cannot be performed inside a SELECT.
    Is this example of the above statement TRUE?
    Could you please explain to me if my example is wrong.
    Thanks
    Edited by: Smile on Nov 22, 2012 9:47 AM

    No, it is not true. Any function used in SQL is not allowed to perform a DML operation other than SELECT, regardless of whether it is against the table that is being queried or not. SELECT is allowed against any table. And the function in your example does an INSERT, which is not allowed. Also, any function used in SQL is not allowed to perform commit/rollback/savepoint.
    SY.
    Edit: unless function is autonomous transaction.
    Edited by: Solomon Yakobson on Nov 22, 2012 10:22 AM
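
    As a hedged illustration of the autonomous-transaction exception mentioned above (reusing table A and the package from the question): declaring the function as an autonomous transaction lets the INSERT and COMMIT run in their own transaction, so calling it from a SELECT no longer raises the DML-inside-query error.

    create or replace package body test_pk is
      function fn_test(i number) return number is
        pragma autonomous_transaction;   -- the DML below runs in a separate transaction
        j number := 1;
      begin
        insert into a values(200, 300);
        commit;                          -- required before leaving an autonomous unit
        return j;
      end;
    end test_pk;
    /
    select test_pk.fn_test(1) from dual; -- now allowed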

  • Sql query slowness due to rank and columns with null values:

        
    I have the following table in database with around 10 millions records:
    Declaration:
    create table PropertyOwners (
        [Key] int not null primary key,
        PropertyKey int not null,
        BoughtDate DateTime,
        OwnerKey int null,
        GroupKey int null
    )
    go
    [Key] is primary key and combination of PropertyKey, BoughtDate, OwnerKey and GroupKey is unique.
    With the following index:
    CREATE NONCLUSTERED INDEX [IX_PropertyOwners] ON [dbo].[PropertyOwners]
    (
        [PropertyKey] ASC,
        [BoughtDate] DESC,
        [OwnerKey] DESC,
        [GroupKey] DESC
    )
    go
    Description of the case:
    For a single BoughtDate one property can belong to multiple owners or a single group; for a single record there can be either an OwnerKey or a GroupKey but not both, so one of them will be null for each record. I am trying to retrieve the data from the table using the
    following query for the OwnerKey. If there are rows for the same property for owners and a group at the same time, then the rows having OwnerKey will be preferred; that is why I am using "OwnerKey desc" in the Rank function.
    declare @ownerKey int = 40000   
    select PropertyKey, BoughtDate, OwnerKey, GroupKey   
    from (    
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,       
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]   
    from PropertyOwners   
    ) as result   
    where result.[Rank]=1 and result.[OwnerKey]=@ownerKey
    It is taking 2-3 seconds to get the records, which is too slow, and it takes a similar time when I try to get the records using the GroupKey. But when I tried to get the records for the PropertyKey with the same query, it executed in 10 milliseconds.
    Maybe the slowness is because OwnerKey/GroupKey in the table can be null and SQL Server is unable to index them. I have also tried to use an indexed view to pre-rank them, but I can't use it in my query as the Rank function is not supported in indexed
    views.
    Please note this table is updated once a day and we are using SQL Server 2008 R2. Any help will be greatly appreciated.

    create table #result (PropertyKey int not null, BoughtDate datetime, OwnerKey int null, GroupKey int null, [Rank] int not null)
    go
    create index idx ON #result(OwnerKey, [Rank])
    go
    insert into #result(PropertyKey, BoughtDate, OwnerKey, GroupKey, [Rank])
    select PropertyKey, BoughtDate, OwnerKey, GroupKey,
    RANK() over (partition by PropertyKey order by BoughtDate desc, OwnerKey desc, GroupKey desc) as [Rank]
    from PropertyOwners
    go
    declare @ownerKey int = 1
    select PropertyKey, BoughtDate, OwnerKey, GroupKey
    from #result as result
    where result.[Rank]=1
    and result.[OwnerKey]=@ownerKey
    go
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Need to generate an Excel file with different sheets using the UTL_FILE package

    Hi,
    Sorry for my previous message, in which I had missed the usage of the " UTL_FILE " package.
    I need to generate an Excel file with different sheets. Currently I am generating the data in three different Excel files using
    the " UTL_FILE " package, and my requirement is to generate this in a single Excel file with different sheets.
    Please help with this.
    Thanks & Regards,
    Krishna Vyavahare

    Hello 10866107,
    at Re: How to save a query result and export it to, say excell? you can find links to different solutions. At least the packages behind the second and fourth links support more than one worksheet.
    Regards
    Marcus
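
    A minimal sketch of the idea behind those linked solutions, assuming a directory object exists (EXPORT_DIR below is a placeholder): write a single SpreadsheetML (XML) file with UTL_FILE containing several Worksheet elements, which Excel opens as separate sheets.

    DECLARE
      f UTL_FILE.file_type;
    BEGIN
      f := UTL_FILE.fopen('EXPORT_DIR', 'report.xml', 'w');
      UTL_FILE.put_line(f, '<?xml version="1.0"?>');
      UTL_FILE.put_line(f, '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"');
      UTL_FILE.put_line(f, '          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">');
      FOR sheet IN 1 .. 3 LOOP
        UTL_FILE.put_line(f, '<Worksheet ss:Name="Sheet' || sheet || '"><Table>');
        -- write one <Row> per data row here, e.g. from a cursor loop
        UTL_FILE.put_line(f, '<Row><Cell><Data ss:Type="String">demo</Data></Cell></Row>');
        UTL_FILE.put_line(f, '</Table></Worksheet>');
      END LOOP;
      UTL_FILE.put_line(f, '</Workbook>');
      UTL_FILE.fclose(f);
    END;
    /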

  • How could I Encrypt the data of SDO_GEOMETRY type using DBMS_CRYPTO package

    Hi:
    I want to encrypt data of the SDO_GEOMETRY object type using the DBMS_CRYPTO package.
    What can I do? I hope someone can help me and give me some suggestions!
    thanks in advance.
    lgs

    Well, the spatial API would not be able to handle this data anymore, so what you are trying to do is convert an SDO_GEOMETRY to some encryptable user type (see http://download-uk.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_crypto.htm#sthref1506) and encrypt that.
    Before using it as an SDO_GEOMETRY again you will have to decrypt it, convert it back, and pass it to the spatial query or function.
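
    A hedged sketch of that conversion path, assuming the serialized geometry fits in a VARCHAR2 (table and column names are placeholders; larger geometries would need the CLOB/BLOB overloads of DBMS_CRYPTO): serialize with SDO_UTIL.TO_WKTGEOMETRY, encrypt the raw bytes, and reverse the steps with SDO_UTIL.FROM_WKTGEOMETRY when the geometry is needed again.

    DECLARE
      l_key  RAW(32) := DBMS_CRYPTO.randombytes(32);   -- AES-256 key; store it somewhere safe
      l_wkt  VARCHAR2(4000);
      l_enc  RAW(32767);
      l_geom SDO_GEOMETRY;
    BEGIN
      -- Serialize the geometry to well-known text, then encrypt the bytes.
      SELECT SDO_UTIL.TO_WKTGEOMETRY(t.geom) INTO l_wkt
        FROM geom_table t
       WHERE t.id = 1;

      l_enc := DBMS_CRYPTO.encrypt(
                 src => UTL_RAW.cast_to_raw(l_wkt),
                 typ => DBMS_CRYPTO.encrypt_aes256 + DBMS_CRYPTO.chain_cbc + DBMS_CRYPTO.pad_pkcs5,
                 key => l_key);

      -- To run a spatial query later: decrypt and rebuild the SDO_GEOMETRY.
      l_geom := SDO_UTIL.FROM_WKTGEOMETRY(
                  UTL_RAW.cast_to_varchar2(
                    DBMS_CRYPTO.decrypt(
                      src => l_enc,
                      typ => DBMS_CRYPTO.encrypt_aes256 + DBMS_CRYPTO.chain_cbc + DBMS_CRYPTO.pad_pkcs5,
                      key => l_key)));
    END;
    /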

  • ORA-02019 while using DBMS_FILE_TRANSFER Package.

    Hi,
    I am trying to transfer the datafiles from a 10.2.0.3 database residing on a file system to an 11gR2 database residing on ASM. Both DBs are on different machines across datacenters.
    I am trying to use Transportable Tablespace to move the data. As a part of it, I am trying to use DBMS_FILE_TRANSFER package to move the 10gR2 files to 11gR2 ASM.
    I am getting the below issue while doing so:
    SQL> exec sys.DBMS_FILE_TRANSFER.GET_FILE('sdir','data01.dbf','tdir','data01.dbf','RECDB');
    BEGIN sys.DBMS_FILE_TRANSFER.GET_FILE('sdir','data01.dbf','tdir','data01.dbf','RECDB'); END;
    ERROR at line 1:
    ORA-02019: connection description for remote database not found
    ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 37
    ORA-06512: at "SYS.DBMS_FILE_TRANSFER", line 132
    ORA-06512: at line 1
    Above SQL was executed from Target Host (11gR2) to GET file from 10gR2. The above directory objects point to correct locations on respective hosts.
    RECDB - it is the DB link which is owned by SYSTEM user and will connect to 10gR2 DB as SYSTEM user.
    The strange thing is that when I query the source 10gR2 DB from the target DB using the DB link, it IS WORKING fine:
    SQL> select name from v$database@RECDB ;
    NAME
    POCREC01
    Elapsed: 00:00:00.15
    SQL> select * from dual@RECDB;
    D
    X
    Elapsed: 00:00:00.12
    I also have TNS entry in target tnsnames.ora as:
    POCREC01 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = pocserver.com)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = pocrec01)
        )
      )
    SQL> create database link RECDB connect to system identified by <password> using 'POCREC01';
    Database link created.
    SQL> select * from all_db_links;
    OWNER    DB_LINK   USERNAME   HOST       CREATED
    -------- --------- ---------- ---------- ---------
    SYSTEM   RECDB     SYSTEM     POCREC01   05-JAN-11
    So, I want help on whether the above issue is a BUG with the package (I checked Metalink and cannot find anything) OR whether I am doing something wrong.
    Thanks

    Can you please use "create public database link " to create the db link as a PUBLIC link and retry.
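
    A hedged sketch of that suggestion, keeping the password placeholder from the original post: because the existing link is private to SYSTEM while the package call runs as SYS, recreating it as a PUBLIC link makes the name 'RECDB' resolvable from any schema.

    CREATE PUBLIC DATABASE LINK recdb
      CONNECT TO system IDENTIFIED BY <password>
      USING 'POCREC01';

    -- Retry the transfer with the same arguments as before.
    BEGIN
      sys.DBMS_FILE_TRANSFER.GET_FILE('sdir', 'data01.dbf', 'tdir', 'data01.dbf', 'RECDB');
    END;
    /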

  • Query slower in stored procedure (after upgrade to 16)

    Hi all,
    We are looking to upgrade our SQL Anywhere 9 database to 16. We thought version 16 was slower at updating tables, but the reason for the slowness is the queries in stored procedures called from the trigger. One of the queries takes 8 times as long as it did in version 9!! When I run the query in Interactive SQL (v16), it runs fast, like in version 9. So the query runs slower in a stored procedure.
    I thought of parameter sniffing, but it's not MS SQL Server. I've no idea why it was good in v9 and bad in v16. Does someone have an idea?
    It's a query with subqueries in the WHERE clause and a view in the FROM clause. The view contains unions. It doesn't look special.
    thanks

    It may be more convenient for you to ask this question on the other forum: SAP SQL Anywhere Forum
    Preamble...
    It sometimes happens that (a) the same query will perform faster or slower in different versions because of different behaviors by the query optimizer. In most cases a minor change is necessary to restore good performance; in some rare cases, the changes are difficult.
    It also sometimes happens that (b) the same query will perform faster or slower under different operating conditions; e.g., client request (ISQL) versus inside a BEGIN block (stored procedure). Again, minor changes are all that's required in most cases, sometimes otherwise.
    Details...
    In order to help, we will need more information. Two graphical plans with statistics for the same problematic query, both using V16, one captured in ISQL and the other captured inside the stored procedure, are the best way to pass on the information; ALL OTHER forms are inadequate, especially prose descriptions and edited snippets of code (that way lies frustration for all parties).
    There are a number of articles about graphical plans on this blog.

  • Sql query slow in new redhat environment

    We just migrated to a new dev environment on Linux RedHat 5, and now the query is very slow. I used TOAD to run the query and it took about 700 milliseconds to finish; however, from any server connection the SQL query takes hours to finish.
    I checked the TOAD monitor; it said I need to increase db_buffer_cache and that the shared pool is too small.
    Also, the three red alerts from TOAD are:
    1. Library cache get hit ratio: dynamic or unsharable SQL
    2. Chained fetch ratio: PCTFREE too low for a table
    3. Parse to execute ratio: high parse to execute ratio
    The app team said it ran really quickly on the old AIX system; however, I ran it on the old system and monitored it in TOAD, and it gave me all the same 5 red alerts on the old system, though it did return query results a lot quicker.
    Here is the parameters in the old system (11gr1 on AIX):
    SQL> show parameter target
    NAME TYPE VALUE
    -------------------------------- ----------- ------------------------------
    archive_lag_target integer 0
    db_flashback_retention_target integer 1440
    fast_start_io_target integer 0
    fast_start_mttr_target integer 0
    memory_max_target big integer 0
    memory_target big integer 0
    pga_aggregate_target big integer 278928K
    sga_target big integer 0
    SQL> show parameter shared
    NAME TYPE VALUE
    -------------------------------- ----------- ------------------------------
    hi_shared_memory_address integer 0
    max_shared_servers integer
    shared_memory_address integer 0
    shared_pool_reserved_size big integer 31876710
    shared_pool_size big integer 608M
    shared_server_sessions integer
    shared_servers integer 0
    SQL> show parameter db_buffer
    SQL> show parameter buffer
    NAME TYPE VALUE
    -------------------------------- ----------- ------------------------------
    buffer_pool_keep string
    buffer_pool_recycle string
    db_block_buffers integer 0
    log_buffer integer 2048000
    use_indirect_data_buffers boolean FALSE
    SQL>
    In new 11gr2 Linux REDHAT parameter:
    NAME TYPE VALUE
    -------------------------------- ----------- ------------------------------
    archive_lag_target integer 0
    db_flashback_retention_target integer 1440
    fast_start_io_target integer 0
    fast_start_mttr_target integer 0
    memory_max_target big integer 2512M
    memory_target big integer 2512M
    parallel_servers_target integer 192
    pga_aggregate_target big integer 0
    sga_target big integer 1648M
    SQL> show parameter shared
    NAME TYPE VALUE
    -------------------------------- ----------- ------------------------------
    hi_shared_memory_address integer 0
    max_shared_servers integer
    shared_memory_address integer 0
    shared_pool_reserved_size big integer 28M
    shared_pool_size big integer 0
    shared_server_sessions integer
    shared_servers integer 1
    SQL> show parameter buffer
    NAME TYPE VALUE
    -------------------------------- ----------- ------------------------------
    buffer_pool_keep string
    buffer_pool_recycle string
    db_block_buffers integer 0
    log_buffer integer 18857984
    use_indirect_data_buffers boolean FALSE
    SQL>
    Please help. Thanks in advance.

    846422 wrote:
    why need ddl? we have a sql query slow.
    The DDL shows the physical structure of the table and its physical storage characteristics. All relevant to performance tuning.
    As for the SQL query being slow: it is not.
    You have not provided any evidence that it is slow. And no, comparing performance with a totally different system is not a valid baseline for comparison. (Most cars have 4 wheels, a gearbox and a steering wheel - but that does not mean you can compare different cars like a VW Beetle with a VW Porsche.)
    What is slow? What are the biggest wait states for the SQL? What does the execution plan say?
    You have not defined a problem - you identified a symptom called "query is slow". You need to diagnose the condition by determining exactly what the SQL query is doing in the database. (And please, do not use TOAD and similar tools in an attempt to do this - do it properly instead.)
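
    One hedged way to gather the evidence the reply asks for (the actual plan and the biggest wait events) from inside the database rather than from TOAD: run the statement once with row-source statistics enabled, then pull the plan for the cursor just executed and look at the session's wait events.

    ALTER SESSION SET statistics_level = ALL;
    -- run the slow query here, then immediately:
    SELECT *
      FROM TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

    -- Biggest wait events for the current session:
    SELECT event, total_waits, time_waited
      FROM v$session_event
     WHERE sid = SYS_CONTEXT('USERENV', 'SID')
     ORDER BY time_waited DESC;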

  • Spatial Query that uses Result of another Spatial Query

    Hi all,
    I am using the SDO_DIFFERENCE operator to return the difference between a query window and an intersection geometry. This is fine and I get an object back that is cast to a STRUCT using the SDO API. I then want to use this (returned) object in another SQL query that uses the SDO_RELATE operator. The problem is that I don't know what type it needs to be so that the SDO_RELATE operator will accept it as a valid argument. Has anyone done this before or seen anything like it? Any help will be very much appreciated.
    Keith.

    Thanks for your help,
    I have done the following steps, but I still don't understand why it doesn't work.
    CREATE TABLE val_results (sdo_rowid ROWID, result varchar2(1000));
    CALL SDO_GEOM.VALIDATE_LAYER_WITH_CONTEXT('SHAPES','SHAPE','VAL_RESULTS');
    SELECT * FROM val_results;
    1 null     Rows Processed <3>
    2 AAARDDAAEAAAAA+AAA     13348 [Element <1>] [Ring <1>]
    3 AAARDDAAEAAAAA+AAB     13367 [Element <1>] [Ring <1>]
    SELECT shape_id,SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(c.shape, 0.005)
    FROM shapes c;
    3     TRUE
    2     13348 [Element <1>] [Ring <1>]
    1     13367 [Element <1>] [Ring <1>]
    If my rectangle and polygon are still invalid, can someone tell me why they are invalid? And how can I make them valid?
    Thanks.
    UPDATE:
    I have solved the polygon issue.
    In Spatial Developer's Guide it says:
    Simple polygon whose vertices are connected by straight line
    segments. You must specify a point for each vertex; and the
    last point specified must be exactly the same point as the first
    (within the tolerance value), to close the polygon. For
    example, for a 4-sided polygon, specify 5 points, with point 5
    the same as point 1.
    I have added a point to the polygon which is the same as the first point. Now it can find the point in the polygon.
    INSERT INTO shapes VALUES (4, 'Polygon', SDO_GEOMETRY(2003, NULL, NULL, SDO_ELEM_INFO_ARRAY(1,1003,1), SDO_ORDINATE_ARRAY(306,193,130,441,489,653,88,183,442,354,306,193)));
    But when I run this query again:
    SELECT shape_id,SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(c.shape, 0.005) FROM shapes c;
    4     13349 [Element <1>] [Ring <1>][Edge <4>][Edge <1>]
    3     TRUE
    2     13348 [Element <1>] [Ring <1>]
    1     13367 [Element <1>] [Ring <1>]
    it doesn't say TRUE like it did for the circle??
    Edited by: WhiteScars on Jan 4, 2010 12:45 AM
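
    For what it's worth, ORA-13349 means the polygon's boundary crosses itself: with the vertices in the order given, edge 1 and edge 4 intersect. A hedged sketch of one possible fix is to walk the same five vertices in an order that makes the ring simple and counterclockwise (as Oracle expects for an exterior ring):

    INSERT INTO shapes VALUES (4, 'Polygon',
      SDO_GEOMETRY(2003, NULL, NULL,
        SDO_ELEM_INFO_ARRAY(1, 1003, 1),
        -- same five vertices, ordered so that no two edges cross
        SDO_ORDINATE_ARRAY(88,183, 306,193, 442,354, 489,653, 130,441, 88,183)));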

  • Required to create a script for base table update using XMLSTORE package.

    Hi, can anybody provide me some helpful suggestions on how to update a base table using the XMLSTORE package?
    I created a simple script for the Employee table and am able to do basic operations like insert and update on the table.
    The query is as follows:
    DECLARE
    insCtx DBMS_XMLSTORE.ctxType;
    rows NUMBER;
    xmlDoc CLOB :=
    '<ROWSET>
    <ROW num="1">
    <EMPLOYEE_ID>922</EMPLOYEE_ID>
    <SALARY>1801</SALARY>
    <HIRE_DATE>17-DEC-2007</HIRE_DATE>
    <JOB_ID>ST_CLERK</JOB_ID>
    <EMAIL>RAUSSJACK</EMAIL>
    <LAST_NAME>JACK</LAST_NAME>
    <DEPARTMENT_ID>20</DEPARTMENT_ID>
    </ROW>
    <ROW>
    <EMPLOYEE_ID>923</EMPLOYEE_ID>
    <SALARY>2001</SALARY>
    <HIRE_DATE>31-DEC-2005</HIRE_DATE>
    <JOB_ID>ST_CLERK</JOB_ID>
    <EMAIL>PATHAK</EMAIL>
    <LAST_NAME>PRATIK</LAST_NAME>
    <DEPARTMENT_ID>20</DEPARTMENT_ID>
    </ROW>
    </ROWSET>';
    BEGIN
    insCtx := DBMS_XMLSTORE.newContext('EMPLOYEES'); -- Get saved context
    DBMS_XMLSTORE.clearUpdateColumnList(insCtx); -- Clear the update settings
    -- Set the columns to be updated as a list of values
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'EMPLOYEE_ID');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'SALARY');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'HIRE_DATE');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'JOB_ID');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'EMAIL');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'LAST_NAME');
    DBMS_XMLSTORE.setUpdateColumn(insCtx, 'DEPARTMENT_ID');
    -- Insert the doc.
    rows := DBMS_XMLSTORE.insertXML(insCtx, xmlDoc);
    --COMMIT;
    DBMS_OUTPUT.put_line(rows || ' rows inserted.');
    -- Close the context
    DBMS_XMLSTORE.closeContext(insCtx);
    END;
    SELECT employee_id, LAST_name FROM employees WHERE employee_id = 114;
    DECLARE
    updCtx DBMS_XMLSTORE.ctxType;
    rows NUMBER;
    xmlDoc CLOB :=
    '<ROWSET>
    <ROW>
    <EMPLOYEE_ID>114</EMPLOYEE_ID>
    <LAST_NAME>PRABHU</LAST_NAME>
    </ROW>
    </ROWSET>';
    BEGIN
    updCtx := DBMS_XMLSTORE.newContext('EMPLOYEES'); -- get the context
    DBMS_XMLSTORE.clearUpdateColumnList(updCtx); -- clear update settings
    -- Specify that column employee_id is a "key" to identify the row to update.
    DBMS_XMLSTORE.setKeyColumn(updCtx, 'EMPLOYEE_ID');
    rows := DBMS_XMLSTORE.updateXML(updCtx, xmlDoc); -- update the table
    DBMS_XMLSTORE.closeContext(updCtx); -- close the context
    commit;
    END;
    Now I want a little modification to the above query: instead of passing static XML tags, I want it to pick up the dynamic XML from the web and use XMLSTORE for the update.
    Also, for complex XML having 2-3 levels, how does this query need to be changed? As I am new to this Oracle utility, any help from an expert will be a great help for me.
    Thanks

    Now I want a little modification to the above query: instead of passing static XML tags, I want it to pick up the dynamic XML from the web
    From a Web Service?
    You'll need UTL_HTTP or the HttpUriType interface to send the request and receive the XML response.
    Search the forum, there are already a lot of useful examples available.
    and also for complex XML having 2-3 levels how this query needs to be changed
    DBMS_XMLStore is OK for readily processing a canonical XML format (/ROWSET/ROW/COLUMN structure or alike).
    However, if you have to deal with a more complex structure, you either have to:
    - use a target object table that matches the XML structure
    - preprocess the input document using XSLT to transform it to canonical format
    That's why DBMS_XMLStore is not appropriate for multilevel documents, especially if they contain nested repeating groups.
    In this case, XMLTable is a more flexible way of parsing the XML and processing it relationally at the same time.
    Depending on the size of the document, performance may be improved with schema-based object-relational storage.
    For more help, please post a new thread in the {forum:id=34} forum, with the following information:
    - database version (select * from v$version)
    - a sample XML document (the complex one)
    - DDL of your target table
    - mapping between XML elements and columns (i.e. which tag goes to which column?)
    - an XML schema (if you have one)
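
    As a hedged sketch of the fetch-then-store idea (the URL is a placeholder, the endpoint is assumed to return the same canonical ROWSET/ROW document used above, and on 11g a network ACL must allow the database to reach the host):

    DECLARE
      updCtx DBMS_XMLSTORE.ctxType;
      xmlDoc CLOB;
      rows   NUMBER;
    BEGIN
      -- Fetch the XML document over HTTP instead of hard-coding it.
      xmlDoc := HTTPURITYPE('http://your-server/employees.xml').getclob();

      updCtx := DBMS_XMLSTORE.newContext('EMPLOYEES');
      DBMS_XMLSTORE.setKeyColumn(updCtx, 'EMPLOYEE_ID');
      rows := DBMS_XMLSTORE.updateXML(updCtx, xmlDoc);
      DBMS_XMLSTORE.closeContext(updCtx);
      DBMS_OUTPUT.put_line(rows || ' rows updated.');
      COMMIT;
    END;
    /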

  • Query slows down when a where clause is added

    I have a procedure that has a performance issue, so I copied some of the queries and ran them in SQL*Plus to try to spot which join causes the problem, but I got a result which I can't figure out. I have a query like the one below:
    Select Count(a.ID) From TableA a
    -- INNER JOIN other tables
    WHERE a.TypeID = 2;
    TableA has 140000 records. When the where clause is not added, the count returns quite quickly, but if I add the where clause, then the query slows down and seems to never return, so I have to kill my SQL*Plus session. TableA has an index on TypeID and TypeID is a number type. When TableA had 3000 records, the procedure returned very quickly, but it slows down and hangs when TableA contains 140000 records. Any idea why this slows down the query?
    Also, TypeID is a foreign key to another table (TableAType), so the query above can be written as:
    Select Count(a.ID) From TableA a
    -- INNER JOIN other tables
    INNER JOIN TableAType atype ON a.TypeID = atype.ID
    WHERE atype.Name = 'typename';
    The TableAType table is a small table that contains fewer than 100 records; in this case, would the second query be more efficient than the first query?
    Any suggestions are welcome, thanks in advance...
    Message was edited by:
    user500168

    TableA now has 230000 records and 28000 of them have TypeID 2.
    I haven't used the hint yet, but thank you for your reply, which led me to run a query to check how many records in TableA have TypeID 2. When I do this, it seems pretty fast. So I began with the select count for TableA only and gradually added tables to the join, and the query seems pretty fast as long as TableA is the first table to select from.
    Before, in my query, TableA was the second table to join from; there was another table (which is large as well, but not as large as TableA) before TableA. So I think this is why it ran slowly before. I was not at work yesterday, so the query given in my post is based on my rough memory, and I forgot to mention that another table is joined before TableA; really sorry about that.
    I think I learned a lesson here: the largest table needs to be at the beginning of the select statement...
    Thank you very much everyone.

  • Internet very slow using Linksys WRT54GL

    Hi, I have a Linksys WRT54GL router; in my office I have a PC, phones, and cameras connected to the router.
    Lately the whole setup is very slow and I think the problem is in the router. I rebooted it several times and it didn't help.
    I see memory used: 90%
    Can this be the problem?
    Please help me with what to do, I can't work like this.
    thx

    Where did you see the memory usage? The firmware on the router might not be updated so it is best that you do an upgrade. Make sure to reset and reconfigure the router after that.

  • Error while sending mail using the UTL_MAIL package in Oracle 10g

    Hi,
    We are using the UTL_MAIL package to send mail from Oracle 10g. We have followed the following steps ...
    SQL> connect sys/password as sysdba
    Connected.
    SQL> @$ORACLE_HOME/rdbms/admin/utlmail.sql
    Package created.
    Synonym created.
    SQL> @$ORACLE_HOME /rdbms/admin/prvtmail.plb
    Package body created.
    SQL > alter system set smtp_out_server = '<mail_server_ip:25>' scope =spfile;
    System altered..
    Now we try the code
    begin
      utl_mail.send(
        sender     => 'sender''s mail',
        recipients => 'receiver mail',
        cc         => 'optional',
        subject    => 'Testing utl_mail',
        message    => 'Test Mail');
    end;
    But we get the following error...
    ERROR at line 1:
    ORA-29278: SMTP transient error: 421 Service not available
    ORA-06512: at "SYS.UTL_SMTP", line 21
    ORA-06512: at "SYS.UTL_SMTP", line 97
    ORA-06512: at "SYS.UTL_SMTP", line 139
    ORA-06512: at "SYS.UTL_MAIL", line 405
    ORA-06512: at "SYS.UTL_MAIL", line 594
    ORA-06512: at line 2
    We also tried connecting to the mail server through telnet, but it is not getting connected.
    Please help us to solve the issue.

    From your own posting you may have the clue: if you try to access your mail server through telnet and it is not successful, it means the service is down or there are networking issues.
    On pre-10gR2 versions there was a bug, 4083461.8. It could affect you if you are on 10gR1:
    "Bug 4083461 - UTL_SMTP.OPEN_CONNECTION in shared server fails with ORA-29278 Doc ID:      Note:4083461.8"
    This was fixed on 10gR2 base and on 9.2.0.8.0
    ~ Madrid
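
    A hedged way to check the same thing from inside the database (the host below is a placeholder for whatever smtp_out_server points at): open a raw TCP connection to port 25 and read the banner; if this fails with a network error, the problem is connectivity rather than UTL_MAIL.

    DECLARE
      c UTL_TCP.connection;
    BEGIN
      c := UTL_TCP.open_connection(remote_host => 'mail_server_ip',
                                   remote_port => 25,
                                   tx_timeout  => 10);
      DBMS_OUTPUT.put_line(UTL_TCP.get_line(c, TRUE));  -- expect a "220 ..." SMTP banner
      UTL_TCP.close_connection(c);
    END;
    /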
