Regex help for SQL update statement

Hello,
I need help from an IPS regex guru. I am trying to build a signature that detects SQL UPDATE statements in HTTP requests.
1) Is the regex below, specified as the Request-Regex, correct?
[Uu][Pp][Dd][Aa][Tt][Ee]([%]20|[+])[\x20-\x7e]+[Ss][Ee][Tt]([%]20|[+])[\x20-\x7e]+=
2) How do I make sure it detects 'Update' only in the URI and Arguments, and not in the body of the entire webserver response (which currently appears to be the case)?

1) It looks correct to me
2) Typically, the "service HTTP" engine is used to inspect requests and the "TCP string" engine is used to inspect HTTP server responses. If you only want to inspect requests, use the service HTTP engine.
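For illustration only (hypothetical path and parameter name, not from the original post), a request such as
GET /search.asp?q=update%20users%20set%20password=x HTTP/1.1
would match the pattern above, since the spaces are encoded as %20 or + and a literal '=' follows the SET clause. The same bytes appearing in a server response body would also match under the TCP string engine, which is why the service HTTP engine keeps the signature on requests only.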

Similar Messages

  • SQL UPDATE Statement Explanation (urgent please...)

    Hi,
    Could anyone explain for me what this statement will do:
    update KPIR KR
    set ( CA ) = (
    select sum(sl.RRC_CONN_STP_ATT) CA
    from PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
    where sl.GLOBAL_ID = um.CO_GID
    group by sl.STARTTIME, um.OBJECT_INSTANCE
    I could not understand how the mapping between the two tables works (the "um" alias is not used in the select list; it appears only in the "where" and "group by" clauses). What does this mean?
    The actual problem is that the above SQL statement used to take only around 3 minutes to run, but after we upgraded from Oracle 8 to Oracle 9 it takes around 3 hours.
    Does anyone have any idea what the problem is?

    Below are the table layouts and sample data:
    KPIR table:
    RNC_ID (str), PST ('YYYYMMDD_HH24'), CA (Number)
    1, 20061105_13, 0
    1, 20061105_14, 0
    1, 20061105_15, 0
    2, 20061105_13, 0
    2, 20061105_14, 0
    2, 20061105_15, 0
    3, 20061105_13, 0
    3, 20061105_14, 0
    3, 20061105_15, 0
    4, 20061105_13, 0
    4, 20061105_14, 0
    4, 20061105_15, 0
    PV_WCEL_SERVICE_LEVEL table:
    GLOBAL_ID (str), STARTTIME ('YYYYMMDD_HH24'), RRC_CONN_STP_ATT (Number)
    A1, 20061101_00, 9
    A1, 20061101_01, 4
    A1, 20061101_23, 3
    A1, 20061130_23, 4
    A2, 20061101_00, 3
    A2, 20061101_01, 4
    A2, 20061101_23, 1
    A2, 20061130_23, 5
    UTP_MO table:
    RNC_ID (str), GLOBAL_ID (str), OBJECT_INSTANCE (str)
    1, A1, <A1_NAME>
    1, A2, <A2_NAME>
    1, A9, <A9_NAME>
    2, B1, <B1_NAME>
    2, B2, <B2_NAME>
    2, B9, <B9_NAME>
    3, C1, <C1_NAME>
    3, C2, <C2_NAME>
    3, C9, <C9_NAME>
    4, D1, <D1_NAME>
    4, D2, <D2_NAME>
    4, D9, <D9_NAME>
    Now, I want to update the value of CA in table KPIR:
    For instance, for RNC_ID='1' and PST='20061105_13', CA should equal the sum of RRC_CONN_STP_ATT over all GLOBAL_IDs under RNC_ID='1' at STARTTIME '20061105_13'.
    So, will this SQL UPDATE statement do it?
    update KPIR kr
    set ( CA ) = (
    select sum(sl.RRC_CONN_STP_ATT) CA
    from PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
    where sl.GLOBAL_ID = um.GLOBAL_ID and kr.RNC_ID = um.RNC_ID and kr.PST = sl.STARTTIME
    group by sl.STARTTIME, um.OBJECT_INSTANCE
    And if so, why is it taking around 3 hours to run? This started happening after the upgrade from Oracle 8 to 9.
    Really appreciate your help and thanks in advance.
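    As a hedged, untested sketch of the intended statement: close the missing parenthesis, drop the GROUP BY (only one sum per KPIR row is wanted, and grouping can make the scalar subquery return more than one row), and add a filter so rows with no matching measurements keep their current CA instead of being set to NULL:
    update KPIR kr
    set kr.CA = (select sum(sl.RRC_CONN_STP_ATT)
                 from PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
                 where sl.GLOBAL_ID = um.GLOBAL_ID
                   and kr.RNC_ID    = um.RNC_ID
                   and kr.PST       = sl.STARTTIME)
    where exists (select 1
                  from PV_WCEL_SERVICE_LEVEL sl, UTP_MO um
                  where sl.GLOBAL_ID = um.GLOBAL_ID
                    and kr.RNC_ID    = um.RNC_ID
                    and kr.PST       = sl.STARTTIME);
    As for the 3-minute vs 3-hour difference, comparing the execution plans on both versions and checking that statistics are current after the upgrade is the usual first step; nothing in the statement itself forces the slowdown.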

  • Updates using single SQL Update statement.

    Hi Guys,
    I had an interview... and was not able to answer one of the questions.
    Given the emp table, I need to update the salary with the following requirements using one SQL statement:
    1. Add 1000 to the salary of employees whose salary is between 500 and 1000
    2. Add 500 to the salary of employees whose salary is between 1001 and 1500
    Both requirements should be met using only one SQL UPDATE statement. Can someone tell me how to do it?

    update emp
       set salary = case when salary between 500 and 1000  then salary + 1000
                         when salary between 1001 and 1500 then salary + 500
                    end;
    Regards
    Arun
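    One caveat with the answer above: the CASE has no ELSE, so employees outside both ranges would have their salary set to NULL. A hedged variant (untested) that restricts the update instead:
    update emp
       set salary = case when salary between 500 and 1000  then salary + 1000
                         when salary between 1001 and 1500 then salary + 500
                    end
     where salary between 500 and 1500;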

  • Help with this update statement..

    Hi everyone,
    I am trying to update a column in a table. I need to update that column
    with a function that takes the patient_nbr and type_x column values as parameters.
    The table has almost 300,000 records. It is taking a long time to complete,
    almost 60 to 90 minutes.
    Is it usual to take that much time to update that many records?
    I don't know why it is taking this long. Please help with this update statement.
    select get_partner_id(SUBSTR(patient_nbr,1,9),type_x) partner_id from test_load;
    (it is just taking 20 - 30 sec)
    I am sure the problem is not with my function.
    I tried the following update and merge statements. Please correct me if I am wrong
    in the syntax, and give me some suggestions on how I can make the update statement fast.
    update test_load set partner_id = get_partner_id(SUBSTR(patient_nbr,1,9),type_x);
    merge into test_load a
    using (select patient_nbr,type_x from test_load) b
    on (a.patient_nbr = b.patient_nbr)
    when matched
    then
    update
    set a.partner_id = get_partner_id(SUBSTR(b.patient_nbr,1,9),b.type_x);
    There is an index on the patient_nbr column
    and the statistics are gathered on this table.

    Hi Justin,
    As requested, here are the explain plans for my update statements. Please correct me if I am doing anything wrong.
    update test_load set partner_id = get_partner_id(SUBSTR(patient_nbr,1,9),type_x);
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 3793814442"
    "| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |"
    "|   0 | UPDATE STATEMENT   |                  |   274K|  4552K|  1488   (1)| 00:00:18 |"
    "|   1 |  UPDATE            |        TEST_LOAD |       |       |            |          |"
    "|   2 |   TABLE ACCESS FULL|        TEST_LOAD |   274K|  4552K|  1488   (1)| 00:00:18 |"
    merge into test_load a
    using (select patient_nbr,type_x from test_load) b
    on (a.patient_nbr = b.patient_nbr)
    when matched
    then
    update
    set a.partner_id = get_partner_id(SUBSTR(b.patient_nbr,1,9),b.type_x);
    "PLAN_TABLE_OUTPUT"
    "Plan hash value: 1188928691"
    "| Id  | Operation            | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |"
    "|   0 | MERGE STATEMENT      |                  |   274K|  3213K|       |  6660   (1)| 00:01:20 |"
    "|   1 |  MERGE               |        TEST_LOAD |       |       |       |            |          |"
    "|   2 |   VIEW               |                  |       |       |       |            |          |"
    "|*  3 |    HASH JOIN         |                  |   274K|    43M|  7232K|  6660   (1)| 00:01:20 |"
    "|   4 |     TABLE ACCESS FULL|        TEST_LOAD |   274K|  4017K|       |  1482   (1)| 00:00:18 |"
    "|   5 |     TABLE ACCESS FULL|        TEST_LOAD |   274K|    40M|       |  1496   (2)| 00:00:18 |"
    "Predicate Information (identified by operation id):"
    "   3 - access("A"."patient_nbr"="patient_nbr")"Please give some suggestions..
    what's the best approach for doing the updates for huge tables?
    Thanks
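    One commonly suggested option (a sketch, untested against this schema) is to let Oracle cache repeated function results via scalar subquery caching, which helps when many rows share the same (SUBSTR(patient_nbr,1,9), type_x) combination:
    update test_load
    set partner_id = (select get_partner_id(SUBSTR(patient_nbr,1,9), type_x) from dual);
    Declaring get_partner_id as DETERMINISTIC (if it truly is) gives the optimizer a similar hint. The MERGE shown above reads TEST_LOAD twice only to touch the same rows, so it is unlikely to be faster than the plain UPDATE here.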

  • Running SQL stored within a varchar2 for an update statement

    I have a table that stores option settings for different programs that run in our system. It just has an id (number), name (varchar2(4000)) and value (varchar2(4000)). I will have one record (id=1) that stores sql code to generate a date that is converted into a character format, something like "to_char(sysdate-60,'YYYY-MM-DD HH24:MI:SS')", and I have two records that store dates (id=2 and 3) as text.
    I want to run something like "update options set value=(select value from options where id=1) where id=2;", but I know that will store my sql code ('to_char(sysdate-60,''YYYY-MM-DD HH24:MI:SS'')'), and not a text-formatted date ('2009-04-16 13:04:05'). Is there any way within sql that I can use an update statement to store the results of value and not value itself? I generally use C++, so if I had to write a function to accomplish this, I would probably place it in the C++ program instead of a pl/sql function.

    "triggers...<shivers>
    Are you allowed to change database-side functionality?"
    For the most part, yes. We're a small group, and I still do most of the DBA work, despite being a C++ developer and having a part-time DBA. The few triggers we have created are for record id numbers on data that a customer will input through APEX, and I will have to keep those. They're only on inserts, do nothing if the id is populated, and don't exist on this table. The trigger here seems to be something deep within oracle that I cannot change.
    CREATE DATABASE LINK LOOPBACK USING '(description=(address=(protocol=beq)(program=/your/path/to/bin/oracle)))';
    UPDATE options SET VALUE = dbms_xmlgen.getxmltype('SELECT VALUE FROM options@loopback WHERE id = 1').EXTRACT('//text()').getstringval () WHERE id = 2;
    ERROR at line 1:
    ORA-12899: value too large for column "TEST"."OPTIONS"."VALUE" (actual: 65, maximum: 55)
    The "VALUE" column is a varchar(4000). I'm guessing something is too large for one of the oracle functions or there is a problem with a datatype, but I haven't found any answers. Will using extract affect my data in any way, say if my formula has a less-than character for some strange reason? I happened to see a comparison between extract().getstringval() and extractval() at [http://pbarut.blogspot.com/2007/01/oracle-xmltype-exctractvalue-vs-extract.html], which makes it look like EXTRACT('//text()').getstringval() will change the less-than to '&lt'. I can run tests to check once I get this working, but I'm not sure I can switch the procedure to extractval().
    Edited by: jbo5112 on Jun 15, 2009 5:25 PM -- The forum mangled my response when I actually typed the less than character
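    If database-side code is acceptable, a hedged PL/SQL sketch (untested) that evaluates the stored expression with dynamic SQL instead of the loopback/XML workaround, assuming the value stored under id=1 is a single scalar SQL expression:
    declare
      v_expr   options.value%type;
      v_result options.value%type;
    begin
      select value into v_expr from options where id = 1;
      -- evaluate e.g. to_char(sysdate-60,'YYYY-MM-DD HH24:MI:SS') and keep the result
      execute immediate 'select ' || v_expr || ' from dual' into v_result;
      update options set value = v_result where id = 2;
    end;
    /
    The usual caveat applies: the stored expression is executed as SQL, so only trusted users should be able to write to the id=1 row.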

  • Use of ROWID in SQL Update Statement

    Hi All,
    I have an update statement which uses the rowid column to perform the selection on the target table. Given that a rowid represents the physical location of a row on disk we know that this reference can change when various activities are performed on the database/table/row etc...
    Here is an example of the statement I am issuing:
    UPDATE tabA outertab SET col1 = 'Value'
    WHERE EXISTS (SELECT 1 FROM tabA innertab WHERE innertab.ROWID = outertab.ROWID AND ...)
    Obviously the inner query is more complicated and uses other tables etc., but for the purposes of the example we don't need to include the detail.
    My question is: When using rowid in a single SQL statement as a reference from a subquery to the outer statement is there a risk that external activities can change the rowid and those changes be reflected within the database session that my query is executing thus causing an inconsistency between the inner and outer SQL clause?
    In response to the question which will follow this post ("Why don't you just use a PK?"): we would like to avoid maintaining a PK on the table, as we are talking about very large volumes of data and we don't want to have to issue a call to a sequence when inserting if we can avoid it. If, however, there is a risk that this update statement could fail or update the wrong rows, then we may have to use a PK.
    I know there are a lot of threads about this, but I haven't been able to find one answered with any kind of confidence or clarity; any help would be much appreciated.
    Thanks
    Keith
    Edited by: The Turtle on Mar 5, 2009 5:24 AM

    "When using rowid in a single SQL statement as a reference from a subquery to the outer statement is there a risk that external activities can change the rowid and those changes be reflected within the database session that my query is executing thus causing an inconsistency between the inner and outer SQL clause?"
    No, it's safe to use rowid in this type of query. From the docs:
    A row's assigned rowid remains unchanged unless the row is exported and imported using the Import and Export utilities. When you delete a row from a table and then commit the encompassing transaction, the deleted row's associated rowid can be assigned to a row inserted in a subsequent transaction.

  • How to convert the following FORALL Update to direct SQL UPDATE statement

    I have a FORALL loop to update a table. It is taking too long. I want to rewrite the code as a direct UPDATE SQL statement. Also, are there any other tips or hints that can help this proc run faster?
    CURSOR cur_bst_tm IS
    SELECT listagg(tm, ' ') WITHIN GROUP(ORDER BY con_addr_id) best_time,
           con_addr_id
       FROM   (select Trim(Upper(con_addr_id)) con_addr_id,
                       decode(Initcap(start_day),
                                      'Monday', 'm',
                                    'Tuesday', 'tu',
                                    'Wednesday', 'w',
                                    'Thursday', 'th',
                                    'Friday', 'f',
                                     Initcap(start_day))
                      ||'='
                      ||trunc(( ( TO_DATE(start_tm,'HH12:MI:SS PM') - trunc(TO_DATE(start_tm,'HH12:MI:SS PM')) ) * 24 * 60 ))
                      ||','
                      ||trunc(( ( TO_DATE(end_tm,'HH12:MI:SS PM') - trunc(TO_DATE(end_tm,'HH12:MI:SS PM')) ) * 24 * 60 )) tm
               FROM   (SELECT DISTINCT * FROM ODS_IDL_EDGE_OFFC_BST_TM)
                 WHERE con_addr_id is not null)
      GROUP  BY con_addr_id
      ORDER BY con_addr_id;
    TYPE ARRAY IS TABLE OF cur_bst_tm%ROWTYPE;
    l_data ARRAY;
    BEGIN
    OPEN cur_bst_tm;
         LOOP
        FETCH cur_bst_tm BULK COLLECT INTO l_data LIMIT 1000;
        FORALL i IN 1..l_data.COUNT
          UPDATE ODS_CONTACTS_ADDR tgt
          SET best_times = l_data(i).best_time,
          ODW_UPD_BY = 'IDL - MASS MARKET',
           ODW_UPD_DT = SYSDATE,
          ODW_UPD_BATCH_ID = '0'
          WHERE Upper(edge_id) = l_data(i).con_addr_id
           AND EXISTS (SELECT 1 FROM ods_idl_edge_cont_xref src
                       WHERE tgt.contacts_odw_id = src.contacts_odw_id
            AND src.pc_flg='Y');
        EXIT WHEN cur_bst_tm%NOTFOUND;
        END LOOP;
      CLOSE cur_bst_tm;
    Record count:
    select count(*) from
    ODS_IDL_EDGE_OFFC_BST_TM;
    140,000
    SELECT count(*)
    FROM ods_idl_edge_cont_xref src
    WHERE src.pc_flg='Y';
    118,000
    SELECT count(*)
    FROM ODS_CONTACTS_ADDR;
    671,925
    Database version 11g.
    Execution Plan for the update:
    | Operation                   | Name                             | Rows  | Bytes | Cost |
    | UPDATE STATEMENT (ALL_ROWS) |                                  |   6 K |       | 8120 |
    |  UPDATE                     | ODW_OWN2.ODS_CONTACTS_ADDR       |       |       |      |
    |   HASH JOIN SEMI            |                                  |   6 K | 256 K | 8120 |
    |    TABLE ACCESS FULL        | ODW_OWN2.ODS_CONTACTS_ADDR       |   6 K | 203 K | 7181 |
    |    TABLE ACCESS FULL        | ODW_OWN2.ODS_IDL_EDGE_CONT_XREF  | 118 K | 922 K |  938 |
    Edited by: user10566312 on May 14, 2012 1:07 AM

    The code tag should be in lower case like this {noformat}{noformat}. Otherwise your code format will be lost.
    Here is an update statement:
    update ods_contacts_addr tgt
    set (best_times, odw_upd_by, odw_upd_dt, odw_upd_batch_id) =
         (
              select best_time, odw_upd_by, odw_upd_dt, odw_upd_batch_id
              from (
                   select listagg(tm, ' ') within group(order by con_addr_id) best_time,
                        'IDL - MASS MARKET' odw_upd_by,
                        sysdate odw_upd_dt,
                        '0' odw_upd_batch_id,
                        con_addr_id
                   from (
                             select Trim(Upper(con_addr_id)) con_addr_id,
                                  decode(Initcap(start_day), 'Monday', 'm', 'Tuesday', 'tu', 'Wednesday', 'w', 'Thursday', 'th', 'Friday', 'f', Initcap(start_day))
                                  ||'='
                                  ||trunc(((TO_DATE(start_tm,'HH12:MI:SS PM')-trunc(TO_DATE(start_tm,'HH12:MI:SS PM')))*24*60))
                                  ||','
                                  ||trunc(((TO_DATE(end_tm,'HH12:MI:SS PM')-trunc(TO_DATE(end_tm,'HH12:MI:SS PM')))*24*60)) tm
                             from (
                                  select distinct * from ods_idl_edge_offc_bst_tm
                             )
                             where con_addr_id is not null
                   )
                   group by con_addr_id
              )
              where upper(tgt.edge_id) = con_addr_id
         )
    where exists
         (
              select 1
              from ods_idl_edge_cont_xref src
              where tgt.contacts_odw_id = src.contacts_odw_id
              and src.pc_flg = 'Y'
         )
    and exists
         (
              select null
              from (
                   select listagg(tm, ' ') within group(order by con_addr_id) best_time,
                        con_addr_id
                   from (
                             select Trim(Upper(con_addr_id)) con_addr_id,
                                  decode(Initcap(start_day), 'Monday', 'm', 'Tuesday', 'tu', 'Wednesday', 'w', 'Thursday', 'th', 'Friday', 'f', Initcap(start_day))
                                  ||'='
                                  ||trunc(((TO_DATE(start_tm,'HH12:MI:SS PM')-trunc(TO_DATE(start_tm,'HH12:MI:SS PM')))*24*60))
                                  ||','
                                  ||trunc(((TO_DATE(end_tm,'HH12:MI:SS PM')-trunc(TO_DATE(end_tm,'HH12:MI:SS PM')))*24*60)) tm
                             from (
                                  select distinct * from ods_idl_edge_offc_bst_tm
                             )
                             where con_addr_id is not null
                   )
                   group by con_addr_id
              )
              where upper(tgt.edge_id) = con_addr_id
         );
    This is untested code. If there is any syntax error, please fix it and try.

  • SQL Update statement taking too long..

    Hi All,
    I have a simple update statement that goes through a table of 95000 rows that is taking too long to update; here are the details:
    Oracle Version: 11.2.0.1 64bit
    OS: Windows 2008 64bit
    desc temp_person;
    Name                                                                                Null?    Type
    PERSON_ID                                                                           NOT NULL NUMBER(10)
    DISTRICT_ID                                                                     NOT NULL NUMBER(10)
    FIRST_NAME                                                                                   VARCHAR2(60)
    MIDDLE_NAME                                                                                  VARCHAR2(60)
    LAST_NAME                                                                                    VARCHAR2(60)
    BIRTH_DATE                                                                                   DATE
    SIN                                                                                          VARCHAR2(11)
    PARTY_ID                                                                                     NUMBER(10)
    ACTIVE_STATUS                                                                       NOT NULL VARCHAR2(1)
    TAXABLE_FLAG                                                                                 VARCHAR2(1)
    CPP_EXEMPT                                                                                   VARCHAR2(1)
    EVENT_ID                                                                            NOT NULL NUMBER(10)
    USER_INFO_ID                                                                                 NUMBER(10)
    TIMESTAMP                                                                           NOT NULL DATE
    CREATE INDEX tmp_rs_PERSON_ED ON temp_person (PERSON_ID,DISTRICT_ID) TABLESPACE D_INDEX;
    Index created.
    ANALYZE INDEX tmp_PERSON_ED COMPUTE STATISTICS;
    Index analyzed.
    explain plan for update temp_person
      2  set first_name = (select trim(f_name)
      3                    from ext_names_csv
      4                               where temp_person.PERSON_ID=ext_names_csv.p_id
      5                               and   temp_person.DISTRICT_ID=ext_names_csv.ed_id);
    Explained.
    @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 3786226716
    | Id  | Operation                   | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT            |                | 82095 |  4649K|  2052K  (4)| 06:50:31 |
    |   1 |  UPDATE                     | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL         | TEMP_PERSON    | 82095 |  4649K|   191   (1)| 00:00:03 |
    |*  3 |   EXTERNAL TABLE ACCESS FULL| EXT_NAMES_CSV  |     1 |   178 |    24   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       3 - filter("EXT_NAMES_CSV"."P_ID"=:B1 AND "EXT_NAMES_CSV"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    19 rows selected.
    By the looks of it the update is going to take 6 hrs!!!
    ext_names_csv is an external table that has the same number of rows as the PERSON table.
    ROHO@rohof> desc ext_names_csv
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    F_NAME                                                                                       VARCHAR2(300)
    L_NAME                                                                                       VARCHAR2(300)
    Can anyone help diagnose this, please?
    Thanks
    Edited by: rsar001 on Feb 11, 2011 9:10 PM

    Thank you all for the great ideas, you have been extremely helpful. Here is what we did to resolve the issue.
    We started with Etbin's idea of creating a regular table from the external table, so that we could index and reference it more easily. We did the following:
    SQL> create table ext_person as select P_ID,ED_ID,trim(F_NAME) fst_name,trim(L_NAME) lst_name from EXT_NAMES_CSV;
    Table created.
    SQL> desc ext_person
    Name                                                                                Null?    Type
    P_ID                                                                                         NUMBER
    ED_ID                                                                                        NUMBER
    FST_NAME                                                                                     VARCHAR2(300)
    LST_NAME                                                                                     VARCHAR2(300)
    SQL> select count(*) from ext_person;
      COUNT(*)
         93383
    SQL> CREATE INDEX EXT_PERSON_ED ON ext_person (P_ID,ED_ID) TABLESPACE D_INDEX;
    Index created.
    SQL> exec dbms_stats.gather_index_stats(ownname=>'APPD', indname=>'EXT_PERSON_ED',partname=> NULL , estimate_percent=> 30 );
    PL/SQL procedure successfully completed.
    We had a look at the plan with the original SQL query that we had:
    SQL> explain plan for update temp_person
      2  set first_name = (select fst_name
      3                    from ext_person
      4                               where temp_person.PERSON_ID=ext_person.p_id
      5                               and   temp_person.DISTRICT_ID=ext_person.ed_id);
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 1236196514
    | Id  | Operation                    | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | UPDATE STATEMENT             |                | 93383 |  1550K|   186K (50)| 00:37:24 |
    |   1 |  UPDATE                      | TEMP_PERSON    |       |       |            |          |
    |   2 |   TABLE ACCESS FULL          | TEMP_PERSON    | 93383 |  1550K|   191   (1)| 00:00:03 |
    |   3 |   TABLE ACCESS BY INDEX ROWID| EXTT_PERSON    |     9 |  1602 |     1   (0)| 00:00:01 |
    |*  4 |    INDEX RANGE SCAN          | EXT_PERSON_ED  |     1 |       |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       4 - access("EXT_PERSON"."P_ID"=:B1 AND "RS_PERSON"."ED_ID"=:B2)
    Note
       - dynamic sampling used for this statement (level=2)
    20 rows selected.
    As you can see, the time has dropped to 37 min (from 6 hrs). Then we decided to change the SQL query and use donisback's suggestion (using MERGE); we explained the plan for the new query and here are the results:
    SQL> explain plan for MERGE INTO temp_person t
      2  USING (SELECT fst_name ,p_id,ed_id
      3  FROM  ext_person) ext
      4  ON (ext.p_id=t.person_id AND ext.ed_id=t.district_id)
      5  WHEN MATCHED THEN
      6  UPDATE set t.first_name=ext.fst_name;
    Explained.
    SQL> @?/rdbms/admin/utlxpls.sql
    PLAN_TABLE_OUTPUT
    Plan hash value: 2192307910
    | Id  | Operation            | Name         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
    |   0 | MERGE STATEMENT      |              | 92307 |    14M|       |  1417   (1)| 00:00:17 |
    |   1 |  MERGE               | TEMP_PERSON  |       |       |       |            |          |
    |   2 |   VIEW               |              |       |       |       |            |          |
    |*  3 |    HASH JOIN         |              | 92307 |    20M|  6384K|  1417   (1)| 00:00:17 |
    |   4 |     TABLE ACCESS FULL| TEMP_PERSON  | 93383 |  5289K|       |   192   (2)| 00:00:03 |
    |   5 |     TABLE ACCESS FULL| EXT_PERSON   | 92307 |    15M|       |    85   (2)| 00:00:02 |
    Predicate Information (identified by operation id):
       3 - access("P_ID"="T"."PERSON_ID" AND "ED_ID"="T"."DISTRICT_ID")
    Note
       - dynamic sampling used for this statement (level=2)
    21 rows selected.
    As you can see, the update now takes 00:00:17 to run (need to say more?) :)
    Thank you all for your ideas that helped us get to the solution.
    Much appreciated.
    Thanks

  • Sql Update Statement (complex)

    Hi ,
    A very good morning to all.
    I am stuck on a problem, and I really hope for a solution.
    I have a dimension table (DIM_CAR) for which I need to write an update statement.
    My table structure is :
    C_ID           Startdate                       EndDate
    7546343    2012-05-29 00:00:00    NULL
    7546343    2012-05-18 00:00:00    2012-05-29 00:00:00
    7546343    2012-05-14 00:00:00    2012-05-18 00:00:00
    7546343    2012-05-10 00:00:00    2012-05-10 00:00:00
    7546343    2012-05-10 00:00:00    2012-05-14 00:00:00
    7546343    2012-03-22 00:00:00    2012-03-22 00:00:00
    7546343    2012-03-15 00:00:00    2012-03-22 00:00:00
    7546343    2012-02-02 00:00:00    2012-03-15 00:00:00
    7546343    2012-01-31 00:00:00    2012-02-02 00:00:00
    Now the scenario is: whenever any update comes in for a car, it closes the previous record and opens a new record (SCD).
    For example, the first update comes on 2012-02-02, so it closes the last record (by putting 2012-02-02 in the end date) and opens a new record by putting 2012-02-02 in the start date. Then an update comes on 2012-03-15, and so on. The last record is for 2012-05-29, which is
    still open as there is no update after that.
    So whenever any update comes, it opens a new record with the start date set to the extract date and the end date NULL, and closes the previous one. Then, after another update, that NULL value is replaced by the new date and another record opens.
    My problem is that if you analyse the dates closely, some records are missing from the above table, which causes discrepancies in the dates.
    E.g. in the 6th row the start date is 2012-03-22 and the end date is also 2012-03-22, but in this case the end date should be 2012-05-10 (which is the next update).
    I need to write a SQL script that updates each end date with the start date of the following update in my table. I have 60,000 C_IDs that are affected.
    Could anyone please help? I know it's difficult, but please give it a try. Please ask if you require more info.
    Abhishek

    Please post DDL, so that people do not have to guess what the keys, constraints, Declarative Referential Integrity, data types, etc. in your schema are. Learn how to follow ISO-11179 data element naming conventions and formatting rules. Temporal data should
    use ISO-8601 formats. Code should be in Standard SQL as much as possible and not local dialect. 
    This is minimal polite behavior on SQL forums. 
    >> I have a dimension table (DIM_CAR) for which I need to write a update statement. My table structure is : <<
    NO! This is a picture of the data; it is not DDL! Is this what you would have posted if you had manners? 
    CREATE TABLE Foobars
    (c_id CHAR(7) NOT NULL, 
     foo_start_date DATE NOT NULL, 
     PRIMARY KEY (c_id, foo_start_date), -- my guess!!
     CHECK (foo_start_date < foo_end_date),-- another guess! 
     foo_end_date DATE);
    INSERT INTO Foobars
    VALUES 
    ('7546343', '2012-05-29', NULL), 
    ('7546343', '2012-05-18', '2012-05-29'), 
    ('7546343', '2012-05-14', '2012-05-18'), 
    ('7546343', '2012-05-10', '2012-05-10'), 
    ('7546343', '2012-05-10', '2012-05-14'), 
    ('7546343', '2012-03-22', '2012-03-22'), 
    ('7546343', '2012-03-15', '2012-03-22'), 
    ('7546343', '2012-02-02', '2012-03-15'), 
    ('7546343', '2012-01-31', '2012-02-02');
    Now the scenario is Whenever any update comes in car, then it closes the previous record [sic: rows are not records! Please learn basic terms] and open a new record [sic](SCD). 
    What is “SCD” and why do you think it is meaningful to us? Here is an article on this topic. 
    Modifying Contiguous Time Periods
    This article explains how to modify contiguous time periods that were described in Joe Celko’s article 'Contiguous Time Periods in SQL'. Joe describes the table itself, which he calls the 'Kuznetsov History Table'. He explains how it is used to store contiguous
    time intervals with constraints to ensure that the date periods really are contiguous. The editor suggested that I give a brief description of how to modify the data in the History table, as this may not be entirely obvious.
    When trusted constraints enforce data integrity, the data is guaranteed to be valid at the end of any statement, even if it is not committed. When we modify contiguous time periods, in order to get from one valid state to another we may need to insert a row
    and update another one, or we may need to delete a row and update another one. This is one of those cases when MERGE really shines – it allows us to get from one valid state to another in one statement, inserting, updating, and deleting rows as needed.
    CREATE TABLE Tasks
    (task_id INT NOT NULL, 
     task_score INTEGER NOT NULL, 
     current_start_date DATE NOT NULL, 
     current_end_date DATE NOT NULL, 
     previous_end_date DATE NULL, 
     CONSTRAINT PK_Tasks_task_id_current_end_date 
     PRIMARY KEY (task_id, current_end_date), 
     CONSTRAINT UNQ_Tasks_task_id_previous_end_date 
      UNIQUE (task_id, previous_end_date), 
     CONSTRAINT FK_Tasks_task_id_previous_end_date 
      FOREIGN KEY (task_id, previous_end_date) 
      REFERENCES Tasks (task_id, current_end_date), 
     CONSTRAINT CHK_Tasks_previous_end_date_NotAfter_current_start_date 
      CHECK (previous_end_date <= current_start_date), 
     CONSTRAINT CHK_Tasks_current_start_date_Before_current_end_date 
      CHECK (current_start_date < current_end_date)
    );
    Some Easy Modifications.
    It is easy to begin a new series of time periods
    INSERT INTO Tasks
      (task_id, task_score, current_start_date, current_end_date, previous_end_date)
    VALUES (1, 100, '2010-10-02', '2010-10-23', NULL),
           (1, 80, '2010-10-23', '2010-11-03', '2010-10-23');  
     It is just as easy to continue adding periods to the end of the series.
    INSERT INTO Tasks
      (task_id, task_score, current_start_date, current_end_date, previous_end_date)
    VALUES(1, 98, '2010-11-20', '2010-11-25', '2010-11-03'),
          (1, 75, '2010-11-26', '2010-11-27', '2010-11-25');
    Deleting one or more rows from the end is just as easy, and we shall skip the example. As we have seen, it is easy to perform typical, the most common operations against history of periods.
    However, some other operations are less easy and need more explanations. Now that we have enough test data, let us move on to more complex examples. Here is the test data at this moment: 
    Adding periods to the beginning.
    Each series of periods has exactly one first period – this is enforced by the following constraint: Unique_task_id_and_previous_end_date
    As a result, when we are inserting one or more periods to the beginning of the series, we have to update the period that used to be the first, as follows:
    MERGE INTO Tasks 
    USING (VALUES (1, 98, '2009-03-01', '2009-03-06', NULL),
                  (1, 100, '2010-10-02', '2010-10-23', '2009-03-06') 
           ) AS Source (task_id, task_score, current_start_date, current_end_date, previous_end_date)
       ON Tasks.task_id = Source.task_id
           AND Tasks.current_start_date = Source.current_start_date
    WHEN MATCHED 
    THEN UPDATE
         SET task_score = Source.task_score, 
             current_start_date = Source.current_start_date, 
             current_end_date = Source.current_end_date, 
             previous_end_date = Source.previous_end_date
    WHEN NOT MATCHED 
    THEN INSERT (task_id, task_score, current_start_date, current_end_date, previous_end_date)
     VALUES (Source.task_id, Source.task_score, Source.current_start_date,
             Source.current_end_date, Source.previous_end_date); 
    Now we will verify that our test data looks as expected, with a new row at the beginning, and previous_end_date column is modified to point to the new row for the row that used to be the first before this modification:
    We are also going to discuss some other scenarios, such as adding/deleting periods in the middle of the series. In all these cases we shall be using MERGE, and the DML looks quite similar, so let us wrap it up in a stored procedure.
    CREATING A STORED PROCEDURE
    The following code implements this merging functionality with a stored procedure that uses a table to hold the new rows:
    CREATE TABLE NewTasks
    (task_id INTEGER NOT NULL, 
    task_score INTEGER NOT NULL, 
    current_start_date DATE NOT NULL, 
    current_end_date DATE NOT NULL, 
    previous_end_date DATE NULL,
    deletion_flg CHAR(1));
    CREATE PROCEDURE MergeNewTasks
    AS BEGIN 
    MERGE INTO Tasks 
    USING (SELECT task_id, task_score, current_start_date, current_end_date, previous_end_date, deletion_flg
             FROM NewTasks) AS Source
       ON Tasks.task_id = Source.task_id
          AND Tasks.current_start_date = Source.current_start_date
    WHEN MATCHED AND deletion_flg = 'Y'
    THEN DELETE
    WHEN MATCHED 
    THEN UPDATE SET
         task_score = Source.task_score, 
         current_start_date = Source.current_start_date, 
         current_end_date = Source.current_end_date, 
         previous_end_date = Source.previous_end_date
    WHEN NOT MATCHED 
    THEN INSERT (task_id, task_score, current_start_date, current_end_date, previous_end_date) 
     VALUES (Source.task_id, Source.task_score, Source.current_start_date, Source.current_end_date, Source.previous_end_date);
    END; 
    Let us use this stored procedure.
    Filling a gap in the middle of the series
    The following code fills the gap on November 25th.
    CREATE TABLE NewTasks (..);
    INSERT INTO NewTasks
      (task_id, task_score, current_start_date, current_end_date, previous_end_date, deletion_flg)
    VALUES (1, 75, '2010-11-25', '2010-11-26', '2010-11-25', 'N'),
     (1, 80, '2010-11-26', '2010-11-27', '2010-11-26', 'N');
    EXEC MergeNewTasks NewTasks = NewTasks;  
    Here is the data after this modification, with a period added in the middle of the series:
    Deleting a period in the middle of the series
    The following code deletes the period added in the previous example.
    CREATE TABLE NewTasks (..);
    INSERT INTO NewTasks
      (task_id, task_score, current_start_date, current_end_date, 
        previous_end_date, deletion_flg)
    VALUES (1, 75, '2010-11-25', '2010-11-26', '2010-11-25', 'Y'),
           (1, 80, '2010-11-26', '2010-11-27', '2010-11-25', 'N');
    EXEC MergeNewTasks;  
    Here is the data after this modification:
    Inserting two periods in the middle, and adjusting an existing period to make room for them. This is the last and most complex example involving our stored procedure:
    CREATE TABLE NewTasks (..);
    INSERT INTO NewTasks
      (task_id, task_score, current_start_date, current_end_date, previous_end_date, deletion_flg)
    VALUES (1, 98, '2010-11-20', '2010-11-22', '2010-11-03', 'N'), 
           (1, 75, '2010-11-22', '2010-11-23', '2010-11-22', 'N'),
           (1, 98, '2010-11-23', '2010-11-25', '2010-11-23', 'N');
    EXEC MergeNewTasks; 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
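    A hedged sketch of the end-date fix itself, written against the guessed Foobars DDL above (untested; the column names are the guesses from that DDL): set each end date to the next start date for the same C_ID, and leave the open record alone by only touching rows that actually have a later start date:
    UPDATE Foobars f
       SET foo_end_date = (SELECT MIN(f2.foo_start_date)
                           FROM Foobars f2
                           WHERE f2.c_id = f.c_id
                             AND f2.foo_start_date > f.foo_start_date)
     WHERE EXISTS (SELECT 1
                   FROM Foobars f3
                   WHERE f3.c_id = f.c_id
                     AND f3.foo_start_date > f.foo_start_date);
    Rows that share the same start date (the two 2012-05-10 rows in the sample) would both receive the same next start date; whether that is the desired treatment of duplicates is something only the poster can decide.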

  • SQL Update statement not executing in Filter

    Hi,
    I am trying to update the Documents table using a query resource in a custom component. The Documents table needs to be updated for the vault file (1 row) and for the weblayout file (1 row). The Java code executes in a PostConversionFilter; the SQL is defined in a query resource.
    The query resource is defined in the component like this:
    MyQResource=UPDATE Documents SET dOriginalName = ?, dFormat = ?, dFileSize = ?, dExtension = ? WHERE (dID = ? AND dIsPrimary = ? AND dIsWebFormat = ?)
    This SQL executes for the vault file (dIsPrimary = 1 AND dIsWebFormat = 0); however, it does not execute for the weblayout file (dIsPrimary = 0 AND dIsWebFormat = 1).
    Here are the final statements formed, taken from the systemdatabase trace:
    For vault file:
    UPDATE Documents SET dOriginalName = 'XYZ009174~1.spdf', dFormat = 'application/vnd.sealedmedia.softseal.pdf', dFileSize = 0, dExtension = 'spdf' WHERE (dID = '9559' AND dIsPrimary = 1 AND dIsWebFormat = 0)
    For weblayout file:
    UPDATE Documents SET dOriginalName = 'XYZ009174~1.spdf', dFormat = 'application/vnd.sealedmedia.softseal.pdf', dFileSize = 0, dExtension = 'spdf' WHERE (dID = '9559' AND dIsPrimary = 0 AND dIsWebFormat = 1)
    There is no error in the logs or traces. I am using UCM 11g. In the Documents table only the vault file row is updated; I am expecting the weblayout row to be updated as well.
    Please help what could be the possible cause?
    Thanks,
    Manoj
    Edited by: Manoj Kumar on 15-May-2012 13:03

    If I understand your question correctly, you state that you see the SQL statement in the systemdatabase trace, but it is not executed in the database? By the way, you should see further details (such as how much time the query took) if you turn on full verbose tracing.
    I'd just go and try to execute the same queries directly in the database (e.g. in sqlplus console). My guess is that there will be no record according to the second WHERE clause...
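    A quick check along the lines the reply suggests, using the dID from the trace above: confirm that a weblayout row actually exists for that document, since an UPDATE whose WHERE clause matches nothing completes without any error.
    SELECT dID, dIsPrimary, dIsWebFormat, dOriginalName
    FROM Documents
    WHERE dID = '9559';
    If only the dIsPrimary = 1 row comes back, the second UPDATE simply has nothing to match, which would explain the behaviour without any fault in the query resource itself.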

  • Need help for SQL SELECT query to fetch XML records from Oracle tables having CLOB field

    Hello,
    I have a scenario wherein I need to fetch records from several Oracle tables having CLOB fields (which hold XML) and then merge them logically to form a hierarchical XML document. All these tables are related by PK-FK relationships. The XML hierarchy has 'OP' as the top-most root node and 'DE' as its bottom-most node, with one-to-many relationships. Hence, each OP can have multiple GMs, each GM can have multiple DMs, and so on.
    Table structures are mentioned below:
    OP:
    Name                             Null                    Type        
    OP_NBR                    NOT NULL      NUMBER(4)    (Primary Key)
    OP_DESC                                        VARCHAR2(50)
    OP_PAYLOD_XML                           CLOB       
    GM:
    Name                          Null                   Type        
    GM_NBR                  NOT NULL       NUMBER(4)    (Primary Key)
    GM_DESC                                       VARCHAR2(40)
    OP_NBR               NOT NULL          NUMBER(4)    (Foreign Key)
    GM_PAYLOD_XML                          CLOB   
    DM:
    Name                          Null                    Type        
    DM_NBR                  NOT NULL         NUMBER(4)    (Primary Key)
    DM_DESC                                         VARCHAR2(40)
    GM_NBR                  NOT NULL         NUMBER(4)    (Foreign Key)
    DM_PAYLOD_XML                            CLOB       
    DE:
    Name                          Null                    Type        
    DE_NBR                     NOT NULL           NUMBER(4)    (Primary Key)
    DE_DESC                   NOT NULL           VARCHAR2(40)
    DM_NBR                    NOT NULL           NUMBER(4)    (Foreign Key)
    DE_PAYLOD_XML                                CLOB    
    +++++++++++++++++++++++++++++++++++++++++++++++++++++
    SELECT
    j.op_nbr||'||'||j.op_desc||'||'||j.op_paylod_xml AS op_paylod_xml,
    i.gm_nbr||'||'||i.gm_desc||'||'||i.gm_paylod_xml AS gm_paylod_xml,
    h.dm_nbr||'||'||h.dm_desc||'||'||h.dm_paylod_xml AS dm_paylod_xml,
    g.de_nbr||'||'||g.de_desc||'||'||g.de_paylod_xml AS de_paylod_xml
    FROM
    DE g, DM h, GM i, OP j
    WHERE
    h.dm_nbr = g.dm_nbr(+) and
    i.gm_nbr = h.gm_nbr(+) and
    j.op_nbr = i.op_nbr(+)
    +++++++++++++++++++++++++++++++++++++++++++++++++++++
    I am using the above SQL SELECT statement to fetch the XML records, and it gives me all the related XML for each entity in a single record (OP, GM, DM, DE). The output of this SQL query is as below:
    Current O/P:
    <resultSet>
         <Record1>
              <OP_PAYLOD_XML1>
              <GM_PAYLOD_XML1>
              <DM_PAYLOD_XML1>
              <DE_PAYLOD_XML1>
         </Record1>
         <Record2>
              <OP_PAYLOD_XML2>
              <GM_PAYLOD_XML2>
              <DM_PAYLOD_XML2>
              <DE_PAYLOD_XML2>
         </Record2>
         <RecordN>
              <OP_PAYLOD_XMLN>
              <GM_PAYLOD_XMLN>
              <DM_PAYLOD_XMLN>
              <DE_PAYLOD_XMLN>
         </RecordN>
    </resultSet>
    Now I want to change my SQL query so that I get the following output structure:
    <resultSet>
         <Record>
              <OP_PAYLOD_XML1>
              <GM_PAYLOD_XML1>
              <GM_PAYLOD_XML2> .......
              <GM_PAYLOD_XMLN>
              <DM_PAYLOD_XML1>
              <DM_PAYLOD_XML2> .......
              <DM_PAYLOD_XMLN>
              <DE_PAYLOD_XML1>
              <DE_PAYLOD_XML2> .......
              <DE_PAYLOD_XMLN>
         </Record>
         <Record>
              <OP_PAYLOD_XML2>
              <GM_PAYLOD_XML1'>
              <GM_PAYLOD_XML2'> .......
              <GM_PAYLOD_XMLN'>
              <DM_PAYLOD_XML1'>
              <DM_PAYLOD_XML2'> .......
              <DM_PAYLOD_XMLN'>
              <DE_PAYLOD_XML1'>
              <DE_PAYLOD_XML2'> .......
              <DE_PAYLOD_XMLN'>
         </Record>
     </resultSet>
    Appreciate your help in this regard!

    Hi,
    A few questions :
    How is your first query supposed to give you an XML output like the one you show?
    Is there something you're not telling us?
    What's the content of, for example, <OP_PAYLOD_XML1> ?
    I don't think it's a good idea to embed the node level in the tag name; it would make more sense to expose that as an attribute.
    What's the db version BTW?
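    A hedged SQL/XML sketch (untested, and assuming each CLOB holds a well-formed XML fragment) that groups all GM, DM and DE payloads under one Record element per OP, which is roughly the second output shape requested:
    SELECT XMLELEMENT("Record",
             XMLTYPE(j.op_paylod_xml),
             (SELECT XMLAGG(XMLTYPE(i.gm_paylod_xml))
                FROM GM i WHERE i.op_nbr = j.op_nbr),
             (SELECT XMLAGG(XMLTYPE(h.dm_paylod_xml))
                FROM DM h JOIN GM i ON i.gm_nbr = h.gm_nbr
               WHERE i.op_nbr = j.op_nbr),
             (SELECT XMLAGG(XMLTYPE(g.de_paylod_xml))
                FROM DE g JOIN DM h ON h.dm_nbr = g.dm_nbr
                          JOIN GM i ON i.gm_nbr = h.gm_nbr
               WHERE i.op_nbr = j.op_nbr)
           ) AS record_xml
    FROM OP j;
    Wrapping the whole result in a resultSet element is one more XMLELEMENT/XMLAGG around this query; whether the node names should carry level suffixes at all is the point the reply above questions.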

  • Sender JDBC: help to construct Update statement

    Hi,
    I need help in writing an update statement for a DB2 database in a sender JDBC adapter.
    The sender adapter picks up only a few records at a time, say 10, which match the SELECT query condition. So I need to update only those selected "n" records and not all the records that match the condition.
    The select looks like:
    Select * from <table> where <condition> fetch first <n> rows only.
    But it does not allow the following update:
    update <table> set <field> = <value> where <field> IN (Select * from <table> where <condition> fetch first <n> rows only)
    The above statement errors out because DB2 does not allow a "fetch first" clause within the update subquery.
    Any pointers on how to write an update statement for n records are highly appreciated...
    ~SaNv...

    Hi Santosh,
    In JDBC Sender Adapter provide the following select & update statements
    Select Statement:
    select * from <table> where <condition> AND ROWNUM<N (N==No.of Rows)
    Update Statement:
    update <table> set <field> = <value> where <field> IN (Select * from <table> where <condition> AND ROWNUM<N)
    (N==No.of Rows)
    As for the error you are facing, it is because of a communication link failure between the two systems.
    Check the connections and try stopping and starting the communication channels once.
    Regards
    Venkat Rao .G
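    Note that ROWNUM is an Oracle pseudocolumn and is generally only available on DB2 when Oracle compatibility mode is enabled. A commonly suggested DB2 workaround (hedged, untested here; <pk> stands for whatever key column the table has) is to push the FETCH FIRST into a nested table expression and drive the update off the keys it returns:
    update <table> t
    set <field> = <value>
    where t.<pk> in (select s.<pk>
                     from (select <pk>
                           from <table>
                           where <condition>
                           fetch first 10 rows only) s)
    The same nested table expression can be reused in the sender channel's SELECT so the adapter reads and flags exactly the same n rows.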

  • Using named parameters with an sql UPDATE statement

    I am trying to write a simple? application on my Windows 7 PC using HTML, Javascript and Sqlite.  I have created a database with a 3 row table and pre-populated it with data.  I have written an HTML data entry form for modifying the data and am able to open the database and populate the form.  I am having trouble with my UPDATE function.  The current version of the function saves the entry in the last row of the HTML table into the first two rows of the Sqlite data table, but I'm so worn out on this that I can't tell if it is accidental or the clue I need to fix it.
    This is the full contents of the function . . .
         updateData = new air.SQLStatement();
         updateData.sqlConnection = conn;
         updateData.text = "UPDATE tablename SET Gsts = "Gsts, Gwid = :Gwid, GTitle = :GTitle WHERE GId = ":GId;
              var x = document.getElementsById("formname");
              for (var i = 1, row, row = x.rows[i]; i++) {
                   updateData.parameters[":GId"] = 1;
                   for (var j = 0, col; col=row.cells[j]; j++) {
                        switch(j) {
                             case 0: updateData.parameters[":GTitle"] = col.firstChild.value; break;
                             case 1: updateData.parameters[":Gsts"] = col.firstChild.value; break;
                             case 2: updateData.parameters[":Gwid"] = col.firstChild.value; break;
    Note: When I inspect the contents of the col.firstChild.value cases, they show the proper data as entered in the form -- it just isn't being updated into the proper rows of the Sqlite table.
    Am I using the named parameters correctly? I couldn't find much information on the proper use of parameters in an UPDATE statement.

    Thank you for the notes.  Yes, the misplaced quotes were typos.  I'm handtyping a truncated version of the function so I don't put too much info in the post. And yes, i = 1 'cuz the first rows of the table are titles.  So the current frustration is that I seem to be assigning all the right data to the right parameters but nothing is saving to the database.
    I declare updateData as a variable at the top of the script file
    Then I start a function for updating the data which establishes the sql connection as shown above.
    The correctly typed.text statement is:
            updateData.text = "UPDATE tablename SET Gsts=:Gsts, Gwid=:Gwid, GTitle=:GTitle WHERE GId=:GId";
    (The data I'm saving is entered in text boxes inside table cells.) And the current version of the loop is:
            myTable = document.getElementById("GaugeSts");
            myRows= myTable.rows;
              for(i=1, i<myRows.length, i++) {
                   updateData.parameters[":GId"]=i;
                   for(j=0; j<myRows(i).cells.length, j++) {
                        switch(y) {
                             case 0: updateData.parameters[":GTitle"]=myrows[i].cells[y].firstChild.value; break;
                             case 1: updateData.parameters[":Gsts"]=myrows[i].cells[y].firstChild.value; break;
                             case 2: updateData.parameters[":Gwid"]=myrows[i].cells[y].firstChild.value; break;
                             updateData.execute;
    The whole thing runs without error, and when I include the statements to check what's in myrows[i].cells[y].firstChild.value, I'm seeing that the correct data is being picked up for assignment to the parameters and the update. I haven't been able to double check that the contents of the parameters are actually taking the data 'cuz I don't know how to extract the data from the parameters. (The only reference I've found so far has said that it is not possible. I'm still researching that one.) I've also tried moving the position of the updateData execution statement to several different locations in the loop and still nothing updates.
    I am using a combination of air.Introspector.Console.log to check the results of code and I'm using Firefox's SQlite manager to handle the database.  The database is currently sitting on the Desktop to facilitate testing and I have successfully updated/changed data in this table through the SQLite Manager so I don't think I'm having permission errors and, see below, I have another function successfully saving data to another table.
    I currently have another function that uses ? for the parameters in the .text of an INSERT/REPLACE statement and that one works fine.  But only one line of data is being saved, so there is no loop involved.  I tried changing the UPDATE statement in this function to the INSERT/REPLACE just to make something save back to the database, but I can't make that one work either. (And I've tried so many things now, I don't even remember what actually saved something, albeit incorrectly, to the database in one of my previous iterations with the for loops.)
    I'm currently poring through Sqlite and Javascript tomes to see if I can find what's missing but if anything else jumps out at you with the corrected code, I'd appreciate some ideas.
    Jeane

  • Help needed with Update statements.

    Hello All,
    I am trying to learn Berkeley DB XML and am facing a problem querying the inserted XML file. I have a small XML file with the following contents:
    &lt;?xml version="1.0" standalone="yes"?&gt;
    &lt;Bookstore&gt;
    &lt;Book&gt;
    &lt;book_ID&gt;1&lt;/book_ID&gt;
    &lt;title&gt;Harry Potter and the Order of the Phoenix&lt;/title&gt;
    &lt;subtitle&gt;A Photographic History&lt;/subtitle&gt;
    &lt;author&gt;
    &lt;author_fname&gt;J.K.&lt;/author_fname&gt;
    &lt;author_lname&gt;Rowling&lt;/author_lname&gt;
    &lt;/author&gt;
    &lt;price&gt;9.99&lt;/price&gt;
    &lt;year_published&gt;2004&lt;/year_published&gt;
    &lt;publisher&gt;Scholastic, Inc.&lt;/publisher&gt;
    &lt;genre&gt;Fiction&lt;/genre&gt;
    &lt;quantity_in_stock&gt;28997&lt;/quantity_in_stock&gt;
    &lt;popularity&gt;20564&lt;/popularity&gt;
    &lt;/Book&gt;
    &lt;/Bookstore&gt;
    When I try to update the TITLE of this node I have the following error message:
    C:\Users\Chandra\Desktop\BDB&gt;javac -classpath .;"C:\Program Files\Sleepycat Soft
    ware\Berkeley DB XML 2.1.8\jar\dbxml.jar";"C:\Program Files\Sleepycat Software\B
    erkeley DB XML 2.1.8\jar\db.jar" bdb.java
    bdb.java:75: illegal start of expression
    public static final String STATEMENT1 = "replace value of node collection("twopp
    ro.bdbxml")/Bookstore/Book/bookid/title with 'NEWBOOK'";
    ^
    bdb.java:80: ')' expected
    System.out.println("Done query: " + STATEMENT1);
    ^
    2 errors
    But when I remove the update statements and just try to display the TITLE of this node, I don't see any output. Please help me find my mistakes. Below is the source code I am using.
    Thanks.
    import java.io.File;
    import java.io.FileNotFoundException;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlException;
    import com.sleepycat.dbxml.XmlInputStream;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlUpdateContext;
    import com.sleepycat.dbxml.XmlDocument;
    import com.sleepycat.dbxml.XmlQueryContext;
    import com.sleepycat.dbxml.XmlQueryExpression;
    import com.sleepycat.dbxml.XmlResults;
    import com.sleepycat.dbxml.XmlValue;
    public class bdb{
    public static void main(String[] args)
    Environment myEnv = null;
    File envHome = new File("D:/xmldata");
    try {
    EnvironmentConfig envConf = new EnvironmentConfig();
    envConf.setAllowCreate(true); // If the environment does not
    // exits, create it.
    envConf.setInitializeCache(true); // Turn on the shared memory
    // region.
    envConf.setInitializeLocking(true); // Turn on the locking subsystem.
    envConf.setInitializeLogging(true); // Turn on the logging subsystem.
    envConf.setTransactional(true); // Turn on the transactional
    envConf.setRunRecovery(true);
    // subsystem.
    myEnv = new Environment(envHome, envConf);
    // Do BDB XML work here.
    } catch (DatabaseException de) {
    // Exception handling goes here
    } catch (FileNotFoundException fnfe) {
    // Exception handling goes here
    } finally {
    try {
    if (myEnv != null) {
    myEnv.close();
    } catch (DatabaseException de) {
    // Exception handling goes here
    XmlManager myManager = null;
    XmlContainer myContainer = null;
    // The document
    String docString = "D:/xmldata/test.xml";
    // The document's name.
    String docName = "cia";
    try {
    myManager = new XmlManager(); // Assumes the container currently exists.
    myContainer =
    myManager.createContainer("twoppro.bdbxml");
    myManager.setDefaultContainerType(XmlContainer.NodeContainer); // Need an update context for the put.
    XmlUpdateContext theContext = myManager.createUpdateContext(); // Get the input stream.
    XmlInputStream theStream =
    myManager.createLocalFileInputStream(docString); // Do the actual put
    myContainer.putDocument(docName, // The document's name
    theStream, // The actual document.
    theContext, // The update context
    // (required).
    null); // XmlDocumentConfig object
    theStream.delete();
    // Update the title
    public static final String STATEMENT1 = "replace value of node collection("twoppro.bdbxml")/Bookstore/Book/[bookid=1]/title with 'NEWBOOK'";
    XmlQueryContext context = myManager.createQueryContext();
    XmlQueryExpression queryExpression1 = myManager.prepare(STATEMENT1, context);
    System.out.println("Try to execute query: " +
    System.out.println("Done query: " + STATEMENT1);
    queryExpression1.execute(context);
    // Get a query context
    XmlQueryContext context = myManager.createQueryContext();
    // Set the evaluation type to Lazy.
    context.setEvaluationType(XmlQueryContext.Lazy);
    // Declare the query string
    String queryString =
    "for $u in collection('twoppro.dbxml')/Bookstore/Book/[bookid=1]"
    + "*return $u/title";*
    // Prepare (compile) the query
    XmlQueryExpression qe = myManager.prepare(queryString, context);
    XmlResults results = qe.execute(context);
    System.out.println("ok");
    System.out.println(results);
    } catch (XmlException e) {
    // Error handling goes here. You may want to check
    // for XmlException.UNIQUE_ERROR, which is raised
    // if a document with that name already exists in
    // the container. If this exception is thrown,
    // try the put again with a different name, or
    // use XmlModify to update the document.
    } catch (FileNotFoundException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
    } finally {
    try {
    if (myContainer != null) {
    myContainer.close();
    if (myManager != null) {
    myManager.close();
    } catch (XmlException ce) {
    // Exception handling goes here
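    For what it's worth, the two javac messages point at ordinary Java syntax problems in that one statement rather than at anything BDB XML-specific: public static final is not allowed on a local variable inside main() (hence "illegal start of expression"), the unescaped double quotes around twoppro.bdbxml end the string literal early, and the unterminated System.out.println("Try to execute query: " + on the following line accounts for the "')' expected" message. A minimal sketch of how the declaration could be written; the single-quote form is the one the follow-up post eventually uses, and Book[book_ID=1] matches the element name in the sample document:
    // As a plain local variable, with the inner double quotes escaped...
    String stmtEscaped = "replace value of node collection(\"twoppro.bdbxml\")"
                       + "/Bookstore/Book[book_ID=1]/title with 'NEWBOOK'";
    // ...or, more readably, with single quotes inside the query text.
    String stmtSingle = "replace value of node collection('twoppro.bdbxml')"
                      + "/Bookstore/Book[book_ID=1]/title with 'NEWBOOK'";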

    Thanks Rucong. The change you suggested did help me get the program to compile and run correctly. But I also have the display function to retrieve the results, and the program now runs without producing any output.
    C:\Users\C\Desktop\BDB&gt;Clientbuild.bat
    C:\Users\C\Desktop\BDB&gt;javac -classpath .;"C:\Program Files\Sleepycat Soft
    ware\Berkeley DB XML 2.1.8\jar\dbxml.jar";"C:\Program Files\Sleepycat Software\B
    erkeley DB XML 2.1.8\jar\db.jar" bdb.java
    C:\Users\C\Desktop\BDB&gt;Client.bat
    C:\Users\C\Desktop\BDB&gt;java -classpath .;"C:\Program Files\Sleepycat Softw
    are\Berkeley DB XML 2.1.8\jar\dbxml.jar";"C:\Program Files\Sleepycat Software\Be
    rkeley DB XML 2.1.8\jar\db.jar" bdb
    As you can see, there is no output displayed. Is there something like a 'print' I have to use in this Java code?
    Any ideas? Below is the code I am using now:
    import java.io.File;
    import java.io.FileNotFoundException;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlException;
    import com.sleepycat.dbxml.XmlInputStream;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlUpdateContext;
    import com.sleepycat.dbxml.XmlDocument;
    import com.sleepycat.dbxml.XmlQueryContext;
    import com.sleepycat.dbxml.XmlQueryExpression;
    import com.sleepycat.dbxml.XmlResults;
    import com.sleepycat.dbxml.XmlValue;
    public class bdb{
    public static void main(String[] args)
    Environment myEnv = null;
    File envHome = new File("D:/xmldata");
    try {
    EnvironmentConfig envConf = new EnvironmentConfig();
    envConf.setAllowCreate(true); // If the environment does not
    // exits, create it.
    envConf.setInitializeCache(true); // Turn on the shared memory
    // region.
    envConf.setInitializeLocking(true); // Turn on the locking subsystem.
    envConf.setInitializeLogging(true); // Turn on the logging subsystem.
    envConf.setTransactional(true); // Turn on the transactional
    envConf.setRunRecovery(true);
    // subsystem.
    myEnv = new Environment(envHome, envConf);
    // Do BDB XML work here.
    } catch (DatabaseException de) {
    // Exception handling goes here
    } catch (FileNotFoundException fnfe) {
    // Exception handling goes here
    } finally {
    try {
    if (myEnv != null) {
    myEnv.close();
    } catch (DatabaseException de) {
    // Exception handling goes here
    XmlManager myManager = null;
    XmlContainer myContainer = null;
    // The document
    String docString = "D:/xmldata/test.xml";
    // The document's name.
    String docName = "cia";
    try {
    myManager = new XmlManager(); // Assumes the container currently exists.
    myContainer =
    myManager.createContainer("twoppro.bdbxml");
    myManager.setDefaultContainerType(XmlContainer.NodeContainer); // Need an update context for the put.
    XmlUpdateContext theContext = myManager.createUpdateContext(); // Get the input stream.
    XmlInputStream theStream =
    myManager.createLocalFileInputStream(docString); // Do the actual put
    myContainer.putDocument(docName, // The document's name
    theStream, // The actual document.
    theContext, // The update context
    // (required).
    null); // XmlDocumentConfig object
    theStream.delete();
    // Update the title
    String STATEMENT1 = "for $n in collection('twoppro.bdbxml')/Bookstore/Book[book_ID=1]/title return replace value of node $n with 'NEWBOOK'";
    XmlQueryContext context = myManager.createQueryContext();
    XmlQueryExpression queryExpression1 = myManager.prepare(STATEMENT1, context);
    System.out.println("Done query: " + STATEMENT1);
    queryExpression1.execute(context);
    // Get a query context
    XmlQueryContext thiscontext = myManager.createQueryContext();
    // Set the evaluation type to Lazy.
    context.setEvaluationType(XmlQueryContext.Lazy);
    // Declare the query string
    String queryString =
    "for $u in collection('twoppro.dbxml')/Bookstore/Book/[bookid=1]"
    + "return $u/title";
    // Prepare (compile) the query
    XmlQueryExpression qe = myManager.prepare(queryString, thiscontext);
    XmlResults results = qe.execute(thiscontext);
    System.out.println("ok");
    System.out.println(results);
    } catch (XmlException e) {
    // Error handling goes here. You may want to check
    // for XmlException.UNIQUE_ERROR, which is raised
    // if a document with that name already exists in
    // the container. If this exception is thrown,
    // try the put again with a different name, or
    // use XmlModify to update the document.
    } catch (FileNotFoundException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
    } finally {
    try {
    if (myContainer != null) {
    myContainer.close();
    if (myManager != null) {
    myManager.close();
    } catch (XmlException ce) {
    // Exception handling goes here
    Thanks.
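    Going by the code as posted, a couple of things in the display step would explain the missing output: the query uses collection('twoppro.dbxml') while the container was created as 'twoppro.bdbxml'; the path /Bookstore/Book/[bookid=1] has a stray slash and the wrong element name (the document uses book_ID); and System.out.println(results) prints the XmlResults object itself rather than its contents, so even a non-empty result set shows nothing useful. Note, too, that setEvaluationType(XmlQueryContext.Lazy) is called on the first context, not on thiscontext, so the query actually runs with the default evaluation. A minimal, untested sketch of the retrieval using the usual BDB XML Java iteration pattern and the corrected names:
    String queryString =
          "for $u in collection('twoppro.bdbxml')/Bookstore/Book[book_ID=1] "
        + "return $u/title";
    XmlQueryContext qc = myManager.createQueryContext();
    XmlQueryExpression qe = myManager.prepare(queryString, qc);
    XmlResults results = qe.execute(qc);
    // XmlResults is a cursor, not a printable value: iterate it explicitly.
    while (results.hasNext()) {
        XmlValue value = results.next();
        System.out.println(value.asString());
    }
    results.delete();   // release the result set when finished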

  • Bulk SQL Update Statements

    I need to perform bulk updates on my tables using SQL. The tables are very big, and most of the updates touch a couple of million records, so the process is time-consuming and very slow. Is there anything I could do to fine-tune these update statements? Please advise. Some of the SQL statements I use are as follows:
    update test set gid=1 where gid is null and pid between 0 and 1;
    update test set gid=2 where gid is null and pid between 1 and 5;
    update test set gid=3 where gid is null and pid between 5 and 10;
    update test set gid=4 where gid is null and pid between 10 and 15;
    update test set gid=5 where gid is null and pid between 15 and 70;
    update test set gid=6 where gid is null and pid between 70 and 100;
    update test set gid=7 where gid is null and pid between 100 and 150;
    update test set gid=8 where gid is null and pid between 150 and 200;
    update test set gid=9 where gid is null and pid between 200 and 300;

    Indeed, check out the predicate: a range condition like this is applied as a filter on a full table scan, so each statement reads the whole table. For example, against EMP:
    SQL> explain plan for
      2  select *
      3  from emp
      4  where sal between 1000 and 2000;
    Explained.
    SQL> @utlxpls
    PLAN_TABLE_OUTPUT
    Plan hash value: 3956160932
    | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |      |     5 |   185 |     3   (0)| 00:00:01 |
    |*  1 |  TABLE ACCESS FULL| EMP  |     5 |   185 |     3   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    PLAN_TABLE_OUTPUT
       1 - filter("SAL"<=2000 AND "SAL">=1000)
