Atomicity of a query

Hello All,
I have two queries which I believe are related to each other.
First is a general question: is each query in Oracle atomic in nature? In other words, if I run a query on table A, does the state of the table remain consistent until the query has finished executing?
Coming to the second and more direct question, there is a scenario: there is a table A with two fields, criteria and id (primary key).
I need to select a random row from the table based on a criteria and then update the criteria column at the same time so that the row is not selected again. There are two techniques that I have combined for this:
use an update statement with a returning clause, since I need to know the id key for further processing.
use the ROW_NUMBER() function to generate incremental numbers for the rows in the result, so that I can pick a random row number for selection.
Query snippet -
p_id tableA.id%type;

update tableA a set criteria = 'SELECTED'
where id = (
    select id from (
        select id, ROW_NUMBER() OVER (ORDER BY id) as randomKey
        from tableA
        where criteria is null)
    where randomKey = (
        select round(dbms_random.value(1, (select count(*) from tableA where criteria is null)), 0)
        from dual))
returning id into p_id;
Is this the best approach to solving this problem? Please advise.
Thanks
Peeyush

Hi,
Peeyush wrote:
First is a general question: is each query in Oracle atomic in nature? In other words, if I run a query on table A, does the state of the table remain consistent until the query has finished executing?
Yes. Even if another session updates, or even drops, table A, your query will finish and give you results based on the contents of table A at the moment the query began. Even SYSDATE will be the same throughout the query.
Coming to the second and more direct question, there is a scenario: there is a table A with two fields, criteria and id (primary key).
I need to select a random row from the table based on a criteria and then update the criteria column at the same time so that the row is not selected again. There are two techniques that I have combined for this:
use an update statement with a returning clause, since I need to know the id key for further processing.
use the ROW_NUMBER() function to generate incremental numbers for the rows in the result, so that I can pick a random row number for selection.
Sorry, I don't understand this part.
Could you give a specific example?
Why do you need to do "further processing": is there something that can't be done in one query?
If you want to pick N rows at random (N may be 1), you can use "ROW_NUMBER () OVER (ORDER BY dbms_random.value) AS r_num" in a sub-query, and pick the rows "WHERE r_num <= N" in the super-query.
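For example, a minimal sketch of that pattern applied to your table (same tableA, criteria, and p_id as above, with N = 1; a sketch, not tested):
update tableA
set    criteria = 'SELECTED'
where  id = (select id
             from   (select id,
                            ROW_NUMBER () OVER (ORDER BY dbms_random.value) AS r_num
                     from   tableA
                     where  criteria is null)
             where  r_num <= 1)
returning id into p_id;
This avoids the separate COUNT(*) and dbms_random.value lookup entirely, since the random ordering itself does the picking.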

Similar Messages

  • JDBC Sender Adapter - Transaction & Parameterized Query?

    Dear Experts,
    I'm curious about the JDBC sender adapter in SAP PI.
    As I see from the documentation and from searching the Internet, the default procedure of the sender JDBC adapter is to first run a SELECT/stored procedure query and then update the records that have just been read.
    Configuring the Sender JDBC Adapter - Advanced Adapter Engine - SAP Library
    What I want to ask is:
    - What database transaction is used for the SELECT and the UPDATE? I mean, what if the SELECT is successful and the records have been sent to the IE, but the UPDATE fails? That way, on the next polling run, the same records could be read again. Is that possible? Are the SELECT and UPDATE queries atomic (if one fails, does the other fail too)?
    - Is it possible to have a parameterized query / stored procedure in the sender JDBC adapter? Because, looking at the default procedure, there should be at least one field that is used as a flag (for example, the processed field needs to be updated to '1'). Something like:
              - SELECT * FROM table_a WHERE docno > $last_doc_no
                   $last_doc_no is a parameter or variable from PI
              - EXEC sp_do_something ( $param_a, $param_b )
                             $param_a, $param_b are parameters or variable in PI
    Thank you,
    Suwandi C.

    Hi Suwandi,
    all actions against the database are in one transaction, which means that if one fails, all fail.
    And it is possible to have a parameterized stored procedure. You should send something like:
    <?xml version="1.0" encoding="UTF-8"?>
    <ns0:mt_proc xmlns:ns0="http://aaaa">
       <statementname>
          <stProc action="EXECUTE"/>
          <TABLE>PROCEDURE NAME</TABLE>
          <access>
             <param_in isInput="1" type="some_type">input param</param_in>
             <param_out isOutput="1" type="some_type"></param_out>
          </access>
       </statementname>
    </ns0:mt_proc>
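    For illustration, the database-side procedure such a message would execute might look like the following sketch (hypothetical: the name sp_do_something and the parameter names come from the question above, while the types and body are invented here to show the flagging idea):
    CREATE PROCEDURE sp_do_something
        @param_a INT,           -- corresponds to the <param_in isInput="1"> element
        @param_b INT OUTPUT     -- corresponds to the <param_out isOutput="1"> element
    AS
    BEGIN
        -- flag the rows read by the sender SELECT so they are not polled again
        UPDATE table_a SET processed = 1 WHERE docno > @param_a;
        SET @param_b = @@ROWCOUNT;  -- report how many rows were flagged
    END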

  • Accessing Atom XML entry by id?

    Hello,
    I have a strange problem when trying to access atom feeds (or
    any xml really).
    I'm using a standard HttpService to grab an atom feed, then
    i'm trying to access a particular node summary from its id with
    this xml query:
    myLabel.text=myFeed..entry.(id=="elementidhere").summary;
    But it will not match any entry based on the id, yet it will
    quite happily match against the title or any other node as such:
    myLabel.text= myFeed..entry.(title=="Welcome").summary;
    And will even output the id just to prove it exists:
    myLabel.text= myFeed..entry.(title=="Welcome").id;
    Is there something special about the id node that means I can't do a
    straightforward match against it? I'm using the atom namespace,
    which reports the id as a URI, so do I need to handle it
    differently? I've tried converting it to a string also, but this
    then throws an error saying it's null:
    myLabel.text=myFeed..entry.(id.toString()=="tag:elementid").summary;
    Any ideas?
    Anthony

    Use this as your schema. It should work then.
    <?xml version="1.0" encoding="UTF-8"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:complexType name="EntryClass">
    <xsd:attribute name="ID" type="xsd:ID" use="required"/>
    <xsd:attribute name="value" type="xsd:string" use="required"/>
    </xsd:complexType>
    <xsd:complexType name="MapClass">
    <xsd:sequence>
    <xsd:element maxOccurs="unbounded" minOccurs="1" name="entry" type="EntryClass"/>
    </xsd:sequence>
    </xsd:complexType>
    <xsd:element name="map" value="MapClass"/>
    </xsd:schema>

  • Trying to create non-numeric key figure in Query

    Hello, I am trying to add a formula to my BW query that will return a non-numeric value.
    My cube will be used like a standard BW cube 90% of the time, but there is one query request that wants to display atomic data and have non-numeric categories determined at query run time.
    Example:
    Field A has integer values
    Key           Value
    Record 1     -1
    Record 2      2
    Record 3      6
    I want to add a calculated field with the following logic:
    If Value is <= -1 then display "Early", else if Value is > -1 and <= 5 then display "On-Time", else if Value is > 5 then display "Late". The -1 and 5 values will be replaced with variables that will be required on the selection screen.
    This would return a grid as below:
    Key           Value    Calculated Field
    Record 1     -1           "Early"
    Record 2      2           "On-Time"
    Record 3      6           "Late"
    I haven't been able to figure out how to set up this field.  Any ideas?

    Hi,
    check these help links
    http://help.sap.com/saphelp_nw04/helpdata/en/8f/da1640dc88e769e10000000a155106/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/95/1ae03b591a9c7ce10000000a11402f/frameset.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6305e07211d2acb80000e829fbfe/content.htm
    http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6312e07211d2acb80000e829fbfe/content.htm
    It's a big topic so it's not possible to write everything here.
    You can search the forums also.
    Thanks
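    As a side note, the bucketing logic itself corresponds to a plain CASE expression in ordinary SQL; a sketch for illustration only, outside BW (the table and column names are hypothetical, and the literal -1 and 5 stand in for the selection-screen variables):
    SELECT rec_key,
           rec_value,
           CASE
               WHEN rec_value <= -1 THEN 'Early'
               WHEN rec_value <= 5  THEN 'On-Time'
               ELSE 'Late'
           END AS calculated_field
    FROM   delivery_records;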

  • Very Slow Query with CTE inner join

    I have 2 tables (heavily simplified here to show relevant columns):
    CREATE TABLE tblCharge
    (ChargeID int NOT NULL,
    ParentChargeID int NULL,
    ChargeName varchar(200) NULL)
    CREATE TABLE tblChargeShare
    (ChargeShareID int NOT NULL,
    ChargeID int NOT NULL,
    TotalAmount money NOT NULL,
    TaxAmount money NULL,
    DiscountAmount money NULL,
    CustomerID int NOT NULL,
    ChargeShareStatusID int NOT NULL)
    I have a very basic View to Join them:
    CREATE VIEW vwBASEChargeShareRelation as
    Select c.ChargeID, ParentChargeID, s.CustomerID, s.TotalAmount, isnull(s.TaxAmount, 0) as TaxAmount, isnull(s.DiscountAmount, 0) as DiscountAmount
    from tblCharge c inner join tblChargeShare s
    on c.ChargeID = s.ChargeID Where s.ChargeShareStatusID < 3
    GO
    I then have a view containing a CTE to get the children of the Parent Charge:
    ALTER VIEW [vwChargeShareSubCharges] AS
    WITH RCTE AS
    (
    SELECT ParentChargeId, ChargeID, 1 AS Lvl, ISNULL(TotalAmount, 0) as TotalAmount, ISNULL(TaxAmount, 0) as TaxAmount,
    ISNULL(DiscountAmount, 0) as DiscountAmount, CustomerID, ChargeID as MasterChargeID
    FROM vwBASEChargeShareRelation Where ParentChargeID is NULL
    UNION ALL
    SELECT rh.ParentChargeID, rh.ChargeID, Lvl+1 AS Lvl, ISNULL(rh.TotalAmount, 0), ISNULL(rh.TaxAmount, 0), ISNULL(rh.DiscountAmount, 0) , rh.CustomerID
    , rc.MasterChargeID
    FROM vwBASEChargeShareRelation rh
    INNER JOIN RCTE rc ON rh.ParentChargeID = rc.ChargeID and rh.CustomerID = rc.CustomerID
    )
    Select MasterChargeID as ChargeID, CustomerID, Sum(TotalAmount) as TotalCharged, Sum(TaxAmount) as TotalTax, Sum(DiscountAmount) as TotalDiscount
    from RCTE
    Group by MasterChargeID, CustomerID
    GO
    So far so good, I can query this view and get the total cost for a line item including all children.
    The problem occurs when I join this table. The query:
    Select t.* from vwChargeShareSubCharges t
    inner join
    tblChargeShare s
    on t.CustomerID = s.CustomerID
    and t.MasterChargeID = s.ChargeID
    Where s.ChargeID = 1291094
    Takes around 30 ms to return a result (tblCharge and tblChargeShare have around 3.5 million records).
    But the query:
    Select t.* from vwChargeShareSubCharges t
    inner join
    tblChargeShare s
    on t.CustomerID = s.CustomerID
    and t.MasterChargeID = s.ChargeID
    Where InvoiceID = 1045854
    Takes around 2 minutes to return a result - even though the only charge with that InvoiceID is the same charge as the one used in the previous query.
    The same thing occurs if I do the join in the same query that the CTE is defined in.
    I ran the execution plan for each query. The first (fast) query looks like this:
    The second (slow) query looks like this:
    I am at a loss, and my skills at decoding execution plans to resolve this are lacking.
    I have separate indexes on tblCharge.ChargeID, tblCharge.ParentChargeID, tblChargeShare.ChargeID, tblChargeShare.InvoiceID, tblChargeShare.ChargeShareStatusID
    Any ideas? Tested on SQL 2008R2 and SQL 2012

    >> The database is linked [sic] to an established app and the column and table names can't be changed. <<
    Link? That is a term from pointer chains and network databases, not SQL. I will guess that means the app dates back to the old pre-RDBMS days and you are screwed.
    >> I am not too worried about the money field [sic], this is used for money and money based calculations so the precision and rounding are acceptable at this level. <<
    Field is a COBOL concept; columns are totally different. MONEY is how Sybase mimics the PICTURE clause that puts currency signs, commas, period, etc in a COBOL money field. 
    Using more than one operation (multiplication or division) on money columns will produce severe rounding errors. A simple way to visualize money arithmetic is to place a ROUND() function call after every operation. For example,
    Amount = (Portion / total_amt) * gross_amt
    can be rewritten using money arithmetic as:
    Amount = ROUND(ROUND(Portion / total_amt, 4) * gross_amt, 4)
    Rounding to four decimal places might not seem an issue, until the numbers you are using are greater than 10,000.
    BEGIN
    DECLARE @gross_amt MONEY,
     @total_amt MONEY,
     @my_part MONEY,
     @money_result MONEY,
     @float_result FLOAT,
     @all_floats FLOAT;
     SET @gross_amt = 55294.72;
     SET @total_amt = 7328.75;
     SET @my_part = 1793.33;
     SET @money_result = (@my_part / @total_amt) * @gross_amt;
     SET @float_result = (@my_part / @total_amt) * @gross_amt;
     SET @all_floats = (CAST(@my_part AS FLOAT)
      / CAST(@total_amt AS FLOAT))
      * CAST(@gross_amt AS FLOAT);
     SELECT @money_result, @float_result, @all_floats;
    END;
    @money_result = 13525.09         -- incorrect
    @float_result = 13525.0885       -- incorrect
    @all_floats   = 13530.5038673171 -- correct; the money versions are off by about 5.42
    >> The keys are ChargeID(int, identity) and ChargeShareID(int, identity). <<
    Sorry, but IDENTITY is not relational and cannot be a key by definition. But it sure works just like a record number in your old COBOL file system. 
    >> .. these need to be int so that they are assigned by the database and unique. <<
    No, the data type of a key is not determined by physical storage, but by logical design. IDENTITY is the number of a parking space in a garage; a VIN is how you identify the automobile. 
    >> What would you recommend I use as keys? <<
    I do not know. I have no specs, and without them I cannot pull a Kabbalah number from the hardware. Your magic numbers could identify squids, automobiles, or Lady Gaga! I would ask the accounting department how they identify a charge.
    >> Charge_Share_Status_ID links [sic] to another table which contains the name, formatting [sic] and other information [sic] or a charge share's status, so it is both an Id and a status. <<
    More pointer chains! Formatting? Unh? In RDBMS, we use a tiered architecture. That means display formatting is in a presentation layer. A properly created table has cohesion - it models one and only one data element. A status is a state of being that applies
    to an entity over a period of time (think employment status, marital status, etc. if that is too abstract).
    An identifier is based on the Law of Identity from formal logic “To be is to be something in particular” or “A is A” informally. There is no entity here! The Charge_Share_Status table should have the encoded values for a status and perhaps a description if
    they are unclear. If the list of values is clear, short and static, then use a CHECK() constraint. 
    On a scale from 1 to 10, what color is your favorite letter of the alphabet? Yes, this is literally that silly and wrong. 
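    A sketch of that CHECK() alternative (the status values here are invented for illustration; use whatever values the business actually recognizes):
    CREATE TABLE ChargeShares
    (charge_share_id INTEGER NOT NULL PRIMARY KEY,
     charge_share_status VARCHAR(10) DEFAULT 'open' NOT NULL
         CHECK (charge_share_status IN ('open', 'billed', 'void')));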
    >> I understand what a CTE is; is there a better way to sum all children for a parent hierarchy? <<
    There are many ways to represent a tree or hierarchy in SQL. What you have is called an adjacency list model, and it looks like this:
    CREATE TABLE OrgChart 
    (emp_name CHAR(10) NOT NULL PRIMARY KEY, 
     boss_emp_name CHAR(10) REFERENCES OrgChart(emp_name), 
     salary_amt DECIMAL(6,2) DEFAULT 100.00 NOT NULL,
     << horrible cycle constraints >>);
    OrgChart 
    emp_name  boss_emp_name  salary_amt 
    ==============================
    'Albert'    NULL    1000.00
    'Bert'    'Albert'   900.00
    'Chuck'   'Albert'   900.00
    'Donna'   'Chuck'    800.00
    'Eddie'   'Chuck'    700.00
    'Fred'    'Chuck'    600.00
    This approach will wind up with really ugly code -- CTEs hiding recursive procedures, horrible cycle prevention code, etc.  The root of your problem is not knowing that rows are not records, that SQL uses sets and trying to fake pointer chains with some
    vague, magical non-relational "id".  
    This matches the way we did it in old file systems with pointer chains.  Non-RDBMS programmers are comfortable with it because it looks familiar -- it looks like records and not rows.  
    Another way of representing trees is to show them as nested sets. 
    Since SQL is a set oriented language, this is a better model than the usual adjacency list approach you see in most text books. Let us define a simple OrgChart table like this.
    CREATE TABLE OrgChart 
    (emp_name CHAR(10) NOT NULL PRIMARY KEY, 
     lft INTEGER NOT NULL UNIQUE CHECK (lft > 0), 
     rgt INTEGER NOT NULL UNIQUE CHECK (rgt > 1),
      CONSTRAINT order_okay CHECK (lft < rgt));
    OrgChart 
    emp_name         lft rgt 
    ======================
    'Albert'      1   12 
    'Bert'        2    3 
    'Chuck'       4   11 
    'Donna'       5    6 
    'Eddie'       7    8 
    'Fred'        9   10 
    The (lft, rgt) pairs are like tags in a mark-up language, or parens in algebra, BEGIN-END blocks in Algol-family programming languages, etc. -- they bracket a sub-set.  This is a set-oriented approach to trees in a set-oriented language. 
    The organizational chart would look like this as a directed graph:
                Albert (1, 12)
        Bert (2, 3)    Chuck (4, 11)
                       /    |   \
                     /      |     \
                   /        |       \
                 /          |         \
            Donna (5, 6) Eddie (7, 8) Fred (9, 10)
    The adjacency list table is denormalized in several ways. We are modeling both the Personnel and the Organizational chart in one table. But for the sake of saving space, pretend that the names are job titles and that we have another table which describes the
    Personnel that hold those positions.
    Another problem with the adjacency list model is that the boss_emp_name and employee columns are the same kind of thing (i.e. identifiers of personnel), and therefore should be shown in only one column in a normalized table.  To prove that this is not
    normalized, assume that "Chuck" changes his name to "Charles"; you have to change his name in both columns and several places. The defining characteristic of a normalized table is that you have one fact, one place, one time.
    The final problem is that the adjacency list model does not model subordination. Authority flows downhill in a hierarchy, but If I fire Chuck, I disconnect all of his subordinates from Albert. There are situations (i.e. water pipes) where this is true, but
    that is not the expected situation in this case.
    To show a tree as nested sets, replace the nodes with ovals, and then nest subordinate ovals inside each other. The root will be the largest oval and will contain every other node.  The leaf nodes will be the innermost ovals with nothing else inside them
    and the nesting will show the hierarchical relationship. The (lft, rgt) columns (I cannot use the reserved words LEFT and RIGHT in SQL) are what show the nesting. This is like XML, HTML or parentheses. 
    At this point, the boss_emp_name column is both redundant and denormalized, so it can be dropped. Also, note that the tree structure can be kept in one table and all the information about a node can be put in a second table and they can be joined on employee
    number for queries.
    To convert the graph into a nested sets model, think of a little worm crawling along the tree. The worm starts at the top, the root, and makes a complete trip around the tree. When he comes to a node, he puts a number in the cell on the side that he is visiting
    and increments his counter. Each node will get two numbers, one for the right side and one for the left. Computer Science majors will recognize this as a modified preorder tree traversal algorithm. Finally, drop the unneeded OrgChart.boss_emp_name column,
    which used to represent the edges of a graph.
    This has some predictable results that we can use for building queries.  The root is always (left = 1, right = 2 * (SELECT COUNT(*) FROM TreeTable)); leaf nodes always have (left + 1 = right); subtrees are defined by the BETWEEN predicate; etc. Here are
    two common queries which can be used to build others:
    1. An employee and all their Supervisors, no matter how deep the tree.
     SELECT O2.*
       FROM OrgChart AS O1, OrgChart AS O2
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND O1.emp_name = :in_emp_name;
    2. The employee and all their subordinates. There is a nice symmetry here.
     SELECT O1.*
       FROM OrgChart AS O1, OrgChart AS O2
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND O2.emp_name = :in_emp_name;
    3. Add a GROUP BY and aggregate functions to these basic queries and you have hierarchical reports. For example, the total salaries which each employee controls:
     SELECT O2.emp_name, SUM(S1.salary_amt)
       FROM OrgChart AS O1, OrgChart AS O2,
            Salaries AS S1
      WHERE O1.lft BETWEEN O2.lft AND O2.rgt
        AND S1.emp_name = O2.emp_name 
       GROUP BY O2.emp_name;
    4. To find the level and the size of the subtree rooted at each emp_name, so you can print the tree as an indented listing:
    SELECT O1.emp_name,
       SUM(CASE WHEN O2.lft BETWEEN O1.lft AND O1.rgt
       THEN 1 ELSE 0 END) AS subtree_size,
       SUM(CASE WHEN O1.lft BETWEEN O2.lft AND O2.rgt
       THEN 1 ELSE 0 END) AS lvl
      FROM OrgChart AS O1, OrgChart AS O2
     GROUP BY O1.emp_name;
    5. The nested set model has an implied ordering of siblings which the adjacency list model does not. To insert a new node, G1, under part G, we can insert one node at a time like this:
    BEGIN ATOMIC
    DECLARE rightmost_spread INTEGER;
    SET rightmost_spread 
        = (SELECT rgt 
             FROM Frammis 
            WHERE part = 'G');
    UPDATE Frammis
       SET lft = CASE WHEN lft > rightmost_spread
                      THEN lft + 2
                      ELSE lft END,
           rgt = CASE WHEN rgt >= rightmost_spread
                      THEN rgt + 2
                      ELSE rgt END
     WHERE rgt >= rightmost_spread;
     INSERT INTO Frammis (part, lft, rgt)
     VALUES ('G1', rightmost_spread, (rightmost_spread + 1));
     COMMIT WORK;
    END;
    The idea is to spread the (lft, rgt) numbers after the youngest child of the parent, G in this case, over by two to make room for the new addition, G1.  This procedure will add the new node to the rightmost child position, which helps to preserve the idea
    of an age order among the siblings.
    6. To convert a nested sets model into an adjacency list model:
    SELECT B.emp_name AS boss_emp_name, E.emp_name
      FROM OrgChart AS E
           LEFT OUTER JOIN
           OrgChart AS B
           ON B.lft
              = (SELECT MAX(lft)
                   FROM OrgChart AS S
                  WHERE E.lft > S.lft
                    AND E.lft < S.rgt);
    7. To find the immediate parent of a node: 
    SELECT MAX(P2.lft), MIN(P2.rgt)
      FROM OrgChart AS P1, OrgChart AS P2
     WHERE P1.lft BETWEEN P2.lft AND P2.rgt 
       AND P1.emp_name = @my_emp_name;
    I have a book on TREES & HIERARCHIES IN SQL which you can get at Amazon.com right now. It has a lot of other programming idioms for nested sets, like levels, structural comparisons, re-arrangement procedures, etc. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Issue consuming Odata feed with Power Query (not in Power BI)

    I am trying to consume an OData feed from an SSRS report using Power Query (latest release on Excel 2010 x32). The report server is on my company intranet.
    In the SSRS report, I can generate an Atom service file. When I use this file in PowerPivot, I can successfully import the report data and refresh it on demand. But I
    would like to do the same thing in Power Query using the "From OData feed" feature.
    1. I have tried supplying the URI to the service file
    = OData.Feed("file:///C:/Users/Bdarbonneau/Documents/temp/Manuf_cycle_time_mapping_table.atomsvc")
    I get this error : 
    DataFormat.Error: The supplied URL must be a valid 'http:' or 'https:' URL.
    2. I tried supplying the URL that the service file contains, but without success. 
    = OData.Feed("http://myssrsserver:8080/ReportServer?%2FMANUFACTURING%2FArchive%2FManuf_cycle_time_mapping_table&amp;rs%3ACommand=Render&amp;rs%3AFormat=ATOM&amp;rc%3AItemPath=Tablix1")
    I get this error:
    DataFormat.Error: OData: The given URL neither points to an OData service or a feed
    Am I missing something, or is what I am trying to do not supported?
    Regards,
    Bertrand

    My current workaround for pulling data from SSRS, until the dev team work out the OData issue, is to pull the report in as a CSV file:
    Csv.Document(Web.Contents("http://Servername/ReportServer?/SummaryReport&rs:Command=Render&rs:Format=Csv")),
    I also tried pulling an excel file from SSRS with no success.
    Tried:
    Excel.Workbook(URL)
    Excel.Workbook(Web.Contents(URL))
    Excel.Workbook(File.Contents(URL))
    Excel.Workbook(File.Contents(Web.Contents(URL)))
    If anyone has had luck pulling an Excel file in from SSRS, I would like to know how.
    Is there a rough release date for the OData functionality?

  • Best practices for creating and querying a history table?

    Suppose I have a table of name-value pairs, and I want to keep track of changes to them so that I can query the value of any pair at any point in time.
    A direct approach would be to use a schema like this:
    CREATE TABLE NAME_VALUE_HISTORY (
      NAME     VARCHAR2(...),
      VALUE    VARCHAR2(...),
      MODIFIED DATE
    );
    When a name-value pair is updated, a new row is added to this table with the date of the change.
    To determine the value associated with a name at a particular point in time, one uses a query like:
      SELECT * FROM NAME_VALUE_HISTORY
      WHERE NAME = :name
        AND MODIFIED IN (SELECT MAX(MODIFIED)
                         FROM NAME_VALUE_HISTORY
                         WHERE NAME = :name AND MODIFIED <= :time)
    My question is: is there a better way to accomplish this? What indexes/hints would you recommend?
    What about a two-table approach like this one? http://pratchev.blogspot.com/2007/05/keeping-history-data-in-sql-server.html
    Edited by: user10936714 on Aug 9, 2012 8:35 AM

    user10936714 wrote:
    There is one advantage... recording the change of a value is just one insert, and it is also atomic without the use of transactions.
    At the risk of being dumb, why is that an advantage? Oracle always and everywhere uses transactions, so it's not like you're avoiding some overhead by not using transactions.
    If, for instance, the performance of reading the value of a name at a point in time is not important, then you can get by with just using one table - the history table.
    If you're not overly concerned with the performance implications of having the current data and the history data in the same table, then rather than rolling your own solution, I'd be strongly tempted to use Workspace Manager to let Oracle keep track of the changes.
    You can create a table, enable versioning, and do whatever DML operations you'd like
    SQL> create table address(
      2    address_id number primary key,
      3    address    varchar2(100)
      4  );
    Table created.
    SQL> exec dbms_wm.enableVersioning( 'ADDRESS', 'VIEW_WO_OVERWRITE' );
    PL/SQL procedure successfully completed.
    SQL> insert into address values( 1, 'First Address' );
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> update address
      2     set address = 'Second Address'
      3   where address_id = 1;
    1 row updated.
    SQL> commit;
    Commit complete.
    Then you can either query the history view:
    SQL> ed
    Wrote file afiedt.buf
      1  select address_id, address, wm_createtime
      2*   from address_hist
    SQL> /
    ADDRESS_ID ADDRESS                        WM_CREATETIME
             1 First Address                  09-AUG-12 01.48.58.566000 PM -04:00
             1 Second Address                 09-AUG-12 01.49.17.259000 PM -04:00
    Or, even cooler, you can go back to an arbitrary point in time, run a query, and see the historical information. I can go back to a point between the time that I committed the first change and the second change, query the ADDRESS view, and see the old data. This is invaluable if you want to take existing queries and/or reports and run them as of certain dates in the past when you're trying to debug a problem.
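    (Positioning the session at that past point in time is done with Workspace Manager's GotoDate call; a sketch only, since the exact argument list should be checked against the DBMS_WM documentation for your release:
    SQL> exec dbms_wm.GotoDate( '09-AUG-2012 13:49:00', 'DD-MON-YYYY HH24:MI:SS' );
    After that, ordinary queries against the ADDRESS view see the data as of that moment.)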
    SQL> select *
      2    from address;
    ADDRESS_ID ADDRESS
             1 First Address
    You can also do things like set savepoints, which are basically named points in time that you can go back to. That lets you do things like create a savepoint for the data as soon as month-end processing is completed, so you can easily go back to "July Month End" without needing to figure out exactly what time that occurred. And you can have multiple workspaces, so different users can be working on completely different sets of changes simultaneously without interfering with each other. This was actually why Workspace Manager was originally created -- to allow users manipulating spatial data to have extremely long-running transactions that could span days or months -- and to be able to switch back and forth between the current live data and the data in each of these long-running scenarios.
    Justin

  • For each atom

    Hi experts,
    I would like to run a SQL query in an atom SQL call inside a for-each atom, like this:
    My for-each atom gets back each result of my first sqlCall, and the query in my second sqlCall compares an element with the current element of the for-each atom, like this: select element from table where element1 = current element of the for-each atom (which is one of the results of the first sql call).
    I don't know how to reference the current element in my query.
    Does anyone know?
    Thank you
    Regards
    Sarah

    Hi
    Did you replace atom1 by atom4 here?
    Then in the expression in the for-each atom, use an XPath like this:
    /vpf:Msg/vpf:Body/vpf:Payload[./@Role='X' and ./@id='atom1']/Items/Item
    If true, then try this:
    /vpf:Msg/vpf:Body/vpf:Payload[./@Role='X' and ./@id='atom4']/*[local-name()='Items']/Item
    Regards

  • Still getting uncaught exception in c++ API running keywords query

    When I run a search based on a keyword in a Java application, the query results are most likely returned the first time, but for subsequent keyword searches the application throws the error below...
    com.sleepycat.dbxml.XmlException: Uncaught exception from C++ API, errcode = INTERNAL_ERROR
         at com.sleepycat.dbxml.dbxml_javaJNI.XmlQueryExpression_execute__SWIG_1(Native Method)
         at com.sleepycat.dbxml.XmlQueryExpression.execute(XmlQueryExpression.java:85)
         at epss.utilities.XQueryUtil.getQueryResultsByKeywords(XQueryUtil.java:168)
         at epss.search.XmlContentByKeywords.getDocumentContentByKeywords(XmlContentByKeywords.java:123)
         at com.epss.test.TestApp.main(TestApp.java:83)
    I know one of the many things to consider in fixing this problem is to make sure the delete() method is called on all Berkeley DB XML objects (e.g. XmlContainer, XmlManager, XmlResults, XmlQueryExpression, etc.) once they are done with, to free resources. I've been doing all that and am still getting the error. This problem doesn't happen when I run a search based on id (attribute value).
    Note: I'm not explicitly using transactions, since I turned on transactions in EnvironmentConfig to create the XmlManager.
    This is the method that does the query and return us the results...
         * Gets the query results by keywords.
         * @param keywords
         * the keywords under search
         * @param manager
         * the object used to perform activities such as preparing XQuery
         * queries
         * @return the query results by keywords
         public static synchronized XmlResults getQueryResultsByKeywords(
                   final String keywords, XmlManager manager) {
              /* Represents a parsed XQuery expression. */
              XmlQueryExpression expr = null;
              /* Encapsulates the results of a query that has been executed. */
              XmlResults results = null;
              /* The query context */
              XmlQueryContext context = null;
              // The value
              XmlValue value = null;
              // Declare string variables
              String query = null;
              // Run logic
              try {
                   /* Do null check */
                   if (manager != null) {
                        // Make XmlValue object
                        value = new XmlValue(keywords);
                        // Get a query context
                        context = manager.createQueryContext();
                        // Bind xquery variable value to its variable name
                        context.setVariableValue(DataConstants.KEYWORD, value);
                        // Build the query string
                        query = QueryStringUtil.xQueryStringByKeywords(
                                  DataConstants.ELEMENTS, DataConstants.KEYWORD);
                        // Compile an XQuery expression into an XmlQueryExpression
                        expr = manager.prepare(query, context);
                        // Evaluates the XQuery expression against the containers
                        results = expr.execute(context);
                        /* Release resources */
                        if (results.size() == 0) {
                             results.delete();
                             results = null;
                        // Free the native resources
                        expr.delete();
                        // Dereference objects
                        expr = null;
                        value = null;
                        context = null;
                        query = null;
                        manager.delete();
                        manager = null;
                        return results;
              } catch (final XmlException e) {
                   // Free the native resources
                   expr.delete();
                   // dereference objects
                   expr = null;
                   value = null;
                   context = null;
                   query = null;
                   // Write to log
                   WriteLog.logExceptionToFile(e);
              return null;
    This is the callback method that return the query string...
         * Returns query keyword query string to retrive keywords.
         * @param elementName The particular node under search
         * @param keywords The keywords being searched under the node
         * @return The string used for the query
         public static synchronized String xQueryStringByKeywords(
                   final String elementName, final String keywords) {
              /* Build query string */
              final StringBuffer sb = new StringBuffer();
              sb.append("let $found := false\n");
              sb.append("let $terms := tokenize($");
              sb.append(keywords);
              sb.append(", \",\")\n");
              sb.append("for $element in collection('");
              sb.append(DataConstants.CONTAINER);
              sb.append("')");
              sb.append("/(FUNDOC | JOBDOC)");
              sb.append("//");
              sb.append(elementName);
              sb.append("//");
              sb.append("parent::*[1]");
              sb.append("\nlet $found := for $term in $terms\n");
          sb.append(" return if (contains(lower-case($element), lower-case($term)))");
              sb.append(" \nthen \"true\"");
              sb.append(" else \"false\" \n");
              sb.append(" return if ($found = \"false\") \nthen () else $element");
              return sb.toString();
    Edited by: user3453165 on Jan 20, 2010 7:20 AM

    I am using Berkeley DB XML 2.5.13 on Windows XP. Yes, that's the complete error message. I am going to add my environment class and also part of the keyword search class that extends the environment, which will give you an idea of how I'm creating and using transactions. I don't explicitly use transactions. I used to explicitly use them, but I thought that was redundant. So when I create the db environment, I just call envc.setTransactional(true) and pass the EnvironmentConfig object (i.e. envc) to the environment to create the instance of XmlManager, and this is fine. Look below and you will see what I mean. Please let me know if you need more information. Thanks for your help. Appreciate it.
    Tue, 2010-01-19 10:58:27 PM
    com.sleepycat.dbxml.XmlException: Uncaught exception from C++ API, errcode = INTERNAL_ERROR
         at com.sleepycat.dbxml.dbxml_javaJNI.XmlQueryExpression_execute__SWIG_1(Native Method)
         at com.sleepycat.dbxml.XmlQueryExpression.execute(XmlQueryExpression.java:85)
         at epss.utilities.XQueryUtil.getQueryResultsByKeywords(XQueryUtil.java:166)
         at epss.search.XmlContentByKeywords.getDocumentContentByKeywords(XmlContentByKeywords.java:123)
         at com.epss.test.TestApp.main(TestApp.java:66)
    The environment class...
    package epss.core;
    import java.io.File;
    import java.io.FilenameFilter;
    import java.io.IOException;
    import com.sleepycat.db.DatabaseException;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;
    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlContainerConfig;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlManagerConfig;
    import epss.utilities.GlobalUtil;
    * Class used to open and close Berkeley Database environment.
    public class DatabaseEnvironment {
         /** The db env_. */
         private Environment dbEnv_ = null;
         /** The mgr_. */
         private XmlManager mgr_ = null;
         /** The opened container. */
         private XmlContainer openedContainer = null;
         /** The new container. */
         private XmlContainer newContainer = null;
         /** The path2 db env_. */
         private File path2DbEnv_ = null;
         /** Whether we are creating or opening database environment. */
         private int mode = -1;
         /** Constants for mode opening or mode creation. */
         private static final int OPEN_DB = 0, CREATE_DB = 1;
         * Set the Mode (CREATE_DB = 1, OPEN_DB = 0).
         * @param m
         * the m
         protected synchronized void setDatabaseMode(final int m) {
              if (m == OPEN_DB || m == CREATE_DB)
                   mode = m;
         * Gets the manager.
         * @return the manager
         protected synchronized XmlManager getManager() {
              return mgr_;
         * Gets the opened container.
         * @return the opened container
         protected synchronized XmlContainer getOpenedContainer() {
              return openedContainer;
         * Gets the new container.
         * @return the new container
         protected synchronized XmlContainer getNewContainer() {
              return newContainer;
         * Initialize database environment.
         * @throws Exception
         * the exception
         protected synchronized void doDatabaseSetup(String container)
                   throws Exception {
              switch (mode) {
              case OPEN_DB:
                   // check database home dir exist
                   if (!(isPathToDbExist(new File(DataConstants.DB_HOME)))) {
                        WriteLog.logMessagesToFile(DataConstants.DB_FILE_MISSING);
                        cleanup();
                        throw new IOException(DataConstants.DB_FILE_MISSING);
                   } else {
                        // Configure database environment
                        configureDatabaseEnv();
                        // Configuration settings for an XmlContainer instance
                        XmlContainerConfig config = new XmlContainerConfig();
                        // DB shd open within a transaction
                        config.setTransactional(true);
                        // Opens a container, returning a handle to an XmlContainer obj
                        openedContainer = getManager().openContainer(container, config);
                   break;
              case CREATE_DB:
                   // Set environment home
                   setDatabaseHome();
                   // Validate database home dir exist
                   if (isPathToDbExist(new File(DataConstants.DB_HOME))) {
                        // Configure database environment
                        configureDatabaseEnv();
                        // Configuration settings for an XmlContainer instance
                        XmlContainerConfig config = new XmlContainerConfig();
                        // Sets whether documents are validated
                        config.setAllowValidation(true);
                        // DB shd open within a transaction
                        config.setTransactional(true);
                        // The database container path
                        File file = new File(path2DbEnv_, container);
                        // Creates a container, returning a handle to
                        // an XmlContainer object
                        newContainer = getManager().createContainer(file.getPath(),
                                  config);
                        newContainer.setAutoIndexing(true);
                   break;
              default:
                   throw new IllegalStateException("mode value (" + mode
                             + ") is invalid");
         * Validate path2 db env.
         * @param path2DbEnv
         * the path2 db env
         * @return true, if checks if is path to db env
         private synchronized boolean isPathToDbExist(final File path2DbEnv) {
              boolean returnValue = false;
              if (!(path2DbEnv.isDirectory() || path2DbEnv.exists())) {
                   throw new IllegalArgumentException(DataConstants.DIR_ERROR
                             + path2DbEnv.getAbsolutePath()
                             + DataConstants.DOES_NOT_EXIST);
              } else {
                   path2DbEnv_ = path2DbEnv;
                   // Test whether db home exist when mode is 0
                   if (path2DbEnv_.exists() && mode == OPEN_DB) {
                        // Test whether all db files exist
                             returnValue = true;
                   } else {
                        // Test whether db home exist when mode is 1
                        if (path2DbEnv_.exists() && mode == CREATE_DB) {
                             returnValue = true;
              return returnValue;
         * Set database environment home.
         * @throws IOException
         * Signals that an I/O exception has occurred.
         private synchronized void setDatabaseHome() throws IOException {
              // The base dir
              File homeDir = new File(DataConstants.DB_HOME);
              // If db home delete fails, throw io exception
              if (!GlobalUtil.deleteDir(homeDir) && homeDir.exists()) {
                   WriteLog.logMessagesToFile(DataConstants.ERROR_MSG);
                   throw new IOException(DataConstants.ERROR_MSG);
              } else {
                   // If delete is successful, recreate db home
                   final boolean success = homeDir.mkdir();
                   // if home dir creation is successful
                   if (success) {
                        // Construct file object
                        File logDir = new File(homeDir, DataConstants.LOG_DIR);
                        // File dbHome = new File(homeDir, DataConstants.DB_DIR);
                        // Create log file
                        boolean logCreated = logDir.mkdir();
                        // Create db home
                        // boolean dbHomeCreated = dbHome.mkdir();
                        if (logCreated) {
                             WriteLog.logMessagesToFile(homeDir.getAbsolutePath()
                                       + " successfully created");
                   } else {
                        WriteLog.logMessagesToFile(homeDir.getAbsolutePath()
                                  + " failed to create");
         * Sets environment configuration and it's handlers.
         * @throws Exception
         * the exception
         private synchronized void configureDatabaseEnv() throws Exception {
              // Construct a new log file object
              File logDir = new File(path2DbEnv_, DataConstants.LOG_DIR);
              // The environment config
              EnvironmentConfig envc = new EnvironmentConfig();
              // estimate how much space to allocate
              // for various lock-table data structures
              envc.setMaxLockers(10000);
              // estimate how much space to allocate
              // for various lock-table data structures
              envc.setMaxLocks(10000);
              // estimate how much space to allocate
              // for various lock-table data structures
              envc.setMaxLockObjects(10000);
              // automatically remove log files
              // that are no longer needed.
              envc.setLogAutoRemove(true);
              // If environment does not exist create it
              envc.setAllowCreate(true);
              // For multiple threads or processes that are concurrently reading and
              // writing to berkeley db xml
              envc.setInitializeLocking(true);
              // This is used for database recovery from application or system
              // failures.
              envc.setInitializeLogging(true);
              // Provides an in-memory cache that can be shared by all threads and
              // processes
              envc.setInitializeCache(true);
              // Provides atomicity for multiple database access operations.
              envc.setTransactional(true);
              // location of logging files.
              envc.setLogDirectory(logDir);
              // set the size of the shared memory buffer pool
              envc.setCacheSize(500 * 1024 * 1024);
              // turn on the mutexes
              envc.setMaxMutexes(500000);
              // show error messages by BDB XML library
              envc.setErrorStream(System.err);
              // File db_home = new File(path2DbEnv_, "db");
              // Create a database environment
              dbEnv_ = new Environment(path2DbEnv_, envc);
              // Configure an XmlManager instance via its constructors
              XmlManagerConfig mgrConf = new XmlManagerConfig();
              mgrConf.setAllowExternalAccess(true);
              mgrConf.setAllowAutoOpen(true);
              // Create xml manager object
              mgr_ = new XmlManager(dbEnv_, mgrConf);
              mgr_.setDefaultContainerType(XmlContainer.NodeContainer);
         * This method is used to close the database environment freeing any
         * allocated resources that may have been held by it's handlers and closing
         * any underlying subsystems.
         * @throws DatabaseException
         * the database exception
         protected synchronized void cleanup() throws DatabaseException {
              if (path2DbEnv_ != null) {
                   path2DbEnv_ = null;
              if (newContainer != null) {
                   newContainer.delete();
                   newContainer = null;
              if (openedContainer != null) {
                   openedContainer.delete();
                   openedContainer = null;
              if (mgr_ != null) {
                   mgr_.delete();
                   mgr_ = null;
              if (dbEnv_ != null) {
                   dbEnv_.close();
                   dbEnv_ = null;
    // This is the keyword search class...
    public final class XmlContentByKeywords extends DatabaseEnvironment {
         public synchronized Document getDocumentContentByKeywords(String keywords)
                   throws Exception {
              // Encapsulates the results of a query that has been executed.
              XmlResults results = null;
              // The manager
              XmlManager manager = null;
              // Run the logic
              if (keywords != null) {
                   try {
                        // Flag to open db
                        final int OPEN_DB = 0;
                        // The keywords content
                        Document keywordsContent = null;
                        // Open db connection
                        try {
                             // Get database instance
                             setDatabaseMode(OPEN_DB);
                             // Open this container in db environment
                             doDatabaseSetup(DataConstants.CONTAINER);
                        } catch (Exception ex) {
                             // Create error node with error message
                             keywordsContent = Wrapper.createErrorDocument(ex
                                       .getMessage());
                             // Return the error node doc
                             return keywordsContent;
                        // Manager instance
                        // final XmlManager manager = getManager();
                        manager = getManager();
                        // Transaction instance
                        // final XmlTransaction txn_ = getTxn();
                        // The map
                        Map<String, Document> map = null;
                        // The temp map
                        Map<String, Document> tempMap = null;
                        // Return the query results
                        results = XQueryUtil.getQueryResultsByKeywords(keywords, manager);
    // use results here...
    // close results when done
    results.delete();
    results = null;
    manager.delete();
    manager = null;
    }

  • Testing a sql query

    Hi,
    We have MS SQL 2008, and I am creating a query, but I want to actually test the query without affecting any records.
    I am creating an update query, and I want to run it and see how many records will be updated first, so that in case there is an error I can make a change. Is there any way to test-run the query in SQL Server Management Studio to see how many records will change?
    Thank you.
    Beau

    Hello,
    Begin tran
    update table_name set id=1 where name='Shanky'
    --rollback
    Just make your statement atomic, like I showed you above, and don't run the rollback until you have checked your data. This won't be practical for a huge update, though.
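    A slightly fuller sketch of the same pattern (the table and column names are placeholders carried over from the example above):
    Begin tran
    update table_name set id=1 where name='Shanky'
    select @@ROWCOUNT as rows_affected   -- how many rows the update touched
    rollback tran                        -- undo the change once you have checked the data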

  • Query for updating prices if product IDs same

    I have a large Windows SQL 2000 database of products that need to have 2007 prices.
    I have another Windows SQL 2000 database that has the correct prices, which I can import into Windows SQL 2000 as a new table.
    I want to be able to UPDATE the prices where the product IDs match.
    What would an SQL query look like that would run in Query Analyzer?
    So, basically, I need a query that would compare the product IDs and update the price column with the new price.
    Neither database is in the same location. The one that holds the correct pricing is basically inaccessible to me. I can get a delimited file, though. I know how to get that in.
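    For reference, an update-with-join of that shape in SQL Server 2000 T-SQL might look like this (a sketch only; the names Products, NewPrices, ProductID, and Price are hypothetical stand-ins for your actual tables and columns):
    UPDATE p
    SET    p.Price = n.Price
    FROM   Products p
           INNER JOIN NewPrices n
                   ON n.ProductID = p.ProductID;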

    I've a table called Track, which has three columns named Part1, Part2 and Part3. I want all values of Part1 to be separated by a comma (,);
    No, don't go there.
    This breaks a fundamental point for relational databases: no repeating groups. A cell should hold an atomic value. And this is not only a matter of purism. Relational databases are designed from this principle, and breaking it means that you will need
    to write complex and highly inefficient code.
    The values in Part1 should be in a separate table, with one value per row.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SQL Query statistics?

    I'd like to know if there is an easy and convenient way for someone to execute an SQL query and calculate these measurements for the query:
    1) The I/O performed
    2) Number of read I/Os
    3) Number of write I/Os
    (such that 2 + 3 = 1)
    4) Number of buffered reads
    5) Query execution time
    6) Query CPU usage
    I've heard mention of such statistics in views such as V$OSSTAT, etc.
    But these views give the current values, and not the specific cumulative values, such as cumulative CPU usage time since the start of the query, cumulative I/O since the query began, etc...
    What is the right approach to this? Is it through the V$SESSION view? Would you go about it by storing the V$SESSION values before the query, running the query, and then getting the new V$SESSION values?
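    For reference, that snapshot idea is usually done with V$MYSTAT joined to V$STATNAME rather than V$SESSION; a sketch (these statistic names exist in current versions, but verify them in V$STATNAME on yours):
    SELECT sn.name, ms.value
    FROM   v$mystat ms
           JOIN v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('session logical reads', 'physical reads',
                       'physical writes', 'CPU used by this session');
    -- run this once before and once after the query, and subtract the two snapshots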

    Well, actually I stayed here a little longer to try your part 2 of the manual.
    It worked fine. Following is the output, originating from a spool file, of my first experiment:
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |       8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |       8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |       8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |       8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.14
    SQL>
    SQL> spool off
    SQL>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
    With the OLAP, Data Mining and Real Application Testing options
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_autotrace>my_part2_template.bat
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_autotrace>sqlplus /NOLOG @my_part2_template.sql
    SQL*Plus: Release 11.1.0.7.0 - Production on Qui Jul 9 22:00:39 2009
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
| Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
|   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |       8 |
|*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |       8 |
|   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |       8 |
|   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |       8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.01
    SQL>
    SQL> spool off
    SQL>
    SQL>
    SQL> -- rem End of Part 2
    SQL> show parameter statis
    NAME                                 TYPE        VALUE
    optimizer_use_pending_statistics     boolean     FALSE
    statistics_level                     string      ALL
    timed_os_statistics                  integer     5
    timed_statistics                     boolean     TRUE
SQL> quit
If you notice, at the end of the execution I print my session statistics environment. The statistics_level was set to ALL, as you advised. But the output I obtained seems a lot less complete than the one I got from using the autotrace feature.
Am I missing something? Could it have something to do with the fact that I am running as SYSTEM and not as SYSDBA? SYSTEM should have enough permissions to access its own session environment statistic values.
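A hedged aside (not from the thread): DBMS_XPLAN.DISPLAY_CURSOR only prints the columns named in its format string, so the optimizer columns that autotrace shows can be requested explicitly with format modifiers, for example:

    -- same call as in the spool above, widened so the cost and bytes
    -- estimates appear alongside the runtime statistics
    select * from table(dbms_xplan.display_cursor(null, null,
                        'ALLSTATS LAST +COST +BYTES'));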
May be it's just a language issue (I'm not a native speaker either) but your understanding of Oracle's read consistency model seems to be questionable.
No, you could be right; my understanding is questionable indeed. I am familiar with the general concepts of concurrency.
Things like reading uncommitted data:
T1 writes A; T2 reads A -> here is a conflict.
That alone is enough to lose the guarantee that the execution is serializable.
T1 reads A; T2 writes A and commits; T1 reads A again -> you get another conflict, the unrepeatable read.
And so on.
I am also familiar with the different isolation levels that database systems in general give you.
Conflict serializable, normally implemented with the strict two-phase locking mechanism.
Repeatable reads: you lock the rows you access during a transaction, so the data values you read are guaranteed not to change, but other entries can still be inserted into the table.
Read committed (which allows unrepeatable reads): only the data you modify is guaranteed to stay the same, because only your write locks are kept throughout the transaction. And so on.
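For concreteness (an aside, not in the original post): of the levels above, an Oracle session can actually request only two, via SET TRANSACTION or ALTER SESSION:

    -- Oracle implements READ COMMITTED (the default) and SERIALIZABLE;
    -- a session opts in per transaction...
    set transaction isolation level serializable;
    -- ...or for all of its transactions:
    alter session set isolation_level = serializable;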
    But anyway...
What you explained in your post is more or less what I was saying, only much more clearly than I put it.
For instance, suppose a thread T1 reads A and a thread T2 then writes to A.
In Oracle, thread T1 can read A again without getting an unrepeatable-read error. This is strange: in a conventional locking system you would immediately get an exception telling you that your vision of the system is inconsistent. But in Oracle you can do so, because Oracle tries to fetch from the undo tablespace a version of those same data blocks consistent with the view of the system you had when you first accessed them; it looks for a block image with an SCN no newer than your query's starting SCN. Or something like that. The only problem is that those before-images do not stay there indefinitely. Once a transaction commits, you have a time bomb in your hands (the classic ORA-01555 "snapshot too old" error); that is, whenever you are working with data that is not at its most current version.
But you are quite right, I have not read enough about Oracle concurrency. Still, I have a good enough understanding for my current needs.
I cannot know everything, nor do I want to :D.
My memory is very limited.
    My best regards, and deepest thanks for your time and attention.
    Edited by: user10282047 on Jul 9, 2009 2:41 PM

  • Self-referencing query

    Hello~
    This is probably a really simple question, but here is the query I am trying to run, with no success:
              <cfquery name="updatePage" datasource="#application.db#">
                    UPDATE
                        admin_nav_test
                    SET
                        lft =
                            <cfif lft GT parentLevel>
                                lft + 2
                            <cfelse>
                                lft
                            </cfif>
                    WHERE
                        rght >= #parentLevel#
                </cfquery>
    Basically, I want to reference the current value of lft in the table I am querying within the query itself, if that makes sense. However, all I end up with is an error that says "Variable LFT is undefined." I have tried using admin_nav_test.lft, and every other combo I can think of to self-reference this variable, but no luck! What am I missing? Thanks!

    I think you're probably reinventing the wheel unnecessarily here: have a look @ http://nstree.riaforge.org/
If you roll your own code - as part of a learning exercise perhaps - then make sure to transactionalise those queries, because you want both those updates to run as an atom.  Consider what would happen to your hierarchy data if a second ADD operation started being processed whilst your first one was still running.  So instead of this:
    (FIRST ADD) UPDATE LEFT
    (FIRST ADD) UPDATE RIGHT
    (SECOND ADD) UPDATE LEFT
    (SECOND ADD) UPDATE RIGHT
You could end up with this:
    (FIRST ADD) UPDATE LEFT
    (SECOND ADD) UPDATE LEFT
    (FIRST ADD) UPDATE RIGHT
    (SECOND ADD) UPDATE RIGHT
    This will stuff your tree up.
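In SQL terms, the two shifts of a single ADD might look like the sketch below - a sketch only, with the table and column names taken from the post above, and :parentLevel plus the offset of 2 as placeholders.  Note the CASE expression doing the conditional inside the SQL itself; that, incidentally, is also the fix for the "Variable LFT is undefined" error, because inside <cfquery> a bare lft is parsed as a ColdFusion variable rather than as a column:

    -- both halves of the shift, run as one atom: commit or roll back together
    UPDATE admin_nav_test
       SET lft = CASE WHEN lft > :parentLevel THEN lft + 2 ELSE lft END
     WHERE rght >= :parentLevel;

    UPDATE admin_nav_test
       SET rght = rght + 2
     WHERE rght >= :parentLevel;

    COMMIT;

In CFML, the equivalent is wrapping both <cfquery> tags in a single <cftransaction> block so the two UPDATEs stand or fall together.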
    I also recommend you make a generic function to pad the tree by a specified amount, rather than hardcoding "2".  When you come to want to be moving or deleting whole branches, the amount you will need to shift things will not necessarily be 2, but the operation will be the same other than the offset amount.  So you might as well factor it out into a separate function, and use that for all occasions.
    Even if you decide to roll your own solution (it is a good exercise), at least eyeball the stuff on RIAForge to see how it's done, and possibly flag some considerations that might not be immediately apparent.  I rolled my own solution for this - before the RIAForge implementation was done - and it took a lot of wailing and gnashing of teeth to get it right.  And unfortunately some of the bugs didn't get noticed until the code was in production.  Which caused... "problems".
    Adam

  • Complex query writing.

I have one table, Loan_mst.
Below is the data for the columns name, salary and loan.
Here one person can have multiple loans; the loans are separated by commas.
Example: CAR1, CAR2, HOME1 - three loans separated by commas.

    name     salary    loan
    robert   1000000   CAR1, CAR2, HOME1
    albert   2520000   CAR1, CAR2, HOME2, HOME3
Using a query, I want to write the data like below:

    name     salary    new column
    robert   1000000   CAR1
    robert   1000000   CAR2
    robert   1000000   HOME1
    albert   2520000   CAR1
    albert   2520000   CAR2
    albert   2520000   HOME2
    albert   2520000   HOME3

Please help me with the SQL query.

Why are you storing multiple values in the column LOAN? That is not the correct way to design your table: it is in violation of 1NF, which states that column values must be atomic.
That said, your requirement can be achieved like this. (The CONNECT BY LEVEL clause counts the commas to work out how many pieces each row splits into; the PRIOR name = name and PRIOR sys_guid() predicates keep each row's hierarchy separate so that CONNECT BY does not loop.)
    SQL> with t
      2  as
      3  (
      4  select 'robert' name, 1000000 salary, 'CAR1, CAR2, HOME1' loan
      5    from dual
      6  union all
      7  select 'albert' name, 2520000 salary, 'CAR1, CAR2, HOME2, HOME3' loan
      8    from dual
      9  )
    10  -- End of test data
    11  select name
    12       , salary
    13       , trim(regexp_substr(loan, '[^,]+', 1, level)) loan
    14    from t
    15  connect by level <= length(loan) - length(replace(loan, ',')) + 1
    16     and prior name = name
    17     and prior sys_guid() is not null
    18  /
    NAME       SALARY LOAN
    albert    2520000 CAR1
    albert    2520000 CAR2
    albert    2520000 HOME2
    albert    2520000 HOME3
    robert    1000000 CAR1
    robert    1000000 CAR2
    robert    1000000 HOME1
    7 rows selected.
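Sketching the 1NF redesign hinted at above (hypothetical table and column names), the loans would live one per row in a child table, which makes the splitting exercise unnecessary:

    -- hypothetical normalised design: one loan per row
    create table person
    ( name    varchar2(30) primary key
    , salary  number
    );

    create table person_loan
    ( name  varchar2(30) references person
    , loan  varchar2(20)
    , primary key (name, loan)
    );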

  • Query under InfoProvider

In RSA1 I can view my MultiProvider. I have a question: instead of using a normal InfoProvider, what is the advantage of using an ODS and a MultiProvider? Does it improve performance?
Also, how can I see the query list under a specific MultiProvider in RSA1? Thanks.

    Hi,
    MultiProvider :
    A MultiProvider is a type of InfoProvider that combines data from a number of InfoProviders and makes it available for analysis purposes. The MultiProvider itself does not contain any data. Its data comes entirely from the InfoProviders on which it is based. These InfoProviders are connected to one another by a union operation.
    DSO:
    A DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic) level.
    This data can be evaluated using a BEx query.
    A DataStore object contains key fields (such as document number, document item) and data fields that, in addition to key figures, can also contain character fields (such as order status, customer). The data from a DataStore object can be updated with a delta update into InfoCubes (standard) and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems.
    Unlike multidimensional data storage using InfoCubes, the data in DataStore objects is stored in transparent, flat database tables. The system does not create fact tables or dimension tables.
    Regarding Queries list in RSA1:
We cannot get the list of queries by InfoProvider in RSA1.
However, we can see the list of queries in table RSRREPDIR.
There, provide the following selections to get the list of queries related to a particular InfoProvider:
    INFOCUBE :  <infoprovider name>
    OBJVERS:    A
    Regards,
    Geetanjali
