Oracle in Loop Evaluation

Hi,
I have an assignment to check the consistency of an Oracle database under different conditions. I have a table with 200+ columns that I need to populate using a prepared statement in .NET, and I need to check how the database behaves for each loop value. The loop values start at 1, 5, 50, 100, ... up to 10000k; I will step through them in that order and need to record the start time and end time of each loop while checking the consistency of the database.
The loop values (1, 5, 50, 100, ... 10000k) will come from an XML file or a text file.
Among the 200+ columns, the table has 2 PKs.
CONDITIONS
1) Whether I need to restart the system (not the services) for every loop value; for example, run loop value 1, restart the system, run loop value 5, restart, and so on.
2) Whether I need to restart the services for every loop value; for example, run loop value 1, restart the services, run loop value 5, restart the services, and so on.
3) Whether I can run the loop values 1, 5, ... 10000k continuously in a single go, without any restart or other intervening process.
I am asking because I want to know whether Oracle behaves differently under these processes, and whether there is any performance improvement or variation between the three approaches. Will the start and end times be the same for all three, or will they vary per condition?
I need to measure the insertion time (the difference between the start time and the end time around each loop) for each loop value.
Thanks!
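The per-loop timing described above (record a start time, run the batch, record an end time) can be sketched as a small harness. This is a hedged Python sketch, not the actual .NET code: `insert_rows` is a stand-in for the real prepared-statement insert, and the batch sizes shown are just the first few values from the list.

```python
import time

def insert_rows(n):
    """Stand-in for the real prepared-statement insert of n rows.
    Replace the body with the actual database call."""
    for _ in range(n):
        pass  # simulated per-row work

def run_batches(batch_sizes):
    """Run each batch size once, recording elapsed wall-clock time."""
    results = []
    for n in batch_sizes:
        start = time.monotonic()
        insert_rows(n)
        end = time.monotonic()
        results.append((n, end - start))
    return results

# The batch sizes would normally come from the XML or text file mentioned above.
for n, elapsed in run_batches([1, 5, 50, 100]):
    print(f"batch={n} elapsed={elapsed:.6f}s")
```

Whether restarting the system or the services between batches changes these numbers is exactly what the three conditions would measure; the harness itself stays the same across all three.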

Billy, thanks for your response. To check consistency across all the scenarios I have started with this approach; later I will split the tables according to normal forms and do further analysis. First I need to check performance and scalability against my criteria, then simplify the work and repeat the same process from the beginning. To see how the data behaves under different conditions, we started with the straightforward method.
CONDITIONS
1) Whether I need to restart the system (not the services) for every loop value; for example, run loop value 1, restart the system, run loop value 5, restart, and so on.
2) Whether I need to restart the services for every loop value; for example, run loop value 1, restart the services, run loop value 5, restart the services, and so on.
3) Whether I can run the loop values 1, 5, ... 10000k continuously in a single go, without any restart or other intervening process.
I want to test all of the above conditions. How will data insertion behave if I shut down the server between loops? Will it be faster than the continuous loop, given that services running for a long time might cause slowness?
Note: while running this test, only the Oracle services will be running; all other services except the OS services will be stopped.
Thanks!

Similar Messages

  • Have any evaluation copy of Oracle Application 11i for download?

    As the subject says: is an Oracle Applications 11i evaluation copy available, and where can I download it?
    Chris

    You will have to place an order with Oracle and they will send you a copy of the latest ERP software. They may charge shipping and handling; if it is for your personal evaluation (rather than for the company) it may be free.
    Alternatively, if you are a registered user, go to metalink.oracle.com, or to store.oracle.com.
    Cheers

  • VMWare Evaluation Kit on Red Hat Linux

    I have downloaded RHEL3-DVD2-Linux-v1[1].4.zip (793,301KB). When I try to open that file with WinZip I get the message "Cannot open file: It does not appear to be a valid archive. If you downloaded this file, try downloading the file again."
    I have tried this several times without success.
    The Oracle Universal Installer is at "Copying files for 'Oracle on Linux Evaluation - Oracle Software 10.1.0.2.0'" and is waiting with a Disk Location messagebox message 'Please insert OracleOnLinuxEvaluation disk 2 into your disk drive or specify an alternate location.'
    Why does everything related to Oracle have to be difficult - even a simple download? Can someone please help with installing the evaluation kit on Windows (XP, 2000, or 2003)?

    Please post this question in the Oracle/VMware forum:
    Desktop Datacenter
    Cheers, OTN

  • Is Oracle free for Linux?

    Thought I'd ask; I'm just learning.
    If so, which one?

    No, it's not; you have to pay for it. The OTN versions you download from Oracle can only be used for evaluation purposes.

  • Installing Oracle DI

    I am installing Oracle DI for evaluation purposes. However, during install I receive an error: "TSS Service could not be found". It allows me to retry, ignore or cancel. If I select retry, I get the same error. Ignore allows me to continue, but the Data Profiling/Data Quality tools then do not work (Error: "TSS Metabase Manager: Server Connection Failed. couldn't open socket: connection refused").
    No matter which drive I try the install on, it fails. I contacted Tech Sales support. They insisted I remove all Oracle products and try the install again. Over the weekend I did so and received the same error message. I removed all Oracle server instances, removed services using SC, scanned the registry and removed any rogue entries, removed all folders (Oracle, OraHome_*), and used an automated registry cleaning tool called Tune-Up Utilities... and still get the same results.
    Running:
    WinXP 32-bit
    Dell Inspiron
    10 gig free on the C drive, > 700 gig free on G drive (tried to install to both).
    Any ideas would be greatly appreciated!
    Ben

    Any ideas on this one? I have it installed on a VM successfully but the laptop I am trying to install it on still refuses to work (TSS Scheduler error during setup).
    Ben

  • I need help on Oracle 9 file handling & utility package!

    Hello all:
    I need to write a short SQL program to update the database by reading an external data file. How do I configure the standard Oracle 9 packages to include the file handler? Please help me with details on the Oracle init files and some sample SQL commands to read the external file. I really appreciate the help. OracleUser.

    Tubby:
    Again, thank you. My answers to your feedback are Yes, No, Yes.
    I would have thought that using the file handle would get the job done.
    Anyhow:
    1) How do I loop through rows, say A1, A2, ..., A10,000, reading this external table and using the content (e.g. of A1) to compare against a certain column with many entries in an Oracle table, using LOOP/END LOOP and SELECT?
    2) What does Oracle do with this external table at the end of the session?
    3) Could I also create multiple external tables, say 2?
    Thanks, Tubby.
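As a rough illustration of question 1) - looping over values read from an external file and comparing each one against a column's entries - here is a hedged Python sketch. In Oracle itself this would be an external table or UTL_FILE read inside a PL/SQL loop; the file contents and column values below are invented purely for the example.

```python
import os
import tempfile

def compare_file_to_column(path, column_values):
    """Read one value per line from an external file and collect the
    values that also appear in the table column (loaded into a set)."""
    lookup = set(column_values)
    matches = []
    with open(path) as f:
        for line in f:
            value = line.strip()
            if value and value in lookup:
                matches.append(value)
    return matches

# Demo: a fake "external file" with row keys A1..A5, compared to a column.
fd, path = tempfile.mkstemp(text=True)
with os.fdopen(fd, "w") as f:
    f.write("A1\nA2\nA3\nA4\nA5\n")
print(compare_file_to_column(path, ["A2", "A4", "B9"]))  # -> ['A2', 'A4']
os.remove(path)
```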

  • RelRank Evaluator Tool download

    Where can I download RelRank Evaluator Tool now?
    On the EDEN's Tools and Utilities page there is following notice:
    "To Our Customers: Following Endeca's acquisition by Oracle we are re-evaluating our EDeN community assets to ensure they are in compliance with Oracle standards. Therefore, as of December 5th, 2011, the Tools and Utilities are no longer available for download from EDeN with the exception of the Deployment Template, which may now be found under the Product Downloads section. Please contact Endeca Customer Support with any questions or comments. Thank you for your understanding."
    And it is not available for download from the Oracle website either.
    Thanks

    You can download it from edelivery.oracle.com
    Oracle Endeca Relrank evaluator 2.1.2 for Generic Platform
    Edited by: Mandar Shastrakar on May 14, 2012 5:28 PM

  • Oracle Client 10g XE licensing inconsistencies

    Oracle Client XE downloads require to accept "OTN License Agreement for Oracle Database Express Edition"
    http://www.oracle.com/technetwork/licenses/xe-license-152020.html
    but the Debian package oracle-xe-client_10.2.0.1-1.0_i386.deb contains a different license in the file
    /usr/share/doc/oracle-xe-client/copyright
    These licenses do not match, and the former states:
    "BETA TRIAL LICENSE: ...provided to you by Oracle solely for evaluation purposes until January 31, 2006"
    Does that effectively mean that Oracle Client 10g XE cannot legally be used at all?

    OTN uses the same license for both client and server download. See
    http://www.oracle.com/technetwork/database/express-edition/downloads/102xelinsoft-102048.html
    In my question I asked only about license inconsistencies on client software available as oracle-xe-client_10.2.0.1-1.0_i386.deb. I did not check if oracle-xe_10.2.0.1-1.0_i386.deb or oracle-xe-universal_10.2.0.1-1.0_i386.deb have the same sort of license inconsistencies, because I am currently not interested in XE database server.

  • Oracle R12 Download.

    Dear All,
    From where can I download the Oracle R12 demo evaluation version with a demo database? Please guide me in this regard.
    Regards
    Hitesh Parsawala

    Hi,
    > I have downloaded all the files of Oracle R12 from edelivery.
    For what OS?
    > Now I have to install Oracle R12, so before starting the installation I wanted to know what my system prerequisites should be.
    Check the document referenced above; it covers the installation prerequisites for all operating systems (check the document for the OS you want to use).
    > i.e. What should the operating system be, what should the hard disk size be, and most importantly, what should the database be?
    For the hardware requirements, please refer to:
    Oracle Applications Installation Guide: Using Rapid Install
    http://download.oracle.com/docs/cd/B53825_03/current/acrobat/121oaig.pdf
    For the database, it will be installed as part of the Oracle Apps installation (do not install any database separately).
    Regards,
    Hussein

  • Suggested feature - wire-format aware binary extractor

    I would like to suggest another feature, which might or might not improve performance of querying unindexed attributes (possibly at the cost of some more bytes in the wire-format).
         We found that in case of querying caches with many keys, but no indexes on the queried attribute, the performance of the cache is quite worse than querying Oracle, for instance.
         The foremost reason of that, I believe, is that for this scenario, the primary node has to deserialize all the entries in the cache to extract a single attribute from it, compared to a database which knows the exact position and representation of that value in the database storage.
         If the cached entry did not have to be deserialized, but we knew how to pointedly extract only the particular attribute from the wire format, it could be way faster, and even some object allocations may be avoided compared to the current situation.
         I think that providing a new extraction method could speed up this scenario; all it requires is that the extractor be faster than deserializing the entire object. This can for example be provided by having the attributes extracted this way put in a place where they are much faster to find, probably at a fixed position in the wire format.
         As the entries (key and value) are stored in wire-format in the backing map, the wire-format can be passed without a byte array copy to the filter or to the extractor.
         I suggest the following interfaces could be used for such an extractor and a custom filter using similar techniques:
              public interface WireFormatAwareBinaryExtractor {
                   Object extractFromBinary(byte[] wireFormat, int wireFormatStartsAt, int wireFormatLength);
              }

              public interface WireFormatAwareBinaryFilter {
                   boolean evaluateBinaryValue(byte[] valueWireFormat, int valueStartsAt, int valueLength);
              }

              public interface WireFormatAwareBinaryEntryFilter {
                   boolean isKeyRequiredAsBinary();
                   boolean evaluateKeyAndBinaryValue(
                        Object key,
                        byte[] valueWireFormat, int valueStartsAt, int valueLength);
                   boolean evaluateBinaryKeyAndValue(
                        byte[] keyWireFormat, int keyStartsAt, int keyLength,
                        byte[] valueWireFormat, int valueStartsAt, int valueLength);
              }

         The two int parameters are necessary only if the byte[] contains more than the binary representation of the value or the key, but other bytes as well (e.g. the key and the value in the same array, in which case one of the array parameters can actually go; or it might contain a class name or class name cache index - this is an internal detail).
         In case of the entry filter, depending on the return value of the isKeyRequiredAsBinary() method, one or the other evaluate method would be invoked.
         After this, the developer is free to choose how he modifies the wire format of his cached classes, to suit this as much as possible, if he wants to use this feature.
         I don't know how much this may or may not help with the performance of queries, but I think it can help in some cases, for example when the index cost of a cache would be too prohibitive, compared to the wins it can provide (e.g. all queries to that attribute can be prefiltered to a number of candidate keys which is way less than the entire key set size, and the values in the attribute are so diverse that the index would consume almost as much space as (or even more than) the values themselves).
         Also it is a feature which is not too complex to implement, I believe.
         Also, it would be useful to have utility methods equivalent to the read methods on the ExternalizableHelper class to be provided but with a byte[] and int (start index) instead of a DataInput in the signature.
         Or otherwise the byte[] parameters in the methods might be replaced with a seekable DataInput wrapper containing the byte[] (in which the wrapped byte[] can be changed by the loop evaluating the entries).
         Best regards,
         Robert Varga
         Message was edited by:
         robvarga
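The fixed-position idea above can be illustrated outside Coherence in a few lines of Python: a single attribute is read straight out of the serialized bytes at a known offset, with no object deserialized. The byte layout here (an 8-byte header followed by a 4-byte big-endian int) is invented purely for the sketch.

```python
import struct

def extract_int_at(wire, offset):
    """Read a 4-byte big-endian int from serialized bytes at a fixed
    offset, without deserializing the rest of the record."""
    return struct.unpack_from(">i", wire, offset)[0]

# A fake wire format: 8 bytes of header, then the int attribute, then payload.
record = b"\x00" * 8 + struct.pack(">i", 42) + b"trailing payload"
print(extract_int_at(record, 8))  # -> 42
```

This is the same trade-off the post describes: extraction works only as long as the attribute sits at a known position, which is exactly the constraint the follow-up reply discusses.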

          > > We found that in case of querying caches with many keys, but no indexes on the queried attribute, the performance of the cache is quite worse than querying Oracle, for instance.
         >
         > That is to be expected.
         >
         > Please also note that Oracle (and other SQL
         > databases) are very refined and optimized
         > implementations of relational query engines, so it
         > should not be surprising when results are in their
         > favor for relational-style queries.
         >
          > > The foremost reason of that, I believe, is that for this scenario, the primary node has to deserialize all the entries in the cache to extract a single attribute from it, compared to a database which knows the exact position and representation of that value in the database storage.
         >
         > Yes, deserialization can be quite expensive.
         >
          Yes, I am aware of it, and accept it as a trade-off for getting objects in the cache. And I was not even speaking about the relational part of RDBMSes, only about the partially reproducible advantage of the known storage structure of a single table.
         And it was mostly just the preface text. :-)
          > > I think, that providing a new extraction method could speed up this scenario, all it requires is that the extractor be faster than deserializing the entire object. This can for example be provided by having the attributes extracted this way put to a place where it is much faster to be found, probably to a fixed position in the wire format.
         >
         > We considered fixed-position techniques, but decided
         > against them for a number of reasons, although for
         > the use case you describe it would likely work well.
         > Our decision was based on the following:
         >
         > 1) The fixed length fields would all have to be
         > placed first into the binary structure in order for
         > those data to have fixed offsets, and
         > 2) Only the fixed length fields of the outermost
         > object would be directly extractable.
         >
         Yes, but if the developer has the freedom to choose such an extractor, it cannot hurt to provide the feature as an alternative to ReflectionExtractor and extracted value-using filters, if it is sort of straightforward to implement.
         And I believe, actually not only the first fixed length fields would be extractable, but you could also get the same sort of speed advantage, as you have when using a SAX parser instead of a DOM builder, because you don't need to instantiate objects which you do not even need, and you can skip (or at least search for the end of it) stuff which you do not need without actually instantiating it.
          > In your example, you suggested:
          >
          > Object extractFromBinary(byte[] wireFormat, int wireFormatStartsAt, int wireFormatLength);
          >
         > A couple things to keep in mind:
         >
         > 1) The data stored in a backing map is a Binary
         > object (com.tangosol.util.Binary)
         > 2) A Binary object can provide a Binary object within
         > it without copying:
         >
          > Binary binSub = bin.toBinary(wireFormatStartsAt, wireFormatLength);
          >
         > 3) A Binary object can provide a BufferInput, which
         > is a combination of an Input Stream, a Data Input
         > implementation, etc.:
         >
          > ReadBuffer.BufferInput in = bin.toBinary(wireFormatStartsAt, wireFormatLength).getBufferInput();
          >
         > .. or simply:
         >
          > ReadBuffer.BufferInput in = bin.getBufferInput(wireFormatStartsAt, wireFormatLength);
          >
         > (See the JavaDoc documentation in com.tangosol.io
         > package.)
         >
         > 4) To extract an "int" value from offset 13, it's as
         > easy as:
         >
          > int n = bin.getBufferInput(13, 4).readInt();
          >
         > .. but since it's a known offset, you can simply do
         > this:
         >
          > int n = bin.getBufferInput().setOffset(13).readInt();
          >
          > > Also, it would be useful to have utility methods equivalent to the read methods on the ExternalizableHelper class to be provided but with a byte[] and int (start index) instead of a DataInput in the signature.
         >
         > BufferInput does that already, for the most part.
         >
          > > Or otherwise the byte[] parameters in the methods might be replaced with a seekable DataInput wrapper containing the byte[] (in which the wrapped byte[] can be changed by the loop evaluating the entries).
         >
         > That's exactly what BufferInput is.
         >
         > Check it out and tell me if we're talking about the
         > same thing.
         >
         I think we are. :-)
         >
         > In 3.2, an improved type of serialization will be
         > provided as well (as discussed previously on some of
         > the XmlBean threads), which may address this
         > question.
         >
         Can't wait to see it. When will it be released? In some post it was mentioned that 3.2 GA is tentatively scheduled for this July. Is there some update to that? :-)
         > Peace,
         >
         > Cameron.
         Thanks and best regards,
         Robert

  • How to write a procedure using collections

    How can we define collections inside a procedure and use them?
    I am getting an error executing this procedure:
    create or replace procedure p_collections
    is
      type calendar is varray(366) of date;
      c1 calendar;
    begin
      for i in 1 .. 100
      loop
        select sysdate bulk collect into c1 from dual;
      end loop;
      DBMS_OUTPUT.PUT_LINE(c1);
    end;
    Edited by: Rahul_India on Sep 12, 2012 2:54 PM
    Edited by: Rahul_India on Sep 12, 2012 3:07 PM

    That's because you only have one value in the array.
    Here is sample code that you can test in the SCOTT schema.
    set serveroutput on;
    declare
      cursor my_cur is
      select empno from emp;
      type my_type is table of emp.ename%type;
      type my_type2 is table of emp.empno%type;
      dizi my_type;
      dizi2 my_type2;
      query VARCHAR2(100);
    begin
      open my_cur;
      fetch my_cur bulk collect into dizi2;
      close my_cur;
      for i in dizi2.first..dizi2.last
      loop
        dbms_output.put_line(dizi2(i));
      end loop;
    end;
    /
    Or another sample using a LIMIT clause.
    The FETCH does a BULK COLLECT of all data into 'v'. It will either get all the data or none if there isn't any.
    The LOOP construct would be used when you have a LIMIT clause so that Oracle would 'loop' back to
    get the next set of records. Run this example in the SCOTT schema and you will see how the LIMIT clause works.
    I have 14 records in my EMP table.
    DECLARE
      CURSOR c1 IS (SELECT * FROM emp);
      TYPE typ_tbl IS TABLE OF c1%rowtype;
      v typ_tbl;
    BEGIN
      OPEN c1;
      LOOP                                                 --Loop added
        FETCH c1 BULK COLLECT INTO v LIMIT 3; -- process 3 records at a time
            -- process the first 3 max records
           DBMS_OUTPUT.PUT_LINE('Processing ' || v.COUNT || ' records.');
            FOR i IN v.first..v.last LOOP
                DBMS_OUTPUT.PUT_LINE(v(i).empno);
            END LOOP; 
        EXIT WHEN c1%NOTFOUND;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('All done');
    END;
    In the FOR loop you would do any processing of the nested table you want to do
    and could use a FORALL to do an INSERT into another table.
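The fetch-in-chunks behavior that the LIMIT clause gives PL/SQL can be mimicked in plain Python with a generator, for readers who want to experiment with the pattern outside the database; the row source and chunk size below are invented for the sketch.

```python
from itertools import islice

def fetch_in_chunks(rows, limit):
    """Yield successive lists of at most `limit` rows, mirroring
    FETCH ... BULK COLLECT INTO v LIMIT n inside a loop."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, limit))
        if not chunk:
            break  # like EXIT WHEN no more rows are fetched
        yield chunk

# 14 stand-in rows, like the 14 EMP records above, processed 3 at a time.
for chunk in fetch_in_chunks(range(1, 15), 3):
    print(f"Processing {len(chunk)} records.")
```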

  • Numeric overflow error using binary integer

    Hi experts,
    I am facing an issue while solving a numeric overflow error. After analyzing, we found that in the code below BINARY_INTEGER is causing the issue, as the input exceeds its range. I tried to replace BINARY_INTEGER with varchar2(20), but it says:
    "Error(580,20): PLS-00657: Implementation restriction: bulk SQL with associative arrays with VARCHAR2 key is not supported."
    We need to remove this BINARY_INTEGER, but I don't know how. Can anybody suggest what code change is required here? Thanks in advance. Below is the code:
    ===================================================
       PROCEDURE UpdateCost_ (
          p_Cost_typ IN OUT NOCOPY CM_t
       )
       IS
          TYPE ObjektIdTab_itabt IS TABLE OF ObjektId_tabt INDEX BY BINARY_INTEGER;
          v_cost_IdTab_itab ObjektIdTab_itabt;
          v_CM_ID INTEGER := p_Cost_typ.costm.CM_ID;
       BEGIN
          SELECT CAST(MULTISET
                  (SELECT Costwps.CMKostId
                     FROM CM_Pos_r NRPos,
                          CMK_z_r costzpps,
                          CMG_Cost_v Costwps
                    WHERE NRPos.CM_ID = v_CM_ID
                      AND NRPos.SNRId_G = SNRCT.SNRPos.SNRId_G
                      AND costzpps.CM_ID = NRPos.CM_ID
                      AND costzpps.CMSNRPosId = NRPos.CMSNRPosId
                      AND costzpps.Kost_s = Kost.Costnzl.Kost_s
                      AND Costwps.CMKz_Id = costzpps.CMKz_Id
                      AND Costwps.TypCode NOT IN
                          (SELECT kw.TypCode
                             FROM TABLE(Kost.Kostwt_tab) kw)
                  ) AS ObjektId_tabt)
            BULK COLLECT
            INTO v_cost_IdTab_itab
            FROM TABLE(p_Cost_typ.SNR_tab) SNRCT,
                 TABLE(SNRCT.Kost_tab) Kost;
          FOR v_i IN 1 .. v_cost_IdTab_itab.COUNT LOOP
             FOR v_j IN 1 .. v_cost_IdTab_itab(v_i).COUNT LOOP
                DELETE FROM CMG_Cost_v WHERE CMKostId = v_cost_IdTab_itab(v_i)(v_j);
             END LOOP;
          END LOOP;
       END;
    ===================================================

    Thanks for your reply. I tried INDEX BY NUMBER, but Oracle says it is not a valid use of INDEX BY. I also tried removing the INDEX BY clause, but in that case we get no data at all in the FOR loop. Some people say to use the EXTEND method, but again I am not sure how. Can you please let me know the code for this?
    I know you are trying to help by you need to STOP telling us what problem you have and SHOW US. Saying 'Oracle says' is useless. Post EXACTLY what code you are using, the EXACT steps you are using to compile that code and the EXACT result that you are getting.
    You also made no comment about the 'overflow' issue. A BINARY_INTEGER (PLS_INTEGER) has a very large range of values:
    http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/datatypes.htm#i10726
    >
    The PLS_INTEGER data type stores signed integers in the range -2,147,483,648 through 2,147,483,647, represented in 32 bits.
    >
    If you are trying to create a collection of more than 2 BILLION of anything you have a serious problem with either WHAT you are trying to do or HOW you are trying to do it. Your 'overflow' issue is more likely a symptom that you are really running out of memory. You should ALWAYS have a LIMIT clause when you do BULK COLLECT statements.
    Also see this section in that doc: SIMPLE_INTEGER Subtype of PLS_INTEGER
    You need to address your LIMIT issue first and then address any other issues that arise from actually executing the code.
    Then see the section 'SELECT INTO Statement with BULK COLLECT Clause' in that doc
    http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/tuning.htm#BABEIACI
    That section has an example that shows you do NOT need to use an INDEX BY clause to create collections as you are trying to do. So your not 'getting any data in for loop' is NOT related to the lack of that clause.
    That example also shows you that you do NOT use 'extends' when doing BULK COLLECT. The bulk collection automatically extends the collection as needed to hold the entire results (assuming you don't run out of memory for 2 BILLION things).
    Example 12-22 in that same doc shows the proper way to use a double loop and a BULK COLLECT with a LIMIT clause
    http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/tuning.htm#BABCCJCB
    Here is very simple sample code you can use in the SCOTT schema to understand how the double loop and LIMIT clauses work together.
    >
    The FETCH does a BULK COLLECT of all data into 'v'. It will either get all the data or none if there isn't any.
    The LOOP construct would be used when you have a LIMIT clause so that Oracle would 'loop' back to
    get the next set of records. Run this example in the SCOTT schema and you will see how the LIMIT clause works.
    I have 14 records in my EMP table.
    DECLARE
      CURSOR c1 IS (SELECT * FROM emp);
      TYPE typ_tbl IS TABLE OF c1%rowtype;
      v typ_tbl;
    BEGIN
      OPEN c1;
      LOOP                                                 --Loop added
        FETCH c1 BULK COLLECT INTO v LIMIT 3; -- process 3 records at a time
            -- process the first 3 max records
           DBMS_OUTPUT.PUT_LINE('Processing ' || v.COUNT || ' records.');
            FOR i IN v.first..v.last LOOP
                DBMS_OUTPUT.PUT_LINE(v(i).empno);
            END LOOP; 
        EXIT WHEN c1%NOTFOUND;
      END LOOP;
      DBMS_OUTPUT.PUT_LINE('All done');
    END;
    In the FOR loop you would do any processing of the nested table you want to do
    and could use a FORALL to do an INSERT into another table.
    >
    I strongly suggest that you modify your code to work with a VERY SMALL set of data until it works properly. Then expand it to work with all of the data needed, preferably by using an appropriate LIMIT clause of no more than 1000.

  • Querying in the xmltype table

    Apart from XQuery, I know Oracle also provides XPath-based search.
    Can anyone tell me what's wrong with the following statements? They return "no rows selected".
    Thanks!
    e.g.1
    select extract(value(it),'/Cap/@ctitle').getStringVal() as CTITLE
    from ord_xmltype_tbl,
    table (xmlsequence(extract(object_value,'Ordinace/Chapter/Cap'))) it
    where contains (object_value,'Commonwealth INPATH (/Ordinance/Chapter/Cap/Section/Content/English)') > 0;
    e.g.2
    select extract(object_value,'/Ordinance/Chapter/Cap/@ctitle').getStringVal() as CTITLE
    from ord_xmltype_tbl,
    table (xmlsequence(extract(object_value,'/Ordinace/Chapter/Cap'))) it
    where existsnode(object_value,'//English[contains(.,"SHORT TITLE AND APPLICATION")>0]')>0;
    e.g.3
    select extract(value(it),'/Cap/@ctitle').getStringVal() as CTITLE
    from ord_xmltype_tbl,
    table (xmlsequence(extract(object_value,'/Ordinace/Chapter/Cap'))) it
    where existsnode(object_value,'//English[ora:contains(.,"SHORT TITLE AND APPLICATION")>0]', 'xmlns:ora="http://xmlns.oracle.com/xdb"')>0;
    And what index should I create on the XMLType table in order to speed up the search?
    This is the skeleton of the XMLType table:
    <Ordinance xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://localhost:8081/public/hkliss/ordinance.xsd">
    <Chapter ctitle="致命意外條例" etitle="FATAL ACCIDENTS ORDINANCE" id="22">
    <Cap ctitle="致命意外條例" etitle="FATAL ACCIDENTS ORDINANCE" id="22">
    <Section ctitle="詳題" etitle="Long title" id="0">
    <VersionDate>1997-06-30</VersionDate>
    <Content>
    <Chinese></Chinese>
    <English></English>
    </Content>
    </Section>
    </Cap>
    <Cap>....</Cap>
    <Cap>....</Cap>
    <Cap>....</Cap>
    <Cap>....</Cap>
    <Cap>....</Cap>
    <Cap>....</Cap>
    </Chapter>
    </Ordinance>
    Thanks, experts!
    I'm held up in my project right now.

    (Please read a little from the manuals... you have already spent so much time asking that if you had read the manuals, you would also understand the things pointed out to you, and it would have given you a faster result...)
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/functions051.htm#i1006712
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/functions048.htm#i1006711
    http://download-uk.oracle.com/docs/cd/B19306_01/server.102/b14200/functions052.htm#i1131042
    The optional namespace_string must resolve to a VARCHAR2 value that specifies a default mapping or namespace mapping for prefixes, which Oracle uses when evaluating the XPath expression(s).
    Is xmlns:ora="http://xmlns.oracle.com/xdb" the right namespace mapping for evaluating your XPath expression? I don't think so...
    If I take your examples shown here (http://forums.oracle.com/forums/message.jspa?messageID=1765571#1765571) as a starting point...
    the query:
    select extract(value(it),'/Cap/@ctitle').getStringVal() as CTITLE
    from ord_xmltype_tbl,
    table (xmlsequence(extract(object_value,'/Ordinace/Chapter/Cap'))) it
    where existsnode(object_value,'//English[ora:contains(.,"SHORT TITLE AND APPLICATION")>0]', 'xmlns:ora="http://xmlns.oracle.com/xdb"')>0;
    would resolve in:
    select extract(value(it),'/Cap/@ctitle').getStringVal() as CTITLE
    from ord_xmltype_tbl,
    table (xmlsequence(extract(object_value,'/Ordinace/Chapter/Cap', 'xmlns:ora="http://localhost:8081/public/hkliss/ordinance.xsd"'))) it
    where existsnode(object_value,'//English[ora:contains(.,"SHORT TITLE AND APPLICATION")>0]', 'xmlns:ora="http://localhost:8081/public/hkliss/ordinance.xsd"')>0;
    But I would start with a simple "extract" or "existsnode" statement, see if I get data output, and then build my SQL statement up from there.
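As a quick way to sanity-check the path and predicate outside the database, the same ctitle extraction can be tried against a trimmed copy of the skeleton with Python's ElementTree. This only validates the XPath logic and is not a substitute for the Oracle functions; the XML below is a simplified, English-only version of the skeleton in the question.

```python
import xml.etree.ElementTree as ET

doc = """<Ordinance>
  <Chapter ctitle="FATAL ACCIDENTS ORDINANCE" id="22">
    <Cap ctitle="FATAL ACCIDENTS ORDINANCE" id="22">
      <Section etitle="Long title" id="0">
        <Content><English>SHORT TITLE AND APPLICATION</English></Content>
      </Section>
    </Cap>
  </Chapter>
</Ordinance>"""

root = ET.fromstring(doc)
# Equivalent of extracting /Ordinance/Chapter/Cap/@ctitle for Caps whose
# English content contains the search phrase.
titles = [cap.get("ctitle")
          for cap in root.findall("./Chapter/Cap")
          if any("SHORT TITLE" in (e.text or "") for e in cap.iter("English"))]
print(titles)  # -> ['FATAL ACCIDENTS ORDINANCE']
```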

  • Query in native SQl

    I'm new to Native SQL commands.
    Can somebody tell me how to fine-tune this code?
    Is there anything like FOR ALL ENTRIES in these commands as well? We're using an Oracle database.
    loop at it_cus1.
          EXEC SQL PERFORMING APPEND_IT_CUS2.
            SELECT SAP_MASTNR, ANK_MASTNR
            INTO
            :IT_CUS2-KUNNR, :IT_CUS2-DATLT
            FROM NOVARTIS.TABLE_ADRESS@ANKD
              WHERE ANK_MASTNR = :IT_CUS1-DATLT
          ENDEXEC.
    endloop.
    FORM append_it_cus2 .
      APPEND it_cus2.
      CLEAR it_cus2.
    ENDFORM.

    You could try cursor processing; it might be more efficient:
    EXEC SQL.
      OPEN C FOR
    SELECT SAP_MASTNR, ANK_MASTNR
    FROM NOVARTIS.TABLE_ADRESS@ANKD
    WHERE ANK_MASTNR = :IT_CUS1-DATLT
    ENDEXEC.
    loop at it_cus1.
      EXEC SQL.
        FETCH NEXT C into
        :IT_CUS2-KUNNR, :IT_CUS2-DATLT
      ENDEXEC.
      APPEND it_cus2.
      CLEAR it_cus2.
    ENDloop.
    EXEC SQL.
      CLOSE C
    ENDEXEC.

  • Does the entire query always have to execute?

    We have a large (100-line) query that is used for detecting bad data. The query looks like:
    select * from table where
    clause1 is true -- for example, start_date is null
    OR clause2 is true -- end_date is null
    OR clause3 is true -- start_year < 1970
    . 100 more clauses
    It looks to me like every clause is being evaluated for every row, even though the first clause that evaluates to true is enough to select the row. I believe every clause is being evaluated because we often get failures from the toDate() function for columns that are null, even though there are null checks before the toDate() call.
    Is it possible to make the query evaluation end "early"?
    Thanks for any help or advice,
    -=beeky

    Hi,
    As Justin said, Oracle will quit evaluating your WHERE clause as soon as it finds a condition that is TRUE. If you have evidence that it is evaluating some condition even though an earlier condition is TRUE, then it must be evaluating the conditions in a different order. Make sure you have up-to-date statistics and appropriate indexes, so the optimizer can make an intelligent choice about which conditions to evaluate first.
    You can force the order by putting the conditions in a CASE statement.
    For example, say you're now doing this:
    select  *
    from    table_x
    where   start_date is null
       OR   end_date is null
       OR   start_year < 1970
    ;
    If you know that the third condition (start_year < 1970) is the most restrictive, and you therefore want to make sure it is evaluated first, you can rewrite the query like this:
    select  *
    from    table_x
    where   CASE
                WHEN  start_year < 1970   THEN 1
                WHEN  start_date is null  THEN 1
                WHEN  end_date is null    THEN 1
                                          ELSE 0
            END = 1
    ;
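The guarantee the CASE rewrite buys - one condition always tested before another - can be seen in miniature in any language with ordered evaluation. This hedged Python sketch mirrors the toDate()-on-NULL problem: the null check is forced to run first, so the year extraction never touches a missing value (the function name and date format are invented for the example).

```python
def is_bad_row(start_date):
    """Ordered checks, like the CASE expression: the NULL test is
    guaranteed to run first, so the year parse never sees None."""
    if start_date is None:          # WHEN start_date IS NULL THEN 1
        return True
    if int(start_date[:4]) < 1970:  # WHEN start_year < 1970 THEN 1
        return True
    return False                    # ELSE 0

print(is_bad_row(None))          # -> True
print(is_bad_row("1960-05-01"))  # -> True
print(is_bad_row("1995-02-10"))  # -> False
```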
