Multiple database queries and forwarding

I need to make multiple queries against different tables. Is it best practice to create multiple result sets and then close the connection?
I want to pass several values from the initial JSP page, plus the fields from the 2nd page (the DB query), to another JSP page. If I use a forward from the 2nd page, will all of the variables be passed to the 3rd?

Guys,
The rule for Database resources is this:
"Obtain your resources as late as possible and release them as early as possible".
What this means is that if your code needs to query multiple tables, do not query them until you need them; that is: open ResultSet 1, process it, close it; open ResultSet 2, process it, close it; and so on.
Please note that since opening a database connection is very expensive, it is recommended to use connection pooling and use one connection per page.
Srini
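
A minimal sketch of both ideas - query and close in sequence on one pooled connection, then forward with request attributes. The JNDI name, tables, columns, and attribute names here are all invented for illustration:

import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class OrderPageServlet extends HttpServlet {
    private DataSource pool;

    @Override
    public void init() throws ServletException {
        try {
            // Hypothetical pool name; configured in the container.
            pool = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/myPool");
        } catch (NamingException e) {
            throw new ServletException(e);
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        int customerId = Integer.parseInt(request.getParameter("customerId"));
        try (Connection con = pool.getConnection()) {        // one connection per page
            String name;
            // Open the first result set only when needed, close it right away.
            try (PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customers WHERE id = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    name = rs.next() ? rs.getString(1) : null;
                }
            }
            int orderCount;
            // Second table, same pattern: obtain late, release early.
            try (PreparedStatement ps = con.prepareStatement(
                     "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    orderCount = rs.getInt(1);
                }
            }
            // A forward stays within the same request, so these attributes
            // (and the original request parameters) are visible on page 3.
            request.setAttribute("customerName", name);
            request.setAttribute("orderCount", orderCount);
            request.getRequestDispatcher("/page3.jsp").forward(request, response);
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}

To answer the second question directly: yes, request parameters and request attributes survive a RequestDispatcher.forward because the forward happens within the same request; you only need session or application scope if the values must outlive it.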

Similar Messages

  • Store and Forward Azure Database?

    I'm working on mobile applications (Win Phone 8.1 and Win Store 8.1) that allow our internal crew members to clock in, clock out, and report daily work activities. This data will be stored in Azure mobile databases; however, our users will routinely go out of mobile connectivity throughout the day.
    What would the best practice be to store and forward this data so they can continue working and then sync when they get back into a connectivity area?

    Hello,
    Do you want to use the Windows Azure Mobile Services offline data support with your mobile applications, as in the Cotega post above?
    The Azure Mobile Services SDK lets a mobile application store table operations in a local data store while offline and later sync the changes to the mobile service when connectivity returns.
    Reference:
    Azure Mobile Services - Get Started with Offline
    Deep dive on the offline support in the managed client SDK
    Regards,
    Fanny Liu
    TechNet Community Support
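
    The general store-and-forward shape, independent of any particular SDK, is a local queue that is drained on reconnect. A minimal Java sketch with invented names follows; the Azure Mobile Services offline sync tables implement this pattern for you, with durable local storage instead of an in-memory queue:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.Consumer;

    // Hypothetical store-and-forward buffer: work items accumulate locally
    // while offline and are drained in order once connectivity returns.
    public class StoreAndForwardQueue<T> {
        private final Deque<T> pending = new ArrayDeque<>();

        // Called for every clock-in, clock-out, or activity report, online or not.
        public synchronized void record(T item) {
            pending.add(item);
        }

        // Called when the device regains connectivity.
        public synchronized void drain(Consumer<T> uploader) {
            while (!pending.isEmpty()) {
                uploader.accept(pending.poll()); // push to the backing store
            }
        }
    }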

  • Can you merge two user accounts on a MacBook? My wife created a user on her new MacBook, then inadvertently created a second one when using the migration tool. The 1st a/c has her Office 365 install, yet the 2nd has her iTunes database, docs and contacts.

    Can you merge two user accounts on a MacBook? My wife created a new user on her new MacBook Air, then inadvertently created a second one when using the migration tool. The 1st a/c has her Office 365 install, while the 2nd has her iTunes database, docs and contacts. What is the best way forward to get everything into the one account? I'm not sure if Office 365 will allow another installation into the second account; otherwise I would just do that and delete the first, if that is possible.

    There is no merge, but you can move data from one account to another via the Shared folder. Data is copied from Shared. Watch your free space when copying; these are large files. Do one at a time if you are on a small drive. After making a copy, delete it from the other user before you start the next copy.
    Office 365 installs in the main Applications folder and is available to all users on the computer. Activation is tied to the drive, not the user.

  • Backing Up Database Table and Records in Oracle 10g

    Hi All,
    I created a database for my company with Oracle 10g Database Server and want to back up all my database tables and records both within the database (i.e. creating another username inside the database and transferring them to it) and outside it (i.e. transferring them to another destination outside the database server). Could you please instruct me on how to achieve this?
    I look forward to hearing from you all.
    Thank you.
    Jc

    Hi, use the RMAN utility.
    Do this:
    rman target sys/*** nocatalog
    run {
      allocate channel t type disk;
      backup
        format '/app/oracle/backup/%d_t%t_s%s_p%p'
          (database);
      release channel t;
    }
    Also read the Backup and Recovery guide at http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/toc.htm
    regards

  • My back and forward buttons do not work, and the refresh button does not work either. I have reloaded the latest version of Firefox and that did not help. HELP please.

    My back and forward buttons do not work; the refresh button does not work either. I have reloaded the latest version of Firefox and that did not help. All of the navigation toolbar worked until I updated Firefox; now if I click on a link in Yahoo, for instance, I cannot get back to my Yahoo home page without reloading Firefox. HELP please.

    Start Firefox in <u>[[Safe Mode]]</u> to check whether one of the extensions or hardware acceleration is causing the problem (switch to the DEFAULT theme: Firefox (Tools) > Add-ons > Appearance/Themes).
    *Don't make any changes on the Safe mode start window.
    *https://support.mozilla.com/kb/Safe+Mode
    A possible cause is a problem with the file places.sqlite that stores the bookmarks and the history.
    *http://kb.mozillazine.org/Bookmarks_history_and_toolbar_buttons_not_working_-_Firefox
    *https://support.mozilla.com/kb/Bookmarks+not+saved#w_places-database-file

  • Requiring several database queries for my GUI - where to put the reads?

    Hi all,
    I am to create a GUI with a couple of drop-downs.
    These are populated from database queries, as well as the main program reading from the database based on all inputs in the GUI.
    Should I put all database reads into a class as separate methods?
    e.g.,
    one method for the database read to populate the first combo box,
    a second method to take the choice from combo box 1 and read from the database to populate combo box 2,
    a third method to then perform the main database read based on the GUI selections from the above two methods.
    Is this the 'right' way to do it?
    My GUI would then be in a separate class.
    Or should I separate the 3 database reads into 3 different classes?
    thanks in advance,
    Matt

    BigDaddyLoveHandles wrote:
    walker8 wrote:
    You might also read some info on three-tier design using MVC (Model, View, Controller) if I recall correctly.
    Here's an article by Martin Fowler on GUI architecture: [http://martinfowler.com/eaaDev/uiArchs.html]
    Awesome! That's just what I needed. I haven't read all of it yet, but it gives me ideas about the classes I need.
    regards
    walker8
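
    A rough sketch of the separate-methods approach Matt describes, which also maps onto the "model" part of MVC. Class, table, and column names are invented for illustration:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical data-access class: one method per read, the GUI stays in its own class.
    public class ComboBoxDao {
        private final Connection con;

        public ComboBoxDao(Connection con) { this.con = con; }

        // Populates combo box 1.
        public List<String> loadCategories() throws SQLException {
            return queryStrings("SELECT name FROM categories", null);
        }

        // Populates combo box 2 from the choice made in combo box 1.
        public List<String> loadItems(String category) throws SQLException {
            return queryStrings("SELECT name FROM items WHERE category = ?", category);
        }

        private List<String> queryStrings(String sql, String param) throws SQLException {
            List<String> result = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                if (param != null) ps.setString(1, param);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) result.add(rs.getString(1));
                }
            }
            return result;
        }
    }

    Keeping the three reads as methods of one class like this is usually enough; three separate classes would only pay off if each read grew its own substantial logic.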

  • Query on Database Export and Import

    Hi Techies,
    Currently we are running our database on Oracle 10g and SAP on 4.x, and the OS is HP-UX 11.11.
    We have a plan to migrate our hardware from PA-RISC to Itanium, and at the time of the production migration we are planning to use the export and import method to reclaim free space.
    Our plan is as below:
    We will not touch the original production system; we will just restore the DB onto the new server.
    After the restore we will create space on the new server equivalent to our DB size.
    Then we will perform a DB export from the new system into the null space,
    then import the DB back into the same system.
    Here are my queries:
    1) Is it possible to export and import the database from/to null space?
    2) We have a 2T database and good resources (32G RAM, 12 CPUs, etc.). How much time can we expect the export and import to take?
    3) What challenges can we expect?
    4) What is the minimum amount of free space we can expect to gain with this option?
    Regards,
    Nick Loy

    So with test runs I can expect rapid speed in DB export and import (1T/h)... if I have a good system, then the database export and import completes within 2 hrs (database size is 2T)?
    Well, 1 TB/h is at the very top of expectations here; you should be careful. I did an export/import of a 1.5 TB database of an ERP system lately. We did a parallel export (40 processes) / import (20 processes) using distmon; the source was HP IA64, the target Linux x86_64. The disks were midrange SAN storage systems on both sides. After tuning we managed to do it in 6-7 hrs.
    But in your case, if you only have one system, this could mean you have to drop the source DB first and then recreate the target DB on the same disks. The creation of the 1-2 TB database files alone can take more than an hour; besides that, you don't have an easy fallback.
    If you have a test system that is comparable from a size and hardware perspective, I suggest you try a test export to get a feeling for it.
    What about an online reorg of the database? What would be the best way to get free space with minimum downtime?
    Theoretically you should be able to gain more or less the same amount of space doing online reorgs. The advantage is less downtime; the downside is that the reorgs will run over a longer time period and put additional load on the system.
    Cheers, Michael

  • Secondary database performance and CacheMode

    This is somewhat a follow-on thread related to: Lock/isolation with secondary databases
    In the same environment, I'm noticing fairly low-performance numbers on my queries, which are essentially a series of key range-scans on my secondary index.
    Example output:
    08:07:37.803 BDB - Retrieved 177 entries out of index (177 ranges, 177 iters, 1.000 iters/range) in: 87ms
    08:07:38.835 BDB - Retrieved 855 entries out of index (885 ranges, 857 iters, 0.968 iters/range) in: 346ms
    08:07:40.838 BDB - Retrieved 281 entries out of index (283 ranges, 282 iters, 0.996 iters/range) in: 101ms
    08:07:41.944 BDB - Retrieved 418 entries out of index (439 ranges, 419 iters, 0.954 iters/range) in: 160ms
    08:07:44.285 BDB - Retrieved 2807 entries out of index (2939 ranges, 2816 iters, 0.958 iters/range) in: 1033ms
    08:07:50.422 BDB - Retrieved 253 entries out of index (266 ranges, 262 iters, 0.985 iters/range) in: 117ms
    08:07:52.095 BDB - Retrieved 2838 entries out of index (3021 ranges, 2852 iters, 0.944 iters/range) in: 835ms
    08:07:58.253 BDB - Retrieved 598 entries out of index (644 ranges, 598 iters, 0.929 iters/range) in: 193ms
    08:07:59.912 BDB - Retrieved 143 entries out of index (156 ranges, 145 iters, 0.929 iters/range) in: 32ms
    08:08:00.788 BDB - Retrieved 913 entries out of index (954 ranges, 919 iters, 0.963 iters/range) in: 326ms
    08:08:03.087 BDB - Retrieved 325 entries out of index (332 ranges, 326 iters, 0.982 iters/range) in: 103ms
    To explain those numbers, a "range" corresponds to a sortedMap.subMap() call (ie: a range scan between a start/end key) and iters is the number of iterations over the subMap results to find the entry we were after (implementation detail).
    In most cases, the iters/range is close to 1, which means that only 1 key is traversed per subMap() call - so, in essence, 500 entries means 500 ostensibly random range-scans, taking only the first item out of each rangescan.
    However, it seems kind of slow - 2816 entries is taking 1033ms, which means we're really seeing a key/query rate of ~2700 keys/sec.
    Here's performance profile output of this process happening (via jvisualvm): https://img.skitch.com/20120718-rbrbgu13b5x5atxegfdes8wwdx.jpg
    Here's stats output after it running for a few minutes:
    I/O: Log file opens, fsyncs, reads, writes, cache misses.
    bufferBytes=3,145,728
    endOfLog=0x143b/0xd5b1a4
    nBytesReadFromWriteQueue=0
    nBytesWrittenFromWriteQueue=0
    nCacheMiss=1,954,580
    nFSyncRequests=11
    nFSyncTime=12,055
    nFSyncTimeouts=0
    nFSyncs=11
    nFileOpens=602,386
    nLogBuffers=3
    nLogFSyncs=96
    nNotResident=1,954,650
    nOpenFiles=100
    nRandomReadBytes=6,946,009,825
    nRandomReads=2,577,442
    nRandomWriteBytes=1,846,577,783
    nRandomWrites=1,961
    nReadsFromWriteQueue=0
    nRepeatFaultReads=317,585
    nSequentialReadBytes=2,361,120,318
    nSequentialReads=653,138
    nSequentialWriteBytes=262,075,923
    nSequentialWrites=257
    nTempBufferWrites=0
    nWriteQueueOverflow=0
    nWriteQueueOverflowFailures=0
    nWritesFromWriteQueue=0
    Cache: Current size, allocations, and eviction activity.
    adminBytes=248,252
    avgBatchCACHEMODE=0
    avgBatchCRITICAL=0
    avgBatchDAEMON=0
    avgBatchEVICTORTHREAD=0
    avgBatchMANUAL=0
    cacheTotalBytes=2,234,217,972
    dataBytes=2,230,823,768
    lockBytes=224
    nBINsEvictedCACHEMODE=0
    nBINsEvictedCRITICAL=0
    nBINsEvictedDAEMON=0
    nBINsEvictedEVICTORTHREAD=0
    nBINsEvictedMANUAL=0
    nBINsFetch=7,104,094
    nBINsFetchMiss=575,490
    nBINsStripped=0
    nBatchesCACHEMODE=0
    nBatchesCRITICAL=0
    nBatchesDAEMON=0
    nBatchesEVICTORTHREAD=0
    nBatchesMANUAL=0
    nCachedBINs=575,857
    nCachedUpperINs=8,018
    nEvictPasses=0
    nINCompactKey=268,311
    nINNoTarget=107,602
    nINSparseTarget=468,257
    nLNsFetch=1,771,930
    nLNsFetchMiss=914,516
    nNodesEvicted=0
    nNodesScanned=0
    nNodesSelected=0
    nRootNodesEvicted=0
    nThreadUnavailable=0
    nUpperINsEvictedCACHEMODE=0
    nUpperINsEvictedCRITICAL=0
    nUpperINsEvictedDAEMON=0
    nUpperINsEvictedEVICTORTHREAD=0
    nUpperINsEvictedMANUAL=0
    nUpperINsFetch=11,797,499
    nUpperINsFetchMiss=8,280
    requiredEvictBytes=0
    sharedCacheTotalBytes=0
    Cleaning: Frequency and extent of log file cleaning activity.
    cleanerBackLog=0
    correctedAvgLNSize=87.11789
    estimatedAvgLNSize=82.74727
    fileDeletionBacklog=0
    nBINDeltasCleaned=2,393,935
    nBINDeltasDead=239,276
    nBINDeltasMigrated=2,154,659
    nBINDeltasObsolete=35,516,504
    nCleanerDeletions=96
    nCleanerEntriesRead=9,257,406
    nCleanerProbeRuns=0
    nCleanerRuns=96
    nClusterLNsProcessed=0
    nINsCleaned=299,195
    nINsDead=2,651
    nINsMigrated=296,544
    nINsObsolete=247,703
    nLNQueueHits=2,683,648
    nLNsCleaned=5,856,844
    nLNsDead=88,852
    nLNsLocked=29
    nLNsMarked=5,767,969
    nLNsMigrated=23
    nLNsObsolete=641,166
    nMarkLNsProcessed=0
    nPendingLNsLocked=1,386
    nPendingLNsProcessed=1,415
    nRepeatIteratorReads=0
    nToBeCleanedLNsProcessed=0
    totalLogSize=10,088,795,476
    Node Compression: Removal and compression of internal btree nodes.
    cursorsBins=0
    dbClosedBins=0
    inCompQueueSize=0
    nonEmptyBins=0
    processedBins=22
    splitBins=0
    Checkpoints: Frequency and extent of checkpointing activity.
    lastCheckpointEnd=0x143b/0xaf23b3
    lastCheckpointId=850
    lastCheckpointStart=0x143a/0xf604ef
    nCheckpoints=11
    nDeltaINFlush=1,718,813
    nFullBINFlush=398,326
    nFullINFlush=483,103
    Environment: General environment wide statistics.
    btreeRelatchesRequired=205,758
    Locks: Locks held by data operations, latching contention on lock table.
    nLatchAcquireNoWaitUnsuccessful=0
    nLatchAcquiresNoWaitSuccessful=0
    nLatchAcquiresNoWaiters=0
    nLatchAcquiresSelfOwned=0
    nLatchAcquiresWithContention=0
    nLatchReleases=0
    nOwners=2
    nReadLocks=2
    nRequests=10,571,692
    nTotalLocks=2
    nWaiters=0
    nWaits=0
    nWriteLocks=0
    My database(s) are sizeable, but on an SSD in a machine with more RAM than DB size (16GB vs 10GB). I have CacheMode.EVICT_LN turned on; however, I'm thinking this may be harmful. I have tried turning it off, but it doesn't seem to make a dramatic difference.
    Really, I only want the secondary DB cached (as this is where all the read queries happen); however, I'm not sure it's (meaningfully) possible to cache only a secondary DB, as presumably it needs to look up the primary DB's leaf nodes to return data anyway.
    Additionally, the updates to the DB(s) tend to be fairly large - ie: potentially modifying ~500,000 entries at a time (which is about 2.3% of the DB) - which I'm worried tends to blow the secondary DB cache (though I don't know how to prove it one way or another).
    I understand different CacheModes can be set on separate databases (and even at a cursor level); however, it's somewhat opaque how this works in practice.
    I've tried to run DbCacheSize, but the combination of variable-length keys and key-prefixing being enabled makes it almost impossible to get meaningful numbers out of it (or at the very least, rather confusing :)
    So, my questions are:
    - Is this actually slow in the first place (ie: 2700 random keys/sec)?
    - Can I speed this up with caching? (I've failed so far)
    - Is it possible (or useful) to cache a secondary DB in preference to the primary?
    - Would switching from using a StoredSortedMap to raw (possibly reusable) cursors give me a significant advantage?
    Thanks so much in advance,
    fb.

    nBINsFetchMiss=575,490
    The first step in tuning the JE cache, as related to performance, is to ensure that nBINsFetchMiss goes to zero. That tells you that you've sized your cache large enough to hold all internal nodes (I know you have lots of memory, but we need to prove that by looking at the stats).
    If all your internal nodes are in cache, that means your entire secondary DB is in cache, because you've configured duplicates (right?). A dup DB does not keep its LNs in cache, so it consists of nothing but internal nodes in cache.
    If you're using EVICT_LN (please do!), you also want to make sure that nEvictPasses=0, and I see that it is.
    Here are some random hints:
    + In general always use getStats(new StatsConfig().setClear(true)). If you don't clear the stats every time interval, then they are cumulative and it's almost impossible to correlate them to what's going on in that time interval.
    + If you're starting with a non-empty env, first load the entire data set and clear the stats, so the fetches for populating the cache don't show up in subsequent stats.
    + If you're having trouble using DbCacheSize, you may want to find out experimentally how much cache is needed to hold the internal nodes, for a given data set in your app. You can do this simply by reading your data set into cache. When nEvictPasses becomes non-zero, the cache has overflowed. This is going to be much more accurate than DbCacheSize anyway.
    + When you measure performance, you need to collect the JE stats (as you have) plus all app performance info (txn rate, etc) for the same time interval. They need to be correlated. The full set of JE environment settings, database settings, and JVM params is also needed.
    On the question of using StoredSortedMap.subMap vs a Cursor directly, there may be an optimization you can make, if your LNs are not in cache, and they're not if you're using EVICT_LN, or if you're not using EVICT_LN but not all LNs fit. However, I think you can make the same optimization using StoredSortedMap.
    Namely when using a key range (whatever API is used), it is necessary to read one key past the range you want, because that's the only way to find out whether there are more keys in the range. If you use subMap or the Cursor API in the most obvious way, this will not only have to find the next key outside the range but will also fetch its LN. I'm guessing this is part of the reason you're seeing a lower operation rate than you might expect. (However, note that you're actually getting double the rate you mention from a JE perspective, because each secondary read is actually two JE reads, counting the secondary DB and primary DB.)
    Before I write a bunch more about how to do that optimization, I think it's worth confirming that the extra LN is being fetched. If you do the measurements as I described, and you're using EVICT_LN, you should be able to get the ratio of LNs fetched (nLNsFetchMiss) to the number of range lookups. So if there is only one key in the range, and I'm right about reading one key beyond it, you'll see double LNs fetched as number of operations.
    --mark
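
    A small sketch of the hints above using the JE API. It assumes an already-open Environment; only the calls named in the thread (EVICT_LN, stat clearing, the two counters) come from the post, the structure around them is illustrative:

    import com.sleepycat.je.CacheMode;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentStats;
    import com.sleepycat.je.StatsConfig;

    public class CacheTuningSketch {

        // Keep leaf nodes out of cache so the internal nodes (and hence a
        // duplicates-configured secondary DB) can stay fully resident.
        static DatabaseConfig evictLnConfig() {
            DatabaseConfig config = new DatabaseConfig();
            config.setCacheMode(CacheMode.EVICT_LN);
            return config;
        }

        // Clear stats on every sample so each report covers only its own interval.
        static EnvironmentStats sampleStats(Environment env) {
            StatsConfig statsConfig = new StatsConfig();
            statsConfig.setClear(true);
            EnvironmentStats stats = env.getStats(statsConfig);
            // Both counters should stay at or near zero once all INs fit in cache.
            System.out.println("nBINsFetchMiss = " + stats.getNBINsFetchMiss());
            System.out.println("nEvictPasses   = " + stats.getNEvictPasses());
            return stats;
        }

        // One way to look at a neighboring key without faulting in its LN:
        // pass a partial DatabaseEntry that requests no data at all.
        static DatabaseEntry noData() {
            DatabaseEntry entry = new DatabaseEntry();
            entry.setPartial(0, 0, true);
            return entry;
        }
    }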

  • How do i make the back and forward buttons function? they are visible but not functional.

    I have Firefox 8.0.1 and my back and forward buttons are not working. They are visible but not functioning. I have tried starting in Safe Mode and without any add-ons, and the back and forward buttons are still not functioning. What do I need to do to make the buttons functional?

    A possible cause is a problem with the file places.sqlite that stores the bookmarks and the history.
    *http://kb.mozillazine.org/Bookmarks_history_and_toolbar_buttons_not_working_-_Firefox
    *https://support.mozilla.com/kb/Bookmarks+not+saved#w_places-database-file

  • Generating XML from SQL queries and saving to an xml file?

    Hi there,
    I was wondering if somebody could help with regards to the following:
    Generating XML from SQL queries and saving to an XML file?
    We want to have a procedure (PL/SQL) that accepts an order number as an input parameter (the procedure is accessed by our software on the client machine).
    Using this order number we do a couple of SQL queries.
    My first question: what would be our best option to convert the result of the queries to XML?
    Second question: once the XML has been generated, how do we save that XML to a file?
    (The XML file is going to be saved on the file system of the server that the database is running on.)
    Our procedure will also have an output parameter which returns the filename to us, e.g. Order1001.xml.
    Our software on the client machine will then FTP this XML file (based on the output parameter [filename]) to the client hard drive.
    Any information would be greatly appreciated.
    Thanking you,
    Francois

    If you are using 9iR2 you do not need to do any of this.
    You can create the XML as an XMLType using the new SQL/XML operators. You can insert this XML into the XML DB repository using DBMS_XDB.createResource, and then access the document from the resource. You can also return the XMLType containing the XML directly from the PL/SQL procedure.
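
    For illustration only, here is how the same SQL/XML operators could be exercised from the client side instead (a Java/JDBC sketch with invented table, column, and connection names; the reply above describes the pure server-side route via DBMS_XDB):

    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Hypothetical example: build XML with SQL/XML operators and save it to a file.
    public class OrderXmlExport {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@//dbhost:1521/orcl"; // illustrative
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(
                     // getClobVal() turns the XMLType into a CLOB the driver can read
                     "SELECT XMLELEMENT(\"order\"," +
                     "         XMLFOREST(o.order_no AS \"number\", o.total AS \"total\")" +
                     "       ).getClobVal()" +
                     "  FROM orders o WHERE o.order_no = ?")) {
                ps.setInt(1, 1001);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        try (FileWriter out = new FileWriter("Order1001.xml")) {
                            out.write(rs.getString(1));
                        }
                    }
                }
            }
        }
    }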

  • Generating XML from SQL queries and saving to an XML file?

    Hi there,
    I was wondering if somebody could help with regards to the following:
    Generating XML from SQL queries and saving to an XML file?
    We want to have a stored procedure (PL/SQL) that accepts an order number as an input parameter (the procedure is accessed by our software on the client machine).
    Using this order number we do a couple of SQL queries.
    My first question: what would be our best option to convert the result of the queries to XML?
    Second question: once the XML has been generated, how do we save that XML to a file?
    (The XML file is going to be saved on the file system of the server that the database is running on.)
    Our procedure will also have an output parameter which returns the filename to us, e.g. Order1001.xml.
    Our software on the client machine will then FTP this XML file (based on the output parameter [filename]) to the client hard drive.
    Any information would be greatly appreciated.
    Thanking you,
    Francois

    Hi
    Here is an example of some code that I am using on Oracle 8.1.7.
    The create_file procedure is the one that creates the file.
    The other procedures are utility procedures that can be used with any XML file.
    PROCEDURE create_file_with_root(po_xmldoc       OUT xmldom.DOMDocument,
                                    pi_root_tag     IN  VARCHAR2,
                                    po_root_element OUT xmldom.domelement,
                                    po_root_node    OUT xmldom.domnode,
                                    pi_doctype_url  IN  VARCHAR2) IS
    xmldoc xmldom.DOMDocument;
    root xmldom.domnode;
    root_node xmldom.domnode;
    root_element xmldom.domelement;
    record_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    BEGIN
    xmldoc := xmldom.newDOMDocument;
    xmldom.setVersion(xmldoc, '1.0');
    xmldom.setDoctype(xmldoc, pi_root_tag, pi_doctype_url,'');
    -- Create the root --
    root := xmldom.makeNode(xmldoc);
    -- Create the root element in the file --
    create_element_and_append(xmldoc, pi_root_tag, root, root_element, root_node);
    po_xmldoc := xmldoc;
    po_root_node := root_node;
    po_root_element := root_element;
    END create_file_with_root;
    PROCEDURE create_element_and_append(pi_xmldoc       IN OUT xmldom.DOMDocument,
                                        pi_element_name IN     VARCHAR2,
                                        pi_parent_node  IN     xmldom.domnode,
                                        po_new_element  OUT    xmldom.domelement,
                                        po_new_node     OUT    xmldom.domnode) IS
    element xmldom.domelement;
    child_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    BEGIN
    element := xmldom.createElement(pi_xmldoc, pi_element_name);
    child_node := xmldom.makeNode(element);
    -- Append the new node to the parent --
    newelenode := xmldom.appendchild(pi_parent_node, child_node);
    po_new_node := child_node;
    po_new_element := element;
    END create_element_and_append;
    FUNCTION create_text_element(pio_xmldoc      IN OUT xmldom.DOMDocument,
                                 pi_element_name IN     VARCHAR2,
                                 pi_element_data IN     VARCHAR2,
                                 pi_parent_node  IN     xmldom.domnode) RETURN xmldom.domnode IS
    parent_node xmldom.domnode;
    child_node xmldom.domnode;
    child_element xmldom.domelement;
    textele xmldom.DOMText;
    compnode xmldom.DOMNode;
    BEGIN
    create_element_and_append(pio_xmldoc, pi_element_name, pi_parent_node, child_element, child_node);
    parent_node := child_node;
    -- Create a text node --
    textele := xmldom.createTextNode(pio_xmldoc, pi_element_data);
    child_node := xmldom.makeNode(textele);
    -- Link the text node to the new element --
    compnode := xmldom.appendChild(parent_node, child_node);
    -- Return the newly created element node --
    RETURN parent_node;
    END create_text_element;
    PROCEDURE create_file IS
    xmldoc xmldom.DOMDocument;
    root_node xmldom.domnode;
    xml_doctype xmldom.DOMDocumentType;
    root_element xmldom.domelement;
    record_element xmldom.domelement;
    record_node xmldom.domnode;
    parent_node xmldom.domnode;
    child_node xmldom.domnode;
    newelenode xmldom.DOMNode;
    textele xmldom.DOMText;
    compnode xmldom.DOMNode;
    BEGIN
    xmldoc := xmldom.newDOMDocument;
    xmldom.setVersion(xmldoc, '1.0');
    create_file_with_root(xmldoc, 'root', root_element, root_node, 'test.dtd');
    xmldom.setAttribute(root_element, 'interface_type', 'EXCHANGE_RATES');
    -- Create the record element in the file --
    create_element_and_append(xmldoc, 'record', root_node, record_element, record_node);
    parent_node := create_text_element(xmldoc, 'title', 'Mr', record_node);
    parent_node := create_text_element(xmldoc, 'name', 'Joe', record_node);
    parent_node := create_text_element(xmldoc,'surname', 'Blogs', record_node);
    -- Create the record element in the file --
    create_element_and_append(xmldoc, 'record', root_node, record_element, record_node);
    parent_node := create_text_element(xmldoc, 'title', 'Mrs', record_node);
    parent_node := create_text_element(xmldoc, 'name', 'A', record_node);
    parent_node := create_text_element(xmldoc, 'surname', 'B', record_node);
    -- write the newly created dom document into the buffer assuming it is less than 32K
    xmldom.writeTofile(xmldoc, 'c:\laiki\willow_data\test.xml');
    EXCEPTION
    WHEN xmldom.INDEX_SIZE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Index Size error');
    WHEN xmldom.DOMSTRING_SIZE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'String Size error');
    WHEN xmldom.HIERARCHY_REQUEST_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Hierarchy request error');
    WHEN xmldom.WRONG_DOCUMENT_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Wrong doc error');
    WHEN xmldom.INVALID_CHARACTER_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Invalid Char error');
    WHEN xmldom.NO_DATA_ALLOWED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'No data allowed error');
    WHEN xmldom.NO_MODIFICATION_ALLOWED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'No mod allowed error');
    WHEN xmldom.NOT_FOUND_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Not found error');
    WHEN xmldom.NOT_SUPPORTED_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'Not supported error');
    WHEN xmldom.INUSE_ATTRIBUTE_ERR THEN
    RAISE_APPLICATION_ERROR(-20120, 'In use attr error');
    WHEN OTHERS THEN
    dbms_output.put_line('exception occurred ' || SQLCODE || SUBSTR(SQLERRM, 1, 100));
    END create_file;

  • Poorly Performing SQL Queries and AWR

    Version: 10.2.0.4 on OEL 5.
    The snapshot duration is 30 mins. It's long, but sorry, that is what is available right now.
    I have three queries in my database which are performing pretty slow. I took AWR reports for these queries and they are as follows:
    Query 1:
    ======
        Plan Hash           Total Elapsed                 1st Capture   Last Capture
    #   Value                    Time(ms)    Executions       Snap ID        Snap ID
    1   3714475000                113,449             9         60539          60540
    -> % Total DB Time is the Elapsed Time of the SQL statement divided  into the Total Database Time multiplied by 100
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                           113,449       12,605.4     3.9
    CPU Time (ms)                               108,620       12,068.9     4.0
    Executions                                        9            N/A     N/A
    Buffer Gets                                4.25E+07    4,722,689.0    11.7
    Disk Reads                                        0            0.0     0.0
    Parse Calls                                       9            1.0     0.0
    Rows                                             20            2.2     N/A
    User I/O Wait Time (ms)                           0            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                252            N/A     N/A
    Query 2:
    ======
        Plan Hash           Total Elapsed                 1st Capture   Last Capture
    #   Value                    Time(ms)    Executions       Snap ID        Snap ID
    1   4197000940              1,344,458             3         60539          60540
    -> % Total DB Time is the Elapsed Time of the SQL statement divided   into the Total Database Time multiplied by 100
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                         1,344,458      448,152.7    46.5
    CPU Time (ms)                             1,353,670      451,223.3    49.7
    Executions                                        3            N/A     N/A
    Buffer Gets                                3.42E+07   11,383,856.7     9.4
    Disk Reads                                        0            0.0     0.0
    Parse Calls                                       3            1.0     0.0
    Rows                                             48           16.0     N/A
    User I/O Wait Time (ms)                           0            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                270            N/A     N/A
    Query 3:
    ======
        Plan Hash           Total Elapsed                 1st Capture   Last Capture
    #   Value                    Time(ms)    Executions       Snap ID        Snap ID
    1   2000299266                104,060             7         60539          60540
    -> % Total DB Time is the Elapsed Time of the SQL statement divided   into the Total Database Time multiplied by 100
    Stat Name                                Statement   Per Execution % Snap
    Elapsed Time (ms)                           104,060       14,865.7     3.6
    CPU Time (ms)                               106,150       15,164.3     3.9
    Executions                                        7            N/A     N/A
    Buffer Gets                                4.38E+07    6,256,828.1    12.1
    Disk Reads                                        0            0.0     0.0
    Parse Calls                                       7            1.0     0.0
    Rows                                             79           11.3     N/A
    User I/O Wait Time (ms)                           0            N/A     N/A
    Cluster Wait Time (ms)                            0            N/A     N/A
    Application Wait Time (ms)                        0            N/A     N/A
    Concurrency Wait Time (ms)                        0            N/A     N/A
    Invalidations                                     0            N/A     N/A
    Version Count                                     2            N/A     N/A
    Sharable Mem(KB)                                748            N/A     N/A
    Any ideas, please, as to what is wrong with the above statistics? And what should I do next with it?
    Thanks.

    Here is one of the plans for the queries:
    | Id  | Operation                            | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT                     |                         |       |       |  9628 (100)|          |
    |   1 |  VIEW                                |                         |    73 | 58546 |  9628   (1)| 00:01:56 |
    |   2 |   WINDOW SORT PUSHED RANK            |                         |    73 | 22630 |  9628   (1)| 00:01:56 |
    |   3 |    FILTER                            |                         |       |       |            |          |
    |   4 |     NESTED LOOPS                     |                         |    73 | 22630 |  9627   (1)| 00:01:56 |
    |   5 |      NESTED LOOPS                    |                         |    73 | 20586 |  9554   (1)| 00:01:55 |
    |   6 |       NESTED LOOPS OUTER             |                         |    72 | 15552 |  9482   (1)| 00:01:54 |
    |   7 |        NESTED LOOPS                  |                         |    72 | 13320 |  9410   (1)| 00:01:53 |
    |   8 |         NESTED LOOPS                 |                         |    72 | 12168 |  9338   (1)| 00:01:53 |
    |   9 |          NESTED LOOPS                |                         |  4370 |   277K|    29   (0)| 00:00:01 |
    |  10 |           TABLE ACCESS BY INDEX ROWID| test_ORG                |     1 |    34 |     2   (0)| 00:00:01 |
    |  11 |            INDEX UNIQUE SCAN         | test_ORG_PK             |     1 |       |     1   (0)| 00:00:01 |
    |  12 |           TABLE ACCESS FULL          | test_USER               |  4370 |   132K|    27   (0)| 00:00:01 |
    |  13 |          TABLE ACCESS BY INDEX ROWID | REF_CLIENT_FOO_ACCT     |     1 |   104 |     7   (0)| 00:00:01 |
    |  14 |           INDEX RANGE SCAN           | RCFA_test_ORG_IDX       |   165 |       |     2   (0)| 00:00:01 |
    |  15 |         TABLE ACCESS BY INDEX ROWID  | test_ACCOUNT            |     1 |    16 |     1   (0)| 00:00:01 |
    |  16 |          INDEX UNIQUE SCAN           | test_CUSTODY_ACCOUNT_PK |     1 |       |     0   (0)|          |
    |  17 |        TABLE ACCESS BY INDEX ROWID   | test_USER               |     1 |    31 |     1   (0)| 00:00:01 |
    |  18 |         INDEX UNIQUE SCAN            | test_USER_PK_IDX        |     1 |       |     0   (0)|          |
    |  19 |       TABLE ACCESS BY INDEX ROWID    | REF_FOO                 |     1 |    66 |     1   (0)| 00:00:01 |
    |  20 |        INDEX UNIQUE SCAN             | REF_FOO_PK              |     1 |       |     0   (0)|          |
    |  21 |      TABLE ACCESS BY INDEX ROWID     | REF_FOO_FAMILY          |     1 |    28 |     1   (0)| 00:00:01 |
    |  22 |       INDEX UNIQUE SCAN              | REF_FOO_FAMILY_PK       |     1 |       |     0   (0)|          |
    40 rows selected.
    SQL>

  • SOON: Advisor Webcast - WebCenter Content: Database Searching and Indexing

    Learn how to improve search performance by attending the 1-hour Advisor Webcast: WebCenter Content: Database Searching and Indexing on March 15, 2012 at 16:00 UK / 17:00 CET / 08:00 am Pacific / 9:00 am Mountain / 11:00 am Eastern. For details, go here: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1399682.1.
    TOPICS WILL INCLUDE:
    * Optimize the database index for faster searching
    * Locate WebCenter Content problematic queries
    * Use SQL Developer to find explain plans and profile SQL statements
    * Configure WebCenter Content to tweak search settings

    Hi All,
    Not sure if this is the right forum; however, I am not able to find the installation for Release 11gR1: 11.1.1.7.0 at http://www.oracle.com/technetwork/middleware/webcenter/content/downloads/index.html
    Any pointers where can I download from?
    Thanks

  • Sample Oracle database Schema and sample training SQL

    Hi,
    I am running a Windows 7, 64-bit machine. I am looking for a sample Oracle database schema with sample SQL queries and exercises (tutorial). Can you please point me in the right direction where I can download both?
    Thanks for your time and help.

    sb92075 wrote:
    ssk1974 wrote:
    Hi,
    I am running a Windows 7, 64-bit machine. I am looking for a sample Oracle database schema with sample SQL queries and exercises (tutorial). Can you please point me in the right direction where I can download both?
    Thanks for your time and help.
    http://www.lmgtfy.com/?q=oracle+sample+schema
    LOL!!!!

  • How do I fix the back and forward button feature on my Logitech Unifying mice?

    Ever since the latest automatic FFX update to 6.0.2, my fancy new Logitech Unifying mice, both the M505 and the M510, no longer function properly in FFX. The middle multi-function scroll-wheel button used to act as back when pushed sideways to the left and forward when pushed sideways to the right.
    I've updated the Logitech Unifying software and I've reset the settings for the button controls; nothing seems to work. Please help.

    A possible cause is a problem with the file places.sqlite that stores the bookmarks and the history.
    *http://kb.mozillazine.org/Bookmarks_history_and_toolbar_buttons_not_working_-_Firefox
    *https://support.mozilla.com/kb/Bookmarks+not+saved#w_places-database-file
