Loading a large number of coordinates

Hello,
We are trying to load the SDO_ORDINATES attribute of the SDO_GEOMETRY object column in a spatial feature table with a large number of coordinates (4844). This is not the largest linestring we will need to build. We have defined a procedure as:
CREATE OR REPLACE PROCEDURE INSERT_GPS (GEOM MDSYS.SDO_GEOMETRY) IS
BEGIN
  INSERT INTO GPS (long_lat_elv) VALUES (GEOM);
  COMMIT;
END;
and are loading the SDO_GEOMETRY column as follows:
DECLARE
  myGEOM MDSYS.SDO_GEOMETRY := MDSYS.SDO_GEOMETRY(3002, NULL, NULL,
    MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1),
    MDSYS.SDO_ORDINATE_ARRAY(
      -86.929501, 35.751834, 32.907,
      -86.929434, 35.751906, 32.913,
      -- ... (remaining coordinate triplets elided) ...
      87.270367, 35.447903, .854,
      87.270273, 35.447956, .86));
BEGIN
  INSERT_GPS(myGEOM);
END;
This generates the following error:
PLS-00123: Program too large.
How can we load this many coordinates using this workflow? We are on Oracle 8.1.6.
Thanks
Dave
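(A possible workaround, sketched under assumptions: PLS-00123 is triggered here by the sheer size of the anonymous block's source text, so populating the varray procedurally keeps the block small no matter how many coordinates are loaded. The staging table gps_points and the measure function compute_measure below are hypothetical stand-ins, not objects from the original post.)
DECLARE
  ords   MDSYS.SDO_ORDINATE_ARRAY := MDSYS.SDO_ORDINATE_ARRAY();
  myGEOM MDSYS.SDO_GEOMETRY;
BEGIN
  -- gps_points(seq, x, y) is a hypothetical staging table holding the raw pairs;
  -- compute_measure(x, y) is a hypothetical stored function returning the LRS measure.
  FOR r IN (SELECT x, y FROM gps_points ORDER BY seq) LOOP
    ords.EXTEND(3);
    ords(ords.COUNT - 2) := r.x;
    ords(ords.COUNT - 1) := r.y;
    ords(ords.COUNT)     := compute_measure(r.x, r.y);
  END LOOP;
  myGEOM := MDSYS.SDO_GEOMETRY(3002, NULL, NULL,
              MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1), ords);
  INSERT_GPS(myGEOM);
END;
Because the varray is built at run time, the block's source stays a few lines long regardless of the coordinate count, which is exactly what PLS-00123 objects to.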

Originally posted by Tilen Skraba ([email protected]):
> Why don't you use SQL*Loader to load the data? We have geometries with more data than yours, and SQL*Loader didn't have any problems.
> I believe PLS-00123 means that your package is too big.
> What else do you have in your package?
> Try to break the package into smaller ones.
> Tilen
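(For reference, a minimal SQL*Loader control-file sketch for an SDO_GEOMETRY column, adapted from the varray-loading syntax in the Oracle Spatial documentation; the delimiters, the continuation convention, and the inline data records are illustrative assumptions, not the poster's actual layout.)
LOAD DATA
INFILE *
APPEND
CONTINUEIF NEXT(1:1) = '#'
INTO TABLE GPS
FIELDS TERMINATED BY '|'
(long_lat_elv COLUMN OBJECT
   (SDO_GTYPE     INTEGER EXTERNAL,
    SDO_ELEM_INFO VARRAY TERMINATED BY '|/'
      (elements   FLOAT EXTERNAL),
    SDO_ORDINATES VARRAY TERMINATED BY '|/'
      (ordinates  FLOAT EXTERNAL)))
BEGINDATA
3002|1|2|1|/
#-86.929501|35.751834|32.907|-86.929434|35.751906|32.913|/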
Tilen,
We are trying to use the LRS package, and the measure value, which is part of each coordinate (x, y, m), has to be generated by a stored procedure for each x,y pair.
Thanks
Dave

Similar Messages

  • When loading a large number of images, images begin to fail to load in IE11 with DOM: 7009 error (unable to decode) in console

    When loading many images on a single page in IE (reproduced in IE11), some of them begin to fail to load, with something similar to the following warning in the console:
    DOM7009: Unable to decode image at URL: '[some unique url]'.
    When I look at the network traffic, there do appear to be valid responses received for each of these images from the server. It's not always the same images each time. If I use the dev tools to force the image to reload (for example, I update the URL to include some extraneous URL parameter "&test=1"), it loads/renders normally without error. I've reproduced this behavior with different types of images (JPEGs/PNGs; example PNG included below). Apologies for the cross-posting; I wasn't sure what the particular etiquette was for that. I've also posted this issue on StackOverflow and MS Connect under the same title. If one of those is a more appropriate place for this matter, please let me know and I'll close this issue.

    Yup, I have encountered this problem as well. I'm loading an array of image frames for a movie, using PreloadJS (a JavaScript preloading library) and not displaying the images up front.

  • Loading FF4 wiped out a large number of tabs, how do I restore?

    I thought I was loading a security update. WHAT?! ALL MY TABS I WAS USING ARE WIPED OUT? Really? Or have you added a way to restore them?
    Apparently the default setting in FF4 is to destroy the preserved tabs during the update.
    My browser had a large number of tabs preserved. FF4 apparently wiped them out without asking. WTF people, why pull such unexpected stuff on your users? (Yes, I'm really pissed. It will take me hours to track down a couple of those pages again.) It wouldn't have been so bad if I had known I was getting FF4; then I could have bookmarked the open pages. Again, I thought it was a security update.
    The fix: have FF4 detect preserved tabs once (during the update), then display a warning message to allow users to bring the tabs into the new updated browser.

    Thanks. I assumed I needed to set in and out points. I have not done an image sequence in FCP. If I have a 10-second image and a 2-second transition, and I then change the duration of the transition to, say, 4 seconds, will FCP let me alter the transition duration?
    With regards to movie clips, what happens with setting global in and out points? How would I do that?
    Thanks in advance

  • Trouble loading a large number of csv files

    Hi All,
    I am having an issue loading a large number of csv files into my LabVIEW program. I have attached a png of the simplified code for the load sequence alone.
    What I want to do is load data from 5000 laser beam profiles, so 5000 csv files (68x68 elements), and then carry out some data analysis. However, the program will only ever load 2117 files, and I get no error messages. I have also tried, initially loading a single file, selecting a crop area - say 30x30 elements - and then loading the rest of the files cropped to these dimensions, but I still only get 2117 files.
    Any thoughts would be much appreciated,
    Kevin
    Kevin Conlisk
    Ph.D Student
    National Centre for Laser Applications
    National University of Ireland, Galway
    IRELAND
    Attachments:
    Load csv files.PNG (14 KB)

    How many elements are in the array of paths (your size(s) indicator)?
    I suspect that the number of files you can have open at once is limited.
    You could also select a certain folder and use 'List Folder' to get a list of files and load those.
    Your data set is 170 MB, which is not really astonishing; however, you should watch your programming to prevent duplicating the data in memory.
    Ton
    Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
    Nederlandse LabVIEW user groep www.lvug.nl
    My LabVIEW Ideas
    LabVIEW, programming like it should be!

  • Loading a large number of strings into memory quickly

    Hello,
    I'm working on an iPhone application where I need to load a large number of strings into memory. Currently I'm simply reading from a file where each string is stored in plain text on a single line. I read the file contents into a string using stringWithContentsOfFile, and then I create an NSSet object using [NSSet setWithArray:[string componentsSeparatedByString:@"\n"]].
    This works like a charm but takes around 8 seconds to load on the iPhone. I'm looking for ways to speed this up. I've already tried a few things which weren't any faster:
    1) I used [NSKeyedArchiver archiveRootObject:myList toFile:appFile]; to store the NSSet data structure. Then, instead of reading the plain-text file, I read this file using [NSKeyedUnarchiver unarchiveObjectWithFile:appFile];. This was actually very slow, and it created a strings file that was about 2x the size of the original plain text.
    2) Instead of using an NSSet, I used an NSDictionary with writeToFile and dictionaryWithContentsOfFile. This was also no faster.
    3) Finally, I tried using the NSDictionary to write to a binary file format using NSPropertyListSerialization. This was also not any faster.
    I've been thinking about using SQLite instead of the flat file read, but I haven't had a chance to prototype that out to see if it would be faster. It's important that I can do fast searches for specific strings, which is why I originally used a set.
    Does any one else have any ideas how to load this into memory faster? If all else fails, I'm simply going to load the strings into memory using a separate thread on application launch so I don't prevent the user from getting to the main menu for 8 seconds.
    Thanks,
    -Keith

    I'd need to know more about what you're doing, but from what you're describing I think you should try to change your algorithm.
    For example: Instead of distributing one flat file, split your list of strings into 256 files, based on the first two hex digits of their MD5 hashes*. (Two digits might not be enough--you might need three or four. You may also want to use folders, especially if you need more than two digits.) When testing if a string exists, first calculate its MD5 hash and extract the necessary number of digits, then load that file into memory and scan its list. (You can cache these lists in memory so that you only have to load each file once--just make sure that a didReceiveMemoryWarning message will empty those caches.)
    Properly configured, SQLite may be faster than the eight second load time you talk about, especially if you ensure it indexes the column you store the strings in. But it's probably overkill for this application.
    * A hash is a numeric code calculated from a string; on average, changing a single bit anywhere in the string should change half the bits in the hash, so even very similar strings should generate very different hashes. I suggest using MD5 instead of -[NSString hash] because the hash method is not guaranteed to return the same results on Mac OS and iPhone OS, or on different releases of either OS. You could also use a different algorithm, like a CRC; these are faster, but I'm not as familiar with them. This thread discusses calculating MD5 hashes on iPhone OS: http://discussions.apple.com/thread.jspa?messageID=7362074

  • Large number of Hide/Show regions on a page, can't hide on page load

    I have a large number of Hide/Show regions on a page, 14 right now to be exact. I want all of these regions to start hidden when the page loads. In order to do this I have the following code in the onLoad attribute of the page HTML body:
    onLoad="document.getElementById('region3').style.display = 'none';
    document.getElementById('shIMG3').src = '/i/htmldb/builder/rollup_plus_dgray.gif';
    document.getElementById('region19').style.display = 'none';
    document.getElementById('shIMG19').src = '/i/htmldb/builder/rollup_plus_dgray.gif';"
    This works fine when I have 13 or fewer hide/show regions on the page. When I add the 14th region to the page, all the regions on the page start off as not hidden.
    Anyone have any idea why this could be happening? (I'm using Apex version 2.0)
    Thanks
    - Brian

    no ideas?

  • How to handle a large number of query parameters for a Browse screen

    I need to implement an advanced search functionality in a browse screen for a large table. The table has 80+ columns and therefore will have a large number of possible query parameters. The screen will be built on a modeled query with all of the parameters marked as optional. Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a popup. Is it possible, for example, to have a search button on the browse screen (screen A) open a new screen (screen B) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen A, where the query is executed and the search results are returned to the table control? This would effectively make screen B an advanced modal window for screen A. In addition, if the user were to execute the query and then want to change a parameter, they would need to be able to re-open screen B and have all of their original parameters still set. How would you implement this, or otherwise deal with a large number of optional query parameters in the HTML client? My initial thinking is to store all of the parameters in an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work. TIA

    Wow Josh, thanks. I have a lot of reading to do. What I ultimately plan to do with this (my other posts relate to this too) is have a separate screen for advanced filtering that also allows the user to save their queries if desired.
    There is an excellent way to get at all of the query information in the Query_Executed() method. I just put an extra Boolean parameter in the query called "SaveQuery", and when it is true, the Query_Executed event triggers an entry into a table with the query name, user name, and the parameter value pairs that the user entered. Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query.
    I almost have it working. It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, query name, and selected item). I'll post an update once I get it. Probably will have some more questions as I go through it. Thanks again!

  • Large number of entries in Queue BW0010EC_PCA_1

    Dear BW experts,
    Our BW system (2004s) is extracting data from R/3 700. I am a Basis guy, and I observe a large number of entries in SMQ1 in the R/3 system under queue BW0010EC_PCA_1. I observe a similar number of entries in RSA7 for 0EC_PCA_1.
    The number of entries for this queue in SMQ1 every day is 50,000+. The extraction job runs in BW every day at 5:00 AM, but it only clears data that is 2 days old (for example, on 10.09.2010 it will extract the data of 08.09.2010).
    My questions:
    1. Is it OK that such a large number of entries is lying in the queue in SMQ1 and extracted once a day by a batch job? Then there is no point in the scheduler pushing these entries.
    2. Any idea why the extraction job only fetches data from 2 days before? Is some setting missing somewhere?
    Many thanks in advance for your valuable comments

    Hi,
    The entries lying in RSA7 and SMQ1 are one and the same. In SMQ1, a BW0010EC_PCA_1 entry means that the data is waiting to be sent across to your BW001 client system, whereas in RSA7 the same data is displayed under 0EC_PCA_1.
    > 1. Is it OK that such a large number of entries is lying in the queue in SMQ1 and extracted once a day by a batch job?
    As I understand from the data lying in your R/3 system in SMQ1, this DataSource has Direct Delta as its delta mode. SAP recommends that if the number of postings for a particular application is greater than 100,000 per day, you should use Queued Delta. Since in your system it is in the thousands, the BI guys would have kept it as Direct Delta. So these entries lying in SMQ1 are not a problem at all. As for the scheduler in BI, it will pick these entries up every morning to clear the queue of the previous data.
    > 2. Any idea why the extraction job only fetches data from 2 days before?
    I don't think it is only fetching the data from 2 days before. The delta concept works in such a manner that once you have pulled the delta load from RSA7, the data will still be lying there under the Repeat Delta section until the next delta load has finished successfully.
    Since in your system data is pulled only once a day, even though today's data load has pulled yesterday's data, it will still be lying in the system until tomorrow's delta load from RSA7 is successful.
    Hope this was helpful.

  • How do I delete a large number of duplicates in my iTunes w/o Ctrl+Click

    I have a large number of duplicates that were loaded onto my iTunes, and I would like to delete them. So far the only way I have found is to go down the list one at a time, Ctrl+Click, and then delete. Since iTunes can identify duplicates, is there a function for removing all the duplicates before I sync my iPod?
      Windows XP  

    iTunes can't mass-delete duplicates, but one of the forum members has written a script to do it; see:
    http://home.comcast.net/~teridon73/itunesscripts/
    If you prefer to go commercial, take a look at iTsync:
    http://www.itsyncsoftware.com/itsync.htm

  • Problem with importing a large number of images to Lightroom?

    I'm trying to import a large number of images from disk into Lightroom. If I select more than approximately 1500 images (1250 works fine), I get an error message saying "files do not exist". If I reduce the number of files imported, everything works fine. I can import in groups as long as the number selected at any one time is less than the ~1500 limit.
    Has any one had a similar problems? I'm running LR 2.4 64-bit on Win7 RC 64-bit, Q6600 processor, 8 GB RAM.
    This does not seem like it should be a problem, and it's a real hassle for my workflow. I'm working on a series of time-lapse movies (here is a link to a sample: http://vimeo.com/6375019).

    Thanks for your reply Sean,
    It is an internal disk drive, SATA II interface. I have no other disk/file problems; this is unique to LR as far as I can tell.
    I'm loading files onto disk from a CF card reader outside of Lightroom, then opening LR and telling it to import from disk. I've never had a problem before with imports. Experimenting with number of files, somewhere between 1250 and 1500 is where the problem occurs.
    Been busy making time-lapse videos of the fire in LA, so I'm away from the computer most of the time.
    thanks,
    Dan Finnerty

  • Large number of JSP performance

    Hi,
    a colleague of mine ran tests with a large number of JSPs and identified a performance problem.
    I believe I found a solution to his problem. I tested it with WLS 5.1 SP2 and SP3 and MS jview SDK 4.
    The issue was related to the duration of the initial call of the nth JSP, which is our situation, as we are doing site hosting.
    The solution is able to perform around 14 initial invocations/s no matter if the invocation is the first one or the 3000th one, and the throughput can go up to 108 JSPs/s when the JSPs are already loaded, the JSPs being the snoopservlet example copied 3000 times.
    The ratios are more meaningful than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
    I repeat the post of Marc from 2/11/2000, as it is an old one:
    Hi all,
    I'm wondering if any of you has experienced performance issues when deploying a lot of JSPs.
    I'm running WebLogic 4.51 SP4 with the performance pack on NT4 and JDK 1.2.2.
    I deployed over 3000 JSPs (identical but with distinct names) on my server. I took care to precompile them offline.
    To run my tests I used a servlet selecting one of them randomly and redirecting the request:
    getServletContext().getRequestDispatcher(randomUrl).forward(request, response);
    The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
    I made some additional tests. When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
    then the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page, the server answers really fast with no delay.
    If you set the previous properties to any other value (0 for example), the response time remains bad even when you invoke a previously loaded page.
    SOLUTION DESCRIPTION
    Intent
    The package described below is designed to allow:
    * Fast invocation even with a large number of pages (which can be the case with web hosting)
    * Dynamic update of compiled JSPs
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with MS SDK 4.0. It has been tested with WLS 5.1 with service packs 2 and 3.
    It should work with most application servers, as its requirements are limited. It requires a JSP to be able to invoke a class loader.
    Principle
    For fast invocation, it does not support dynamic compilation as described in the JSP model. There is no automatic recognition of modifications. Instead, a JSP is made available to invalidate pages which must be updated.
    We assume pages managed through this package to be declared in weblogic.properties as:
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that when a servlet or JSP with a .ocg extension is requested, it is forwarded to the package.
    It implies 2 things:
    * Regular JSP handling and package-based handling can coexist in the same application server instance.
    * It is possible to extend the implementation to support many extensions with as many package instances.
    The package (ocgLoaderPkg) contains 2 classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created. Both the JSP instances and classes are cached in hashtables.
    The invalidation JSP is named jspUpdate.jsp. To invalidate a JSP, it simply removes the object and cache entries from the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
    * Otherwise it uses the class loader to retrieve the target JSP class, creates a target JSP instance and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
    * If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
    Do you think it is a good solution?
    I believe this solution is faster than the standard WLS one, because it is a very small piece of code, but also because:
    - My class loader is deterministic: if the file has the right extension, I don't call the class loader hierarchy first.
    - I don't try to support JARs. This has been one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
    - I don't try to check if a class has been updated. I have to ask for a refresh using a JSP for now, but it could be an EJB.
    - I don't try to check if a source has been updated.
    - As I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches pretty accurately. I avoid rehashing.

    Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.

  • Large number of JSP performance [repost for grandemange]

    (Identical to the "Large number of JSP performance" post above.)
    Cheers - Wei

    I don't know the upper limit, but I think 80 is too much; I have never used more than 15-20. For nav attributes, separate tables are created, which causes the performance issue, as it results in a new join at query run time. Just ask your business guy if these can be reduced. One way could be to model these attributes as separate characteristics. It will certainly help.
    Thanks...
    Shambhu

  • Approach to parsing a large number of XML files into relational tables

    We are exploring the option of XML DB for processing a large number of files arriving on the same day.
    The objective is to parse the XML files and store the data in multiple relational tables. Once it is in the relational tables, we do not care about the XML file.
    The files cannot be stored on the file server and need to be stored in a table before parsing, due to security issues. A third-party system will send the files and store them in the XML DB.
    File sizes can be between 1 MB and 50 MB, and high performance is very much expected, otherwise the solution will be tossed.
    Although we do not have an XSD, the XML files are well structured. We are on 11g Release 2.
    Based on my reading, this is my approach:
    1. CREATE TABLE xml_data (xml_col XMLTYPE)
          XMLTYPE COLUMN xml_col STORE AS SECUREFILE BINARY XML;
    2. The third party will store the data in the XML_DATA table.
    3. Create an XMLIndex on the unique XML element.
    4. Create views over the XMLType:
    CREATE OR REPLACE FORCE VIEW v_xml_data (stype, mtype, mname, oidt) AS
       SELECT x.Stype,
              x.Mtype,
              x.Mname,
              x.OIDT
       FROM   xml_data t,
              XMLTABLE (
                 '/SectionMain'
                 PASSING t.xml_col
                 COLUMNS Stype VARCHAR2 (30) PATH 'Stype',
                         Mtype VARCHAR2 (3)  PATH 'Mtype',
                         Mname VARCHAR2 (30) PATH 'Mname',
                         OIDT  VARCHAR2 (30) PATH 'OID') x;
    5. Bulk-load the parsed data into the staging tables based on the indexed column.
    Please comment on the above approach and on any suggestions that could improve performance.
    Thanks
    AnuragT
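    (Sketches for steps 3 and 5, offered under stated assumptions: the index name, the path subset, and the staging table stg_section_main are hypothetical, not from the original post.)
    -- Step 3 sketch: a path-subsetted XMLIndex on the element used for lookups.
    CREATE INDEX xml_data_xidx ON xml_data (xml_col)
      INDEXTYPE IS XDB.XMLINDEX
      PARAMETERS ('PATHS (INCLUDE (/SectionMain/OID))');
    -- Step 5 sketch: bulk-load the shredded rows through the relational view.
    INSERT /*+ APPEND */ INTO stg_section_main (stype, mtype, mname, oidt)
      SELECT stype, mtype, mname, oidt FROM v_xml_data;
    COMMIT;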

    Thanks for your response. It gives me more confidence.
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    Example XML
    <SectionMain>
      <SectionState>Closed</SectionState>
      <FunctionalState>CP FINISHED</FunctionalState>
      <CreatedTime>2012-08</CreatedTime>
      <Number>106</Number>
      <SectionType>Reel</SectionType>
      <MachineType>CP</MachineType>
      <MachineName>CP_225</MachineName>
      <OID>99dd48cf-fd1b-46cf-9983-0026c04963d2</OID>
    </SectionMain>
    <SectionEvent>
      <SectionOID>99dd48cf-2</SectionOID>
      <EventName>CP.CP_225.Shredder</EventName>
      <OID>b3dd48cf-532d-4126-92d2</OID>
    </SectionEvent>
    <SectionAddData>
      <SectionOID>99dd48cf2</SectionOID>
      <AttributeName>ReelVersion</AttributeName>
      <AttributeValue>4</AttributeValue>
      <OID>b3dd48cf</OID>
    </SectionAddData>
    <SectionAddData>
      <SectionOID>99dd48cf-fd1b-46cf-9983</SectionOID>
      <AttributeName>ReelNr</AttributeName>
      <AttributeValue>38</AttributeValue>
      <OID>b3dd48cf</OID>
    </SectionAddData>
    <BNCounter>
      <SectionID>99dd48cf-fd1b-46cf-9983-0026c04963d2</SectionID>
      <Run>CPFirstRun</Run>
      <SortingClass>84</SortingClass>
      <OutputStacker>D2</OutputStacker>
      <BNCounter>54605</BNCounter>
    </BNCounter>
    I was not aware of virtual columns, but it looks like we can use them and avoid creating views by inserting directly into the staging table using the virtual column.
    Suppose OID is the unique identifier of each XML file and I create a virtual column:
    CREATE TABLE po_virtual OF XMLTYPE
      XMLTYPE STORE AS BINARY XML
      VIRTUAL COLUMNS
      (oid_1 AS (XMLCAST(XMLQUERY('/SectionMain/OID'
                                  PASSING OBJECT_VALUE RETURNING CONTENT)
                         AS VARCHAR2(30))));
    1. My question is: how do I then write this query NOT USING the column XML_COL?
    SELECT x.SectionType,
           x.MachineType,
           x.MachineName,
           x.OIDT
    FROM   po_virtual t,
           XMLTABLE (
              '/SectionMain'
              PASSING t.xml_col                 <-- WHAT TO PASS HERE, SINCE THERE IS NO XML_COL?
              COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
                      MachineType VARCHAR2 (3)  PATH 'MachineType',
                      MachineName VARCHAR2 (30) PATH 'MachineName',
                      OIDT        VARCHAR2 (30) PATH 'OID') x;
    2. Instead of creating the view, can I then do:
    INSERT INTO staging_table_yyy (col1, col2, col3, col4)
    SELECT x.SectionType,
           x.MachineType,
           x.MachineName,
           x.OIDT
    FROM   xml_data t,
           XMLTABLE (
              '/SectionMain'
              PASSING t.xml_col                 <-- WHAT TO PASS HERE, SINCE THERE IS NO XML_COL?
              COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
                      MachineType VARCHAR2 (3)  PATH 'MachineType',
                      MachineName VARCHAR2 (30) PATH 'MachineName',
                      OIDT        VARCHAR2 (30) PATH 'OID') x
    WHERE  oid_1 = '99dd48cf-fd1b-46cf-9983';   <-- VIRTUAL COLUMN
    INSERT INTO staging_table_yyy (col1, col2, col3)
    SELECT x.SectionOID,
           x.EventName,
           x.OID
    FROM   xml_data t,
           XMLTABLE (
              '/SectionMain'
              PASSING t.xml_col                 <-- WHAT TO PASS HERE, SINCE THERE IS NO XML_COL?
              COLUMNS SectionOID VARCHAR2 (30) PATH 'SectionOID',
                      EventName  VARCHAR2 (30) PATH 'EventName',
                      OID        VARCHAR2 (30) PATH 'OID') x
    WHERE  oid_1 = '99dd48cf-fd1b-46cf-9983';   <-- VIRTUAL COLUMN
    The same insert applies for the other tables, using the OID_1 virtual column.
    3. Finally, once done, how can I delete the XML document from the table?
    If I am using the virtual column, then I believe it will be easy:
    DELETE FROM po_virtual WHERE oid_1 = '99dd48cf-fd1b-46cf-9983';
    But in case we cannot use the virtual column, how can we delete the data?
    Thanks in advance
    AnuragT
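    (One note, offered as a pointer to verify: in an object table of XMLType there is no named XML column; the stored document is addressed through the OBJECT_VALUE pseudocolumn, so the sketch below passes that instead of xml_col.)
    SELECT x.SectionType, x.MachineType, x.MachineName, x.OIDT
    FROM   po_virtual t,
           XMLTABLE ('/SectionMain'
                     PASSING t.OBJECT_VALUE
                     COLUMNS SectionType VARCHAR2 (30) PATH 'SectionType',
                             MachineType VARCHAR2 (3)  PATH 'MachineType',
                             MachineName VARCHAR2 (30) PATH 'MachineName',
                             OIDT        VARCHAR2 (30) PATH 'OID') x
    WHERE  t.oid_1 = '99dd48cf-fd1b-46cf-9983';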

  • Large number of FNDSM and FNDLIBR processes

    hi,
    Description of my system:
    Oracle EBS 11.5.10 + Oracle 9.2.0.5 + HP-UX 11.11
    Problem: there are a large number of FNDSM, FNDLIBR and sh processes, around 300 during peak load, but even at no load these processes don't come down. The Oracle processes come down from 250 to 80, but these apps processes just don't get killed automatically.
    Can I kill these processes manually?
    One more thing: even after stopping the applications with adstpall.sh, these processes don't get killed. Is that normal? For now I just dismount the database in order to kill these processes.
    And under what circumstances should I run cmclean?

    Hi,
    > there are a large number of FNDSM, FNDLIBR and sh processes during peak load, around 300, but even at no load these processes don't come down
    This means there are lots of zombie processes running, and all of these need to be killed.
    Shut down your application and database and bounce the server, as there are too many zombie processes. I once faced an issue where such zombie processes drove CPU utilization to a continuous 100%.
    Once you restart the server, start the database and the listener, run cmclean, and then start the application services.
    > even after stopping the applications with adstpall.sh, these processes don't get killed, is it normal?
    No, it's not normal and should not be neglected. I would also advise you to run the [Oracle Application Object Library Concurrent Manager Setup Test|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=200360.1].
    > and under what circumstances should I run cmclean?
    See [CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables|https://metalink2.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=134007.1].
    You can run cmclean if you find that, after starting the applications, the managers are not coming up or the actual processes do not equal the target processes.
    Thanks,
    Anchorage :)

  • BerkeleyDB + Tomcat + large number of databases.

    Hi all,
    for my bioinformatics project, I'd like to transform a large number of SQL databases (see http://hgdownload.cse.ucsc.edu/goldenPath/hg18/database/ ) into a set of read-only BerkeleyDB JE databases.
    In my web application, the Environment would be loaded in Tomcat, and one can imagine a servlet/JSP querying/browsing each database.
    So I wonder, what are the best practices?
    Should I open each JE Database for each HTTP request and close it at the end of the request?
    Or should I just leave each Database open once it has been opened? Wouldn't it be a problem if all the databases and secondary databases are open at once? Can I share one Database among multiple threads?
    Something else?
    Many thanks for your help
    Thanks in advance
    Pierre

    Hi Pierre,
    Normally you should keep the Environment and all Databases open for the duration of the process, since opening and closing a database (and certainly an environment) per request is expensive and unnecessary. However, each open database takes some memory, so if you have an extremely large number of databases (thousands or more), you should consider opening and closing the databases at each request, or for better performance keeping a cache of open databases. Whether this is necessary depends on how much memory you have and how many databases.
    You'll find the answer to your multi-threading question in the getting started guide.
    Please read the docs and also search the forum.
    --mark
