Bulk load of spatial objects

I have been able to load spatial point data through SQL*Loader when the data already contains a decimal point. My problem now is loading data with an implied decimal point, using a control file that looks like this:
lat_long column object
( sdo_gtype constant 2001,
  sdo_point column object
  ( x position (411:419) float external ":x/1000000",
    y position (420:427) float external ":y/1000000" )
)
That same divide-by-1000000 expression for placing the decimal point works in loads of regular (non-object) data types.
Any idea?

Sorry, no help, Ian.
I had the same problem using field references within the object part of the control file.
Does anybody know why?
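One workaround, sketched here with illustrative table and column names (it is not from this thread): load the raw values into plain NUMBER columns with an ordinary control file, then apply the implied decimal point in a post-load UPDATE that builds the geometry, much like the SQL*Loader thread further down does.

    -- Sketch only: assumes POINTS(raw_x NUMBER, raw_y NUMBER, lat_long SDO_GEOMETRY)
    UPDATE points
    SET    lat_long = sdo_geometry(
               2001, NULL,
               sdo_point_type(raw_x / 1000000, raw_y / 1000000, NULL),
               NULL, NULL)
    WHERE  raw_x IS NOT NULL
    AND    raw_y IS NOT NULL;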

Similar Messages

  • ORA-29516: Bulk load of method failed; insufficient shm-object space

    Hello,
    Just installed 11.2.0.1.0 on CentOS 5.5 64-bit. All dependencies satisfied, installation/linking went without a problem.
Server has 32GB RAM, using AMM with target set at 29GB; no swapping is occurring.
No matter what I do when loading Java code (loadjava with JARs or "create and compile java source") I keep getting the error:
    ORA-29516: Error in module Aurora: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    Checked shm-related kernel params, all seems to be normal:
    # Controls the maximum size of a message, in bytes
    kernel.msgmnb = 65536
# Controls the default maximum size of a message queue
    kernel.msgmax = 65536
    # Controls the maximum shared segment size, in bytes
    kernel.shmmax = 68719476736
    # Controls the maximum number of shared memory segments, in pages
    kernel.shmall = 4294967296
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048576
    Please help.

    Hi there,
I've stumbled into exactly the same issue on 11g. After starting the database and running loadjava on an externally
compiled class (Hello.class in my case), I got the following error:
    Error while testing for existence of dbms_java.handleMd5
    ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    ORA-06512: at "SYS.DBMS_JAVA", line 679
    Error while creating class Hello
    ORA-29516: Aurora assertion failure: Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    ORA-06512: at line 1
    The following operations failed
    class Hello: creation (createFailed)
    exiting : Failures occurred during processing
    After this, I checked the trace file and saw the following error message:
    peshmmap_Create_Memory_Map:
    Map_Length = 4096
    Map_Protection = 7
    Flags = 1
    File_Offset = 0
    mmap failed with error 1
    error message:Operation not permitted
    ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
    peshmmap_Create_Memory_Map:
    Map_Length = 4096
    Map_Protection = 7
    Flags = 1
    File_Offset = 0
    mmap failed with error 1
    error message:Operation not permitted
    ORA-04035: unable to allocate 4096 bytes of shared memory in shared object cache "JOXSHM" of size "134217728"
    Assertion failure at joez.c:3311
    Bulk load of method java/lang/Object.<init> failed; insufficient shm-object space
    It seems as though that "JOXSHM" of size "134217728" (which is 128MB) corresponds to the java_pool_size setting in my init.ora file:
    memory_target=1000M
    memory_max_target=2000M
    java_pool_size=128M
    shared_pool_size=256M
Whenever I change that size it propagates to the trace file. I also noticed that only 592MB of shm memory gets used. My df -h output:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda7 39G 34G 4.6G 89% /
    udev 10M 288K 9.8M 3% /dev
    /dev/sda5 63M 43M 21M 69% /boot
    /dev/sda4 59G 45G 11G 81% /mnt/data
    shm 2.0G 592M 1.5G 29% /dev/shm
    The only way in which I could get loadjava to work was to remove java from the database by calling the rmjvm.sql script.
    After this I installed java again by calling the initjvm.sql script. I noticed that after these scripts my shm-memory usage
    increased to about 624MB which is 32MB larger than before:
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda7 39G 34G 4.6G 89% /
    udev 10M 288K 9.8M 3% /dev
    /dev/sda5 63M 43M 21M 69% /boot
    /dev/sda4 59G 45G 11G 81% /mnt/data
    shm 2.0G 624M 1.4G 31% /dev/shm
However, after I stopped the database and started it again, my Java was broken again and calling loadjava produced
the same error message as before. The shm memory usage would also return to 592MB. Is there something I
need to do to persist the changes that initjvm and rmjvm make to the database? Or is there something else
I'm overlooking, like the memory management settings?
    Regards,
    Wiehann
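One thing worth ruling out (an assumption on my part, not a confirmed diagnosis): the JOXSHM segments are memory-mapped under /dev/shm, so a /dev/shm that is too small for the configured pools, or mounted with restrictive options, could produce exactly this "Operation not permitted" mmap failure. On the database side, a quick query shows what the java pool is actually sized to under AMM:

    -- Sketch: check the java pool's current size (11g dynamic components view)
    SELECT component,
           ROUND(current_size / 1024 / 1024) AS current_mb
    FROM   v$memory_dynamic_components
    WHERE  component = 'java pool';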

  • Bulk loading BLOBs using PL/SQL - is it possible?

    Hi -
Does anyone have a good reference article or example of how I can bulk load BLOBs (videos, images, audio, office docs/PDF) into the database using PL/SQL?
Every example I've ever seen in PL/SQL for loading BLOBs does a COMMIT after each file loaded, which doesn't seem very scalable.
Can we pass in an array of BLOBs from the application into PL/SQL, loop through that array, and then issue a commit after the loop terminates?
    Any advice or help is appreciated. Thanks
    LJ

    It is easy enough to modify the example to commit every N files. If you are loading large amounts of media, I think that you will find that the time to load the media is far greater than the time spent in SQL statements doing inserts or retrieves. Thus, I would not expect to see any significant benefit to changing the example to use PL/SQL collection types in order to do bulk row operations.
If your goal is high-performance bulk load of binary content, then I would suggest that you look at SQL*Loader (sqlldr). A PL/SQL program loading from BFILEs is limited to loading files that are accessible from the database server file system. SQL*Loader can do this, but it can also load data from a remote client, and it has parameters to control batching of operations.
    See section 7.3 of the Oracle Multimedia DICOM Developer's Guide for the example Loading DICOM Content Using the SQL*Loader Utility. You will need to adapt this example to the other Multimedia objects (ORDImage, ORDAudio .. etc) but the basic concepts are the same.
Once the binary content is loaded into the database, you will need to write a program to loop over the new content and initialize the Multimedia objects (extract attributes). The example in 7.3 contains a sample program that does this for the ORDDicom object.
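To make the commit-every-N idea concrete, here is a minimal sketch, assuming a directory object MEDIA_DIR, a table MEDIA(id NUMBER, content BLOB), and file names of the form file_1.pdf, file_2.pdf, ... (all of these names are illustrative):

    DECLARE
      v_bfile BFILE;
      v_blob  BLOB;
    BEGIN
      FOR i IN 1 .. 100 LOOP
        v_bfile := BFILENAME('MEDIA_DIR', 'file_' || i || '.pdf');
        INSERT INTO media (id, content) VALUES (i, EMPTY_BLOB())
          RETURNING content INTO v_blob;
        DBMS_LOB.OPEN(v_bfile, DBMS_LOB.LOB_READONLY);
        DBMS_LOB.LOADFROMFILE(v_blob, v_bfile, DBMS_LOB.GETLENGTH(v_bfile));
        DBMS_LOB.CLOSE(v_bfile);
        IF MOD(i, 10) = 0 THEN
          COMMIT; -- commit every N files instead of after each one
        END IF;
      END LOOP;
      COMMIT;
    END;
    /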

  • Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).

    Hi,
    I have a file where fields are wrapped with ".
    =========== file sample
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    "asdsa","asdsadasdas","1123"
    ==========
I have a .NET method that removes the wrap characters and writes out a file without them.
    ======================
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    asdsa,asdsadasdas,1123
    ======================
The .NET code is here:
    ========================================
public static string RemoveCharacter(string sFileName, char cRemoveChar)
{
    object objLock = new object();
    FileStream objInputFile = null, objOutFile = null;
    // Build the output path once so the same name can be returned afterwards
    // (the original code generated a second, different GUID in the return statement).
    string sOutFileName = sFileName.Substring(0, sFileName.LastIndexOf('\\'))
                          + "\\" + Guid.NewGuid().ToString();
    lock (objLock)
    {
        try
        {
            objInputFile = new FileStream(sFileName, FileMode.Open);
            objOutFile = new FileStream(sOutFileName, FileMode.Create);
            int nByteRead;
            // Copy byte by byte, dropping every occurrence of the wrap character.
            while ((nByteRead = objInputFile.ReadByte()) != -1)
            {
                if (nByteRead != (int)cRemoveChar)
                    objOutFile.WriteByte((byte)nByteRead);
            }
        }
        finally
        {
            if (objInputFile != null) objInputFile.Close();
            if (objOutFile != null) objOutFile.Close();
        }
    }
    return sOutFileName;
}
    ==================================
However, when I run the bulk load utility I get the error:
    =======================================
    Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 3 (NumberOfMultipleMatches).
    ==========================================
    the bulk insert statement is as follows
    =========================================
BULK INSERT Temp
FROM '<file name>'
WITH (
  FIELDTERMINATOR = ','
  , KEEPNULLS
)
    ==========================================
Does anybody know what is happening and what needs to be done?
    PLEASE HELP
    Thanks in advance 
    Vikram

    To load that file with BULK INSERT, use this format file:
    9.0
    4
    1 SQLCHAR 0 0 "\""      0 ""    ""
    2 SQLCHAR 0 0 "\",\""   1 col1  Latin1_General_CI_AS
    3 SQLCHAR 0 0 "\",\""   2 col2  Latin1_General_CI_AS
    4 SQLCHAR 0 0 "\"\r\n"  3 col3  Latin1_General_CI_AS
Note that the format file defines four fields while the file only seems to have three. The format file defines an empty field before the first quote.
    Or, since you already have a .NET program, use a stored procedure with table-valued parameter instead. I have an example of how to do this here:
    http://www.sommarskog.se/arrays-in-sql-2008.html
    Erland Sommarskog, SQL Server MVP, [email protected]
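For completeness, a sketch of how a format file like the one above would be wired into the BULK INSERT (the file paths are illustrative, and the format file is assumed to be saved as threecol.fmt):

    BULK INSERT Temp
    FROM 'C:\temp\input.txt'
    WITH (FORMATFILE = 'C:\temp\threecol.fmt', KEEPNULLS);

With the format file handling the quotes, the pre-processing .NET step is no longer needed.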

  • Sqlldr bulk loading

I'm having a strange problem bulk loading about 15k+ rows into an existing spatial table.
I use a Perl script to watch a directory into which the USGS pushes a file. The first line in the file gets parsed and entered into a table (no problem here). For the rest of the file I generate a sqlldr control file that appends the data into my table. After the insert I do a transform of the lat/long columns and make an sdo_point like so:
    $upstmt = $dbh->prepare("update shake_xyz set shape = dd832utm(lon,lat) where id = ?" );
    $upstmt->bind_param(1,$id);     
    $upstmt->execute() or die "cant update! $DBI::errstr\n";
If a spatial index does not exist on the table, there's no problem.
If one does and I don't use a direct path load, there's no problem (except that this takes way longer than I'd like).
When I try to do a direct path load with a spatial index, I get the following rather distressing message:
    SQL*Loader-926: OCI error while uldlfca:OCIDirPathColArrayLoadStream for table SHAKE_XYZ
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (ptc_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (ptc_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (pts_hp)
    ORA-03114: not connected to ORACLE
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (pts_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (sbpt_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtExecute (sbpt_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (pts_hp)
    ORA-24338: statement handle not executed
    SQL*Loader-925: Error while uldlgs: OCIStmtFetch (sbpt_hp)
    ORA-24338: statement handle not executed
I've truncated the message; it keeps repeating for another 50+ lines.
Here is what my input file looks like:
    elvis[kxv4]{8}%     head grid-9.xyz     
    51111345 5.5 39.81 -120.64 AUG 10 2001 20:19:27 GMT -121.883 38.9833 -119.383 40.65 (Process time: Thu Sep 26 09:24:45 2002)
    -121.8833 40.6500 0.9135 0.4790 3.0900 1.3490 0.5076 0.1056
    -121.8666 40.6500 0.7209 0.3115 2.8600 1.0659 0.3301 0.0688
    -121.8500 40.6500 0.7195 0.3110 2.8600 1.0650 0.3297 0.0689
    -121.8333 40.6500 0.9087 0.4771 3.0800 1.3465 0.5059 0.1059
    -121.8166 40.6500 0.7170 0.3102 2.8600 1.0638 0.3290 0.0690
    -121.8000 40.6500 0.7161 0.3100 2.8600 1.0640 0.3287 0.0690
    -121.7833 40.6500 0.7156 0.3098 2.8600 1.0649 0.3287 0.0690
    -121.7666 40.6500 0.7156 0.3099 2.8600 1.0666 0.3288 0.0690
    -121.7500 40.6500 0.7162 0.3103 2.8600 1.0693 0.3292 0.0690
Here is what my control file looks like:
    elvis[kxv4]{10}% head -20 51111345grid-9.xyz
    LOAD DATA
    INFILE *
    INTO TABLE shake_xyz
    APPEND
    FIELDS TERMINATED BY ',' TRAILING NULLCOLS
    ( ID,
    LON,
    LAT,
    PGA,
    PGV,
    MMI,
    PSA1 NULLIF PSA1=BLANKS,
    PSA2 NULLIF PSA2=BLANKS,
    PSA3 NULLIF PSA3=BLANKS,
OBJECTID SEQUENCE(MAX,1)
)
    BEGINDATA
    51111345,-121.8833,40.6500,0.9135,0.4790,3.0900,1.3490,0.5076,0.1056
    51111345,-121.8666,40.6500,0.7209,0.3115,2.8600,1.0659,0.3301,0.0688
For earthquakes with magnitude less than 5 the PSA% columns are blank ...
I'm woefully ignorant of sqlldr and how it works with indexes.
    any advice appreciated
    thanks
--kassim

thanks dan,
that's a great idea (I'm using 9.2.0.4).
Each event is assigned a unique id by the USGS, but this number does not look like a sequence (higher numbers do not correspond to more recent events). I also assign a sequence number to the event_id, which I keep track of in an event table that has only one entry for each event instead of the 15k entries in the shake table.
    time to read up on partitions...
    thanks again; you rock!
--kassim
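For anyone hitting the same wall: direct-path loads generally cannot maintain domain (spatial) indexes, which is consistent with the load succeeding when the index is absent. A common pattern, sketched here with an illustrative index name, is to drop the spatial index, run the direct path load, and re-create the index afterwards:

    -- Sketch only: the index name and sqlldr options are illustrative
    DROP INDEX shake_xyz_sidx;
    -- host: sqlldr userid=... control=51111345grid-9.xyz direct=true
    CREATE INDEX shake_xyz_sidx ON shake_xyz (shape)
      INDEXTYPE IS mdsys.spatial_index;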

  • Bulk Loader Program to load large xml document

I am looking for a bulk loader database program that will load a very large XML document. The simple bulk loader application available on the Oracle site will not load this document due to its size, which is approximately 20 MB. Please advise asap. Thank you.

    From the above document:
    Storing XML Data Across Tables
    Question
    Can XML- SQL Utility store XML data across tables?
    Answer
Currently the XML-SQL Utility (XSU) can only store to a single table. It maps a canonical representation of an XML document into any table/view. But of course there is a way to store XML with the XSU across tables. One can do this using XSLT to transform any document into multiple documents and insert them separately. Another way is to define views over multiple tables (object views if needed) and then do the inserts ... into the view. If the view is inherently non-updatable (because of complex joins, ...), then one can use INSTEAD-OF triggers over the views to do the inserts.
    -- I've tried this, works fine.
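A minimal sketch of the INSTEAD-OF trigger approach described above (all table, view, and column names are illustrative):

    CREATE OR REPLACE VIEW order_full_v AS
      SELECT o.order_id, o.customer_id, i.item_id, i.qty
      FROM   orders o JOIN order_items i ON i.order_id = o.order_id;

    CREATE OR REPLACE TRIGGER order_full_ioi
    INSTEAD OF INSERT ON order_full_v
    FOR EACH ROW
    BEGIN
      -- split one canonical XSU row across the two base tables
      MERGE INTO orders o
      USING (SELECT :NEW.order_id    AS order_id,
                    :NEW.customer_id AS customer_id FROM dual) s
      ON (o.order_id = s.order_id)
      WHEN NOT MATCHED THEN
        INSERT (order_id, customer_id) VALUES (s.order_id, s.customer_id);
      INSERT INTO order_items (order_id, item_id, qty)
      VALUES (:NEW.order_id, :NEW.item_id, :NEW.qty);
    END;
    /

XSU can then insert into order_full_v as if it were a single table.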

  • Bulk-loading performance

I'm loading Twitter stream data into JE. There are about 2 million data pieces daily, each about 1K. I have a user class and a twit (status) class, and for each twit I update the user; I also have secondaries on twits for replies, and use DPL. In fact this is all in Scala, but it works with JE just fine, as it should. Since each twit insertion updates its user, e.g. with the total count of twits per user incremented, originally I had a transaction for each UserTwit insertion and several threads working on inserting, similar to the architecture I developed first for PostgreSQL. However, that was too slow. So then I switched to a single thread, no transactions, and deferred write. Here's what happens with that: the loading works very quickly through all twits, in about 10-20 minutes, and then spends about 1-2 hours on store.sync; store.close; env.sync; env.close. Do I need to sync both if I have only one DPL store and nothing else in this environment, and/or do I lose any more time with two syncs? Should I do anything special to stop the checkpointing thread or the cleaning one?
I already have 2,000+ small 10M jdb files, and wonder how I can agglomerate them into, say, 1 GB files each, since this is about how much the database grows daily.
Overall, the PostgreSQL performance is about 2-4 hours per bulk load, similar to BDB JE. I implemented exactly the same loading logic with the PG and BDB backends, and hoped that BDB would be faster, but for now it is not by an order of magnitude... And this is given that PG doesn't use a RAM cache, while with JE I specify the cache size of 50 GB explicitly, and it takes about 15 GB of RAM when quickly going through the put phase, before hanging for an hour or two in sync.
    The project, tfitter, is open source, and is available at github:
    http://github.com/alexy/tfitter/tree/master
    I use certain tricks to convert the Java classes from and back to Scala's, but all the time is spent in sync, so it's a JE question --
    I'd appreciate any recommendations to make it faster with the JE.
    Cheers,
    Alexy

Alexy,
A few of us were talking about your question and had some more options to add. Without more detailed data, such as the stats obtained from Environment.getStats() or the thread dumps as Charles and Gordon (gojomo) suggested, our suggestions are a bit hypothetical.
Gordon's point about GC options and Charlie's suggestion of je.checkpointer.highPriority are CPU oriented. Charlie's point about EntityStore.sync vs Environment.sync is also in that category. You should try those suggestions because they will certainly reduce the workload some. (If you need to essentially sync up everything in an environment, it is less overhead to call Environment.sync, but if only some of the entity stores need syncing, it is more worthwhile to call EntityStore.sync.)
    However, your last post implied that you are more I/O bound during the sync phase. In particular, are you finding that you have a small number of on-disk files before the call to sync, and a great many afterwards? In that case, the sync is dumping out the bulk of the modified objects at that time, and it may be useful to change the .jdb file size during this phase by setting je.log.fileMax through EnvironmentConfig.setConfigParam().
JE issues an fsync at the boundary of each .jdb file, so increasing the .jdb file size dramatically can reduce the number of fsyncs and improve your write throughput. As a smaller, secondary benefit, JE stores some metadata on a per-file basis, and increasing the file size can reduce that overhead, though generally that is a minor issue. You can see the number of fsyncs issued through Environment.getStats().
    There are issues to be careful about when changing the .jdb file size. The file is the unit of log cleaning. Increasing the log file size can make later log cleaning expensive if that data becomes obsolete later. If the data is immutable, that is not a concern.
    Enabling the write disk cache can also help during the write phase.
    Again, send us any stats or thread dumps that you generate during the sync phase.
    Linda
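A minimal sketch of the je.log.fileMax suggestion above (the environment path and the 1 GB value are illustrative; 1 GB is the parameter's documented maximum):

    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class BulkLoadEnv {
        public static Environment open(File envHome) {
            EnvironmentConfig cfg = new EnvironmentConfig();
            cfg.setAllowCreate(true);
            cfg.setTransactional(false);          // matches the no-transaction bulk phase
            cfg.setConfigParam("je.log.fileMax",  // larger log files -> fewer per-file fsyncs
                               String.valueOf(1024L * 1024 * 1024));
            return new Environment(envHome, cfg);
        }
    }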

  • ORA-06502 during bulk load

    I am using v11.2 with the new Jena adapter.
I am trying to upload data from a bunch of N-Triple files to the triple store via the bulk load interface in the Jena adapter, a.k.a. bulk append. The code does something like this:
    while(moreFiles exist)
    readFilesToMemory;
    bulkLoadToDatabase using the options "MBV_JOIN_HINT=USE_HASH PARALLEL=4"
    Loading the first set of triples goes well. But when I try to load the second set of triples, I get the exception below.
    Some thoughts:
1) I don't think this is a data problem, because I uploaded all the data during an earlier test, and when I upload the same data on an empty database it works fine.
2) I saw some earlier posts with a similar error, but none of them seem to be using the Jena adapter.
3) The model also has an OWL Prime entailment in incremental mode.
4) I am not sure if this is relevant, but... before I ran the current test, I mistakenly launched multiple Java processes that bulk loaded the data. Of course I killed all the processes and dropped the sem_models and the backing RDF tables they were uploading to.
    EXCEPTION
    java.sql.SQLException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
    ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 3164
    ORA-06512: at "MDSYS.SDO_RDF_INTERNAL", line 4244
    ORA-06512: at "MDSYS.SDO_RDF", line 276
    ORA-06512: at "MDSYS.RDF_APIS", line 693
    ORA-06512: at line 1
    at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:204)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
    at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:191)
    at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:950)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1222)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3387)
    at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3488)
    at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:3840)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1086)
    at oracle.spatial.rdf.client.jena.Oracle.executeCall(Oracle.java:689)
    at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addInBulk(OracleBulkUpdateHandler.java:740)
    at oracle.spatial.rdf.client.jena.OracleBulkUpdateHandler.addInBulk(OracleBulkUpdateHandler.java:463)
    at oracleuploadtest.OracleUploader.loadModelToDatabase(OracleUploader.java:84)
    at oracleuploadtest.RunOracleUploadTest.main(RunOracleUploadTest.java:81)
    thanks!
    Ram.

The addInBulk method needs to be called twice to trigger the bug. Here is a test case that passes only while the bug is present! (It is there to remind me to remove the workaround code when the fix gets through to my code.)
@Test
public void testThatOracleBulkBugIsNotYetFixed() throws SQLException {
     char nm[] = new char[22 - TestDataUtils.getUserID().length()
                             - TestOracleHelper.ORACLE_USER.length()];
     Arrays.fill(nm, 'A');
     // actual name is TestDataUtils.getUserID() + "_" + nm
     TestOracleHelper helper = new TestOracleHelper(new String(nm));
     GraphOracleSem og = helper.createGraph();
     Node n = RDF.value.asNode();
     Triple triples[] = new Triple[] { new Triple(n, n, n) };
     try {
          og.getBulkUpdateHandler().addInBulk(triples, null);
          // Oracle bug hits on second call:
          og.getBulkUpdateHandler().addInBulk(triples, null);
     } catch (SQLException e) {
          if (e.getErrorCode() == 6502) {
               return; // we have a work-around for this expected error
          }
          throw e; // some other problem
     }
     Assert.fail("It seems that an Oracle update (has the ora jar been updated?) resolves a silly bug - please modify BulkLoaderExportMode");
}
    Jeremy

  • Converting large amounts of points - 76 million lat/lon's to spatial object...

    Hello, I need help.
Platform: Oracle 11g 64-bit on Windows Server 2008 Enterprise 64-bit; 64 GB of RAM with 2 CPUs totalling 24 cores.
Does anyone know of a fast way to convert large amounts of points to a spatial object? I need to convert 76 million lat/lon's to ESRI st_geometry or Oracle sdo_geometry.
    Currently, I have setup code using pipelined parallel functions and multiple jobs that run concurrently.  It still takes over 2.5 hours to process all of the points.
    Any pointers would be GREATLY appreciated!
    Thanks
    John

    Hi,
    Where is the lat/lon data at the moment?  In an external text file or in an existing database table as number attributes?
    If they're in an external text file, then I'd probably use an external table to load them in as quickly as possible.
    If they're in an existing database table, then you can just update the sdo_geometry column using:
    update <table> set <geometry column> = sdo_geometry(2001, <your srid>, sdo_point_type(<lon column>, <lat column>, null), null, null)
    where <lon column> is not null
    and <lat column> is not null;
That should run very quickly for you. If you want to avoid the overhead of creating redo, you could use "create table ... as select ...". This example of creating 1,000,000 points runs in 9 seconds for me:
    create table sample_points (geometry) nologging as
      (select sdo_geometry(2001, null,
      sdo_point_type(
      trunc(100000 * dbms_random.value()),
      trunc(100000 * dbms_random.value()),
      null), null, null)
      from dual connect by level <= 1000000);
"I have setup code using pipelined parallel functions and multiple jobs that run concurrently"
You shouldn't need to use PL/SQL for this task. If you find you do, then provide some sample code and we'll take a look.
    Regards,
    John O'Toole
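Extending the CTAS suggestion to the 76-million-row case, a sketch (the source table name, SRID, and degree of parallelism are illustrative):

    -- Sketch: direct-path, parallel, no redo; size the degree to the 24 cores
    CREATE TABLE points_geom PARALLEL 16 NOLOGGING AS
    SELECT t.id,
           sdo_geometry(2001, 4326,
                        sdo_point_type(t.lon, t.lat, NULL),
                        NULL, NULL) AS geometry
    FROM   source_points t
    WHERE  t.lon IS NOT NULL
    AND    t.lat IS NOT NULL;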

  • IDM Bulk Load of ROLES

Can ROLES be bulk loaded by creating the necessary XML?
If so, where does this XML need to be stored for the new ROLES to be picked up by IdM...? randy

    Hey Randy.
    Sorry I didn't get back to this. Offered to help then basically left the building...!
    Here's a sample of what I was talking about. Basically, if you have a bunch of individual XML files where 1-file = 1-Role, then you can import a bunch at once with a "wrapper" file like this:
    <?xml version='1.0' encoding='UTF-8'?>
    <!DOCTYPE Waveset PUBLIC 'waveset.dtd' 'waveset.dtd'>
    <Waveset>
    <ImportCommand name="include" file="myDirectory/myRoleFile-1.xml"/>
    <ImportCommand name="include" file="myDirectory/myRoleFile-2.xml"/>
    </Waveset>
    So in your environment...
-- have the "myDirectory" be the relative path (e.g. under $WSHOME (or %WSHOME%)) where you put your Role files. Maybe they're in 'bin' or some new directory.
    -- the "myRoleFile-1.xml" is just the name you gave a file in that directory for the file that contains the valid EXPRESS for Role-1 in my example.
    You can have as many of these ImportCommands as you like in the "wrapper" file, though I'd try this with only 1-2 to make sure it works and then get more ambitious with more files represented in the "wrapper" file.
Just be sure you're naming the objects uniquely. IdM will automatically generate the new ID value (e.g. the internal GUID), but you'll start overwriting objects if you don't remember to rename each one in the EXPRESS that's in each text file...
    As always with a change like this -- I'd strongly recommend backing up your DEV environment first. Then you can go to Configure -> Import Exchange File and find/select your "wrapper" file. Should work just fine.
    Good luck!

  • SSRS 2005 report: Cannot bulk load Operating system error code 5(Access is denied.)

    I built a SSRS 2005 report, which calls a stored proc on SQL Server 2005. The proc contains following code:
    CREATE TABLE #promo (promo VARCHAR(1000))
BULK INSERT #promo
FROM '\\aseposretail\c$\nz\promo_names.txt'
WITH
(
    --FIELDTERMINATOR = '',
    ROWTERMINATOR = '\n'
)
    SELECT * from #promo
    It's ok when I manually execute the proc in SSMS.
    When I try to run the report from BIDS I got following error:
Cannot bulk load because the file "\\aseposretail\c$\nz\promo_names.txt" could not be opened. Operating system error code 5(Access is denied.).
Note: I have googled a bit and seen many questions on this, but they are not relevant because I CAN run the code with no problem in SSMS. It's SSRS that is having the issue. I know little about the security side of SSRS.

I'm having the same type of issue. I can bulk load the same file into the same table on the same server using the same login on one workstation, but not on another. I get this error:
    Msg 4861, Level 16, State 1, Line 1
    Cannot bulk load because the file "\\xxx\abc.txt" could not be opened. Operating system error code 5(Access is denied.).
I've checked SQL client versions and they are the same. I've also set the client connection to TCP/IP only in the SQL Server Configuration Manager. Still this one workstation is getting the error. Since the same login is being used on both workstations and it works on one but not the other, the issue is not a permissions issue. I can also have another user log in to the bad workstation and have the bulk load fail, but when they log into their regular workstation it works fine. Any ideas on what the client configuration issue is? These are the version numbers for Management Studio:
    Microsoft SQL Server Management Studio 9.00.3042.00
    Microsoft Analysis Services Client Tools 2005.090.3042.00
    Microsoft Data Access Components (MDAC) 2000.085.1132.00 (xpsp.080413-0852)
    Microsoft MSXML 2.6 3.0 5.0 6.0
    Microsoft Internet Explorer 6.0.2900.5512
    Microsoft .NET Framework 2.0.50727.1433
    Operating System 5.1.2600
    Thanks,
    MWise
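A hypothesis worth testing, though it is not confirmed in this thread: when the connection authenticates with Kerberos, SQL Server can delegate the caller's credentials to the file share, while with NTLM it cannot, and the remote file open fails with error 5. Running this from each workstation shows which scheme its connection is using:

    -- Sketch: compare the result from the working and the failing workstation
    SELECT c.auth_scheme, s.login_name
    FROM   sys.dm_exec_connections c
    JOIN   sys.dm_exec_sessions    s ON s.session_id = c.session_id
    WHERE  c.session_id = @@SPID;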

  • How to UPDATE a big table in Oracle via Bulk Load

    Hi all,
In a datastore target on Oracle 11g, I have a big table with 300 million records; the structure is one integer key + 10 attribute columns.
In the IQ source I have the same table with the same size; the structure is one integer key + 1 attribute column.
What I need to do is UPDATE that single field in Oracle from the values stored in IQ.
Any idea on how to organize the dataflow and the target writing mode efficiently? Bulk load? API?
    thank you
    Maurizio

    Hi,
You cannot do a bulk load when you need to UPDATE a field, because all a bulk load does is add records to your table.
Since you have to UPDATE a field, I would suggest going for SCD with
source > TC > MO > KG > target (Table Comparison, Map Operation, Key Generation)
    Arun
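An alternative sketch, separate from Arun's suggestion (table and column names are illustrative): bulk-load the IQ extract into a staging table in Oracle, then apply the change set-wise with one MERGE rather than row-by-row updates.

    -- Sketch: staging table filled by the bulk loader, then a single set-based MERGE
    MERGE INTO big_target t
    USING iq_staging s
    ON (t.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET t.attr1 = s.attr1;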

  • Error while running bulk load utility for account data with CSV file

    Hi All,
I'm trying to run the bulk load utility for account data using a CSV file, but I'm getting the following error...
    ERROR ==> The number of CSV files provided as input does not match with the number of account tables.
    Thanks in advance........

    Please check your child table.
    http://docs.oracle.com/cd/E28389_01/doc.1111/e14309/bulkload.htm#CHDCGGDA
    -kuldeep

  • Bulk Load option doesn't work

    Hi Experts,
I am trying to load data to HFM using the Bulk Load option but it doesn't work. When I change the option to SQL Insert, the loading is successful. The logs say that the temp file is missing, but when I go to the specified location I see the control file and the tmp file. What am I missing to have bulk load working? Here's the log entry.
    2009-08-19-18:48:29
    User ID...........     kannan
    Location..........     KTEST
    Source File.......     \\Hyuisprd\Applications\FDM\CRHDATALD1\Inbox\OMG\HFM July2009.txt
    Processing Codes:
    BLANK............. Line is blank or empty.
    ESD............... Excluded String Detected, SKIP Field value was found.
    NN................ Non-Numeric, Amount field contains non numeric characters.
    RFM............... Required Field Missing.
    TC................ Type Conversion, Amount field could be converted to a number.
    ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
    Create Output File Start: [2009-08-19-18:48:29]
    [TC] - [Amount=NN]     Batch Month File Created: 07/2009
    [TC] - [Amount=NN]     Date File Created: 8/6/2009
    [TC] - [Amount=NN]     Time File Created: 08:19:06
    [Blank] -      
    Excluded Record Count.............. 3
    Blank Record Count................. 1
    Total Records Bypassed............. 4
    Valid Records...................... 106093
    Total Records Processed............ 106097
    Begin Oracle (SQL-Loader) Process (106093): [2009-08-19-18:48:41]
    [RDMS Bulk Load Error Begin]
         Message:      (53) - File not found
         See Bulk Load File:      C:\DOCUME~1\fdmuser\LOCALS~1\Temp\tWkannan30327607466.tmp
    [RDMS Bulk Load Error End]
    Thanks
    Kannan.

    Hi Experts,
I am facing a data import error while importing data from a .csv file to an FDM-HFM application.
    2011-08-29 16:19:56
    User ID...........     admin
    Location..........     ALBA
    Source File.......     C:\u10\epm\DEV\epm_home\EPMSystem11R1\products\FinancialDataQuality\FDMApplication\BMHCFDMHFM\Inbox\ALBA\BMHC_Alba_Dec_2011.csv
    Processing Codes:
    BLANK............. Line is blank or empty.
    ESD............... Excluded String Detected, SKIP Field value was found.
    NN................ Non-Numeric, Amount field contains non numeric characters.
    RFM............... Required Field Missing.
    TC................ Type Conversion, Amount field could be converted to a number.
    ZP................ Zero Suppress, Amount field contains a 0 value and zero suppress is ON.
    Create Output File Start: [2011-08-29 16:19:56]
    [ESD] ( ) Inter Co,Cash and bank balances,A113000,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],1
    [ESD] ( ) Inter Co,"Trade receivable, prepayments and other assets",HFM128101,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],35
    [ESD] ( ) Inter Co,Inventories ,HFM170003,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],69
    [ESD] ( ) Inter Co,Financial assets carried at fair value through P&L,HFM241001,Actual,Alba,Dec,2011,MOF,MOF,,YTD,Input_Default,[NONE],[NONE],[NONE],103
    [Blank] -      
    Excluded Record Count..............4
    Blank Record Count.................1
    Total Records Bypassed.............5
    Valid Records......................0
    Total Records Processed............5
    Begin SQL Insert Load Process (0): [2011-08-29 16:19:56]
    Processing Complete... [2011-08-29 16:19:56]
    Please help me solve the issue.
    Regards,
    Sudhir Sinha

  • Bulk Insert Task Cannot bulk load because the file could not be opened.operating system error error code 3(The system cannot find the path specified.)

I am getting the following error after I changed the path in the config file from
\\vs01\d$\Deployment\Files\temp.txt
to
C:\Deployment\Files\temp.txt
    [Bulk Insert Task] Error: An error occurred with the following error message: "Cannot bulk load because the file "C:\Deployment\Files\temp.txt" could not be opened. Operating system error code 3(The system cannot find the path specified.).". 

I think I know what's going on. The Bulk Insert task runs by executing a SQL command (BULK INSERT) internally from the target SQL Server to load the file. This means that the service account of the target SQL Server should have permissions on the file you are trying to load. It also means that you need to use a UNC path to specify the file path (if the target server is on a different machine).
    Also from BOL (see section Usage Considerations - last bullet point)
    http://msdn.microsoft.com/en-us/library/ms141239.aspx
    * Only members of the sysadmin fixed server role can run a package that contains a Bulk Insert task.
    Make sure you take care of this as well.
    HTH
    ~Mukti
