Load an existing Berkeley DB file into memory

Dear Experts,
I have created some Berkeley DB (BDB) files on disk.
I noticed that when I issue key-value retrievals, the page faults are substantial and the CPU utilization is low.
One sample of the time command-line output is as follows:
1.36user 1.45system 0:10.83elapsed 26%CPU (0avgtext+0avgdata 723504maxresident)k
108224inputs+528outputs (581major+76329minor)pagefaults 0swaps
I suspect that the bottleneck is the high frequency of file I/O.
This may be because the BDB file's pages are faulted in and out of disk fairly frequently.
I wish to explore how to reduce these page faults and hence expedite retrieval.
One approach I have read about is to load the entire BDB file into main memory.
There are some example programs on docs.oracle.com, under the heading "Writing In-Memory Berkeley DB Applications".
However, I could not get them to work.
My code is enclosed below:
--------------- start of code snippets ---------------
#include <stdio.h>
#include <string.h>
#include <db.h>

int main(void)
{
    /* Initialize our handles */
    DB *dbp = NULL;
    DB_ENV *envp = NULL;
    DB_MPOOLFILE *mpf = NULL;
    const char *db_name = "db.id_url"; /* A BDB file on disk, size 66,813,952 bytes */
    u_int32_t open_flags;
    int ret;

    /* Create the environment */
    db_env_create(&envp, 0);
    open_flags =
        DB_CREATE     | /* Create the environment if it does not exist */
        DB_INIT_LOCK  | /* Initialize the locking subsystem */
        DB_INIT_LOG   | /* Initialize the logging subsystem */
        DB_INIT_MPOOL | /* Initialize the memory pool (in-memory cache) */
        DB_INIT_TXN   | /* Initialize the transaction subsystem */
        DB_PRIVATE;     /* Region files are not backed by the filesystem;
                         * instead, they are backed by heap memory. */

    /* Specify the size of the in-memory cache:
     * 70 Mbytes, more than the BDB file size of 66,813,952 bytes. */
    envp->set_cachesize(envp, 0, 70 * 1024 * 1024, 1);

    /* Now actually open the environment. Notice that the environment home
     * directory is NULL. This is required for an in-memory only application. */
    envp->open(envp, NULL, open_flags, 0);

    /* Open the MPOOL file in the environment. */
    envp->memp_fcreate(envp, &mpf, 0);
    int pagesize = 4096;
    if ((ret = mpf->open(mpf, "db.id_url", 0, 0, pagesize)) != 0) {
        envp->err(envp, ret, "DB_MPOOLFILE->open: ");
        goto err;
    }

    /* Touch every page of the file to pull it into the cache. */
    int cnt, hits = 66813952 / pagesize;
    void *p = 0;
    for (cnt = 0; cnt < hits; ++cnt) {
        db_pgno_t pageno = cnt;
        mpf->get(mpf, &pageno, NULL, 0, &p);
    }
    fprintf(stderr, "\n\nretrieve %5d pages\n", cnt);

    /* Initialize the DB handle */
    db_create(&dbp, envp, 0);

    /* Set the database open flags. Autocommit is used because we are
     * transactional. */
    open_flags = DB_CREATE | DB_AUTO_COMMIT;
    dbp->open(dbp,        /* Pointer to the database */
              NULL,       /* Txn pointer */
              NULL,       /* File name -- NULL for in-memory */
              db_name,    /* Logical db name */
              DB_BTREE,   /* Database type (using btree) */
              open_flags, /* Open flags */
              0);         /* File mode. Default is 0 */

    DBT key, data;
    int test_key = 103456;
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = &test_key;
    key.size = sizeof(test_key);
    dbp->get(dbp, NULL, &key, &data, 0);
    printf("%d --> %s ", *((int *)key.data), (char *)data.data);

err:
    /* Close our database handle, if it was opened. */
    if (dbp != NULL)
        dbp->close(dbp, 0);
    if (mpf != NULL)
        (void)mpf->close(mpf, 0);
    /* Close our environment, if it was opened. */
    if (envp != NULL)
        envp->close(envp, 0);
    /* Final status message and return. */
    printf("I'm all done.\n");
    return 0;
}
--------------- end of code snippets ---------------
After compiling and running, the output is:
retrieve 16312 pages
103456 --> (null) I'm all done.
However, the lookup for test_key did not retrieve the correct value.
I have been reading and trying this for the past 3 days.
I will appreciate any help/tips.
Thank you for your kind attention.
WAN
Singapore

Hi Mike
Thank you for your 3 steps:
-- create the database
-- load the database
-- run your retrievals
Recall that my original intention is to load an existing BDB file (70 Mbytes) completely into memory.
So following your 3 steps above, this is what I did:
Step-1 (create the database)
I have followed the oracle article on http://docs.oracle.com/cd/E17076_02/html/articles/inmemory/C/index.html
In this step, I have created the environment, set the cachesize to be bigger than the BDB file.
However, I have some problem with the code that opens the DB handle.
The code on the oracle page is as follows:
/*
 * Open the database. Note that the file name is NULL.
 * This forces the database to be stored in the cache only.
 * Also note that the database has a name, even though its
 * file name is NULL.
 */
ret = dbp->open(dbp,      /* Pointer to the database */
                NULL,     /* Txn pointer */
                NULL,     /* File name is not specified on purpose */
                db_name,  /* Logical db name. */
                DB_BTREE, /* Database type (using btree) */
                db_flags, /* Open flags */
                0);       /* File mode. Using defaults */
Note that the open() call does not include the BDB file name.
The documentation says this is how the API knows that an in-memory database is wanted.
However, how do I tell the API the source of the existing BDB file that I wish to load entirely into memory?
Do I need to create another DB handle (non-in-memory, with a file name as argument) that reads from this BDB file, and then call DB->put() to insert the records into the in-memory DB?
Step-2 (load the database)
My question in this step-2 is the same as my last question in step-1: how do I tell the API to load my existing BDB file into memory?
That is, should I create another (non-in-memory) DB handle that reads from the existing BDB file, use a cursor to read EVERY key-value pair, and then insert each pair into the in-memory DB?
Am I correct to say that by using the cursor to read EVERY key-value pair, I am effectively warming the file cache, so that BDB retrieval performance is maximized?
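To make the question concrete, here is a sketch of what I have in mind for steps 1 and 2 (handle creation and open flags as in my first post; error handling omitted, and the handle names are only illustrative):

/* Copy every record from the on-disk BDB file into the in-memory DB.
 * disk_dbp is a handle opened on "db.id_url"; mem_dbp is a handle
 * opened with a NULL file name as in the oracle article. */
DB *disk_dbp, *mem_dbp;
DBC *curs;
DBT key, data;
memset(&key, 0, sizeof(key));
memset(&data, 0, sizeof(data));

disk_dbp->cursor(disk_dbp, NULL, &curs, 0);
while (curs->c_get(curs, &key, &data, DB_NEXT) == 0)
    mem_dbp->put(mem_dbp, NULL, &key, &data, 0); /* insert into the in-memory DB */
curs->c_close(curs);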
Step-3 (run your retrievals)
Are the retrieval APIs, e.g. c_get() and get(), the same for the in-memory DB as for the file-based DB?
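For reference, this is the sort of retrieval call I mean, here against the in-memory handle from the sketch above (same DBT setup as in my first post):

DBT key, data;
int k = 103456;
memset(&key, 0, sizeof(key));
memset(&data, 0, sizeof(data));
key.data = &k;
key.size = sizeof(k);
if (mem_dbp->get(mem_dbp, NULL, &key, &data, 0) == 0)
    printf("%d --> %.*s\n", k, (int)data.size, (char *)data.data);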
Thank you; I am always appreciative of your tips.
WAN
Singapore

Similar Messages

  • How to load a BDB file into memory?

The entire BDB database needs to reside in memory for performance reasons; it needs to be in memory all the time, not paged in on demand. The physical memory and virtual process address space are large enough to hold this file. How can I load it into memory just before accessing the first entry? I've read the C++ API reference, and it seems that I can do the following:
    1, Create a DB environment;
    2, Call DB_ENV->set_cachesize() to set a memory pool large enough to hold the BDB file;
    3, Call DB_MPOOLFILE->open() to open the BDB file in the memory pool of that DB environment;
    4, Create a DB handle in that DB environment and open the BDB file (again) via this DB handle.
    My questions are:
    1, Is there a more elegant way instead of using that DB environment? If the DB environment is a must, then:
    2, Does step 3 above load the BDB file into memory pool or just reserve enough space for that file?
    Thanks in advance,
    Feng

    Hello,
    Does the documentation on "Memory-only or Flash configurations" at:
    http://download.oracle.com/docs/cd/E17076_02/html/programmer_reference/program_ram.html
    answer the question?
    From there we have:
    By default, databases are periodically flushed from the Berkeley DB memory cache to backing physical files in the filesystem. To keep databases from being written to backing physical files, pass the DB_MPOOL_NOFILE flag to the DB_MPOOLFILE->set_flags() method. This flag implies the application's databases must fit entirely in the Berkeley DB cache, of course. To avoid a database file growing to consume the entire cache, applications can limit the size of individual databases in the cache by calling the DB_MPOOLFILE->set_maxsize() method.
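    In code, that combination looks roughly like this (environment setup omitted; the handle name is illustrative):
    DB_MPOOLFILE *mpf;
    envp->memp_fcreate(envp, &mpf, 0);          /* create the mpool file handle */
    mpf->set_flags(mpf, DB_MPOOL_NOFILE, 1);    /* keep the database out of backing files */
    mpf->set_maxsize(mpf, 0, 64 * 1024 * 1024); /* cap this database at 64 MB in the cache */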
    Thanks,
    Sandra

  • Loading a large number of strings into memory quickly

    Hello,
    I'm working on an iPhone application where I need to load a large number of strings into memory. Currently I'm simply reading from a file where each string is stored in plain text on a single line. I read the file contents into a string using stringWithContentsOfFile and then I create an NSSet object using NSSet setWithArray:[string componentsSeparatedByString:@"\n"]];
    This works like a charm but takes around 8 seconds to load on the iPhone. I'm looking for ways to speed this up. I've already tried a few things which weren't any faster:
    1) I used [NSKeyedArchiver archiveRootObject:myList toFile:appFile]; to store the NSSet data structure. Then, instead of reading the text file, I read this file using [NSKeyedUnarchiver unarchiveObjectWithFile:appFile]; This was actually very slow and created a strings file that was about 2x the size of the original plain text.
    2) Instead of using an NSSet, I used an NSDictionary with writeToFile and dictionaryWithContentsOfFile. This was also no faster.
    3) Finally I tried using the NSDictionary to write to a binary file format using NSPropertyListSerialization. This was also not any faster.
    I've been thinking about using SQLite instead of the flat file read, but I haven't had a chance to prototype that out to see if it would be faster. It's important that I can do fast searches for specific strings, which is why I originally used a set.
    Does any one else have any ideas how to load this into memory faster? If all else fails, I'm simply going to load the strings into memory using a separate thread on application launch so I don't prevent the user from getting to the main menu for 8 seconds.
    Thanks,
    -Keith

    I'd need to know more about what you're doing, but from what you're describing I think you should try to change your algorithm.
    For example: Instead of distributing one flat file, split your list of strings into 256 files, based on the first two hex digits of their MD5 hashes*. (Two digits might not be enough--you might need three or four. You may also want to use folders, especially if you need more than two digits.) When testing if a string exists, first calculate its MD5 hash and extract the necessary number of digits, then load that file into memory and scan its list. (You can cache these lists in memory so that you only have to load each file once--just make sure that a didReceiveMemoryWarning message will empty those caches.)
    Properly configured, SQLite may be faster than the eight second load time you talk about, especially if you ensure it indexes the column you store the strings in. But it's probably overkill for this application.
    * A hash is a numeric code calculated from a string; on average, changing a single bit anywhere in the string should change half the bits in the hash, so even very similar strings should generate very different hashes. I suggest using MD5 instead of -[NSString hash] because the hash method is not guaranteed to return the same results on Mac OS and iPhone OS, or on different releases of either OS. You could also use a different algorithm, like a CRC; these are faster but I'm not as familiar with them. This thread discusses calculating MD5 hashes on iPhone OS: http://discussions.apple.com/thread.jspa?messageID=7362074
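    If it helps, the bucketing itself is only a few lines in C (this sketch uses OpenSSL's MD5; the bucket file naming scheme is made up for illustration):
    #include <stdio.h>
    #include <string.h>
    #include <openssl/md5.h>

    /* Map a string to one of 256 bucket files using the first byte
     * (the first two hex digits) of its MD5 hash. */
    static void bucket_path(const char *s, char *out, size_t outlen) {
        unsigned char d[MD5_DIGEST_LENGTH];
        MD5((const unsigned char *)s, strlen(s), d);
        snprintf(out, outlen, "strings_%02x.txt", d[0]);
    }

    int main(void) {
        char path[64];
        bucket_path("hello world", path, sizeof path);
        printf("look in %s\n", path);
        return 0;
    }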

  • How to load the data from an Excel file into a table in Oracle using UTL_FILE

    How do I load data from an Excel file into a table in Oracle,
    and from a table back to an Excel file,
    using the UTL_FILE package?
    Please give me an example.

    This is something I tried in Oracle APEX:
    http://avdeo.com/2008/05/21/uploading-excel-sheet-using-oracle-application-express-apex/
    Regards,
    CKLP

  • Reading entire txt file into memory?

    When you are using BufferedReader to read info into a buffer, that means you are reading the file into memory, correct? (Is that what buffer means?)
    I want to look for pattern matches in text files (about 1000 of them) using the regex utils. But I don't want to read and examine the text files line by line. I want to read in the entire text file into memory first and then look for the pattern matches. The text files generally don't exceed about 15K in size. I'm only going one file at a time, too, so this won't give me any out of memory errors, will it?
    And more importantly, how do I do it? I mean the "reading in the file" part only. I have my RegEx, I have my array of files to examine already. I just can't figure out the right code to use to read each file into memory before I look for pattern matches.
    Could someone help, please?

    When you are using BufferedReader to read info into a buffer, that means you are reading the file into memory, correct? (Is that what buffer means?)
    Yes.
    I want to look for pattern matches in text files (about 1000 of them) using the regex utils. But I don't want to read and examine the text files line by line.
    Why not?
    I want to read in the entire text file into memory first and then look for the pattern matches.
    Why?
    The text files generally don't exceed about 15K in size. I'm only going one file at a time, too, so this won't give me any out of memory errors, will it?
    Depends on how much memory you've given the VM and how much of that it's using already at the time you read the files, but in general, probably not a problem.
    And more importantly, how do I do it? I mean the "reading in the file" part only.
    Use BufferedReader to read line by line and then append each line (plus a newline, since BR.readLine() strips those off) to a StringBuilder.
    Or use a BufferedInputStream and an array that's as big as the file, and in a loop, try to read as much as is left into that array at an offset equal to how much has been read so far.
    I still think this is probably not a good approach though.
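    For comparison, the whole-file read is the same idea in plain C: size the buffer from the file length, then read it in one call (the function name is just for illustration):
    #include <stdio.h>
    #include <stdlib.h>

    /* Read an entire file into one NUL-terminated heap buffer. */
    static char *slurp(const char *path, long *len) {
        FILE *f = fopen(path, "rb");
        if (f == NULL) return NULL;
        fseek(f, 0, SEEK_END);
        *len = ftell(f);
        rewind(f);
        char *buf = malloc(*len + 1);
        if (buf != NULL) {
            fread(buf, 1, *len, f);
            buf[*len] = '\0'; /* terminate so string/regex routines can scan it */
        }
        fclose(f);
        return buf;
    }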

  • Loading data from two flat files into one table

    Hi All,
    I want to load few fields from one file and few fields from another file and load the data into one table in the database.
    For Ex: I have two flat files, file1.csv and file2.csv; file1.csv contains 25 columns and file2.csv contains 12 columns. I want to load 20 columns from file1.csv and 7 columns from file2.csv into the table.
    How do I combine the columns from both files and load them into one table at a time?
    Any help is appreciated
    Thanks
    R.G

    Use external tables.
    If using Oracle9 or higher, connect both your CSV files to the database as separate external tables and load the required fields into the new/required table.

  • How to load the data from excel file into temprory table in Forms 11g?

    Hi
    How do I load the data from an Excel file (extension .CSV) into a temporary Oracle table in Forms 11g?
    My Forms Version is - Forms [64 Bit] Version 11.1.2.0.0 (Production)
    Kindly Suggest the Solution.
    Regards,
    Sachin

    Declare
        v_full_filename         varchar2(500);
        v_server_path           varchar2(2000);
        v_separator             VARCHAR2(1);
        v_filename              VARCHAR2(400);
        filename                VARCHAR2 (100);
        v_stop_load             varchar2 (2000);
        v_rec_error_log         varchar2(4000);
        v_error_log             varchar2(4000);
        ctr                     NUMBER (12);
        cols                    NUMBER (2);
        btn                     number;
        RES                     BOOLEAN;   
        application             ole2.obj_type;
        workbooks               ole2.obj_type;
        workbook                ole2.obj_type;
        worksheets              ole2.obj_type;
        worksheet               ole2.obj_type;
        cell                    ole2.obj_type;
        cellType                ole2.OBJ_TYPE;
        args                    ole2.obj_type;
        PROCEDURE olearg
        IS
        args   ole2.obj_type;
        BEGIN
        args := ole2.create_arglist;
        ole2.add_arg (args, ctr);                                
        ole2.add_arg (args, cols);                                   
        cell := ole2.get_obj_property (worksheet, 'Cells', args);
        ole2.destroy_arglist (args);
        END;
    BEGIN
    v_full_filename := client_get_file_name(directory_name => null
                                     ,file_name      => null
                                     ,file_filter    => 'Excel  files (*.xls)|*.xls|'
                                                        ||'Excel  files (*.xlsx)|*.xlsx|'
                                     ,message        => 'Choose Excel file'
                                     ,dialog_type    => null
                                     ,select_file    => null);
    If v_full_filename is not null Then
    v_separator := WEBUTIL_CLIENTINFO.Get_file_Separator ;
    v_filename := v_separator||v_full_filename ;
    :LOAD_FILE_NAME := substr(v_filename,instr(v_filename,v_separator,-1) + 1);                                
    -- v_server_path is assumed to hold the application-server upload directory
    RES := Webutil_File_Transfer.Client_To_AS(v_full_filename, v_server_path||substr(v_filename,instr(v_filename,v_separator,-1) + 1));
    --Begin load data from EXCEL
    BEGIN
        filename := v_server_path||substr(v_filename,instr(v_filename,v_separator,-1) + 1); -- to pick the file
        application := ole2.create_obj ('Excel.Application');
        ole2.set_property (application, 'Visible', 'false');
        workbooks := ole2.get_obj_property (application, 'Workbooks');
        args := ole2.create_arglist;
        ole2.add_arg (args, filename); -- file path and name
        workbook := ole2.get_obj_property(workbooks,'Open',args);
        ole2.destroy_arglist (args);
        args := ole2.create_arglist;
        ole2.add_arg (args, 'Sheet1');
        worksheet := ole2.get_obj_property (workbook, 'Worksheets', args);
        ole2.destroy_arglist (args);
        ctr := 2;                                                     --row number
        cols := 1;                                                -- column number
        go_block('xxx');
        FIRST_RECORD;  
        LOOP       
                --Column 1 VALUE --------------------------------------------------------------------
            olearg;
            v_stop_load := ole2.get_char_property (cell, 'Text'); --cell value of the argument
            exit when v_stop_load is null; -- assumed stop condition: first empty cell in column 1
            :item1 := v_stop_load;
            cols := cols + 1;                                                      
              --Column 2 VALUE --------------------------------------------------------------------
            olearg;
            :item2 := ole2.get_char_property (cell, 'Text'); --cell value of the argument
            cols := cols + 1;
            --<and so on>
            -- advance to the next row/record before looping (these lines close the original loop)
            next_record;
            ctr := ctr + 1;
            cols := 1;
        END LOOP;
        ole2.invoke (application, 'Quit');
        ole2.RELEASE_OBJ (cell);
        ole2.RELEASE_OBJ (worksheet);
        ole2.RELEASE_OBJ (worksheets);
        ole2.RELEASE_OBJ (workbook);
        ole2.RELEASE_OBJ (workbooks);
        ole2.RELEASE_OBJ (application);
    END;
    --End load data from EXCEL
    End If;
    END;

  • Reading a text data file into memory

    hi,
    I have a text file which contains data. The text file is parsed and objects are created. The problem is that the text file is quite huge, measuring up to 1.8~2 MB. The format of the text file is as follows:
    Units: METRIC (atm, m3)
    * Step: 1 Time: 0.00
    * Average Field Pressure : 204.14
    * Region 1 Pressure : 204.14
    Well GROUP Layer Blk_Pressure BHP ResRate OilRate WaterRate GasRate KhProd Windex Pinde WELLTYPE
    1 FIELD 1 204.14 49.33 6601.22 6568.10 37.14 538.07 99999.00 260.35 99999.00 P
    1 FIELD 2 204.14 50.34 6558.13 6525.23 36.90 534.56 99999.00 260.35 99999.00 P
    1 FIELD 3 204.14 51.35 6515.04 6482.36 36.65 531.04 99999.00 260.35 99999.00 P
    1 FIELD Tot 204.14 50.34 19674.40 19575.69 110.69 1603.67 99999.00 99999.00 99999.00 P
    2 FIELD 1 204.14 377.66 7573.96 0.00 7403.68 0.00 99999.00 260.35 99999.00 I
    2 FIELD 2 204.14 378.40 7606.33 0.00 7435.32 0.00 99999.00 260.35 99999.00 I
    2 FIELD 3 204.14 379.14 7638.70 0.00 7466.96 0.00 99999.00 260.35 99999.00 I
    2 FIELD Tot 204.14 378.40 22818.99 0.00 22305.95 0.00 99999.00 99999.00 99999.00 I
    * Step: 2 Time: 20.23
    * Average Field Pressure : 300.11
    * Region 1 Pressure : 300.11
    Well GROUP Layer Blk_Pressure BHP ResRate OilRate WaterRate GasRate KhProd Windex Pinde WELLTYPE
    1 FIELD 1 194.20 49.33 858.83 853.40 5.36 68.22 99999.00 260.35 99999.00 P
    1 FIELD 2 194.48 50.34 871.71 866.22 5.42 69.35 99999.00 260.35 99999.00 P
    1 FIELD 3 194.76 51.35 884.86 879.29 5.48 70.49 99999.00 260.35 99999.00 P
    1 FIELD Tot 194.48 50.34 2615.40 2598.91 16.26 208.06 99999.00 99999.00 99999.00 P
    2 FIELD 1 370.40 377.66 912.25 0.00 891.74 0.00 99999.00 260.35 99999.00 I
    2 FIELD 2 371.26 378.40 895.75 0.00 875.61 0.00 99999.00 260.35 99999.00 I
    2 FIELD 3 372.12 379.14 879.29 0.00 859.52 0.00 99999.00 260.35 99999.00 I
    2 FIELD Tot 371.26 378.40 2687.28 0.00 2626.86 0.00 99999.00 99999.00 99999.00 I
    The steps go on till about 3000. I am creating an object for each step, which in turn has a nested object for each well, and each well in turn one for each layer. In the above case the structure would be:
    class Step {
        class Well {             // for well 1
            class Layer { ... }  // for layer 1
            class Layer { ... }  // for layer 2
            class Layer { ... }  // for layer 3
        }
        class Well {             // for well 2
            class Layer { ... }  // for layer 1
            class Layer { ... }  // for layer 2
            class Layer { ... }  // for layer 3
        }
    }
    This architecture of mine is proving to be heavy, as I would end up with around 9000 Java objects in memory, though my classes only have int, float, and String data items. I am using this data to plot graphs, so I guess it wouldn't be optimal to read data from the text file for each plot.
    So, in short: can anyone tell me a better way to read the data into memory, given that there could be 3000 steps in the format given above?
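    For what it's worth, each parsed line boils down to a small fixed set of fields; in C terms one row is roughly the following (field names follow the table header above, and the layout is only a sketch of a flat alternative):
    /* One parsed data row. A flat array of these, sorted by step,
     * would replace the nested step/well/layer objects. */
    struct row {
        int   step, well, layer;      /* e.g. layer 0 for the "Tot" line */
        float blk_pressure, bhp, res_rate, oil_rate,
              water_rate, gas_rate, kh_prod, windex, pinde;
        char  welltype;               /* 'P' (producer) or 'I' (injector) */
    };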
    Thanks
    AM

    I have implemented this, and it takes around 30-45 sec to parse; the GUI has also become very slow. I query the objects for multiple combinations of graphs.
    The data from the objects is being used to feed the graphs on my GUI. I have a number of options on my GUI for different kinds of graphs, for each combination chosen the Objects are queried for the data. The GUI is written using Swing.
    So is there any way I can fine-tune the application? Any tips about the object architecture or how to improve the speed? I am also explicitly running the garbage collector a few times in my program. Also, how can I make the JVM occupy less memory so that my program can have more?
    Thanks
    am

  • Load XML-data from large file into plain table

    We have to load data from an XML file to a relational table row by row.
    The structure of the data is very simple:
    <ROOT>
    <repReport>
    <KNZ>XY</KNZ>
    <LA>test123</LA>
    <Ident>AA20</Ident>
    </repReport>
    <repReport>
    </repReport>
    </ROOT>.
    The table contains the fields KNZ, LA, Ident, ...
    If the file is small the following PL/SQL-code works fine:
    CREATE OR REPLACE PROCEDURE Loadxml_Epd AS
    fil BFILE ;
    len INTEGER;
    insrow INTEGER;
    ctxHdl NUMBER;
    buffer_c CLOB ;
    amt NUMBER := dbms_lob.lobmaxsize;
    src_offset NUMBER := 1 ;
    dst_offset NUMBER := 1 ;
    lang_ctx NUMBER := dbms_lob.default_lang_ctx;
    warning NUMBER;
    errNo NUMBER;
    errMsg VARCHAR2(1999);
    BEGIN
    fil := BFILENAME('XML_DIR','200412MKBReport.xml');
    DBMS_LOB.OPEN(fil, DBMS_LOB.LOB_READONLY);
    len := DBMS_LOB.GETLENGTH(fil);
    INSERT INTO INIT_CLOB(init) VALUES( EMPTY_CLOB())
    RETURNING init INTO buffer_c;
    dbms_lob.loadfromfile(buffer_c,fil, amt, dst_offset, src_offset);
    ctxHdl := dbms_xmlsave.newContext ('SST_XML_EPD');
    dbms_xmlsave.setRowTag (ctxHdl , 'repReport');
    dbms_xmlsave.SetIgnoreCase(ctxHdl , 1);
    dbms_xmlsave.SetSqlToXmlnameEscaping (ctxHdl , TRUE);
    dbms_xmlsave.propagateOriginalException(ctxHdl , TRUE);
    dbms_xmlsave.SetCommitBatch(ctxHdl , 1);
    dbms_xmlsave.SetBatchSize(ctxHdl , 1);
    insrow := DBMS_XMLSAVE.insertXML(ctxHdl,buffer_c);
    DBMS_OUTPUT.PUT_LINE('Anzahl Sätze:' || TO_CHAR(insrow,'999999999'));
    IF DBMS_LOB.ISOPEN(fil) = 1 THEN
    DBMS_LOB.CLOSE(fil);
    END IF;
    EXCEPTION
    WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE(SQLCODE);
    errMsg := SQLERRM;
    DBMS_OUTPUT.PUT_LINE(SUBSTR(errMsg,1,2000));
    IF DBMS_LOB.ISOPEN(fil) = 1 THEN
    DBMS_LOB.CLOSE(fil);
    END IF;
    END;
    But if the file is larger (50 MB in our case) this does not work. In various documents we found that we have to use another sort of XML-parser but we did not find any example how this is really done.
    Does anybody have an example how to load this data that is as simple as our problem?
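    For reference, the "other sort of parser" such documents usually mean is a streaming (SAX-style) one, which keeps memory use constant regardless of file size. A minimal C skeleton using the expat library might look like this (the row handling in the callback is left as a stub; this is a sketch, not a tested loader):
    #include <stdio.h>
    #include <string.h>
    #include <expat.h>

    /* Called at each opening tag; a real loader would collect the
     * KNZ/LA/Ident character data and insert one row per repReport. */
    static void XMLCALL start_el(void *ud, const XML_Char *name, const XML_Char **atts) {
        (void)ud; (void)atts;
        if (strcmp(name, "repReport") == 0)
            puts("start of one row");
    }

    int main(void) {
        XML_Parser p = XML_ParserCreate(NULL);
        XML_SetElementHandler(p, start_el, NULL);
        FILE *f = fopen("200412MKBReport.xml", "rb");
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            XML_Parse(p, buf, (int)n, 0); /* feed the file chunk by chunk */
        XML_Parse(p, "", 0, 1);           /* signal end of document */
        XML_ParserFree(p);
        fclose(f);
        return 0;
    }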


  • Loading files into memory

    Hi
    My question is as follows:
    The executable is loaded into a temporary directory on the user's computer when the user clicks on the link on a web page.
    How do I load the data that the program needs into the same directory, and then destroy the data upon exit of the program?
    Thank you.

    How are you getting the executable loaded?  Is it a case of click on a link and you hit save or run?
    Where are you loading the data from?
    You can use the path info of the current VI (File I/O >> File constants palette).  Strip the path enough times to get the path of the executable.  And use that in your program.  Use other file I/O functions to delete whatever file it is you are putting into the same path.

  • How to load data from a plain-text file into a database by using EJB?

    Hi,
    We need to develop an on-line application which requires us to load input data
    from a selected plain-text file on the Web server (iplanet server) into the backend
    oracle database. How could I use EJB to make it happen? it seems like using Entity
    EJB is not feasible.
    Really appreciate your help!
    Corona

    Hi Corona,
    "Corona Fang" <[email protected]> wrote in message
    news:3f3b0ea3$[email protected]..
    We need to develop an on-line application which requires us to load input data from a selected plain-text file on the Web server (iplanet server) into the backend oracle database. How could I use EJB to make it happen? It seems like using Entity EJB is not feasible.
    If you are talking about loading thousands of records, the preferable way is using plain JDBC calls utilizing batch methods.
    Regards,
    Slava Imeshev

  • How to load the content of my text file into a Vector?

    Hi!
    Two questions.
    Let's say I have used BufferedReader(new FileReader("c:/java/MyText.txt")) to read from the MyText.txt file. I want to know how to load the content of this file (every line describes one photo in my photo-set) into a Vector, and how to make the lines of MyText.txt the elements of the Vector.
    I am trying to do something like the following, but with no results.
    try {
        BufferedReader reader = new BufferedReader(new FileReader("c:/java/Applications/MyText.txt"));
        String str = reader.readLine();
        JList myList = new JList();
        StringTokenizer st = new StringTokenizer(str, "\n");
        Vector vr = new Vector(25);
        while (st.hasMoreTokens()) {
            String nextToken = st.nextToken();
            vr.addElement(nextToken);
        }
        myList.setListData(vr);
        TA1.setText(TA1.getText() + (String) myList.getSelectedValue());
    } catch (IOException evt) {}

    BufferedReader reader = new BufferedReader(new FileReader("c:/java/Applications/MyText.txt"));
    Vector photos = new Vector(0,0);
    while (reader.ready())
        photos.addElement(reader.readLine());
    good luck,
    Felipe

  • Loading the Contents of a file into a JTable

    I am trying to load the content of a delimited file (tab, comma, etc.) into a JTable. The code is very simple and works for the most part. Here is some of it:
    List tableRows = new ArrayList();
    String lineItem = "";
    InputStream in = new FileInputStream(fName);
    InputStreamReader isr = new InputStreamReader(in);
    BufferedReader br = new BufferedReader(isr);
    while ((lineItem = br.readLine()) != null) {
        if (x == MAX)
            break;
        java.util.List rowData = new ArrayList();
        StringTokenizer st = new StringTokenizer(lineItem, "\t");
        while (st.hasMoreElements())
            rowData.add(st.nextToken());
        tableRows.add(rowData);
        x++;
    }
    Basically, tableRows ends up being a List of Lists which I later use to populate the Object[][] for the JTable rows. This works perfectly!
    My problem has to do with the tokenizer. In some situations, values in a line that is read contain "". For example, a line read (quotes are shown only to distinguish values in the line):
    "item1", "item2", " ", "item4", " ", "item6", "item7"
    The tokenizer does not capture the " ". So instead of 7 tokens, it only sees 5.
    My question is, can someone tell me a better approach for tokenizing the string so that it captures everything?
    Thanks in advance,
    Augustine


  • Read XML file into memory

    Please forgive my ignorance, but I have a stupid question. This might not even be the correct forum.
    My environment is Tomcat 4.1, Oracle, Win/Solaris.
    I am coding a MVC application in JSP/Servlets. No standard framework is used.
    I have a need in one of my beans to read in some information from an xml file. Since this object is created when a search is run, it seems inefficient to read this xml document every time an instance of the object is created.
    I would like to declare this info static so that it could be used by all the instances of this object, however, I would also like this info to be updated/refreshed if I drop in a new instance of the xml file. How
    could I implement this refresh of the info?
    The info in this file is strictly read only so I would think that I would not have to have this info synchronized. (one instance would not change the information while the other was accessing)
    Any suggestions? Can you point me to a resource for more information?
    Thanks
    M

    Well this was a stupid question. After reading more about the meaning of static I found the answer. As they say RTFM
    Thanks
    M

  • Read a file without loading it into memory

    I have a VERY large file filled with words. I want to determine if a String in my program matches one of them. Loading the entire file into memory would take a long time and would use way too much memory. Is there some way to load just the part I need each time or make the comparison directly from the file on the hard disk?

    The file is static, so your plan seems to have merit.
    I've only used sequential files before, but I'm assuming this would require a random access file since you have to begin reading at a specified offset. Am I correct?
    Yes.
    In addition, if this file is fixed-length "records" so that you could programmatically determine the word offsets without having to keep a map of indexes in memory, you could just employ a search algorithm (a binary search) - you could start by defining 2 endpoints, initially at offset 0 and the offset of the last word.
    Iteratively read the word halfway between the 2 endpoints, adjusting the endpoints until a match is found or the endpoints converge.
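    A minimal C sketch of that search, over a file of fixed-length, sorted, NUL-padded records (the record length is assumed for illustration):
    #include <stdio.h>
    #include <string.h>

    #define RECLEN 32 /* assumed fixed record length */

    /* Binary search the file without loading it into memory: seek to
     * the middle record, compare, and narrow the endpoints until a
     * match is found or they converge. */
    static int find_word(FILE *f, const char *word) {
        char rec[RECLEN + 1];
        long lo = 0, hi;
        fseek(f, 0, SEEK_END);
        hi = ftell(f) / RECLEN - 1;
        while (lo <= hi) {
            long mid = lo + (hi - lo) / 2;
            fseek(f, mid * RECLEN, SEEK_SET);
            fread(rec, 1, RECLEN, f);
            rec[RECLEN] = '\0';
            int cmp = strcmp(word, rec);
            if (cmp == 0) return 1;   /* found */
            if (cmp < 0) hi = mid - 1;
            else lo = mid + 1;
        }
        return 0; /* not found */
    }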
