Which Java collection for a large amount of data and user-customizable records

I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) from different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second throughout the whole day). I want this data stored in some embedded database (SQLite, HSQLDB), accessed using plain JDBC. Could you give me some advice on how to design those data structures? Sincerely yours :)
OK, maybe I should give an example. This is some C++ code.
I made an interface:
class CParamI {
public:
     virtual string toString() = 0;
     virtual void addValue( CParamI * ) = 0;
     virtual void setValue( CParamI * ) = 0;
     virtual BYTE getType() = 0;
     virtual ~CParamI() {}
};
Then I made a template class derived from the CParamI interface:
template <class T>
class CParam : public CParamI {
public:
     void setValue( T val );
     T getValue();
     string toString();
     void addValue( CParamI *src );
     BYTE getType() { return itemType; }
     void setValue( CParamI *src ) {
          if ( itemType == src->getType() ) {
               CParam<T> *ptr = (CParam<T>*)src;
               value = ptr->value;
          }
     }
private:
     BYTE itemType;
     T value;
};
Sample constructor of the <int> specialization:
template<> CParam<int>::CParam() {
     itemType = ParamType::INTEGER;
}
This solution makes it possible for me to write a collection of CParamI pointers:
std::vector<CParamI*> myCollection;
CParam<int> *pi = new CParam<int>();
pi->setValue(10);
myCollection.push_back((CParamI*)pi);
Is this a correct solution? My main problem is getting data back out of the collection: I have to check each item's data type using the getType() method of the CParamI interface.
Could you please give me some advice or an idea on how to do this properly in Java?
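For reference, here is a minimal Java sketch of roughly the same design, assuming an enum type tag instead of BYTE (the names ParamType, Param and TypedParam are illustrative, not an existing API). Note that Java generics are erased at runtime, so an explicit type tag (or instanceof checks) is still needed to recover the concrete type when reading items back out of the collection.

// A minimal Java sketch of the same idea, assuming an enum type tag instead of BYTE.
// The names ParamType, Param and TypedParam are illustrative only.
import java.util.ArrayList;
import java.util.List;

enum ParamType { INTEGER, FLOAT, BOOLEAN, STRING }

interface Param {
    ParamType getType();
    String asString();
}

class TypedParam<T> implements Param {
    private final ParamType type;
    private T value;

    TypedParam(ParamType type, T value) { this.type = type; this.value = value; }

    public ParamType getType() { return type; }
    public String asString()   { return String.valueOf(value); }
    public T getValue()        { return value; }
    public void setValue(T v)  { value = v; }
}

class RecordDemo {
    public static void main(String[] args) {
        // one user-defined record: a list of typed parameters
        List<Param> record = new ArrayList<>();
        record.add(new TypedParam<>(ParamType.INTEGER, 10));
        record.add(new TypedParam<>(ParamType.STRING, "pump #3"));
        for (Param p : record) {
            System.out.println(p.getType() + " = " + p.asString());
        }
    }
}

Because of type erasure this still relies on the getType() tag at runtime, much as the C++ version does; the tag is also what would be mapped to a column type when persisting the record over JDBC.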

If you have the requirement that you must be able to configure the record on the fly, then what I've done in the past is simply put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector stores your data and the String names its data type. I would then write a checker to validate the input against the SQL datatypes I want to support in the project. That's not a big deal with the amount of data you are talking about.
The problem you're going to have is when you try to allow dynamic, on-the-fly definition of data being inserted into a table that has already been defined. Your DB will not support that, unless you just store the data pair itself, which I do not suggest.
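A minimal sketch of that pair idea, assuming a small helper class (Column, accepts and createTableSql are illustrative names, not an existing API): each user-defined field carries its values in a plain List together with the SQL type string it was declared with, and the type strings both validate the input and generate the CREATE TABLE statement for the embedded database.

// Hedged sketch of the "(data, type)" pair approach described above; names are illustrative.
import java.util.ArrayList;
import java.util.List;

class Column {
    final String name;
    final String sqlType;               // e.g. "INTEGER", "REAL", "BOOLEAN", "VARCHAR(255)"
    final List<Object> data = new ArrayList<>();

    Column(String name, String sqlType) { this.name = name; this.sqlType = sqlType; }

    // Small checker: accept only values compatible with the declared SQL type.
    boolean accepts(Object value) {
        switch (sqlType) {
            case "INTEGER": return value instanceof Integer;
            case "REAL":    return value instanceof Float || value instanceof Double;
            case "BOOLEAN": return value instanceof Boolean;
            default:        return value instanceof String;   // VARCHAR and friends
        }
    }

    void add(Object value) {
        if (!accepts(value)) throw new IllegalArgumentException(name + ": wrong type");
        data.add(value);
    }

    // Build a CREATE TABLE statement from the user-defined columns.
    static String createTableSql(String table, List<Column> cols) {
        StringBuilder sb = new StringBuilder("CREATE TABLE " + table + " (");
        for (int i = 0; i < cols.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append(cols.get(i).name).append(' ').append(cols.get(i).sqlType);
        }
        return sb.append(')').toString();
    }
}

With around 100,000 records of at most 200 fields this fits comfortably in memory, and the generated statement can be executed once over JDBC against SQLite or HSQLDB, with a PreparedStatement batch used for the inserts.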

Similar Messages

  • Is the only way to import a large amount of data and database objects into a primary database to shut down the standby, turn off archive log mode, do the import, and then rebuild the standby?

    I have a primary database into which I need to import a large amount of data and database objects. 1) Do I shut down the standby? 2) Turn off archive log mode? 3) Perform the import? 4) Rebuild the standby? Or is there a better way or best practice?

    Instead of rebuilding the (whole) standby, you can take an incremental (from SCN) backup from the Primary and restore it on the Standby. That way, for example:
    a. if only two out of 12 tablespaces are affected by the import, the incremental backup would effectively contain only the blocks changed in those two tablespaces (plus some changes in SYSTEM and UNDO), provided that there are no other changes in the other ten tablespaces;
    b. if the size of the import is only 15% of the database, the incremental backup to restore to the standby is correspondingly small.
    Hemant K Chitale

  • Java NIO - reading large amount of data

    Hi,
    I am having difficulties reading a large amount of data with SocketChannel (using a direct-allocated buffer and a normal allocated one). Files greater than 300 KB are cut off even though I tried writing the data into a FileChannel.
    My Code:
    ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(150000);
    buffer = ByteBuffer.allocate(6000000);
    out = new FOS("d:\\msgData.tmp");
    fc = out.getFOS().getChannel(); // FileChannel
    int fileLength = (int) fc.size();
    while (clientChannel.read(directBlockBuffer) > 0) {
         directBlockBuffer.flip();
         buffer.put(directBlockBuffer);
         directBlockBuffer.compact();
    }
    // close data file
    buffer.flip();
    fc.write(buffer);
    fc.close();
    out.close();
    // end of code
    Any ideas?
    Thanks
    AST

    I don't understand how the "write" result will help read the whole data.
    Anyway, I changed the code so the SocketChannel reads in smaller chunks (~8 KB) and the FileChannel writes on every read,
    but the data stream is cut again (to ~5 KB no matter what size of file I send).
    In the updated code, when I try to compare socketChannel.read() to -1, I get an endless loop.
    I'm basically trying to write a POP3/SMTP server program; this part of the code handles an attachment that is received by the SocketChannel in one unit (i.e. 1+ MB of data; the other SMTP commands/lines are no more than 27 chars and simple to handle).
    Therefore I need to be ready to accept a large amount of data into the buffer and write it to the FileChannel. (In the POP3 thread I'm using MappedByteBuffer successfully.)
    Updated code:
    ByteBuffer directBlockBuffer = ByteBuffer.allocateDirect(8192);
    while (clientChannel.read(directBlockBuffer) > 0 && directBlockBuffer.hasRemaining()) {
         directBlockBuffer.flip();
         fc.write(directBlockBuffer);
         directBlockBuffer.clear();
    }
    I think that based on the API my code is logical (and fine for small files), but what about handling bigger files (up to 5 MB)?
    Thanks,
    AST 
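    For what it's worth, a minimal sketch of the usual pattern for this kind of copy loop, assuming a connected blocking SocketChannel (clientChannel) and a target path; this is not the original FOS helper. The key points are to keep reading until read() returns -1 (end of stream) rather than stopping at 0, and to write each flipped chunk out completely before clearing the buffer, so the file size no longer matters.

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.channels.SocketChannel;

    public class ChannelCopy {
        // Drain a SocketChannel to a file chunk by chunk (hedged sketch, names illustrative).
        public static void copyToFile(SocketChannel clientChannel, String path) throws IOException {
            ByteBuffer buf = ByteBuffer.allocateDirect(8192);
            try (FileOutputStream fos = new FileOutputStream(path);
                 FileChannel fc = fos.getChannel()) {
                // read() returns -1 only at end of stream; on a blocking channel it never spins at 0
                while (clientChannel.read(buf) != -1) {
                    buf.flip();
                    while (buf.hasRemaining()) {
                        fc.write(buf);        // a single write may not consume the whole buffer
                    }
                    buf.clear();
                }
            }
        }
    }

    For an SMTP DATA body the real termination condition is the <CRLF>.<CRLF> marker rather than end of stream, but the buffer handling is the same.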

  • Mail for large amount of data

    Hey guys,
    I just switched to Mac (and I love it ;)) and now I am searching for a mail program to host my office POP mail account. I normally get around 5000 messages a month, totalling over 3 GB, mostly pictures.
    Can Mail handle that amount of data, or will it corrupt my inbox?
    I am asking because under Windows, OE couldn't handle that amount of data, and neither could Thunderbird before the update to 1.5.

    Hi Ernie,
    Thanks for your ideas. More below:
    "Mick, the only place I know for sure that discusses this can be found at: http://docs.info.apple.com/article.html?artnum=25812"
    Yeah, I read this when I had a couple of In Boxes blow up a year or two ago. Sadly, it wasn't really up to date and I'd seen no warning or otherwise prior that this could happen. These days I try to keep the IN BOX as slim and trim as possible. Just wish I knew what the real specs were on how much is too much?
    "Other than an individual xxxx.mbox folder, I know of no limit that would apply, either to the total size of the Mail folder, nor to the number of messages."
    What's the limit for an individual xxxx.mbox folder?
    "Having said that, I never rely solely upon the Mail files to archive important attachments. I also keep my Mail folder active on more than one Mac, and thus on more than one hard drive. I do not, however, sync general Sent messages between the various Macs, on the theory that other backup practices protect any information that I would ever attach to send. Of course some On My Mac mailboxes are archiving both received and sent messages for certain subject areas."
    Sent messages are important to most of the businesses I support, so they all need to be there. Nice thought about archiving attachments. I wonder if it's possible to create a rule or Automator flow that would do that? Any attachment over 500k would get archived and then deleted from the message. I wonder if that's possible and simple enough...
    "My own Mail folder is in excess of 5 GB, and my largest individual xxxx.mbox barely exceeds 1 GB."
    Is each folder created "On My Mac" one mbox?
    Again, she has in excess of 20 GB and I'm worried about all that weight in the program. Makes me long for the good old days of Eudora, which could handle massive sizes like that.
    Thanks, Ernie.
    Cheers,
    Mick

  • How do I remove spaces or special characters within a cell for large amounts of data

    Is there any shortcut to remove spaces between words and numbers within a cell?
    Example:
    Current: .5 lt PET (6)
    Need: .5ltPET(6)
    Is there any shortcut to remove special characters between numbers within a cell?
    Example:
    Current: 0--000--000--0
    Need: 00000000

    Thanks Wayne.
    I have been away from using Numbers and Excel for 4-5 years, so it is slowly coming back to me. I gather that I need to use the SUBSTITUTE function; however, I am having trouble getting it to work.
    My Data
    ST PAULI 12/12 NR
    $27.16
    12oz NR(12)
    0--80660--95937--5
    ST PAULI 4/6/12 NR
    $28.76
    12oz NR(6)
    0--80660--95935--1
    ST PAULI DK 12/12 NR
    $0.00
    12oz NR(12)
    0--000--000--0
    ST PAULI DK 4/6/12 NR
    $28.76
    12oz NR(6)
    0--80660--95945--0
    ST PAULI N/A 4/6/12 NR
    $20.66
    12oz NR(6)
    0--80660--95955--9
    CAYMAN JACK 4/6/12 NR
    $29.12
    12oz NR(6)
    8--15829--01006--8
    CAYMAN JACK 8OZ/12PK CAN
    $23.18
    8oz CAN(12)
    8--15829--01061--7
    TGIF LIIT 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01043--3
    TGIF MARGARITA 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01047--1
    TGIF PINA COLADA 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01045--7
    TGIF STRAWBERRY 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01042--6
    BALLAST PT BIG EYE IPA 1/2 BBL
    $190.00
    KEG 1984oz (1/2 KEG)
    0--000--000--0
    BALLAST PT BIG EYE IPA 1/6 BBL
    $73.00
    KEG 660.1oz (1/6 KEG)
    0--000--000--0
    BALLAST PT BIG EYE IPA 4/6/12 CAN
    $33.00
    12oz CAN(6)
    6--72438--00052--7
    There are many more, but this is enough to show you. I need to remove all spaces from the first and third columns, and I need to remove all "--" from the fourth. Where do I put the SUBSTITUTE function, and what are source-string, existing-string, new-string, and occurrence?
    Thank You for your help.
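    For reference, a short sketch of how those arguments line up (the cell references are illustrative): source-string is the cell to clean, existing-string is the text to find, new-string is its replacement, and occurrence (optional) limits the change to the nth match; leaving it out replaces every match, which is what is wanted here. In a helper column you would use something like:
    =SUBSTITUTE(A2, " ", "")     removes every space from the text in A2
    =SUBSTITUTE(D2, "--", "")    removes every double hyphen from D2
    The formula can then be filled down the helper column and the results copied back if needed.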

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the imp/exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32 GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
    "I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities." Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g, you can also compress the dumpfiles as you are exporting. (Both data and metadata compression are available in 11g; I think metadata compression is available in 10.2.) This will remove your zip step.
    As far as transportable tablespaces go, yes, this is an option. There are some requirements, but if it works for you, all you will be exporting is the metadata and no data. The data is copied from the source to the target by way of the datafiles. One of the biggest requirements is that the tablespaces need to be read only while the export job is running. This is true for both exp/imp and expdp/impdp.

  • XML-Export Error for large amount of data

    Hi there...
    I have an application process, which runs on demand (Button) and which exports data (from sql query) into a file (.xls).
    The result is being formatted and the export works fine as long as the query returns just a small amount of data, approx. 8 to 10 rows.
    As the result is being stored in a clob, I output the data with "htp.prn" in a loop by "cutting" the clob into small pieces (varchar).
    However, as soon as the amount is bigger than the mentioned 8 to 10 rows, I get an error (sqlerrm:ORA-06502: PL/SQL: numeric or value error).
    I guess there must be something wrong with my loop or the way I "cut" the clob into pieces and output them.
    Maybe someone has a hint for me on where exactly to look!?
    Thanks in advance...
    Johnny
    My code is posted in the reply below (I removed parts of it which are not important for this issue).

    Thanks for the hint Paul,
    here is my code in code-tags.
    I appreciate any help!
    Johnny
    declare
    l_xml_header varchar2(32767);
    l_xml_body clob;
    l_xml_text varchar2(32767);
    l_xml_footer varchar2(32767);
    runner number;
    clob_size number;
    begin
    runner := 2;
    owa_util.mime_header( 'application/octet', FALSE);
    htp.p('Content-Disposition: attachment; filename="Test.xls"');
    owa_util.http_header_close;
    l_xml_header := '<?xml version="1.0" encoding="utf-8"?>'||chr(10)||
    '<?mso-application progid="Excel.Sheet"?>'||chr(10)||
    '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:o="urn:schemas-microsoft-com:office:office"'||chr(10)||
    'xmlns:x="urn:schemas-microsoft-com:office:excel"'||chr(10)||
    'xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:html="http://www.w3.org/TR/REC-html40">'||chr(10)||
    '<DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">'||chr(10)||
    '<Version>1.0</Version>'||chr(10)||
    '</DocumentProperties>'||chr(10)||
    '<ExcelWorkbook xmlns="urn:schemas-microsoft-com:office:excel">'||chr(10)||
    '<WindowHeight>8580</WindowHeight>'||chr(10)||
    '<WindowWidth>15180</WindowWidth>'||chr(10)||
    '<WindowTopX>120</WindowTopX>'||chr(10)||
    '<WindowTopY>45</WindowTopY>'||chr(10)||
    '<ProtectStructure>False</ProtectStructure>'||chr(10)||
    '<ProtectWindows>False</ProtectWindows>'||chr(10)||
    '</ExcelWorkbook>'||chr(10)||
    '<Styles>'||chr(10)||
    '<Style ss:ID="Default" ss:Name="Normal">'||chr(10)||
    '<Alignment ss:Vertical="Bottom"/>'||chr(10)||
    '<Borders/>'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss"/>'||chr(10)||
    '<Interior/>'||chr(10)||
    '<NumberFormat/>'||chr(10)||
    '<Protection/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s22">'||chr(10)||
    '<Font x:Family="Swiss" ss:Bold="1"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s67">'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss" ss:Color="#FFFFFF"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s157">'||chr(10)||
    '<Borders/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s158">'||chr(10)||
    '<Borders>'||chr(10)||
    '<Border ss:Position="Right" ss:LineStyle="Continuous" ss:Weight="1"/>'||chr(10)||
    '</Borders>'||chr(10)||
    '</Style>'||chr(10)||
    '</Styles>';
    for z in 1..1
    loop
      l_xml_body:=l_xml_body||'<Worksheet ss:Name="Worksheet1"> <Table x:FullColumns="1" x:FullRows="1" ss:DefaultColumnWidth="60">';
      l_xml_body:=l_xml_body||'<Row><Cell ss:StyleID="s163"><Data ss:Type="String">Colum1</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum2</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum3</Data></Cell>'||
                              '<Cell ss:StyleID="s163"><Data ss:Type="String">...</Data></Cell>'||
                              '<Cell ss:StyleID="s166"><Data ss:Type="String">ColumN</Data></Cell></Row>';
      for z in (
      select
       a."Col1",
       a."Col2",
       b."Col3",
       b."ColN"
      from table1 a,
           table2 b
      where a.id = b.id )
      loop
          l_xml_body := l_xml_body||'<Row><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col1||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col2||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.Col3||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                             ...  ||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
                            z.ColN||'</Data></Cell>';
          l_xml_body := l_xml_body||'</Row>'||chr(10);
          runner := runner + 1;  
    end loop;
        l_xml_body := l_xml_body||'</Table>';
    end loop;
    clob_size           := dbms_lob.getlength(l_xml_body);
    htp.prn(l_xml_header);
    for i in 1..ceil(clob_size / 32767)
    loop
       l_xml_text := dbms_lob.SUBSTR (l_xml_body, 32767, v_count);
       HTP.prn (l_xml_text);
       v_count := v_count + 32767;
    end loop;
    htp.prn('</Worksheet></Workbook>');
    HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
       EXCEPTION
          WHEN OTHERS
          THEN
             OWA_UTIL.mime_header ('application/octet', FALSE);
             HTP.prn ('Content-Disposition: attachment; filename="Test.xls"');
             OWA_UTIL.http_header_close;
             HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
    end;

  • Query For Large Amount of Data

    Hello All,
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database designing. My goal is to create an inquiry screen for a database of people.
    I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, and Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows in it.
    Currently I have it set up so that the person types in the name they want to search for, and it gets passed into a hidden page item on the next page, where there is a report with a select statement based on the page item. Everything works right now; however, it is slow. I am seeing a 5-10 second delay before the results come up.
    My question is: is there a better way to set up these tables? What is the best way to make this faster?
    I'm sorry if this is a vague question, but any help, or a point in the right direction, will be greatly appreciated.
    Thank You !

    976533 wrote: "Hello All,"
    Welcome to the forum: please read the FAQ and forum sticky threads (if you haven't done so already), and update your forum profile with a real handle instead of "976533".
    When you have a problem you'll get a faster, more effective response by including as much relevant information as possible upfront. This should include:
    <li>Full APEX version
    <li>Full DB/version/edition/host OS
    <li>Web server architecture (EPG, OHS or APEX listener/host OS)
    <li>Browser(s) and version(s) used
    <li>Theme
    <li>Template(s)
    <li>Region/item type(s) (making particular distinction as to whether a "report" is a standard report, an interactive report, or in fact an "updateable report" (i.e. a tabular form)
    With APEX we're also fortunate to have a great resource in apex.oracle.com where we can reproduce and share problems. Reproducing things there is the best way to troubleshoot most issues, especially those relating to layout and visual formatting. If you expect a detailed answer then it's appropriate for you to take on a significant part of the effort by getting as far as possible with an example of the problem on apex.oracle.com before asking for assistance with specific issues, which we can then see at first hand.
    "I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database designing. My goal is to create an inquiry screen for a database of people."
    It might be more appropriate for the {forum:id=75} forum, so you should look at the following entries in their FAQ as well:
    <li>{message:id=9360002}
    <li>{message:id=9360003}
    "I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, and Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows in it.
    Currently I have it set up so that the person types in the name they want to search for, and it gets passed into a hidden page item on the next page, where there is a report with a select statement based on the page item. Everything works right now; however, it is slow. I am seeing a 5-10 second delay before the results come up.
    My question is: is there a better way to set up these tables? What is the best way to make this faster?"
    Are there suitable indexes on the tables?
    Does the report query use them?
    As described above, either: reproduce the problem on apex.oracle.com; or post DDL to allow us to recreate the tables and indexes, and the SQL from your report.

  • Sort algorithm for LARGE amount of data?

    hi,
    I need a sorting scheme for the following situation:
    I have a data file where entries are in chunks of variable length. The size of each chunk is defined in the first 5 bytes as a string, so the length can be from 00001-99999, though it is usually around 1000-3000 bytes long. In reality it is never over 10000 bytes, but it is possible for it to be.
    Anyways, I need to sort these files according to the data found at certain displacements in these chunks. I will be sorting anywhere from 200,000 to 100,000,000 at a time. Time is an issue certainly, but if it takes a week to finish that is fine, I just need it to work.
    So, my problem is that none of the typical sorts will work for me (bubble, heap) as far as I can tell, because in those sorts I need to have the data loaded into memory, and this much data will overload the system. I have used, in the past, a C method that feeds these chunks to the sort function a few at a time, then makes files. Hence, not all chunks need to be loaded at once. Does anyone know of any solution to this problem? Any sort algorithms or sort classes that can handle this much data? Thanks!

    Ever tried the radix sort? It's got linear complexity.
    You can still work a chunk at a time, and simply separate the data into several different "buckets", each one identified by, oh, say, the unicode number for the first character in the chunk.
    You now have several smaller lists to sort, and when you're done, NO MERGING IS NECESSARY. Simply append the lists, because the main sets of lists are already sifted into different "buckets".
    Kinda like this:
    Create 256 files, and store in each one every record whose first character corresponds to that file's ASCII value. Then create 256 files for each of the original 256 files, and store in each one every record whose second character corresponds to it,
    etc., etc., etc.
    This is very memory intensive for storage, but in terms of run-time complexity it is linear: you will make a fixed number of passes through the list of data. And, as you go along, the lists get shorter and shorter. So while it appears that you are making 256 ^ (max length of data) passes, you're really only making (max length of data) passes, with some additional overhead for creating the extra files.
    For that much data, I would definitely recommend a linear algorithm. Any other sorts would be extremely slow.
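    A rough Java sketch of that first distribution pass, under the assumption that the 5-byte ASCII prefix gives the length of the chunk that follows it and that the sort key is a single byte at some fixed offset inside the chunk (the file names, KEY_OFFSET and class name are illustrative, not from the original post): read one chunk at a time and append it to one of 256 bucket files, so nothing ever has to be held in memory.

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class BucketPass {
        static final int KEY_OFFSET = 10;   // assumed position of the key byte within a chunk

        public static void main(String[] args) throws IOException {
            DataOutputStream[] buckets = new DataOutputStream[256];
            for (int i = 0; i < 256; i++) {
                buckets[i] = new DataOutputStream(new BufferedOutputStream(
                        new FileOutputStream("bucket_" + i + ".dat")));
            }
            try (DataInputStream in = new DataInputStream(
                     new BufferedInputStream(new FileInputStream("input.dat")))) {
                byte[] lenField = new byte[5];
                while (true) {
                    try {
                        in.readFully(lenField);                 // 5-byte ASCII length, e.g. "01234"
                    } catch (EOFException eof) {
                        break;                                  // clean end of input
                    }
                    int len = Integer.parseInt(new String(lenField, StandardCharsets.US_ASCII));
                    byte[] chunk = new byte[len];
                    in.readFully(chunk);
                    int key = chunk[KEY_OFFSET] & 0xFF;         // pick the bucket by one key byte
                    buckets[key].write(lenField);
                    buckets[key].write(chunk);
                }
            } finally {
                for (DataOutputStream b : buckets) b.close();
            }
        }
    }

    Each bucket can then be sorted in memory (or split again on the next key byte if it is still too big), and the sorted buckets are simply concatenated in order, which is the no-merging point made above.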

  • Using Siebel-OPA connector BO mapping for large amount of data

    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks

    nilskil wrote:
    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks
    For passing lots of data between OPA and Siebel, I would definitely recommend using an IO mapping. You will find it faster, and the returned IO XML will also be easier to deal with.
    Cheers
    Frank

  • How to save large amount of data and memory management

    hi guys,
    I prepared a VI to take temperature and pressure measurements simultaneously from an Agilent 34970A with a 34901A. The sampling rate is 1 sample per second for each channel (20 temperature channels and 2 pressure channels (current)). The device has 20 channels and the test will most probably last 2 days, so that means 86,400 samples per day in total. Could the attached VI do this task? I really do not know how "Write to Text File.vi" works and uses the memory of the PC. And after collecting all the data, could I safely export it to Microsoft Office Excel to analyse? I also do not know whether I can run this VI without interruption; is it possible? I am open to any other suggestions relevant to the VI, too. And I wonder whether it is possible to build a standalone application to take measurements from DAQ devices (here the Agilent 34970A-34901A) on a PC without LabVIEW?
    Egemen
    Attachments:
    Agilent_Labview_v2.4.vi ‏85 KB

    Hey,
    What I meant by "check for an error while measuring" is the following:
    you start the program.
    If one of your Agilent VIs returns an error (maybe a bad initialisation or so), you will still try to continue to measure.
    Even if it is not working, you still continue measuring. Instead, you could also check at the end of each loop whether an error occurred, like this:
    this will only look for an error and stop right away. If you encounter typical errors with the apparatus, you might want to handle them. A sufficient way to do so might be a state machine that just re-initializes the setup if an error occurs and then continues with its task. Check out this article: http://www.ni.com/white-paper/3024/en
    Anyway, I would recommend just trying it out and seeing what problems arise, as you might otherwise spend hours improving code that already works well (:

  • How to optimize DELETE for large number of data?

    Hi Guys,
    I am creating a DELETE SQL statement for deleting a large amount of data, approximately a million records. When I tested it, it took a large amount of execution time. Is there a way I can optimize this query?
    I would highly appreciate any ideas from you guys!
    Thanks,
    Jay

    JayPavia wrote:
    "I am creating a DELETE SQL statement for deleting a large amount of data, approximately a million records. When I tested it, it took a large amount of execution time. Is there a way I can optimize this query? I would highly appreciate any ideas from you guys!"
    Jay,
    one quite nice approach is described in the AskTom thread: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2345591157689#2356959585102
    In a nutshell: You can create a partitioned table with the same layout (using e.g. a RANGE partitioning with a single default partition), insert the remaining data into that table and swap the two segments using ALTER TABLE EXCHANGE PARTITION.
    This allows you to load the data into the partitioned table using all means available to you, e.g. direct-path parallel insert with nologging attribute of the table to make it as fast as possible, rebuilding/creating any indexes in bulk after the load completed.
    Furthermore, at least seamless read-access to the table is possible while you're performing this operation, and you don't have to deal with renaming/granting etc. which you would need to do with a traditional CTAS approach.
    Of course, if the table needs to be online for DML while performing the DELETE you can't use these options.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Looking for ideas for transferring large amounts of data between systems

    Hello,
    I am looking for ideas based on best practices for transferring Large Amounts of Data in and out of a Netweaver based application.
    We have a new system we are developing in Netweaver that will utilize both the Java and ABAP stack, and will require integration with other SAP and 3rd Party Systems. It is a standalone product that doesn't share any form of data store with other systems.
    We need to be able to support 10s of millions of records of tabular data coming in and out of our system.
    Since we need to integrate with so many different systems, we are planning to use RFC as our primary interface in and out of the system. As it turns out, RFC is not good at dealing with such a large amount of data being pushed through a single call.
    We have considered a number of possible ideas, however we are not very happy with any of them. I would like to see what the community has done in the past to solve problems like this as well as how SAP currently solves this problem in other applications like XI, BI, ERP, etc.


  • How do I pause an iCloud restore for app with large amounts of data?

    I am using an iPhone app which is holding 10 GB of data (media files).
    Unfortunately, although all the data was backed up, my iPhone 4 was faulty and needed to be replaced with a new handset. On restore, the 10 GB of data takes a very long time to restore over Wi-Fi. If I interrupt it (I reached the halfway point during the night) to go to work or take the dog for a walk, I end up, of course, on 3G for a short period of time.
    The next time I am in a Wi-Fi zone, the app starts restoring again right from the beginning.
    How does anyone restore an app with large amounts of data or pause a restore?


  • My phone is using large amounts of data; when I go to System Services, it is my mapping services that are causing it. What are mapping services and how do I switch them off? I really need help.

    My phone is using large amounts of data; when I go to System Services, it is my mapping services that are causing it. What are mapping services and how do I switch them off? I really need help.

    I have the same problem. I switched off Location Services, Maps under cellular data, and whatever else Maps could be involved in, and then just last night it chewed through 100 MB... I'm also on Vodacom, so I'm seeing a pattern here somehow. Siri was switched on, however, so I have switched it off now and will see what happens. But I'm going to go into both Apple and Vodacom this afternoon, because this must be sorted out; it's a serious issue we have on our hands and some uproar needs to be made against those responsible!
