Storing more than 15 lakh records in a TreeMap

Hi All,
I have a requirement to cache more than 15 lakh records in a TreeMap. When I tested it, it took around one hour to load 15 lakh records.
Is there a better way to cache this volume of records?
The 15 lakh records may grow in future, so my application should be able to cache and process an even larger volume.
Please suggest how to proceed with this.
Thanks in advance.

Yes, it takes around one hour.
Please find the sample code below:

public void fetchFromDB(TreeMap<String, String> values) throws SQLException {
    String query = "My Query";
    try {
        // con, stmt and rs are assumed to be instance fields holding the JDBC objects
        stmt = con.prepareStatement(query);
        rs = stmt.executeQuery();
        while (rs.next()) {
            String name = null;
            String id = null;
            if (rs.getString(1) != null) {
                name = rs.getString(1).trim();
            }
            if (rs.getString(2) != null) {
                id = rs.getString(2).trim();
            }
            values.put(name, id);
        }
    } finally {
        // close rs and stmt here
    }
}

I don't think there is much in this code that would make it slow, but please let me know if I am going wrong somewhere.
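
If it helps, a common reason a load like this takes an hour is the JDBC driver's default fetch size (often only a handful of rows per network round trip), not the TreeMap itself. Below is a minimal sketch of the loading loop with a larger fetch size and each column read only once per row; the query, column positions and fetch size are placeholders, not taken from the original post:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Map;
import java.util.TreeMap;

public class BulkLoader {

    // Placeholder query; substitute the real statement.
    private static final String QUERY = "SELECT name, id FROM some_table";

    public static Map<String, String> fetchAll(Connection con) throws SQLException {
        Map<String, String> values = new TreeMap<>();
        try (PreparedStatement stmt = con.prepareStatement(QUERY)) {
            // Pull rows from the database in large batches instead of the
            // driver's (often tiny) default, cutting network round trips.
            stmt.setFetchSize(5000);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // Read each column once rather than calling getString twice.
                    String name = rs.getString(1);
                    String id = rs.getString(2);
                    if (name != null) { // TreeMap does not allow null keys
                        values.put(name.trim(), id == null ? null : id.trim());
                    }
                }
            }
        }
        return values;
    }
}

If the sorted order of a TreeMap is not actually needed, a HashMap created with a large initial capacity avoids the per-entry comparisons as well, but in my experience the fetch size is usually the bigger win.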

Similar Messages

  • System.ArgumentException: Illegal characters in path when processing more than 1 lakh records

    Hi All,
    I am having trouble processing files in C#: it keeps failing with an error saying there is an illegal character in the path, although I am using the regular expression below in another method to check for and remove illegal characters.
    Regex pattern = new Regex("\\/:*?\"<>|");
    Some of the records are processed correctly, i.e. they are fetched from the database and written to the file system using the BinaryWriter class.
    Can anybody help me resolve this error?
    Note: there are more than 1 lakh records to be processed.

    Hi Michael Taylor,
    I used log4net to log the errors so that my loop could keep running until all of those records were processed.
    Also, System.IO.Path.GetFileName(filename) was throwing the error, so I now check the filename with a regular expression and replace any illegal characters before calling System.IO.Path.GetFileName(filename).
    The regular expression I use to replace the illegal characters is
    Regex illegalInFileName = new Regex(@"[\\/:*?""<>|]");
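
    The same sanitize-before-use idea translates directly to other languages. A small illustrative Java sketch (the replacement character and class name are my own, not from the thread):

    import java.util.regex.Pattern;

    public final class FileNames {

        // Characters that Windows does not allow in file names.
        private static final Pattern ILLEGAL_IN_FILE_NAME = Pattern.compile("[\\\\/:*?\"<>|]");

        /** Replaces every illegal character with an underscore before the name is used in a path. */
        public static String sanitize(String rawName) {
            return ILLEGAL_IN_FILE_NAME.matcher(rawName).replaceAll("_");
        }
    }

    For example, sanitize("report: Q1/Q2.txt") returns "report_ Q1_Q2.txt", which can then be passed safely to the path APIs.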

  • Update more than one lakh records

    Hi, I have a table which contains around 1 million records. When the end user wants to update more than one lakh of them, what would be the best way to make the update fast? Currently it takes about 1 minute for 10k records. Thanks in advance.

    SQL> drop table t;
    Table dropped.
    SQL>
    SQL> create table t
      2  as
      3  select rownum col1
      4  from   dual
      5  connect by rownum <= 10000;
    Table created.
    SQL>
    SQL> create index i1 on t (col1);
    Index created.
    SQL>
    SQL> select n.name, m.value
      2  from   v$mystat m, v$statname n
      3  where  m.statistic# = n.statistic#
      4  and    name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                          39009532
    SQL>
    SQL> update t set col1 = 56789;
    10000 rows updated.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> select n.name, m.value
      2  from   v$mystat m, v$statname n
      3  where  m.statistic# = n.statistic#
      4  and    name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                          46229748
    SQL>
    SQL> drop table t;
    Table dropped.
    SQL>
    SQL> create table t
      2  as
      3  select rownum col1
      4  from   dual
      5  connect by rownum <= 10000;
    Table created.
    SQL>
    SQL> create index i1 on t (col1);
    Index created.
    SQL>
    SQL> alter index i1 nologging;
    Index altered.
    SQL>
    SQL> select n.name, m.value
      2  from   v$mystat m, v$statname n
      3  where  m.statistic# = n.statistic#
      4  and    name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                          46295228
    SQL>
    SQL> update t set col1 = 56789;
    10000 rows updated.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> select n.name, m.value
      2  from   v$mystat m, v$statname n
      3  where  m.statistic# = n.statistic#
      4  and    name = 'redo size';
    NAME                                                                  VALUE
    redo size                                                          53515540
    SQL> select 46229748-39009532, 53515540-46295228 from dual;
    46229748-39009532 53515540-46295228
              7220216           7220312
    SQL>
    (The two deltas are essentially identical - making the index NOLOGGING does not reduce the redo generated by the update.)

  • SAP_CONVERT_TO_CSV_FORMAT for more than 2 lakh records

    Hi All,
    I am trying to upload a file to the application server in .csv format.
    Using the FM SAP_CONVERT_TO_CSV_FORMAT, I convert the itab values to .csv format, and then upload the file to the application server using OPEN DATASET.
    This logic works fine if the number of records in the internal table is small.
    If the size is larger, I get the short dump 'No more storage space available for extending an internal table'.
    Please suggest how to increase the size of this internal table, and please share sample code.
    type-pools: truxs.
    data: t_conv_data type truxs_t_text_data.
    Regards,
    Niranjan
    Moderator message: very common problem, please search for available information.
    Edited by: Thomas Zloch on Feb 24, 2011 1:04 PM

    Thanks for all your replies.
    Keshu,
    The drawback of using 'ALSM_EXCEL_TO_INTERNAL_TABLE' in more than one pass is the amount of looping that needs to be done to get the data into my internal table. And my program will not be run in the background, so there is a timeout risk.
    Sekhar,
    I cannot call TEXT_CONVERT_XLS_TO_SAP in a loop, as the Excel file is a single file and that FM has no end-row logic.
    Karthik,
    I think the new Excel .xlsx format can hold more than 65536 rows.

  • How to fetch more than one lakh records in 1 or 2 seconds

    Please help, it's urgent.
    I need to retrieve records from several tables; there are more than one lakh records and the query takes more than 20 seconds. How can I get the time down to about one second?
    My SQL:
    My sql:
    SELECT
    tl.ProjectID,
    pr.jobname,
    name as Department_name,
    ChargeNum,
    (ac.ActivityCode ||':'||ac.SubCode) as ActivityCodeName,
    SUM(HoursWorked), (Case When isBilled=1 or billedRate<>0 then BilledRate else ppr.Rate End) as RATE
    FROM
    TimeLogEntries tl INNER JOIN activitycodes ac on ac.ACTIVITYCODEID=tl.ACTIVITYCODEID INNER JOIN projectrates ppr on tl.ACTIVITYCODEID = ppr.ACTIVITYCODEID and tl.projectid=ppr.projectid ,
    projects pr INNER JOIN departments d on d.DEPARTMENTID =pr.REVENUECENTERID
    WHERE
    to_char(Date_,'yyyy-mm-dd') BETWEEN '2006-01-01' and '2008-12-30'
    AND
    tl.ProjectID = pr.ProjectID
    Group By
    tl.ProjectID,
    tl.ActivityCodeID,
    BilledRate,
    ChargeNum,
    pr.jobname,
    name,
    (ac.ActivityCode ||':'||ac.SubCode),
    (Case When isBilled=1 or billedRate<>0 then BilledRate else ppr.Rate End)
    ORDER BY
    tl.ChargeNum;

    Hi,
    Even I am searching for something similar.
    I want to have 3 calendars on one page, but I keep getting the message "calendar already exists on page 2. You can only add one calendar per page. Select a different page."
    Please help.

  • Not able to update more than 10,000 records in CT04 for a characteristic

    Hi all,
    We are not able to update more than 10,000 records in CT04 for a certain characteristic.
    Is there any possible way to do this?
    Please advise; it is a production issue.
    Thanks.

    Hello,
    Please consider using a check table for the characteristic involved if you are working with a large number of assigned values.
    With a check table you can work with a huge number of values, and performance should also improve.
    Please refer to the link
    http://help.sap.com/saphelp_erp60_sp/helpdata/en/ec/62ae27416a11d1896d0000e8322d00/frameset.htm
    Section - Entering a Check Table
    Hopefully this information helps.
    Thanks,
    Enda.

  • Handling internal table with more than 1 million record

    Hi All,
    We are facing the short dump 'storage parameters wrongly set'.
    Basically the dump occurs because an internal table holds more than 1 million records. We have increased the storage parameter size from 512 to 2048, yet the dump still happens.
    Please advise whether there is any other way to handle these kinds of internal tables.
    P.S.: we have tried using a hashed table; that does not suit our scenario.
    Thanks and Regards,
    Vijay

    Hi
    Your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
    Hope this code helps.
    G_PACKAGE_SIZE = 50000.
    " Use a DB cursor to fetch the data in packages.
    OPEN CURSOR WITH HOLD DB_CURSOR FOR
      SELECT *
        FROM ZTABLE.
    DO.
      FETCH NEXT CURSOR DB_CURSOR
        INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
        PACKAGE SIZE G_PACKAGE_SIZE.
      IF SY-SUBRC NE 0.
        CLOSE CURSOR DB_CURSOR.
        EXIT.
      ENDIF.
      " Process and clear IT_ZTABLE here before fetching the next package.
    ENDDO.

  • Script logic generates more than 300,000 records

    Hi Experts,
    When I run my logic I get this in the formula log:
    (More than 300,000 records. Details are not being logged)
    Ignoring Status
    Posting ok
    I checked my script; it pulls out 422,076 records in total.
    Does this mean I cannot post more than 300,000 records?
    Is there anywhere I can set the MAX number of records a single script run can generate?
    Thanks.

    You should use
    *XDIM_MAXMEMBERS dimension = number of members to be processed at a time
    For example
    *XDIM_MAXMEMBERS Entity = 50
    Figure out which dimension has the most members and use that one; this splits your script logic into sections.
    I hope that helps.
    Leandro Brasil

  • Increase performance of a query on more than 10 million records significantly

    The story is:
    Every day there are more than 10 million records whose data arrives as text files (.csv (comma-separated value) extension, or other formats).
    An example text file, transaction.csv, looks like:
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc. ... more than 10 million rows
    From transaction.csv the data is then split into 3 RAM (in-memory) tables:
    1st: table nation (nation_id, nation_desc)
    2nd: table operator (operator_id, operator_desc)
    3rd: table area (area_id, area_desc)
    These 3 RAM tables are then queried to produce the physical table EXT_TRANSACTION (on disk).
    The resulting physical external Oracle table EXT_TRANSACTION has these columns:
    Phone_Number  Nation_Desc  Operator_Desc  Area_Desc
    6281381789999 INA          SMP           SBY
    So: text files (transaction.csv) --> RAM tables --> Oracle table (EXT_TRANSACTION).
    The first 2 digits of a phone number are the nation_id, the next 4 digits the operator_id, and the next 2 digits the area_id.
    I have heard that, to increase performance significantly, there is a technique of creating tables in memory (RAM) rather than on disk.
    Any advice would be very much appreciated.
    Thanks.

    Oracle uses sophisticated algorithms for its various memory caches, including buffering data in memory. This is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
    However, this means there is now less of the buffer cache available to cache other frequently used data. So this approach could make access to one table a bit faster at the expense of making access to other tables slower.
    This is a balancing act - how much can one "interfere" with the cache before affecting and degrading performance? Oracle also recommends that this type of "forced" caching is used for small lookup tables. It is not a good idea to use it on large tables.
    As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand, and it is a very finite resource. It needs to be spent carefully to get the best and most optimal performance.
    The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster: Oracle is already caching the hot data blocks as best it can.
    You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory at it treats the symptom, not the actual problem that tons of data are being processed.
    So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process could perhaps be designed better - instead of making multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
    10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable, as is today's hardware and software. But that needs a sound software engineering approach, and that approach says we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.

  • Joining 2 related records using PL/SQL in APEX - problems when there are more than 2 related records

    Hi
    I am combining 2 related records of legacy data that together make up a marriage record. I am doing this in APEX with a before-header process, using the code below, which works well when there are only 2 related records: it joins the bride and groom records together on screen. I have appended a field called principle to this legacy data, set to 'Y' for the groom and 'N' for the bride.
    However, there are lots of cases with 3, 4, 5 or 6 related records, or even just 1, which causes the PL/SQL in APEX to return the wrong data. In these related rows the name of the bride or groom can differ even though it is the same person: in the old system, if a person had another name or was formerly known by one, a duplicate marriage record was created with the different name, but the book and entry number stay the same, as they are unique for each couple who get married.
    How can I adapt the script below so that, if more than 2 records match the entry and book values, it displays a message? Or is there a better workaround? Cleaning the data is not an option, as there are thousands of rows where these occurrences appear.
    declare
         cursor c_mar_principle(b_entry in number, b_book in varchar2)
         is
              select distinct id, forename, surname, marriagedate, entry, book, formername, principle
              from   MARRIAGES mar
              where  mar.entry = b_entry
              and    mar.book  = b_book
              order by principle desc, id asc;
         rec c_mar_principle%rowtype;
    begin
         open c_mar_principle(:P16_ENTRY, :P16_BOOK);
         -- first fetched row is the groom (principle = 'Y' sorts first)
         fetch c_mar_principle into rec;
         :P16_SURNAME_GROOM := rec.surname;
         :P16_FORNAME_GROOM := rec.forename;
         :P16_ENTRY         := rec.entry;
         :P16_BOOK          := rec.book;
         :P16_FORMERNAME    := rec.formername;
         :P16_MARRIAGEDATE  := rec.marriagedate;
         :P16_GROOMID       := rec.id;
         -- second fetched row is the bride
         fetch c_mar_principle into rec;
         :P16_SURNAME_BRIDE := rec.surname;
         :P16_FORNAME_BRIDE := rec.forename;
         :P16_ENTRY         := rec.entry;
         :P16_BOOK          := rec.book;
         :P16_FORMERNAME    := rec.formername;
         :P16_MARRIAGEDATE  := rec.marriagedate;
         :P16_BRIDEID       := rec.id;
         close c_mar_principle;
    end;

    rambo81 wrote:
    > True, but that answer is not really helping this situation either?
    It's indisputably true, which is more than can be said for the results of querying this data.
    > The data is from an old legacy flat-file database that has been exported into a relational database.
    It should have been normalized at the time it was imported.
    > Without having to redesign the data model, what options do I have in changing the PL/SQL to cater for multiple occurrences?
    In my professional opinion, none. The actual problem is the data model, so that's what should be changed.

  • Error while fetching more than 1000 MySQL records

    I'm trying to fetch data from a MySQL database through PHP into a Flex application. When there are more than 1000 records in the result set, a FaultEvent is returned. When I limit it to 1000 records, there is no problem at all. Any ideas? Code below:
    public function OldCustomerService(method:String = HTTPRequestMessage.POST_METHOD,
                                       resultFormat:String = RESULT_FORMAT_E4X,
                                       showBusyCursor:Boolean = true) {
        super(null, null);
        this.requestTimeout = 0;
        this.method = method;
        this.resultFormat = resultFormat;
        this.showBusyCursor = showBusyCursor;
    }

    public function getAllProspects():void {
        this.url = ALL_PROSPECTS_URL;
        this.addEventListener(ResultEvent.RESULT, getAllProspectSuccess);
        this.addEventListener(FaultEvent.FAULT, DAOUtil.communicationError);
        var oldCustomersToken:AsyncToken = this.send();
    }

    It works with fewer records, so it would be strange if it were a PHP or MySQL error. It seems to me Flex isn't waiting until everything is in. To give you a better idea of what's happening, I'll post the PHP code:
    <?php
    /*
     * Created on 19-mrt-10
     * To change the template for this generated file go to
     * Window - Preferences - PHPeclipse - PHP - Code Templates
     */
    include '../../application/general/php/general.php';
    // connect to the database
    $mysql = mysql_connect(DATABASE_SERVER, DATABASE_USERNAME, DATABASE_PASSWORD) or die(mysql_error());
    // select the database
    mysql_select_db( DATABASE_NAME );
    // query the database to retrieve all customers.
    $query = "SELECT * FROM stores";
    $result = mysql_query($query);
    // start outputting the XML
    $output = "<result>";
    if($result) {
        $output .= "<success>yes</success>";
        $output .= "<stores>";
        // create a store tag for each retrieved store
        while($customer = mysql_fetch_object($result)) {
            $output .= "<store>";
            $output .= "<naam>$customer->naam</naam>";
            $output .= "<adres>$customer->adres</adres>";
            $output .= "<postc>$customer->postc</postc>";
            $output .= "<wpl>$customer->wpl</wpl>";
            $output .= "<land>$customer->land</land>";
            $output .= "<telprive>$customer->telprive</telprive>";
            $output .= "<telbureau>$customer->telbureau</telbureau>";
            $output .= "</store>";
        }
        $output .= "</stores>";
    } else {
        $output .= "<success>no</success>";
        $output .= "<error>\n";
        $output .= "Reason: " . mysql_error() . "\n";
        $output .= "Query: " . $query . "\n";
        $output .= "</error>";
    }
    $output .= "</result>";
    print ($output);
    mysql_close();
    ?>
    When the error occurs, I still get <success>yes</success> in the response, and yet the fault is thrown. Again, with fewer records there is no problem.

  • How to select more than 40 lakh entries from a database table into an internal table

    If a database table has 40 lakh entries and I want to select all of them into an internal table, what should I do?

    Maen Anachronos wrote:
    > Bring a very large bag to put those 40 lakh records in.
    > Sorry, but how is this possibly so difficult to try out?
    I like it.

  • More than one MX record

    I have more than one location.
    Is it possible to have a second MX record pointing to the second location, with each location serving different Exchange users?
    We are going to use SBS 2011 for the second location.

    You cannot have MX records specified for individual users on the same domain. If you have 2 domains, then you just treat them as individual domains. What is your ultimate goal with these 2 locations?

  • Storing more than 2000 characters in a VARCHAR2 column in Oracle 11g

    We have a table in Oracle 11g with a VARCHAR2 column. We use a proprietary programming language in which this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to hold more than 2000 characters. The DBAs don't like the BLOB, CLOB or LONG datatypes for maintenance reasons.
    There are 2 solutions I can think of (a sketch of the first one follows the reply below):
    1. Remove this column from the original table and have a separate child table for it, storing the value split across multiple rows in order to hold more than 2000 characters. This table would be joined with the original table in queries.
    2. If the maximum I need is 8000 characters, I could just add 3 more columns so that I have 4 columns of 2000 characters each. When the first column is full, the value would spill over into the next column, and so on.
    Which one is the better and easier approach? Please suggest.

    Visu - some people also do not like to use LOBs because of the difficulty in reclaiming space and ever-growing LOB segments. Some of these problems were caused by Oracle bugs (e.g. Bug 2944866, "Free space in LOB table / tablespace not reused with ASSM", and Bug 3019979, "Space may not be reused efficiently in a LOB segment") - albeit in 9.2. I've seen a few bug reports for similar things in 10.2 (I don't have the references). Still, if there is a workaround or patch, is this reason enough to steer the application development in a new direction?
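
    For what it is worth, option 1 tends to scale more gracefully than adding fixed extra columns, because the row count grows with the data instead of the table shape changing. A minimal, hypothetical Java sketch of the chunking side (the 2000-character limit comes from the thread; the child-table layout of parent_id / chunk_no / chunk_text is my own assumption):

    import java.util.ArrayList;
    import java.util.List;

    public final class TextChunker {

        // Maximum characters per chunk, matching the VARCHAR2 limit discussed above.
        private static final int CHUNK_SIZE = 2000;

        /** Splits a long value into ordered pieces of at most CHUNK_SIZE characters. */
        public static List<String> split(String value) {
            List<String> chunks = new ArrayList<>();
            for (int start = 0; start < value.length(); start += CHUNK_SIZE) {
                chunks.add(value.substring(start, Math.min(value.length(), start + CHUNK_SIZE)));
            }
            return chunks;
        }
    }

    Each piece would then be inserted as its own row (parent_id, chunk_no, chunk_text) in the child table and reassembled in chunk_no order when read back.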

  • HT201318 Under iCloud, the last backup was 9.4 GB and I also have 9.8 GB in Photo Stream. I cannot figure out how I got to 9.4 GB, as I am not storing more than 2.0 GB. I want to free up enough storage that I do not have to pay for anything above the free tier.

    I have used 9.8 GB of storage and want to get below 5 GB, but I cannot figure out where the 9.8 GB comes from. Looking at the menu, I don't use more than 2 GB. How can I find out what the other 7 GB contain?

    If those gigabytes do not show up on the page described in that article,
    iCloud: Managing your iCloud storage - Support - Apple
    then in my experience it is your iMessages. Every picture and video that you send is saved again and again in every conversation.
    Keep in mind that it takes three backups before the backup size reflects your changes, since iCloud holds up to three of them.
    Also keep in mind that your Photo Stream does not count against your storage.
