Searching Huge data

Hi,
I have a table T1 with five key columns (c1, c2, c3, c4, c5) holding a huge volume of data (data warehouse).
Each incremental load has 600 million records and is stored in table T2 with the same key columns (c1, c2, c3, c4, c5).
I have to take every single record of T2 and search table T1 on the key columns.
If a match is found, I have to switch the flag in T1 to "Old" for the already existing combination,
and load the current T2 records flagged as "New", which are finally stored in T1.
I have worked out a way using the RANK() function and CONNECT BY PRIOR (a hierarchical query), which works fine. 10g also offers some more options, such as the LAG function. But as T1 grows very large, the process gets slower and slower with each load.
Can you please suggest an alternative?
Thanks in advance
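For comparison, a set-based sketch of that flag flip (assuming T1 carries a FLAG column; the column name is illustrative, not from the post). Two plain statements usually beat any row-by-row search:

-- Demote every combination that the new load matches
UPDATE t1
   SET flag = 'Old'
 WHERE (c1, c2, c3, c4, c5) IN (SELECT c1, c2, c3, c4, c5 FROM t2);

-- Then bring in the new load flagged as "New"
INSERT /*+ APPEND */ INTO t1 (c1, c2, c3, c4, c5, flag)
SELECT c1, c2, c3, c4, c5, 'New' FROM t2;

COMMIT;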

Why are partitions faster? It's not even close. Have a look at the Oracle documentation and you'll see why.
In brief though: since you can partition the indexes along with the data, and you can have many (many, many in 11g!) partitions and can subpartition the partitions, the plan can rule out the VAST majority of that huge amount of data right at the top of the query. For instance, if you had billions of records of credit card transactions and you wanted to search for all transactions from a certain card # within a certain date range, and this huge table was partitioned by date monthly and then by credit card # within each monthly partition, then even though you may have 30 years' worth of transactions, Oracle would never even consider scanning (index or otherwise) anything but the partitions that contain the date range in question. So rather than index-scan a few billion records, it index-scans a few million, maybe. Add parallelism to this (since Oracle on a multi-CPU box can search each partition in its own independent "thread") and it is truly impressive.
The other advantage is in loading data. You load and index the data for the new partition without having to touch the data or indexes in the existing partitions, and then just add the new partition to the mix. Check it out!
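A minimal sketch of that kind of composite partitioning (table, column and partition names are made up for illustration):

-- Monthly range partitions, hash-subpartitioned on the card number
CREATE TABLE cc_txn (
  card_no  VARCHAR2(19),
  txn_date DATE,
  amount   NUMBER
)
PARTITION BY RANGE (txn_date)
SUBPARTITION BY HASH (card_no) SUBPARTITIONS 16 (
  PARTITION p2012_01 VALUES LESS THAN (DATE '2012-02-01'),
  PARTITION p2012_02 VALUES LESS THAN (DATE '2012-03-01')
);

-- A predicate on txn_date prunes the scan to the matching partitions only
SELECT *
  FROM cc_txn
 WHERE txn_date >= DATE '2012-01-15'
   AND txn_date <  DATE '2012-02-15'
   AND card_no = '4111111111111111';

For the loading side, ALTER TABLE ... EXCHANGE PARTITION swaps a freshly loaded and indexed staging table into the partitioned table as a near-instant dictionary operation.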

Similar Messages

  • Huge data usage with Verizon 4G LTE hotspot (Verizon-890L-50DB)

    Recently upgraded my internet service to a Verizon 4G LTE hotspot (Verizon-890L-50DB).  First month - no problems - second month - I have 6GB of overage charges.  Every time I log in to check email or any other minimal data usage task, I get hit with another GB or more of download usage.  My computer is an HP Pavilion Laptop, Windows 7 with NETWORX installed.  Networx tells me it is indeed MY computer that is downloading this data and my hard drive is also showing the increases in data.  
    I've run AVG, Glary Utilities, Speedy PC PRO and Spybot Search and Destroy but found no virus, rootkit or anything on my computer that would explain this problem. 
    When I use a different wireless internet service, I do NOT see the extraneous data usage.
    I contacted Verizon's support team and got no help diagnosing the problem but they did agree to send a replacement for my hotspot device.  Hopefully this fixes the issue; then I have to battle with them about the unwarranted $60 or more in overage charges.
    Has anyone else experienced a problem similar to this?

    I would recommend getting out of the contract before the 14-day trial period ends. Verizon will charge you an activation fee, a restocking fee, and one month's service, but that is better than being stuck with this mess. I have HomeFusion and am afraid to use it, since Verizon seems to fabricate data usage. Unfortunately I did not realize this until after the 14-day trial. Now it will cost me $350 to terminate my contract.
    Quoting a reply by Lyda1 in the Verizon Jetpack 4G LTE Mobile Hotspot 890L community:
    Exactly the same thing has happened to me.  I purchased the Verizon Jetpack™ 4G LTE Mobile Hotspot 890L two days ago.  After one day, I had supposedly used half the 5GB monthly allowance.  After two days, I am at 4.25 GB usage.  I don't stream movies, I have the hotspot password protected, I live alone, and no one else uses my computer.  I have not downloaded any large files.  At this rate, I'll go broke soon.

  • Address Book: cannot clear the previous entry easily when searching a database of around 5,000 contacts

    Hi, I'm running Address Book and cannot clear the previous entry easily when searching my database of around 5,000 contacts.
    I prefer to view All Contacts as a double-page spread with details on the right page. Searching doesn't seem to work correctly in this view.
    It's always the second search that is problematic.
    I've tried typing over, and all it seems to do is confine the search to the entries that came up for the previous search.
    I've tried using the x to clear the previous entry and then typing the next search: same problem. The only way seems to be to move from "All Contacts" to "Groups". Then the searched name appears and I can return to All Contacts to see full details.
    Surely three key presses are not the way it's supposed to work?
    FYI
    Processor  2.7 GHz Intel Core i7
    Memory  8 GB 1333 MHz DDR3
    Graphics  Intel HD Graphics 3000 512 MB
    Software  Mac OS X Lion 10.7.3 (11D50d)
    Address book Version 6.1 (1083)
    MacBook Pro, Mac OS X (10.7.1), 8 GB RAM, 2.7 GHz i7

    AddressBook experts are here:
    https://discussions.apple.com/community/mac_os/mac_os_x_v10.7_lion#/?tagSet=1386

  • TS4605 I was working in Word on a file containing huge data. My machine hung and now I seem to have lost the file. How do I get it back?

    Hi, I was working in Word on a file containing huge data. My machine just hung up one day while I was working, and now I seem to have lost the file. How do I get it back? Please HELP me.

    Well, iCloud has nothing to do with this.
    Do you have the built-in backup function Time Machine running on your Mac?
    See: http://support.apple.com/kb/ht1427

  • Strange error in requests returning huge data

    Hi
    Whenever a request returns huge data, we get the following error, which is annoying because there is no online source or documentation about this error:
    Odbc driver returned an error (SQLFetchScroll).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46073] Operation 'stat()' on file '/orabi/OracleBIData/tmp/nQS_3898_715_34357490.TMP' failed with error: (75). (HY000)
    Any idea what might be wrong?
    Thanks

    The TMP folder is also the "working" directory for cache management (not exclusively). OBIEE tries to check whether the report can be run against a cache entry using the info in this file.
    Check whether MAX_ROWS_PER_CACHE_ENTRY, MAX_CACHE_ENTRY_SIZE and MAX_CACHE_ENTRIES are set correctly.
    Regards
    John
    http://obiee101.blogspot.com
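    For reference, a sketch of the relevant [ CACHE ] entries in NQSConfig.INI (the values below are illustrative, not recommendations):

    [ CACHE ]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "/orabi/OracleBIData/cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000;  # 0 means unlimited rows per entry
    MAX_CACHE_ENTRY_SIZE = 20 MB;
    MAX_CACHE_ENTRIES = 1000;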

  • EasyDMS - search by date range for characteristics of type date

    Hi Folks,
    I have a characteristic of type date in the additional data of a DMS document. If I enter the date (for example validity date) as 08/31/2009 and search using cv04n and give a date range for the characteristic (i.e. 01/01/2009 - 12/31/2009), the search result will bring up the document.
    However, I cannot do this from the EasyDMS find function; I need to specify an exact date. This is not very helpful for users who need to find documents with a validity date between, for example, 01/01/2009 and 12/31/2009. Is there a way users can search a date range in the EasyDMS find function?
    Thanks,
    Lashan

    To search a date range with the EasyDMS client you have to set the registry key
    \HKEY_CURRENT_USER\Software\SAP\EasyDms\Generel\DateSearchPick to the value 0.
    Then you can use the input field as in the SAP GUI (e.g. 01.01.2009-31.01.2009).
    If you set the value to 1, or the key is not defined, you can only search for one specific date.
    If you don't find this key on your PC, create it as a DWORD.
    You may need to restart your PC for the change to take effect.
    Hope this helps.
    Regards, Wolfgang
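    A hedged sketch of the corresponding .reg file, if you prefer to script the change (path copied verbatim from the reply, including its spelling; verify it against your installation first):

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Software\SAP\EasyDms\Generel]
    "DateSearchPick"=dword:00000000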

  • Product - "Search on Data". How do I evaluate this?

    In the following URL there is a reference to a BO XI add-in for "Search on Data". I would like more information on this but cannot find any further reference to it on the SAP website. Is there a trial version I can download? Or is this now bundled with XI 3.x? I am using XI R2.
    SAP Research - Productized Prototypes
    The product description is as follows:
    SEARCH ON DATA
    From the same InfoView search, key word searches also directly query data sources and return answers. No existing BI reports are required for this to work. Results are dynamically returned in an auto-generated Web Intelligence document. This remarkably simple way to answer questions brings the power of BI to any employee who knows how to use Google and other intuitive key word searches. For existing BI users, this also provides a very fast way to start creating reports.

    This happened to me with the same device. I wouldn't worry about having it just say "Other", as that is just information. If you really need to know what is using the most of your device's storage, go to Settings > General > Usage > Manage Storage, and that will tell you what your apps are using in terms of storage. Besides that, "Other" is just iTunes knowing that there is information and how much there is, just not what it is. So syncing your device a few times and letting it stay plugged in should allow iTunes to recognize what information is on your iPod. Hope I could help!

  • Method for Downloading Huge Data from SAP system

    Hi All,
    We need to download a huge volume of data from one SAP system and then migrate it into another SAP system.
    Is there any better method than downloading the data through SE11/SE16? Please advise.
    Thanks
    pabi

    I have already done several system mergers, and we usually do not have the need to download data.
    SAP can talk to SAP, with RFC and by using ALE/IDoc communication.
    So it is possible to send e.g. the material master with BD10 per IDoc from system A to system B.
    You can define that the IDoc is collected, which means it is saved to a file on the application server of the receiving system.
    You can then use LSMW and read this file, with several hundred thousand IDocs, as the source.
    Field mapping is easy if you use the IDoc method for import, because then you have a 1:1 field mapping.
    So you only need to focus on the few fields where the values change from the old to the new system.

  • Posting huge data onto a JSP page using JSTL tags

    Hi,
    I have an application where I have to post huge data (approximately 2,000 rows) to a JSP page. I am using JSTL tags and running on Tomcat 5.
    It is taking almost 20 to 25 seconds to load the entire page.
    Is that a reasonable load time, or could it be improved?
    Please let me know.
    Thanks,
    --Subbu.

    Hi Evnafets,
    Thank you for the response.
    Here are the tasks I am doing to display the data on JSP.
    0. We are running on Tomcat 5 and the memory size is 1024 MB (1 GB).
    1. Getting the data - I am not performing any database queries. The data is stored in a static cache in memory, so the server-side response is very quick - less than a millisecond.
    2. Java beans pass the data to the presentation layer (JSP).
    3. 10 'if' conditions, 2 'for' loops and 2 'choose' statements are used in the JSP page while displaying the data.
    4. Along with the above, 4 JavaScript files are used.
    5. The JSP file size after rendering the data is approx. 160 KB.
    Hope this information helps you understand the problem.
    Thanks,
    --Subbu.

  • Problem searching Japanese data in DB2

    Hi,
    I have Japanese data stored in my DB2 database and have to search that data. For that I have written a test program.
    The following is the SQL query:
    "select adrnr from sapr3.sadr where land1='JP' and adrnr = '1000027051' and name1 like '"+sname+"%'"
    where sname is the Japanese data stored in DB2.
    Before running this query I retrieve sname from the database using the following:
    "select name1 from sapr3.sadr where adrnr='1000027051'";
    But it is not able to find the same record when I also provide name1.
    Please guide.
    Thanks in advance.

    Hi,
    I am working on three operating systems, but let's take the example of Linux.
    Oracle version: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
    Character set: AL32UTF8
    The NLS settings are:
    NLS_LANGUAGE            = AMERICAN
    NLS_TERRITORY           = AMERICA
    NLS_CURRENCY            = $
    NLS_ISO_CURRENCY        = AMERICA
    NLS_NUMERIC_CHARACTERS  = .,
    NLS_CALENDAR            = GREGORIAN
    NLS_DATE_FORMAT         = DD-MON-RR
    NLS_DATE_LANGUAGE       = AMERICAN
    NLS_CHARACTERSET        = AL32UTF8
    NLS_SORT                = BINARY
    NLS_TIME_FORMAT         = HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT    = DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT      = HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT = DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY       = $
    NLS_NCHAR_CHARACTERSET  = AL16UTF16
    NLS_COMP                = BINARY
    NLS_LENGTH_SEMANTICS    = CHAR
    NLS_NCHAR_CONV_EXCP     = FALSE
    Regards,
    Vikas Kumar

  • TSV_TNEW_PAGE_ALLOC_FAILED: Huge data into internal table

    Hello Experts,
    I am selecting data from a database table into an internal table. The internal table was defined using OCCURS 0. I get the short dump TSV_TNEW_PAGE_ALLOC_FAILED when the data selected from the database table is huge: the internal table is unable to hold that amount of data.
    Can you please suggest how to overcome this in ABAP?
    regards.

    There are two ways:
    1. Ask the Basis team to change the memory parameters that limit how much data fits into memory, which avoids the dump.
    2. Split your internal table into parts and do your calculations or manipulations on each part in turn.
    Hope this helps.

  • SQL Server 2005 huge data migration. Taking more than 18 hours ...

    Hi,
    We are migrating a SQL Server 2005 DB to Oracle 10g 64-bit (10.2.0.5). The source SQL Server DB has few tables (<10) and only a few stored procedures (<5). However, one source table is very large: it has 70 million records. I started the migration process; it had been running for 18 hours when I cancelled it. I am using SQL Developer 3.1.0.7 with online migration.
    Kindly let me know whether choosing offline migration will reduce the data-move time. Is there some other Oracle tool that can migrate huge data (records in the millions) quickly? Is there some configuration in SQL Developer that can speed up the process?
    Kindly help.
    Best Regards!
    Irfan

    I tried using SQL*Loader to move the data. The file exported by SQL Server for the transaction table is around 17 GB. SQL*Loader stopped with an error about the record size limit. The error is given below:
    SQL*Loader: Release 10.2.0.5.0 - Production on Thu Mar 22 15:15:28 2012
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    SQL*Loader-510: Physical record in data file (E:/mssqldata/transaction.txt) is longer than the maximum(1048576)
    SQL*Loader-2026: the load was aborted because SQL Loader cannot continue.
    C:\Users\Administrator\Desktop\wuimig\datamove\wuimig\2012-03-22_14-20-29\WUI\dbo\control>
    Please help me out how do I move such huge data to oracle from sql server.
    Best Regards!
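    For what it's worth, SQL*Loader-510 usually means the record terminator in the data file doesn't match what SQL*Loader expects (common with SQL Server exports), so it reads past the default 1 MB buffer without ever finding the end of a record. A hedged control-file sketch (the table and column names are placeholders, and the terminator must match your export):

    -- Sketch only: the "STR" file option declares the real record terminator
    LOAD DATA
    INFILE 'E:/mssqldata/transaction.txt' "STR '\r\n'"
    APPEND INTO TABLE transactions
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (txn_id, card_no, txn_date DATE "YYYY-MM-DD", amount)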

  • Huge data upload to custom table

    I created a custom table, and the requirement is to load huge data into it: around 20 million records. I have input files with 1 million records each. After loading a few files, I got the error that max extents (300) was reached. We talked to Basis and they increased it to 1000, but I'm still getting the error.
    The way my program is written:
    it reads all input data into internal table i_zmstdcost_sum_final, then runs:
    INSERT zmstdcost_sum FROM TABLE i_zmstdcost_sum_final.
    Can anyone suggest the best solution, or how to alter my program?
    Thanks,
    PV

    Hi
    If you need to load a very large number of records, you shouldn't load all of them into an internal table. Instead, read and insert record by record, use a COMMIT WORK after a certain number of records, and close each file after loading all of its records:
    LOOP AT T_FILE.
      " Mode and encoding are required when opening the dataset
      OPEN DATASET T_FILE FOR INPUT IN TEXT MODE ENCODING DEFAULT.
      IF SY-SUBRC = 0.
        DO.
          READ DATASET T_FILE INTO WA.
          IF SY-SUBRC <> 0.
            EXIT.               " end of file reached
          ENDIF.
          INSERT ZTABLE FROM WA.
          IF SY-SUBRC = 0.
            COUNT = COUNT + 1.
          ENDIF.
          IF COUNT = 1000.      " commit every 1,000 inserts
            COUNT = 0.
            COMMIT WORK.
          ENDIF.
        ENDDO.
        IF COUNT > 0.           " commit the remainder for this file
          COMMIT WORK.
          COUNT = 0.
        ENDIF.
        CLOSE DATASET T_FILE.
      ENDIF.
    ENDLOOP.
    Max

  • Load huge data into an Oracle table

    Hi,
    I am using Oracle 11g Express Edition. I have a .csv file of about 500 MB that needs to be loaded into an Oracle table.
    Please suggest which would be the best method to upload the data into the table. The data is employee ticket history, which is huge.
    How should I do a mass upload of data into an Oracle table? I need expert suggestions on this requirement.
    Thanks
    Sudhir

    Sudhir_Meru wrote:
    Hi,
    I am using Oracle 11g Express Edition. I have a .csv file of about 500 MB that needs to be loaded into an Oracle table.
    Please suggest which would be the best method to upload the data into the table. The data is employee ticket history, which is huge.
    How should I do a mass upload of data into an Oracle table? I need expert suggestions on this requirement.
    Thanks
    Sudhir
    One method is to use SQL*Loader (sqlldr).
    Another method is to define an external table in Oracle, which allows you to view your big file as a table in the database.
    You may want to have a look at these guides: Choosing the Right Export/Import Utility and Managing External Tables.
    Regards.
    Al
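    A minimal external-table sketch for a CSV like this (the directory path, file name and columns are assumptions for illustration):

    -- Sketch only: expose the CSV as a read-only table, then load it in one INSERT
    CREATE OR REPLACE DIRECTORY data_dir AS '/u01/app/data';

    CREATE TABLE ticket_history_ext (
      emp_id    NUMBER,
      ticket_no VARCHAR2(20),
      status    VARCHAR2(10)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY data_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('tickets.csv')
    )
    REJECT LIMIT UNLIMITED;

    INSERT /*+ APPEND */ INTO ticket_history
    SELECT * FROM ticket_history_ext;
    COMMIT;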

  • How to update a table with huge data

    Hi,
    I have a scenario where I need to update tables that hold huge data (each table has more than 1,000,000 rows).
    I am writing this update in a PL/SQL block.
    Is there any way to improve the performance of this update statement? Please suggest...
    Thanks.

    user10403630 wrote:
    I am storing all tables and columns that need to be updated in tab_list and forming a dynamic query.
    for i in (select * from tab_list)
    loop
      v_qry := 'update '||i.table_name||' set '||i.column_name||' = '''||new_value||''' where '||i.column_name||' = '''||old_value||'''';
      execute immediate v_qry;
    end loop;
    Sorry to say so, but this code is awful!
    The only thing that would make it even slower would be to add a commit inside the loop.
    Some advice follows, though I'm not sure which parts apply in your case.
    The fastest way to update a million rows is to write a single update statement. On typical systems this should only run for a couple of minutes.
    If you need to update several tables, then write a single update for each table.
    If you have different values that need to be updated, then find a way to handle those different values in a single UPDATE or MERGE statement, either by joining another table or by using some in-lists. E.g.:
    update myTable
       set mycolumn = decode(mycolumn, 'oldValue1', 'newvalue1',
                                       'oldValue2', 'newvalue2',
                                       'oldValue3', 'newvalue3')
     where mycolumn in ('oldValue1', 'oldValue2', 'oldValue3', ...);
    If you need to do this in PL/SQL, then:
    1) use bind variables to avoid hard parsing the same statement again and again;
    2) use bulk binding to avoid PL/SQL context switches.
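    As a sketch of point 2 (the table, column and values are illustrative), bulk binding replaces the row-by-row dynamic SQL with a single FORALL statement:

    -- Minimal sketch, assuming pairs of old/new values for one column
    DECLARE
      TYPE t_vals IS TABLE OF myTable.mycolumn%TYPE;
      l_old t_vals := t_vals('oldValue1', 'oldValue2', 'oldValue3');
      l_new t_vals := t_vals('newvalue1', 'newvalue2', 'newvalue3');
    BEGIN
      FORALL i IN 1 .. l_old.COUNT
        UPDATE myTable
           SET mycolumn = l_new(i)
         WHERE mycolumn = l_old(i);
      COMMIT;
    END;
    /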
