Importing huge (10 million rows) tables into HANA

Dear All,
I need to import huge tables (40,000,000 records) into a HANA system, in order to develop a demo system for a customer.
I'm planning to use CSV files.
Does anyone have experience with a task like this? Is there a limit on CSV file size?
Many thanks
Leopoldo Capasso

Check out this blog for best practices on loading high-volume data from flat files:
http://scn.sap.com/community/hana-in-memory/blog/2013/04/08/best-practices-for-sap-hana-data-loads
Hope this helps.
Rama
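
For reference, HANA can load such files directly with the IMPORT FROM CSV FILE statement, and splitting a 40-million-row extract into several files loaded in parallel is common. A minimal sketch (schema, table, and path are hypothetical; the file must be readable by the HANA server, and THREADS/BATCH should be tuned):
IMPORT FROM CSV FILE '/data/demo/big_table.csv' INTO "DEMO"."BIG_TABLE"
WITH RECORD DELIMITED BY '\n'
     FIELD DELIMITED BY ','
     THREADS 8 BATCH 10000;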

Similar Messages

  • SSIS 2012 is intermittently failing with the "Invalid date format" error below while importing data from a source table into a destination table with the exact same schema.

    We migrated packages from SSIS 2008 to 2012. The package works fine in all environments except one.
    SSIS 2012 is intermittently failing with the error below while importing data from a source table into a destination table with the exact same schema.
    Error: 2014-01-28 15:52:05.19
       Code: 0x80004005
       Source: xxxxxxxx SSIS.Pipeline
       Description: Unspecified error
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC0202009
       Source: Process xxxxxx Load TableName [48]
       Description: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80004005.
    An OLE DB record is available.  Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: "Invalid date format".
    End Error
    Error: 2014-01-28 15:52:05.19
       Code: 0xC020901C
       Source: Process xxxxxxxx Load TableName [48]
       Description: There was an error with Load TableName.Inputs[OLE DB Destination Input].Columns[Updated] on Load TableName.Inputs[OLE DB Destination Input]. The column status returned was: "Conversion failed because the data value overflowed
    the specified type.".
    End Error
    But when we reorder the "Updated" column in the destination table, the package imports the data successfully.
    This looks like a bug to me. Any suggestions?

    Hi Mohideen,
    Based on my research, the issue might be related to one of the following factors:
    1. Memory pressure. Check whether the server is under memory pressure when the issue occurs. In addition, if the package runs in the 32-bit runtime on that server, use the 64-bit runtime instead.
    2. A known issue with SQL Server Native Client. As a workaround, use the .NET data provider (SqlClient) instead of SNAC.
    Hope this helps.
    Regards,
    Mike Yin
    TechNet Community Support
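
    For what it's worth, the "data value overflowed the specified type" message in the log is also the classic symptom of a date outside the destination type's range. A minimal sketch of that failure mode (hedged; this reproduces the error class, not necessarily this package's exact cause):
    -- DATETIME only covers 1753-01-01 through 9999-12-31, so this CAST raises
    -- "conversion ... resulted in an out-of-range value"
    declare @d datetime2 = '0001-01-01';
    select cast(@d as datetime);
    Checking the minimum and maximum values of the source [Updated] column is a cheap first diagnostic.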

  • Import IP Prefix from Global Table into a VRF Table

    Hello,
    Is it possible to import an IP prefix from the global table into a VRF table on an ASR9001 running Version 4.3.0? Thanks.
    Regards,
    Eric

    hi Eric,
    In 4.3.1 there is a feature that will allow you to do this.
    The Border Gateway Protocol (BGP) dynamic route leaking feature provides the ability to import routes
    between the default-vrf (Global VRF) and any other non-default VRF, to provide connectivity between a
    global and a VPN host. The import process installs the Internet route in a VRF table or a VRF route in the
    Internet table, providing connectivity.
    You can follow the ASR9000 blog here to monitor when 4.3.1 will be posted to Cisco.com
    https://supportforums.cisco.com/blogs/asr9k
    or watch for the 4.3.1 Release Notes here
    http://www.cisco.com/en/US/products/ps5845/prod_release_notes_list.html
    regards,
    David
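
    For reference, once on 4.3.1 the leak is configured under the VRF address family with a route policy, roughly like the sketch below (the VRF name, policy name, and prefix are hypothetical; check the 4.3.1 configuration guide for the exact syntax):
    route-policy GLOBAL-TO-VRF
      if destination in (203.0.113.0/24) then
        pass
      endif
    end-policy
    !
    vrf CUSTOMER-A
     address-family ipv4 unicast
      import from default-vrf route-policy GLOBAL-TO-VRF
     !
    !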

  • Fetch only tables with 10 million or more rows

    Hi all,
    How can I find tables that have more than 10 million rows using PL/SQL? I got the code below from some other site that I can't remember.
    Somebody can help me on this please. your help is greatly appreciated.
    declare
      counter number;
    begin
      for x in (select segment_name, owner
                  from dba_segments
                 where segment_type = 'TABLE'
                   and owner = 'KOMAKO') loop
        execute immediate 'select count(*) from ' || x.owner || '.' || x.segment_name
          into counter;
        dbms_output.put_line(rpad(x.owner, 30, ' ') || '.' || rpad(x.segment_name, 30, ' ') || ' : ' || counter || ' row(s)');
      end loop;
    end;
    Thank you,
    gg

    1) This code appears to work, although there is no need to select from DBA_SEGMENTS when DBA_TABLES would be more straightforward. And, of course, you'd have to do something when the count exceeded 10 million.
    2) If you are using the cost-based optimizer (CBO), your statistics are reasonably accurate, and you can tolerate a degree of staleness/approximation in the row counts, you could just select the NUM_ROWS column from DBA_TABLES.
    Justin
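
    A minimal sketch of the NUM_ROWS approach (the owner filter comes from the original code; the counts are only as fresh as the last statistics gathering):
    select owner, table_name, num_rows
      from dba_tables
     where owner = 'KOMAKO'
       and num_rows >= 10000000
     order by num_rows desc;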

  • Import data from oracle database table into csv file

    Hi
    I have to export data from a table into a CSV file. Could anyone suggest the best method to do this? My application has a JSP frontend, with the business logic in a servlet.
    Thanks in advance.

    FastReader from WisdomForce will help you quickly export data into a CSV file: http://www.wisdomforce.com
    FastReader can be called and executed as an external process (Runtime.exec(..)) to extract data from Oracle tables into a flat file.
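
    If a third-party tool is not an option, a plain SQL*Plus spool script is a low-tech alternative for one-off extracts (a sketch; the table, columns, and path are hypothetical - a servlet would typically run the same query through JDBC and stream the rows instead):
    set heading off
    set pagesize 0
    set feedback off
    set trimspool on
    spool /tmp/emp.csv
    select empno || ',' || ename || ',' || sal from emp;
    spool off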

  • Delete from a 95 million row table ...

    Hi folks, I need to delete from a 95-million-row regular table. What would be my best options? I have tried CTAS using parallel, but it failed after 1+ hrs due to a bad query; I am checking whether there is any other way to achieve this.
    Thanks in advance.

    user8604530 wrote:
    Hi folks, I need to delete from a 95-million-row regular table. What would be my best options? I have tried CTAS using parallel, but it failed after 1+ hrs due to a bad query; I am checking whether there is any other way to achieve this. Thanks in advance.
    How many rows are in the table BEFORE the DELETE?
    How many rows are in the table AFTER the DELETE?
    How do I ask a question on the forums?
    SQL and PL/SQL FAQ
    Handle:     user8604530
    Status Level:     Newbie
    Registered:     Mar 10, 2010
    Total Posts:     64
    Total Questions:     26 (22 unresolved)
    I extend to you my condolences since you rarely get your questions answered.
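
    For reference, when most of a table is being removed, the CTAS approach the original poster attempted usually looks something like this sketch (the table name and predicate are hypothetical; it pays off because you copy only the surviving rows instead of logging a huge DELETE):
    create table big_table_keep nologging parallel 4 as
      select * from big_table
       where created_date >= date '2014-01-01';  -- the rows to KEEP
    -- verify the counts, then swap:
    drop table big_table;
    rename big_table_keep to big_table;
    -- and re-create indexes, constraints, grants, and triggers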

  • Best way to refresh 5 million row table

    Hello,
    I have a table with 5 million rows that needs to be refreshed every 2 weeks.
    Currently I am dropping and re-creating the table, which takes a very long time and gives a tablespace-related warning at the end of execution. The table is created with the expected number of rows, but I am not sure why I get the tablespace warning at the end.
    Any help is greatly appreciated.
    Thanks.

    Can you please post your query?
    1. What is the size of the temporary tablespace?
    2. Is your query performing any sorts?
    Monitor the TEMP tablespace usage with the query below after executing your SQL query:
    SELECT TABLESPACE_NAME, BYTES_USED, BYTES_FREE
    FROM V$TEMP_SPACE_HEADER;
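
    If the table structure does not change between refreshes, a common alternative to drop-and-create is truncate plus direct-path insert (a sketch; the table names are hypothetical, and the recovery implications of direct-path loads should be considered):
    truncate table target_tab;
    insert /*+ append */ into target_tab
      select * from source_tab;
    commit;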

  • How can I import my MySQL database and tables into a website

    Hi,
    Friend, I am using CS3. I made all the needed tables in the database, and I need to know how to import that database/tables into the website.
    I know that in the domain admin there is a MySQL admin where I have found import/export, but when I click import I do not know where my database/tables have been saved in order to import them.

    If all you are trying to do is change an existing tablespace from a single datafile to multiple datafiles, then a simple import will work fine. Just create the tablespace in the new database with multiple files, then do the import normally.
    If you want to move segments into different tablespaces, or change the names of the tablespaces, then you will need to pre-create the tables before importing. Then do the export with indexes=N and constraints=N, run import with the resulting file, and re-create the constraints and indexes in the new tablespaces.
    TTFN
    John
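
    A sketch of that second procedure with the classic exp/imp utilities (the user, password, and file names are hypothetical):
    exp scott/tiger FILE=app.dmp OWNER=scott INDEXES=N CONSTRAINTS=N
    imp scott/tiger FILE=app.dmp FULL=Y IGNORE=Y
    Pre-create the tables in their new tablespaces before running imp; IGNORE=Y makes imp load data into the pre-created tables instead of failing on "table already exists".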

  • Google-style autosuggest with a multi-million-row table

    Hi All,
    I'm exploring the ways of implementing a "google style autosuggest" on a table with no less than 30 millions rows. It has a field with an address (varchar) and I'd like to create a Ajax call while the user is typing that would suggest the user few addresses.
    I was thinking about using contains+fuzzy... but I am not sure whether it will be fast enough or whether it will return the right results.
    Any suggestions ?
    thanks

    This earlier thread may be of interest to you: 2 million rows with XML type data
    HTH
    Girish Sharma
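
    Since the original post mentions contains+fuzzy, an Oracle Text sketch of that idea looks roughly like this (the table, column, and index names are hypothetical; whether it is fast and relevant enough on 30 million rows needs testing):
    create index addr_txt_idx on addresses(address)
      indextype is ctxsys.context;
    select address
      from addresses
     where contains(address, 'fuzzy(main)', 1) > 0
       and rownum <= 10;
    For prefix-style suggestions while the user is typing, a CTXCAT index with CATSEARCH, or a plain function-based index queried with LIKE, is often considered as well.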

  • MINUS on a 4 million row table - how to enhance speed?

    I have a query like this:
    select col1, col2, col3
    from table1
    MINUS
    select col1, col2, col3
    from EXTERNAL_TABLE
    table1 has approximately 4 million records and the external file has roughly 4 million rows.
    MINUS takes around 25 mins here. How can I speed this up?
    Thanks in Advance!!!

    To make something go faster, you first need to know what makes it slow.
    Simple actually - to solve a problem we need to know what the problem is. We can't answer a question without knowing what the question is.
    Reading a total of 8 million rows means a lot more I/O than usual... so that is likely the culprit for the slow performance. If that is the case, you will need to find a way to reduce the time it takes to perform all that I/O, or do less I/O.
    But you need to pop the hood and take a look at just what is causing what you think is slow performance. (It may not even be slow - it may be as fast as it can go given the hardware and other limitations imposed on Oracle and this MINUS process.)
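
    A quick way to pop the hood is to capture the execution plan (a sketch using the tables from the original post):
    explain plan for
      select col1, col2, col3 from table1
      minus
      select col1, col2, col3 from external_table;
    select * from table(dbms_xplan.display);
    With an external table, the flat file is re-read in full on every execution, so loading it once into a regular table is one way to do less I/O.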

  • How can I re-load a table into HANA using SLT

    Hi,
    I am using HANA Version: 1.0.24 and SLT SAPK-91705INDMIS.
    I have replicated several tables without any problems in this environment. However, when I tried to replicate T009, T009B and T009C, the two tables T009B and T009C show a replication status of "Error" in both LTR and HANA Studio. Table T009 replicated correctly.
    I noticed that the table structures for T009B and T009C were created in HANA, but there is no data. I am not sure what happened during replication; it seems to have stopped partway due to some error, and now I have the table structures but no data. I have tried suspend/stop and replicate again, but the status is still shown as 'Error'.
    Tcode LTR on SLT system shows following warnings
    "Trigger exists in Source System although it is internally set as initial" for both tables.
    I just want to start again and redo the load/replication of these two tables. What is the process of doing so?
    Thanks
    Sandeep

    Hi Sandeep,
    I cannot comment on the officially supported way of deleting these triggers. Maybe the following will work.
    Run the commands below from the SQL editor (replace <schema> and <your_table> with your SLT schema and table name):
    insert into <schema>."RS_ORDER" values ('<your_table>', null, 'C');
    delete from <schema>."RS_STATUS" where TABLENAME = '<your_table>';
    Although I think this should delete the source triggers (I am not sure), you can do the cleanup from SLT (transaction IUUC_SYNC_MON) --> Expert Functions --> 'Delete DB triggers', 'Delete Logging table entries' and 'Delete Logging tables' for your table.
    After doing this, you can also check in the source system (transaction IUUC_REMOTE) whether the triggers for your table have been deleted.
    Caution again :-): I don't know if the above method is documented and officially supported.
    Regards, Rahul

  • How to import csv file with multiple tables into sql server

    I have multiple CSV files; each has a single sheet but 130 header blocks, with each header block having different data.
    I'd like to import each of these header blocks with its data into its own table in SQL Server.
    I know very basic SSIS but am not familiar with the scripting in it, which is what I assume I'd have to use.
    Each header block in the CSV file is structured as follows:
    The first header block:
        ITEM = ORG_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    The second header block:
        ITEM = TER_V
        DATE = 2013-07-22 10:00 ~ 2013-07-22 10:15
        column names
        data
    The header blocks can start at any row number, and the amount of data in each file differs, but they all begin with "ITEM =" followed by "DATE =" on the next row.
    I could also convert these to Excel files if that makes the process easier.

    Why don't you put a filter on D3, filter out the blanks, copy/paste to a new CSV file, save it, and import that?
    There's no way you're going to get SQL to do that kind of thing for you. The language is for set-based operations, not for complex data manipulation tasks.
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.
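
    That said, one staging pattern that does work in pure T-SQL is to load the raw file one line per row and then carve it into sections at the "ITEM =" markers (a hedged sketch; the path and names are hypothetical, and it assumes the data contains no tab characters):
    create table raw_lines (line_id int identity(1,1), line nvarchar(max));
    go
    -- the view hides the identity column so BULK INSERT can target the single data column
    create view v_raw_lines as select line from raw_lines;
    go
    bulk insert v_raw_lines from 'C:\data\multi.csv' with (rowterminator = '\n');
    -- each section runs from one marker row down to the row before the next marker
    select line_id, line from raw_lines where line like 'ITEM =%' order by line_id;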

  • Importing Word 2004 docs with tables into Pages

    I work in video production, and my clients send me shooting scripts as Word documents formatted with tables. Needless to say, I was quick to discover that Pages doesn't really 'like' these documents. I can't keep asking my clients to reformat their docs as txt files for me - it's very unprofessional.
    What I've found is that all the text is there, but the tables don't 'translate' (one cell tries to fit on a whole page and the text gets 'lost' at the bottom). Is there a quick and easy way to remove the tables in Pages but keep the text?
    In Word, it's Alt+A, V, B, 0 (I've been formatting and reformatting scripts for 8 years!).
    Thanks for your help. I REALLY don't want to have to get Office - I've read so many horror stories about 2008 (especially the Excel part).

    sknygrydg07 wrote:
    Is there a quick and easy way to remove the tables in Pages but keep the text?
    In Word, it's Alt+A, V, B, 0 (I've been formatting and reformatting scripts for 8 years!).
    Hello,
    Have you tried: select the table, then from the Pages menu choose Format > Table > Convert Table to Text?
    It might be of some value in this situation.
    Jerry

  • Import MS Access 2013 tables into SQL Server 2012

    Hi there,
    Is there a step by step example somewhere showing how to import an MS Access 2013 table into SQL Server 2012?
    I have read the existing posts and don't see a definitive answer.
    I have installed MS Access 2010 engine, first 32 bit then 64 bit.
    I have installed the MS Access 2013 runtime on my server.
    I use the Office 2015 Access Database Engine OLE DB Provider.
    I get the error:
    Error 0xc0202009: Source - APEntries [1]: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E37.
    Error 0xc02020e8: Source - APEntries [1]: Opening a rowset for "`TableName`" failed. Check that the object exists in the database.
    The post regarding the above errors doesn't resolve the issue.
    I have full administrative permissions on the server.
    What is the trick to making this work?
    Thanks,
    Ric
    Ric Miller

    Hi there,
    I tried the exact same operation on a third machine.
    This machine has Windows 8.1 64 bit, SQL Server 2012 64 bit, MS Office 2013 Plus 32 bit.
    I am the administrator on this machine.
    I installed this:
    Microsoft Access Database Engine 2010 Redistributable 32 bit (because I have MS Office 2013 plus 32 bit installed.)
    From here:
    http://www.microsoft.com/en-us/download/details.aspx?id=13255
    It won't let me install the 64 bit version without uninstalling MS Office 32 bit.
    I created an MS Access database on this machine using MS Access 2013 and created a table with 3 records.
    I used the "Import and Export Data (32 bit)" from the start menu.
    After I installed the "Database Engine 2010 32-bit" driver above, I now have the "Office 2015 Access Database Engine OLE DB Provider" option as a Data Source, which I did not have before this installation.
    I selected the driver and set the Data Source properties to the file location of the MS Access file. I am using a blank password.
    I go thru the same sequence of selecting a table to import and after running the result is the same:
    Error 0xc0202009: Source - APEntries [1]: SSIS Error Code DTS_E_OLEDBERROR.  An OLE DB error has occurred. Error code: 0x80040E37.
    "Error 0xc02020e8: Source - APEntries [1]: Opening a rowset for "`TableName`" failed. Check that the object exists in the database."
    This seems to be consistent across three machines with three operating systems with the same files and the same result.
    I understand that some people have gotten this to work.
    I would appreciate it if anyone can report an error in the above procedure to me.
    Thanks,
    Ric
    Ric Miller
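
    One commonly suggested alternative to the Import and Export wizard is to query the Access file directly from T-SQL through the ACE provider (a sketch; the file path and table name are hypothetical, and ad hoc distributed queries must be enabled first):
    exec sp_configure 'show advanced options', 1; reconfigure;
    exec sp_configure 'Ad Hoc Distributed Queries', 1; reconfigure;
    select *
      into dbo.APEntries
      from openrowset('Microsoft.ACE.OLEDB.12.0',
                      'C:\data\mydb.accdb';'Admin';'',
                      'select * from TableName');
    Note that the provider's bitness must match the process using it: the 32-bit wizard needs the 32-bit ACE engine, while a 64-bit SQL Server service needs the 64-bit one.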

  • Schema Design for 10^6+ rows table (Indicator Column / Bitmap Join Index?)

    Hi all,
    I read the following suggestion for a SELECT with LEFT OUTER JOIN in a paper from a DB2 consulting company, concerning a 10-million-row table:
    SELECT columns
      FROM ACCTS A
      LEFT JOIN OPT1 O1
        ON A.ACCT_NO = O1.ACCT_NO
       AND A.FLAG1 = 'Y'
      LEFT JOIN OPT2 O2
        ON A.ACCT_NO = O2.ACCT_NO
       AND A.FLAG2 = 'Y'
     WHERE A.ACCT_NO = 1
    For DB2, according to the paper, the following holds: iff A.FLAG1 <> 'Y', then no table or index access on OPT1 is done. The same applies to A.FLAG2/OPT2.
    I recreated the situation for Oracle with the following script and arrived at some really interesting questions.
    DROP TABLE maintbl CASCADE CONSTRAINTS;
    DROP TABLE opt1 CASCADE CONSTRAINTS;
    DROP TABLE opt2 CASCADE CONSTRAINTS;
    CREATE TABLE maintbl (
      id INTEGER NOT NULL,
      dat VARCHAR2 (2000 CHAR),
      opt1 CHAR (1),
      opt2 CHAR (1),
      CONSTRAINT CK_maintbl_opt1 CHECK (opt1 IN ('Y', 'N')) INITIALLY IMMEDIATE ENABLE VALIDATE,
      CONSTRAINT CK_maintbl_opt2 CHECK (opt2 IN ('Y', 'N')) INITIALLY IMMEDIATE ENABLE VALIDATE,
      CONSTRAINT PK_maintbl PRIMARY KEY (id)
    );
    CREATE TABLE opt1 (
      maintbl_id INTEGER NOT NULL,
      adddat1 VARCHAR2 (100 CHAR),
      adddat2 VARCHAR2 (100 CHAR),
      CONSTRAINT PK_opt1 PRIMARY KEY (maintbl_id),
      CONSTRAINT FK_opt1_maintbltable FOREIGN KEY (maintbl_id) REFERENCES maintbl (id)
    );
    CREATE TABLE opt2 (
      maintbl_id INTEGER NOT NULL,
      adddat1 VARCHAR2 (100 CHAR),
      adddat2 VARCHAR2 (100 CHAR),
      CONSTRAINT PK_opt2 PRIMARY KEY (maintbl_id),
      CONSTRAINT FK_opt2_maintbltable FOREIGN KEY (maintbl_id) REFERENCES maintbl (id)
    );
    INSERT ALL
      WHEN 1 = 1 THEN
        INTO maintbl (id, opt1, opt2, dat) VALUES (nr, is_even, is_odd, maintbldat)
      WHEN is_even = 'N' THEN
        INTO opt1 (maintbl_id, adddat1, adddat2) VALUES (nr, adddat1, adddat2)
      WHEN is_even = 'Y' THEN
        INTO opt2 (maintbl_id, adddat1, adddat2) VALUES (nr, adddat1, adddat2)
    SELECT LEVEL AS nr,
           CASE WHEN MOD(LEVEL, 2) = 0 THEN 'Y' ELSE 'N' END AS is_even,
           CASE WHEN MOD(LEVEL, 2) = 1 THEN 'Y' ELSE 'N' END AS is_odd,
           TO_CHAR(DBMS_RANDOM.RANDOM) AS maintbldat,
           TO_CHAR(DBMS_RANDOM.RANDOM) AS adddat1,
           TO_CHAR(DBMS_RANDOM.RANDOM) AS adddat2
      FROM DUAL
    CONNECT BY LEVEL <= 100;
    COMMIT;
    SELECT * FROM maintbl
    LEFT OUTER JOIN opt1 ON maintbl.id = opt1.maintbl_id AND maintbl.opt1 = 'Y'
    LEFT OUTER JOIN opt2 ON maintbl.id = opt2.maintbl_id AND maintbl.opt2 = 'Y'
    WHERE id = 1;
    Explain plan for "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi":
    http://i.imgur.com/f0AiA.png
    As one can see, the DB uses a view to index-access the opt tables iff indicator column maintbl.opt1='Y' in the main table.
    Explain plan for "Oracle Database 11g Express Edition Release 11.2.0.2.0 - Production":
    http://i.imgur.com/iKfj8.png
    As one can see, the DB does NOT use the view; instead it uses a pretty useless CASE statement.
    Now my questions:
    1) What is the optimizer doing in 11.2 XE?
    2) In general: would you recommend this table setup? Does your yes/no recommendation depend on the row count in the tables? Of course I see the problem with incorrectly updated indicator columns and would NEVER do it if there were another truly relational solution with the same performance.
    3) Is there a way to avoid performance issues if I don't use an indicator column in Oracle? Is this what a Bitmap Join Index (http://docs.oracle.com/cd/E11882_01/server.112/e25789/indexiot.htm#autoId14) is for?
    Thanks in advance and happy discussing,
    Blama

    Fair enough. I've included a cut-down set of SQL below.
    CREATE TABLE DIMENSION_DATE (
      DATE_ID NUMBER,
      CALENDAR_DATE DATE,
      CONSTRAINT DATE_ID PRIMARY KEY (DATE_ID)
    );
    CREATE UNIQUE INDEX DATE_I1 ON DIMENSION_DATE (CALENDAR_DATE, DATE_ID);
    CREATE TABLE ORDER_F (
      ORDER_ID VARCHAR2(40 BYTE),
      SUBMITTEDDATE_FK NUMBER,
      COMPLETEDDATE_FK NUMBER
    );
    -- Then I add the first bitmap index, which works:
    CREATE BITMAP INDEX SUBMITTEDDATE_FK ON ORDER_F (DIMENSION_DATE.DATE_ID)
      FROM ORDER_F, DIMENSION_DATE
     WHERE ORDER_F.SUBMITTEDDATE_FK = DIMENSION_DATE.DATE_ID;
    -- Then attempt the next one:
    CREATE BITMAP INDEX COMPLETEDDATE_FK ON ORDER_F (B.DATE_ID)
      FROM ORDER_F, DIMENSION_DATE B
     WHERE ORDER_F.COMPLETEDDATE_FK = B.DATE_ID;
    -- which results in:
    -- ORA-01408: such column list already indexed
