Fast Table Unload

Hello, we are running Oracle 8i. I need to perform a one-time extract of data from a dozen or so moderately sized tables (approximately 3 million rows per table) for transmission to another platform.
Is there a "bulk" unload program, utility, or command I can use to accomplish this task quickly (something similar to SQL Server's BCP or DB2's UNLOAD utility)?
Any help would be greatly appreciated.

I usually use SQL*Plus, spool, and good old SQL! Be sure to apply the following SQL*Plus settings so that only the raw data is extracted.
SET WRAP OFF
SET HEADING OFF
SET SPACE 0
SET FEEDBACK OFF
SET PAGESIZE 0
SET TERMOUT OFF
SET ECHO OFF
SET LINESIZE 166
(change LINESIZE to however wide you want your rows to be)
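For example, a minimal spool script along these lines (the table EMP_HISTORY, its columns, and the '|' delimiter are placeholders for your own):
SPOOL emp_history.dat
SELECT emp_id || '|' || emp_name || '|' || TO_CHAR(hire_date, 'YYYYMMDD')
FROM emp_history;
SPOOL OFF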

Similar Messages

  • Fast data unload

    I am awfully sorry, guys, but I can't find the right place to ask my question. I need to unload some data from my table very fast. Say I have a table with 3-15M records, and I need to unload the records which have not been unloaded yet (I have an attribute for that). I wrote a PL/SQL stored procedure for that (using dynamic SQL, since the table name is not constant) and I see about 800 KB/s write speed. I understand that's not quite a sensible number yet: when I move data inside the database I can see speeds of up to 200 MB/s (I am looking at iostat -x on RHEL 4). Today I wrote a Pro*C program to unload the data and I am able to achieve almost 1500 KB/s, almost twice as fast as PL/SQL! Yes, I saw http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:459020243348
    but I think I have a somewhat more efficient program. So, what do you think, am I able to achieve more speed?

    "but I think I have a somewhat more efficient program"
    Can you share the program (or the changes that give you that extra efficiency) so others in these forums can benefit from it?
    "It provides 4 times better performance than the scripts suggested by Kyte"
    I would not necessarily want to compare a free program, available as a courtesy, with a commercial product that needs to be paid for.
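    For reference, a minimal sketch of the kind of PL/SQL unload being discussed, using BULK COLLECT with a LIMIT and writing through UTL_FILE (the directory object UNLOAD_DIR, the table MY_TABLE, and its UNLOADED flag are hypothetical):
    DECLARE
      TYPE t_lines IS TABLE OF VARCHAR2(4000);
      l_lines t_lines;
      l_file  UTL_FILE.FILE_TYPE;
      CURSOR c IS
        SELECT id || '|' || payload FROM my_table WHERE unloaded = 'N';
    BEGIN
      l_file := UTL_FILE.FOPEN('UNLOAD_DIR', 'my_table.dat', 'w', 32767);
      OPEN c;
      LOOP
        FETCH c BULK COLLECT INTO l_lines LIMIT 1000; -- fetch in batches, not row by row
        FOR i IN 1 .. l_lines.COUNT LOOP
          UTL_FILE.PUT_LINE(l_file, l_lines(i));
        END LOOP;
        EXIT WHEN c%NOTFOUND;
      END LOOP;
      CLOSE c;
      UTL_FILE.FCLOSE(l_file);
    END;
    /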

  • Update a column of a table to a value fast (table size is 16 GB)

    hi guys,
    Setting indexes to unusable will make processing faster.
    What other ways are there to make it faster? I am currently using the statement below, but it is taking very long.
    version: 10.2.0.4
    UPDATE PDF.TAB_REF SET CODE='F';
    thanks

    "set indexes to unusable will make processing faster"
    Only for indexes on "CODE". Indexes on other columns are not updated by this UPDATE statement, so they do not need to be set to UNUSABLE.
    Now you've unnecessarily added the overhead of having to REBUILD each of those indexes!
    (And if you do have a single-column index on "CODE", you don't need it any more, so you should drop the index!)
    This would be a FullTableScan.
    How large is the table? Most DBAs/developers answer such a question with "x million rows". But "x million rows of 100 bytes each" is very different from "x million rows of 492 bytes each", which is very different from "x million rows of 1304 bytes each". The FullTableScan has to read all the blocks, so the effort is relative to the number of blocks, not the number of rows.
    Have you monitored the update by querying V$SESSTAT, V$SESS_IO and V$TRANSACTION ?
    (I guess you are using OEM's display of "Long Running SQL operations" which is based on V$SESSION_LONGOPS).
    Hemant K Chitale
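    A minimal sketch of such a monitoring query, assuming the SID of the updating session is known (the SID 123 below is hypothetical); undo usage grows as the UPDATE progresses, so repeated runs show the rate of work:
    SELECT t.used_ublk AS undo_blocks, t.used_urec AS undo_records
    FROM v$transaction t JOIN v$session s ON s.taddr = t.addr
    WHERE s.sid = 123;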

  • Faster table creation

    Hi all,
    10.2.0.5 on Solaris 10
    We have the query below, which is time-consuming.
    {code}
    create table callst nologging parallel 12 as
    SELECT
    CONTRNO    CONTRNO,
    SUBSCR_TYPE    SUBSCR_TYPE,
    AREA    AREA,
    SUBNO    SUBNO,
    CHARGETYPE    CHARGETYPE,
    BILLCODE1    BILLCODE,
    TRUNC (MIN (TRANSDATE))    TRANSDATE,
    TRUNC (MAX (TRANSDATE))    TRANSDATE_TO,
    BILLTEXT    BILLTEXT,
    SUM(BILLAMOUNT)    BILLAMOUNT,
    FACTOR    FACTOR,
    SYSDATE    UPDDATE,
    TARIFFCLASS1    TARIFFCLASS,
    COUNT(1)    NO_OF_CALLS,
    SUM(DURATION)    DURATION,
    SUM(GROSS_AMOUNT)    GROSS_AMOUNT,
    SUM(ACT_DURATION)    ACT_DURATION,
    AR_BILLTEXT    AR_BILLTEXT,
    CALL_TYPE    CALL_TYPE,
    FACTOR    FACTOR_INT,
    LAST_TRAFFIC_DATE    LAST_TRAFFIC_DATE,
    RATE_TYPE    RATE_TYPE,
    TARIFF_GROUP    TARIFF_GROUP,
    RATE_POS    RATE_POS,
    RATE_PLAN    TARIFF_PROFILE,
    TARIFF_GROUP    ORG_TARIFF_GROUP
    FROM HISTCALLS
    WHERE (CONTRNO , SUBNO) IN (
    SELECT CONTRNO , SUBNO FROM COMPLETE_HEADER_SUBS)
    AND LAST_TRAFFIC_DATE ='21-SEP-2013'
    AND TRANSDATE +0  > TO_DATE ('01-AUG-2013 000000','DD-MON-YYYY HH24MISS')
    GROUP BY
    CONTRNO, SUBSCR_TYPE, AREA, SUBNO, CHARGETYPE, TARIFFCLASS1, BILLCODE1, BILLTEXT, AR_BILLTEXT, LAST_TRAFFIC_DATE, FACTOR, RATE_PLAN, CALL_TYPE, TARIFF_GROUP, RATE_TYPE,RATE_POS;
    {code}
    How can I put the above in a loop, insert the values, and commit frequently for faster inserts?
    Please advise.

    Hi,
    For all performance questions, see the forum FAQ: https://forums.oracle.com/message/9362003
    Not that this has much to do with performance, but I noticed you're saying
    AND LAST_TRAFFIC_DATE ='21-SEP-2013'
    If last_traffic_date is a DATE, you should compare it to another DATE, such as the result of TO_DATE ('21-SEP-2013', 'DD-MON-YYYY').
    If last_traffic_date is not a DATE, why isn't it a DATE?
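    A sketch of the corrected predicates, assuming both columns really are DATEs (dropping the +0 also lets the optimizer consider an index on TRANSDATE, if one exists):
    AND last_traffic_date = TO_DATE ('21-SEP-2013', 'DD-MON-YYYY')
    AND transdate > TO_DATE ('01-AUG-2013 000000', 'DD-MON-YYYY HH24MISS')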

  • Tune Fast Table Scan

    Hi all,
    I have a SQL statement that uses a full table scan, since many rows will be selected. How can I further tune this SQL? Can I speed up the full table scan?
    Regards
    David

    One of the first things you can do to optimize a query is to analyze its explain plan. Example:
    SQL>
    SQL> explain plan for
      2  select sysdate from dual;
    Explained.
    SQL> set linesize 400
    SQL> desc dbms_xplan
    FUNCTION DISPLAY RETURNS DBMS_XPLAN_TYPE_TABLE
    Argument Name                  Type                    In/Out Default?
    TABLE_NAME                     VARCHAR2                IN     DEFAULT
    STATEMENT_ID                   VARCHAR2                IN     DEFAULT
    FORMAT                         VARCHAR2                IN     DEFAULT
    SQL>
    SQL>
    SQL> select * from table( dbms_xplan.display );
    PLAN_TABLE_OUTPUT
    | Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
    |   0 | SELECT STATEMENT     |             |       |       |       |
    |   1 |  TABLE ACCESS FULL   | DUAL        |       |       |       |
    Note: rule based optimization
    9 rows selected.
    SQL>
    Joel Pérez
    http://otn.oracle.com/experts
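    Once the plan confirms that a full scan really is the right access path, one option for speeding up the scan itself is parallel query; a sketch, assuming an edition that supports parallel query and spare I/O bandwidth (BIG_TABLE and the degree of 4 are placeholders):
    SELECT /*+ FULL(t) PARALLEL(t, 4) */ *
    FROM big_table t
    WHERE some_col = :val;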

  • How to change the unload priority of a table in SAP HANA?

    Hi Experts,
    How can we change the unload priority of a table in SAP HANA? I know that by default the priority is 5. Is there any way to check the unload priority of a particular table in HANA Studio? Is there any SQL statement to get the same?
    Please suggest.
    Thanks in advance.
    Regards,
    Arindam

    Hello Arindam,
    Just for the future:
    ALTER TABLE - SAP HANA SQL and System Views Reference - SAP Library
    To check beforehand:
    select table_name, unload_priority from SYS.TABLES
    where table_name = '<Your Table>';
    To make the change:
    alter table <Your Table> unload priority <Priority You Want>;
    As you have asked in the BW on HANA section, I assume you're on BW; you could also have checked this with transaction SE14.
    Hopefully the above gives you everything you need.
    Kind Regards,
    Amerjit
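    A worked example of both statements, using a hypothetical table "MYSCHEMA"."SALES" and a priority of 7:
    select table_name, unload_priority from SYS.TABLES where table_name = 'SALES';
    alter table "MYSCHEMA"."SALES" unload priority 7;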

  • Channel table to table

    Hi All,
    I would like to fill a table in the SUD dialog with channel data (more than 20,000 rows).
    I was trying with a loop, which takes more than 3 minutes to fill the table and display it. I just need to consider the real data; NOVALUE entries are not taken into consideration.
    Dim rowcnt, cnt
    cnt = 1 ' next free row in the dialog table
    For rowcnt = 1 To ChnLength("PosPeaks")
      ' copy only real values; skip NOVALUE entries
      If Not (CHD(rowcnt, CNo("PosPeaks")) = "NOVALUE") Then
        tblCrossOver.Cells(cnt, 1).Value = rowcnt
        tblCrossOver.Cells(cnt, 2).Value = Round(CHD(rowcnt, CNo("PosPeaks")), 10)
        cnt = cnt + 1
      End If
    Next ' rowcnt
    Output should be like the Table above.
    Kindly help me with this.
    regards,
    Swaroop K.B

    Hi dragnov,
    Please find attached a SUD example which uses the XTable, the fast table for DIAdem dialogs.
    In this table the channels of the example dataset "Noise data" are displayed. Each of these channels contains 325,000 values. Using the XTable it is easy to scroll through the channels.
    The XTable has two main events: "EventValGet" and "EventValSet".
    "EventValGet" is called for each cell to get the content which should be shown in that cell.
    "EventValSet" is called for each cell to save content that has been changed.
    This table is very fast because this is done only for the table rows and columns you currently see.
    To run the example, please load the example dataset, open the SUD editor and load and run the attached example.
    If it is necessary to change the displayed data go to the initialization event (row 14) and change the channel group name.
    The example was created with DIAdem 2011
    Greetings
    Walter
    Attachments:
    FastTable.SUD (11 KB)

  • Aggregate tables have many partitions per request

    We are having some performance issues dealing with aggregate tables and
    Db partitions. We are on BW3.5 Sp15 and use Oracle DB 9.2.06. After
    some analysis, we can see that for many of our aggregates, there are
    sometimes as many as a hundred partitions in the aggregate's fact table.
    If we look at infocube itself, there are only a few requests (for
    example, 10). However, we often delete and reload requests. We
    understood that there should be only one partition per
    request in the aggregate (infocube is NOT set up for partitioning by
    other than request).
    We suspect the high number of partitions is causing some performance
    issues, but we don't understand why they are being created.
    I have even tried deleting the aggregate (all aggregate F tables and
    partitions were dropped) and reloading, and we still see many, many more
    partitions than requests. (We also notice that many of the partitions
    have a very low record count, often fewer than 10 records in a partition.)
    We'd like to understand what is causing this. Could line item
    dimensions or high cardinality play a role?
    On a related topic:
    We have also seen an awful lot of empty partitions in both the InfoCube
    fact table and the aggregate fact table. I understand this is probably
    caused by the frequent deletion and reload of requests, but I am
    surprised that the system does not do a better job of cleaning up these
    empty partitions automatically. (We are aware of program
    SAP_DROP_EMPTY_FPARTITIONS).
    I am including some files which show these issues via screen shots and
    partition displays to help illustrate the issue.
    Any help would be appreciated.
    Brad Daniels
    302-275-1980
    215-592-2219

    Ideally the aggregates should get compressed by themselves; there could be some change runs that have affected the compression.
    Check the following:
    1. See if compressing the cube and rolling up the aggregates will merge the partitions.
    2. What is the delta mode for the aggregates ( are you loading deltas for aggregates or full loads ) ?
    3. Aggregates are partitioned according to the InfoCube, and since you are partitioning according to requests, the same is being done on the aggregates.
    Select another partitioning characteristic if possible, because it is recommended that request not be used for partitioning.
    Arun
    Assign points if it helps..
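    Since the underlying database is Oracle 9.2, one quick way to see how many partitions each fact table actually has is to query the Oracle data dictionary; a sketch (the /BIC/F% pattern assumes standard BW fact-table naming):
    SELECT table_name, COUNT(*) AS partition_count
    FROM dba_tab_partitions
    WHERE table_name LIKE '/BIC/F%'
    GROUP BY table_name
    ORDER BY partition_count DESC;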

  • Fastest way to fill an InDesign table with data

    Hello,
    I have to fill several InDesign tables with the content of my database.
    I have the database in memory and fill the cells in two loops (For Each row..., For Each col...).
    But it is so slow! Is there a faster way?
    Here is a code snippet of the current solution:
    Dim intRow = 0 ' index of the current InDesign table row
    For Each row In tableRecord
        ' Resolve the InDesign row once per database row
        Dim inDRow = table.Rows.AsEnumerable().ElementAt(intRow)
        For Each content In row
            ' One DOM call per cell is what makes this slow
            Dim cell = inDRow.Cells.AsEnumerable().ElementAt(content.Index)
            cell.Contents = content.Value
        Next
        intRow += 1
    Next
    Thank you for help!
    Best regards
    Harald

    Hi, Harald!
    "This should be faster: table.Contents=Array. Or not?"
    Surprisingly, it was not. It was slower. A lot slower.
    The array was gathered by (ExtendScript/JavaScript dummy code here):
    myArray = myTable.contents;
    Then I did operate on the array. Not on the table object or its cell objects. No direct access to InDesign's DOM objects. Just the built array.
    My text file was written by populating it with a string of the array:
    myString = myArray.join("separatorString");
    separatorString was something that was never used as contents in the table.
    Something like "§§§"…
    After importing the text file I used the convertToTable() method providing the separatorString as separator for the first and second argument with the number of columns as third argument. The number of columns was known from my original table.
    var myNewTable = myText.convertToTable("separatorString", "separatorString", myNumberOfColumns);
    Alternatively you could also remove the table after building the array and assign myString as the contents of the insertionPoint of the removed table in the story. I think I tested that as well, but I do not know if there is a difference in speed compared to placing a text file with the same contents (I think there was, but I'm not sure anymore). So I ended up with:
    1. Contents of table to Array
    2. Array manipulation
    3. Array to String
    4. Write String as file
    5. Remove table
    6. Place file at InsertionPoint of (now removed) table
    Also to note: This was in InDesign CS5 with a very large table.
    Things could have changed in InDesign versions with 64-Bit support.
    But I did not test that yet. The customer I wrote this script for is still on CS5.
    Uwe

  • R3 Table Required for data load status

    Hi all,
    I am on version 3.x, so RSSTATMANPART (the fast table, only available in BI 7) won't work.
    I want the number of records added and transferred for a specific cube on a specific date.
    Thanks in advance.

    Check Tables RSMONICTAB, RSMONFACT, RSMONICDP
    Hope this helps..
    /pradeep

  • SSIS Fast Load fails to copy correct number of rows

    Step 1 - truncate the destination table.
    On success:
    Using an ODBC table source to an ODBC destination, use a fast table load to take three columns out of the source and copy them to the destination.
    In the source, column 1 is the primary key (int).
    The other two columns are timestamps.
    In the destination table, column 1 is int (no keys) and does not allow nulls; columns 2 & 3 allow nulls.
    Normally the row counts in the source and destination tables match after a run. However, on occasion the destination table count is less than the source's. On the destination ODBC connection we enable identity insert and check constraints. I can't see how we'd drop rows, since by definition the row needs to exist in the source (we're copying the primary key).
    The first time this occurred, anecdotal information is that the source sql server was under memory stress.
    Has anyone seen this behavior before? Any ideas on how to resolve it?
    Ken

    I just ran into this same issue.  After a solid half day of troubleshooting I found this little 'fast load' setting to be the culprit. 
    We have a very simple copy operation taking rows out of an ODBC source, adding a column and then stuffing them into a destination ODBC source.  All the operations were run on a development machine with a local SQL Server install and plenty of RAM/CPU headroom.
    I'd enabled logging of all kinds everywhere trying to detect the problem but nothing was tripped.  When run under debug mode (in dev studio) I see the correct number of rows (684 in this case) reported being sent to the ODBC destination; however, when I look in the table itself I only ever see the first one.
    As soon as I turned off the fast load option then I started getting the full data set moving over properly.
    At this point, I'm of a mind to go through and remove 'fast load' from every one of my packages.  I'll take reliability over speed any day of the week.

  • Drop cache group in timesten 11.2.1

    Hello,
    I am trying to drop an asynchronous cache group in timesten. I follow the below steps to do so:
    a) I use the connection string with the DSN, UID, PWD, OracleID, OraclePWD specified
    b) If replication policy is 'always', change it to 'manual'
    c) Stop replication
    d) Drop the AWT cache group (drop cache group cachegroupname;)
    e) Create the modified AWT
    f) Start replication
    g) Set replication policy back to 'always'
    After step (d), I get the following error:
    Command> drop cache group cachegroupname;
    5219: Temporary Oracle connection failure error in OCIServerAttach(): ORA-12541: TNS:no listener rc = -1
    5131: Cannot connect to backend database: OracleNetServiceName = "servicename", uid = "inputuid", pwd is hidden, TNS_ADMIN = "/opt/TT/linux/info", ORACLE_HOME= "/opt/TT/linux/ttoracle_home/instantclient_11_1"
    5109: Cache Connect general error: BDB connection not open.
    The command failed.
    Command>
    Does the error suggest that cache connect has a problem? Should I restart the timesten daemon and try again? Please let me know what the real problem is.
    Let me know if you need information.
    Thanks,
    V

    The SQL*Plus problem is simply because you don't have all the correct directories listed in LD_LIBRARY_PATH. It's likely that your .profile (or equivalent) was setting those based on ORACLE_HOME, and if this is now unset that could be the problem. Check that LD_LIBRARY_PATH is set properly and this problem will go away.
    The character set issue is potentially more problematic. It is mandatory that the database character set used by TimesTen exactly matches that of the Oracle DB when TimesTen is being used as a cache. If the character sets truly are different then this is very serious and you need to rectify it, as many things will fail otherwise. You either need to switch the Oracle DB back to US7ASCII (probably a big job) or you need to change the TimesTen character set to WE8MSWIN1252.
    To accomplish the latter you would:
    1. Take a backup of the TT datastore using ttBackup (just for safety).
    2. For any non-cache tables (i.e. TT only tables), unload data to flat files using ttBulkCp -o ...
    3. Save the schema for the datastore using ttSchema.
    4. Stop cache and replication agents.
    5. Ensure datastore is unloaded from memory and then destroy the datastore (ttDestroy)
    6. Edit sys.odbc.ini to change Datastore character set.
    7. Connect to datastore as instance administrator (to create datastore). Create all necessary users and grant required privileges.
    8. Set the cache userid/password (call ttCacheUidPwdSet(...,...))
    9. Start the cache agent.
    10. Run the SQL script generated by ttSchema to re-create all database objects (tables and cache groups etc.)
    11. Re-populate all non-cache tables from the flat files using ttBulkCp -i
    12. Re-load all cache groups using LOAD CACHE GROUP ...
    13. Restart the replication agent.
    That's pretty much it (hopefully I have not missed out any vital step).
    Chris
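    For steps 8, 9 and 12, a minimal ttIsql sketch (the cache group name and credentials are placeholders):
    Command> call ttCacheUidPwdSet('cacheuser', 'cachepwd');
    Command> call ttCacheStart;
    Command> LOAD CACHE GROUP mycachegroup COMMIT EVERY 256 ROWS;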

  • FM which will display a select option in a pop up

    Hi All,
    I am looking for an FM which will display a select-option in a popup and prompt me to enter some value into it.
    I tried the FMs POPUP_GET_VALUES_USER_BUTTONS and POPUP_GET_VALUES, which allow me to enter only single parameter values and not a select-option.
    Regards
    Puja

    Hi
    You can create it yourself; it's very easy and fast:
    TABLES BKPF.
    SELECTION-SCREEN BEGIN OF SCREEN 100.
    SELECT-OPTIONS: S_BUKRS FOR BKPF-BUKRS,
                    S_BELNR FOR BKPF-BELNR.
    SELECTION-SCREEN END   OF SCREEN 100.
    CALL SELECTION-SCREEN 100 STARTING AT 5 10.
    IF SY-SUBRC = 0.
    *------> User has pressed F8
    ELSE.
    *------> User wants to leave the popup
    ENDIF.
    Max

  • InDesign content to Database

    We have InDesign files created, and now we would like to populate the database with the content available in the InDesign files.
    Is there an SDK to read InDesign files content and populate it to database?
    Or Please suggest the best approach to get database populated from InDesign?
    I know there is an XML export from InDesign files, and that export can then be imported into the DB, but this process involves a lot of manual work to tie up the relationships between the content in the InDesign file.


  • CSS 11501 100% CPU

    I have a CSS running at 100% CPU with no services up. I cannot ping the CSS interfaces after about 1 minute of uptime. I'm running 8.1. Any ideas? I turned logging back to default and turned off my ACLs.

    You'll need to go into llama/debug mode and do the following:
    symbol-table load SPRITZ
    shell 1 1 spy
    shell 1 1 spyStart
    shell 1 1 spyReport
    shell 1 1 spyReport
    shell 1 1 spyStop
    symbol-table unload
    Do this when the CPU is high.
    This will tell us the cause.
    Also, if you only see the problem when logging is set to debug, then I would say do not set the level of logging so high unless you are trying to debug an issue.
    Gilles.
