Migrating a large table containing CLOBs to a new dedicated tablespace.

Hi All
I am seeking some advice on something.
I have an Oracle 10g live database with a table used for auditing purposes. The table has 5.8 million rows and contains CLOB data. When I last performed a full export of this table using Data Pump, the dump file was around 40GB in size and, as you can imagine, took a long time to complete.
This table is currently located in the main data tablespace. With that in mind, I am looking at moving it to a new dedicated tablespace on dedicated hard disks. The theory is that this will allow easier DBA administration (separate RMAN backups of the new tablespace) and improve performance on the main data tablespace, since writes to the audit table will then go to separate disks in a dedicated tablespace. I have a test system that is a complete copy of live which I can use to test this.
Can anyone advise the best way to transfer the audit table to the new tablespace? I intend to create a new tablespace of the same size located on the separate dedicated disks.
Option 1
I presume there are two main options. The first is a full Data Pump export of the table and a re-import into the new tablespace, making sure to rebuild the indexes. While doing so, I presume the database would need to be inaccessible to users, otherwise inserts will continue against the source table I am trying to migrate? I think I would need to take the system offline while doing this anyway, as the system would perform a lot slower while the export is taking place.
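For what it's worth, the command-line way to land the re-import in the new tablespace is impdp with REMAP_TABLESPACE=<old_ts>:<new_ts>. Below is a rough PL/SQL sketch of the same idea using the DBMS_DATAPUMP API; the job, dump file, directory and tablespace names are placeholders, not taken from the original post.

-- Hypothetical import of the exported audit table into the new tablespace.
DECLARE
  h         NUMBER;
  job_state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT',
                          job_mode  => 'TABLE',
                          job_name  => 'AUDIT_TAB_IMP');
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'audit_tab.dmp',
                         directory => 'DATA_PUMP_DIR');
  -- Re-map the table (and its LOB segments) into the new tablespace.
  DBMS_DATAPUMP.METADATA_REMAP(h, 'REMAP_TABLESPACE', 'MAIN_DATA_TS', 'AUDIT_TS');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
END;
/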
Option 2
Use the ALTER TABLE ... MOVE TABLESPACE feature? I am new to the DBA role, so I have little knowledge of this feature.
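For illustration, Option 2 would look something like the sketch below; the tablespace, datafile path, table, LOB column and index names are placeholders.

-- Create the dedicated tablespace on the new disks (path and size are examples).
CREATE TABLESPACE audit_ts
  DATAFILE '/u02/oradata/PROD/audit_ts01.dbf' SIZE 10G AUTOEXTEND ON;

-- Move the table and its CLOB segment; the internal LOB index follows the LOB segment automatically.
ALTER TABLE audit_log MOVE TABLESPACE audit_ts
  LOB (audit_text) STORE AS (TABLESPACE audit_ts);

-- MOVE leaves the ordinary indexes UNUSABLE, so rebuild them afterwards.
ALTER INDEX audit_log_pk REBUILD TABLESPACE audit_ts;

Note that the table is locked against DML for the duration of the move, so even Option 2 needs a quiet period or an outage on a busy audit table.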
Please can you advise which of the above would be the best course of action, or indeed whether there are any other options/solutions I am not aware of?
Many Thanks in advance.

Actually, most queries are using an index range scan or index full scan.
But, just for the sake of this discussion, here is the simple query I mentioned.
Note that my concern is the chained or migrated rows, and how to resolve them.
But if my table contains 520 columns, how can I get around intra-block chaining?
The question also still remains: how can I tell the difference between row chaining, row migration, and intra-block chaining, and which of them is showing up in dba_tables.chained_cnt? (One way to check is sketched after the plan below.)
simple query:
SELECT C536870916,COUNT(T2179.C536881135)
FROM aradmin.T2179
WHERE ((T2179.C536871037 = 'Trouble') AND ((T2179.C536870944 = 'New')
OR (T2179.C536870944 = 'Assigned') OR (T2179.C536870944 = 'On Hold')))
GROUP BY C536870916
ORDER BY C536870916
Explain plan for simple query is:
SELECT STATEMENT
  SORT GROUP BY NOSORT
    TABLE ACCESS BY INDEX ROWID
      INDEX FULL SCAN
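To see which rows chained_cnt is actually counting, one option is ANALYZE ... LIST CHAINED ROWS; a sketch, assuming the standard CHAINED_ROWS exception table created by utlchain.sql:

-- Create the standard CHAINED_ROWS table, then list the offending rows.
@?/rdbms/admin/utlchain.sql
ANALYZE TABLE aradmin.T2179 LIST CHAINED ROWS INTO chained_rows;
SELECT COUNT(*) FROM chained_rows WHERE table_name = 'T2179';

Rows listed this way are either migrated (the whole row has moved to another block, which a move/rebuild with a higher PCTFREE usually fixes) or genuinely chained because the row no longer fits in a single block. Separately, rows with more than 255 columns are always stored as multiple row pieces, and that intra-block chaining cannot be removed by rebuilding.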
Also, on this table of 520 columns there are already 61 indexes, of which 20 are LOB indexes and one is a function-based (FB) index. All the others are normal indexes.
My thinking is that moving the table means I would have to rebuild each and every index; however, I'm not sure how to rebuild the LOB or FB indexes.
So, back to my original question: can I safely "move" a table containing several CLOB columns and also how do I rebuild the LOB indexes?
Thanks.
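On the LOB-index part of the question: LOB indexes are internal structures that are rebuilt automatically when the table or LOB segment is moved, so they cannot (and need not) be rebuilt by hand; it is the normal and function-based indexes that go UNUSABLE after a move. A hedged way to generate the rebuild statements, using the owner/table from the post:

-- List rebuild commands for the indexes invalidated by the move.
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;' AS rebuild_cmd
FROM   dba_indexes
WHERE  table_owner = 'ARADMIN'
AND    table_name  = 'T2179'
AND    status      = 'UNUSABLE';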

Similar Messages

  • Select from table containing clob

    If I try to select from a table containing a CLOB column in SQL*Plus it gives an error.
    Tab1 contains 3 clob columns and 1 blob column
    select * from tab1;
    SP2-0678: Column or attribute type can not be displayed by SQL*Plus
    The same statement works in SQL Developer and I am able to see the result.
    Actually I am writing the queries and they will be used by Java developers in their JSP pages.
    So what happens here? Can Java use these select statements, or will it throw an error like SQL*Plus does?

    BLOB column content can't be displayed in SQL*Plus:
    SQL> create table t_blob (b blob);
    Table created.
    SQL> edit
    Wrote file afiedt.buf
      1* insert into t_blob values('01')
    SQL> /
    1 row created.
    SQL> commit;
    Commit complete.
    SQL> select * from t_blob;
    SP2-0678: Column or attribute type can not be displayed by SQL*Plus
    BLOB and CLOB column content can be processed using DBMS_LOB package procedures and functions, or using a client language's methods (like Java). See the JDBC specification.
    Rgds.
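    As a small illustration of the DBMS_LOB route (the column name clob_col is a placeholder; tab1 is from the post): note that SQL*Plus can normally display CLOB columns directly, governed by SET LONG; it is the BLOB column that triggers SP2-0678.
    -- Preview the first 200 characters of a CLOB and report its total length.
    SELECT DBMS_LOB.SUBSTR(clob_col, 200, 1) AS clob_preview,
           DBMS_LOB.GETLENGTH(clob_col)      AS clob_len
    FROM   tab1;
    In JDBC the same columns can be read with ResultSet.getClob() / getBlob(), so the select statements themselves are fine for the Java developers.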

  • Deleting a row from a table containing CLOB as one of the columns

    When I delete a row from a table which contains an internal LOB column (CLOB or BLOB), will the LOB data also be deleted? I understand that what is actually stored in the LOB column is the LOB locator, which points to the actual data.
    So, when I delete this row, the locator will be deleted, but will the actual data the locator points to also be deleted? If not, what is the process to delete the data the locator points to when the row containing the locator is deleted? Otherwise the actual data would become orphan data that nobody has access to. Does automatic garbage collection occur at frequent intervals to delete such unreferenced data residing on the database server?
    Thanks in advance for the help, can email me at [email protected] alternatively.
    Regards,
    Srinivasa C.

    Michael,
    Thanks very much for your inputs. Here are the results I got when I tried the approach you explained in your answer. The TRUNCATE command brought the actual size back to normal, but DELETE does not do the same; so how can I delete the data that a particular CLOB locator points to?
    TRUNCATE would delete all the rows of the table, which might not serve my purpose. I would like to delete a row and also its associated CLOB data from the database! Is there any way to do this?
    Is there any limitation on the ool_sample size? I am basically a C++ programmer, and I am looking for some function like FREE which would free the memory allocated to the CLOB once the locator is deleted.
    your help is greatly appreciated - Thanks!
    :-) Srini.
    ==========================
    My Results:
    ==========================
    SQL> create table sample (
    2 id integer primary key,
    3 the_data CLOB default empty_clob() )
    4 lob (the_data) store as ool_sample;
    Table created.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as sotrage_consumed
    2 from user_segments
    3 where segment_name in ('SAMPLE', 'OOL_SAMPLE')
    4 group by segment_name;
    SEGMENT_NAME                   SOTRAGE_CONSUMED
    OOL_SAMPLE                     20K
    SAMPLE                         10K
    SQL> select count(*) from sample;
    COUNT(*)
    0
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into sample values (i, RPAD('some data', 4000) );
    5 end loop;
    6 end;
    7 /
    PL/SQL procedure successfully completed.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as sotrage_consumed
    2 from user_segments
    3 where segment_name in ('SAMPLE', 'OOL_SAMPLE')
    4 group by segment_name;
    SEGMENT_NAME                   SOTRAGE_CONSUMED
    OOL_SAMPLE                     6420K
    SAMPLE                         70K
    SQL> delete sample;
    1000 rows deleted.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as sotrage_consumed
    2 from user_segments
    3 where segment_name in ('SAMPLE', 'OOL_SAMPLE')
    4 group by segment_name;
    SEGMENT_NAME                   SOTRAGE_CONSUMED
    OOL_SAMPLE                     6420K
    SAMPLE                         70K
    SQL> commit;
    Commit complete.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as sotrage_consumed
    2 from user_segments
    3 where segment_name in ('SAMPLE', 'OOL_SAMPLE')
    4 group by segment_name;
    SEGMENT_NAME                   SOTRAGE_CONSUMED
    OOL_SAMPLE                     6420K
    SAMPLE                         70K
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into sample values (i, rpad('some data', 4000));
    5 end loop;
    6 end;
    7 /
    PL/SQL procedure successfully completed.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as sotrage_consumed
    2 from user_segments
    3 where segment_name in ('SAMPLE', 'OOL_SAMPLE')
    4 group by segment_name;
    SEGMENT_NAME                   SOTRAGE_CONSUMED
    OOL_SAMPLE                     9616K
    SAMPLE                         70K
    SQL> truncate table sample;
    Table truncated.
    SQL> select segment_name, round(sum(bytes)/1024, 2) || 'K' as sotrage_consumed
    2 from user_segments
    3 where segment_name in ('SAMPLE', 'OOL_SAMPLE')
    4 group by segment_name;
    SEGMENT_NAME                   SOTRAGE_CONSUMED
    OOL_SAMPLE                     20K
    SAMPLE                         10K
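    To tie this back to the question: the DELETE does remove the LOB data logically (there is no orphaned data left that nobody can access), but the freed space stays allocated to the LOB segment for reuse and is only handed back by a TRUNCATE, a shrink or a move. A hedged sketch of the non-TRUNCATE options, reusing the names from the example above ('users' is a placeholder tablespace; SHRINK SPACE needs 10g or later and an ASSM tablespace):
    -- Shrink the LOB segment in place.
    ALTER TABLE sample MODIFY LOB (the_data) (SHRINK SPACE);
    -- Or rebuild the LOB segment; this also rebuilds the table segment,
    -- so any indexes on the table must be rebuilt afterwards.
    ALTER TABLE sample MOVE LOB (the_data) STORE AS (TABLESPACE users);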

  • How to subdivide 1 large TABLE based on the output of a VIEW

    I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set from a view.
    I am working towards the following strategy in PL/SQL:
    1 -- set up cursor, execute the view (so I have the list of identifiers)
    2 -- create a second cursor (or loop?) which:
    accepts each of the identifiers in turn
    executes a query (EXECUTE IMMEDIATE?) on the larger table
    INSERTs (or appends?) each resultset into the GTT
    3 -- Then the GTT contains just the required subset of the larger table for further processing and eventual import into iReport for reporting.
    Can anyone point me to code that would "spoon feed" me on this? Or suggest the best / better way to go about it?
    The scale of the issue here -- GTT is defined and ready to go, the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so which add up to approx 1000 rows.
    Thanks,
    Rob

    Welcome to the forum!
    >
    I am searching for a decent method / example code to subdivide a large table (into a global temporary table (GTT) for further processing) based on a list of numeric/alphanumeric identifiers which is the result set from a view.
    Can anyone point me to code that would "spoon feed" me on this? Or suggest the best / better way to go about it?
    The scale of the issue here -- GTT is defined and ready to go, the larger table contains approx 40,000 rows and I need to extract a dozen subsets or so which add up to approx 1000 rows.
    >
    No - there is no code to point you to.
    As many of the previous responses indicate, part of the concern is that you seem to have already chosen and partially implemented a solution, but the information you provided makes us question whether you have adequately analyzed and defined the actual problem and the processing that needs to happen. Here's why I have questions about your approach.
    1. GTT - a red flag issue - these tables are generally not needed in Oracle. So when you, or anyone says they plan to use one it raises a red flag. People want to be sure you really need one rather than not using a table at all, or just using a regular table instead of a GTT.
    2. Double nested CURSOR loops - a DOUBLE red flag issue - this is almost always SLOW-BY-SLOW (row-by-row) processing at its worst. It is seldom needed, doesn't perform well and won't scale. People are going to question this choice and rightfully so.
    3. EXECUTE IMMEDIATE - a red flag issue, or at least a yellow/warning flag. This is definitely a legitimate methodology when it is needed, but many times developers resort to it when it isn't needed because it seems easier than doing the hard work of actually defining ALL of the requirements. It seems easier because it appears that it will allow for and work with those 'unexpected' things that seem to come up in new development.
    Unfortunately most of those unexpected things come up because the developer did not adequately define all of the requirements. The code may execute when those things arise but it likely won't do the right thing.
    Seeing all three of those red flag issues in the same question is like waving a red flag at a charging bull. The responses you get are all likely to be of the 'DO NOT DO THAT' variety.
    You are correct that a work table is appropriate when there is business logic to be applied to a set of data that cannot be applied using SQL alone. Use a regular table unless
    1. you plan to have multiple sessions working with the table simultaneously,
    2. each session needs to work with ONLY their own data in that table and not data from other sessions
    3. the data does NOT need to be available after the session ends
    4. you actually need a GTT to take advantage of the automatic data preservation (ON COMMIT PRESERVE/DELETE) functionality
    Remember - when a session ends, the data in the GTT is gone. That can make it very difficult to troubleshoot data-related problems, since a different session can't see what data is in the table. Even if a GTT is needed for the final product, it is very useful to use a regular table so that the data can be examined after test runs to help find and fix problems. Then, after development is complete and initial testing is done, a GTT would be substituted and final testing performed.
    So the main remaining question is why you need to perform multiple dynamic queries to get the data populated into the work table? Especially why is a nested cursor loop needed? My suspicion is that you have the queries stored in a query table and one of your loops extracts the query and executes it dynamically.
    How many queries are we talking about? Do these queries change from run to run? Please provide more detail of the process and an example query for the selection filtering as well as a typical dynamic query you plan to use.
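    For what it's worth, the set-based alternative being hinted at above usually collapses steps 1 to 3 into a single statement, with no cursors and no dynamic SQL; a sketch with made-up table, view and column names:
    INSERT INTO work_table (id, col1, col2)
    SELECT t.id, t.col1, t.col2
    FROM   large_table t
    WHERE  t.id IN (SELECT v.id FROM identifier_view v);
    Whether work_table ends up being a GTT or a regular table, the filtering itself does not need a loop.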

  • Memory leak in JCO when calling an ABAP-function that returns larg tables

    Hello everybody,
    I think I have discovered a memory leak in JCO when calling functions that have exporting tables with large datasets. For example the ABAP function RFC_READ_TABLE, which in this example I use to retrieve data from a table called "RSZELTTXT", which contains ~120,000 datasets. RFC_READ_TABLE exports the data as table "DATA".
    Here a simple JUnit test:
    http://pastebin.ca/1420451
    When running it with Sun Java 1.6 and the standard heap size of 64MB I get an OutOfMemoryError (Java heap space):
    http://pastebin.ca/1420472
    Looking at the heap dump (which I unfortunately cannot post here because of its size), I can see that I have 65,000 char[512] array objects in the heap which don't get cleaned up. I think each char[512] array stands for one dataset in the exporting table "DATA"; since the table contains 120,000 datasets, the heap is full after the first 65,000 datasets are parsed. Apparently, JCO tries to read all datasets into memory instead of just reading the dataset the pointer (JCoTable.setRow(i)) currently points to and releasing it from memory after the pointer moves forward...
    Did anybody else experience this?
    Is SAP going to resolve this issue in upcoming versions of JCO?
    regards Samir

    Hi,
    Check the links below:
    1) How To Analyze Performance Problems JCO
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/3fbea790-0201-0010-6481-8370ebc3c17d
    2) How to Avoid Memory Leaks 
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c3e598fe-0601-0010-e990-b8622840c8c2
    Salil...
    Edited by: salil chavan on Jun 2, 2009 5:21 AM

  • Need help in optimisation for a select query on a large table

    Hi Gurus
    Please help in optimising this code. It takes 1 hour for 3,000-4,000 records, which is very slow.
    My SELECT is reading from a table which contains 10 million records.
    I am writing the SELECT on the large table and retrieving values from it by comparing against my internal table, which has 3,000-4,000 records.
    I am pasting the code. please help
    DATA: wa_i_tab1 TYPE tys_tg_1.
    DATA: i_tab TYPE STANDARD TABLE OF tys_tg_1.
    DATA: wa_result_pkg  TYPE tys_tg_1,
          wa_result_pkg1 TYPE tys_tg_1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT                    "this table contains 10 million records
      INTO CORRESPONDING FIELDS OF TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE       "contains 3000-4000 records
      WHERE /BIC/ZREB_SDAT = RESULT_PACKAGE-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE-/BIC/ZLITEM1.
    SORT RESULT_PACKAGE BY AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    SORT i_tab BY AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1.
    LOOP AT RESULT_PACKAGE INTO wa_result_pkg.
      READ TABLE i_tab INTO wa_i_tab1
           WITH KEY /BIC/ZREB_SDAT = wa_result_pkg-/BIC/ZREB_SDAT
                    AGREEMENT      = wa_result_pkg-AGREEMENT
                    /BIC/ZLITEM1   = wa_result_pkg-/BIC/ZLITEM1.
      IF sy-subrc = 0.
        MOVE wa_i_tab1-/BIC/ZSETLRUN TO wa_result_pkg-/BIC/ZSETLRUN.
        wa_result_pkg1-/BIC/ZSETLRUN = wa_result_pkg-/BIC/ZSETLRUN.
        MODIFY RESULT_PACKAGE FROM wa_result_pkg1 TRANSPORTING /BIC/ZSETLRUN.
      ENDIF.
      CLEAR: wa_i_tab1, wa_result_pkg1, wa_result_pkg.
    ENDLOOP.

    Hi,
    1) Does the RESULT_PACKAGE internal table contain any duplicate records based on the WHERE condition below?
    2) Remove the INTO CORRESPONDING FIELDS OF TABLE and use INTO TABLE instead.
    Refer to the code below:
    RESULT_PACKAGE1[] = RESULT_PACKAGE[].
    SORT RESULT_PACKAGE1 BY /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    DELETE ADJACENT DUPLICATES FROM RESULT_PACKAGE1 COMPARING /BIC/ZREB_SDAT AGREEMENT /BIC/ZLITEM1.
    SELECT /BIC/ZSETLRUN AGREEMENT /BIC/ZREB_SDAT /BIC/ZLITEM1
      FROM /BIC/PZREB_SDAT
      INTO TABLE i_tab
      FOR ALL ENTRIES IN RESULT_PACKAGE1
      WHERE /BIC/ZREB_SDAT = RESULT_PACKAGE1-/BIC/ZREB_SDAT
        AND AGREEMENT      = RESULT_PACKAGE1-AGREEMENT
        AND /BIC/ZLITEM1   = RESULT_PACKAGE1-/BIC/ZLITEM1.
    And one more thing: you are reading from a 10-million-record table, so use PACKAGE SIZE in your SELECT query.
    Also refer to the following link: For All Entries for 1 Million Records.
    Regards,
    Dhina..
    Edited by: Dhina DMD on Sep 15, 2011 7:17 AM

  • How to effectively manage large table which is rapidly growing

    All,
    My environment is single node database with regular file system.
    Oracle - 10.2.0.4.0
    IBM - AIX
    A tablespace in this database is growing rapidly. In particular, a single table in that tablespace with a LONG RAW column has grown from 4 GB to 900 GB in 6 months.
    We had a discussion with the application team and they mentioned that, due to acquisitions, data volume has increased and we expect it to grow up to 4 TB in the next 2 years.
    In order to effectively manage the table and to avoid performance issues, we are looking for different options as below.
    1) The table has a date column, so we thought of converting it to a partitioned table using range partitioning. I have never converted a 900 GB table to a partitioned table. Is this the best method?
         a) How can I move the data from the regular table to the partitioned table? I looked on Google but was not able to find a good method for converting a regular table to a partitioned table. Can you help me out / share best practices?
    2) In one of the articles I read that BLOB is better than the LONG RAW datatype. How easy is it to convert from LONG RAW? Will BLOB yield better performance and use disk space more effectively?
    3) The application team has a purging activity based on application logic. We thought of shrinking the table with row movement enabled ("alter table <table name> shrink space cascade"), but it returns an error that the table contains a LONG datatype. Any suggestions?
    Any other methods / suggestions to handle this situation effectively..
    Note: By end of 2010, we have plans of moving to RAC with ASM.
    Thanks

    To answer your question 2:
    >
    2) In one of the articles I read that BLOB is better than the LONG RAW datatype. How easy is it to convert from LONG RAW? Will BLOB yield better performance and use disk space more effectively?
    >
    Yes, LOBs (BLOBs or CLOBs) are supposed to be better than RAW (or LONG RAW). In addition, I believe Oracle has desupported, or will shortly desupport, LONG RAW in favour of LOBs (CLOBs or BLOBs, as appropriate).
    There is a function called TO_LOB that you have to use to convert. It's a pain because you have to create the second table and then insert into the second table from the first table using the TO_LOB function.
    from my notes...
    =================================================
    Manually recreate the original table...
    Next, recreate (based on describe of the table), except using CLOB instead of LONG:
    SQL> create table SPACER_STATEMENTS
    2 (OWNER_NAME VARCHAR2(30) NOT NULL,
    3 FOLDER VARCHAR2(30) NOT NULL,
    4 SCRIPT_ID VARCHAR2(30) NOT NULL,
    5 STATEMENT_ID NUMBER(8) NOT NULL,
    6 STATEMENT_DESC VARCHAR2(25),
    7 STATEMENT_TYPE VARCHAR2(10),
    8 SCRIPT_STATEMENT CLOB,
    9 ERROR VARCHAR2(1000),
    10 NUMBER_OF_ROWS NUMBER,
    11 END_DATE DATE
    12 )
    13 TABLESPACE SYSTEM
    14 ;
    Table created.
    Try to insert the data using select from original table...
    SQL> insert into SPACER_STATEMENTS select * from SPACER_STATEMENTS_ORIG;
    insert into SPACER_STATEMENTS select * from SPACER_STATEMENTS_ORIG
    ERROR at line 1:
    ORA-00997: illegal use of LONG datatype
    That didn't work...
    Now, let's use TO_LOB:
    SQL> insert into SPACER_STATEMENTS
    2 (OWNER_NAME, FOLDER, SCRIPT_ID, STATEMENT_ID, STATEMENT_DESC, STATEMENT_TYPE, SCRIPT_STATEMENT, ERROR, NUMBER_OF_ROWS, END_DATE)
    3 select OWNER_NAME, FOLDER, SCRIPT_ID, STATEMENT_ID, STATEMENT_DESC, STATEMENT_TYPE, TO_LOB(SCRIPT_STATEMENT), ERROR, NUMBER_OF_ROWS, END_DATE
    4 from SPACER_STATEMENTS_ORIG;
    10 rows created.
    works well...
    ===============================================================
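    On question 1 (getting the 900 GB table into a partitioned table), one common offline approach combines the TO_LOB conversion above with a partitioned CREATE TABLE ... AS SELECT followed by a rename; DBMS_REDEFINITION is the usual online alternative when downtime is not available. A rough sketch, with invented table, column and partition names:
    CREATE TABLE big_table_part
    PARTITION BY RANGE (created_date) (
      PARTITION p_2009 VALUES LESS THAN (DATE '2010-01-01'),
      PARTITION p_2010 VALUES LESS THAN (DATE '2011-01-01'),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    )
    AS
    SELECT id, created_date, TO_LOB(raw_payload) AS raw_payload
    FROM   big_table;
    -- Recreate indexes and constraints on big_table_part, then swap the names:
    -- RENAME big_table TO big_table_old;
    -- RENAME big_table_part TO big_table;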

  • Which table contains net book value for Assets created with AS91.

    I have a problem locating where the net book value is stored in SAP. Is it simply calculated and not stored in any one place? I am trying to predict how SAP will calculate the net book value for some assets we plan on converting, but my formula doesn't always work consistently and I have no idea what is going on. If it is stored in a table somewhere, can anyone please let me know!
    Thank you all

    Hi anar.samadzade & Michael Stewart
    It is not possible to get the net book value directly from any single table. You must migrate the gross book value (acquisition cost) and the accumulated depreciation; SAP will then calculate the NBV.
    Gross block and accumulated depreciation you will get from table ANLC.
    http://fixedassetsaccounting.net/migrating-fixed-assets-into-sap-a-harlex-guide/
    Dear anar.samadzade Ask Questions politely
    Regards
    Viswa

  • How do I open and use a large table from Word in Pages?

    I upgraded my MBP from Snow Leopard to Mountain Lion a couple days ago.  I knew that my old Word application wouldn't work, but several Apple people assured me that Pages could handle my old docs, including tables.
    So, I purchased and installed Pages '09 this morning.  I opened the table I use all the time - - and most of it is missing!  Apparently, Pages doesn't handle large tables.
    I need help - desperately!  This table contains all my medical expenses for the year, so I have to have it.
    Thanks

    J,
    Besides needing to have your Table Object Floating, you should know that the maximum number of rows in Pages Tables is 999.
    If you continue to have problems opening your Word document and its table in Pages, try one of the free Office Clones, LibreOffice, OpenOffice, IBM Lotus Symphony, etc.
    I'd bet that at least one of those free apps will work if Pages doesn't. By the way, have you tried viewing your Word document in Quick Look? To do that, click on the filename in Finder and hit the Spacebar key. You won't be able to do anything but view the document in Quick Look, but it would give you confidence that your file is OK.
    Jerry

  • Split a large table into multiple packages - R3load/MIGMON

    Hello,
    We are in the process of reducing the export and import downtime for the UNICODE migration/Conversion.
    In this process, we have identified a couple of large tables which were taking a long time to export and import by a single R3load process.
    Step 1:> We ran the System Copy --> Export Preparation
    Step 2:> System Copy --> Table Splitting Preparation
    We have created a file listing the large tables which need to be split into multiple packages, and were able to create a total of 3 WHR files for the following tables under the DATA directory of the main EXPORT directory.
    SplitTables.txt (Name of the file used in the SAPINST)
    CATF%2
    E071%2
    Which means, we would like each of the above large tables to be exported using 2 R3load processes.
    Step 3:> System Copy --> Database and Central Instance Export
    During the SAPinst process, at the Split STR files screen, we selected the option 'Split Predefined Tables' and selected the file which has the predefined tables.
    Filename: SplitTable.txt
    CATF
    E071
    When we started the export process, we did not see the above tables being processed by multiple R3load processes.
    They were exported by a single R3load process.
    In the order_by.txt file, we have found the following entries...
    order_by.txt----
    # generated by SAPinst at: Sat Feb 24 08:33:39 GMT-0700 (Mountain
    Standard Time) 2007
    default package order: by name
    CATF
    D010TAB
    DD03L
    DOKCLU
    E071
    GLOSSARY
    REPOSRC
    SAP0000
    SAPAPPL0_1
    SAPAPPL0_2
    We have selected a total of 20 parallel jobs.
    Here my questions are:
    a> what are we doing wrong here?
    b> Is there a different way to specify/define a large table into multiple packages, so that they get exported by multiple R3load processes?
    I really appreciate your response.
    Thank you,
    Nikee

    Hi Haleem,
    As far as your queries are concerned:
    1. With R3ta, you split large tables using a WHERE clause, and WHR files get generated. If you have mentioned CDCLS%2 in the input file for table splitting, then it generates 2-3 WHR files, CDCLS-1, CDCLS-2 & CDCLS-3 (depending upon the WHERE conditions).
    2. While using MIGMON (for the sequential / parallel export-import process), you have the choice of Package Order in the properties file.
      E.g : For Import - In the import_monitor_cmd.properties, specify
    Package order: name | size | file with package names
        orderBy=/upgexp/SOURCE/pkg_imp_order.txt
       And in the pkg_imp_txt, I have specified the import package order as
      BSIS-7
      CDCLS-3
      SAPAPPL1_184
      SAPAPPL1_72
      CDCLS-2
      SAPAPPL2_2
      CDCLS-1
    Similarly , you can specify the Export package order as well in the export properties file ...
    I hope this clarifies your doubt
    Warm Regards,
    SANUP.V

  • How to migrate SQL serveer table-valued functions into Oracle 10g

    hi ,
    I'm trying to migrate from SQL Server to Oracle. There are some table-valued functions (functions that return a table). While migrating using SQL Developer, it uses a cursor for every table query. Is this the only solution for migrating functions that return a table? These tables contain more than a lakh (100,000) of records, so if there is any other solution please reply. The sample code is below.
    CREATE FUNCTION FU_AIG_S_GET_LOG()
    RETURNS TABLE
    AS
    RETURN
    (
      SELECT U_AIG.T_AIG_LOG.AIG_LOG_NB_IDLOG,
             U_AIG.T_AIG_LOG.AIG_LOG_NB_EVTSUIVI,
             U_AIG.T_AIG_LOG.AIG_LOG_NB_CODELOG,
             U_AIG.T_AIG_LOG.AIG_LOG_DT_CREATION,
             U_AIG.T_AIG_LOG.AIG_LOG_CL_XMLEVT,
             U_AIG.T_AIG_TYPELOG.AIG_TYPELOG_VC_NOMLOG
      FROM U_AIG.T_AIG_LOG
           INNER JOIN U_AIG.T_AIG_TYPELOG
           ON U_AIG.T_AIG_LOG.AIG_LOG_NB_CODELOG = U_AIG.T_AIG_TYPELOG.AIG_TYPELOG_NB_CODELOG
      WHERE AIG_LOG_NB_LU = 0
    );
    thanks
    sush
    Message was edited by:
    user610355

    I don't think there's a good workaround for that. MySQL's .sql file syntax is a little different from Oracle's, especially the DDL.
    Unless you are willing to blindly run the .sql file and fix the failed statements manually.
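    Since this particular function takes no parameters and simply wraps a SELECT, the simplest Oracle migration is arguably a plain view rather than a cursor-returning function (the view name below is invented; the columns come from the posted code). If callers genuinely need a function that returns rows, a pipelined table function is the usual Oracle equivalent.
    CREATE OR REPLACE VIEW v_aig_s_get_log AS
    SELECT l.aig_log_nb_idlog,
           l.aig_log_nb_evtsuivi,
           l.aig_log_nb_codelog,
           l.aig_log_dt_creation,
           l.aig_log_cl_xmlevt,
           t.aig_typelog_vc_nomlog
    FROM   u_aig.t_aig_log l
           INNER JOIN u_aig.t_aig_typelog t
           ON l.aig_log_nb_codelog = t.aig_typelog_nb_codelog
    WHERE  l.aig_log_nb_lu = 0;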

  • Understanding logminer results -- inserting row into table with CLOB field

    In using log miner I have noticed that inserts into rows that contain a CLOB (I assume this applies to other LOB type fields as well, have only tested with CLOB so far) field are actually recorded as two DML entries.
    --the first entry is the insert operation that inserts all values with an EMPTY_CLOB() for the CLOB field
    --the second entry is the update that sets the actual CLOB value (this is true even if the value of the CLOB field is not being set explicitly)
    This separation makes sense as there may be separate locations that the values are being stored etc.
    However, what I am tripping over is the fact the first entry, the Insert, has a RowId value of 'AAAAAAAAAAAAAAAAAA' which is invalid if I attempt to use it in a flashback query such as:
    SELECT * FROM PERSON AS OF SCN ##### WHERE ROWID = 'AAAAAAAAAAAAAAAAAA'
    The second operation, the update of the CLOB field, has the valid RowId.
    Now, again, this makes sense if the insert of the new row is not really considered "done" until the two steps are done. However, is there some way to group these operations together when analyzing the log contents to know that these two operations are a "matched set"?
    Not a total deal breaker, but would be nice to know what is happening under the hood here so I don't act on any false assumptions.
    Thanks for any input.
    To replicate:
    Create a table with a CLOB field:
    CREATE TABLE DEVUSER.TESTTABLE
           ( ID NUMBER
           , FULLNAME VARCHAR2(50)
           , AGE NUMBER
           , DESCRIPTION CLOB
           );
    Capture the before SCN:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Insert a new row in the test table:
    INSERT INTO TESTTABLE(ID,FULLNAME,AGE) VALUES(1,'Robert BUILDER',35);
    COMMIT;
    Capture the after SCN:
    SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;
    Start the LogMiner session with the bracketing SCN values and options:
    EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN=>2619174, ENDSCN=>2619191, -
               OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + -
               DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.NO_ROWID_IN_STMT + DBMS_LOGMNR.NO_SQL_DELIMITER)
    Query the logs for the changes in that range:
    SELECT commit_scn, xid, operation, table_name, row_id,
           sql_redo, sql_undo, rs_id, ssn
    FROM   V$LOGMNR_CONTENTS
    ORDER  BY xid ASC, sequence# ASC;
    Results:
    2619178     0C00070028000000     START                  AAAAAAAAAAAAAAAAAA     set transaction read write
    2619178     0C00070028000000     INSERT     TESTTABLE     AAAAAAAAAAAAAAAAAA     insert into "DEVUSER"."TESTTABLE" ...
    2619178     0C00070028000000     UPDATE     TESTTABLE     AAAFEXAABAAALEJAAB     update "DEVUSER"."TESTTABLE" set "DESCRIPTION" = NULL ...
    2619178     0C00070028000000     COMMIT                  AAAAAAAAAAAAAAAAAA     commit
    Edited by: 958701 on Sep 12, 2012 9:05 AM
    Edited by: 958701 on Sep 12, 2012 9:07 AM

    Scott,
    Thanks for the reply.
    I am inserting into the table over a database link.
    I am using the new version of HTML Db (2.0)
    HTML Db is connected to an Oracle 10 database I think, however the table I am trying to insert data into (via the database link) is in an Oracle 8 database - this is why we created a link to it as we couldn't have the HTML Db interacting with the Oracle 8 database directly due to compatibility problems (or so I've been told)
    Simon

  • LARGE TABLE IN ORACLE

    How could I migrate large multi-million-row tables from Oracle 8i to 9i? Please explain the process.

    Vijay,
    this is the wrong forum for this question. Ask yourself these questions though:
    Is this an in-situ upgrade, in which case there will be data migration scripts as part of the upgrade? Otherwise, there may be transportable tablespaces you can use.
    Are the machines disparate? Is the data temporal in nature? Can you subsection the data move, holding back the volatile data until you need to switch over?

  • Large table partition

    Hi,
    I have a large table with multiple columns, one of which is a date column. I would like to partition based on the date column.
    Method 1. What is the advantage of assigning a new tablespace to a partition ?
    Method 2. Why not have one big tablespace with multiple datafiles assigned to it ?
    Which one is faster for querying results?
    Thanks.

    user633980 wrote:
    >
    I have Oracle's ASM running on 3 15K disks, but querying this big table is taking a while to return results. I thought that range partitioning may help. What other things would you suggest to decrease the query time?
    >
    There are 2 basic things one can do at the DBA/developer level to improve I/O:
    - do less I/O (obviously)
    - do "faster" I/O
    The less I/O is where indexing and partitioning and so on come to the fore. Partitioning can break a large invoice table containing a year's worth of invoices into 365 partitions (which is like having a bunch of "mini tables", each with its own local indexes). This can make a significant reduction in I/O when wanting to process specific invoices. Partitioning also makes data management easier (enabling one to treat a partition as a physical entity in terms of data exchange, indexing, export/import, etc).
    How well partitioning will work in your case depends on how well the partitioning is designed (range/list/hash), on which columns, the indexing approach used, how the queries are structured to enable partitioning pruning (only hitting specific partitions instead of all the data in the partitioned table) and so on.
    Doing I/O faster - well, there is not much you can do to make the actual I/O itself faster. That requires relooking at, redesigning or reconfiguring the storage layer, and this is typically a sysadmin function and not a developer one. What you can do from a developer viewpoint, however, is make I/O faster using parallel processing.
    For example, if the query need to read a million rows, that can be made faster by using 10 parallel processes - with each processing a 100,000 rows. This however requires correctly using Oracle's PQ (Parallel Query) feature and utilising the available I/O capacity correctly. For example, PQ is of little use if the I/O pipes are already being hit at close to full capacity.
    The basic point about tablespaces is that this has no impact on any of this - nor should it have as it is a logical data management layer.
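    To make the partition-pruning point concrete, here is a hedged sketch of range partitioning on a date column with a local index; all table, column and partition names are invented.
    CREATE TABLE invoices (
      invoice_id   NUMBER,
      invoice_date DATE,
      amount       NUMBER
    )
    PARTITION BY RANGE (invoice_date) (
      PARTITION p_2010_q1 VALUES LESS THAN (DATE '2010-04-01'),
      PARTITION p_2010_q2 VALUES LESS THAN (DATE '2010-07-01'),
      PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );
    CREATE INDEX invoices_dt_ix ON invoices (invoice_date) LOCAL;
    -- A predicate on the partition key lets the optimizer touch only one partition:
    SELECT SUM(amount)
    FROM   invoices
    WHERE  invoice_date >= DATE '2010-04-01'
    AND    invoice_date <  DATE '2010-07-01';
    Which tablespace(s) the partitions live in does not change the pruning; it mainly affects administration (backups, moving old partitions to cheaper storage, and so on).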

  • Large table

    Hi, I will have a large table with 1 million records inserted every day. The table schema has a couple of CLOBs. We are going to keep 1 year's data, so the table will eventually have about 365 million records. I have the following questions.
    1) Can this amount of data be handled by Oracle?
    2) Is there anything I can do for performance tuning, like partition, etc.?
    3) Any other advice for large table? Any recommended online documents/articles?
    Thanks,
    Zhe

    Philippe Florent wrote:
    >
    You work with RAC and Linux; do you think this OS is mature enough to build an HA solution? I got as many different opinions as people I asked. I don't expect a definitive answer of course, just thoughts...
    >
    Well, I am a bit biased, as I have used Linux since the pre-1.0 kernel days. :-)
    We have 5 RAC clusters, all running x86_64 h/w and Linux as the o/s. Linux today is equivalent to any other *nix system, from HP-UX to Solaris, and takes no back seat in terms of performance, flexibility and scalability. There seems to me to be a lot of FUD around Linux. Amongst the old *nix hands it is seen as a snotty-nosed, immature new kid on the block. This is helped along by marketing propaganda from companies like Microsoft.
    Last year local Oracle asked me to do a Q&A session with one of their customers, as they were looking at RAC and wanted answers from a user/customer perspective and not Oracle sales. Halfway through it we got to talking operating systems, and I was surprised to still find this idea that Linux is a hobbyist/immature type of operating system and not comparable to a "real" o/s like Solaris or HP-UX, for example.
    There's a single fact that I use to try and counter this idea about Linux not being ready for running big commercial systems. The question: "What o/s do the vast majority of the 500 largest and fastest computer systems on this planet use?" The answer: Linux (http://www.top500.org/stats/list/35/osfam).
    455 of the world's fastest 500 computer clusters use Linux. That is 91%, up from around 84% this time last year. Not one of these runs HP-UX. Only 5 run Microsoft Windows, 2 run OpenSolaris, and 19 run AIX.
    So if Linux is the predominant choice of o/s to run these large clusters, why then would it not be "good enough" for a (many times smaller) corporate cluster? In these large clusters everything is magnified: performance, administration, support, flexibility, robustness and stability, costs, scalability, etc. And 91% of the time, Linux was selected to address these requirements and concerns.
    So I like to reverse the question and instead ask: "What are your reasons for not using Linux, and are these valid ones?" :-)
