Migrating a new partition table with transportable tablespace

I created a partitioned table with two partitions (2010 and 2011) and used transportable tablespace to migrate the data to a new environment. My question: if I decide to add a partition (2012) in the future, can I simply move that new partition along with its associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?

user564785 wrote:
I created a partitioned table with two partitions (2010 and 2011) and used transportable tablespace to migrate the data to a new environment. My question: if I decide to add a partition (2012) in the future, can I simply move that new partition along with its associated datafile via transportable tablespace, or would I have to move all the partitions (2010, 2011, 2012)?
Yes, why not:
1) On the source, create a table from the 2012 partition with CTAS in a new tablespace
2) Transport that tablespace
3) Add a partition to the existing partitioned table and exchange it with the transported table
Oracle has also documented this procedure:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/tspaces013.htm#i1007549
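
A minimal sketch of those steps, assuming a range-partitioned table SALES, a new partition P2012, and a dedicated tablespace TBS_2012 (all names are illustrative):

-- On the source: isolate the 2012 data in its own tablespace
CREATE TABLE sales_2012 TABLESPACE tbs_2012 AS
  SELECT * FROM sales PARTITION (p2012);

-- Verify the tablespace is self-contained, then make it read-only
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('TBS_2012', TRUE);
SELECT * FROM transport_set_violations;
ALTER TABLESPACE tbs_2012 READ ONLY;

-- Export the metadata from the OS shell:
--   expdp system/... TRANSPORT_TABLESPACES=tbs_2012 DUMPFILE=tbs_2012.dmp

-- After copying the datafile and dump file to the target and importing:
ALTER TABLE sales ADD PARTITION p2012
  VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD'));
ALTER TABLE sales EXCHANGE PARTITION p2012 WITH TABLE sales_2012;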

Similar Messages

  • How to import partitioned tables into a different tablespace

    Hi everyone,
    I am trying to import partitioned tables into a different tablespace.
    Consider the following situation:
    I have a dump file which was created using the "Export" utility. Some of the data is in partitioned tables, some in non-partitioned tables. All tables are located in the "MYTBS" tablespace. I am trying to import all the data from this dump file into another database. I got no errors when importing the data from the non-partitioned tables. However, I got an error when importing the data from the partitioned tables. The error message is: tablespace 'MYTBS' does not exist.
    I just want to know how I can solve this problem other than by creating the 'MYTBS' tablespace in my new database.
    Thanks in advance.
    Angel

    Hi,
    I got the following error message:
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE TABLE "FACILITYCONNECTION",....., "CONNECTIONTYPE" "
    "NUMBER(1, 0) NOT NULL ENABLE) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 25"
    "5 PARALLEL ( DEGREE DEFAULT INSTANCES 1) NOLOGGING STORAGE( PCTINCREASE 0) "
    "TABLESPACE "MYTBS" PARTITION BY RANGE ("CONNECTIONTYPE" ) (PARTITION "
    ""EXT" VALUES LESS THAN (1) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 S"
    "TORAGE(INITIAL 65536) TABLESPACE "MYTBS" NOLOGGING, PARTITION "FAC" VA"
    "LUES LESS THAN (2) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(I"
    "NITIAL 65536) TABLESPACE "MYTBS" NOLOGGING )"
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'MYTBS' does not exist
    Thanks.
    Angel
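
    With the original exp/imp utilities there is no remap option, but a common workaround (sketched below with illustrative names) is to extract the DDL, repoint the tablespace clauses, pre-create the objects, and re-import with IGNORE=Y:
    # Extract the CREATE TABLE/INDEX DDL without importing any rows
    imp angel/pwd FILE=mydump.dmp FULL=Y INDEXFILE=ddl.sql
    # In ddl.sql, uncomment the CREATE TABLE statements and change every
    # TABLESPACE "MYTBS" clause to a tablespace that exists in the target,
    # run the edited script, then re-import; IGNORE=Y skips the
    # "table already exists" errors so the rows still get loaded:
    imp angel/pwd FILE=mydump.dmp FULL=Y IGNORE=Y
    (If re-exporting with Data Pump is an option, impdp's REMAP_TABLESPACE=MYTBS:NEWTBS does this in one step.)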

  • Performance between two partitioned tables with different structures

    Hi,
    I would like to know if there is a difference between two partitioned tables with different structures in terms of performance (access, queries, inserts, updates).
    Let me explain in detail:
    I have a table that stores one value every 10 minutes throughout the day (so we have 144 values (24*6) for the whole day), with the corresponding id.
    Here is the structure:
    | Table T1 |
    + id PK |
    + date PK |
    + sample1 |
    + sample2 |
    + ... |
    + sample144 |
    The table is partitioned on the date column, with one partition per month. The primary key is on the columns (id, date).
    There is an additional index on the column (id) (is it useful?).
    I would like to know whether it would be better to have a table with just (id, date, value), so that one row in the current table becomes 144 rows in the future table. The partitioning would stay as it is, with the associated index on (id, date).
    What are the gains or losses in performance with this new structure (access, DML, storage)?
    I discussed it with the Java developers and they say it is simpler to manage in their code.
    Oracle version : Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    Thanks & Regards
    From France
    Oliver
    Edited by: 998239 on Apr 5, 2013 01:59

    I mean storage in tablespaces and datafiles on disk.
    Can you please justify, with concrete arguments, why the two structures are equivalent (except for inserting data into T(id, date, value))? I have to write a report.
    I didn't say anything like
    "the two structures are equivalent (except inserting data in T(id, date, value))"
    I said:
    Regarding structure, TABLE1(id, date, value) is better than TABLE1(id, date, sample1, ..., sample144) because:
    1) Oracle has a restriction on the number of columns per table. 144 columns are fine for now, but what will you do if you ever need more than 1000?
    2) There are restrictions on table compression (it is not supported for tables with more than 255 columns).
    3) Storing values of the same type in different columns is bad practice.
    http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#i4383
    I remember seeing a Tom Kyte article about this, but I can't find it now; if I do, I'll post it here.
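
    A minimal sketch of the narrow design being argued for, using the 11g interval-partitioning feature so that new monthly partitions appear automatically (all names and bounds are illustrative):
    CREATE TABLE t1_narrow
    (   id           NUMBER NOT NULL,
        sample_date  DATE   NOT NULL,   -- carries the full 10-minute timestamp
        value        NUMBER,
        CONSTRAINT t1_narrow_pk PRIMARY KEY (id, sample_date)
    )
    PARTITION BY RANGE (sample_date)
    INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
    (   PARTITION p_first VALUES LESS THAN (TO_DATE('2013-01-01','YYYY-MM-DD'))
    );
    Because each sample is its own row, sample_date carries the timestamp and (id, sample_date) stays unique without a 145-column row.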

  • Oracle 11.2 - Perform parallel DML on a non-partitioned table with LOB column

    Hi,
    Since I wanted to demonstrate the new Oracle 12c enhancements to SecureFiles, I tried to use PDML statements on a non-partitioned table with a LOB column, in both Oracle 11g and Oracle 12c. The Oracle 11.2 SecureFiles and Large Objects Developer's Guide of January 2013 clearly says:
    Parallel execution of the following DML operations on tables with LOB columns is supported. These operations run in parallel execution mode only when performed on a partitioned table. DML statements on non-partitioned tables with LOB columns continue to execute in serial execution mode.
    INSERT AS SELECT
    CREATE TABLE AS SELECT
    DELETE
    UPDATE
    MERGE (conditional UPDATE and INSERT)
    Multi-table INSERT
    So I created and populated a simple table with a BLOB column:
    SQL> CREATE TABLE T1 (A BLOB);
    Table created.
    Then, I tried to see the execution plan of a parallel DELETE:
    SQL> EXPLAIN PLAN FOR
      2  delete /*+parallel (t1,8) */ from t1;
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 3718066193
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |  2048 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |  2048 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |  2048 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    PLAN_TABLE_OUTPUT
    Note
       - dynamic sampling used for this statement (level=2)
    And I finished by executing the statement.
    SQL> commit;
    Commit complete.
    SQL> alter session enable parallel dml;
    Session altered.
    SQL> delete /*+parallel (t1,8) */ from t1;
    2048 rows deleted.
    As we can see, the statement has been run in parallel:
    SQL> select * from v$pq_sesstat;
    STATISTIC                      LAST_QUERY SESSION_TOTAL
    Queries Parallelized                    1             1
    DML Parallelized                        0             0
    DDL Parallelized                        0             0
    DFO Trees                               1             1
    Server Threads                          5             0
    Allocation Height                       5             0
    Allocation Width                        1             0
    Local Msgs Sent                        55            55
    Distr Msgs Sent                         0             0
    Local Msgs Recv'd                      55            55
    Distr Msgs Recv'd                       0             0
    11 rows selected.
    Is this normal? It is not supposed to be supported on Oracle 11g with a non-partitioned table containing a LOB column....
    Thank you for your help.
    Michael

    Yes, I did. I tried with force parallel DML, and these are the results on my 12c DB, with the non-partitioned table and SecureFiles LOB column.
    SQL> explain plan for delete from t1;
    Explained.
    | Id  | Operation             | Name     | Rows  | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | DELETE STATEMENT      |          |     4 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  DELETE               | T1       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR      |          |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)| :TQ10000 |     4 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR |          |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL| T1       |     4 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    The DELETE is not performed in parallel.
    I tried another statement:
    SQL> explain plan for
    2        insert into t1 select * from t1;
    Here are the results:
    11g
    | Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT         |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  LOAD TABLE CONVENTIONAL | T1       |       |       |            |          |        |      |            |
    |   2 |   PX COORDINATOR         |          |       |       |            |          |        |      |            |
    |   3 |    PX SEND QC (RANDOM)   | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   4 |     PX BLOCK ITERATOR    |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    |   5 |      TABLE ACCESS FULL   | T1       |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    12c
    | Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | INSERT STATEMENT                   |          |     4 |  8008 |     2   (0)| 00:00:01 |        |      |            |
    |   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)              | :TQ10000 |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
    |   3 |    LOAD AS SELECT                  | T1       |       |       |            |          |  Q1,00 | PCWP |            |
    |   4 |     OPTIMIZER STATISTICS GATHERING |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWP |            |
    |   5 |      PX BLOCK ITERATOR             |          |     4 |  8008 |     2   (0)| 00:00:01 |  Q1,00 | PCWC |            |
    It seems that the DELETE statement has problems, but not the INSERT AS SELECT!
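
    For what it's worth, the 'DML Parallelized' counter in v$pq_sesstat only increments when the DML step itself ran in parallel; in the 11g output posted above it stayed at 0 (only 'Queries Parallelized' moved), so the DELETE was query-parallel at most, exactly as the documentation predicts. A minimal check (table name taken from the thread):
    ALTER SESSION ENABLE PARALLEL DML;
    DELETE /*+ PARALLEL(t1, 8) */ FROM t1;
    SELECT statistic, last_query
    FROM   v$pq_sesstat
    WHERE  statistic = 'DML Parallelized';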

  • Migration to new G/L with migration cockpit

    Dear all,
    we need to describe to our customer the scenario for migrating to new G/L with the migration cockpit; new G/L will be implemented this year and the customer will use the old SAP system until May 2015; then we will need to migrate data from the old accounting to new G/L. I don't know the 'migration cockpit' tool. Can some of you describe the steps involved in a migration with the migration cockpit?
    thanks.
    Elena

    Hi Elena,
    This is a very complex area. You need to check the following notes:
    1070629 - FAQs: Migration to General Ledger Accounting (new)
    1014364 - New G/L migration: Information, prerequisites, performance
    You may also have to contact SAP Migration Services
    [email protected]
    Regards,
    Ravi

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. The last analyse was on 1/25/2007. Do I need to analyse it? (Queries against this table run very slowly.)
    If I do need to analyse it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin
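
    A minimal sketch of the approach Justin describes - back the statistics up first, then gather only the global (table-level) statistics; the owner, table, and stat-table names are illustrative:
    BEGIN
      -- Back up current statistics so a regressed plan can be rolled back
      DBMS_STATS.CREATE_STAT_TABLE('MYUSER', 'STATS_BACKUP');
      DBMS_STATS.EXPORT_TABLE_STATS('MYUSER', 'BIG_TABLE', stattab => 'STATS_BACKUP');
      -- Gather only global statistics with a modest sample size
      DBMS_STATS.GATHER_TABLE_STATS('MYUSER', 'BIG_TABLE',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 10,
                                    cascade          => FALSE);
    END;
    /
    If a plan does go south, DBMS_STATS.IMPORT_TABLE_STATS restores the saved statistics from STATS_BACKUP.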

  • ORA-00604 / ORA-00904 when querying a partitioned table with partitioned indexes

    I got ORA-00604 and ORA-00904 when querying a partitioned table with partitioned indexes in the data warehouse environment.
    The query runs fine against the partitioned table without the partitioned indexes.
    Here is the query.
    SELECT al2.vdc_name, al7.model_series_name, COUNT (DISTINCT (al1.vin)),
           al27.accessory_code
    FROM vlc.veh_vdc_accessorization_fact al1,
         vlc.vdc_dim al2,
         vlc.model_attribute_dim al7,
         vlc.ppo_list_dim al18,
         vlc.ppo_list_indiv_type_dim al23,
         vlc.accy_type_dim al27
    WHERE (    al2.vdc_id = al1.vdc_location_id
           AND al7.model_attribute_id = al1.model_attribute_id
           AND al18.mydppolist_id = al1.ppo_list_id
           AND al23.mydppolist_id = al18.mydppolist_id
           AND al23.mydaccytyp_id = al27.mydaccytyp_id
           AND (    al7.model_series_name IN ('SCION TC', 'SCION XA', 'SCION XB')
                AND al2.vdc_name IN
                    ('PORT OF BALTIMORE',
                     'PORT OF JACKSONVILLE - LEXUS',
                     'PORT OF LONG BEACH',
                     'PORT OF NEWARK',
                     'PORT OF PORTLAND')
                AND al27.accessory_code IN ('42', '43', '44', '45')))
    GROUP BY al2.vdc_name, al7.model_series_name, al27.accessory_code

    I would recommend that you post this at the following OTN forum:
    Database - General
    General Database Discussions
    and perhaps at:
    Oracle Warehouse Builder
    Warehouse Builder
    The Oracle OLAP forum typically does not cover general data warehousing topics.

  • Large partitioned tables with WM

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch into a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent ids, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table, as I would with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    Hello Ben
    Thank you for your reply.
    The table structure we have is like so:
    CREATE TABLE hdr
    (   pk_id               NUMBER PRIMARY KEY,
        customer_id         NUMBER REFERENCES customer,
        entry_type          NUMBER NOT NULL
    );
    CREATE TABLE dtl_daily
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_200709
            VALUES LESS THAN (TO_DATE('200710','YYYYMM'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_200710
            VALUES LESS THAN (TO_DATE('200711','YYYYMM'))
            TABLESPACE x COMPRESS
    );
    CREATE TABLE dtl_hourly
    (   pk_id               NUMBER PRIMARY KEY,
        hdr_id              NUMBER REFERENCES hdr,
        active_date         DATE NOT NULL,
        active_hour         NUMBER NOT NULL,
        col1                NUMBER,
        col2                NUMBER
    )
    PARTITION BY RANGE(active_date)
    (   PARTITION ptn_20070901
            VALUES LESS THAN (TO_DATE('20070902','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070902
            VALUES LESS THAN (TO_DATE('20070903','YYYYMMDD'))
            TABLESPACE x COMPRESS,
        PARTITION ptn_20070903
            VALUES LESS THAN (TO_DATE('20070904','YYYYMMDD'))
            TABLESPACE x COMPRESS
        -- ...one partition for every day for 20 years
    );
    The hdr table holds one or more rows for each customer and has its own synthetic key generated for every entry, as there can be multiple rows with the same entry_type for a customer. There are two detail tables, daily and hourly, which hold detail data at those two granularities. Some customers require hourly detail, in which case the hourly table is populated and the daily table is populated by aggregating the hourly data. Other customers require only daily data, in which case the hourly table is not populated.
    At the moment, changes to customer data require that the content of these tables be rebuilt for that customer. This rebuild is done every night for the changed customers, and I want to change this to a near-real-time rebuild. The rebuild involves deleting all existing entries from the three tables for the customer and then re-inserting the new set using new synthetic keys. If we do make this near real time, we need to be able to provide a snapshot of the data as of close of business every day, and we need to be able to report as of a point in time up to 10 days in the past.
    For any one customer, there may be rows in the hourly table that go out 20 years at an hourly granularity, but once the active date has passed (by 10 days), we no longer need to keep them. This is why we were considering partitioning, as it gives us a simple way of dropping off old data and, as a nice side effect, helps improve the performance of queries that look for active data between a range of dates (which is most of them).
    I did have a look at the idea of savepoints but I wasn't sure it would be efficient. So in this case, would the idea be that we don't partition the table, but instead at close of business every day we create a savepoint like "savepoint_20070921", and instead of using dbms_wm.GotoDate we would use dbms_wm.GotoSavePoint? Then every day we would do:
    DBMS_WM.DeleteSavepoint(
       workspace                   => 'LIVE',
       sp_name                     => 'savepoint_20070910', --10 days ago
       compress_view_wo_overwrite  => TRUE
    );
    DBMS_WM.CompressWorkspace(
       workspace                   => 'LIVE',
       compress_view_wo_overwrite  => TRUE,
       firstSP                     => 'savepoint_20070911' --the new oldest savepoint
    );
    Is my understanding correct?
    David
    Message was edited by:
    David Tyler (fixed some formatting)

  • Migrating to new MacBook Air with Lion, Entourage 2008 doesn't work

    I bought a new MacBook Air with Lion installed. I'm trying to migrate data from my old MacBook with Snow Leopard. I got most of the data across, and installed my most frequently used applications:
    - Office 2008: most Office 2008 apps work, if buggily. Entourage does not; see below.
    - Firefox (hangs when downloading updates)
    - Dropbox (ditto)
    <these two problems may have something to do with internet proxy settings; I will try to sort that out later>
    MAIN ISSUE:
    Entourage won't open at all, so for now I'm using my old computer for e-mail and the new one for the rest.
    MY OPTIONS:
    Please advise (1) which you think is the most time- and nerve-saving option and (2) how to do it smoothly:
    A) Try to get Office 2008 to work.
    When trying to open Entourage, it tells me this version (Office 2008 12.0) cannot open this Entourage database (12.3).
    The error for other apps says there is a problem with the database, and rebuilding several times did not fix it.
    I would have to start by updating to the version I used last (Office 2008 12.3). The problem is the MS Office updater does not seem to work on Lion; the first update does not open, saying it does not support PowerPC.
    B) Buy Office 2011 and migrate Entourage 2008 to Outlook.
    I suppose the best way to do this is to install Office 2011 on the old Mac first, then on the new one, then move the database, right?
    C) Migrate from Entourage 2008 to Lion Mail.
    The best way to do this would be to import the Entourage email into Mail on the old Mac, then move those files to the new Mac? And move the Entourage addresses into the Mac address book. My major concern is that I have a lot of mailing rules and a fair bunch of notes. I'd rather not have to recreate those by hand. Is there a way to import rules from Entourage to Mail?
    I also have a huge backstock of mail, over 10,000 messages, and a correspondingly huge address book (within Entourage). I'm ready to abandon Entourage at last (though I used it for a decade), but I want to keep my addresses, my rules, and at least the last 3 years of emails. I get about 130 mails a day, so I need plenty of automatic organizing capability.
    I never had issues with my old Mac/Snow Leopard; I just ran out of RAM and the one USB port broke, so I figured it was time. Kinda sorry I did not just get the USB repaired and extra RAM. What a hassle these migrations always are.
    cheers!
    maedi

    Kappy wrote:
    Yes, repairing first is a sound idea. In my experience I've only moved from Entourage to the OS X applications or from one version of Entourage to another. In the latter I've been able to just copy over the Microsoft User Data folder. But I have not used Entourage for quite some time.
    Microsoft don't make it quite that easy to migrate to their own applications, but the import from a 'tidy' .rge file works well. I had a lot of issues with the first few I migrated, but after performing a database repair prior to import, all proceeded smoothly.

  • Access key needed when creating a new database table with SE11

    Hi,
    I'm using the SAP Test Drive (evaluation) on Linux in order to learn a bit about ABAP programming. I want to create a new database table in the dictionary to be used in my programs. I proceed in the following way:
    1) I run the SE11 transaction
    2) On the first screen I enter the name of the table to be created (in the Database Table field)
    3) I click on the Create button.
    But then the system asks me for an Access Key to register. Where can I get this?
    Thanks in advance,
    Kind Regards,
    Dariyoosh

    OK, I found the answer to my question in another thread:
    Developer Key
    Make sure that your object names start with "Z" or "Y"; otherwise the system will ask you to register the object because it thinks you are creating/changing something in the SAP namespace.
    In fact this was my error: my table name started with neither "Z" nor "Y".
    Kind Regards,
    Dariyoosh
    Edited by: dariyoosh on Nov 13, 2010 12:34 PM

  • Character set problem with transportable tablespace

    Hi,
    I'm trying to import a transportable tablespace with Data Pump into a database with a different character set from the source database. I know this is not possible by default, but there is no violating data in the tablespace that could make it a problem when transferring the TS. So I issued 'ALTER SYSTEM SET "_tts_allow_nchar_mismatch"=true;' on the target to force the import. However, I still get the error:
    ORA-29345: can not plug a tablespace into a database using a different character set
    How can I fix this?

    Hi,
    What're the character sets of the source and target database?
    A general restriction of transportable tablespace is that the source and target databases must use the same database character set.
    Regards
    Nat
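
    As a first step, you can compare the relevant settings on both sides (run this on the source and the target):
    SELECT parameter, value
    FROM   nls_database_parameters
    WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
    Note that, as its name suggests, "_tts_allow_nchar_mismatch" only relaxes the national (NCHAR) character set check, which would explain why it does not get you past ORA-29345 when NLS_CHARACTERSET itself differs.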

  • Mailboxes migration to new partition

    Dear Folks,
    We had to migrate the mailboxes from one partition to another (partition 1 got full). We mounted another disk from the SAN (600 GB), double the size of the existing partition (300 GB). However, after the migration (of the mailboxes from the old partition to the new one), the space used somehow expanded: the old partition held 290 GB, and after migrating the mailboxes to the newly mounted disk, the partition holds 380 GB. Here is the question: how can we compress the mailboxes (or keep the mailbox compression) to keep the size as it was on the old partition (disk)?
    We are running :
    Sun Java(tm) System Messaging Server 6.2-4.03 (built Sep 22 2005)
    libimta.so 6.2-4.03 (built 04:37:42, Sep 22 2005)
    SunOS msgbak1 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-V490
    Many Thanks

    Sp00ky_Geek wrote:
    We had to migrate the mailboxes from one partition to another (partition 1 got full). We mounted another disk from the SAN (600 GB), double the size of the existing partition (300 GB). However, after the migration, the space used somehow expanded from 290 GB on the old partition to 380 GB on the new one.
    How did you move the emails from the old to the new partition (single user mboxutil -r, or copying/rsyncing the files across)?
    Here is the question: how can we compress the mailboxes (or keep the mailbox compression) to keep the size as it was on the old partition (disk)?
    If the expansion has occurred due to single-message copy being expanded by the move, you can use the relinker utility to reset the hard-links between identical messages.
    The theory behind single-message copy and the relinker utility are described here:
    http://docs.sun.com/app/docs/doc/819-2650/6n4u4dttt?a=view#bgaye
    Sun Java(tm) System Messaging Server 6.2-4.03 (built Sep 22 2005)
    libimta.so 6.2-4.03 (built 04:37:42, Sep 22 2005)
    SunOS msgbak1 5.10 Generic_120011-14 sun4u sparc SUNW,Sun-Fire-V490
    I recommend patching to 118207-63 before using the relinker utility, to pick up the fix for bug #6496709 - "relinker should doublecheck".
    Regards,
    Shane.

  • Migrating from SQL Server tables with column names starting with an integer

    hi,
    I'm trying to migrate a database from SQL Server, but there are a lot of tables with column names starting with an integer, e.g. 8420_SubsStatusPolicy.
    I want to do an offline migration, so that when I create the scripts these columns are created with the same name.
    Can we create rules so that when a column like this is migrated, a character is prepended to it?
    When I use the Copy to Oracle option it renames the column by default to A8420_SubsStatusPolicy.
    Edited by: user8999602 on Apr 20, 2012 1:05 PM

    Hi,
    Oracle doesn't allow object names to start with an integer. I'll check what happens to such names during a migration, as I haven't come across this before.
    Regards,
    Mike
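
    For what it's worth, Oracle does accept identifiers that start with a digit if they are created as quoted identifiers, at the cost of quoting them in every subsequent reference (a sketch, not necessarily something the migration tool can generate):
    CREATE TABLE "8420_SubsStatusPolicy"
    (   "8420_Id"  NUMBER,
        status     VARCHAR2(30)
    );
    -- every later reference must use the exact quoted name:
    SELECT "8420_Id" FROM "8420_SubsStatusPolicy";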

  • How to put Snow Leopard onto new partition along with Yosemite

    I made a second partition on my 2010 iMac that had only one Yosemite partition before. I thought it was supposed to wipe out the original partition when creating another one, but much to my surprise Disk Utility merely shrank the Yosemite partition. Result: Yosemite gets 3.9 TB and the new partition (that I want to put Snow Leopard onto) is less than 100 GB. The new partition shows under devices in Finder. When I insert the Snow Leopard installation CD, of course I'm told that I cannot install on this computer, or at least on this partition. When I restart and hold down the Option key, the only choice I have is the original partition with Yosemite, but not the Snow Leopard partition. So...can anyone tell me what to do to get the Snow Leopard OS onto the second/new partition?

    ehstoker wrote:
    Didn't work. When I held down the C key I got the black screen with white letters saying that basically I had crashed the computer.
    That sounds like your disc is unable to install Snow Leopard - what install disc are you using? Is it grey, or a white disc with a snow leopard printed on it? Grey discs are for a specific model and cannot be used on other models. If you can provide a copy of the error message we may be able to help (an image or some of the final output would help). We are left assuming it may be a kernel panic.
    The white 'cat' disc is a retail version that can be installed on compatible Macs.
    ehstoker wrote:
    When I restarted and held down the Option key, I got what I said before which is "the only choice I have is the original partition with Yosemite but not the Snow Leopard partition".
    That sounds correct - you don't appear to have managed to install 10.6 yet so you can't select that as a boot option.

  • Partitioning a table with a sequence generated in Java in base 36

    Hi, sorry for my poor English.
    I would like to partition a big table with more than 5 million rows on a sequence generated in Java in base 36.
    Thanks in advance for your help.
    Regards

    Is this sequence stored in a column in the database?
    How do you want to partition the table? RANGE? HASH? Or is there some sort of list partitioning scheme you're thinking of?
    If the table already exists, you know that you're going to have to re-create the table as a partitioned table, re-create appropriate constraints, indexes, triggers, etc. and move all the data over, right? Online table redefinition can hide some of this complexity from you (though it may well introduce other sorts of complexity).
    Justin
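
    If the base-36 value is stored in a VARCHAR2 column and has no meaningful order to range on, HASH partitioning is the usual way to spread rows evenly (a minimal sketch; the table, column, and partition count are illustrative):
    CREATE TABLE big_table_part
    (   seq_b36  VARCHAR2(13) NOT NULL,   -- base-36 key generated in Java
        payload  VARCHAR2(4000)
    )
    PARTITION BY HASH (seq_b36)
    PARTITIONS 16;
    As Justin notes, moving an existing 5-million-row table into this structure means re-creating it as partitioned (or using online redefinition) and migrating the data over.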
