Export very big database.

Hello,
I have a 9i database on Linux. This database is very big, terabytes in size. I want to move the database to another server.
An export backup is taking too much time. Can you please suggest how I can move my database in a few hours?
Thanks in advance.
Anand.

Tricky. Especially since you don't say if the new server is running Linux, too. And you also don't say (which makes a big difference) if the new server will be running 10g.
But you might be able to do a transportable tablespace migration. That involves exporting only the contents of your existing data dictionary (a matter of a few minutes at most, usually); copying the DBF files to the new server; and then plugging them in by importing your data dictionary export. The major time factor in that lot is the physical act of copying the datafiles between servers. But at least you're not extracting terabytes of data and then trying to re-insert the same terabytes!
If your new server is not running Linux, forget it, basically, because cross-platform tablespaces are only do-able in 10g and with lots of restrictions and caveats (but you might get lucky... you'd have to read tahiti.oracle.com to find out if you could get away with it).
If your new server is running 10g, you're also going to be in for tricky times, though it's not impossible to transport between 9i and 10g. Easiest thing, if possible, is to create your 10g database with COMPATIBLE set to 9.x.x, do the transport and then increase your compatible parameter afterwards.
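For what it's worth, the transportable tablespace steps look roughly like this on 9i (a hedged sketch only: the tablespace name, password, datafile path and dump file name are all placeholders, and it assumes a straightforward 9i-to-9i Linux transport):
    -- On the source database: make the tablespace set read only
    ALTER TABLESPACE users READ ONLY;
    -- Check the set is self-contained (no references to objects outside it)
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('users', TRUE);
    SELECT * FROM transport_set_violations;
    -- Export just the metadata (a matter of minutes, not hours); HOST runs it from the OS shell
    HOST exp userid='sys/your_password as sysdba' TRANSPORT_TABLESPACE=y TABLESPACES=users FILE=tts.dmp
    -- Copy users01.dbf and tts.dmp to the new server, then plug the tablespace in:
    HOST imp userid='sys/your_password as sysdba' TRANSPORT_TABLESPACE=y FILE=tts.dmp DATAFILES='/u02/oradata/users01.dbf'
    -- Finally, on the new server, make it read/write again
    ALTER TABLESPACE users READ WRITE;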

Similar Messages

  • Optimize delete in a very big database table

    Hi,
    To delete entries in a database table I use the instruction:
    Delete from <table> where <zone> = 'X'.
    The delete takes seven hours (the table is very big and <zone> isn't indexed).
    How can I optimize this to reduce the delete time?
    Thanks in advance for your response.
    Regards.

    What is the size of the table and how many lines are you going to delete?
    I would recommend deleting only 5,000 to 10,000 records in one step, looping until nothing is left, roughly like this:
    DATA itab TYPE STANDARD TABLE OF <table>.
    DO.
      " Fetch the next chunk of rows to delete (<table> and <zone> are placeholders)
      SELECT * FROM <table>
               INTO TABLE itab
               UP TO 10000 ROWS
               WHERE <zone> = 'X'.
      IF itab IS INITIAL.
        EXIT.
      ENDIF.
      " Delete the fetched rows and commit to release locks and log space
      DELETE <table> FROM TABLE itab.
      COMMIT WORK.
    ENDDO.
    If this is still too slow, then you should create a secondary index on <zone>.
    You can drop the index after the deletion is finished.
    Siegfried

  • Dynamic JComboBox for a very big database result set

    I want to use a JComboBox or similar for selecting values from a big database result set. I'm using an editable one with the SwingX autocomplete decorator. My strategy is:
    * show only the first X records;
    * let the user enter some text in the combo box and refine the search by reloading the model.
    Does someone have some sample code, or know of components that do this? Or can you point me to some implementation details?
    A lot of thanks in advance.
    PS:
    I need something efficient that doesn't query the database too much.


  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP, but familiar with SQL Server. Can anybody give me advice on the best practice to handle this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and it gets cleared when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB (a recommended size for high-transaction systems).
    Finke Xie wrote:
    > Should I shrink the database?
    "NEVER SHRINK DATA FILES" - shrink only the log file.
    3.) Schedule log backups every 15 minutes.
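    In T-SQL, steps 1 and 2 look roughly like this (a sketch only; the database name PRD, backup path, and logical log file name PRD_log are hypothetical, so substitute your own):
    -- 1.) Back up the transaction log so the inactive portion can be reused
    BACKUP LOG PRD TO DISK = 'E:\backup\PRD_log.trn';
    -- 2.) Shrink the log file down to 10 GB (10240 MB)
    USE PRD;
    DBCC SHRINKFILE('PRD_log', 10240);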
    Thanks
    Mush

  • Very big cell when exporting to Excel

    Dear Tech guys,
    I use VS 2008 with VB and CR 2008.
    Crystal Reports and export to PDF are OK, but when I export the report to Excel, I have the problems below.
    The report is a delivery note with 7 columns and many rows.
    1. On all pages, the page numbers are lost, except on the last page.
    2. After the last row, Excel has a very tall row (height > 300). Because of this, Excel creates a new empty page, and at the bottom of the new page I see the page number (Page 5 of 5).
    Can you help me with this problem?
    I have this problem after the last update (Service Pack 3).
    Visual Studio 2008: 9.030729.1 SP 1
    Crystal Reports 2008: 12.3.0.601
    Thank you in advance.
    Best regards,
    Navarino Technology Dept.

    Dear all, good morning from Greece.
    First of all, I would like to thank you for your quick response.
    Dear Ludek Uher,
    1. Yes, this is from a .NET (3.5) application with VB.
    2. I do the export via code.
    3. From the CR designer i have the same problem.
    Dear David Hilton,
    The photo is not working.
    I found the option "On Each Page" in the CR designer and changed it. Now I get the page number on every page, but I can see that something is wrong with the page height and with the end of the page of the report.
    I will try to show you the problem in the Excel file after setting the option "On Each Page":
    Header........................
                      Field1     field2    field3 ......
    row1 .......
    row2 ......
    row3.....
    row56 ......
    {end of page 1}
    {new page 2}
    row57
    row58
    row59
    Page 1 of 4 (the footer of the first page should be on the first page, but it is shown on the second page)
    row60
    row61
    row62
    {end of page 2}
    {new page 3}
    row110
    row111
    row112
    Page 2 of 4 (the footer of the second page should be on the second page, but it is shown on the third page)
    row140
    row141
    row142
    {end of page 3}
    {new page 4}
    and go on.....
    I hope this helped.
    If I change the margins in Page Break Preview in Excel, the pages are OK. So I think that something conflicts with the end of the page; the report does not recognize the end of the page.
    If there is a way to send you the file or screen shots, please tell me how.
    Thank you in advance again.
    Best regards,
    Navarino Technology Dept.

  • Very simple database required

    Hi
    I'm looking for a very simple database solution. I have some very large .csv files that I need to query against before importing to Excel. FileMaker and the like are over the top for what I need. Any ideas?
    Thanks.
    PowerPC G5   Mac OS X (10.4.6)  

    Welcome to Apple Discussions!
    You could use a script that converts CSV to AppleWorks:
    http://www.tandb.com.au/appleworks/import/
    And then export from AppleWorks to Excel.
    I know the PowerMac G5 doesn't come with AppleWorks, but it is a quarter of the price of FileMaker Pro.
    Maybe the authors of the script could help you.

  • Very Big Table (36 Indexes, 1000000 Records)

    Hi
    I have a very big table (76 columns, 1,000,000 records). These 76 columns include 36 foreign key columns; each FK has an index on the table, and only one of these FK columns has a value at any given time while all the others are NULL. All these FK columns are of type NUMBER(20,0).
    I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (insert, update, delete) as well as query (select) operations, all performed daily. I want to improve this table's performance, and I am considering these scenarios:
    1- Replace all 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for the master table's ID value, and TABLE_NAME for the master table's name) and create only one index on these 2 columns.
    2- Partition the table using its YEAR column, keep all FK columns, but drop all indexes on these columns.
    3- Partition the table using its YEAR column, drop all FK columns, create (ID, TABLE_NAME) columns, and create an index on (TABLE_NAME, YEAR).
    Which approach is more efficient?
    Do I have to take "master-detail" relations into account when building Forms on this table?
    Are there any other suggestions?
    I am using an Oracle 8.1.7 database.
    Please Help.

    Hi everybody
    I would like to thank you for your cooperation, and I will try to answer your questions. Please note that I am a developer first and foremost and new to Oracle database administration, so please forgive me if I make any mistakes.
    Q: Have you gathered statistics on the tables in your database?
    A: No, I did not. And if I must do it, must I do it for all database tables or only for this big table?
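    (For reference, on 8i statistics for a single table can be gathered with DBMS_STATS; the owner and table names below are placeholders:)
    -- Gather optimizer statistics for one table and its indexes
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'BIG_TABLE', cascade => TRUE);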
    Q: Actually, tracing the session with 10046 level 8 will give a clear idea of where your query is waiting.
    A: Actually, I do not know what you mean by "10046 level 8".
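    (For reference, "10046 level 8" means extended SQL tracing with wait events; as a sketch, you can switch it on for your own session like this:)
    -- Level 8 = SQL trace plus wait events; the trace file lands in USER_DUMP_DEST
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- ... run the slow query here, then switch tracing off:
    ALTER SESSION SET EVENTS '10046 trace name context off';
    (The resulting trace file can be summarized with tkprof.)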
    Q: What OS and what kind of server (hardware) are you using?
    A: I am using the Windows 2000 Server operating system; my server has 2 Intel XEON 500 MHz CPUs + 2.5 GB RAM + 4 * 36 GB hard disks (on a RAID 5 controller).
    Q: How many concurrent users do you have and how many transactions per hour?
    A: I have 40 concurrent users and an average of 100 transactions per hour, but the peak can go up to 1,000 transactions per hour.
    Q: How fast should your queries be executed?
    A: I want the queries to be executed in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is highly used, there is a very good chance for 2 or more transactions to exist at the same time, one of them performing a query and the other performing a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving a summary of 50,000 records).
    Q: Please show us the explain plan of these queries.
    A: If I understand your question, you are asking me to show you the explain plan of those queries. Well, first, I do not know how, and second, I think it is a big question because I cannot collect all the kinds of queries that have been written against this table (some of them exist in server packages, and others are performed by Forms or Reports).
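    (For reference, an explain plan on 8i can be produced roughly like this; the query below is only a placeholder:)
    -- Create the plan table once, using the script shipped with the database
    @?/rdbms/admin/utlxplan.sql
    EXPLAIN PLAN FOR
      SELECT * FROM big_table WHERE year = 2002;  -- placeholder query
    -- Format and display the plan (this script also ships with the database)
    @?/rdbms/admin/utlxpls.sql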

  • System Tablespace getting very big!

    Hi All,
    I am working on a 10g RAC database on RHEL5. The SYSTEM tablespace has grown very big. It is due to
    C_OBJ#_INTCOL# and
    I_H_OBJ#_COL#.
    The database has 4 tables which are quite big and have lots of partitions. It is the object stats and histogram stats on the partitions, collected by the auto stats gather job, that are taking the space. I have disabled that job for the time being. Is there any way to modify the job so that it still runs but does not collect histogram stats?
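    (One possible approach on 10g, sketched here: change the DBMS_STATS default METHOD_OPT so the auto job stops building histograms.)
    -- SIZE 1 = one bucket per column, i.e. no histograms
    EXEC DBMS_STATS.SET_PARAM('METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');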

    Can I enable the AUTO_STATS_GATHER job again after applying the patch for bug 6390838?

  • What are the downsides of a very big SGA?

    Hi
    I have a heavily loaded 9.2.0.8 64-bit database with very high data loading and reporting activity, a 24 GB SGA, and 80 dual-core CPUs on Solaris 9. I would like to figure out what the downsides of a very big SGA are, and what to expect if I increase the SGA.
    Thanks

    user5797895 wrote:
    > My question was not about a particular performance issue. I am thinking about different possibilities to improve performance as much as possible. I have Statspack configured and we do tuning according to the stats.
    > Are you able to analyze a few spreports and give me your comments?
    Usually you have one or more particular issues that are worth looking into, e.g. particular standard reports that take longer than expected, batch runs that don't complete within the given timeframe, end users complaining about ad-hoc reports taking too long, transactions that start to time out during peak workloads, etc.
    If you're just interested in someone checking the current state/health of your system, it's probably worth a try to post the most important parts of some Statspack reports that you consider most representative of your system load. There are definitely some contributors participating here who should be able to provide valuable insights.
    I think it's always good to remember that the most significant performance improvement of a system is achieved by letting it do less work, i.e. avoiding unnecessary work. So if you're able to significantly reduce the logical I/O performed by the top n SQLs that put the most workload on your system, you usually gain much more than by just adding more and more memory/CPU. Of course this is not the case if this kind of tuning has already been performed thoroughly, but there is usually much room for improvement in this area.
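    (For reference, a sketch of generating a Statspack report; the scripts ship with the database and are run as the PERFSTAT user:)
    -- Take a snapshot before and after the workload of interest
    EXEC statspack.snap;
    -- ... let the workload run, then snap again ...
    EXEC statspack.snap;
    -- Generate the report; it prompts for the begin/end snapshot IDs
    @?/rdbms/admin/spreport.sql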
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Export/Import of Database to increase the number of datafiles

    My BW system is close to 400 GB and it only has 3 datafiles. Since we have 8 CPU cores, I need 8 datafiles. What is the best way to export/import the database in order to achieve 8 datafiles?

    With a BW system that size you can probably get away with it. You most likely do at least a few full loads, so all that data will be evenly distributed when you drop and reload. If you can clean up some of your PSAs and log tables, you can probably shrink the 3 files down a little anyway. If you do a little maintenance like that every few weeks, after locking auto-growth, you can probably shave 2-3 GB off each file each week. Do that for a few months and your large files will lose 20 GB while the other ones start to grow. Rebuilding indexes also helps with that. You will be surprised how fast they level out that way.
    With regard to performance you should be fine. I certainly wouldn't do it at 10 am on a Tuesday though :-). You can probably get away with it over a weekend. It will take basically no time at all to create the files and very little I/O. If you choose to clean out the 3 existing files, that will take some time. I have found it takes about 1-3 hours to shrink a 150 GB datafile down to 100 GB; that was with 8 CPUs, 30 GB of RAM, and a SAN that I don't fully understand.
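    (Assuming this BW system runs on SQL Server, as the datafiles-per-CPU-core rule suggests, each extra datafile is added with one statement; the database name, file name, path and size below are hypothetical:)
    -- Add one of the five new datafiles; repeat with new names/paths
    ALTER DATABASE BWP
    ADD FILE (NAME = 'BWPDATA4', FILENAME = 'E:\BWP\BWPDATA4.ndf', SIZE = 51200MB);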

  • Wordpress big database

    Hi,
    I'm using WordPress and MySQL to build my magazine website (it is about chung khoan, Vietnamese for securities); now the data is very big and I see it is slower than when I first ran it.
    How can I move my database (MySQL) to something else like NoSQL? Or should I upgrade my MySQL to a higher version?
    Please advise me.
    Thank you very much.

    These forums are for Oracle databases - please ask in the MySQL or NoSQL forums.

  • What is the easiest way to create and manage very big forms?

    I need to create a form that will contain a few hundred questions. Could you please give me some advice on the easiest way to do that? For example, is it easier to create everything in Word (since it is easier to manage) and then create a form based on that?
    My concern is that when I have a very big form, containing different kinds of questions and many scripts, managing it during work will be slow and difficult. For example, adding a question in the middle of the form would require moving half of the questions down, which could smash the layout, etc.
    What is the best practice for that?
    Thanks in advance

    Try using tables and rows for this kind of form. These forms will have the same look throughout, with a question and an answer section.
    If you want to add a new section in the future, you can simply add rows in between.
    Thanks
    Srini

  • Improve the performance in stored procedure using sql server 2008 - esp where clause in very big table - Urgent

    Hi,
    I am looking for input on tuning a stored procedure in SQL Server 2008. I am new to performance tuning in SQL, PL/SQL, and Oracle. I am currently facing an issue in a stored procedure and need to increase performance by code optimization/filtering the records using a where clause on a larger table. The stored procedure generates an audit report which is accessed by approx. 10 admin users, typically 2-3 times a day each.
    It has a CTE (common table expression) which is referred to 2 times within the SP. This CTE is very big and fetches records from several tables without a where clause. This causes many records to be fetched from the DB and then processed. This stored procedure runs on a pre-prod server which has 6 GB of memory and is built on a virtual server; the same proc ran well (40 sec) on the prod server, which has 64 GB of RAM on a physical server. The execution time in pre-prod is 1 min 9 seconds, which needs to be reduced to about 10 seconds. The execution time also varies from run to run: sometimes it is 50 sec and sometimes 1 min 9 seconds.
    Please advise on the best option/practice for using a where clause to filter the records, and on the tool to be used to tune the procedure, such as the execution plan or SQL Profiler. I am using Toad for SQL Server 5.7. I see an execution plan tab available while running the SP, but when I run it, it throws an error. Please help and provide input.
    Thanks,
    Viji

    You've asked a SQL Server question in an Oracle forum.  I'm expecting that this will get locked momentarily when a moderator drops by.
    Microsoft has its own forums for SQL Server; you'll have more luck over there. When you do go there, however, you'll almost certainly get more help if you can pare down the problem (or at least better explain what your code is doing). Very few people want to read hundreds of lines of code, guess what it's supposed to do, guess what is slow, and then guess at how to improve things. Posting query plans, the results of profiling, cutting out any code that is unnecessary to the performance problem, etc. will get you much better answers.
    Justin
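    (As a hedged starting point for the profiling Justin mentions, SQL Server can report per-statement I/O and timing for a session; the procedure name below is hypothetical:)
    -- Report I/O and CPU/elapsed time for each statement that follows
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    EXEC dbo.AuditReportProc;  -- hypothetical procedure name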

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people have told me that the job of a DBA changes completely when you have a VERY LARGE DATABASE.
    I would guess that some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Is this beta? The image for Yosemite is a very big rock, appropriate as this OS sends your mac back to the stone-age. iTunes crashes on open, iPhoto won't download, movies won't play, Safari won't show graphics, wifi down to speed of crippled snail.

    Is this beta? The image for Yosemite is a very big rock, appropriate as this OS sends your Mac back to the stone age. iTunes crashes on open, iPhoto won't update, Safari shows text without graphics, I can't get Java 8 because Safari "can't find server", and Wi-Fi is slower than a crippled snail. Does anyone know how to uninstall this rubbish?

    travellingbirder wrote:
    Is this beta?
    You tell us: Finding the OS X version and build information on your Mac.
