Impdp := Estimate schema size

I have a compressed dump of one schema, about 4 GB. Now I want to import it. Can anyone please tell me how much space I should reserve for importing the dump file?
Thanks in advance.

You should be able to get it from the dump file itself. This is what you need to do if you are importing the complete dumpfile(s):
impdp user/password directory=your_dir dumpfile=your_dump.dmp master_only=y keep_master=y job_name=est_size
sqlplus user/password
select sum(dump_orig_length) from est_size where process_order > 0 and duplicate = 0 and object_type = 'TABLE_DATA';
If you only want part of the dumpfile, add your filters to the impdp command (schemas=foo, tables=foo.tab1, or whatever), then:
select sum(dump_orig_length) from est_size where process_order > 0 and duplicate = 0 and object_type = 'TABLE_DATA' and processing_state != 'X';
This will give you the uncompressed size of the data that was exported.
When you are done:
SQL> drop table user.est_size;
Hope this helps.
Dean

Similar Messages

  • How to find the Schema size

    Hi,
How to find the size of a schema in a database?
    Thanks,
    Mahi

    Mahi,
One more option, though not so clean, would be to use Data Pump and its estimate-file-size option for the schema. The estimate would tell you the size of the schema.
    HTH
    Aman....

  • Estimate index size on a table column before creating it

    Hi
Is it possible to estimate the size of an index before actually creating it on a table column?
I tried the query below, but it only gives the size of the index after it has been created.
    SELECT (SUM(bytes)/1048576)/1024 Gigs, segment_name
    FROM user_extents
    WHERE segment_name = 'IDX_NAME'
GROUP BY segment_name;
Can anyone throw some light on which data dictionary view will give this information?

You can get an approximation by estimating the number of rows to be indexed, the average column lengths of the columns in the index, and the overhead for an index entry - once you have some reasonable stats on the table.
    I wrote a piece of code to demonstrate the method a few years ago - it's got some errors, but I've highlighted them in an update to the note: http://www.jlcomp.demon.co.uk/index_efficiency_2.html
Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    "Science is more than a body of knowledge; it is a way of thinking"
Carl Sagan
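    The approximation described above can be sketched as a back-of-envelope calculation. This is not Oracle's exact algorithm, just an illustrative sketch: the byte counts for the rowid, per-entry overhead, and block header are assumptions, and only leaf blocks are counted.

    ```python
    # Hedged back-of-envelope index size estimate (not Oracle's algorithm).
    # Assumptions: 8K blocks, PCTFREE 10, ~192 bytes of block header,
    # ~10 bytes of per-entry overhead (rowid + flags + length bytes).

    def estimate_index_size_bytes(num_rows, avg_key_len,
                                  block_size=8192, pctfree=10,
                                  entry_overhead=10):
        entry_len = avg_key_len + entry_overhead            # bytes per leaf entry
        usable = block_size * (100 - pctfree) // 100 - 192  # minus block header
        entries_per_block = usable // entry_len
        leaf_blocks = -(-num_rows // entries_per_block)     # ceiling division
        return leaf_blocks * block_size

    # e.g. 1,000,000 rows with an average 20-byte key:
    size = estimate_index_size_bytes(1_000_000, 20)
    print(f"{size / 1024 / 1024:.1f} MB")
    ```

    As Jonathan notes, you need reasonable stats on the table first; the average key length would come from the column statistics.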

  • Estimate Database Size

We are planning to use an Oracle database (Oracle 10g on Unix). We are in the process of estimating the hard disk capacity (in GB) required, based on past data for each business process, but we are faced with the following queries:
    1. Assuming the structure for a table is as follows :-
    create table temp1(
    Name varchar2(4),
    Age Number(2),
    Salary Number(8,2),
DOB Date);
The estimated number of records per year is assumed to be 500. How can we estimate the size that will be required for this table?
    2. We are planning to allocate 20% space for indexes on each table. Is it ok?
    3. Audit Logs (to keep track of changes made through update) :- Should it be kept in different partition/hard disk?
    Is there anything else to consider. Is there a better way to estimate the hard disk capacity required in more accurate manner?
Our current Informix database takes around 100 GB per year, but there is a lot of redundant data and, due to business process changes, we cannot take that into consideration.
    Kindly guide. Thanks in advance.

Well, you can estimate the size of a table by estimating the average row size, multiplying that by the expected number of rows, and then adding in overhead.
    3 for the row header + 4 for name + 2 for age + 5 for Salary + 7 for a date + 1 for each column null/length indicator is about 25 bytes per row (single byte character set) * 500 rows * 1.20 (20% overhead) = small.
    The overhead is the fixed block header, the ITL, the row table (2 bytes per row) and the pctfree. You can estimate this but I just said 20% which is probably a little high.
The total space needed by indexes often equals or surpasses the space needed by tables. You need to know the design, and be pretty sure additional indexes will not be necessary for performance reasons, before you allocate such a small percentage of the table space to index space.
    Where you keep audit tables or extracts is totally dependent on your disk setup.
    HTH -- Mark D Powell --
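    The per-column arithmetic above can be written out as a short sketch. The byte counts follow the post directly; the 20% overhead factor is the rough figure Mark suggests, not a precise calculation.

    ```python
    # Row-size arithmetic for table temp1
    # (Name varchar2(4), Age number(2), Salary number(8,2), DOB date),
    # following the figures in the post: 3-byte row header, column data
    # lengths, 1 null/length byte per column, plus ~20% block overhead.

    row_header = 3
    name_bytes = 4         # varchar2(4), single-byte character set
    age_bytes = 2          # number(2)  -> round(2/2) + 1
    salary_bytes = 5       # number(8,2) -> round(8/2) + 1
    dob_bytes = 7          # a date is always 7 bytes
    length_indicators = 4  # 1 byte per column

    row_len = (row_header + name_bytes + age_bytes +
               salary_bytes + dob_bytes + length_indicators)
    print(row_len)                       # 25 bytes per row

    rows_per_year = 500
    estimate = row_len * rows_per_year * 1.20   # +20% overhead
    print(f"{estimate:.0f} bytes/year")  # ~15 KB: small, as Mark says
    ```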

  • Estimate db size and change tablespace configuration before restoring RMAN

    Hi,
    I am wondering whether it is possible to estimate the size of the restored database just from the RMAN files I have?
    Also, Is there a setting I can change in these files to disable 'autoextend' feature in the tablespace of the restored database?
    thank you in advance.

    Hi
    But using this method I will damage the existing control file of the original server.
    SQL 'alter database mount';
    sql statement: alter database mount
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03009: failure of sql command on default channel at 09/16/2009 17:18:59
    RMAN-11003: failure during parse/execution of SQL statement: alter database mount
    ORA-01103: database name 'OLD_DB' in control file is not 'JUPITER'
    And when I start JUPITER:
SQL> startup
ORACLE instance started.
    Total System Global Area 192937984 bytes
    Fixed Size 2169752 bytes
    Variable Size 134079592 bytes
    Database Buffers 54525952 bytes
    Redo Buffers 2162688 bytes
    ORA-00205: error in identifying control file, check alert log for more info
    Is there a way to restore the control file without damaging the existing server?
    thank you for your support.

  • Estimate Redo Size

    Hii All
Is there any math to estimate how much redo a SQL statement will generate? I know the redo size statistic shows it, but I am asking in order to understand the basic concept.
For example, I am updating the LOC column on the DEPT table in the SCOTT schema, which has no index on that column, and this DML generates about 200-300 bytes of redo.
    Best Regards..

    >
    What purpose of the giving my user statistics ?
    >
    The purpose is to remind you that you appear to be violating forum etiquette by not marking your questions answered.
    You have 66 previous questions that you have not marked as ANSWERED and it is statistically unlikely that none of them have actually been answered.
    The likely reason is that you are not following forum etiquette and marking questions answered. See the FAQ (link in upper right corner of this page) for forum rules.
    >
    What is proper discussion forum etiquette?
    When asking a question, provide all the details that someone would need to answer it including your database version, e.g. Oracle 10.2.0.4.
    Format your code using code tags (see "How do I format code in my post?" below). Consulting documentation first is highly recommended. Furthermore, always be courteous; there are different levels of experience represented. A poorly worded question is better ignored than flamed - or better yet, help the poster ask a better question.
    Finally, it is good form to reward answerers with points (see "What are 'reward points'?" below) and also to mark the question answered when it has been.
    >
    When people review forum questions for possible answers they do NOT want to waste their time on questions that have already been answered. So when you do not mark your questions answered when they have been those questions just junk up the forum and waste people's time.
Not marking questions answered also suggests that you are not a team player; you want people to help you, but you are unwilling to help them by keeping the forum clean.
    Please revisit those 66 previous questions, give HELPFUL or ANSWERED credit where credit is due and then mark them ANSWERED if they have been answered.

  • ImpDP Create schema

    Hello Experts!
I am using Oracle impdp to import a fresh DB onto a test system. I want impdp to create all the schemas.
    My original import was done using sys as sysdba
    I am running the import using the same user. What option should I set in the par file to do so?
-- Oracle 10.2
    -- Running on Linux
    Thanks!

    hi
    Part of the Exp Logs
    Export: Release 10.2.0.4.0 - Production on Monday, 19 September, 2011 19:09:09
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_FULL_01": "system/******** AS SYSDBA" DUMPFILE=EXP_DUMP_DIR:expdp_STAGEDB_09192011.dmp logfile=EXP_LOG_DIR:expdp_STAGEDB_09192011.log PARFILE=/home/oracle/wavemark/export/scripts/expdp_STAGEDB.par
    Estimate in progress using BLOCKS method...
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 9.944 GB
    Processing object type DATABASE_EXPORT/TABLESPACE
    Processing object type DATABASE_EXPORT/SYS_USER/USER
    Processing object type DATABASE_EXPORT/SCHEMA/USER
    Processing object type DATABASE_EXPORT/ROLE
    Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
    Processing object type DATABASE_EXPORT/RESOURCE_COST
    Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
    Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
    Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/CONTEXT
    Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
    Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
    Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/RLS_CONTEXT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/RLS_GROUP
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/RLS_POLICY
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
    Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
    Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
    Processing object type DATABASE_EXPORT/SCHEMA/JOB
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
    Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
    . . exported "UWAVE"."PRODUCTITEM" 1.607 GB 4961525 rows
    . . exported "UWAVE"."PRODUCTEVENT" 1.363 GB 7668052 rows
    . . exported "UWAVE"."RFIDTAG" 716.7 MB 6206230 rows
    . . exported "UWAVE"."ERRORLOG" 856.5 MB 628430 rows
    . . exported "UWAVE"."CABINETRUN" 979.5 MB 20389386 rows
    . . exported "URPT"."STG_RTHOSPITAL_PRODUCTEVENT" 252.4 MB 1854228 rows
    . . exported "UWAVE"."AUDITLOG_ARCHIVE" 232.3 MB 571450 rows
    . . exported "UWAVE"."PERFORMANCETIMING" 194.8 MB 330004 rows
    . . exported "UWAVE"."REPORTLOG" 186.2 MB 2315240 rows
    . . exported "UWAVE"."ENDPOINTITEMEND_MV" 138.8 MB 306043 rows
    Part of the Imp Logs
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYS"."SYS_IMPORT_FULL_01": "system/******** AS SYSDBA" DUMPFILE=IMP_DUMP_DIR:expdp_ProdDB_2_EngDB02.dmp LOGFILE=IMP_LOG_DIR:impdp_ProdDB_2_rows.log PARFILE=impdp_ProdDB_2_eng.par
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "UWAVE"."PRODUCT" 8.966 MB 39674 rows
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USPLY' does not exist
    Failing sql is:
    GRANT ALTER ON "UWAVE"."PRODUCT" TO "USPLY"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USPLY' does not exist
    Failing sql is:
    GRANT DELETE ON "UWAVE"."PRODUCT" TO "USPLY"
    ORA-39083: Object type OBJECT_GRANT failed to create with error:
    ORA-01917: user or role 'USPLY' does not exist

  • How to estimate the size of the database object, before creating it?

    A typical question arise in mind.
As DBAs, we all know how to determine object sizes from the database.
But before creating an object in a database or schema, we do an analysis of its size in relation to data growth, i.e. we estimate the size of the object(s) we are about to create.
    for example,
    Create table Test1 (Id Number, Name Varchar2(25), Gender Char(2), DOB Date, Salary Number(7));
    A table is created.
Now, what is the maximum size of a record for this table, i.e. the maximum row length for one record? And how do we estimate this?
    Please help me on this...

To estimate a table size before you create it, you can do the following. For each variable-character column, try to figure out the average size of the data; say, on average, the name will be 20 of the allowed 25 characters. Add 7 bytes for each date column. For numbers:
p = number of digits in the value
s = 0 for a positive number, 1 for a negative number
bytes = round((p + s) / 2) + 1
Now add one byte for the null/length indicator for each column, plus 3 bytes for the row header. This is your row length.
Multiply by the expected number of rows to get a rough size, which you then need to adjust for the pctfree factor that will be used in each block, plus block overhead. With an 8K Oracle block size and the default pctfree of 10, you lose 819 bytes of storage. So 8192 - 819 - 108 estimated overhead = 7265 usable bytes. Now divide 7265 by the average row length to get an estimate of the number of rows that will fit in this space, and reduce the number of usable bytes by 2 bytes for each row. This is your new usable space.
So: number of rows times estimated row length, divided by usable block space, gives the size in blocks. Convert to megabytes or gigabytes as desired.
    HTH -- Mark D Powell --
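    Mark's method can be sketched end to end. The 8K block size, pctfree 10, and the 108-byte fixed overhead are the figures from the post; real overhead varies with the ITL count, so treat this as an approximation only.

    ```python
    # Sketch of the table-size estimation method described above.
    # Assumptions (from the post): 8K block, PCTFREE 10, ~108 bytes of
    # fixed block overhead, 2 bytes of row-table entry per row.

    def oracle_number_bytes(p, negative=False):
        # bytes = round((p + s) / 2) + 1, where s = 1 for a negative number
        s = 1 if negative else 0
        return round((p + s) / 2) + 1

    def estimate_table_bytes(avg_row_len, num_rows,
                             block_size=8192, pctfree=10, fixed_overhead=108):
        pctfree_bytes = block_size * pctfree // 100           # 819 for 8K/10%
        usable = block_size - pctfree_bytes - fixed_overhead  # 7265 bytes
        rows_per_block = usable // (avg_row_len + 2)          # +2 bytes/row
        blocks = -(-num_rows // rows_per_block)               # ceiling division
        return blocks * block_size

    print(oracle_number_bytes(8))         # number(8,2), positive -> 5 bytes
    print(estimate_table_bytes(25, 500))  # 16384 (two 8K blocks)
    ```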

  • Impdp import schema to different tablespace

    Hi,
I have the PRTDB1 schema in the PRTDTS tablespace, and I need to encrypt this tablespace.
I have done these steps:
1. I created a new encrypted tablespace, PRTDTS_ENC.
2. I exported the PRTDB1 schema (expdp system/system schemas=PRTDB1 directory=datapump dumpfile=data.dmp logfile=log1.log).
Now I want to import this dump into the new PRTDTS_ENC tablespace.
Can you give me the impdp command I need?

    Hello,
    ORA-14223: Deferred segment creation is not supported for this table
    Failing sql is:
CREATE TABLE "PRTDB1"."DATRE_CONT" ("BRON" NUMBER NOT NULL ENABLE, "PCDID" NUMBER(9,0) NOT NULL ENABLE, "OWNERSID" NUMBER NOT NULL ENABLE, "VALUESTR" VARCHAR2(4000 CHAR), "VALNUM" NUMBER, "VALDATE" DATE, "VALXML" "XMLTYPE") SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
So it means that these tables "need" a segment, but they are created with the option SEGMENT CREATION DEFERRED.
A workaround is to create a segment for these tables on the source database before exporting them. Then, at import time, you won't have the option SEGMENT CREATION DEFERRED. The following MOS note details this solution:
    - *Error Ora-14223 Deferred Segment Creation Is Not Supported For This Table During Datapump Import [ID 1293326.1]*
The link below gives more information about DEFERRED SEGMENT CREATION:
    http://docs.oracle.com/cd/E14072_01/server.112/e10595/tables002.htm#CHDGJAGB
    With the DEFERRED SEGMENT CREATION feature, a Segment is created automatically at the first insert on the Table. So the Tables for which the error ORA-14223 occurs are empty.
    You may also create them separately by using a script generated by the SQLFILE parameter in which you change the option SEGMENT CREATION DEFERRED by SEGMENT CREATION IMMEDIATE.
Hope this helps.
    Best regards,
    Jean-Valentin

  • XML schema size restrictions

I was wondering what size restrictions there are on XML schemas. I'm developing a schema that has just raised the following error on registration:
    ERROR at line 1:
    ORA-31084: error while creating table "CAS"."swift564357_TAB" for element "swift564"
    ORA-01792: maximum number of columns in a table or view is 1000
    ORA-02310: exceeded maximum number of allowable columns in table
    ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 0
    ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 151
    ORA-06512: at line 828
On removing a few elements from the schema it registers fine, but querying the generated table swift564xxx_TAB there is only ever one column, typed with an ADT that itself has only 5 elements. In fact there doesn't seem to be, on the face of it, any type that has more than 20-30 elements. Where does this error come from, then?
    Unfortunately the schema exceeds the 20k limit on postings. I can split it up and post it in two parts if this would help.
    Thanks
    Marc

Each attribute in the ADT, and each attribute of attributes which are themselves an ADT, counts as one column.
    Here's a snippet from the next version of the doc that may help...
    3-20 Oracle XML DB Developer’s Guide, Rel. 1(10.1) Beta 2 Draft
A number of issues can arise when working with large, complex XML schemas. Sometimes the error "ORA-01792: maximum number of columns in a table or view is 1000" will be encountered when registering an XML schema or creating a table based on a global element defined by an XML schema. This error occurs when an attempt is made to create an XMLType table or column based on a global element, and the global element is defined as a complexType that contains a very large number of element and attribute definitions.
The error only occurs when creating an XMLType table or column that uses object-relational storage. When object-relational storage is selected, the XMLType is persisted as a SQL type. When a table or column is based on a SQL type, each attribute defined by the type counts as a column in the underlying table. If the SQL type contains attributes that are based on other SQL types, the attributes defined by those types also count as columns in the underlying table. If the total number of attributes in all the SQL types exceeds the Oracle limit of 1000 columns in a table, the storage table cannot be created.
This means that as the total number of elements and attributes defined by a complexType approaches 1000, it is no longer possible to create a single table that can manage the SQL objects generated when an instance of the type is stored in the database.
In order to resolve this problem it is necessary to reduce the total number of attributes in the SQL types that are used to create the storage tables. Looking at the schema, there are two approaches that can be used to achieve this:
The first approach uses a 'top-down' technique that uses multiple XMLType tables to manage the XML documents. This technique reduces the number of SQL attributes in the SQL type hierarchy for a given storage table. As long as none of the tables need manage more than 1000 attributes, the problem is resolved.
The second approach uses a 'bottom-up' technique that reduces the number of SQL attributes in the SQL type hierarchy by collapsing some of the elements and attributes defined by the XML schema so that they are stored as a single CLOB.
Both techniques rely on annotating the XML schema to define how a particular complexType will be stored in the database.
In the case of the top-down technique, the annotations SQLInline="false" and defaultTable are used to force some sub-elements within the XML document to be stored as rows in a separate XMLType table. Oracle XML DB maintains the relationship between the two tables using a REF of XMLType. Good candidates for this approach are XML schemas that define a choice where each element within the choice is defined as a complexType, or where the XML schema defines an element based on a complexType that contains a very large number of element and attribute definitions.
The bottom-up technique involves reducing the total number of attributes in the SQL object types by choosing to store some of the lower-level complexTypes as CLOBs rather than objects. This is achieved by annotating the complexType, or the usage of the complexType, with SQLType="CLOB".
Which technique is best depends on the application, and on the kind of queries and updates that need to be performed against the data.

  • Schema size?

How do I identify the size of a schema in a database? I want to know the size of 5 particular schemas in a database that also contains other schemas.
PS: I would appreciate a query that relates the size to the schema name, and not just the owner and segment alone.

    SELECT   owner, SUM (BYTES) / 1024 / 1024 "Size (MB)" FROM dba_segments
       WHERE owner IN ('list of schemas') GROUP BY owner;

  • Estimate table size for last 4 years

    Hi,
    I am on Oracle 10g
I need to estimate a table's size for the last 4 years. What I plan to do is get a count of the rows in the table for the last 4 years and then multiply that value by AVG_ROW_LEN to get the total size for 4 years. Is this technique correct, or do I need to add some overhead?
    Thanks

    Yes, the technique is correct, but it is better to account for some overhead. I usually multiply the results by 10 :)
    The most important thing to check is if there is any trend in data volumes. Was the count of records 4 years ago more or less equal to the last year? Is the business growing or steady? How fast is it growing? What are prospects for the future? Last year in not always 25% of last 4 years. It happens that last year is more than 3 other years added together.
The other, technical issue is the internal organisation of data in Oracle datafiles - the famous PCTFREE. If you expect that the data will be updated, then it is much better to keep some unused space in each database block in case some of your records get larger. This is much better for performance reasons. For example, you leave 10% of each database block free, and when you update a record with a longer value (like replacing a NULL column with an actual 25-character string) the record still fits into the same block. You should account for this and add it to your estimates.
On the other hand, if your records never get updated and you load them in batch, then maybe they can be ORDERed before insert and you can set up the table with the COMPRESS clause. The Oracle COMPRESS clause has very little in common with zip/gzip utilities, but it can bring you significant space savings.
    Finally, there is no point to make estimates too accurate. They are just only estimates and the reality will be almost always different. In general, it is better to overestimate and have some disk space unused than underestimate and need to have people to deal with the issue. Disks are cheap, people on the project are expensive.
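    The whole approach boils down to a one-line calculation with padding factors. This is a minimal sketch under assumptions: the 15% block-overhead/PCTFREE pad and the 25% growth-trend pad are illustrative placeholders, not figures from the thread; pick your own based on the points above.

    ```python
    # Minimal sizing sketch: rows-for-period x AVG_ROW_LEN, padded for
    # block overhead/PCTFREE and an assumed growth trend.
    # Both factors are illustrative assumptions - tune them to your data.

    def estimate_size_gb(row_count, avg_row_len,
                         block_overhead=1.15, growth_factor=1.25):
        raw = row_count * avg_row_len
        return raw * block_overhead * growth_factor / (1024 ** 3)

    # e.g. 40 million rows over 4 years with a 120-byte average row:
    print(f"{estimate_size_gb(40_000_000, 120):.2f} GB")
    ```

    As the answer notes, precision is pointless here; erring on the high side is usually the cheaper mistake.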

  • IMPDP into schema with truncated tables, best way to do this?

    Hello all,
    I'm a bit of a noob witih datapump.
I've got a schema in which I've truncated all the tables. I have a full schema export I took a while back, and I want to import it into the schema to basically 'reset' it.
On the first run, I got:
    ORA-39151: Table "xyz.tablename" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
    I've been reading through, and see suggestions to add to the par file:
    CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND
    And I've seen others use the option for:
    table_exists_action=replace
I'm wondering if I could get suggestions on the best approach to use. I basically want to put the data back into the tables and have the indexes rebuilt. What is the best course of action here?
    Thank you in advance,
    cayenne

    If all you want is to reload the data, then
    impdp user/password ... content=data_only table_exists_action=append
    This will only load the data into the existing tables and it will append the data to those tables. If the table definitions may have changed, then you would remove the content=data_only and use table_exists_action=replace.
    If you need to load dependent objects like indexes, constraints, etc, then don't do content=data_only and use table_exists_action=replace.
    Hope this helps.
    Dean

  • Impdp specific schema from dump which has been created using full=y

    Hi,
I have received a dump (expdp) which was created using the parameter FULL=Y. I just want to import (impdp) one schema into my database. I have used remap_schema and exclude=grants. The schema I want is imported successfully, but in my import log I keep getting messages such as:
    ORA-31684: Object type TABLESPACE:"TS_BCST1" already exists
    ORA-31684: Object type USER:"OUTLN" already exists
    ORA-31684: Object type SEQUENCE:"SYSTEM"."MVIEW$_ADVSEQ_GENERIC" already exists
How can I use impdp to import only the objects under my specific schema, and not have impdp attempt to create the system/sys users and their objects, create tablespaces, etc.?
    Kindly assist.
    Regards,
    Karan

Use the SCHEMAS parameter in the impdp command to import the specific schema.
    Eg.
    impdp system/password schemas=user1 remap_schema=user1:user2 directory=dir dumpfile=user1.dmp...........
You can specify a comma-separated list of schemas to import in the SCHEMAS parameter.

  • Restrict schema size

    Hi everyone,
Just wondering if there is a way in HANA to restrict the size (memory allowance) of a particular schema? We have some users that can create objects, but we want to be able to set a limit on the size of the tables they can create.
    Regards,
    Mark.

As a schema is an object namespace and not an ownership construct, size limitations based on the schema don't really make sense.
Beyond that, SAP HANA currently (SPS 8) does not provide options to limit the amount of data storage on a per-user basis.
    - Lars
