Dump File Size vs Database Table Size
Hi all!
Hope you're all well. If Data Pump estimates that 18 million records will produce a 2.5GB dump file, does that mean 2.5GB will also be consumed by the database table when the dump file is imported into a database?
Many thanks in advance!
Regards
AC
does this mean that 2.5GB will also be consumed on the database table when this dump file is imported into a database?
No, since the size after import depends on various factors such as the block size, block storage parameters (PCTFREE and so on), etc.
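To see what the import actually consumed, one option is to query dba_segments afterwards; a sketch, with SCOTT as a placeholder for the importing schema:

```sql
-- Actual space consumed by each segment after the import
-- (SCOTT is a placeholder for the importing schema)
SELECT segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024) AS size_mb
FROM   dba_segments
WHERE  owner = 'SCOTT'
GROUP  BY segment_name, segment_type
ORDER  BY size_mb DESC;
```

Comparing the total against the 2.5GB dump file shows the effect of block overhead and storage parameters in either direction.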
Similar Messages
-
How can I list used database/table size resp. user space? Compression after…
How can I list the used and maximum tablespace/database/table sizes of an Oracle database?
How can I list the space currently used by a user, i.e. his tables and indexes?
By the way: if I delete a user or table from an Oracle database installation, are only the entries deleted,
or is the space released as well?
In other words: after deleting a user or table, is it recommended to do something like a "compact" similar to Outlook
or other e-mail clients, to really shrink the occupied space on disk?
I hope this helps you:
select df.tablespace_name "Tablespace",
       tu.totalusedspace "Used MB",
       (df.totalspace - tu.totalusedspace) "Free MB",
       df.totalspace "Total MB",
       round(100 * ((df.totalspace - tu.totalusedspace) / df.totalspace)) "Pct. Free"
from (select tablespace_name,
             round(sum(bytes) / 1048576) totalspace
      from dba_data_files
      group by tablespace_name) df,
     (select round(sum(bytes) / (1024 * 1024)) totalusedspace, tablespace_name
      from dba_segments
      group by tablespace_name) tu
where df.tablespace_name = tu.tablespace_name;
-
Index size larger than table size
Hi All,
Let me know the possible reasons why an index can be larger than its table, and in some cases smaller than its table.
Thanks in advance
sherief
Hi,
The size of an index depends on how inserts and deletes occur.
With sequential indexes, when records are deleted randomly the space will not be reused, as all inserts go into the leading leaf block.
When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of free space.
This means that if you are deleting aged sequence records at the same rate as you are inserting, the number of leaf blocks will stay approximately constant with a constant low percentage of free space. In this case it is probably hardly ever worth rebuilding the index.
With records being deleted randomly, the inefficiency of the index depends on how the index is used.
If numerous full index (or range) scans are being done, the index should be rebuilt to reduce the number of leaf blocks read. This should be done before it significantly affects the performance of the system.
If single-key index accesses are being done, the index only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
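One way to quantify that unused space before deciding on a rebuild is ANALYZE ... VALIDATE STRUCTURE, which populates the index_stats view; a sketch (my_index is a placeholder, and note that VALIDATE STRUCTURE locks the underlying table while it runs):

```sql
-- Gauge index bloat (my_index is a placeholder index name)
ANALYZE INDEX my_index VALIDATE STRUCTURE;

-- index_stats holds one row for the index just analyzed
SELECT height, lf_blks, lf_rows, del_lf_rows
FROM   index_stats;

-- If deleted leaf rows dominate and range scans are common:
ALTER INDEX my_index REBUILD;
```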
Here is an example of how an index can become larger than its table:
Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
Connected as admin
SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
Table created
SQL> create index rich_i on rich(c1);
Index created
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 1179648 144 9
INDEX 1179648 144 9
SQL> delete from rich where mod(c1,2)=0;
29475 rows deleted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 1179648 144 9
INDEX 1179648 144 9
SQL> insert into rich select rownum+100000, 'qq' from all_objects;
58952 rows inserted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 1703936 208 13
INDEX 2097152 256 16
SQL> insert into rich select rownum+200000, 'aa' from all_objects;
58952 rows inserted
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 2752512 336 21
INDEX 3014656 368 23
SQL> delete from rich where mod(c1,2)=0;
58952 rows deleted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 2752512 336 21
INDEX 3014656 368 23
SQL> insert into rich select rownum+300000, 'hh' from all_objects;
58952 rows inserted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 3014656 368 23
INDEX 4063232 496 31
SQL> alter index rich_i rebuild;
Index altered
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 3014656 368 23
INDEX 2752512 336 21
-
Hello everybody, I have a question. Can any of you please suggest how to make an XML file from a database table with all the records?
Note: I have the XSD schema file, and the resulting XML file should conform to that XSD.
The Oracle documentation has a good overview of the options available:
Generating XML Data from the Database
Without knowing your version, I just picked 11.2, so you may need to look for that chapter in the documentation for your version to find applicable information.
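For example, two commonly used options are DBMS_XMLGEN and the SQL/XML functions; the table and column names below are placeholders only:

```sql
-- 1) DBMS_XMLGEN turns any query into a canonical ROWSET/ROW document
SELECT DBMS_XMLGEN.getXML('SELECT * FROM employees') FROM dual;

-- 2) SQL/XML functions give full control over the element shape
SELECT XMLELEMENT("employee",
         XMLFOREST(e.employee_id AS "id",
                   e.last_name   AS "name"))
FROM   employees e;
```

Matching a specific XSD usually means building the exact element structure with the SQL/XML functions, as DBMS_XMLGEN only emits the canonical ROWSET layout.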
You can also find some information in the XML DB FAQ.
-
A query while importing an XML file into a Database Table
Hi,
I am creating an ODI project to import an XML file into a database table, with the help of the following link:
http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/odi/odi_11g/odi_project_xml-to-table/odi_project_xml-to-table.htm
I am facing a problem while creating Physical Schema for the XML Source Model.
For the
Schema(Schema)
and
Schema(Work Schema) field they have selected GEO_D.
What is GEO_D here??
or
What I have to select here?
1) schema of the xml file (NB: I haven't created any .xsd or .dtd file for my .xml file)
or
2)my target servers schema
Please tell me what I should do.
Thanks
and
Schema (Work Schema) field they have selected GEO_D.
What is GEO_D here?
This is the schema name which is specified in the XML file.
What I have to select here?
1) schema of the xml file (NB: I haven't created any .xsd or .dtd file for my .xml file)
Yes
2)my target servers schema
Please tell me what I should do.
Thanks
-
Migrate sql server dump file to Oracle database on a network
I have a SQL Server dump file provided to me by the DBAs. I have to migrate this dump file to an Oracle database on a network. Please suggest the steps to do this.
5c1ab566-05d1-4cc9-894a-fd1fe724c752 wrote:
That would be a text file with around 100 records. I have to migrate those records to the Oracle database. The firewall constraints don't allow me to create a link to that database. Would Migration Workbench be apt for this case? How can I capture the database offline in this case?
regardless of the tool/technique you use, those bits have to get from machine running sqlserver to the machine running Oracle. Either over the network (through the firewall) or via 'sneakernet.' What, exactly, are the firewall constraints?
If the export is a simple text file (character delimited fields? fixed length fields?) and only 100 records ... Migration Workbench may be overkill, perhaps a simple sqlldr job. But you still have to get the bits from 'there' to 'here'.
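For a comma-delimited export, a minimal SQL*Loader control file might look like this; the file, table, and column names are placeholders:

```sql
-- load.ctl : SQL*Loader control file (all names are placeholders)
LOAD DATA
INFILE 'export.txt'
INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)
```

Invoked as something like `sqlldr userid=scott/tiger control=load.ctl log=load.log` once the file has been copied to the Oracle side.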
============================================================================
BTW, it would be really helpful if you would go to your profile and give yourself a recognizable name. It doesn't have to be your real name, just something that looks like a real name. Who says my name is really Ed Stevens? But at least when people see that on a message they have a recognizable identity. Unlike the system generated name of 'ed0f625b-6857-4956-9b66-da280b7cf3a2', which is like going to the pub with a bag over your head.
-
Index size greater than table size
Hi all,
We are running BI7.0 in our environment.
One of the tables' index size is much greater than the table itself. The details are listed below:
Table Name: RSBERRORLOG
Total Table Size: 141,795,392 KB
Total Index Size: 299,300,576 KB
Index:
F5: Index Size / Allocated Size: 50%
Is there any reason that the index should grow more than Table? If so, would Reorganizing index help and if this can be controlled?
Please let me know, as I am not very clear on the database side.
Thanks and Regards,
Raghavan
Hi Hari,
It's basically a degenerated index. You can follow the steps below:
1. Delete some entries from RSBERRORLOG.
BI database growing at 1 Gb per day while no data update on ECC
2. Reorganize this table with BRSPACE. Afterwards the size of the table should be much smaller. I do not remember if this table has a LONG RAW field (in that case an export/import of this table would be required). --- Basis job
3. Delete and recreate Index on this table
You will gain lot of space.
I assumed you are on Oracle.
More information on reorganization: see the thread "TABLE SPACE REORGANIZATION !! QUICK EXPERT INPUTS".
Regards
Anindya
-
Csv file uploading for database table creation
Hi there,
I'm in the process of making an application that will be able to upload a CSV file and create a table based on that file. As of now, I have managed to make my application upload a CSV file into the database. My problem now is to transfer the data in the CSV into a table. If there is a function that can do this, please let me know; as of now I have tried all that I can, but in vain. I would appreciate any assistance on how I can go about this.
Kind regards,
Lusuntha
Hai Lusuntha,
Go to the search forum and type "upload within html db". There you will find the required information as well as the code; go through each topic in the search result.
-
Index size greater than table size
HI ,
While checking the large segments, I came to know that index HZ_PARAM_TAB_N1 is larger than table HZ_PARAM_TAB. I think it's highly fragmented and requires defragmentation. I need your suggestions on how to collect more information on this. More details below.
1.
select sum(bytes)/1024/1024/1024,segment_name from dba_segments group by segment_name having sum(bytes)/1024/1024/1024 > 1 order by 1 desc;
SUM(BYTES)/1024/1024/1024 SEGMENT_NAME
81.2941895 HZ_PARAM_TAB_N1
72.1064453 SYS_LOB0000066009C00004$$
52.7703857 HZ_PARAM_TAB
2. Index code
<pre>
COLUMN_NAME COLUMN_POSITION
ITEM_KEY 1
PARAM_NAME 2
</pre>
Regards
Rahul
Hi,
Thanks. I know that a rebuild will defragment it. But as I'm new on this site, I was looking for some more supporting information before drafting the mail proposing the reorg activity. It shouldn't be possible for the index to be larger than the table, as the index contains only 2 column values + rowid, whereas the table contains 6 columns.
<pre>
Name Datatype Length Mandatory Comments
ITEM_KEY VARCHAR2 (240) Yes Unique identifier for the event raised
PARAM_NAME VARCHAR2 (2000) Yes Name of the parameter
PARAM_CHAR VARCHAR2 (4000)
Value of the parameter only if its data type is VARCHAR2.
PARAM_NUM NUMBER
Value of the parameter only if its data type is NUM.
PARAM_DATE DATE
Value of the parameter only if its data type is DATE.
PARAM_INDICATOR VARCHAR2 (3) Yes Indicates if the parameter contains existing, new or replacement values. OLD values currently exist. NEW values create initial values or replace existing values.</pre>
Regards
Rahul
-
Importing 9i dump file into 10g database
Hello guys,
I know that 9i import is incompatible with 10g Data Pump. I want to know how I can move the data from my production database (9i) to the new site (10g) without much hassle.
I don't want to use an upgrade, because not all objects in 9i are needed in the new version of the application. I just want to export what I need from 9i to a dump file and then reimport it into 10g.
Have imp/exp been deprecated in 10g?
And I know I will get invalid objects; I hope recompilation will solve that. I will start the test next week. Just wanted to confirm.
Thank you
Charles
I don't think Export/Import is deprecated in Oracle 10g, but Data Pump is mostly used in 10g.
Even if exp/imp were deprecated in 10g, you could still use an Oracle 9i client installation with exp/imp, and then use imp to connect to the 10g database.
Thanks
Shasik
-
Do tablespace block size and database block size have different meanings?
At the time of database creation we define the database block size,
which cannot be changed afterwards, while for a tablespace we can also
define a block size, which may be the same as or different from the block size defined
at database creation time. If it is different from the database block size,
what is the actual block size used by the Oracle database?
Can anyone explain in detail?
Thanks in Advance
Regards
You can't meaningfully name things when there's nothing to compare and contrast them with. If there is no keep or recycle cache, then whilst I can't stop you saying, 'I only have a default cache'... well, you've really just got a cache. By definition, it's the default, because it's the only thing you've got! Saying it's "the default cache" is simply linguistically redundant!
So if you want to say that, when you set db_nk_cache_size, you are creating a 'default cache for nK blocks', be my guest. But since there's no other bits of nk cache floating around the place (of the same value of n, that is) to be used as an alternate, the designation 'default' is pointless.
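For what it's worth, using a non-standard block size takes two steps: a matching db_nk_cache_size, then a tablespace created with that BLOCKSIZE. A sketch, with illustrative sizes and datafile path:

```sql
-- A 16K-block tablespace needs a 16K buffer cache first
ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/orcl/ts16k01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```

Objects in ts_16k then use 16K blocks, while everything else keeps the database block size chosen at creation time.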
Of course, at some point Oracle may introduce keep and recycle caches for non-standard block sizes, and then the differentiator 'default' becomes meaningful. But not yet.
-
Inserting records from a txt file to a database table
I would like to know how to insert records from a text (txt) file into a database table through a Java application.
BufferedReader (http://java.sun.com/javase/6/docs/api/java/io/BufferedReader.html) and PreparedStatement
IO and JDBC
-
Loading an XML file into a database table.
What is the convenient way to parse XML data present in a data file in server?
Hi;
Please check:
http://riteshkk2000.blogspot.com/2012/01/loadimport-xml-file-into-database-table.html
http://docs.oracle.com/cd/E18283_01/appdev.112/e16659/xdb03usg.htm#BABIFADB
http://www.oracle.com/technetwork/articles/quinlan-xml-095823.html
Regards
Helios
-
Uploading Excel file into SAP Database table?
I built a table in the SAP Data Dictionary, and I need to write a program that uploads the Excel table into the SAP database table. Does anybody have a sample program that may help me? Thanks!
TYPES:
BEGIN OF ty_upload,
matnr like mara-matnr,
meins like mara-meins,
mtart like mara-mtart,
mbrsh like mara-mbrsh,
END OF ty_upload.
DATA it_upload TYPE STANDARD TABLE OF ty_upload WITH header line.
DATA wa_upload TYPE ty_upload.
DATA: itab TYPE STANDARD TABLE OF alsmex_tabline WITH header line.
CALL FUNCTION 'ALSM_EXCEL_TO_INTERNAL_TABLE'
EXPORTING
filename = 'C:\Documents and Settings\venkatapp\Desktop\venkat.xls'
i_begin_col = 1
i_begin_row = 1
i_end_col = 4
i_end_row = 65535
TABLES
intern = itab.
if not itab[] is initial.
loop at itab .
case itab-col.
when '0001'.
it_upload-matnr = itab-value.
when '0002'.
it_upload-meins = itab-value.
when '0003'.
it_upload-mtart = itab-value.
when '0004'.
it_upload-mbrsh = itab-value.
append it_upload.
clear it_upload.
clear itab.
endcase.
endloop.
endif.
loop at it_upload into wa_upload.
ztable-matnr = wa_upload-matnr.
ztable-meins = wa_upload-meins.
ztable-mtart = wa_upload-mtart.
ztable-mbrsh = wa_upload-mbrsh.
insert ztable.
endloop.
-
Problem when import a dump file to a database.
Hi.
I created the tablespace as following:
CREATE TABLESPACE BDS_L_DATA datafile '/local/target/oracle/data/orcl/l_data.dbf'
size 100M
autoextend on maxsize unlimited ;
I'm new to Oracle, and while trying to import a dump into the database I got this error.
Please help me to fix this problem:
IMP-00003: ORACLE error 1659 encountered
ORA-01659: unable to allocate MINEXTENTS beyond 48 in tablespace BDS_L_DATA
IMP-00017: following statement failed with ORACLE error 1659:
"CREATE TABLE "AUDIT_LOG" ("ACL_SYS_ID" NUMBER NOT NULL ENABLE, "ACL_CREATE_"
"TS" DATE NOT NULL ENABLE, "ACL_CREATE_UI" VARCHAR2(15) NOT NULL ENABLE, "AC"
"L_ACTION_CD" CHAR(1) NOT NULL ENABLE, "ACL_TABLE_NM" VARCHAR2(30) NOT NULL "
"ENABLE, "ACL_COLUMN_NM" VARCHAR2(30), "ACL_PRIMARY_KEY_TX" VARCHAR2(250), ""
"ACL_PRIMARY_KEY2_TX" VARCHAR2(250), "ACL_PRIMARY_KEY3_TX" VARCHAR2(250), "A"
"CL_PRIMARY_KEY4_TX" VARCHAR2(250), "ACL_PRIMARY_KEY5_TX" VARCHAR2(250), "AC"
"L_PRIMARY_KEY6_TX" VARCHAR2(250), "ACL_PRIMARY_KEY7_TX" VARCHAR2(250), "ACL"
"_PRIMARY_KEY8_TX" VARCHAR2(250), "ACL_PRIMARY_KEY9_TX" VARCHAR2(250), "ACL_"
"PRIMARY_KEY10_TX" VARCHAR2(250), "ACL_BEFORE_DATA_SNPSHT_TX" VARCHAR2(2000)"
", "ACL_AFTER_DATA_SNPSHT_TX" VARCHAR2(2000), "ACL_REQUEST_TS" DATE, "ACL_RE"
"QUESTOR_UI" VARCHAR2(15)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 ST"
"ORAGE(INITIAL 52428800 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "BDS_L_DAT"
"A" LOGGING NOCOMPRESS"
Thanks
Thanks for the quick response. However, it wasn't what I asked.
I'd like to add a file to the tablespace as you said.
What's the command?
I did it like this:
ALTER TABLESPACE BDS_L_DATA ADD datafile '/local/target/oracle/data/orcl/l_data1.dbf' size 4M AUTOEXTEND ON;
I still got an error about the tablespace:
IMP-00017: following statement failed with ORACLE error 1659:
"CREATE TABLE "DTC_INSTR_ACTVY" ("SEC_CUSIP_NO" VARCHAR2(9) NOT NULL ENABLE,"
" "DAC_ACCT_ID" VARCHAR2(12) NOT NULL ENABLE, "ACC_DEALER_ID" VARCHAR2(6) NO"
"T NULL ENABLE, "DIA_RECORD_DT" DATE NOT NULL ENABLE, "STY_SEC_TY_CD" VARCHA"
"R2(6) NOT NULL ENABLE, "DIA_ORIG_AVAIL_PAR_VL" NUMBER(17, 2) NOT NULL ENABL"
"E, "DIA_COLLAT_PAR_VL" NUMBER(17, 2) NOT NULL ENABLE, "DIA_PAY_PAR_VL" NUMB"
"ER(17, 2) NOT NULL ENABLE, "DIA_CREATE_TS" DATE NOT NULL ENABLE, "DIA_CREAT"
"E_UI" VARCHAR2(15) NOT NULL ENABLE, "DIA_UPDATE_TS" DATE, "DIA_UPDATE_UI" V"
"ARCHAR2(15), "DIA_SCHED_PAY_DT" DATE, "DIA_CALC_SHARE_QY" NUMBER(21, 6), "D"
"IA_INT_AM" NUMBER(17, 2), "DIA_INT_RT" NUMBER(14, 10), "DIA_PRINCIPAL_AM" N"
"UMBER(17, 2), "DIA_PRINCIPAL_RT" NUMBER(14, 10), "DIA_ACTION_CD" VARCHAR2(1"
"), "DIA_ACTVY_TY" VARCHAR2(3), "ACC_SOURCE_SYS_CD" VARCHAR2(3)) PCTFREE 10"
" PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 52428800 FREELISTS 1 FR"
"EELIST GROUPS 1) TABLESPACE "BDS_L_DATA" LOGGING NOCOMPRESS"
IMP-00003: ORACLE error 1659 encountered
ORA-01659: unable to allocate MINEXTENTS beyond 20 in tablespace BDS_L_DATA
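The statement in the dump asks for STORAGE(INITIAL 52428800), i.e. a 50MB initial extent, so a 4M datafile may still be too small. One sketch, reusing the datafile path from the post (sizes are illustrative):

```sql
-- Make the new datafile large enough for 50MB extents outright...
ALTER DATABASE DATAFILE '/local/target/oracle/data/orcl/l_data1.dbf'
  RESIZE 500M;

-- ...or let it grow in increments that can satisfy them
ALTER DATABASE DATAFILE '/local/target/oracle/data/orcl/l_data1.dbf'
  AUTOEXTEND ON NEXT 50M MAXSIZE UNLIMITED;
```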
And I got another error from the import job as well:
ORA-06564: object XTBL_DAT_DIR does not exist.
Anybody knows what the object XTBL_DAT_DIR is?