Importing table structure only consumes space in GB
Hello,
I have used the Oracle 9i exp command to export a schema structure (no rows). There are only 150 tables and the export completes without any warnings.
When I try to import it into a different schema, I can see that it consumes 3 GB of space in the default tablespace for that schema. I checked the free space in the tablespace before importing; there is 3 GB of free space.
When I start the import, the tablespace fills up and the import hangs. It does not proceed any further. I'm not able to understand why it is using 3 GB of space for just the table structure.
Please help
Thanks
UNKNOWN007 wrote:
Hello,
I have used the Oracle 9i exp command to export a schema structure (no rows). There are only 150 tables and the export completes without any warnings.
When I try to import it into a different schema, I can see that it consumes 3 GB of space in the default tablespace for that schema. I checked the free space in the tablespace before importing; there is 3 GB of free space.
When I start the import, the tablespace fills up and the import hangs. It does not proceed any further. I'm not able to understand why it is using 3 GB of space for just the table structure.
Neither are we, without your export and import commands. I could take a guess, though: your export uses compress=y, either explicitly or because it is the default, and you are using dictionary-managed tablespaces on your target. In that case each new table gets created with an initial extent the same size as the data on the source. The immediate solution is not to use compress; the longer-term one is to move to locally managed tablespaces as well.
Still, if we could see the commands you used, we wouldn't need to guess.
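A hedged sketch of what the suggested commands might look like; the usernames, passwords and file names here are placeholders, not taken from the thread:

```shell
# Export structure only, without inflating INITIAL extents (COMPRESS=N):
exp scott/tiger FILE=schema_ddl.dmp OWNER=scott ROWS=N COMPRESS=N

# Import the structure into the other schema:
imp scott2/tiger FILE=schema_ddl.dmp FROMUSER=scott TOUSER=scott2
```

With COMPRESS=N each table keeps its original INITIAL extent definition instead of a single extent sized to hold all the source data, so an empty import should need very little space.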
Niall Litchfield
http://www.orawin.info/
Similar Messages
-
Hi everybody,
Is there any possibility to import only the table structure from an Oracle backup dump file? After importing the table structure, is it possible to import the data into the tables from the same or another backup dump file?
What I meant to say is: first I will import the table structure from the dump file. After importing the table structure I want to verify the constraints. Finally I want to import the data from the same dump file.
Thanks
JayaDev
Hi JayaDev,
If you are using IMP to import into the database, use the ROWS=N parameter.
For further reference, see [Importing schemas|http://www.psoug.org/reference/import.html]
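A hedged sketch of the two-pass approach asked about above; the file name and credentials are placeholders:

```shell
# Pass 1: table structure only, so constraints can be verified first.
imp user/password FILE=backup.dmp FULL=Y ROWS=N

# Pass 2: the data; IGNORE=Y skips the "table already exists" CREATE failures.
imp user/password FILE=backup.dmp FULL=Y IGNORE=Y
```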
*009* -
Am I correct in assuming there is no way to import a table structure (non-delivered) into a custom repository in MDM? In other words, you must manually create each table?
...the GUI is a bit tedious; are there any quicker/better ways to create custom tables?
Hi,
I guess the answer to your question is yes.
Here's the justification:
Firstly, a database table shouldn't be confused with the tables we see in the repository. MDM stores the repository tables in the database in a different manner,
so there is no one-to-one relationship.
Also, MDM stores other details like unique and display fields, keywords, multilingual options etc. So even if you imported a table, you would have to enter these details manually.
Also, this is a one-time job and a very critical one, as it decides the efficiency of your MDM solution.
I believe that using the APIs you can expedite the process, though I'm not very sure about this.
Hope that helps.
Regards,
Tanveer.
<b>Please mark helpful answers</b> -
Export schema table structure only- no procedures
Hi Friends,
I tried to export a schema structure with the rows=n option in the exp command. As the log shows, the procedures were exported too.
Do we have any way to export only a schema's tables?
Thanks for help!
Jin
user589812 wrote:
Database is 10.2.0.4. I am not able to use Data Pump based on system configuration.
Please elaborate on what you mean by "I am not able to use Data Pump based on system configuration".
We have more than one hundred tables that would need to be included in the tables option.
Do you have any simple way to export a schema's table structure without listing 120 table names in the exp command?
Edited by: user589812 on Jan 4, 2012 10:19 AM
I do not believe you have a choice. One method would be to get a list of all needed tables (SELECT TABLE_NAME FROM USER_TABLES) and create a par file with this list.
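A hedged sketch of generating such a list from SQL*Plus; the file name is illustrative, not from the thread:

```sql
-- Spool the table list; edit the result into a TABLES=(...) line afterwards.
set pages 0 feed off trimspool on
spool tables.txt
select table_name || ',' from user_tables order by table_name;
spool off
```

The spooled file still needs a little hand-editing (wrap it in TABLES=(...) and drop the trailing comma) before it can be used in a par file for exp.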
HTH
Srini -
Import tables structure to document and add comments
I want to import into a document the structure of all the tables and add some comments to the columns.
Are there tools that do that?
Thanks
André
Hi,
Oracle Designer or third-party tools like PL/SQL Developer may help you...
Simon -
Resource for R/3 functional processes and table structures
Dear Experts,
I want a brief but concise understanding of the R/3 modules in terms of business process flows and the important table structures. Ideally, the document or book should phrase it in a way that is easy for non-functional people to understand. I am sure that as ABAP developers you gurus have to understand business processes and tables all the time. I would appreciate some help here; I am even willing to pay for such a resource.
My contact : [email protected]
regards,
Bryan
Hi Bryan,
There's one PDF file on the web which I think is okay. Here's the link:
http://www.auditware.co.uk/SAP/Extras/SAPTables.pdf
Please let me know whether or not this is what you are looking for.
Regards,
Anand Mandalika. -
How to export and import only data not table structure
Hi Guys,
I am not very familiar with the import/export utilities; please help me.
I have two schemas: Schema1 and Schema2.
I have been using Schema1, and my valuable data is in it. Now I want to move this data from Schema1 to Schema2.
In Schema2 I have only the table structure, no data.
user1118517 wrote:
Hi Guys,
I am not very familiar with the import/export utilities; please help me.
I have two schemas: Schema1 and Schema2.
I have been using Schema1, and my valuable data is in it. Now I want to move this data from Schema1 to Schema2.
In Schema2 I have only the table structure, no data.
Nothing wrong with exporting the structure. Just use 'ignore=y' on the import. When it tries to do the CREATE TABLE, the CREATE statement will fail because the table already exists, but ignore=y means "ignore failures of CREATE", and it will then proceed to INSERT the data.
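A hedged sketch of that approach; credentials and file names are placeholders:

```shell
# Export everything from Schema1 (structure and data):
exp schema1/password FILE=schema1.dmp OWNER=schema1

# Import into Schema2: the CREATE TABLE statements fail (tables exist),
# IGNORE=Y skips those failures and the rows are inserted anyway.
imp schema2/password FILE=schema1.dmp FROMUSER=schema1 TOUSER=schema2 IGNORE=Y
```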
Export and import only table structure
Hi ,
I have two schemas, scott and scott2. The scott schema has tables, indexes and procedures; the scott2 schema is completely empty.
Now I want the table structures, indexes and procedures from the scott schema in the scott2 schema. No data is needed.
What are the commands to export the table structures, indexes and procedures from the scott schema and import them into the scott2 schema?
Once this is done, I want the scott schema to have full access to the scott2 schema.
Oracle Database 10g Release 10.2.0.1.0 - 64bit Production
Please help...
Pravin wrote:
I used rows=n
It is giving me the error below while importing the dump file:
IMP-00003: ORACLE error 604 encountered
ORA-00604: error occurred at recursive SQL level 1
ORA-01013: user requested cancel of current operation
^C
You are getting this error because you hit Ctrl-C during the import, which essentially cancels the import.
IMP-00017: following statement failed with ORACLE error 604:
"CREATE TABLE "INVESTMENT_DETAILS_BK210509" ("EMP_NO" VARCHAR2(15), "INFOTYP"
"E" VARCHAR2(10), "SBSEC" NUMBER(*,0), "SBDIV" NUMBER(*,0), "AMOUNT" NUMBER("
"*,0), "CREATE_DATE" DATE, "MODIFY_DATE" DATE, "FROM_DATE" DATE, "TO_DATE" D"
"ATE) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 6684672"
" FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "DMSTG" LOGG"
"ING NOCOMPRESS"
Srini
Same table in LMT consume 10% more space than in DMT
We have a table in a DMT with pct_free=10; its size is 3,891,200 blocks. After I used impdp to import it into another database, into an LMT tablespace with manually managed extents (the table has the same pct_free), the size is 4,141,059 blocks. I hoped the new table would be smaller than the old one, but the opposite is true. There is almost no update activity on the table, so it seems the difference comes from using LMT.
Does LMT have a much higher overhead than DMT?
It has a much lower overhead, in fact, in terms of coping better with segments that frequently extend (no ST enqueue waits, for example).
But if you specify mad uniform sizes when creating LMTs, you will end up with mad results! There's not enough information in your post to know if that's happening here or not, but if you do:
create tablespace TB1 datafile 'tb1.dbf' size 10m extent management local uniform size 1M;
and then create a table which only stores 48KB of data in that tablespace, the table will nevertheless consume 1MB of space and look very inefficient, because that large uniform size is being applied.
How, specifically, did you create your LMT? If you said, 'uniform size 1GB', for example, then you're bound to end up with whole gig extents, and thus forced to consume 32GB when your table (say) only really wanted to occupy 31.01GB
If you let autoallocate determine the extent sizes, the same sort of effect is going to happen, but it might not be so noticeable.
I'm not convinced in any case that the difference between a table occupying 29.6GB and one occupying 31.6GB is worth worrying about: 2GB of disk space seems to me to be the least of your concerns. For the total elimination of tablespace fragmentation and ST enqueue waits, 2GB seems to me a small price to pay! -
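A hedged illustration of the two extent-management options discussed above; the tablespace and file names are made up:

```sql
-- Uniform extents: every extent is exactly 1M, even for a 48K table.
create tablespace tb1 datafile 'tb1.dbf' size 10m
  extent management local uniform size 1m;

-- Autoallocate: Oracle picks the extent sizes (small at first, growing
-- later), which wastes far less space for small segments.
create tablespace tb2 datafile 'tb2.dbf' size 10m
  extent management local autoallocate;
```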
To export and import oracle 11g table data only
Hi Gurus,
I'm just not sure of the procedure to follow to export just the table data, then truncate the table, make some changes (not table structure changes), and then import the same table data into the relevant table.
Could someone please help me with the steps involved.
Thanks a lot in advance
If you can use Data Pump, here are your commands:
expdp table_owner/password directory=<your_directory> dumpfile=table_name.dmp tables=table_name content=data_only
impdp table_owner/password directory=<your_directory> dumpfile=table_name.dmp tables=table_name table_exists_action=append
Data Pump requires version 10.1 or later.
Dean -
How to write an export dump command with no table data, only table structure
How do I write an export dump command that exports no table data, only the table structure, and does so for the whole schema?
e.g. an export dump command for the scott schema and all tables within the scott schema, with no table data exported.
If I understand the question, it sounds like you just need to add the flag "ROWS=N" to your export command (I assume that you're talking about the old export utility, not the Data Pump version).
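A hedged sketch for the scott example; the password is the classic placeholder, not from the thread:

```shell
# Whole-schema export of scott with no row data:
exp scott/tiger FILE=scott_ddl.dmp OWNER=scott ROWS=N
```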
Justin -
Recently, I have been unable to import all of the "songs" or discs from multi-CD operas in iTunes to my iPod classic or Seagate network hard drive. Only 1 or 2 discs, or 20 or so "songs" out of a total of 40-80, import despite plenty of available space on both devices. When I check iTunes, each opera disc with all of the "songs" is listed. Very frustrating! Any suggestions?
Right-click on the songs that do transfer and select Create AAC Version, then re-sync.
Peace, Clyde
TIPS(18) : CREATING SCRIPTS TO RECREATE A TABLE STRUCTURE
Product: SQL*Plus
Date written: 1996-11-12
TIPS(18) : Creating Scripts to Recreate a Table Structure
=========================================================
The script creates scripts that can be used to recreate a table structure.
For example, this script can be used when a table has become fragmented or to
get a definition that can be run on another database.
CREATES SCRIPT TO RECREATE A TABLE-STRUCTURE
INCL. STORAGE, CONSTRAINTS, TRIGGERS ETC.
This script creates scripts to recreate a table structure.
Use the script to reorganise a table that has become fragmented,
to get a definition that can be run on another database/schema or
as a basis for altering the table structure (eg. drop a column!).
IMPORTANT: Running the script is safe as it only creates two new scripts and
does not do anything to your database! To get anything done you have to run the
scripts created.
The created scripts do the following:
1. save the content of the table
2. drop any foreign key constraints referencing the table
3. drop the table
4. create the table with an Initial storage parameter that
will accommodate the entire content of the table. The Next
parameter is 25% of the initial.
The storage parameters are picked from the following list:
64K, 128K, 256K, 512K, multiples of 1M.
5. create table and column comments
6. fill the table with the original content
7. create all the indexes incl storage parameters as above.
8. add primary, unique key and check constraints.
9. add foreign key constraints for the table and for referencing
tables.
10.Create the table's triggers.
11.Compile any depending objects (cascading).
12.Grant table and column privileges.
13.Create synonyms.
This script must be run as the owner of the table.
If your table contains a LONG-column, use the COPY
command in SQL*Plus to store/restore the data.
USAGE
from SQL*Plus:
start reorgtb
This will create the scripts REORGS1.SQL and REORGS2.SQL
REORGS1.SQL contains code to save the current content of the table.
REORGS2.SQL contains code to rebuild the table structure.
undef tab;
set echo off
column a1 new_val stor
column b1 new_val nxt
select
decode(sign(1024-sum(bytes)/1024),-1,to_char((round(sum(bytes)/(1024*1024))+1))||'M', /* > 1M rounded up to nearest Megabyte */
decode(sign(512-sum(bytes)/1024), -1,'1M',
decode(sign(256-sum(bytes)/1024), -1,'512K',
decode(sign(128-sum(bytes)/1024), -1,'256K',
decode(sign(64-sum(bytes)/1024) , -1,'128K',
'64K'))))) a1,
decode(sign(1024-sum(bytes)/4096),-1,to_char((round(sum(bytes)/(4096*1024))+1))||'M', /* > 1M rounded up to nearest Megabyte */
decode(sign(512-sum(bytes)/4096), -1,'1M',
decode(sign(256-sum(bytes)/4096), -1,'512K',
decode(sign(128-sum(bytes)/4096), -1,'256K',
decode(sign(64-sum(bytes)/4096) , -1,'128K',
'64K'))))) b1
from user_extents
where segment_name=upper('&1');
set pages 0 feed off verify off lines 150
col c1 format a80
spool reorgs1.sql
PROMPT drop table bk_&1
prompt /
PROMPT create table bk_&1 storage (initial &stor) as select * from &1
prompt /
spool off
spool reorgs2.sql
PROMPT spool reorgs2
select 'alter table '||table_name||' drop constraint '||constraint_name||';' c1
from user_constraints where r_constraint_name
in (select constraint_name from user_constraints where table_name=upper('&1')
and constraint_type in ('P','U'));
PROMPT drop table &1
prompt /
prompt create table &1
select decode(column_id,1,'(',',')
||rpad(column_name,40)
||decode(data_type,'DATE' ,'DATE '
,'LONG' ,'LONG '
,'LONG RAW','LONG RAW '
,'RAW' ,'RAW '
,'CHAR' ,'CHAR '
,'VARCHAR' ,'VARCHAR '
,'VARCHAR2','VARCHAR2 '
,'NUMBER' ,'NUMBER '
,'unknown')
||rpad(
decode(data_type,'DATE' ,null
,'LONG' ,null
,'LONG RAW',null
,'RAW' ,decode(data_length,null,null
,'('||data_length||')')
,'CHAR' ,decode(data_length,null,null
,'('||data_length||')')
,'VARCHAR' ,decode(data_length,null,null
,'('||data_length||')')
,'VARCHAR2',decode(data_length,null,null
,'('||data_length||')')
,'NUMBER' ,decode(data_precision,null,' '
,'('||data_precision||
decode(data_scale,null,null
,','||data_scale)||')')
,'unknown'),8,' ')
||decode(nullable,'Y','NULL','NOT NULL') c1
from user_tab_columns
where table_name = upper('&1')
order by column_id;
prompt )
select 'pctfree '||t.pct_free c1
,'pctused '||t.pct_used c1
,'initrans '||t.ini_trans c1
,'maxtrans '||t.max_trans c1
,'tablespace '||s.tablespace_name c1
,'storage (initial '||'&stor' c1
,' next '||'&stor' c1
,' minextents '||t.min_extents c1
,' maxextents '||t.max_extents c1
,' pctincrease '||t.pct_increase||')' c1
from user_Segments s, user_tables t
where s.segment_name = upper('&1') and
t.table_name = upper('&1')
and s.segment_type = 'TABLE';
prompt /
select 'comment on table &1 is '''||comments||''';' c1 from
user_tab_comments
where table_name=upper('&1');
select 'comment on column &1..'||column_name||
' is '''||comments||''';' c1 from user_col_comments
where table_name=upper('&1');
prompt insert into &1 select * from bk_&1
prompt /
set serveroutput on
declare
cursor c1 is select index_name,decode(uniqueness,'UNIQUE','UNIQUE')
unq
from user_indexes where
table_name = upper('&1');
indname varchar2(50);
cursor c2 is select
decode(column_position,1,'(',',')||rpad(column_name,40) cl
from user_ind_columns where table_name = upper('&1') and
index_name = indname
order by column_position;
l1 varchar2(100);
l2 varchar2(100);
l3 varchar2(100);
l4 varchar2(100);
l5 varchar2(100);
l6 varchar2(100);
l7 varchar2(100);
l8 varchar2(100);
l9 varchar2(100);
begin
dbms_output.enable(100000);
for c in c1 loop
dbms_output.put_line('create '||c.unq||' index '||c.index_name||' on &1');
indname := c.index_name;
for q in c2 loop
dbms_output.put_line(q.cl);
end loop;
dbms_output.put_line(')');
select 'pctfree '||i.pct_free ,
'initrans '||i.ini_trans ,
'maxtrans '||i.max_trans ,
'tablespace '||i.tablespace_name ,
'storage (initial '||
decode(sign(1024-sum(e.bytes)/1024),-1,
to_char((round(sum(e.bytes)/(1024*1024))+1))||'M',
decode(sign(512-sum(e.bytes)/1024), -1,'1M',
decode(sign(256-sum(e.bytes)/1024), -1,'512K',
decode(sign(128-sum(e.bytes)/1024), -1,'256K',
decode(sign(64-sum(e.bytes)/1024) , -1,'128K',
'64K'))))) ,
' next '||
decode(sign(1024-sum(e.bytes)/4096),-1,
to_char((round(sum(e.bytes)/(4096*1024))+1))||'M',
decode(sign(512-sum(e.bytes)/4096), -1,'1M',
decode(sign(256-sum(e.bytes)/4096), -1,'512K',
decode(sign(128-sum(e.bytes)/4096), -1,'256K',
decode(sign(64-sum(e.bytes)/4096) , -1,'128K',
'64K'))))) ,
' minextents '||s.min_extents ,
' maxextents '||s.max_extents ,
' pctincrease '||s.pct_increase||')'
into l1,l2,l3,l4,l5,l6,l7,l8,l9
from user_extents e,user_segments s, user_indexes i
where s.segment_name = c.index_name
and s.segment_type = 'INDEX'
and i.index_name = c.index_name
and e.segment_name=s.segment_name
group by s.min_extents,s.max_extents,s.pct_increase,
i.pct_free,i.ini_trans,i.max_trans,i.tablespace_name ;
dbms_output.put_line(l1);
dbms_output.put_line(l2);
dbms_output.put_line(l3);
dbms_output.put_line(l4);
dbms_output.put_line(l5);
dbms_output.put_line(l6);
dbms_output.put_line(l7);
dbms_output.put_line(l8);
dbms_output.put_line(l9);
dbms_output.put_line('/');
end loop;
end;
/
declare
cursor c1 is
select constraint_name, decode(constraint_type,'U',' UNIQUE',' PRIMARY KEY') typ,
decode(status,'DISABLED','DISABLE',' ') status from user_constraints
where table_name = upper('&1')
and constraint_type in ('U','P');
cname varchar2(100);
cursor c2 is
select decode(position,1,'(',',')||rpad(column_name,40) coln
from user_cons_columns
where table_name = upper('&1')
and constraint_name = cname
order by position;
begin
for q1 in c1 loop
cname := q1.constraint_name;
dbms_output.put_line('alter table &1');
dbms_output.put_line('add constraint '||cname||q1.typ);
for q2 in c2 loop
dbms_output.put_line(q2.coln);
end loop;
dbms_output.put_line(')' ||q1.status);
dbms_output.put_line('/');
end loop;
end;
/
declare
cursor c1 is
select c.constraint_name,c.r_constraint_name cname2,
c.table_name table1, r.table_name table2,
decode(c.status,'DISABLED','DISABLE',' ') status,
decode(c.delete_rule,'CASCADE',' on delete cascade ',' ')
delete_rule
from user_constraints c,
user_constraints r
where c.constraint_type='R' and
c.r_constraint_name = r.constraint_name and
c.table_name = upper('&1')
union
select c.constraint_name,c.r_constraint_name cname2,
c.table_name table1, r.table_name table2,
decode(c.status,'DISABLED','DISABLE',' ') status,
decode(c.delete_rule,'CASCADE',' on delete cascade ',' ')
delete_rule
from user_constraints c,
user_constraints r
where c.constraint_type='R' and
c.r_constraint_name = r.constraint_name and
r.table_name = upper('&1');
cname varchar2(50);
cname2 varchar2(50);
cursor c2 is
select decode(position,1,'(',',')||rpad(column_name,40) colname
from user_cons_columns
where constraint_name = cname
order by position;
cursor c3 is
select decode(position,1,'(',',')||rpad(column_name,40) refcol
from user_cons_columns
where constraint_name = cname2
order by position;
begin
dbms_output.enable(100000);
for q1 in c1 loop
cname := q1.constraint_name;
cname2 := q1.cname2;
dbms_output.put_line('alter table '||q1.table1||' add constraint ');
dbms_output.put_line(cname||' foreign key');
for q2 in c2 loop
dbms_output.put_line(q2.colname);
end loop;
dbms_output.put_line(') references '||q1.table2);
for q3 in c3 loop
dbms_output.put_line(q3.refcol);
end loop;
dbms_output.put_line(') '||q1.delete_rule||q1.status);
dbms_output.put_line('/');
end loop;
end;
/
col c1 format a79 word_wrap
set long 32000
set arraysize 1
select 'create or replace trigger ' c1,
description c1,
'WHEN ('||when_clause||')' c1,
trigger_body ,
'/' c1
from user_triggers
where table_name = upper('&1') and when_clause is not null;
select 'create or replace trigger ' c1,
description c1,
trigger_body ,
'/' c1
from user_triggers
where table_name = upper('&1') and when_clause is null;
select 'alter trigger '||trigger_name||decode(status,'DISABLED',' DISABLE',' ENABLE')
from user_triggers where table_name='&1';
set serveroutput on
declare
cursor c1 is
select 'alter table '||'&1'||decode(substr(constraint_name,1,4),'SYS_',' ',' add constraint ') a1,
decode(substr(constraint_name,1,4),'SYS_',' ',constraint_name)||' check (' a2,
search_condition a3,
') '||decode(status,'DISABLED','DISABLE','') a4,
'/' a5
from user_constraints
where table_name = upper('&1') and
constraint_type='C';
b1 varchar2(100);
b2 varchar2(100);
b3 varchar2(32000);
b4 varchar2(100);
b5 varchar2(100);
fl number;
begin
open c1;
loop
fetch c1 into b1,b2,b3,b4,b5;
exit when c1%NOTFOUND;
select count(*) into fl from user_tab_columns where table_name =
upper('&1') and
upper(column_name)||' IS NOT NULL' = upper(b3);
if fl = 0 then
dbms_output.put_line(b1);
dbms_output.put_line(b2);
dbms_output.put_line(b3);
dbms_output.put_line(b4);
dbms_output.put_line(b5);
end if;
end loop;
end;
/
create or replace procedure dumzxcvreorg_dep(nam varchar2,typ
varchar2) as
cursor cur is
select type,decode(type,'PACKAGE BODY','PACKAGE',type) type1,
name from user_dependencies
where referenced_name=upper(nam) and referenced_type=upper(typ);
begin
dbms_output.enable(500000);
for c in cur loop
dbms_output.put_line('alter '||c.type1||' '||c.name||' compile;');
dumzxcvreorg_dep(c.name,c.type);
end loop;
end;
/
exec dumzxcvreorg_dep('&1','TABLE');
drop procedure dumzxcvreorg_Dep;
select 'grant '||privilege||' on '||table_name||' to '||grantee||
decode(grantable,'YES',' with grant option;',';') from
user_tab_privs where table_name = upper('&1');
select 'grant '||privilege||' ('||column_name||') on &1 to '||grantee||
decode(grantable,'YES',' with grant option;',';')
from user_col_privs where grantor=user and
table_name=upper('&1')
order by grantee, privilege;
select 'create synonym '||synonym_name||' for '||table_owner||'.'||table_name||';'
from user_synonyms where table_name=upper('&1');
PROMPT REM
PROMPT REM YOU MAY HAVE TO LOG ON AS SYSTEM TO BE
PROMPT REM ABLE TO CREATE ANY OF THE PUBLIC SYNONYMS!
PROMPT REM
select 'create public synonym '||synonym_name||' for '||table_owner||'.'||table_name||';'
from all_synonyms where owner='PUBLIC' and table_name=upper('&1') and
table_owner=user;
prompt spool off
spool off
set echo on feed on verify on
The scripts REORGS1.SQL and REORGS2.SQL have been
created. Alter these scripts as necessary.
To recreate the table structure, first run REORGS1.SQL.
This script saves the content of your table in a table
called bk_&1.
If this script runs successfully, run REORGS2.SQL.
The result is spooled to REORGTB.LST.
Check this file before dropping the bk_&1 table.
*/
Please do NOT cross-post: create a deep structure for dynamic internal table
Regards
Uwe -
Table structure changed in testing system after system refresh.
Hi Team,
Recently we underwent a system refresh in the testing system, where the testing data was replaced with production data. But now we find that in one table some fields which we had deleted are present again. The version history of the table is also gone. The table object was under testing in the testing system, after which it was supposed to be transported to production. I believe a system refresh only means a refresh of data and has nothing to do with table structure. Please correct me if I am wrong. Please also let me know what could cause those changes in the table if it is not the system refresh.
Regards,
AmitI believe System refresh only means refresh of data and it has nothing to do with table structure.
Alas, you were wrong, after all table definition are data too, as program sources...
You have to re-import every transport request which was not yet transported to production in your test system.
Regards,
Raymond -
Oracle Data Pump - Table Structure change
Hi,
We have a daily partitioned table, and for backup we are using Data Pump (expdp). Our policy is to drop each partition after backup (archiving).
We have archived dump files for one year. A few days back a developer made a change to the table structure: they added one new column to the table.
Now we are unable to restore old partitions. Is there a way to restore a partition if a new column has been added to / dropped from the current table?
Thanks
Sachin
If a new column has been added to the table, you can import the data from the old structure into the new structure. Use the parameter CONTENT=DATA_ONLY.
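A hedged sketch of such a data-only restore; the directory object, file and table names are placeholders:

```shell
# Load only the rows from the archived dump; the existing (new-structure)
# table is kept and the old partition's rows are appended to it.
impdp owner/password DIRECTORY=dump_dir DUMPFILE=old_partition.dmp \
  TABLES=owner.part_table CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND
```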