Export all tables in schema using exp utility
I need to export all the tables in a schema based on a where clause. How can I do this without having to list every table in the tables= parameter?
You can get all the tables by doing a user-level export, i.e.
exp scott/tiger@TNS_name owner=scott
will export all of Scott's tables. If you need to export only some of the tables owned by a particular user, you're stuck giving an explicit list until 10g.
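From 10g onward, Data Pump offers more control; its QUERY parameter can even apply one predicate to every exported table. A sketch, assuming placeholder credentials, a pre-created directory object, and a hypothetical DEPTNO column present in each exported table:

```shell
# Schema-mode Data Pump export (10g+). DATA_PUMP_DIR must be a directory
# object that already exists on the server; scott/tiger is a placeholder.
expdp scott/tiger schemas=scott directory=DATA_PUMP_DIR \
      dumpfile=scott.dmp logfile=scott.log \
      query='"WHERE deptno = 10"'
```

An unqualified QUERY clause is applied to every table in the job, so it only makes sense when all exported tables share the referenced column; qualify it (query=SCOTT.EMP:"WHERE ...") to restrict a single table.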
Justin
Distributed Database Consulting, Inc.
http://www.ddbcinc.com/askDDBC
Similar Messages
-
After I bought Mac OS X Lion from the web and downloaded it, I went to install it, but when asked to choose a disk to install on I cannot choose one; it says the disk does not use the GUID partition table scheme and to use Disk Utility to change the partition scheme. But
-
EXP: Export all tables for one user except the contents of one table
I need to export all tables (> 100 tables) of a user except the contents of one table. Is there a way to call the EXP command and exclude that one table?
Thanks in advance .-)
(Oracle11G)
Edited by: gromit on Feb 14, 2011 4:41 AM
It is not possible to perform it from a client. Is this correct? The Data Pump job itself can be started from a client, but - you are right - Data Pump is server-based. In particular, that applies to the dump file location, which has to be defined as a DIRECTORY object on the server:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_overview.htm#i1007107
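Since the goal is to keep the table but skip its contents, one Data Pump sketch (the schema, table, and directory names are placeholders) is a per-table QUERY that matches no rows:

```shell
# Export everything in the schema, but no rows for one table: the
# qualified QUERY keeps BIG_TABLE's definition while "WHERE 1 = 0"
# filters out all of its data. All names here are placeholders.
expdp scott/tiger schemas=scott directory=DATA_PUMP_DIR \
      dumpfile=scott.dmp logfile=scott.log \
      query=SCOTT.BIG_TABLE:'"WHERE 1 = 0"'
```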
Werner -
What is the easiest way to export all tables data from Oracle to MS SQL Server?
Hello MS,
I would like to export all tables from Oracle 11.2 to MS SQL Server 2012 R1.
Using the tool "Microsoft SQL Server Migration Assistant v6.0 for Oracle" did not work for me because there are too many warnings and errors regarding the schema creation (the tool cannot know the schema's intent, since it is not the schema designer). My idea is
to leave the schema creation to the application designer/supplier and instead concentrate on the Oracle data export and the MS SQL data import.
What is the easiest way to export all tables data from Oracle to MS SQL Server quickly?
Is it:
- the "MS SQL Import and Export Data" tool
- the "MS SQL Integration Services" tool
- not the Oracle dump *.dmp format, because it is a proprietary binary format
- a flat file, *.csv (delimited format)
Thanks!
Hi lingodingo,
If you want to directly export all tables from Oracle database to SQL Server, I suggest you use SQL Server Import and Export Wizard. Because you just need to follow the wizard with GUI, this is the easiest way.
If you want to modify the tables' data before loading it into SQL Server, I suggest you use a SQL Server Integration Services package. For more details, please refer to the following similar thread:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/38b2bed2-9d4e-48d4-a33f-1d9eed1c062d/flat-file-to-sql-server?forum=sqldatamining
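If the flat-file route wins out, the Oracle side can be handled with a plain SQL*Plus spool per table. A sketch, assuming a hypothetical EMP table; on 11.2 there is no SET MARKUP CSV, so the columns are concatenated by hand:

```sql
-- Spool one table to a delimited file from SQL*Plus.
SET HEADING OFF PAGESIZE 0 FEEDBACK OFF TRIMSPOOL ON LINESIZE 32767
SPOOL emp.csv
SELECT empno || ',' || ename || ',' || TO_CHAR(hiredate, 'YYYY-MM-DD')
FROM   emp;
SPOOL OFF
```

The resulting .csv files can then be bulk-loaded on the SQL Server side, for example with the Import and Export Wizard or BULK INSERT.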
Thanks,
Katherine Xiong
TechNet Community Support -
Export all tables' name under user by informatica
Hi,
I have a requirement about informatica mapping,
we want to use one mapping to export all table names under a user,
like sql :
SELECT t.OWNER || '.' || t.TABLE_NAME
FROM all_tables t
WHERE t.OWNER = 'BIEBS';
How to realize it?
Thank you!
Import the ALL_TABLES view in the Source Analyzer and save it. While building the mapping, choose it as your source and modify the Source Qualifier query to: select t.owner || '.' || t.table_name from all_tables t where t.owner = 'BIEBS'.
Once you have this, use an Expression transformation to build your expression and load it into a flat file.
Please mark this answered or helpful if it solves your purpose.
Rgds. -
Export all tables except one table
I have a db with more than one user.
I need to export all tables in the DB except one table.
Hi..
As you are on 10g, use EXPDP with the EXCLUDE parameter:
[http://www.oracle-base.com/articles/10g/OracleDataPump10g.php]
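The EXCLUDE usage described in that article boils down to something like the following sketch (all names are placeholders; a parameter file sidesteps shell-quoting of the EXCLUDE clause):

```shell
# Write the Data Pump parameters to a parfile, then run the export.
cat > exp_no_big.par <<'EOF'
schemas=SCOTT
directory=DATA_PUMP_DIR
dumpfile=scott_no_big.dmp
logfile=scott_no_big.log
exclude=TABLE:"IN ('BIG_TABLE')"
EOF
expdp scott/tiger parfile=exp_no_big.par
```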
HTH
Anand
Edited by: Anand... on Mar 26, 2009 9:54 PM
Edited by: Anand... on Mar 26, 2009 9:59 PM -
Analyzing all tables in schema
Hello everyone,
I used the command below to analyze all tables in the schema:
EXEC DBMS_STATS.gather_schema_stats (ownname => 'CONTRACT', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
When I look at the tables in dba_tables, LAST_ANALYZED has not changed to today for any of them. But when I ran
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CONTRACT', tabname => 'CONT_NAME', method_opt => 'FOR ALL COLUMNS', granularity => 'ALL', cascade => TRUE, degree => DBMS_STATS.DEFAULT_DEGREE);
I see LAST_ANALYZED changed to today in dba_tables.
If I need LAST_ANALYZED updated for all tables, do I have to run the above command for each table? There are more than 700 tables for this application.
BANNER
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production
user3636719 wrote:
EXEC DBMS_STATS.gather_schema_stats (ownname => 'CONTRACT', cascade =>true,estimate_percent => dbms_stats.auto_sample_size);
and
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CONTRACT', tabname => 'CONT_NAME', method_opt => 'FOR ALL COLUMNS', granularity => 'ALL', cascade => TRUE, degree => DBMS_STATS.DEFAULT_DEGREE);
are fundamentally different; you cannot compare them. In gather_schema_stats, Oracle used mostly defaults, decided that no tables needed new statistics collected, and so did nothing. In the second call, you changed method_opt, granularity, degree, etc. from the default values (as set in your db, perhaps), so the database went ahead and collected statistics.
You need to look up manual and try to understand the default and non-default behavior for parameters and then make an educated decision. Changing stats randomly is not generally a great idea. -
Can we relink the executables for a particular schema using adadmin utility
Is it possible to do the following tasks for a particular schema (say inv):
1. Relink the executables for INV schema using adadmin utility.
2. Regenerate the forms for INV schema using adadmin utility.
3. Check if there are any invalid objects in the database where the issue is replicated INV schema.
Thanks,
Vijay
Hi,
Is it possible to do the following tasks for a particular schema (say inv):
1. Relink the executables for INV schema using adadmin utility.
2. Regenerate the forms for INV schema using adadmin utility.
Yes, it is possible for a specific schema -- see (Note: 141118.1 - How To Relink Forms Library Files Using Adadmin) for details.
3. Check if there are any invalid objects in the database where the issue is replicated INV schema.
Through adadmin it is not possible for a specific schema. To check the list of invalid objects under the INV schema, you need to run this query:
SQL> select object_name, object_type
from dba_objects
where status = 'INVALID' and owner = 'INV';
Regards,
Hussein -
Export all tables for particular schema
How can I export all the tables belonging to a schema instead of listing them?
To export a schema:
exp file=filename.dmp owner=(schema_name) log=logfile.log
You can run exp help=y to see the available exp options.
Paulo
Message was edited by:
pmcda -
Hi,
I am getting the following Oracle errors while using exp. Can someone please advise?
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user SIEBEL
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user SIEBEL
About to export SIEBEL's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export SIEBEL's tables via Conventional Path ...
. . exporting table EIM_ACCDTL_TNT
EXP-00008: ORACLE error 904 encountered
ORA-00904: "MAXSIZE": invalid identifier
. . exporting table EIM_ACCNTROUTE
EXP-00008: ORACLE error 1003 encountered
ORA-01003: no statement parsed
. . exporting table EIM_ACCNT_CUT
EXP-00008: ORACLE error 904 encountered
ORA-00904: "MAXSIZE": invalid identifier
. . exporting table EIM_ACCNT_DTL
EXP-00008: ORACLE error 1003 encountered
ORA-01003: no statement parsed
Satish is right; there is a known bug for 11.1.0.6.
Look at MetaLink note 741984.1.
The bug number is 5872788, and a patch exists as well.
Datapump API: Import all tables in schema
Hi,
how can I import all tables using a wildcard in the datapump-api?
Thanks in advance,
tensai_
tensai_ wrote:
Thanks for the links, but I already know them...
My problem is that I couldn't find an example which shows how to perform an import via the API which imports all tables, but nothing else.
Can someone please help me with a code example?
I'm not sure what you mean by "imports all tables, but nothing else". It could mean that you only want to import the tables but not the data, and/or not the statistics, etc.
Using the samples provided in the manuals:
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
BEGIN
-- Create a (user-named) Data Pump job to do a "schema" import
h1 := DBMS_DATAPUMP.OPEN('IMPORT','SCHEMA',NULL,'EXAMPLE8');
-- Specify the single dump file for the job (using the handle just returned)
-- and directory object, which must already be defined and accessible
-- to the user running this procedure. This is the dump file created by
-- the export operation in the first example.
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DATA_PUMP_DIR');
-- A metadata remap will map all schema objects from one schema to another.
DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','RANDOLF','RANDOLF2');
-- Include and exclude
dbms_datapump.metadata_filter(h1,'INCLUDE_PATH_LIST','''TABLE''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/C%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/F%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/G%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/I%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/M%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/P%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/R%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/TR%''');
dbms_datapump.metadata_filter(h1,'EXCLUDE_PATH_EXPR','LIKE ''TABLE/STAT%''');
-- no data please
DBMS_DATAPUMP.DATA_FILTER(h1, 'INCLUDE_ROWS', 0);
-- If a table already exists in the destination schema, skip it (leave
-- the preexisting table alone). This is the default, but it does not hurt
-- to specify it explicitly.
DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');
-- Start the job. An exception is returned if something is not set up properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The import job should now be running. In the following loop, the job is
-- monitored until it completes. In the meantime, progress information is
-- displayed. Note: this is identical to the export example.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or Error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and gracefully detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
exception
when others then
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
-- dbms_datapump.stop_job(h1);
dbms_datapump.detach(h1);
END;
/
This should import nothing but the tables (excluding the data and the table statistics) from a schema export (including the remapping shown here); you can play around with the EXCLUDE_PATH_EXPR expressions. Check the serveroutput generated for possible values to use in EXCLUDE_PATH_EXPR.
Use the DBMS_DATAPUMP.DATA_FILTER procedure if you want to exclude the data.
For more samples, refer to the documentation:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_api.htm#i1006925
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/ -
How to delete all TABLEs in Schema SYS which are created since 09:15?
Unfortunately a script created lots of tables in the wrong Tablespace (=SYSTEM) and Schema (=SYS).
How can I delete (in one DDL command) all TABLEs which were created in Tablespace=SYSTEM and Schema=SYS
during the last 3 hours, i.e. since 09:15 on 25 Sep 2011?
Alternatively: How can I move these TABLEs to another Schema (e.g. ATEST) and Tablespace (USERS)?
Is this possible with Oracle XE or only with Oracle Enterprise?
Peter
user559463 wrote:
Unfortunately a script created lots of tables in the wrong Tablespace (=SYSTEM) and Schema (=SYS).
How can I delete (in one DDL command) all TABLES which are created inTablespace=SYSTEM and SCHEMA=SYS
during the last 3 hours resp. since 09:15 of 25th Sep 2011 ?
Alternatively: How can I move these TABLEs to another Schema (e.g. ATEST) and Tablespace (USERS)?
Is this possible with Oracle XE or only with Oracle Enterprise?
Peter
You can query dba_objects and join it with dba_tables where tablespace_name='SYSTEM', then drop the tables returned by the query; the idea is to use the following query:
SQL> select OWNER, OBJECT_NAME from dba_objects where OBJECT_TYPE='TABLE' and OWNER = 'SYS' and CREATED >= sysdate - 3 / 24;
Please consider marking your questions as answered when that is the case;
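Since a single DDL statement cannot drop several tables, the query can be turned into a generator instead. A sketch (the cutoff timestamp is the one from the question; review the output carefully before running it, as dropping the wrong object in SYS can damage the data dictionary):

```sql
-- Emit one DROP statement per table created since the cutoff; spool the
-- output, inspect it, then run it as a script.
SELECT 'DROP TABLE "' || owner || '"."' || object_name || '" PURGE;'
FROM   dba_objects
WHERE  object_type = 'TABLE'
AND    owner = 'SYS'
AND    created >= TO_DATE('25-SEP-2011 09:15', 'DD-MON-YYYY HH24:MI');
```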
Handle: user559463
Status Level: Newbie
Registered: Feb 18, 2007
Total Posts: 583
Total Questions: 266 (186 unresolved)
Edited by: orawiss on Sep 26, 2011 4:03 PM
Not able to export all table data to excel
Hi All,
I am using jdev 11.1.1.4.0
I want to export all my table data to an Excel file.
I used the built-in ADF <af:exportCollectionActionListener/> to do this.
Also, I have pagination on my JSPX.
When I click the export button, it exports only the current page's records, not all the records.
For instance, I have 100 records in my table and I am displaying 20 records per page.
When I click on export to excel image, it exports current 20 records instead of exporting all 100 records.
Please tell me how to export all the records to the excel.
Sample code,
<af:exportCollectionActionListener exportedId="t1" type="excelHTML"/>
t1 is the id of the table from which I want to export the data.
I have also tried exportedRows="all", but it doesn't work.
Appreciate your help.
Thanks and Regards,
Madhav K.
Hi Arunkumar,
thanks for your reply.
Yes, that works...
But I don't want to do it that way, because almost every page has export-to-Excel functionality.
If I follow that approach I end up with an extra table on every page, and I don't want that in my application.
Is there any other way???
Thanks and Regards,
Madhav K. -
How to list all tables/stored procedures used by the report
All the reports I create get their data from stored procedure(s). Is there a way to obtain a listing of all the stored procedures without having to open each report and check under Database > Set Datasource Location > Properties > Table Name?
This information would be extremely valuable, as it would help me judge the impact of any changes I might be considering to one or more of the stored procedures.
So far I have maintained a manual listing, but it is not up to date or reliable. I would rather get an updated listing every time I want to change or drop a stored procedure.
Thanks so much for your help.
Rick
Dell, can you be a little more specific about the SDK solution? I could ask one of the developers to help me, but I need to gather more details.
I took a look at .rpt Inspector Pro, but it does not do what I need. All I need is the listing of all the database tables (in my case, stored procs) used in my reports. No need to replace or change anything. I need to scan the directory where I have all the reports for the different applications and get the report names and the tables/stored procs used. I can export the txt file to Excel, and that's all.
Avoid trigger on all tables in schema
I need to set a CREATE TIMESTAMP column to the database server timestamp (not the session timestamp) in all tables in a schema, for both create and update. Other than creating a trigger on every table in the schema, is there a less tedious way to do that?
Similarly, I need to set columns such as CREATE_USER and LAST_UPDATE_USER.
Thanks in anticipation.
You can easily generate the DDL to add the new columns.
As far as populating the columns goes, your choices are either to use before-insert and before-update triggers on each table, or to have the application provide the necessary information.
The basic trigger logic would be pretty much the same for all the tables, so writing a little SQL or PL/SQL to generate the trigger code should be straightforward.
Depending on your application, such as web based with only one Oracle user, you may need to obtain the real user via dbms_application_info set by the application server based logic.
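A generator along those lines might look like the sketch below. The audit column names (CREATE_TS, CREATE_USER, UPDATE_TS, LAST_UPDATE_USER) are hypothetical, and SYSTIMESTAMP supplies the database server time rather than the session time:

```sql
-- Emit a CREATE TRIGGER statement for every table in the current schema;
-- spool the output, review it, then run it as a script.
SELECT 'CREATE OR REPLACE TRIGGER trg_' || table_name || '_aud'   || CHR(10) ||
       'BEFORE INSERT OR UPDATE ON ' || table_name                || CHR(10) ||
       'FOR EACH ROW'                                             || CHR(10) ||
       'BEGIN'                                                    || CHR(10) ||
       '  IF INSERTING THEN'                                      || CHR(10) ||
       '    :NEW.create_ts   := SYSTIMESTAMP;'                    || CHR(10) ||
       '    :NEW.create_user := USER;'                            || CHR(10) ||
       '  END IF;'                                                || CHR(10) ||
       '  :NEW.update_ts        := SYSTIMESTAMP;'                 || CHR(10) ||
       '  :NEW.last_update_user := USER;'                         || CHR(10) ||
       'END;'                                                     || CHR(10) ||
       '/'
FROM user_tables;
```

Long table names may push the generated trigger name past the identifier length limit, so the naming scheme here is only illustrative.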
HTH -- Mark D Powell --
Edited by: Mark D Powell on May 5, 2010 7:48 AM