EXPDP NONASM TO ASM
Hi All,
I have to take a full export of a database and import it into another database. The source DB uses non-ASM (file system) storage, whereas the target uses ASM. What are the steps to
follow for a full expdp/impdp in such a scenario?
Source DB
1. expdp with FULL=y
Target server
2. Create tablespaces and users? -- should I skip this step and go straight to the next one?
3. impdp FULL=y with REMAP_DATAFILE --> '/ora01/dbname/tbs/prd/prod01.dbf':'+DATA/dbname/datafile/prod01.dbf'
I can't test this before the scheduled change time.
SR.
I was really telling you that both would work and your results would be the same. I like both, and here is the difference.
Your step 1: If you had lots of datafiles, say 200 or 2000, I would not want to have to type in 200 or 2000 remap_datafile entries. That is just me. Since you have only 19, I think typing in 19 remap_datafile entries would be simple enough, and I would probably use this step.
Your step 2: with my addition of generating a create-tablespace SQL file, would be ideal if you had lots of datafiles. Since I'm lazy and want to use an editor, I would choose this step. It takes a total of two imports, some work in my favorite editor, then running the .sql script that I generated:
1 - import to sql file
2 - edit sql file to update the data files.
3 - run the sql file to create the tablespaces
4 - run the real import with exclude tablespace.
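As a rough sketch, the two-import flow might look like this (the directory, dump file, and paths are placeholders, not your real names):

```shell
# 1 - write the tablespace DDL to a SQL file instead of running the import
impdp system/password directory=dp_dir dumpfile=full.dmp sqlfile=create_ts.sql include=TABLESPACE
# 2 - edit create_ts.sql, changing each datafile path to its ASM location,
#     e.g. '/ora01/...' to '+DATA/...'
# 3 - run the edited script to create the tablespaces
#     SQL> @create_ts.sql
# 4 - run the real import, skipping tablespace creation
impdp system/password directory=dp_dir dumpfile=full.dmp full=y exclude=tablespace
```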
Since you only have 19 datafiles, I would probably just use step 1. I would generate a parameter file and put all of the remap_datafile parameters in it. It would look like:
import.par
remap_datafile=/old/directory/file1.dbf:+ASM/file1.dbf
remap_datafile=/old/directory/file2.dbf:+ASM/file2.dbf
remap_datafile=/old/directory/file3.dbf:+ASM/file3.dbf
remap_datafile=/old/directory/file4.dbf:+ASM/file4.dbf
etc
Then
impdp user/password directory=your_dir dumpfile=your_dump parfile=import.par ...
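If typing the entries gets tedious, the parameter file can be generated from a file list instead; a minimal sketch (the source paths and the +DATA target are examples, substitute your own):

```shell
# example source datafile names; substitute the real list (e.g. from DBA_DATA_FILES)
src_files="prod01.dbf prod02.dbf prod03.dbf"
: > import.par
for f in $src_files; do
  # one remap_datafile entry per file, old file-system path -> ASM path
  echo "remap_datafile='/ora01/dbname/tbs/prd/$f':'+DATA/dbname/datafile/$f'" >> import.par
done
cat import.par
```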
Hope this helps.
Dean
Similar Messages
-
<p>The following is a list of practical Oracle database scripts written or collected by Maclean Liu, shared here:</p>
<p> </p>
<p><a title="Script: Collect Enterprise Manager Grid Control/Agent/Database Control Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script-collect-grid-control-agent-db-console-diag.html" target="_blank">Script: Collect Enterprise Manager Grid Control/Agent/Database Control Diagnostic Information<br>
</a><a title="Script: Collect Exadata Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script%e6%94%b6%e9%9b%86exadata%e8%af%8a%e6%96%ad%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect Exadata Diagnostic Information</a><br>
<a title="Script: Collect RAC Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script-collect-rac-diag.html" target="_blank">Script: Collect RAC Diagnostic Information<br>
</a><a title="Script: Collect Automatic SGA Memory Management (ASMM) Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script-collect-sga-asmm-diag.html" target="_blank">Script: Collect Automatic SGA Memory Management (ASMM) Diagnostic Information</a><br>
<a title="Script:Collect vip resource Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script-collect-vip-resource-diagnostic-information.html" target="_blank">Script:Collect vip resource Diagnostic Information</a><br>
<a title="11g New Feature: hangdiag.sql Instance Hang Diagnostic Script" href="http://www.oracledatabase12g.com/archives/11g%e6%96%b0%e7%89%b9%e6%80%a7hangdiag-sql%e5%ae%9e%e4%be%8bhang%e8%af%8a%e6%96%ad%e8%84%9a%e6%9c%ac.html" target="_blank">11g New Feature: hangdiag.sql Instance Hang Diagnostic Script</a><br>
<a title="Script:verify Oracle Object timestamp discrepancy" href="http://www.oracledatabase12g.com/archives/script-verify-oracle-object-timestamp-discrepancy.html" target="_blank">Script:verify Oracle Object timestamp discrepancy</a><br>
<a title="Script: SQL Tuning Health Check Script" href="http://www.oracledatabase12g.com/archives/sql-tuning-health-check-script.html" target="_blank">Script: SQL Tuning Health Check Script</a><br>
<a title="Script: List Details of the Current Session" href="http://www.oracledatabase12g.com/archives/script-list-session-details.html" target="_blank">Script: List Details of the Current Session</a><br>
<a title="Parallel UPDATE and DELETE on a Non-Partitioned Table Using ROWID Chunks" href="http://www.oracledatabase12g.com/archives/%e5%88%a9%e7%94%a8rowid%e5%88%86%e5%9d%97%e5%ae%9e%e7%8e%b0%e9%9d%9e%e5%88%86%e5%8c%ba%e8%a1%a8%e7%9a%84%e5%b9%b6%e8%a1%8cupdate%e4%b8%8edelete.html" target="_blank">Parallel UPDATE and DELETE on a Non-Partitioned Table Using ROWID Chunks</a><br>
<a title="Script: Calculate the Memory Used by Oracle Streams Processes" href="http://www.oracledatabase12g.com/archives/script%e8%ae%a1%e7%ae%97oracle-streams%e8%bf%9b%e7%a8%8b%e6%89%80%e5%8d%a0%e7%94%a8%e7%9a%84%e5%86%85%e5%ad%98%e5%a4%a7%e5%b0%8f.html" target="_blank">Script: Calculate the Memory Used by Oracle Streams Processes</a><br>
<a title="Script to Detect Database Corrupted Blocks with RMAN" href="http://www.oracledatabase12g.com/archives/rman-validate-check-logical-database-corrupted-block.html" target="_blank">Script to Detect Database Corrupted Blocks with RMAN</a><br>
<a title="Script: Query the Oracle Alert Log (alert.log) with SQL via an External Table" href="http://www.oracledatabase12g.com/archives/%e5%88%a9%e7%94%a8%e5%a4%96%e9%83%a8%e8%a1%a8%e5%ae%9e%e7%8e%b0sql%e6%9f%a5%e8%af%a2oracle%e5%91%8a%e8%ad%a6%e6%97%a5%e5%bf%97alert-log.html" target="_blank">Script: Query the Oracle Alert Log (alert.log) with SQL via an External Table</a><br>
<a title="Script: Collect RAC DRM Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script-%e6%94%b6%e9%9b%86rac-drm-%e8%af%8a%e6%96%ad%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect RAC DRM Diagnostic Information</a><br>
<a title="Script: Show Active Session Count by Wait Class in 10g Without EM" href="http://www.oracledatabase12g.com/archives/script-10g-show-active-session-count-wait-class.html" target="_blank">Script: Show Active Session Count by Wait Class in 10g Without EM</a><br>
<a title="Script: Recent Database Performance Metrics" href="http://www.oracledatabase12g.com/archives/script-show-instance-recent-performance-metric.html" target="_blank">Script: Recent Database Performance Metrics</a><br>
<a title="Script: Gather User Role, Tablespace, and Profile Information" href="http://www.oracledatabase12g.com/archives/script-gather-user-role-tablespace-profile-info.html" target="_blank">Script: Gather User Role, Tablespace, and Profile Information</a><br>
<a title="Script: Collect Media Recovery Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script-media-recovery-diag-info.html" target="_blank">Script: Collect Media Recovery Diagnostic Information</a><br>
<a title="Script: Collect Flashback Database Log Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script%e6%94%b6%e9%9b%86flashback-database-log%e8%af%8a%e6%96%ad%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect Flashback Database Log Diagnostic Information</a><br>
<a title="Script: List Hourly Redo Log Generation" href="http://www.oracledatabase12g.com/archives/script%e5%88%97%e5%87%baoracle%e6%af%8f%e5%b0%8f%e6%97%b6%e7%9a%84redo%e9%87%8d%e5%81%9a%e6%97%a5%e5%bf%97%e4%ba%a7%e7%94%9f%e9%87%8f.html" target="_blank">Script: List Hourly Redo Log Generation</a><br>
<a title="Script: Collect 11g Oracle Instance I/O Performance Information" href="http://www.oracledatabase12g.com/archives/script%e6%94%b6%e9%9b%8611g-oracle%e5%ae%9e%e4%be%8bio%e6%80%a7%e8%83%bd%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect 11g Oracle Instance I/O Performance Information</a><br>
<a title="Script: Check Whether a Backup Operation Is Currently Running" href="http://www.oracledatabase12g.com/archives/script%e6%a3%80%e6%9f%a5%e6%95%b0%e6%8d%ae%e5%ba%93%e5%bd%93%e5%89%8d%e6%98%af%e5%90%a6%e6%9c%89%e5%a4%87%e4%bb%bd%e6%93%8d%e4%bd%9c%e5%9c%a8%e6%89%a7%e8%a1%8c%e4%b8%ad.html" target="_blank">Script: Check Whether a Backup Operation Is Currently Running</a><br>
Script:List Schema/Table Constraints<br>
<a title="Script: RAC Failover Verification Script loop.sh" href="http://www.oracledatabase12g.com/archives/script-rac-failover%e6%a3%80%e9%aa%8c%e8%84%9a%e6%9c%acloop-sh.html" target="_blank">Script: RAC Failover Verification Script loop.sh</a><br>
<a title="Script:Diagnostic Resource Manager" href="http://www.oracledatabase12g.com/archives/script-diagnostic-resource-manager.html" target="_blank">Script:Diagnostic Resource Manager</a><br>
<a title="Script:List Grid Control Jobs" href="http://www.oracledatabase12g.com/archives/script-list-grid-control-jobs.html" target="_blank">Script:List Grid Control Jobs</a><br>
<a title="Script:GridControl Repository Health Check" href="http://www.oracledatabase12g.com/archives/script-grid-control-repository-health-check.html" target="_blank">Script:GridControl Repository Health Check</a><br>
<a title="Script: Diagnose Scheduler Information" href="http://www.oracledatabase12g.com/archives/script%e8%af%8a%e6%96%adscheduler%e4%bf%a1%e6%81%af.html" target="_blank">Script: Diagnose Scheduler Information</a><br>
<a title="Script: Improve the Output of the crs_stat Command" href="http://www.oracledatabase12g.com/archives/script%e4%bc%98%e5%8c%96crs_stat%e5%91%bd%e4%bb%a4%e7%9a%84%e8%be%93%e5%87%ba.html" target="_blank">Script: Improve the Output of the crs_stat Command</a><br>
<a title="Script:Diagnostic Oracle Locks" href="http://www.oracledatabase12g.com/archives/script-diagnostic-oracle-locks.html" target="_blank">Script:Diagnostic Oracle Locks</a><br>
<a title="Script: List User Tablespace Quotas" href="http://www.oracledatabase12g.com/archives/script-list-user-tablespace-quotas.html" target="_blank">Script: List User Tablespace Quotas</a><br>
<a title="Backup Script:Expdp Schema to ASM Storage" href="http://www.oracledatabase12g.com/archives/backup-script-expdp-schema-to-asm-storage.html" target="_blank">Backup Script:Expdp Schema to ASM Storage</a><br>
<a title="Script:Speed Up Large Index Create or Rebuild" href="http://www.oracledatabase12g.com/archives/script-speed-up-large-index-create-rebuild.html" target="_blank">Script:Speed Up Large Index Create or Rebuild</a><br>
<a title="Script: List Unusable Indexes or Index Partitions" href="http://www.oracledatabase12g.com/archives/list-unusable-index-partition-subpartition.html" target="_blank">Script: List Unusable Indexes or Index Partitions</a><br>
<a title="Script: List Tables with More Than 5% Chained Rows" href="http://www.oracledatabase12g.com/archives/list-tables-with-5-chained-rows.html" target="_blank">Script: List Tables with More Than 5% Chained Rows</a><br>
<a title="Script: List Tables with No Primary Key or Unique Index" href="http://www.oracledatabase12g.com/archives/list-tables-with-no-primary-key-no-unique-key-or-index.html" target="_blank">Script: List Tables with No Primary Key or Unique Index</a><br>
<a title="Script: Collect ASM Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script%e6%94%b6%e9%9b%86asm%e8%af%8a%e6%96%ad%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect ASM Diagnostic Information</a><br>
<a title="Script: Collect Oracle Backup and Recovery Information" href="http://www.oracledatabase12g.com/archives/script%e6%94%b6%e9%9b%86oracle%e5%a4%87%e4%bb%bd%e6%81%a2%e5%a4%8d%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect Oracle Backup and Recovery Information</a><br>
<a title="Monitor the Rollback of a Large Transaction" href="http://www.oracledatabase12g.com/archives/%e7%9b%91%e6%8e%a7%e4%b8%80%e4%b8%aa%e5%a4%a7%e4%ba%8b%e5%8a%a1%e7%9a%84%e5%9b%9e%e6%bb%9a.html" target="_blank">Monitor the Rollback of a Large Transaction</a><br>
<a title="Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)" href="http://www.oracledatabase12g.com/archives/script-to-collect-db-upgrademigrate-diagnostic-information-dbupgdiag-sql.html" target="_blank">Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)</a><br>
<a title="Script:partition table into rowid extent chunks" href="http://www.oracledatabase12g.com/archives/script-partition-table-into-rowid-extent-chunks.html" target="_blank">Script:partition table into rowid extent chunks</a><br>
<a title="Script: Oracle EBS Database Initialization Parameter Health Check Script" href="http://www.oracledatabase12g.com/archives/script-oracle-ebs%e6%95%b0%e6%8d%ae%e5%ba%93%e5%88%9d%e5%a7%8b%e5%8c%96%e5%8f%82%e6%95%b0%e5%81%a5%e5%ba%b7%e6%a3%80%e6%9f%a5%e8%84%9a%e6%9c%ac.html" target="_blank">Script: Oracle EBS Database Initialization Parameter Health Check Script</a><br>
<a title="Script:Monitoring Memory and Swap Usage to Avoid A Solaris Hang" href="http://www.oracledatabase12g.com/archives/script-monitoring-memory-and-swap-usage-to-avoid-a-solaris-hang.html" target="_blank">Script:Monitoring Memory and Swap Usage to Avoid A Solaris Hang</a><br>
<a title="SQL Script: Monitor Current Redo Log File Usage" href="http://www.oracledatabase12g.com/archives/sql%e8%84%9a%e6%9c%ac%e7%9b%91%e6%8e%a7%e5%bd%93%e5%89%8d%e9%87%8d%e5%81%9a%e6%97%a5%e5%bf%97%e6%96%87%e4%bb%b6%e4%bd%bf%e7%94%a8%e6%83%85%e5%86%b5.html" target="_blank">SQL Script: Monitor Current Redo Log File Usage</a><br>
<a title="Streams Health Check on 10g Release 2" href="http://www.oracledatabase12g.com/archives/streams-health-check-on-10g-release-2.html" target="_blank">Streams Health Check on 10g Release 2</a><br>
<a title="Query Table Partition Information from Views" href="http://www.oracledatabase12g.com/archives/%e4%bb%8e%e8%a7%86%e5%9b%be%e6%9f%a5%e8%af%a2%e8%a1%a8%e5%88%86%e5%8c%ba%e7%9a%84%e7%9b%b8%e5%85%b3%e4%bf%a1%e6%81%af.html" target="_blank">Query Table Partition Information from Views</a><br>
<a title="Script To Monitor RDBMS Session UGA and PGA Current And Maximum Usage Over Time" href="http://www.oracledatabase12g.com/archives/script-to-monitor-rdbms-session-uga-and-pga-current-and-maximum-usage-over-time.html" target="_blank">Script To Monitor RDBMS Session UGA and PGA Current And Maximum Usage Over Time</a><br>
<a title="Script: Collect RAC Performance Diagnostic Information" href="http://www.oracledatabase12g.com/archives/script%e6%94%b6%e9%9b%86rac%e6%80%a7%e8%83%bd%e8%af%8a%e6%96%ad%e4%bf%a1%e6%81%af.html" target="_blank">Script: Collect RAC Performance Diagnostic Information</a><br>
<a title="Script: Collect UNDO Diagnostic Information" href="http://www.oracledatabase12g.com/archives/automatic-undo-management-common-analysis-diagnostic-scripts.html" target="_blank">Script: Collect UNDO Diagnostic Information</a><br>
<a title="Script: List Foreign Keys with No Matching Index on the Child Table" href="http://www.oracledatabase12g.com/archives/list-foreign-keys-with-no-matching-index-on-child-table-causes-locks.html" target="_blank">Script: List Foreign Keys with No Matching Index on the Child Table</a><br>
<a title="Script: Listing Memory Used By All Sessions" href="http://www.oracledatabase12g.com/archives/script-listing-memory-used-by-all-sessions.html" target="_blank">Script: Listing Memory Used By All Sessions</a><br>
<a title="Collecting Diagnostic Data for OCFS2 Issues" href="http://www.oracledatabase12g.com/archives/collecting-diagnostic-data-for-ocfs2-issues.html" target="_blank">Collecting Diagnostic Data for OCFS2 Issues</a><br>
<a title="Script to Identify Objects and Amount of Blocks in the Buffer Pools – Default, Keep, Recycle, nK Cache" href="http://www.oracledatabase12g.com/archives/script-to-identify-objects-and-amount-of-blocks-in-the-buffer-pools-default-keep-recycle-nk-cache.html" target="_blank">Script to Identify Objects and Amount of Blocks in the Buffer Pools – Default, Keep, Recycle, nK Cache</a><br>
<a title="Script:Generate A DDL Script For A Table" href="http://www.oracledatabase12g.com/archives/script-generate-ddl-script-for-table.html" target="_blank">Script:Generate A DDL Script For A Table</a><br>
<a title="SCRIPT TO CHECK FOR FOREIGN KEY LOCKING ISSUES" href="http://www.oracledatabase12g.com/archives/script-to-check-for-foreign-key-locking-issues.html" target="_blank">SCRIPT TO CHECK FOR FOREIGN KEY LOCKING ISSUES</a><br>
<a title="How to Find Indexes That Need or Are Worth Rebuilding in Oracle" href="http://www.oracledatabase12g.com/archives/script-lists-all-indexes-that-benefit-from-a-rebuild.html" target="_blank">How to Find Indexes That Need or Are Worth Rebuilding in Oracle</a><br>
Script:Diagnostic ORA-01000 maximum open cursors exceeded<br>
ORA-4030 PGA Usage Diagnostic Script<br>
Script:Tune Very Large Hash Join<br>
Script to Collect Log File Sync Diagnostic Information (lfsdiag.sql)<br>
Script:List Buffer Cache Details<br>
Script:List NLS Parameters and Timezone<br>
Script:List SORT ACTIVITY<br>
Script:List OBJECT DEPENDENT<br>
Script:Logfile Switch Frequency Map<br>
Script:Tablespace Report<br>
Script: Collect Database Security Risk Assessment Information<br>
Script: Formatted V$SQL_SHARED_CURSOR Report<br>
Script: Monitor Parallel Process Status<br>
Script: Monitor Active Users and the SQL They Are Running<br>
Script: Monitor Temporary Tablespace Usage<br>
Script to show Active Distributed Transactions<br>
Gather DBMS_STATS Default parameter<br>
Script:Datafile Report<br>
Script to Collect Data Guard Diagnostic Information<br>
Script:To Report Information on Indexes<br>
ORA-4031 Common Analysis/Diagnostic Scripts<br>
Script:when transaction will finish rollback<br>
Script: Computing Table Size<br>
Script to Detect Tablespace Fragmentation<br>
“hcheck.sql” script to check for known problems in Oracle8i, Oracle9i, Oracle10g and Oracle 11g<br>
Script to Prevent Excessive Spill of Message From the Streams Buffer Queue To Disk<br>
Oracle Systemstate dump analytic tool: ASS.AWK V1.09<br>
SCRIPT TO GENERATE SQL*LOADER CONTROL FILE</p>
Edited by: Maclean Liu on Jan 22, 2012 1:23 AM. Thanks everyone for your support! :)
-
Error while doing an expdp on a large datafile
Hello,
I tried an export using expdp in Oracle 10g Express Edition. It was working perfectly until the DB size reached 2.1 GB, when I got the following error message:
---------------- Start of error message ----------------
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Starting "SERVICE_2_8"."SYS_EXPORT_SCHEMA_05": service_2_8/******** LOGFILE=3_export.log DIRECTORY=db_pump DUMPFILE=service_2_8.dmp CONTENT=all
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.GET_TABLE_DATA_OBJECTS while calling DBMS_METADATA.FETCH_XML_CLOB []
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/usr/lib/oracle/xe/oradata/service_3_0.dbf'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 3
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6235
----- PL/SQL Call Stack -----
object line object
handle number name
0x3b3ce18c 14916 package body SYS.KUPW$WORKER
0x3b3ce18c 6300 package body SYS.KUPW$WORKER
0x3b3ce18c 9120 package body SYS.KUPW$WORKER
0x3b3ce18c 1880 package body SYS.KUPW$WORKER
0x3b3ce18c 6861 package body SYS.KUPW$WORKER
0x3b3ce18c 1262 package body SYS.KUPW$WORKER
0x3b0f9758 2 anonymous block
Job "SERVICE_2_8"."SYS_EXPORT_SCHEMA_05" stopped due to fatal error at 03:04:34
---------------- End of error message ----------------
SELinux was disabled completely and I have set permissions of 0777 on the appropriate datafile.
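Mode 0777 on the datafile alone may not be enough: the OS user running the instance also needs execute permission on every parent directory. One quick way to check the whole path (using the path from the error above):

```shell
# show owner and permissions for each component of the path
namei -l /usr/lib/oracle/xe/oradata/service_3_0.dbf
# confirm which OS user the instance background processes actually run as
ps -eo user,args | grep '[p]mon'
```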
Still, it is not working.
Can you please tell me how to solve this problem, or do you have any ideas or suggestions regarding this?
Hello rgeier,
I cannot access the tablespace service_3_0 (2.1 GB) through a PHP web application or through SQL*Plus. I can access the small tablespace service_2_8 through the web application or through SQL*Plus. When I tried to access service_3_0 through SQL*Plus, the following error message was returned:
---------------- Start of error message ----------------
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/usr/lib/oracle/xe/oradata/service_3_0.dbf'
ORA-27041: unable to open file
Linux Error: 13: Permission denied
Additional information: 3
---------------- End of error message ----------------
The following are the last set of entries in the alert_XE.log file in the bdump folder:
---------------- Start of alert log ----------------
db_recovery_file_dest_size of 40960 MB is 9.96% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Aug 20 05:13:59 2008
Completed: alter database open
Wed Aug 20 05:19:58 2008
Shutting down archive processes
Wed Aug 20 05:20:03 2008
ARCH shutting down
ARC2: Archival stopped
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=27, OS id=7463 to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_06', 'SERVICE_2_8', 'KUPC$C_1_20080820054031', 'KUPC$S_1_20080820054031', 0);
kupprdp: worker process DW01 started with worker id=1, pid=28, OS id=7466 to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_06', 'SERVICE_2_8');
Wed Aug 20 05:40:48 2008
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
The value (30) of MAXTRANS parameter ignored.
---------------- End of alert log ---------------- -
How can I move the RMAN convert files from the file system back to ASM?
I have no idea how to plug in the data files, which were unloaded as follows:
SQL> alter tablespace P_CDDH_DSPGD_V1_2011 read only;
SQL> alter tablespace P_IDX_CDDH_DSPGD_V1_2011 read only;
SQL> exec dbms_tts.transport_set_check('P_CDDH_DSPGD_V1_2011,P_IDX_CDDH_DSPGD_V1_2011',true);
SQL> select * from transport_set_violations;
UNIX> expdp tossadm@pmscdhf1 dumpfile=ttsfy1.dmp directory=trans_dir transport_tablespaces = P_CDDH_DSPGD_V1_2011,P_IDX_CDDH_DSPGD_V1_2011
RMAN> convert tablespace P_CDDH_DSPGD_V1_2011, P_IDX_CDDH_DSPGD_V1_2011 format = '/appl/oem/backup/temp/%I_%s_%t_extbspace.dbf';
Starting conversion at source at 03-OCT-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=116 instance=pmscdhf11 device type=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00079 name=+PMSCDHF1/p_cddh_dspgd_v1_2011_01.dbf
converted datafile=/appl/oem/backup/temp/3536350174_2820_827849001_extbspace.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:02:15
channel ORA_DISK_1: starting datafile conversion
input datafile file number=00080 name=+PMSCDHF1/p_idx_cddh_dspgd_v1_2011_02.dbf
converted datafile=/appl/oem/backup/temp/3536350174_2821_827849136_extbspace.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:01:45
Finished conversion at source at 03-OCT-13
Starting Control File Autobackup at 03-OCT-13
piece handle=/dbms/oracle/r1110/db_01/dbs/c-3536350174-20131003-02 comment=NONE
Finished Control File Autobackup at 03-OCT-13
SQL> drop tablespace P_CDDH_DSPGD_V1_2011 including contents;
SQL> drop tablespace P_IDX_CDDH_DSPGD_V1_2011 including contents;
Afterward, how can I relocate the backup files "/appl/oem/backup/temp/3536350174_2820_827849001_extbspace.dbf" and "/appl/oem/backup/temp/3536350174_2821_827849136_extbspace.dbf" back to the ASM disk group +PMSCDHF1?
The 11.1 documentation only says "Enables you to copy files between ASM disk groups on local instances to and from remote instances" and "You can also use this command to copy files from ASM disk groups to the operating system."
http://docs.oracle.com/cd/B28359_01/server.111/b31107/asm_util.htm#CHDJEIEA
The 11.2 documentation says "Copy files from a disk group to the operating system; copy files from a disk group to a disk group; copy files from the operating system to a disk group".
http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_util003.htm#CHDJEIEA
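On 11.2, for example, asmcmd cp can copy an OS file back into a disk group, and RMAN can do the equivalent on earlier releases. A sketch using the filenames above (untested here, run as the grid/oracle user):

```shell
# 11.2: copy the converted files straight into the disk group
asmcmd cp /appl/oem/backup/temp/3536350174_2820_827849001_extbspace.dbf +PMSCDHF1/
asmcmd cp /appl/oem/backup/temp/3536350174_2821_827849136_extbspace.dbf +PMSCDHF1/
# alternative via RMAN:
# RMAN> convert datafile '/appl/oem/backup/temp/3536350174_2820_827849001_extbspace.dbf' format '+PMSCDHF1';
```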
I've never tried it on 11.1.
Hemant K Chitale -
EXPDP is too slow even though the value of cursor_sharing was changed to EXACT.
Hi
We have a 10g Standard Edition database (10.2.0.4) on Solaris, which is RAC with ASM. In fact, we are planning to migrate it to Linux x86-64 and to 11.2.0.3. The database size is around 1.3 TB. We are planning to go with an expdp backup and an impdp into the new server and new version database.
SQL> select * from v$version;
BANNER
Oracle Database 10g Release 10.2.0.4.0 - Production
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> !uname -a
SunOS ibmxn920 5.10 Generic_127128-11 i86pc i386 i86pc
As per the plan I started the expdp. But unfortunately the processing of tables continued for one and a half days and the actual backup never started. After going through a few docs I found that CURSOR_SHARING should be EXACT to make expdp faster (previously it was SIMILAR). So I changed the parameter to EXACT on one of the nodes and started the backup again last night on the same node where I changed the parameter. When I came back today the processing was still going on. I checked the job status and found that table processing is still running; it is not hung at all, but it is too slow.
What could be the reason? Here are the memory details and kernel parameter details.
Mem
Memory: 24G phys mem, 6914M free mem, 31G swap, 31G free swap
Kernel parameters
forceload: sys/msgsys
forceload: sys/semsys
forceload: sys/shmsys
set noexec_user_stack=1
set msgsys:msginfo_msgmax=65535
set msgsys:msginfo_msgmnb=65535
set msgsys:msginfo_msgmni=2560
set msgsys:msginfo_msgtql=2560
set semsys:seminfo_semmni=3072
set semsys:seminfo_semmns=6452
set semsys:seminfo_semmnu=3072
set semsys:seminfo_semume=240
set semsys:seminfo_semopm=100
set semsys:seminfo_semmsl=1500
set semsys:seminfo_semvmx=327670
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=268435456
set shmsys:shminfo_shmmni=4096
set shmsys:shminfo_shmseg=1024
set noexec_user_stack = 1
set noexec_user_stack_log = 1
#Non-administrative users cannot change file ownership.
rstchown=1
Do I need to change any of the above? The dump is going to a local file system.
Hi,
I'd be looking at doing this in parallel over a database link and completely miss out sending anything to NFS - it will make the whole process quicker (you effectively skip the export part and everything is an import into the new instance).
I ran a 600 GB impdp this way over a db link and it maybe took 12 hours (can't remember exactly) - a lot of that time is index build in the new database, so make sure your PGA etc. is set up correctly for that.
LOB data massively slows down Data Pump, so that could also be the issue here. You should be able to achieve the whole process in less than a day (if you have no LOBs...).
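The over-the-link approach is a single impdp run on the target with no dump file at all; a sketch with placeholder link, directory, and password names:

```shell
# on the 11.2 target, after creating a database link to the 10.2 source:
#   SQL> create database link src10g connect to system identified by password using 'SRCDB';
impdp system/password directory=dp_dir network_link=src10g full=y parallel=4 logfile=net_imp.log
```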
Cheers,
Harry -
How to move a schema from one database to another on ASM?
Hi All
I have a schema in database A on ASM and I want to move the complete schema to database B on ASM . Both databases are on RAC and on the same server.
Please tell me the steps how to do it or if there is a script to do it.
Thank you.
You would do it the same way as on a single instance, through utilities like exp/imp, expdp/impdp, or insert-as-select through a database link.
ASM can't cause any interference in this task.
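A minimal sketch of the expdp/impdp route (the schema, directory, and connect strings are placeholders):

```shell
# on database A: export the schema
expdp system/password@A schemas=SCOTT directory=dp_dir dumpfile=scott.dmp logfile=scott_exp.log
# on database B: import it (REMAP_SCHEMA/REMAP_TABLESPACE are available if names differ)
impdp system/password@B schemas=SCOTT directory=dp_dir dumpfile=scott.dmp logfile=scott_imp.log
```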
Regards,
Rodrigo Mufalani -
Is it possible to export to an asm directory?
I tried this:
CREATE DIRECTORY "DGT01_EXP" AS '+DGT01/EXP';
(yes, this directory does exist in ASM)
But when I tried using this directory for expdp, I got this error.
$ expdp datapump/&pw schemas=test directory=DGT01_EXP dumpfile=test.dmp
Export: Release 10.2.0.1.0 - Production on Thursday, 27 December, 2007 14:44:53
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
With the Real Application Clusters option
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Please let me know if anyone has been able to do this.
Thanks.
Hi,
Sorry, misread your first post. You can use data pump to export data to a disk but not to ASM.
If you are trying to move data from one database to another, you may use NETWORK_LINK parameter with datapump to directly read from the source database and copy it to destination database.
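A sketch of the NETWORK_LINK route (the link, schema, and directory names are placeholders; the directory is only needed for the log file):

```shell
# run on the destination database; rows are read straight from the source
# over the db link, so no dump file is written on either side
impdp user/password directory=log_dir network_link=src_db schemas=test logfile=net_imp.log
```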
Regards -
Hello All,
We are using oracle database 10g on AIX. We are using asm for storage on a shared location.
Now we want to move our database from old servers to new servers.
I want to know if we could use the same ASM-based datafiles for the new database to avoid a data migration. Can we switch our database from one server to another and use the same datafiles, or do we have to migrate the database by RMAN or expdp, etc.?
We want to move a 9 TB database from Tru64 and file system to Solaris and ASM on 10.2.0.3; we will be testing RMAN with CONVERT:
http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/dbxptrn.htm#sthref1379
and physical dataguard options.
so do you have any experience with the source and the target OS different case, like source as Tru64 and destination as Solaris;
SQL> SELECT a.platform_name, a.endian_format
2 FROM v$transportable_platform a
3 WHERE a.platform_id IN (2, 5)
4 ;
PLATFORM_NAME              ENDIAN_FORMAT
Solaris[tm] OE (64-bit)    Big
HP Tru64 UNIX              Little
Or is there any similar step-by-step reference, like Howard's article, for the same-OS case?
Thank you, best regards. -
RMAN Alternate host restore, Source database on ASM
I need to restore a tablespace to a remote host. The source Database is on ASM. I'm fairly new to ASM, but I have experience with RMAN. I have a database that's on ASM and I would like to restore one of its tablespaces to a remote host. Any help would be appreciated.
Thanks,
Alex
Hi user12345,
one way would be to use Transportable Tablespace.
This would utilize expdp (or exp, respectively).
There is something new in 10gR2 called transportable tablespace from backup.
Here expdp is integrated into an RMAN command.
For more see my post:
http://sysdba.wordpress.com/2006/04/11/transportable-tablespaces-from-backup-with-rman-in-oracle-10g-release-2/
Hope it helps,
=;-)
Lutz -
When should we use ACFS vs. ASM volumes on 11g Release 2 (11.2.0.3.0)?
http://docs.oracle.com/cd/E16338_01/server.112/e10500/asmfilesystem.htm
•Oracle ASM is the preferred storage manager for all database files. It has been specifically designed and optimized to provide the best performance for database file types.
•Oracle ACFS is the preferred file manager for non-database files. It is optimized for general purpose files.
•Oracle ACFS does not support any file that can be directly stored in Oracle ASM.
Is ACFS optional, and does it incur any additional licensing costs?
Oracle has thrown in all sorts of nonsensical verbiage to completely confuse everyone as to how you can use ACFS. You can use it for "database-related" stuff, but you can't use it for data files, archivelog files, or backup files. You can store exports, because if you do a parallel expdp the export file(s) must be visible to the entire cluster. Which **I think** is to be interpreted as: you cannot store "application" files that are required to be accessible by your applications. I surmise that the reason for this is that Oracle decided they could make more $$$ by releasing a "product" called CloudFS. CloudFS **is** ASM/ACFS - and nothing more. So, on your WebLogic clustered middle tier you can use CloudFS to store your application data files.
Example for mid-tier:
some process transfers 100's of files to be processed
The middle-tier - running on multiple servers (nodes)
take the files and process them in parallel (mechanisms in place to ensure only one node processes a given file )
writes output files to another shared directory.
In the past, you would provide a "share" by using NFS. Well, as it turns out, really bad things can happen: when one of the nodes that is processing files dies, or when the NFS share itself dies, files can get "lost".
ACFS works great for this - and Oracle realized that since "ASM/GI" was sort-of "free" they needed to close that loophole. In RAC, the only thing that "cost extra" was the RAC license, which only applied to the RDBMS code that made the database cluster-aware. <this from a conversation with an ex-Oracle salesman>
ORA-39070 Error when using datapump and writing to ASM storage
I am able to export data using Data Pump when I write to a file system. However, when I try to write to ASM storage, I get the following errors.
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Below are the steps I took:
create or replace directory jp_dir2 as '+DATA/DEV01/exp_dir';
grant read,write on directory jp_dir2 to jpark;
expdp username/password schemas=test directory=jp_dir2 dumpfile=test.dmp logfile=test.log
Edited by: user564785 on Aug 25, 2011 6:49 AM
Google: expdp ASM
first hit:
http://asanga-pradeep.blogspot.com/2010/08/expdp-and-impdp-with-asm.html
"Log files created during expdp cannot be stored inside ASM; for log files, a directory object that uses an OS file system location must be given. If not, the following error will be thrown:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
" -
Hi,
We are in the process of upgrading our 10.2.0.4 databases to 11.2.0.2. We are going from RAC to RAC. I have already upgraded many databases successfully with the exp/imp method. However, I have run into a 10.2.0.4 database, not particularly big, that is so slow with the export method that it is not feasible.
So I am trying the expdp/impdp method, and the expdp completed in 45 minutes (not bad). I have been running the impdp for 21 hours and it is still not complete. It says it is still EXECUTING.
Now I am thinking that I need to do something even more creative. I have a single node server, running 10.2.0.4 database on top of 10.2.0.4 ASM. I can move the database to this server, and try to install the 11.2.0.2 database software to see if I can upgrade in place, then migrate the database via RMAN. The question is, can an 11.2.0.2 database use or live on a 10.2.0.4 ASM?
I am also open to any suggestions on any ideas about how to expedite the exp or impdp processes. This database is a copy of a Peoplesoft database that is used for staging changes with the Stat product. It has 52,063 tables, 201 tablespaces, 26,297 views and it only takes up 17GB of space according to Cloud Control. The expdp file is only 2.4 GB.
Thank you for your time.
Regards
//Karl
For a quick upgrade, try using the transportable tablespace feature; the MOS note pasted below should get you started in the right direction.
Since you're on ASM there will be an RMAN `backup as copy` step somewhere in the process, but generally a transport will be much quicker than an exp or datapump process. For exp we would usually specify the indexes parameter to get separate DDLs to rebuild indexes at the target to get more work done in parallel, and there is probably a similar method with datapump. But the transport might do the trick much more easily.
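A hedged sketch of that transport flow into ASM (the tablespace name, paths, and file names here are made up for illustration, not taken from the poster's environment):

```sql
-- On the source: make the tablespace set read only, then export its metadata.
ALTER TABLESPACE app_data READ ONLY;
-- $ expdp system directory=dp_dir dumpfile=tts.dmp transport_tablespaces=APP_DATA

-- Copy the datafile into the ASM disk group (the "backup as copy" step):
-- RMAN> BACKUP AS COPY DATAFILE '/ora01/app_data01.dbf' FORMAT '+DATA';

-- On the target: plug the tablespace in, pointing at the ASM copy.
-- $ impdp system directory=dp_dir dumpfile=tts.dmp \
--     transport_datafiles='+DATA/dbname/datafile/app_data01.dbf'
ALTER TABLESPACE app_data READ WRITE;
```

Because only metadata goes through Data Pump, the elapsed time is dominated by the datafile copy rather than by row-by-row import.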
1166564.1 Master Note for Transportable Tablespaces (TTS) -- Common Questions and Issues -
Expdp data without storage meta_data
Hi,
I'm working on a process to move database 10.2 from HP-UX with filesystem to Exadata 11.2 with ASM.
We have limited room for downtime on the database and would like to start on Exadata as fresh as possible.
I've just started writing this process but wanted to check whether anyone has ideas or comments on how to speed it up. Each database on HP-UX will be moved separately, one database at a time.
Alternative one:
1. Create ASM and database instance on Exadata
2. Run create script for tablespaces
3. Run create script for users
4. Expdp relevant schemas (not system/sys and statistics)
5. Move the dump files to Exadata (maybe connect the Exadata to our existing SAN)
6. Impdp schemas
7. create statistics
8. Test database
9. Production :-)
Alternative two:
1. Create ASM and database instance on Exadata
2. Run create script for tablespaces
3. Run create script for users
4. Export table, index, etc. metadata, but without storage info (an initial extent of 4 GB on an index, and so on), with dbms_metadata
5. export data based on user tables.
6. Configure external tables in new database on Exadata with ASM
6b. insert data with the use of dump files and select against external tables
7. create statistics
8. test database
9. production
Regarding alternative 1, I do know that it's possible to dump only metadata, but is it possible to dump metadata without the storage info, as described above?
If you have another solution, please feel free to add an alternative 3 and 4 :)
Thanks all.
If you want to have the storage attributes of your segments modified during Data Pump Import, you could do the following:
1) create a tablespace with your desired storage attributes in the target database
2) impdp with transform=storage:n and remap_tablespace
That way, your imported segments can get initial extents of 8m, for example.
(A 4m initial extent will not give you a 4m initial extent on a tablespace with autoallocate; such tablespaces use 64k, 1m, 8m or 64m sized extents instead.)
See here for the documentation of this - it's not Exadata specific:
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_import.htm#BEHEDGJJ
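The two steps above could be captured in a Data Pump parameter file; every name below is illustrative, not from the original post:

```
# import.par (hypothetical names)
directory=dp_dir
dumpfile=exa_full.dmp
# strip the source storage clauses so segments inherit the target tablespace defaults
transform=storage:n
# land the segments in the pre-created tablespace with the desired attributes
remap_tablespace=OLD_TBS:NEW_TBS
```

Run with: impdp system/password parfile=import.par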
Kind regards
Uwe Hesse
"Don't believe it, test it!"
http://uhesse.com
Edited by: Uwe Hesse on 07.05.2012 09:07 added the part with autoallocate -
Hi gurus,
I am getting the following error while running expdp:
[oracle@localhost home]$ expdp scott/tiger@orcl DIRECTORY=test DUMPFILE=full.dmp FULL=y LOGFILE=full.log
Export: Release 10.2.0.1.0 - Production on Sunday, 26 June, 2011 12:16:50
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Hello,
ORA-39070: Unable to open the log file.
It seems there's something wrong with the log file.
How is the directory TEST defined?
Do you use ASM?
If yes, the log file must be written to a disk file; it cannot be written into ASM storage.
You'll find more information about the LOGFILE parameter below:
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_export.htm#SUTIL855
http://download.oracle.com/docs/cd/E11882_01/server.112/e16536/dp_overview.htm#i1009537
Hope this helps.
Best regards,
Jean-Valentin -
I could create an export dump on an ASM disk.
1. How do I transfer the files from ASM to the file system and vice versa?
2. How can I compress the dump file using gzip or some other way?
Thanks.
Ok, I found the following in the Utilities doc:
Using Directory Objects When Automatic Storage Management Is Enabled
If you use Data Pump Export or Import with Automatic Storage Management (ASM) enabled, you must define the directory object used for the dump file so that the ASM disk-group name is used (instead of an operating system directory path). A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the ASM dump file as follows:
SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';
Then you would create a separate directory object for the log file:
SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';
To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;
You would then use the following Data Pump Export command:
expdp hr/hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log
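The doc excerpt covers where the files land, but not the first question (moving a dump file out of ASM to the file system). One option is the DBMS_FILE_TRANSFER package; a hedged sketch reusing the directory objects from the excerpt above:

```sql
-- Copy a dump file from the ASM directory to the OS-path directory,
-- after which it can be gzipped like any ordinary file.
BEGIN
  DBMS_FILE_TRANSFER.COPY_FILE(
    source_directory_object      => 'DPUMP_DIR',   -- '+DATAFILES/' (ASM)
    source_file_name             => 'hr.dmp',
    destination_directory_object => 'DPUMP_LOG',   -- '/homedir/user1/' (filesystem)
    destination_file_name        => 'hr.dmp');
END;
/
```

The same call with the directory objects swapped moves a file back into ASM; the asmcmd utility's cp command (in later releases) is another route.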