DATA-PUMP ERROR: ORA-39070 Database on Linux, Client on Win 2008
Hi,
I want to make a Data Pump export from a Windows 2008 client. I defined the directory object DPDIR as 'C:\DPDIR'.
While running expdp:
expdp login/pass@ora directory=dpdir dumpfile=dump.dmp logfile=log.log full=y
I get these errors (the descriptions were in Polish; translated here):
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
I found out that Data Pump saves files on the Linux server (where the database runs). When I define 'C:\DPDIR', it is not recognized because no such directory exists on Linux.
How can I save the Data Pump export dump file on Windows?
expdp is a server-side utility: it can only create dump and log files on the database server itself, through a DIRECTORY object that points at a server-side path. Export on the Linux server and copy the file to Windows afterwards, or use the legacy client-side exp utility, which does write to the client machine.
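The usual workaround can be sketched like this (the server path below is an illustrative example, not from the post; the DIRECTORY must point at a path that exists on the Linux server's filesystem):

```sql
-- On the Linux database server, as a DBA (hypothetical path):
CREATE OR REPLACE DIRECTORY dpdir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dpdir TO login;

-- Then run expdp from any client; the files still land on the server:
--   expdp login/pass@ora directory=dpdir dumpfile=dump.dmp logfile=log.log full=y
-- Afterwards, copy /u01/app/oracle/dpdump/dump.dmp to the Windows
-- machine by whatever transfer you prefer (scp, SMB share, FTP).
```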
Similar Messages
-
Hi All,
I am getting the following errors when trying to connect with Data Pump. My DB is 10g and the OS is Linux.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-31626: job does not exist
ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at "SYS.DBMS_LOGSTDBY", line 24
ORA-06512: at "SYS.KUPV$FT", line 676
ORA-04063: package body "SYS.DBMS_INTERNAL_LOGSTDBY" has errors
ORA-06508: PL/SQL: could not find program unit being called
When I tried to compile this package, I got the following error:
SQL> alter package DBMS_INTERNAL_LOGSTDBY compile body;
Warning: Package Body altered with compilation errors.
SQL> show error
Errors for PACKAGE BODY DBMS_INTERNAL_LOGSTDBY:
LINE/COL ERROR
1405/4 PL/SQL: SQL Statement ignored
1412/38 PL/SQL: ORA-00904: "SQLTEXT": invalid identifier
1486/4 PL/SQL: SQL Statement ignored
1564/7 PL/SQL: ORA-00904: "DBID": invalid identifier
1751/2 PL/SQL: SQL Statement ignored
1870/7 PL/SQL: ORA-00904: "DBID": invalid identifier
Can anyone suggest/guide me on how to resolve the issue?
Thanks in advance.

SQL> SELECT OBJECT_TYPE, OBJECT_NAME FROM DBA_OBJECTS
  2  WHERE OWNER='SYS' AND STATUS<>'VALID';
OBJECT_TYPE OBJECT_NAME
VIEW DBA_COMMON_AUDIT_TRAIL
PACKAGE BODY DBMS_INTERNAL_LOGSTDBY
PACKAGE BODY DBMS_REGISTRY_SYS
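A common first step for invalid SYS package bodies like these is a dictionary recompile; whether it cures the DBMS_INTERNAL_LOGSTDBY errors depends on why the referenced columns are missing (often an incomplete upgrade), so treat this as a sketch, not a guaranteed fix:

```sql
-- As SYSDBA: recompile all invalid objects, then re-check.
@?/rdbms/admin/utlrp.sql

SELECT object_type, object_name
  FROM dba_objects
 WHERE owner = 'SYS' AND status <> 'VALID';

-- If DBMS_INTERNAL_LOGSTDBY stays invalid with ORA-00904 errors, the
-- underlying dictionary objects are stale; re-running catproc.sql (or the
-- upgrade scripts) as SYSDBA is the usual next step.
```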
Thanks -
Data pump error ORA-39065, status undefined after restart
Hi members,
The Data Pump full import job hung, continue_client also hung, and then all of a sudden the window exited.
;;; Import> status
;;; Import> help
;;; Import> status
;;; Import> continue_client
ORA-39065: unexpected master process exception in RECEIVE
ORA-39078: unable to dequeue message for agent MCP from queue "KUPC$C_1_20090923181336"
Job "SYSTEM"."SYS_IMPORT_FULL_01" stopped due to fatal error at 18:48:03
I increased the shared pool to 100M and then restarted the job with attach=jobname. After restarting, I queried the status and found that everything is UNDEFINED. It still says UNDEFINED now, and the last log message says that the job has been reopened. That is the end of the log file; nothing else is being recorded. I am not sure what is happening now. Any ideas will be appreciated. This is version 10.2.0.3 on Windows. Thanks ...
Job SYS_IMPORT_FULL_01 has been reopened at Wednesday, 23 September, 2009 18:54
Import> status
Job: SYS_IMPORT_FULL_01
Operation: IMPORT
Mode: FULL
State: IDLING
Bytes Processed: 3,139,231,552
Percent Done: 33
Current Parallelism: 8
Job Error Count: 0
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest%u.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest01.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest02.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest03.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest04.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest05.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest06.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest07.dmp
Dump File: D:\oracle\product\10.2.0\admin\devdb\dpdump\devtest08.dmp
Worker 1 Status:
State: UNDEFINED
Worker 2 Status:
State: UNDEFINED
Object Schema: trm
Object Name: EVENT_DOCUMENT
Object Type: DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Completed Objects: 1
Completed Rows: 78,026
Completed Bytes: 4,752,331,264
Percent Done: 100
Worker Parallelism: 1
Worker 3 Status:
State: UNDEFINED
Worker 4 Status:
State: UNDEFINED
Worker 5 Status:
State: UNDEFINED
Worker 6 Status:
State: UNDEFINED
Worker 7 Status:
State: UNDEFINED
Worker 8 Status:
State: UNDEFINED

39065, 00000, "unexpected master process exception in %s"
// *Cause: An unhandled exception was detected internally within the master
// control process for the Data Pump job. This is an internal error.
// messages will detail the problems.
// *Action: If problem persists, contact Oracle Customer Support. -
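For the IDLING job above, reattaching and restarting can be sketched as follows (the job and schema names come from the post; the interactive commands are standard Data Pump):

```sql
-- From SQL*Plus: confirm the job still exists and check its state
SELECT owner_name, job_name, state
  FROM dba_datapump_jobs
 WHERE job_name = 'SYS_IMPORT_FULL_01';

-- From the OS prompt, reattach and restart:
--   impdp system/password attach=SYS_IMPORT_FULL_01
--   Import> start_job
--   Import> status
-- If the job is unrecoverable, end it cleanly with:
--   Import> kill_job
```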
Standard Data Collection Failing with Error ORA-04054: database link does not exist.
Hi Gurus,
When I run Standard Data Collection in the ASCP (APS) instance, R12.1.3, it fails with the error: ORA-04054: database link does not exist.
No database link with the name shown in the error actually exists.
The database link name from the error is also not among the profile values in the database.
I think the concurrent program might be fetching this database link name from some tables related to the plan.
I do not have much knowledge of how ASCP/APS works.
I need your help to resolve this issue.
Thanks.

Hi,
ASCP Collections takes the database link from the instance definitions:
1. Responsibility: Advanced Planning Administrator
2. Navigation: Admin > Instances
You may review the note in support.oracle.com - Understanding DB Links Setup for APS Applications - ASCP and ATP Functionality (Doc ID 813231.1) -
Hi everyone,
I installed R Enterprise on my Oracle 11.2.0.1 database on Win7, using R 2.13.2 and ORE 1.1. I can use part of the functionality, for example:
library(ORE)
options(STERM='iESS', str.dendrogram.last="'", editor='emacsclient.exe', show.error.locations=TRUE)
> ore.connect(user = "RQUSER",password = "RQUSERpsw",conn_string = "", all = TRUE)
> ore.is.connected()
[1] TRUE
> ore.ls()
[1] "IRIS_TABLE"
> demo(package = "ORE")
Demos in package 'ORE':
aggregate Aggregation
analysis Basic analysis & data processing operations
basic Basic connectivity to database
binning Binning logic
columnfns Column functions
cor Correlation matrix
crosstab Frequency cross tabulations
derived Handling of derived columns
distributions Distribution, density, and quantile functions
do_eval Embedded R processing
freqanalysis Frequency cross tabulations
graphics Demonstrates visual analysis
group_apply Embedded R processing by group
hypothesis Hyphothesis testing functions
matrix Matrix related operations
nulls Handling of NULL in SQL vs. NA in R
push_pull RDBMS <-> R data transfer
rank Attributed-based ranking of observations
reg Ordinary least squares linear regression
row_apply Embedded R processing by row chunks
sql_like Mapping of R to SQL commands
stepwise Stepwise OLS linear regression
summary Summary functionality
table_apply Embedded R processing of entire table
> demo("aggregate",package = "ORE")
demo(aggregate)
---- ~~~~~~~~~
Type <Return> to start : Return
> #
> # O R A C L E R E N T E R P R I S E S A M P L E L I B R A R Y
> #
> # Name: aggregate.R
> # Description: Demonstrates aggregations
> # See also summary.R
> #
> #
> #
>
> ## Set page width
> options(width = 80)
> # List all accessible tables and views in the Oracle database
> ore.ls()
[1] "IRIS_TABLE"
> # Create a new table called IRIS_TABLE in the Oracle database
> # using the built-in iris data.frame
>
> # First remove previously created IRIS_TABLE objects from the
> # global environment and the database
> if (exists("IRIS_TABLE", globalenv(), inherits = FALSE))
+ rm("IRIS_TABLE", envir = globalenv())
> ore.drop(table = "IRIS_TABLE")
> # Create the table
> ore.create(iris, table = "IRIS_TABLE")
> # Show the updated list of accessible table and views
> ore.ls()
[1] "IRIS_TABLE"
> # Display the class of IRIS_TABLE and where it can be found in
> # the search path
> class(IRIS_TABLE)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> search()
[1] ".GlobalEnv" "ore:RQUSER" "ESSR"
[4] "package:ORE" "package:ORExml" "package:OREeda"
[7] "package:OREgraphics" "package:OREstats" "package:MASS"
[10] "package:OREbase" "package:ROracle" "package:DBI"
[13] "package:stats" "package:graphics" "package:grDevices"
[16] "package:utils" "package:datasets" "package:methods"
[19] "Autoloads" "package:base"
> find("IRIS_TABLE")
[1] "ore:RQUSER"
> # Select count(Petal.Length) group by species
> x = aggregate(IRIS_TABLE$Petal.Length,
+ by = list(species = IRIS_TABLE$Species),
+ FUN = length)
> class(x)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> x
species x
1 setosa 50
2 versicolor 50
3 virginica 50
> # Repeat FUN = summary, mean, min, max, sd, median, IQR
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = summary)
species Min. 1st Qu. Median Mean 3rd Qu. Max. NA's
1 setosa 1.0 1.4 1.50 1.462 1.575 1.9 0
2 versicolor 3.0 4.0 4.35 4.260 4.600 5.1 0
3 virginica 4.5 5.1 5.55 5.552 5.875 6.9 0
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = mean)
species x
1 setosa 1.462
2 versicolor 4.260
3 virginica 5.552
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = min)
species x
1 setosa 1.0
2 versicolor 3.0
3 virginica 4.5
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = max)
species x
1 setosa 1.9
2 versicolor 5.1
3 virginica 6.9
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = sd)
species x
1 setosa 0.1736640
2 versicolor 0.4699110
3 virginica 0.5518947
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = median)
species x
1 setosa 1.50
2 versicolor 4.35
3 virginica 5.55
> aggregate(IRIS_TABLE$Petal.Length, by = list(species = IRIS_TABLE$Species),
+ FUN = IQR)
species x
1 setosa 0.175
2 versicolor 0.600
3 virginica 0.775
> # More than one grouping column
> x = aggregate(IRIS_TABLE$Petal.Length,
+ by = list(species = IRIS_TABLE$Species,
+ width = IRIS_TABLE$Petal.Width),
+ FUN = length)
> x
species width x
1 setosa 0.1 5
2 setosa 0.2 29
3 setosa 0.3 7
4 setosa 0.4 7
5 setosa 0.5 1
6 setosa 0.6 1
7 versicolor 1.0 7
8 versicolor 1.1 3
9 versicolor 1.2 5
10 versicolor 1.3 13
11 versicolor 1.4 7
12 virginica 1.4 1
13 versicolor 1.5 10
14 virginica 1.5 2
15 versicolor 1.6 3
16 virginica 1.6 1
17 versicolor 1.7 1
18 virginica 1.7 1
19 versicolor 1.8 1
20 virginica 1.8 11
21 virginica 1.9 5
22 virginica 2.0 6
23 virginica 2.1 6
24 virginica 2.2 3
25 virginica 2.3 8
26 virginica 2.4 3
27 virginica 2.5 3
> # Sort the result by ascending value of count
> ore.sort(data = x, by = "x")
species width x
1 virginica 1.4 1
2 virginica 1.7 1
3 versicolor 1.7 1
4 virginica 1.6 1
5 setosa 0.5 1
6 setosa 0.6 1
7 versicolor 1.8 1
8 virginica 1.5 2
9 versicolor 1.1 3
10 virginica 2.4 3
11 virginica 2.5 3
12 virginica 2.2 3
13 versicolor 1.6 3
14 setosa 0.1 5
15 virginica 1.9 5
16 versicolor 1.2 5
17 virginica 2.0 6
18 virginica 2.1 6
19 setosa 0.3 7
20 versicolor 1.4 7
21 setosa 0.4 7
22 versicolor 1.0 7
23 virginica 2.3 8
24 versicolor 1.5 10
25 virginica 1.8 11
26 versicolor 1.3 13
27 setosa 0.2 29
> # by descending value
> ore.sort(data = x, by = "x", reverse = TRUE)
species width x
1 setosa 0.2 29
2 versicolor 1.3 13
3 virginica 1.8 11
4 versicolor 1.5 10
5 virginica 2.3 8
6 setosa 0.4 7
7 setosa 0.3 7
8 versicolor 1.0 7
9 versicolor 1.4 7
10 virginica 2.1 6
11 virginica 2.0 6
12 virginica 1.9 5
13 versicolor 1.2 5
14 setosa 0.1 5
15 versicolor 1.6 3
16 versicolor 1.1 3
17 virginica 2.4 3
18 virginica 2.5 3
19 virginica 2.2 3
20 virginica 1.5 2
21 virginica 1.6 1
22 virginica 1.4 1
23 setosa 0.6 1
24 setosa 0.5 1
25 versicolor 1.8 1
26 virginica 1.7 1
27 versicolor 1.7 1
> # Preserve just 1 row for duplicate x's
> ore.sort(data = x, by = "x", unique.keys = TRUE)
species width x
1 setosa 0.5 1
2 virginica 1.5 2
3 versicolor 1.1 3
4 setosa 0.1 5
5 virginica 2.0 6
6 setosa 0.3 7
7 virginica 2.3 8
8 versicolor 1.5 10
9 virginica 1.8 11
10 versicolor 1.3 13
11 setosa 0.2 29
> ore.sort(data = x, by = "x", unique.keys = TRUE, unique.data = TRUE)
species width x
1 setosa 0.5 1
2 virginica 1.5 2
3 versicolor 1.1 3
4 setosa 0.1 5
5 virginica 2.0 6
6 setosa 0.3 7
7 virginica 2.3 8
8 versicolor 1.5 10
9 virginica 1.8 11
10 versicolor 1.3 13
11 setosa 0.2 29
But when I use the ore.doEval command, I get these errors:
> ore.doEval(function() { 123 })
Error in .oci.GetQuery(conn, statement, ...) :
ORA-29400: data cartridge error
ORA-24323: ?????
ORA-06512: at "RQSYS.RQEVALIMPL", line 23
ORA-06512: at line 4
And when I try to run demo("row_apply", package = "ORE"), I get the same errors:
demo("row_apply",package = "ORE")
demo(row_apply)
---- ~~~~~~~~~
Type <Return> to start : Return
> #
> # O R A C L E R E N T E R P R I S E S A M P L E L I B R A R Y
> #
> # Name: row_apply.R
> # Description: Execute R code on each row
> #
> #
>
> ## Set page width
> options(width = 80)
> # List all accessible tables and views in the Oracle database
> ore.ls()
[1] "IRIS_TABLE"
> # Create a new table called IRIS_TABLE in the Oracle database
> # using the built-in iris data.frame
>
> # First remove previously created IRIS_TABLE objects from the
> # global environment and the database
> if (exists("IRIS_TABLE", globalenv(), inherits = FALSE))
+ rm("IRIS_TABLE", envir = globalenv())
> ore.drop(table = "IRIS_TABLE")
> # Create the table
> ore.create(iris, table = "IRIS_TABLE")
> # Show the updated list of accessible table and views
> ore.ls()
[1] "IRIS_TABLE"
> # Display the class of IRIS_TABLE and where it can be found in
> # the search path
> class(IRIS_TABLE)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> search()
[1] ".GlobalEnv" "ore:RQUSER" "ESSR"
[4] "package:ORE" "package:ORExml" "package:OREeda"
[7] "package:OREgraphics" "package:OREstats" "package:MASS"
[10] "package:OREbase" "package:ROracle" "package:DBI"
[13] "package:stats" "package:graphics" "package:grDevices"
[16] "package:utils" "package:datasets" "package:methods"
[19] "Autoloads" "package:base"
> find("IRIS_TABLE")
[1] "ore:RQUSER"
> # The table should now appear in your R environment automatically
> # since you have access to the table now
> ore.ls()
[1] "IRIS_TABLE"
> # This is a database resident table with just metadata on the R side.
> # You will see this below
> class(IRIS_TABLE)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
> # Apply given R function to each row
> ore.rowApply(IRIS_TABLE,
+ function(dat) {
+ # Any R code goes here. Operates on one row of IRIS_TABLE at
+ # a time
+ cbind(dat, dat$Petal.Length)
+ })
Error in .oci.GetQuery(conn, statement, ...) :
ORA-29400: data cartridge error
ORA-24323: ?????
ORA-06512: at "RQSYS.RQROWEVALIMPL", line 26
ORA-06512: at line 4
>
Does my Oracle version 11.2.0.1 lack the RDBMS bug fix, or is there some other problem? Thanks.

Oracle R Enterprise 1.1 requires Oracle Database 11.2.0.3 or 11.2.0.4, on Linux and Windows. Oracle R Enterprise can also work with an 11.2.0.1 or 11.2.0.2 database if it is properly patched.
Embedded R execution will not work without a patched database. Follow this procedure to patch the database:
1. Go to My Oracle Support:http://support.oracle.com
2. Log in and supply your Customer Support ID (CSI).
3. Choose the Patches & Updates tab.
4. In the Patch Search box, type 11678127 and click Search.
5. Select the patch for your version of Oracle Database, 11.2.0.1.
6. Click Download to download the patch.
7. Install the patch using OPatch. Ensure that you are using the latest version of OPatch.
Sherry -
Database not found/Error: ORA-16621: database name for ADD DATABASE must be unique
I am new to Data Guard and am trying to set up Data Guard Broker. I created a configuration with both my primary and standby databases, and at one time I could show both. But now I can no longer show the standby database, nor can I enable, disable, or reinstate it. Here is what I have:
Primary Database: orcl10g
Standby Database: 10gSB
DGMGRL> show configuration
Configuration
Name: orcl10g
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl10g - Primary database
10gSB - Physical standby database
Current status for "orcl10g":
SUCCESS
DGMGRL> show database verbose orcl10g
Database
Name: orcl10g
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
orcl10g
Properties:
InitialConnectIdentifier = 'orcl10g'
LogXptMode = 'ASYNC'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '30'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = '10gSB, orcl10g'
LogFileNameConvert = '/oracle/oracle/product/10.2.0/oradata/orcl10g/redo01.log, /oracle/oracle/product/10.2.0/oradata/10gSB/redo01.log, /oracle/oracle/product/10.2.0/oradata/orcl10g/redo02.log, /oracle/oracle/product/10.2.0/oradata/10gSB/redo02.log, /oracle/oracle/product/10.2.0/oradata/orcl10g/redo03.log, /oracle/oracle/product/10.2.0/oradata/10gSB/redo03.log'
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'remarkable.mammothnetworks.com'
SidName = 'orcl10g'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=remarkable.mammothnetworks.com)(PORT=1521))'
StandbyArchiveLocation = '/oracle/flash_recovery_area/orcl10g/archivelog'
AlternateLocation = ''
LogArchiveTrace = '1024'
LogArchiveFormat = '%t_%s_%r.arc'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'
Current status for "orcl10g":
SUCCESS
DGMGRL> show database verbose 10gSB
Object "10gsb" was not found
DGMGRL>
DGMGRL> remove database 10gSB
Object "10gsb" was not found
DGMGRL>
DGMGRL> reinstate database 10gSB
Object "10gsb" was not found
DGMGRL>
DGMGRL> enable database 10gSB
Object "10gsb" was not found
DGMGRL>
DGMGRL> add database '10gSB' as
connect identifier is 10gSB
maintained as physical;
Error: ORA-16621: database name for ADD DATABASE must be unique
Failed.
How can I get Data Guard to see the standby database correctly again?

Thank you for the constructive feedback. I have been able to make progress on this issue.
I checked the Data Guard log files as you suggested. I did not find anything when I checked them before, but this time I found the following:
DG 2011-06-16-17:23:18 0 2 0 RSM detected log transport problem: log transport for database '10gSB' has the following error.
DG 2011-06-16-17:23:18 0 2 0 RSM0: HEALTH CHECK ERROR: ORA-16737: the redo transport service for standby database "10gSB" has an error
DG 2011-06-16-17:23:18 0 2 0 NSV1: Failed to connect to remote database 10gSB. Error is ORA-12514
DG 2011-06-16-17:23:18 0 2 0 RSM0: Failed to connect to remote database 10gSB. Error is ORA-12514
DG 2011-06-16-17:23:18 0 2 753988034 Operation CTL_GET_STATUS cancelled during phase 2, error = ORA-16778
DG 2011-06-16-17:23:18 0 2 753988034 Operation CTL_GET_STATUS cancelled during phase 2, error = ORA-16778
I verified that I am able to connect to both the primary and standby databases via external connections:
-bash-3.2$ lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 17-JUN-2011 12:41:03
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 17-JUN-2011 01:40:30
Uptime 0 days 11 hr. 0 min. 32 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /oracle/oracle/product/10.2.0/db_1/network/admin/listener.ora
Listener Log File /oracle/oracle/product/10.2.0/db_1/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=remarkable.mammothnetworks.com)(PORT=1521)))
Services Summary...
Service "10gSB" has 1 instance(s).
Instance "10gSB", status READY, has 1 handler(s) for this service...
Service "10gSB_DGB" has 1 instance(s).
Instance "10gSB", status READY, has 1 handler(s) for this service...
Service "10gSB_XPT" has 1 instance(s).
Instance "10gSB", status READY, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "orcl10g" has 1 instance(s).
Instance "orcl10g", status READY, has 1 handler(s) for this service...
Service "orcl10gXDB" has 1 instance(s).
Instance "orcl10g", status READY, has 1 handler(s) for this service...
Service "orcl10g_DGB" has 1 instance(s).
Instance "orcl10g", status READY, has 1 handler(s) for this service...
Service "orcl10g_XPT" has 1 instance(s).
Instance "orcl10g", status READY, has 1 handler(s) for this service...
The command completed successfully
-bash-3.2$
-bash-3.2$
-bash-3.2$ sqlplus system/dbas4ever@orcl10g
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 17 12:43:41 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
-bash-3.2$ sqlplus system/dbas4ver@10gSB
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 17 12:43:59 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress <== I think this is normal since the database is in mount mode
Enter user-name:
I also checked the listener log file and saw an error associated with a known bug:
WARNING: Subscription for node down event still pending
So I added the following to the listener.ora file and bounced the listener:
SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF
That seems to have taken care of the error.
The following is my listener.ora file:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1)
(PROGRAM = extproc)
(SID_DESC = ( GLOBAL_DBNAME = 10gsb_DGMGRL.remarkable.mammothnetworks.com )
( SERVICE_NAME = 10gsb.remarkable.mammothnetworks.com )
( SID_NAME = 10gsb )
( ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1 )
(SID_DESC = ( GLOBAL_DBNAME = orcl10g_DGMGRL.remarkable.mammothnetworks.com )
( SERVICE_NAME = orcl10g.remarkable.mammothnetworks.com )
( SID_NAME = orcl10g )
( ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1 )
(SID_DESC = ( GLOBAL_DBNAME = orcl10g.remarkable.mammothnetworks.com )
( SERVICE_NAME = orcl10g.remarkable.mammothnetworks.com )
( SID_NAME = orcl10g )
( ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1 )
(SID_DESC = ( GLOBAL_DBNAME = 10gsb.remarkable.mammothnetworks.com )
( SERVICE_NAME = 10gsb.remarkable.mammothnetworks.com )
( SID_NAME = 10gsb )
( ORACLE_HOME = /oracle/oracle/product/10.2.0/db_1 )
SUBSCRIBE_FOR_NODE_DOWN_EVENT_LISTENER=OFF
I again tried connecting externally to the standby database:
-bash-3.2$ sqlplus system/dbas4ever@10gSB
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 17 13:09:00 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
ERROR:
ORA-01033: ORACLE initialization or shutdown in progress
Enter user-name:
and see this in the listener.log file:
17-JUN-2011 13:10:22 * (CONNECT_DATA=(SERVICE_NAME=10gSB_XPT)(SERVER=dedicated)(CID=(PROGRAM=oracle)(HOST=remarkable.mammothnetworks.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=199.187.124.130)(PORT=11357)) * establish * 10gSB_XPT * 0
17-JUN-2011 13:10:22 * (CONNECT_DATA=(SERVICE_NAME=10gSB_XPT)(SERVER=dedicated)(CID=(PROGRAM=oracle)(HOST=remarkable.mammothnetworks.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=199.187.124.130)(PORT=11358)) * establish * 10gSB_XPT * 0
17-JUN-2011 13:10:24 * service_update * 10gSB * 0
17-JUN-2011 13:10:30 * (CONNECT_DATA=(SID=orcl10g)(CID=(PROGRAM=perl)(HOST=remarkable.mammothnetworks.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=25119)) * establish * orcl10g * 0
17-JUN-2011 13:10:30 * (CONNECT_DATA=(SID=orcl10g)(CID=(PROGRAM=perl)(HOST=remarkable.mammothnetworks.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=25120)) * establish * orcl10g * 0
17-JUN-2011 13:10:30 * (CONNECT_DATA=(SID=orcl10g)(CID=(PROGRAM=emagent)(HOST=localhost.localdomain)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=25121)) * establish * orcl10g * 0
17-JUN-2011 13:11:22 * (CONNECT_DATA=(SERVICE_NAME=10gSB_XPT)(SERVER=dedicated)(CID=(PROGRAM=oracle)(HOST=remarkable.mammothnetworks.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=199.187.124.130)(PORT=11420)) * establish * 10gSB_XPT * 0
17-JUN-2011 13:11:22 * (CONNECT_DATA=(SERVICE_NAME=10gSB_XPT)(SERVER=dedicated)(CID=(PROGRAM=oracle)(HOST=remarkable.mammothnetworks.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=199.187.124.130)(PORT=11422)) * establish * 10gSB_XPT * 0
17-JUN-2011 13:11:24 * service_update * 10gSB * 0
I tried again to see the database in Data Guard Broker:
DGMGRL> show database 10gSB
Object "10gsb" was not found
however, I then was able to add the database in Data Guard Broker:
DGMGRL> add database 10gSB
as connect identifier is 10gSB
maintained as physical;
Database "10gsb" added  <== this is progress!!!
However, the configuration shows the following:
DGMGRL> show database 10gSB
Database
Name: 10gsb
Role: PHYSICAL STANDBY
Enabled: NO
Intended State: OFFLINE
Instance(s):
10gSB
Current status for "10gsb":
DISABLED <=====
So I tried to enable the database:
DGMGRL> enable database 10gSB
Error: ORA-16626: failed to enable specified object
Failed.
and I tried to reinstate the database:
DGMGRL> reinstate database 10gSB
Reinstating database "10gsb", please wait...
Error: ORA-16653: failed to reinstate database
Failed.
Reinstatement of database "10gsb" failed
So I checked the configuration and now see two entries for the standby database but with case differences:
DGMGRL> show configuration
Configuration
Name: orcl10g
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
orcl10g - Primary database
10gSB - Physical standby database
10gsb - Physical standby database (disabled)
Current status for "orcl10g":
SUCCESS
Question: How do I get rid of 10gSB and enable 10gsb? -
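One explanation for the duplicate 10gSB/10gsb entries in the question above: DGMGRL folds unquoted names to lower case, so the database was registered once as entered and once lower-cased. Quoting the name usually lets you address the mixed-case entry (a sketch; exact behavior varies by 10.2 patch level):

```sql
-- In DGMGRL:
--   DGMGRL> remove database '10gSB';   -- quotes preserve the mixed case
--   DGMGRL> enable database 10gsb;     -- unquoted names are lower-cased
-- If the stale entry cannot be removed, dropping and recreating the broker
-- configuration (remove configuration; create configuration ...) is the
-- heavier fallback.
```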
Differences between using Data Pump to back up database and using RMAN ?
What are the differences between using Data Pump to back up a database and using RMAN? What are the pros and cons?
Thanks.

Search for "Database backup" in
http://docs.oracle.com/cd/B28359_01/server.111/b28318/backrec.htm#i1007289
In short
RMAN -> Physical backup.(copies of physical database files)
Datapump -> Logical backup.(logical data such as tables,procedures)
Docs for RMAN--
http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmcncpt.htm#
Docs for Datapump
http://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_overview.htm
Edited by: Sunny kichloo on Jul 5, 2012 6:55 AM -
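The physical/logical distinction above can be made concrete with one minimal command of each kind (illustrative command lines, not taken from the linked docs):

```sql
-- RMAN: physical backup (datafiles, control file, archived logs);
-- supports media recovery and point-in-time recovery of the database.
--   RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

-- Data Pump: logical export (DDL plus row data); good for moving or
-- copying schemas between databases, but it cannot recover a crashed DB.
--   expdp system/password directory=dpdir dumpfile=full.dmp full=y
```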
Dear All,
Using Data Pump to export data for a schema:
#!/bin/sh
PS1='$PWD # '
ORACLE_BASE=/orabin/oracle
ORACLE_HOME=/orabin/oracle/product/10.1.0
ORACLE_SID=vimadb
PATH=$ORACLE_HOME/bin:$PATH:.
export PATH PS1 ORACLE_BASE ORACLE_HOME ORACLE_SID
/orabin/oracle/product/10.1.0/bin/expdp vproddta/vproddta@vimadb schemas=vproddta EXCLUDE=STATISTICS directory=datadir1 dumpfile=datadir1:`date '+Full_expdp_vproddta_%d%m%y_%H%M'`.dmp logfile=datadir1:`date '+Full_expdp_vproddta_%d%m%y_%H%M'`.log

The directory has already been created at the OS level:
/syslog/datapump/
SQL> create directory datadir1 as '/syslog/datapump/';
SQL> grant read, write on directory datadir1 to vproddta;

I am getting the error:
Export: Release 10.1.0.4.0 - 64bit Production on Wednesday, 10 August, 2011 16:52
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Release 10.1.0.4.0 - 64bit Production
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation

1. Check the OS permissions on this directory.
2. Try to create the export using a simple name, like "export.dmp"; if Data Pump succeeds, then check the generated filename. -
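The permission check in step 1 can be scripted. A minimal sketch: on the real server the path would be /syslog/datapump and the commands should run as the OS user that owns the Oracle processes (typically 'oracle'); a scratch directory is used here so the commands are runnable anywhere:

```shell
# Check that the directory behind a Data Pump DIRECTORY object is writable.
# Replace DIR with /syslog/datapump on the real server.
DIR=${DIR:-/tmp/dp_perm_check}
mkdir -p "$DIR"
if touch "$DIR/dp_write_test" 2>/dev/null; then
  echo "writable"
  rm -f "$DIR/dp_write_test"
else
  echo "NOT writable"
fi
# Shows owner/group/permissions; fix with chown/chmod if the test failed.
ls -ld "$DIR"
```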
Data pump export full RAC database in window single DB by network_link
Hi Experts,
I have a Windows 32-bit 10.2 database.
I am trying to export a full RAC database (350 GB, same version as the Windows DB) into the Windows single-instance database over a database link.
The expdp syntax is:
expdp salemanager/********@sale FULL=y DIRECTORY=dataload NETWORK_LINK=sale.net DUMPFILE=sale20100203.dmp LOGFILE=salelog20100203.log
I created a database link with fixed instance 3. It ran for two days and then displayed this message:
ORA-31693: Table data object "SALE_AUDIT"."AU_ITEM_IN" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEPOPULATE callout
ORA-01555: snapshot too old: rollback segment number with name "" too small
ORA-02063: preceding line from sale.netL
I stopped the export and checked the Windows target alert log.
I saw messages like:
kupprdp: master process DM00 started with pid=16, OS id=4444
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_02', 'SYSTEM', 'KUPC$C_1_20100202235235', 'KUPC$S_1_20100202235235', 0);
Tue Feb 02 23:56:12 2010
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=17, OS id=4024
to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_FULL_01', 'SALE', 'KUPC$C_1_20100202235612', 'KUPC$S_1_20100202235612', 0);
kupprdp: worker process DW01 started with worker id=1, pid=18, OS id=2188
to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_FULL_01', 'SALE');
In the RAC instance alert.log I saw messages like:
SELECT /*+ NO_PARALLEL ("KU$") */ "ID","RAW_DATA","TRANSM_ID","RECEIVED_UTC_DATE ","RECEIVED_FROM","ACTION","ORAUSER",
"ORADATE" FROM RELATIONAL("SALE_AUDIT"."A U_ITEM_IN") "KU$"
How do I fix this error?
Should I add more undo tablespace space in RAC instance 3 or in the Windows database?
Thanks
Jim
Edited by: user589812 on Feb 4, 2010 10:15 AM

I usually increase undo space. Is your undo retention set smaller than the time it takes to run the job? If it is, I would think you would need to increase it; if not, then it would be the space. You were in the process of exporting data when the job failed, which is what I would have expected. Basically, Data Pump wants to export each table consistent with itself. Let's say one of your tables is partitioned, with one large partition and one smaller one. Data Pump attempts to export the larger partitions first and remembers the SCN for that partition. When the smaller partitions are exported, it uses that SCN to get the data from each partition as it would have looked when the first partition was exported. If you don't have partitioned tables, then do you know whether some of the tables in the export job (it's a full export, so that includes just about all of them) are having data added to or removed from them? I can't think of anything else that would need undo while exporting data.
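Following up on the undo advice, the relevant settings can be checked like this (a sketch; compare undo_retention against the export job's elapsed time):

```sql
-- Undo retention (seconds) and which tablespace holds undo
SELECT name, value
  FROM v$parameter
 WHERE name IN ('undo_retention', 'undo_tablespace');

-- Size and autoextend status of the undo datafiles
SELECT file_name, ROUND(bytes/1024/1024) AS mb, autoextensible
  FROM dba_data_files
 WHERE tablespace_name = (SELECT value FROM v$parameter
                           WHERE name = 'undo_tablespace');
```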
Dean -
DECLARE
ind NUMBER; -- Loop index
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
BEGIN
-- Create a (user-named) Data Pump job to do a schema export.
h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE1','LATEST');
-- Specify a single dump file for the job (using the handle just returned)
-- and a directory object, which must already be defined and accessible
-- to the user running this procedure.
--BACKUP DIRECTORY NAME
DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','BACKUP');
-- A metadata filter is used to specify the schema that will be exported.
--ORVETL USER NAME
DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''orvetl'')');
-- Start the job. An exception will be generated if something is not set up
-- properly.
DBMS_DATAPUMP.START_JOB(h1);
-- The export job should now be running. In the following loop, the job
-- is monitored until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- If any work-in-progress (WIP) or error messages were received for the job,
-- display them.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
END;
error-
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2926
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3162
ORA-06512: at line 20
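For what it's worth, ORA-39001 coming straight out of ADD_FILE often points at the directory object; before rerunning, it may be worth confirming that 'BACKUP' exists and is visible to the executing user (a sketch; the path and grantee below are placeholders):

```sql
-- Can this user see the directory object referenced by ADD_FILE?
SELECT directory_name, directory_path
  FROM all_directories
 WHERE directory_name = 'BACKUP';

-- If it is missing, create and grant it (run as a privileged user;
-- path and grantee are examples only):
-- CREATE DIRECTORY backup AS '/u01/app/oracle/backup';
-- GRANT READ, WRITE ON DIRECTORY backup TO some_user;
```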
Message was edited by:
anutosh
I assume all the other dimensions are being specified via a load rule header (i.e. the rule is otherwise valid).
What is your data source? What does the number (data) format look like? Can you identify (and post) specific rows that are causing the error? -
Change Data Capture error ORA-31428 at the subscription step?
I am following this cookbook: http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_cdc_cookbook_0206.pdf. It was very helpful, but at the subscription step, when I give the same list of columns that I provided to create_change_table in column_type_list, I receive this error:
ORA-31428 : No publication contains all of the specified columns. One or more of the specified columns cannot be found in a single publication. Consult the ALL_PUBLISHED_COLUMNS view to see the current publications and change the subscription request to select only the columns that are in the same publication.
When I check the view mentioned, ALL_PUBLISHED_COLUMNS, my columns are all listed there - strange behaviour. I searched for comments on forums.oracle.com, metalink.oracle.com and even Google, but found nothing beyond the explanation above :(
If you have any comments it would be great, thank you again.
Best regards.
Hotlog Source : 9iR2 Solaris
Hotlog Target : 10gR2 Solaris
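Since ORA-31428 means no single publication covers all the requested columns, it may help to see which publication each column belongs to (a sketch; check DESC all_published_columns first, as the PUB_ID column name is an assumption on my part):

```sql
-- If the seven columns fall under different publication IDs, no single
-- publication covers the whole column_list, which would explain ORA-31428.
SELECT pub_id, column_name
  FROM all_published_columns
 WHERE source_schema_name = 'UDB'
   AND source_table_name  = 'TCON'
 ORDER BY pub_id, column_name;
```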
begin
dbms_cdc_publish.create_change_table(
owner => 'cdc_stg_pub',
change_table_name => 'udb_tcon_ct',
change_set_name => 'udb_tcon_set',
source_schema => 'udb',
source_table => 'tcon',
column_type_list => 'ncon number(12), ncst number(12), dwhencon date, twhomcon varchar2(50), cchancon number(3), cacticon number(5), tdatacon varchar2(1000)',
capture_values => 'both',
rs_id => 'y',
row_id => 'n',
user_id => 'n',
timestamp => 'y',
object_id => 'n',
source_colmap => 'n',
target_colmap => 'y',
options_string => null) ;
end ;
select x.change_set_name, x.column_name from ALL_PUBLISHED_COLUMNS x ;
begin
dbms_cdc_subscribe.create_subscription(
change_set_name => 'udb_tcon_set',
description => 'UDB TCON change subscription',
subscription_name => 'udb_tcon_sub1');
end;
begin
dbms_cdc_subscribe.subscribe(
subscription_name => 'udb_tcon_sub1',
source_schema => 'udb',
source_table => 'tcon',
column_list => 'ncon,ncst,dwhencon,twhomcon,cchancon,cacticon,tdatacon',
subscriber_view => 'udb_tcon_chg_view') ;
end ;
CHANGE_SET_NAME COLUMN_NAME
UDB_TCON_SET NCON
UDB_TCON_SET NCST
UDB_TCON_SET DWHENCON
UDB_TCON_SET TDATACON
UDB_TCON_SET CCHANCON
UDB_TCON_SET CACTICON
UDB_TCON_SET TWHOMCON
7 rows selected
PL/SQL procedure successfully completed
begin
dbms_cdc_subscribe.subscribe(
subscription_name => 'udb_tcon_sub1',
source_schema => 'udb',
source_table => 'tcon',
column_list => 'ncon,ncst,dwhencon,twhomcon,cchancon,cacticon,tdatacon',
subscriber_view => 'udb_tcon_chg_view') ;
end ;
ORA-31428: no publication contains all the specified columns
ORA-06512: at "SYS.DBMS_CDC_SUBSCRIBE", line 19
ORA-06512: at line 2
I added the OS and Oracle versions of source and target.
Message was edited by:
TongucY
Nice catch - the error changed, but it is still strange:
SQL> select upper('ncon,ncst,dwhencon,twhomcon,cchancon,cacticon,tdatacon') from dual ;
UPPER('NCON,NCST,DWHENCON,TWHO
NCON,NCST,DWHENCON,TWHOMCON,CCHANCON,CACTICON,TDATACON
SQL> begin
2 dbms_cdc_subscribe.subscribe(
3 subscription_name => 'udb_tcon_sub1',
4 source_schema => 'udb',
5 source_table => 'tcon',
6 column_list => 'NCON,NCST,DWHENCON,TWHOMCON,CCHANCON,CACTICON,TDATACON',
7 subscriber_view => 'udb_tcon_chg_view') ;
8 end ;
9 /
begin
dbms_cdc_subscribe.subscribe(
subscription_name => 'udb_tcon_sub1',
source_schema => 'udb',
source_table => 'tcon',
column_list => 'NCON,NCST,DWHENCON,TWHOMCON,CCHANCON,CACTICON,TDATACON',
subscriber_view => 'udb_tcon_chg_view') ;
end ;
ORA-31466: no publications found
ORA-06512: at "SYS.DBMS_CDC_SUBSCRIBE", line 19
ORA-06512: at line 2 -
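Given the ORA-31466 above, one more thing worth trying (this is an assumption on my part, since the CDC dictionary stores identifiers in upper case): pass the schema and table names in upper case as well, not just the column list:

```sql
begin
  dbms_cdc_subscribe.subscribe(
    subscription_name => 'udb_tcon_sub1',
    source_schema     => 'UDB',   -- upper case, matching the dictionary
    source_table      => 'TCON',  -- upper case, matching the dictionary
    column_list       => 'NCON,NCST,DWHENCON,TWHOMCON,CCHANCON,CACTICON,TDATACON',
    subscriber_view   => 'udb_tcon_chg_view');
end;
/
```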
I am trying to add a standby database to the broker configuration and am getting this error:
ORA-16796: one or more properties could not be imported from the database
Are there any specific checks I can do to locate the error?
Thanks,
ramya
Hi..
What is the Oracle version?
Refer to Metalink Doc ID: 194529.1.
From Metalink, for 10g:
>
Error: ORA-16796 (ORA-16796)
Text: One or more properties could not be imported from the database.
Cause: The broker was unable to import property values for the database
being added to the broker configuration. This error indicates:
- the net-service-name specified in DGMGRL's CREATE CONFIGURATION or
ADD DATABASE command is not one that provides access to the
database being added, or
- there are no instances running for the database being added.
Action: Remove the database from the configuration using the REMOVE
CONFIGURATION or REMOVE DATABASE command. Make sure that the
database to be added has at least one instance running and that
the net-service-name provides access to the running instance. Then
reissue the CREATE CONFIGURATION or ADD DATABASE command.
>
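Beyond the quoted text, a couple of quick checks may help narrow it down (a hedged sketch; the service name is a placeholder for your environment):

```sql
-- Run on the standby: confirm an instance is up and the broker is started.
SELECT instance_name, status FROM v$instance;
SELECT value FROM v$parameter WHERE name = 'dg_broker_start';

-- Run from the primary host: confirm the net service name used in
-- ADD DATABASE actually resolves to the standby, e.g.:
--   tnsping standby_service_name
--   sqlplus sys@standby_service_name as sysdba
```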
Anand
Edited by: Anand... on Mar 15, 2009 9:44 AM -
ORA-01157: cannot identify/lock data file error in standby database.
Hi,
I have a primary database and a standby database (11.2.0.1.0) running in ASM with different diskgroup names. I applied an incremental backup on the standby database to resolve an archive log gap, generated a standby controlfile on the primary database, and restored that controlfile on the standby. But when I start the MRP process it does not start, and the alert log throws ORA-01157: cannot identify/lock data file. When I query the standby database files, the locations shown are the primary database datafile names, not the standby's.
PRIMARY DATABASE
SQL> select name from v$datafile;
NAME
+DATA/oradb/datafile/system.256.788911005
+DATA/oradb/datafile/sysaux.257.788911005
+DATA/oradb/datafile/undotbs1.258.788911005
+DATA/oradb/datafile/users.259.788911005
STANDBY DATABASE
SQL> select name from v$datafile;
NAME
+STDBY/oradb/datafile/system.256.788911005
+STDBY/oradb/datafile/sysaux.257.788911005
+STDBY/oradb/datafile/undotbs1.258.788911005
+STDBY/oradb/datafile/users.259.788911005
The Actual physical location of standby database files in ASM in standby server is shown below
ASMCMD> pwd
+STDBY/11gdb/DATAFILE
ASMCMD>
ASMCMD> ls
SYSAUX.259.805921967
SYSTEM.258.805921881
UNDOTBS1.260.805922023
USERS.261.805922029
ASMCMD>
ASMCMD> pwd
+STDBY/11gdb/DATAFILE
I even tried to rename the datafiles in the standby database, but it throws this error:
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01275: Operation RENAME is not allowed if standby file management is
automatic.
Regards,
007
Hi Saurabh,
I tried to rename the datafiles in the standby database after restoring, and it throws the below error:
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01275: Operation RENAME is not allowed if standby file management is
automatic.
Also, in my pfile I have set the below parameters:
*.db_create_file_dest='+STDBY'
*.db_domain=''
*.db_file_name_convert='+DATA','+STDBY'
*.db_name='ORADB'
*.db_unique_name='11GDB'
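For the record, a hedged sketch of the usual sequence for ORA-01275 (the file names below are taken from the ASMCMD listing above, but verify them in your environment): temporarily switch file management to MANUAL, rename, then switch it back.

```sql
-- Temporarily allow manual renames on the standby:
ALTER SYSTEM SET standby_file_management = MANUAL;

-- Rename each datafile to its real ASM name on the standby (SYSTEM shown):
ALTER DATABASE RENAME FILE
  '+DATA/oradb/datafile/system.256.788911005'
  TO '+STDBY/11gdb/datafile/system.258.805921881';

-- Switch back once all files are renamed:
ALTER SYSTEM SET standby_file_management = AUTO;
```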
Regards,
007 -
DATA PUMP - IMPORT FROM A DATABASE
Can anybody please help with the following:
We have a 10.2R2 schema that should be imported into an existing schema on another machine. Whether dumping on the remote machine or importing over the LAN, it gives me
ORA-39006, ORA-39065, ORA-02083, ORA-39097 with error 2083.
Yes, I have a "-" character in the domain name, and I cannot switch GLOBAL_NAME to false due to some dependencies.
Thanks in advance!
>Yes, I have a "-" character in the domain name, and I cannot switch GLOBAL_NAME to false due to some dependencies.
This bug is characterized by using a fully qualified database link with a hyphen in the domain name in PL/SQL, and receiving an ORA-02083 error.
Bug 3096445 - ORA-02083 USING DB LINK WHEN DB DOMAIN NAME CONTAINS HYPEN '-'
Unfortunately, it has not been fixed yet in any release of Oracle.
A temporary solution/workaround would be:
Solution
The bug was unresolved at the time of writing this article (February 2006).
1. The normal method of changing the GLOBAL_NAME is to use the command:
SQL> ALTER DATABASE RENAME GLOBAL_NAME TO <new name without hyphen>;
2. If the GLOBAL_NAME already contains illegal characters, then the table MUST be manually updated via:
UPDATE GLOBAL_NAME SET GLOBAL_NAME = '<new name without hyphen>';
For more details on this, read Metalink note 331169.1.
Jaffar -
Data retrival error in Logical Database
Hello Gurus,
I am working on a report on ASSET ACTIVITY BY DATE RANGE.
The program is copied from standard program S_ALR_87011990.
The standard program displays data for the whole financial year. This has been modified to a particular period range in the new copied program.
My question is about the code below.
We are fetching data using LDB ADA. The statement "GET anlcv" works fine here; I mean sy-subrc is 0 and the anlcv structure has some data in it.
When it comes to the statement "GET anepv" in the code below, we are not getting any data into that structure and sy-subrc NE 0. It then skips all the GET statements and goes directly to the statement "PERFORM abga_simulieren.".
My logic lies between this GET statement and the PERFORM statement. When I step through it in debugging mode, my statements are not executed at all.
What needs to be done? Please, anyone, help me.
GET anlcv.
CHECK select-options.
MOVE anlcv TO sav_anlcv.
GET anepv.
CHECK select-options.
* Only pass transactions from the year of the report date.
CHECK anepv-bzdat GE sav_gjbeg.
CHECK anepv-bzdat IN so_bzdat. "Added for SIR-3132
* Collect transactions in SAV_ANEPV.
MOVE anepv TO sav_anepv.
APPEND sav_anepv.
GET anlb LATE.
* Check the balance sheet account for group totals only here,
* because of missing retirements/transfers.
IF NOT summb IS INITIAL.
IF NOT anlav-ktansw IN so_ktanw.
REJECT 'ANLAV'.
ENDIF.
ENDIF.
* Restore ANLCV from the save area.
CHECK NOT sav_anlcv-anln1 IS INITIAL.
MOVE sav_anlcv TO anlcv.
* Retirement simulation: simulate the retirement.
PERFORM abga_simulieren.
Promise to reward points
Regards
Mac
Hi,
look at the following stuff related to LDB and do accordingly
A logical database is a special ABAP/4 program which combines the contents of certain database tables. You can link a logical database to an ABAP/4 report program as an attribute. The logical database then supplies the report program with a set of hierarchically structured table lines which can be taken from different database tables.
LDB offers an easy-to-use selection screens. You can modify the pre-generated selection screen to your needs. It offers check functions to check whether user input is complete, correct, and plausible. It offers reasonable data selections. It contains central authorization checks for data base accesses. Enhancements such as improved performance immediately apply to all report programs that use the logical database.
Less coding is required to retrieve data compared to normal internal tables.
Tables used in an LDB are in a hierarchical structure.
Mainly we use LDBs in HR ABAP programming, where all tables are highly interrelated, so LDBs can optimize performance there.
Check this document - all about LDBs:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.highlightedcontent?documenturi=%2flibrary%2fabap%2fabap-code-samples%2fldb+browser.doc
GO THROUGH LINKS -
http://www.sap-basis-abap.com/saptab.htm
http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9bfa35c111d1829f0000e829fbfe/content.htm
http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9b5e35c111d1829f0000e829fbfe/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/c6/8a15381b80436ce10000009b38f8cf/frameset.htm
/people/srivijaya.gutala/blog/2007/03/05/why-not-logical-databases
Re: **LDB**
www.sapbrain.com/FAQs/TECHNICAL/SAP_ABAP_Logical_Database_FAQ.html
www.sap-img.com/abap/abap-interview-question.htm
www.sap-img.com/abap/quick-note-on-design-of-secondary-database-indexes-and-logical-databases.htm
http://help.sap.com/saphelp_nw04/helpdata/en/9f/db9b5e35c111d1829f0000e829fbfe/content.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/9f/db9bb935c111d1829f0000e829fbfe/content.htm
Gothru the blog which provides info on LDB's:
/people/srivijaya.gutala/blog/2007/03/05/why-not-logical-databases
Sample code
TABLES: SPFLI,
SFLIGHT,
SBOOK,
SCARR.
START-OF-SELECTION.
GET SPFLI.
WRITE: / 'SPFLI: ', SPFLI-CARRID, SPFLI-CONNID,
SPFLI-AIRPFROM, SPFLI-AIRPTO.
GET SFLIGHT.
WRITE: / 'SFLIGHT: ', SFLIGHT-CARRID, SFLIGHT-CONNID, SFLIGHT-FLDATE.
GET SBOOK.
WRITE: / 'SBOOK: ', SBOOK-CARRID, SBOOK-CONNID,
SBOOK-FLDATE, SBOOK-BOOKID.
GET SFLIGHT LATE.
WRITE: / 'GET SFLIGHT LATE: ', SFLIGHT-FLDATE.
Regards
Anji