Data pump using table query
I am trying to perform a data pump export on a table using a query within a parfile and I am getting some odd behaviour. The database version is 10.2.0.4.3 and the OS is AIX 5.3. The query looks like this.
QUERY="POSDECLARATIONQUEUE:where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' and 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252')"
The export runs but returns 0 rows. If I run the same query against the instance in a SQL*Plus session, as below, I also get 0 rows returned.
select * from POSDECLARATIONQUEUE where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' AND 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252');
If I take the single quotes off the column references and run the query in SQL*Plus, I get over 2000 rows returned.
SQL> select count(*) from POSDECLARATIONQUEUE where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252);
COUNT(*)
2098
If I remove the single quotes from the parfile query then I get the following error within the data pump export.
UDE-00014: invalid value for parameter, 'schemas'.
The SCHEMAS option is not specified within the parfile and the TABLES option only specifies the table POSDECLARATIONQUEUE.
Can someone assist with this? I just can't seem to get the syntax right for it to work within Data Pump.
Kind Regards.
Graeme.
Edited by: user12219844 on Apr 14, 2010 3:34 AM
It looks like your query might be a little wrong:
This is what you have:
QUERY="POSDECLARATIONQUEUE:where SESSIONID in (select 'B.SESSIONID' from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where 'B.SESSIONID' = 'C.ID' and 'C.ACCOUNTID' = 'A.ID' and 'A.SITE' = '10252')"
This is what I would have thought it should look like:
QUERY=POSDECLARATIONQUEUE:"where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252)"
You want double quotes (") around the complete query, and you don't need the single quotes (') around the column references. The single quotes turn those into string literals, so
'B.SESSIONID' = 'C.ID'
asks whether the literal string B.SESSIONID equals the literal string C.ID, which it never does. The query you ran in SQL*Plus used
B.SESSIONID = C.ID
which compares the value stored in B.SESSIONID with the value stored in C.ID.
Which is what you want.
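For completeness, a full parfile along those lines might look like the sketch below; the DIRECTORY, DUMPFILE and LOGFILE values are placeholders I've assumed, not taken from the original post:

```
# export.par -- sketch; directory and file names are assumptions
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=posdeclarationqueue.dmp
LOGFILE=posdeclarationqueue.log
TABLES=POSDECLARATIONQUEUE
QUERY=POSDECLARATIONQUEUE:"where SESSIONID in (select B.SESSIONID from POSACCOUNT A, POSDECLARATIONQUEUE B, POSDECLARATIONSESSION C where B.SESSIONID = C.ID and C.ACCOUNTID = A.ID and A.SITE = 10252)"
```

It would then be run as `expdp <user>/<password> parfile=export.par`.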
Dean
Similar Messages
-
Help needed with Export Data Pump using API
Hi All,
I am trying to do a Data Pump export using the API.
While both the export and the import work fine from the command line, it's failing with the API.
This is the command line program:
expdp pxperf/dba@APPN QUERY=dev_pool_data:\"WHERE TIME_NUM > 1204884480100\" DUMPFILE=EXP_DEV.dmp tables=PXPERF.dev_pool_data
Could you help me with how I should achieve the same as above using the Oracle Data Pump API?
DECLARE
h1 NUMBER;
BEGIN
h1 := dbms_datapump.open('EXPORT','TABLE',NULL,'DP_EXAMPLE10','LATEST');
dbms_datapump.add_file(h1,'example3.dmp','DATA_PUMP_TEST',NULL,1);
dbms_datapump.add_file(h1,'example3_dump.log','DATA_PUMP_TEST',NULL,3);
dbms_datapump.metadata_filter(h1,'NAME_LIST','(''DEV_POOL_DATA'')');
END;
/
Also, in the API I want to know how to export and import multiple tables (selective tables only) using one single criterion like "WHERE TIME_NUM > 1204884480100".
Yes, I have read the Oracle doc.
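If I read the PL/SQL packages reference correctly, DBMS_DATAPUMP.DATA_FILTER applies a SUBQUERY filter to every table in the job when table_name is left NULL, so one WHERE clause can cover several selected tables. A sketch under that assumption (the second table name is made up for illustration):

```sql
-- Sketch: one SUBQUERY filter applied job-wide (table_name defaults to NULL)
DECLARE
  h NUMBER;
BEGIN
  h := DBMS_DATAPUMP.open('EXPORT', 'TABLE', NULL, 'MULTI_TAB_EXP', 'LATEST');
  DBMS_DATAPUMP.add_file(h, 'multi_tab.dmp', 'DATA_PUMP_DIR');
  -- the selective tables (DEV_POOL_DATA2 is an illustrative name)
  DBMS_DATAPUMP.metadata_filter(h, 'NAME_LIST', '(''DEV_POOL_DATA'',''DEV_POOL_DATA2'')');
  -- no table_name/schema_name arguments => the filter applies to all tables in the job
  DBMS_DATAPUMP.data_filter(h, 'SUBQUERY', 'WHERE TIME_NUM > 1204884480100');
  DBMS_DATAPUMP.start_job(h);
  DBMS_DATAPUMP.detach(h);
END;
/
```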
I was able to proceed as below, but it gives an error.
============================================================
SQL> SET SERVEROUTPUT ON SIZE 1000000
SQL> DECLARE
2 l_dp_handle NUMBER;
3 l_last_job_state VARCHAR2(30) := 'UNDEFINED';
4 l_job_state VARCHAR2(30) := 'UNDEFINED';
5 l_sts KU$_STATUS;
6 BEGIN
7 l_dp_handle := DBMS_DATAPUMP.open(
8 operation => 'EXPORT',
9 job_mode => 'TABLE',
10 remote_link => NULL,
11 job_name => '1835_XP_EXPORT',
12 version => 'LATEST');
13
14 DBMS_DATAPUMP.add_file(
15 handle => l_dp_handle,
16 filename => 'x1835_XP_EXPORT.dmp',
17 directory => 'DATA_PUMP_DIR');
18
19 DBMS_DATAPUMP.add_file(
20 handle => l_dp_handle,
21 filename => 'x1835_XP_EXPORT.log',
22 directory => 'DATA_PUMP_DIR',
23 filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
24
25 DBMS_DATAPUMP.data_filter(
26 handle => l_dp_handle,
27 name => 'SUBQUERY',
28 value => '(where "XP_TIME_NUM > 1204884480100")',
29 table_name => 'ldev_perf_data',
30 schema_name => 'XPSLPERF'
31 );
32
33 DBMS_DATAPUMP.start_job(l_dp_handle);
34
35 DBMS_DATAPUMP.detach(l_dp_handle);
36 END;
37 /
DECLARE
ERROR at line 1:
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3043
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3688
ORA-06512: at line 25
============================================================
I have a table called LDEV_PERF_DATA and it's in schema XPSLPERF.
value => '(where "XP_TIME_NUM > 1204884480100")'
Above is the condition on which I want to filter the data.
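Looking at line 25 of the failing block (the data_filter call), the ORA-39001 may simply be the quoting: the SUBQUERY value should be the bare WHERE clause, without double quotes inside the string or parentheses around the whole thing, and object names are normally stored in upper case in the dictionary. A hedged sketch of what that call might look like:

```sql
-- Sketch: the SUBQUERY value is the WHERE clause itself, no extra quoting
DBMS_DATAPUMP.data_filter(
  handle      => l_dp_handle,
  name        => 'SUBQUERY',
  value       => 'WHERE XP_TIME_NUM > 1204884480100',
  table_name  => 'LDEV_PERF_DATA',   -- upper case, as stored in the dictionary
  schema_name => 'XPSLPERF');
```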
However, the below snippet works fine.
============================================================
SET SERVEROUTPUT ON SIZE 1000000
DECLARE
l_dp_handle NUMBER;
l_last_job_state VARCHAR2(30) := 'UNDEFINED';
l_job_state VARCHAR2(30) := 'UNDEFINED';
l_sts KU$_STATUS;
BEGIN
l_dp_handle := DBMS_DATAPUMP.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'ldev_may20',
version => 'LATEST');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'ldev_may20.dmp',
directory => 'DATA_PUMP_DIR');
DBMS_DATAPUMP.add_file(
handle => l_dp_handle,
filename => 'ldev_may20.log',
directory => 'DATA_PUMP_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
DBMS_DATAPUMP.start_job(l_dp_handle);
DBMS_DATAPUMP.detach(l_dp_handle);
END;
============================================================
I don't want to export the full contents as above, but want to export data based on some conditions, and only for selective tables.
Any help is highly appreciated. -
Data provisioning using table import: error occurs!
HI ,experts:
I created a project according to the development document. Objects include: schema, table, model, data, CSV data.
When I tried to activate the table-import data, an error occurred.
The source code of tidata file is:
implements TiPackage:TiModel.hdbtim;
csvFiles = ["TiPackage:TiCsv.csv"];
The error message is:
An error occurred during activation of
{tenant: , package: Pactera.twb.XSpro01.TiPackage, name: TiData, suffix:
hdbtid} : Object {tenant: , package: Pactera.twb.XSpro01.TiPackage, name:
TiData, suffix: hdbtid} refers to object {tenant: , package: TiPackage, name:
TiModel, suffix: hdbtim}, which doesn't exist in the current session.
But my TiModel does exist and is still active.
Thanks for your help.
Best regards.

Why not try expdp/impdp with REMAP_TABLESPACE?
REMAP_TABLESPACE
Default: none
Purpose
Remaps all objects selected for import with persistent data in the source tablespace to be created in the target tablespace.
Syntax and Description
REMAP_TABLESPACE=source_tablespace:target_tablespace
Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.
Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. This is a simpler and cleaner method than the one provided in the original Import utility. In original Import, if you wanted to change the default tablespace for a user, you had to perform several steps, including exporting and dropping the user, creating the user in the new tablespace, and importing the user from the dump file. That method was subject to many restrictions (including the number of tablespace subclauses) which sometimes resulted in the failure of some DDL commands.
By contrast, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.
Example
The following is an example of using the REMAP_TABLESPACE parameter.
impdp hr/hr REMAP_TABLESPACE='tbs_1':'tbs_6' DIRECTORY=dpump_dir1 PARALLEL=2 JOB_NAME=cf1n02 DUMPFILE=employees.dmp NOLOGFILE=Y
Refer: http://download.oracle.com/docs/cd/B13789_01/server.101/b10825/dp_import.htm#i1010670 -
TopLink doesn't sort data received using Named query
Hello!
I'm trying to build a Tree table using a TopLink Named query in my application.
There are Id, Parent_Id, Code and Name columns in the corresponding database table, where Id and Parent_Id are linked together by a foreign key (a typical tree). I used TopLink to describe this table in the Model part of my application. There I wrote a named query without using the query constructor (the SQL radio button rather than the Expression radio button). Here is the text of the query:
select * from regions a connect by prior a.Id = a.Parent_Id start with a.Id is null order by a.Code
Then I created a Data control and tried to build a JSF page with data based on this query, as tree-table.
But I discovered that rows under the same node, on the same leaf level of the tree, are not sorted by Code on the JSF page. For example, data may appear as follows (Code - Name):
2. wwwwwwwwwwww
2.3.kkkkkkkkkkkk
2.1.fffffffffffffffffffff
2.2.oooooooooo
1. qqqqqqqqqqqqqqqq
1. 2. lllllllllllllllllllllllllllll
1. 1. hhhhhhhhhhhhhh
etc.
I verified this query in other environments (PL/SQL Developer, SQL*Plus) but didn't find such unsorted results anywhere.
Please, give me an advice to avoid this problem.
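One thing worth checking (an assumption on my part, since I can't see your environment): with CONNECT BY, a plain ORDER BY is applied to the final result set and can break the parent/child grouping the tree control relies on, whereas ORDER SIBLINGS BY sorts within each parent while preserving the hierarchy:

```sql
-- Sketch: keep the hierarchical order but sort the children of each node by Code
select *
  from regions a
 start with a.Id is null
connect by prior a.Id = a.Parent_Id
 order siblings by a.Code;
```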
Thanks.

Hi,
It's something to do with TreeMap (or TreeSet). I tried with TreeSet but it didn't work. Here is the code:
In servlet :
Connection lConnection = getConnection(pRequest);
String lSQL = "";
Statement lStatement;
ResultSet lResultSet;
Hashtable lLtypeHashtable = new Hashtable();
lStatement = lConnection.createStatement();
lSQL = "SELECT RCID,RCMEANING FROM REFERENCECODES WHERE RCDOMAIN = 'LOCATIONTYPE' AND RCROWSTATE > 0 order by RCMEANING";
lResultSet = lStatement.executeQuery(lSQL);
while(lResultSet.next()) {
    String lRcid = lResultSet.getString(1);
    String lRcmeaning = lResultSet.getString(2);
    lLtypeHashtable.put(lRcid.trim(), lRcmeaning.trim());
}
if(lResultSet != null) lResultSet.close();
if(lStatement != null) lStatement.close();
pRequest.setAttribute("LtypeHashtable",lLtypeHashtable);
// The query below is executed when a value from the select element is chosen
String lLType = DisplayUtilities.getString(pRequest.getParameter("LType"),true);
//LType is name of select element in JSP.
if (lLType != null) {
    lSQL = lSQL + " AND " + lUpperCaseFunction + "(LOCATIONTYPE)" +
           " = " + DBUtilities.formatString(lLType.toUpperCase());
    pRequest.setAttribute("rLType", lLType + "");
}
In JSp :
<%
Hashtable lLtypeHashtable = (Hashtable)request.getAttribute("LtypeHashtable");
%>
<TR>
<TD width="15%"> <div align="left">
<select name="LType" size="1" >
<option Value="">< Select ></option>
<%
if (lLtypeHashtable != null) {
    Enumeration lKeys = lLtypeHashtable.keys();   // 'enum' is a reserved word in Java 5+
    while (lKeys.hasMoreElements()) {
        String key = (String)lKeys.nextElement();
        String value = (String)lLtypeHashtable.get(key);
        String flagBack = "";
        if (key.equals((String)request.getAttribute("rLType"))) {
            flagBack = "selected";
        }
%>
<option Value="<%=key%>" <%=flagBack%>><%=value%></option>
<%
    }
}
%>
</select></div>
</TD>
</TR>
How should I implement TreeSet?
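Since a Hashtable has no defined iteration order, the sorted order produced by ORDER BY RCMEANING is lost as soon as the rows go into it. A TreeMap keyed by the meaning keeps the options sorted when you iterate; a minimal sketch with illustrative data (not your actual RCID values):

```java
import java.util.Map;
import java.util.TreeMap;

public class SortedOptions {
    public static void main(String[] args) {
        // TreeMap iterates its keys in natural (sorted) order, unlike
        // Hashtable, whose iteration order is undefined.
        Map<String, String> byMeaning = new TreeMap<>();
        byMeaning.put("Warehouse", "3");  // meaning -> RCID (illustrative)
        byMeaning.put("Airport", "1");
        byMeaning.put("Harbour", "2");

        for (Map.Entry<String, String> e : byMeaning.entrySet()) {
            // in the JSP loop you would emit <option value="RCID">meaning</option>
            System.out.println(e.getValue() + " -> " + e.getKey());
        }
    }
}
```

In the servlet you would fill the TreeMap with RCMEANING as the key instead of RCID, and the JSP loop would then render the options already sorted.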
Looking forward to an early reply.
Thanks. -
Data pump using network_link problem
Hi,
I defined a database link for the sys user as:
create database link xe_remote
connect to hr
identified by hr
using 'xe_remote'
and it works fine:
select * from dual@xe_remote
D
X
Then I tried to import the user HR from xe_remote into the local system with Data Pump, but it seemed that the database link wasn't valid anymore:
impdp system/pwd schemas=hr network_link=xe_remote remap_schema=hr:hr_remote table_exists_action=replace
ORA-39001: invalid argument value
ORA-39200: Link name "xe_remote" is invalid
ORA-02019: connection description for remote database not found
Thanks,
Gianluca

Hello,
oerr says:
oerr ora 39149
39149, 00000, "cannot link privileged user to non-privileged user"
// *Cause: A Data Pump job initiated be a user with
// EXPORT_FULL_DATABASE/IMPORT_FULL_DATABASE roles specified a
// network link that did not correspond to a user with
// equivalent roles on the remote database.
// *Action: Specify a network link that maps users to identically privileged
// users in the remote database.
Solution:
Create a db link from hr_remote to hr
or
Create a db link from system to system (I think, it's easier, because you have all rights...)
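A sketch of that second suggestion, assuming the TNS alias 'xe_remote' resolves on the local system and you know the remote SYSTEM password (placeholder below):

```sql
-- Sketch: privileged-to-privileged link, created while connected as SYSTEM
create database link xe_remote
  connect to system identified by remote_system_password  -- placeholder
  using 'xe_remote';
```

With that in place, the original impdp command with network_link=xe_remote should map SYSTEM to an identically privileged remote user.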
Greetings
Sven -
Data Pump using 11g2 and 11g1 question during migration
Our DBE put our test database on 11gR2 and our production on 11gR1. Due to an ingest failure we wanted to move the data in test (11gR2) to production (11gR1) using Data Pump; however, I was told that you cannot go from 11gR2 to 11gR1. I was also told that because the database contained public synonyms, I would have to recreate all public synonyms. He said it had something to do with LBACSYS. Can someone clarify this for me?
user11171364 wrote:
... Can I still use these parameters during the import ONLY, without having used them during the export?

Nope, read the restriction: during the import, "This parameter is valid only when the NETWORK_LINK parameter is also specified."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref299
Consequently, you cannot use it within your dumpfile.
Nicolas. -
Checking correct data format using sql query
1) I got column date of joining which accepts date in below format
DD-MON-YYYY
DD-MON-YY
MON-DD-YYYY
MON-DD-YY
Month DD,YYYY
Question: how do I check, using a SQL query, whether all dates in the date-of-joining column are in one of the above formats?
2) I got one more date column which accepts date in below format
MMDDYYYY
YYYYMMDD
MM/DD/YYYY
MM/DD/YY
YYYY/DD/MM
Question: how do I check, using a SQL query, whether all dates in the date column are in one of the above formats?
Sorry if it is a very simple question; I am new to SQL and trying to learn. Thanks for the answers from the group.

In short, NO, it's not possible. If you store dates correctly in the database as the DATE datatype then you don't have this problem. If you store them as VARCHAR2, you have a problem.
So, you get a date of 20092012
Is that 20th September 2012? or is it 20th December 2009?
What about...
11-Jan-12
Is that 11th January 2012 or 12th January 2011?
Dates should never be stored on the database as strings. That is why Oracle gives you a DATE datatype so you can store them properly.
Also, when dates are passed from an application to the database, the application should be passing them as DATE datatype, and the application interface should be designed to accept dates from the user in a format specific to their country/locality and it would then know what that format is and automatically convert it to a DATE datatype before it gets anywhere near the database or any SQL. -
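If the dates are already stuck in a VARCHAR2 column and you only want to flag rows that fail to parse with a given format (which does not remove the ambiguity described above), a small helper function is one common sketch:

```sql
-- Sketch: returns 1 if p_str parses with format p_fmt, else 0.
-- This is a syntax check only; it cannot tell 11-Jan-12 (2012)
-- apart from a mistyped 12-Jan-11.
create or replace function is_valid_date(p_str in varchar2,
                                         p_fmt in varchar2) return number
is
  l_d date;
begin
  l_d := to_date(p_str, p_fmt);
  return 1;
exception
  when others then
    return 0;
end;
/

-- e.g. flag rows that do not match one of the allowed formats:
-- select * from t where is_valid_date(joining_date, 'DD-MON-YYYY') = 0;
```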
Error while creating data source using table KONP
Hi Frnds,
I am creating a data source (RSO2) using Extraction from View with the table KONP, and I get an error saying:
Field KBETR with reference field KONWA: ZOXPTS0031 is to replace reference table RV13A
Message no. R8390
Field MXWRT with reference field KONWA: ZOXPTS0031 is to replace reference table RV13A
Message no. R8390
Field GKWRT with reference field KONWA: ZOXPTS0031 is to replace reference table RV13A
Message no. R8390
Regards
rakesh

You have to include the reference fields in the extract structure as well.
-
Date parameters using MDX query
Hello experts,
Not sure if this question falls under SSAS or SSRS but I'm writing it here for now and hopefully get the answer.
I'm using SSAS cube to develop a report using SSRS and this is the first time I'm doing it. I want to filter records based on date range and did some research online. My report contains two datasets:
1. dsMain dataset -> it contains all the fields I want to use in the report; I added a parameter through the query designer with the following settings:
Dimension : Dates
Hierarchy : Date
Operator : Range (Inclusive)
Parameters : checked
It created two parameters called FromDatesDate and ToDatesDate.
2. I created another dataset called dsDate with a custom query (found at the following link), and changed FromDatesDate and ToDatesDate to use this date dataset:
https://jsimonbi.wordpress.com/2011/03/22/using-a-date-parameter-in-ssrs-with-mdx/
Query for dsDate
WITH
MEMBER [Measures].[DateValue]
AS
[Dates].[Date].CurrentMember.UniqueName
MEMBER [Measures].[DateLabel]
AS
[Dates].[Date].CurrentMember.Name
SELECT
{
[Measures].[DateValue],
[Measures].[DateLabel]
} ON 0,
{
[Dates].[Date].[Date]
} ON 1
FROM [myCube]
Here are the values returned by the dsDate dataset (above query):
DateValue                  DateLabel
[Dates].[Date].&[29375]    06/04/1980
[Dates].[Date].&[29376]    06/05/1980
[Dates].[Date].&[29377]    06/06/1980
[Dates].[Date].&[29378]    06/07/1980
[Dates].[Date].&[29379]    06/08/1980
[Dates].[Date].&[29380]    06/09/1980
[Dates].[Date].&[29381]    06/10/1980
[Dates].[Date].&[29382]    06/11/1980
[Dates].[Date].&[29383]    06/12/1980
[Dates].[Date].&[29384]    06/13/1980
Here is what I changed in the FromDatesDate and ToDatesDate parameters:
Under Available Values tab:
Dataset : dsDate
Value Field : DateValue
Label Field : DateLabel
Here are my questions:
1. I want to use date/time parameter so that user doesn't have to scroll thru whole date dimension.
2. I changed FromDatesDate and ToDatesDate to date/time parameters, removed the values from the Available Values tab, and made the following changes to the parameter expressions under the dsMain dataset:
="[Dates].[Date].&[" + Format(CDate(Parameters!FromDatesDate.Value),"MM/dd/yyyy") + "T00:00:00]"
="[Dates].[Date].&[" + Format(CDate(Parameters!ToDatesDate.Value),"MM/dd/yyyy") + "T00:00:00]"
Now when I run the report I get following error:
Query (1, 55) The restrictions imposed by the CONSTRAINED flag in the STRTOMEMBER function are violated.
I think the reason is that by changing the parameter to date/time, I can no longer get "DateValue", which is required for the query.
1. What is the best way to work with date parameters?
Hope it is all clear; I look forward to hearing from the experts.
Thanks,
P
Mark it as answer if it answered your question :)

Hi Parry2k,
In Analysis Services, a member can be referenced by either its member name or its member key. The member key is used by the dimension to specifically identify a given member. The ampersand (&) character is used in MDX to differentiate a member key from a member name. In this scenario, the datetime is the member name, not the member key, so you should not add "&".
Reference:
Member Names and Keys
Simon Hou
TechNet Community Support -
ASE 15.7 how to find data entry using table page nr
Hi,
I am looking for a dbcc command to fetch the data when I know the page number of a table.
Thank you

Hi Isabella,
What you're asking is not supported by SAP, although you can achieve the result by using dbcc page:
http://wiki.scn.sap.com/wiki/display/SYBASE/DBCC+page
The only problem is that the data is in binary format, so you have to format it yourself into a readable form.
Regards,
Adam -
Hi All,
I'm trying to generate XML data using the data template approach. Do I need to use XSQL to generate the XML data?
Can some one please tell me what is this XSQL and how it works?
Thanks in advance,

Hi Srini,
Thank you very much for the link; it elucidates the XML data generation process using a data template.
One more question: does this method use XSQL anywhere?
Thanks, -
How to display last 10 minutes data only using sql query
Hi,
Presently, I'm using version is,
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
So, please consider SCOTT Schema to resolve my issue,
I want to display only the records that were inserted, updated or deleted in the last 10 minutes.
Please provide as many ways as possible!
Thank you!

Hi,
See below:
select *
from emp
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
14 rows selected
select *
from emp
minus
select *
from emp
as of timestamp (systimestamp - interval '10' minute)
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
0 rows selected
update emp
set ename = ename || ' x'
where empno = 7934
-- 1 row updated.
select *
from emp
minus
select *
from emp
as of timestamp (systimestamp - interval '10' minute)
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
7934 MILLER x CLERK 7782 23-JAN-82 1300 10
For changes, deletes and inserts, compare the two results (emp, and emp as of timestamp (systimestamp - interval '10' minute)) with each other.
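As an alternative to the MINUS approach, a flashback version query reports each changed row together with the kind of DML, assuming the undo retention still covers the window (a sketch):

```sql
-- Sketch: rows changed in the last 10 minutes, with the DML type
select versions_operation,   -- I = insert, U = update, D = delete
       versions_starttime,
       e.*
  from emp
       versions between timestamp (systimestamp - interval '10' minute)
                    and systimestamp e
 where versions_operation is not null;
```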
Regards,
Peter -
How to use Power Query to load data from a single source into multiple tables
Hi all,
I have a requirement to load my data into three different data models using Power Query. Is that possible?
My source is a SharePoint survey list, with similar questions like:
1) Course lecturer - John Doe
1a) The course was useful (rate 1 to 5)
1b) The lecturer displayed good knowledge of topic (rate 1 to 5)
2) Course Lecturer - Mary Brown
2a) The course was useful (rate 1 to 5)
2b) The lecturer displayed good knowledge of topic (rate 1 to 5)
I would like to split the data into separate data models (one for John Doe; another for Mary Brown), so that I can compare the different lecturers. Other than running separate surveys for each of them, I thought of using Power Query to transform the data.
Is it possible?
Thanks.
Regards
GM

Yes, this is possible.
Start with a single query that returns you the data for all lecturers.
Right-click on the "all lecturers" query in the queries pane, and choose Reference, once for each lecturer. This will create a query for each lecturer.
Open the query for each lecturer and filter the data so that only that lecturer's results are visible.
Click "Close & Load To..." for each lecturer's query to load the data into the data model. This will create a data model table for each lecturer.
If your question is more about how to transform such a survey list into a table that can be easily filtered, please provide an example of how the list shows up in Power Query.
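The reference-and-filter steps above can also be written directly in M; the query name AllLecturers and the Lecturer column below are assumptions based on the survey description, not your actual list:

```m
// Sketch: reference the base query and keep only one lecturer's rows
let
    Source = AllLecturers,
    JohnDoe = Table.SelectRows(Source, each [Lecturer] = "John Doe")
in
    JohnDoe
```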
Ehren -
Clear the data region using Maxl
Hi,
I am trying to clear a data region using a MaxL statement, and I am getting an error: Dynamic members not allowed in data clear region specification.
alter database 'PGSASO'.'ASOPGS' clear data in region 'CrossJoin(CrossJoin({[Actual]},{[FY11]}),{[Inputs].Levels( 0 ).Members})';
(Inputs is not a dynamic member, but its parent is dynamic; it comes under the Account dimension.)
I have tried the below code also (Syntax error in input MDX query on line 1 at token ','):
alter database 'PGSASO'.'ASOPGS' clear data in region 'CrossJoin([Inputs].Levels(0), CrossJoin({[ACT]},{[FY11]}))';
Can anyone let me know if there is a syntax error, or whether an alternative function is available for clearing level-0 members/descendants under a dynamic parent?
Thanks,
Bharathi

There is no syntax error in your statement. The problem is that your Inputs dimension (I guess) has some members with formulas on them, and the clear statement does not allow those members to be in the clear region. There are a couple of ways around it: separate out the members with formulas under a different parent and then modify the CrossJoin to pick up only the level-zero members that are not in that parent, or put a UDA on the formula members and use the exclude command to exclude anything with that UDA.
-
ORA-39097: Data Pump job encountered unexpected error -39076
Hi Everyone,
Today I tried to take a table-specific Data Pump export from my test database (version 10.2.0.4 on Solaris 10, 64-bit), and I got the following error messages:
Job "SYSTEM"."SYS_EXPORT_TABLE_23" successfully completed at 09:51:36
ORA-39097: Data Pump job encountered unexpected error -39076
ORA-39065: unexpected master process exception in KUPV$FT_INT.DELETE_JOB
ORA-39076: cannot delete job SYS_EXPORT_TABLE_23 for user SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 934
ORA-31632: master table "SYSTEM.SYS_EXPORT_TABLE_23" not found, invalid, or inaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1079
ORA-20000: Unable to send e-mail message from pl/sql because of:
ORA-29260: network error: Connect failed because target host or object does not exist
ORA-39097: Data Pump job encountered unexpected error -39076
ORA-39065: unexpected master process exception in MAIN
ORA-39076: cannot delete job SYS_EXPORT_TABLE_23 for user SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 934
ORA-31632: master table "SYSTEM.SYS_EXPORT_TABLE_23" not found, invalid, or inaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT_INT", line 1079
ORA-20000: Unable to send e-mail message from pl/sql because of:
ORA-29260: network error: Connect failed because target host or object does not exist
I hope the export dumpfile is a valid one, but I don't know why I am getting this error message. Has anyone faced this kind of problem? Please advise.
Thanks
Shan

Once you see this:
Job "SYSTEM"."SYS_EXPORT_TABLE_23" successfully completed at 09:51:36
the Data Pump job is done with the dumpfile. There is some cleanup that is needed, and it looks like something in the cleanup failed. Not sure what it was, but your dumpfile should be fine. One easy way to test it is to run impdp with SQLFILE. This will do everything import would do, but instead of creating objects, it writes the DDL to the SQL file.
impdp user/password sqlfile=my_test.sql directory=your_dir dumpfile=your_dump.dmp ...
If that works, then your dumpfile should be fine. The last thing the export does is write the Data Pump master table to the dumpfile. The first thing that import does is read that table in. So, if you can read it in (which impdp sqlfile does) your dump is good.
Dean