Low performance when querying the data dictionary in Oracle 11g
Query:
SELECT
ucc_fk.column_name FK_COLUMN_NAME,
ucc_pk.table_name PK_TABLE_NAME,
ucc_pk.column_name PK_COLUMN_NAME
FROM user_constraints uc_fk
INNER JOIN user_cons_columns ucc_fk ON (ucc_fk.constraint_name = uc_fk.constraint_name)
INNER JOIN user_constraints uc_pk ON (uc_pk.constraint_name = uc_fk.r_constraint_name)
INNER JOIN user_cons_columns ucc_pk ON (ucc_pk.constraint_name = uc_fk.r_constraint_name AND ucc_pk.position = ucc_fk.position)
WHERE
uc_fk.constraint_type = 'R' AND
uc_pk.constraint_type = 'P' AND
uc_fk.table_name = 'TABLE_NAME';
This works fine on 10g but is very slow on 11g. How can I improve performance?
You don't need to join to user_constraints a second time unless you are trying to exclude foreign keys that reference a unique key rather than a primary key.
SELECT ucc_fk.column_name FK_COLUMN_NAME, ucc_pk.table_name PK_TABLE_NAME, ucc_pk.column_name PK_COLUMN_NAME
FROM user_constraints uc_fk
JOIN user_cons_columns ucc_fk
ON (ucc_fk.constraint_name = uc_fk.constraint_name)
JOIN user_cons_columns ucc_pk
ON (ucc_pk.constraint_name = uc_fk.r_constraint_name
AND ucc_pk.position = ucc_fk.position)
WHERE uc_fk.constraint_type = 'R'
AND uc_fk.table_name = 'TABLE_NAME';
As you can see, I have removed the second join to user_constraints and hence the condition uc_pk.constraint_type = 'P'.
The reason is that every r_constraint_name is already guaranteed to reference a constraint of type 'P' or 'U'.
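Separately, when data-dictionary queries as a whole slow down after a 10g-to-11g upgrade, stale dictionary and fixed-object statistics are a common cause. A sketch of the usual remedy (requires DBA privileges; run it in a maintenance window):

```sql
-- Refresh optimizer statistics for the SYS-owned dictionary tables
-- and for the fixed X$ structures, both of which are consulted by
-- queries against USER_CONSTRAINTS / USER_CONS_COLUMNS.
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
```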
G.
Similar Messages
-
How to mask data in oracle 11g database release 1
how to mask data in oracle 11g database release 1
my environment is
Database: 11g release 1
os: AIX 6 (64 bit)
GC: 10g release 1
DBA-009 wrote:
thx but how can I mask data with the above environment?
What does "mask data" mean to you, and what is the environment you are referring to?
Telling us that you are using 11.1 helps, but it is far from a complete description of the environment. What edition of the database are you using? What options are installed? What Enterprise Manager packs are licensed? Is any of this changeable? Could you license another option for the database, or another pack for Enterprise Manager, if that were necessary?
What does it mean to you to "mask data"? Do you want to store the data on disk and then mask it in the presentation layer when certain people query it (i.e. store the patient's social security number in the database but only present ***-**- and the last 4 digits to certain sets of employees)? Do you want to hide entire fields from certain people? Do you want to change the data stored in the database when you are refreshing a lower environment with production data? If so, do you need and/or want this process to be deterministic, reversible, or both?
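As an illustration of the presentation-layer variant only, a minimal sketch (the table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical EMPLOYEES table with an SSN column: store the full
-- value, but expose only the last 4 digits to restricted users.
SELECT employee_id,
       '***-**-' || SUBSTR(ssn, -4) AS masked_ssn
FROM   employees;
```

A view or VPD policy built on an expression like this keeps the raw column out of reach of the restricted audience; masking data at rest when refreshing a test environment is a different problem entirely.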
Justin -
Send encrypted data from oracle 11g to Ms SQL Server 12
Hi every body,
we want to send encrypted data from oracle 11g to Ms SQL Server 12:
- the data is encrypted in Oracle
- the data should be sent encrypted to MS SQL Server
- the data will be decrypted in MS SQL Server by authorized users.
How can we implement this scenario? Has anyone handled a similar one?
Can we use asymmetric encryption for this?
Please help!
Thanks in advance.
Hi,
What you want to do about copying data from Oracle to SQL*Server using insert will work with the 12c gateway. There was a problem trying to do this using the 11.2 gateway but it should be fixed with the 12c gateway.
If 'insert' doesn't work then you can use the SQLPLUS 'copy' command, for example -
SQL> COPY FROM SCOTT/TIGER@ORACLEDB -
INSERT SCOTT.EMP@MSQL -
USING SELECT * FROM EMP
There is further information in this note available on My Oracle Support -
Copying Data Between an Oracle Database and Non-Oracle Foreign Data Stores or Databases Using Gateways (Doc ID 171790.1)
However, if the data is encrypted already in the Oracle database then it will be sent in the encrypted format. The gateway cannot decrypt the data before it is sent to SQL*Server.
There is no specific documentation about the gateways and TDE. TDE encrypts the data as it sits in the Oracle database, but I doubt that SQL*Server will be able to decrypt the Oracle data if it is passed in encrypted format, and as far as I know TDE is not designed to be used with non-Oracle databases.
The Gateway encrypts data as it is sent across the network for security but doesn't encrypt the data at source in the same way as TDE does.
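If the requirement is that SQL*Server receive ciphertext it can decrypt itself, one approach is application-level symmetric encryption with DBMS_CRYPTO before the transfer, with the key shared with the SQL*Server side. A sketch only; the hard-coded key is for illustration, and real key management plus the matching T-SQL decryption are out of scope:

```sql
DECLARE
  -- 256-bit AES key; in practice this must come from a key store.
  l_key    RAW(32)   := UTL_RAW.cast_to_raw('0123456789abcdef0123456789abcdef');
  l_plain  RAW(2000) := UTL_RAW.cast_to_raw('sensitive value');
  l_cipher RAW(2048);
BEGIN
  l_cipher := DBMS_CRYPTO.ENCRYPT(
                src => l_plain,
                typ => DBMS_CRYPTO.ENCRYPT_AES256
                     + DBMS_CRYPTO.CHAIN_CBC
                     + DBMS_CRYPTO.PAD_PKCS5,
                key => l_key);
  DBMS_OUTPUT.PUT_LINE(RAWTOHEX(l_cipher));
END;
/
```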
Regards,
Mike -
Querying 2.5D data in Oracle 11g
I'm trying to use the SDO_RELATE function on some 2.5D data in 11g and have some questions:
Let's say I have two tables, with some points in one and two polygons in the other:
-- TEST_TABLE_A
CREATE TABLE TEST_TABLE_A (
VALUE VARCHAR2(100),
GEOMETRY MDSYS.SDO_GEOMETRY);
-- Metadata
INSERT INTO user_sdo_geom_metadata VALUES ('TEST_TABLE_A','GEOMETRY',
SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', -10000000, 10000000, .001),
SDO_DIM_ELEMENT('Y', -10000000, 10000000, .001),
SDO_DIM_ELEMENT('Z', -100000, 100000, .001))
, 262152);
-- Create an index with sdo_indx_dims=3
CREATE INDEX TEST_TABLE_A_SPIND ON TEST_TABLE_A (GEOMETRY)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS ('sdo_indx_dims=3');
INSERT INTO TEST_TABLE_A (VALUE, GEOMETRY) VALUES
('POINT1', SDO_GEOMETRY(3001, 262152, SDO_POINT_TYPE(561802.689, 839675.061, 1), NULL, NULL));
INSERT INTO TEST_TABLE_A (VALUE, GEOMETRY) VALUES
('POINT2', SDO_GEOMETRY(3001, 262152, SDO_POINT_TYPE(561802, 839675, 1), NULL, NULL));
INSERT INTO TEST_TABLE_A (VALUE, GEOMETRY) VALUES
('POINT3', SDO_GEOMETRY(3001, 262152, SDO_POINT_TYPE(561808.234, 839662.731, 1), NULL, NULL));
-- TEST_TABLE_B
CREATE TABLE TEST_TABLE_B (
VALUE VARCHAR2(100),
GEOMETRY MDSYS.SDO_GEOMETRY);
-- Metadata
INSERT INTO user_sdo_geom_metadata VALUES ('TEST_TABLE_B','GEOMETRY',
SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', -10000000, 10000000, .001),
SDO_DIM_ELEMENT('Y', -10000000, 10000000, .001),
SDO_DIM_ELEMENT('Z', -100000, 100000, .001))
, 262152);
-- Create an index with sdo_indx_dims=3
CREATE INDEX TEST_TABLE_B_SPIND ON TEST_TABLE_B (GEOMETRY)
INDEXTYPE IS MDSYS.SPATIAL_INDEX
PARAMETERS ('sdo_indx_dims=3');
INSERT INTO TEST_TABLE_B (VALUE, GEOMETRY) VALUES
('NON-FLAT POLYGON',
SDO_GEOMETRY(3003, 262152, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARRAY(561902.814, 839647.609, 10.022,
561891.19, 839652.227, 10.424, 561879.129, 839656.427, 10.932, 561867.892, 839659.927, 11.136, 561851.813, 839664.222, 11.594,
561831.714, 839668.612, 11.797, 561802.689, 839675.061, 11.975, 561778.461, 839680.155, 12.611, 561753.474, 839685.085, 12.153,
561750.606, 839685.756, 12.026, 561748.963, 839671.963, 15.309, 561747.899, 839659.764, 16.35, 561798.912, 839651.036, 15.801,
561808.702, 839650.973, 15.225, 561844.265, 839648.912, 14.62, 561874.846, 839647.57, 13.018, 561897.681, 839647.338, 10.704, 561902.814, 839647.609, 10.022)));
INSERT INTO TEST_TABLE_B (VALUE, GEOMETRY) VALUES
('FLAT POLYGON',
SDO_GEOMETRY(3003, 262152, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARRAY(561902.814, 839647.609, 1,
561891.19, 839652.227, 1, 561879.129, 839656.427, 1, 561867.892, 839659.927, 1, 561851.813, 839664.222, 1,
561831.714, 839668.612, 1, 561802.689, 839675.061, 1, 561778.461, 839680.155, 1, 561753.474, 839685.085, 1,
561750.606, 839685.756, 1, 561748.963, 839671.963, 1, 561747.899, 839659.764, 1, 561798.912, 839651.036, 1,
561808.702, 839650.973, 1, 561844.265, 839648.912, 1, 561874.846, 839647.57, 1, 561897.681, 839647.338, 1,
561902.814, 839647.609, 1)));
COMMIT;
So now, let's say I want to find out which polygon interacts with a particular point.
I would write this query like:
SELECT /*+ORDERED*/ b.value
FROM TEST_TABLE_b b, TEST_TABLE_a a
WHERE sdo_relate (a.geometry, b.geometry, 'MASK=ANYINTERACT QUERYTYPE=WINDOW') = 'TRUE'
AND a.value = 'POINT1';
Running this, I get:
SELECT /*+ORDERED*/ b.value
ERROR at line 1:
ORA-29902: error in executing ODCIIndexStart() routine
ORA-13243: specified operator is not supported for 3- or higher-dimensional R-tree
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 333
But if I reverse the table order, I get the correct answer:
SQL> SELECT /*+ORDERED*/ b.value
2 FROM TEST_TABLE_a a, TEST_TABLE_b b
3 WHERE sdo_relate (a.geometry, b.geometry, 'MASK=ANYINTERACT QUERYTYPE=WINDOW') = 'TRUE'
4 AND a.value = 'POINT1';
VALUE
FLAT POLYGON
1 row selected.
Q1. Why do I get an error if I reverse the table order?
Then if I try to find which points are in the polygons:
SQL> SELECT /*+ORDERED*/ a.value
2 FROM TEST_TABLE_b b, TEST_TABLE_a a
3 WHERE sdo_relate (b.geometry, a.geometry, 'MASK=ANYINTERACT QUERYTYPE=WINDOW') = 'TRUE'
4 AND b.value = 'NON-FLAT POLYGON';
no rows selected
SQL> SELECT /*+ORDERED*/ a.value
2 FROM TEST_TABLE_b b, TEST_TABLE_a a
3 WHERE sdo_relate (b.geometry, a.geometry, 'MASK=ANYINTERACT QUERYTYPE=WINDOW') = 'TRUE'
4 AND b.value = 'FLAT POLYGON';
VALUE
POINT1
POINT2
POINT3
3 rows selected.
So this suggests that the Z value is considered in the SDO_RELATE query.
Q2. Is there any way to run an SDO_RELATE query in 11g with 2.5D data, but get it to ignore the Z value?
I have tried using sdo_indx_dims=2 when creating the indexes, but I get the same result.
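One workaround worth sketching (an editor's note, not from the thread): maintain a 2D projection of each geometry via SDO_CS.MAKE_2D in a separate column, register X/Y-only metadata for it, index it with an ordinary 2D spatial index, and point SDO_RELATE at that column so Z never enters the comparison. The column name below is an assumption:

```sql
-- Sketch: add a 2D shadow column and populate it by dropping Z.
ALTER TABLE test_table_b ADD (geometry_2d MDSYS.SDO_GEOMETRY);

UPDATE test_table_b
SET    geometry_2d = SDO_CS.MAKE_2D(geometry);

-- Then register 2D metadata and create a plain 2D spatial index on
-- GEOMETRY_2D, and use it instead of GEOMETRY in the SDO_RELATE calls.
```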
I'm using Enterprise Edition Release 11.1.0.6.0 on Windows Server 2003 32-bit. -
Year getting reduced by 1 when querying date fields
Hi, I am an Oracle retail support engineer. One of my clients logged an SR with the following:
When querying application table (shown below), the date comes out as corrupted (data snippet also shown below):
Query:
select create_date, to_char(create_date,'DD-MON-RRRR') from shipment_table;
Result:
CREATE_DATE TO_CHAR(CREATE_DATE,'DD-MON-RRRR')
08-JAN-10 08-JAN-2009
08-JAN-10 08-JAN-2009
08-JAN-10 08-JAN-2009
08-JAN-10 08-JAN-2009
08-JAN-10 08-JAN-2009
Any idea, what can cause this? Any help is greatly appreciated.
Thanks
Srinivas
Unfortunately, I'm not able to reproduce that error.
scott@ORCL>
scott@ORCL>select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE 11.1.0.6.0 Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
Elapsed: 00:00:00.11
scott@ORCL>
scott@ORCL>
scott@ORCL>with shipment
2 as
3 (
4 select to_date('08-JAN-10','DD-MON-YY') create_date from dual
5 union all
6 select to_date('08-JAN-10','DD-MON-YY') create_date from dual
7 union all
8 select to_date('08-JAN-10','DD-MON-YY') create_date from dual
9 union all
10 select to_date('08-JAN-10','DD-MON-YY') create_date from dual
11 union all
12 select to_date('08-JAN-10','DD-MON-YY') create_date from dual
13 )
14 select create_date,
15 to_char(create_date,'DD-MON-RRRR') char_date
16 from shipment;
CREATE_DA CHAR_DATE
08-JAN-10 08-JAN-2010
08-JAN-10 08-JAN-2010
08-JAN-10 08-JAN-2010
08-JAN-10 08-JAN-2010
08-JAN-10 08-JAN-2010
Elapsed: 00:00:00.04
scott@ORCL>
scott@ORCL>
What is your Oracle version?
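A diagnostic step worth adding (not from the thread): print the stored value with a full four-digit year and dump its internal bytes, which distinguishes genuinely corrupted stored data from a display-format issue. Table and column names are as in the original question:

```sql
-- Show the complete stored date, including the century bytes,
-- to rule out a client display-format artifact.
SELECT create_date,
       TO_CHAR(create_date, 'YYYY-MM-DD HH24:MI:SS') AS full_date,
       DUMP(create_date)                             AS internal_bytes
FROM   shipment_table;
```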
Regards.
Satyaki De. -
How to query data from Oracle, MySQL, and MSSQL?
For an environment consisting of Oracle 11g/12c Enterprise Edition, MySQL 5.7 Community Edition, and MSSQL 2008/2012 Standard/Enterprise Edition, is there any major issue using DG4ODBC to query data from all 3 platforms?
Is there other free alternatives?
If the queried data is mostly contained in MySQL or MSSQL, will it be more efficient to query from MySQL or MSSQL?
If yes, any suggestion of how to do it in those platforms? I know MSSQL can use a linked server, but it is quite slow.
mkirtley-Oracle wrote:
Hi Ed,
It is semantics. By multiple instances I mean you have the gateway installed in an ORACLE_HOME which has 1 listener. However, if you are connecting to different non-Oracle databases or different individual databases of that non-Oracle database then you need multiple gateway instances for each database being connected. I did not mean that you need a gateway installed in a separate ORACLE_HOME for each non-Oracle database to which you are connecting.
Each of these would have a separate instance file within that ORACLE_HOME/hs/admin directory with the connection details for the non-Oracle database to which that instance connects. So, you would have -
initgtw1.ora - connects to MySQL
initgtw2.ora - connects to the SQL*Server northwind database
initgtw3.ora - connects to the SQL*Server test database
etc.
Each of these instances would have a separate entry in the gateway listener.ora.
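For context, a gateway instance file is just a small parameter file. A sketch of what initgtw2.ora might contain for the SQL*Server northwind database (the host, port, and parameter values are assumptions to adapt to your setup):

```
# initgtw2.ora -- gateway instance for the SQL*Server northwind database
HS_FDS_CONNECT_INFO=sqlhost.example.com:1433//northwind
HS_FDS_TRACE_LEVEL=OFF
```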
In MOS have a look at this note -
How To Add A New Database or Destination To An Existing Gateway Configuration (Doc ID 1304573.1)
Regards,
Mike
Ah yes, we are in agreement, it was just semantics. Thanks. -
Cannot load any data from Oracle 11g database
Hi Gurus!
We using SAP BI 7.0.
We have a source system, which is an Oracle data warehouse based on an Oracle 11g database.
We created the source system in BI without any problem.
The connection to the Oracle database is open.
We created a data source in BI (trn. RSA1), and when we try to Read Preview Data (Display data source, Preview tab) we can't see anything.
The system is working; in trn. SM50 we see the process is running, but we can't see any data (we waited more than 5,000 sec.).
When we tried a data load from the source system, we got a short dump with a time-out (after 50,000 sec.).
Summarize:
- the connection to the source system is OK,
- the process is running
- we can't see any Preview data
Can somebody help us?
Thank you.
Best regards,
Gergely Gombos
We really need to know what errors or warnings the Cache Agent is reporting in the TimesTen ttmesg.log files when an autorefresh fails, to be able to give advice. If the size of the datastore segment is 512MB (though you don't say how that is divided between Perm, Temp and Log Buffer) but you're only using 30MB of Perm, then on the face of it, it's not a problem of running out of memory. You say autorefresh doesn't complete when "there are a lot of updates on cached tables" - do you mean updates being done directly on the rows in the TimesTen cache groups, by users? If so, it sounds to me like you have locking contention between the user updates and the autorefresh mechanism trying to bring new rows in from Oracle. Again, the ttmesg.log should spell this out.
-
SOS! Error loading data from Oracle 11g to Essbase using ODI
Hi all.
I want to load data from an Oracle database to Essbase using ODI.
I configured the physical and logical Hyperion Essbase connections successfully in Topology Manager, and got the Essbase structure of the BASIC app in DEMO.
The problems are:
1. When I try to view data by right-clicking on the Essbase table, I get:
java.sql.SQLException: Driver must be specified
at com.sunopsis.sql.SnpsConnection.a(SnpsConnection.java)
at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java)
at com.sunopsis.sql.SnpsConnection.testConnection(SnpsConnection.java)
at com.sunopsis.graphical.frame.b.jc.bE(jc.java)
at com.sunopsis.graphical.frame.bo.bA(bo.java)
at com.sunopsis.graphical.frame.b.ja.dl(ja.java)
at com.sunopsis.graphical.frame.b.ja.<init>(ja.java)
at com.sunopsis.graphical.frame.b.jc.<init>(jc.java)
I got an answer from Oracle Support saying it's OK, just ignore it. Then the second problem appeared.
2. I created an interface between the Oracle database and Essbase, selected the option "staging area different from target" (meaning the staging area is created in the Oracle database), used IKM SQL to Hyperion Essbase (METADATA), and executed the interface:
org.apache.bsf.BSFException: exception from Jython:
Traceback (innermost last):
File "<string>", line 61, in ?
com.hyperion.odi.essbase.ODIEssbaseException: Invalid value specified [RULES_FILE] for Load option [null]
at com.hyperion.odi.essbase.ODIEssbaseMetaWriter.validateLoadOptions(Unknown Source)
at com.hyperion.odi.essbase.AbstractEssbaseWriter.beginLoad(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java)
at org.python.core.PyMethod.__call__(PyMethod.java)
at org.python.core.PyObject.__call__(PyObject.java)
at org.python.core.PyInstance.invoke(PyInstance.java)
at org.python.pycode._pyx1.f$0(<string>:61)
at org.python.pycode._pyx1.call_function(<string>)
at org.python.core.PyTableCode.call(PyTableCode.java)
at org.python.core.PyCode.call(PyCode.java)
at org.python.core.Py.runCode(Py.java)
at org.python.core.Py.exec(Py.java)
I'm very confused by this. Can anybody give me a solution or related docs?
Ethan.
Hi Ethan,
You always need the driver to be present inside your <ODI installation directory>\drivers folder, for both the target and the source. Because ojdbc14.jar, the driver for Oracle, is already present inside the drivers folder, we don't have to do anything for it. But for Essbase you need a driver; if you haven't already done this, try it.
Hope it helps...
Regards
Susane -
Query Data From Oracle to Excel
Dear all,
Please guide me to solve.
Regards
Dharma
The question's caption is: Data From Oracle to Excel
While you are asking :
data in Excel (Windows) moved to an Oracle database table (Linux x64)
Anyway, if your question is about exporting data from Excel to Oracle, then the thread below may be of interest; it says that you have to install DG4ODBC on Windows and use the ODBC driver there.
When running DG4ODBC on Unix you need an ODBC driver on this Unix box which can connect to the MS Excel file. I'm not aware of any suitable driver so far that fulfills the ODBC level 3 standard required for DG4ODBC.
So instead of using DG4ODBC, get the Windows software of DG4ODBC and install it on a Windows machine. You can then connect from your Oracle database to the Dg4ODBC listener on Windows which then spawns the DG4ODBC process. This DG4ODBC process then uses the MS Excel ODBC driver to connect to the Excel sheet.
https://cn.forums.oracle.com/forums/thread.jspa?messageID=10466197
For this purpose there is separate forum too : [url https://forums.oracle.com/forums/forum.jspa?forumID=63] Heterogeneous Connectivity
Regards
Girish Sharma -
ACL error when sending email from Oracle 11g
Hi,
It returned something like "error...ACL security" when I tried to send email from Oracle 11g. Is there a security setting I need to open up in Oracle 11g? I used to send emails from Oracle 10g and never had a problem.
Thanks.
Andy
In Database 11g Oracle introduced Network Access Control Lists (ACLs) to protect the database from users using the many internet-capable packages such as UTL_INADDR, UTL_HTTP, UTL_TCP, etc.
Read all about it in the docs and look at the code demos here:
http://www.morganslibrary.org/library.html
under DBMS_NETWORK_ACL_... -
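To make the ACL answer above concrete, a minimal 11g setup for SMTP access might look like this (the ACL file name, principal, and host below are placeholders, not from the thread):

```sql
BEGIN
  -- Create an ACL granting the 'connect' privilege to user ANDY.
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'mail_access.xml',
    description => 'Allow SMTP connections for ANDY',
    principal   => 'ANDY',
    is_grant    => TRUE,
    privilege   => 'connect');
  -- Bind the ACL to the mail server host and port.
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl        => 'mail_access.xml',
    host       => 'smtp.example.com',
    lower_port => 25,
    upper_port => 25);
  COMMIT;
END;
/
```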
Low performance when connected users exceed 250 in OAS10g
hi,
in the OAS10g app server, when connected users are fewer than approximately 250-280 everything is good and there is no performance problem,
but when connected users exceed 280, server response to requests such as opening a form (without querying data from the db) takes 10-12 seconds.
in this case OS resources are: cpu usage <45% and mem usage <50%.
is it an HTTPServer performance issue?
how can i solve this problem?
thank in advance
carol.You can try to change the JVM for Forms Servlet, maybe add more Threats with more memory assigment and that this threats get distroyed after resolving certain number of request, also in Apache, you may want to increase the number of MaxClients Directive, as well the Max Connections in WebCache.
Greetings. -
I want to know the queries to find out the following:
What query would you use to determine the most popular column name in the tables of your database?
What query would you use to determine the attributes that are found in the dba_synonyms data dictionary view but not in the user_synonyms data dictionary view?
Column names and how many times each is used:
select COLUMN_NAME , count(COLUMN_NAME)
from dba_tab_columns
where owner not like '%SYS%'
group by COLUMN_NAME
having count(COLUMN_NAME) > 1
order by count(COLUMN_NAME)
The other request, I didn't understand it. -
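For what it's worth, the second request above (attributes present in dba_synonyms but not in user_synonyms) can be answered by comparing DICT_COLUMNS entries; a sketch:

```sql
-- Columns of DBA_SYNONYMS that USER_SYNONYMS lacks (e.g. OWNER).
SELECT column_name
FROM   dict_columns
WHERE  table_name = 'DBA_SYNONYMS'
MINUS
SELECT column_name
FROM   dict_columns
WHERE  table_name = 'USER_SYNONYMS';
```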
Optimizing performance when querying XML data
I have a table in my database containing information about persons. The table has an XMLType column with a lot of data about each person.
One of the things in there is a telephone number. What I now need to figure out is whether there are any duplicate phone numbers in there.
The XML basically looks like this (simplified example):
<DATAGROUP>
<PERSON>
<BUSINESS_ID>123</BUSINESS_ID>
<INITIALS>M.</INITIALS>
<NAME>Testperson</NAME>
<BIRTHDATE>1977-12-12T00:00:00</BIRTHDATE>
<GENDER>F</GENDER>
<TELEPHONE>
<COUNTRYCODE>34</COUNTRYCODE>
<AREACODE>06</AREACODE>
<LOCALCODE>4318527235</LOCALCODE>
</TELEPHONE>
</PERSON>
</DATAGROUP>
As a result I would need the pk_id of the table with the XMLType column in it and an id that's unique for the person (the business_id that's also somewhere in the XML).
I've conducted this query which will give me all telephone numbers and the number of times they occur.
SELECT OD.pk_ID,
tel.business_id ,
COUNT ( * ) OVER (PARTITION BY tel.COUNTRYCODE, tel.AREACODE, tel.LOCALCODE) totalcount
FROM xml_data od,
XMLTABLE ('/DATAGROUP/PERSON' PASSING OD.DATAGROUP
COLUMNS "COUNTRYCODE" NUMBER PATH '/PERSON/TELEPHONE/COUNTRYCODE',
"AREACODE" NUMBER PATH '/PERSON/TELEPHONE/AREACODE',
"LOCALCODE" NUMBER PATH '/PERSON/TELEPHONE/LOCALCODE',
"BUSINESS_ID" NUMBER PATH '/PERSON/BUSINESS_ID'
) tel
WHERE tel.LOCALCODE is not null --ignore persons without a tel nr
Since I am only interested in the telephone numbers that occur more than once, I used the above query as a subquery:
WITH q as (
SELECT OD.pk_ID,
tel.business_id ,
COUNT ( * ) OVER (PARTITION BY tel.COUNTRYCODE, tel.AREACODE, tel.LOCALCODE) totalcount
FROM xml_data od,
XMLTABLE ('/DATAGROUP/PERSON' PASSING OD.DATAGROUP
COLUMNS "COUNTRYCODE" NUMBER PATH '/PERSON/TELEPHONE/COUNTRYCODE',
"AREACODE" NUMBER PATH '/PERSON/TELEPHONE/AREACODE',
"LOCALCODE" NUMBER PATH '/PERSON/TELEPHONE/LOCALCODE',
"BUSINESS_ID" NUMBER PATH '/PERSON/BUSINESS_ID'
) tel
WHERE tel.LOCALCODE is not null) --ignore persons without a tel nr
SELECT pk_id, business_id
FROM q
WHERE totalcount > 1
Now this is working and giving me the right results, but the performance is dreadful with larger sets of data, and it even runs into errors like "LPX-00651 VM Stack overflow.".
What I see when I do an explain plan for the query is that there are things happening like "COLLECTION ITERATOR PICKLER FETCH PROCEDURE SYS.XQSEQUENCEFROMXMLTYPE", which seems to be something like the equivalent of a full table scan if I Google it.
Any ideas how I can speed up this query? are there maybe smarter ways to do this?
One thing to note is that the XMLType data is not indexed in any way. Is there a possibility to do this? And how? I read about it in the Oracle docs, but they were not very clear to me.
The "COLLECTION ITERATOR PICKLER FETCH" operation means that most likely the XMLType storage is BASICFILE CLOB, therefore greatly limiting the range of optimization techniques that Oracle could apply.
You can confirm what the current storage is by looking at the table DDL, as Jason asked.
CLOB storage is deprecated now in favor of SECUREFILE BINARY XML (the default in 11.2.0.2).
Migrating the column to BINARY XML should give you a first significant improvement in the query.
If the query is actually a recurring task, then it may further benefit from a structured XML index.
Here's a small test case :
create table xml_data nologging as
select level as pk_id, xmlparse(document '<DATAGROUP>
<PERSON>
<BUSINESS_ID>'||to_char(level)||'</BUSINESS_ID>
<INITIALS>M.</INITIALS>
<NAME>Testperson</NAME>
<BIRTHDATE>1977-12-12T00:00:00</BIRTHDATE>
<GENDER>F</GENDER>
<TELEPHONE>
<COUNTRYCODE>34</COUNTRYCODE>
<AREACODE>06</AREACODE>
<LOCALCODE>'||to_char(trunc(dbms_random.value(1,10000)))||'</LOCALCODE>
</TELEPHONE>
</PERSON>
</DATAGROUP>' wellformed) as datagroup
from dual
connect by level <= 100000 ;
create index xml_data_sxi on xml_data (datagroup) indextype is xdb.xmlindex
parameters (q'{
XMLTABLE xml_data_xtab '/DATAGROUP/PERSON'
COLUMNS countrycode number path 'TELEPHONE/COUNTRYCODE',
areacode number path 'TELEPHONE/AREACODE',
localcode number path 'TELEPHONE/LOCALCODE',
business_id number path 'BUSINESS_ID'
}');
call dbms_stats.gather_table_stats(user, 'XML_DATA');
SQL> set autotrace traceonly
SQL> set timing on
SQL> select pk_id
2 , business_id
3 , totalcount
4 from (
5 select t.pk_id
6 , x.business_id
7 , count(*) over (partition by x.countrycode, x.areacode, x.localcode) totalcount
8 from xml_data t
9 , xmltable(
10 '/DATAGROUP/PERSON'
11 passing t.datagroup
12 columns countrycode number path 'TELEPHONE/COUNTRYCODE'
13 , areacode number path 'TELEPHONE/AREACODE'
14 , localcode number path 'TELEPHONE/LOCALCODE'
15 , business_id number path 'BUSINESS_ID'
16 ) x
17 where x.localcode is not null
18 ) v
19 where v.totalcount > 1 ;
99998 rows selected.
Elapsed: 00:00:03.79
Execution Plan
Plan hash value: 3200397756
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 100K| 3808K| | 2068 (1)| 00:00:25 |
|* 1 | VIEW | | 100K| 3808K| | 2068 (1)| 00:00:25 |
| 2 | WINDOW SORT | | 100K| 4101K| 5528K| 2068 (1)| 00:00:25 |
|* 3 | HASH JOIN | | 100K| 4101K| 2840K| 985 (1)| 00:00:12 |
| 4 | TABLE ACCESS FULL| XML_DATA | 100K| 1660K| | 533 (1)| 00:00:07 |
|* 5 | TABLE ACCESS FULL| XML_DATA_XTAB | 107K| 2616K| | 123 (1)| 00:00:02 |
Predicate Information (identified by operation id):
1 - filter("V"."TOTALCOUNT">1)
3 - access("T".ROWID="SYS_SXI_0"."RID")
5 - filter("SYS_SXI_0"."LOCALCODE" IS NOT NULL)
Statistics
0 recursive calls
1 db block gets
2359 consistent gets
485 physical reads
168 redo size
2352128 bytes sent via SQL*Net to client
73746 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
99998 rows processed
If the above is still not satisfying then you can try structured storage (schema-based). -
Poor query performance when using date range
Hello,
We have the following ABAP code:
select sptag werks vkorg vtweg spart kunnr matnr periv volum_01 voleh
into table tab_aux
from s911
where vkorg in c_vkorg
and werks in c_werks
and sptag in c_sptag
and matnr in c_matnr
that is translated to the following Oracle query:
SELECT
"SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN 000000000100000000 AND 000000000999999999;
Because the field SPTAG is not enclosed in apostrophes, the Oracle query performs very badly. Below are the execution plans and their costs, with and without the apostrophes. Please help me understand why I am getting this behaviour.
##WITH APOSTROPHES
SQL> EXPLAIN PLAN FOR
2 SELECT
3 "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN '20101201' AND '20101231' AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
Explained.
SQL> SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
PLAN_TABLE_OUTPUT
| Id  | Operation                   | Name     | Rows | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT            |          |  932 | 62444 |    150  (1)|
|*  1 |  TABLE ACCESS BY INDEX ROWID| S911     |  932 | 62444 |    149  (0)|
|*  2 |   INDEX RANGE SCAN          | S911~VAC |  55M |       |      5  (0)|
Predicate Information (identified by operation id):
1 - filter("VKORG"='D004' AND "SPTAG">='20101201' AND
"SPTAG"<='20101231')
2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
"MATNR"<='000000000999999999')
##WITHOUT APOSTROPHES
SQL> EXPLAIN PLAN FOR
2 SELECT
3 "SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" , "PERIV" , "VOLUM_01" ,"VOLEH" FROM SAPR3."S911" WHERE "MANDT" = '003' AND "VKORG" = 'D004' AND "SPTAG" BETWEEN 20101201 AND 20101231 AND "MATNR" BETWEEN '000000000100000000' AND '000000000999999999';
SELECT PLAN_TABLE_OUTPUT FROM TABLE(DBMS_XPLAN.DISPLAY());
Explained.
SQL>
PLAN_TABLE_OUTPUT
| Id  | Operation                   | Name     | Rows | Bytes | Cost (%CPU)|
|   0 | SELECT STATEMENT            |          | 2334 |  152K |    150  (1)|
|*  1 |  TABLE ACCESS BY INDEX ROWID| S911     | 2334 |  152K |    149  (0)|
|*  2 |   INDEX RANGE SCAN          | S911~VAC |  55M |       |      5  (0)|
Predicate Information (identified by operation id):
1 - filter("VKORG"='D004' AND TO_NUMBER("SPTAG")>=20101201 AND
TO_NUMBER("SPTAG")<=20101231)
2 - access("MANDT"='003' AND "MATNR">='000000000100000000' AND
"MATNR"<='000000000999999999')
Best Regards,
Daniel G.
Volker,
Answering your question regarding the explain from ST05: as a quick workaround I created an index (S911~Z9), but I'd still like to solve this issue without the extra index, as the primary index would work OK as long as the date was correctly sent to Oracle as a string and not as a number.
SELECT
"SPTAG" , "WERKS" , "VKORG" , "VTWEG" , "SPART" , "KUNNR" , "MATNR" ,
"PERIV" , "VOLUM_01" , "VOLEH"
FROM
"S911"
WHERE
"MANDT" = :A0 AND "VKORG" = :A1 AND "SPTAG" BETWEEN :A2 AND :A3 AND "MATNR"
BETWEEN :A4 AND :A5
A0(CH,3) = 003
A1(CH,4) = D004
A2(NU,8) = 20101201 (NU means number correct?)
A3(NU,8) = 20101231
A4(CH,18) = 000000000100000000
A5(CH,18) = 000000000999999999
SELECT STATEMENT ( Estimated Costs = 10 , Estimated #Rows = 6 )
5 3 FILTER
Filter Predicates
5 2 TABLE ACCESS BY INDEX ROWID S911
( Estim. Costs = 10 , Estim. #Rows = 6 )
Estim. CPU-Costs = 247.566 Estim. IO-Costs = 10
1 INDEX RANGE SCAN S911~Z9
( Estim. Costs = 7 , Estim. #Rows = 20 )
Search Columns: 4
Estim. CPU-Costs = 223.202 Estim. IO-Costs = 7
Access Predicates Filter Predicates
The table originally includes the following indexes:
###S911~0
MANDT
SSOUR
VRSIO
SPMON
SPTAG
SPWOC
SPBUP
VKORG
VTWEG
SPART
VKBUR
VKGRP
KONDA
KUNNR
WERKS
MATNR
###S911~VAC
MANDT
MATNR
Number of entries: 61.303.517
DISTINCT VKORG: 65
DISTINCT SPTAG: 3107
DISTINCT MATNR: 2939 -
Facing low performance when iterating database records using a cursor
Hi ,
I inserted nearly 80,000,000 records into a database in 10 minutes, by reading a file whose size is nearly 800MB.
When I iterate the records using a cursor with the default lock mode, it takes nearly 1 hour.
My BDB details are as follows:
Environment : Non transactional , non locking
Database : Deferred write.
Cache : 80% of JVM ( -Xms=1000M -Xmx=1200m )
Could you please explain why it is taking such a long time ? did i make any mistakes on settings ?
Thanks
nvseenu
Edited by: nvseenu on Jan 15, 2009 5:47 AM
Hello Gary,
StoredMap is a convenience API wrapper for a Database. It has the same performance and multi-threading characteristics as a Database. You don't need to synchronize a StoredMap, or use Database to get better performance.
The lock conflicts are the thing to focus on here. This is unrelated to the topic discussed earlier in this thread.
How many threads are inserting and how many performing queries?
What other work, other than inserting and reading, are these threads performing?
Does any thread keep an iterator (which is a cursor) open?
How large are the data items in the map?
What is the resolution of the timestamp? Milliseconds?
I don't think the exception you posted is complete. Please post the full exception including the cause exception.
I can't tell from the exception but it looks like multiple insertion threads are conflicting with each other, not with the query threads. If you test only the insertions (no queries), do the lock conflicts still occur?
One possibility is that multiple insertions threads are using the same timestamp as the key. Only one thread will be able to access that key at a time, the others will wait. Even so, I don't understand why it's taking so long to perform the inserts. But you can easily make the key unique by appending a sequence number -- use a two part key {timestamp, sequence}.
Please upgrade to JE 3.3 in order to take advantage of the improvements and so we can better support you. We're not actively working with JE 3.2, which is very outdated now.
--mark