Oracle text - issue with contains query
Hello,
Need urgent help.
The following code in my procedure is giving me an error:
TYPE c_1 is ref cursor;
result_cursor c1;
i_text2 := 'NEW%';
open result_cursor for
'select /*+ INDEX_SS_DESC(e cad_addr_idx2 )*/
from cad_address
where
contains(text, {:i_text2}, 1) > 0
and rec_type in (1,2,3,4)
order by occur_count desc'
using
i_text2;
ORA-00936: missing expression
ORA-06512: at "AV_OWNER.MY_PROC", line 43
ORA-06512: at line 6
Oracle version is 11.2.0.3.0.
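For what it's worth, the ORA-00936 here usually has two causes visible in the snippet itself: the bind placeholder is wrapped in curly braces ({:i_text2} is not valid SQL), and the SELECT has no column list after the hint. A hedged sketch of a corrected block (the declarations are reconstructed from the fragments above, and the alias e is assumed so the hint can attach to the table):

```sql
DECLARE
  TYPE c_1 IS REF CURSOR;
  result_cursor c_1;            -- the post mixes c_1 and c1; one name is used here
  i_text2       VARCHAR2(100) := 'NEW%';
BEGIN
  OPEN result_cursor FOR
    'select /*+ INDEX_SS_DESC(e cad_addr_idx2) */ e.*
       from cad_address e
      where contains(text, :txt, 1) > 0      -- plain :bind, no curly braces
        and rec_type in (1, 2, 3, 4)
      order by occur_count desc'
    USING i_text2;
END;
/
```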
Thanks,
Check whether your table has a text index on the 'text' column. To know more about text indexes, go to
http://docs.oracle.com/cd/B19306_01/text.102/b14217/ind.htm
Also refer to the thread below, where someone faced issues with the CONTAINS clause:
ORA-20000: Oracle Text error: DRG-10599: column is not indexed
Similar Messages
-
Hi Friends,
I am having a problem with this query. The query is to fetch all the elements for the employees. The elements that need to be fetched are set up in the flex sets.
In the table fnd_flex_values there are columns start_date_active and end_date_active, and these are null. In the query below, when I use the condition
and trunc(sysdate) between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and NVL(ffv.end_Date_active,TO_DATE('31-DEC-4712','DD-MON-YYYY'))
the query fetches the results in 5 seconds,
but when I replace the same query with the statement
and to_date(to_char(ppa.date_earned,'DD-MON-YYYY'),'DD-MON-YYYY') between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and NVL(ffv.end_Date_active,TO_DATE('31-DEC-4712','DD-MON-YYYY'))
the query takes a lot of time; in fact, it times out.
Columns start_date_active and end_date_active are of DATE type. Can anyone say what the issue with this query is?
Here is the query when I use sysdate instead of the ppa.date_earned date:
select papf.person_id, papf.full_name, papf.employee_number
,petf.element_name, pivf.name,
prrv.result_value, ppa.date_earned,ppa.effective_date, to_char(ppa.date_earned,'DD-MON-YYYY') date_earned, paaf.assignment_id, petf.effective_start_date
,petf.effective_end_date, ffv.*
from per_all_people_f papf
,per_all_assignments_f paaf
,pay_payroll_actions ppa
,pay_assignment_actions paa
,pay_element_types_f petf
,pay_input_values_f pivf
,pay_run_results prr
,pay_run_result_values prrv
,per_person_type_usages_f pptuf
,per_person_types ppt
,fnd_flex_values ffv
,fnd_flex_value_sets ffvs
where 1=1
and papf.person_id = paaf.person_id
and trunc(ppa.date_earned) between trunc(papf.effective_start_date) and trunc(papf.effective_end_date)
and trunc(ppa.date_earned) between trunc(paaf.effective_start_date) and trunc(paaf.effective_end_date)
and paa.assignment_id = paaf.assignment_id
and paa.action_status='C'
and ppa.payroll_id = 61
--and ppa.consolidation_set_id = 108
and ppa.payroll_action_id = paa.payroll_action_id
and prr.assignment_action_id = paa.assignment_action_id
and prr.element_type_id = petf.element_type_id
-- and trunc(ppa.date_earned) between trunc(petf.effective_start_date) and trunc(petf.effective_end_date)
and trunc(sysdate) between petf.effective_start_date and petf.effective_end_date
and prrv.run_result_id = prr.run_result_id
and prrv.input_value_id = pivf.input_value_id
and prr.status in ('P','PA')
and pivf.name ='Pay Value'
and ppa.date_earned between pivf.effective_start_date and pivf.effective_end_date
and ppa.time_period_id=145
and paaf.person_id = pptuf.person_id
and pptuf.person_type_id = ppt.person_type_id
and ppt.user_person_type = nvl('Employee',ppt.user_person_type)
and trunc(ppa.date_earned) between pptuf.effective_start_date and pptuf.effective_end_date
and petf.element_name = ffv.flex_value
and ffv.flex_value_set_id = ffvs.flex_value_set_id
and ffv.enabled_flag ='Y'
and ffvs.flex_value_set_name='GROUP_ELEMENTS'
and trunc(sysdate) between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and NVL(ffv.end_Date_active,TO_DATE('31-DEC-4712','DD-MON-YYYY'))
-- and trunc(ppa.effective_date) between nvl(ffv.start_Date_active,trunc(ppa.effective_date)) and NVL(ffv.end_Date_active,trunc(ppa.effective_date))
-- and to_date(to_char(ppa.date_earned,'DD-MON-YYYY'),'DD-MON-YYYY') between NVL(ffv.start_Date_active,TO_DATE('01-JAN-1951','DD-MON-YYYY')) and
order by ffv.parent_flex_value_low, papf.employee_number;
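A side note on the slow variant discussed above: to_date(to_char(ppa.date_earned,'DD-MON-YYYY'),'DD-MON-YYYY') produces the same value as trunc(ppa.date_earned), but forces two conversions per row, whereas trunc(sysdate) is a constant the optimizer evaluates once. A hedged rewrite of the predicate, assuming day-level comparison is the intent (as in the rest of the query):

```sql
-- Same day-level semantics as the to_date(to_char(...)) form,
-- without the per-row character conversions:
and trunc(ppa.date_earned)
    between nvl(ffv.start_date_active, to_date('01-JAN-1951', 'DD-MON-YYYY'))
        and nvl(ffv.end_date_active,   to_date('31-DEC-4712', 'DD-MON-YYYY'))
```

If this is still slow, the difference is likely the join order chosen when the predicate depends on ppa rather than on a constant.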
Thanks
ʃʃp wrote:
/* Formatted on 2012/06/11 13:34 (Formatter Plus v4.8.8) */
SELECT DISTINCT fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_ir, no_of_supps_start_ir
FROM comm_exst_rp_comit_aggr_irview
WHERE fr_ir_code = 'AS01'
AND srta_period = 2
AND srta_year = 2011
AND param_rep_prd = '2011-2'
AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 1, 1)
GROUP BY fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_ir, no_of_supps_start_ir
UNION
SELECT DISTINCT fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_irda AS no_of_links_start_ir,
no_of_supps_start__irda AS no_of_supps_start_ir
FROM comm_exst_rp_comit_aggr_irview
WHERE fr_ir_code = 'AS01'
AND srta_period = 2
AND srta_year = 2011
AND param_rep_prd = '2011-2'
AND da_status = 'N'
AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 0, 0)
GROUP BY fr_ir_code, fr_fc_code1, product1, param_rep_prd, no_of_links_start_irda, no_of_supps_start__irda
Yes, UNION is one of the best solutions for handling two queries.
As per my understanding DECODE function use is:
DECODE (value,<if this value>,<return this value>,
<if this value>,<return this value>,
<otherwise this value>)
But I am a little bit confused about the two lines below in the query:
1)
AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 1, 1)
2)
AND DECODE (:bind_variable, 'All', 1, 0) = DECODE (:bind_variable, 'All', 0, 0)
Can you please tell me how the comparison is done using the DECODE function? -
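To sketch an answer to that question (the literals below stand in for the bind value): each side of the comparison collapses to 0 or 1, so the two conditions act as complementary on/off switches for the two UNION branches.

```sql
-- :bind_variable = 'All':
--   condition 1: DECODE('All','All',1,0) = DECODE('All','All',1,1)  =>  1 = 1  TRUE
--   condition 2: DECODE('All','All',1,0) = DECODE('All','All',0,0)  =>  1 = 0  FALSE
-- :bind_variable = anything else, e.g. 'X':
--   condition 1: DECODE('X','All',1,0)   = DECODE('X','All',1,1)    =>  0 = 1  FALSE
--   condition 2: DECODE('X','All',1,0)   = DECODE('X','All',0,0)    =>  0 = 0  TRUE
-- So the first branch of the UNION returns rows only when the bind is 'All',
-- and the second branch only when it is not.
SELECT 'first branch active' AS which FROM dual
 WHERE DECODE('All', 'All', 1, 0) = DECODE('All', 'All', 1, 1);
```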
Issues with Bex query structures and Crystal Reports/Webi
Hi experts,
I'm having an issue with Bex Query structures and nulls. I've built a Crystal Report against a Bex query that uses a Bex Query structure. The structure looks like the following
Budget $
Budget %
Actual $
Actual %
Budget YTD
etc
if I drag the structure into the Crystal Report detail section with a key figure it displays like this
Budget $ <null>
Budget % <null>
Actual $ 300
Actual % 85
Budget YTD 250
the null values are displayed (and this is what is required). However if I filter using a Record selection or group on a profit centre then the nulls along with the associated structure component are not displayed.
Actual $ 300
Actual % 85
Budget YTD 250
Webi is also behaving similarly. Can anyone explain why the above is happening and suggest a solution either on the Bex side of things or on the Crystal Reports side of things? I'm confused as to why nulls are displayed in the first example and not the second.
Business Objects Edge 3.1 SP2
SAP Int Kit SP2
OS: Linux
BW 701 Level 6
Crystal Reports 2008 V1
Thanks
Keith
Hi,
Crystal Reports and Web Intelligence will only show data which is in the cube. You could have an actual 0 or null entry without grouping, but by changing the selection/grouping in the report, the data no longer includes such an entry.
ingo -
Performance issue with insert query !
Hi ,
I am using dbxml-2.4.16; my node-storage container is loaded with a large document (a 54 MB XML).
My document basically contains around 65k records under one parent node (65k child nodes for a single parent). I need to insert more records into my DB; my insert XQuery takes a lot of time (~23 sec) to insert one entry through the command line and around 50 sec through code.
My container is indexed with "node-attribute-equality-string". The insert query I used:
insert nodes <NS:sampleEntry mySSIAddress='70011' modifier = 'create'><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
If I modify my query with
into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:sampleTable/NS:sampleEntry[@mySSIAddress='1']
instead of
into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable)
the time taken is reduced by only 8 seconds.
I have also tried to use insert "after", "before", "as first", "as last", but there is no difference in performance.
Is anything wrong with my query? What should be the expected time to insert one record into a DB of 65k records?
Does anybody have any idea regarding this performance issue?
Kindly help me out.
Thanks,
Kapil.
Hi George,
Thanks for your reply.
Here is the info you requested,
dbxml> listIndexes
Index: unique-node-metadata-equality-string for node {http://www.sleepycat.com/2002/dbxml}:name
Index: node-attribute-equality-string for node {}:mySSIAddress
2 indexes found.
dbxml> info
Version: Oracle: Berkeley DB XML 2.4.16: (October 21, 2008)
Berkeley DB 4.6.21: (September 27, 2007)
Default container name: n_b_i_f_c_a_z.dbxml
Type of default container: NodeContainer
Index Nodes: on
Shell and XmlManager state:
Not transactional
Verbose: on
Query context state: LiveValues,Eager
The insert query with the update takes ~32 sec (shown below):
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS';insert nodes <NS:sampleEntry mySSIAddress='70000' modifier = 'create' ><NS:sampleIPZone1Address>AABBCCDD</NS:sampleIPZone1Address><NS:myICMPFlag>1</NS:myICMPFlag><NS:myIngressFilter>1</NS:myIngressFilter><NS:myReadyTimer>4</NS:myReadyTimer><NS:myAPNNetworkID>ggsntest</NS:myAPNNetworkID><NS:myVPLMNFlag>2</NS:myVPLMNFlag><NS:myDAC>100</NS:myDAC><NS:myBcastLLIFlag>2</NS:myBcastLLIFlag><NS:sampleIPZone2Address>00000000</NS:sampleIPZone2Address><NS:sampleIPZone3Address>00000000</NS:sampleIPZone3Address><NS:sampleIPZone4Address>00000000</NS:sampleIPZone4Address><NS:sampleIPZone5Address>00000000</NS:sampleIPZone5Address><NS:sampleIPZone6Address>00000000</NS:sampleIPZone6Address><NS:sampleIPZone7Address>00000000</NS:sampleIPZone7Address></NS:sampleEntry> into doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
Time in seconds for command 'query': 32.5002
and the query without the update part takes ~14 sec (shown below):
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//NS:NS//NS:sampleTable"
Time in seconds for command 'query': 13.7289
The query :
time query "declare namespace foo='MY-SAMPLE';declare namespace NS='NS'; doc('dbxml:/n_b_i_f_c_a_z.dbxml/doc_Running-SAMPLE')//PMB:sampleTable/PMB:sampleEntry[@mySSIAddress='1000']"
Time in seconds for command 'query': 0.005375
is very fast.
Updating the document seems to consume most of the time.
Regards,
Kapil. -
Oracle text performance with context search indexes
Search performance using context index.
We intend to move our search engine to a new one based on Oracle Text, but we are running into some bad performance issues when searching.
Our application allows the user to search stored documents by name, object identifier and annotations(formerly set on objects).
For example, suppose I want to find a document named ImportSax2.c. Depending on the parameters the user sets, our search engine formats one of the following queries:
1) If the user explicitly asks for a search by document name, the query is the following =>
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;
2) If the user doesn't specify any extra parameters, the query is the following =>
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0;
Oracle Text needs only around 7 seconds to answer the second query, whereas it needs around 50 seconds to answer the first one.
Here is a part of the sql script used for creating the Oracle Text index on the column OBJFIELDURL
(this column stores a path to an xml file containing properties that have to be indexed for each object) :
begin
Ctx_Ddl.Create_Preference('wildcard_pref', 'BASIC_WORDLIST');
ctx_ddl.set_attribute('wildcard_pref', 'wildcard_maxterms', 200) ;
ctx_ddl.set_attribute('wildcard_pref','prefix_min_length',3);
ctx_ddl.set_attribute('wildcard_pref','prefix_max_length',6);
ctx_ddl.set_attribute('wildcard_pref','STEMMER','AUTO');
ctx_ddl.set_attribute('wildcard_pref','fuzzy_match','AUTO');
ctx_ddl.set_attribute('wildcard_pref','prefix_index','TRUE');
ctx_ddl.set_attribute('wildcard_pref','substring_index','TRUE');
end;
begin
ctx_ddl.create_preference('doc_lexer_perigee', 'BASIC_LEXER');
ctx_ddl.set_attribute('doc_lexer_perigee', 'printjoins', '_-');
ctx_ddl.set_attribute('doc_lexer_perigee', 'BASE_LETTER', 'YES');
ctx_ddl.set_attribute('doc_lexer_perigee','index_themes','yes');
ctx_ddl.create_preference('english_lexer','basic_lexer');
ctx_ddl.set_attribute('english_lexer','index_themes','yes');
ctx_ddl.set_attribute('english_lexer','theme_language','english');
ctx_ddl.set_attribute('english_lexer', 'printjoins', '_-');
ctx_ddl.set_attribute('english_lexer', 'BASE_LETTER', 'YES');
ctx_ddl.create_preference('german_lexer','basic_lexer');
ctx_ddl.set_attribute('german_lexer','composite','german');
ctx_ddl.set_attribute('german_lexer','alternate_spelling','GERMAN');
ctx_ddl.set_attribute('german_lexer','printjoins', '_-');
ctx_ddl.set_attribute('german_lexer', 'BASE_LETTER', 'YES');
ctx_ddl.set_attribute('german_lexer','NEW_GERMAN_SPELLING','YES');
ctx_ddl.set_attribute('german_lexer','OVERRIDE_BASE_LETTER','TRUE');
ctx_ddl.create_preference('japanese_lexer','JAPANESE_LEXER');
ctx_ddl.create_preference('global_lexer', 'multi_lexer');
ctx_ddl.add_sub_lexer('global_lexer','default','doc_lexer_perigee');
ctx_ddl.add_sub_lexer('global_lexer','german','german_lexer','ger');
ctx_ddl.add_sub_lexer('global_lexer','japanese','japanese_lexer','jpn');
ctx_ddl.add_sub_lexer('global_lexer','english','english_lexer','en');
end;
begin
ctx_ddl.create_section_group('axmlgroup', 'AUTO_SECTION_GROUP');
end;
drop index ADSOBJ_XOBJFIELDURL force;
create index ADSOBJ_XOBJFIELDURL on ADSOBJ(OBJFIELDURL) indextype is ctxsys.context
parameters
('datastore ctxsys.file_datastore
filter ctxsys.inso_filter
sync (on commit)
lexer global_lexer
language column OBJFIELDURLLANG
charset column OBJFIELDURLCHARSET
format column OBJFIELDURLFORMAT
section group axmlgroup
Wordlist wildcard_pref
Oracle created a table named DR$ADSOBJ_XOBJFIELDURL$I, which now contains around 25 million records.
ADSOBJ is the table containing information about our documents; OBJFIELDURL is the column that holds the path to the XML file containing the data to index. That file looks like this:
<?xml version="1.0" encoding="UTF-8" ?>
<fields>
<OBJNAME><![CDATA[NomLnk_177527o.jpgp]]></OBJNAME>
<OBJREM><![CDATA[Z_CARACT_141]]></OBJREM>
<OBJID>295926o.jpgp</OBJID>
</fields>
Can someone tell me how I can make this kind of request run faster?
"select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0;"
Below are the execution plans for the two requests:
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c WITHIN objname' , 1 ) > 0
PLAN_TABLE_OUTPUT
| Id | Operation |Name |Rows |Bytes |Cost (%CPU)|
| 0 | SELECT STATEMENT | |1272 |119K | 4 (0) |
| 1 | TABLE ACCESS BY INDEX ROWID |ADSOBJ |1272 |119K | 4 (0) |
| 2 | DOMAIN INDEX |ADSOBJ_XOBJFIELDURL | | | 4 (0) |
Note
- 'PLAN_TABLE' is old version
Executed in 2 seconds
select objid FROM ADSOBJ WHERE CONTAINS( OBJFIELDURL , 'ImportSax2.c' , 1 ) > 0
PLAN_TABLE_OUTPUT
| Id |Operation |Name |Rows |Bytes |Cost (%CPU)|
| 0 | SELECT STATEMENT | |1272 |119K | 4 (0) |
| 1 | TABLE ACCESS BY INDEX ROWID |ADSOBJ |1272 |119K | 4 (0) |
| 2 | DOMAIN INDEX |ADSOBJ_XOBJFIELDURL | | | 4 (0) |
Sorry for the result formatting, I can't get it "easily" readable :( -
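If the tag set in the indexed XML is fixed (OBJNAME, OBJREM, OBJID), one commonly suggested alternative to AUTO_SECTION_GROUP is a basic section group with field sections, which are generally cheaper for WITHIN queries than zone sections. A hedged sketch (the group and section names are illustrative, not from the post):

```sql
BEGIN
  -- Field sections store section text in a form that makes
  -- 'term WITHIN section' queries cheaper than zone sections.
  ctx_ddl.create_section_group('fieldgroup', 'BASIC_SECTION_GROUP');
  ctx_ddl.add_field_section('fieldgroup', 'objname', 'OBJNAME', TRUE);
  ctx_ddl.add_field_section('fieldgroup', 'objrem',  'OBJREM',  TRUE);
END;
/
-- Rebuild the index with "section group fieldgroup" in its parameters;
-- the search-by-name query then keeps the same shape:
--   CONTAINS(OBJFIELDURL, 'ImportSax2.c WITHIN objname', 1) > 0
```

This trades flexibility (only the declared sections are searchable WITHIN) for query speed, so it only fits if the section list is known up front.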
Oracle text catsearch sub index query
Hello,
I wonder if you can help me with a query about Oracle Text Catsearch.
I have a database which has 10Gb of data.
There is a text column in the database on which I have to find a partial match on the data contained in it
I have indexed this column with a CTXSYS.CTXCAT index.
In addition, I have added a sub-index to the index set for a date field and ran EXEC DBMS_STATS.GATHER_TABLE_STATS to make sure the query execution path is optimised.
Here's my question:
How can I make sure that the date sub-query always runs before the partial match on the text column?
Caveat: I am a programmer, not a DBA, but I've ended up doing some databasey type stuff; apologies if the question is thick.
Cheers
Mark
p.s. Performance is good, but I have a feeling that the Date subquery is not being used as efficiently as it should be (the subquery should massively reduce the result set to be searched for the partial match).
You can't - ctxcat doesn't support the "functional invocation" which would be needed if another index is used first. So reducing the set of docs to index doesn't help.
If you can find a way to denormalize the information used in the sub-query such that it can be included in the main query index set, that should help performance considerably. -
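In CTXCAT terms, that denormalization usually means putting the column used in the sub-query into the index set, so CATSEARCH can apply it in its structured clause. A rough sketch under assumed names (table, column, and index-set names are all illustrative):

```sql
BEGIN
  -- The index set lets CTXCAT evaluate structured predicates
  -- together with the text match, instead of filtering afterwards.
  ctx_ddl.create_index_set('docs_iset');
  ctx_ddl.add_index('docs_iset', 'doc_date');
END;
/
CREATE INDEX docs_cat_idx ON docs(text_col)
  INDEXTYPE IS ctxsys.ctxcat
  PARAMETERS ('index set docs_iset');

-- The date restriction then moves into CATSEARCH's third argument:
SELECT id
  FROM docs
 WHERE CATSEARCH(text_col, 'partial*',
                 'doc_date >= ''01-JAN-2012'' ORDER BY doc_date') > 0;
```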
Hello, everyone.
I am having issues with running a DELETE statement on an Oracle 10g database.
DELETE
FROM tableA
WHERE ID in (1,2,3)
If there is only one ID in the IN clause, it works. But if more than one ID is supplied, I get an "SQL command not properly ended" error message. Here is the query as CF:
DELETE
FROM TRAINING
WHERE userID = <cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#trim(form.userID)#">
AND TRAINING_ID in <cfqueryparam value="#form.trainingIDs#" cfsqltype="CF_SQL_INTEGER" list="yes">
Anyone work with Oracle that can help me with this? I'm an experienced MS-SQL developer; Oracle is new to me.
Thanks,
^_^
Never mind, a co-worker just told me that I still have to use parentheses around the values for the IN clause.
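In other words, the SQL that Oracle actually receives must have the list wrapped in parentheses; a minimal illustration with literal values standing in for the cfqueryparam tags:

```sql
-- Oracle's IN clause requires parentheses around the value list:
DELETE FROM TRAINING
 WHERE userID = 'jdoe'                 -- illustrative value
   AND TRAINING_ID IN (101, 102, 103); -- the list="yes" param expands inside ( )
```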
-
Oracle Text Help with XML column values
Hello. In addition to being new to Oracle Text, I am inheriting an Oracle Text application and have a couple of questions.
First, a context-based index has been set up on a CLOB column which contains an XML-formatted document. The AUTO_SECTION_GROUP parameter has been set to create zones for each tag of the XML document. I have found that when using a browser to display the content of the CLOB, some of the column values have trouble displaying, and I receive an XML processing error. I believe this is because some of the XML document rows contain URLs that are not wrapped in CDATA sections. In any case, if the browser has trouble displaying the XML, will Oracle Text have trouble indexing the XML and creating the section group zones?
Second, I understand that the NOT operator takes a right operand term and a left operand term. Can either of the terms be the result of the WITHIN operator, i.e. "dogs not (cats within animals)"?
Thank you.
I bet you just whipped that out, and I thank you with all my heart; it's amazing to me how many ways I tried to do what you did.
Thanks
I have a second question relating to the same problem, and that is referencing the 'over' state. Currently, I can write 'text' into the text field and see what I have coming in from XML in its place during the 'up' state.
However, when the timeline hits the 'over' state, the text field will display nothing, or 'text' if I have that written in. I suspect that I am not referencing the 'over' state correctly. Should I add one line of code referencing the text field, and not just the button, while in the over state?
Update Results not Displayed in Oracle Text search with Transactional Index
Hi,
I am working on a solution utilising Oracle Text to give me a probable list of matching records. The problem is that the table I am searching is prepopulated with seed data, and the application we are building assigns a record and updates the details (columns) against it. This detail is what we search on, using a multi-column datastore index which is refreshed every hour and also has the TRANSACTIONAL parameter specified. Unfortunately, the transactional index does not pick up the updated details; it only seems to work if I insert a new record (which will never happen). This to me sounds like a bug. Any assistance would be greatly appreciated.
Barbara,
I think you may have alluded to my problem. I haven't updated the "dummy" column.
The table structure is as follows:
CREATE TABLE WAGN (
WAGN VARCHAR2(8) NOT NULL PRIMARY KEY,
last_name VARCHAR2(240),
first_name VARCHAR2(240),
middle_name VARCHAR2(240),
date_of_birth DATE,
gender VARCHAR2(1),
status VARCHAR2(1) NOT NULL,
signature RAW(64));
The preference creation is:
BEGIN
ctx_ddl.create_preference('WAGN_NAME_SRCH', 'MULTI_COLUMN_DATASTORE');
ctx_ddl.set_attribute('WAGN_NAME_SRCH', 'columns', 'last_name, first_name, middle_name, date_of_birth, gender');
END;
The Index Creation statement is:
CREATE INDEX wagn_srch_idx1 ON WAGN(signature) --Dummy Column
INDEXTYPE IS ctxsys.CONTEXT
PARAMETERS ('DATASTORE WAGN_NAME_SRCH SYNC(EVERY "SYSDATE+60/24/60" PARALLEL 10) TRANSACTIONAL');
And a typical update statement is (contained with PL/SQL):
UPDATE WAGN
SET status = x_wagn_assigned_status,
last_name = p_employee_details.last_name,
first_name = p_employee_details.first_name,
middle_name = p_employee_details.middle_name,
date_of_birth = p_employee_details.date_of_birth,
gender = p_employee_details.gender
WHERE WAGN = l_wagn;
So my guess is that because the dummy column (signature) is not updated it is not being reflected in the transactional memory area. -
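If that guess is right, one hedged way to test it is to touch the indexed dummy column in the same UPDATE, which should mark the row as changed for the TRANSACTIONAL index (a sketch based on the statement above):

```sql
UPDATE WAGN
   SET status        = x_wagn_assigned_status,
       last_name     = p_employee_details.last_name,
       first_name    = p_employee_details.first_name,
       middle_name   = p_employee_details.middle_name,
       date_of_birth = p_employee_details.date_of_birth,
       gender        = p_employee_details.gender,
       -- Assigning the indexed dummy column to itself marks the row
       -- as changed for the CONTEXT index.
       signature     = signature
 WHERE WAGN = l_wagn;
```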
Perf tuning issue with a query involving between clause
Hi all,
I am running into performance issues when I try to execute this query. The query is given below, and it is doing a full table scan. I think the problem is with the BETWEEN clause, but I don't know how to resolve this issue.
SELECT psm.member_id
FROM pre_stg_member psm
WHERE psm.map_tran_agn BETWEEN :start_transaction_agn and :end_transaction_agn
and psm.partition_key = :p_partition_key;
I have a composite index on map_tran_agn and partition_key.
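One thing worth checking (a sketch, not a definitive fix): with an equality predicate on partition_key and a range predicate on map_tran_agn, a composite index is generally more effective with the equality column leading, so the BETWEEN scan is confined to a single partition_key value:

```sql
-- Equality column first, range column second:
CREATE INDEX pre_stg_member_ix
  ON pre_stg_member (partition_key, map_tran_agn);
```

Whether the optimizer uses it still depends on how selective the range is, so checking the execution plan after the change is the real test.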
Please help me in this regard.
Thanks,
Swami
Please consider the following when you post a question.
1. New features keep coming in every Oracle version, so please provide your Oracle DB version to get the best possible answer.
You can use the following query and copy-paste the output:
select * from v$version
2. This forum has a very good search feature. Please use it before posting your question, because for most questions that are asked the answer is already there.
3. We don't know your DB structure or how your data looks, so you need to let us know. The best way would be to give some sample data like this:
I have the following table called sales
with sales
as
select 1 sales_id, 1 prod_id, 1001 inv_num, 120 qty from dual
union all
select 2 sales_id, 1 prod_id, 1002 inv_num, 25 qty from dual
select *
from sales
4. Rather than telling what you want in words, it is easier when you give your expected output.
For example in the above sales table, I want to know the total quantity and number of invoice for each product.
The output should look like this
Prod_id sum_qty count_inv
1 145 2
5. Whenever you get an error message, post the entire error message, with the error number, the message text, and the line number.
6. The next thing is very important to remember: please post only well-formatted code. Unformatted code is very hard to read.
Your code's formatting gets lost when you post it in the Oracle Forum, so in order to preserve it you need to use the {noformat}{noformat} tags.
The usage of the tags is like this:
{noformat}<place your code here>{noformat}
7. If you are posting a *Performance Related Question*. Please read
{thread:id=501834} and {thread:id=863295}.
Following those guide will be very helpful.
8. Please keep in mind that this is a public forum. Here no question is URGENT.
So the use of words like *URGENT* or *ASAP* (As Soon As Possible) is considered rude.
I have an XE database 11.2 on Windows, with Oracle Text 11.2.0.2.0 and all objects verified, which has one imported Apex application that was exported from Oracle 10 without issue.
I used Data Pump to populate the underlying application tables, which was also successful, but I need to recreate the text indexes in the target 11.2 instance.
I used ctx_report.create_index_script to create the 4 scripts I needed, no issue.
When running the first script on the target instance, everything executed OK except for the "create index" section, and I can't seem to get around the issue.
The part of the script that fails is
create index "SCHEMA_OWNER"."A_INDEX1"
on "SCHEMA_OWNER"."A"
("A1")
indextype is ctxsys.context
parameters('
datastore "A_INDEX1_DST"
filter "A_INDEX1_FIL"
section group "A_INDEX1_SGP"
lexer "A_INDEX1_LEX"
wordlist "A_INDEX1_WDL"
stoplist "A_INDEX1_SPL"
storage "A_INDEX1_STO"')
Which returns the following errors:
ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-20000: Oracle Text error:
DRG-50857: oracle error in drvxtab.create_index_tables
ORA-00905: missing keyword
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 366
Can anyone help point me in the right direction? Going slightly mad.
Thanks for the help, it's much appreciated.
I have truncated the stoplist, but otherwise here is the complete script as created by ctx_report.create_index_script.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
begin
ctx_ddl.create_preference('"A_INDEX1_DST"','DIRECT_DATASTORE');
end;
begin
ctx_ddl.create_preference('"A_INDEX1_FIL"','AUTO_FILTER');
end;
begin
ctx_ddl.create_section_group('"A_INDEX1_SGP"','HTML_SECTION_GROUP');
end;
begin
ctx_ddl.create_preference('"A_INDEX1_LEX"','BASIC_LEXER');
end;
begin
ctx_ddl.create_preference('"A_INDEX1_WDL"','BASIC_WORDLIST');
ctx_ddl.set_attribute('"A_INDEX1_WDL"','STEMMER','ENGLISH');
ctx_ddl.set_attribute('"A_INDEX1_WDL"','FUZZY_MATCH','GENERIC');
end;
begin
ctx_ddl.create_stoplist('"A_INDEX1_SPL"','BASIC_STOPLIST');
ctx_ddl.add_stopword('"A_INDEX1_SPL"','Mr');
ctx_ddl.add_stopword('"A_INDEX1_SPL"','Mrs');
ctx_ddl.add_stopword('"A_INDEX1_SPL"','Ms');
ctx_ddl.add_stopword('"A_INDEX1_SPL"','might');
ctx_ddl.add_stopword('"A_INDEX1_SPL"','yours');
end;
begin
ctx_ddl.create_preference('"A_INDEX1_STO"','BASIC_STORAGE');
ctx_ddl.set_attribute('"A_INDEX1_STO"','R_TABLE_CLAUSE','lob (data) store as (cache)');
ctx_ddl.set_attribute('"A_INDEX1_STO"','I_INDEX_CLAUSE','compress 2');
end;
begin
ctx_output.start_log('A_INDEX1_LOG');
end;
create index "SCHEMA_OWNER"."A_INDEX1"
on "SCHEMA_OWNER"."A"
("A1")
indextype is ctxsys.context
parameters('
datastore "A_INDEX1_DST"
filter "A_INDEX1_FIL"
section group "A_INDEX1_SGP"
lexer "A_INDEX1_LEX"
wordlist "A_INDEX1_WDL"
stoplist "A_INDEX1_SPL"
storage "A_INDEX1_STO"')
begin
ctx_output.end_log;
end;
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
Hope this helps -
Issue with SQL Query with Presentation Variable as Data Source in BI Publisher
Hello All
I have an issue with creating a BIP report based on an OBIEE report that is done using direct SQL. There is one report in the OBIEE dashboard which is written using direct SQL. To create the pixel-perfect version of this report, I am creating a BIP data model using SQL Query as the data source. The physical query that is used to create the OBIEE report has several presentation variables in its WHERE clause.
select TILE4,max(APPTS), 'Top Count' from
SELECT c5 as division,nvl(DECODE (C2,0,0,(c1/c2)*100),0) AS APPTS,NTILE (4) OVER ( ORDER BY nvl(DECODE (C2,0,0,(c1/c2)*100),0)) AS TILE4,
c4 as dept,c6 as month FROM
select sum(case when T6736.TYPE = 'ATM' then T7608.COUNT end ) as c1,
sum(case when T6736.TYPE in ('Call Center', 'LSM') then T7608.CONFIRMED_COUNT end ) as c2,
T802.NAME_LEVEL_6 as c3,
T802.NAME_LEVEL_1 as c4,
T6172.CALENDARMONTHNAMEANDYEAR as c5,
T6172.CALENDARMONTHNUMBERINYEAR as c6,
T802.DEPT_CODE as c7
from
DW_date_DIM T6736 /* z_dim_date */ ,
DW_MONTH_DIM T6172 /* z_dim_month */ ,
DW_GEOS_DIM T802 /* z_dim_dept_geo_hierarchy */ ,
DW_Count_MONTH_AGG T7608 /* z_fact_Count_month_agg */
where ( T802.DEpt_CODE = T7608.DEPT_CODE and T802.NAME_LEVEL_1 = '@{PV_D}{RSD}'
and T802.CALENDARMONTHNAMEANDYEAR = 'July 2013'
and T6172.MONTH_KEY = T7608.MONTH_KEY and T6736.DATE_KEY = T7608.DATE_KEY
and (T6172.CALENDARMONTHNUMBERINYEAR between substr('@{Month_Start}',0,6) and substr('@{Month_END}',8,13))
and (T6736.TYPE in ('Call Center', 'LSM')) )
group by T802.DEPT_CODE, T802.NAME_LEVEL_6, T802.NAME_LEVEL_1, T6172.CALENDARMONTHNAMEANDYEAR, T6172.CALENDARMONTHNUMBERINYEAR
order by c4, c3, c6, c7, c5
))where tile4=3 group by tile4
When I try to view data after creating the data set, I get the following error:
Failed to load XML
XML Parsing Error: mismatched tag. Expected: . Location: http://172.20.17.142:9704/xmlpserver/servlet/xdo Line Number 2, Column 580:
Now when I replace those presentation variables (@{PV1}, @{PV2}) in the query with some hard-coded values, it works fine.
So I know it is the PVs that are causing this error.
How can I work around it?
There is no way to create an equivalent report without using the direct SQL.
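A common workaround (hedged, and the parameter names below are illustrative) is to replace the OBIEE presentation-variable syntax with BI Publisher data-model parameters, since the @{...} substitution is an OBIEE construct that the BIP SQL parser can trip over:

```sql
-- Define PV_D, MONTH_START and MONTH_END as parameters in the BIP data
-- model, then reference them as binds instead of @{...} substitutions:
where T802.NAME_LEVEL_1 = :PV_D
  and T6172.CALENDARMONTHNUMBERINYEAR
      between substr(:MONTH_START, 0, 6) and substr(:MONTH_END, 8, 13)
```

The dashboard would then pass the same values into the BIP report parameters rather than relying on presentation-variable expansion inside the SQL text.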
Thanks in advance
I have found a solution to this problem after some more investigation. Power Query does not support using a SQL statement as a source for Teradata (possibly the same for other sources as well). This is "by design" according to Microsoft; hence the problem is not caused by different Power Query versions as mentioned above. When designing the query in Power Query in Excel, make sure to use the interface/navigation to create the query and select tables, and NOT a SQL statement. A SQL statement as the source works fine on a client machine but not when scheduling it in Power BI in the cloud. I would like the functionality within Power Query and Excel to be the same as in Power BI in the cloud, and at least when there is a difference it would be nice to have documentation or more descriptive errors.
//Jonas -
Issue with Select Query in the Delivery userexit USEREXIT_SAVE_DOCUMENT
Hi All,
I am facing a strang issue with delivery userexit
1) I have a delivery user exit MV50AFZ1 - USEREXIT_SAVE_DOCUMENT.
2) In this user exit. I have written a select query as shown below
*Get the already delivered data
SELECT vbeln
vgbel
vgpos
erdat
erzet
lfimg
FROM lips
INTO TABLE t_lips
FOR ALL ENTRIES IN t_xlips_reference
WHERE vgbel EQ t_xlips_reference-vgbel
AND vgpos EQ t_xlips_reference-vgpos.
IF sy-subrc EQ 0.
ENDIF.
3) The purpose of the above select query is to find out whether any delivery has already been created for the reference sales order for which the current delivery is being created.
4) The issue is that for the FIRST delivery this select query should fail, because no delivery was created earlier and the LIPS table would not have the data. But I am seeing some data populated in the internal table. The data I see in the internal table is the data of XLIPS, which is the data that is about to be saved to the database after this user exit finishes.
5) The STRANGE point is that this works fine if I create the delivery using transaction VL01N. But if I create the delivery using the VL10A program, I face this issue.
<< Removed >>
Thanks,
Babu Kilari
Edited by: Rob Burbank on Jun 16, 2010 4:22 PM
Then why don't you add
AND vbeln NE likp-vbeln -
Trying to understand Text Filters with "contain"?
I set up a text filter with the option "contain" or "contains all" and a filter value of "Scouts", yet I get numerous pictures that don't match it. For example, a picture with the keywords "band, Flutes, Matthew Leach, ZORK" comes back from the filter request.
If I switch to "contains words", then the filter appears to work as expected.
This is version 2.6.
David
David, this does seem anomalous. Are you sure that you have cleared the value field? Have you selected the correct Folder for searching? I cannot see how images not containing the search term in the chosen field are returned.
Try a simple experiment. Select a folder, any folder; select one image and examine the keywords. Copy one of those keywords and search the whole folder for that term. Pictures with that keyword should appear. Check randomly to confirm the presence of the search term. If that works, your technique is OK.
Select 'none' in the search panel and start again with a folder known to contain 'scouts' and choose 'select'. You don't need to use 'select all' unless you include more than one search term. Hope this helps.
David -
Hi, I have a problem with filtering binary documents (.doc, .pdf, etc...). I use SQL*PLUS for remote access to Oracle 10.2 on Linux and I create table:
CREATE TABLE test (id NUMBER PRIMARY KEY, text VARCHAR2(100));
I insert to this table:
INSERT into test values(1, 'PATH/text1.doc');
INSERT into test values(2, 'PATH/text2.doc');
and then:
CREATE INDEX test_index ON test(text) indextype is ctxsys.context
parameters ('datastore ctxsys.file_datastore
filter ctxsys.auto_filter');
The message "Index created" is displayed, but the objects DR$test_index$I, DR$test_index$K, DR$test_index$N, DR$test_index$R and DR$test_index$P are empty, so the index was probably not populated.
I don't know where the bug is: in this code, or on the server (a wrong Oracle installation or restricted privileges). Do you know where the bug is?
The following is an excerpt from the 10g online documentation. Note the items that I have put in bold.
"FILE_DATASTORE
The FILE_DATASTORE type is used for text stored in files accessed through the local file system.
Note:
FILE_DATASTORE may not work with certain types of remote mounted file systems.
FILE_DATASTORE has the following attribute(s):
Table 2-4 FILE_DATASTORE Attributes
Attribute Attribute Value
path path1:path2:pathn
path
Specify the full directory path name of the files stored externally in a file system. When you specify the full directory path as such, you need only include file names in your text column.
You can specify multiple paths for path, with each path separated by a colon (:) on UNIX and a semicolon (;) on Windows. File names are stored in the text column in the text table.
If you do not specify a path for external files with this attribute, Oracle Text requires that the path be included in the file names stored in the text column.
PATH Attribute Limitations
The PATH attribute has the following limitations:
If you specify a PATH attribute, you can only use a simple filename in the indexed column. You cannot combine the PATH attribute with a path as part of the filename. If the files exist in multiple folders or directories, you must leave the PATH attribute unset, and include the full file name, with PATH, in the indexed column.
On Windows systems, the files must be located on a local drive. They cannot be on a remote drive, whether the remote drive is mapped to a local drive letter."
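When all the files live in one local directory, the PATH attribute can be set on a custom datastore preference instead of storing full paths in the column. A sketch (the preference name my_files and the directory are made up for illustration):

```sql
-- Hypothetical preference pointing FILE_DATASTORE at one local directory
BEGIN
  ctx_ddl.create_preference('my_files', 'FILE_DATASTORE');
  ctx_ddl.set_attribute('my_files', 'PATH', 'c:\oracle11g');
END;
/

CREATE INDEX test_index ON test(text) INDEXTYPE IS ctxsys.context
  PARAMETERS ('datastore my_files filter ctxsys.auto_filter');

-- With PATH set, the text column should hold simple file names only,
-- e.g. 'banana.pdf' rather than 'c:\oracle11g\banana.pdf'.
```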
With accessible paths and files, you get something like:
SCOTT@orcl_11g> CREATE TABLE test (id NUMBER PRIMARY KEY, text VARCHAR2(100));
Table created.
SCOTT@orcl_11g>
SCOTT@orcl_11g>
SCOTT@orcl_11g> INSERT into test values(1,'c:\oracle11g\banana.pdf');
1 row created.
SCOTT@orcl_11g> INSERT into test values(2,'c:\oracle11g\cranberry.pdf');
1 row created.
SCOTT@orcl_11g>
SCOTT@orcl_11g> CREATE INDEX test_index ON test(text) indextype is ctxsys.context
2 parameters ('datastore ctxsys.file_datastore
3 filter ctxsys.auto_filter');
Index created.
SCOTT@orcl_11g>
SCOTT@orcl_11g> select count(*) from dr$test_index$i
2 /
COUNT(*)
608
SCOTT@orcl_11g> In the following, I used a non-existent path and non-existent file name, which produces the same results as when you use a remote path that does not exist locally.
SCOTT@orcl_11g> CREATE TABLE test (id NUMBER PRIMARY KEY, text VARCHAR2(100));
Table created.
SCOTT@orcl_11g>
SCOTT@orcl_11g>
SCOTT@orcl_11g> INSERT into test values(3,'c:\nosuchpath\nosuchfile.pdf');
1 row created.
SCOTT@orcl_11g>
SCOTT@orcl_11g> CREATE INDEX test_index ON test(text) indextype is ctxsys.context
2 parameters ('datastore ctxsys.file_datastore
3 filter ctxsys.auto_filter');
Index created.
SCOTT@orcl_11g>
SCOTT@orcl_11g> select count(*) from dr$test_index$i
2 /
COUNT(*)
0
SCOTT@orcl_11g>
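When the $I table comes back empty like this even though "Index created" was reported, Oracle Text has usually logged a per-row error. Checking the user error view shows why each document failed to index; for an unreachable path you would expect file-access errors (DRG-11101 style):

```sql
-- Per-row indexing errors recorded by Oracle Text for the current user
SELECT err_index_name, err_textkey, err_text
  FROM ctx_user_index_errors;
```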