Add an InfoObject in a MultiProvider for a query
Hi Folks,
I have to create a report on a MultiProvider where I have to use the same characteristic twice. I can't use a selection, as I have key figures involved in other selections. So I have created a characteristic as a reference to the required characteristic and added it to the MultiProvider as an InfoProvider. But I am unable to see the added characteristic in a new query on the MultiProvider. Any ideas?
Thanks & regards,
Manoj.
Hi Bhanu/Srini,
I have created a characteristic as a reference to the original characteristic. I only want a characteristic because I need to restrict its values; I don't know whether this is possible for a navigational attribute. I have added this to both cubes under the MultiProvider (is it OK to add it to both cubes, since I need to specify the relationship in the Identification screen of Edit MultiProvider, or how else can this be achieved?). I have also added it to the MultiProvider itself, so this characteristic does not act as a separate InfoProvider. I am not getting any data in the query result, and I am also not able to run the other queries that exist on the MultiProvider. I am getting the errors below.
1) System error in program SAPLRRK0 and form RSRDR;SRRK0F30-01-
Message no. BRAIN299
2) Error 'Exception condition "NO_DESTINATION" raised.' in
RSDRC_CUBE_DATA_GET_RFC could not be caught
Now my questions are:
1) Can we add a reference characteristic to cubes that already hold data?
2) Can we add a single characteristic to a dimension of a cube without disturbing the existing setup?
Please suggest how to achieve this.
Regards,
Manoj.
Similar Messages
-
Help for a query to add columns
Hi,
I need help with a query where I should add each TableC value as an additional column.
Please suggest...
I have 3 tables (TableA, TableB, TableC). TableB stores TableA's Id, and TableC stores TableB's Id.
Considering Id of TableA.
Sample data
TableA :
ID NAME TABLENAME ETYPE
23 Name1 TABLE NAMEA Etype A
TableB :
ID A_ID RTYPE RNAME
26 23 RTYPEA RNAMEA
61 23 RTYPEB RNAMEB
TableC :
ID B_ID COMPNAME CONC
83 26 Comp Name AA 1.5
46 26 Comp Name BB 2.2
101 61 Comp Name CC 4.2
Scenario 1: As per the above sample data, put each TableC value as an additional column.
For an Id in TableA(23) where TableB contains 2 records of A_ID (26, 61) and TableC contains 2 records for 26 and 1 record for 61.
Output required: Put each TABLEC value as an additional column
TableA.NAME TableA.ETYPE TableB.RTYPE TableC_1_COMPNAME TableC_1_CONC TableC_2_COMPNAME TableC_2_CONC
Name1 EtypeA RTypeA Comp Name AA 1.5 Comp Name BB 2.2 so on..
Name1 EtypeA RTypeB Comp Name CC 4.2 NULL NULL
Scenario 2: If TableC contains ONLY 1 row for each Id in TableB, the output should be as follows.
Output:
TableA.NAME TableA.ETYPE TableB.RTYPE TableC_1_COMPNAME TableC_1_CONC
value value value value value

Hi,
Welcome to the forum!
Do you want the data from TableC presented
(1) in one column, or
(2) in several columns (a different column of results for each row in the original TableC)?
(1) Is called String Aggregation and is easier than (2).
The best way to do this is with a user-defined aggregate function (STRAGG) which you can copy from asktom.
Ignoring TableA for now, you could get what you want by saying
SELECT b.rtype
, STRAGG ( c.compname
|| ' '
|| c.conc
) AS c_data
FROM TableB b
JOIN TableC c ON b.id = c.b_id
GROUP BY b.rtype;

(2) Presenting N rows of TableC as if they were N columns of the same row is called a pivot. Search for "pivot" or "rows to columns" to find examples of how to do this.
The number of columns in a result set is hard-coded into the query. If you don't know ahead of time how many rows in TableC will match a row in TableB, you can:
(a) guess high (for example, hard-code 20 columns and let the ones that never contain a match be NULL) or,
(b) use Dynamic SQL to write a query for you, which has exactly as many columns as you need.
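To make the rows-to-columns idea concrete outside of SQL, here is a small Python sketch (my illustration, not part of the original answer) that pivots the TableB/TableC sample data from this thread using option (a): a hard-coded maximum column count, padded with None where a TableB row has fewer matches.

```python
# Pivot TableC rows into columns, option (a): fix a maximum column count
# and pad with None (NULL) when a TableB row has fewer matching rows.
MAX_COLS = 3  # "guess high": hard-code more columns than you expect to need

# Sample data from the thread: TableB rows keyed by id, TableC rows keyed by b_id
table_b = {26: 'RTYPEA', 61: 'RTYPEB'}
table_c = [(26, 'Comp Name AA', 1.5),
           (26, 'Comp Name BB', 2.2),
           (61, 'Comp Name CC', 4.2)]

def pivot(table_b, table_c, max_cols):
    rows = []
    for b_id, rtype in table_b.items():
        # collect the TableC rows that belong to this TableB row
        matches = [(name, conc) for (bid, name, conc) in table_c if bid == b_id]
        # pad to the fixed width so every output row has the same columns
        matches += [(None, None)] * (max_cols - len(matches))
        rows.append((rtype,) + tuple(v for pair in matches for v in pair))
    return rows

for row in pivot(table_b, table_c, MAX_COLS):
    print(row)
```

The same shape as the required output: one row per TableB record, with COMPNAME/CONC pairs spread across a fixed set of columns.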
The two scripts below contain basic information on pivots.
This first script is similar to what you would do for case (a):
-- How to Pivot a Result Set (Display Rows as Columns)
-- For Oracle 10, and earlier
-- Actually, this works in any version of Oracle, but the
-- "SELECT ... PIVOT" feature introduced in Oracle 11
-- is better. (See Query 2, below.)
-- This example uses the scott.emp table.
-- Given a query that produces three rows for every department,
-- how can we show the same data in a query that has one row
-- per department, and three separate columns?
-- For example, the query below counts the number of employees
-- in each department that have one of three given jobs:
PROMPT ========== 0. Simple COUNT ... GROUP BY ==========
SELECT deptno
, job
, COUNT (*) AS cnt
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY deptno
, job;
Output:
DEPTNO JOB CNT
20 CLERK 2
20 MANAGER 1
30 CLERK 1
30 MANAGER 1
10 CLERK 1
10 MANAGER 1
20 ANALYST 2
PROMPT ========== 1. Pivot ==========
SELECT deptno
, COUNT (CASE WHEN job = 'ANALYST' THEN 1 END) AS analyst_cnt
, COUNT (CASE WHEN job = 'CLERK' THEN 1 END) AS clerk_cnt
, COUNT (CASE WHEN job = 'MANAGER' THEN 1 END) AS manager_cnt
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY deptno;
-- Output:
DEPTNO ANALYST_CNT CLERK_CNT MANAGER_CNT
30 0 1 1
20 2 2 1
10 0 1 1
-- Explanation
(1) Decide what you want the output to look like.
(E.g. "I want a row for each department,
and columns for deptno, analyst_cnt, clerk_cnt and manager_cnt.")
(2) Get a result set where every row identifies which row
and which column of the output will be affected.
In the example above, deptno identifies the row, and
job identifies the column.
Both deptno and job happened to be in the original table.
That is not always the case; sometimes you have to
compute new columns based on the original data.
(3) Use aggregate functions and CASE (or DECODE) to produce
the pivoted columns.
The CASE expression will pick
only the rows of raw data that belong in the column.
If each cell in the output corresponds to (at most)
one row of input, then you can use MIN or MAX as the
aggregate function.
If many rows of input can be reflected in a single cell
of output, then use SUM, COUNT, AVG, STRAGG, or some other
aggregate function.
GROUP BY the column that identifies rows.
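As a language-neutral illustration (a sketch of mine, not part of the original scripts), the three steps above can be mimicked in plain Python on the same deptno/job data: each input row names its output row (deptno) and output column (job), and COUNT is the aggregate per cell.

```python
# The CASE-plus-GROUP-BY pivot expressed in plain Python, using the
# deptno/job counts from the example above.
emp = [  # (deptno, job) pairs, matching the counts shown in section 0
    (20, 'CLERK'), (20, 'CLERK'), (20, 'MANAGER'), (20, 'ANALYST'), (20, 'ANALYST'),
    (30, 'CLERK'), (30, 'MANAGER'),
    (10, 'CLERK'), (10, 'MANAGER'),
]
jobs = ['ANALYST', 'CLERK', 'MANAGER']  # the hard-coded output columns

def pivot_counts(rows, jobs):
    out = {}
    for deptno, job in rows:       # step 2: each row identifies its output row/column
        if job in jobs:            # the CASE filter: only rows belonging to a column
            counts = out.setdefault(deptno, {j: 0 for j in jobs})
            counts[job] += 1       # step 3: COUNT is the aggregate per cell
    return out                     # step 1's layout: one entry per department

print(pivot_counts(emp, jobs))
```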
PROMPT ========== 2. Oracle 11 PIVOT ==========
WITH e AS
( -- Begin sub-query e to SELECT columns for PIVOT
SELECT deptno
, job
FROM scott.emp
) -- End sub-query e to SELECT columns for PIVOT
SELECT *
FROM e
PIVOT ( COUNT (*)
FOR job IN ( 'ANALYST' AS analyst
, 'CLERK' AS clerk
, 'MANAGER' AS manager
)
);
NOTES ON ORACLE 11 PIVOT:
(1) You must use a sub-query to select the raw columns.
An in-line view (not shown) is an example of a sub-query.
(2) GROUP BY is implied for all columns not in the PIVOT clause.
(3) Column aliases are optional.
If "AS analyst" is omitted above, the column will be called 'ANALYST' (single-quotes included).
The second script, below, shows one way of doing a dynamic pivot in SQL*Plus:
How to Pivot a Table with a Dynamic Number of Columns
This works in any version of Oracle
The "SELECT ... PIVOT" feature introduced in Oracle 11
is much better for producing XML output.
Say you want to make a cross-tab output of
the scott.emp table.
Each row will represent a department.
There will be a separate column for each job.
Each cell will contain the number of employees in
a specific department having a specific job.
The exact same solution must work with any number
of departments and columns.
(Within reason: there's no guarantee this will work if you
want 2000 columns.)
Case 0 "Basic Pivot" shows how you might hard-code three
job types, which is exactly what you DON'T want to do.
Case 1 "Dynamic Pivot" shows how to get the right results
dynamically, using SQL*Plus.
(This can be easily adapted to PL/SQL or other tools.)
PROMPT ========== 0. Basic Pivot ==========
SELECT deptno
, COUNT (CASE WHEN job = 'ANALYST' THEN 1 END) AS analyst_cnt
, COUNT (CASE WHEN job = 'CLERK' THEN 1 END) AS clerk_cnt
, COUNT (CASE WHEN job = 'MANAGER' THEN 1 END) AS manager_cnt
FROM scott.emp
WHERE job IN ('ANALYST', 'CLERK', 'MANAGER')
GROUP BY deptno
ORDER BY deptno;
PROMPT ========== 1. Dynamic Pivot ==========
-- ***** Start of dynamic_pivot.sql *****
-- Suppress SQL*Plus features that interfere with raw output
SET FEEDBACK OFF
SET PAGESIZE 0
SPOOL p:\sql\cookbook\dynamic_pivot_subscript.sql
SELECT DISTINCT
', COUNT (CASE WHEN job = '''
|| job
|| ''' ' AS txt1
, 'THEN 1 END) AS '
|| job
|| '_CNT' AS txt2
FROM scott.emp
ORDER BY txt1;
SPOOL OFF
-- Restore SQL*Plus features suppressed earlier
SET FEEDBACK ON
SET PAGESIZE 50
SPOOL p:\sql\cookbook\dynamic_pivot.lst
SELECT deptno
@@dynamic_pivot_subscript
FROM scott.emp
GROUP BY deptno
ORDER BY deptno;
SPOOL OFF
-- ***** End of dynamic_pivot.sql *****
EXPLANATION:
The basic pivot assumes you know the number of distinct jobs,
and the name of each one. If you do, then writing a pivot query
is simply a matter of writing the correct number of ", COUNT ... AS ..."
lines, with the name entered in two places on each one. That is easily
done by a preliminary query, which uses SPOOL to write a sub-script
(called dynamic_pivot_subscript.sql in this example).
The main script invokes this sub-script at the proper point.
In practice, .SQL scripts usually contain one or more complete
statements, but there's nothing that says they have to.
This one contains just a fragment from the middle of a SELECT statement.
Before creating the sub-script, turn off SQL*Plus features that are
designed to help humans read the output (such as headings and
feedback messages like "7 rows selected."), since we do not want these
to appear in the sub-script.
Turn these features on again before running the main query.
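The same generate-then-splice idea can be sketched outside SQL*Plus. The following Python fragment (my illustration, not the author's script) builds the ", COUNT ... AS ..." lines from whatever distinct values exist, which is exactly what the SPOOL'd sub-script does:

```python
# Build the pivot-column fragment dynamically from the distinct job values,
# then splice it into the main query text -- mirroring the SPOOL technique.
def pivot_fragment(jobs):
    lines = []
    for job in sorted(set(jobs)):  # SELECT DISTINCT job ... ORDER BY
        lines.append(
            ", COUNT (CASE WHEN job = '%s' THEN 1 END) AS %s_CNT" % (job, job)
        )
    return "\n".join(lines)

# Stand-in for the result of "SELECT DISTINCT job FROM scott.emp"
jobs_in_table = ['CLERK', 'ANALYST', 'MANAGER', 'CLERK']

query = ("SELECT deptno\n"
         + pivot_fragment(jobs_in_table)
         + "\nFROM scott.emp\nGROUP BY deptno\nORDER BY deptno")
print(query)
```

The query text now has exactly one counting column per distinct job, however many there happen to be.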
-
Multiple Queries in Workbook - Refresh Screen Shows Up for Every Query
We have multiple queries in a workbook. All of these queries have the exact same selections for the variable selection screen. When all the queries are refreshed once, the selection screen used to show up once and all the queries are refreshed with the same selections.
We were on BI 7.0 and SP10. We recently moved to SP12. Since the SP12 installation, the multiple-query refresh pops up the selection screen for every query. It is nothing like a "multiple query refresh" at once, since the user has to click the "Execute" button for every single query. It is interesting to note that the selection screen only contains hierarchy variables and hierarchy node variables; the other variables of the selection screen do not show up. I couldn't find any OSS note on this topic. Please let me know if anyone has any comments on this issue. I will assign points to useful posts.

Hi Sameer,
Try updating the front-end patch to the latest version.
Using the BI 7.x Add-On for SAP GUI 7.10 - Requirements
hope this helps. -
How to add new infoobject to existing cube or dso?
Hi all,
Can we add a new InfoObject to an existing cube in BW 3.5? If yes, how can we add it? Please provide me the steps.
Thank you.
Sunil

Hi Sunil,
If you want to add a new InfoObject to an IC or DSO that is already holding data, then you need to make a copy of that particular DSO/IC and load the data into the dummy copy.
Now delete the data from the IC/DSO to which you want to add the new InfoObject. Once the data is deleted, it will allow you to add the new InfoObject to your IC/DSO.
Save and activate the IC/DSO.
Load the data back from the dummy DSO; from the next run onwards the newly added InfoObject will also get updated (historical data will not be updated if the field is newly added to the DataSource as well).
If you want the historical data for the newly added field, then you need to drop the complete data and re-extract it from the source.
Note: You cannot add new InfoObjects or change the existing InfoObjects if data exists in the IC/DSO.
Regards
KP
Edited by: prashanthk on Dec 31, 2010 10:54 AM -
How to add an InfoObject in a DSO on which an InfoSet is built
Hi,
How do I add an InfoObject to a DSO on which an InfoSet is built?
In general, common InfoObjects can be used for reporting purposes, right?
Do I need to add the new InfoObject in all the DSOs? My InfoSet consists of 6 DSOs.
If my understanding is wrong, what is the correct method of bringing InfoObjects from DSOs into an InfoSet?
Regards
Lucky

Hi,
You need to add these two fields in the DSO only. Make sure that you are mapping these two fields in the transformation. After that, drop the data and reload the DSO. Then make changes to the infoset.
By Component, what I mean:
I'll take your example only...
Say, for Material_Group, this data is not coming to the DSO DataSource. In this case, even if you add this object to your DSO, you won't be able to map it in the transformation, as the R/3 field is not available. Ultimately, you won't be able to load data for it, so it's worthless.
But you have a master data object called 0Material, and Material_Group is an attribute of this 0Material. In this case, you will add 0Material to the InfoSet directly, as a component of the InfoSet (from the InfoObject tab), and you will select Material_Group. This is how we generally access master data attributes.
Revert for more clarification.
Thanks...
Shambhu -
Passing parameters for a query through XML and capturing the response in the same
Hi All,
I have defined a RequestParameters object, and I am passing parameters for a query through XML and trying to capture the result in the same and send it back to the source. In this case I am sending the XML from Excel.
Below is my XML format.
<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Insert xmlns="http://tempuri.org/">
<dataContractValue>
<dsRequest>
<dsRequest>
<SOURCE></SOURCE>
<ACTION>Insert</ACTION>
<RequestParams>
<RequestParams>
<ACC_NO>52451</ACC_NO>
<EMP_CITY>HYD</EMP_CITY>
<EMP_NAME>RAKESH</EMP_NAME>
<EMP_CONTACT>99664</EMP_CONTACT>
<EMP_JOM>NOV</EMP_JOM>
<EMP_SALARY>12345</EMP_SALARY>
</RequestParams>
<RequestParams>
<ACC_NO>52452</ACC_NO>
<EMP_CITY>HYD</EMP_CITY>
<EMP_NAME>RAKESH</EMP_NAME>
<EMP_CONTACT>99664</EMP_CONTACT>
<EMP_JOM>NOV</EMP_JOM>
<EMP_SALARY>12345</EMP_SALARY>
</RequestParams>
</RequestParams>
</dsRequest>
<dsRequest>
<SOURCE></SOURCE>
<ACTION>Update</ACTION>
<RequestParams>
<RequestParams>
<ACC_NO>52449</ACC_NO>
<EMP_CITY>HYD1</EMP_CITY>
<EMP_NAME>RAKESH1</EMP_NAME>
<EMP_SALARY>1345</EMP_SALARY>
</RequestParams>
<RequestParams>
<ACC_NO>52450</ACC_NO>
<EMP_CITY>HYDer</EMP_CITY>
<EMP_NAME>RAKEH</EMP_NAME>
<EMP_SALARY>1235</EMP_SALARY>
</RequestParams>
</RequestParams>
</dsRequest>
</dsRequest>
</dataContractValue>
</Insert>
</s:Body>
</s:Envelope>
I have a list of dsRequest and RequestParams elements, where I can send any number of requests for Insert and Update. I have two XML elements defined in RequestParams, "RowsEffected" and "error", where the result will be captured and updated
into the response XML.
I have 6 parameters defined in RequestParams:
EMP_SALARY(int), ACC_NO(int), EMP_CITY(string), EMP_NAME(string), EMP_CONTACT(string), EMP_JOM(string)
My Question is:
When I am trying to build the response XML with the following code, the parameters which are not given in the request XML are also appearing in the response.
ResponseParams.Add(
    new dsResponse()
    {
        ACTION = OriginalParams[a].ACTION,
        SOURCE = OriginalParams[a].SOURCE,
        Manager = OriginalParams[a].Manager,
        RequestParams = OriginalParams[a].RequestParams
    });
Where the OriginalParams is dsRequest
Ex: In my update query I will only send three parameters, but in my response built with the above code, I am getting all the variables defined as int in the RequestParameters.
Is there any way I can avoid this and build the response with only the parameters given in the request?
Appreciate your help. Thanks,
Cronsey.

Hi Kristin,
My project is: the user will give the parameters in Excel, and using VBA, the values are captured and an XML is created in the above-mentioned format and sent to the web service for the Insert/Update.
I created a webservice which reads the values from <datacontract> and it consist of list of <dsRequests> where any number of Insert/Upate commands can be executed, with in which it contains a list of <RequestParams> for multiple insertion/Updation.
//function call
OriginalParams = generator.Function(query, OriginalParams);
where OriginalParams is List<dsRequest>
//inside function
command.Parameters.Add()// parameters adding
int val = command.ExecuteNonQuery();
After the execution, an XML element is added for the response part, and this is looped for all the RequestParams.
OriginalParams[i].Result.Add(
    new Result()
    {
        ERROR = "No Error",
        ROWS_EFFECTEFD = 1
    });
//once all the execution is done the response building part
for (int a = 0; a < OriginalParams.Count; a++)
{
    ResponseParams.Add(
        new dsResponse()
        {
            Result = OriginalParams[a].Result
        });
}
QUEST: When I am trying to build the response XML with the following code, the parameters which are not given in the request XML are also appearing in the response.
Ex: In my update query I will only send three parameters, but in my response built with the above code, I am getting all the variables defined as int in the RequestParameters.
Is there any way I can avoid this and build the response with only the parameters given in the request?
Appreciate your help. Thanks,
Cronsey. -
Can I use an OID rule for a query SQL LOV of BIP?
Hi. Can I use OID data (rules) for a query SQL LOV in BIP? E.g. filtering users/store.
Thank you.
R.

Hi,
I didn't look at the example, but if you want to secure your application then you should use container managed security. Read this .
Anyway, you could add this before return "good"; in your login_action()
FacesContext.getCurrentInstance().getExternalContext().getSessionMap().put("username", user);

Then, you can access this from anywhere in the application by using #{sessionScope.username}.
Pedja -
Pre-fill the OLAP cache for a query on Data change event of infoprovider
Hi Gurus,
I have to pre-fill the OLAP cache for a query which has bad performance.
I read a document, 'Periodic Jobs and Tasks in SAP BW',
which suggested some steps to do this.
I have created the setting for BEx Broadcasting for scheduling job execution upon data change in the InfoProvider.
Thereafter the document says: "an event has to be raised in the process chain which loads the data to this InfoProvider. When the process chain executes the process 'Trigger Event Data Change (for Broadcaster)', an event is raised to inform the Broadcaster that the query can be filled in the OLAP cache."
How can this be done? Please provide some proper steps.
Answers are always appreciated.
Thanks.

Hi,
You need to create a process chain, or use the existing process chain which you are using to load your current solution; just add the "Trigger Event Data Change" process type to the process chain, and inside it add the InfoProviders which are going to be affected.
Once you are done with this, go to the Broadcaster and create a new setting for that query; you will see the option for event data change in the InfoProvider, just choose that and create the settings.
hope it helps -
How to skip existing execution plan for a query
Hi,
I want to skip the existing execution plan for a query which I am executing often. I don't want it to use the same execution plan every time. Please let me know if there is any method to skip the existing execution plan.
Thanks in advance.......
Edited by: 900105 on Dec 1, 2011 4:52 AM

Change the query so it is syntactically different, but has the same semantics (meaning). That way the CBO will reparse it and you might get a new execution plan.
One simple way to do that is to add a dummy predicate ( 45=45) to the where clause. The predicate must be changed every time the query is executed ( 46=46 , 47=47 ,… ).
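A trivial way to automate that changing predicate (a sketch on my part, not Iordan's code) is to append a counter-based dummy condition to the SQL text before each execution, so the statement text is never the same twice:

```python
# Append a fresh dummy predicate (45=45, 46=46, ...) on each execution so the
# SQL text differs and the optimizer hard-parses the statement again.
import itertools

_counter = itertools.count(45)  # 45=45, 46=46, 47=47, ...

def with_dummy_predicate(sql):
    n = next(_counter)
    return "%s AND %d=%d" % (sql, n, n)

base = "SELECT * FROM emp WHERE deptno = 10"
print(with_dummy_predicate(base))  # ... AND 45=45
print(with_dummy_predicate(base))  # ... AND 46=46
```

Each call produces a syntactically different but semantically identical statement, which is exactly the trick described above.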
Iordan Iotzov
http://iiotzov.wordpress.com/ -
Different Explain Plan for the Same Query
DB Version : 11.2.0.3
OS Version : AIX 6
I have two queries (the difference between them is only the selection values, e.g. mesg_type '950' vs '548'). When I generate the explain plan, I get different output. Why? And why is the CPU time different each time?
First Query Statement :
INSERT INTO TempSearchResult (t_aid,
t_umidl,
t_umidh,
X_CREA_DATE_TIME_MESG)
SELECT z.aid,
z.mesg_s_umidl,
z.mesg_s_umidh,
z.mesg_crea_date_time
FROM ( SELECT m.aid,
m.mesg_s_umidl,
m.mesg_s_umidh,
m.mesg_crea_date_time
FROM RSMESG_ESIDE m
WHERE 1 = 1
AND m.mesg_crea_date_time BETWEEN TO_DATE (
'20120131 10:00:00',
'YYYYMMDD HH24:MI:SS')
AND TO_DATE (
'20120131 13:00:00',
'YYYYMMDD HH24:MI:SS')
AND m.mesg_frmt_name = 'Swift'
AND m.mesg_sender_x1 = 'SOGEFRPPXXX'
AND m.mesg_nature = 'FINANCIAL_MSG'
AND m.mesg_type LIKE '950'
ORDER BY mesg_crea_date_time) z
WHERE ROWNUM <= 5000
Explain Plan for First Query :
PLAN_TABLE_OUTPUT
Plan hash value: 3901722890
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | INSERT STATEMENT | | 2866 | 134K| 197 (3)| 00:00:03 | | |
| 1 | LOAD TABLE CONVENTIONAL | TEMPSEARCHRESULT | | | | | | |
|* 2 | COUNT STOPKEY | | | | | | | |
| 3 | VIEW | | 2866 | 134K| 197 (3)| 00:00:03 | | |
|* 4 | SORT ORDER BY STOPKEY | | 2866 | 333K| 197 (3)| 00:00:03 | | |
| 5 | NESTED LOOPS | | 2866 | 333K| 196 (2)| 00:00:03 | | |
| 6 | NESTED LOOPS | | 1419 | 148K| 196 (2)| 00:00:03 | | |
|* 7 | HASH JOIN | | 1419 | 141K| 196 (2)| 00:00:03 | | |
| 8 | NESTED LOOPS | | 91 | 1911 | 2 (0)| 00:00:01 | | |
| 9 | TABLE ACCESS BY INDEX ROWID | SUSER | 1 | 10 | 1 (0)| 00:00:01 | | |
|* 10 | INDEX UNIQUE SCAN | IX_SUSER | 1 | | 0 (0)| 00:00:01 | | |
|* 11 | INDEX FULL SCAN | PK_SUNITUSERGROUP | 91 | 1001 | 1 (0)| 00:00:01 | | |
| 12 | PARTITION RANGE SINGLE | | 1450 | 114K| 193 (2)| 00:00:03 | 2 | 2 |
|* 13 | TABLE ACCESS BY LOCAL INDEX ROWID| RMESG | 1450 | 114K| 193 (2)| 00:00:03 | 2 | 2 |
|* 14 | INDEX SKIP SCAN | IX_RMESG | 415 | | 14 (15)| 00:00:01 | 2 | 2 |
|* 15 | INDEX UNIQUE SCAN | PK_SMSGUSERGROUP | 1 | 5 | 0 (0)| 00:00:01 | | |
|* 16 | INDEX UNIQUE SCAN | PK_SBICUSERGROUP | 2 | 24 | 0 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
2 - filter(ROWNUM<=5000)
4 - filter(ROWNUM<=5000)
7 - access("X_INST0_UNIT_NAME"="UNIT")
10 - access("SUSER"."USERNAME"="SIDE"."GETMYUSER"())
11 - access("SUSER"."GROUPID"="SUNITUSERGROUP"."GROUPID")
filter("SUSER"."GROUPID"="SUNITUSERGROUP"."GROUPID")
13 - filter("RMESG"."MESG_SENDER_X1"='SOGEFRPPXXX' AND "RMESG"."MESG_NATURE"='FINANCIAL_MSG' AND
"RMESG"."MESG_FRMT_NAME"='Swift')
14 - access("RMESG"."MESG_CREA_DATE_TIME">=TO_DATE(' 2012-01-31 10:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"RMESG"."MESG_TYPE"='950' AND "RMESG"."MESG_CREA_DATE_TIME"<=TO_DATE(' 2012-01-31 13:00:00', 'syyyy-mm-dd hh24:mi:ss'))
filter("RMESG"."MESG_TYPE"='950')
15 - access("X_CATEGORY"="CATEGORY" AND "SUSER"."GROUPID"="SMSGUSERGROUP"."GROUPID")
16 - access("X_OWN_LT"="BICCODE" AND "SUSER"."GROUPID"="SBICUSERGROUP"."GROUPID")
40 rows selected.
Second query
INSERT INTO TempSearchResult (t_aid,
t_umidl,
t_umidh,
X_CREA_DATE_TIME_MESG)
SELECT z.aid,
z.mesg_s_umidl,
z.mesg_s_umidh,
z.mesg_crea_date_time
FROM ( SELECT m.aid,
m.mesg_s_umidl,
m.mesg_s_umidh,
m.mesg_crea_date_time
FROM RSMESG_ESIDE m
WHERE 1 = 1
AND m.mesg_crea_date_time BETWEEN TO_DATE (
'20120117 10:00:00',
'YYYYMMDD HH24:MI:SS')
AND TO_DATE (
'20120117 13:00:00',
'YYYYMMDD HH24:MI:SS')
AND m.mesg_frmt_name = 'Swift'
AND m.mesg_sender_x1 = 'SOGEFRPPGSS'
AND m.mesg_nature = 'FINANCIAL_MSG'
AND m.mesg_type LIKE '548'
ORDER BY mesg_crea_date_time) z
WHERE ROWNUM <= 5000
Explain Plan For Second Query :
PLAN_TABLE_OUTPUT
Plan hash value: 4106071428
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | INSERT STATEMENT | | 1073 | 51504 | | 2622 (1)| 00:00:32 | | |
| 1 | LOAD TABLE CONVENTIONAL | TEMPSEARCHRESULT | | | | | | | |
|* 2 | COUNT STOPKEY | | | | | | | | |
| 3 | VIEW | | 1073 | 51504 | | 2622 (1)| 00:00:32 | | |
|* 4 | SORT ORDER BY STOPKEY | | 1073 | 124K| | 2622 (1)| 00:00:32 | | |
| 5 | NESTED LOOPS | | 1073 | 124K| | 2621 (1)| 00:00:32 | | |
| 6 | NESTED LOOPS | | 531 | 56817 | | 2621 (1)| 00:00:32 | | |
| 7 | NESTED LOOPS | | 531 | 54162 | | 2621 (1)| 00:00:32 | | |
| 8 | NESTED LOOPS | | 543 | 49413 | | 2621 (1)| 00:00:32 | | |
| 9 | TABLE ACCESS BY INDEX ROWID | SUSER | 1 | 10 | | 1 (0)| 00:00:01 | | |
|* 10 | INDEX UNIQUE SCAN | IX_SUSER | 1 | | | 0 (0)| 00:00:01 | | |
| 11 | PARTITION RANGE SINGLE | | 543 | 43983 | | 2621 (1)| 00:00:32 | 2 | 2 |
|* 12 | TABLE ACCESS BY LOCAL INDEX ROWID| RMESG | 543 | 43983 | | 2621 (1)| 00:00:32 | 2 | 2 |
| 13 | BITMAP CONVERSION TO ROWIDS | | | | | | | | |
| 14 | BITMAP AND | | | | | | | | |
| 15 | BITMAP CONVERSION FROM ROWIDS | | | | | | | | |
|* 16 | INDEX RANGE SCAN | IX_SENDER | 25070 | | | 894 (1)| 00:00:11 | 2 | 2 |
| 17 | BITMAP CONVERSION FROM ROWIDS | | | | | | | | |
| 18 | SORT ORDER BY | | | | 408K| | | | |
|* 19 | INDEX RANGE SCAN | IX_RMESG | 25070 | | | 1405 (1)| 00:00:17 | 2 | 2 |
|* 20 | INDEX UNIQUE SCAN | PK_SUNITUSERGROUP | 1 | 11 | | 0 (0)| 00:00:01 | | |
|* 21 | INDEX UNIQUE SCAN | PK_SMSGUSERGROUP | 1 | 5 | | 0 (0)| 00:00:01 | | |
|* 22 | INDEX UNIQUE SCAN | PK_SBICUSERGROUP | 2 | 24 | | 0 (0)| 00:00:01 | | |
Predicate Information (identified by operation id):
2 - filter(ROWNUM<=5000)
4 - filter(ROWNUM<=5000)
10 - access("SUSER"."USERNAME"="SIDE"."GETMYUSER"())
12 - filter("RMESG"."MESG_NATURE"='FINANCIAL_MSG' AND "RMESG"."MESG_FRMT_NAME"='Swift')
16 - access("RMESG"."MESG_SENDER_X1"='SOGEFRPPGSS')
19 - access("RMESG"."MESG_CREA_DATE_TIME">=TO_DATE(' 2012-01-17 10:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"RMESG"."MESG_TYPE"='548' AND "RMESG"."MESG_CREA_DATE_TIME"<=TO_DATE(' 2012-01-17 13:00:00', 'syyyy-mm-dd hh24:mi:ss'))
filter("RMESG"."MESG_TYPE"='548' AND "RMESG"."MESG_CREA_DATE_TIME"<=TO_DATE(' 2012-01-17 13:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "RMESG"."MESG_CREA_DATE_TIME">=TO_DATE(' 2012-01-17 10:00:00', 'syyyy-mm-dd hh24:mi:ss'))
20 - access("X_INST0_UNIT_NAME"="UNIT" AND "SUSER"."GROUPID"="SUNITUSERGROUP"."GROUPID")
21 - access("X_CATEGORY"="CATEGORY" AND "SUSER"."GROUPID"="SMSGUSERGROUP"."GROUPID")
22 - access("X_OWN_LT"="BICCODE" AND "SUSER"."GROUPID"="SBICUSERGROUP"."GROUPID")
45 rows selected.
Table Structure TEMPSEARCHRESULT
CREATE GLOBAL TEMPORARY TABLE TEMPSEARCHRESULT
(
T_AID NUMBER(3),
T_UMIDL NUMBER(10),
T_UMIDH NUMBER(10),
X_CREA_DATE_TIME_MESG DATE
)
ON COMMIT PRESERVE ROWS
NOCACHE;
CREATE INDEX SIDE.TEMP_SEARCH_INDEX ON SIDE.TEMPSEARCHRESULT
(T_AID, T_UMIDL, T_UMIDH, X_CREA_DATE_TIME_MESG);

Again, thank you for your amazing answer.
For indexes it's a balance. Check this query, which is simple:
SELECT * FROM RMESG;
I generated an explain plan for it to see the effect of the indexes.
PLAN_TABLE_OUTPUT
Plan hash value: 1686435785
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 11M| 8920M| 376K (1)| 01:15:20 | | |
| 1 | PARTITION RANGE ALL| | 11M| 8920M| 376K (1)| 01:15:20 | 1 | 12 |
| 2 | TABLE ACCESS FULL | RMESG | 11M| 8920M| 376K (1)| 01:15:20 | 1 | 12 |
---------------------------------------------------------------------------------------------
1:15:20 for table access with a full scan. Also, I generated new indexes on the table, like the following:
CREATE TABLE RMESG(
aid NUMBER(3) NOT NULL,
mesg_s_umidl NUMBER(10) NOT NULL,
mesg_s_umidh NUMBER(10) NOT NULL,
mesg_validation_requested CHAR(18) NOT NULL,
mesg_validation_passed CHAR(18) NOT NULL,
mesg_class CHAR(16) NOT NULL,
mesg_is_text_readonly NUMBER(1) NOT NULL,
mesg_is_delete_inhibited NUMBER(1) NOT NULL,
mesg_is_text_modified NUMBER(1) NOT NULL,
mesg_is_partial NUMBER(1) NOT NULL,
mesg_crea_mpfn_name CHAR(24) NOT NULL,
mesg_crea_rp_name CHAR(24) NOT NULL,
mesg_crea_oper_nickname CHAR(151) NOT NULL,
mesg_crea_date_time DATE NOT NULL,
mesg_mod_oper_nickname CHAR(151) NOT NULL,
mesg_mod_date_time DATE NOT NULL,
mesg_frmt_name VARCHAR2(17) NOT NULL,
mesg_nature CHAR(14) NOT NULL,
mesg_sender_x1 CHAR(11) NOT NULL,
mesg_sender_corr_type VARCHAR2(24) NOT NULL,
mesg_uumid VARCHAR2(50) NOT NULL,
mesg_uumid_suffix NUMBER(10) NOT NULL,
x_own_lt CHAR(8) NOT NULL,
x_inst0_unit_name VARCHAR2(32) default 'NONE' NOT NULL,
x_category CHAR(1) NOT NULL,
archived NUMBER(1) NOT NULL,
restored NUMBER(1) NOT NULL,
mesg_related_s_umid CHAR(16) NULL,
mesg_status CHAR(12) NULL,
mesg_crea_appl_serv_name CHAR(24) NULL,
mesg_verf_oper_nickname CHAR(151) NULL,
mesg_data_last NUMBER(10) NULL,
mesg_token NUMBER(10) NULL,
mesg_batch_reference VARCHAR2(46) NULL,
mesg_cas_sender_reference VARCHAR2(40) NULL,
mesg_cas_target_rp_name VARCHAR2(20) NULL,
mesg_ccy_amount VARCHAR2(501) NULL,
mesg_copy_service_id VARCHAR2(4) NULL,
mesg_data_keyword1 VARCHAR2(80) NULL,
mesg_data_keyword2 VARCHAR2(80) NULL,
mesg_data_keyword3 VARCHAR2(80) NULL,
mesg_delv_overdue_warn_req NUMBER(1) NULL,
mesg_fin_ccy_amount VARCHAR2(24) NULL,
mesg_fin_value_date CHAR(6) NULL,
mesg_is_live NUMBER(1) NULL,
mesg_is_retrieved NUMBER(1) NULL,
mesg_mesg_user_group VARCHAR2(24) NULL,
mesg_network_appl_ind CHAR(3) NULL,
mesg_network_delv_notif_req NUMBER(1) NULL,
mesg_network_obso_period NUMBER(10) NULL,
mesg_network_priority CHAR(12) NULL,
mesg_possible_dup_creation VARCHAR2(8) NULL,
mesg_receiver_alia_name VARCHAR2(32) NULL,
mesg_receiver_swift_address CHAR(12) NULL,
mesg_recovery_accept_info VARCHAR2(80) NULL,
mesg_rel_trn_ref VARCHAR2(80) NULL,
mesg_release_info VARCHAR2(32) NULL,
mesg_security_iapp_name VARCHAR2(80) NULL,
mesg_security_required NUMBER(1) NULL,
mesg_sender_x2 VARCHAR2(21) NULL,
mesg_sender_x3 VARCHAR2(21) NULL,
mesg_sender_x4 VARCHAR2(21) NULL,
mesg_sender_branch_info VARCHAR2(71) NULL,
mesg_sender_city_name VARCHAR2(36) NULL,
mesg_sender_ctry_code VARCHAR2(3) NULL,
mesg_sender_ctry_name VARCHAR2(71) NULL,
mesg_sender_institution_name VARCHAR2(106) NULL,
mesg_sender_location VARCHAR2(106) NULL,
mesg_sender_swift_address CHAR(12) NULL,
mesg_sub_format VARCHAR2(6) NULL,
mesg_syntax_table_ver VARCHAR2(8) NULL,
mesg_template_name VARCHAR2(32) NULL,
mesg_trn_ref VARCHAR2(16) NULL,
mesg_type CHAR(3) NULL,
mesg_user_issued_as_pde NUMBER(1) NULL,
mesg_user_priority_code CHAR(4) NULL,
mesg_user_reference_text VARCHAR2(30) NULL,
mesg_zz41_is_possible_dup NUMBER(1) NULL,
x_fin_ccy CHAR(3) NULL,
x_fin_amount NUMBER(21,4) NULL,
x_fin_value_date DATE NULL,
x_fin_ocmt_ccy CHAR(3) NULL,
x_fin_ocmt_amount NUMBER(21,4) NULL,
x_receiver_x1 CHAR(11) NULL,
x_receiver_x2 VARCHAR2(21) NULL,
x_receiver_x3 VARCHAR2(21) NULL,
x_receiver_x4 VARCHAR2(21) NULL,
last_update DATE NULL,
set_id NUMBER(10) NULL,
mesg_requestor_dn VARCHAR2(101) NULL,
mesg_service VARCHAR2(31) NULL,
mesg_request_type VARCHAR2(31) NULL,
mesg_identifier VARCHAR2(31) NULL,
mesg_xml_query_ref1 VARCHAR2(101) NULL,
mesg_xml_query_ref2 VARCHAR2(101) NULL,
mesg_xml_query_ref3 VARCHAR2(101) NULL,
mesg_appl_sender_reference VARCHAR2(51) NULL,
mesg_payload_type VARCHAR2(31) NULL,
mesg_sign_digest_reference VARCHAR2(41) NULL,
mesg_sign_digest_value VARCHAR2(51) NULL,
mesg_use_pki_signature NUMBER(1) NULL
)
PARTITION BY RANGE(MESG_CREA_DATE_TIME) (
PARTITION SIDE_MIN VALUES LESS THAN (TO_DATE(20000101, 'YYYYMMDD')) TABLESPACE TBS_SIDEDB_DA_01);
CREATE UNIQUE INDEX SIDE.IX_PK_RMESG on SIDE.RMESG (AID, MESG_S_UMIDH, MESG_S_UMIDL, MESG_CREA_DATE_TIME) LOCAL;
ALTER TABLE SIDE.RMESG ADD CONSTRAINT IX_PK_RMESG PRIMARY KEY (AID, MESG_S_UMIDH, MESG_S_UMIDL, MESG_CREA_DATE_TIME) USING INDEX SIDE.IX_PK_RMESG;
CREATE INDEX SIDE.ix_rmesg_cassender ON SIDE.rmesg (MESG_CAS_SENDER_REFERENCE) LOCAL;
CREATE INDEX SIDE.ix_rmesg_creationdate ON SIDE.rmesg (MESG_CREA_DATE_TIME) LOCAL;
CREATE INDEX SIDE.ix_rmesg_trnref ON SIDE.rmesg (MESG_TRN_REF) LOCAL;
CREATE INDEX SIDE.ix_rmesg_uumid ON SIDE.rmesg (MESG_UUMID, MESG_UUMID_SUFFIX) LOCAL;
CREATE INDEX SIDE.IX_UNIT_NAME_RMESG on RMESG(mesg_crea_date_time,X_INST0_UNIT_NAME) LOCAL;
CREATE INDEX SIDE.IX_RMESG on RMESG(mesg_crea_date_time ,mesg_type,x_fin_ccy) LOCAL;
CREATE INDEX SIDE.IX_NAME_FORMAT_TYPE_RMESG on RMESG(mesg_frmt_name,mesg_sub_format,mesg_type,mesg_crea_date_time ) LOCAL;

Same explain plan, same result.
I always remember Tom's quote: "full scans are not evil, indexes are not good".
Which means something is going wrong with the indexes. The partitioning depends on the MESG_CREA_DATE_TIME column; I created an index on this column, but the same explain plan appears every time, with the same time.
Thank you
Osama -
Hi,
I have a query which I want to attach to a tcode and then move to quality as well as production. I am getting the program name for the query from the additional functions option under Quick View in the menu bar of SQVI, and I am using the same name to create a tcode. How do I create a transport request for the query as well as this program? Can I assign the program AQA0SYSTQV000015ZMI===== to a package?
Thanks,
K.Kiran.
Hi Kiran,
For a tcode for a query, please have a look:
Create a transaction calling transaction START_REPORT with the following parameters/attributes filled:
D_SREPOVARI-REPORTTYPE = 'AQ'     "parameter indicating ABAP Query
D_SREPOVARI-REPORT = 'ZGRP'       "query user group
D_SREPOVARI-EXTDREPORT = 'ZNAME'  "query name
To transport queries, please use program RSAQR3TR.
You have to follow these steps:
1. With the above program, choose Export and add the objects you want to transport. after execution you get a transport request from the system.
2. Release this transport request
3. Import this transport request into the target system (like any other transport request)
4. In the target system run the above program, but choose Import now and add the transport request name
hope this helps
ec -
Auto suggest behavior for af:query component LOV fields.
Hi,
I am new to ADF development and need help implementing auto-suggest behavior on the LOV fields generated by the af:query component. For inputList and inputCombo.. fields we can add the af:autoSuggestBehavior tag to enable this. How do we enable the same for af:query-generated LOV fields?
Regards,
C.R
Thanks Timo for such a quick response.
JDev version we are using is 11.1.1.6.0
Unfortunately, we have gone too far with af:query, and everything else is working apart from auto-suggest on af:query. It would now take a lot of time to implement and test, and users would have to spend considerable time testing it again.
Thanks and Regards,
Satya -
ORA-29983: Unsupported query for Continuous Query Notification
Hi, I'm having a LOV with auto-refresh. If I add ORDER BY to the LOV query, the following exception is thrown:
SQL error during statement preparation. Statement: SELECT * FROM (SELECT DISTINCT XXXX FROM YYYY) QRSLT ORDER BY "XXXX"
Error ORA-29983: Unsupported query for Continuous Query Notification
I tried to override the create method of the VO with "this.setNestedSelectForFullSql(false);", but that disables the auto-refresh property.
Could someone help to solve this? Thanks.
Hi, I looked into the ADF run-time source code and I can see that the connection object's "OracleDatabaseChangeListenerWrapper" class sets the mode to "BEST_EFFORT".
I would assume that, by default, the ADF run-time uses best-effort mode, not guaranteed mode.
The database documentation also says that in best-effort mode an "order by" clause, as well as the keyword "like", can be used for continuous query notification. But I'm still getting ORA-29983.
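For reference, outside of ADF a best-effort query registration looks roughly like the following PL/SQL sketch (the callback name and table/column names are placeholders, not from this thread, and the exact flags ADF passes internally may differ):

```sql
-- Sketch: registering a query for change notification in best-effort
-- mode with DBMS_CQ_NOTIFICATION. ORA-29983 is raised when the
-- registered statement itself is not CQN-compatible.
DECLARE
  reginfo  CQ_NOTIFICATION$_REG_INFO;
  v_cursor SYS_REFCURSOR;
  regid    NUMBER;
BEGIN
  reginfo := CQ_NOTIFICATION$_REG_INFO(
               'chnf_callback',                           -- PL/SQL callback procedure (assumed to exist)
               DBMS_CQ_NOTIFICATION.QOS_QUERY
                 + DBMS_CQ_NOTIFICATION.QOS_BEST_EFFORT,  -- query-result notification, best effort
               0, 0, 0);                                  -- no timeout, no operation filter, no txn lag
  regid := DBMS_CQ_NOTIFICATION.NEW_REG_START(reginfo);
  OPEN v_cursor FOR
    SELECT DISTINCT xxxx FROM yyyy;                       -- keep ORDER BY out of the registered query
  CLOSE v_cursor;
  DBMS_CQ_NOTIFICATION.REG_END;
END;
/
```

One pragmatic workaround, if the registration itself keeps rejecting the ordered statement, is to register the unordered query and apply the sort outside it (in ADF terms, an in-memory sort on the VO rather than in the registered SQL).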
Could someone clarify this for me? Thanks. -
How to see the underlying program for Infoset query in ECC?
Hi all,
we have a generic datasource based on an InfoSet.
Now I need to add 2 fields from batch characteristics.
My question is how to go and edit the underlying ABAP program.
I have added the fields to the extract structure, but not populated the 2 fields.
Regards,
Srinivas.
Hello,
Go to transaction SQ02 --> give your InfoSet name --> click on your InfoSet --> at the top you will find the option "Go To" --> choose "Global Properties" --> there, click on "External Program" --> you will get the underlying program for the InfoSet query in ECC.
If you find the answer useful, kindly assign some points.
Regards, -
TR not generating for modifiying query
Hi all,
I have designed a query and imported it to BW Quality. Now we have some modifications to it. In the BW development system the query
opens in display mode only; it is not possible to change it. Kindly give your suggestion on how to change it and add one more filter option,
and also how to generate a TR for that modification.
Thanks
Hi Raj,
As the query is already in Quality and has been transported to the QA system, you can't edit the query directly in development.
For this, create a BEx request and assign the TR to your package in the BEx request. Then you will be able to make changes to the query which is already transported to QA.
Steps to create a BEx Request
Go to transaction RSA1 --> select "Transport Connection" --> on the top right-hand side you will find the option "BEx Transport" (truck symbol) --> click on it and it will pop up a screen with the option "BEx Transport Requests for single requests" --> there you can see the packages --> click on the request/task beside your package and it will ask for the transport request. Create your own request, assign that request to your package, and save it.
Now go to Query Designer and try to change the query; it will allow you now.
Regards
KP