Another way to map a Query transform in a given scenario
Hi Experts,
I am doing a scenario where I need to map a target from 3 sources.
The source has a Purchase Date field.
1.) I have to map Purchase Date (source) to column1 (target) when Purchase Date < '2013-04-01'.
2.) Map Purchase Date (source) to column2 (target) when Purchase Date > '2013-03-31'.
I have done this through Query transforms.
I am attaching a screenshot for better explanation.
I am using a WHERE clause in both Query transforms.
Is there any other way to do this?
Regards,
Neha Khetan
You can easily do this with one join of your 3 tables. Use the built-in ifthenelse function in the mapping of both your target columns:
column1 = ifthenelse(purchase_date < '2013-04-01', purchase_date, NULL)
column2 = ifthenelse(purchase_date > '2013-03-31', purchase_date, NULL)
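The ifthenelse logic can also be sketched outside Data Services; a minimal Python illustration, with the field and column names taken from the question and a small helper mimicking the built-in (ISO-format date strings compare correctly as text):

```python
def ifthenelse(cond, then_val, else_val):
    """Mimics the Data Services ifthenelse() function, for illustration only."""
    return then_val if cond else else_val

def map_row(source):
    """Map one source row: purchase dates before 2013-04-01 go to
    column1, later dates to column2."""
    d = source["purchase_date"]
    return {
        "column1": ifthenelse(d < "2013-04-01", d, None),
        "column2": ifthenelse(d > "2013-03-31", d, None),
    }

row = map_row({"purchase_date": "2013-02-15"})
print(row)  # {'column1': '2013-02-15', 'column2': None}
```

Exactly one of the two columns is populated for any given date, matching the two WHERE-clause branches in the original design.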
Similar Messages
-
How to write a query for the given scenario ?
Hi All ,
I am having two tables EMP, DEPT with the below data.
EMP TABLE :--
EID ENAME JOB SAL DEPID
111 RAM MANAGER 1500 10
222 SAM ASST MANAGER 2000 20
333 KALA CLERK 2500 10
444 BIMA MANAGER 3000 20
555 CHALA MANAGER 3500 30
666 RANI ASST MANAGER 4000 10
777 KAMAL MANAGER 2400 10
DEPT TABLE :--
DEPID DNAME
10 XX
20 YY
30 ZZ
Q1: I want the sum of salary for each department and each job. Each department has MANAGER, ASST MANAGER and CLERK posts.
I want to display the result like below:
JOB 10 20 30
MANAGER 3900 3000 3500
ASST MANAGER 4000 2000 NULL
CLERK 2500 NULL NULL
please tell me how to write a sql query ?
Thanks
Sai

In the general case, you cannot write this query.
This is one of the limits of the relational model: the number of columns must be known up front. In the SELECT clause you have to list and name every column the query returns, so you have to know the number of departments in advance. (There are some workarounds, e.g. you can return one column with the values for all departments concatenated, separated by a space character.)
If you know that you have 3 departments, then your query will return 4 columns:
SELECT
e.job,
SUM ( CASE WHEN d.depid = 10 THEN e.sal ELSE NULL END) d10,
SUM ( CASE WHEN d.depid = 20 THEN e.sal ELSE NULL END) d20,
SUM ( CASE WHEN d.depid = 30 THEN e.sal ELSE NULL END) d30
FROM dept d, emp e
WHERE d.depid = e.depid
GROUP BY e.job -
Other Ways for SubQuery...
Hi
Please suggest an alternative to a sub-query, from a performance-tuning point of view. I am using Oracle 10g R2.
Thanks in advance...

Thank you for not posting the same question again in two forums... ;)
other option for SubQuery... -
Analyze the Field Mapping in a Query Transform
My client is running Data Services 3.1 SP1 on Windows Server 2003. They have the following scenario:
A data flow has multiple flat-file source transforms of different file formats, each of which is connected to a Query transform. On the output side of each query, new output columns have been defined, representing a standard house schema. For each Query transform, they need to:
1) Compare the source fields to the query output fields, to determine which source fields have been mapped
2) Compare source/mapped field lengths to determine if a source field has been truncated.
I checked the script functions, but none of them are useful for this. Is there a way to get a Query transform's output fields/lengths from the Data Services local repository?

OR
It may help to have Control Files for each source file as your source and compare it to the target schema structure to get the flags for truncates.
The Control File structure would be as such
FILENAME | COLUMN_NAME | DATA_TYPE | PRECISION | SCALE | LENGTH | IS_NULL | IS_PRIMARYKEY
The same Structure can be used for your target schema.
As for identifying columns in the source that are not mapped to the target schema, I would gather that information at design time. I don't see an easy way of getting this information from the Query transform.
Maybe you can try using the following views to see if you can utilize the local repository for some information:
ALVW_MAPPING
combined with ALVW_COLUMNINFO
and ALVW_TABLEINFO
Hope this helps!
Regards
Tiji
Edited by: Tiji Mathew on May 6, 2009 11:48 AM
Make sure you right-click in the Local Object Library area and choose the option "Calculate column mappings" to populate the updated values into AL_COLMAP_TEXT, which gives the expression used to map the source to the target. You will not see the Query transform names in this text, but you will see the source column with a mapping-type column showing values like Direct, Direct - Merged, Computed, etc. -
HELP! SQL Query: Other ways to reorder column display?
I have a SQL query report with a large number of columns (users can hide/show columns as desired). It would be great if the column display order could be changed by changing the order of the columns in the SELECT list in the Report Definition, but that doesn't work -- it puts changed or added columns at the end regardless of the order in the SELECT list of the query.
Is there some other way to reorder the columns displayed without using the Report Attributes page? It's extremely tedious to move columns around using the up/down arrows which redisplays the page each time. Am I missing a way to change display order, or does anyone have a "trick" to do this? It's so painful....
When defining forms you can reorder columns by specifying a sequence number for each column. Just curious as to why reports were not done the same way, and are there any plans to address this in a future release?
Karen

Yes, reordering columns is extremely painful.
It is supposed to be much improved in the next version.
See
Re: Re-ordering columns on reports
Moving columns up/down in Report Attributes
See my example at
http://htmldb.oracle.com/pls/otn/f?p=24317:141
Basically, let the users move columns around until they are blue in the face, provide a Save button to save the column order in a user preference and reorder the columns when the page reloads.
Or you can use Carl's PL/SQL shuttle as the widget to specify the columns shown and their order. The shuttle is at http://htmldb.oracle.com/pls/otn/f?p=11933:27
Hope this helps.
Message was edited by:
Vikas -
Fill default value with SQL query or another way
Hello everyone,
I use Jdeveloper 11g and Weblogic.
When I click on the create button, I would like to fill the id_employee field automatically with the current user. I think there is a possibility via a default value, but I don't know how.
My query is: SELECT Employee.ID_EMPLOYEE FROM Employee WHERE Employee.EMPLOYEE_NAME = :userName
userName is a bind variable with this value: #{facesContext.externalContext.userPrincipal.name}
If there is another way, I'm open to it. Maybe with createWithParams; I tried, but I didn't succeed.
Please help me; I have spent 2 weeks on this small problem.
Thank you
Regards

Hi,
if using ADF BC, you use the createWithParams operation of the ViewObject to create a new row. On the created action binding, use the right mouse button to create a "NamedData" entry. Set the name of this entry to the name of the attribute you want to add the default value to. In the NDValue field, use EL to reference the authenticated user.
Frank
PS: Your query is not a create statement, so I hope you are not confusing use cases here. -
Any way to set the sort order other than in the query?
The report has an OrderBy parameter so the user can select which field is used for sorting. The query has an OrderBy clause referencing the parameter. The problem is that it doesn't always pay any attention to the parameter. (Since it appears to be intermittent my suspicion is that it really never pays any attention to it but that sometimes whatever actually is determining the order gives the same results.)
The problem is that putting an OrderBy clause in a query is the only way I know of to determine the output order of a report. Is there any other way?
Thanks.

Unfortunately Break Order doesn't seem to be controlling the sort order. I found that the value was set on numerous fields, but I've changed them all to None and it still isn't displaying in the order specified by the OrderBy parameter.
Note that this is a 'form layout' report with one page per record, so it doesn't really have columns, but the pages are supposed to be printed in the order chosen by the user from a list of values. It doesn't seem to matter what's selected from that list, though; the output appears in the same order as if no ORDER BY clause were specified.
Can you think of anything else that would cause the report to ignore the order by clause?
thanks. -
Please help me: what other way can I tune this SELECT query?
Hello Guru,
I have a SELECT query which retrieves data from 10 tables; around 4 tables have 200,000-400,000 (2-4 lakh) records and the rest have 80,000-100,000 records.
It is taking around 7-8 seconds to fetch 55,000 records.
I was strictly told by the client that I should not use hints in my query. My query is below. Please help me find another way to tune this SELECT query.
select
CT.CUST_ID
,CT.ROMANISED_SURNAME
,CT.SURNAME
,CT.ROMANISED_GIVEN_NAME
,CT.GIVEN_NAME
,CT.ROMANISED_MIDDLE_NAME
,CT.MIDDLE_NAME
,CT.ROMANISED_NAME_SUFFIX
,CT.NAME_SUFFIX
,CT.ROMANISED_TITLE
,CT.TITLE
,CT.ROMANISED_NAME_INITIALS
,CT.NAME_INITIALS
,CT.NAME_TEXT
,CT.CUST_JRNY_ID
,RK.REMARK_TYPE
,RK.REMARK_ID+CT.CUST_ID as REMARK_ID
,RK.REMARK_STATUS
,RK.REMARK_TEXT
,RK.HOST_ONLY_IND
,RK.SUPERVISORY_IND
,RK.CUST_COMM_IND
,RK.REMARK_SEQ
,RK.REMARK_CODE
,RK.DEFAULT_CUST_REL_IND
,RK.DEFAULT_FLIGHT_SEG_REL_IND
,RK.IATA_CODE
,RK.ICAO_CODE
,CJ.RECORD_LOCATOR "SITA_RECORD_LOCATOR"
,Cjv.Record_Locator "ORIGINATOR_RECORD_LOCATOR"
,FS.TRAVELLING_GROUP_CODE
,CG.GROUP_NAME
FROM FLIGHT_LEG FL
,CUST_FLIGHT_LEG CFL
,CUST CT
,CUST_REMARK CTR
,REMARK RK
,FLIGHT_SEG_FLIGHT_LEG FSFL
,FLIGHT_SEG FS
,CUST_JRNY CJ
,CUST_JRNY_VERSION CJV
,CUST_GROUP CG
WHERE FL.OPR_FLIGHT_NUMBER = 1--I_OPR_FLIGHT_NUMBER
and FL.HISTORY_VERSION_NUMBER = 0
and FL.DEPARTURE_STATION_CODE = 'DEL'--I_DEPARTURE_STATION_CODE
and FL.DEPARTURE_DATETIME = TO_DATE('10-DEC-2012 18.45.00', 'DD-MON-YYYY HH24.MI.SS')
and FL.OPR_SERVICE_PROVIDER_CODE= 'AI'--i_opr_service_provider_code
and FL.OPR_FLIGHT_SUFFIX = 'A'--NVL(I_OPR_FLIGHT_SUFFIX, FL.OPR_FLIGHT_SUFFIX)
AND FL.FLIGHT_LEG_ID = CFL.FLIGHT_LEG_ID
AND CFL.CUST_ID = CT.CUST_ID
AND FL.FLIGHT_LEG_ID=FSFL.FLIGHT_LEG_ID
AND FSFL.FLIGHT_SEG_ID=FS.FLIGHT_SEG_ID
AND CT.CUST_ID = CTR.CUST_ID(+)
AND CTR.REMARK_ID = RK.REMARK_ID(+)
AND FL.CUST_JRNY_ID = CJ.CUST_JRNY_ID
and CJ.CUST_JRNY_ID = CJV.CUST_JRNY_ID
AND CG.CUST_JRNY_ID(+) = CT.CUST_JRNY_ID
AND CFL.HISTORY_VERSION_NUMBER = 0
AND CT.HISTORY_VERSION_NUMBER = 0
AND NVL(CTR.HISTORY_VERSION_NUMBER,0) = 0
AND NVL(RK.HISTORY_VERSION_NUMBER,0) = 0
AND FS.HISTORY_VERSION_NUMBER = 0
AND FSFL.HISTORY_VERSION_NUMBER = 0
-- AND CJ.HISTORY_VERSION_NUMBER = 0
and CJV.VERSION_NUMBER = 0 --- Need to check
AND NVL(CG.HISTORY_VERSION_NUMBER,0) = 0
order by CT.CUST_JRNY_ID,CT.CUST_ID;
The Tables having record:
select COUNT(*) from FLIGHT_LEG -----241756
select COUNT(*) from CUST_FLIGHT_LEG---632585
select COUNT(*) from CUST---240015
select COUNT(*) from CUST_REMARK---73724
select COUNT(*) from REMARK---73654
select COUNT(*) from FLIGHT_SEG_FLIGHT_LEG---241789
select COUNT(*) from FLIGHT_SEG----260004
select COUNT(*) from CUST_JRNY----74288
select COUNT(*) from CUST_JRNY_VERSION----74477
select COUNT(*) from CUST_GROUP----55819
Thanks,
HP..

Plan hash value: 3771714931
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 10239 | 2949K| | 7515 (1)| 00:01:31 | | |
| 1 | SORT ORDER BY | | 10239 | 2949K| 3160K| 7515 (1)| 00:01:31 | | |
|* 2 | HASH JOIN | | 10239 | 2949K| | 6864 (1)| 00:01:23 | | |
| 3 | PARTITION HASH ALL | | 73687 | 1079K| | 417 (1)| 00:00:06 | 1 | 512 |
|* 4 | TABLE ACCESS FULL | CUST_JRNY_VERSION | 73687 | 1079K| | 417 (1)| 00:00:06 | 1 | 512 |
|* 5 | HASH JOIN | | 10239 | 2799K| | 6445 (1)| 00:01:18 | | |
| 6 | PARTITION HASH ALL | | 73654 | 863K| | 178 (1)| 00:00:03 | 1 | 512 |
| 7 | TABLE ACCESS FULL | CUST_JRNY | 73654 | 863K| | 178 (1)| 00:00:03 | 1 | 512 |
|* 8 | FILTER | | | | | | | | |
|* 9 | HASH JOIN RIGHT OUTER | | 10239 | 2679K| | 6267 (1)| 00:01:16 | | |
| 10 | PARTITION HASH ALL | | 55315 | 756K| | 137 (1)| 00:00:02 | 1 | 512 |
| 11 | TABLE ACCESS FULL | CUST_GROUP | 55315 | 756K| | 137 (1)| 00:00:02 | 1 | 512 |
|* 12 | FILTER | | | | | | | | |
|* 13 | HASH JOIN OUTER | | 10240 | 2540K| 2056K| 6129 (1)| 00:01:14 | | |
|* 14 | FILTER | | | | | | | | |
|* 15 | HASH JOIN RIGHT OUTER | | 10242 | 1930K| | 5531 (1)| 00:01:07 | | |
| 16 | INDEX FAST FULL SCAN | CUST_REMARK_PK | 73677 | 935K| | 190 (0)| 00:00:03 | | |
|* 17 | HASH JOIN | | 10257 | 1802K| | 5339 (1)| 00:01:05 | | |
|* 18 | HASH JOIN | | 10257 | 701K| | 3516 (1)| 00:00:43 | | |
|* 19 | HASH JOIN | | 3963 | 220K| | 2476 (1)| 00:00:30 | | |
|* 20 | HASH JOIN | | 3963 | 181K| | 1300 (1)| 00:00:16 | | |
| 21 | PARTITION HASH ALL | | 3963 | 131K| | 728 (1)| 00:00:09 | 1 | 512 |
|* 22 | TABLE ACCESS FULL | FLIGHT_LEG | 3963 | 131K| | 728 (1)| 00:00:09 | 1 | 512 |
|* 23 | INDEX FAST FULL SCAN| FLIGHT_SEG_FLIGHT_LEG_PK | 240K| 3059K| | 571 (1)| 00:00:07 | | |
| 24 | PARTITION HASH ALL | | 259K| 2531K| | 1175 (1)| 00:00:15 | 1 | 512 |
|* 25 | TABLE ACCESS FULL | FLIGHT_SEG | 259K| 2531K| | 1175 (1)| 00:00:15 | 1 | 512 |
| 26 | PARTITION HASH ALL | | 631K| 8011K| | 1037 (1)| 00:00:13 | 1 | 512 |
|* 27 | TABLE ACCESS FULL | CUST_FLIGHT_LEG | 631K| 8011K| | 1037 (1)| 00:00:13 | 1 | 512 |
| 28 | PARTITION HASH ALL | | 239K| 25M| | 1822 (1)| 00:00:22 | 1 | 512 |
|* 29 | TABLE ACCESS FULL | CUST | 239K| 25M| | 1822 (1)| 00:00:22 | 1 | 512 |
| 30 | PARTITION HASH ALL | | 73623 | 4385K| | 243 (1)| 00:00:03 | 1 | 512 |
| 31 | TABLE ACCESS FULL | REMARK | 73623 | 4385K| | 243 (1)| 00:00:03 | 1 | 512 |
Predicate Information (identified by operation id):
2 - access("CJ"."CUST_JRNY_ID"="CJV"."CUST_JRNY_ID")
4 - filter("CJV"."VERSION_NUMBER"=0)
5 - access("FL"."CUST_JRNY_ID"="CJ"."CUST_JRNY_ID")
8 - filter(NVL("CG"."HISTORY_VERSION_NUMBER",0)=0)
9 - access("CG"."CUST_JRNY_ID"(+)="CT"."CUST_JRNY_ID")
12 - filter(NVL("RK"."HISTORY_VERSION_NUMBER",0)=0)
13 - access("CTR"."REMARK_ID"="RK"."REMARK_ID"(+))
14 - filter(NVL("CTR"."HISTORY_VERSION_NUMBER",0)=0)
15 - access("CT"."CUST_ID"="CTR"."CUST_ID"(+))
17 - access("CFL"."CUST_ID"="CT"."CUST_ID")
18 - access("FL"."FLIGHT_LEG_ID"="CFL"."FLIGHT_LEG_ID")
19 - access("FSFL"."FLIGHT_SEG_ID"="FS"."FLIGHT_SEG_ID")
20 - access("FL"."FLIGHT_LEG_ID"="FSFL"."FLIGHT_LEG_ID")
22 - filter("FL"."DEPARTURE_STATION_CODE"='DEL' AND "FL"."DEPARTURE_DATETIME"=TO_DATE(' 2012-12-10 18:45:00', 'syyyy-mm-dd
hh24:mi:ss') AND "FL"."OPR_SERVICE_PROVIDER_CODE"='AI' AND "FL"."OPR_FLIGHT_NUMBER"=1 AND "FL"."OPR_FLIGHT_SUFFIX"='A' AND
"FL"."HISTORY_VERSION_NUMBER"=0)
23 - filter("FSFL"."HISTORY_VERSION_NUMBER"=0)
25 - filter("FS"."HISTORY_VERSION_NUMBER"=0)
27 - filter("CFL"."HISTORY_VERSION_NUMBER"=0)
29 - filter("CT"."HISTORY_VERSION_NUMBER"=0) -
Opinion needed on best way to map multiple table joins (of the same table)
Hi all
I have a query of the format:
select A.col1, B.col1,C.col1
FROM
MASTER_TABLE A, ATTRIBUTE_TABLE B, ATTRIBUTE_TABLE C
WHERE
A.key1 = B.key1 (+)
AND
A.key1 = C.key1(+)
AND
B.key2(+) = 100001
AND
C.key2(+) = 100002
As you can see, I am joining the master table to the attribute table many times over (over 30 attributes in my actual query), and I am struggling to find the best way to map this efficiently, as the script-vs-mapping comparison is 1:10 in execution time.
I would appreciate the opinion of experienced OWB users as to how they would tackle this in a mapping and to see if they use the same approach as I have done.
Many thanks
Adi

SELECT external_reference, b.attribute_value AS req_date,
c.attribute_value AS network, d.attribute_value AS spid,
e.attribute_value AS username, f.attribute_value AS ctype,
g.attribute_value AS airtimecredit, h.attribute_value AS simnum,
i.attribute_value AS lrcredit, j.attribute_value AS airlimitbar,
k.attribute_value AS simtype, l.attribute_value AS vt,
m.attribute_value AS gt, n.attribute_value AS dt,
o.attribute_value AS datanum, p.attribute_value AS srtype,
q.attribute_value AS faxnum,
R.ATTRIBUTE_VALUE AS FAXSRTYPE,
s.attribute_value AS extno,
t.attribute_value AS tb, u.attribute_value AS gb,
v.attribute_value AS mb, w.attribute_value AS stolenbar,
x.attribute_value AS hcredit, y.attribute_value AS adminbar,
z.attribute_value AS portdate
FROM csi_item_instances a,
csi_iea_values b,
csi_iea_values c,
csi_iea_values d,
csi_iea_values e,
csi_iea_values f,
csi_iea_values g,
csi_iea_values h,
csi_iea_values i,
csi_iea_values j,
csi_iea_values k,
csi_iea_values l,
csi_iea_values m,
csi_iea_values n,
csi_iea_values o,
csi_iea_values p,
csi_iea_values q,
CSI_IEA_VALUES R,
csi_iea_values s,
csi_iea_values t,
csi_iea_values u,
csi_iea_values v,
csi_iea_values w,
csi_iea_values x,
csi_iea_values y,
csi_iea_values z
WHERE a.instance_id = b.instance_id(+)
AND a.instance_id = c.instance_id(+)
AND a.instance_id = d.instance_id(+)
AND a.instance_id = e.instance_id(+)
AND a.instance_id = f.instance_id(+)
AND A.INSTANCE_ID = G.INSTANCE_ID(+)
AND a.instance_id = h.instance_id(+)
AND a.instance_id = i.instance_id(+)
AND a.instance_id = j.instance_id(+)
AND a.instance_id = k.instance_id(+)
AND a.instance_id = l.instance_id(+)
AND a.instance_id = m.instance_id(+)
AND a.instance_id = n.instance_id(+)
AND a.instance_id = o.instance_id(+)
AND a.instance_id = p.instance_id(+)
AND a.instance_id = q.instance_id(+)
AND A.INSTANCE_ID = R.INSTANCE_ID(+)
AND a.instance_id = s.instance_id(+)
AND a.instance_id = t.instance_id(+)
AND a.instance_id = u.instance_id(+)
AND a.instance_id = v.instance_id(+)
AND a.instance_id = w.instance_id(+)
AND a.instance_id = x.instance_id(+)
AND a.instance_id = y.instance_id(+)
AND a.instance_id = z.instance_id(+)
AND b.attribute_id(+) = 10000
AND c.attribute_id(+) = 10214
AND d.attribute_id(+) = 10132
AND e.attribute_id(+) = 10148
AND f.attribute_id(+) = 10019
AND g.attribute_id(+) = 10010
AND h.attribute_id(+) = 10129
AND i.attribute_id(+) = 10198
AND j.attribute_id(+) = 10009
AND k.attribute_id(+) = 10267
AND l.attribute_id(+) = 10171
AND m.attribute_id(+) = 10184
AND n.attribute_id(+) = 10060
AND o.attribute_id(+) = 10027
AND p.attribute_id(+) = 10049
AND q.attribute_id(+) = 10066
AND R.ATTRIBUTE_ID(+) = 10068
AND s.attribute_id(+) = 10065
AND t.attribute_id(+) = 10141
AND u.attribute_id(+) = 10072
AND v.attribute_id(+) = 10207
AND w.attribute_id(+) = 10135
AND x.attribute_id(+) = 10107
AND y.attribute_id(+) = 10008
AND z.attribute_id(+) = 10103
AND external_reference ='07920490103'
If I run this it takes less than a second in TOAD; when mapped in OWB it takes ages. 10:1 is a conservative estimate. In reality it takes 15-20 minutes. CSI_IEA_VALUES has 30 million rows; CSI_ITEM_INSTANCES has 500,000 rows.
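One common alternative to 26 outer self-joins of csi_iea_values is a single pass that pivots the attribute rows into columns per instance. A minimal Python sketch of that logic follows; the attribute-id-to-alias map is abbreviated and purely illustrative (only three of the IDs from the query above are shown):

```python
# Pivot attribute rows (instance_id, attribute_id, attribute_value) into
# one dict per instance -- a single pass instead of 26 outer self-joins.
# The alias map is abbreviated; IDs are taken from the query above.
ALIASES = {10000: "req_date", 10214: "network", 10132: "spid"}

def pivot(rows):
    out = {}
    for instance_id, attribute_id, value in rows:
        alias = ALIASES.get(attribute_id)
        if alias is not None:  # unknown attribute ids are ignored
            out.setdefault(instance_id, {})[alias] = value
    return out

rows = [
    (1, 10000, "2013-01-01"),
    (1, 10214, "VODA"),
    (2, 10132, "SP01"),
    (1, 99999, "ignored"),  # attribute not in the alias map
]
print(pivot(rows))  # {1: {'req_date': '2013-01-01', 'network': 'VODA'}, 2: {'spid': 'SP01'}}
```

In SQL the same shape is a single scan with MAX(CASE WHEN attribute_id = ... THEN attribute_value END) per alias, grouped by instance_id, which the optimizer can usually handle far better than many outer self-joins.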
Hope that helps. I would love to know how others would tackle this query. -
Is there any other way to limit the data a user can see
We are using Discoverer 10.2.xxx
We are running it against an Oracle applications database but we do not have Discoverer integrated with the apps. I have created database users for discoverer and granted access to the data at the db level.
We have about 40 sales reps and we only want each rep to see their own data. We have high turnover among the reps.
For now, I have created the row level security in that folder in my eul. If the username = XXXX then Salesrep = XXXXX
OR
If username = YYYY then DSM = YYYY
etc.. This goes on for about 40 users We have 3 levels of "users" - Salesrep, DSM or RSM (District or Region sales manager)
These users are setup in my apps and I can write a database procedure to select their salesrep name based on their username.
Is there any other way to limit the data that the sales reps can see?
Possibly with a stored procedure somehow?
Thanks
Angie

At a previous client's, I had created a system not too unlike what you're describing.
Basically, I created an Oracle table that stored information concerning the salesperson as in Oracle Apps the data was not reliable for what they were doing (ie: with adding Apps modules, sometimes salesperson data was there, but with say, CRM, it was in another place, etc.).
So, since this table was going to be used for driving everything about a salesperson (commissions, who they report to, when they started, territory, when / if they moved territories and/or manager, etc, etc.) I put lots of good stuff in it that would make life easier for them.
The end user in charge of all this was given a Form to add, edit, and delete all this good stuff, and they were happy.
When it came time for security, it worked like a charm in that I used the concept of the BIS views where a user has an apps id associated to them and all data was filtered by that id. All I really had to do was to create this view that simply filtered data to that user's id. Then whenever I had a Discoverer report for salespeople, managers, etc. I just made sure the folder used had a join to this filtering view and I chose an item from the filtering view and all worked fine. All Discoverer reports only returned row level information for the salesperson who ran it.
Likewise, the same Oracle table was queried by another security view that only brought back the same IDs for the salespeople associated to each manager. When this simple view was joined to any sales Discoverer folder (and used in Disco), it limited all data to the salespeople the manager was responsible for.
Obviously this is just an overview and would take pages to explain, but essentially, if you can associate some kind of unique ID to each salesperson (I used the one returned in BIS views when someone logs into Apps, but you could just assign a different in your case), then you can create a view that filters to that id. Then when that view is brought into the EUL and joined to any other folder in the sales area (and you choose a column from the security view), you'll have what you're referring to - row level security for all salespeople and managers.
When it works - it sure does look impressive.
Russ -
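Russ's filtering-view pattern can be sketched with an in-memory SQLite example. The table and column names below are illustrative stand-ins, not the real Oracle Apps/BIS schema; the point is that every report query joins through a "security" table keyed by username, so each user only ever sees their own rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- illustrative schema, not the real Oracle Apps / BIS tables
    CREATE TABLE salesrep (rep_id INTEGER, username TEXT, manager TEXT);
    INSERT INTO salesrep VALUES (1,'XXXX','RSM1'), (2,'YYYY','RSM1'), (3,'ZZZZ','RSM2');
    CREATE TABLE sales (rep_id INTEGER, amount INTEGER);
    INSERT INTO sales VALUES (1,100),(2,200),(3,300);
""")

def visible_sales(username):
    """Join sales through the 'security' table so a rep sees only their rows."""
    return conn.execute("""
        SELECT s.rep_id, s.amount
        FROM sales s JOIN salesrep r ON r.rep_id = s.rep_id
        WHERE r.username = ?
    """, (username,)).fetchall()

print(visible_sales("XXXX"))  # [(1, 100)]
```

In Discoverer terms, the WHERE clause on username lives inside the filtering view, and joining that view to any sales folder applies the restriction automatically; a manager-level view would instead return all rep_ids reporting to the logged-in manager.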
PL/SQL to map query results to a column?
Is there a programmatic way to map the query result to a value in a table?
I have table A. There's a column with a caret-delimited string, e.g. 0^0^1^, that I can parse with SUBSTR/INSTR functions. So, query that table/column for the 5th value, which sits between the 5th and 6th ^ characters.
There's table B. It has a 'position' column and a 'key' column. How do I let table A know that when I query table A for the 5th value, it needs to map to table B's row where position = 5?
thanks,

Did you try to run those statements? Please do so next time.
Also the creation of table C (or is it D) and E are missing.
However, the tricky part is in deciphering table A to make up for the flawed design.
I hope this piece of SQL is helpful for you, because you could join the outcome to your other tables:
SQL> create table a
2 ( visitor_id number(*,0),
3 adate date,
4 carat varchar2(4000 byte),
5 ip_address varchar2(4000 byte),
6 state varchar2(4000 byte),
7 city varchar2(4000 byte),
8 id number(*,0) not null enable,
9 constraint "a" primary key (id)
10 )
11 /
Table created.
SQL> insert into A
2 (VISITOR_ID,ADATE,CARAT,IP_ADDRESS,STATE,city,id) VALUES(194296532,TO_DATE('2007-06-26.00.01.46',''),'-1^1^2^0^3^85741^3^0^176^0^1
^-1^41^-1^-1^US^0^-1^2^0^1^^^^^^^','71.226.9.44','az','tucson',1);
1 row created.
SQL> insert into A
2 (VISITOR_ID,ADATE,CARAT,IP_ADDRESS,STATE,city,id) VALUES(37482918,TO_DATE('2007-06-26.00.01.46',''),'0^1^2^5^^78154^3^7^184^0^1^2^
17^2^1^US^1^0^1^0^0^^^^^^^','70.163.196.111','tx','san antonio',2);
1 row created.
SQL> select id
2 , visitor_id
3 , i position
4 , c value
5 from a
6 model
7 return updated rows
8 partition by (id, visitor_id)
9 dimension by (0 i)
10 measures ('^' || carat || '^' c)
11 rules
12 ( c[for i from 1 to length(regexp_replace(c[0],'[^\^]'))-1 increment 1]
13 = regexp_substr(c[0],'[^\^]+',1,cv(i))
14 )
15 order by id
16 , position
17 /
ID VISITOR_ID POSITION VALUE
1 194296532 1 -1
1 194296532 2 1
1 194296532 3 2
1 194296532 4 0
1 194296532 5 3
1 194296532 6 85741
1 194296532 7 3
1 194296532 8 0
1 194296532 9 176
1 194296532 10 0
1 194296532 11 1
1 194296532 12 -1
1 194296532 13 41
1 194296532 14 -1
1 194296532 15 -1
1 194296532 16 US
1 194296532 17 0
1 194296532 18 -1
1 194296532 19 2
1 194296532 20 0
1 194296532 21 1
1 194296532 22
1 194296532 23
1 194296532 24
1 194296532 25
1 194296532 26
1 194296532 27
1 194296532 28
2 37482918 1 0
2 37482918 2 1
2 37482918 3 2
2 37482918 4 5
2 37482918 5 78154
2 37482918 6 3
2 37482918 7 7
2 37482918 8 184
2 37482918 9 0
2 37482918 10 1
2 37482918 11 2
2 37482918 12 17
2 37482918 13 2
2 37482918 14 1
2 37482918 15 US
2 37482918 16 1
2 37482918 17 0
2 37482918 18 1
2 37482918 19 0
2 37482918 20 0
2 37482918 21
2 37482918 22
2 37482918 23
2 37482918 24
2 37482918 25
2 37482918 26
2 37482918 27
2 37482918 28
56 rows selected.

Regards,
Rob. -
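The caret-string unpivot from the previous thread can also be sketched in plain Python, splitting the string into 1-based (position, value) pairs. This is an illustration of the logic, not a replacement for doing it in SQL; note one subtlety: unlike the regexp_substr('[^\^]+') approach, a plain split preserves empty segments, which is usually what you want for positional parsing.

```python
def parse_carat(carat):
    """Split a caret-delimited string into 1-based (position, value) pairs.
    Empty segments become None, matching the NULL values in the SQL output."""
    return [(i, v if v != "" else None)
            for i, v in enumerate(carat.split("^"), start=1)]

carat = "0^1^2^5^^78154^3"   # shortened sample from the thread
pairs = parse_carat(carat)
print(pairs[4])  # (5, None) -- the empty 5th segment is preserved
print(pairs[5])  # (6, '78154')
```

Joining these (position, value) pairs to table B on its position column then answers the original question directly.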
Technical Design Review Question: Data target Mapping and transformation
I got my hands on technical design documentation for a project on COPA budget. I came up with a few questions but I will post them separately for fast closing and awards:
In the discussions of Data target Mapping and transformation, there was a table of characteristics, showing dimensions, BW filed, Source field, data type, etc.
1. What is the technique in deciding which characteristics get grouped together into a particular dimension?
2. Why do some dimensions only have one characteristic and what is its significance?
3. I saw one BW field, OVAL_TYPE (description= valuation type) included in three dimensions: Customer, Material and Valuation Type(only field=OVAL_TYPE). What is the significance of this repetition?
Thanks

Morning,
To define a dimension means to group characteristics together that have a "1:n" relationship, not "n:m", in order to reduce data volume (cardinality). If you reduce cardinality (i.e. define 1:n relationships whenever possible and group those characteristics within one common dimension), you improve performance because of the smaller data volume. It is essential to understand that this cannot easily be changed after the definition is in PROD, so from this point of view data modeling is also very important.
Example:
An accounting object has an "m:n" relationship to the accounting partner object; that's the reason why the accounting object and the accounting partner object do not belong to the same dimension.
Ni hao
Eckhard Lewin -
Automatic Fields Mapping in Transformation - BI7
Greeting,
Getting some development procedures done automatically will definitely boost productivity and efficiency. Examples include programs to create mass InfoObjects, delete mass InfoObjects, and activate mass InfoObjects.
Again, those are just examples.
In line with this, have you ever come across any program for automatic mass mapping in transformation?
Meaning instead of doing it manually, you just fill the source fields and the target field in the program then you press the button and have it ready on the spot.
Your quick response will be highly appreciated.

Hi,
I have not tried this out. But I guess you can use the class CL_RSTRAN_BUILD and generate transformations via an ABAP program.
Bye
Dinesh -
How to find the name of the query for a given report?
Hi All,
I have the name of a report and I need to find out the name of the query for that report. Please tell me how to find the name of the query for a given report.
Thanks.
Regards,
Pooja Joshi.

Use this FM:
RSAQ_DECODE_REPORT_NAME
This FM takes the program name as input and gives the query name as output.
This FM uses the structure AQADEF to fetch the data.
Hope this helps.
Regards
Vinayak -
Is there a better way to do this transformation
I have the following code where I create a DOM and transform it to text. My question, since I'm new to all this: is there a cleaner way to capture the transformation output and convert it to a String that I need to include in the body of an email? Currently it seems like a waste to store the transformation in a text file and in turn do a .readLine() to obtain the contents. And I don't want to flush the output to the client by using an OutputStream or Writer. Thank you all for your time.
Source xmlSource=new DOMSource(createDom());
File file = new File("A:\\email.txt");
Result result=new StreamResult(file);
TransformerFactory transFact = TransformerFactory.newInstance();
Transformer transformer = transFact.newTransformer(new StreamSource("A:\\xslTEXT.xsl"));
transformer.transform(xmlSource, result);
//redirect to email
Mail mail = new Mail(file);

If you look at the possible StreamResult constructors in the API documentation, you'll see that a File is only one of many things you can pass in there. One of the other options is a Writer; if you create a StringWriter and wrap it in a StreamResult, then after your transformation is complete you can get its output as a String from that StringWriter.
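The same in-memory idea, sketched here in Python with xml.dom.minidom purely for illustration: the Java answer's StringWriter corresponds to io.StringIO below, and the serialized XML never touches the filesystem.

```python
import io
from xml.dom.minidom import Document

# Build a small DOM, then serialize it into an in-memory buffer
# instead of a temp file -- the Python analogue of wrapping a
# StringWriter in a StreamResult.
doc = Document()
root = doc.createElement("email")
root.appendChild(doc.createTextNode("Hello"))
doc.appendChild(root)

buf = io.StringIO()   # in-memory "file"
doc.writexml(buf)     # write the serialized XML into the buffer
body = buf.getvalue() # the transformation result as a str
print(body)
```

The resulting string can be handed straight to the mail-sending code, with no temporary file and no readLine() loop.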