Using CASE & GROUP BY
I am trying to write a query that displays each location with the number of targets in each price range, from a table I have created which looks like this:
TARGET_ID COST LOCATION_ID
22 6.76 1
83 13.79 1
256 6.71 1
50 6.76 2
127 13.52 2
186 12.07 2
8 6.71 3
200 17.33 3
48 13.79 4
The code is not counting the targets in each location; it just counts the targets and groups them all into one location.
Can anyone explain why this is happening?
SELECT location_id,
CASE
WHEN cost < 1 THEN count(target_id) ELSE 0
END "< 1",
CASE
WHEN cost BETWEEN 1 AND 2.99 THEN count(target_id) ELSE 0
END "1 - 2.99",
CASE
WHEN cost BETWEEN 3 AND 4.99 THEN count(target_id) ELSE 0
END "3 - 4.99",
CASE
WHEN cost BETWEEN 5 AND 6.99 THEN count(target_id) ELSE 0
END "5 - 6.99",
CASE
WHEN cost >7 THEN count(target_id) ELSE 0
END ">= 7"
FROM
(SELECT
t.target_id target_id,
SUM(ac.cost) cost,
t.location_id location_id
FROM
targets t
inner JOIN
information i
ON
t.target_id=i.target_id
LEFT OUTER JOIN
access_cost ac
ON
i.security_level BETWEEN
NVL(ac.lower_bound_level,i.security_level) AND
NVL(ac.upper_bound_level,i.security_level)
group by
t.target_id, t.location_id
order by target_id)
GROUP BY
location_id, cost
Maybe I need to wrap the whole thing in another SELECT statement, then GROUP BY location_id and take the SUM of each column for each location.
I tried this and I am getting an error:
SELECT loc, sum('<1') FROM (
ERROR at line 1:
ORA-01722: invalid number
SELECT Loc,
SUM('<1')
FROM (SELECT Location_Id Loc,
CASE
WHEN COST < 1 THEN COUNT(Target_Id)
ELSE 0
END "< 1",
CASE
WHEN COST BETWEEN 1
AND 2.99 THEN COUNT(Target_Id)
ELSE 0
END "1 - 2.99",
CASE
WHEN COST BETWEEN 3
AND 4.99 THEN COUNT(Target_Id)
ELSE 0
END "3 - 4.99",
CASE
WHEN COST BETWEEN 5
AND 6.99 THEN COUNT(Target_Id)
ELSE 0
END "5 - 6.99",
CASE
WHEN COST > 7 THEN COUNT(Target_Id)
ELSE 0
END ">= 7"
FROM (SELECT t.Target_Id Target_Id,
SUM(ac.COST) COST,
t.Location_Id Location_Id
FROM Targets t
INNER JOIN Information i
ON t.Target_Id = i.Target_Id
LEFT OUTER JOIN Access_Cost ac
ON i.Security_Level BETWEEN Nvl(ac.Lower_Bound_Level,i.Security_Level)
AND Nvl(ac.Upper_Bound_Level,i.Security_Level)
GROUP BY t.Target_Id,
t.Location_Id
ORDER BY Target_Id)
GROUP BY Location_Id,
COST
ORDER BY Location_Id)
GROUP BY Loc
ORDER BY Loc
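Two things are going wrong here. First, in the outer query `SUM('<1')` sums the string literal `'<1'`, not the column named `"< 1"`; Oracle tries to convert that string to a number, hence ORA-01722. A quoted column alias must be referenced with double quotes: `SUM("< 1")`. Second, the inner CASE-around-COUNT shape is inside-out: with the aggregate inside the CASE, the bucket test is applied to a group's cost rather than to each row, so the counts don't split per bucket. The idiomatic fix is conditional aggregation, with the aggregate on the outside. A minimal runnable sketch of the pattern (SQLite via Python's sqlite3; three of the buckets shown, data taken from the table in the post, table name invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE target_costs (target_id INTEGER, cost REAL, location_id INTEGER);
INSERT INTO target_costs VALUES
  (22, 6.76, 1), (83, 13.79, 1), (256, 6.71, 1),
  (50, 6.76, 2), (127, 13.52, 2), (186, 12.07, 2),
  (8, 6.71, 3), (200, 17.33, 3), (48, 13.79, 4);
""")

# Aggregate OUTSIDE the CASE: each row contributes 1 to exactly one
# bucket column, and GROUP BY then totals the buckets per location.
rows = conn.execute("""
SELECT location_id,
       SUM(CASE WHEN cost < 1                THEN 1 ELSE 0 END) AS "< 1",
       SUM(CASE WHEN cost BETWEEN 5 AND 6.99 THEN 1 ELSE 0 END) AS "5 - 6.99",
       SUM(CASE WHEN cost >= 7               THEN 1 ELSE 0 END) AS ">= 7"
FROM target_costs
GROUP BY location_id
ORDER BY location_id
""").fetchall()

for row in rows:
    print(row)
```

This removes the need for the extra wrapping SELECT entirely. Note also that the original's last branch tests `cost > 7` but is labelled ">= 7", so a cost of exactly 7 would fall into no bucket; the sketch uses `>= 7`.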
Similar Messages
-
Using CASE WHEN with an aggregate function
Hi,
I have a sql statement like below,
Select CASE WHEN (Sum(Amount) Over (Partition By Name),1,1) = '-' THEN 0 ELSE Sum(Amount) Over (Partition By Name) END AS Amount_Person
From tbPerson
But when I run the SQL statement above I get the error ORA-00920: invalid relational operator. What I'm trying to do: when the total amount for each person is negative, return 0; otherwise return the positive value. I don't want to use GROUP BY. Is there any other way than using SUM ... OVER? Thanks
Like this?
SELECT CASE WHEN Sum(Amount) Over (Partition By Name) < 0 THEN 0
ELSE Sum(Amount) Over (Partition By Name)
END AS Amount_Person
FROM tbPerson
;
or using the GREATEST function:
SELECT GREATEST(
Sum(Amount) Over (Partition By Name)
, 0
) as Amount_Person
FROM tbPerson
;
Edited by: odie_63 on Feb. 24, 2011 09:12 -
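Both fixes above can be checked with a small runnable sketch (SQLite 3.25+ via Python's sqlite3; SQLite's two-argument scalar MAX stands in for Oracle's GREATEST, and the tbPerson rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tbPerson (Name TEXT, Amount REAL);
INSERT INTO tbPerson VALUES ('a', 10), ('a', -25), ('b', 5), ('b', 3);
""")

# No GROUP BY: the windowed SUM repeats the per-name total on every row,
# and the scalar MAX(..., 0) clamps negative totals to zero.
rows = conn.execute("""
SELECT Name,
       MAX(SUM(Amount) OVER (PARTITION BY Name), 0) AS Amount_Person
FROM tbPerson
ORDER BY Name
""").fetchall()
```

Person 'a' totals -15 and so comes back as 0 on both of its rows; 'b' totals 8.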
Need help in this sql query to use Case Statement
hi All,
I have the below query -
SELECT DISTINCT OFFC.PROV_ID
,OFFC.WK_DAY
,CASE
WHEN OFFC.WK_DAY ='MONDAY' THEN 1
WHEN OFFC.WK_DAY ='TUESDAY' THEN 2
WHEN OFFC.WK_DAY ='WEDNESDAY' THEN 3
WHEN OFFC.WK_DAY ='THURSDAY' THEN 4
WHEN OFFC.WK_DAY ='FRIDAY' THEN 5
WHEN OFFC.WK_DAY ='SATURDAY' THEN 6
WHEN OFFC.WK_DAY ='SUNDAY' THEN 7
END AS DOW
,OFFC.OFFC_OPENG_TIME
,OFFC.OFFC_CLSNG_TIME
FROM GGDD.PROV_OFFC_HR OFFC
WHERE OFFC.PROV_ID='0000600'
WITH UR;
This query is bringing back results in 6 different rows, with the opening and closing time for each day separately. I want to generate the data in one row, with each day having its opening and closing time; so for 7 days, a total of 14 columns of opening and closing times. But I am not able to do that using a CASE statement.
can somebody help me in achieving that.
thanks,
iamhere
Hi,
Welcome to the forum!
That's called a Pivot .
Instead of having 1 CASE expression, have 14: one for the opening and one for the closing time of each day, and do a GROUP BY to combine them onto one row.
SELECT OFFC.PROV_ID
, MIN (CASE WHEN OFFC.WK_DAY ='MONDAY' THEN OFFC.OFFC_OPENG_TIME END) AS mon_opn
, MIN (CASE WHEN OFFC.WK_DAY ='MONDAY' THEN OFFC.OFFC_CLSNG_TIME END) AS mon_cls
, MIN (CASE WHEN OFFC.WK_DAY ='TUESDAY' THEN OFFC.OFFC_OPENG_TIME END) AS tue_opn
, MIN (CASE WHEN OFFC.WK_DAY ='TUESDAY' THEN OFFC.OFFC_CLSNG_TIME END) AS tue_cls
FROM GGDD.PROV_OFFC_HR OFFC
WHERE OFFC.PROV_ID = '0000600'
GROUP BY offc.prov_id
;
This assumes there is (at most) one row in the table for each distinct prov_id and weekday. If not, what do you want to do? Post a little sample data (CREATE TABLE and INSERT statements) and the results you want from that data.
The statement above works in Oracle 8.1 and up, but there's a better way (SELECT ... PIVOT) available in Oracle 11. What version are you using? (It's always a good idea to include this when you post a question.)
Edited by: Frank Kulash on Jan 6, 2011 8:22 PM -
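The MIN(CASE ...) pivot suggested above can be verified with a runnable sketch (SQLite via Python's sqlite3; the table is a cut-down stand-in for GGDD.PROV_OFFC_HR with invented hours):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prov_offc_hr (prov_id TEXT, wk_day TEXT, opng TEXT, clsng TEXT);
INSERT INTO prov_offc_hr VALUES
  ('0000600', 'MONDAY',  '09:00', '17:00'),
  ('0000600', 'TUESDAY', '10:00', '16:00');
""")

# One MIN(CASE ...) per output column; GROUP BY collapses the day rows
# into a single row per provider. The CASE yields NULL for other days,
# and MIN ignores NULLs, so each cell picks up the matching day's value.
row = conn.execute("""
SELECT prov_id,
       MIN(CASE WHEN wk_day = 'MONDAY'  THEN opng  END) AS mon_opn,
       MIN(CASE WHEN wk_day = 'MONDAY'  THEN clsng END) AS mon_cls,
       MIN(CASE WHEN wk_day = 'TUESDAY' THEN opng  END) AS tue_opn,
       MIN(CASE WHEN wk_day = 'TUESDAY' THEN clsng END) AS tue_cls
FROM prov_offc_hr
GROUP BY prov_id
""").fetchone()
```

Extending this to all 7 days gives the 14 opening/closing columns the question asks for.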
I have a few pretty basic use-case questions about Photo Stream.
I take a lot of pictures on my iPhone. They magically appear in iPhoto on my Mac. I'm told that by having the import feature in iPhoto turned on, those photos will stay in iPhoto on my iMac forever - or at least after I pass the 30 days/1,000 most recent photos limits.
So that leaves me to ask: what am I supposed to do with the photos on my phone? Just leave them there to eat up storage? Or delete them at some point? Delete all the photos on my phone after I import those that Photostream hadn't already captured? Is there any benefit to importing a duplicate photo if I have Photostream importing turned on? And are they truly, really duplicate photos? Are the iPhone photos in my Photostream duplicated on my phone until they fall out of Photostream? Are iPhone photos duplicated if they're in a shared stream?
I just don't know what I should do with the photos on my iPhone once they're on my iMac, and I'm not truly confident that they're on my iMac for good or that they're truly the same file as the original.
On the Mac, in iPhoto, move photos in the Photo Stream group into some other album.
foatbttpo1567,
In iPhoto on a Mac you need to download the photos to an event - not an album. An album would only reference them in the photo stream, not store the photos in the iPhoto library. Turning on "Automatic Import", as Old Toad suggested, will do that and create monthly events.
I'm told that by having the import feature in iPhoto turned on, those photos will stay in iPhoto on my iMac forever - or at least after I pass the 30 days/1,000 most recent photos limits.
The 30 days/1000 photos limit applies to the temporary storage in iCloud - the time you have to grab them and to import them. Once they are in an event, you have them safe.
So that leaves me to ask: what am I supposed to do with the photos on my phone? Just leave them there to eat up storage? Or delete them at some point? Delete all the photos on my phone after I import those that Photostream hadn't already captured?
Photo Stream is a handy feature for transmitting photos, but don't rely on it for permanent storage. If you ever have to reset your iPhone or reinstall your Mac, or to reset the photo stream, your photos from the stream may be gone. Keep always a copy of your photos either in the Camera roll on your phone or in iPhoto events on your mac, and make sure that these copies are regularly backed up.
As for the "truly duplicates" - Photo stream will send optimized versions to the devices, but to a mac the full original version. You may want to read the FAQ: iCloud: Photo Stream FAQ -
Hi,
We have a requirement to purge the Azure WADLogs table on a periodic basis. We are achieving this by using Entity group transactions to delete the
records older than 15 days. The logic is like this.
bool recordDoesNotExistExceptionOccured = false;
CloudTable wadLogsTable = tableClient.GetTableReference(WADLogsTableName);
partitionKey = "0" + DateTime.UtcNow.AddDays(noOfDays).Ticks;
TableQuery<WadLogsEntity> buildQuery = new TableQuery<WadLogsEntity>().Where(
    TableQuery.GenerateFilterCondition("PartitionKey",
        QueryComparisons.LessThanOrEqual, partitionKey));

while (!recordDoesNotExistExceptionOccured)
{
    IEnumerable<WadLogsEntity> result = wadLogsTable.ExecuteQuery(buildQuery).Take(1000);
    //// Batch entity delete.
    if (result != null && result.Count() > 0)
    {
        Dictionary<string, TableBatchOperation> batches = new Dictionary<string, TableBatchOperation>();
        foreach (var entity in result)
        {
            TableOperation tableOperation = TableOperation.Delete(entity);
            if (!batches.ContainsKey(entity.PartitionKey))
            {
                batches.Add(entity.PartitionKey, new TableBatchOperation());
            }
            // A batch operation allows a maximum of 100 entities, which must share the same PartitionKey.
            if (batches[entity.PartitionKey].Count < 100)
            {
                batches[entity.PartitionKey].Add(tableOperation);
            }
        }
        // Execute batches.
        foreach (var batch in batches.Values)
        {
            try
            {
                await wadLogsTable.ExecuteBatchAsync(batch);
            }
            catch (Exception exception)
            {
                // Log exception here.
                // Set flag.
                if (exception.Message.Contains(ResourceDoesNotExist))
                {
                    recordDoesNotExistExceptionOccured = true;
                    break;
                }
            }
        }
    }
    else
    {
        break;
    }
}
My questions are:
Is this an efficient way to purge the WADLogs table? If not, what can make this better?
Is this the correct way to handle the "Specified resource does not exist exception"? If not, how can I make this better?
Would this logic fail in any particular case?
How would this approach change if this code is in a worker which has multiple instances deployed?
I have come up with this code by referencing the solution given
here by Keith Murray.
Hi Nikhil,
Thanks for your posting!
I tested your code and Keith's code on my side; everything worked fine, and when the result is null or empty the while() loop breaks. I see your code already has logic to handle the "ResourceDoesNotExist" error.
It seems the code works fine. If you keep getting this error, I suggest you debug the code and find which line throws the exception.
>> Is this an efficient way to purge the WADLogs table? If not, what can make this better?
Based on my experience, you can either use code (like the logic above) or use a third-party tool to delete the entities manually. In my opinion the code approach is very efficient: it can run automatically and saves manual work.
>>Is this the correct way to handle the "Specified resource does not exist exception"? If not, how can I make this better?
In your code you used recordDoesNotExistExceptionOccured as a flag to check whether the entity is null, which is a good choice. When I deleted log table entities, I instead used a flag based on the result count.
For example, I expected the query result count to be 100; if the number came back lower than 100, I set the flag and broke out of the while loop.
>>Would this logic fail in any particular case?
I think it shouldn't fail. But if the result is "0", your while loop will always run and never stop. I think you could add recordDoesNotExistExceptionOccured = true; to your "else" block.
>>How would this approach change if this code is in a worker which has multiple instances deployed?
You don't need to change anything except the "else" block; it would work fine on the worker role.
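The batching rule the code enforces - at most 100 operations per batch, all sharing one PartitionKey - is independent of the Azure SDK and can be sketched in plain Python (the entity shape and names here are illustrative, not the SDK's types). Note that, unlike the original (which skips entities beyond the first 100 per key within an iteration, leaving them for the next pass of the while loop), this sketch chunks every group:

```python
from collections import defaultdict

def build_batches(entities, batch_size=100):
    """Group entities by PartitionKey, then split each group into
    chunks of at most batch_size, mirroring the Table batch rule."""
    grouped = defaultdict(list)
    for entity in entities:
        grouped[entity["PartitionKey"]].append(entity)
    batches = []
    for pk, group in grouped.items():
        for i in range(0, len(group), batch_size):
            batches.append((pk, group[i:i + batch_size]))
    return batches

# 250 fake log entities spread over three partition keys.
entities = [{"PartitionKey": f"0{i % 3}", "RowKey": str(i)} for i in range(250)]
batches = build_batches(entities)
```

Every batch stays within the 100-entity limit and is homogeneous in PartitionKey, so each could be submitted as one entity group transaction.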
If you have any questions about this issue, please feel free to let me know.
Regards,
Will
-
How to use CASE WHEN in a SELECT query?
Hi Friends,
I want to use CASE WHEN in a SELECT query; my situation is like this:
SELECT b~matnr b~lgort b~j_3asiz b~mat_kdauf b~mat_kdpos
SUM( ( case when b~shkzg = 'S' then b~menge else 0 END ) -
( case when b~shkzg = 'H' then b~menge else 0 END ) ) AS qty
INTO corresponding fields of table it_projsal
FROM mseg AS b
INNER JOIN mkpf AS a ON b~mblnr = a~mblnr
AND b~mandt = a~mandt
WHERE a~budat < '20061201' AND b~lgort IN ('1050')
and b~mandt = '350'
GROUP BY b~matnr b~j_3asiz b~mat_kdauf b~mat_kdpos b~lgort
If we write it like this, it gives an error.
Please help me: how can I handle this in the SELECT query itself?
Regards
Shankar
This is not the way to select data from the DB tables.
First get all the data from the DB tables, then do the sum-ups.
Regards
prabhu -
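Regarding the ABAP SELECT above: the SUM( CASE ... ) shape it attempts is ordinary conditional aggregation, and whether Open SQL accepts it depends on the ABAP release (newer releases support CASE in SQL expressions). The pattern itself, shown in generic SQL (SQLite via Python's sqlite3; rows invented, column names shortened from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE mseg (matnr TEXT, shkzg TEXT, menge REAL);
INSERT INTO mseg VALUES
  ('M1', 'S', 10), ('M1', 'H', 4), ('M1', 'S', 6), ('M2', 'H', 3);
""")

# Debit ('S') quantities count positive, credit ('H') negative,
# giving a signed net quantity per material in one pass.
rows = conn.execute("""
SELECT matnr,
       SUM(CASE WHEN shkzg = 'S' THEN menge ELSE 0 END) -
       SUM(CASE WHEN shkzg = 'H' THEN menge ELSE 0 END) AS qty
FROM mseg
GROUP BY matnr
ORDER BY matnr
""").fetchall()
```

On releases without CASE support, the fallback is indeed the reply's suggestion: select the raw rows and net the quantities in an ABAP loop.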
Sql proposed to use case statement
Hi All
Can anyone help me here
This code works fine; in the inner subqueries (b, c, d, e, f) I am getting the weekly counts of usage data from the table mf_wer_OBI_USAGE_reqq.
As this hits the same table with a similar set of queries, I was advised to use a CASE statement, producing wk_1...wk_5 in one query, to make it better.
I am unable to figure out how to proceed.
Appreciate your help here.
Thanks
create table mf_wer_OBI_USAGE_reqq_WK
as select x.user_name id,x.mon MONTH_COUNT,x.wk_1 WEEK1_COUNT,x.wk_2 WEEK2_COUNT,x.wk_3 WEEK3_COUNT,x.wk_4 WEEK4_COUNT,x.wk_5 WEEK5_COUNT,x.subject_area_name,
y.EMP_FIRST_NAME FIRSTNAME,y.EMP_LAST_NAME SURNAME,y.E_MAIL_ADDRESS USER_MAILID,y.ouc OUC
from (select a.user_name,a.mon,a.subject_area_name,b.wk_1,c.wk_2,d.wk_3,e.wk_4,f.wk_5
from (select user_name,sum(count_us_st) mon,subject_area_name from mf_wer_OBI_USAGE_reqq group by user_name,subject_area_name) a,
(select user_name,sum(count_us_st) wk_1,subject_area_name from mf_wer_OBI_USAGE_reqq where extract(day from start_dt) between 1 and 7
group by user_name,subject_area_name) b,
(select user_name,sum(count_us_st) wk_2,subject_area_name from mf_wer_OBI_USAGE_reqq where extract(day from start_dt) between 8 and 14
group by user_name,subject_area_name) c,
(select user_name,sum(count_us_st) wk_3,subject_area_name from mf_wer_OBI_USAGE_reqq where extract(day from start_dt) between 15 and 21
group by user_name,subject_area_name) d,
(select user_name,sum(count_us_st) wk_4,subject_area_name from mf_wer_OBI_USAGE_reqq where extract(day from start_dt) between 22 and 28
group by user_name,subject_area_name) e,
(select user_name,sum(count_us_st) wk_5,subject_area_name from mf_wer_OBI_USAGE_reqq where extract(day from start_dt) between 29 and 31
group by user_name,subject_area_name) f
where a.user_name=b.user_name(+)
and a.subject_area_name=b.subject_area_name(+)
and a.user_name=c.user_name(+)
and a.subject_area_name=c.subject_area_name(+)
and a.user_name=d.user_name(+)
and a.subject_area_name=d.subject_area_name(+)
and a.user_name=e.user_name(+)
and a.subject_area_name=e.subject_area_name(+)
and a.user_name=f.user_name(+)
and a.subject_area_name=f.subject_area_name(+)) x,
dm_employee y
where x.user_name=y.id and
y.active_flg='Y';
Swas_fly wrote:
This code works fine
If it's fine, why try to fix it?
Post your table (only the relevant columns as a CREATE TABLE statement) and some sample data (INSERT into) and your required output.
Post your code between these tags: -
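To make the advice concrete: the five week subqueries (b through f) above can collapse into a single pass with conditional aggregation on the day of the month, removing all the outer joins. A sketch, assuming start_dt is a date column (SQLite via Python's sqlite3; names shortened and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE usage_req (user_name TEXT, start_dt TEXT, count_us_st INT);
INSERT INTO usage_req VALUES
  ('u1', '2024-01-03', 2),
  ('u1', '2024-01-10', 5),
  ('u1', '2024-01-30', 1);
""")

# One scan of the table; each row lands in exactly one weekly bucket.
# strftime('%d', ...) plays the role of Oracle's EXTRACT(DAY FROM ...).
rows = conn.execute("""
SELECT user_name,
       SUM(count_us_st) AS mon,
       SUM(CASE WHEN CAST(strftime('%d', start_dt) AS INT) BETWEEN 1  AND 7  THEN count_us_st ELSE 0 END) AS wk_1,
       SUM(CASE WHEN CAST(strftime('%d', start_dt) AS INT) BETWEEN 8  AND 14 THEN count_us_st ELSE 0 END) AS wk_2,
       SUM(CASE WHEN CAST(strftime('%d', start_dt) AS INT) BETWEEN 29 AND 31 THEN count_us_st ELSE 0 END) AS wk_5
FROM usage_req
GROUP BY user_name
""").fetchall()
```

The monthly total falls out of the same scan as a plain SUM, so subquery "a" disappears too.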
Hi All,
I've a question about potential use case for Oracle spatial. Data structures are following:
Clients
Account (have a dimension of balance, can be zero or above zero)
Client to account relationship
E.g.
Client C1 is a borrower to Account A1 (balance = 0)
Client C1 is a co borrower to Account A2 (balance > 0)
Client C2 is a co borrower to Account A1 (balance > 0)
Client C3 is a co borrower to Account A3 (balance > 0)
Currently, database is modeled as a set of three tables, e.g.
Client
ID
DATA
Account
ID
DATA
BALANCE
CLIENT_TO_ACCOUNT
CLIENT_ID
RELATIONSHIP (E.g borrower)
ACCOUNT_ID
Business limitations:
We are not interested in independent graphs for which all accounts have balance = 0 (let's call it inactive graph), however we might need occasionally query it
Users are interested in vertices/edges with account which have balance = 0, but linked (up to level N) to active account for analysis purposes
There is no well defined root (e.g. there can be 2 or more clients which are co borrowers to same account)
99% of queries will be against active graphs
Graphs are mutable, e.g. new relationships (edges) may be created/deleted during the day
Users are potentially interested in free navigation in whole independent graph, starting from root.
Root is determined by certain business rule
Need to process active graphs daily as bulk
Problems which I am trying to solve:
Limit the amount of data which may need to be processed - based on the analysis of current system, we only need 5% of data + some delta for 99% processing
Make sure performance does not degrade over time as we accumulate more historical (processed) data - we cannot delete accounts with balance = 0, as new relationships may arrive linking them to new accounts with balance > 0
Current solution that I am thinking of :
Artificially partition the data universe as active and inactive graphs. All indexes would be local to two partitions.
E.g.
GROUP
GROUP_ID PK
ACTIVE_FLAG (partition key)
CLIENT
GROUP_ID (PARTITION BY FK TO GROUP)
ACCOUNT
GROUP_ID (PARTITION BY FK TO GROUP)
CLIENT_TO_ACCOUNT
GROUP_ID (PARTITION BY FK TO GROUP)
The issues I am seeing right now:
1. Graphs (groups) may be potentially unlimited, so I will need to artificially limit their size using some dividing algorithm, leading to:
2. Graphs(groups) may need to be joined or divided
3. Graphs(groups) will have to be activated/deactivated - e.g. moved to different partitions.
4. Data loading, activation/deactivation algorithms are not simple
So I am thinking about Oracle Spatial (Network) to model this problem.
Questions:
1) Can I model this problem using Oracle Spatial?
2) Will I gain any performance improvement?
3) Is there any explanation or white paper on how to do this for this particular type of problem?
4) Will the solution based on Oracle Spatial solve the problems outlined above?
5) Will my solution (without using Oracle spatial) work at all? Or there are some fundamental issues..
Thank you!
Either add a LOV to the JobID attribute definition in the VO (if the JobID will be editable), or simply add the job description to the SELECT statement (join to the job table) as a reference attribute
-
Hi All,
I am new to Oracle BPM and learning it through some samples. We need to build a process very soon and would like your advice on the use case.
1. Everyday we need to run a process which grabs the records from the database based on some criteria. How can we implement this in BPM?
2. We need to post these records(or notifications) to a group of users through webcenter client. How can we integrate webcenter to BPM?
3. Users need to act on these notifications and after that the result should be synched up with the database.
4. Automatic notifications should be sent to higher management if no action is taken on the prev notification.
I know I am asking too much, but we just need some pointers or high-level suggestions on what kinds of activities or processes we need to use in BPM to implement this.
Thanks very much.
Hi,
I am also a newbie and the following are some of my suggestions, experts can suggest a better way if there is any.
1. Everyday we need to run a process which grabs the records from the database based on some criteria. How can we implement this in BPM? - I would guess you can use Mediator to poll the database and create the process
2. We need to post these records(or notifications) to a group of users through webcenter client. How can we integrate webcenter to BPM? - BPM Integration with Webcenter is done using Process Portal
3. Users need to act on these notifications and after that the result should be synched up with the database. - I would guess you can use Database Adapater to sync up the database
4. Automatic notifications should be sent to higher management if no action is taken on the prev notification. - SLA can be set in the Human Task.
Hope this helps.
Venkat -
Is there a process for using Detail Group Regions?
Hi Guys,
I have a question regarding dynamically displaying stacked detail groups. I want to display a detail group depending on a value that is set in the master group.
The documentation surrounding Detail Group Regions is vague. Is there a set process on how to do this, or even a documented case of getting it to work? We have tried using Detail Group Regions, but when we did, the form of the master group was not displayed.
Regards
Bar
JHS: 10.1.3.2.51
JDev: 10.1.3.2.0.4066
Steven,
Below is a stripped-down version of my app def, including the group layout settings. Even with no rendered expression, only the relationship and address detail groups appear (person and organization are missing), and instead of the SearchDetails/SearchCriteria form fields the organization table appears above the detail groups.
If I create detail group regions for all four detail groups, then no detail groups are displayed and it's the address table that appears instead of the form fields.
Regards,
Bar
<Service name="MainAppModule">
--<Group layoutStyle="table-form" name="main_Search">
----<RegionContainer name="Regions">
------<ItemRegion name="SearchDetails">
--------<Item/><Item/>
------</ItemRegion>
------<ItemRegion name="SearchCriteria">
--------<Item/><Item/><Item/><Item/>
------</ItemRegion>
------<GroupRegion name="main_PersonGroupRegion" groupName="main_Person" title="Person Group Region"/>
------<GroupRegion name="main_OrganizationGroupRegion" groupName="main_Organization" title="Organization group Region"/>
----</RegionContainer>
----<Group layoutStyle="table-form" samePage="true" name="main_Person">
------<Item/>
------<RegionContainer name="Regions">
--------<ItemRegion name="PersonRecord">
----------<Item/><Item/>
--------</ItemRegion>
--------<ItemRegion name="PersonDetails">
----------<Item/><Item/><Item/><Item/>
--------</ItemRegion>
------</RegionContainer>
----</Group>
----<Group layoutStyle="table-form" samePage="true" name="main_Organization">
------<Item/>
------<RegionContainer name="Regions">
--------<ItemRegion name="PersonRecord">
----------<Item/><Item/>
--------</ItemRegion>
--------<ItemRegion name="PersonDetails">
----------<Item/><Item/><Item/><Item/>
--------</ItemRegion>
------</RegionContainer>
----</Group>
----<Group layoutStyle="table-form" samePage="true" name="main_Relationship">
------<Item/>
------<RegionContainer name="Regions">
--------<ItemRegion name="RelationshipRecord">
----------<Item/><Item/>
--------</ItemRegion>
--------<ItemRegion name="RelationshipDetails">
----------<Item/><Item/><Item/><Item/>
--------</ItemRegion>
------</RegionContainer>
----</Group>
----<Group layoutStyle="table-form" samePage="true" name="main_Address">
------<Item/>
------<RegionContainer name="Regions">
--------<ItemRegion name="AddressRecord">
----------<Item/><Item/>
--------</ItemRegion>
--------<ItemRegion name="AddressDetails">
----------<Item/><Item/><Item/><Item/>
--------</ItemRegion>
------</RegionContainer>
----</Group>
--</Group>
</Service> -
Problem trying to use a group by clause
hey good day,
I'm trying to use a GROUP BY clause in my application, but I'm getting the following error.
Error:- Query cannot be parsed, please check the syntax of your query. (ORA-00979: not a GROUP BY expression)
select "INSTRUCTOR"."EMPNUM",
"INSTRUCTOR"."FIRSTNAME",
"INSTRUCTOR"."LASTNAME",
"QUALN"."SPECIALIZE_FIELD" as "SPECIALIZE_FIELD",
"INSTRUCTOR"."USERNAME"
from "QUALN" "QUALN",
"INSTRUCTOR" "INSTRUCTOR"
where "INSTRUCTOR"."EMPNUM"="QUALN"."EMPNUM"
group by "INSTRUCTOR"."EMPNUM", "INSTRUCTOR"."FIRSTNAME", "INSTRUCTOR"."LASTNAME"
Thanks in advance,
Richie
Richie wrote:
hey thanks for your reply,
I have tried what you suggested, but now I get another error:
Error :- The report query needs a unique key to identify each row. The supplied key cannot be used for this query. Please edit the report attributes to define a unique key column. ORA-01446: cannot select ROWID from, or sample, a ...
This error message is not from Oracle but from your reporting tool; it might be MS Access or whatever other tool you use. Some of these tools want a unique value to identify the current row, and the logic applied depends on the relationships of your tables. However, in your case you could do it without the GROUP BY condition; then ROWID can still be applied to your query.
Example
note the use of alias names to simplify the select
select i.EMPNUM ,
i.FIRSTNAME ,
i.LASTNAME ,
i.USERNAME
from INSTRUCTOR i
where i.EMPNUM in (select q.EMPNUM from QUALN q); -
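The rule behind the ORA-00979 above: every selected column not wrapped in an aggregate must appear in the GROUP BY list (here SPECIALIZE_FIELD and USERNAME were selected but not grouped). The reply's rewrite sidesteps grouping entirely with an IN subquery; a runnable sketch of that shape (SQLite via Python's sqlite3, toy data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instructor (empnum INT, firstname TEXT, lastname TEXT, username TEXT);
CREATE TABLE qualn (empnum INT, specialize_field TEXT);
INSERT INTO instructor VALUES (1, 'Ann', 'Lee', 'alee'), (2, 'Bob', 'Ray', 'bray');
INSERT INTO qualn VALUES (1, 'Math'), (1, 'Physics');
""")

# No GROUP BY needed: keep instructors that have at least one QUALN row.
# Each instructor appears once even if they have several qualifications.
rows = conn.execute("""
SELECT i.empnum, i.firstname, i.lastname, i.username
FROM instructor i
WHERE i.empnum IN (SELECT q.empnum FROM qualn q)
""").fetchall()
```

The trade-off: this version drops the SPECIALIZE_FIELD column; if that column is still wanted, the join must stay and SPECIALIZE_FIELD must be added to the GROUP BY list (or the grouping dropped altogether, accepting one row per qualification).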
Using identy Group as condition
Hi ,
I want to create an authorization policy using two identity groups as conditions, but I only have "OR" as the operator for those two conditions. I want to use the AND operator instead; is this possible?
Configuring Policy Elements Conditions
Cisco ISE provides a way to create conditions that are individual, reusable policy elements that can be referred from other rule-based policies. Whenever a policy is being evaluated, the conditions that comprise it are evaluated first.
Under Policy > Policy Elements > Conditions, the initial Conditions pane displays the following policy
element condition options: Authentication, Authorization, Profiling, Posture, Guest, and Common.
Simple Conditions
Simple Condition Format
This type uses the form attribute operand value. Rule-based conditions are essentially a comparison of values (the attribute with its value), and these can be saved and reused in other rule-based policies. Simple conditions take the format of A operand B, where A can be any attribute from a Cisco ISE dictionary and B can be one of the values that attribute A can take.
Compound Conditions
Compound Condition Format
Authorization policies can contain conditional requirements that combine one or more identity groups using a compound condition that includes authorization checks that can return one or more authorization profiles. This condition type comprises one or more simple conditions that use an AND or OR relationship. These are built on top of simple conditions and can be saved and reused in other rule-based policies. Compound Conditions can take any of the following forms:
• (X operand Y) AND (A operand B) AND (X operand Z) AND ... (so on)
• (X operand Y) OR (A operand B) OR (X operand Z) OR ... (so on)
(*Where X and A are attributes from the Cisco ISE dictionary and can include username and device type.
For example, compound conditions can take the following form:
– DEVICE: Model Name Matches Catalyst6K AND Network Access: Use Case Equals Host Lookup.)
Creating New Authorization Policy Element Conditions
Use this procedure to create new authorization policy element conditions (simple or compound).
To create new authorization policy element conditions, complete the following steps:
Step 1 Click Policy > Policy Elements> Conditions > Authorization> Simple Conditions (or Compound
Conditions).
The Conditions page appears listing all existing configured authorization policy element conditions.
Step 2 To create a new simple condition, click Create.
The Simple Conditions page displays.
Step 3 Enter values in the following fields to define a new simple condition:
• Name—Enter the name of the simple condition.
• Description—Enter the description of the simple condition.
• Attribute—Click to choose a dictionary from the drop-down list of dictionary options, and choose an
attribute from the corresponding attribute choices.
• Operator—Enter Equals or Not Equals.
• Value—Enter a value that matches the selected attribute.
Step 4 Click Submit to save your changes to the Cisco ISE database and create this authorization condition.
The Name, Attribute, Operator, and Value fields in simple conditions are required and are marked with an asterisk (*).
For Complete Reference visit:
http://www.cisco.com/en/US/docs/security/ise/1.0/user_guide/ise10_authz_polprfls.pdf -
Withdrawal using strategy group 10
Hi Dear SAP Expert,
I have the following scenario:
I am using strategy group 10 on some materials to create demand for them, with the demand reduced only by sales orders. But I have a case with two materials where the demand was reduced, and after checking whether any sales order had been created for these materials, I found none.
Is it possible for a withdrawal to be created by some process other than a sales order when using strategy group 10?
How can I find out what reduced this demand, or how it was reduced?
Is there a report to show withdrawals for a material?
Example:
Material 147420311 STATIC CONVERTER(TV)-G5E
Plant TVSO Reqmts type LSF Version/active PP / ReqPlanNo. SRVC
Plan. qty 30 EA MRP area
P  Reqmts dt.    Planned qty   Withdrawal qty
D  09/07/2010    30            10
Thanks in advance for all help
Best Regard,
Julio
Hi Mario,
First, thanks for the quick answer :).
You're right, the withdrawal is done by a goods issue (GI) from a delivery - that's what I meant to say - and this GI delivery comes from a sales order, right?
I know a GI can also come from a production order, but when we use strategy 10 the withdrawal will be done by GI of a delivery from a sales order, according to SAP help.
I am just investigating why I have a withdrawal when I don't have any delivery for this material.
I use transaction MD63 to see the demand for this material, and I see the withdrawal reduction on the Schedule lines tab of this view. I just want to know what reduced the demand, or how, if I don't have any GI delivery.
I have this issue with only a couple of materials; all the others are working properly.
I have compared those couple of materials with others that are working fine, but I don't see any difference in their configuration in the material master.
Could you please advise me on this? Point me in a direction to follow?
I appreciate your attention!
Best Regards,
Julio -
Where can I get use cases for practicing OIM-OID provisioning, reconciliation, and other aspects?
Hi Dear,
Thanks for your reply. We are using OIM version 9.x. I checked the Root DN value as you suggested (see the snapshot of the OID resource definition below):
Admin Id cn=username
Admin Password *******
Group Reconciliation Time Stamp
Last Target Delete Recon TimeStamp
Last Target Recon TimeStamp
Last Trusted Delete Recon TimeStamp
Last Trusted Recon TimeStamp
Port 6060
Prov Attribute Lookup Code AttrName.Prov.Map.OID
Prov Group Attribute Lookup Code AttrName.Group.Prov.Map.OID
Prov Role Attribute Lookup Code AttrName.Role.Prov.Map.OID
Role Reconciliation Time Stamp
Root DN DC=oracle,DC=com
SSL false
Server Address My server name
Use XL Org Structure false -
How can I construct this query without using a CASE statement?
I have the following code. I'm using this query with Hibernate, so I cannot use CASE there, because Hibernate doesn't support CASE in a SELECT statement. How can I construct the same thing, giving the same result, without using CASE?
SELECT ofc.FLT_LCL_ORIG_DT
, ofc.CARR_IATA_CD
, ofc.FLT_NBR
, ofc.ORIG_ARPT_CD
, ofc.DEST_ARPT_CD
, sum( ofc.CNCT_PSGR_CNT) AS BOOKED_CNCT_PSGR_CNT
, sum( CASE WHEN o.fsdr_mrkt_cd = 'D' AND d.fsdr_mrkt_cd = 'D' THEN '0'
ELSE to_char(ofc.CNCT_PSGR_CNT,'99') END ) AS BOOKED_INTL_CNCT_PSGR_CNT
, sum(CASE WHEN o.fsdr_mrkt_cd||d.fsdr_mrkt_cd = 'DD'
THEN '0'
ELSE to_char(ofc.CNCT_PSGR_CNT,'99')
END) AS NEW_BCNT
FROM OPS_FLT_CNCT ofc
, STN o
, STN d
WHERE ofc.CNCT_ORIG_ARPT_CD = o.STN_CD
AND ofc.CNCT_DEST_ARPT_CD = d.STN_CD
-- AND TRUNC(ofc.FLT_LCL_ORIG_DT) = trunc(to_date('22-MAY-2007','DD-MON-YYYY'))
AND ofc.CARR_IATA_CD = 'UA'
AND ofc.FLT_NBR = '1218'
AND ofc.ORIG_ARPT_CD = upper('DEN') AND ofc.DEST_ARPT_CD = upper('IAD') GROUP BY ofc.FLT_LCL_ORIG_DT
, ofc.CARR_IATA_CD
, ofc.FLT_NBR
, ofc.ORIG_ARPT_CD
, ofc.DEST_ARPT_CD;
And the output looks like this --
FLT_LCL_O CARR FLT_N ORI DES BOOKED_CNCT_PSGR_CNT BOOKED_INTL_CNCT_PSGR_CNT NEW_BCNT
22-MAY-07 UA 1218 DEN IAD 9 0 0
23-MAY-07 UA 1218 DEN IAD 1 0 0
24-MAY-07 UA 1218 DEN IAD 2 1 1
25-MAY-07 UA 1218 DEN IAD 1 0 0
Thanks in advance for the reply.
Regards.
Satyaki De.
2 ideas:
1. Inline function to perform the CASE functionality for you
2. Pipelined function to generate the entire dataset
Both will be slower than just using CASE in a query, but we're working around big constraints
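A third CASE-free option in Oracle is the classic DECODE, e.g. DECODE(o.fsdr_mrkt_cd || d.fsdr_mrkt_cd, 'DD', 0, ofc.CNCT_PSGR_CNT) inside the SUM. Another is boolean arithmetic, sketched here (SQLite via Python's sqlite3; table and column names invented; note that SQLite comparisons yield 0/1, which Oracle's do not - there you would need DECODE or a function):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE flt (orig_mkt TEXT, dest_mkt TEXT, cnt INT);
INSERT INTO flt VALUES ('D', 'D', 9), ('D', 'I', 2), ('I', 'I', 1);
""")

# Boolean arithmetic instead of CASE: the comparison yields 0 or 1,
# so domestic-to-domestic ('DD') rows contribute nothing to the sum.
rows = conn.execute("""
SELECT SUM(cnt)                                  AS booked,
       SUM(cnt * (orig_mkt || dest_mkt <> 'DD')) AS booked_intl
FROM flt
""").fetchone()
```

This keeps the whole computation in one aggregate pass, just like the original CASE version.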