Dynamic Rule based implementation in PL/SQL
Hi,
We are trying to implement a dynamic rule-based application in Oracle 9i. It's simple logic: we store expressions as CASE conditions, and actions separated by commas, as follows.
Rule: 'Age > 18 and Age <65'
True Action: 'Status = ''Valid'' , description = ''age in range'''
False Action: 'Status =''Invalid'', Description=''Age not in range'''
Where Age, Status and description are all columns of one table.
One way of implementing this is to fire the rule for each record in the table and then, based on true or false, apply the action as an update.
i.e
select (case when Age > 18 and Age < 65 then 1 else 0 end) age_rule from tableX
(the rule text is concatenated into the query dynamically; the above query will be in a cursor xcur)
Then we check:
if age_rule = 1 then
update tablex set Status = 'Valid', description = 'age in range' where id = xcur.id;
else
update tablex set Status = 'Invalid', description = 'Age not in range' where id = xcur.id;
end if;
This method results in very slow performance due to high I/O. We want to implement this with a collection-based method.
Any ideas on how to dynamically check rules and apply actions to a collection without hurting performance? (We have nearly 3 million rows and 80 rules to apply.)
Thanks in advance
Returning to your original question: first of all, there is a small flaw in the requirements, because if you apply all the rules to the same table/columns, the table will hold only the results of the last rule that was processed.
Suppose rule#1:
Rule: 'Age > 18 and Age <65'
True Action: 'Status = ''Valid'' , description = ''age in range'''
False Action: 'Status =''Invalid'', Description=''Age not in range'''
and Rule#2:
Rule: 'Name like ''A%'''
True Action: 'Status = ''Invalid'' , description = ''name begins with A'''
False Action: 'Status = ''Invalid'', Description = ''name does not begin with A'''
Then, after applying rule#1 and rule#2, the results of rule#1 will be lost, because the second rule modifies the results of the first.
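One common way round this overwrite problem (just a sketch with invented table and column names, not from the original post) is to record each rule's outcome in its own row rather than in shared columns of the data table:

```sql
-- Hypothetical results table: one row per (record, rule), so no rule
-- overwrites another rule's outcome.
create table rule_results (
  data_id     number,
  rule_id     number,
  status      varchar2(255),
  description varchar2(255),
  primary key (data_id, rule_id)
);
```

The true/false actions would then insert into this table instead of updating the source rows, and the per-rule results can be queried back independently.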
As for using collections instead of row-by-row processing, I think a better approach is to move that evaluating cursor inside an UPDATE statement; in my tests this considerably reduced the processed block count and the response time.
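If both actions of a rule always set the same columns, the evaluation can even be folded into a single UPDATE per rule; the following is only a sketch of that idea against the tables from the benchmark code in this post, with the action values hard-coded instead of parsed out of the action strings:

```sql
-- One full-table pass per rule: the CASE picks the true- or
-- false-action value for every row in a single statement.
execute immediate
  'update data set
     status      = case when '||rules.rule||' then ''Valid''
                        else ''Invalid'' end,
     description = case when '||rules.rule||' then ''Age in Range''
                        else ''Age not in Range'' end';
```

This halves the number of dynamic statements per rule, at the cost of requiring the true and false actions to be decomposable per column.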
Regarding the expression filter: even though you are not going to move to 10g, you can still test this feature and see how it is implemented, to get some ideas on how to better implement your own solution. There is a nice paper http://www-db.cs.wisc.edu/cidr2003/program/p27.pdf that describes the expression filter implementation.
Here is my example of two different methods for expression evaluation that I've benchmarked: the first is similar to your original example, and in the second the expression evaluation is moved inside an UPDATE statement.
-- first create two tables, rules and data.
drop table rules;
drop table data;
create table rules( id number not null primary key, rule varchar(255), true_action varchar(255), false_action varchar(255) );
create table data( id integer not null primary key, name varchar(255), age number, status varchar(255), description varchar(255) );
-- populate these tables with information.
insert into rules
select rownum id
, 'Age > '||least(a,b)||' and Age < '||greatest(a,b) rule
, 'Status = ''Valid'', description = ''Age in Range''' true_action
, 'Status = ''Invalid'', description = ''Age not in Range''' false_action
from (
select mod(abs(dbms_random.random),60)+10 a, mod(abs(dbms_random.random),60)+10 b
from all_objects
where rownum <= 2
);
insert into data
select rownum, object_name, mod(abs(dbms_random.random),60)+10 age, null, null
from all_objects;
commit;
-- this is method #1, evaluate rule against every record in the data and do the action
declare
eval number;
id number;
data_cursor sys_refcursor;
begin
execute immediate 'alter session set cursor_sharing=force';
for rules in ( select * from rules ) loop
open data_cursor for 'select case when '||rules.rule||' then 1 else 0 end eval, id from data';
loop
fetch data_cursor into eval, id;
exit when data_cursor%notfound;
if eval = 1 then
execute immediate 'update data set '||rules.true_action||' where id = :id' using id;
else
execute immediate 'update data set '||rules.false_action||' where id = :id' using id;
end if;
end loop;
close data_cursor; -- close before the next rule reopens the cursor
end loop;
end;
/
-- this is method #2: evaluate the rule against every record in the data, but do the action in the update, not in a select
begin
execute immediate 'alter session set cursor_sharing=force';
for rules in ( select * from rules ) loop
execute immediate 'update data set '||rules.true_action||' where id in (
select id
from (
select case when '||rules.rule||' then 1 else 0 end eval, id
from data )
where eval = 1 )';
execute immediate 'update data set '||rules.false_action||' where id in (
select id
from (
select case when '||rules.rule||' then 1 else 0 end eval, id
from data )
where eval = 0 )';
end loop;
end;
/
Here are SQL_TRACE results for method#1:
call count cpu elapsed disk query current rows
Parse 37 0.01 0.04 0 0 0 0
Execute 78862 16.60 17.50 0 187512 230896 78810
Fetch 78884 3.84 3.94 2 82887 1 78913
total 157783 20.46 21.49 2 270399 230897 157723
and this is results for method#2:
call count cpu elapsed disk query current rows
Parse 6 0.00 0.00 0 0 0 0
Execute 6 1.93 12.77 0 3488 170204 78806
Fetch 1 0.00 0.00 0 7 0 2
total 13 1.93 12.77 0 3495 170204 78808
You can compare these two methods using SQL_TRACE.
Similar Messages
-
Dynamic, rules based security
My organization has an application that needs a very fine-grained security model, which changes very often and is based on a rules mechanism (written in PL/SQL). Is there a way to combine a rule-based mechanism with the internal ACL mechanism of the iFS?
Hi Harvey_SO,
According to your description, you get the security ignored when using custom dynamic role-based security. Right?
In Analysis Services there are role-overlapping scenarios: if two roles are used to secure attributes in two different dimensions, and both apply to some users simultaneously, those users can end up with no security applied from either role. Please refer to the workarounds in the link below:
The Additive Design of SSAS Role Security
If you have any question, please feel free to ask.
Best Regards,
Simon Hou
TechNet Community Support -
we have an airline pricing system whose rules change often, and rather than re-designing tables and the client application code every time that happens, we would like to develop a rule-based system that stores the rules dynamically and doesn't require re-design and code modification.
any ideas ?
here is an example of the data that we would like to store
if seasonality is LOW
and passenger is CHILD
and carrier is TRW
and origination is COS
and destination is PIT
and ticket code is Y
then price is 908
seasonality for carrier TRW is LOW for travel between 01/15/06 and 04/15/06
seasonality for carrier JAR is LOW for travel between 02/01/06 and 05/01/06
age for carrier TRW is CHILD for passenger age between 2 and 12
age for carrier JAR is CHILD for passenger age between 6 and 14
.. and another price can be given with an entirely different set of rules
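To make the idea concrete, here is one hedged relational sketch of such a rule store (all table and column names are invented for illustration): each rule is a row of nullable condition columns, where NULL means "matches anything", and pricing becomes a set-based lookup.

```sql
-- Hypothetical rule store: NULL in a condition column means "any value".
create table price_rules (
  rule_id     number primary key,
  seasonality varchar2(10),
  passenger   varchar2(10),
  carrier     varchar2(3),
  origination varchar2(3),
  destination varchar2(3),
  ticket_code varchar2(2),
  price       number not null
);

-- Set-based lookup: every rule whose non-NULL conditions all match.
select rule_id, price
from price_rules r
where (r.seasonality is null or r.seasonality = :seasonality)
  and (r.passenger   is null or r.passenger   = :passenger)
  and (r.carrier     is null or r.carrier     = :carrier)
  and (r.origination is null or r.origination = :origination)
  and (r.destination is null or r.destination = :destination)
  and (r.ticket_code is null or r.ticket_code = :ticket_code);
```

Picking among several matching rules (e.g. the most specific one wins) would need an extra precedence column or a specificity score, which is left out of this sketch.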
The reason I ask in this forum is that I read a blog by someone who praises the set-based nature of the SQL language for developing exactly this kind of system, rather than trying to implement it on the client (for example using JRules).
What version of Oracle are you on? 10g introduced a rules engine into the database (Oracle Rules Manager and/or Oracle Expression Filter) which would seem to be right up your alley.
Justin -
Connect SharePoint to SQL Server Database Then Build Rules Based Returns System
Hello Guys,
I work for an ecommerce business. We sell a wide range of products to customers all around the world which are ordered from our websites and then dispatched to our customers from our warehouses.
I have been tasked with developing a computerised return system from the company because at the moment everything is done using paper forms.
We have all our customer, order and product data within SQL Server databases.
What I would like to know is...
1. Can we connect sharepoint online to a local sql server database
2. Could we then build searches within sharepoint to display data contained within these databases e.g. customer information etc
3. How is the data presented in sharepoint - is there a way to design how the data is displayed within sharepoint etc?
4. Can we then build a rules-based returns system within SharePoint? The on-screen workflow would need to vary according to data contained within the database, e.g. the weight of the product being returned, and also on fields input by the service agent, such as the reason for the return, what solution the customer would like, etc.
5. is it possible to build these workflows in such a way that they can be saved part way through then gone back to later
6. Can reports be build based on the returns that are being generated e.g. list of products most commonly returned
Sorry for all the questions, I am a bit of a SharePoint novice. I think it may be possible to do what we need, but I just wondered if the answer to any of the above questions is definitely a no, because if it is, that could mean it is not suitable.
Thanks
You could use a BCS connection
http://community.office365.com/en-us/b/office_365_community_blog/archive/2012/10/11/business-data-connectivity-services-in-office-365-sharepoint-online.aspx, this will allow you to edit data in your non SharePoint SQL DB, on premises, from Office 365 SharePoint.
Search will index the web applications you point it at, and the lists from the BCS will be part of those web apps, site collections, sites at some place and will get indexed.
You can create views on the data, that can sort of work like a search, but when you search on the site where the lists are the query will return results based on the BCS data.
These views can be based on criteria such as the weight of the product being returned and other fields.
The data is presented as a list.
You can make it read only or read-write based on SharePoint permissions on the list. The account used to create the connection can edit.
BCS is possible in on-premises SharePoint too;
here is a good read on it:
http://www.dotnetcurry.com/showarticle.aspx?ID=632
Stacy Simpkins | MCSE SharePoint | www.sharepointpapa.com -
Implementing roles and rules based authorisation with Azure AD
Hi all,
I would greatly appreciate some input on feasibility and patterns I should look at for a complex technical requirement that I am currently tasked with designing.
We have a system that comprises a web and mobile app. In the past we have implemented session based authentication through ADAM and authorisation through custom business rules contained within the applications. The authentication mechanism is in the process
of being migrated to Azure AD and authorisation is planned to be moved to Azure AD for our next release.
Existing authorisation within our web application is already complex. We have users that belong to different groups with a range of permissions such as read, write or admin. Additionally each user is granted access to N customers and also N locations within
each customer. We have a requirement that any number of combinations of customers and locations be supported. Users also need to have different permissions for each entity, i.e. read access to customer 1 location 2, write access to customer 4 and administer
customer 7. Currently these privileges are maintained within a relational database and enforced as part of each PageLoad(). Essentially this is a combination of roles and rules based authorisation.
We are struggling to represent this complex matrix structure within Azure AD and efficiently implement the authorisation decision in Azure AD. The driver for this technical requirement is to provide re-usability of the authorisation component to other (as
yet unidentified) applications.
Currently the best option we have come up with is implementing custom attributes for each class of permissions and storing within this 2048 bit field a bitmask that represents whether this permission is granted for a given location (which has a many to one
relationship with customer).
Any help or comment would be gratefully received,
Phil
Hi
When "Advance routing" is used for task assignment, the task service asserts the following fact types to the rules engine: Task, PreviousOutcome and TaskAction. These facts give all the required info about the task (like the outcome of the participant, the task stage, etc.).
Now, in the defined ruleset, we can have rules as per our requirement that extract info from the asserted fact types and assign the task to the required/next participant.
Also note that we write the advanced rules for exception cases only.
For example, let's say all participants have 2 possible outcomes [COMPLETE, RECHECK]. We have defined the ideal task routing flow as:
Participant A -> Participant B -> Participant C. This is the flow when every participant selects "COMPLETE".
Now suppose B selects the outcome "RECHECK"; then the task should move back to A. Only for this case do we need to write an advanced rule.
Please refer to the code sample at: http://download.oracle.com/technology/sample_code/hwf/workflow-106-IterativeDesign.zip
Also dev guide : refer to section 28.3.7.2 http://download.oracle.com/docs/cd/E14571_01/integration.1111/e10224/bp_hwfmodel.htm#BABBFEJJ
Thanks
Edited by: Kania on May 19, 2010 2:41 AM -
Partitioning on Oracle 8i (Rule Based vs. Cost Based)
At my current engagement, we are using Oracle Financials 11.0.3 on Oracle 8.0.6. The application uses rule-based optimizer. The client wants to implement Oracle partitioning. With this in mind, we are concerned about possible performance issues that the implementation of partitioning may cause since RBO does not recognize it.
We agree that the RBO will see a partitioned table the same as a non-partitioned one. In this scenario, where you gain the most is with backup/recoverability and the general maintenance of the partitioned table.
Nevertheless, we have a few questions:
When implementing partitions, will the optimizer choose to go with Cost base vs. Rule base for these partitioned tables?
Is it possible that the optimizer might get confused with this?
Could it degrade performance at the SQL level?
If this change from RBO to CBO does occur, the application could potentially perform poorly because of the way it has been written.
Please provide any feedback.
Thanks in advance.
If the CBO is invoked when accessing these tables, you may run into problems.
- You'll have to analyze your tables & ensure that the statistics are kept up to date.
- It's possible that any SQL statements which invoke the CBO rather than the RBO will have different performance characteristics. The SYSTEM data dictionary tables, for example, must use the RBO or their performance suffers dramatically. Most of the time, the CBO beats the RBO, but applications which have been heavily tuned with the RBO may have problems with the CBO.
- Check your init.ora to see what optimizer mode you're in. If you're set to CHOOSE, the CBO will be invoked whenever statistics are available on the table(s) involved. If you choose RULE, you'll only invoke the CBO when the RBO encounters situations it doesn't have rules for.
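For illustration, the checks suggested above could look like this (EMP is a hypothetical table name):

```sql
-- See which optimizer mode the instance is currently using.
select value from v$parameter where name = 'optimizer_mode';

-- Keep statistics current so CHOOSE resolves to the CBO:
-- ANALYZE on old releases, DBMS_STATS on later ones.
analyze table emp compute statistics;

begin
  dbms_stats.gather_table_stats(ownname => user, tabname => 'EMP');
end;
/
```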
Justin -
Hi all,
I am currently working on a project which requires implementing 500-600 rules in the system. Some of them are global rules and some are just module-level rules.
Is there any established pattern for rule-based systems?
Thanks in Advance
Rashmikant
Just 1 pattern?
The short answer: No.
The longer (though still short answer):
A rules engine is not the easiest thing to implement; 2 or 3 classes won't hack it if you're working with a dynamic rule base that is steered by input/output and state.
There are rules engines for sale and there are also a few open-source ones; look on the internet, use your best friend.
hi,
my database is 10.2.0.1... by default optimizer_mode=ALL_ROWS.
For some sessions I need the rule-based optimizer,
so can i use
alter session set optimizer_mode=rule;
will it affect that session only, or the entire database?
And the following also - I want to set them at session level:
ALTER SESSION SET "_HASH_JOIN_ENABLED" = FALSE;
ALTER SESSION SET "_OPTIMIZER_SORTMERGE_JOIN_ENABLED" = FALSE ;
ALTER SESSION SET "_OPTIMIZER_JOIN_SEL_SANITY_CHECK" = TRUE;
will those affect only the session or the entire database... please suggest
< CBO outperforms RBO ALWAYS! > I disagree - mildly
When I tune SQL, the first thing I try is a RULE hint, and in very simple databases the RBO still does a good job.
Of course, you should not use RULE hints in production (That's Oracle job).
When Oracle eBusiness suite migrated to the CBO, they placed gobs of RULE hints into their own SQL!!
Anyway, always adjust your CBO stats to replicate an RBO execution plan . . . .
specifically CAST() conversions from collections and pipelined functions.
Interesting. Have you tried dynamic sampling for that?
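The dynamic sampling idea can be tried with a statement-level hint; the function and collection type names below are made up for illustration:

```sql
-- Ask the optimizer to sample the row source at parse time instead of
-- assuming a default cardinality for the CAST()/pipelined collection.
select /*+ dynamic_sampling(t 4) */ *
from table(cast(my_collection_fn() as my_tab_type)) t;
```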
Hope this helps. . .
Don Burleson
Oracle Press author
Author of “Oracle Tuning: The Definitive Reference”
http://www.dba-oracle.com/bp/s_oracle_tuning_book.htm -
Hi,
Rule-based optimization is a deprecated feature in Oracle 10g. We are in the process of migrating from Oracle 9i to 10g. I have never heard of this rule-based optimization before. I have googled for it, but got confused by the results.
Can anybody shed some light on the below things...
Is this optimization done by Oracle, or do we as developers need to take care of the rules while writing SQL statements?
There is another thing called Cost Based Optimization...
Who instructs Oracle whether to use rule-based or cost-based optimization?
Thanks & Regards,
user569598
Hope the following explanation is helpful.
Whenever a statement is fired, Oracle goes through the following stages:
Parse -> Execute -> Fetch (fetch only for select statement).
During the parse, Oracle first performs syntactic checking (SELECT, FROM, WHERE, ORDER BY, GROUP BY, etc.) and then semantic checking (column names, table names, user permissions on the objects, etc.). Once these two stages pass, it has to decide whether to do a soft parse or a hard parse. If a similar cursor (statement) doesn't exist in the shared pool, Oracle goes for a hard parse, where the optimizer comes into the picture to generate the query plan.
Oracle then has to decide between the RBO and the CBO. This depends on the OPTIMIZER_MODE parameter value: if a RULE hint is used, the RBO will be used; if there are no statistics for the tables involved in the query, Oracle chooses the RBO (conditions apply). If statistics are available, or dynamic sampling is defined, then Oracle uses the CBO to prepare the optimal execution plan.
The RBO simply relies on a set of rules, whereas the CBO relies on statistical information.
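As a small illustration of the above (DEPT is a hypothetical table):

```sql
-- A RULE hint forces the rule-based optimizer for this one statement.
select /*+ RULE */ * from dept;

-- Once statistics exist (and OPTIMIZER_MODE is CHOOSE), the same query
-- is planned by the cost-based optimizer instead.
analyze table dept compute statistics;
select * from dept;
```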
Jaffar -
Document rule based classification
From the example in the Oracle Text developer's guide I tried to build rule-based document classification, using the code given below:
create or replace package classifier as
procedure this;
end;
show errors
create or replace package body classifier as
procedure this
is
v_document blob;
v_item number;
v_doc number;
begin
for doc in (select document_id, content from documents)
loop
v_document :=doc.content;
v_item:=0;
v_doc:=doc.document_id;
for c in (select category_id, category_name from docs_cats_rule_based_class
where matches(query,v_document)>0)
loop
v_item:=v_item +1;
insert into doc_cat_rule_based_class values (doc.document_id, category_id);
end loop;
end loop;
end this;
end;
show errors
exec classifier.this
this gives the following errors:
package classifier Compiled.
line 5: SQLPLUS Command Skipped: show errors
package body Compiled.
line 32: SQLPLUS Command Skipped: show errors
Error starting at line 33 in command:
exec classifier.this
Error report:
ORA-04063: package body "STARDOC.CLASSIFIER" has errors
ORA-06508: PL/SQL: could not find program unit being called: "STARDOC.CLASSIFIER"
ORA-06512: at line 1
I think I am missing some grant to the package. Please help!
What version of Oracle are you using? Did you create the required tables and index in the earlier steps? What did you run it from? It appears that you did not run it from SQL*Plus. Please see the following demonstration, which shows that it works fine on Oracle 10g when run from SQL*Plus with minimal privileges. I did not use any data.
SCOTT@10gXE> CREATE USER stardoc IDENTIFIED BY stardoc
2 /
User created.
SCOTT@10gXE> GRANT CONNECT, RESOURCE TO stardoc
2 /
Grant succeeded.
SCOTT@10gXE> CONNECT stardoc/stardoc
Connected.
STARDOC@10gXE>
STARDOC@10gXE> create table news_table
2 (tk number primary key not null,
3 title varchar2(1000),
4 text clob)
5 /
Table created.
STARDOC@10gXE> create table news_categories
2 (queryid number primary key not null,
3 category varchar2(100),
4 query varchar2(2000))
5 /
Table created.
STARDOC@10gXE> create table news_id_cat
2 (tk number,
3 category_id number)
4 /
Table created.
STARDOC@10gXE> create index news_cat_idx on news_categories (query)
2 indextype is ctxsys.ctxrule
3 /
Index created.
STARDOC@10gXE> create or replace package classifier
2 as
3 procedure this;
4 end classifier;
5 /
Package created.
STARDOC@10gXE> show errors
No errors.
STARDOC@10gXE> create or replace package body classifier
2 as
3 procedure this
4 is
5 v_document clob;
6 v_item number;
7 v_doc number;
8 begin
9 for doc in (select tk, text from news_table)
10 loop
11 v_document := doc.text;
12 v_item := 0;
13 v_doc := doc.tk;
14 for c in
15 (select queryid, category from news_categories
16 where matches (query, v_document) > 0)
17 loop
18 v_item := v_item + 1;
19 insert into news_id_cat values (doc.tk,c.queryid);
20 end loop;
21 end loop;
22 end this;
23 end classifier;
24 /
Package body created.
STARDOC@10gXE> show errors
No errors.
STARDOC@10gXE> exec classifier.this
PL/SQL procedure successfully completed.
STARDOC@10gXE> -
A dynamic table based on run-time created view object -- please help!
Hello!
I'm trying to create a dynamic table based on a run-time created view object. All goes OK, but the table binding component takes the first view/iterator state and doesn't reflect subsequent changes. Please take a look:
1. At run-time the view is replaced by a new read-only one based on a query in the application module:
getQueryView().remove();
createViewObjectFromQueryStmt("QueryView", statement);
2. Page definition file contains an iterator (using iterator or methodIterator - doesn't matter) binding and table, which binds to the iterator, like:
<methodIterator id="distributeQuery1Iter" Binds="distributeQuery1.result"
DataControl="QueryServiceDataControl" RangeSize="10"/>
<table id="distributeQuery11" IterBinding="distributeQuery1Iter"/>
3. The page code uses <af:table>. But if I use the table binding (which is right) like this:
<af:table var="row" value="#{bindings.distributeQuery11.collectionModel}">
<af:forEach items="#{bindings.distributeQuery11.attributeDefs}" var="def">
the table never changes (i.e. it still shows the first view instance).
When I tried to use the iterator binding directly (which is bad and cannot provide all the needed features, unlike the CollectionModel from the table binding) I saw that the table works!
(Code is something like:
<af:table var="row" value="#{bindings.myIterator.allRowsInRange}">
<af:forEach items="#{bindings.myIterator.attributeDefs}" var="def">
Why does the table binding not reflect changes in the iterator? Or should I use a different approach?
Thanks in advance!
Ilya.
I got it to work! I used a hybrid approach comprised of some of your code and some of Steve Muench's AccessAppModuleInBackingBean example.
In the setBindings method, I execute an app module method that redefines the query, then I used your code to delete and recreate bindings and iterator:
public void setBindingContainer(DCBindingContainer bc) {
this.bindingContainer = bc;
rebuildVO();
}
The rebuildVO() method looks like the code you provided in your example:
private void rebuildVO() {
DCDataControl dc;
DispatchAppModule dApp;
DCBindingContainer bc;
DCIteratorBinding it;
OperationBinding operationBinding;
ViewObject vo;
DCControlBinding cb;
try {
bc = getBindingContainer();
dc = bc.findDataControl(DATACONTROL);
dApp = (DispatchAppModule)dc.getDataProvider();
// Execute App Module Method to rebuild VO based upon new SQL Statement.
dApp.setDispatchViewSQL();
vo = dApp.findViewObject(DYNAMIC_VIEW_NAME);
it = bc.findIteratorBinding(DYNAMIC_VO_ITER_NAME);
it.bindRowSetIterator(vo, true);
// logger.info("Remove value binding...");
cb = bc.findCtrlBinding(DYNAMIC_VIEW_NAME);
cb.getDCIteratorBinding().removeValueBinding(cb);
bc.removeControlBinding(cb);
// logger.info("Creating new value binding...");
FacesCtrlRangeBinding dynamicRangeBinding =
new FacesCtrlRangeBinding(null,
bc.findIteratorBinding(DYNAMIC_VO_ITER_NAME), null);
// logger.info("Add control binding...");
bc.addControlBinding(DYNAMIC_VIEW_NAME, dynamicRangeBinding);
} catch (Exception e) {
e.printStackTrace();
}
}
And my App Module method that redefines the view object looks like this:
public void setDispatchViewSQL() {
String SQL =
"begin ? := PK_BUsiNESS.F_GETDISPATCHVIEWSQL();end;";
CallableStatement st = null;
String ViewSQL = null;
try {
st = getDBTransaction().createCallableStatement(SQL,
DBTransaction.DEFAULT);
// Register the first bind parameter as our return value of type LONGVARCHAR
st.registerOutParameter(1, OracleTypes.LONGVARCHAR);
st.execute();
ViewSQL = ((OracleCallableStatement) st).getString(1);
findViewObject(DYNAMIC_VO_NAME).remove();
ViewObject vo = createViewObjectFromQueryStmt(DYNAMIC_VO_NAME, ViewSQL);
vo.executeQuery();
} catch (SQLException s) {
throw new JboException(s);
} finally {
try {
st.close();
} catch (SQLException s) {
s.printStackTrace();
}
}
}
When I run it I get my desired results. One thing I don't quite understand is why, when the page is first rendered, it shows the last set of records rather than the first. Now I have to figure out how to put navigation URLs in each of the table cells.
Thanks for your help; I would not have gotten this far without it,
Jeff -
Improving performance for a Rule Based Optimizer DB
Hi,
I am looking for information on improving the current performance of an ancient 35GB Oracle 7.3.4 database using the RULE-based optimizer mode. It has a 160 MB SGA, and the physical memory on the system is 512MB RAM.
As of now, all the major tasks which take time, are run after peak hours so that the 130 user sessions are not affected significantly.
But recently I am told some procedures take too long to execute (a procedure that truncates tables and re-populates data into them), and I do see 54% of the pie chart for WAITS is for "sequential reads", followed by "scattered reads" at 36%. There are a couple of large tables of around 4GB in this DB.
Autotrace doesn't help me much in terms of getting an explain plan for the slow queries, since the COST option doesn't show up, and I am trying to find ways of improving the performance of the DB in general.
Apart from the "redo log space requests" which I run into frequently (which btw is something I am trying to resolve... thanks to some of you) I don't see much info on exactly how to proceed.
Is there any info that I can look towards in terms of improving performance on this rule-based optimizer DB? Or is identifying the top SQLs in terms of buffer gets the only way to tune?
Thank you for any suggestions provided.
Thanks Hemant.
This is for a 15-minute interval under moderate load early this morning.
Statistic Total Per Transact Per Logon Per Second
CR blocks created 275 .95 5.19 .29
Current blocks converted fo 10 .03 .19 .01
DBWR buffers scanned 74600 258.13 1407.55 78.44
DBWR free buffers found 74251 256.92 1400.96 78.08
DBWR lru scans 607 2.1 11.45 .64
DBWR make free requests 607 2.1 11.45 .64
DBWR summed scan depth 74600 258.13 1407.55 78.44
DBWR timeouts 273 .94 5.15 .29
OS Integral shared text siz 1362952204 4716097.59 25716079.32 1433177.92
OS Integral unshared data s 308759380 1068371.56 5825648.68 324668.12
OS Involuntary context swit 310493 1074.37 5858.36 326.49
OS Maximum resident set siz 339968 1176.36 6414.49 357.48
OS Page faults 3434 11.88 64.79 3.61
OS Page reclaims 6272 21.7 118.34 6.6
OS System time used 19157 66.29 361.45 20.14
OS User time used 195036 674.87 3679.92 205.09
OS Voluntary context switch 21586 74.69 407.28 22.7
SQL*Net roundtrips to/from 16250 56.23 306.6 17.09
SQL*Net roundtrips to/from 424 1.47 8 .45
background timeouts 646 2.24 12.19 .68
bytes received via SQL*Net 814224 2817.38 15362.72 856.18
bytes received via SQL*Net 24470 84.67 461.7 25.73
bytes sent via SQL*Net to c 832836 2881.79 15713.89 875.75
bytes sent via SQL*Net to d 42713 147.8 805.91 44.91
calls to get snapshot scn: 17103 59.18 322.7 17.98
calls to kcmgas 381 1.32 7.19 .4
calls to kcmgcs 228 .79 4.3 .24
calls to kcmgrs 20845 72.13 393.3 21.92
cleanouts and rollbacks - c 86 .3 1.62 .09
cleanouts only - consistent 40 .14 .75 .04
cluster key scan block gets 1051 3.64 19.83 1.11
cluster key scans 376 1.3 7.09 .4
commit cleanout failures: c 18 .06 .34 .02
commit cleanout number succ 2406 8.33 45.4 2.53
consistent changes 588 2.03 11.09 .62
consistent gets 929408 3215.94 17536 977.3
cursor authentications 1746 6.04 32.94 1.84
data blocks consistent read 588 2.03 11.09 .62
db block changes 20613 71.33 388.92 21.68
db block gets 40646 140.64 766.91 42.74
deferred (CURRENT) block cl 668 2.31 12.6 .7
dirty buffers inspected 3 .01 .06 0
enqueue conversions 424 1.47 8 .45
enqueue releases 1981 6.85 37.38 2.08
enqueue requests 1977 6.84 37.3 2.08
execute count 20691 71.6 390.4 21.76
free buffer inspected 2264 7.83 42.72 2.38
free buffer requested 490899 1698.61 9262.25 516.19
immediate (CR) block cleano 126 .44 2.38 .13
immediate (CURRENT) block c 658 2.28 12.42 .69
logons cumulative 53 .18 1 .06
logons current 1 0 .02 0
messages received 963 3.33 18.17 1.01
messages sent 963 3.33 18.17 1.01
no work - consistent read g 905734 3134.03 17089.32 952.4
opened cursors cumulative 2701 9.35 50.96 2.84
opened cursors current 147 .51 2.77 .15
parse count 2733 9.46 51.57 2.87
physical reads 490258 1696.39 9250.15 515.52
physical writes 2265 7.84 42.74 2.38
recursive calls 37296 129.05 703.7 39.22
redo blocks written 5222 18.07 98.53 5.49
redo entries 10575 36.59 199.53 11.12
redo size 2498156 8644.14 47135.02 2626.87
redo small copies 10575 36.59 199.53 11.12
redo synch writes 238 .82 4.49 .25
redo wastage 104974 363.23 1980.64 110.38
redo writes 422 1.46 7.96 .44
rollback changes - undo rec 1 0 .02 0
rollbacks only - consistent 200 .69 3.77 .21
session logical reads 969453 3354.51 18291.57 1019.4
session pga memory 35597936 123176.25 671659.17 37432.11
session pga memory max 35579576 123112.72 671312.75 37412.8
session uga memory 2729196 9443.58 51494.26 2869.82
session uga memory max 20580712 71213.54 388315.32 21641.13
sorts (memory) 1091 3.78 20.58 1.15
sorts (rows) 12249 42.38 231.11 12.88
table fetch by rowid 57246 198.08 1080.11 60.2
table fetch continued row 111 .38 2.09 .12
table scan blocks gotten 763421 2641.6 14404.17 802.76
table scan rows gotten 13740187 47543.9 259248.81 14448.15
table scans (long tables) 902 3.12 17.02 .95
table scans (short tables) 4614 15.97 87.06 4.85
total number commit cleanou 2489 8.61 46.96 2.62
transaction rollbacks 1 0 .02 0
user calls 15266 52.82 288.04 16.05
user commits 289 1 5.45 .3
user rollbacks 23 .08 .43 .02
write requests 331 1.15 6.25 .35
Wait Events:
Event Name Count Total Time Avg Time
SQL*Net break/reset to client 7 0 0
SQL*Net message from client 16383 0 0
SQL*Net message from dblink 424 0 0
SQL*Net message to client 16380 0 0
SQL*Net message to dblink 424 0 0
SQL*Net more data from client 1 0 0
SQL*Net more data to client 24 0 0
buffer busy waits 169 0 0
control file sequential read 55 0 0
db file scattered read 74788 0 0
db file sequential read 176241 0 0
latch free 6134 0 0
log file sync 225 0 0
rdbms ipc message 10 0 0
write complete waits 4 0 0
I did enable timed statistics for the session but don't know why the times are all 0's. Since I can't bounce the instance until the weekend, I can't enable the parameter in init.ora either.
Hello,
We wish to implement an ATP check using Enterprise Services.
Details:
Environment: SAP ECC 6.0 with Enhancement Package 3 / SCM 5.0
Enterprise service used: /SAPAPO/SDM_PARCRTRC : ProductAvailabilityRequirementCreateRequestConfirmation
We were able to carry out a product check using the service. However, we are unable to carry out a rule-based ATP check with the same service.
We have carried out the entire configuration as per SAP's building block configuration guide for Global ATP and SAP Note 1127895.
For RBA (rule-based ATP check), we get the expected results when we create a sales order from SAP R/3 (transaction VA01); however, ATP simulation in APO and the enterprise service do not give the expected results. When we carry out ATP simulation in APO or via the enterprise service, the results are the same as for a product check, not RBA, i.e. they respect only the requested product/location stock and do not propose an alternate product or location in case of shortages.
Please share any experience with fixing this issue.
Mangesh A. Kulkarni
Hi Mangesh,
Check these links; I'm not completely sure, but they may help you:
https://www.sdn.sap.com/irj/scn/wiki?path=/display/erplo/availability%252bchecking%252bin%252bsap
Re: ATP confirmation in CRM
https://www.sdn.sap.com/irj/scn/wiki?path=/display/scm/rba
Regards
Abhishek -
Rules-based GATP with sales BOM
Hi,
with reference to the following thread, which asks the same question but, although in "answered" status, actually contains no answer to the final question...
GATP Question
We are investigating the possibility of using GATP with rules-based ATP to propose a sourcing plant. The result of this is of course a subitem in the sales order.
My question is, is there any restriction to using the sales BOM with rules-based ATP? We use the sales BOM currently to insert the packaging items into the sales order. From a previous project, I have in the back of my mind that there was a restriction in this respect, and that with the subitem from the GATP result, it was not possible to explode the sales BOM.
I cannot find what I recall to be the explicit SAP-help statement in this respect. Anybody with experience in this?
Regards,
Douglas
The current situation is as follows:
We have the saleable product A in plants X and Y, with a sales BOM in each of the plants, for the packaging items B and C. When product A is entered in the sales order in, say, plant X, the sales BOM is exploded and items B and C are inserted into the sales order. ATP is done in R/3, and if there is no availability in X, then the plant can be manually changed to Y, but each of the subitems from the sales BOM must also be manually changed to Y.
Desired to be situation:
We implement GATP, with a rule to substitute plant X with plant Y in the event that plant X has no stock. So the desired system response is: enter item A in the sales order with plant X; GATP substitutes plant X with plant Y and generates a subitem for product A in plant Y. Now the sales BOM is exploded for the new subitem, generating the further subitems for B and C, also in plant Y.
Is this possible? -
How to do a dynamic select based on runtime values?
I want to write a select function that performs a query based on its parameters, e.g.:
CREATE OR REPLACE FUNCTION myfunction
(tableName VARCHAR2, pkName VARCHAR2, pkValue VARCHAR2, requestString VARCHAR2)
RETURN VARCHAR2 AS
BEGIN
select requestString from tableName where pkName=pkValue;
RETURN NULL;
END;
Calling myfunction('users', 'user_id', '100', 'user_name') should select 'user_name' from table 'users' where 'user_id' = '100'.
This would save a lot of coding, but it doesn't compile. How can I make it work?
Thanks.
While this may save code, if used frequently it will be as inefficient as all [expletive deleted]. The danger is that it would be used even for repeatable statements.
This mode of operation ensures that every statement [calling the function] needs to be reparsed, which is extremely expensive in Oracle (in CPU cycles, recursive SQL and shared pool memory).
Such reparsing is rarely a good thing for the environment ... it could easily lead to buying more CPU (a bigger box) and therefore adding more Oracle licenses ... which could quickly exceed the typical developer's salary.
However, if you really, really want to do this, look up EXECUTE IMMEDIATE in the PL/SQL manuals.
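For what it's worth, a minimal sketch of the EXECUTE IMMEDIATE approach, keeping the asker's signature. Note that identifiers (table and column names) cannot be bind variables, so they are concatenated into the statement text; only the key value is bound. In real code the identifier parameters should be validated (e.g. with DBMS_ASSERT in later releases) to prevent SQL injection, and every distinct statement text incurs the hard-parse cost warned about above:

```sql
CREATE OR REPLACE FUNCTION myfunction (
  tableName     IN VARCHAR2,
  pkName        IN VARCHAR2,
  pkValue       IN VARCHAR2,
  requestString IN VARCHAR2
) RETURN VARCHAR2
AS
  v_result VARCHAR2(4000);
BEGIN
  -- Identifiers are concatenated; the PK value is bound via :1.
  EXECUTE IMMEDIATE
    'SELECT ' || requestString ||
    ' FROM '  || tableName    ||
    ' WHERE ' || pkName || ' = :1'
    INTO v_result
    USING pkValue;
  RETURN v_result;
END myfunction;
/
```

It can then be called as SELECT myfunction('users', 'user_id', '100', 'user_name') FROM dual; this sketch assumes the selected column is convertible to VARCHAR2 and that the lookup returns exactly one row (otherwise NO_DATA_FOUND or TOO_MANY_ROWS is raised).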