WMS dropping rules execution time.

Hi Community!
We're facing a problem in our OEBS 12.1.3 production environment with the execution time of dropping rules.
Execution can take a long time (10-15 minutes) when started from the standard interface by a warehouse worker, yet the same query completes in a few seconds in sqlplus.
I'd be very grateful if someone could help me find the source of the problem.
Kind regards.

Well, these rules are not unique; most of them are executed repeatedly for various entities. Taken as a whole, it is a big budget calculation model.
It surely can and must be optimized, but that will take some time (I only recently took over administration of this outsource-developed Planning system).
But the question now is not about the number of BRs, it is about the execution delay.
I tried to run a single rule the same way, and got _18 sec in CmdLineLauncher vs 1 sec in EAS Console_.
I just can't find the reason for the delay...
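Since the same statement runs fast in sqlplus, a first diagnostic step is to check whether the two sessions get different execution plans; bind peeking or different optimizer/NLS settings in the application session are common causes of exactly this GUI-vs-sqlplus gap. A minimal sketch of that check; the text filter below is only a placeholder for a fragment of the actual rule query:

-- Locate the statement as run by the interface and by sqlplus.
SELECT sql_id, child_number, plan_hash_value,
       executions, ROUND(elapsed_time / 1e6, 1) AS elapsed_sec
FROM   v$sql
WHERE  sql_text LIKE '%WMS%rule%';   -- placeholder filter

-- Compare the plans; a NULL child number lists all child cursors.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));

If the interface session and the sqlplus session show different plan hash values, the delay is an optimizer/plan issue rather than something in WMS itself.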

Similar Messages

  • Changing Business rule execution time

    Hi,
    I'm working on EPMA 11.1.1.3. When I launch a BR on a data form, after a certain time the data form opens in a new window. The logs show that the BR launch execution time has been exceeded and the BR is now running in the background.
    Can you gurus tell me where I can change the default BR execution time?
    Thanks in advance
    Edited by: kailash on Sep 14, 2011 4:15 PM

    Have a read of - http://download.oracle.com/docs/cd/E12825_01/epm.111/hp_admin/ch02s07s06.html
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Execution time of Business Rule

    Hi,
    My rule takes more than a day to run. How can I watch its exact execution time? I can't see it in the job console, which shows the business rule as still processing, and I have also checked the HSP_JOB_STATUS table; it likewise shows it as processing.
    Is there another way to see the exact execution time of a business rule?
    Thanks and Regards.

    To see the exact execution time, check the HBRLaunch.log file in Planning.
    If you want to track the execution time of different parts of your calculation script/business rule, look at the Essbase application log in EAS.
    To see a log specific to your business rule, use MaxL and spool the output to a text file; this will give you only the entries related to your business rule, along with the execution time.
    Thanks
    YSP

  • How to know the execution time of rule in Calculation Manager

    Hi All,
    How do we find out the execution time of a rule in Calculation Manager?
    Regards
    Vikram

    At this point there is no way to see the execution time of a rule in Calculation Manager. If you are working with Planning rules, I believe Planning displays the execution time in its job console.
    -SR

  • How to improve the execution time of my VI?

    My VI does data processing for hundreds of files and takes more than 20 minutes to complete. The setup is: first I use the directory LIST function to list all the files in a directory into a string array. Then I index this string array into a FOR loop, in which each file is opened one at a time, and some other subVIs are called to do data analysis. Is there a way to improve my execution time? Maybe loading all files into memory at once? It would also be nice to know which section of my VI takes the longest. Thanks for any help.

    Bryan,
    If "read from spreadsheet file" is the main time hog, consider dropping it! It is a high-level, very multipurpose VI and thus carries a lot of baggage around with it. (you can double-click it and look at the "guts" )
    If the files come from a just-executed "list files", you can assume the files all exist and you want to read them in one single swoop. All that extra detailed error checking for valid filenames is not needed, and you never want it to e.g. pop up a file dialog if a file goes missing, but simply skip it silently. If open generates an error, just skip to the next in line. Case closed.
    I would do a streamlined low-level "open->read->close" for each, and do the "spreadsheet string to array" in your own code, optimized to the exact format of your files. For example, notice that "read from spreadsheet file" converts everything to SGL, a waste of CPU if you later need to convert it to DBL for some signal processing anyway.
    Anything involving formatted text is not very efficient. Consider a direct binary file format for your data files, it will read MUCH faster and take up less disk space.
    LabVIEW Champion . Do more with less code and in less time .

  • How to find out the execution time of a sql inside a function

    Hi All,
    I am writing a function with a single IN parameter, in which I will pass a SQL SELECT statement. I want the function to return the exact execution time of that SQL statement.
    CREATE OR REPLACE FUNCTION function_name (p_sql IN VARCHAR2)
    RETURN NUMBER
    IS
      exec_time NUMBER;
    BEGIN
      -- Calculate the execution time for the incoming SQL statement.
      RETURN exec_time;
    END function_name;
    /

    Please note that wrapping query in a "SELECT COUNT(*) FROM (<query>)" doesn't necessarily reflect the execution time of the stand-alone query because the optimizer is smart and might choose a completely different execution plan for that query.
    A simple test case shows the potential difference of work performed by the database:
    Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
    Session altered.
    SQL>
    SQL> drop table count_test purge;
    Table dropped.
    Elapsed: 00:00:00.17
    SQL>
    SQL> create table count_test as select * from all_objects;
    Table created.
    Elapsed: 00:00:02.56
    SQL>
    SQL> alter table count_test add constraint pk_count_test primary key (object_id)
    Table altered.
    Elapsed: 00:00:00.04
    SQL>
    SQL> exec dbms_stats.gather_table_stats(ownname=>null, tabname=>'COUNT_TEST')
    PL/SQL procedure successfully completed.
    Elapsed: 00:00:00.29
    SQL>
    SQL> set autotrace traceonly
    SQL>
    SQL> select * from count_test;
    5326 rows selected.
    Elapsed: 00:00:00.10
    Execution Plan
    Plan hash value: 3690877688
    | Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |            |  5326 |   431K|    23   (5)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| COUNT_TEST |  5326 |   431K|    23   (5)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
            419  consistent gets
              0  physical reads
              0  redo size
         242637  bytes sent via SQL*Net to client
           4285  bytes received via SQL*Net from client
            357  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
           5326  rows processed
    SQL>
    SQL> select count(*) from (select * from count_test);
    Elapsed: 00:00:00.00
    Execution Plan
    Plan hash value: 572193338
    | Id  | Operation             | Name          | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT      |               |     1 |     5   (0)| 00:00:01 |
    |   1 |  SORT AGGREGATE       |               |     1 |            |          |
    |   2 |   INDEX FAST FULL SCAN| PK_COUNT_TEST |  5326 |     5   (0)| 00:00:01 |
    Statistics
              1  recursive calls
              0  db block gets
             16  consistent gets
              0  physical reads
              0  redo size
            412  bytes sent via SQL*Net to client
            380  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL>
    As you can see, the number of blocks processed (consistent gets) is quite different. You need to actually fetch all records, e.g. using a PL/SQL block on the server, to find out how long it takes to process the query; but that's not so easy if you want an arbitrary query string as input.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle:
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/
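    For reference, a minimal sketch of the fetch-everything approach Randolf describes, using DBMS_SQL so an arbitrary query string can be accepted. The function name time_query is hypothetical, timing is via DBMS_UTILITY.GET_TIME (hundredths of a second), and error handling is omitted:
    CREATE OR REPLACE FUNCTION time_query (p_sql IN VARCHAR2)
    RETURN NUMBER
    IS
      l_cur   INTEGER := DBMS_SQL.OPEN_CURSOR;
      l_cols  DBMS_SQL.DESC_TAB;
      l_ncols INTEGER;
      l_val   VARCHAR2(4000);
      l_dummy INTEGER;
      l_start NUMBER;
    BEGIN
      l_start := DBMS_UTILITY.GET_TIME;
      DBMS_SQL.PARSE(l_cur, p_sql, DBMS_SQL.NATIVE);
      DBMS_SQL.DESCRIBE_COLUMNS(l_cur, l_ncols, l_cols);
      -- Define every column as VARCHAR2: good enough for timing,
      -- though this sketch will not handle LONG/LOB columns.
      FOR i IN 1 .. l_ncols LOOP
        DBMS_SQL.DEFINE_COLUMN(l_cur, i, l_val, 4000);
      END LOOP;
      l_dummy := DBMS_SQL.EXECUTE(l_cur);
      -- Fetch and discard all rows; only the elapsed time matters here.
      WHILE DBMS_SQL.FETCH_ROWS(l_cur) > 0 LOOP
        NULL;
      END LOOP;
      DBMS_SQL.CLOSE_CURSOR(l_cur);
      RETURN (DBMS_UTILITY.GET_TIME - l_start) / 100;
    END time_query;
    /
    Note that this measures server-side processing only; the SQL*Net round trips visible in the autotrace statistics above are not included.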

  • Execution Time Issue

    Help Please!!!
    I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
    Collects analog data from a cDAQ chassis with a 9205 at 5kHz
    Data is collected in 100ms chunks
    Some of the data is saved directly to a TDMS file while the rest is averaged for a single data point. That single data point is saved to disk in a text file every 200ms.
    Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal operation execution times are on the order of 1ms or less. This issue will happen randomly during operation. It's usually many seconds between times that this occurs and it doesn't seem to have any pattern to when the event happens.
    Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that will show the troubling execution time. All the other timing checks show 0ms every time this issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
    Where else can I look for where the time went? It doesn't seem to make sense.
    Thanks for reading!
    Tim
    Attachments:
    Screen Shot 2013-09-06 at 9.32.46 AM.png 87 KB

    You should probably increase how much data you write with a single Write to Text File.  Move the Write to Text File out of the FOR loop.  Just have the data to be written autoindex to create an array of strings.  The Write to Text File will accept the array of strings directly, writing a single line for each element in the array.
    Another idea is to use a separate loop (yes, another queue as well) for writing the file.  Put the Dequeue Element inside another WHILE loop.  On the first iteration of this inner loop, set the timeout to something normal, or -1 for wait-forever; any further iteration should use a timeout of 0.  You do this with a shift register.  Autoindex the read strings out of the loop; this array goes straight into the Write to Text File.  This way you can quickly catch up when a file write takes a long time.
    NOTE:  This is just a very quick example I put together. It is far from a complete idea, but it shows the general idea I was having with reading the queue.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    Write all data on queue.png 16 KB

  • Execution Time

    HI All,
    We have created a mapping. It contains only one expression, in which we do a TO_CHAR conversion and trim the data.
    The target table contains 99 columns, one of which is unique.
    The source contains 300,000+ (3 lakh+) records.
    For that reason we created two mappings: one for the initial load and another to load the last 30 days of data.
    We created a process flow to load the last 30 days of data, based on the filter (Updatedate >= (sysdate - 30)), and its execution time is 3+ hours (12,000+ records).
    We tried reducing it to the last 7 days of data; it still takes 3+ hours to load (3,000+ records).
    How can we reduce the execution time?
    Regards,

    Try indexing the Updatedate column if the % of data that you're retrieving is small compared to the total in the table.
    Configure the mapping to run SET BASED.
    Configure the mapping so that the DEFAULT AUDIT LEVEL is ERROR DETAILS or NONE.
    Drop/disable indexes on target and rebuild afterwards.
    Drop/disable/novalidate FKs on target and reapply afterwards.
    Cheers
    Si
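    A hedged sketch of the first suggestion above; the table and column names are hypothetical stand-ins taken from the post:
    -- Hypothetical names; supports the Updatedate >= (sysdate - 30) filter.
    CREATE INDEX src_updatedate_ix ON source_table (updatedate);
    -- Refresh statistics so the optimizer can judge the filter's selectivity.
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'SOURCE_TABLE');
    END;
    /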

  • Oracle Execution Time for function inside view

    Hi Guys,
    I would like to ask: if I call a function inside a view, how does it behave in terms of execution time and performance?
    For example, I have a view like this:
    create or replace view CUST.CUST_VIEW as
    select a.nice, getCustomDisplay(a.name, a.pin, a.dos, b.master_key) as custom
    from CUST.customer a, CUST.master b
    where a.idno = b.main_id_no;
    And the function looks like this (pseudocode):
    create or replace function getCustomDisplay(a varchar2, b varchar2, c varchar2, d varchar2)
    return varchar2
    is
    begin
      for rec in (select * from CUST.MAPPING_MATRIX order by idno asc) loop
        -- logic goes here to determine the result returned from the matrix
        null;
      end loop;
      return null;
    end;
    My questions are:
    1. If I do select * from CUST.CUST_VIEW (returning 1000 records, for example), the function getCustomDisplay will be executed 1000 times as well, right? (That means select * from CUST.MAPPING_MATRIX order by idno asc will also be executed 1000 times.)
    2. If I do select * from CUST.CUST_VIEW where rownum <= 20, how many times will the getCustomDisplay() function be executed?
    The reason I ask is that we recently saw a few million executions per day in the AWR report for this query:
    "select * from CUST.MAPPING_MATRIX order by idno asc;"
    But when I investigated and put a logger wherever getCustomDisplay is called, the query above (as mentioned in item 2) was only executed as many times as there were records returned by the view plus the where condition.
    3. Will it affect performance if my view returns a lot of records? And is there any way to improve it?
    Thanks
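    Regarding question 1, one standard way to cut down the per-row calls, independent of the package approach discussed below, is scalar subquery caching: wrapping the call in a scalar subquery lets Oracle cache results for repeated argument combinations. A sketch against the view above; the benefit depends on how often the (name, pin, dos, master_key) values repeat:
    create or replace view CUST.CUST_VIEW as
    select a.nice,
           -- the scalar subquery lets Oracle cache the function result for
           -- repeated argument combinations instead of calling it once per row
           (select getCustomDisplay(a.name, a.pin, a.dos, b.master_key) from dual) as custom
    from CUST.customer a, CUST.master b
    where a.idno = b.main_id_no;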

    Hi
    I have another solution that seems to work for reducing the number of executions, but do you think it is scalable and feasible?
    CREATE OR REPLACE package body ACER.TYPE_CAT_PASS_UTIL_TEST as
    /* Private package data */
    TYPE g_rec IS RECORD (
    id_no               VARCHAR2 (4),
    type_pass            VARCHAR2 (3),
    scheme_ind           VARCHAR2 (5),
    cat_pass             VARCHAR2 (2),
    entrepass            VARCHAR2 (2),
    display_type_pass        VARCHAR2 (15),
    display_cat_pass         VARCHAR2 (5),
    display_type_pass_desc    VARCHAR2 (80),
    rule_id                  VARCHAR2 (5)
    );
    TYPE g_tab_type IS TABLE OF g_rec INDEX BY BINARY_INTEGER;
    g_tab   g_tab_type;
    i       BINARY_INTEGER;
    procedure initializeTypePassMatrix(test  IN varchar2) as
    begin
    if(g_tab.COUNT < 1)then
    FOR appln_rec in (
    SELECT tb_type_cat_pass_matrix.id_no,
    tb_type_cat_pass_matrix.type_pass,
    tb_type_cat_pass_matrix.scheme_ind,
    tb_type_cat_pass_matrix.cat_pass,
    tb_type_cat_pass_matrix.entrepass,
    tb_type_cat_pass_matrix.display_type_pass,
    tb_type_cat_pass_matrix.display_cat_pass,
    tb_type_cat_pass_matrix.display_type_pass_desc,
    tb_type_cat_pass_matrix.rule_id
    FROM tb_type_cat_pass_matrix ORDER BY id_no asc)
    LOOP
    dbms_output.put_line('g_tab.COUNT before insert: ' || g_tab.COUNT);
    i := g_tab.COUNT + 1;
    g_tab (i).id_no         := appln_rec.id_no;
    g_tab (i).type_pass         := appln_rec.type_pass;
    g_tab (i).scheme_ind        := appln_rec.scheme_ind;
    g_tab (i).cat_pass          := appln_rec.cat_pass;
    g_tab (i).entrepass        := appln_rec.entrepass;
    g_tab (i).display_type_pass     := appln_rec.display_type_pass;
    g_tab (i).display_cat_pass     := appln_rec.display_cat_pass;
    g_tab (i).display_type_pass_desc:= appln_rec.display_type_pass_desc;
    g_tab (i).rule_id         := appln_rec.rule_id;
    DBMS_OUTPUT.put_line ('g_tab.count after insert: ' || g_tab.COUNT);
    END LOOP;
    else
    DBMS_OUTPUT.put_line ('g_tab>=1, no need to initialize');
    end if;
    exception
    when others then
    dbms_output.put_line('error happen'||DBMS_UTILITY.format_error_backtrace);
    Logger.ERROR('TYPE_CAT_PASS_UTIL.initializeTypePassMatrix',SQLCODE,SQLERRM || ' ' ||DBMS_UTILITY.format_error_backtrace,'SYSTEM');
    end initializeTypePassMatrix;
    procedure populateTypeCatPassFullDesc(typePass  IN varchar2, schemeInd IN varchar2,catPass IN varchar2,entrePass IN varchar2, displayTypePass IN OUT varchar2,displayTypePassDesc IN OUT varchar2, displayCatPass IN OUT varchar2 )is
    v_displayTypePass varchar2(15) :='-';
    v_displayTypePassDesc varchar2(100) :='-';
    v_displayCatPass   varchar2 (2):='-';
    v_type_pass  varchar2(3)  := '';
    v_scheme_ind  varchar2(5) := '';
    v_cat_pass  varchar2(2);
    v_entrepass  varchar2(2);
    v_flag_valid_1 boolean:=false;
    v_flag_valid_2 boolean:=false;
    v_flag_valid_3 boolean:=false;
    v_flag_valid_4 boolean:=false;
    v_appln_rec g_rec;
    begin
    dbms_output.put_line('line 1');
    initializeTypePassMatrix('test');
    FOR nomor in g_tab.FIRST .. g_tab.LAST
    LOOP
    v_appln_rec := g_tab(nomor);
    dbms_output.put_line('line 2.1');
    v_flag_valid_1 :=false;
    v_flag_valid_2 :=false;
    v_flag_valid_3 :=false;
    v_flag_valid_4 :=false;
    v_type_pass     := v_appln_rec.type_pass;
    v_scheme_ind    := v_appln_rec.scheme_ind;
    v_cat_pass     := v_appln_rec.cat_pass;
    v_entrepass    := v_appln_rec.entrepass;
    dbms_output.put_line('line 2.2');
    if(typePass =  v_type_pass or v_type_pass = 'NA') then
    v_flag_valid_1:= true;
    end if;
    if(schemeInd = v_scheme_ind or v_scheme_ind='NA') then
    v_flag_valid_2 := true;
    elsif(schemeInd is null and v_scheme_ind is null) then
    v_flag_valid_2 := true;
    end if;
    if(catPass = v_cat_pass or v_cat_pass='NA') then
    v_flag_valid_3 := true;
    elsif(catPass is null and v_cat_pass is null) then
    v_flag_valid_3 := true;
    end if;
    if(entrePass = v_entrepass or v_entrepass='NA') then
    v_flag_valid_4 := true;
    end if;
    if(v_flag_valid_1 = true and v_flag_valid_2 = true and v_flag_valid_3 = true and v_flag_valid_4 = true) then
    v_displayTypePass     := v_appln_rec.display_type_pass;
    v_displayCatPass     := v_appln_rec.display_cat_pass;
    v_displayTypePassDesc   := v_appln_rec.display_type_pass_desc;
    dbms_output.put_line('rule id got :'||v_appln_rec.rule_id);
    dbms_output.put_line('rule no got :'||v_appln_rec.id_no);
    exit;
    end if;
    END LOOP;
    displayTypePass := v_displayTypePass;
    displayCatPass  := v_displayCatPass;
    dbms_output.put_line('1type:' || v_displayTypePassDesc);
    displayTypePassDesc :=    v_displayTypePassDesc;
    dbms_output.put_line('2type:' || displayTypePassDesc);
    dbms_output.put_line('type:' || v_displayTypePass);
    dbms_output.put_line('cat :' || v_displayCatPass);
    exception
    when others then
    dbms_output.put_line('error happen'||DBMS_UTILITY.format_error_backtrace);
    Logger.ERROR('TYPE_CAT_PASS_UTIL.populateTypeCatPass',SQLCODE,SQLERRM || ' ' ||DBMS_UTILITY.format_error_backtrace,'SYSTEM');
    end populateTypeCatPassFullDesc;
    function getDisplayTypePass(typePass  IN varchar2, schemeInd IN varchar2,catPass IN varchar2,entrePass IN varchar2) return varchar2 is
    v_displayTypePass varchar2(15) :='-';
    v_displayTypePassDesc varchar2(100) :='-';
    v_displayCatPass varchar2(2) :='-';
    begin
    populateTypeCatPassFullDesc(typePass,schemeInd,catPass,entrePass,v_displayTypePass,v_displayTypePassDesc,v_displayCatPass);
    return  v_displayTypePass;
    exception
    when others then
    dbms_output.put_line('error happen'||DBMS_UTILITY.format_error_backtrace);
    Logger.ERROR('TYPE_CAT_PASS_UTIL.populateTypeCatPass',SQLCODE,SQLERRM || ' ' ||DBMS_UTILITY.format_error_backtrace,'SYSTEM');
    end getDisplayTypePass;
    end TYPE_CAT_PASS_UTIL_TEST;
    Using the approach above, even if I query select * from <some_view>, there is only one execution of:
    SELECT tb_type_cat_pass_matrix.id_no,
    tb_type_cat_pass_matrix.type_pass,
    tb_type_cat_pass_matrix.scheme_ind,
    tb_type_cat_pass_matrix.cat_pass,
    tb_type_cat_pass_matrix.entrepass,
    tb_type_cat_pass_matrix.display_type_pass,
    tb_type_cat_pass_matrix.display_cat_pass,
    tb_type_cat_pass_matrix.display_type_pass_desc,
    tb_type_cat_pass_matrix.rule_id
    FROM tb_type_cat_pass_matrix ORDER BY id_no asc
    The key point is the initializeTypePassMatrix procedure, but it seems the package variable only lives for one session?
    If I open a new session it is reset again.
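    Since that package state is per-session, one alternative worth testing on 11g and later is the cross-session PL/SQL function result cache: cached values are shared by all sessions and are invalidated automatically when the matrix table changes. A minimal sketch only; the function name is hypothetical, the single query stands in for the full matching loop, and NO_DATA_FOUND handling is omitted:
    CREATE OR REPLACE FUNCTION get_display_type_pass_rc (
      p_type_pass  IN VARCHAR2,
      p_scheme_ind IN VARCHAR2,
      p_cat_pass   IN VARCHAR2,
      p_entrepass  IN VARCHAR2
    ) RETURN VARCHAR2
      RESULT_CACHE   -- on 11.1, add RELIES_ON (tb_type_cat_pass_matrix)
    IS
      v_result tb_type_cat_pass_matrix.display_type_pass%TYPE;
    BEGIN
      -- First matching rule wins, mirroring the loop in populateTypeCatPassFullDesc.
      SELECT display_type_pass
        INTO v_result
        FROM (SELECT display_type_pass
                FROM tb_type_cat_pass_matrix
               WHERE (type_pass = p_type_pass OR type_pass = 'NA')
                 AND (scheme_ind = p_scheme_ind OR scheme_ind = 'NA'
                      OR (scheme_ind IS NULL AND p_scheme_ind IS NULL))
                 AND (cat_pass = p_cat_pass OR cat_pass = 'NA'
                      OR (cat_pass IS NULL AND p_cat_pass IS NULL))
                 AND (entrepass = p_entrepass OR entrepass = 'NA')
               ORDER BY id_no)
       WHERE ROWNUM = 1;
      RETURN v_result;
    END;
    /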

  • Select - execution time strange behaviour

    hi all,
    When I executed a SQL SELECT (with joins and so on), I observed the following behaviour.
    Query execution times were:
    1000 records - 4 sec
    5000 records - 10 sec
    10000 records - 7 sec
    25000 records - 16 sec
    50000 records - 33 sec
    I tested this behaviour with different sets of SQL on different sets of data, and in each case the behaviour was more or less the same.
    Can anyone explain why Oracle takes more time to return 5000 records than it takes for 10000? (In some cases this behaviour can be observed between ~2000 and ~5000.)
    Please note that this has nothing to do with the specific SQL, as I tested with different sets of SQL on different sets of data.
    Could there be an Oracle-internal reason that explains this behaviour?
    regards
    at

    at,
    Things you need to check before you go further -
    1) Are you using OPTIMIZER_MODE=CHOOSE or OPTIMIZER_MODE=RULE?
    2) If you are using RULE, then rewrite the queries or use some hints to help them along. If you're using a 3rd-party application, then consider going to CHOOSE for the optimizer_mode.
    3) If you're using CHOOSE (Cost Based Optimizer), then:
    a) Update your statistics. You can do this by creating a .cmd (or shell) file that logs into sqlplus or svrmgrl (better get used to sqlplus, as svrmgrl isn't around anymore in 9i), then calls a sql file you will create (@c:\users\oracledba\analyze.sql for example). The analyze.sql file might look something like this:
    -- create a file to call in just a moment
    spool c:\users\oracledba\analyzed.sql
    select 'analyze table ' || owner || '.' || table_name || ' compute statistics;'
    from dba_tables
    where owner NOT IN ('SYS', 'SYSTEM', 'DBA_USER', 'EXP_USER', 'QUEST', 'any_other_user_who_is_not_a_normal_user')
    and temporary = 'N';
    spool off
    -- execute the file you just created
    @c:\users\oracledba\analyzed.sql
    b) This gives you an easy, repeatable way to analyze your tables. You can make a similar one for indexes as well. Don't run this thing every five minutes or you'll end up with skewed statistics. If possible, run it at a time of day when the performance impact caused by the script (which actually isn't much, but does have an effect) won't be too drastic.
    You also need to take a look at some additional parameters accompanying OPTIMIZER_MODE that affect the time given to the CBO. Important ones are:
    a) optimizer_max_permutations (set it to 1000 to limit the number of plans the optimizer examines before choosing the best one. It can be as high as 80000, but that is too many; research indicates that starting at 1000 is good, because the optimizer takes into account that it only has 1000 shots at finding the best plan and tries the most likely candidates first)
    b) optimizer_index_caching (set it to 95, which tells the optimizer that 95% of the time it will find the index in the cache and should use the index - typically this is set equal to your hit ratio)
    c) optimizer_index_cost_adj is a bit more complicated (set it to a value that tells Oracle how much cheaper it is to access data via an index vs. a full table scan. I'd start with 5 and monitor from there).
    There are several other things you can do also - like creating bitmap indexes (assuming you've created indexes in the correct columns already).
    If you send some more details about your environment etc we can provide more info.
    Hope this helps,
    The Infamous
    Eric Cartman
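    As a side note, on 8i and later DBMS_STATS is the supported replacement for ANALYZE, and the whole generated script above collapses into a single call; a sketch for the current schema:
    BEGIN
      -- Gathers table and index statistics for every object in the schema.
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER, cascade => TRUE);
    END;
    /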

  • Execution time too low

    I was trying to measure the execution time. The rate is 1 kS/s per channel and the number of samples to read is 100 per channel, in a FOR loop of 10. The time should be around 1000 ms, but it's only 500-600 ms. And when I change the rate or the number of samples, the execution time doesn't change... How could this happen?
    Solved!
    Go to Solution.
    Attachments:
    trial 6.vi 19 KB

    JudeLi wrote:
    I've tried to drag the clear task out of the loop, but every time I did, it ended up in a broken wire saying that the source is a 1-D array of DAQmx events and the type of the sink is DAQmx event...
    You can right-click on the output tunnel and tell it to not auto index.  But even better would be to use shift registers just in case you tell that FOR loop to run 0 times.
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines
    Attachments:
    trial 6.vi 13 KB

  • Can the format of a SQL Statement modify the execution time of an SQL ....

    Can the format of a SQL Statement modify the execution time of an SQL statement?
    Thanks in advance

    It depends on:
    1) What Oracle version you are using
    2) What you mean by "format"
    For example: if you're on Oracle9i, "changing the format" means changing the order of the tables in the FROM clause, and you're using the Rule Based Optimizer, then the execution plan and the execution time can be very different...
    Max
    [My Italian Oracle blog|http://oracleitalia.wordpress.com/2009/12/29/estrarre-i-dati-in-formato-xml-da-sql/]
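    A small illustration of Max's point, using the classic EMP/DEPT tables as hypothetical examples; under the Rule Based Optimizer the join order is taken from the FROM clause, so these two otherwise identical statements can get different plans:
    -- Under RBO the driving table depends on the FROM clause order:
    SELECT * FROM emp e, dept d WHERE e.deptno = d.deptno;
    SELECT * FROM dept d, emp e WHERE e.deptno = d.deptno;
    -- Under the CBO the join order is chosen by cost, unless pinned explicitly:
    SELECT /*+ ORDERED */ * FROM dept d, emp e WHERE e.deptno = d.deptno;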

  • For loop 1 iteration execution time

    Hi,
    What is the time taken for one iteration of a FOR loop with no code inside? How do I calculate this execution time?

    Ravindranath.K wrote:
    Hi,
    What is the time taken for one iteration of a FOR loop with no code inside? How do I calculate this execution time?
    No code in the FOR loop?  It will not use any time because the compiler will optimize it out.
    Why do you even need to know?
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • MARS General FP Drop Rule vs. Listed Unconf. FPs

    I have a gazillion (really!) Unconfirmed False Positive events listed on that Tab in MARS. The specific event is "Windows SMB Enum Share DoS" and I created a Drop Rule for ANY of these events, with Source and Destination from my inside networks. I know all of my systems are patched against it.
    It appears my Drop Rule is working, since viewing the Sessions associated with these (clicking the "Show" link at the right of each) shows no sessions after I installed the Drop Rule.
    But I still have all of these Events in the Unconf. FP list. I would like to avoid doing the "False Positive" procedure for each, for two reasons:
    1. It will take a long time.
    2. I will also wind up with a gazillion Drop Rules, which the system will either have to process OR I'll have to go through THEM and Inactivate them.
    Any ideas?
    Paul Trivino

    Try this to prevent System Determined False Positives from displaying as incidents:
    If you confirm what was previously an unconfirmed false positive, then a drop rule is created. That drop rule should prevent any further incidents of that type, so this shouldn't be happening. Please make sure you've clicked 'Activate'.
    Check the related bug ID: CSCsc74104

  • CREATE TABLE/INDEX execution times

    Hello,
    what are my options to optimize table and/or index creation times?
    I have a script that creates around 60,000 objects (maybe half indexes, half tables), and while each operation takes no more than a second, this results in a 17-hour execution time. So I'm looking for ways to shave a fraction of a second off each creation :-/
    What I can think of is that all of these operations end up writing to the same datafile (e.g. SYSTEM01.DBF); could it do any good to divide the system tablespace into more data files? Adding a datafile would only increase the quota, so I would have to regroup the data?
    Or can I increase the number of redo logs? Or temporarily add larger redo log files?
    Here is an extract to demonstrate:
    14:20:10 SQL> DROP TABLE PS_DOTL_PDS2_T2
    14:20:10   2  /
    Table dropped.
    14:20:11 SQL> CREATE TABLE PS_DOTL_PDS2_T2 (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
    14:20:11   2           BUSINESS_UNIT VARCHAR2(5) NOT NULL,
    14:20:11   3           PO_ID VARCHAR2(10) NOT NULL,
    14:20:11   4           LINE_NBR INTEGER NOT NULL,
    14:20:11   5           SCHED_NBR SMALLINT NOT NULL,
    14:20:11   6           DISTRIB_LINE_NUM INTEGER NOT NULL,
    14:20:11   7           BUSINESS_UNIT_REQ VARCHAR2(5) NOT NULL,
    14:20:11   8           REQ_ID VARCHAR2(10) NOT NULL,
    14:20:11   9           REQ_LINE_NBR INTEGER NOT NULL,
    14:20:11  10           REQ_SCHED_NBR SMALLINT NOT NULL,
    14:20:11  11           REQ_DISTRIB_NBR INTEGER NOT NULL,
    14:20:11  12           ACCOUNT VARCHAR2(10) NOT NULL,
    14:20:11  13           OPERATING_UNIT VARCHAR2(8) NOT NULL,
    14:20:11  14           PRODUCT VARCHAR2(6) NOT NULL,
    14:20:11  15           FUND_CODE VARCHAR2(5) NOT NULL,
    14:20:11  16           CLASS_FLD VARCHAR2(5) NOT NULL,
    14:20:11  17           PROGRAM_CODE VARCHAR2(5) NOT NULL,
    14:20:11  18           BUDGET_REF VARCHAR2(8) NOT NULL,
    14:20:11  19           AFFILIATE VARCHAR2(5) NOT NULL,
    14:20:11  20           AFFILIATE_INTRA1 VARCHAR2(10) NOT NULL,
    14:20:11  21           AFFILIATE_INTRA2 VARCHAR2(10) NOT NULL,
    14:20:11  22           CHARTFIELD1 VARCHAR2(10) NOT NULL,
    14:20:11  23           CHARTFIELD2 VARCHAR2(10) NOT NULL,
    14:20:11  24           CHARTFIELD3 VARCHAR2(10) NOT NULL,
    14:20:11  25           PROJECT_ID VARCHAR2(15) NOT NULL,
    14:20:11  26           ALTACCT VARCHAR2(10) NOT NULL,
    14:20:11  27           DEPTID VARCHAR2(10) NOT NULL,
    14:20:11  28           MONETARY_AMOUNT DECIMAL(26, 3) NOT NULL,
    14:20:11  29           DISTRIB_AMT DECIMAL(26, 3) NOT NULL,
    14:20:11  30           PO_DT DATE,
    14:20:11  31           CURRENCY_CD VARCHAR2(3) NOT NULL,
    14:20:11  32           KK_CLOSE_PRIOR VARCHAR2(1) NOT NULL,
    14:20:11  33           PO_STATUS VARCHAR2(2) NOT NULL,
    14:20:11  34           MID_ROLL_STATUS VARCHAR2(1) NOT NULL,
    14:20:11  35           CURRENCY_CD_BASE VARCHAR2(3) NOT NULL,
    14:20:11  36           RT_TYPE VARCHAR2(5) NOT NULL) TABLESPACE PSAPP STORAGE (INITIAL
    14:20:11  37   40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10
    14:20:11  38   PCTUSED 80
    14:20:11  39  /
    Table created.
    14:20:11 SQL> CREATE UNIQUE INDEX PS_DOTL_PDS2_T2 ON PS_DOTL_PDS2_T2
    14:20:11   2   (PROCESS_INSTANCE,
    14:20:11   3           BUSINESS_UNIT,
    14:20:11   4           PO_ID,
    14:20:11   5           LINE_NBR,
    14:20:11   6           SCHED_NBR,
    14:20:11   7           DISTRIB_LINE_NUM,
    14:20:11   8           BUSINESS_UNIT_REQ,
    14:20:11   9           REQ_ID,
    14:20:11  10           REQ_LINE_NBR,
    14:20:11  11           REQ_SCHED_NBR,
    14:20:11  12           REQ_DISTRIB_NBR) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT
    14:20:11  13   100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL
    14:20:11  14   NOLOGGING
    14:20:11  15  /
    Index created.
    14:20:11 SQL> ALTER INDEX PS_DOTL_PDS2_T2 NOPARALLEL LOGGING
    14:20:11   2  /
    Index altered.
    14:20:11 SQL> DROP TABLE PS_DOTL_PDS2_T3
    14:20:11   2  /
    Table dropped.
    14:20:12 SQL> CREATE TABLE PS_DOTL_PDS2_T3 (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
    14:20:12   2           BUSINESS_UNIT VARCHAR2(5) NOT NULL,
    14:20:12   3           PO_ID VARCHAR2(10) NOT NULL,
    14:20:12   4           LINE_NBR INTEGER NOT NULL,
    14:20:12   5           SCHED_NBR SMALLINT NOT NULL,
    14:20:12   6           DISTRIB_LINE_NUM INTEGER NOT NULL,
    14:20:12   7           BUSINESS_UNIT_REQ VARCHAR2(5) NOT NULL,
    14:20:12   8           REQ_ID VARCHAR2(10) NOT NULL,
    14:20:12   9           REQ_LINE_NBR INTEGER NOT NULL,
    14:20:12  10           REQ_SCHED_NBR SMALLINT NOT NULL,
    14:20:12  11           REQ_DISTRIB_NBR INTEGER NOT NULL,
    14:20:12  12           ACCOUNT VARCHAR2(10) NOT NULL,
    14:20:12  13           OPERATING_UNIT VARCHAR2(8) NOT NULL,
    14:20:12  14           PRODUCT VARCHAR2(6) NOT NULL,
    14:20:12  15           FUND_CODE VARCHAR2(5) NOT NULL,
    14:20:12  16           CLASS_FLD VARCHAR2(5) NOT NULL,
    14:20:12  17           PROGRAM_CODE VARCHAR2(5) NOT NULL,
    14:20:12  18           BUDGET_REF VARCHAR2(8) NOT NULL,
    14:20:12  19           AFFILIATE VARCHAR2(5) NOT NULL,
    14:20:12  20           AFFILIATE_INTRA1 VARCHAR2(10) NOT NULL,
    14:20:12  21           AFFILIATE_INTRA2 VARCHAR2(10) NOT NULL,
    14:20:12  22           CHARTFIELD1 VARCHAR2(10) NOT NULL,
    14:20:12  23           CHARTFIELD2 VARCHAR2(10) NOT NULL,
    14:20:12  24           CHARTFIELD3 VARCHAR2(10) NOT NULL,
    14:20:12  25           PROJECT_ID VARCHAR2(15) NOT NULL,
    14:20:12  26           ALTACCT VARCHAR2(10) NOT NULL,
    14:20:12  27           DEPTID VARCHAR2(10) NOT NULL,
    14:20:12  28           MONETARY_AMOUNT DECIMAL(26, 3) NOT NULL,
    14:20:12  29           DISTRIB_AMT DECIMAL(26, 3) NOT NULL,
    14:20:12  30           PO_DT DATE,
    14:20:12  31           CURRENCY_CD VARCHAR2(3) NOT NULL,
    14:20:13  32           KK_CLOSE_PRIOR VARCHAR2(1) NOT NULL,
    14:20:13  33           PO_STATUS VARCHAR2(2) NOT NULL,
    14:20:13  34           MID_ROLL_STATUS VARCHAR2(1) NOT NULL,
    14:20:13  35           CURRENCY_CD_BASE VARCHAR2(3) NOT NULL,
    14:20:13  36           RT_TYPE VARCHAR2(5) NOT NULL) TABLESPACE PSAPP STORAGE (INITIAL
    14:20:13  37   40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10
    14:20:13  38   PCTUSED 80
    14:20:13  39  /
    Table created.
    It's a PeopleSoft database, during one of the upgrade steps, running on Oracle 11.2.0.3, Windows patch set #17 I believe (Win2008R2_64).
    As always any input or references are greatly appreciated.
    Best regards.

    Hi,
    See below. You can use Oracle's deferred segment creation option. Oracle will not spend time on extent allocation, which can save you an enormous amount of time overall.
    http://www.oracle-base.com/articles/11g/segment-creation-on-demand-11gr2.php
    What I can think of is that all of these operations end up writing to the same datafile (e.g. SYSTEM01.DBF), could it do any good to divide the system tablespace into more data files? Adding a datafile would only increase the quota, so I would have to regroup the data?
    Why give the example of SYSTEM01.DBF? You should not be using the system tablespace, and having multiple datafiles will not help you here.
    What do you mean by regrouping the data?
    Salman
    Edited by: Salman Qureshi on Apr 10, 2013 4:02 PM
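    A short sketch of the deferred segment creation option the linked article describes (11.2 and later); no extents are allocated until the first row is inserted, which is where most of the per-object time goes in a mass-creation script. The table name here is a made-up example:
    -- Session-wide (on 11.2.0.3 the parameter already defaults to TRUE)...
    ALTER SESSION SET deferred_segment_creation = TRUE;
    -- ...or per table, with explicit syntax:
    CREATE TABLE ps_dotl_demo (
      process_instance NUMBER(10)  NOT NULL,
      business_unit    VARCHAR2(5) NOT NULL
    ) SEGMENT CREATION DEFERRED
      TABLESPACE psapp;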

    Hi Guru, I've encountered the message of "couldn't set locale correctly" after jumpstart install of solaris 8 in Netra Sun machine. I found the Java version is 1.2.2, may I know how can I upgrade to Java 1.4.2? Kindly advice. Thank you very much. Reg