Performance issue while accessing SWWLOGHIST & SWWWIHEAD tables

Hi,
We have applied a workflow function in Purchase Orders (PO) for release strategy purposes. We have two release levels.
The performance issue occurs when a user updates an existing PO that has already been released: after the user amends the PO and presses "Save" on the PO screen, the release strategy is reset and the system reads the SWWLOGHIST table, which takes anywhere from a few seconds to several minutes to complete the save.
My workflow background jobs are scheduled as follows:
SWWERRE - every 20 minutes
SWWCOND - every 30 minutes
SWWDHEX - every 3 minutes
Table Entries:
SWWWIHEAD - 6 million entries
SWWLOGHIST - 25 million entries
Should we do data archiving on the above workflow tables?
Is it the only solution?
Kindly advise,
Thanks,
Regards,
Thomas

Hi,
The sizes of both tables are as follows:
SWWLOGHIST - 3GB
SWWWIHEAD - 2GB
I have a weekly REORGCHK_ALLTABLES job. On DB2 (DB6), do I need to manually reorganize the tables or rebuild the indexes?
Please refer to the attached screenshots for both tables.
Thanks,
Regards,
Thomas
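
For reference, a manual reorganization on DB2 would look roughly like the sketch below. This is only a sketch: the schema name SAPR3 is an assumption (use your actual SAP schema), and on an SAP system such reorgs are normally scheduled via the DBA planning calendar (DB13) rather than run by hand.

    -- Check whether the table and its indexes actually need a reorg
    REORGCHK CURRENT STATISTICS ON TABLE SAPR3.SWWLOGHIST;
    -- Reorganize the table and all of its indexes
    REORG TABLE SAPR3.SWWLOGHIST;
    REORG INDEXES ALL FOR TABLE SAPR3.SWWLOGHIST;
    -- Refresh optimizer statistics afterwards
    RUNSTATS ON TABLE SAPR3.SWWLOGHIST WITH DISTRIBUTION AND DETAILED INDEXES ALL;

The same sequence applies to SWWWIHEAD. Note that a reorg only compacts the existing data; with 25 million log entries, archiving completed work items (archiving object WORKITEM) remains the lasting fix.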

Similar Messages

  • I'm facing a performance issue while accessing the PLAF table

    Dear all,
    I'm facing a performance issue while accessing the PLAF table.
    The START-OF-SELECTION of the report starts with the following select query.
        SELECT plnum  pwwrk matnr gsmng psttr FROM plaf
        INTO CORRESPONDING FIELDS OF TABLE it_tab
        WHERE matnr IN s_matnr
          AND pwwrk = p_pwwrk
          AND psttr IN s_psttr
          AND auffx = 'X'
          AND paart = 'LA' .
    While executing the report in the Quality system there is no performance issue...
    In the Production system, however, the above select query alone takes 15-20 minutes before the report moves on.
    Kindly help me to overcome this problem...
    Regards,
    Jessi

    Hi,
    "Just implement its primary Key
    WHERE PLNUM BETWEEN '0000000001' AND '9999999999' " By this you are implementing the Primary Key
    This statement has nothing to do with performance, because system is not able to use primary key or uses every row.
    Jessica, your query uses secondary index created by SAP:
    1     (Material, plant) which uses fields MANDT MATNR and PLWRK.
    but it is not suitable in your case.
    You can consider adding new one, which would containt all fields: MANDT, MATNR, PWWRK, PSTTR AUFFX PAART
    or - but it depends on number of rows meeting and not meeting (auffx = 'X' AND paart = 'LA' ) condition.
    It could speed the performance, if you would create secondary index based on fields MANDT, MATNR, PWWRK, PSTTR
    and do like Ramchander suggested: remove AUFFX and PAART from index and where section, and remove these unwanted rows
    after the query using DELETE statement.
    Regards,
    Przemysław
    Please also check how many rows there are in the production system.
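
    At the database level, the suggested four-field index would look roughly like the sketch below. This is an illustration only: in an SAP system the secondary index would be defined in SE11 on table PLAF rather than created directly on the database, and the index name Z01 is hypothetical.

        -- Hypothetical secondary index supporting the report's WHERE clause
        CREATE INDEX "PLAF~Z01" ON plaf (mandt, matnr, pwwrk, psttr);

    The AUFFX/PAART filtering is then done in the program after the SELECT, as suggested above.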

  • Performance issues while accessing the Confirm Goods/Services transaction

    Hello
    We are using SRM 4.0 , through Enterprise Portal 7.0.
    Many of our users are crippled by performance issues when accessing the Confirm Goods/Services tab (transaction BBPCF02).
    The system simply shows the hourglass and never displays the screen.
    This problem occurs for some users all the time, and for other users some of the time.
    It is not related to the user's machine, as others are able to access it quickly from the same machine.
    It is also not dependent on the data size (i.e. the number of confirmations created by the user).
    We would like to know why only some users suffer so pronouncedly, and why this transaction is generally slower than all the others.
    Any directions for finding the Probable cause will be highly rewarded.
    Thanks
    Kedar

    Hi Kedar,
    Please go through the following OSS Notes:
    Note 610805 - Performance problems in goods receipt
    Note 885409 - BBPCF02: The search for confirmation and roles is slow
    Note 1258830 - BBPCF02: Display/Process confirmation response time is slow
    Thanks,
    Pradeep

  • Performance Issue while Joining two Big Tables

    Hello Experts,
    We have the following scenario, wherein we need the sales rep associated with a sales order. This information is available in the VBPA table, with sales order, sales order item and partner function as the key.
    Now I'm interested in only one partner function, e.g. 'ZP'. This table has around 120 million records.
    I tried both options:
    Option 1 - Join this table (VBPA) with the sales order item table (VBAP) in the data foundation layer of the analytic view and filter on partner function
    Option 2 - Create an attribute view on VBPA that filters on partner function, and then join this attribute view to the data foundation table in the logical join layer
    Both these options are killing the performance.
    Is there any way out to achieve this ?
    Your expert opinion is greatly appreciated!!
    Thanks & regards,
    Jomy

    Hi,
    Lars is correct. You may have to spend a little more time and give a bigger picture.
    I have used this join. It takes about 2 to 3 seconds to execute for me, although my data volume is less than yours.
    You must have used a left outer join when joining the attribute view (with the constant filter ZP, as specified in your first post) to the data foundation. Please cross-check once again; sometimes my fat finger has inadvertently changed the join type and I had to go back and fix it. If this is a left outer join or a referential join, HANA does not perform the join at all when you are not requesting any field from the attribute view on table VBPA. This used to be a problem due to a bug in SP4 but was fixed in SP5.
    However, if you have modeled this join in the data foundation, it does enforce the join even if you did not ask for any fields from the VBPA table, because you have put a constant filter ZR there (the LIPS->VBPA join in the data foundation, as specified in one of your later replies).
    If any measure you are selecting in the analytic view is a restricted or calculated measure that needs some field from VBPA, then the join will be enforced, as you would agree. This is where I had the most trouble: my join itself is not bad, but my business requirement to get the current value of a partner attribute in a higher-level calculation view sent too much data from the analytic view to the calculation view.
    Please send the full diagram of your model and the visualization plan. Also, if you are using a front end (like Analysis Office), please trap the SQL sent from this front-end tool and include it in the message. Even a plain SQL statement in which you have observed this performance issue would be helpful.
    Ramana
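
    For reference, the intent of the model corresponds to roughly the SQL below. This is a sketch with assumed names: PERNR is taken as the sales-rep field, and the column list is illustrative.

        -- Plain-SQL shape of the modeled join: VBAP items against a ZP-filtered VBPA
        SELECT ap.vbeln, ap.posnr, ap.matnr, pa.pernr
          FROM vbap ap
          LEFT OUTER JOIN (SELECT vbeln, posnr, pernr
                             FROM vbpa
                            WHERE parvw = 'ZP') pa  -- constant filter on partner function
            ON pa.vbeln = ap.vbeln
           AND pa.posnr = ap.posnr;

    With a left outer join of this shape, the optimizer can skip the VBPA branch whenever no field from it is requested, which is exactly the pruning behavior described above.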

  • Performance issue while accessing reports through InfoView

    Users run reports (Webi, Crystal) through InfoView, accessed by categories. Navigating between categories (we have different categories: Sales, Purchasing, etc.) in InfoView is fine, but it takes around 3-5 minutes to show all the reports within a category; the run time of each report is quick and as expected.
    In the CMC, navigating between categories and the time taken to show all the reports within a category are both fine. I couldn't understand why this is happening in InfoView only.
    Would like to know if anyone has had similar issues. Are there any settings that need to be made on the Tomcat server, the CMS or InfoView? Searched OSS notes and found Note 1206095, but it is for Enterprise XI Release 2.
    Product details:
    BOE XI 3.1 SP3.
    Any info will be helpful.

    Hello,
    - Is Tomcat installed on the same server as BOE? If not, did you use wdeploy to deploy the WAR files? https://bosap-support.wdf.sap.corp/sap/support/notes/1325139
    - Do you have more than one Tomcat server, and is it fronted by a load balancer? If it is, try bypassing the load balancer and accessing InfoView directly.
    There is a CMS command-line option that might help you improve performance in your environment.
    The switch is named -maxobjectsincache, and it allows you to increase the maximum number of objects that the CMS stores in its memory cache. Increasing the number of objects reduces the number of database calls required and can improve CMS performance.
    The default value for this option is 10000 and the maximum value is 100000. Please keep in mind that it is not recommended to exceed 100,000, as too many objects in memory will degrade the CMS. I would suggest testing with 60,000.
    Regards,
    Wallie

  • Performance issues with version-enabled partitioned tables?

    Hi all,
    Are there any known performance issues with version-enabled partitioned tables?
    I've been doing some performance tests with a large version-enabled partitioned table, and it seems that the CBO optimizer is choosing very expensive plans during merge operations.
    Thanks in advance,
    Vitor
    Example:
    Object Name (Rows / Bytes / Cost / PStart..PStop)
    UPDATE STATEMENT Optimizer Mode=CHOOSE (1 / - / 249)
      UPDATE SIG.SIG_QUA_IMG_LT
        NESTED LOOPS SEMI (1 / 266 / 249)
          PARTITION RANGE ALL (1..9)
            TABLE ACCESS FULL SIG.SIG_QUA_IMG_LT (1 / 259 / 2 / 1..9)
          VIEW SYS.VW_NSO_1 (1 / 7 / 247)
            NESTED LOOPS (1 / 739 / 247)
              NESTED LOOPS (1 / 677 / 247)
                NESTED LOOPS (1 / 412 / 246)
                  NESTED LOOPS (1 / 114 / 244)
                    INDEX RANGE SCAN WMSYS.MODIFIED_TABLES_PK (1 / 62 / 2)
                    INDEX RANGE SCAN SIG.QIM_PK (1 / 52 / 243)
                  TABLE ACCESS BY GLOBAL INDEX ROWID SIG.SIG_QUA_IMG_LT (1 / 298 / 2 / ROWID..ROW L)
                INDEX RANGE SCAN SIG.SIG_QUA_IMG_PKI$ (1 / - / 1)
              INDEX RANGE SCAN WMSYS.WM$NEXTVER_TABLE_NV_INDX (1 / 265 / 1)
            INDEX UNIQUE SCAN WMSYS.MODIFIED_TABLES_PK (1 / 62 / -)
    /* Formatted on 2004/04/19 18:57 (Formatter Plus v4.8.0) */
    UPDATE /*+ USE_NL(Z1) ROWID(Z1) */ sig.sig_qua_img_lt z1
       SET z1.nextver =
              SYS.ltutil.subsversion
                 (z1.nextver,
                  SYS.ltutil.getcontainedverinrange (z1.nextver,
                                                     'SIG.SIG_QUA_IMG',
                                                     'NpCyPCX3dkOAHSuBMjGioQ==',
                                                     4574,
                                                     4575),
                  4574)
    WHERE z1.ROWID IN (
    (SELECT /*+ ORDERED USE_NL(T1) USE_NL(T2) USE_NL(J2) USE_NL(J3)
    INDEX(T1 QIM_PK) INDEX(T2 SIG_QUA_IMG_PKI$)
    INDEX(J2 WM$NEXTVER_TABLE_NV_INDX) INDEX(J3 MODIFIED_TABLES_PK) */
    t2.ROWID
    FROM (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j1,
    sig.sig_qua_img_lt t1,
    sig.sig_qua_img_lt t2,
    wmsys.wm$nextver_table j2,
    (SELECT /*+ INDEX(WM$MODIFIED_TABLES MODIFIED_TABLES_PK) */
    UNIQUE VERSION
    FROM wmsys.wm$modified_tables
    WHERE table_name = 'SIG.SIG_QUA_IMG'
    AND workspace = 'NpCyPCX3dkOAHSuBMjGioQ=='
    AND VERSION > 4574
    AND VERSION <= 4575) j3
    WHERE t1.VERSION = j1.VERSION
    AND t1.ima_id = t2.ima_id
    AND t1.qim_inf_esq_x_tile = t2.qim_inf_esq_x_tile
    AND t1.qim_inf_esq_y_tile = t2.qim_inf_esq_y_tile
    AND t2.nextver != '-1'
    AND t2.nextver = j2.next_vers
    AND j2.VERSION = j3.VERSION))

    Hello Vitor,
    There are currently no known performance issues with version-enabled tables that are partitioned. The merge operation may need to access all of the partitions of a table, depending on the data that needs to be moved/copied from the child to the parent. This is the reason for the 'PARTITION RANGE ALL' step in the plan that you provided. The majority of the remaining steps are due to the hints that have been added, since this plan has given the best performance for us in the past for this particular statement. If this is not the case for you, and you feel that another plan would yield better performance, then please let me know and I will take a look at it.
    One suggestion would be to make sure that the table has been analyzed recently, so that the optimizer has current statistics for it.
    Performance issues are very hard to fix without a reproducible test case, so it may be advisable to file a TAR if you continue to have significant performance issues with the mergeWorkspace operation.
    Thank You,
    Ben
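
    As a sketch of that suggestion, statistics for the versioned table can be refreshed with DBMS_STATS (the SIG schema name is taken from the plan above; adjust as needed):

        -- Refresh optimizer statistics for the table and its indexes
        BEGIN
          DBMS_STATS.GATHER_TABLE_STATS(
            ownname => 'SIG',
            tabname => 'SIG_QUA_IMG_LT',
            cascade => TRUE);  -- include index statistics
        END;
        /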

  • Issue while filling Inventory setup tables.

    Hi,
    Here we are facing a critical issue while filling the Inventory setup tables. We have completed the BX setup successfully, but we are not able to complete BF and UM because of performance issues.
    We filled 40% of BF and UM, and since we don't have system downtime we stopped the setup at that point. Can we do the BF & UM setup without R/3 system downtime? Is there any other way we can fill the BF and UM setup tables?
    Your help is really appreciated.
    Thanks
    BK

    Hi BK,
    Ideally you need the downtime only when doing the setup for the current and previous periods, as these are the only open periods where you can expect changes due to user postings. What selection did you give when doing the setup? One idea is to break the setup up so that each job loads one week, and then run the jobs in parallel. This would work for BF; for UM, of course, you need to run it per company code. Are you using an ODS in your data model?

  • Performance issues with an advanced table inside an advanced table

    Hi,
    Can anybody let me know what the performance issues are with an advanced table inside an advanced table? I am having a big performance issue while implementing an advanced table in an advanced table; my inner table is rendering very slowly.
    Thanks

    A table in a table is a performance-eating structure :) because your view link will cache both parent and child VO rows in the JVM. The only way to improve the performance is to tune your SQL queries.
    --Mukul

  • Performance issue while opening the report

    HI,
    I am working on BO XI R3.1. There is a performance issue while opening reports on the BO Solaris server; on the Windows server it is comparatively fast.
    We have a few reports which contain 5 fixed prompts and 7 optional prompts.
    Out of the 5 fixed prompts, 3 are static (each contains only 3-4 records) and come from a materialized view.
    We have already applied many of the usual report-performance measures:
    1) Index awareness
    2) Aggregate awareness
    3) Array fetch size - 250
    4) Array bind time - 32767
    5) Login timeout - 600
    The issue is that, before any refresh, merely opening the report takes 1.30 min on the BO Solaris server, while the same report takes 45 sec on the BO Windows server. Even when we import it to other BO Solaris servers it takes the same time as on the old Solaris server (1.30 min).
    When we close the trace on the Solaris server it takes 1.15 sec. In this initial phase it should not be hitting the database much, so why does it take that much time just to open the report?
    Could you please guide us as to where exactly the problem is and how we can improve the performance of opening the report? In case the problem is related to the Solaris server, what would it be and how can we rectify it?
    In case any further input is required, feel free to ask me.

    Hi Kumar,
    If this is happening with all the reports, then the issue seems to be due to the firewall or security settings of the Solaris OS.
    Please try lowering the security level in Solaris and test for the issue.
    Regards,
    Chaitanya Deshpande

  • Performance issue while generating Query

    Hi BI Gurus.
    I am facing a performance issue while running a query on 0IC_C03.
    It has a (from & to) variable for generating the report for a particular time duration.
    If the (from & to) variable is filled, the query runs for a long time and then ends with a runtime error.
    If the query is executed without the variable (which is optional), the data is extracted from the beginning up to the current date, and this takes less time to execute.
    The period then has to be selected manually via the 'keep filter value' option. Please suggest how I can solve the error.
    Regards
    Ritika

    Hi Ritika,
    Welcome to SDN.
    You have to check the following runtime segments using transaction ST03N:
    High database runtime
    High OLAP runtime
    High frontend runtime
    If it is high database runtime:
    - check the aggregates, or create aggregates on the cube; this will help you.
    If it is high OLAP runtime:
    - check the user exits, if any.
    - check whether hierarchies are used and fetched at a deep level.
    If it is high frontend runtime:
    - check whether a very high number of cells and formattings are transferred to the frontend (use "All data" to get the value "No. of Cells"), which causes high network and frontend (processing) runtime.
    For the from and to date variables, create one more set, use it, and try.
    Regs,
    VACHAN

  • Issue while accessing a SQL Server table over OTG

    Hi,
    I have been learning Oracle for about 1.5 years and am just starting to learn some OTG pieces. I am wondering about an issue. The issue is:
    "We need help with an issue we are having while accessing a SQL Server table over OTG. We are getting the following error message in Oracle:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [Oracle][ODBC SQL Server Driver]Unicode conversion failed {HY000}
    The column it is failing on is "-----------" in the view --------------- in the SQL Server database pointed to by the Oracle DB link ------------------- that is created in the Oracle instances ---- and -----.
    This was working before but is now failing; we suspect it is due to new multi-byte data being added to the base table in the above column."
    I took out the details and added ---- instead. I am wondering about your thoughts on fixing this issue, and hoping to learn along the way. Thanks

    Hi Mike,
    Thanks for the response, here are the details:
    1. What is the character set of the Oracle RDBMS being used? What is returned by:
    select * from nls_database_parameters;
    NLS_CHARACTERSET
    AL32UTF8
    NLS_NCHAR_CHARACTERSET
    UTF8
    We get SQL_Latin1_General_CP1_C1_AS and 1252 as the Collation Property and Code Page.
    The datatype of the column in question in SQL Server is nvarchar(100).
    When I do a describe on the SQL Server view (desc CK_DATA_FOR_OPL@-------), I get the error below:
    ERROR: object CK_DATA_FOR_OPL does not exist
    Select * from CK_DATA_FOR_OPL@------ where rownum = 1 does get me a row.
    create table tmp_tab as
    Select * from CK_DATA_FOR_OPL@----- where rownum =1;
    desc tmp_tab shows the datatype of the said column in the table created in Oracle as NVARCHAR2(150).
    Not sure why a column defined with size 100 in SQL Server should come across as 150 when seen over OTG. We see something similar with DB2 tables we access over OTG as well.
    Edited by: 993950 on Mar 15, 2013 8:49 AM

  • Performance issues while querying data from a large table

    Hi all,
    I have performance issues with queries on the mtl_transaction_accounts table, which has around 48,000,000 rows. One of the queries is below:
    SQL ID: 98pqcjwuhf0y6 Plan Hash: 3227911261
    SELECT SUM (B.BASE_TRANSACTION_VALUE)
    FROM
    MTL_TRANSACTION_ACCOUNTS B , MTL_PARAMETERS A  
    WHERE A.ORGANIZATION_ID =    B.ORGANIZATION_ID 
    AND A.ORGANIZATION_ID =  :b1 
    AND B.REFERENCE_ACCOUNT =    A.MATERIAL_ACCOUNT 
    AND B.TRANSACTION_DATE <=  LAST_DAY (TO_DATE (:b2 ,   'MON-YY' )  )  
    AND B.ACCOUNTING_LINE_TYPE !=  15  
    call     count       cpu    elapsed       disk      query    current        rows
    Parse        1      0.00       0.00          0          0          0           0
    Execute      3      0.02       0.05          0          0          0           0
    Fetch        3    134.74     722.82     847951    1003824          0           2
    total        7    134.76     722.87     847951    1003824          0           2
    Misses in library cache during parse: 1
    Misses in library cache during execute: 2
    Optimizer mode: ALL_ROWS
    Parsing user id: 193  (APPS)
    Number of plan statistics captured: 1
    Rows (1st) Rows (avg) Rows (max)  Row Source Operation
             1          1          1  SORT AGGREGATE (cr=469496 pr=397503 pw=0 time=237575841 us)
        788242     788242     788242   NESTED LOOPS  (cr=469496 pr=397503 pw=0 time=337519154 us cost=644 size=5920 card=160)
             1          1          1    TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS (cr=2 pr=0 pw=0 time=59 us cost=1 size=10 card=1)
             1          1          1     INDEX UNIQUE SCAN MTL_PARAMETERS_U1 (cr=1 pr=0 pw=0 time=40 us cost=0 size=0 card=1)(object id 181399)
        788242     788242     788242    TABLE ACCESS BY INDEX ROWID MTL_TRANSACTION_ACCOUNTS (cr=469494 pr=397503 pw=0 time=336447304 us cost=643 size=4320 card=160)
       8704356    8704356    8704356     INDEX RANGE SCAN MTL_TRANSACTION_ACCOUNTS_N3 (cr=28826 pr=28826 pw=0 time=27109752 us cost=28 size=0 card=7316)(object id 181802)
    Rows     Execution Plan
          0  SELECT STATEMENT   MODE: ALL_ROWS
          1   SORT (AGGREGATE)
    788242    NESTED LOOPS
          1     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_PARAMETERS' (TABLE)
          1      INDEX   MODE: ANALYZED (UNIQUE SCAN) OF
                     'MTL_PARAMETERS_U1' (INDEX (UNIQUE))
    788242     TABLE ACCESS   MODE: ANALYZED (BY INDEX ROWID) OF
                    'MTL_TRANSACTION_ACCOUNTS' (TABLE)
    8704356      INDEX   MODE: ANALYZED (RANGE SCAN) OF
                     'MTL_TRANSACTION_ACCOUNTS_N3' (INDEX)
    Elapsed times include waiting on following events:
      Event waited on                             Times   Max. Wait  Total Waited
      ----------------------------------------   Waited  ----------  ------------
      row cache lock                                 29        0.00          0.02
      SQL*Net message to client                       2        0.00          0.00
      db file sequential read                    847951        0.40        581.90
      latch: object queue header operation            3        0.00          0.00
      latch: gc element                              14        0.00          0.00
      gc cr grant 2-way                               3        0.00          0.00
      latch: gcs resource hash                        1        0.00          0.00
      SQL*Net message from client                     2        0.00          0.00
      gc current block 3-way                          1        0.00          0.00
    ********************************************************************************
    On a 5-node RAC environment the program completes in 15 hours, whereas on a single-node environment it completes in 2 hours.
    Is there any way I can improve the performance of this query?
    Regards
    Edited by: mhosur on Dec 10, 2012 2:41 AM
    Edited by: mhosur on Dec 10, 2012 2:59 AM
    Edited by: mhosur on Dec 11, 2012 10:32 PM

    CREATE INDEX mtl_transaction_accounts_n0
      ON mtl_transaction_accounts (
                                   transaction_date
                                 , organization_id
                                 , reference_account
                                 , accounting_line_type
                                  )
    /

  • Performance issue while selecting material documents MKPF & MSEG

    Hello,
    I'm facing performance issues in production while selecting material documents for sales orders and items, based on the sales order stock.
    Here is the query:
    I first select data from the EBEW table (the sales order stock table), and then run this query.
        IF ibew[] IS NOT INITIAL AND ignore_material_documents IS INITIAL.
    *     Select the Material documents created for the the sales orders.
          SELECT mkpf~mblnr mkpf~budat
                 mseg~matnr mseg~mat_kdauf mseg~mat_kdpos mseg~shkzg
                 mseg~dmbtr mseg~menge
           INTO  CORRESPONDING FIELDS OF TABLE i_mseg
           FROM  mkpf INNER JOIN mseg
           ON    mkpf~mandt = mseg~mandt
           AND   mkpf~mblnr = mseg~mblnr
           AND   mkpf~mjahr = mseg~mjahr
           FOR   ALL entries IN ibew
           WHERE mseg~matnr      = ibew-matnr
           AND   mseg~werks         = ibew-bwkey
           AND   mseg~mat_kdauf   = ibew-vbeln
           AND   mseg~mat_kdpos  = ibew-posnr.
          SORT i_mseg BY mat_kdauf ASCENDING
                         mat_kdpos ASCENDING
                         budat     DESCENDING.
        ENDIF.
    I need to select the material documents because the end users want to see the stock as of a certain date for the sales orders, and only the material document lines can give this information; the EBEW table gives the stock only for the current date.
    For example:
    If the report is run on 5th Oct 2008 for a stock date of 30th Sept 2008, then I need to consider the goods movements after 30th Sept, adding back stock that was issued and subtracting stock that was received.
    I know there is an index MSEG~M on mseg in the database; however, I don't know which storage locations (LGORT) and movement types (BWART) should be considered, so I tried using all the storage locations and movement types available in the system, but this caused the query to run even slower than before.
    I could create an index for the fields mentioned in the WHERE clause, but it would be an overhead anyway.
    Your help will be appreciated. Thanks in advance
    regards,
    Advait

    Hi Thomas,
    Thanks for your reply. The performance of the query has improved significantly after switching the join around so that MSEG drives instead of MKPF.
    Actually, I also tried it without the join, looping with field symbols instead; this works slightly faster than the switched join.
    Here are the results. I tried with 371 records, as our sandbox unfortunately doesn't have many entries:
    Results before switching the join: 146,036 microseconds
    Results after switching the join: 38,029 microseconds
    Results without the join: 28,068 microseconds for the selection plus 5,725 microseconds for the loop
    Thanks again.
    regards,
    Advait
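
    For illustration, the database-level shape of the switched join is sketched below. This is an assumption-laden sketch, not the actual ABAP: the bind variables stand in for a single ibew entry, which FOR ALL ENTRIES supplies per row in the real program.

        -- Rough SQL equivalent of the switched join: MSEG drives the access,
        -- MKPF is then looked up via its primary key (mandt, mblnr, mjahr)
        SELECT k.mblnr, k.budat,
               g.matnr, g.mat_kdauf, g.mat_kdpos, g.shkzg, g.dmbtr, g.menge
          FROM mseg g
         INNER JOIN mkpf k
            ON k.mandt = g.mandt
           AND k.mblnr = g.mblnr
           AND k.mjahr = g.mjahr
         WHERE g.matnr     = :matnr
           AND g.werks     = :werks
           AND g.mat_kdauf = :vbeln
           AND g.mat_kdpos = :posnr;

    Driving from MSEG lets the database apply the selective sales-order predicates first and then fetch only the matching document headers, rather than scanning headers and probing items.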

  • Performance issues while opening business rule

    Hi,
    we're working with Hyperion version 9.2.1 and we're having performance problems when opening business rules. I analyzed the issue and found out that it has something to do with assigning access privileges to the rule.
    The authorization plan looks as follows:
    User A is assigned to group G1
    User B is assigned to group G2
    Group G1 is assigned to group XYZ
    Group G2 is assigned to group XYZ
    Group XYZ holds the provision "basic user" for the Planning application.
    Without assigning any access privilege, the business rule opens immediately.
    When assigning an access privilege (validate or launch) to group G1/G2, the business rule opens immediately.
    When assigning an access privilege to group XYZ, the business rule opens only after 2-5 minutes.
    Does anyone have an idea why this happens and how to solve it?
    Kind regards
    Uli
    Edited by: user13110201 on 12.05.2010 04:31

    This has been an issue with Business Rules for quite a while. Oracle has made steps both forward and backward in releases later than yours, and they've issued patches addressing, if not completely resolving, the problem. Things finally seem to be much better in 11.1.1.3, although YMMV.

  • Performance issue with a frequently inserted table

    Hi all,
    We have a table named raw_trap_store with the columns trap_id (NUMBER, PK), source_ip (VARCHAR2), oid (VARCHAR2), message (CLOB) and received_time (DATE).
    This table is partitioned into 24 partitions, with received_time as the partitioning column (each hour's data is stored in its own partition).
    The table is inserted into at 40-50 records/sec on average. The overall number of records for a day is around 2.8-3 million. Data is retained for 2 days.
    No updates happen on this table.
    Performance issue:
    We need a report that selects records from this table based on certain values of source IP (a filter on the source_ip column).
    We also need a report that selects records based on certain values of OID (a filter on the oid column).
    But if I create indexes on the source_ip and oid columns, the inserts become slow. (I have created normal indexes, not partitioned indexes.)
    Please help me address the above issue.

    Given the nature of your reports (based on source_ip and OID) and the nature of your table partitioning (range-partitioned by received_time), you have already made a good decision in creating these particular indexes as normal (b-tree, global) indexes rather than locally partitioned ones. If you had partitioned them locally, your reports would not benefit from partition elimination (because they do not include the partition key in their WHERE clause), and hence your index range scans would scan all 24 index partitions, generating a lot of logical I/O.
    That said, remember that generally we insert once and select many times; you have to balance the two. If you are sure that it is the creation of your two indexes that has decreased insert performance, then you may set them to an unusable state before the insert and rebuild them afterwards (see the sketch below). But this is good advice only if the volume of data to be inserted is much bigger than the volume of data already in the table.
    And if you are not deleting from the table, and the table has no triggers or integrity constraints (like FK constraints), then you can opt for a direct-path insert using the /*+ APPEND */ hint.
    Best regards
    Mohamed Houri
    <mod. action: removed unecessary blog ref.>
    Message was edited by: Nicolas.Gasparotto
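
    As a sketch of the unusable-index approach described above (the index and staging-table names are hypothetical, and SKIP_UNUSABLE_INDEXES must be TRUE, which is the default from 10g on):

        -- Mark the two report indexes unusable before the bulk load
        ALTER INDEX raw_trap_store_srcip_idx UNUSABLE;
        ALTER INDEX raw_trap_store_oid_idx UNUSABLE;

        -- Direct-path insert; valid only if the table has no triggers or FK constraints
        INSERT /*+ APPEND */ INTO raw_trap_store
               (trap_id, source_ip, oid, message, received_time)
        SELECT trap_id, source_ip, oid, message, received_time
          FROM raw_trap_staging;  -- hypothetical staging table
        COMMIT;

        -- Rebuild the indexes once the load is finished
        ALTER INDEX raw_trap_store_srcip_idx REBUILD;
        ALTER INDEX raw_trap_store_oid_idx REBUILD;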
