Going from the Results tab to the Criteria tab is taking too much time

Hi guys,
I noticed that switching from the Results tab to the Criteria tab sometimes takes far too long, maybe 15-20 seconds. Shouldn't it really be faster?
I wonder whether it's an OBIEE issue at all.

IE definitely has an issue... I don't know whether it's specific to IE 6 or IE in general. Firefox is a LOT faster in most cases. Also, with Firefox you don't need to kill the browser every three hours or so just to free your RAM from all the memory leakage...
Edit:
Don't take me for a FF fanboy though... I use IE exclusively for development to ensure everything runs as it will on the client machines afterwards.

Similar Messages

  • Report is taking too much time when running from parameter form

    Dear All
    I have developed a report in Oracle Reports Builder 10g. When running it from Report Builder, the data comes back very fast.
    But when it is run from the parameter form, it takes too much time to format the report as PDF.
    Please suggest any configuration or setting if anybody has an idea.
    Thanks

    Hi,
    The first thing to check is whether the query runs to completion in TOAD. By default, TOAD selects only the first 50 rows, whereas Discoverer must return all the rows before displaying results if a crosstab report is used.
    Secondly, check that the queries and the explain plans are the same in Discoverer and TOAD. Although Discoverer shows the SQL in the SQL inspector, this isn't necessarily the SQL actually sent to the database. Use TOAD to interrogate the Discoverer session to determine the actual SQL, and compare this SQL and its explain plan to the SQL you ran in TOAD.
    Thirdly, check that the session context is the same in both cases. So check that any custom contexts and the USER_ENV context are the same, and, if any security packages or VPD policies are used in the SQL, that they have been initialised the same way.
    If you still cannot determine the difference, then trace both sessions. A short SQL sketch of these checks follows below.
    Rod West
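    A minimal SQL sketch of those checks, assuming made-up table, column and context names (substitute the real report SQL and your own context):
        -- Capture and display the plan for the SQL pulled out of the Discoverer session
        EXPLAIN PLAN FOR
        SELECT dept_name, SUM(amount)
        FROM   sales_fact
        GROUP BY dept_name;
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
        -- Confirm the session context is the same in both tools
        -- ('MY_CTX' / 'SOME_ATTRIBUTE' stand in for whatever custom context the security packages set)
        SELECT SYS_CONTEXT('USERENV', 'SESSION_USER')   AS session_user_name,
               SYS_CONTEXT('USERENV', 'CURRENT_SCHEMA') AS current_schema,
               SYS_CONTEXT('MY_CTX', 'SOME_ATTRIBUTE')  AS custom_context_value
        FROM   dual;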

  • iPhone very slow to open apps and the notification window for the last two days

    Hi, for the last two days my iPhone (an iPhone 4 with iOS 5) has been very slow to open apps and very slow when I check the notification window; it takes too much time to open when I swipe down. Please help me resolve the issue.

    The basic troubleshooting steps are:
    Restart... Reset... Restore...
    iPhone Reset
    http://support.apple.com/kb/ht1430
    Try this first... You will not lose any data...
    Turn the phone off...
    Press and hold the Sleep/Wake button and the Home button at the same time...
    Wait for the Apple logo to appear and then disappear...
    This usually takes about 15-20 seconds... (but it can take longer...)
    Release the buttons...
    Turn the phone on...
    If that does not help, see here:
    Backing up, Updating and Restoring
    http://support.apple.com/kb/HT1414

  • Load is taking too much time & going to short dump.

    Hi,
    We have around 1,000 records to be loaded from R/3. The load is taking too much time and it is still running.
    When we tried to manually update the records, we got the error 'converting the rate into indirect quote: CAD/USD (ex. rate type EURX)'. Has anyone faced this type of error?
    Thanks,
    APR

    Hi,
    What are you trying to do? If you want to load exchange rates, this can be done via the context menu of the appropriate source system (Transfer Global Settings / Exchange Rates).

  • Report taking too much time in the portal

    Hi friends,
    We have developed a report on the ODS and published it on the portal.
    The problem is that when users execute the report at the same time, it takes too much time, so the performance is very poor.
    Is there any way to sort out this issue? For example, can we send the report to the individual users' email IDs so that they do not have to log in to the portal, or can we create the same report on the cube?
    What would be the main difference between a report built on the cube and one built on the ODS?
    Please help me.
    Thanks in advance,
    Sridath

    Hi
    Try the following to improve the performance of the query.
    First find the query run-time. Where do you find the query run-time? See SAP Notes 557870 ('FAQ BW Query Performance') and 130696 ('Performance trace in BW'); this info may be helpful.
    General tips:
    Use aggregates and compression.
    Use fewer and less complex cell definitions where possible.
    1. Avoid using too many navigational attributes.
    2. Avoid RKFs and CKFs.
    3. Avoid too many characteristics in the rows.
    Use T-codes ST03 or ST03N:
    Go to transaction ST03 > switch to expert mode > from the left-side menu, under the system load history and distribution for a particular day, check the query execution time.
    /people/andreas.vogel/blog/2007/04/08/statistical-records-part-4-how-to-read-st03n-datasets-from-db-in-nw2004
    /people/andreas.vogel/blog/2007/03/16/how-to-read-st03n-datasets-from-db
    Try table RSDDSTATS to get the statistics.
    Using cache memory will decrease the loading time of the report:
    run the reporting agent at night and send the results by email. This ensures use of the OLAP cache, so later report executions will retrieve the result faster from the OLAP cache.
    Also try:
    1. Use different parameters in ST03 to see the two important figures: the aggregation ratio and the ratio of records transferred to the front end to records selected from the DB.
    2. Use the program SAP_INFOCUBE_DESIGNS (Performance of BW InfoCubes) to see the aggregation ratio for the cube. If the cube does not appear in the list produced by this report, try running RSRV checks on the cube and its aggregates.
    Go to SE38 > run the program SAP_INFOCUBE_DESIGNS; it shows the dimension vs. fact table sizes in percent. If you take the speed of queries on a cube as its performance metric, measure the query runtime.
    3. To check the performance of the aggregates, open the aggregates and observe the VALUATION and USAGE columns.
    The signs in the VALUATION column rate the aggregate's design and usage. The more plus signs, the better the aggregate is evaluated and the more queries it satisfies; the more minus signs, the worse the evaluation. '++' means its compression is good and it is accessed often (performance is good); '--' means the compression ratio and access are not so good. '+++++' means the aggregate is potentially very useful, while '-----' means it is just overhead and can potentially be deleted.
    The USAGE column shows how far the aggregate has been used by queries.
    Thus you can check the performance of each aggregate.
    Refer to:
    http://help.sap.com/saphelp_nw70/helpdata/en/b8/23813b310c4a0ee10000000a114084/content.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/60/f0fb411e255f24e10000000a1550b0/frameset.htm
    Performance issues related to aggregates:
    Note 356732 - Performance Tuning for Queries with Aggregates
    Note 166433 - Options for finding aggregates (find optimal aggregates for an InfoCube)
    4. Run your query in RSRT in debug mode. Select "Display Aggregates Found" and "Do not use cache" in the debug options. This tells you whether the query hit any aggregates while running; if it shows none, you might want to redesign your aggregates for the query.
    Also, your query performance can depend on the selection criteria; since you have a selection on only one InfoProvider, just check whether you are selecting a huge amount of data in the report.
    Check the query read mode in RSRT (whether it is A, X or H); the advisable read mode is X.
    5. In BI 7, statistics need to be activated for ST03 and the BI Admin Cockpit to work. Implement the BW Statistics Business Content: you need to install it, feed it data, and then analyse it through the ready-made reports.
    http://help.sap.com/saphelp_nw70/helpdata/en/26/4bc0417951d117e10000000a155106/frameset.htm
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    http://help.sap.com/saphelp_nw04/helpdata/en/c1/0dbf65e04311d286d6006008b32e84/frameset.htm
    You can go to T-code DB20, which gives you performance-related information such as:
    Partitions
    Databases
    Schemas
    Buffer Pools
    Tablespaces etc.
    Use the tool RSDDK_CHECK_AGGREGATE in SE38 to check for corrupt aggregates.
    If aggregates contain incorrect data, you must regenerate them. See:
    Note 202469 - Using aggregate check tool
    Note 646402 - Programs for checking aggregates (as of BW 3.0B SP15)
    You can find out whether an aggregate is useful or useless through a process of checking the tables RSDDSTATAGGRDEF*:
    run the query in RSRT with "execute with statistics", note the STATUID you get, and look it up in those tables.
    This tells you exactly which InfoObjects the query hits; if any one of the objects is missing, the aggregate is useless.
    6. Check SE11 > table RSDDAGGRDIR; you can find the last call-up of each aggregate in that table.
    Generate Report in RSRT
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/cccad390-0201-0010-5093-fd9ec8157802
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Business Intelligence Journal Improving Query Performance in Data Warehouses
    http://www.tdwi.org/Publications/BIJournal/display.aspx?ID=7891
    Achieving BI Query Performance Building Business Intelligence
    http://www.dmreview.com/issues/20051001/1038109-1.html
    Assign points if useful
    Cheers
    SM

  • SAP GUI taking too much time to open transactions

    Hi guys,
    I have completed a system copy from the Production server to the Quality server.
    After that I started SAP on the Quality server, and it is taking too much time to open SAP transactions (it goes into compilation mode).
    I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
    My hardware details on the Quality server:
    operating system : SuSE Linux 10 SP2
    Database : 10.2.0.2
    SAP : ECC 6.0  SR2
    RAM size : 8 GB
    Hard disk space : 500 GB
    swap space : 16 GB.
    regards
    Ramesh

    Hi,
    > I started SGEN, but it is giving TIME_OUT errors. Please help me with this issue.
    You are supposed to run SGEN as a batch job, and so it is possible to get time-out errors.
    I've seen a full SGEN last anywhere from 3 hours on high-end systems up to 8 full days on PC hardware...
    Regards,
    Olivier

  • Taking too much time (1min) to connect to database

    Hi,
    I have Oracle 10.2 and Oracle Application Server 10g.
    It's taking too much time to connect to the database through the application (in the browser). The connection through SQL*Plus is fine.
    Please share your experience.
    Regards,
    Naseer

    Dear AnaTech,
    I am going to ask something not related to the question you already answered: how to connect Forms 6i and Developer 10g with OracleAS.
    I have installed Developer Suite 10g version 10.1.2 and Form Builder 6i, and both are working. On my other machine I have installed Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, and on that database machine I have also installed Oracle Enterprise Manager 10g Application Server Control 10.1.2.0.2.
    My database connectivity with Developer Suite Forms and Reports, and also with Forms 6i and Reports 6i, is working fine; no problem there.
    My first question is that when I try to run a Forms 6i form through Run from Web, I get this error: FRM-99999: error 18121 occurred, see the release notes.
    My main question is how I can control my OracleAS 10g with Forms: the basic purpose of OracleAS is the mid-tier, but I am not utilizing the mid-tier. I am using a two-tier environment even though I installed a three-tier environment, so please tell me how to utilize it as three-tier.
    I hope you don't mind that I ask this question here. If you give me your email we can discuss this in detail, and I could benefit from your great expertise; I also want to get real use out of my three-tier environment.
    Waiting for your great response.
    Regards,
    K.J.J.C

  • BPC application is taking too much time to load

    Hi experts!
    I'm facing a very weird problem...
    We've developed a BPC application (app name: USM).
    This application is taking too much time to load on some computers (around 8 minutes). Yes, on SOME computers.
    There are around 100,000 records in the database, most of them coming from material master data.
    If I load this USM application on another computer, the process loads smoothly. The computers' hardware is all the same, the server is generously sized, and everyone is on the same network.
    I talked to the infrastructure department and we ran several tests. We ran BPC on the server (it loaded quickly) and on several computers (some load quickly, others don't), used both wireless and cable connections (same result either way), and checked the communication between BW and BPC, which is OK.
    After all that, I tried to load the APSHEL application in the same environment and it loaded instantly. So I guess something is wrong with my application; but if that were the case, I suppose it should happen on all computers and not only on some of them.
    Has anybody ever seen something like this?
    Thank you in advance.
    Rubens

    Hi Rubens,
    I would try a couple of tests:
    1. Install the client on a machine located in the same network segment, or try using a VPN that communicates with the server bypassing all security devices, just to see whether the network is the problem.
    2. Run a full optimize of the application to see whether the problem is related to the segmentation of the cubes (I don't think this is the problem, but give it a try).
    It is very weird that it happens on some computers and not on others... Also try cleaning up the local application cache on the computers that are giving you bad performance, and retry.
    Hope it helps,

  • Job is taking too much time during Delta loads

    hi
    When I tried to extract delta records from R/3 for the standard extractor 0FI_GL_4, it took 46 minutes even though there are very few delta records (only 193).
    Please find attached the R/3 job log; most of the time is spent in the call of the customer enhancement BW_BTE_CALL_BW204010_E.
    Please let me know why this is taking so much time.
    06:10:16  4 LUWs confirmed and 4 LUWs to be deleted with FB RSC2_QOUT_CONFIRM_DATA
    06:56:46  Call up of customer enhancement BW_BTE_CALL_BW204010_E (BTE) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Call up of customer enhancement EXIT_SAPLRSAP_001 (CMOD) with 193 records
    06:56:46  Result of customer enhancement: 193 records
    06:56:46  Asynchronous sending of data package 1 in task 0002 (1 parallel tasks)
    06:56:47  IDOC: InfoIDOC 2, IDOC no. 121289649, duration 00:00:00
    06:56:47  IDOC: Begin 09.05.2011 06:10:15, end 09.05.2011 06:10:15
    06:56:48  Asynchronous sending of InfoIDOCs 3 in task 0003 (1 parallel tasks)
    06:56:48  Through selection conditions, 0 records filtered out in total
    06:56:48  IDOC: InfoIDOC 3, IDOC no. 121289686, duration 00:00:00
    06:56:48  IDOC: Begin 09.05.2011 06:56:48, end 09.05.2011 06:56:48
    06:56:54  tRFC: Data package 1, TID = 3547D5D96D2C4DC7740F217E, duration 00:00:07, ARFCSTATE =
    06:56:54  tRFC: Begin 09.05.2011 06:56:47, end 09.05.2011 06:56:54
    06:56:55  Synchronous sending of InfoIDOCs 4 (0 parallel tasks)
    06:56:55  IDOC: InfoIDOC 4, IDOC no. 121289687, duration 00:00:00
    06:56:55  IDOC: Begin 09.05.2011 06:56:55, end 09.05.2011 06:56:55
    06:56:55  Job finished
    Regards
    Atul

    Hi Atul,
    Have you written any customer exit code? If yes, check it for optimization.
    Kind Regards,
    Ashutosh Singh

  • Spatial query with sdo_aggregate_union taking too much time

    Hello friends,
    The following query is taking too much time to execute.
    table1 contains around 2,000 records.
    table2 contains 124 rows.
    SELECT
      table1.id
    , table1.txt
    , table1.id2
    , table1.acti
    , table1.acti
    , table1.geom AS geom
    FROM table1
    WHERE sdo_relate(
            table1.geom,
            (SELECT sdo_aggr_union(sdoaggrtype(geom, 0.0005)) FROM table2),
            'mask=ANYINTERACT querytype=window'
          ) = 'TRUE'
    I am new to Spatial. I am trying to find the list of geometries that fall within the geometry stored in table2.
    Thanks

    Hi, thanks a lot for your reply.
    But is it not necessary to use the sdo_aggregate function to find out whether a geometry in one table lies within another geometry?
    Let me give you a clearer picture...
    What I am trying to do is this: table1 contains the list of all stations (station information) of the state and table2 contains the list of city areas. So I want to find the stations that belong to a city.
    For this I thought of getting the aggregated union of the city areas and then checking for any interaction of that final aggregated result with the station geometry, to determine whether the station is in the city or not.
    I hope this helps you understand my query.
    Thanks
    I appreciate your efforts. (A sketch of the per-row approach follows below.)
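    A minimal sketch of the per-row alternative, assuming the column names from the query above and that both geometry columns are spatially indexed:
        -- Stations that interact with any city area, checked row by row
        -- instead of against one aggregated union of all areas.
        SELECT DISTINCT s.id, s.txt
        FROM   table1 s,
               table2 c
        WHERE  SDO_ANYINTERACT(s.geom, c.geom) = 'TRUE';
    If strict containment is needed rather than any interaction, SDO_INSIDE(s.geom, c.geom) = 'TRUE' can be used in the same way.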

  • Query is taking too much time

    hi
    The following query is taking too much time (more than 30 minutes); I am working with 11g.
    The table has three columns (rid, ida, geometry) and indexes have been created on all columns.
    The table has around 540,000 records of point geometries.
    Please help me with your suggestions. I want to select the duplicate point geometries where ida = 'CORD'.
    SQL> select a.rid, b.rid from totalrecords a, totalrecords b where a.ida='CORD' and b.idat='CORD' and
    sdo_equal(a.geometry, b.geometry)='TRUE' and a.rid !=b.rid order by 1,2;
    regards

    I have removed some AND conditions that were not necessary. It's just that Oracle can see, for example, in
    a.ida='CORD' AND
    b.idat='CORD' AND
    a.rid !=b.rid AND
    sdo_equal(a.geometry, b.geometry)='TRUE'
    ORDER BY 1,2;
    that if a.ida does not equal 'CORD', the whole set of conditions evaluates to FALSE, so Oracle will not bother evaluating the rest of the conditions, because it's all ANDed together and TRUE AND FALSE = FALSE.
    So if you place your least expensive conditions first (even though the optimizer can and will reorder conditions), this gives you a small performance benefit. Usually too small to notice, but on 5.4 million records it should be noticeable.
    "... and I have set layer_gtype=POINT." Good, that will help. I forgot about that one (thanks Luc!).
    "Now I am facing the problem of deleting the duplicate point geometries. The following query is taking too much time." What is too much time? Do you need to delete these duplicate points on a daily or hourly basis, or is this a one-time cleanup action? If it's a one-time cleanup operation, does it really matter if it takes half an hour?
    And if this is a daily or even hourly operation, then why don't you prevent the duplicates from entering the table in the first place? That would save you from having to clean up afterwards. Of course, this might not be possible with your business requirements.
    Lastly: can you post an explain plan for your queries? That might give us an idea of what is taking so much time. Please enclose the results of the explain plan in
    [code]
    <code/results here>
    [/code]
    tags; that way the original formatting is kept and it makes things much easier to read. A short sketch of generating the plan follows below.
    Regards,
    Stefan
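    A minimal sketch of how that explain plan could be captured, reusing the query from the original post:
        -- Capture and display the execution plan for the duplicate-detection query
        EXPLAIN PLAN FOR
        SELECT a.rid, b.rid
        FROM   totalrecords a, totalrecords b
        WHERE  a.ida  = 'CORD'
        AND    b.idat = 'CORD'
        AND    sdo_equal(a.geometry, b.geometry) = 'TRUE'
        AND    a.rid != b.rid
        ORDER  BY 1, 2;
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);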

  • Delta Sync taking too much time on refreshing of tables

    Hi,
    I am working on Smart Service Manager 3.0 and have come across a scenario where the delta sync is taking too much time.
    The requirement is that if we update the stock quantity, the stock should be updated instantaneously.
    To do this we have to refresh 4 stock tables at every sync so that the updated quantity is reflected on the device.
    This takes a lot of time (3 to 4 minutes), which is highly unacceptable from a user perspective.
    Could anyone please suggest something so that only the tables on which an action was carried out get refreshed?
    For example, the CTStock table should get refreshed only if I transfer stock, and be updated accordingly,
    not in any other scenario, such as changing status from accept to driving or anything else unrelated to stocks.
    Thanks,
    Star

    Hi fiontan,
    Thanks a lot for the response!!!
    Yeah, I know it's a lot of code, but I thought it would be more informative if the whole function was quoted.
    I am in fact using the PrintWriter to wrap the BufferedWriter, but am not using the print() method.
    Does it save any time to use the print() method?
    The place where the delay is occurring is the while loop shown below:
    while (allitems.hasMoreElements()) {
        String aRow = "";
        XDItem item = (XDItem) allitems.nextElement();
        // Build one tab-separated row from the item's property values
        for (int i = 0; i < props.length; i++) {
            String value = item.getStringValue(props[i]); // the array index was lost in the original post
            if (value == null || value.equalsIgnoreCase("null"))
                value = "";
            if (i == 0)
                aRow = value;
            else
                aRow += ("\t" + value);
        }
        startTime1 = System.currentTimeMillis();
        System.out.println("time here is--before-writing to buffer --out.flush() done: " + startTime1);
        bufferWrt.write(aRow.toCharArray());
        out.flush();       // added by rosmon to check extra time taken for extraction
        bufferWrt.flush(); // added by rosmon to check extra time taken for extraction
        bufferWrt.newLine();
        startTime2 = System.currentTimeMillis();
        System.out.println("time here is--after-writing to buffer : " + startTime2);
    }
    What exactly happens is that after a few loops it just seems to sleep for around 20 seconds, then starts off again... and it goes on like that until the records are done.
    Please do let me know if you have any idea why this is happening! This bug is giving me a scare.
    Thanks in advance

  • ACCTIT table Taking too much time

    Hi,
    In SE16, in the ACCTIT table, I entered the G/L account number and then executed it in production; it is taking too much time to show the result.
    Thank you

    Hi,
    Here I am sending the details of the technical settings.
    Name                 ACCTIT                          Transparent Table
    Short text            Compressed Data from FI/CO Document
    Last Change        SAP              10.02.2005
    Status                 Active           Saved
    Data class         APPL1   Transaction data, transparent tables
    Size category      4       Data records expected: 24,000 to 89,000
    Thank you

  • Taking too much time to load application

    Hi,
    I have deployed a J2EE application on Oracle Application Server 10g version 10.1.2.0.2, but the application is taking too much time to load. After loading, everything works fast.
    I have another 10g server (same version) on which the same application loads very fast.
    When I checked the Apache error logs I found this:
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 09:17:31 2007] [warn] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 09:17:31 2007] [error] [client 10.1.20.9] [ecid: 89128867058,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:36:36 2007] [notice] FastCGI: process manager initialized (pid 21177)
    [Thu Apr 26 11:36:37 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:36:37 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:36:37 2007] [warn] long lost child came home! (pid 9124)
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0015: recv() returns 0. There has no message available to be received and oc4j has gracefully (orderly) closed the connection.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:39:51 2007] [warn] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0184: Failed to find an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0145: There is no oc4j process (for destination: home) available to service request.
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0119: Failed to get an oc4j process for destination: home
    [Thu Apr 26 11:39:51 2007] [error] [client 10.1.20.9] [ecid: 80547835731,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    [Thu Apr 26 11:46:33 2007] [notice] FastCGI: process manager initialized (pid 21726)
    [Thu Apr 26 11:46:34 2007] [notice] Oracle-Application-Server-10g/10.1.2.0.2 Oracle-HTTP-Server configured -- resuming normal operations
    [Thu Apr 26 11:46:34 2007] [notice] Accept mutex: fcntl (Default: sysvsem)
    [Thu Apr 26 11:46:34 2007] [warn] long lost child came home! (pid 21182)
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] oc4j_socket_recvfull timed out
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] (4)Interrupted system call: MOD_OC4J_0038: Receiving data from oc4j exceeded the configured "Timeout" value and the error code is 4.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0054: Failed to call network routine to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0033: Failed to receive an ajp13 message from oc4j.
    [Thu Apr 26 11:53:32 2007] [warn] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0078: Network connection errors happened to host: lawdb.keralalawsect.org and port: 12501 while receiving the first response from oc4j. This request is recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0121: Failed to service request with network worker: home_15 and it is not recoverable.
    [Thu Apr 26 11:53:32 2007] [error] [client 10.1.20.9] [ecid: 89138452752,1] MOD_OC4J_0013: Failed to call destination: home's service() to service the request.
    Please HELP ME...

    Hi, this is the solution given by your link:
    A.1.6 Connection Timeouts Through a Stateful Firewall Affect System Performance
    Problem
    To improve performance, the mod_oc4j component in each Oracle HTTP Server process maintains open TCP connections to the AJP port within each OC4J instance it sends requests to.
    In situations where a firewall exists between OHS and OC4J, packets sent via AJP are rejected if the connections are idle for periods in excess of the inactivity timeout of stateful firewalls.
    However, the AJP socket is not closed; as long as the socket remains open, the worker thread is tied to it and is never returned to the thread pool. OC4J will continue to create more threads and will eventually exhaust system resources.
    Solution
    The OHS TCP connection must be kept "alive" to avoid firewall timeout issues. This can be accomplished using a combination of OC4J configuration parameters and Apache runtime properties.
    Set the following parameters in the httpd.conf or mod_oc4j.conf configuration files. Note that the value of Oc4jConnTimeout sets the length of inactivity, in seconds, before the session is considered inactive.
    Oc4jUserKeepalive on
    Oc4jConnTimeout 12000 (or a similar value)
    Also set the following AJP property at OC4J startup to enable OC4J to close AJP sockets in the event that a connection between OHS and OC4J is dropped due to a firewall timeout:
    ajp.keepalive=true
    For example:
    java -Dajp.keepalive=true -jar oc4j.jar
    Please tell me where, or in which file, I should put the option
    java -Dajp.keepalive=true -jar oc4j.jar ?

  • Taking too much time in Rules(DTP Schedule run)

    Hi,
    I am scheduling a DTP which has filters to minimize the data loaded.
    When I run the DTP it takes too much time in the "rules" step (in the DTP monitor I can see the status package by package and step by step, like "Start Routine", "Rules" and "End Routine").
    It is consuming too much time in the rules mapping.
    What is the problem, and are there any solutions?
    Regards,
    Sree

    Hi,
    The time taken at "rules" depends on the complexity of your routine; if it is a complex calculation, it will take time.
    Also check your DTP batch settings, i.e. how many background processes are used to perform the DTP and which job class is used.
    You can find these as follows:
    go to the DTP, open the Goto menu and select "Settings for Batch Manager".
    In that screen, increase the number of processes from 3 to a higher number (max 9)
    and change the job class to 'A'.
    If your DTP is still running, cancel it (i.e. kill the DTP), delete the request from the cube,
    change these settings and run your DTP one more time.
    You will observe the difference.
    Reddy
