Is there any solution to [SQL0904] Resource limit exceeded?

I got the error below while connecting to DB2:
java.sql.SQLException: [SQL0904] Resource limit exceeded.
     at com.ibm.as400.access.JDError.throwSQLException(JDError.java:650)
     at com.ibm.as400.access.JDError.throwSQLException(JDError.java:621)
     at com.ibm.as400.access.AS400JDBCStatement.commonPrepare(AS400JDBCStatement.java:1506)
     at com.ibm.as400.access.AS400JDBCPreparedStatement.<init>(AS400JDBCPreparedStatement.java:185)
     at com.ibm.as400.access.AS400JDBCConnection.prepareStatement(AS400JDBCConnection.java:1903)
     at com.ibm.as400.access.AS400JDBCConnection.prepareStatement(AS400JDBCConnection.java:1726)
     at com.dst.hps.mhsbenefits.DBLoader.getMappingCount(DBLoader.java:1135)
     at com.dst.hps.mhsbenefits.DBLoader.countAllItems(DBLoader.java:772)
     at com.dst.hps.mhsbenefits.DBLoader.doLoadWorker(DBLoader.java:432)
     at com.dst.hps.mhsbenefits.DBLoader.access$1(DBLoader.java:357)
     at com.dst.hps.mhsbenefits.DBLoader$1.run(DBLoader.java:2996)
Database connection closed.
Is there any solution from the Java programming side to resolve this issue?

Check your code to make sure that you're closing your Connections, Statements, and ResultSets properly. My guess is that you are not, so your application has exhausted these scarce resources.
If you don't know what "closing properly" looks like, here's a decent link:
http://sdnshare.sun.com/view.jsp?id=538
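In your stack trace the prepare happens in DBLoader.getMappingCount, so that is a natural place to check first. Here is a hypothetical sketch of what a leak-free version might look like - the SQL text, table name, and method body are made up, only the method name comes from your trace:
// Hypothetical version of getMappingCount(); the SQL and table name are
// placeholders, not taken from the original DBLoader class.
private int getMappingCount(Connection connection) throws SQLException {
   PreparedStatement ps = null;
   ResultSet rs = null;
   try {
      ps = connection.prepareStatement("SELECT COUNT(*) FROM SOME_MAPPING_TABLE");
      rs = ps.executeQuery();
      return rs.next() ? rs.getInt(1) : 0;
   } finally {
      // Close in reverse order of creation; each close is guarded so a failure
      // on one resource does not prevent the next one from being released.
      if (rs != null) { try { rs.close(); } catch (SQLException ignore) { } }
      if (ps != null) { try { ps.close(); } catch (SQLException ignore) { } }
   }
}
The same pattern, wrapped in reusable helpers, is what the following class demonstrates.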
This class does it right:
package jdbc;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConnectionTester {
   private static final String DEFAULT_DRIVER = "oracle.jdbc.OracleDriver";
   private static final String DEFAULT_URL = "jdbc:oracle:thin:@host:1521:SID";
   private static final String DEFAULT_USERNAME = "username";
   private static final String DEFAULT_PASSWORD = "password";

   public static void main(String[] args) {
      String sql = ((args.length > 0) ? args[0] : "");
      String driver = ((args.length > 1) ? args[1] : DEFAULT_DRIVER);
      String url = ((args.length > 2) ? args[2] : DEFAULT_URL);
      String username = ((args.length > 3) ? args[3] : DEFAULT_USERNAME);
      String password = ((args.length > 4) ? args[4] : DEFAULT_PASSWORD);
      Connection connection = null;
      try {
         connection = connect(driver, url, username, password);
         if (!"".equals(sql.trim())) {
            List<Map<String, Object>> result = executeQuery(connection, sql);
            for (int i = 0; i < result.size(); i++) {
               Object o = result.get(i);
               System.out.println(o);
            }
         } else {
            System.out.println("sql cannot be blank");
         }
      } catch (ClassNotFoundException e) {
         e.printStackTrace();
      } catch (SQLException e) {
         e.printStackTrace();
      } finally {
         close(connection); // the connection is released even if the query failed
      }
   }

   public static Connection connect(String driver, String url, String username, String password) throws ClassNotFoundException, SQLException {
      Class.forName(driver);
      return DriverManager.getConnection(url, username, password);
   }

   // The close() overloads below are null-safe and swallow SQLException,
   // so they can be called unconditionally from a finally block.
   public static void close(Connection connection) {
      try {
         if (connection != null) {
            connection.close();
         }
      } catch (SQLException e) {
         e.printStackTrace();
      }
   }

   public static void close(Statement statement) {
      try {
         if (statement != null) {
            statement.close();
         }
      } catch (SQLException e) {
         e.printStackTrace();
      }
   }

   public static void close(ResultSet resultSet) {
      try {
         if (resultSet != null) {
            resultSet.close();
         }
      } catch (SQLException e) {
         e.printStackTrace();
      }
   }

   public static void rollback(Connection connection) {
      try {
         if (connection != null) {
            connection.rollback();
         }
      } catch (SQLException e) {
         e.printStackTrace();
      }
   }

   // Maps each row to a column-name/value Map; the Statement and ResultSet
   // are always closed in the finally block before the result is returned.
   public static List<Map<String, Object>> executeQuery(Connection connection, String sql) throws SQLException {
      List<Map<String, Object>> result = new ArrayList<Map<String, Object>>();
      Statement statement = null;
      ResultSet resultSet = null;
      try {
         statement = connection.createStatement();
         resultSet = statement.executeQuery(sql);
         ResultSetMetaData metaData = resultSet.getMetaData();
         int numColumns = metaData.getColumnCount();
         while (resultSet.next()) {
            Map<String, Object> row = new LinkedHashMap<String, Object>();
            for (int i = 0; i < numColumns; ++i) {
               String columnName = metaData.getColumnName(i + 1);
               row.put(columnName, resultSet.getObject(i + 1));
            }
            result.add(row);
         }
      } finally {
         close(resultSet);
         close(statement);
      }
      return result;
   }
}
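If you are on Java 7 or later, the same discipline is easier to get right with try-with-resources, which closes everything in reverse order even when an exception is thrown. Here is a minimal sketch against the jt400 driver shown in the stack trace; the system name, credentials, schema, and query are placeholders, not taken from the original post:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CountWithTryWithResources {
   public static void main(String[] args) throws SQLException {
      // Placeholder connection details; substitute your own system, user and password.
      // Recent jt400 versions register com.ibm.as400.access.AS400JDBCDriver automatically;
      // with older versions load it first via Class.forName.
      String url = "jdbc:as400://mysystem";
      String sql = "SELECT COUNT(*) FROM MYLIB.MYTABLE";
      try (Connection con = DriverManager.getConnection(url, "user", "password");
           PreparedStatement ps = con.prepareStatement(sql);
           ResultSet rs = ps.executeQuery()) {
         if (rs.next()) {
            System.out.println("count = " + rs.getLong(1));
         }
      } // rs, ps and con are all closed here, even if the query throws
   }
}
Either way the point is the same: if statements and result sets are only closed on the happy path, a long-running loader like the one in your stack trace leaks handles until the server reports SQL0904. Checking the reason code on the server side (as discussed in the first thread below) tells you which resource actually ran out.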

Similar Messages

  • PI 7.11 - SXI_CACHE Issue - SQL0904 - Resource limit exceeded

    Hi IBM I Gurus,
    We are having an SXI_CACHE issue in our PI 7.11 SPS04 system. When we try to do a delta cache refresh through SXI_CACHE, it returns the error SQL0904 - Resource limit exceeded. When we try to do a full cache refresh, we get the error 'Application issue during request processing'.
    We have cleaned up the SQL packages with the DLTR3PKG command, which did not resolve the issue. We recently performed a system copy to build the QA instance; I noticed that the development adapter engine cache was present in the QA instance and removed it from there.
    I am not seeing the adapter engine connection data cache in our PI system. The adapter engine cache itself is working fine.
    All the caches are working fine from the PI Administration page, but the cache connectivity test fails with the same error I mentioned for SXI_CACHE.
    Please let me know if you have encountered any issue like this on IBM I 6.1 Platform.
    Your help is highly appreciated.
    Thanks
    Kalyan

    Hi Kalyan,
    SQL0904 has different reason codes ... which one are you seeing?
    Is the SQL package really at its 1 GB boundary?
    Otherwise it is perhaps a totally different issue, and then DLTR3PKG cannot help at all.
    If you do see such a big SQL package, use PRTSQLINF to check whether it contains more or less the same SQL over and over, just with different host variables.
    If that is the case, I would open a message on BC-DB-DB4 so that they can check how to help here, or talk to the application people about behaving a bit differently.
    Regards
    Volker Gueldenpfennig, consolut international ag
    http://www.consolut.com http://www.4soi.de http://www.easymarketplace.de

  • Database error text.. "Resource limit exceeded. MSGID=  Job=753531/IBPADM/WP05"

    Hello All,
    We are getting the runtime error below:
    Runtime Errors         DBIF_RSQL_SQL_ERROR
    Exception              CX_SY_OPEN_SQL_DB
    Short text
        SQL error in the database when accessing a table.
    What can you do?
        Note which actions and input led to the error.
        For further help in handling the problem, contact your SAP administrator
        You can use the ABAP dump analysis transaction ST22 to view and manage
        termination messages, in particular for long term reference.
    How to correct the error
        Database error text........: "Resource limit exceeded. MSGID=  Job=753531/IBPADM/WP05""
       Internal call code.........: "[RSQL/OPEN/PRPS ]"
        Please check the entries in the system log (Transaction SM21).
        If the error occurs in a non-modified SAP program, you may be able to
        find an interim solution in an SAP Note.
        If you have access to SAP Notes, carry out a search with the following
        keywords:
        "DBIF_RSQL_SQL_ERROR" "CX_SY_OPEN_SQL_DB"
        "SAPLCATL2" or "LCATL2U17"
        "CATS_SELECT_PRPS"
        If you cannot solve the problem yourself and want to send an error
        notification to SAP, include the following information:
        1. The description of the current problem (short dump)
           To save the description, choose "System->List->Save->Local File
        (Unconverted)".
        2. Corresponding system log
           Display the system log by calling transaction SM21.
           Restrict the time interval to 10 minutes before and five minutes
    after the short dump. Then choose "System->List->Save->Local File
    (Unconverted)".
    3. If the problem occurs in a problem of your own or a modified SAP
    program: The source code of the program
        In the editor, choose "Utilities->More
    Utilities->Upload/Download->Download".
    4. Details about the conditions under which the error occurred or which
    actions and input led to the error.
    System environment
        SAP-Release 700
        Application server... "SAPIBP0"
        Network address...... "3.14.226.140"
        Operating system..... "OS400"
        Release.............. "7.1"
           Character length.... 16 Bits
        Pointer length....... 64 Bits
        Work process number.. 5
        Shortdump setting.... "full"
        Database server... "SAPIBP0"
        Database type..... "DB400"
        Database name..... "IBP"
        Database user ID.. "R3IBPDATA"
        Terminal................. "KRSNBRB032"
        Char.set.... "C"
        SAP kernel....... 721
        created (date)... "May 15 2013 01:29:20"
        create on........ "AIX 1 6 00CFADC14C00 (IBM i with OS400)"
        Database version. "DB4_71"
    Patch level. 118
    Patch text.. " "
    Database............. "V7R1"
    SAP database version. 721
    Operating system..... "OS400 1 7"
    Memory consumption
    Roll.... 0
    EM...... 12569376
    Heap.... 0
    Page.... 2351104
    MM Used. 5210400
    MM Free. 3166288
    Information on where terminated
        Termination occurred in the ABAP program "SAPLCATL2" - in "CATS_SELECT_PRPS".
        The main program was "CATSSHOW ".
        In the source code you have the termination point in line 67
        of the (Include) program "LCATL2U17".
        The termination is caused because exception "CX_SY_OPEN_SQL_DB" occurred in
        procedure "CATS_SELECT_PRPS" "(FUNCTION)", but it was neither handled locally
         nor declared
        in the RAISING clause of its signature.
    The procedure is in program "SAPLCATL2"; its source code begins in line 1 of the (Include) program "LCATL2U17".
    The exception must either be prevented, caught within procedure "CATS_SELECT_PRPS" "(FUNCTION)", or its possible occurrence must be declared in the RAISING clause of the procedure.
    To prevent the exception, note the following:
    SM21: Log
    Database error -904 at PRE access to table PRPS
    > Resource limit exceeded. MSGID= Job=871896/VGPADM/WP05
    Run-time error "DBIF_RSQL_SQL_ERROR" occurred
    Please help
    Regards,
    Usha

    Hi Usha
    Could you check these SAP Notes:
    1930962 - IBM i: Size restriction of database tables
    1966949 - IBM i: Runtime error DBIF_DSQL2_SQL_ERROR in RSDB4UPD
    BR
    SS

  • Error 1074388947 at CAN Open Object, Exceed resource limit

    Dear All,
    Basically, I'm listening to 5 frames from NI-CAN, and I can't listen to the sixth...
    To be more precise, I'm listening to STAT1, STAT2, EXT1, EXT2 and I'm sending commands to CMD.
    When I try to add a listener for the BITE frame, I get the error:
    "1074388947 occurred at NI-CAN Open Object (ncOpen.vi)
    NI-CAN: Exceeded resource limit for queues in shared memory between firmware/driver. The ncReadmult function is not allowed. Solutions: Decrease queue lengths in objects; Set read queue length to at least 2; Decrease number of CAN Objects."
    I can't see any significant parameter to change...
    I'm doing an ncConfigCANNET + ncOpen + ncSetAttr, then for each frame: ncConfigCANObj + ncOpen + ncSetAttr + ncCreateOccur.
    It's when I add the sixth frame that I get the error at the output of ncOpen.
    Does anyone know what's happening?
    Thanks for any help you could give me, even if it's just a thought... Thanks in advance ! I'm quite desperate actually...
    Laurent

    Hi,
    Here is a KB article which could help:
    http://digital.ni.com/public.nsf/allkb/CC36BA1DD421EC23862569060054F6E2?OpenDocument

  • When Safari opens, the page exceeds the width of my screen.  Any solutions?  Have tried adjusting the font.

    When Safari opens, the page exceeds the width of my screen.  Any solutions?  Have tried adjusting the font.

    Click the green button in the top left corner of the window to resize it. If you can't reach the button, drag the window to the right by its title bar. You can further resize by dragging any of the edges or corners. Once you've resized the window, Safari should remember the size.

  • FAST Search - Unable to Complete Query on BackEnd resource limit remporarily exceeded code:1013

    Hi ,
    We are using FAST Search in our SharePoint 2010 environment. I am encountering an issue while navigating to the 21st page, or any query string start value of 200 and above (URL -> /Pages/results.aspx?k=1&s=All&start1=201), in the search core results web part. The error message in the UI is "The search request was unable to execute on FAST Search Server".
    I have found the log below in the ULS.
    SearchServiceApplication::Execute--Exception: Microsoft.Office.Server.Search.Query.FASTSearchQueryException: Unable to complete query on backend Resource limit temporarily exceeded Code: 1013
    at Microsoft.Office.Server.Search.Query.Gateway.FastSearchGateway.GetQueryResult(IQuery query)
    at Microsoft.Office.Server.Search.Query.Gateway.FastSearchGateway.ExecuteSearch(SearchRequest request, String queryString)
    at Microsoft.Office.Server.Search.Query.Gateway.AbstractSearchGateway.Search(SearchRequest request)
    at Microsoft.Office.Server.Search.Query.FASTQueryInternal.ExecuteSearch(SearchRequest request, ResultTableCollection rtc)
    at Microsoft.Office.Server.Search.Query.FASTQueryInternal.Execute(QueryProperties properties)
    at Microsoft.Office.Server.Search.Administration.SearchServiceApplication.Execute(QueryProperties properties)
    Please share your thoughts or advise if someone has come across this issue. Thanks!

  • Time Limit exceeded error in R & R queue

    Hi,
    We are getting a 'Time limit exceeded' error in the R & R queue when we try to extract the data for a site.
    The error happens with the message SALESDOCGEN_O_W. Whenever the time limit error is encountered, the usual solution is to run the job in the background. But in this case, is there any possibility of running the particular subscription for sales documents in the background?
    Any pointers on this would be of great help.
    Thanks in advance,
    Regards,
    Rasmi.

    Hi Rasmi
    I suppose that the usual answer would be to increase the timeout for the R&R queue.
    We have increased the timeout on ours to 60 mins and that takes care of just about everything.
    The other thing to check would be the volume of data that is going to each site for SALESDOCGEN_O_W. These are pretty big BDocs and the sales force will not thank you for huge ConnTrans times.
    If you have a subscription for sales documents by business partner, then it is worth seeing if the business partner subscription could be made more intelligent to fit your needs.
    Regards
    James

  • Message getting stuck in XBQO queue - Time limit exceeded

    Hi All,
    We have a BPM scenario in our project (on PI 7.0 SP18), where bundle of PEXR2002 Payment IDocs are received as a single flat file. This file is then consumed by the BPM, to split the message into multiple payments using Java Mapping.
    However, when we get an IDoc file of a size greater than 5 MB (more than 500 IDocs), the message gets stuck in the XBQO queue and eventually gives a SYSFAIL with the message "Time limit exceeded". Could you please let us know if you have encountered a similar issue and are aware of a possible solution?
    Any pointers to this will be really appreciated.
    Thanks & Regards,
    ROSIE SASIDHARAN

    Hi Rosie,
    1) Go to SXMB_ADM -> Integration Engine Configuration -> Parameter EO_MSG_SIZE_LIMIT -> possible values 0 - 2,097,151 (KB).
    The parameter EO_MSG_SIZE_LIMIT enables serial processing of messages of a particular size. This applies to messages with the quality of service Exactly Once (EO). If the message is larger than the parameter value, the message is processed in a separate queue.
    2) Go to SXMB_ADM -> Integration Engine Configuration -> Parameter HTTP_TIMEOUT -> possible values n seconds, where n is a whole number.
    The parameter specifies the timeout for HTTP connections (time between two data packages at line level). This value overrides the system profile parameter icm/server_port_n (for example, icm/server_port_0 : PROT=HTTP, PORT=50044, TIMEOUT=900). If you do not set the parameter HTTP_TIMEOUT, or if you set it to 0, then the setting of the system profile parameter is used.
    See SAP Note 335162 for the SYSFAIL issue.
    Hope these will help you.
    Regds,
    Pinangshuk.

  • TRFC error "time limit exceeded"

    Hi Prashant,
    There has been no reply to my thread below...
    Hi Prashant,
    We are facing this issue quite often, as I stated in my previous threads.
    I have already followed all the steps you mentioned, and I furnished the job log and tRFC details for reference long back.
    I posted this issue one month back with full details, along with what we do temporarily to execute this element successfully.
    I have stated a number of times that I need to know the root cause and a permanent solution, as the log clearly states that it is due to stuck LUWs (source system).
    Even after executing the LUWs manually the status is the same (request still running and the status in yellow).
    I have no idea why this is happening to this element in particular, as we have sufficient background jobs.
    Do we need to change some settings, like increasing or decreasing the data package size, or something else, to resolve the issue permanently?
    I am giving the details once again:
    Data flow: Standard DS --> PSA --> Data Target (DSO)
    In the process monitor screen the request is in yellow. No clear error message is given; under Update it shows 0 records updated and a missing message in yellow, but apart from this the status against each log entry is green.
    Job log: the job is finished with a TRFCSSTATE=SYSFAIL message.
    tRFCs: time limit exceeded.
    What I do to resolve the issue: make the request green and manually update from PSA to the data target, and the job completes successfully.
    Can you please tell me how to proceed in this scenario to resolve the issue? I have been waiting on this for a long time now.
    So far I haven't got any clue; whatever I have investigated, I get replies up to that point and no further updates beyond it.
    with regards,
    musai

    Hi,
    You have mentioned that you have already checked the LUWs, so the problem is not there now.
    In the source system, go to WE02 and check for IDocs of type RSRQST and RSINFO. If any of them are in yellow status, take them to BD87 and process them. If the IDoc processed is of type RSRQST, it will create the job in the source system for carrying out the data load. If it is of type RSINFO, it will finish the data load on the SAP BI side as well.
    If any are in red, check the reason.

  • LMS 4.2.2 Fault Discovery - Network Adapters Limit Exceeded

    Running fault device discovery after adding approx 50 new devices to LMS DCR. It seems I have reached a limit in DFM with regards to how many device components it can manage. The devices are put into questioned mode and the following error is seen in the Question State Device Report
    Network Adapters Limit Exceeded. The number of Network Adapters discovered by Fault Management exceeded the maximum supported limit. Please check on-screen help for more information.
    What is the limit that is being referred to here? Is it independent of the LMS device licence? Where can I see how many "network adapters" LMS is monitoring? I have approx 900 devices being managed by LMS on a 1500 device licence. I am running LMS 4.2.2
    Any light you could shed on this would be appreciated
    Thanks,
    Mark         

    I've come across this a few times. I did not find a solution. It went away with a reinstall.
    What seems to happen is that somehow DFM puts all the device interfaces in a managed state.
    It should not.
    It should only manage the ports and interfaces that connect to another managed device.
    There is a hardcoded limit of 40,000 managed ports in DFM.
    Suspected situations:
    On one server where I found this, the system locale that I had set to us-english was somehow put back to be-french, maybe even during the installation; I am not sure who or what did this.
    On another server the anti-virus, which I had turned off, was turned back on during installation, probably by a policy server.
    Cheers,
    Michel

  • Error "time limit Exceed"?"

    Hi Experts,
    What should I do when a load fails with "time limit exceeded"?

    Hi,
    Time-outs can be due to many reasons. You will need to find out for your specific build. Some of the common ones are:
    1. You could have set a large packet size. See in the monitor whether the number of records in one packet seems inordinately large, say 100,000. Reduce the number in steps and see which one works for you.
    2. The target may have a large number of fields; even then you can receive a time-out, as the size of the packet may become large. Same solution as point 1.
    3. You may have built an index on the target ODS which impacts your write speed. Remove any indexes and run with the same package size; if it works, then you know the index is the problem.
    4. There is a Basis setting for the time-out. Check that it is set as per the SAP recommendation for your system.
    5. Check the transactional RFCs in the source system. It may have choked due to a large number of errors or hung queues.
    Cheers...

  • DNL_CUST_ADDR "SYSFAIL" with time limit exceeded

    Hi Experts,
    i hope you can help here.
    We set up a new connection between a CRM and an ECC.
    Everything worked fine until we started the initial customizing load.
    All customizing objects went through except DNL_CUST_ADDR.
    It stopped in the inbound queue of the CRM with the error message "Sysfail"; the detail is "Time limit exceeded".
    The solution is not note 873918; we implemented it and it still doesn't work.
    Thank you very much in advance.
    Regards
    Matthias Reich

    Hi Matthias,
    You can try either of the following things.
    Before starting the initial load of this object, change the block size in the DNL_CUST_ADDR object to 50, then delete the current queue and reinitiate the load; OR
    If you don't want to delete the existing queue, Basis can increase the maximum work process run time. For this they need to change the parameter rdisp/max_wprun_time. You can see the current maximum by executing the program RSPARAM. After changing this setting, you can unlock the queue.
    //Bhanu

  • RFC Resource Cap Exceeded

    I'm getting the message "RFC Resource Cap Exceeded" in the EFWK Resource Manager batch job on my Solution Manager system. Any idea what this means?
    Log file from the batch job
    Job log overview for job:    EFWK Resource Manager (01 Minute / 13292300
    Step 001 started (program E2E_EFWK_RESOURCE_MGR, variant , user ID SMD_RFC)
    Worklist Pass 1 with * worklist items.                                                                               
    PR4 E2E_LUW_ME_CORE_SSR  R A0A6CD91579BFD5FD57D0833E94041D2 SM_PR4CLNT300_READ   Resource allocated successfully
    DR4 E2E_ME_RFC_ASYNC  R C93328FEAACEEEDBD3C42C54C6EF7418 SM_DR4CLNT300_READ      Resource allocated successfully
    QR4 E2E_ME_RFC_ASYNC  R BD366CBD7D64373F7BFF4EAA051B5CCB SM_QR4CLNT300_READ      Resource allocated successfully
    QP4 E2E_ME_RFC_ASYNC  R B8EBD209EA7BC1C9111C7B6FC4D87CE2 SOLMANDIAG              Resource allocated successfully
    QP4 E2E_ME_RFC_ASYNC  W B752CD1338491A17EF21AAEDE904DBBC SOLMANDIAG              RFC Resource Cap Exceeded
    PP4 E2E_ME_RFC_ASYNC  W B63BEEDDC2242DF411FF65FD938BA311 SOLMANDIAG              RFC Resource Cap Exceeded
    PR4 E2E_ME_RFC_ASYNC  W 9BD94F0CBE395638B7D80B917CF34EDB SOLMANDIAG              RFC Resource Cap Exceeded
    DR4 E2E_ME_RFC_ASYNC  W 98E031268E5D1E5CEDBD24A6A02D99E8 SOLMANDIAG              RFC Resource Cap Exceeded
    DB4 E2E_ME_RFC_ASYNC  W 957D32A9E59D7A9F24DECCED5E2DF53A SOLMANDIAG              RFC Resource Cap Exceeded
    QR4 E2E_ME_RFC_ASYNC  W 5FBB1CE85153E619EEA69FE0E6771902 SOLMANDIAG              RFC Resource Cap Exceeded
    PB4 E2E_LUW_ME_CORE_SSR  R 59DEEECAE564B756473F567030E8546B SM_PB4CLNT300_READ   Resource allocated successfully
    Requested Resource not found                                                                               
    Job cancelled after system exception ERROR_MESSAGE

    Hi,
    In note 1435043 - E2E Workload Analysis - No applicable data found - there is a section:
    3) EFWK Resource Manager Job
    This is the central job that controls all data loading steps. Make sure this job is released in transaction SM37. The job name is E2E_HK_CONTROLLER
    - Make sure that you release the job with user SMD_RFC to avoid authority problems.
    - Make sure the job is not scheduled more than once for the same timestamp.
    - Check the job spool for any errors.
    - Check the test job logs in Solution Manager Workcenter -> "  Extractor FWK Administration", click on "Latest Job Logs".
    Common issues: "RFC Resource Cap. Exceeded". Increase the value of the RFC Resource Cap in table E2E_RESOURCES using SE16.
    So maybe it could help.
    Rgds,

  • Csngen_adjust_time: adjustment limit exceeded

    Hello !
    sun directory server patch 6
    linux redhat 3.7
    I have an error message on my master server :
    [08/Dec/2008:21:00:58 +0100] - ERROR<38994> - CSN Generation - conn=-1 op=-1 msgId=-1 - Internal error csngen_adjust_time: adjustment limit exceeded; value - 39780731, limit - 86400
    Master and Consumer are synchronized using ntp.
    The problem is that someone "played" with the time of the server a few weeks ago to do some tests.
    I have tried to export / import the database using db2ldif/ldif2db but nothing repaired the directory.
    I have deleted the changelog directory with no result too.
    I have seen such a problem on this forum but nobody gave a concrete solution to it.
    Has anyone experienced that?
    Any idea?
    thanks
    Regards

    Hello,
    I have the same entry in the error log and no idea how to fix it.
    Did you fix it? What did you do?
    We have Directory Server 5.2 P4.
