UTL_HTTP Fails After Large Number of Requests

Hello,
The following code issues an HTTP request, obtains the response, and closes the response. After a sufficiently large number of iterations, the code causes the session to terminate with an "END OF FILE ON COMMUNICATIONS CHANNEL" error (Oracle version 10.2.0.3). I have two questions that I hope someone can address:
1) Have you experienced this issue and, if so, have you found a solution?
2) If you have not experienced this issue, are you able to run the code below successfully in your test environment?
DECLARE
  http_req   utl_http.req;
  http_resp  utl_http.resp;
  i          NUMBER;
BEGIN
  i := 0;
  WHILE i < 200000
  LOOP
    i := i + 1;
    http_req := utl_http.begin_request('http://<<YOUR_LOCAL_TEST_WEB_SERVER>>', 'POST', utl_http.HTTP_VERSION_1_1);
    http_resp := utl_http.get_response(http_req);
    utl_http.end_response(http_resp);
  END LOOP;
  dbms_output.put_line('No Errors Occurred. Test Completed Successfully.');
END;
Thanks in advance for your help.

I believe the end_request call is accomplished implicitly through end_response, based on the documentation I have reviewed. To be sure, I tried your suggestion, since it had occurred to me as well. Unfortunately, calling end_request raised an error, because the request had already been closed implicitly by end_response. So end_request does not appear to be applicable here. Thanks for the suggestion, though. If you have any other ideas, please let me know.
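
If anyone wants to reproduce this with explicit cleanup on failure, here is a defensive variant of the loop (only a sketch; the exception handler around get_response is my addition, not a confirmed fix for the crash):
DECLARE
  http_req   utl_http.req;
  http_resp  utl_http.resp;
BEGIN
  FOR i IN 1 .. 200000
  LOOP
    http_req := utl_http.begin_request('http://<<YOUR_LOCAL_TEST_WEB_SERVER>>', 'POST', utl_http.HTTP_VERSION_1_1);
    BEGIN
      http_resp := utl_http.get_response(http_req);
      utl_http.end_response(http_resp);
    EXCEPTION
      WHEN OTHERS THEN
        -- end_request is only legal while the request is still open,
        -- i.e. before end_response has implicitly closed it
        utl_http.end_request(http_req);
        RAISE;
    END;
  END LOOP;
END;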

Similar Messages

  • Unable to parse properly for a large number of requests

    Hi all
In WebLogic 9.2, when a single request comes in that parses and reads an XML file, there is no problem. But when a large number of requests arrive at the same time to read the XML file, it behaves differently: it is unable to locate one of the nodes. The error message itself is misleading:
    java.lang.NullPointerException
         at com.sun.org.apache.xerces.internal.dom.ParentNode.nodeListItem(ParentNode.java:814)
         at com.sun.org.apache.xerces.internal.dom.ParentNode.item(ParentNode.java:828)
         at com.test.ObjectPersistanceXMLParser.getData(ObjectPersistanceXMLParser.java:46)
         at com.test.testservlet.doPost(testservlet.java:634)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:763)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:225)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:127)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:283)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:175)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3214)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:1983)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:1890)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1344)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
The node does exist, and it is found when the same XML file is processed in a standalone Java program. The failure occurs only when a large number of requests come in.
    Please suggest.

Yes, I parse the XML once per request, and I do not want to synchronize here. The code below executes for each request. Is there a solution for this?
    DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory.newInstance();
    DocumentBuilder docBuilder = docBuilderFactory.newDocumentBuilder();
    Document configXML = docBuilder.parse(filePath);
I have also tried DOMParser with the following feature applied:
setFeature("http://apache.org/xml/features/dom/defer-node-expansion", false)
    Please suggest.
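
    One pattern worth trying (only a sketch, on the assumption that the concurrent requests are sharing parser state) is to give each worker thread its own DocumentBuilder via a ThreadLocal, since neither DocumentBuilderFactory nor DocumentBuilder is guaranteed to be thread-safe:
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import org.w3c.dom.Document;

    public class ConfigParser {
        // One DocumentBuilder per thread; the factory is only touched once per thread.
        private static final ThreadLocal<DocumentBuilder> BUILDER =
            new ThreadLocal<DocumentBuilder>() {
                protected DocumentBuilder initialValue() {
                    try {
                        return DocumentBuilderFactory.newInstance().newDocumentBuilder();
                    } catch (ParserConfigurationException e) {
                        throw new IllegalStateException(e);
                    }
                }
            };

        public static Document parse(String filePath) throws Exception {
            DocumentBuilder builder = BUILDER.get();
            builder.reset(); // clear any state left over from the previous parse
            return builder.parse(filePath);
        }
    }
    The class name ConfigParser is a placeholder; the point is that each request thread reuses its own parser instead of sharing one.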

  • Building secondary index fails for large number (25,000,000) of records

    I am inserting 25,000,000 records of the type:
    Key --> Data
    [long, String, long] --> [{long, long}, {String}]
    using setSecondaryBulkLoad(true), and then build two secondary indexes on the {long, long} and {String} parts of the data portion.
    private void buildSecondaryIndex(DataAccessLayer dataAccessLayer) {
        try {
            SecondaryIndex<TDetailSecondaryKey, TDetailStringKey, TDetailStringRecord> secondaryIndex =
                store.getSecondaryIndex(dataAccessLayer.getPrimaryIndex(),
                                        TDetailSecondaryKey.class, SECONDARY_KEY_NAME);
        } catch (DatabaseException e) {
            throw new RuntimeException(e);
        }
    }
    It fails when I build the secondary index, probably due to a Java heap space error; see the failure trace below.
    I do not face this problem when I deal with 250,000 records.
    Is there a workaround that does not require changing the JVM memory settings?
    Failure Trace:
    java.lang.RuntimeException: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:444)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
         at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
         at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
         at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:66)
         at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
         at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
         at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
         at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
         at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
         at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
         at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
    Caused by: Environment invalid because of previous exception: com.sleepycat.je.RunRecoveryException
         at com.sleepycat.je.dbi.EnvironmentImpl.checkIfInvalid(EnvironmentImpl.java:976)
         at com.sleepycat.je.log.LogManager.getLogEntry(LogManager.java:584)
         at com.sleepycat.je.txn.Txn.undo(Txn.java:713)
         at com.sleepycat.je.txn.Txn.abortInternal(Txn.java:631)
         at com.sleepycat.je.txn.Txn.abort(Txn.java:599)
         at com.sleepycat.je.txn.AutoTxn.operationEnd(AutoTxn.java:36)
         at com.sleepycat.je.Environment.openDb(Environment.java:505)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         ... 22 more
    Caused by: java.lang.OutOfMemoryError: Java heap space
         at java.util.HashMap.resize(HashMap.java:462)
         at java.util.HashMap.addEntry(HashMap.java:755)
         at java.util.HashMap.put(HashMap.java:385)
         at java.util.HashSet.add(HashSet.java:200)
         at com.sleepycat.je.txn.Txn.addReadLock(Txn.java:964)
         at com.sleepycat.je.txn.Txn.addLock(Txn.java:952)
         at com.sleepycat.je.txn.LockManager.attemptLockInternal(LockManager.java:347)
         at com.sleepycat.je.txn.SyncedLockManager.attemptLock(SyncedLockManager.java:43)
         at com.sleepycat.je.txn.LockManager.lock(LockManager.java:178)
         at com.sleepycat.je.txn.Txn.lockInternal(Txn.java:295)
         at com.sleepycat.je.txn.Locker.nonBlockingLock(Locker.java:288)
         at com.sleepycat.je.dbi.CursorImpl.lockLNDeletedAllowed(CursorImpl.java:2357)
         at com.sleepycat.je.dbi.CursorImpl.lockLN(CursorImpl.java:2297)
         at com.sleepycat.je.dbi.CursorImpl.fetchCurrent(CursorImpl.java:2227)
         at com.sleepycat.je.dbi.CursorImpl.getCurrentAlreadyLatched(CursorImpl.java:1296)
         at com.sleepycat.je.dbi.CursorImpl.getNextWithKeyChangeStatus(CursorImpl.java:1442)
         at com.sleepycat.je.dbi.CursorImpl.getNext(CursorImpl.java:1368)
         at com.sleepycat.je.Cursor.retrieveNextAllowPhantoms(Cursor.java:1587)
         at com.sleepycat.je.Cursor.retrieveNext(Cursor.java:1397)
         at com.sleepycat.je.SecondaryDatabase.init(SecondaryDatabase.java:182)
         at com.sleepycat.je.SecondaryDatabase.initNew(SecondaryDatabase.java:118)
         at com.sleepycat.je.Environment.openDb(Environment.java:484)
         at com.sleepycat.je.Environment.openSecondaryDatabase(Environment.java:382)
         at com.sleepycat.persist.impl.Store.openSecondaryIndex(Store.java:684)
         at com.sleepycat.persist.impl.Store.getSecondaryIndex(Store.java:579)
         at com.sleepycat.persist.EntityStore.getSecondaryIndex(EntityStore.java:286)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.buildSecondaryIndex(TDetailStringDAOInsertTest.java:441)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.insertCellSetInOneTxn(TDetailStringDAOInsertTest.java:280)
         at com.infobionics.ibperformance.TDetailStringDAOInsertTest.mainTest(TDetailStringDAOInsertTest.java:93)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

    1. Does the speed of building the secondary index depend on the type of the data in the key? Would having integers in the secondary key, as opposed to strings, be better?
    The byte size of the key and data is significant, of course, but the data type is not.
    2. How much are we bound by memory? Let's assume my memory setting is fixed.
    a. I know that with the current memory settings, if I turn txn on, I get a Java heap error. So will I be limited in the size of the secondary index, or will it just get really slow, swapping tree information from disk as it builds?
    No. The out-of-memory error was caused by a very large transaction that holds locks. When using small transactions or non-transactional access, you won't have this problem. In general, like most databases, JE writes and reads information to/from disk as needed.
    b. Is there any other way of speeding up the build of the secondary database?
    No, other than general performance tuning, nothing I know of.
    c. Will it be more beneficial not to bulk load when the data size gets large, so that the secondary database is built incrementally?
    It's up to you whether you want to pay the price during an initial load or incrementally.
    d. Do you think it would help to partition the original database into smaller databases using some criteria, and thus build smaller trees? The only weak point in this is that if we have to bulk load into one partition at some time, increasing its size, we may face the same problem again.
    Why? You can use deferred write or non-transactional access to load any size database. Face what problem?
    --mark
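
    To make the non-transactional, deferred-write load described above concrete, here is a sketch (the environment directory and store name are placeholders, not taken from the original test):
    import java.io.File;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.persist.EntityStore;
    import com.sleepycat.persist.StoreConfig;

    public class DeferredWriteLoad {
        public static void main(String[] args) throws Exception {
            // Non-transactional environment: no single huge txn accumulating locks
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(false);
            Environment env = new Environment(new File("/path/to/env"), envConfig);

            // Deferred write buffers changes and writes them lazily
            StoreConfig storeConfig = new StoreConfig();
            storeConfig.setAllowCreate(true);
            storeConfig.setDeferredWrite(true);
            EntityStore store = new EntityStore(env, "TDetailStore", storeConfig);

            // ... bulk load the 25,000,000 records and open the secondary indexes ...

            store.sync();  // flush the deferred writes to disk
            store.close();
            env.close();
        }
    }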

  • Internal Error 500 started appearing even after setting a large number for postParametersLimit

    Hello,
    I adopted a CF 9 web-application and we're receiving the Internal 500 Error on a submit from a form that has line items for a RMA.
    The server originally had only Cumulative Hot Fix 1 on it, and I thought that if I installed Cumulative Hot Fix 4, I would be able to adjust the postParametersLimit variable in neo-runtime.xml. So I tried doing this, setting the number to an extremely large value (the last try was 40000), and I'm still getting this error. I've also tried putting a <cfabort> on the first line of the .cfm file that is being called, but I still get the 500 error.
    As I mentioned, it's an RMA form, and if the RMA has only a few lines, say up to 20 or 25, it will work.
    I've tried increasing the following all at the same time:
    postParameterSize to 1000 MB
    Max size of post data 1000MB
    Request throttle Memory 768MB
    Maximum JVM Heap Size - 1024 MB
    Enable HTTP Status Codes - unchecked
    Here's some extra background on this situation.  This is all that happened before I got the server:
    The CF Server is installed as a virtual machine and was originally part of a domain that was exposed to the internet and the internal network.  The CF Admin was exposed to the internet.
    AT THIS TIME THE RMA FORM WORKED PROPERLY, EVEN WITH LARGE NUMBER OF LINE ITEMS.
    The CF Server was hacked, so they did the following:
    They took a snapshot of the CF Server
    Unjoined it from the domain and put it in the DMZ.
    The server can no longer connect to the internet outbound, inbound connections are allowed through SSL
    Installed cumulative hot fix 1 and hot fix APSB13-13
    Changed the Default port for SQL on the SQL Server.
    This is when the RMA form stopped working and I inherited the server.  Yeah!
    Any ideas on what I can try next, or why this would have suddenly stopped working after the above changes were made on the server?
    Thank you

    Start from the beginning. Return to the default values, and see what happens. To do so, proceed as follows.
    Temporarily shut ColdFusion down. Create a back-up of the file neo-runtime.xml, just in case.
    Now, open the file in a text editor and revert postParametersLimit and postSizeLimit to their respective default values, namely,
    <var name='postParametersLimit'><number>100.0</number></var>
    <var name='postSizeLimit'><number>100.0</number></var>
    That is, 100 parameters and 100 MB, respectively. (Note that there is no postParameterSize! If you had included that element in the XML, remove it.)
    Restart ColdFusion. Test and tell.

  • BUG: Last Image of Large Number Being Moved fails

    This has happened several times while organizing some folders.  Moving over 100 images at a time, it seems that one image near the end fails: I get a message that Lightroom can't move the image right now.  It's always just one image.  I can move it on its own just a second later and it works just fine.
    While the Move operation is being fixed, consider that it could go way faster than it does now if the screen didn't have to be refreshed after each file has been moved.  I can see the value of the refresh if it's just a few images being moved, but for a large number, the refresh isn't helpful anyhow.
    Paul Wasserman

    I posted on this last week, and apparently a number of people have experienced this.
    http://forums.adobe.com/thread/690900
    Please report it on this bug report site so that it gets to the developers' attention sooner:
    https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform
    Bob

  • I bought Creative Suite 5 a few years ago; my computer's hard drive failed, and after it was fixed I reinstalled my program. It now says my serial number is not valid.

    I bought Creative Suite 5 a few years ago; my computer's hard drive failed, and after it was fixed I reinstalled my program. It now says my serial number is not valid.
    I'm on the last day of a 30-day trial, and it won't take my serial number, which is the same one that worked before.

    Contact support by web chat.
    Mylenium

  • Oracle Error 01034 After attempting to delete a large number of rows

    I sent the command to delete a large number of rows from a table in an oracle database (Oracle 10G / Solaris). The database files are located at /dbo partition. Before the command the disk space utilization was at 84% and now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main :
    drop table oss_cell_main purge
    But no change in df output.
    I have tried solving it myself but could not find sufficiently targeted information. Even pointing me to the right documentation would be highly appreciated. I have already looked at the following:
    du -h :
    8K ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think its the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME             TYPE     VALUE
    undo_management  string   AUTO
    undo_retention   integer  10800
    undo_tablespace  string   UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME:           UNDOTBS1
    BLOCK_SIZE:                8192
    INITIAL_EXTENT:            65536
    MIN_EXTENTS:               1
    MAX_EXTENTS:               2147483645
    MIN_EXTLEN:                65536
    STATUS:                    ONLINE
    CONTENTS:                  UNDO
    LOGGING:                   LOGGING
    FORCE_LOGGING:             NO
    EXTENT_MANAGEMENT:         LOCAL
    ALLOCATION_TYPE:           SYSTEM
    PLUGGED_IN:                NO
    SEGMENT_SPACE_MANAGEMENT:  MANUAL
    DEF_TAB_COMPRESSION:       DISABLED
    RETENTION:                 NOGUARANTEE
    BIGFILE:                   NO
    (Truncated column headings expanded; NEXT_EXTENT and PCT_INCREASE are null for this locally managed tablespace.)
    Note: I can reconnect to the database for short periods of time by restarting it. After some restarts it does connect, but only for a few minutes, not long enough to run exp.

    Check the alert log for errors.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get space back.
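    For example (a sketch only; file names and sizes are placeholders, so check your own dba_data_files output first):
    -- Try to release unused space at the end of a datafile
    ALTER DATABASE DATAFILE '/dbo/oradata/SID/undotbs01.dbf' RESIZE 5G;
    The resize only succeeds if no extents lie beyond the new size. Since the growth here is in the undo tablespace, another common approach is to create a new, smaller undo tablespace and switch to it:
    CREATE UNDO TABLESPACE undotbs2 DATAFILE '/dbo/oradata/SID/undotbs02.dbf' SIZE 2G;
    ALTER SYSTEM SET undo_tablespace = UNDOTBS2;
    -- once no active transactions still use the old undo segments:
    DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;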

  • Analyze table after insert a large number of records?

    For performance purposes, is it good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if a complex query follows the insert?
    For example:
    Insert into foo ...... //Insert one million records to table foo.
    analyze table foo COMPUTE STATISTICS; //analyze table foo
    select * from foo, bar, car...... //Execute a complex query without hints
    //after 1 million records inserted into foo
    Does this strategy help to improve the overall performance?
    Thanks.

    Different execution plans will most frequently occur when the ratio of the number of records in the various tables involved in the select has changed tremendously. This happens above all when 'fact' tables grow while 'lookup' tables stay constant.
    This is why you shouldn't test an application with a small number of 'fact' records.
    This can happen both with analyze table and dbms_stats.
    The advantage of dbms_stats is that it can export the current statistics to a stats table, so you can always revert to them using dbms_stats.import_table_stats.
    You can even override individual table and column statistics with artificial values.
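    A sketch of that dbms_stats approach for the example above (the schema name SCOTT and stats table name MY_STATS are placeholders):
    -- one-off: create a table to hold exported statistics
    EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
    -- before regathering, save the current statistics
    EXEC DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'FOO', stattab => 'MY_STATS');
    -- gather fresh statistics after the large insert (instead of ANALYZE)
    EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'FOO', cascade => TRUE);
    -- if the new plans turn out worse, revert
    EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'FOO', stattab => 'MY_STATS');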
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • I have lost large number of files after upgrading to macOS Yosemite. Any help to restore them?

    I have lost a large number of files after upgrading to macOS Yosemite!  Any help on how to restore the files would be much appreciated.

    Import them from the backup you made before choosing to upgrade to Yosemite.
    Cheers
    Pete

  • Pairing devices fails after install of OS X 10.8.5. It fails at my iPhone and iPod touch during pairing, about 5 seconds after I confirm the 6-digit number. Does this make sense? Thanks for considering this.

    Pairing devices fails after install of OS X 10.8.5. It fails at my iPhone and iPod touch during pairing, about 5 seconds after I confirm the 6-digit number. Does this make sense? What's the fix? I am moderately computer literate. Thanks for considering this.

    The following entry may indicate a failing harddrive, so doing backups and replacing the harddrive may be in order.
    Disk Information:
              TOSHIBA MK7559GSXF disk0 : (750,16 GB)
              S.M.A.R.T. Status: Failing                                  <-----------
    A little bit about drive S.M.A.R.T. status:
    "The most basic information that SMART provides is the SMART status. It provides only two values: "threshold not exceeded" and "threshold exceeded". Often these are represented as "drive OK" or "drive fail" respectively. A "threshold exceeded" value is intended to indicate that there is a relatively high probability that the drive will not be able to honor its specification in the future: that is, the drive is "about to fail". The predicted failure may be catastrophic or may be something as subtle as the inability to write to certain sectors, or perhaps slower performance than the manufacturer's declared minimum."
    http://en.wikipedia.org/wiki/S.M.A.R.T.

  • Large number of apps crashing after opening screen

    A large number of the applications on my 16 GB iPhone 3G have begun to crash when I try to open them. At first I thought it was just those that used the camera, but now, although the photo ones are among the major offenders, there are others too.
    I have almost filled my apps quota but never had this issue before. I think I may have had one or two problematic apps over the past 9 months, but nothing like the number that are now refusing to open. Attempting to open these apps will show the app's opening screen and then just bring me back to the home screen.
    I'm not aware of having made any changes that might have caused the problem. I tried to remember the most recent apps I added and tried deleting them but it didn't help. I've shut down and restarted a few times and done a total reboot once or twice, but without success.
    Can anyone suggest what I should try next?
    Thanks.

    KrummenHacker wrote:
    I have exactly the same problem and even deauthorizing and then authorizing the computer again doesn't help to solve this.
    It happened just after I upgraded iTunes to 8.1.
    Since then, I'm no longer able to launch any application installed from the App Store. Native applications and custom applications installed from Xcode work without any problem.
    I also tried purchasing a new application to see what happens. Result: it doesn't launch.
    Bernard
    Did you purchase the application in iTunes on your Mac and then sync it to the iPhone?
    If not, try walking through the process I suggested above.
    If you did purchase the application in iTunes and sync it to the iPhone after deauthorizing and authorizing, try updating the iPhone to 2.2.1 or restoring the iPhone as shown here: This article: http://support.apple.com/kb/HT1414 will walk you through restoring an iPhone or iPod touch.
    -Jason

  • HTTP GET requests fail after a few weeks

    All,
    I have a GET request to a servlet that works for a few weeks, then suddenly stops.
    I change the code once, it works, then it fails after a few weeks.
    I change the code again, it works, then it fails after a few weeks.
    Servlet works like: send one request, wait, then send a second.
    Here are the last 2 code iterations:
    try {
        // Construct data
        String data = URLEncoder.encode("key1", "UTF-8") + "=" + URLEncoder.encode("value1", "UTF-8");
        data += "&" + URLEncoder.encode("key2", "UTF-8") + "=" + URLEncoder.encode("value2", "UTF-8");
        //String data = "";
        // Send data
        //URL url = new URL("http://localhost:8080/stocks?action=1&date=20080310");
        URL url = new URL("http://localhost:8080/stocks/monitor?action=1&date=" + stringDate);
        URLConnection conn = url.openConnection();
        conn.setDoOutput(true);
        OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
        wr.write(data);
        wr.flush();
        // Get the response
        BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        //System.out.println(rd.read());
        String line;
        int count = 0;
        while ((line = rd.readLine()) != null) {
            // Process line...
            System.out.println(count + line);
            count++;
        }
        wr.close();
        rd.close();
    } catch (Exception e) {
    }
    try {
        // Construct data
        String data = URLEncoder.encode("key1", "UTF-8") + "=" + URLEncoder.encode("value1", "UTF-8");
        data += "&" + URLEncoder.encode("key2", "UTF-8") + "=" + URLEncoder.encode("value2", "UTF-8");
        //String data = "";
        // Send data
        URL url = new URL("http://localhost:8080/stocks/monitor?action=2");
        URLConnection conn = url.openConnection();
        conn.setDoOutput(true);
        OutputStreamWriter wr = new OutputStreamWriter(conn.getOutputStream());
        wr.write(data);
        wr.flush();
        // Get the response
        BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        int count = 0;
        while ((line = rd.readLine()) != null) {
            // Process line...
            System.out.println(count + line);
            count++;
        }
        wr.close();
        rd.close();
    } catch (Exception e) {
    }
    I send this request twice with different params:
    public static String sendGetRequest(String endpoint, String requestParameters) {
        String result = null;
        if (endpoint.startsWith("http://")) {
            // Send a GET request to the servlet
            try {
                // Construct data
                StringBuffer data = new StringBuffer();
                // Send data
                String urlStr = endpoint;
                if (requestParameters != null && requestParameters.length() > 0) {
                    urlStr += "?" + requestParameters;
                }
                URL url = new URL(urlStr);
                URLConnection conn = url.openConnection();
                // Get the response
                BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                StringBuffer sb = new StringBuffer();
                String line;
                while ((line = rd.readLine()) != null) {
                    sb.append(line);
                }
                rd.close();
                result = sb.toString();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        return result;
    }
    Any ideas?

    You appear to have empty catch blocks. Which means you don't get the error message that would tell you what is failing.
    Put in code that logs the exception and the stack trace of the exception. If you can't figure out the error message, post it here.
    You should be closing streams in finally statements. Otherwise they might not get closed when there is an error -> you leak descriptors -> you run out of descriptors -> every stream open will fail -> more errors -> more descriptors get leaked -> etc -> everything stops working. Always do it like this:
        WhateverStream out = null;
        try {
            out = ...;
            ...use out...;
        } finally {
            try {
                if (out != null) out.close();
            } catch (IOException e) { ...log it... }
        }
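
    Applied to the posted sendGetRequest, that would look roughly like this (a sketch, assuming the same imports as the original plus java.io.IOException):
    public static String sendGetRequest(String endpoint, String requestParameters) {
        String result = null;
        if (endpoint.startsWith("http://")) {
            BufferedReader rd = null;
            try {
                String urlStr = endpoint;
                if (requestParameters != null && requestParameters.length() > 0) {
                    urlStr += "?" + requestParameters;
                }
                URLConnection conn = new URL(urlStr).openConnection();
                rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                StringBuffer sb = new StringBuffer();
                String line;
                while ((line = rd.readLine()) != null) {
                    sb.append(line);
                }
                result = sb.toString();
            } catch (Exception e) {
                e.printStackTrace(); // log it instead of swallowing it
            } finally {
                if (rd != null) {
                    try {
                        rd.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
        }
        return result;
    }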

  • Large number of pictures have disappeared after backing up my iPhone 5s

    I used a new iCloud backup (which I created today) to set up my new iPhone 5s. Everything went well: apps, contacts, everything. But a large number of pictures were not restored. The most recent pictures on my iPhone now are actually from June 14. I really need to get my photos back. Please help!
    Using iPhone 5s, iOS 8.1.1

    No one will help me?

  • Large number of deadlocks seen after upgrade to 2.4.14

    We upgraded the BDB version to 2.4.14 and are using the latest 4.7 release. Without any code change on our part, we are seeing a large number of deadlocks with the new version.
    Let me know if more information is needed.
    BDB: 106 lockers
    BDB: 106 lockers
    BDB: 106 lockers
    BDB: 106 lockers
    BDB: Aborting locker 8000a651
    BDB: MASTER: /m-db/m-db432816-3302-4c30-9dd0-e42a295c970c/master rep_send_message: msgv = 5 logv 14 gen = 149 eid -1, type log, LSN [825][1499447]
    BDB: 107 lockers
    BDB: Aborting locker 8000a652
    BDB: 107 lockers
    BDB: MASTER: /m-db/m-db432816-3302-4c30-9dd0-e42a295c970c/master rep_send_message: msgv = 5 logv 14 gen = 149 eid -1, type log, LSN [825][1500259] perm
    BDB: MASTER: will await acknowledgement: need 1
    BDB: 106 lockers
    BDB: 106 lockers
    BDB: Aborting locker 8000a65a
    BDB: Aborting locker 8000a658
    BDB: MASTER: got ack [825][1500259](149) from site rit-004:10502
    BDB: 105 lockers
    BDB: 103 lockers
    BDB: Container - 5e69b5cf184b41ef8f0719e1b0f944a1.bdbxml - Updating document: 5ca1ab1e0a0571bf048c6e298618c7048c6e2ec315a3
    BDB: 104 lockers
    BDB: Container - 5e69b5cf184b41ef8f0719e1b0f944a1.bdbxml - Updating document: 5ca1ab1e0a0571bf048c6e298618c7048c6e2ec35d5d

    Another interesting observation: the replica process, which is not doing anything except keeping up with the master, is using 3-4 times more CPU than the master while I am creating and updating records in the XML DB.
    On a 4-CPU setup, the master process takes about half a CPU, whereas the replica is chewing up to 2 CPUs.
    What is the replica doing that is this CPU intensive?

  • How to calculate the area of a large number of polygons in a single query

    Hi forum
    Is it possible to calculate the area of a large number of polygons in a single query, using a combination of SDO_AGGR_UNION and SDO_AREA? So far, I have tried something similar to this:
    select sdo_geom.sdo_area((
        select sdo_aggr_union(sdoaggrtype(mg.geoloc, 0.005))
        from mapv_gravsted_00182 mg
        where mg.dblink = 521 or mg.dblink = 94 or mg.dblink = 38 <many many more....>
    ), 0.0005) calc_area
    from dual
    The table MAPV_GRAVSTED_00182 contains two fields, geoloc (SDO_GEOMETRY) and dblink (an ID field), needed for querying specific polygons.
    As far as I can see, I need to first somehow get a single SDO_GEOMETRY object and use it as input for the SDO_AREA function. But I'm not 100% sure that I'm doing this the right way. This query is very inefficient and sometimes fails with strange errors like "No more data to read from socket" when executed from SQL Developer. I even tried the latest JDBC driver from Oracle, without much difference.
    Would a better approach be to write some kind of stored procedure that adds up the results of calling SDO_AREA on each individual geometry object, or what is the best approach?
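    For instance, if the polygons never overlap, I wonder whether simply summing the per-geometry areas would be enough (a sketch, reusing the table and IDs from above):
    select sum(sdo_geom.sdo_area(mg.geoloc, 0.005)) calc_area
    from mapv_gravsted_00182 mg
    where mg.dblink in (521, 94, 38 /* many many more... */);
    Of course, overlapping polygons would then be counted twice, which is exactly what the aggregate union avoids.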
    Any advice would be appreciated.
    Thanks in advance,
    Jacob

    Hi
    I am now trying to update all my spatial tables with SRIDs. To do this, I try to drop each spatial index first and recreate it after the update. But for a lot of tables I can't drop the spatial index. Whenever I try DROP INDEX <spatial index name>, I get the error below; does anyone know what it means?
    Thanks,
    Jacob
    Error starting at line 2 in command:
    drop index BSSYS.STIER_00182_SX
    Error report:
    SQL Error: ORA-29856: error occurred in the execution of ODCIINDEXDROP routine
    ORA-13249: Error in Spatial index: cannot drop sequence BSSYS.MDRS_1424B$
    ORA-13249: Stmt-Execute Failure: DROP SEQUENCE BSSYS.MDRS_1424B$
    ORA-29400: data cartridge error
    ORA-02289: sequence does not exist
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 27
    29856. 00000 - "error occurred in the execution of ODCIINDEXDROP routine"
    *Cause:    Failed to successfully execute the ODCIIndexDrop routine.
    *Action:   Check to see if the routine has been coded correctly.
    Edit - just found the answer for this in MetaLink note 241003.1. Apparently there is an internal problem when dropping spatial indexes: some objects get dropped that shouldn't be. The solution is to manually create the sequence it complains it can't drop; then the drop works. Weird error.
