Write through URLConnection object

Hi all,
I have a small problem with URLConnection. I make a URL connection to my server and I am trying to write some content to this connection after calling setDoOutput(true). But the data is not written to the other end until I call read on the input stream; I don't know why. The data gets written even if I read 0 bytes from the same connection. Our actual requirement is to write data continuously without making a new connection. Can anybody help me?
Thanks in Advance
Madhva

Hi JCG,
Thanks for the reply. Yes, I did call urlconnection.connect(). Only after that am I trying to write to the server, and I am able to read from the server. The important thing here is that it writes perfectly even if I read 0 bytes from the server, which I don't want, as once you start reading data you can't do a write operation on the same connection.
Thanks
Madhav
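
For what it's worth, HttpURLConnection normally buffers the entire request body and only sends it when the response is asked for (e.g. via getInputStream() or getResponseCode()), which matches the behaviour described above. A minimal sketch of the usual workaround, chunked streaming mode (the endpoint URL is hypothetical):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class StreamingPost {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/feed"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Send the body in chunks as it is written instead of buffering it all;
        // 0 lets the implementation choose the chunk size.
        conn.setChunkedStreamingMode(0);
        OutputStream out = conn.getOutputStream();
        for (int i = 0; i < 10; i++) {
            out.write(("data block " + i + "\n").getBytes("UTF-8"));
            out.flush(); // pushes the chunk onto the wire immediately
        }
        out.close();
        // Only after the body is complete can the response be read.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}

Note that even in streaming mode one HTTP request still carries exactly one request body followed by one response; for truly continuous two-way traffic over a single connection, a raw socket is the usual route.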

Similar Messages

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
         For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

    > The transaction support in Coherence is for Local
         > Transactions. Basically, what this means is that the
         > first phase of the commit ("prepare") acquires locks
         > and ensures that there are no conflicts. The second
         > phase ("commit") does nothing but push data out to
         > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then the database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
         This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
         For the write-through case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
         But once we have multiple CacheStore modules, as soon as one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different from local transactions... Is my understanding correct?
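
    As an aside, the cache-aside approach mentioned in the thread (doing the database work in its own JDBC transaction and touching the caches only after a successful commit) can be sketched as follows. Table names and keys are illustrative, and the Map-typed cache handles are stand-ins (Coherence's NamedCache implements java.util.Map):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.Map;

    public class CacheAsideUpdate {
        // Updates two tables atomically, then refreshes both caches only after
        // the commit succeeds, so a database failure never leaves the caches stale.
        public static void update(Connection db, Map cacheA, Map cacheB,
                                  String key, Object valueA, Object valueB) throws Exception {
            db.setAutoCommit(false);
            try {
                PreparedStatement psA = db.prepareStatement("UPDATE table_a SET val = ? WHERE id = ?");
                psA.setObject(1, valueA);
                psA.setString(2, key);
                psA.executeUpdate();
                psA.close();
                PreparedStatement psB = db.prepareStatement("UPDATE table_b SET val = ? WHERE id = ?");
                psB.setObject(1, valueB);
                psB.setString(2, key);
                psB.executeUpdate();
                psB.close();
                db.commit(); // both rows or neither
            } catch (Exception e) {
                db.rollback(); // database unchanged; caches untouched
                throw e;
            }
            cacheA.put(key, valueA);
            cacheB.put(key, valueB);
        }
    }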

  • Write through and CacheStore

    Hi,
         I'm running a near cache implementation, with the front being a local cache and the back being a distributed cache. The distributed cache has a local cache and a read-write-backing-map-scheme for persisting each cache to disk every t minutes (for backup purposes - we still use a cache in memory).
         I have a few questions about the Write through capabilities and CacheStore so as to better understand what's going on here:
         1. We only need to store the data to disk for backup in case of complete cluster failure (for example, all n machines in our cluster go down). Right now my implementation of the CacheStore has one line which reads "return null" for the following methods:
         load(..)
         loadAll(..)
         Is there a more efficient/effective way to write to disk and ignore reads if an item does not exist in the distributed map, rather than implementing CacheStore and returning null for all the methods noted above?
         My reading and writing to disk occurs using the ExternalizableHelper class, thx for this nice work.
         2. How are CacheStores instantiated initially? Since we want each cache (say we have two different caches here for simplicity) backed up to file every t minutes, do we have to have a separate CacheStore object for each cache? What is the best practice for attaching a CacheStore to a particular cache?
         For example, I start two Tangosol instances, one on machineA and one on machineB, both storing data as per my configuration. The 2 caches being used are "cacheA" and "cacheB". So when I start the two Tangosol instances, I have to instantiate CacheStore twice so that I can separately write "cacheA" and "cacheB" to their own separate files.
         3. When backup to disk occurs, is there any removing of items from the distributed cache?
         4. I'm not completely sure on the write delay here. What if an item in the cache is just added once, and no updates occur on it (ie. just one put, and 0+ gets). After a specified amount of time, will this be written to disk, or does an update on this object in the cache have to occur before this item can be added to the write queue and eventually written to disk? Once an item is added for the first time, does this trigger the update time for this object to be the first write time?
         Thanks,
         - Noah

    Hi Noah,
         1. No, load() and loadAll() returning null is the most effective way of implementing this.
         2. You can pass the cache name as a constructor parameter - see Parameter Macros in the Coherence User Guide; a minimal sketch follows below.
         3. No, nothing is removed from the cache
         4. Writes are only triggered by put()'s.
         For more information please take a look at this forum post: What is Read-Through/Write-Through/Write-Behind Caching? (http://www.tangosol.net/forums/thread.jspa?threadID=445&tstart=0)
         Regards,
         Dimitri
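
    A minimal sketch of the constructor-parameter approach from point 2 above, assuming a cache configuration that passes the {cache-name} parameter macro into the store (the class name and file layout are hypothetical):

    import com.tangosol.net.cache.CacheStore;
    import java.io.File;
    import java.util.Collection;
    import java.util.Map;

    // One store instance per cache; Coherence supplies the cache name through
    // the {cache-name} parameter macro in the cache configuration.
    public class FileBackupStore implements CacheStore {
        private final File target;

        public FileBackupStore(String cacheName) {
            this.target = new File(cacheName + ".bin"); // e.g. cacheA.bin, cacheB.bin
        }

        public void store(Object key, Object value) { /* write the entry to this.target */ }
        public void storeAll(Map entries) { /* batch-write the entries to this.target */ }
        public void erase(Object key) { }
        public void eraseAll(Collection keys) { }

        // Backup-only store: never load anything back into the cache.
        public Object load(Object key) { return null; }
        public Map loadAll(Collection keys) { return null; }
    }

    In the cache configuration, an <init-param> of type java.lang.String with value {cache-name} inside the <cachestore-scheme> wires the name in, so "cacheA" and "cacheB" each get their own store instance and their own file.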

  • Problems issuing continuous requests to a server through URLConnection

    Hi,
    I have a URLConnection object 'uc' obtained from a URL object tied to a server URL.
    I am issuing HTTP requests continuously to this server by calling 'uc = u.openConnection()' every time.
    Hence this returns a new URLConnection object every time.
    After some 300-400 requests, the program ends abruptly; this may be due to a shortage of resources allocated to the I/O streams of the URL connection.
    My question is: is there some way to issue multiple requests from the same URLConnection object? Or is there some other method of issuing multiple requests which does not consume a lot of resources?
    Note: The server accepts only the GET method, so I can't write content to the output stream of the connection to turn it into a new request every time.
    Thanks

    My question is: is there some way to issue multiple requests from the same URLConnection object? Or is there some other method of issuing multiple requests which does not consume a lot of resources?
    An HttpURLConnection instance can only be used to issue one HTTP request. What can be re-used is the underlying TCP connection, via the keep-alive header (HTTP 1.1 persistent connections). But that should be handled transparently, and by default, by the JVM's HTTP protocol handler.
    The latter can keep the socket open across requests to increase performance, provided you do not terminate the connection, i.e. you don't call conn.disconnect() and don't set the "Connection: close" request property
    ( conn.setRequestProperty("Connection", "close") ).
    So you shouldn't call disconnect(); just read each response body to the end and close its stream, which releases the socket back into the keep-alive cache (disconnect() terminates the physical socket connection).
    All the HTTP persistent connection/keep-alive issues were apparently fixed in J2SE 1.4.1... (are you using 1.4?)
    To optimize further, you could take a look at :
    http://jakarta.apache.org/commons/httpclient/
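
    A minimal sketch of that pattern: one new HttpURLConnection per request, but the underlying socket gets reused because each response is drained and closed (the endpoint URL is hypothetical):

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RepeatedGets {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/status"); // hypothetical endpoint
            byte[] buf = new byte[4096];
            for (int i = 0; i < 1000; i++) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                InputStream in = conn.getInputStream();
                while (in.read(buf) != -1) {
                    // drain the body completely so the socket can be reused
                }
                in.close(); // releases the connection to the keep-alive cache
                // no conn.disconnect() here -- that would close the socket
            }
        }
    }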

  • How to control the authorization of IM05 through authorization object

    Now we want to control the authorization of IM05 through authorization object C_PRPS_USR, but C_PRPS_USR is not assigned to tcode IM05. How can we assign authorization object C_PRPS_USR to tcode IM05? Or do we have any other method to obtain the same result?

    write a factory method that controls the number of instances for you:
    import java.util.Arrays;

    public class Bar {
       private static final int MAX_BARS = 5;
       private static int numBars = 0;
       private int id;

       public static void main(String[] args) {
          try {
             int count = ((args.length > 0) ? Integer.parseInt(args[0]) : MAX_BARS + 1);
             Bar[] bars = new Bar[count];
             for (int i = 0; i < bars.length; ++i)
                bars[i] = Bar.create();
             System.out.println(Arrays.asList(bars));
          } catch (Exception e) {
             e.printStackTrace();
          }
       }

       private Bar() { this.id = numBars++; }

       public String toString() { return "I am bar number " + this.id; }

       // Returns null once MAX_BARS instances have been created.
       public static Bar create() {
          Bar nextBar = null;
          if (numBars < MAX_BARS)
             nextBar = new Bar();
          return nextBar;
       }
    }

  • Multiple data loads in PSA with write optimized DSO objects

    Dear all,
    Could someone tell me how to deal with this situation?
    We are using write-optimized DSO objects in our staging area. These DSOs are filled with full loads from a BOB SAP environment.
    The content of these DSO objects is deleted before loading, but we would like to keep the data in the PSA for error tracking and solving. This also provides the opportunity to see the differences between two data loads.
    For normal operation the most recent package in the PSA should be loaded into these DSO objects (as in normal data staging in BW 3.5 and before).
    As far as we can see, it is not possible to load only the most recent data into the staging layer. This causes duplicate record errors when there are more data loads in the PSA.
    We already tried the "all new records" functionality in the DTP, but that only loads the oldest data package and does not process the new PSA loads.
    Does any of you have a solution for this?
    Thanks in advance.
    Harald

    Hi Ajax,
    I did think about this, but it is more of a workaround. Call me naive, but it should work as it did in BW 3.5!
    The proposed solution will require a lot of maintenance afterwards. Besides that, you also get a problem with PSA IDs after they have been changed: if you use the option to delete the content of a PSA table via the process chain, it will fail when the datasource is changed, due to a newly generated PSA table ID.
    Regards,
    Harald

  • Rename a File in a SharePoint document library through Client object model

    Hi,
    How to rename a file in a SharePoint document library through the Client Object Model?
    Thanks
    Poomani Sankaran

    Hi,
    According to your description, you want to rename a file in the document library using the SharePoint Client Object Model.
    Here is a code snippet works well in my environment for your reference:
    static void Main(string[] args)
    {
        string url = "http://sp2013sps/sites/test/";
        ClientContext clientContext = new ClientContext(url);
        Microsoft.SharePoint.Client.List spList = clientContext.Web.Lists.GetByTitle("Documents");
        clientContext.Load(spList);
        clientContext.ExecuteQuery();
        if (spList != null && spList.ItemCount > 0)
        {
            Microsoft.SharePoint.Client.CamlQuery camlQuery = new CamlQuery();
            camlQuery.ViewXml = @"<View> <Query> <Where><Eq><FieldRef Name='LinkFilenameNoMenu' /><Value Type='Computed'>New Microsoft Word Document.docx</Value></Eq></Where> </Query> <ViewFields><FieldRef Name='Title' /></ViewFields> </View>";
            ListItemCollection listItems = spList.GetItems(camlQuery);
            clientContext.Load(listItems);
            clientContext.ExecuteQuery();
            // FileLeafRef is the field that actually renames the file.
            listItems[0]["Title"] = "word.docx";
            listItems[0]["FileLeafRef"] = "word.docx";
            listItems[0].Update();
            clientContext.ExecuteQuery();
        }
    }
    More information about SharePoint Client Object Model:
    http://msdn.microsoft.com/en-us/library/office/ee537247(v=office.14).aspx
    http://www.codeproject.com/Articles/399156/SharePoint-Client-Object-Model-Introduction
    http://www.learningsharepoint.com/2010/07/12/programmatically-upload-document-using-client-object-model-sharepoint-2010/
    Best regards

  • Access current user's manager name in the console application (through Client Object Model)

    Hi Guys,
    Is there any way to retrieve the current logged-in user's Manager name in a console application?
    As I don't have access to the server where SharePoint 2010 is installed, I want to access it through the Client Object Model.
    arun singh

    Unfortunately, you can't use CSOM to do this in SharePoint 2010 (you can in SharePoint 2013!), but you CAN use the User Profile Service .asmx web service to accomplish this. You need to call the GetUserProfileByName method exposed in the http://<yourServerName>/_vti_bin/UserProfileService.asmx web service. Pass in the user name for the current user, and Manager will be one of the properties that is returned.
    Here is a link to a blog post with example code.
    Please mark my reply as helpful (the up arrow) if it was useful to you and please mark it an answer (the check box) if it answered your question! Thank you!
    Danny Jessee | MCPD - SharePoint Developer 2010 | MCTS - SharePoint 2010, Configuring
    Blog: http://dannyjessee.com/blog | Twitter: @dannyjessee
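
    For illustration only, a rough Java sketch of calling that web service with a hand-built SOAP envelope. The namespace and SOAPAction shown are the usual ones for UserProfileService.asmx but should be verified against the service's WSDL; the server name and account are placeholders, and real code would also need authentication (typically NTLM via java.net.Authenticator) and proper XML parsing of the PropertyData elements:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Scanner;

    public class GetManager {
        public static void main(String[] args) throws Exception {
            String ns = "http://microsoft.com/webservices/SharePointPortalServer/UserProfileService";
            String envelope =
                "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/'>" +
                "<soap:Body><GetUserProfileByName xmlns='" + ns + "'>" +
                "<AccountName>DOMAIN\\user</AccountName>" + // placeholder account
                "</GetUserProfileByName></soap:Body></soap:Envelope>";

            URL url = new URL("http://yourServerName/_vti_bin/UserProfileService.asmx");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"" + ns + "/GetUserProfileByName\"");
            OutputStream out = conn.getOutputStream();
            out.write(envelope.getBytes("UTF-8"));
            out.close();
            // The Manager value comes back as one of the PropertyData entries;
            // here the raw response XML is just dumped to stdout.
            Scanner sc = new Scanner(conn.getInputStream(), "UTF-8");
            while (sc.hasNextLine()) System.out.println(sc.nextLine());
            sc.close();
        }
    }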

  • CO_ITEM: write archive for object PSG is cancelled by system

    Dear Experts;
    I have one more problem in an archiving project at my client, using R/3 4.7.
    In preparation for this archiving project, we simulated all objects on the "development server" and they went OK, and we also executed what was recommended by SAP note 613707.
    Now we move to the next step, the QA server, where the data mirrors the Production server. Here comes the problem with "CO_ITEM": the system cancels the write process for object "PSG". For your information, we have checked all the parameters such as:
    - Technical setting
    - Residence time for CO line items
    When we check the log it says: "ABAP/4 processor: TSV_TNEW_OCCURS_NO_ROLL_MEMORY"
    What does it mean, and what are we supposed to do?
    Your advice and support is highly appreciated.
    Thank you,
    AZNI

    Since your QA system is a mirror of Prod, it contains a large number of CO line items compared to the Dev environment. Due to the large number of CO line items, the buffer size is being exceeded, resulting in the dump (TSV_TNEW_OCCURS_NO_ROLL_MEMORY means the system ran out of roll memory while extending an internal table).
    Check the OSS Note 888292 - Archiving CO_ITEM for orders terminates
    Hope this helps
    -Samanjay

  • Should OS/FileSystem caching be write-through?

    I have a question. I use Ubuntu. Should I mount my filesystem (which holds BDB's content) with the "-o sync" option? That is, should my file system cache be write-through?
    I have this question because, if I turn on the logging feature in Berkeley DB but leave the file system cache write-back, I don't know for certain whether the log is properly flushed to disk or not.

    Thanks George. I agree that mature applications would be better off mounting their filesystem with the "-o sync" option.
    But here is the thing: I ran an example test case where I inserted 10 million key-value pairs with logging enabled and saw that the average response time per insertion was 10 milliseconds, and I did the same experiment with logging disabled and saw that it, too, took 10 milliseconds per insertion on average.
    For the experiment with logging enabled, I create the environment with the DB_INIT_LOG and DB_INIT_TXN flags but don't surround the insertion requests with txn_begin() and txn->commit(). I guess this way of doing insertions is called autocommit. I hope I am doing this experiment right.
    Thanks for the pointers about set_flags() and DB_TXN_NOSYNC, I am going to look them up.
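
    On the DB_TXN_NOSYNC point, a tiny sketch using the Berkeley DB Java Edition API by way of illustration (the thread itself uses the C/C++ API, where the equivalent is setting DB_TXN_NOSYNC via set_flags()); it shows where the commit-durability trade-off is configured:

    import java.io.File;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Durability;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class NoSyncDemo {
        public static void main(String[] args) throws Exception {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            // COMMIT_NO_SYNC: commits do not force the log to disk, so they are
            // fast but may be lost in a crash -- the analogue of DB_TXN_NOSYNC.
            envConfig.setDurability(Durability.COMMIT_NO_SYNC);
            Environment env = new Environment(new File("/tmp/bdb-env"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database db = env.openDatabase(null, "demo", dbConfig);

            // A null transaction handle on a transactional database means
            // auto-commit, as in the experiment described above.
            db.put(null, new DatabaseEntry("key".getBytes("UTF-8")),
                         new DatabaseEntry("value".getBytes("UTF-8")));

            db.close();
            env.close();
        }
    }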

  • Write-through limitation and putAll

    Please find the quote below from the developer guide, particularly this sentence: "In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail." If putAll is called on a cache, will it result in one CacheStore.storeAll, or in many storeAll calls triggered from different Coherence nodes/servers? (Assume a distributed topology, Coherence 3.7.1.)
    Will a store transaction failure lead to a putAll transaction failure?
    Are there any patterns that show how Coherence works with typical databases?
    14.7.2 Write-Through Limitations
    Coherence does not support two-phase CacheStore operations across multiple CacheStore instances. In other words, if two cache entries are updated, triggering calls to CacheStore modules sitting on separate cache servers, it is possible for one database update to succeed and for the other to fail. In this case, it may be preferable to use a cache-aside architecture (updating the cache and database as two separate components of a single transaction) with the application server transaction manager. In many cases it is possible to design the database schema to prevent logical commit failures (but obviously not server failures). Write-behind caching avoids this issue as "puts" are not affected by database behavior (as the underlying issues have been addressed earlier in the design process).

    gs100 wrote:
    Thanks for the input, I have further questions based on these suggestions.
    1. Let us say one of the putAll calls fails: we would know that it has failed due to one or more underlying store/storeAll calls. And even if we roll back the Coherence transaction, the store/storeAll calls that succeeded would not be rolled back automatically, is that correct? If true, this means it would leave the underlying DB/store in a state inconsistent with the in-memory cache?
    I guess that is one of the reasons why the transaction framework does not support cache stores... also, write-behind would coalesce updates, which would have funny consequences with regard to the transactional context...
    2. How do we get the custom implementation of putAll that you suggested, to handle specific errors? Any pointers on this would be helpful.
    I guess it is not going to be posted; the Coherence team may or may not add something which is a bit more deterministic with regard to errors.
    A few aspects of Coherence behaviour (a.k.a pitfalls) which you need to be aware of to be able to implement your own solution:
    Exceptions propagating back to the client can happen in:
    - entry-processor (not for putAll specifically)
    - result serialization code (not for putAll specifically, but for processAll/aggregate for example)
    - deserialization code (indexes/filter-based backing map listeners/cache stores lead to deserialization even for putAll)
    - triggers (intentionally, too)
    - cache stores
    There is no place where you could catch any exceptions from inside the NamedCache call, so they will come out.
    Coherence may execute the operation on one thread per partition or one thread per multiple partitions, but never on multiple threads per partition. This means there may be multiple exceptions even from a single storage node, but at most one exception would be generated per partition (starting with 3.5).
    If you send multiple partitions with the same NamedCache call, you can lose exceptions as you wouldn't know if an exception would have or wouldn't have happened with a partition if it was sent alone instead of together with another on the same node.
    As you need to be able to return all exceptions from your method call, you have to produce and catch all of them and collect them otherwise you would lose all but one. To produce and catch all exceptions you have to produce all exceptions independently, i.e. different partitions must be operated on independently.
    To send an operation to a single partition only, you can separate the operations to different partitions by separating the keysets for different partitions with key-based operations, or applying a PartitionedFilter for filter-based operations.
    It is up to you where and how you iterate through the partitions. You can do it on the caller, you can do it on storage node from an Invocable sent via an InvocationService (in this case you can be either optimistic with ownership or chase a partition).
    3. We are thinking that the putAll Coherence implemented is the most optimized (parallelism); I am not sure a custom implementation can be as optimal (hope we don't end up calling entries one by one).
    You cannot implement it as optimally as Coherence itself does, as Coherence interleaves operations (messages) to independent partitions/nodes from a single thread (it does not have to wait for the return message) without waiting for the responses from individual nodes/partitions.
    You can either parallelize operations to multiple threads, or do the iteration on the single thread at the cost of higher latency.
    Best regards,
    Robert
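
    A rough sketch of the partition-splitting idea described above: group the keys by owning partition and issue one putAll per partition, so that at most one exception is produced per partition and no failure masks another (names are illustrative, and error handling is reduced to collecting the exceptions):

    import com.tangosol.net.NamedCache;
    import com.tangosol.net.PartitionedService;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class PartitionedPutAll {
        public static List<Exception> putAllByPartition(NamedCache cache, Map entries) {
            PartitionedService service = (PartitionedService) cache.getCacheService();
            // Group the entries by the partition that owns each key.
            Map<Integer, Map> byPartition = new HashMap<Integer, Map>();
            for (Object o : entries.entrySet()) {
                Map.Entry e = (Map.Entry) o;
                int part = service.getKeyPartitioningStrategy().getKeyPartition(e.getKey());
                Map group = byPartition.get(part);
                if (group == null) {
                    group = new HashMap();
                    byPartition.put(part, group);
                }
                group.put(e.getKey(), e.getValue());
            }
            // One putAll per partition: a failure in one partition cannot
            // mask a failure in another.
            List<Exception> failures = new ArrayList<Exception>();
            for (Map group : byPartition.values()) {
                try {
                    cache.putAll(group);
                } catch (Exception e) {
                    failures.add(e);
                }
            }
            return failures;
        }
    }

    This runs the per-partition calls sequentially on the caller, trading latency for isolated failures; as Robert notes, they could instead be parallelized across threads or pushed to the storage nodes with an InvocationService.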

  • Updation of List Settings through Client object model

    Hi all,
    Is there any way to update list settings through the Client Object Model?
    I want to work with all the list settings programmaticaly using client object model.
    Is there any way??? 

    Hi,
    A handful of operations can be done with the Client Object Model; it has certain limitations which are not in the server object model. The SharePoint Client Object Model comes with 3 APIs, i.e. .NET managed code, the JavaScript API and the Silverlight object model.
    For reference, I implemented test code to show a list on the Quick Launch.
             ClientContext clientContext = new ClientContext("Web FullUrl");
                Web web = clientContext.Web;
                List list = web.Lists.GetByTitle("Test");
                list.OnQuickLaunch = true;
                list.Update();
                clientContext.Load(list);
                clientContext.ExecuteQuery();
    Regards,
    Milan Chauhan

  • JUNIT : how to call static methods through mock objects.

    Hi,
    I am writing unit test cases for an action class. The method which I want to test calls a static method of a helper class. Though I create a mock of that helper class, I am not able to call the static methods through the mock object, as the methods are static. So control in my test case goes to that static method, and that static method in turn calls two or more different static methods. So I am testing the entire flow instead of testing just the unit of code in the action class, so it can't be called a unit test?
    Can anyone suggest how I can call static methods through mock objects.

    The OP's problem is that the object under test calls a static method of a helper class, for which he wants to provide a mock class.
    Hence, he must change the code under test to call the mock class instead of the regular helper class (because static methods are not polymorphic).
    That wouldn't have happened if this helper class had been coded to interfaces rather than static methods.
    instead of:
    public class Helper {
        public static void getSomeHelp() {
            // actual implementation
        }
    }
    public class MockHelper {
        public static void getSomeHelp() {
            // mock implementation -- never used, since static calls are not polymorphic
        }
    }
    do:
    public interface Helper {
        void getSomeHelp();
    }
    public class HelperImpl implements Helper {
        public void getSomeHelp() {
            // actual implementation
        }
    }
    public class MockHelper implements Helper {
        public void getSomeHelp() {
            // mock implementation
        }
    }
    public class ClassUnderTest {
        private Helper helper; // inject HelperImpl in production, MockHelper in tests
        public void methodUnderTest() {  // unchanged
            helper.getSomeHelp();
        }
    }

  • Personnel Cost Planning through org objects

    Can anybody tell me if salary survey data maintained for a position or job can be included when trying to collect data through org objects?
    I am trying to collect data through org objects, transaction PHCPDCOO, for jobs. I have also maintained the salary survey infotype, but it is not populating IT5010 with the survey data.
    Can anybody help?
    Thanks,
    Shipra

    It is extremely difficult to answer your question without knowing all the details of your scenario.
    I can briefly explain the scenario I used:
    1. I have maintained IT1005 on job level to further distribute this information along the organization structure.
    2. First of all, I had to run data collection on job level using the method "Infotype planned compensation: 1005" and a cost item that was a base for other items, e.g. basic salary. The system creates records in infotype 5010 "Planning of Pers. Costs" for the evaluated jobs.
    3. Then I collected cost items for positions using the method "Data from related cost planning objects", calculation type A, the basis that I previously set up, evaluation path O_O_S_C (I started from the head org. unit) and object type source C. I did it for positions to have the possibility to adjust cost items on position level using transaction PHCPDCUI - Edit Data; e.g. you can ask managers to review your defaults on position level before starting consolidation on org. unit level.
    4. Then you start data collection on org. unit level using the cost items on position level. The process is similar to step 3 but with a different evaluation path and source object.
    5. When it's done, you run cost planning (PHCPADMN - Manage) using a special plan for planning based on OM data.
    There are plenty of other scenarios, so I can't really give you an exact answer to your question, as I don't know which scenario you use.

  • Is there a Tool or function (in Illustrator or InDesign) that selects any shape and "punches" a "hole" of that shape all the way down through multiple objects to the paper or artboard?

    Is there a Tool or function (in Illustrator or InDesign) that selects any shape and “punches” a “hole” of that shape all the way down through multiple objects to the paper or artboard?

    In Illustrator, group the objects that you want to punch through, and use the Transparency palette.

Maybe you are looking for

  • Backing up iPad to iCloud

    I am backing up my iPad (less than 10 GB) to iCloud for the first time and the estimated time stated was 41 hrs. I turned off other devices using my WiFi and tried again and now it is 9 hrs. Is this normal?

  • Convert to .WMA

    My Verizon Wireless LG VX8200 music phone only plays .wma's. Is there a way that I convert .mp3's to .wma's on my mac? Are there any free programs i can use to do this?

  • What is /var/adm/messages error ID 29530 Daemon.notice

    Hi, I have a dns domain on Unix box, Solaris 9. Last Friday, I just added a new server into dns file and update dns. But someone how the dns didn't update. I checked all the messages file from /var/adm and found some errors start from last month whic

  • PS CC newbie with layer translate question

    I am a newbie in PS. Currently I am using Adobe PS CC, in which I want to merge quite a number of images together to form a "collage". Those images are already named with coordinates, like (0, 1).png with size 1280 x 1280 pixel, and I already load al

  • Slide Show Freezes after latest Apple Security Update

    Since installing the latest security update to 10.4.6 from Apple, initiating Slide Show causes iPhoto to freeze on my 2 Powerbooks and iBook 500. The only way to get out of it is to Force Quit iPhoto. This happens with iPhoto 4.0.3 and with 2.0. Befo