Write blocked waiting on read

Hi all,
I have been having difficulties with simultaneous reads and writes using AsyncIO. In the client/server application I am developing, a client may send requests or status information at any time, so the incoming stream has to be monitored continually.
The server responds only to certain status messages, so not every message received triggers a write back to the client. Likewise, the client in some cases only sends status messages depending on the last message it received from the server.
I've been having difficulties with writes blocking because of reads. Essentially I would like to poll for incoming data continually while still being able to flush writes immediately.
Using traditional blocking reads/writes, things hang indefinitely when attempting to write a message because the read has blocked waiting for input.
Using the IBM AsyncIO package (which is purported to be a faster implementation of NIO), the write blocks for some time until the read (I assume) relinquishes the socket, allowing the write to occur before reading resumes. The lag in this situation is significant.
Can someone provide an example using non-blocking reads/writes in which a server keeps reading on one thread while writes are attempted on another thread, without any lag?
Below is a basic overview of what is happening in my software:
public class MessageQueue {
   private LinkedList<Message> queue;

   /** Creates a new instance of MessageQueue */
   public MessageQueue() {
      queue = new LinkedList<Message>();
   }

   public synchronized void put(Message message) {
      queue.add(message);
      notifyAll();
   }

   public synchronized boolean isEmpty() {
      return queue.isEmpty();
   }

   public synchronized Message get() {
      while (queue.isEmpty()) {
         try {
            wait();
         } catch (InterruptedException ie) {
            // ignored: loop around and re-check the queue
         }
      }
      return queue.removeFirst();
   }

   public synchronized void close() {
      queue.clear();
      queue = null;
   }
}
public class InputReader implements Runnable {
   private MessageQueue messages;
   private AsyncSocketChannel async_channel;

   public InputReader(MessageQueue messages, AsyncSocketChannel async_channel) {
      this.messages = messages;
      this.async_channel = async_channel;
   }

   public long read(ByteBuffer b) {
      // Mirrors the write path in OutputWriter below; assumes the helper
      // exposes a read(...) analogous to its write(...).
      AsyncSocketChannelHelper helper = new AsyncSocketChannelHelper(this.async_channel);
      IAsyncFuture future = helper.read(b, 20000);
      return future.getByteCount();
   }

   public void run() {
      ByteBuffer b = ByteBuffer.allocateDirect(Message.SIZE);
      boolean running = true;
      while (running) {
         b.clear();
         if (read(b) == 0) {
            running = false;
         } else {
            b.flip();
            messages.put(new Message(b));
         }
      }
   }
}
public class OutputWriter implements Runnable {
   private MessageQueue messages;
   private AsyncSocketChannel async_channel;

   public OutputWriter(MessageQueue messages, AsyncSocketChannel async_channel) {
      this.messages = messages;
      this.async_channel = async_channel;
   }

   public long write(ByteBuffer b) {
      long bytes_written = 0;
      try {
         AsyncSocketChannelHelper helper = new AsyncSocketChannelHelper(this.async_channel);
         IAsyncFuture future = helper.write(b, 20000); // write, or time out after 20 seconds
         // Wait for completion of the write, or for the timeout to happen.
         // THIS IS WHERE THE PROBLEM LIES: the write does not happen straight away
         // because of the read operation. With traditional blocking IO this locks completely.
         bytes_written = future.getByteCount();
      } catch (AsyncTimeoutException ate) {
         System.err.println("Timed out after 20 seconds");
      }
      return bytes_written;
   }

   public void run() {
      boolean running = true;
      while (running) {
         Message m = this.messages.get();
         if (write(m.getByteBuffer()) == 0) {
            running = false; // nothing written: treat the connection as closed
         }
      }
   }
}
public class Controller {
   public Controller(AsyncSocketChannel async_channel) {
      MessageQueue in = new MessageQueue();
      MessageQueue out = new MessageQueue();
      InputReader ir = new InputReader(in, async_channel);    // reader fills the 'in' queue
      OutputWriter ow = new OutputWriter(out, async_channel); // writer drains the 'out' queue
      new Thread(ow).start();
      new Thread(ir).start();
      boolean running = true;
      Message m;
      while (running) {
         m = in.get();
         if ("REQUIRES_RESPONSE".equals(m.getStatus())) {
            // dummy example: once the right condition is met, a new message must be written
            out.put(m);
         }
      }
   }
}
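
As an aside, the hand-rolled wait/notify MessageQueue above could be replaced with java.util.concurrent's BlockingQueue, which gives the same blocking put/get semantics with less code. A minimal sketch (the class name MessageQueueSketch is illustrative, not part of my code):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch only: an unbounded BlockingQueue replaces the explicit wait()/notifyAll().
public class MessageQueueSketch {
   private final BlockingQueue<Message> queue = new LinkedBlockingQueue<Message>();

   public void put(Message message) throws InterruptedException {
      queue.put(message); // never blocks for an unbounded queue
   }

   public Message get() throws InterruptedException {
      return queue.take(); // blocks until a message is available
   }

   public boolean isEmpty() {
      return queue.isEmpty();
   }
}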

That makes me wonder what the problem I am experiencing actually is, then.
I initially had stock-standard java.net IO for socket reading and writing. The approach I took was to set up an input reader on its own thread and an output writer on its own thread.
When it came to writing data, however, things locked up. Stepping through the code in debug mode allowed me to see that the write method was not completing because the read method was waiting for input to come in.
I tested it using traditional buffered output which made a call to flush() afterwards, but it was getting stuck on the write() call.
I came to the conclusion that the read must be blocking the write from completing because of the response to this thread: http://forum.java.sun.com/thread.jspa?forumID=536&threadID=750707
On further debugging, write() never locked up when I didn't allow the input reader to perform a simultaneous read() on the socket.
Hence my belief that a java.net socket does block one operation while the other is in progress.
After dealing with IBM's AsyncIO package I'd be willing to wager that their ibmaio package is 50x more complex to use than standard Java sockets (barely any documentation/examples) so 10x complexity seems positively lightweight ;-)
So ejp, to clarify, would NIO help solve this blocking problem, or do you think something else is the culprit? It is hard to see what else it could be: to test things out I made two bare-bones test programs (one client and one server), so I don't feel it could be anything else in the code.
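(For reference, here is a minimal, hypothetical sketch of the kind of selector-based non-blocking setup I am asking about; the class and method names are illustrative, not from my code above. One selector thread services both directions on a non-blocking channel, so a pending read never delays a write, and writes that cannot complete immediately are finished when OP_WRITE fires.)

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch only: a single selector thread reads whenever data arrives and writes queued
// messages straight away, registering OP_WRITE only when the socket send buffer is full.
public class NonBlockingPeer implements Runnable {
   private final SocketChannel channel;
   private final Selector selector;
   private final Queue<ByteBuffer> outbound = new ConcurrentLinkedQueue<ByteBuffer>();
   private final ByteBuffer readBuffer = ByteBuffer.allocate(Message.SIZE);

   public NonBlockingPeer(SocketChannel channel) throws IOException {
      this.channel = channel;
      channel.configureBlocking(false);
      selector = Selector.open();
      channel.register(selector, SelectionKey.OP_READ);
   }

   // Called from any thread: queue the data and wake the selector so the write
   // is attempted straight away instead of after the next read completes.
   public void send(ByteBuffer data) {
      outbound.add(data);
      SelectionKey key = channel.keyFor(selector);
      key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
      selector.wakeup();
   }

   public void run() {
      try {
         while (channel.isOpen()) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
               SelectionKey key = it.next();
               it.remove();
               if (key.isReadable()) {
                  readBuffer.clear();
                  if (channel.read(readBuffer) == -1) {
                     channel.close();
                     return;
                  }
                  readBuffer.flip();
                  // hand readBuffer to whatever processes incoming messages
               }
               if (key.isValid() && key.isWritable()) {
                  ByteBuffer next;
                  while ((next = outbound.peek()) != null) {
                     channel.write(next);
                     if (next.hasRemaining()) {
                        break; // socket send buffer is full; keep OP_WRITE registered
                     }
                     outbound.poll();
                  }
                  if (outbound.isEmpty()) {
                     key.interestOps(SelectionKey.OP_READ);
                  }
               }
            }
         }
      } catch (IOException e) {
         e.printStackTrace();
      }
   }
}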
Thoughts?

Similar Messages

  • Hot Data Block with concurrent read and write

    Hi,
    This is from ADDM Report.
    FINDING 8: 2% impact (159 seconds)
    A hot data block with concurrent read and write activity was found. The block
    belongs to segment "SIEBEL.S_SRM_REQUEST" and is block 8138 in file 7.
    RECOMMENDATION 1: Application Analysis, 2% benefit (159 seconds)
    ACTION: Investigate application logic to find the cause of high
    concurrent read and write activity to the data present in this block.
    RELEVANT OBJECT: database block with object# 73759, file# 7 and
    block# 8138
    RATIONALE: The SQL statement with SQL_ID "f1dhpm6pnmmzq" spent
    significant time on "buffer busy" waits for the hot block.
    RELEVANT OBJECT: SQL statement with SQL_ID f1dhpm6pnmmzq
    DELETE FROM SIEBEL.S_SRM_REQUEST WHERE ROW_ID = :B1
    RECOMMENDATION 2: Schema, 2% benefit (159 seconds)
    ACTION: Consider rebuilding the TABLE "SIEBEL.S_SRM_REQUEST" with object
    id 73759 using a higher value for PCTFREE.
    RELEVANT OBJECT: database object with id 73759
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Wait class "Concurrency" was consuming significant database
    time. (4% impact [322 seconds])
    What does a hot block with concurrent read and write activity mean?
    Does rebuilding the table solve the problem, as the ADDM report suggests?

    Hi,
    You must be suffering from buffer busy waits.
    When a buffer is updated, the buffer will be latched, and other sessions cannot read or write it.
    You must have multiple sessions reading and writing that one block.
    Recommendation 2 results in fewer records per block, so there is less chance that multiple sessions are modifying and reading one block. It will also result in a bigger table.
    The recommendation doesn't make sense for tablespaces with segment space management set to AUTO, as pctfree does not apply to those tablespaces.
    Buffer busy waits will also occur if the blocksize of your database is set too high.
    Sybrand Bakker
    Senior Oracle DBA

  • Wait events 'direct path write'  and 'direct path read'

    Hi,
    We have a query which is taking more than 2 min. It's a 9.2.0.7 database. We took the trace/tkprof of the query and identified that there are many 'direct path write' and 'direct path read' wait events in the trace file.
    WAIT #3: nam='direct path write' ela= 5 p1=201 p2=70710 p3=15
    WAIT #3: nam='direct path read' ela= 170 p1=201 p2=71719 p3=15
    In the above, "p1=201" is a file_id, but we could not find any data file, temp file, control file with that id# 201.
    Can you please let us know what's "p1=201" here, how to identify the file which is causing the issue.
    Thanks
    Sravan

    What does "show parameter db_files" return? My guess is that it returns 200.
    The direct path read and direct path write events are reads and writes to the TEMP tablespace. In those wait events the file# is reported as db_files + temp file id, so 201 means temp file #1.
    Now, as to your actual performance problem.
    Without seeing the SQL and the corresponding execution plan, it's impossible to be sure. However, the most common causes of temp writes are sort operations and group by operations.
    If you decide to post your SQL and execution plan, please be sure to make it readable by formatting it. Information on how to do so can be found here.
    Hope that helps,
    -Mark
    Edited by: mbobak on May 1, 2011 1:50 AM

  • Does SocketChannel.write() block

    I have a server communicating with its clients using NIO and therefore SocketChannels. When data is sent to clients, the method SocketChannel.write(ByteBuffer) is used. The question is: does the execution time of SocketChannel.write() depend on the speed at which the client receives data? Does the method block in any way, or does it just send what is possible and then return?

    If you have the channel in blocking mode, it also depends on how fast the client is reading. If the client isn't reading at all, ultimately its receive buffer will fill up, then the sender's sending buffer will fill, and then the write will block waiting for space in the send buffer; once the client starts reading again, space will become available and the write can complete. That's not to say that every write waits for the completion of every prior read, it's a matter of buffering and windowing, and the effect is rather decoupled because of the presence of two intermediate buffers. However it certainly can occur.
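
    By contrast, in non-blocking mode write() never blocks: it transfers whatever fits in the send buffer and returns the count, which may be zero. A hedged sketch of handling that case (identifiers are illustrative, not from the original post):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;

    // Sketch only: with a non-blocking channel, write() returns immediately with however
    // many bytes fit in the send buffer (possibly zero). If data remains, register
    // OP_WRITE so a selector loop can finish the write later.
    class NonBlockingWrite {
       static void write(SocketChannel channel, SelectionKey key, ByteBuffer data)
             throws IOException {
          channel.write(data);
          if (data.hasRemaining()) {
             key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
          }
       }
    }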

  • Very high data block waits

    I have one table xxxx in a tablespace tbx, and the tablespace tbx has only 1 datafile. The table xxxx is 890MB with 14 million records. The datafile size is 2048MB. This table is frequently accessed with insert/delete/select. The system spends a lot of time waiting on this datafile. If I create a new tablespace abc with 20 datafiles of about 100MB each, would it help reduce the data block wait count? The pctfree/pctused is 10/40 respectively.
    Can anyone please give me advice on how to resolve this?

    I am looking at Oracle statistics. We use SAN technology with RAID 0+1 striped across all disks.
    First I use this query to get the wait statistics:
    select time, count, class
    from v$waitstat
    order by time, count;
    From the query above I got this result; the database has only been up since 02/17/2004, about 4 days ago:
    TIME COUNT CLASS
    0 0 sort block
    0 0 save undo block
    0 0 save undo header
    0 0 free list
    0 0 bitmap block
    0 0 unused
    0 0 system undo block
    0 0 system undo header
    0 0 bitmap index block
    10 10 extent map
    48 656 undo header
    271 853 undo block
    301 730 segment header
    780382 1214405 data block
    Then I use this query to find which datafile is being hit the most:
    select count, file#, name
    from x$kcbfwait, v$datafile
    where indx + 1 = file#
    order by count desc;
    The query above returned:
    COUNT     FILE#     NAME
    473324     121     /xx/xx_ycm_tbs_03_01.dbf
    104179     120     /xx/xx_ycm_tbs_02_01.dbf
    93336     118     /xx/xx_idx_tbs_03_01.dbf
    93138     119     /xx/xx_idx_tbs_03_02.dbf
    80289     90     /xx/xx_datafile67.dbf
    64044     108     /xx/xx_ycm_tbs_01_01.dbf
    61485     41     /xx/xx_datafile25.dbf
    61103     21     /xx/xx_datafile8.dbf
    57329     114     /xx/xx_ycm_tbs_01_02.dbf
    29338     5     /xx/xx_datafile02.dbf
    29101     123     /xx/xx_idx_tbs_04_01.dbf
    File# 121 is the only datafile in its tablespace, and that tablespace holds only one table. File# 120 is the same: it is in another tablespace that contains only one table.
    At the same time, using top on Solaris I see iowait ranging between 5-25% during busy hours.

  • SocketChannel.write() blocking application

    Greets,
    I'm developing a huge application and have a latency/blocking problem using the write(ByteBuffer) method on a SocketChannel connection.
    Running java 1.5 (diablo) on Freebsd 6 servers, 4Gb ram (2.2 allocated to jvm), with dual xeon dual-core (total 4 cores)
    Here is the application schema :
    - A thread accepting connexion on the socketchannel
    - A thread selecting keys with data to process, enqueuing it after some basic checks on a command FIFO
    - A thread getting commands from the FIFO and processing 'em, generating answers on 4 answer FIFOs
    - 4 threads (1 per FIFO) to get answers and send 'em back to the socket.
    The application usually runs with 4500-5000 simultaneous clients.
    My problem is that the write() method alone sometimes takes over 20ms to write a message of fewer than 50 bytes.
    As I have about 25000 answers to process each second, when some of them decide to be slow the 4 threads run slowly, and all connected clients suffer that latency for the few minutes needed to empty the FIFOs.
    Every client socket get about 5 answers per second.
    Over about 1 hour of running there are about 3 'peaks' of slowness that I cannot explain yet. That's why I need advice!
    I monitored the application when such a case happens. TOP indicates 40% cpu idle, the JVM has >500Mb of free memory, and the network runs at about 1.2Mbps where the maximal transfer rate is >20Mbps. netstat -m reported no errors and a large amount of free buffers available.
    The only slow operation is the write() method, which usually completes in under 1ms per call, but in those cases I see delays of over 20ms.
    freebsd tcp default sendbuffer size is 64k, receive buffer is 32k
    The average received command size is below 1k; the average answer size sent is below 8k.
    This application is running live, and I cannot emulate 5000+ connections with similar behaviour to test without risking crashing everything.
    What points could be responsible of such slow write() calls ? Seems it's not CPU, not RAM, not network itself...
    I suppose it's the network buffers that are causing problems. But I don't really know whether I should set them to a smaller size, fitting my requirements, or to a larger size, to be sure that full buffers won't block everything.
    I need advices. Thanks for your ideas !
    Bill

    Hmm. So you're happy to lose data?
    A few comments:
    (a) SocketChannels are thread-safe. I don't think you need the synchronization at all, unless maybe multiple writing threads are possible. I would eliminate that possibility and the sync myself.
    (b) If you're getting write delays of 30ms occasionally, the sync must also take 30ms at the same points if it is doing anything at all, i.e. if the possibility of multiple writing threads does exist. So maybe it doesn't?
    (c) I would have a good look at this:
    http://forum.java.sun.com/thread.jspa?threadID=459338
    and specifically the part on how to manage a channel that presents write blocks, using OP_WRITE when it happens and turning it off when it doesn't.
    (d) You seem to be using one output buffer for all channels. You might be better off using a small one per channel. That way you don't clear; you just do put/flip/write/compact, and if the write returned 0 just post OP_WRITE for next time around the select loop. Then you won't lose any data at all, except to a client who really isn't reading: you can detect that situation by keeping track of the last successful write time to a channel, and when there is pending data and the last write is too long ago, have a think about what the block means in terms of the application. Maybe you should just disconnect the client? (A sketch of this per-channel approach follows after this reply.)
    (e) It would be interesting to know how many times the write loop looped when you get these large delays, and also what the data size was, and also to know that for the other cases to see if there is a difference.
    (f) Generally from a fairness point of view I prefer not to have write loops: just one attempt, and if it returns even a short write I post OP_WRITE as above. Otherwise you're spending too long servicing one channel.
    You can contact me offline via http://www.telekinesis.com.au if you like.
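
    A minimal sketch of the per-channel buffer idea in point (d), with illustrative names (this is an interpretation of the advice above, not code from either poster): each channel keeps its own pending buffer in put/flip/write/compact rotation, and OP_WRITE is posted only when a write comes up short.

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;

    // Sketch only: one pending buffer per channel so nothing is lost on a short write.
    class ChannelOutput {
       private final ByteBuffer pending = ByteBuffer.allocate(8 * 1024);
       private long lastSuccessfulWrite = System.currentTimeMillis();

       // Queue an answer for this channel; the buffer stays in "put" mode between flushes.
       void enqueue(ByteBuffer answer) {
          pending.put(answer);
       }

       // Called from the select loop when the key is writable (or right after enqueue).
       void flush(SocketChannel channel, SelectionKey key) throws IOException {
          pending.flip();
          int written = channel.write(pending);
          pending.compact();
          if (written > 0) {
             lastSuccessfulWrite = System.currentTimeMillis();
          }
          if (pending.position() > 0) {
             // Short write: ask for OP_WRITE and try again next time around the select loop.
             key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
          } else {
             key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
          }
       }

       // A client with pending data that hasn't accepted any for too long probably isn't reading.
       boolean looksStalled(long timeoutMillis) {
          return pending.position() > 0
                && System.currentTimeMillis() - lastSuccessfulWrite > timeoutMillis;
       }
    }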

  • Non-Blocking call to read the Keyboard

    Does anyone know how to make a Java program perform a non-blocking call to read the keyboard? E.g. write a program which generates prime numbers until a keyboard key is pressed.

    If you use a GUI you can use a KeyListener.
    That would only work if your GUI elements have focus at the time.
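
    For a console program, one possible approach is to poll System.in.available() between units of work. This is only a sketch with an assumption worth flagging: most consoles are line-buffered, so input typically arrives only after Enter is pressed.

    import java.io.IOException;

    // Sketch: generate primes until some keyboard input is available on stdin.
    public class PrimesUntilKeypress {
       public static void main(String[] args) throws IOException {
          int candidate = 2;
          while (System.in.available() == 0) { // non-blocking check for pending input
             if (isPrime(candidate)) {
                System.out.println(candidate);
             }
             candidate++;
          }
          System.out.println("Input received, stopping.");
       }

       private static boolean isPrime(int n) {
          if (n < 2) return false;
          for (int i = 2; (long) i * i <= n; i++) {
             if (n % i == 0) return false;
          }
          return true;
       }
    }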

  • DIO Port Config & DIO Port Write Block Diagram Errors (Call Library Function Node:libra​ry not found or failed to load)

    Hi Guys, need help on this.
    I have this LabVIEW program that used to work on the old computer.
    The old computer crashes most of the time, so I upgraded the computer
    and used its Hard Drive as slave to the new computer.
    I have no idea where its installers are, since the guy that made the program is not in my department anymore.
    I downloaded all the drivers needed from NI: NIDAQ9.0, NIVISA,NI488.2, 
    and drivers of some instruments needed in the setup. I'm using LabVIEW8.2.
    Everything's fine until I open the LabVIEW program for our testing.
    Here goes the error:
       DIO Port Config
       DIO Port Write
    Block Diagram Errors
       Call Library Function Node: library not found or failed to load
    Attachments:
    ErrorList.JPG 200 KB

    Honestly, I'm a newbie on Labview. I just want this old program to run on the new computer.
    The guys that installed the drivers on the old computer are no longer here in my department.
    And I have no idea where the drivers are. So I just downloaded the drivers needed for my hardware and instruments.
    Here's my hardware: (cards: PCI-DIO-96, PCI-GPIB), (instruments: SCB100,E4407B, HP83623, HP3458, HP8657)
    OS: Windows XP Pro
    By the way, I have unzipped the TraditionalDAQ drivers. First I tried the 7.4.1, but installation error appeared.
    I thought maybe the installer is corrupted, so I downloaded the 7.4.4 and unzipped it.
    But, still same installation error appears. I don't understand, both TraditionalDAQ drivers have same installation error.
    Now I have tried the DAQmx 8.7.2 driver, but still the DIO Port Config and DIO Port Write have errors.

  • Waiting for read from 'fileAdapter'. Asynchronous callback.

    I am trying to implement a mid process receive activity for receiving from a file adapter.
    I have used a correlation set; the first receive activity receives input from a web client and the second receive activity receives from the adapter.
    The problem is that in the Flow Trace the second receive activity keeps showing pending, waiting for a read from 'fileAdapter'. Asynchronous callback.
    Any help would be appreciated

    Hello,
    I have got the same problem. I tried setting up a CorrelationSet, but I could not find a solution ... The Receive activity is still waiting for a dequeue from AQ (and what's more, the message is removed from the queue by the AQ adapter immediately after the BPEL process is deployed; the Receive activity has no information about this dequeue, so it keeps waiting).
    Could you please write more information?
    Many thanks,
    martin

  • A clarification about block# wait event parameter ....

    Hi ,
    In Oracle Database Reference of 10g (Part Number B14237-02) about the block# wait event parameter is pointed out ... :
    This is the block number of the block for which Oracle needs to wait. The block number is relative to the start of the file. To find the object to which this block belongs, enter the following SQL statements:
    select name, kind
    from ext_to_obj_view
    where file# = file#
         and lowb <= block#
         and highb >= block#;
    Can you give me a simple example of using the above SQL statement, as the ext_to_obj_view object does not exist?
    Many thanks ,
    Simon

    This view is created by $ORACLE_HOME/rdbms/admin/catclust.sql script (to be run by sys user).
    http://download-uk.oracle.com/docs/cd/B19306_01/rac.102/b14197/monitor.htm#RACAD718
    Nicolas.

  • When I want to download an app, it shows "download waiting" at the bottom of the icon and nothing changes

    When I want to download an app, it shows "download waiting" at the bottom of the icon and nothing changes...
    Help me please

    No, you can't update your 2G iPad past iOS 4.2.1. The graphics hardware on the 2G does not support the latest OpenGL ES.
    And yes, there are not that many apps that still work with a 2G on 4.2.1.

  • Writing the file using Write to SGL and reading the data using Read from SGL

    Hello Sir, I have a problem using the Write to SGL VI. When I am trying to write the captured data using DAQ board to a SGL file, I am unable to store the data as desired. There might be some problem with the VI which I am using to write the data to SGL file. I am not able to figure out the minor problem I am facing.  I am attaching a zip file which contains five files.
    1)      Acquire_Current_Binary_Exp.vi -> This is the VI which I used to store my data using Write to SGL file.
    2)      Retrive_BINARY_Data.vi -> This is the VI which I used to Read from SGL file and plot it
    3)      Binary_Capture -> This is the captured data using (1) which can be plotted using (2), and what I observed is that the plot is different and also the time scale is not as expected.
    4)      Unexpected_Graph.png is the unexpected graph when I am using Write to SGL and Read from SGL to store and retrieve the data.
    5)      Expected_Graph.png -> This is the expected data format I supposed to get. I have obtained this plot when I have used write to LVM and read from LVM file to store and retrieve the data.
    I tried modifying the sub-VIs a lot but it doesn't work for me. I think I am making some mistake while writing the data to SGL and reading the data from SGL. Also, I don't know why my graph is not like (5); rather I am getting something like what is in (4). It's totally different. You can also observe the difference between the time scales of (4) and (5).
    Attachments:
    Krishna_Files.zip 552 KB

    The binary data file has no time axis information, it is pure y data. Only the LVM file contains information about t(0) and dt. Since you throw away this information before saving to the binary file, it cannot be retrieved.
    Did you try wiring a 2 as suggested?
    (see also http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=925 )
    Message Edited by altenbach on 07-29-2005 11:35 PM
    LabVIEW Champion . Do more with less code and in less time .
    Attachments:
    Retrive_BINARY_DataMOD2.vi 1982 KB

  • Has anyone else ever had a problem where you had to perform 2 datasocket writes before the datasocket read would pick up the change?

    I have a local VI that is simply a control that writes to the datasocket server whenever the control value changes (the dataitem on the server is permanent - it's initialized and never released by the server).
    In the same local VI I have a datasocket read polling a different dataitem on the server.
    The remote machine has a VI that reads the permanent dataitem on the server once per second.
    For some reason, after adding the local VI mentioned above, the remote VI stopped picking up the first change in the permanent variable.
    I'd start both the local and remote VIs...
    Then I'd change the local control and the remote VI would not update - as if the datasocket write (upon adjusting the control) did not take place. So I'd change the control value again - this time the remote VI would update to this new value. And from here on out the remote VI would update correctly. This problem only occurs when the local VI is first started up.
    What in the heck is going on?

    Gorka is right, this came up on Info-LV a few days ago. Someone
    described a similar problem. I replied that I had seen similar
    behaviour, reported it to NI, and they verified a bug. There is no fix
    yet, but they are aware of it and will fix it. No anticipated release
    date for the fix.
    Regards,
    Dave Thomson
    David Thomson 303-499-1973 (voice and fax)
    Original Code Consulting [email protected]
    www.originalcode.com
    National Instruments Alliance Program Member
    Research Scientist 303-497-3470 (voice)
    NOAA Aeronomy Laboratory 303-497-5373 (fax)
    Boulder, Colorado [email protected]

  • How to write a program to read any texts in any ABAP program?

    Hi Experts,
    How can I write a program to read a specific coding section, or any text, in any ABAP program?
    For example, I want to write a program to count how many 'LOOP' and 'ENDLOOP' statements are in any other program.
    Thanks!
    Best regards,
    Hao

    Hi,
    Follow the URL given below for a program that reads another program into an internal table.
    http://abap4.tripod.com/Upload_and_Download_ABAP_Source_Code.html
    Once the code is in the internal table, you can do the necessary string search.

  • Virt-install hangs at "Write protecting the kernel read-only data"

    I am trying to Install Oracle Linux 5.4 as a Paravirtualized Machine on an Oracle VM Server (2.2.1).
    Following are the steps that I followed:
    I have the dvd iso file on /OVS/iso_pool
    1. mkdir -p /el/EL5-x86
    2. mount -t iso9660 -o ro,loop /OVS/iso_pool/Enterprise-R5-U4-Server-x86_64-dvd.iso /el/EL5-x86
    3. service portmap start
    4. service nfs start
    5. exportfs *:/el/EL5-x86
    6. mkdir /OVS/running_pool/vm01
    7. virt-install
    and gave the following details
    Name of the virtual machine: vm01
    RAM:7168
    disk path: /OVS/running_pool/vm01/system.img
    disk space: 80GB
    graphics support: yes
    install location- nfs:OVM server ipaddress:/el/EL5-x86
    It starts the install, but hangs at
    XENBUS: Device with no driver: device/vbd/51713
    XENBUS: Device with no driver: device/vbd/51714
    XENBUS: Device with no driver: device/vif/0
    XENBUS: Device with no driver: device/console/0
    Initalizing network drop monitor service
    Write protecting the kernel read-only data: 483k
    Can you please help
    Thanks,
    Radhika

    Yes I am able to connect to the guest vnc console, and proceed with the install.
    I did: I chose a language, chose a keyboard, and configured TCP/IP (manual TCP/IP configuration); after this I am getting the message "That directory could not be mounted from the server".
    I tried giving the IP address of the OVM Server where the directory was mounted; nothing is working. I cannot proceed any further.
