File being written to disappears

In our implementation, we use ObjectOutputStream and ObjectInputStream to write and read our serialized objects:
inFile = new ObjectInputStream( new BufferedInputStream(new FileInputStream(FileName), 262144));
outFile = new ObjectOutputStream(new BufferedOutputStream(new FileOutputStream(FileName),262144));
During the read/write operations, the file suddenly disappears (it can no longer be seen with the 'ls' command), but reading and writing continue without any exceptions (as long as the file was opened before it mysteriously disappeared from the OS).
If we need to open the file after it has disappeared, we get a 'file not found' exception (obviously). We don't know why the file disappears (at a random point in time) without any delete action from our application or anywhere else.
The OS version is SunOS 5.7 Generic_106541-08 sun4u sparc SUNW,Ultra-80.
The Java version used in our testing is Java version "1.2.2" Solaris VM (build Solaris_JDK_1.2.2_05a, native threads, sunwjit).

Is this a local filesystem? I would guess there would be more chance of error (especially after the mentioned time delay) if it were remote ... I'm still a bit suspicious of the buffer size. Try changing it to something smaller (like 4096) and see if the behavior changes. If that does not change the behavior, try a different VM version. If neither of these helps, try going to Slowaris support ... I'd be more inclined to think it's an underlying platform bug than a VM bug. Just a hunch, though.
Good luck,
-Derek
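
For a quick experiment, a minimal sketch of the smaller-buffer suggestion might look like this (the 4096-byte buffer, the file name, and the explicit flush/close handling are assumptions for illustration, not taken from the original code):

import java.io.*;

// Small test harness: same stream stack as in the question, but with a 4096-byte
// buffer and explicit close() in finally, to help isolate a VM or platform problem.
public class SerializeTest {
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        String fileName = "objects.ser"; // placeholder name, not from the original post

        ObjectOutputStream outFile = new ObjectOutputStream(
                new BufferedOutputStream(new FileOutputStream(fileName), 4096));
        try {
            outFile.writeObject("some serializable object");
            outFile.flush();
        } finally {
            outFile.close();
        }

        ObjectInputStream inFile = new ObjectInputStream(
                new BufferedInputStream(new FileInputStream(fileName), 4096));
        try {
            Object obj = inFile.readObject();
            System.out.println("Read back: " + obj);
        } finally {
            inFile.close();
        }
    }
}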

Similar Messages

  • Regarding File being written to Appl server

    Hi Pals,
    Can we restrict the size of the file that is being written to the application server through OPEN DATASET or any other command?
    We have a report which selects data based on a given time period.
    The problem we are facing is that even for a period of 1 month the data file written to the appl server is around <b>14-15 GB</b>. Hence we have been asked to split the file that is being written.
    However, the other option of reducing the selection time period has been ruled out by our clients.
    Please suggest.
    Thanks in advance.
    Rgrds,
    Gayathri N.

    Hi,
    Try fetching the data from the cursor in fixed-size packages and writing each package to its own file, something like this:
          i_maximum_lines = 99999999.
          block_size      = 100000.
          do.
    *       Start with an empty internal table and bump the file counter for this package.
            refresh <l_table>.
            v_filect = v_filect + 1.
            write v_filect to v_filecount left-justified.
    *       Fetch the next package of rows from the open cursor.
            fetch next cursor i_cursor
                  appending corresponding fields of table
    *             <l_table>  package size i_maximum_lines.
                  <l_table>  package size block_size.
            if sy-subrc ne 0.
    *         No more data: note the table name and total row count, then close the cursor.
              e_table = i_table-tabname.
              e_dbcnt = sy-dbcnt.
              clear cursor_flag.
              close cursor i_cursor.
              exit.
            endif.
    *       sy-dbcnt is cumulative, so the rows in this package are the difference from the previous fetch.
            delta_dbcnt = sy-dbcnt - old_dbcnt.
            old_dbcnt   = sy-dbcnt.
          enddo.
    Regards
    Prabhu

  • .SHMDJOXSHM_EXT Files Being Written to /tmp

    Recently, we've noticed files for each of our 11g databases being written to the /tmp directory. The file names are formatted like this: .SHMDJOXSHM_EXT_nnn_sid_nnnnnnnn.
    These files appear sporadically for each database running on a machine, sometimes 100+ a minute. They are binary files.
    We're running 11.1 on Solaris 10 with containers.
    A Google search finds only one occurrence of this situation and Metalink support suggested I pose the question here.
    We're beginning to move our applications to their production environment and it would be good to determine if the cause for these files was of concern.

    Maybe you should raise an incident with support and show them that this very thread has had no response for the last 3 years...
    +-- thread locked, please open your own defining your own environment, thanks --+
    Nicolas.

  • Are files being written when HDV conforms?

    What exactly happens when my HDV sequence conforms for tape? Is anything being written to the HD?
    Thanks!

    Hi there
    My (basic) understanding of this is that upon conform the computer rebuilds the long GOP structure of your shots, enabling seamless playback to tape.
    When you shoot HDV, each time you record a shot there is a long GOP structure on tape. When you edit you break that structure, and your processor has to rebuild it on the fly so that you can see what you are doing (this is one reason why monitoring and cutting HDV is not straightforward).
    When you go to tape, the processor effectively renders this long GOP structure back into place. I'm not sure if it's a render written to the HD though.
    As I said, that's my understanding; there are loads of people here more qualified than me to comment on this, so I may stand corrected shortly. In the meantime, google "long gop" for more detail.
    best regards
    Andy

  • XML file being written to and queried are in two different language dir

    I have a script that writes out an XML file and will read it back for checking purposes. I then have a different script that will read the XML file and act upon it (in this case for Open, Closed for Severe Weather, Closed for Holiday, etc). What happens is that when I write to the XML file, the file is put into the Document Management/en_US folder. When I try to read from the file, it reads from the Document Management/en folder (which is probably a holdover from our previous versions).
    So, I look into the System/Language area and see that the IVR language configuration is en_US and the Default IVR Configuration Language is set to en. So, I changed the second one to be en_US.
    Now, when I try to read the file it reads from the en_US folder so everything looks great. But, now when I try to write the file it writes to the en folder.
    If I switch the Default IVR Configuration Language back to en, everything goes back to how it worked at the beginning of the note.
    So, I have been fighting with this for a few days now and cannot figure out why the system will not read and write from/to the same folder. Can anyone come up with a reason? I see nothing in the scripts that specifies a different folder, so I am utterly confused. With it jumping between folders I feel like I am in some Abbott and Costello film....
    Thanks for any help
    Dave

    The good news is we can be lazy, but first let's check one more thing: is your Trigger set to use the System Default language, en_US, or en? If it's set to en, that might explain the inconsistency.
    Now the lazy part: when the engine cannot find a document in en_US it is supposed to auto-search the en and then default folders as well. So, you should be able to let it write to en and read from en_US because when it doesn't find it in en_US it should go look in en and then default anyways.
    If none of that is working either then I declare shenanigans, I mean defect! Time to open a TAC SR.
    Please remember to rate helpful responses and identify helpful or correct answers.

  • Moving the cursor inside a file being written.

    Is there any way to move the cursor to a specific location in a file that I am editing.
    for example:
    moveCursor(int row, int col)
    moveRight(int col)
    moveToEnd(); // Move to end of current row
    I know that I can use RandomAccessFile, but that only provides skipBytes(int), seek(long), and getFilePointer().
    Thanks,
    Marcus

    I don't know of anything like that in the api, but you could roll your own. You need to know the length of your lines (which I assume you do since it's a RAF). It'll also be easier to keep track of what row and col you're on, so you don't have to recompute it continually (this may not be true depending on the rest of your design).
    To move the cursor to a given row and col, you seek to (row * RECORD_LENGTH + col).
    To move right, just move the pointer ahead the difference between the current col and the given col.
    Moving to the end of the current row is the same as moving right, just with the last column specified.
    Add in some bounds checking to make sure you don't go past the end of a row, and you should be good.
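
    For illustration, a rough Java sketch of this approach could look like the following (the FileCursor class name and the fixed RECORD_LENGTH are hypothetical, not from the thread):

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Sketch of a cursor wrapper over RandomAccessFile, assuming fixed-length rows
    // of RECORD_LENGTH bytes, so a (row, col) position maps to a byte offset via seek().
    public class FileCursor {
        private static final int RECORD_LENGTH = 80; // assumed row length; adjust to your format
        private final RandomAccessFile raf;

        public FileCursor(RandomAccessFile raf) {
            this.raf = raf;
        }

        // Move the cursor to an absolute row/column position.
        public void moveCursor(int row, int col) throws IOException {
            if (col < 0 || col >= RECORD_LENGTH) {
                throw new IllegalArgumentException("col out of range: " + col);
            }
            raf.seek((long) row * RECORD_LENGTH + col);
        }

        // Move right within the current row to the given column.
        public void moveRight(int col) throws IOException {
            long rowStart = (raf.getFilePointer() / RECORD_LENGTH) * RECORD_LENGTH;
            raf.seek(rowStart + Math.min(Math.max(col, 0), RECORD_LENGTH - 1));
        }

        // Move to the end of the current row.
        public void moveToEnd() throws IOException {
            moveRight(RECORD_LENGTH - 1);
        }
    }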

  • File corruption on SDCard when multiple files are being written from WinCE 6.0R3

    We currently have file corruption problems which we have been able to reproduce on our system which uses WinCE 6.0R3. We have an SDCard in our system which is mounted as the root FS.  When multiple files are being written to the file system we occasionally
    see file corruption, with data destined for one file ending up in another file, or in another location in the same file.  We have already written test SW that we have been able to use to reproduce the problem, and have worked with the SDCard vendor to
    check that the memory controller on the card is not the source of the problems.
    We know that the data we send to WriteFile() is correct, and that the data which eventually gets sent through the SDCard driver to the SD card is already corrupted.
    We believe that the problem is somewhere in the microsoft private sources between the high level filesystem API calls and the low level device calls that get the data onto the HW.
    We have confirmed that the cards that get corrupted are all good and this is not a case of poor quality flash memory in the cards. The same cards that fail under WinCE 6.0R3 never fail under the same types of testing on Windows, Mac OS X, or Linux.  We
    can hammer the cards with single files writes over and over, but as soon as multiple threads are writing multiple files it is only a matter of time before a corruption occurs.
    One of the big problems is that we are using the sqlcompact DB for storing some data and this DB uses a cache which gets flushed on its own schedule. Often the DB gets corrupted because other files are being written when the DB decides to flush.
    So we can reproduce the error (with enough time), and we know that data into the windows CE stack of code is good, but it comes out to the SDcard driver corrupted.  We have tried to minimize writes to the file system, but so far we have not found a
    way to make sure only one file can be written at once. Is there a setting or an API call that we can make to force the OS into only allowing one file write at a time, or a way of seeing how the multiple files are managed in the private sources?
    Thanks
    Peter

    All QFE's have been applied we are building the image so we have some control.
    I have built an image which uses the debug DLLs of the FATFS and I have enabled all of the DebugZones.  The problem is still happening. From the timings in the debug logs and the timestamps in the data which corrupts the test file, I have been able
    to see that the file is corrupted AFTER the write is complete. Or at least that's how it seems.
    We finished writing the file and closed the handle. Then more data is written to other files. When we get around to verifying the file it now contains data from the files that were subsequently written.
    What I think I need to do is figure out in detail how the two files were "laid down" onto the SDCard.  If the system used the same cluster to write the 2 files then that would explain the issue.

  • Ocrfile is not being written to.  open file issues.  Help please.

    I've been troubleshooting an open file issue on our Test environment for quite a while now. Oracle has had me update to the latest CRS bundle for 10.2.0.3, then upgrade to 10.2.0.4, then apply two more patches via OPatch to bring 10.2.0.4 RAC to its most recent patch level. None of these patches resolved our problem. We have ~8700 datafiles in the database; once the database is started, we're at ~11k open files on Production, but on Test we're at ~37k or higher. It takes 1-2 days to hit the 65536 limit before it crashes, so I have to 'bounce' the database to keep it from crashing. Yes, I could raise the ulimit, but that isn't solving the problem.
    Over the weekend I noticed that on Production and DEV, the ocrfile is being written to constantly and has a current timestamp, but on Test, the ocrfile has not been written to since the last OPatch install. I've checked the crs status via 'cluvfy stage -post crsinst -n all -verbose' and everything comes back as 'passed'. The database is up and running, but the ocrfile is still timestamped at April 14th, and open files jump to 37k upon opening the database and continue to grow to the ulimit. Before hitting the limit, I'll have over 5,000 open files for 'hc_<instance>.dat', which is how I've been led down the path of patching Oracle CRS and RDBMS to resolve the 'hc_<instance>.dat' bug that was supposed to be resolved in all of the patches I've applied.
    From imon_<instance>.log:
    Health check failed to connect to instance.
    GIM-00090: OS-dependent operation:mmap failed with status: 22
    GIM-00091: OS failure message: Invalid argument
    GIM-00092: OS failure occurred at: sskgmsmr_13
    That info started the patching process, but it seems like there's more to it and this is just a result of some other issue. The fact that my ocrfile on Test is not being written to, when it updates frequently on Prod and Dev, seems odd.
    We're using OCFS2 as our CFS, updated to most recent version for our kernel (RHEL AS 4 u7 -- 2.6.9-67.0.15.ELsmp for x86_64)
    Any help greatly appreciated.

    Check the bug on Metalink.
    If it is Bug 6931689, apply one of the following patches to fix this issue:
    Patch 7298531 CRS MLR#2 ON TOP OF 10.2.0.4 FOR BUGS 6931689 7174111 6912026 7116314
    or
    Patch 7493592 CRS 10.2.0.4 Bundle Patch #2
    Be aware that the fix has to be applied to the 10.2.0.4 database home to fix the problem.
    Good Luck

  • File adapter reading while the file is still being written....

    Hello BPEL Gurus,
    I had a quick question about the BPEL or ESB file adapter. Does the BPEL file adapter start reading a huge file while it is still being written, or does it wait until the writing process is completed and the file is complete?
    Any response is highly is appreciated.
    Thanks.
    SM

    It goes like this. At every polling frequency, the adapter looks into the directory for files with the specified pattern (e.g. *.csv, MYCOMPANY*.txt) and the specified condition, e.g. minimum file age. This means that if there are 2 files available matching the criteria, both will be picked up and processed simultaneously in two different BPEL instances, with no specific order of execution. However, you will find the instances in the BPEL console with a little delay depending on file size.
    Perhaps you can elaborate your scenario further. Do you know the names of the files that are to be picked up from the folder? You may use the synchronous read option. If you are using version 10.1.3.4, then you can specify the file name before the file adapter makes a synchronous read into the given directory.

  • Hprof heap dump not being written to specified file

    I am running with the following
    -Xrunhprof:heap=all,format=b,file=/tmp/englog.txt (java 1.2.2_10)
    When I start the appserver, the file /tmp/englog1.txt gets created, but
    when I do a kill -3 pid on the .kjs process, nothing else is being written to
    /tmp/englog1.txt. In the kjs log I do see the "Dumping java heap..." message
    and a core file is generated.
    Any ideas on why I'm not getting anything else written to /tmp/englog1.txt?
    Thanks.

    Hi
    It seems that the option you are using is correct. I might modify it to something like
    java -Xrunhprof:heap=all,format=a,cpu=samples,file=/tmp/englog.txt,doe=n ClassFile
    This seems to work on 1.3.1_02, so it may be something specific to the JDK version
    you are using. Try a later version just to make sure.
    -Manish

  • Issue with .cp file being deleted automatically!

    My content team has complained about entire .cp files being
    deleted without anyone deleting them and the files not being in their
    recycle folder etc (3 different complaints). They couldn’t find them
    anywhere, including the recent project list in the captivate
    dashboard when you first open it. I was skeptical when they told me
    this until this evening when it happened to me. This is a HUGE
    issue, especially considering I have spent 10hrs on this file and
    just published it to production and now it needs to be redone.
    I checked with my developers to confirm it was the same
    circumstances as mine and it is. It seems to occur if you are
    closing a project and while it is closing, if you happen to be in
    Windows Explorer with the closing .cp file name highlighted in the
    folder. It disappears! I watched it with my own eyes. I checked my
    recycle bin and it is not there. I checked the recent project pane
    and it is not available for me to select.
    Has anyone else seen this? If you haven’t, make sure
    you do not do the above to make it happen!
    - Petra

    This is a duplicated post. Possibly a forum hiccup.
    Click
    here to see the original.

  • I want to have a progress bar monitor and display the progress of a file being opened, How do I do it?

    I have written large files for a report. When I open the files in the VI by pushing a button, I want to view the progress of the file being opened. My first thought was to get the file size, somehow measure the number of bytes coming out, and monitor it on the progress bar. I am unsure about how to do this. If you know of a better way to monitor and display the progress of an opening file, please let me know.

    If I understand you correctly, the progress bar is not the problem; it's getting a numeric value indicating the progress...right?
    If so, then you could read the file size, preallocate a byte array to hold the file, then read it in chunks and put the chunks into the array using replace array elements... (you could concatenate strings, but that would easily become very expensive in memory and speed...) until you have read the entire file. If the file is 20 MB and you want the progress bar to have a resolution of about 1%, read it in chunks of 256 KB... For each chunk read, increment the progress bar...
    MTO
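
    For what it's worth, a rough sketch of the same chunked-read idea in Java might look like this (the helper name, the 256 KB chunk size, and the console progress output are assumptions for illustration, not part of the original answer):

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;

    // Sketch: read a file in fixed-size chunks into a preallocated buffer,
    // reporting progress after each chunk (roughly 1% steps for a ~20 MB file).
    public class ProgressRead {
        public static byte[] readWithProgress(File file) throws IOException {
            int chunkSize = 256 * 1024;                  // assumed chunk size
            byte[] data = new byte[(int) file.length()]; // assumes the file fits in memory
            FileInputStream in = new FileInputStream(file);
            try {
                int offset = 0;
                while (offset < data.length) {
                    int read = in.read(data, offset, Math.min(chunkSize, data.length - offset));
                    if (read < 0) {
                        break; // unexpected end of file
                    }
                    offset += read;
                    int percent = (int) (100L * offset / data.length);
                    System.out.println("Progress: " + percent + "%"); // update a progress bar here instead
                }
            } finally {
                in.close();
            }
            return data;
        }
    }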

  • IO Labels are NOT being written to prefs!

    I've followed all the advice on existing threads about this and it's definitely a serious bug for me. IO Labels are not being written to prefs for me.
    Any ideas? Already tried deleting prefs, creating new blank project and not saving, nothing has worked.

    I found a workaround for anyone having this issue - and this is the ONLY thing that has worked after a week of trying everything on several forums.
    Open Logic, set your labels how you want.
    While Logic is open go to ~/Library/Preferences and delete com.apple.logic.pro.plist
    Quit Logic. I don't think it matters whether you save your project or not. When Logic quits a new plist will be written, and this one WILL have your labels!
    Seems on my machine Logic would not update the IO labels bit of the prefs unless it was writing a complete new prefs file.

  • Opmn logs not being written

    Hi All,
    We are facing an issue.
    No logs are being written to the opmn/logs directory. Logging worked correctly until 4th December and then stopped all of a sudden.
    Are there any configuration files which may have been affected?
    Best regards,
    Brinda

    To clarify.
    We are now rotating the logfiles with the linux/unix command logrotate. I suspect that this is what is causing the issue: the logs are not being filled after rotation, and we need to restart opmn for the logs to start getting populated.
    So I think we need to configure rotating logs in opmn.xml.
    The Application server version is 10.1.3. This is the log line in our opmn.xml.
    <log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>
    So the question is how do we activate opmn to rotate the log so that we do not need to use logrotate.
    In this document it says that you have to activate ODL for log rotation to work:
    http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#sthref370
    Is this true, or can we rotate text logs as well? That is what we would prefer.
    Best regards,
    Gustav

  • Lightroom cache files permanently written to alternate drive

    I've encountered an unusual problem using LR version 1.4.1 under Windows XP. Whenever I preview a RAW file in LR, a cache file is being written to an alternate drive, named Cache0000000001.dat, with each additional preview increasing the last digit in the file name by 1, Cache0000000002.dat, etc., and an Index.dat file. Each cache file is 2532kb or 8607kb depending on the size of the original RAW file. After closing Lightroom, these files remain on the disk and have to be manually deleted.
    This has only begun to happen in the past couple of days. I therefore assume it's due to some change I've made recently. The change began after I removed Photoshop CS2 after having both CS2 and CS3 installed for several weeks. I don't know how the particular disk for writing these cache files is being selected other than the fact that my CS3 scratch disk is the same disk. The Lightroom files are on a separate disk.
    This is not a major problem except that the cache files add up very quickly, taking up a lot of disk space and must be deleted manually after closing LR.
    There is probably no simple explanation for this anomaly but if anyone has any ideas, I certainly would appreciate the input.

    Ian,
    Thanks for the reply. Yes, you are correct. The preferences in Camera Raw determine where these cache files go and limit the size of the total cache. The default is 10 GB, which suits me fine. Where these files have been going in the past is a mystery to me.
    It does seem surprising to me that the location for the Lightroom cache is
    determined in Photoshop Camera Raw but so be it.
    Again, thanks for solving this problem for me.
    george
