Hprof heap dump not being written to specified file

I am running with the following
-Xrunhprof:heap=all,format=b,file=/tmp/englog.txt (java 1.2.2_10)
When I start the appserver, the file /tmp/englog1.txt gets created, but
when I do a kill -3 pid on the .kjs process, nothing else is being written to
/tmp/englog1.txt. In the kjs log I do see the "Dumping java heap..." message
and a core file is generated.
Any ideas on why I'm not getting anything else written to /tmp/englog1.txt?
Thanks.

Hi
It seems that the option you are using is correct. I might modify it to something like
java -Xrunhprof:heap=all,format=a,cpu=samples,file=/tmp/englog.txt,doe=n ClassFile
This seems to work on 1.3.1_02, so it may be something specific to the JDK version that
you are using. Try a later version just to make sure.
-Manish
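
For a quick check outside the appserver, a minimal test class can be run with the same -Xrunhprof options (the class name HprofTest and the allocation sizes below are made up purely for illustration); sending kill -3 to its pid should then trigger a heap dump to the specified file:

    // HprofTest.java -- hypothetical minimal class for checking that the
    // -Xrunhprof options actually write a heap dump; allocation sizes are arbitrary.
    public class HprofTest {
        public static void main(String[] args) throws InterruptedException {
            byte[][] chunks = new byte[200][];
            for (int i = 0; i < chunks.length; i++) {
                chunks[i] = new byte[10 * 1024]; // put something recognizable on the heap
            }
            // leave time to send kill -3 <pid> from another shell
            Thread.sleep(60 * 1000);
            System.out.println("held " + chunks.length + " chunks");
        }
    }

For example: compile with javac HprofTest.java, then run java -Xrunhprof:heap=all,format=b,file=/tmp/englog.txt HprofTest. If the dump is written here but not under the appserver, the problem is more likely in how the .kjs process is launched than in the hprof options themselves.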

Similar Messages

  • Finder data not being written to .DS_Store file

    The symptom of the problem I am experiencing is that the positions of icons on my desktop (as well as icons in other folders in my home directory, and Finder window settings for folders in my home directory) are lost every time I log out. On a subsequent login, all of the icons "reflow" starting from the top right corner of the screen, just below the boot volume icon. I have been able to determine that this has to do with the .DS_Store file(s) in the Desktop folder and other folders of my home directory. If a .DS_Store file exists, its timestamp does not change when I change the icon layout and log out. If I delete the .DS_Store file, it is not re-created.
    In my case, my home directory (and the child Desktop folder) is being mounted off an SMB share on a Windows 2003 server. My Mac is using AD authentication to the same Windows 2003 server, which is the domain controller. I'm logging in with my AD credentials, and my home directory mapping is picked up from the home directory field in AD.
    Now, Googling this problem, I found a lot of people complaining about wanting to suppress the use/creation of the .DS_Store files on Windows network volumes. This led to an Apple KB article (http://support.apple.com/kb/HT1629) on how to modify a default to prevent the creation of .DS_Store files on network volumes--essentially the very behavior I am experiencing. The upshot of the KB article is to use the following command in Terminal:
    *defaults write com.apple.desktopservices DSDontWriteNetworkStores true*
    I did a 'defaults read' to confirm this default isn't set on my install of Mac OS X 10.5.6--and it isn't. I then tried using the following variation in the hope I could force the behavior I wanted:
    *defaults write com.apple.desktopservices DSDontWriteNetworkStores false*
    Predictably, this had no effect.
    The upshot is, NONE of the Finder data for files and folders in my home directory (icon positions, Finder view window defaults, etc.) is preserved between logons. And this is driving me nuts! I've spent several hours over the past two evenings trying to troubleshoot this, and I've had no luck.
    As a footnote, I'll mention that if I drag a folder from my home directory onto the local hard disk, the .DS_Store file is created/updated and things behave as expected (icon position and Finder window defaults are preserved). But, if I then drag the folder back over to my home directory, the values in the .DS_Store file become "frozen" and don't change.

    Hey, try this:
    1.
    Put a file in a folder on your Desktop.
    Edit: not your Desktop, but rather a typical local home on an HFS+ volume ~/Desktop
    2.
    Use "Get Info" to add a "Comment".
    The comment can be somewhere between 700 and 800 characters long.
    3.
    Copy the folder to a FAT formatted flash drive, SMB share, etc.
    4.
    Create a new folder on the flash drive.
    5.
    Move the file from the first folder to the second folder.
    Confirm that the "Finder" shows the comment to be there.
    6.
    Quit the "Finder" - in 10.4 and earlier, this action would ensure that all .DS_Store files get updated, including transfer of comments. I.e. don't "relaunch" or "kill", etc. Enable and user the "Quit" menu, or use:<pre>
    osascript -e 'tell application "Finder" to quit</pre>
    7.
    Now where's the comment?
    In step 2, you could also have just created a file on the non-HFS volume and wasted your time typing in a 700 character comment.
    In step 6, a more real-world scenario is ejecting the drive or logging out, but deliberately quitting the "Finder" is the most conservative way to ensure that changes get written to .DS_Store files and comments survive. In 10.5.6, even under these conditions, comments are lost.
    Icon positions and view preferences are one thing, but with comments - that's real user-inputted data that Apple is playing fast and loose with. And if they no longer support comments on non-HFS volumes, the "Finder" shouldn't be showing you the comments when you double-check them in step 5, or allow them to be added in the alternate version of step 2.
    ..."C'mon Apple... what gives here?"...
    Unfortunately, this "Discussions" site is not frequented by Apple devs. Have you considered filing a bug report? I wonder what they would tell you, e.g. whether the treatment of .DS_Store files actually does turn out to be a bug or is intentional, as it suspiciously seems...
    http://developer.apple.com/bugreporter/index.html

  • Ocrfile is not being written to. Open file issues. Help please.

    I've been troubleshooting an open file issue on our Test environment for quite a while now. Oracle has had me update to the latest CRS bundle for 10.2.0.3, then upgrade to 10.2.0.4, then apply two more patches via OPatch to bring 10.2.0.4 RAC to its most recent patch level. None of these patches resolved our problem. We have ~8700 datafiles in the database, and once the database is started we're at ~11k open files on Production but on Test we're at ~37k or higher. It takes 1-2 days to hit the 65536 limit before it crashes. I have to 'bounce' the database to keep it from crashing. Yes, I could raise the ulimit, but that isn't solving the problem.
    Over the weekend I noticed that on Production and DEV the ocrfile is being written to constantly and has a current timestamp, but on Test the ocrfile has not been written to since the last OPatch install. I've checked the CRS status via 'cluvfy stage -post crsinst -n all -verbose' and everything comes back as 'passed'. The database is up and running, but the ocrfile is still timestamped at April 14th, and open files jump to 37k upon opening the database and continue to grow to the ulimit. Before hitting the limit, I'll have over 5,000 open files for 'hc_<instance>.dat', which is where I've been led down the path of patching Oracle CRS and RDBMS to resolve the 'hc_<instance>.dat' bug which was supposed to be resolved in all of the patches I've applied.
    From imon_<instance>.log:
    Health check failed to connect to instance.
    GIM-00090: OS-dependent operation:mmap failed with status: 22
    GIM-00091: OS failure message: Invalid argument
    GIM-00092: OS failure occurred at: sskgmsmr_13
    That info started the patching process, but it seems like there's more to it and this is just a result of some other issue. The fact that the ocrfile on Test is not being written to, while it updates frequently on Prod and Dev, seems odd.
    We're using OCFS2 as our CFS, updated to most recent version for our kernel (RHEL AS 4 u7 -- 2.6.9-67.0.15.ELsmp for x86_64)
    Any help greatly appreciated.

    Check the bug on Metalink.
    If it is Bug 6931689:
    Solution:
    To fix this issue, please apply one of the following patches:
    Patch 7298531 CRS MLR#2 ON TOP OF 10.2.0.4 FOR BUGS 6931689 7174111 6912026 7116314
    or
    Patch 7493592 CRS 10.2.0.4 Bundle Patch #2
    Be aware that the patch has to be applied to the 10.2.0.4 database home to fix the problem.
    Good Luck

  • IO Labels are NOT being written to prefs!

    I've followed all the advice on existing threads about this and it's definitely a serious bug for me. IO Labels are not being written to prefs for me.
    Any ideas? Already tried deleting prefs, creating new blank project and not saving, nothing has worked.

    I found a workaround for anyone having this issue - and this is the ONLY thing that has worked after a week of trying everything on several forums.
    Open Logic, set your labels how you want.
    While Logic is open go to ~/Library/Preferences and delete com.apple.logic.pro.plist
    Quit Logic. I don't think it matters whether you save your project or not. When Logic quits a new plist will be written, and this one WILL have your labels!
    It seems that on my machine Logic would not update the IO labels part of the prefs unless it was writing a completely new prefs file.

  • Why does hprof=heap=dump have so much overhead?

    I understand why the HPROF option heap=sites incurs a massive performance overhead; it has to intercept every allocation and record the current call stack.
    However, I don't understand why the HPROF option heap=dump incurs so much of a performance overhead. Presumably it could do nothing until invoked, and only then trace the entire heap from the system roots.
    Can anyone speak to why it doesn't work that way?
    - Gordon @ IA

    Traditionally agents like hprof had to be loaded into the virtual machine at startup, and this was the only way to capture these object allocations. The new hprof in the JDK 5.0 release (Tiger) was written using the newer VM interface JVM TI and this new hprof was mostly meant to reproduce the functionality of the old hprof from JDK 1.4.2 that used JVMPI. (Just FYI: run 'java -Xrunhprof:help' for help on hprof).
    The JDK 5.0 hprof will, at startup, instrument java.lang.Object.<init>() and all classes and methods that use the newarray bytecodes. This instrumentation doesn't take long and is just an initial startup cost; it's the run time, and what happens then, that is the performance bottleneck. At run time, as any object is allocated, the instrumented methods trigger an extra call into a Java tracker class which in turn makes a JNI call into the hprof agent and native code (a conceptual sketch of this injected call follows at the end of this reply). At that point, hprof needs to track all the objects that are live (the JVM TI free event tells it when an object is freed), which requires a table inside the hprof agent and memory space. So if the machine you are using is low on RAM, using hprof will cause drastic slowdowns; you might try heap=sites, which uses less memory but just tracks allocations based on allocation site, not individual objects.
    The more likely run-time performance issue is that at each allocation hprof wants to get the stack trace; this can be expensive, depending on how many objects are allocated. You could try using depth=0 and see if the stack trace samples are a serious issue for your situation. If you don't need stack traces, then you would be better off looking at the jmap command, which gets you an hprof binary dump on the fly with no overhead; then you can use jhat (or HAT) to browse the heap. This may require the JDK 6 (Mustang) release for this experiment; see http://mustang.dev.java.net for the free downloads of JDK 6 (Mustang).
    There is an RFE for hprof to allow the tracking of allocations to be turned on/off in the Java tracker methods that were injected, at the Java source level. But this would require adding some Java APIs to control sun/tools/hprof/Tracker which is in rt.jar. This is very possible and more with the JVM TI interfaces.
    If you haven't tried the NetBeans Profiler (http://www.netbeans.org) you may want to look at it. It does take an incremental approach to instrumentation and tries to focus in on the areas of interest and allows you to limit the overhead of the profiler. It works with the latest JDK 5 (Tiger) update release, see http://java.sun.com/j2se.
    Oh yes, also look at some of the JVM TI demos that come with the JDK 5 download. Look in the demo/jvmti directory and try the small agents HeapTracker and HeapViewer; they have much lower overhead, and the binaries and all the source are right there for you to just use or modify and customize for yourself.
    Hope this helps.
    -kto
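    As a rough mental model of the instrumentation described above (the class and method names below are illustrative only, not the real sun/tools/hprof/Tracker API), every allocation conceptually pays an extra call like this, whether or not a dump is ever requested:

        // Illustrative sketch only, not hprof's actual code: the injected
        // instrumentation routes each allocation through a tracker class,
        // which in the real agent forwards to native code via JNI.
        import java.util.concurrent.atomic.AtomicLong;

        public class AllocationTrackerSketch {
            private static final AtomicLong liveCount = new AtomicLong();

            // hprof injects a call roughly like this into java.lang.Object.<init>()
            // and after the newarray bytecodes
            public static void objectInit(Object obj) {
                liveCount.incrementAndGet(); // the real tracker also records the object in the live table
            }

            public static long currentCount() {
                return liveCount.get();
            }
        }

    That per-allocation bookkeeping is why heap=dump is expensive even before any stack sampling happens.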

  • Opmn logs not being written

    Hi All,
    We are facing an issue.
    No logs are being written to the opmn/logs directory. They were being written correctly until the 4th of December and then stopped all of a sudden.
    Are there any configuration files which may have been affected?
    Best regards,
    Brinda

    To clarify.
    We are now rotating the logfiles with the linux/unix command logrotate. I suspect that this is what is causing the issue: the logs are not being filled after rotation, and we need to restart opmn for the logs to start getting populated.
    So I think we need to configure rotating logs in opmn.xml.
    The Application server version is 10.1.3. This is the log line in our opmn.xml.
    <log path="$ORACLE_HOME/opmn/logs/opmn.log" comp="internal;ons;pm" rotation-size="1500000"/>
    So the question is: how do we get opmn to rotate the log itself, so that we do not need to use logrotate?
    In this document it says that you have to activate ODL for log rotation to work:
    http://download.oracle.com/docs/cd/B25221_04/web.1013/b14432/logadmin.htm#sthref370
    Is this true, or can we rotate plain text logs as well? That is what we would prefer.
    Best regards,
    Gustav

  • Compare Heap Dumps not working

    Dear All,
    I am trying to compare two heap dumps to find differences. I have read the Tutorial. I have two heap dumps open and click on the Delta button, but all I get is a message saying something like "there are no two dumps open on the desktop". Can anybody help? Thanks a lot, Anthony

    Hi Scott,
    thanks for finding this bug... It looks like we do not pick up dumps that have been opened via "Open File...". I will have a look at it.
    The context menu does not work on purpose. The problem is this: in the heap dump we only have the object address. But as the garbage collector is moving objects around, this address changes even as the object stays the same. We cannot know if the object was moved or if a new one was created.
    So one has to compare the heap dumps structurally. We do this currently for the histogram on class level and on class loader level (by the extracted class loader name, if a name resolver is present). For objects we do nothing at the moment. I haven't really given it much thought, but it should be possible to compare the graph/domTree once the user has picked two comparable objects (which he can, because he knows the semantics). (A rough sketch of this histogram-level comparison follows at the end of this reply.)
    Regarding open sourcing: I just spent the last 3 hours hunting a little, tiny bug that was introduced when removing some SAP specifics which we do not open source. Soon, maybe by the end of next week, we should have something there.
    New features include automatic leak detection, e.g. we create an HTML report with the suspected leaks. And the handling of local variables, i.e. stuff kept alive by a thread, is much easier. Ah, and we have some pie charts now.
    Andreas.
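    A rough sketch of the class-histogram comparison described above (the types and the method here are hypothetical, not the tool's actual API): given one map of class name to instance count per heap dump, the delta per class is just the difference of the counts:

        // Hypothetical sketch of a histogram-level delta between two heap dumps;
        // not the analyzer's real API.
        import java.util.HashMap;
        import java.util.Map;

        public class HistogramDelta {
            public static Map<String, Long> delta(Map<String, Long> before, Map<String, Long> after) {
                Map<String, Long> result = new HashMap<String, Long>();
                for (Map.Entry<String, Long> e : after.entrySet()) {
                    long old = before.containsKey(e.getKey()) ? before.get(e.getKey()) : 0L;
                    long diff = e.getValue() - old;
                    if (diff != 0) {
                        result.put(e.getKey(), diff); // positive: more instances in the second dump
                    }
                }
                for (Map.Entry<String, Long> e : before.entrySet()) {
                    if (!after.containsKey(e.getKey())) {
                        result.put(e.getKey(), -e.getValue()); // class no longer present at all
                    }
                }
                return result;
            }
        }

    Comparing at this structural level sidesteps the object-address problem, since class names are stable across dumps even though addresses are not.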

  • Inbox folders not being created from msf files

    Last week, I noticed that my Inbox folder files weren't appearing in my profile. I posted a generalized question here: https://support.mozilla.org/en-US/questions/996903
    The response I received didn't help that much. The folders I created appeared in the remote mailbox as empty folders.
    My mail account is synced via IMAP to a remote Zimbra mail server using STARTTLS. I can view my Inbox. The other folders in the account are not visible. I can send and receive messages from my INBOX.
    When I select "Manage folder Subscriptions" In the account options, I can see the folders. When I select them, and click the "Subscribe" button, nothing happens.
    Inspecting my IMAP directory, I see the following structure:
    ~/.thunderbird/13zr0oo0.default/ImapMail/mail.<zimbra_account>.com:
    -rw-r--r-- 1 matthewe matthewe 1238 May 2 10:38 Archives.msf
    -rw-r--r-- 1 matthewe matthewe 1235 May 2 10:38 Drafts.msf
    -rw------- 1 matthewe matthewe 47500385 May 2 09:56 INBOX
    -rw-r--r-- 1 matthewe matthewe 375885 May 2 10:36 INBOX.msf
    -rw-r--r-- 1 matthewe matthewe 25 May 2 08:58 msgFilterRules.dat
    -rw-r--r-- 1 matthewe matthewe 1233 May 2 10:38 Sent.msf
    -rw-r--r-- 1 matthewe matthewe 0 May 2 10:38 site-errors.msf
    -rw-r--r-- 1 matthewe matthewe 0 May 2 09:59 Templates.msf
    -rw-r--r-- 1 matthewe matthewe 2275 May 2 10:33 Trash.msf
    The .msf files are created when I click the "Subscribe" button, but the subsequent mail files containing the mail data are not (note that non-msf files exist only for "INBOX").
    Removing the .msf files and restarting Thunderbird has no effect.
    When I launch, the errors console reports the following three errors:
    "Could not read chrome manifest 'file:///usr/lib/thunderbird/extensions/%7B972ce4c6-7e08-4474-a285-3208198ce6fd%7D/chrome.manifest'.
    -- There is no chrome.manifest file in my filesystem under /usr/lib/thunderbird/extensions/{972ce4c6-7e08-4474-a285-3208198ce6fd}"
    "While creating services from category 'profile-after-change', could not create service for entry 'Disk Space Watcher Service', contract ID '@mozilla.org/toolkit/disk-space-watcher;1'"
    This is reported as https://bugzilla.mozilla.org/show_bug.cgi?id=883621 and has been resolved at https://bugzilla.mozilla.org/show_bug.cgi?id=895964 but my system still throws the error. (I have plenty of disk space. Permissions perhaps?)
    [JavaScript Warning: "Calendar https://mail.zimbrahostedemail.com/home/matthewe@<zimbra_client>.com/Calendar has a dangling E-Mail identity configured."]
    I'm not sure what that means.
    I am using Thunderbird 24.5.0 on Ubuntu 13.10

    Is your question about the folder files not being created, or about not being able to send and receive mail, or both?

  • Jvmargs not being placed in jnlp file by ant deploy task

    Hi
    I have an ant deploy task as given below. I run ant to produce the jnlp file. Everything works OK, except that the jvmargs, and the enclosing platform element, are not copied into the jnlp file.
    Is this the intended behaviour?
    thanks
    graham
    -- ant task ---
    <project name="BuildInfoView" xmlns:fx="javafx:com.sun.javafx.tools.ant">
    <taskdef
         resource="com/sun/javafx/tools/ant/antlib.xml"
         uri="javafx:com.sun.javafx.tools.ant"
         classpath="/Library/Java/JavaVirtualMachines/jdk1.7.0_17.jdk/Contents/Home/lib/ant-javafx.jar"/>
    <fx:deploy
        outdir="../installation/bin"
        outfile="infoview"
        width="800"
        height="500"
        embeddedWidth="100%"
        embeddedHeight="100%">
        <fx:application
            name="Infoview"
            mainClass="com.ods.infoview.Main">
        </fx:application>
        <fx:info
            title="InfoView"
            description="InfoView"
            vendor="OrangeDog">
        </fx:info>
        <fx:permissions elevated="true"/>
        <fx:platform javafx="2.2+">
            <fx:jvmarg value="-Xmx400"/>
        </fx:platform>
        <!--
        cacheCertificates="true"/>
        -->
        <fx:resources>
            <fx:fileset dir="../gradle/output/"/>
        </fx:resources>
    </fx:deploy>
    </project>

    There is a FAQ note about this
    http://java.sun.com/products/javawebstart/faq.html#72
    Bill

  • MBR not being written to during install of RT to a Desktop Computer.

    Hello,
    We have been trying on and off for months to get LabView RT to load onto a desktop computer and have that computer boot up directly from the hard drive we installed RT on. First we go through the HD format disk procedure, and it doesn’t give any errors. When we reboot the computer after the install, we cannot boot from the hard drive. The hard drive still seems to have Grub (a boot loader that was used previously on this hard drive to run Linux), since it spits out Grub errors while trying to boot. I checked the boot sector of the hard drive, and that seems to have some LabView content on there, and the partition where the LVRT is seems to have the files that should be there as well.
    We formatted the hard drive using Partition Magic, and set the disk up to have one primary partition that was set to active and formatted as FAT32. We did not specify a drive letter for the partition while creating it, since C:\ was already being used on the computer where we were formatting the disk, and I know that LVRT uses drive C:\ as well... so we didn't want any conflicts.
    After all this, there were no errors during the install, and still we cannot boot from the hard drive. Instead, we boot using the boot disk, which seems to work fine, but we would really like to be able to boot from the hard drive and take the floppy out of the computer that we are using.
    Has anyone else encountered any problems like this before? I have read some of the forum posts about people having the same type of issue, but they were able to set the partition as Active and everything worked out; that was not the case for us.
    Any help is greatly appreciated, since calling NI support doesn’t really seem to lead anywhere, and some of them seem like they don’t know too much about hard drives (which is understandable, since this is a very small portion of what they do).
    Thanks
    -Mark

    Trey,
    The drive that I formatted was the IDE drive I was using previously for the RTOS that was not booting up on its own, but was running when we used the boot disk.
    I ran the PC Eval before the format, and everything came back successful.
    I deleted the partition, created a new one, and formatted the disk again to FAT32 via Disk Management in Windows XP Pro (just to start clean). After that I ran CHKDSK, and no errors were reported. I then booted using the PC Eval disk, and this time I got this error:
    WARNING: Flush from cache to disk Failed
    I’m not sure why I suddenly have this error, since all I did was reformat the disk. What should the partition be set up as, anyway? I have never seen that mentioned anywhere. Is it supposed to be logical or primary? Should it be set to active?
    I will try different combinations, since somehow it was working before, so there must be a way to get it formatted correctly. I’ll let you know which one seems to work.

  • Messages not being written to page

    HI,
    What could stop messages from being displayed/written? I have changed my page template from a table-based layout to a CSS layout and have wrapped the following #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE# in a div. I cause an error intentionally on my page, but the message does not appear. When I inspect the page source it confirms that the message was not written to the page. I have an identical application that uses table layouts for the page template, and the same process produces a message there. When I inspect that page source, the message is written in the #GLOBAL_NOTIFICATION# #SUCCESS_MESSAGE# #NOTIFICATION_MESSAGE# position as expected.
    Would anybody have any ideas for me to pursue to get to the bottom of this problem?
    Thank You
    Ben

    Scott,
    I found what the problem was ...
    The page template has a subtemplate region, and the entries for Success Message and Notification were empty.
    Ben
    Edited by: Benton on Jun 18, 2009 2:09 PM

  • "Sidecar file has conflict", keyword changes in LR not being written to JPEGs

    I am new to Lightroom and evaluating 3.6 on a trial.
    I performed the following test:
    1. Imported 3 pictures into Lightroom from my hard drive
    2. In Lightroom, I made a keyword change to one photo (removed a keyword).
    3. All three photos' Metadata icons show the arrow, indicating that there has been a change. (I only changed the keyword on one photo.)
    4. I click the icon for the photo whose keyword I changed.
    5. I get the message "The metadata for this photo has been changed in Lightroom. Save the changes to disk?"
    6. I click "Yes".
    7. The photo's Metadata icon shows an exclamation point. When you hover over the exclamation point, it shows the message "Sidecar has conflict".
    I have two questions:
    1. My expectation is that when I changed a keyword, or any other metadata, in LR, it will show up on the photo outside of LR. My understanding is that this is a functionality of LR. Am I wrong? (The Help files seem to indicate that this is a reasonable expectation.)
    2. I am assuming, having read many posts on the forum, that the error message "Sidecar has conflict" is a bug. Am I right? (My understanding is that JPEGs don't have sidecar files, which just makes this message even more odd.)
    I am on Windows 7 Home Premium (upgraded from Vista Home Premium), 32 bit.
    Thanks for your help,
    --Elisabeth.

    Beat, Thanks for taking the time to answer.
    Your expectation is not quite right:
    Any change to metadata will primarily be recorded in your catalog, and nowhere else. Only if you perform "Save Metadata to File" (which can be set to be done automatically in your catalog settings) or "Save the Changes to Disk" after pressing the arrow will the changes be written into your JPEGs (or into *.xmp sidecar files for raw images) and be visible from outside of LR.
    Yes, you are right. What I should have written was: My expectation is that when I change a keyword, or any other metadata in LR, the change will show up outside of LR, after I "Save Metadata to File". This is not happening, instead I get the error message and different keywords in LR than outside.
    Where do your originals reside? Could it be you don't have the proper authority to rewrite the JPEGs?
    My originals reside on my Public drive: I definitely have rights. In fact, I made many other changes to photos on the same drive after I posted this on the forum.

  • Site access request alerts are not being sent to specified email.

    In my SharePoint 2013 deployment I can't seem to get access request alerts working correctly. I go into the site permissions for any given site, enable access requests and set an email address. When a user requests access, an email is never sent to the address I specified. Because of this, site administrators have to constantly go into the site settings and check whether there are any access requests and approve or deny them. I have checked my Exchange server logs and no email ever reaches it, so it appears the alert is never generated. Other outgoing emails, such as alerts on libraries, do work correctly.
    Please, help!

    I have a solution for you.
    I called MS Support and they told me that SharePoint tries to send these mails authenticated via the web application pool account.
    So we started netmon to analyse this problem.
    There we found the entry:
    SMTP:Rsp 550  5.7.1 Client does not have permissions to send as this sender
    You can solve this problem by authorizing the web application pool user on the SMTP receive connector (on the Exchange server):
    Get-ReceiveConnector "<spconnector>" | Add-ADPermission -User "CONTOSO\AppPoolAccount" -ExtendedRights "ms-Exch-SMTP-Accept-Authoritative-Domain-Sender"
    Get-ReceiveConnector "<spconnector>" | Add-ADPermission -User "CONTOSO\AppPoolAccount" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Sender"
    or (this is what we do):
    Add the IP addresses of the SharePoint web servers to the relay of the Exchange servers (for this you must have an open relay connector).

  • Tag Info Not Being Written to File

    Some of my MP3's have comments that I would like to remove.
    For example, let's say one of my MP3s in iTunes shows the comment "How are you?"...
    When I right-click on the song, click GET INFO, select INFO and delete the comment, the comment is no longer visible in iTunes; however, when I go to the file directly, right-click and select Properties, the comment that I deleted in iTunes is still showing in the file's properties.
    Even weirder: if instead of deleting the comment "How are you?" I just overwrite it with "This is a test", that comment will show when I go directly to the file and look at its properties; however, if I go back into iTunes and then delete the comment "This is a test", the original comment "How are you?" shows back up in the file's properties.
    Any ideas?

    your code...
        for (int i = 0; i == FileData.length; i++) // this is a VERY odd looking loop
            WriteBuff.write(FileData[i] + "\r\n");
    Shouldn't that be i < FileData.length? As written, the condition is false on the first pass (unless the array is empty), so the loop body never runs and nothing gets written.
    Anyway, it doesn't seem to be an IO problem, so the difficulty is probably in your loop. Try this for starters...
        for (int j = 0; j < FileData.length; j++) {
            System.out.println("Looping " + j);
            WriteBuff.write(FileData[j] + "\r\n"); // not sure about this either; maybe use PrintWriter and println instead?
        }
    (A self-contained version with the needed imports follows below.)
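    For reference, a self-contained version of the corrected loop (the file name, the class name WriteLines, and the fileData contents are placeholders):

        // Hypothetical, self-contained version of the corrected write loop;
        // the output file name and the data are placeholders.
        import java.io.BufferedWriter;
        import java.io.FileWriter;
        import java.io.IOException;

        public class WriteLines {
            public static void main(String[] args) throws IOException {
                String[] fileData = { "line one", "line two", "line three" };
                BufferedWriter writeBuff = new BufferedWriter(new FileWriter("output.txt"));
                try {
                    for (int j = 0; j < fileData.length; j++) {
                        System.out.println("Looping " + j);
                        writeBuff.write(fileData[j] + "\r\n");
                    }
                } finally {
                    writeBuff.close(); // flush and release the file handle
                }
            }
        }

    Closing (or at least flushing) the writer matters here: if the stream is never flushed, the file will look unchanged on disk even though the loop ran.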

  • Field value not being written to database

    Hello,
    I'm trying to auto-populate a form field from a value in one of my backing beans by changing the field value property from #{bindings.Modifiedby1.inputValue} to #{pageFlowScope.backing_Login.userName}. When I open the form, the correct value from the backing bean is displayed in the field now, but when I save/commit the record the value doesn't make it to the database so while it is displayed on the screen in the form field, it doesn't get written to the database. Is there something that needs to be done to update the input value in the actual binding or something? I'm developing in JDeveloper 11.1.1.3.
    Thanks!

    Well, I thought it was working. I've found that the ModifiedBy and ModifiedDate fields do get updated in the database now, but it updates those two fields in the first row that was displayed on the table instead of the row that was selected. Here's my backing bean code (I've tried to comment what I'm doing in each step):
    public void editDialogListener(DialogEvent dialogEvent) {
        // when the "ok" button in the dialog box is selected
        if (dialogEvent.getOutcome().name().equals("ok")) {
            // get userName from the Login backing bean and save it as loggedInUser
            RequestContext requestContext = RequestContext.getCurrentInstance();
            String loggedInUser = (String) requestContext.getPageFlowScope().get("userName");
            // get bindings
            BindingContainer bindings = getBindings();
            // get the Modifiedby1 attribute from bindings and setInputValue to loggedInUser
            AttributeBinding attrModifiedBy = (AttributeBinding) bindings.getControlBinding("Modifiedby1");
            attrModifiedBy.setInputValue(loggedInUser);
            // get the Modifieddate1 attribute from bindings and setInputValue to the current date
            AttributeBinding attrModifiedDate = (AttributeBinding) bindings.getControlBinding("Modifieddate1");
            attrModifiedDate.setInputValue(new Date());
            // perform Commit
            OperationBinding operationBinding = bindings.getOperationBinding("Commit");
            operationBinding.execute();
        }
    }
    Can anyone see what I need to add or do differently to grab the selected row's attributes for updating?
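    One thing worth verifying (this is a general ADF sketch, not from this thread; the iterator name "EmployeesIterator" is hypothetical) is which row the iterator binding actually considers current when the dialog listener fires, since the attribute bindings write to that row:

        // Hypothetical sketch for checking the iterator's current row inside the
        // dialog listener; the iterator binding name is a placeholder.
        import oracle.adf.model.BindingContext;
        import oracle.adf.model.binding.DCBindingContainer;
        import oracle.adf.model.binding.DCIteratorBinding;
        import oracle.jbo.Row;

        public class CurrentRowCheck {
            public static void logCurrentRow() {
                DCBindingContainer bindings =
                    (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry();
                DCIteratorBinding iter = bindings.findIteratorBinding("EmployeesIterator");
                Row current = iter.getCurrentRow();
                // if this prints the key of the first displayed row rather than the selected
                // one, the table selection is not being propagated to the iterator
                System.out.println("Current row key: " + current.getKey());
            }
        }

    If the current row turns out not to be the selected row, that would explain why the first displayed row is updated instead.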
