CF Builder 2 slow on closing files issue

This is an old issue that still has not been resolved.
When closing a file in CF Builder 2, it takes a long time for the file to close. This used to happen on Windows XP. I have a new PC with Windows 7 and reinstalled CF Builder 2, but the issue persists when closing files.
I have to do the following for the slowness to stop:
Close CF Builder.
Rename the file workspace\.metadata\.plugins\com.adobe.ide.editor.cfml\foldings\foldings.data
Start CF Builder again.
However, after a few hours CF Builder 2 becomes slow again when closing files.
Is there any fix for this issue other than the temporary solution above?
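For reference, the workaround can be scripted so it is less painful to repeat. A minimal sketch in Python; the workspace path is a placeholder for your own workspace location, and it assumes CF Builder is closed before it runs:

    import time
    from pathlib import Path

    # Placeholder: point this at your own CF Builder workspace.
    WORKSPACE = Path(r"C:\Users\me\cfbuilder\workspace")
    FOLDINGS = (WORKSPACE / ".metadata" / ".plugins"
                / "com.adobe.ide.editor.cfml" / "foldings" / "foldings.data")

    def reset_foldings_cache():
        """Move foldings.data aside so CF Builder recreates a fresh, small one."""
        if not FOLDINGS.exists():
            print("No foldings.data found - nothing to do.")
            return
        backup = FOLDINGS.with_name("foldings.data.%d.bak" % int(time.time()))
        FOLDINGS.rename(backup)
        print("Moved", FOLDINGS, "->", backup)

    if __name__ == "__main__":
        reset_foldings_cache()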

Charlie, thanks for your time and the information you provided in response to this issue. It was helpful in tracking down what the problem is.
Based on the information you gave me above, I noticed the following.
Under the Progress view, what I see when closing a file are messages related to building the workspace, as the picture below shows.
As for the foldings.data file, I followed your advice and opened it in a text editor. The file content has, like you said, some binary data, but it also contains the string "file:" followed by the full path of each file that I closed. I looked at its size and it was 11k, with about 11 lines. Then I started to open and close some files and noticed that CF Builder was writing the names of all the files I closed to the foldings.data file, which increased the file size. Now the file has 14 lines and is 14k. The previous foldings.data files had sizes varying from 38k to 87k. I also noted that CF Builder was not appending the most recent files to foldings.data, as the paths for the most recently closed files were in the middle of the file content. It seems that CF Builder was not just updating but re-creating the file, with the previous content and the strings with the new file paths placed in a random fashion.
When I said that the closing of the files is initially faster and then gets slower, I meant that it is faster when I delete the foldings.data file and CF Builder creates a new one. As CF Builder starts updating the foldings.data file, it gets slower. After that it stays slow even between CF Builder sessions. So I think there might be a threshold related to the file size that makes the process of closing files slow.
I don't think the problem is related to memory, as I already had the Show Heap Status view on display, and even when there is not much use of memory, closing a file is still slow.
At this point, what it looks like is that the process of closing a file inside CF Builder updates the foldings.data file, and when that file reaches a particular size it takes some time to actually close the file, as if CF Builder were waiting for the foldings.data file to be updated. What intrigues me is the fact that this behavior happened on two different PCs, so it has to be something related to how CF Builder handles the closing of files.
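One way to test the size-threshold idea is to log the size of foldings.data while opening and closing files and see whether the slow closes line up with the file passing a certain size. A minimal sketch, reusing the same placeholder workspace path as above:

    import time
    from pathlib import Path

    # Placeholder path: adjust to your own workspace.
    FOLDINGS = Path(r"C:\Users\me\cfbuilder\workspace\.metadata\.plugins"
                    r"\com.adobe.ide.editor.cfml\foldings\foldings.data")

    def watch_foldings(interval=5.0):
        """Print a timestamped line whenever foldings.data changes size."""
        last_size = -1
        while True:
            size = FOLDINGS.stat().st_size if FOLDINGS.exists() else 0
            if size != last_size:
                print(time.strftime("%H:%M:%S"), "foldings.data =", size, "bytes")
                last_size = size
            time.sleep(interval)

    if __name__ == "__main__":
        watch_foldings()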
I was wondering if there is some sort of setting in CF Builder related to closing files.
Once again, thank you very much for your responses.

Similar Messages

  • Slow opening of files on DFS

    I built a Windows 2008 R2 SP1 VM server, installed DFS and created a namespace.  I then added a folder and moved data from a CIFS share that was near capacity to free up space.  Everything I have mentioned so far was just to alleviate the fact that
    the CIFS share was almost full.  I built this server as a temp solution until I got the hardware in to build a new dedicated server for DFS/Storage.  I have built the new Windows 2008 R2 SP1 physical server and also made it a namespace server. 
    The performance of this server is
    Phenomenal.  All the storage is local to the box as I am trying to consolidate all the shares around the network.  I replicated the share from the temp server to the new server and removed the referral from the old server.  I then copied
    a few new shares over to the new server and added a folder for each (no replication for these).  I then moved the home drives to DFS.  Everything has been going well for a few weeks.  Yesterday users started reporting slow opening of files,
    specifically Excel and Word but I noticed PDFs opened slow also.  This issue currently only affects Windows 7 users but only a handful.  The nic, cpu and memory on the physical server have very low utilization.  I have been unable to isolate
    the exact issue but if I point the users to the
    \\server\share, the files open VERY fast.  I did notice that when I right-clicked on a mapped DFS drive, the DFS tab showed the status as Unreachable.  I flushed the cache on the client and it went to OK, but file opening is still slow.
    After some testing it appears to take almost exactly 50 seconds to open a file every time.  I am going to be away from the office for a few days so I went ahead and changed all the DFS mapped drives to point to
    \\server\share for now as I will not get it resolved before I leave.  I also scoured the internet for resolutions and looked at all the related topics that were presented when I typed the title of this thread.
    I installed DFSUTIL on a few of the Windows 7 computers experiencing the slowness and the referrals look good.  Every test through DFSUTIL looks fine.
    Any recommendations would be greatly appreciated.
    Thanks

    I found the self-inflicted issue.  I have a dedicated DFS namespace server with 66 TB of storage.  The drive that holds this attached storage has a folder I created called DFS, where all the new DFS folders are pointed.  At this point I should have reviewed the drive permissions more closely.  The default permissions on the drive include the local USERS group, which has special permissions.  The first few shares I moved over already had Authenticated Users - Modify, so all was well.  I then moved the home folders over and realized that the local Users entry allowed everyone to view everyone else's home drive.  In a knee-jerk reaction I removed the USERS group from the root of the drive, which no longer allowed users to access \\domain.com\namespace, because the DFSRoots folder is also on the same drive.  I guess that since they didn't have access, they eventually defaulted back to \\servername\namespace, which caused the delay???
    I added the permissions back under the namespace folder and applied them only to that folder.  Users can now access the namespace quickly.  At this point I am not sure what the permissions should be for the root drive.  So now my questions are:
    1.  If I am going to dedicate a drive to DFS, what should the root permissions be on the drive?  The drive will contain a folder E:\DFS as a root folder for all the shares and another folder named E:\DFSRoots (created by DFS with the default drive changed from C: to E:) which will obviously hold the namespace.  (A sketch follows below.)
    2.  If a user were to lose permissions to the DFS namespace (as they did in my case), how were they eventually able to access the file after the 50-second delay?
    Thanks
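    On question 1, here is a minimal sketch of inspecting and re-granting folder ACLs with icacls, assuming the E:\DFS / E:\DFSRoots layout described above; the principal and the RX (read & execute) right are only illustrative, so substitute whatever your environment actually requires:

        import subprocess

        # Assumption: drive layout as described in the post above.
        FOLDERS = [r"E:\DFS", r"E:\DFSRoots"]

        def show_acl(path):
            """Print the current NTFS ACL for a folder."""
            subprocess.run(["icacls", path], check=True)

        def grant_read(path, principal="Authenticated Users"):
            """Grant read & execute with inheritance; (OI)(CI) propagates to children."""
            subprocess.run(
                ["icacls", path, "/grant", "{}:(OI)(CI)RX".format(principal)],
                check=True,
            )

        if __name__ == "__main__":
            for folder in FOLDERS:
                show_acl(folder)
            # Example: restore read access to the namespace folder only.
            grant_read(r"E:\DFSRoots")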

  • Aperture 3.1.3 Very slow importing raw files

    Aperture 3.1.3 very slow importing raw files. Please help.
    Pro photographer. I have a Mac Pro quad core, 1 year old, with 12 GB RAM. I import via two FireWire 800 card readers - a Lexar and a UDMR reader - only using them singly. The system has four 1 TB hard drives: 1. Applications, 2. Aperture Library, 3. Aperture Vault, 4. Time Machine.
    On a photo shoot, the first CF card may download and complete in less than a minute, but the next card with a similar number of images may take 10 minutes.
    Sometimes if I swap card readers this can speed up the download, but then everything slowly grinds to a halt.
    I have closed down all other programmes, switched off Faces and Enable Gestures, and switched off Look up Places.
    The situation is driving me nuts, as we shoot and view images with customers immediately after the session, and the download time affects the amount of time we have to do this. Nothing seems logical, as such a high-spec machine should theoretically fly through this.

    Have you upgraded your OS and all your apps to the latest versions, and also rebooted the system?
    Turn on the Activity Viewer (Window -> Show Activity) while importing, to see how Aperture is spending its time.
    Also launch some tools from the Applications -> Utilities folder:
    launch the Console window, to check for diagnostic/error messages;
    launch Activity Monitor, to see how much RAM and CPU time Aperture and other processes are using.
    How much space is free on your system drive? And on the drive you are importing to?
    If you find in Activity Monitor that Aperture is slow but is in fact not using much CPU time, then the problem may be a corrupted Aperture library or corrupted preference files.
    Try to import into a new, empty Aperture library and see if the problem persists.
    If you have no problems with a new Library, try the Aperture 3: Troubleshooting Basics:
    http://support.apple.com/kb/HT3805
    Start with repairing permissions and repairing the library.
    If none of the measures outlined there help, or if you need further assistance, please report back.
    Regards
    Léonie

  • Cant install PSE 11 shared files issue?

    Hi - I can't seem to install PSE 11 as I keep getting shared file issues. I've managed to work my way into the manual error log and it's saying this - any ideas please? Please go slow, I am so not technical at all!
    Thanks
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
    Visit http://www.adobe.com/go/loganalyzer/ for more information
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
    START - Installer Session
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
    RIBS version: 3.5.8.0
    Win OS version: 6.1.1.0 64 bit Type: 1
    ::START TIMER:: [Total Timer]
    CHECK: Single instance running
    CHECK : Credentials
    Load Deployment File
    Create Required Folders
    Assuming install mode
    Looking up install source path
    Sync Media DB ...
    ::START TIMER:: [Sync Media DB]
    Pre check media db sync
    End of Pre check media db sync. Exit code: 0
    :: END TIMER :: [Sync Media DB] took 247.39 miliseconds (0.24739 seconds) DTR = 371.882 KBPS (0.363166 MBPS)
    Ready to initialize session to start with ...
    ::START TIMER:: [CreatePayloadSession]
    -------------------- BEGIN - Updating Media Sources - BEGIN --------------------
    Updated source path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller
    Updating media info for: {0449467E-102A-4514-9F4D-91BCEE129390}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\ElementsCameraRawProfile7.0All-160512123011\Install.db
    Updating media info for: {2EBE92C3-F9D8-48B5-A32B-04FA5D1709FA}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\AdobeXMPPanelsAll\Install.db
    Updating media info for: {3A8D4A3A-5E08-4E33-8831-50CB4DC101E6}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\ElementsCameraRaw7.1All-x64\Install.db
    Updating media info for: {5A73BC84-244A-48B4-ADDB-99CE9E2DBBE7}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\ElementsCameraRaw7.1All\Install.db
    Updating media info for: {98CE8819-87AA-4814-8167-ADDDD513485F}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\PSE11STIInstaller\Install.db
    Updating media info for: {D1B83970-7269-48BE-8B0E-5120D9327E52}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\AdobeAPE3.101-mul\Install.db
    Updating media info for: {F6F5021E-0548-43C1-82CC-C5C7A6906585}
    Ignoring original data since install source is local
      Type: 0, Volume Order: 1, Media Name: Adobe Photoshop Elements 11
      Path: C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11\ElementsSTIInstaller\payloads\ElementsCameraRawProfile7.0All\Install.db
    --------------------  END  - Updating Media Sources -  END  --------------------
    Supported RIBS version range: [0.0.66.0,3.5.8.0]
    [    3392] Tue Jul 02 20:12:47 2013 ERROR
    Payload {98CE8819-87AA-4814-8167-ADDDD513485F} of version: 4.0.19.0 is not supported by this version: 3.5.8.0 of RIBS
    Payload {D1B83970-7269-48BE-8B0E-5120D9327E52} of version: 4.0.19.0 is not supported by this version: 3.5.8.0 of RIBS
    Unsupported payload versions included
    [    3392] Tue Jul 02 20:12:47 2013  INFO
    :: END TIMER :: [Total Timer] took 326.856 miliseconds (0.326856 seconds) DTR = 293.707 KBPS (0.286824 MBPS)
    -------------------------------------- Summary --------------------------------------
    - 0 fatal error(s), 3 error(s), 0 warning(s)
    ERROR: Payload {98CE8819-87AA-4814-8167-ADDDD513485F} of version: 4.0.19.0 is not supported by this version: 3.5.8.0 of RIBS
    ERROR: Payload {D1B83970-7269-48BE-8B0E-5120D9327E52} of version: 4.0.19.0 is not supported by this version: 3.5.8.0 of RIBS
    ERROR: Unsupported payload versions included
    Search the same string above to find when the error occured.
    Exit Code: 21 - payloads version is not supported by installed version of RIBS
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
    END - Installer Session
    *=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

    I have come across this on another forum but I don't have any idea how to do this? Can someone help please?
    To resolve this TIME-consuming problem, I had another look at that support page and this is what I did to fix it. Thank you for your help, 99jon, for the link.
    I copied the Elements folder to the root of the HD, then renamed the folder with the installer files contained within to c:\PSElements9Installer, then ran the setup.exe as administrator and hey presto, it worked and fully completed the install, grrrrr.
    Cheers, people of the Digital World, I've now relieved my stress.
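    For what it's worth, those copy/rename steps can also be scripted. A minimal sketch; the source path is taken from the install log above and the destination name just follows the PSElements<version>Installer pattern used in this thread, so treat both as placeholders:

        import shutil
        from pathlib import Path

        # Placeholders: source as seen in the log above; destination at the drive root, no spaces.
        SOURCE = Path(r"C:\Users\Nina\Adobe Photoshop Elements 11\PSE 11")
        DEST = Path(r"C:\PSElements11Installer")

        if not DEST.exists():
            shutil.copytree(SOURCE, DEST)
        print("Copied to", DEST, "- now right-click setup.exe there and choose 'Run as administrator'.")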

    TangAdmin (11-Apr-2013, in reply to Digital Controller): Had the same problem with PSE 10. Copying it to the root of C: and renaming it to PSElements10Installer worked perfectly. Thanks to all involved.

  • Flash Builder 4.7 closing emulator when clicking outside

    Starting today, Flash Builder 4.7 closes the Android emulator if I click anywhere outside the emulator screen. This makes it really hard to debug; it only started today, and I have no idea what has caused it.
    Also, when launching it on a device (Motorola XOOM), it's closing pretty much as soon as it loads.
    My main issue is that I have no idea at all why it's doing this, and there's no error message or anything; it just closes and the console goes dead.
    Anyone else had this issue, or know where to look for error log info?

    AAAAAAAAAAAhhh!
    Found the issue.
    My co-worker had overridden a handler thusly:
    override protected function deactivateHandler(event:Event):void
    {
        super.deactivateHandler(event);
        NativeApplication.nativeApplication.exit();
    }
    This was there for some reason or other... but it wasn't affecting him, as he was only testing on a Nexus 10, where for some reason that event doesn't get fired unless you are actually trying to exit the app.
    But on the Motorola XOOM and the AIR emulator it gets called whenever the app is told to go to sleep or loses focus, I guess.
    Wow, that wasted a LOT of time!

  • [svn:osmf:] 15988: asdocs build fix (missing closing /p added x 3)

    Revision: 15988
    Author:   [email protected]
    Date:     2010-05-10 09:32:06 -0700 (Mon, 10 May 2010)
    Log Message:
    asdocs build fix (missing closing </p> added x 3)
    Modified Paths:
        osmf/trunk/framework/OSMF/org/osmf/elements/LightweightVideoElement.as
        osmf/trunk/framework/OSMF/org/osmf/elements/LoadFromDocumentElement.as
        osmf/trunk/framework/OSMF/org/osmf/elements/VideoElement.as

    Carey,
    I have tried london1a1's workaround, and it has not made any difference.
    It seems that london1a1 suggests changing the Camera.h file in this location:
              Users/london1a1/Documents/DW_NAF/PhoneGapLib/PhoneGapLib/Classes/Camera.h
    Whereas you're saying to change the Camera.h file in this location:
              /Applications/Adobe Dreamweaver CS5.5/Configuration/NativeAppFramework/DWPhoneGap/iphone/PhoneGapLib/Classes/Camera.h
    I've tried changing the Camera.h file in both locations.  Neither has made a difference.

  • Dreamweaver CS3 (9) maxed CPU slow performance and FTP issues.

    Dreamweaver CS3 (9) maxed CPU, slow performance and FTP issues.
    Never mind... I found it.
    Of course I have all the firewall, worm, scanning, phishing filtering, full-time protection, and everything else turned off when I am working on a remote/testing site. However, I found that the Symantec antivirus email scan being enabled (doesn't it just interact with my email program?) was getting in the way of DW9 FTP operations, causing some lagging that was putting DW9 into an unstable mode during and immediately after (10 seconds) FTP operations.
    The CPU still spikes to 99% during FTP functions, but DW9 works even though it's likely taxed to the max; when I turn off email scan, DW9 returns to a functioning state during FTP and remote file view. Whatever it takes, huh?
    PS: email scanning never affected DW 8.02.
    My System Info: properly configured (as in: everything works or I fix it) Toshiba - XP Pro SP2 – Pentium 4 - 2.3 GHz – 1.5 GB RAM – Intel Extreme graphics card with 64 MB RAM – Apache 2.2 server – PHP 5 – MySQL 5 - phpMyAdmin – Dreamweaver 8.02 – Interakt Kollection 3.7.1 – Dreamweaver 9 - DevToolbox – Flash – GoLive - Photoshop - ImageReady – Illustrator – Acrobat – CuteFTP – PuTTY SSH - blabla...)

    Thanks for the response. I managed to Google the 'nobody' account and found the explanation.
    As you say, I assumed the 'syslogd' process had to do with system logs. I did check the console and couldn't find anything of particular interest in the logs.
    I'll continue to monitor but if anyone has any further suggestions please post.
    One thing that might also have an impact... my available drive space is now down to less than 5 GB (it runs anywhere between 1 GB and 5 GB free depending on what I'm running). I know this causes problems for Parallels, as it keeps moaning at me, but would it cause any problems with OS X?
    Thanks again,
    Tom

  • Very slow copying of files to external hard drive

    Dear all,
    I've been finding that my MacBook Pro (a 2007 model running Snow Leopard) is extremely slow to copy files over to an external hard drive. The hard drive is connected via Firewire and is perfectly fine with my (Tiger) desktop G5. When I say slow, I mean that the progress monitor is counting in individual bytes.
    Any ideas?
    Thanks in advance,
    Roger

    I've tried again with a different cable and the firewire 400 (which is certainly what I'm using) still proves appallingly slow when I'm trying to copy files from my MacBook to the drive. The other way round (External HD to computer) is just fine.
    Sounds like a hardware problem. In order to troubleshoot it, you'd need to try a different FW HD. If it did the same thing, it would suggest an issue with the MBP's FW port/bus. If it worked fine, then it would suggest an issue with the first FW HD, likely with the circuitry, as has been surmised before (the HD itself may be just fine, and could be transferred into a different enclosure).
    I would try firewire 800, but assume that you can't have a connector which is 800 one end and 400 the other: both HDs are firewire 400 and I want to carry on using them ...
    800 is backwards compatible with 400, so you actually can, but it will operate at the 400 speed; in any case, no reason to mess with that here.
    One of my HDs, a later model, has a USB connector built in. I tried copying from the MacBook to the HD using the USB and it was very fast.
    Sounds like you've narrowed it down to a FW issue, as described above.
    I'm wondering whether this is, as one of the other correspondents surmises, an OS X 10.6.3 problem ...?
    I strongly doubt it. I'm on 10.6.3 and have done SuperDuper backups/Time Machine backups/large file transfers, and all have behaved just as they did before 10.6.3. I read all the posts that come through here, and there is no widespread issue (normally, when something like that happens, there will be many posts about it; in fact, there are usually many posts after any major update/upgrade, when people believe the update/upgrade has done something to their machines, but I've been surprised at the low number after 10.6.3, at least here in this forum, and 10.6.3 was a very large update).
    Would be very interested to know your informed thoughts.
    Some thoughts:
    -as mentioned above, try a different FW HD on that same MBP port to determine if the port is operating correctly.
    -if the port does function properly, get a new enclosure and install the HD from the FW enclosure (USB enclosures, if you're satisfied with their speed, are very inexpensive, sometimes as low as $10 or so). Check out Macsales, Newegg and Amazon for some ideas/examples, or if you want to see hundreds of them, google it.
    -if the port is the issue, you may want to get used to using USB or get a FW800 HD enclosure. Obviously there is no guarantee the FW800 port/bus is functioning properly, but I think the odds would be good.

  • SHADOW_IMPORT_UPG1 is very very slow, no log files are created

    Hi all
    We are now doing our production upgrade. During the SHADOW_IMPORT_UPG1 phase the system is very slow, and no log files are created in the /usr/sap/put/log directory.
    Only three files are growing in the /usr/sap/tmp directory:
    orar3p> ls -lrt
    total 219176
    -rw-rw-rw-   1 r3padm     sapsys        2693 Aug 15 18:42 UCMIG_DE.ECO
    -rw-rw-rw-   1 r3padm     sapsys        2374 Aug 15 18:42 R3trans.out
    -rw-rw-rw-   1 r3padm     sapsys        2685 Aug 15 18:46 ADDON_TR.ECO
    -rw-rw-rw-   1 r3padm     sapsys         726 Aug 15 20:04 crshdusr.log
    -rw-rw-rw-   1 r3padm     sapsys        3915 Aug 15 21:53 EU_IMTSK.ECO
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLPTN18.R3P
    -rw-rw-r--   1 r3padm     sapsys         257 Aug 15 22:09 SAPKKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36433272 Aug 15 23:44 SAPKLESN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     36807577 Aug 15 23:44 SAPKLFRN18.R3P
    -rw-rw-r--   1 r3padm     sapsys     35372350 Aug 15 23:44 SAPKLPTN18.R3P
    orar3p> date
    Fri Aug 15 23:44:54 PDT 2008
    Can anyone help with what to do?
    Thanks
    Senthil

    Hello,
    did you discover what the cause was for this phase running so slowly? And how long did it take to complete in the end?
    We are currently running an upgrade of our Development system and have struck the same issue.
    I killed the upgrade after the phase had been running for 4 hours and restarted it, but it looks like it is still going to run for a long time.
    Regards....John

  • Cannot access a closed file error on adding a file to a library

    I am trying to copy the attachments from a list to a document library. When I try to write into a document library, occasionally I get the following error: "The remote server returned an error: (500) Internal Server Error.  Cannot access a closed file.  Cannot access a closed file."
    Any ideas on what is causing the problem? The code is specified below, and the error message is specified after the code. Can you please let me know if I am missing anything? I can't find anything wrong in the code or in the library. The code is running as a WCF service and the call to process this is made from an InfoPath form.
    try
    {
        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            string siteURL = "Site URL";
            using (SPSite site = new SPSite(siteURL))
            {
                SPWeb oWeb = site.OpenWeb();
                SPList oList = oWeb.Lists["List Name"];
                SPList docDestination = oWeb.Lists["Destination Library on same site"];
                SPFolder fldRoot = oWeb.Folders["Destination Library on same site"];
                SPFileCollection flColl = null;
                SPQuery oQuery = new SPQuery();
                oQuery.Query = "<Where><Eq>" + "<FieldRef Name=\"ID\"/><Value Type=\"Number\">" + RequestID + "</Value></Eq></Where>";
                SPListItemCollection oItems = oList.GetItems(oQuery);
                foreach (SPListItem lstItem in oItems)
                {
                    if (lstItem["ID"].ToString() == RequestID)
                    {
                        if (lstItem.Attachments != null && lstItem.Attachments.Count > 0)
                        {
                            foreach (String strName in lstItem.Attachments)
                            {
                                flColl = fldRoot.Files;
                                SPListItem listtem = docDestination.Items.Add();
                                SPFile FileCopy = lstItem.ParentList.ParentWeb.GetFile(lstItem.Attachments.UrlPrefix + strName);
                                SPQuery oQueryDest = new SPQuery();
                                oQueryDest.Query = "<Where><Eq>" + "<FieldRef Name=\"Name\"/><Value Type=\"Text\">" + FileCopy.Name + "</Value></Eq></Where>";
                                // Note: this queries with oQuery (the ID query), not the oQueryDest built just above.
                                SPListItemCollection collListItems = docDestination.GetItems(oQuery);
                                string destFile = flColl.Folder.Url + "/" + FileCopy.Name;
                                if (collListItems.Count > 0)
                                {
                                    // Append the current count before the 3-character extension to avoid a name clash.
                                    String strFileNamePrfix = FileCopy.Name.Substring(0, FileCopy.Name.Length - 3);
                                    strFileNamePrfix = strFileNamePrfix + "_" + collListItems.Count.ToString();
                                    strFileNamePrfix = strFileNamePrfix + FileCopy.Name.Substring(FileCopy.Name.Length - 3, 3);
                                    destFile = flColl.Folder.Url + "/" + strFileNamePrfix;
                                }
                                byte[] fileData = FileCopy.OpenBinary();
                                Trace.WriteLine("Board Services dest file" + destFile + " Count = " + collListItems.Count.ToString());
                                SPFile flAdded = flColl.Add(destFile, fileData, true);
                                SPListItem item = flAdded.Item;
                                flAdded.Item.Update();
                            }
                        }
                    }
                }
            }
        });
    }
    catch (Exception ex)
    {
        string strLogEntry = ex.Message.ToString() + ex.StackTrace.ToString();
    }
    finally
    {
    }
    Form submission failed. (User: xxxxx, Form Name: Template, IP: , Request: http://Site Url/Lists/List name/Task/editifs.aspx?List=e6647e38-cb01-46b3-8bee-308357045532&ID=116&Source=http://Site URL/Lists/List Name/AllItems.aspx?View={CB9F3C2B-49DF-4D75-9720-1E7E000FD229}&FilterField1=Status&FilterValue1=Completed&IsDlg=1&Web=706d51c9-e509-40f3-884a-7d40e126ff14,
    Form ID: urn:schemas-microsoft-com:office:infopath:list:-AutoGen-2013-12-06T16:11:54:747Z, Type: DataAdapterException, Exception Message: The remote server returned an error: (500) Internal Server Error.  Cannot access a closed file.  Cannot
    access a closed file.  
     at System.IO.FileStream.Write(Byte[] array, Int32 offset, Int32 count)    
    The remote server returned an error: (500) Internal Server Error.)

    Hi Dimitri,
    Thank you, I tried increasing the timeout in the web.config file of the WCF service and reloaded the config file.
    <httpRuntime executionTimeout="90" maxRequestLength="20000" useFullyQualifiedRedirectUrl="false" requestLengthDiskThreshold="8192"/>
    The error
    Form submission failed. (User: , Form Name: Template, IP: , Request: http://site /editifs.aspx?List=e6647e38-cb01-46b3-8bee-308357045532&ID=102&Source=http://site List/AllItems.aspx?View={cb9f3c2b-49df-4d75-9720-1e7e000fd229}&SortField=Status&SortDir=Asc&IsDlg=1&Web=706d51c9-e509-40f3-884a-7d40e126ff14#,
    Form ID: urn:schemas-microsoft-com:office:infopath:list:-AutoGen-2013-12-06T16:11:54:747Z, Type: DataAdapterException, Exception Message: The remote server returned an error: (500) Internal Server Error.  Cannot access a closed file.  Cannot access
    a closed file.  
     at System.IO.FileStream.Write(Byte[] array, Int32 offset, Int32 count)    
    is happening at the following line:
    SPFile flAdded = flColl.Add(destFile, fileData,attachmentPropertiesHashTable ,true);
    I only modified the timeout on the WCF service.
    I don't think I need to change the timeout on the web application that hosts this site, do I?
    Regards
    Nate

  • Laserjet p2055dn is slow printing pdf files using mac

    My Mac is very slow printing PDF files on my HP LaserJet P2055dn.  I have noticed fixes for this problem for people using Windows.  Is there a Mac solution?

    Do you want to create PDF files using Java? If so, there is a library available at http://www.lowagie.com/iText/docs.html. Check out this site. There are many more similar PDF generator tools that you can use.

  • EXTREMELY slow to open files - Illustrator CC 2014!

    Hi,
    I just updated from Mavericks to Yosemite 10.10.1.
    See my system config on this monosnap link:
    http://take.ms/2CAxh
    I have installed the latest Illustrator CC - 18.1.1
    See monosnap link:
    http://take.ms/lgqj0
    I have a big problem every time I try to open a file from within Illustrator CC!
    When I try to open a folder (with a few or many files), it TAKES FOREVER for Illustrator to render the files in the folder (approx. 2-5 minutes...).
    Everything on the machine freezes - it's like a big hang! - but after the looong wait the hang kind of releases, and I can see all the files in the folder and can open them and go on with the job...
    When I do exactly the same but from within Photoshop CC (2014.2.2) everything works smooth, quick and fine!
    I have turned off Yosemite transparency and checked that I have Verdana installed.
    What more can I do???
    Please help - this problem is extremely tedious to have!

    Hi Evil Lair Bear,
    Have you found a solution and solved the problem (besides the embedding idea)?
    I still have the same problem = extremely slow to open files from within Illustrator 18.1.1.
    We have tons of old Illustrator files dating back as far as Illustrator 3.2...
    It's impossible for us at the studio to reopen all the old files and embed the pics...
    Now we all have to do the annoying workaround of finding and opening the files in the Finder...
    This is driving me nuts...
    Any help would be magic! ;-)

  • [svn] 3638: Changing the build order of swc file update with dita xml files .

    Revision: 3638
    Author: [email protected]
    Date: 2008-10-14 16:49:56 -0700 (Tue, 14 Oct 2008)
    Log Message:
    Changing the build order of swc file update with dita xml files.
    By default this would now happen during the checkintests target - so "ant clean main" shouldn't get affected.
    To opt out use -Dno.doc=true
    QA: No
    Doc: No
    Tests: checkintests
    Modified Paths:
    flex/sdk/trunk/build.xml
    flex/sdk/trunk/frameworks/build.xml
    flex/sdk/trunk/frameworks/projects/airframework/build.xml
    flex/sdk/trunk/frameworks/projects/flash-integration/build.xml
    flex/sdk/trunk/frameworks/projects/flex/build.xml
    flex/sdk/trunk/frameworks/projects/flex4/build.xml
    flex/sdk/trunk/frameworks/projects/framework/build.xml
    flex/sdk/trunk/frameworks/projects/haloclassic/build.xml
    flex/sdk/trunk/frameworks/projects/rpc/build.xml
    flex/sdk/trunk/frameworks/projects/utilities/build.xml

  • Ocrfile is not being written to. Open file issues. Help please.

    I've been troubleshooting an open file issue on our Test environment for quite a while now. Oracle has had me update to the latest CRS bundle for 10.2.0.3, then upgrade to 10.2.0.4, then apply two more patches via OPatch to bring 10.2.0.4 RAC to its most recent patch level. None of these patches resolved our problem. We have ~8700 datafiles in the database, and once the database is started we're at ~11k open files on Production, but on Test we're at ~37k or higher. It takes 1-2 days to hit the 65536 limit, at which point it crashes. I have to 'bounce' the database to keep it from crashing. Yes, I could raise the ulimit, but that isn't solving the problem.
    Over the weekend I noticed that on Production and DEV the ocrfile is being written to constantly and has a current timestamp, but on Test the ocrfile has not been written to since the last OPatch install. I've checked the CRS status via 'cluvfy stage -post crsinst -n all -verbose' and everything comes back as 'passed'. The database is up and running, but the ocrfile is still timestamped April 14th, and open files jump to 37k upon opening the database and continue to grow toward the ulimit. Before hitting the limit, I'll have over 5,000 open files for 'hc_<instance>.dat', which is how I've been led down the path of patching Oracle CRS and RDBMS to resolve the 'hc_<instance>.dat' bug that was supposed to be fixed in all of the patches I've applied.
    From imon_<instance>.log:
    Health check failed to connect to instance.
    GIM-00090: OS-dependent operation:mmap failed with status: 22
    GIM-00091: OS failure message: Invalid argument
    GIM-00092: OS failure occurred at: sskgmsmr_13
    That info started the patching process but it seems like there's more to it and this is just a result of some other issue. The fact that my ocrfile on Test is not being written to when it updates frequently on Prod and Dev, seems odd.
    We're using OCFS2 as our CFS, updated to most recent version for our kernel (RHEL AS 4 u7 -- 2.6.9-67.0.15.ELsmp for x86_64)
    Any help greatly appreciated.
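    One way to quantify the descriptor growth is to poll /proc and count how many open file descriptors point at hc_*.dat files. A minimal sketch, assuming a Linux host as described and a user that is allowed to read /proc/<pid>/fd (the poll interval is arbitrary):

        import glob
        import os
        import time

        def count_hc_fds(needle="hc_"):
            """Count open descriptors, across all processes, whose target path contains `needle`."""
            count = 0
            for fd_path in glob.glob("/proc/[0-9]*/fd/*"):
                try:
                    if needle in os.readlink(fd_path):
                        count += 1
                except OSError:
                    continue  # process exited or permission denied
            return count

        if __name__ == "__main__":
            while True:
                print(time.strftime("%H:%M:%S"), "open hc_*.dat descriptors:", count_hc_fds())
                time.sleep(300)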

    Check the bug on Metalink.
    If it is Bug 6931689:
    To fix this issue, please apply one of the following patches:
    Patch 7298531 CRS MLR#2 ON TOP OF 10.2.0.4 FOR BUGS 6931689 7174111 6912026 7116314
    or
    Patch 7493592 CRS 10.2.0.4 Bundle Patch #2
    Be aware that the fix has to be applied to the 10.2.0.4 database home to fix the problem
    Good Luck

  • Sudden MTS File Issues

    I have been editing MTS files (AVCHD) from a Panasonic GH2 (some hacked, some stock firmware) for over 2 years with little to no issues. I am using the CS5.5 Master Collection, Premiere Pro in this case, on a 3-year-old Sager laptop (64-bit, i7-720QM, 8 GB RAM, 2 internal HDDs, nVidia GeForce GTX 285M). I am not sure if I am having hardware issues or a corrupt file, but something is not right.
    I am now in the middle of a project and all of the sudden my computer is VERY sluggish and it takes minutes before things update (such as scrolling in my timeline to a different frame) and sometimes it just freezes up all together. There is also an issue (could be related) with the second set of spanned clips (detailed below) from this project.
    The timeline is DV 24P widescreen since it is going to DVD and also uses footage shot from a DVX100b for cutaways. The GH2 footage was all shot with stock firmware 1.1 at 24H, all shot on one day; the first recording time was just shy of an hour, then about 20 minutes, and then another for about an hour. This produced 3 MTS files for the first section, 1 solo file for the short section and 4 files for the last long section.  I copied the whole card to my D drive without altering any of the folder structure, like a good boy. I brought the clips into the project window and had no issues with editing the first hour-long section. I will say that I did not use the Media Browser, which I have recently learned is the preferred way to import these clips. For the last 2 years I have dragged and dropped using the Windows Explorer window and have not had any issues until now. I have dealt with spanned clips, a music video that required over ten video tracks, and much higher bit rates, so I am not sure what the trigger is for these issues. What is odd about the second longer clip is that when the first clip is dragged into the timeline (00004), it only brings in that clip, not the others that should be attached. The first section worked but not this one. And all of the files that follow this clip are numbered differently but look identical to (00004) when dragged to the timeline. They all appear normal in Explorer (smaller clips that should be spanned together) so the footage is intact, just not in PP.
    I experimented a little and copied clip 0005 into its own folder, dragged it in, and it then displayed properly (not a copy of 00004). The sluggishness is still there, so I am not ready to just grin and bear it with this workaround. I also experimented by creating a brand new project, copying the original footage from my archive drive (never imported into a timeline, so no xmp files) into a new folder, and using the Media Browser to bring in the footage. It didn't show clips 00005 and up, since the Media Browser is only supposed to show the first of the spanned clips, but it was still only 24 minutes long and obviously not including the rest of the clips. The Media Browser recognizes that these clips are supposed to be together, but PP is seeing them all as copies of 00005.
    I will greatly appreciate any opinions, but bear in mind that I have used this same workflow on this same system for over 2 years. The only variable is that these spanned clips may be slightly longer than ones I have used in the past, but not by much. I really don't know if there is a piece of hardware malfunctioning, if the clip is somehow corrupt, or if I am missing something obvious.
    What could cause this sudden change in performance? Would editing with a proxy help? Should I convert this trouble clip to a different format? Am I asking too much of my system? Are there ways to check if my computer is functioning properly?
    Thanks in advance for any insight and for reading this rather lengthy explanation.
    Cole

    I wanted to give a quick update on this MTS file issue. I was able to get my system back to normal by isolating the trouble files in their own folders outside of the "Private" folder(the actual source files, not in the premiere project). I copied the first clip of the group into a folder by itself and the last 3 clips into another folder. I deleted the originals from premiere and imported (dragging and dropping from windows, not using media browser) the isolated files and they worked fine. Obviously they were now 4 independent files that I had to place side by side in the timeline but they lined up and there is no more lag in the system and there are no more duplicate files.
    Far from conventional but it has me editing again and I didn't have to buy a new computer to be back in business. I can only guess that something glitched when it was conforming and it wouldn't recognize the spanned clips.
    A special thanks to Eric at ADK for offering some suggestions that I fortunately didn't have to try.

Maybe you are looking for

  • UrgentError - oracle.jbo.RowCreateException: JBO-25017 while creating a row

    Hi, I am developing a page which would insert values into the database. However when I run the page, I get the following error: Error - oracle.jbo.RowCreateException: JBO-25017: Error while creating a new entity row for RepeatPayEO. It gives the erro

  • Pegging information has not been calculate for [item] in this plan

    Hello planners - we ran across a strange thing today. We have an item in our unconstrained MRP with multiple demands and resulting supplies. The last demand line (Sales Order MDS) shows pegging information has not been calculate for [item] in this pl

  • /etc/sshd_config out of order?

    Leopard has this new GUI feature for allowing only certain users to connect via ssh. On Tiger I used /etc/sshd_config with the AllowUsers directive to define allowed users. The settings in Leopard's System Preferences do not change this file. I gave

  • Psc 2210 all-in-one.

    I have an old workhorse, a psc 2210 all-in-one. Can I easily turn the color off and on? Anybody know?   Thanks.

  • ORA-00604,ORA-00600,ORA-1652,ORA-1653,ORA-00257 errors

    Hi All, I am getting the below alerts in my Oracle 9i database. ORA-00604: error occurred at recursive SQL level 3 ORA-00600: internal error code, arguments: [kghpih:ds], [ ORA-1652: unable to extend temp segment by 128 in tablespace           TS_PHD