FileStreams

Hello,
the following class can inflate and deflate files. The inflated/deflated file overwrites the old one. But I've run into a problem: this class first reads a file into a byte array, which is then inflated/deflated and written back to disk. With large files (>30 MB on my computer), I get an out-of-memory error. Does anyone have any suggestions?
Thanks in advance,
Wouter
This is the code:
package koepelFileCompressor;

import java.io.*;
import java.util.zip.*;

public class koepelFileHandler {
     static final int BUFFER = 4096;

     public koepelFileHandler(){
          // empty constructor
     }

     // Reads the whole file into a byte array (this is the memory bottleneck).
     public byte[] makeByteArray(File file){
          byte[] data = new byte[BUFFER];
          try {
               FileInputStream fi = new FileInputStream(file);
               BufferedInputStream fileNaarByte = new BufferedInputStream(fi, BUFFER);
               ByteArrayOutputStream out = new ByteArrayOutputStream();
               int count;
               while ((count = fileNaarByte.read(data, 0, BUFFER)) != -1) {
                    out.write(data, 0, count);
               }
               fileNaarByte.close();
               return out.toByteArray();
          } catch (Exception e) {
               e.printStackTrace();
               return data;
          }
     }

     public void pakIn(String fileName){
          try {
               Deflater compressor = new Deflater();
               compressor.setLevel(Deflater.BEST_COMPRESSION);
               compressor.setInput(makeByteArray(new File(fileName)));
               compressor.finish();
               BufferedOutputStream inflateTo = new BufferedOutputStream(new FileOutputStream(new File(fileName)));
               byte[] buf = new byte[BUFFER];
               while (!compressor.finished()) {
                    int countIt = compressor.deflate(buf);
                    inflateTo.write(buf, 0, countIt);
               }
               inflateTo.close();
          } catch (Exception e) {
               e.printStackTrace();
          }
     }

     public void pakUit(String fileName){
          try {
               Inflater decompressor = new Inflater();
               decompressor.setInput(makeByteArray(new File(fileName)));
               BufferedOutputStream deflateTo = new BufferedOutputStream(new FileOutputStream(new File(fileName)));
               byte[] buf = new byte[BUFFER];
               while (!decompressor.finished()) {
                    int count = decompressor.inflate(buf);
                    deflateTo.write(buf, 0, count);
               }
               deflateTo.close();
          } catch (Exception e) {
               e.printStackTrace();
          }
     }
}

One option is to increase the Java heap size by doing
java -XmxBIGGERNUMBER yourRunnableClassName
Also, you could analyze your code to see whether you can set some large data structures to null after you're done using them, before you build other large structures. Calling System.gc() a few times in succession after nulling them out will encourage the VM to reclaim that memory soon afterwards, though this will slow down the program somewhat.
Hope this helps,
Corrine
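Another option, rather than raising the heap, is to avoid buffering the whole file at all. The sketch below (not the original poster's class; the class and method names here are illustrative) streams through a small fixed-size buffer using DeflaterOutputStream and InflaterInputStream, and writes to a separate destination file instead of overwriting the source in place. It assumes Java 7+ for try-with-resources.

```java
import java.io.*;
import java.util.zip.*;

public class StreamingZip {
    static final int BUFFER = 4096;

    // Copy everything from in to out through a small fixed-size buffer,
    // so memory use stays O(BUFFER) instead of O(file size).
    static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[BUFFER];
        int count;
        while ((count = in.read(buf)) != -1) {
            out.write(buf, 0, count);
        }
    }

    // Compress src into dst using DEFLATE.
    public static void deflate(File src, File dst) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(src));
             OutputStream out = new DeflaterOutputStream(
                     new BufferedOutputStream(new FileOutputStream(dst)),
                     new Deflater(Deflater.BEST_COMPRESSION))) {
            copy(in, out);
        }
    }

    // Decompress src (DEFLATE data) into dst.
    public static void inflate(File src, File dst) throws IOException {
        try (InputStream in = new InflaterInputStream(
                     new BufferedInputStream(new FileInputStream(src)));
             OutputStream out = new BufferedOutputStream(new FileOutputStream(dst))) {
            copy(in, out);
        }
    }
}
```

With this shape, a 30 MB (or 3 GB) file never has to fit in the heap; if overwriting the original is required, you can rename the destination file over the source after the stream is closed.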
Similar Messages

  • How do I use a FileStream with an IFilePromise without knowing the File?

    I've finished the implementation of my project using async IFilePromises by extending the ByteArray class. I can drag out of my application and files are created by copying bits over the network, etc. The issue I'm running into is that because I used ByteArray as my data provider, files are stored "in memory" until the Close event is fired.
    My application routinely copies files around 700 MB and up to 2 GB. The implementation quickly causes out of memory errors. That said, if I change the implementation to use FileStreams then the files are written directly to disk and memory is exactly where I want it to be.
    The issue is that the FileStream requires a File object when its FileStream.open(File, FileMode) method is called. If I manually create a file and use it, then everything is fine, but doing that defeats the purpose of the FilePromise. I lose all the information about where a person dropped the file.
    So how do I get access to the File object that is created for each FilePromise? I know one is created by inspecting the call stack when the IFilePromise.open() method is called and inspecting the caller MacFilePromiseWrapper._file.
    I've tried simply returning a plain FileStream instead of an extended class in the IFilePromise.open method, but that still gives the same error result when I try to write to the stream (says that the stream isn't open yet).
    I would have expected the MacFilePromiseWrapper / FilePromiseWrapper to intelligently handle the returned IDataProvider and perform any "open" and "close" it needed. Please tell me I'm just missing something obvious. I'm so close to being done with this project and I don't want to have to rewrite this using a native implementation. The performance issues (memory usage) will cause me to do just that if I can't figure this out.
    Thanks for any help,
    Jared

    I have resolved the issues myself.
    I had the idea of an IFilePromise / IDataInput backwards. The way I thought it worked is that what you were "writing to" when you wrote into a ByteArray and fired off progress events was simply a pipe to a file handle on the disk. That's backwards. What really happens is that the thing you're writing to is a buffer for the IFilePromise. Firing the open/progress/close events lets the file system know that it's safe to pull up to IDataInput.bytesAvailable out of your buffer and write that to the disk, and the buffer you write to can be a ByteArray, Socket, FileStream, etc.
    The confusion was compounded by my attempts to use FileStream as the IDataInput. What I was doing for "testing purposes" was creating a temporary file and writing my bytes to it. The file handle created by the FilePromise would immediately disappear, so I thought it meant I needed to somehow get access to that handle so I knew where to send my bits using FileStream.open(File, FileMode).
    I was really close, but had a mistake in there. I was on the right track with opening the FileStream and using the temporary file, but I opened it with the wrong FileMode.
    I used FileMode.WRITE because I thought I was writing to the disk only. When I switched it to FileMode.UPDATE (which is read/write) then everything worked as I needed it to.
    So to summarize, to use a FileStream as the IDataInput for an IFilePromise you need to:
    1. Create the FileStream object and return it when IFilePromise.open is called.
    2. When you get bytes off your network, etc., dispatch the OPEN event.
    3. Open the FileStream object with FileMode.UPDATE and use a temporary file provided by File.createTempFile(); store it for later.
    4. Write bytes to the FileStream and send out the proper ProgressEvent.PROGRESS events.
    5. Dispatch the COMPLETE event when you're done.
    6. The IFilePromise.end() method will be called, and you have an opportunity to close your FileStream and delete the temporary file that was created.
    I hope this helps someone that runs into a similar issue.

  • Fairly certain that FileStream.writeObject() and FileStream.readObject() do not function - at all -.

    I've struggled with this since Jan 9th, 2013 (if not longer) and the only conclusion I can come to is that this simply does not function. No matter what I try and no matter what resource (and I'm finding precious few) I follow to try to implement this within Flash Builder 4.7, Flex SDK 4.6.0 (Build 23201), AIR SDK 3.5, I only succeed in creating a file (with the correct name) that is 123 bytes in size and reads back in as null.
    I've tried using ByteArray.writeObject()/readObject() as an intermediary with FileStream.writeBytes()/readBytes(), with no luck.
    I've tried instantiating an object, setting properties and then using that.  I've tried instantiating my correctly formed ValueObject (including the remoteClass alias metadata tag).
    I've tried using -verbatim- the example provided in the topmost suggested 'Community Help' resource http://www.switchonthecode.com/tutorials/adobe-air-and-flex-saving-serialized-objects-to-file It is worth noting that this solitary example of the procedure/SDK-usage is dated to Flex SDK 3.0 and at least 04/04/2009 (first comment on the article).
    My frustrating hell (one version of many methods attempted) is detailed on StackOverflow (including -all- mxml, as, and trace output), but so far, no assistance has been forthcoming, alas.  This is a severely stripped down and simplified version of what had been a far more complex attempt:
    http://stackoverflow.com/questions/14366911/flex-air-actionscript-mobile-file-writeobject-readobject-always-generates-null-w
    An earlier post* detailing a far more complex attempt iteration, with, alas, just as little help forthcoming (guess this isn't a hot-button topic):
    http://stackoverflow.com/questions/14259393/flex-actionscript3-filestream-writeobject-fails-silently-in-ios-what-am-i-doin
    * I previously suspected that it was only failing from within iOS on an iPad, but the first example (the stripped down version) made it evident that it didn't work in the AIR mobile device simulator (iPad) in the Windows environment, and indeed, didn't work in a non-mobile project in the windows environment AIR launcher.
    I'm at a loss, upset, frustrated, in major trouble with my supervisor/deadlines, etc.
    I would very much appreciate any suggestions/help/confirmation/etc.
    Just to move ahead with development I've opted for a far less preferable solution of writing out both an XML file and a JPG file.  I very much do not like this and very much want to store encapsulated serialized objects locally in the same way I assume will work for storing remotely with AMFPHP (if the project ever gets to that point *sigh*).
    Again.  Would be so grateful for any help.

    I want to add to this post, as I marked it as "The Answer" though it does not directly contain the answer, for those who come looking for similar solutions.
    harUI prompted me to realize that my metadata term needed to be capitalized (RemoteClass instead of remoteClass). As metadata tags may be user-defined, the compiler throws no errors (or warnings *grumble*).
    package vo
        import flash.display.BitmapData;
       // [remoteClass(alias="PTotmImageVO")] incorrect
       [RemoteClass(alias="PTotmImageVO")]
        public class PTotmImageVO

  • Access Denied error while reading from filestream

    Hi Everyone.
    I have an intranet application that stores files in SQL filestream.
    On my dev machine, everything works like a charm.
    I'm able to upload and store files into SQL filestream (AjaxUpload) and able to download them.
    On the live server, I'm able to upload files, delete them, but when I try to download the file from filestream, I get the following error:
    Access is denied
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: System.ComponentModel.Win32Exception: Access is denied
    Source Error:
    An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
    Stack Trace:
    [Win32Exception (0x80004005): Access is denied]
    System.Data.SqlTypes.SqlFileStream.OpenSqlFileStream(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) +1465594
    System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access, FileOptions options, Int64 allocationSize) +398
    System.Data.SqlTypes.SqlFileStream..ctor(String path, Byte[] transactionContext, FileAccess access) +27
    quotes_GetFileStream.quotes_GetFileStream_Load(Object sender, EventArgs e) +740
    System.Web.UI.Control.LoadRecursive() +71
    System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3048
    The application pool is set to integrated, V.4.0,  and I use a domain user for the identity authentication back to SQL.
    I gave that user DB_Owner rights to the SQL Database for that application.
    I even tried giving it all the SQL Server roles, though I still get the above error.
    When I change the Identity username to mine (I have Domain Admin rights), everything works flawlessly on the live server.
    What rights am I missing to give that user so he can read from SQL filestream properly?
    Here is the block of code that gets the file from filestream and pushes it to the browser; maybe I'm missing something here (though once I modify the user it works great).
    Dim RecId As Integer = -1
    If Not IsNothing(Request.QueryString("ID")) And IsNumeric(Request.QueryString("ID")) Then
        RecId = CType(Request.QueryString("ID"), Integer)
    End If
    Dim ConString As String = ConfigurationManager.ConnectionStrings("ConnectionString").ToString
    Using Con As SqlConnection = New SqlConnection(ConString)
        Con.Open()
        Dim txn As SqlTransaction = Con.BeginTransaction()
        Dim Sql As String = "SELECT FileData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() as TransactionContext, [FileName], [FileExtension] FROM [QAttach] where RecId = @RecId"
        Dim cmd As SqlCommand = New SqlCommand(Sql, Con, txn)
        cmd.Parameters.Add("@RecID", Data.SqlDbType.Int).Value = RecId
        Dim Rdr As SqlDataReader = cmd.ExecuteReader()
        While Rdr.Read()
            Dim FilePath As String = Rdr(0).ToString()
            Dim objContext As Byte() = DirectCast(Rdr(1), Byte())
            Dim fname As String = Rdr(2).ToString()
            Dim FileExtension As String = Rdr(3).ToString()
            Dim sfs As SqlFileStream = New SqlFileStream(FilePath, objContext, FileAccess.Read)
            Dim Buffer As Byte() = New Byte(CInt(sfs.Length) - 1) {}
            sfs.Read(Buffer, 0, Convert.ToInt32(Buffer.Length))
            Response.Buffer = True
            Response.Charset = ""
            Response.Cache.SetCacheability(HttpCacheability.NoCache)
            Response.ContentType = FileExtension
            Response.AddHeader("content-disposition", "attachment;filename=" & fname)
            Response.BinaryWrite(Buffer)
            Response.Flush()
            Response.End()
            sfs.Close()
        End While
    End Using
    Thanks.
    Oren

    @William Bosacker:
    Please accept our apologies for any mistreatment.  While there's certainly no legal recourse for posts on an open forum like this, we do take steps to try to keep the forums a pleasant and friendly place to visit, and toward this end, two other moderators
    have already cleaned the thread and have begun to address the abuse.
    Not to defend the manner in which it was addressed, I must still point out that necro posting to a thread (this thread is from late 2013) and proposing your post as an answer are both generally discouraged. Again, I cannot condone the way in which it was
    presented, but I can understand why the other contributors thought something should be said.
    The recommended way to contribute information like this would be to create a new Discussion thread with content something like:
    "I was experiencing XYZ issue and while searching for a resolution I found this thread [link to old thread].  But after trying A, B, and C, I found that the following actually resolved my problem [code snippet].  I thought this might be helpful
    for anyone else with this issue... yada yada"
    In the case of this original thread, the issue was most certainly permission related.  While the underlying network permissions would certainly need to allow that user to access the server, the root problem may well have been within the SQL table permissions
    themselves.  The OP of the original thread really didn't provide enough context to know if they had the internal database permissions set correctly.
    The information you've provided essentially shows one way to set the table permissions, but it isn't necessarily the only way. It's also possible that the issue could be resolved by modifying permission entries within the SQL manager rather than through
    a particular script file.  So while this information may indeed be helpful to someone in the future, it does not necessarily answer the question of this thread.  Only the OP has enough information to know if this can be applied to their situation;
    and since the thread is several years old and was originally closed by a moderator, there is very little chance that the OP will be back to respond.
    Hopefully this clears the air a little and will allow us all to get back to trying to help the VB development community within the guidelines of the forum.
    Reed Kimble - "When you do things right, people won't be sure you've done anything at all"

  • Error creating a file in filestream folder

    So, we have a filestream table that we have been using and copying a significant number of image files into over the past month (about 6 million). So far, the copying has been going well, but we have run into a problem for which I cannot find an explanation or cure.
    When I try to create a new folder, I am getting the following message:
    An unexpected error is keeping you from creating the folder.  If you continue to receive this error, you can use the error code to search for help with this problem. 
    Error 0x8007013D: The system cannot find message text for message number 0x%1 in the message file for %2
    Any thoughts?

    Hi Mark Anthony Erwin,
    Usually, to create a new folder to store FILESTREAM data, you need to enable the xp_cmdshell feature on SQL Server, then create a FILESTREAM-enabled database and a table with FILESTREAM columns to store the data. Once the FILESTREAM table is created successfully, we can insert other files into the FILESTREAM table via the OPENROWSET function. For more information, see:
    http://www.mssqltips.com/sqlservertip/1850/using-insert-update-and-delete-to-manage-sql-server-filestream-data/.
    According to your description, since you want to create a new folder in the FILESTREAM folder, you should check whether xp_cmdshell is enabled and reconfigure it if necessary.
    For existing databases, you can use the ALTER DATABASE statement to add a FILESTREAM filegroup.
    ALTER DATABASE [FileStreamDataBase]
    ADD FILE (NAME = N'FileStreamDB_FSData2', FILENAME = N'C:\Filestream\FileStreamData2')
    TO FILEGROUP FileStreamGroup
    GO
    Regards,
    Sofiya Li
    TechNet Community Support

  • How to restore a filestream enabled and rbs configured web application

    Hi,
    In a SharePoint farm I took a backup of a filestream-enabled and RBS-configured web application from Central Admin.
    How do I restore it in a new SharePoint farm on a different server?
    adil

    Hi, I've seen this error message in the sprestore file in the restore folder.
    I am trying to restore the backup in SQL Server 2008 R2,
    and the backup was taken by a SharePoint farm which has a back-end SQL Server 2010 Datacenter database.
    IF EXISTS ( SELECT * FROM master..sysdatabases WHERE has_dbaccess(name)=1 AND
    name=@db_name )
    BEGIN
    SELECT 1 as ErrorCode
    END
    ELSE
    BEGIN RESTORE DATABASE [WSS_Content_Prod] FROM
    DISK=@db_location WITH STATS=5, FILE=1, MOVE @db_OldName TO @db_NewFile, MOVE @db_OldLogName TO @db_NewLogFile, MOVE @fsfg_old0 TO @fsfg_new0, NOREWIND, NOUNLOAD, RESTART, RECOVERY
    END
    @db_location=C:\BackUpandRestore from 117\ALPROD\spbr0000\0000016A.bak, @fsfg_old0=RBSFilestreamFile, @fsfg_new0=c:/rbs, @db_OldName=WSS_Content_Prod, @db_NewFile=C:\Program Files\Microsoft
    SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\WSS_Content_Prod.mdf, @db_OldLogName=WSS_Content_Prod_log, @db_NewLogFile=c:/Prodlogs\WSS_Content_Prod_log.ldf, @db_name=WSS_Content_Prod
    [3/24/2014 9:44:08 PM] Verbose: [WSS_Content_Prod] SQL command timeout is set to 1.00 hours.
    [3/24/2014 9:44:08 PM] FatalError: Object WSS_Content_Prod failed in event OnRestore. For more information, see the spbackup.log or sprestore.log file located in the backup directory.
    SqlException: The operation did not proceed far enough to allow RESTART. Reissue the statement without the RESTART qualifier.
    RESTORE DATABASE is terminating abnormally.
    [3/24/2014 9:44:08 PM] Debug: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
    at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
    at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
    at System.Data.SqlClient.SqlDataReader.ConsumeMetaData()
    at System.Data.SqlClient.SqlDataReader.get_MetaData()
    at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
    at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
    at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method)
    at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior, String method)
    at System.Data.SqlClient.SqlCommand.ExecuteReader(CommandBehavior behavior)
    at Microsoft.SharePoint.Administration.Backup.SPSqlBackupRestoreHelper.RunCommand(SqlCommand sqlCommand, SPBackupRestoreInformation args, Boolean throwOnRestart, Boolean& restart, SPSqlBackupRestoreConnection connection)
    at Microsoft.SharePoint.Administration.Backup.SPSqlBackupRestoreHelper.Run(SPBackupRestoreInformation args)
    at Microsoft.SharePoint.Administration.Backup.SPSqlRestoreHelper.Run(SPBackupRestoreInformation args)
    at Microsoft.SharePoint.Administration.SPDatabase.OnRestore(Object sender, SPRestoreInformation info)
    at Microsoft.SharePoint.Administration.SPContentDatabase.OnRestore(Object sender, SPRestoreInformation info)
    [3/24/2014 9:44:08 PM] Verbose: Starting OnPostRestore event.
    [3/24/2014 9:44:08 PM] Verbose: Starting object: SharePoint - 85.
    [3/24/2014 9:44:08 PM] Progress: [SharePoint - 85] 100 percent complete.
    [3/24/2014 9:44:08 PM] Verbose: [SharePoint - 85] Restoring features and its activation properties.
    adil

  • Convert CS4 Flash 10 as projector file to AIR For fileStream Writing Capabilities Not Working

    I have a simple kiosk .exe that works fine as a projector/.exe on the PC platform.
    I wanted to add stats output to a file recording when the FAQ buttons are pressed.
    The fileStream class is available when I change publish settings to AIR, and it works great when I test the movie within Flash, but
    when I publish and click the .exe I don't see anything.
    Any info would be appreciated.

    Here's a general dive on RTMP. You can see the packet structure contains metadata in the header about the content that will follow. The NetStream link I put above talks about it as well when using appendBytes(). The byte(array) parser understands FLV files with a header. After the header is parsed (again, see RTMP and what the server sends (0x12) before invoking a play), it expects all future calls to appendBytes() to be a continuation of that file (or stream). In other words, the first call to appendBytes() is missing the header.
    This is where I rely on FMS for the most part. I know it knows to send a header upon connection. If you're implementing your own streaming setup you're going to need to supply this just like an FMS RTMP connection would (in AMF). I haven't needed to do anything this custom, but if you get the structure of an initial RTMP connection (sniff it with Fiddler2 or find a resource on the structure) and encode it in a byteArray, then send it first to appendBytes with the correct information about the video you're sending, it should work.
    Otherwise it makes sense appendBytes is failing by just getting a chunk of binary without having a header (the instructions on what exactly the binary is and how to use it).
    Strictly per the documentation, On2/VP6 should be fine, I use it for all my old f4v projects. I'm not sure about MPGA/V.

  • Permission to FileStream Directory on MSDN question

    On technet you have listed - http://technet.microsoft.com/en-us/library/bb933993(v=sql.105).aspx
    Only the account under which the SQL Server service account runs is granted NTFS permissions to the FILESTREAM container. We
    recommend that no other account be granted permissions on the data container.
    Why is this the case? What if you want to allow your IIS app pool account access to read these files? We are using a PDF API that, when trying to stream, takes two minutes or more to generate the PDF file; however, if we can read from the directory directly it takes milliseconds. Can you provide more evidence on why the app pool identity cannot access this directory? Again, why the recommendation?
    In MSDN you contradict yourself on how to use IO to Read/Write to the file tables - http://msdn.microsoft.com/en-us/library/gg492089.aspx#accessing
    Moojjoo MCP, MCTS
    MCP Virtual Business Card
    http://moojjoo.blogspot.com

    Tibor, I am writing a custom application for the WEB
    Where WebConfigurationManager.AppSettings["WebDocuments"] = The file stream directory
    INSERTs
    public void UploadFiles(List<UploadFileModel> uploadedFile)
    {
        string path = WebConfigurationManager.AppSettings["WebDocuments"];
        foreach (UploadFileModel file in uploadedFile)
        {
            if (file == null) continue;
            if (file.FileName == null) continue;
            if (file.File.ContentLength == 0) continue;
            string savedFileName = Path.Combine(
                path,
                Path.GetFileName(file.FileName));
            file.File.SaveAs(savedFileName);
        }
    }
    DELETEs
    public static void DeleteFilesByWebSiteId(int webSiteId)
    {
        string path = WebConfigurationManager.AppSettings["WebDocuments"];
        //string path = @"C:\_Temp\"; Used with Upload
        string strWebSiteId = webSiteId.ToString();
        string filesToDelete = strWebSiteId + "*";
        string[] fileList = Directory.GetFiles(path, filesToDelete);
        if (fileList.Length > 0)
        {
            foreach (string file in fileList)
            {
                System.IO.File.Delete(file);
            }
        }
    }
    Again this would require the app pool identity.  Is this a security problem and why?  It would only require read/write capability.
    Moojjoo MCP, MCTS
    MCP Virtual Business Card
    http://moojjoo.blogspot.com

  • FileStream problem with AIR deployment

    I use FileStream to load a local XML file. When I test the desktop application directly from Flex everything is OK, but when I install it with AIR, the application works but the file is not loaded. Here is the code I use:
    import flash.filesystem.*;
     var applicationDirectoryPath:File = File.applicationDirectory; 
    var nativePathToApplicationDirectory:String = applicationDirectoryPath.nativePath.toString(); 
    nativePathToApplicationDirectory += "/config_poste_temps.xml";
    var file:File = new File(nativePathToApplicationDirectory); 
    var fs:FileStream = new FileStream(); 
    fs.open(file, FileMode.READ);
    var config:XML = XML(fs.readUTFBytes(fs.bytesAvailable)); 
    fs.close();
    plan.text = config.division;
    Regards

    I am having the same problem as simon_lucas although my config is slightly different:
    FlashBuilder 4 with SDK 4.6, Current app is with SDK 2.5.
    Trying to update with SDK 3.2, I double-checked per Horia Olaru that I do use the application updater swc:
             <path-element>D:\Program Files\Adobe\Adobe Flash Builder 4.6\sdks\4.6.0 - Air 3.2\frameworks\libs\air\applicationupdater.swc</path-element>
             <path-element>D:\Program Files\Adobe\Adobe Flash Builder 4.6\sdks\4.6.0 - Air 3.2\frameworks\libs\air\applicationupdater_ui.swc</path-element>
    But I still have the 16815 error.
    If I change the update.xml to be <update xmlns="http://ns.adobe.com/air/framework/update/description/2.5">
    instead of <update xmlns="http://ns.adobe.com/air/framework/update/description/3.2">, I am prompted to download the new app, and then I get error 16824.
    Please HELP!

  • When to use Filestream partitions?

    We have a Web site where we do a lot of document management. We currently have a table with 370,000 records in it. When uploading a new file we check its size, and if it is below 2 GB we store it in a VarChar blob column. We currently want to alter that
    table and add a Filestream column and transfer the data as shown below. As you can see, we are only creating one file folder, and the query will probably run for six hours or so.
    We are also thinking about adding up to 5 million audio files stored in a different area. We could conceivably end up with several terabytes of file data. Should we partition and if so how many files should we store in each partition? We are using SQL Server
    2012 and Windows Server 2012 R2.
    --Create a ROWGUID column
    USE CUR
    ALTER Table documents
    Add DocGUID uniqueidentifier not null ROWGUIDCOL unique default newid()
    GO
    --Turn on FILESTREAM
    USE CUR
    ALTER Table documents
    SET (filestream_on=FileStreamGroup1)
    GO
    --Add FILESTREAM column to the table
    USE CUR
    ALTER Table documents
    Add DocContent2 varbinary(max) FILESTREAM null
    GO
    -- Move data into the new column
    UPDATE documents
    SET DocContent2=DocContent
    where doccontent is not null and  doccontent2 is null  
    GO
    --Drop the old column
    ALTER Table documents
    DROP column DocContent
    GO
    --Rename the new FILESTREAM column to the old column name
    Use CUR
    GO
     EXEC sp_rename 'documents.DocContent2', 'DocContent', 'COLUMN'
    GO

    Hi tomheaser,
    Quote: Should we partition and if so how many files should we store in each partition?
    Yes, if your database contains very large tables, you may benefit from partitioning those tables onto separate filegroups. SQL Server can then access the drives of each partition at the same time, which can greatly reduce the time needed to load data.
    If you only want to reduce query time by increasing the number of filegroups, note that the limit on the maximum number of partitions in SQL Server is 15,000. But in order to maintain a balance between performance and the number of partitions, you need to consider more things such as memory, partitioned index operations, DBCC commands, and queries. So please consider all those things first, then choose a reasonable number of partitions. For more information about the performance guidelines for table partitioning, please refer to the following article:
    http://msdn.microsoft.com/en-us/library/ms190787(v=sql.110).aspx
    If you have any questions, please feel free to let us know.
    Regards,
    Jerry Li
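    As an illustration only (the filegroup names, boundary dates, and column name below are hypothetical, not taken from this thread), spreading a table such as documents across filegroups might be sketched like this:

```sql
-- Assumes filegroups FG2013, FG2014 and FG2015 (and their data files)
-- have already been added to the database.
-- Partition by an upload-date column so each date range lands on its
-- own filegroup/volume, which SQL Server can read in parallel.
CREATE PARTITION FUNCTION pfDocsByYear (datetime2)
AS RANGE RIGHT FOR VALUES ('2014-01-01', '2015-01-01');
GO
CREATE PARTITION SCHEME psDocsByYear
AS PARTITION pfDocsByYear TO (FG2013, FG2014, FG2015);
GO
-- Tables and indexes created ON psDocsByYear(UploadDate) are then
-- split across the three filegroups.
```

    Note that the FILESTREAM data itself is placed via a separate FILESTREAM_ON clause pointing at a partition scheme built on FILESTREAM filegroups; see the article above for the details.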

  • Filestream Creation Unable to Open Physical File Operating System Error 259

    Hey Everybody,
    I have run out of options supporting a customer that is having an error when creating a database with a file stream.  The error displayed is unable to open physical file operating system error 259 (No more data is available).  We're using a pretty
    standard creation SQL script that we aren't having issues with other customers:
    -- We are going to create our data paths for the filestreams.  
    DECLARE @data_path nvarchar(256);
    SET @data_path = (SELECT SUBSTRING(physical_name, 1, CHARINDEX(N'master.mdf', LOWER(physical_name)) - 1)
                      FROM master.sys.master_files
                      WHERE database_id = 1 AND file_id = 1);
    -- At this point, we should be able to create our database.  
    EXECUTE ('CREATE DATABASE AllTables
    ON PRIMARY
        ( NAME = AllTables_data
        ,FILENAME = ''' + @data_path + 'AllTables_data.mdf''
        ,SIZE = 10MB
        ,FILEGROWTH = 15% ),
    FILEGROUP FileStreamAll CONTAINS FILESTREAM DEFAULT
        ( NAME = FSAllTables
        ,FILENAME = ''' + @data_path + 'AllTablesFS'' )
    LOG ON
        ( NAME = AllTables_log
        ,FILENAME = ''' + @data_path + 'AllTables_log.ldf''
        ,SIZE = 5MB
        ,FILEGROWTH = 5MB )');
    GO
    We are using SQL Server 2014 Express.  FILESTREAM was enabled during the SQL Server installation.  The instance was created successfully and we are able to connect to the database through SSMS.  The user's drive is encrypted with Sophos.
    We have tried the following:
    1. Increasing the permissions of the SQL Server server to have full access to the folders.
    2. Attempted a restore of a blank database and it failed.
    There doesn't seem to be any knowledge base articles on this particular error and I am not sure what else I can do to resolve this.  Thanks in advance for any help!

    Hi Ryan,
    1) SQL Server (any version) can't be installed on encrypted drives. Please see a similar scenario in the following link:
    https://ask.sqlservercentral.com/questions/115761/filestream-and-encrypted-drives.html
    2) I don't think there is any problem with permissions on the folder if the user can create a database in the same folder, though I am not completely sure. Also see the article by Jacob on configuring FILESTREAM for SQL Server, which describes how to configure the FILESTREAM access level and create a FILESTREAM-enabled database.
    Hope this helps,
    Thanks
    Bhanu 
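    One more quick check that may help (a generic diagnostic, not specific to this customer's setup): verify the effective FILESTREAM access level on the instance before creating the database:

```sql
-- 0 = disabled, 1 = T-SQL access only, 2 = T-SQL and Win32 streaming access
SELECT SERVERPROPERTY('FilestreamEffectiveLevel') AS EffectiveLevel;

-- If needed, raise the configured level (level 2 also requires FILESTREAM
-- to be enabled in SQL Server Configuration Manager for the instance).
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
```

    If the effective level is 0, the CREATE DATABASE with a FILESTREAM filegroup will fail regardless of folder permissions.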

  • Can we setup FILESTREAM on Failover Cluster

    I saw following point on Technet article about RBS.
    The local FILESTREAM provider is supported only when it is used on local hard disk drives or an attached Internet Small Computer System Interface (iSCSI) device. You cannot use the local RBS FILESTREAM provider on remote storage devices such as network attached storage (NAS).
    It looks like we cannot use FILESTREAM on a Failover Cluster, because to set up a Failover Cluster we need to have a NAS. But then the NAS is made available locally to the Failover Cluster, so FILESTREAM should work, right?
    Found another article which talks about setting up FILESTREAM on Failover Cluster so I am a bit confused.
    https://msdn.microsoft.com/en-us/library/cc645886.aspx

    Hi Frank,
    As the other post says, we can set up FILESTREAM on a Failover Cluster.
    However, FILESTREAM can't live on a network attached storage (NAS) device unless the NAS device is presented as a local NTFS volume via iSCSI. With iSCSI, it is supported by the Microsoft FILESTREAM provider.
    Reference:
    Description of support for network database files in SQL Server
    Programming with FileStreams in SQL Server 2008
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • IOS 8 - FileStream throwing error 2038 on open for write?

    Hey all,
    Going through iOS 8 compatibility checks with our Adobe AIR app (tested with AIR 13 and AIR 14), I'm noticing changes to file storage.
    In short, my code has always been as follows for simply storing a player profile file (matching iOS documentation as far as I know: File System Programming Guide: File System Basics). And this has worked well to prevent purges when the device is low on storage space, as well as keeping the data there when updating the app.
    This code only seems to work for iOS 4 to iOS 7:
     var storagePath:File = new File(File.applicationDirectory.nativePath + "/../Documents");
     try
     {
          var targetFile:File = storagePath.resolvePath("profile.bin");
          var stream:FileStream = new FileStream();
          stream.open(targetFile, FileMode.WRITE);
          stream.writeBytes(<byteArray here>, 0, 0);
          stream.close();
     }
     catch (err:Error)
     {
          <informs user something went wrong, retries, etc. basic error handling>
     }
    Running this on iOS 8 will always throw a SecurityError (#2038) from stream.open.
    Now, we can still save data and fix this by replacing the first line by:
    var storagePath:File = new File(File.applicationStorageDirectory.nativePath);
    But, this leaves me with a few things, in order of descending importance:
    1) Reading something like this makes me scared as our game has a large amount of daily players: "I’m using applicationStorageDirectory to store files. The problem is those files get deleted when the user updates his app…" (AIR App Compliance with Apple Data Storage Guidelines, last comment)
    2) What has changed in the iOS 8 file system that suddenly makes my original code fail? Apple developer documentation is still outlining this should be valid. Is this a possible AIR bug?
    3) I assume I need to set "don't backup" flags on the files when saving to the appStorageDir?
    4) Is anyone else running into this?
    Thanks in advance!

    Thanks for your quick reply!
    I agree about not traversing up the directory tree, but a blog post from an Adobe employee I read a long time ago put me on that track: Saumitra Bhave: AIR iOS- Solving [Apps must follow the iOS Data Storage Guidelines]
    Anyway, I ran some tests including your suggested solution and it returns an interesting result:
    #1 File.documentsDirectory (iOS 8)
    Full path = /var/mobile/Containers/Data/Application/<UUID>/Documents
    Result: works as expected, no errors thrown!
    #2 new File(File.applicationDirectory.nativePath + "/../Documents")  (iOS 8)
    Full path = /private/var/mobile/Containers/Bundle/Application/<UUID>/Documents
    Result: error, no write permission! (as I would expect with 'private' being there)
    #3 File.documentsDirectory (iOS 7)
    Full path = /var/mobile/Applications/<UUID>/Documents
    Result: works as expected!
    #4 new File(File.applicationDirectory.nativePath + "/../Documents")  (iOS 7)
    Full path = /var/mobile/Applications/<UUID>/Documents
    Result: works as expected! (notice it's exactly the same as #3)
    So, while the storage directory is easily adjustable and #1 should fit the bill nicely, I'm thinking of how to preserve user data when people begin updating from iOS 7 to iOS 8, as it will be kind of hard for me to locate my earlier data on iOS 8 unless part of the update process is to also relocate all application data. I mean, even if I had used File.documentsDirectory before, this would still be a potential problem? In any case, it's obvious the iOS 8 file system is different.
    How is this going to work?
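    For what it's worth, here is a sketch of one possible migration (untested on all OS versions; the old path and the profile.bin file name are assumptions carried over from the code earlier in this thread): copy the old profile into the application storage directory on first launch, and use File.preventBackup (available since AIR 3.1 on iOS) so it is excluded from backups:

```actionscript
import flash.filesystem.File;

// Hypothetical one-time migration, run at app startup.
var newFile:File = File.applicationStorageDirectory.resolvePath("profile.bin");
if (!newFile.exists)
{
    // Old pre-iOS-8 location used by the original code.
    var oldFile:File = new File(File.applicationDirectory.nativePath + "/../Documents/profile.bin");
    if (oldFile.exists)
    {
        oldFile.copyTo(newFile, true);
    }
}
// Exclude the profile from device backups per the iOS Data Storage Guidelines.
if (newFile.exists)
{
    newFile.preventBackup = true;
}
```

    After this runs once, all reads and writes can go through File.applicationStorageDirectory only.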

  • Filestream.write for .jpgs

    Hi,
    i am working on an AIR project. The user can load pictures from his computer into the AIR project. What I want is to save these pictures automatically to a certain folder. I need that because the next time the user runs the program the picture shall be shown (so I have to have it in a certain folder).
    I just learned how to create a new document in a certain Folder:
    import flash.filesystem.FileStream;
    import flash.events.Event;
    import flash.filesystem.File;
    import flash.errors.IOError;
    var fileStream:FileStream;
    fileStream = new FileStream();
    fileStream.addEventListener(Event.CLOSE, fileCloseHandler);
    fileStream.addEventListener(IOErrorEvent.IO_ERROR, fileIOError);
    var writeFile:File;
    writeFile = File.documentsDirectory.resolvePath('AIR/test.txt');
    fileStream.openAsync(writeFile, FileMode.WRITE);
    fileStream.writeUTFBytes('Hello World');
    fileStream.close();
    function fileIOError(event:IOErrorEvent):void
    {
         trace("Sorry did not work");
    }
    function fileCloseHandler(event:Event):void
    {
         trace("DONE");
    }
    So far so good :-)
    All the examples I found create a (.txt) text file where the content was set by:
    fileStream.writeUTFBytes("Hello World");
    Now I have 2 problems:
    1) as far as I understand, I need to change from "writeUTFBytes" for text files to something like "writeObject" or "writeBytes" to stream .jpg data - right???
    2) the bigger problem - I do not know how to make my graphic's data the content instead of the "Hello World"
    What I have now does work in parts - the first part loads the picture "pic.jpg", and the second part is a version of the code above that worked well for a .txt.
    What does not work is that I do not get the content of "graphic" into the file to be streamed...
    Here is what I have:
    var graphic:Loader = new Loader();
    var url:URLRequest = new URLRequest("pic.jpg");
    graphic.load(url);
    graphic.contentLoaderInfo.addEventListener(Event.COMPLETE, done);
    addChild(graphic);
    function done(evt:Event)
    {
         trace("loaded");
    }
    import flash.filesystem.FileStream;
    import flash.filesystem.File;
    import flash.events.Event;
    import flash.filesystem.File;
    import flash.errors.IOError;
    var savethis:File = new File();
    savethis = graphic.data;
    var fileStream:FileStream;
    fileStream = new FileStream();
    fileStream.addEventListener(Event.CLOSE, fileCloseHandler);
    fileStream.addEventListener(IOErrorEvent.IO_ERROR, fileIOError);
    var writeFile:File;
    writeFile = File.documentsDirectory.resolvePath('AIR/pic.jpg');
    fileStream.openAsync(writeFile, FileMode.WRITE);
    fileStream.writeObject(savethis);
    fileStream.close();
    function fileIOError(event:IOErrorEvent):void
    {
         trace("Did not work");
    }
    function fileCloseHandler(event:Event):void
    {
         trace("good work");
    }
    Hope you can help me with this!!!
    Try to keep your answer simple - I am not an every-day flasher (looks like I am getting there :-))
    Jan
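    A possible approach for later readers (a sketch with assumptions: it loads the raw jpg bytes with URLLoader in BINARY mode instead of trying to read them out of the Loader display object, and it reuses the file names from the question):

```actionscript
import flash.net.URLLoader;
import flash.net.URLLoaderDataFormat;
import flash.net.URLRequest;
import flash.events.Event;
import flash.utils.ByteArray;
import flash.filesystem.File;
import flash.filesystem.FileStream;
import flash.filesystem.FileMode;

// Load the raw jpg bytes rather than a decoded display object,
// so they can be written back to disk unchanged.
var loader:URLLoader = new URLLoader();
loader.dataFormat = URLLoaderDataFormat.BINARY;
loader.addEventListener(Event.COMPLETE, saveIt);
loader.load(new URLRequest("pic.jpg"));

function saveIt(evt:Event):void
{
    var bytes:ByteArray = loader.data as ByteArray;
    var writeFile:File = File.documentsDirectory.resolvePath("AIR/pic.jpg");
    var fileStream:FileStream = new FileStream();
    fileStream.open(writeFile, FileMode.WRITE);
    // writeBytes, not writeObject: writeObject would serialize in AMF
    // format instead of producing a valid .jpg file.
    fileStream.writeBytes(bytes, 0, bytes.length);
    fileStream.close();
    trace("saved");
}
```

    If the picture is already on screen in a Loader, an alternative is to re-request the same URL this way just for saving.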

    I'm having the exact same issue.  I purchased Aperture recently just so I could edit the metadata on lots of old jpg pictures and scans.  Until I upgraded to Lion, it was working well.  Now with Lion, I can still edit the metadata, but I cannot write the metadata back to the master.   I just spent the last two days filling out the metadata info (primarily Title and Caption), figuring to save to master once I got done with the batch of photos I'm currently working on.  I don't want to lose my work. 
    I'm wondering if I can accomplish the same thing by exporting my updated metadata versions and reimporting them.  Might that work?  Will I lose/gain anything in the process? 

  • Splitting 1GB Files // Problem with FileStream class

    Hi, in my AIR (2 beta) app I'm splitting large files to upload them in smaller chunks.
    Everything works fine until I choose files larger than 1 GB.
    This might be the Problem:
    var newFile:File = File.desktopDirectory.resolvePath(filename);
    trace(newFile.size);
    // 8632723886 (about 8 GB, the correct file size)
    BUT if I use the FileStream class instead:
    var stream:FileStream = new FileStream();
    stream.open(new File(filename), FileMode.READ);
    trace(stream.bytesAvailable);
    // 42789294 ("wrong" file size?)
    If I run the same code with files smaller than 1 GB, stream.bytesAvailable returns the same result as newFile.size.
    Is there a limitation in the FileStream class or is my code wrong?
    Thanks!
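    A side note on the numbers above: 8632723886 modulo 2^32 is exactly 42789294, which suggests the size is being truncated to 32 bits somewhere in the synchronous path. A quick sanity check of that arithmetic:

```actionscript
var realSize:Number = 8632723886;           // size reported by File.size
var wrapped:Number = realSize % 4294967296; // 2^32
trace(wrapped);                             // 42789294, the "wrong" bytesAvailable value
```

    This is consistent with the behavior only appearing for files over the 32-bit boundary.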

    Use the asynchronous file handling method, i.e. fileStream.openAsync(file, FileMode.READ). Here is the implementation:
    private var fileCounter:int = 0;
    private var bytesLoaded:int = 0;
    private var filePath:String = "D:\\folder\\";
    private var fileName:String = "huge_file";               
    private var fileExtension:String = ".mkv";
    private var file:File = new File(filePath+fileName+fileExtension);
    //split size = 1 GB
    private var splitSize:int = 1024*1024*1024;
    private var fs:FileStream = new FileStream();
    private var newfs:FileStream = new FileStream();
    private var byteArray:ByteArray = new ByteArray();
     private function init():void{
          fs.addEventListener(Event.COMPLETE,onFsComplete);
          fs.addEventListener(ProgressEvent.PROGRESS,onFsProgress);
          newfs.open(new File(filePath+fileName+fileCounter+fileExtension),FileMode.WRITE);
          fs.openAsync(new File(filePath+fileName+fileExtension),FileMode.READ);
     }
     private function onFsComplete(e:Event=null):void{
          fs.readBytes(byteArray,0,fs.bytesAvailable);
          newfs.writeBytes(byteArray,0,Math.min(splitSize-bytesLoaded,fs.bytesAvailable));
          for(var i:int = 0; i < byteArray.length; i+=splitSize){
               newfs.close();
               newfs.open(new File(filePath+fileName+fileCounter+fileExtension),FileMode.WRITE);
               newfs.writeBytes(byteArray,i,Math.min(splitSize,byteArray.length-i));
               fileCounter++;
               trace("Part " + fileCounter + " Complete");
          }
     }
     private function onFsProgress(e:ProgressEvent):void{
          if((bytesLoaded+fs.bytesAvailable)==file.size){
               onFsComplete();
          }
          else if((bytesLoaded + fs.bytesAvailable)>=splitSize){
               fs.readBytes(byteArray,0,splitSize-bytesLoaded);
               newfs.writeBytes(byteArray,0,byteArray.length);
               newfs.close();
               bytesLoaded = fs.bytesAvailable;
               fs.readBytes(byteArray,0,bytesLoaded);
               fileCounter++;
               newfs.open(new File(filePath+fileName+fileCounter+fileExtension),FileMode.WRITE);
               newfs.writeBytes(byteArray,0,byteArray.length);
               byteArray.clear();
               trace("Part " + fileCounter + " Complete");
          }
          else{
               bytesLoaded+=fs.bytesAvailable;
               fs.readBytes(byteArray,0,fs.bytesAvailable);
               newfs.writeBytes(byteArray,0,byteArray.length);
               byteArray.clear();
          }
     }
    cheers!

  • Memory leak in fileStream.readMultiByte?

    Hi everybody,
    after a long session of bug hunting my iPad application because of a memory leak, I think I found a memory bug in the FileStream class.
    I am using the FileStream class to load xml and css files in my application for initial data etc.
    I parsed the fileStream using readMultiByte() into a string, but there seems to be a small (<1 KB) memory leak using this method.
    After switching to fileStream.readUTFBytes() the memory leak seems to be gone.
    Can someone confirm this for me, so that we can submit it to the Adobe bug database.
    Greetings,
    Kriz

    Hi Hank,
    how are you using the FileStream to open your files? If you use fileStream.open, your application will stop everything and wait for the file to be completely loaded before continuing; instead you can use fileStream.openAsync to open an asynchronous connection, and use listeners on the fileStream to execute code on completion.
    For your next question, try building your own tweens using Event.ENTER_FRAME and frame counters instead of a tween engine like TweenLite (tween engines have a lot of handles that are still in use, even if you are not using them). Also try to use Bitmaps, or cacheAsBitmap items for GPU rendering. There are a lot of threads in this forum about this question, and the method used really depends on the type of animation.
    Hope that answers your questions,
    Kriz
