How to truncate a memory mapped file

If you map a region larger than the current file, the file grows to the mapped size, so the size parameter passed to the map() method of FileChannel has to be calculated carefully. But what if you can't determine the size of the file beforehand?
I tried to use truncate(), but that throws an exception: the file can't be truncated while a user-mapped section is open.
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import junit.framework.TestCase;

public class MapFileSizeTest extends TestCase {
  public void testMapFileSize() throws Exception {
    final File file = new File("testMapFileSize.data");
    FileChannel configChannel = new RandomAccessFile(file, "rw").getChannel();
    // mapping this region grows the file to the mapped size
    // (it showed up as a 2,097,152 KB file on my system)
    MappedByteBuffer configBuffer = configChannel.map(FileChannel.MapMode.READ_WRITE,
        0, 1000000000);
    configBuffer.flip();
    configBuffer.force();
    // truncate() can't be used on a file with a user-mapped section open:
    // configChannel.truncate(configBuffer.limit());
    configChannel.close();
  }
}

Could somebody please give some suggestions? Thank you very much.

The region (position/size) that you pass to the map method should be contained within the file. The spec includes this statement: "The behavior of this method when the requested region is not completely contained within this channel's file is unspecified." In the Sun implementation we attempt to extend the file if the requested region is not completely contained, but this is not required by the specification. Once you map a region you should not attempt to truncate the file, as it can lead to unspecified exceptions (see the MappedByteBuffer specification). Windows prevents it; other platforms allow it but cause accesses to the now-inaccessible region to SIGSEGV (which must be handled and converted into a runtime error).
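A common best-effort workaround, since Java offers no supported explicit unmap: write through the mapping while tracking how many bytes you actually used, drop every reference to the buffer, hint the garbage collector, and only then truncate through a freshly opened channel. The sketch below is only illustrative; the file name, the 1 GB upper bound and the short sleep are assumptions, and the final truncate can still fail (notably on Windows) if the mapping has not really been released yet.

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class TruncateAfterUnmap {
    public static void main(String[] args) throws Exception {
        File file = new File("testMapFileSize.data");

        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        FileChannel channel = raf.getChannel();
        // Map a generous upper bound; the file immediately grows to this size.
        MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1000000000);
        buffer.put("some data".getBytes());      // write whatever you actually have
        buffer.force();                          // flush the dirty pages to disk
        int dataSize = buffer.position();        // remember how much was really written

        // Java has no supported explicit unmap, so releasing the mapping is
        // best-effort: drop every reference and hope the GC collects the buffer.
        buffer = null;
        channel.close();
        raf.close();
        System.gc();
        Thread.sleep(100);

        // Reopen and truncate to the real data size. On Windows this can still
        // fail if the mapping has not actually been released yet.
        RandomAccessFile raf2 = new RandomAccessFile(file, "rw");
        raf2.getChannel().truncate(dataSize);
        raf2.close();
    }
}

Tracking the write position yourself (buffer.position() here) is what stands in for knowing the final file size up front.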

Similar Messages

  • Nio ByteBuffer and memory-mapped file size limitation

    I have a question/issue regarding ByteBuffer and memory-mapped file size limitations. I recently started using NIO FileChannels and ByteBuffers to store and process buffers of binary data. Until now, the maximum individual ByteBuffer/memory-mapped file size I have needed to process was around 80MB.
    However, I now need to begin processing larger buffers of binary data from a new source. Initial testing with buffer sizes above 100MB results in an IOException (java.lang.OutOfMemoryError: Map failed).
    I am using 32-bit Windows XP; 2GB of memory (typically 1.3 to 1.5GB free); Java version 1.6.0_03; with -Xmx set to 1280m. Decreasing the Java maximum heap size to 768m does allow larger buffers to be memory-mapped, but never bigger than roughly 500MB. However, the application that uses this code contains other components that require the -Xmx option to be set to 1280.
    The following simple code segment executed by itself will produce the IOException for me when executed using -Xmx1280m. If I use -Xmx768m, I can increase the buffer size up to around 300MB, but never to a size that I would think I could map.
    try {
        String mapFile = "C:/temp/" + UUID.randomUUID().toString() + ".tmp";
        FileChannel rwChan = new RandomAccessFile(mapFile, "rw").getChannel();
        ByteBuffer byteBuffer = rwChan.map(FileChannel.MapMode.READ_WRITE,
                0, 100000000);
        rwChan.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
    I am hoping that someone can shed some light on the factors that affect the amount of data that may be memory mapped to/in a file at one time. I have investigated this for some time now and based on my understanding of how memory mapped files are supposed to work, I would think that I could map ByteBuffers to files larger than 500MB. I believe that address space plays a role, but I admittedly am no OS address space expert.
    Thanks in advance for any input.
    Regards- KJ

    See the workaround in http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038
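    Beyond the workaround in that bug report, one way to sidestep the 32-bit address-space limit is to avoid a single huge mapping and instead walk the file in smaller mapping windows. The following is only a sketch: the C:/temp/big.dat path and the 64 MB window size are made-up examples, but the pattern keeps each mapping small enough to fit comfortably into the available address space.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class ChunkedMapping {
        // Map a large file in 64 MB windows instead of one huge mapping, so each
        // mapping easily fits into the limited 32-bit address space.
        private static final long WINDOW = 64L * 1024 * 1024;

        public static void main(String[] args) throws Exception {
            RandomAccessFile raf = new RandomAccessFile("C:/temp/big.dat", "r");
            FileChannel channel = raf.getChannel();
            long fileSize = channel.size();

            for (long pos = 0; pos < fileSize; pos += WINDOW) {
                long len = Math.min(WINDOW, fileSize - pos);
                MappedByteBuffer window = channel.map(FileChannel.MapMode.READ_ONLY, pos, len);
                while (window.hasRemaining()) {
                    window.get();                 // process the bytes in this window
                }
                // The window becomes unreachable here; the next iteration maps a new one.
            }
            channel.close();
            raf.close();
        }
    }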

  • Memory-mapped file is possible?

    Hi everyone, I'm a new LabVIEW user and I want to start a new project that uses a memory-mapped file.
    I have working C# code to read the $gtr2$ MMF, where I simply use
    MemoryMappedFile.OpenExisting("$gtr2$")
    to get data from it.
    How is it possible to read this kind of file in LabVIEW? I can't find anything useful on the web.
    I'm using the LabVIEW 2013 Student Edition.
    Thanks to everyone who wants to answer my question.
    Have a nice day.

    Hi,
    I too have only done the CLAD…
    You have to look for .NET examples; you will find them here in the forum…
    And usually it helps to read the documentation for that MMF class to recreate your C# code in LabVIEW!
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • How can I generate a map file with LabVIEW?

    We wish to use a product which inserts code into our executable to prevent tampering with it by crackers. The program, however, takes the executable file as well as the map file (which is commonly generated by C++ compilers) and uses the map file to determine where in the exe the critical routines that need protection are located. I cannot, however, determine how to create such a map file for a LabVIEW-generated executable. Is there a special build option I need to invoke?

    Yes, I'm familiar with NI's licensing technology, having talked with someone (you, Dennis, I believe) about it before. The problem we have is that our software is sold to factories in China where there are no internet connections. We have a physical key 'dongle' which must be present in order for the executable to be willing to run. However, it appears that people are taking the executable which LabVIEW creates and editing it, probably by using a debugger and tracing to the code which checks for the dongle's presence and bypassing it. To my knowledge, NI's products don't do anything to prevent this, right?
    We found a company which sells a product that encrypts, checksums, etc. an executable file, but it needs to know the layout of functions in the exe in order to determine which areas to focus the obfuscation on. They were sort of matter-of-fact when they said it needs the exe and the map file, as if they expected any language which produced an exe could produce a map file.

  • CFS and memory mapped file

    I would like to know if it is possible to memory map (mmap) a file that is residing on a cluster file system (CFS or GFS).
    If I remember correctly, memory mapping a file residing on NFS has issues.
    Thanks,
    Harsh

    I'm using SC 3.1u4 on Solaris 9. I ran into a problem with memory-mapped files on CFS.
    I have multiple processes (on one cluster node) sharing a file that was mapped with the following call:
    mmap ((caddr_t)0,SOME_SIZE,(PROT_READ | PROT_WRITE), (MAP_SHARED | MAP_NORESERVE),fd,0);
    Issuing msync with MS_INVALIDATE as the third argument is ok. But when some other process tries to read the memory the node seems to hang.
    I can't examine the processes using pstack or truss as both of them get hung too. The only way out of this mess is to reboot the node.
    I can't imagine this problem hasn't been seen before. Is there a patch for it?

  • Error code 1450 - memory mapped file

    Hello,
    in my application I am using memory-mapped files. I have three of them; the maximum size of the biggest one is 5 MB. I store 64 waveforms from a DAQ card in it.
    The application runs fine, but sometimes an error occurs when I try to access the MMF: error code 1450, "insufficient system resources".
    Is a size of 5MB too big? Should I rather create one MMF for each waveform?

    Hi mitulatbati,
    which development tools are you actually using?
    Which platform, libraries and so on...?
    Can you post example code?
    Marco Brauner NIG 

  • Memory mapped files: are they still used?

    To system programmers:
    In some of my old code David used memory-mapped files to handle huge sets of random points. The code reads in the whole file and then sets flags, similar to an async process. The file mapping handles the memory instead of using mallocs; the data may be stored on the heap or in the global stack. I went back to Visual Studio 6 and tried to take the code out, since standard C++ handles a full file read as a char buffer through a void * structure. I found some valloc data types and then found the newer file-mapping routines in VS2013, plus an explanation of the global stack and heap.
    Are software developers still using file mapping, or are they using, say, vectors to form STL structures?
    Cheers
    John Keays

    Here is some typical code in the old C. This is close to the code I used in Visual Studio 6. I need to put this in VS2013 under C++ or C++11/14. I have guessed the file handle open and size code.
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { double x, y, z; } Point;   /* record layout assumed */

    /* Read the whole file into one malloc'd buffer; return its size in bytes. */
    int readAllFile(const char *name, void **addr) {
        FILE *fh = fopen(name, "rb");
        fseek(fh, 0, SEEK_END);
        int fsize = (int)ftell(fh);
        rewind(fh);
        *addr = malloc(fsize);
        fread(*addr, 1, fsize, fh);
        fclose(fh);
        return fsize;
    }

    int main(void) {
        Point *allPoints;
        int fsize = readAllFile("points.dat", (void **)&allPoints);  /* placeholder file name */
        int numRecords = fsize / sizeof(Point);
        for (int i = 0; i < numRecords; i++)
            printf("rec %d values x %.3f\n", i, allPoints[i].x);
        free(allPoints);
        return 0;
    }
    This is the boilerplate for the file reads. I have even tried this with text files, parsing the text records. Instead of the mallocs you suggest a vector, but the scheme of the code remains the same.
    For a lidar file the xyz records have grown from 10,000 points in the 1990s to 1,000,000 points in the mid 2000s. For a file of this size, 24 MB are allocated in one hit. The whole of the Gold Coast amounted to 110 million lidar points in 2003; it could be more.
    Where is the data stored with malloc, a vector, or a memory-mapped file? What is good and bad practice?
    Cheers
    john Keays

  • Memory mapped files

    Does anyone know if there is any way to use memory-mapped files in Java? If so, what are the calls that emulate the C++ calls CreateFileMapping(), MapViewOfFile(), and OpenFileMapping()?

    http://java.sun.com/j2se/1.4.1/docs/api/java/nio/MappedByteBuffer.html
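    That MappedByteBuffer page is the right starting point. Roughly, FileChannel.map() plays the role of CreateFileMapping plus MapViewOfFile; Java has no direct equivalent of OpenFileMapping for named shared-memory objects, so two processes usually share data by mapping the same file. A minimal sketch (the shared.dat name is just an example):

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class JavaFileMapping {
        public static void main(String[] args) throws Exception {
            // Roughly the counterpart of CreateFileMapping + MapViewOfFile:
            // map a region of a file directly into memory.
            RandomAccessFile raf = new RandomAccessFile("shared.dat", "rw");
            FileChannel channel = raf.getChannel();
            MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            buffer.putInt(0, 42);                 // writes go straight to the mapped pages
            System.out.println("read back: " + buffer.getInt(0));

            // There is no direct equivalent of OpenFileMapping for named shared
            // memory; processes typically share data by mapping the same file.
            channel.close();
            raf.close();
        }
    }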

  • How to add a new mapping file

    I am using SSIS (SQL Server 2008) to replicate from an ODBC source (using .NET Framework Data Provider for ODBC) to SQL Server.
    I am using the Import/Export Wizard and encountering the problem described in KB 2152728:
    SSIS 2008 Import Export Wizard can show numbers instead of data type
    http://support.microsoft.com/kb/2152728
    I am trying to add a new mapping file in C:\Program Files\Microsoft SQL Server\100\DTS\MappingFiles, but it appears that the new mapping file is not being noticed by the wizard - the dialog box of the wizard (Convert Types without Conversion Checking) still displays
    Source Information:
    Cannot locate the mapping file to map the provider types to SSIS types
    I tried creating a new mapping file by copying an existing one (DB2ToMSSql10.XML), renaming it ODBCToMSSql10.XML (as near as I can tell, the filename doesn't matter), and changing the SourceType line:
    SourceType="System.Data.Odbc.OdbcConnection"
    Is there anything else I need to do?  I was expecting I might find a file that SSIS uses to figure out which mapping file to use, but I was unsuccessful, so I think it looks at the SourceType line.
    The following is the full output of the dialog - perhaps I should be modifying DTS\binn\DtwTypeConversion.xml instead?  But that seems messier than creating a new file in DTS\MappingFiles:
    [Source Information]
    Cannot locate the mapping file to map the provider types to SSIS types
    [Destination Information]
    Destination Location : (local)
    Destination Provider : SQLNCLI10
    Mapping file (to SSIS type): c:\Program Files\Microsoft SQL Server\100\DTS\MappingFiles\MSSQLToSSIS10.XML
    [Conversion Table]
    SSIS conversion file: c:\Program Files\Microsoft SQL Server\100\DTS\binn\DtwTypeConversion.xml

    Hi, did you ever get a solution to this issue? I'm having a similar challenge trying to create a new mapping file which is not being detected by the import wizard.
    Please let me know.
    Gregg

  • How to stop BDB from Mapping Database Files?

    We have a problem where the physical memory on Windows (NT kernel 6 and up, i.e. Windows 7, 2008 R2, etc.) gets maxed out after some time when running our application. On an 8 GB machine, if you look at our process loading BDB, it's only around 1 GB. But when looking at the memory using RAMMap, you can see that the BDB database files (not the shared region files) are being mapped into memory, and that is where most of the memory consumption is taking place. Normally I wouldn't care, as memory mapping can have performance and usability benefits, but the result is that the system comes to a screeching halt. This happens when we are inserting at a high rate, e.g. tens of millions of records in a short time frame.
    I would attach a picture to this post, but for some reason the insert image is greyed out.
    Environment open flags: DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_MPOOL | DB_THREAD | DB_LOCKDOWN | DB_RECOVER
    Database open flags: DB_CREATE | DB_AUTO_COMMIT

    An update for the community
    Cause
    We opened a support request (SR) to work with Oracle on the matter. The conclusion we came to was that the main reason for the memory consumption was the Windows System Cache (for reference, see http://support.microsoft.com/kb/976618). When opening files in buffered mode, the equivalent of calling CreateFile without specifying FILE_FLAG_NO_BUFFERING, all I/O to a file goes through the Windows System Cache. The larger the database file, the more memory is used to back it. This is not the same as memory-mapped files, which Berkeley uses for the region files (i.e. the environment). Those also use memory, but because they are bounded in size they will not cause an issue (e.g. if you need a bigger environment, just add more memory). The obvious reason to use the cache is for performance optimizations, particularly in read-heavy workloads.
    The drawback, however, is that when there is a significant amount of I/O in a short amount of time, that cache can get really full and can result in the physical memory being close to 100% used. This has negative effects on the entire system.
    Time is important, because Windows needs time to transition active pages to standby pages, which reduces the amount of physical memory in use. What we found is that when our DB was installed on FLASH disk, we could generate a lot more I/O and our tests could run in a fraction of the time, but the memory would get close to 100%. If we ran those same tests on slower disk, the result was the same, i.e. we inserted 10 million records into the database, but it takes a lot longer and the memory utilization does not come close to 100%. Note that we also see the memory consumption happen when we use the hotbackup in the BDB library. The reason for this is obvious: in a short amount of time we are reading the entire BDB database file, which makes Windows use the system cache for it. The total amount of memory might be a factor as well. On a system with 16GB of memory, even with FLASH disk, we had a hard time reproducing the issue where the memory climbs.
    There is no Windows API that allows an application to control how much system cache is reserved or usable or maximum for an individual file.  Therefore, BDB does not have fine grained control of this behavior on an individual file basis.  BDB can only turn on or off buffering in total for a given file.
    Workaround
    In Berkeley, you can turn off buffered I/O in Windows by specifying the DB_DIRECT_DB flag to the environment. This is the equivalent of calling CreateFile with FILE_FLAG_NO_BUFFERING specified. All I/O goes straight to the disk instead of memory, and all I/O must be aligned to a multiple of the underlying disk sector size. (NTFS sector size is generally 512 or 4096 bytes and normal BDB page sizes are generally multiples of that, so for most this shouldn't be a concern, but know that Berkeley will test the page size to ensure it is compatible and, if not, it will silently disable DB_DIRECT_DB.) What we found in our testing is that using the DB_DIRECT_DB flag had too much of a negative effect on performance with anything but FLASH disk, so we cannot use it. We may consider it acceptable for FLASH environments where we generate significant I/O in short time periods. We could not reproduce the memory effect when the database was hosted on a SAN disk running 15K SAS, which is more typical, and are therefore closing the SR.
    However, Windows does have an API that controls the total system wide amount of system cache space to use and we may experiment with this setting. Please see this http://support.microsoft.com/kb/976618 We are also going to experiment with using multiple database partitions so that Berkeley spreads the load to those other files possibly giving the system cache time to move active pages to standby.

  • High Page Reads/Sec on Windows 2008 R2 64-bit running on VMware but very low Real Memory & Page file Usage.

    Hello All,
    Below is the server configuration,
    OS: Windows 2008 R2 Enterprise 64 Bit
    Version: 6.1.7601 Service Pack 1 Build 7601
    CPU: 4 (@ 2.93 GHz, 1 core)
    Memory: 12 GB
    Page file: 12 GB
    1. The actual utilization of real memory (whether sampled over 15 minutes, hourly, or weekly) has never crossed 20%, and the page file usage is at 0.1%. For some reason, the Pages/Sec>Limit% counter reports 100% continuously regardless of the sampling interval. Upon further observation, the Page Reads/Sec value is somewhere between 150~450 and Page Input/Sec is somewhere between 800~8000. Does this indicate a performance bottleneck? (In the interim I've asked the users and application owners whether they notice any performance degradation and am awaiting a response.) If this does indicate a performance issue, could someone help me track down which process or memory-mapped file is causing it, and what I should do to fix the problem?
    P.S. Initially the Security logs were full on this server, and since the page file is tied to the Application, Security and System logs, these were freed up to see if they were causing the high page reads, but that didn't help.
    2. If the above does not necessarily indicate a performance problem, can someone reference a few KB articles that confirm this? Also, in this case, will there be any adverse effects from attempting to fine-tune a server which is already running fine, assuming the application owners confirm there isn't any performance degradation?
    Thanks in advance.

    Hi,
    Based on the description, we can try to download Server Performance Advisor (SPA) to help further analyze the performance of the server. SPA can generate comprehensive diagnostic reports and charts and provides recommendations to help you quickly analyze issues and develop corrective actions.
    Regarding this tool, the following articles can be referred to for more information.
    Microsoft Server Performance Advisor
    https://msdn.microsoft.com/en-us/library/windows/hardware/dn481522.aspx
    Server Performance Advisor (SPA) 3.0
    http://blogs.technet.com/b/windowsserver/archive/2013/03/11/server-performance-advisor-spa-3-0.aspx
    Best regards,
    Frank Shen

  • Memory Mapping on a Virtual Machine

    I have an application that memory maps a number of files. Occasionally it needs to unmap them so that they can be updated. On a real machine this process works flawlessly but I have two installs that are on virtual machines. On the virtual machine installs the unmapping occasionally seems to either fail or at least take a long time. When I come to update the file I get the "The requested operation cannot be performed on a file with a user-mapped section open" error message.
    The failure is fairly random: sometimes the update process works, sometimes it fails, and it doesn't consistently fail on the same mapped file, which is why I think it's something to do with the timing of the unmapping and the virtual machine environment. Both virtual machine installs are running on Windows 2003 Server.
    Has anyone else seen this? I'm going to try inserting a pause between the unmapping and the update but that feels like a hack, I'd rather a call back to tell me the unmapping is complete but I don't suppose that's possible.

    the_doozer wrote:
    Ok, I'll grant you there is no way to explicitly unmap a mapped file in Java (a huge failing of the file mapping system IMHO), but closing any open FileChannel and nulling out the MappedByteBuffer is, in my experience, normally enough to cause the OS to unmap the file. This system lets me update files quite happily on all but the virtual machine system.
    I think you've been lucky in that case. I had some test cases that consistently failed because I couldn't delete the memory-mapped file that I had previously created.
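    Given that releasing a mapping in Java is non-deterministic, a fixed pause is indeed fragile. A slightly more robust, though still best-effort, pattern is to retry the file update a few times and hint the GC between attempts. A sketch with hypothetical file arguments:

    import java.io.File;
    import java.io.IOException;

    public class RetryFileUpdate {
        // Releasing a MappedByteBuffer is non-deterministic, so instead of a fixed
        // pause, retry the update a few times and hint the GC between attempts.
        static void replaceFile(File oldFile, File newFile) throws Exception {
            for (int attempt = 0; attempt < 10; attempt++) {
                if (oldFile.delete() && newFile.renameTo(oldFile)) {
                    return;                       // update succeeded
                }
                System.gc();                      // encourage collection of the old mapping
                Thread.sleep(200);                // give the OS time to drop the section
            }
            throw new IOException("could not replace " + oldFile + ": mapping still open?");
        }
    }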

  • Import map file

    Hi Experts,
    I can see from the MDM Console that the Ports table under the Admin node has a field with type "Inbound" and the name of the import map.
    Could anybody tell me how to get this import map file from the repository if my remote system is set up as "Inbound/Outbound" in MDM?
    Thanks for your help
    Kind regards,
    Wei Dona

    Hi,
    In order to connect and to save map files, please follow these steps.
    If you are using a text file:
    1. Open the Import Manager, select the repository, and enter the username and password.
    2. Select the type as Delimited Text/Excel/Access and so on.
    3. Enter the source file.
    4. Enter the delimiter if using delimited text (e.g. space, ":", ";", etc.).
    5. Select the source and destination tables.
    6. Map the corresponding source and destination fields.
    7. Then go to the Match Records tab and select the field by which the records will be imported.
    8. Go to File > Save and enter the map name.
    9. Execute the import.
    Next time you want to edit the map or execute it again, go to File > Open > map name.
    Hope this gives some information.
    Cheers
    Santosh.

  • Embedded LVM file, memory map?

    I am running out of memory ("No Space in Execution Regions") when I try to build my ARM 7 project. I want to determine the code size for each of my VIs in the project. Is there a LINK file?
    The build process produces an Application.lvm file that contains some (or all) of the mapping. I found an SDK article that showed how to read the various data/structure sizes and the mapping format, but it did not describe how the VI executables were mapped.
    What is the best way to determine the size and mapping of both data and executable code for an Embedded project build?

    There is a linker output map file located at
    .../<project dir>/<proj name>/<target name>/<application name>/2.0/project/labview.map

  • I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?

    I am running out of memory on my hard drive and need to delete files. How can I see all the files/applications on my hard drive so I can see what is taking up a lot of room?
    Thanks!
    David

    Either of these should help.
    http://grandperspectiv.sourceforge.net/
    http://www.whatsizemac.com/
    Or search 'disk size' in the App Store.
    Be careful with what you delete and have a backup BEFORE you start; you may also want to reboot to try to free any memory that may have been written to disk.
