Memory mapped files
Does anyone know if there is any way to use memory-mapped files in Java? If so, what are the calls that emulate the C++/Win32 calls CreateFileMapping(), MapViewOfFile(), and OpenFileMapping()?
http://java.sun.com/j2se/1.4.1/docs/api/java/nio/MappedByteBuffer.html
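In Java the mapping is done through FileChannel.map(), which plays the combined role of CreateFileMapping() and MapViewOfFile(). A minimal sketch (the file name and size are illustrative, not from the original question):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapDemo {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("demo.dat", "rw");
        raf.setLength(4096); // make sure the file covers the region we map
        FileChannel ch = raf.getChannel();

        // FileChannel.map() corresponds to CreateFileMapping() + MapViewOfFile()
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        buf.putInt(0, 42);                 // write through the mapping
        System.out.println(buf.getInt(0)); // prints 42
        ch.close();
    }
}
```

There is no direct analogue of OpenFileMapping(): each process simply opens the file and calls map() itself.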
Similar Messages
-
Nio ByteBuffer and memory-mapped file size limitation
I have a question/issue regarding ByteBuffer and memory-mapped file size limitations. I recently started using NIO FileChannels and ByteBuffers to store and process buffers of binary data. Until now, the maximum individual ByteBuffer/memory-mapped file size I have needed to process was around 80MB.
However, I now need to begin processing larger buffers of binary data from a new source. Initial testing with buffer sizes above 100MB results in IOExceptions (java.lang.OutOfMemoryError: Map failed).
I am using 32-bit Windows XP; 2GB of memory (typically 1.3 to 1.5GB free); Java version 1.6.0_03; with -Xmx set to 1280m. Decreasing the Java max heap size down to 768m does make it possible to memory map larger buffers to files, but never bigger than roughly 500MB. However, the application that uses this code contains other components that require the -Xmx option to be set to 1280.
The following simple code segment executed by itself will produce the IOException for me when executed using -Xmx1280m. If I use -Xmx768m, I can increase the buffer size up to around 300MB, but never to a size that I would think I could map.
try {
    String mapFile = "C:/temp/" + UUID.randomUUID().toString() + ".tmp";
    FileChannel rwChan = new RandomAccessFile( mapFile, "rw" ).getChannel();
    ByteBuffer byteBuffer = rwChan.map( FileChannel.MapMode.READ_WRITE,
                                        0, 100000000 );
    rwChan.close();
} catch( Exception e ) {
    e.printStackTrace();
}
I am hoping that someone can shed some light on the factors that affect the amount of data that may be memory mapped to/in a file at one time. I have investigated this for some time now and based on my understanding of how memory mapped files are supposed to work, I would think that I could map ByteBuffers to files larger than 500MB. I believe that address space plays a role, but I admittedly am no OS address space expert.
Thanks in advance for any input.
Regards, KJ

See the workaround in http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038
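The linked bug boils down to address-space exhaustion: on a 32-bit JVM the heap, the JVM itself, and every mapping compete for the same roughly 2GB of virtual addresses, so a large -Xmx leaves little contiguous room for large maps. One common mitigation is to map the file as a series of smaller windows rather than one big region. A rough sketch (file name, sizes, and window size are illustrative, not from the original post):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ChunkedMap {
    static final int WINDOW = 64 * 1024 * 1024; // 64MB windows instead of one huge map

    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("big.dat", "rw");
        raf.setLength(256L * 1024 * 1024); // stand-in for the large data file
        FileChannel ch = raf.getChannel();

        long size = ch.size();
        for (long pos = 0; pos < size; pos += WINDOW) {
            long len = Math.min(WINDOW, size - pos);
            MappedByteBuffer window = ch.map(FileChannel.MapMode.READ_ONLY, pos, len);
            // ... process this window, then let the reference go out of scope
            // so the mapping can eventually be reclaimed by the GC ...
        }
        ch.close();
    }
}
```

Each window needs only WINDOW bytes of contiguous address space at a time, instead of the full file size.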
-
I would like to know if it is possible to memory map (mmap) a file that is residing on a cluster file system (CFS or GFS).
If I remember correctly, memory mapping a file residing on NFS has issues.
Thanks,
Harsh

I'm using SC 3.1u4 on Solaris 9. I ran into a problem with memory mapped files on CFS.
I've multiple processes (on one cluster node) sharing such a file that was created using the following command:
mmap ((caddr_t)0,SOME_SIZE,(PROT_READ | PROT_WRITE), (MAP_SHARED | MAP_NORESERVE),fd,0);
Issuing msync with MS_INVALIDATE as the third argument is ok. But when some other process tries to read the memory the node seems to hang.
I can't examine the processes using pstack or truss as both of them get hung too. Only way out of this mess is to reboot the node.
I can't imagine this problem hasn't been seen before. Is there a patch for it? -
Error code 1450 - memory mapped file
Hello,
in my application I am using memory mapped files. I have three of them; the maximum size of the biggest one is 5MB. I store 64 waveforms from a DAQ card in it.
The application runs fine, but sometimes an error occurs when I try to access the MMF. The error code is 1450, "insufficient system resources".
Is a size of 5MB too big? Should I rather create one MMF for each waveform?

Hi mitulatbati,
which development tools are you actually using?
Which platform, libraries and so on...?
Can you post example code?
Marco Brauner NIG -
Memory-mapped file is possible?
Hi everyone, I'm a new LabVIEW user and I want to start a new project that uses a memory mapped file.
I have working C# code to read the $gtr2$ MMF, where I simply use
MemoryMappedFile.OpenExisting("$gtr2$")
to get data from it.
How is it possible to read this kind of file in LabVIEW? I can't find anything useful on the web.
I'm using a LabVIEW 2013 student edition.
Thanks to everyone who wants to answer my question.
Have a nice day.

Hi,
I too only have done the CLAD…
You have to look for DotNet examples, you will find them here in the forum…
And usually it helps to read the documentation for that MMF class to recreate your C# code in LabVIEW!
Best regards,
GerdW
CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
Kudos are welcome -
Memory mapped files: are they still used?
To System programmers.
In some of my old code, David used memory mapped files to handle huge sets of random points. The code reads in the whole file and then sets flags, similar to an async process. The file mapping handles the memory instead of using mallocs; the data may be stored on the heap or in the global stack. I went back to Visual Studio 6 and tried to take out the code, as standard C++ handles a full file read as a char buffer via a void * structure. I found some valloc data types and then found the newer file-mapping routines in VS2013, plus an explanation of the global stack and heap.
Are software developers using file mapping, or are they using, say, vectors to form STL structures?
Cheers
John Keays
John Keays

Here is some typical code in the old C. This is close to the code I used in Visual Studio 6. I need to put this in VS2013 under C++ or C++11/14. I have guessed the file handle open and size code.
#include <stdio.h>
#include <stdlib.h>

typedef struct { double x, y, z; } Point;

int readAllFile(const char *name, void **addr) {
    FILE *fh = fopen(name, "rb");
    if (fh == NULL) return -1;
    fseek(fh, 0, SEEK_END);
    int fsize = (int)ftell(fh);
    rewind(fh);
    *addr = malloc(fsize);
    fread(*addr, 1, fsize, fh); /* actually read the data, not just allocate */
    fclose(fh);
    return fsize;
}

int main(void) {
    Point *allPoints;
    int fsize = readAllFile("points.bin", (void **)&allPoints);
    if (fsize < 0) return 1;
    int numRecords = (int)(fsize / sizeof(Point));
    for (int i = 0; i < numRecords; i++)
        printf("rec %d values x %.3f\n", i, allPoints[i].x);
    free(allPoints);
    return 0;
}
This is the boilerplate for the file reads. I even tried this with text files, parsing the text records. With vectors instead of the mallocs, as you suggest, the scheme of the code remains the same.
For a lidar file, the xyz records have grown from 10,000 points in the 1990s to 1,000,000 points in the mid-2000s. For a file of this size, 24 MB are allocated in one hit. The whole of the Gold Coast, in terms of lidar points in 2003, was 110 million points; it could be more.
Where is the data stored with malloc, a vector, or a memory mapped file? What is good and bad practice?
Cheers
john Keays
John Keays -
How to truncate a memory mapped file
If one maps a file, the mapped size will become the file size. So the size parameter passed to the map() method of FileChannel should be carefully calculated. However, what if one can't decide beforehand the size of the file?
I tried to use truncate(), but that throws a runtime exception: truncate() can't be used on a file with user-mapped section open.
public class MapFileSizeTest extends TestCase {
    public void testMapFileSize() throws Exception {
        final File file = new File("testMapFileSize.data");
        FileChannel configChannel = new RandomAccessFile(file, "rw").getChannel();
        // this will result in a file with size 2,097,152kb
        MappedByteBuffer configBuffer = configChannel.map(FileChannel.MapMode.READ_WRITE,
                0, 1000000000);
        configBuffer.flip();
        configBuffer.force();
        // truncate can't be used on a file with a user-mapped section open
        // configChannel.truncate(configBuffer.limit());
        configChannel.close();
    }
}

Could somebody please give some suggestions? Thank you very much.

The region (position/size) that you pass to the map method should be contained in the file. The spec includes this statement: "The behavior of this method when the requested region is not completely contained within this channel's file is unspecified." In the Sun implementation, we attempt to extend the file if the requested region is not completely contained, but this is not required by the specification. Once you map a region you should not attempt to truncate the file, as it can lead to unspecified exceptions (see the MappedByteBuffer specification). Windows prevents it; other platforms allow it, but cause access to the now-inaccessible region to SIGSEGV (which must be handled and converted into a runtime error).
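Given the answer above, one way to sidestep the truncation problem entirely is to decide the size up front: extend the file with setLength() before mapping, so the requested region is always fully contained in the file and no post-mapping truncate is needed. A sketch (file name and sizes are illustrative):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class PreSizedMap {
    public static void main(String[] args) throws Exception {
        File file = new File("presized.data");
        RandomAccessFile raf = new RandomAccessFile(file, "rw");
        raf.setLength(1024 * 1024); // fix the final size before mapping
        FileChannel ch = raf.getChannel();

        // The region [0, 1MB) is now fully contained in the file, so the
        // mapping does not rely on unspecified extend-on-map behavior.
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, file.length());
        buf.put(0, (byte) 1);
        buf.force(); // flush changes to the underlying file
        ch.close();
    }
}
```

If the final size genuinely cannot be known, over-allocating in large increments and recording the logical end of data inside the file (e.g. in a header) avoids ever having to truncate while a mapping is live.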
-
Hello All,
Below is the server configuration,
OS: Windows 2008 R2 Enterprise 64 Bit
Version: 6.1.7601 Service Pack 1 Build 7601
CPU: 4 (@ 2.93 GHz, 1 core)
Memory: 12 GB
Page file: 12 GB
1. Whatever the sampling interval (15 minutes, hourly, weekly, etc.), utilization of real memory has never crossed 20% and page file usage is at 0.1%. For some reason, the Pages/Sec>Limit% counter reports 100% continuously regardless of the sampling interval. Upon further observation, the Page Reads/Sec value is somewhere between 150 and 450 and Page Input/Sec is somewhere between 800 and 8000. Does this indicate a performance bottleneck? (I have, in the interim, asked the users and application owners whether they notice any performance degradation and am awaiting a response.) If this does indicate a performance issue, could someone help me track down which process or memory-mapped file is causing it, and what I should do to fix it?
p.s. Initially the Security logs were full on this server; since the page file is tied to the Application, Security and System logs, these were freed up to see if that was causing the high page reads, but it wasn't.
2. If the above does not necessarily indicate a performance problem, can someone reference a few KB articles that confirm this? Also, in this case, will there be any adverse effects from attempting to fine-tune a server that is already running fine, assuming the application owners confirm there isn't any performance degradation?
Thanks in advance.

Hi,
Based on the description, we can try to download Server Performance Advisor (SPA) to help further analyze the performance of the server. SPA can generate comprehensive diagnostic reports and charts and provides recommendations to help you quickly analyze
issues and develop corrective actions.
Regarding this tool, the following articles can be referred to for more information.
Microsoft Server Performance Advisor
https://msdn.microsoft.com/en-us/library/windows/hardware/dn481522.aspx
Server Performance Advisor (SPA) 3.0
http://blogs.technet.com/b/windowsserver/archive/2013/03/11/server-performance-advisor-spa-3-0.aspx
Best regards,
Frank Shen
-
Memory Mapping on a Virtual Machine
I have an application that memory maps a number of files. Occasionally it needs to unmap them so that they can be updated. On a real machine this process works flawlessly but I have two installs that are on virtual machines. On the virtual machine installs the unmapping occasionally seems to either fail or at least take a long time. When I come to update the file I get the "The requested operation cannot be performed on a file with a user-mapped section open" error message.
The failure is fairly random: sometimes the update process works, sometimes it fails, and it doesn't consistently fail on the same mapped file, which is why I think it's something to do with the timing of the unmapping and the virtual machine environment. Both virtual machine installs are running on Windows 2003 Server.
Has anyone else seen this? I'm going to try inserting a pause between the unmapping and the update, but that feels like a hack; I'd rather have a callback to tell me the unmapping is complete, but I don't suppose that's possible.

the_doozer wrote:
Ok, I'll grant you there is no way to explicitly unmap a mapped file in Java (a huge failing of the file mapping system IMHO) but closing any open FileChannel and nulling out the MappedByteBuffer is, in my experience, normally enough to cause the OS to unmap the file. This system lets me update files quite happily on all but the virtual machine system.

I think you've been lucky in that case. I had some test cases that consistently failed since I couldn't delete the memory mapped file that I previously had created. -
Hi folks.
I am developing an application that has very large input files. During execution, the files will be processed twice: once, sequentially to get the position of each piece of data in the file, and then directly by seeking to a specific position to retrieve a specific piece of information.
My rationale for doing this is to avoid loading the entire content of the file into memory via some data structure. However, all of the seeking/reading seems to be quite a performance hit.
Is there a way to memory map a file and then be able to read only a portion of the data based on its byte position? I've searched around for sample code, but I can only find examples of sequential access.
Any help will be appreciated extremely!!
Thanks

That's pretty simple. Thanks.
Follow-up questions:
The code I have now reads:
FileChannel fc = seqDBRAF.getChannel();
ByteBuffer roBuf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());
CharBuffer cb = Charset.forName("ISO-8859-15").newDecoder().decode(roBuf);
The decode line takes a long time to execute, not the "map" line. Why is this?
If/when I use the position method to "seek" to the right place, should I do this on the ByteBuffer and then decode? Or decode first and then just read from the position in the CharBuffer?
Thanks -
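On the follow-up questions above: map() itself is lazy (pages are only faulted in when touched), while decoding the whole buffer reads every byte, which is why the decode line dominates. It is usually cheaper to position/limit the mapped ByteBuffer first and decode only the slice you need. A sketch (file name, offset, and length are illustrative):

```java
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;

public class SliceDecode {
    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("seq.db", "rw");
        raf.writeBytes("0123456789ABCDEF"); // stand-in file content
        FileChannel fc = raf.getChannel();
        MappedByteBuffer roBuf = fc.map(FileChannel.MapMode.READ_ONLY, 0, fc.size());

        // "Seek" on the ByteBuffer, then decode only the slice we need.
        int offset = 10, length = 6;
        roBuf.position(offset);
        roBuf.limit(offset + length);
        ByteBuffer slice = roBuf.slice();
        String s = Charset.forName("ISO-8859-15").newDecoder().decode(slice).toString();
        System.out.println(s); // prints ABCDEF
        fc.close();
    }
}
```

So the answer to the second question is: position on the ByteBuffer first, then decode; decoding first forces the whole file through the decoder.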
Embedded LVM file, memory map?
I am running out of memory ("No Space in Execution Regions") when I try to build my ARM 7 project. I want to determine the code size for each of my VIs in the project. Is there a link file?
The build process produces an Application.lvm file that contains some (or all) of the mapping. I found an SDK article that showed how to read the various data/structure sizes and the mapping format, but it did not describe how the VI executables were mapped.
What is the best way to determine the size and mapping of both data and executable code for an Embedded project build?
Solved!
Go to Solution.

There is a linker output map file located at
.../<project dir>/<proj name>/<target name>/<application name>/2.0/project/labview.map -
Running Out of Memory - Any Files, Programs I Can Run or Delete?
Looking to erase some programs and/or files that are not needed to increase my memory.
Is there any recommendations or a sort or clean-up program I can run on my mac?
Thanks for your time guys!

Memory is RAM - programs are stored on disk, which seems to be what you are talking about.
You should have a minimum of 10 GB of free disk at all times - how much do you have?
The hints in the Apple link are all good
1 - add an external hard drive - this is cheap, quick and the most effective solution
2 - get rid of unnecessary language support as a quick (and small) improvement - Monolingual - http://monolingual.sourceforge.net/ - is an easy way to do that (be sure not to eliminate English)
3 - use a disk-space mapping program to show you where your space is being used - GrandPerspective is good - http://grandperspectiv.sourceforge.net/
Stay away from automatic programs like Spring Cleaning - you are likely to mess up - much better to control it yourself
LN
Message was edited by: LarryHN -
Hello,
I have some strange problem with mapping a file using a FileChannel and the map method.
My program creates some number of threads. These threads access a file (in different places) to write down some information. The number of created threads is about 100 (max), but at any one moment I may have, for example, from 1 to 60 active threads.
Sometimes when a thread tries to map some region of the file, I get an exception.
So here is my code:
FileChannel fch = rafOut.getChannel();
try {
    position = rafOut.getChannel().position();
    // I get the exception here, at fch.map(...)
    mbf = fch.map(FileChannel.MapMode.READ_WRITE, position, Manager.getSize());
} catch (IOException e2) {
    e2.printStackTrace();
}
// here I'm using some putXXX() methods
// and at the end I have
mbf.force();

This code sometimes generates this exception:
java.io.IOException: The requested operation over a mapped file was not accomplished
at sun.nio.ch.FileChannelImpl.truncate0(Native Method)
at sun.nio.ch.FileChannelImpl.map(Unknown Source)
at DataWriter.run(DataWriter.java:40)
Any idea why I have this error?
Regards,
Anton
PS. The first part of the exception message was written in French, so I tried to translate it :)
Edited by: anton_tonev on Nov 5, 2007 6:49 AM

Hi ejp,
I'm trying to extend the file. In the beginning the file is empty, so every thread adds data to the file.
About the mapping: yes, I know that mapping the same file many times is not the right solution... but every thread maps a different part of the file. For example:
one thread maps from byte 0 to byte 256 and writes its data there;
another thread maps from 256 to 1024, and so on...
The problem is that the threads don't know the size of the data before the mapping (i.e. the future size of the entire file); if it were known, maybe I could map the entire file in memory. And one more thing: the file sometimes becomes very big, hundreds of GBs, so mapping just some parts of it in memory is for me the best solution.
Regards,
Anton -
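The pattern described above can be sketched as follows: grow the file in fixed increments with setLength() and hand each writer a distinct, non-overlapping region before it calls map(), so no two threads race on extending the file during the map() call. The names and sizes are illustrative, and Manager.getSize() from the post is replaced by a constant:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.atomic.AtomicLong;

public class RegionAllocator {
    static final long REGION = 256;          // bytes per writer region (illustrative)
    static final AtomicLong next = new AtomicLong(0);

    public static void main(String[] args) throws Exception {
        RandomAccessFile raf = new RandomAccessFile("out.dat", "rw");
        FileChannel ch = raf.getChannel();

        Runnable writer = () -> {
            try {
                long pos = next.getAndAdd(REGION);   // claim a unique region
                synchronized (raf) {
                    if (raf.length() < pos + REGION) {
                        raf.setLength(pos + REGION); // extend before mapping
                    }
                }
                MappedByteBuffer mbf = ch.map(FileChannel.MapMode.READ_WRITE, pos, REGION);
                mbf.put(0, (byte) 1);
                mbf.force();
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        Thread t1 = new Thread(writer), t2 = new Thread(writer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        ch.close();
    }
}
```

Explicitly extending the file first means map() never has to rely on the unspecified extend-on-map behavior, which is one plausible source of the intermittent IOException in the thread above.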
Hi all
I have Nokia N70 with the latest firmware v5.0638.3.0.1
I installed Nokia Maps late last year, but never really used it. I tried to load it earlier this week, but it refused to load. So:
- I tried to re-install the SIS - "File Corrupted" Error.
- I tried to remove the Maps app from TOOLS > MANAGER - "File Corrupted" Error.
- I formatted the Memory card - "File Corrupted" Error.
- I hard reset my phone and tried to reinstall but - "File Corrupted" Error.
- I even downloaded the v2 Beta of Maps but - "File Corrupted" Error.
Any advice on how to get around this? I'm travelling to Spain next week, can could really use the maps!
thanks!

Hi Again,
Installing version 1 should be OK. If you use Application Manager to install it, it should work. Save it to your memory card to make putting the maps on it easier.
Hope this helps -
MaxDB cannot start: Missing root pointer in memory mapping
Hello,
I cannot start (or change to any other state) my MaxDB. I have already tried to remove directory rtedump_dir as per SAP Note 1283278, but this did not help.
Any help will be greatly appreciated
Warm greetings
Jan
mibse2:se2adm 62> dbmcli -d SE2 -u superdba,xxxyyy show state
OK
Missing root pointer in memory mapping when restoring memory map from /sapdb/SE2/data/wrk/SE2/rtedump_dir
mibse2:se2adm 63> dbmcli -d SE2 -u superdba,xxxyyy db_enum
OK
SE2 /sapdb/SE2/db 7.8.01.14 fast offline
SE2 /sapdb/SE2/db 7.8.01.14 quick offline
SE2 /sapdb/SE2/db 7.8.01.14 slow offline
SE2 /sapdb/SE2/db 7.8.01.14 test offline
mibse2:se2adm 64> dbmcli -d SE2 -u superdba,xxxyyy inst_enum
OK
7.8.01.14 /sapdb/clients/SE2
7.8.01.14 /sapdb/SE2/db
mibse2:se2adm 65> sdbregview -l
Installation: Global /sapdb/programs
Global Listener 7.8.01.14 valid 64 bit
Installation Compatibility 7.8.01.14 valid 64 bit
Installer 7.8.01.14 valid
SAP Utilities Compatibility 7.8.01.14 valid 64 bit
Installation: CL_SE2 /sapdb/clients/SE2
Base 7.8.01.14 valid 64 bit
Fastload API 7.8.01.14 valid 64 bit
JDBC 7.6.06.07 valid
Messages MSG 0.9004 valid
ODBC 7.8.01.14 valid 64 bit
SAP Utilities 7.8.01.14 valid 64 bit
SQLDBC 7.8.01.14 valid 64 bit
SQLDBC 76 7.6.06.10 valid 64 bit
SQLDBC 77 7.8.01.14 valid 64 bit
Installation: SE2 /sapdb/SE2/db
Base 7.8.01.14 valid 64 bit
DB Analyzer 7.8.01.14 valid 64 bit
Database Kernel 7.8.01.14 valid 64 bit
Fastload API 7.8.01.14 valid 64 bit
JDBC 7.6.06.07 valid
Loader 7.8.01.14 valid 64 bit
Messages MSG 0.9004 valid
ODBC 7.8.01.14 valid 64 bit
Redist Python 7.8.01.14 valid 64 bit
SAP Utilities 7.8.01.14 valid 64 bit
SQLDBC 7.8.01.14 valid 64 bit
SQLDBC 76 7.6.06.10 valid 64 bit
SQLDBC 77 7.8.01.14 valid 64 bit
Server Utilities 7.8.01.14 valid 64 bit
mibse2:se2adm 66> xinstinfo SE2
IndepData : /sapdb/data
IndepPrograms : /sapdb/programs
InstallationPath : /sapdb/SE2/db
Kernelversion : KERNEL 7.8.01 BUILD 014-121-233-288
Rundirectory : /sapdb/SE2/data/wrk/SE2
mibse2:se2adm 67>

Hi,
1. Please update with output of the following commands:
ps -ef | grep dbmsrv
ls -l /var/lib/sdb/dbm/ipc
ipcs -m | wc -l
sysctl -a | grep kernel.shmmni
dbmcli inst_enum
dbmcli db_enum -s
2. Please post the dbmsrv*.err located in /sapdb/data/wrk
3. First check that you have no active dbmrfc/dbmsrv processes, and stop the x_server.
- Kill all these processes manually if they were not released after closing DBMGUI and dbmcli sessions and stopping the application server.
- Try to check/remove the shared memory:
/sapdb/MAXDB1/db/pgm/dbmshm CHECK /var/lib/sdb/dbm/ipc MAXDB1
/sapdb/MAXDB1/db/pgm/dbmshm DELETE /var/lib/sdb/dbm/ipc MAXDB1
- Check in /var/lib/sdb/dbm/ipc whether you have the files MAXDB1.dbm.shi and MAXDB1.dbm.shm; rename both files to MAXDB1.dbm.shi.old and MAXDB1.dbm.shm.old.
- Try to connect to the database using the dbmcli tool and post the results.
Hope above steps are helpful.
Regards,
Deepak Kori