A very big file

I have a very big text file, almost 800 MB. In some lines a special pattern, "from the end", is present, and I want to pick out those lines.
What are the possible efficient solutions?
Because the file is large, I hesitate to use BufferedReader + readLine() + indexOf("from the end"), since it reads all the lines and does not seem efficient.
Would RandomAccessFile be a good solution? But it does not give me a convenient readLine() + indexOf("from the end") combination. How do I solve it then?
I have Java 1.4. What is the best solution for this kind of problem in Java 1.4?

Under all circumstances you'll have to read all the bytes of the file at least once.

Not true! If the file uses a character encoding in which every character is represented by a single byte (e.g. ASCII or ISO-8859-x), then the following class is very efficient. A short usage sketch follows the class.
import java.io.*;
import java.util.*;

// Note: this example uses generics and the enhanced for loop (Java 5+);
// on Java 1.4 the same approach works with a plain List and Iterator.
public class GetLinesFromEndOfFile
{
    static public class BackwardsFileInputStream extends InputStream
    {
        public BackwardsFileInputStream(File file) throws IOException
        {
            assert (file != null) && file.exists() && file.isFile() && file.canRead();
            raf = new RandomAccessFile(file, "r");
            currentPositionInFile = raf.length();
            currentPositionInBuffer = 0;
        }

        public int read() throws IOException
        {
            if (currentPositionInFile <= 0)
            {
                return -1;
            }
            if (--currentPositionInBuffer < 0)
            {
                // Buffer exhausted: refill it with the previous block of the file.
                currentPositionInBuffer = buffer.length;
                long startOfBlock = currentPositionInFile - buffer.length;
                if (startOfBlock < 0)
                {
                    currentPositionInBuffer = buffer.length + (int) startOfBlock;
                    startOfBlock = 0;
                }
                raf.seek(startOfBlock);
                raf.readFully(buffer, 0, currentPositionInBuffer);
                return read();
            }
            currentPositionInFile--;
            // Mask to 0..255 so bytes >= 0x80 are not mistaken for end-of-stream.
            return buffer[currentPositionInBuffer] & 0xff;
        }

        public void close() throws IOException
        {
            raf.close();
        }

        private final byte[] buffer = new byte[4096];
        private final RandomAccessFile raf;
        private long currentPositionInFile;
        private int currentPositionInBuffer;
    }

    public static List<String> head(File file, int numberOfLinesToRead) throws IOException
    {
        return head(file, "ISO-8859-1", numberOfLinesToRead);
    }

    public static List<String> head(File file, String encoding, int numberOfLinesToRead) throws IOException
    {
        assert (file != null) && file.exists() && file.isFile() && file.canRead();
        assert numberOfLinesToRead > 0;
        assert encoding != null;
        LinkedList<String> lines = new LinkedList<String>();
        BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(file), encoding));
        for (String line = null; (numberOfLinesToRead-- > 0) && (line = reader.readLine()) != null;)
        {
            lines.addLast(line);
        }
        reader.close();
        return lines;
    }

    public static List<String> tail(File file, int numberOfLinesToRead) throws IOException
    {
        return tail(file, "ISO-8859-1", numberOfLinesToRead);
    }

    public static List<String> tail(File file, String encoding, int numberOfLinesToRead) throws IOException
    {
        assert (file != null) && file.exists() && file.isFile() && file.canRead();
        assert numberOfLinesToRead > 0;
        assert (encoding != null) && encoding.matches("(?i)(iso-8859|ascii|us-ascii).*");
        LinkedList<String> lines = new LinkedList<String>();
        BufferedReader reader = new BufferedReader(new InputStreamReader(new BackwardsFileInputStream(file), encoding));
        for (String line = null; (numberOfLinesToRead-- > 0) && (line = reader.readLine()) != null;)
        {
            // Reverse the order of the characters in the string (they were read backwards).
            char[] chars = line.toCharArray();
            for (int j = 0, k = chars.length - 1; j < k; j++, k--)
            {
                char temp = chars[j];
                chars[j] = chars[k];
                chars[k] = temp;
            }
            lines.addFirst(new String(chars));
        }
        reader.close();
        return lines;
    }

    public static void main(String[] args)
    {
        try
        {
            File file = new File("/usr/share/dict/words");
            int n = 10;
            {
                System.out.println("Head of " + file);
                int index = 0;
                for (String line : head(file, n))
                    System.out.println(++index + "\t[" + line + "]");
            }
            {
                System.out.println("Tail of " + file);
                int index = 0;
                for (String line : tail(file, "us-ascii", n))
                    System.out.println(++index + "\t[" + line + "]");
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
}
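
A short usage sketch, assuming the goal is either to collect every line containing the literal text "from the end" or simply to look at the last few lines of the file (class and file names here are only examples). Note that if the pattern can occur anywhere in the file, every line has to be read once no matter what; a BufferedReader with a generous buffer is usually I/O-bound rather than CPU-bound, so the straightforward scan is less costly than it sounds.

import java.io.*;
import java.util.List;

public class FindPattern
{
    public static void main(String[] args) throws IOException
    {
        File file = new File("big.txt"); // illustrative path

        // One sequential pass: print every line containing the pattern.
        BufferedReader reader = new BufferedReader(new FileReader(file), 1 << 16);
        try
        {
            String line;
            while ((line = reader.readLine()) != null)
            {
                if (line.indexOf("from the end") >= 0)
                {
                    System.out.println(line);
                }
            }
        }
        finally
        {
            reader.close();
        }

        // Or, if "from the end" really means "the last N lines", read backwards.
        List<String> lastLines = GetLinesFromEndOfFile.tail(file, 20);
        for (String s : lastLines)
        {
            System.out.println(s);
        }
    }
}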

Similar Messages

  • How do I open a VERY big file?

    I hope someone can help.
    I did some testing using a LeCroy LT342 in segment mode. Using the LabVIEW driver I downloaded the data over GPIB and saved it to a spreadsheet file. Unfortunately it created very big files (ranging from 200 MB to 600 MB). I now need to process them, but LabVIEW doesn't like them. I would be very happy to split the files into an individual file for each row (I can do this quite easily), but LabVIEW just sits there when I try to open the file.
    I don't know enough about computers and memory (my spec is a 1.8 GHz Pentium 4 with 384 MB RAM) to figure out whether it will eventually do the job if I just leave it for long enough.
    Has anyone any experience or help they could offer?
    Thanks,
    Phil

    When you open (and read) a file you usually move it from your hard disk (permanent storage) into RAM. This lets you manipulate it at high speed using fast RAM; if you don't have enough RAM to read the whole file, you will be forced to use virtual memory (which uses swap space on the hard disk as "virtual" RAM), and that is very slow. Since you only have 384 MB of RAM and want to process huge files (200 MB-600 MB), you could easily and inexpensively upgrade to 1 GB of RAM and see a large speed increase. A better option is to load the file in chunks, looking at some number of lines at a time, processing that amount of data, and repeating until the file is complete. This is more programming, but it lets you use much less RAM at any instant; a rough sketch of the chunked approach follows below.
    Paul
    Paul Falkenstein
    Coleman Technologies Inc.
    CLA, CPI, AIA-Vision
    Labview 4.0- 2013, RT, Vision, FPGA
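    The thread above is about LabVIEW, so no LabVIEW code can be pasted here, but the chunking idea Paul describes is language-independent. A rough, non-authoritative sketch of the same pattern in Java (file names, the chunk size and the processChunk placeholder are all made up for illustration):
    import java.io.*;
    import java.util.ArrayList;
    import java.util.List;
    // Sketch: stream a huge text file a fixed number of lines at a time, so only
    // one chunk is ever held in memory. processChunk() stands in for the real analysis.
    public class ChunkedProcessor {
        private static final int LINES_PER_CHUNK = 10000; // example value
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader("huge-data.txt"));
            PrintWriter writer = new PrintWriter(new BufferedWriter(new FileWriter("processed.txt")));
            try {
                List<String> chunk = new ArrayList<String>(LINES_PER_CHUNK);
                String line;
                while ((line = reader.readLine()) != null) {
                    chunk.add(line);
                    if (chunk.size() == LINES_PER_CHUNK) {
                        processChunk(chunk, writer);
                        chunk.clear();
                    }
                }
                if (!chunk.isEmpty()) {
                    processChunk(chunk, writer); // last, partially filled chunk
                }
            } finally {
                reader.close();
                writer.close();
            }
        }
        // Placeholder for whatever per-chunk processing is actually needed.
        private static void processChunk(List<String> lines, PrintWriter out) {
            for (String line : lines) {
                out.println(line);
            }
        }
    }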

  • VERY big files (!!) created by QuarkXPress 7

    Hi there!
    I have a "problem" with QuarkXPress 7.3 and I don't know if this is the right forum to ask...
    Anyway, I have created a document, about 750 pages, with 1000 pictures placed in it. I have divided it into 3 layouts.
    When I save the file, the file created is 1.20 GB!
    Isn't that a very big file for QuarkXPress?
    The project contains 3 layouts. I made a copy of the file and deleted 2 of the 3 layouts, and the project's file size is still the same!
    (Last year I created (almost) the same document, and as I check that document now, its size is about 280 MB!)
    The problem is that I have "autosave" on (every 5 or 10 minutes) and it takes some time to save!
    Can anyone help me with that?
    Why has Quark made such a big file?
    Thank you all for your time!

    This is really a Quark issue and better asked in their forum areas. However, have you tried to do a Save As and see how big the resultant document is?

  • Question about reading a very big file into a buffer.

    Hi, everyone!
    I want to randomly load several characters from the GB2312 charset to form a string.
    I have two questions:
    1. Where can I find the charset table file? I have googled for hours but failed to find a GB2312 charset file.
    2. I think the charset table file is very big, and I doubt whether I can load it into a String or StringBuffer. Does anyone have a solution? How do I load a very big file and randomly select several characters from it?
    Have I made myself understood?
    Thanks in advance,
    George

    The following gives the correspondence between GB2312-encoded byte arrays and characters (as hexadecimal integers); a sketch for the random-selection part follows after the code.
    import java.nio.charset.*;
    import java.io.*;
    public class GBs {
        static String convert() throws java.io.UnsupportedEncodingException {
            StringBuffer buffer = new StringBuffer();
            String l_separator = System.getProperty("line.separator");
            Charset chset = Charset.forName("EUC_CN"); // GB2312 is an alias of this encoding
            CharsetEncoder encoder = chset.newEncoder();
            int[] indices = new int[Character.MAX_VALUE + 1];
            for (int j = 0; j < indices.length; j++) {
                indices[j] = 0;
            }
            // Mark every character that the GB2312 (EUC_CN) encoder can encode.
            for (int j = 0; j <= Character.MAX_VALUE; j++) {
                if (encoder.canEncode((char) j)) indices[j] = 1;
            }
            byte[] encoded;
            for (int j = 0; j < indices.length; j++) {
                if (indices[j] == 1) {
                    encoded = (Character.toString((char) j)).getBytes("EUC_CN");
                    for (int q = 0; q < encoded.length; q++) {
                        buffer.append(Byte.toString(encoded[q]));
                        buffer.append(" ");
                    }
                    buffer.append(": 0x");
                    buffer.append(Integer.toHexString(j));
                    buffer.append(l_separator);
                }
            }
            return buffer.toString();
        }
        // the following is for testing
        /*
        public static void main(String[] args) throws java.lang.Exception {
            String str = GBs.convert();
            System.out.println(str);
        }
        */
    }
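    George's actual goal, picking a few random GB2312 characters, doesn't need the whole table in a file at all. A minimal sketch building on the same canEncode() test (class and method names are made up for illustration):
    import java.nio.charset.*;
    import java.util.*;
    public class RandomGbString {
        // Build the pool of GB2312-encodable characters once, then pick some at random.
        // (Filter the pool further if only the Chinese range is wanted; GB2312 also covers ASCII.)
        public static String randomString(int length) {
            CharsetEncoder encoder = Charset.forName("EUC_CN").newEncoder(); // GB2312 alias
            List<Character> pool = new ArrayList<Character>();
            for (int c = 0; c <= Character.MAX_VALUE; c++) {
                if (encoder.canEncode((char) c)) {
                    pool.add(Character.valueOf((char) c));
                }
            }
            Random random = new Random();
            StringBuffer sb = new StringBuffer();
            for (int i = 0; i < length; i++) {
                sb.append(pool.get(random.nextInt(pool.size())));
            }
            return sb.toString();
        }
        public static void main(String[] args) {
            System.out.println(randomString(5));
        }
    }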

  • Upload of very big files (300MB+)

    Hello all,
    I am trying to create an application in HTML DB to store files in the database (as BLOBs) via a web browser. I created all the needed components and am now stuck with the problem of uploading big files. Basically, when a file is over 100 MB the upload becomes unreliable, and for really big files it does not work at all. Can somebody help me figure out how to upload big files into an HTML DB application? Any hints and suggestions are welcome. Examples will be even more appreciated.
    Sincerely,
    Ian

    Ian,
    When you say "big files does not work at all", what do you see in the browser? Is no page returned at all?
    When a file is uploaded, it takes some amount of time to simply transfer the file from the client to modplsql. If you're on a local Gbit network, this is probably fast. If you're doing this over a WAN or over the Internet, this is probably fairly slow. As modplsql gets this uploaded file, it writes it to a temporary BLOB. Once fully received, modplsql will then insert this into the HTML DB upload table.
    I suspect that the TimeOut directive in Apache/Oracle HTTP Server is kicking in here. The default setting for this is 300 (5 minutes).
    I believe the timeout is reset by modplsql during the file transfer to avoid a timeout while data is still being sent. Hence, I believe the insertion of your large file into the file upload table is taking longer than the TimeOut value allows.
    The easy answer is to consider increasing the TimeOut directive for Apache/Oracle HTTP Server (see the example below).
    The not so easy answer is to investigate why it takes so long for this insert, and tune the database accordingly.
    Hope this helps.
    Joel
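    For reference, the directive Joel mentions lives in httpd.conf (or the Oracle HTTP Server equivalent) and takes a value in seconds; the number below is only an example, not a recommendation:
    # httpd.conf -- default is 300 seconds (5 minutes); raise it to give very large
    # uploads more time to be received and inserted.
    Timeout 1800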

  • How can I store a very big file in Oracle XML DB?

    Hello,
    I'm looking for a fast way to store an XML file in Oracle 10g XE. I tried to store a 500 KB file in the database as an XMLType or a CLOB, but I keep getting the same error: "ORA-01704: string literal too long". I have been looking for a long time for a way to store this file, and another one (113 MB), in the database. I searched Google to see if any solution is available, and the only solution I found is to split the document in a loop (due to the 32 KB limit). But that solution doesn't allow storage with an XML schema and is too slow.
    Here is an example of how I did it (but it didn't work):
    create table Mondial(Nr int, xmldata xmltype);
    INSERT INTO Mondial VALUES (1, 'big xml file');
    I also tried the alternative with a bind variable, like this:
    create or replace PROCEDURE ProcMondial IS
    poXML CLOB;
    BEGIN
    poXML := 'big xml file';
    INSERT INTO Mondial VALUES (1, XMLTYPE(poXML));
    EXCEPTION
    WHEN OTHERS THEN
    raise_application_error(-20101, 'Exception occurred in Mondial procedure :'||SQLERRM);
    END ProcMondial;
    I get the same error again: string literal too long!
    I am using SQL Developer for the query.
    Please help me, I'm desperate.
    thanks!
    Michael

    If you use the suggested statement
    create table Mondial(Nr int, xmldaten xmltype) TABLESPACE mybigfile;
    then I hope for your sake that this XML content will not be used for content-driven operations such as selecting, updating or deleting parts of the XML data. The default for "xmltype" is a CLOB physical representation which, unless you are on 11g and combine it with an XMLIndex, is only useful for document-driven XML storage. That means you ALWAYS delete, select, update or insert the WHOLE XML content (per record). If that is not your intent, you will run into performance problems.
    Instead, use Object-Relational or Binary XML (11g) storage for the XMLType datatype, or XMLType in conjunction with CLOB-based storage AND an XMLIndex if you are on 11g. Carefully read the first and/or second chapter of the 10g/11g XML DB Developer's Guide, and carefully choose the XMLType storage (and/or XML design) you need if you don't want to be disappointed with the end result and/or have to redesign your XML tables. (Regarding the ORA-01704 error itself, see the bind-variable sketch below.)
    A short introduction on the possibilities can be found here (not only 11g related btw): http://www.liberidu.com/blog/?p=203
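    As an aside on the ORA-01704 error in the question: it is raised because the whole document is inlined as a SQL string literal, and binding the content instead avoids that limit. A rough JDBC sketch (table and column names come from the question; the connection details and file name are placeholders, and for the really big 113 MB document a BFILE/DBMS_LOB based load may still be preferable):
    import java.io.File;
    import java.io.FileReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    public class LoadMondial {
        public static void main(String[] args) throws Exception {
            File xmlFile = new File("mondial.xml");
            Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:XE", "user", "password");
            try {
                // Bind the XML as a character stream instead of embedding it in the SQL text.
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO Mondial (Nr, xmldata) VALUES (?, XMLTYPE(?))");
                ps.setInt(1, 1);
                ps.setCharacterStream(2, new FileReader(xmlFile), (int) xmlFile.length());
                ps.executeUpdate();
                ps.close();
                conn.commit();
            } finally {
                conn.close();
            }
        }
    }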

  • Split a big file into 3 files?

    Hello,
    Is there any software I can use to split a very big file into
    3 or 4 small files, please?
    Thanks.

    Use one of the available file-splitting utilities to split the file; a minimal line-based splitter sketch is also shown below.
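    If no ready-made splitter is at hand, a plain line-based splitter is only a few lines of code. A rough sketch in Java (file names and the number of parts are made up; it assumes a text file that can be split on line boundaries):
    import java.io.*;
    public class SplitFile {
        public static void main(String[] args) throws IOException {
            File input = new File("big.txt");
            int parts = 3;
            long linesPerPart = Math.max(1, (countLines(input) + parts - 1) / parts);
            BufferedReader reader = new BufferedReader(new FileReader(input));
            PrintWriter out = null;
            try {
                String line;
                long lineNumber = 0;
                while ((line = reader.readLine()) != null) {
                    if (lineNumber % linesPerPart == 0) {
                        if (out != null) out.close();
                        out = new PrintWriter(new FileWriter("part" + (lineNumber / linesPerPart + 1) + ".txt"));
                    }
                    out.println(line);
                    lineNumber++;
                }
            } finally {
                reader.close();
                if (out != null) out.close();
            }
        }
        // First pass just counts lines so the parts come out roughly equal.
        private static long countLines(File file) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader(file));
            try {
                long count = 0;
                while (reader.readLine() != null) count++;
                return count;
            } finally {
                reader.close();
            }
        }
    }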

  • Database Log File becomes very big, What's the best practice to handle it?

    The log file of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP but familiar with SQL Server; can anybody give me advice on the best practice for handling this issue?
    Should I Shrink the Database?
    I know a bigger hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge because regular transaction log backups are not being taken. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is cleared only when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into shape:
    1) Take a transaction log backup.
    2) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command shrinks the file to 10 GB (the size argument is in MB; 10 GB is a reasonable size for highly transactional systems).
    Finke Xie wrote:
    > Should I Shrink the Database?
    Never shrink the data files; shrink only the log file.
    3) Schedule log backups every 15 minutes.
    Thanks
    Mush

  • Very Big Videora MP4 File not copying to iTunes

    Hi,
    I have followed your exact steps from the Videora guide, the problem I am experiencing is that when I drag and drop my MPEG4 file into iTunes, nothing happens.
    When dropped into any of the valid directories in iTunes, the + icon shows that the action is valid, but I can't see that it has put my movie into iTunes anywhere?!
    The only thing I can think of is that the movie file I'm adding is rather large, 2.3 GB. In fact, all the DVDs I'm trying to convert are massive and take hours?!
    Has anyone else experienced this?
    Help?
    theOne

    You may want to check your Videora settings. The file size you speak of sounds like a very uncompressed file. 2.3 GB is bigger than any movie file I have ever had on my computer, and once I run films through Videora it usually makes them about a third of their original size.

  • Photoshop CC slow in performance on big files

    Hello there!
    I've been using PS CS4 since release and upgraded to the CS6 Master Collection last year.
    Since my OS broke down some weeks ago (RAM broke), I gave Photoshop CC a try. At the same time I moved into new rooms and couldn't get my hands on the DVD of my CS6, resting somewhere at home...
    So I tried CC.
    Right now I'm using it with some big files. File size is between 2 GB and 7.5 GB max (all PSB).
    Photoshop seemed to run fast in the very beginning, but for a few days now it has been so unbelievably slow that I can't work properly.
    I wonder if it is caused by the growing files or by some other issue with my machine.
    The files contain a large number of layers and masks, nearly 280 layers in the biggest file (mostly with masks).
    The images are 50 x 70 cm at 300 dpi.
    When I try to make some brush strokes on a layer mask in the biggest file it takes 5-20 seconds for the brush to draw... I couldn't figure out why.
    And it doesn't depend on the brush size as much as you might expect... even very small brushes (2-10 px) show this issue from time to time.
    Also, switching masks (gradient maps, selective color or levels) on and off takes ages to be displayed, sometimes more than 3 or 4 seconds.
    The same goes for panning around in the picture, zooming in and out, or moving layers.
    It's nearly impossible to work on these files in a reasonable time.
    I've never seen this in CS6.
    Now I wonder if there's something wrong with PS or the OS. But I've never worked with files this big before.
    In March I worked on some 5 GB files with 150-200 layers in CS6, and it worked like a charm.
    SystemSpecs:
    i7 3930K (3.8 GHz)
    Asus P9X79 Deluxe
    64GB DDR3 1600Mhz Kingston HyperX
    GTX 570
    2x Corsair Force GT3 SSD
    Wacom Intuos 5 M Touch (I have some issues with the touch from time to time)
    WIN 7 Ultimate 64
    all systemupdates
    newest drivers
    PS CC
    System and PS are running on the first SSD, scratch is on the second. Both are set to be used by PS.
    79% of the RAM is allocated to PS, the cache level is set to 5 or 6, and history states are set to 70. I also tried different cache tile sizes from 128K to 1024K, but it didn't help a lot.
    When I open the largest file, PS takes 20-23 GB of RAM.
    Any suggestions?
    best,
    moslye

    Is it just slow drawing, or is actual computation (image size, rotate, GBlur, etc.) also slow?
    If the slowdown is drawing, then the most likely culprit would be the video card driver. Update your driver from the GPU maker's website.
    If the computation slows down, then something is interfering with Photoshop. We've seen some third party plugins, and some antivirus software cause slowdowns over time.

  • How to read big files

    Hi all,
    I have a big text file (about 140MB) containing data I need to read and save (after analysis) to a new file.
    The text file contains 4 columns of data (so each row has 4 values to read).
    When I try to read the whole file at once I get a "Memory full" error message.
    I tried to read only a certain number of lines each time and then write them to the new file. This is done using the loop in the attached picture (this is just a portion of the code). The loop is repeated as many times as needed.
    The problem is that for such big files this method is very slow, and if I try to increase the number of lines read each time, I still see the PC's free memory slowly descending in the performance window....
    Does anybody have a better idea how to implement this kind of task?
    Thanks,
    Mentos.
    Attachments:
    Read a file portion.png (13 KB)

    Hi Mark & Yamaeda,
    I made some tests and came up with 2 different approaches - see the VIs & example data file attached.
    The Read lines approach.vi reads a chunk with a specified number of lines, parses it and then saves the chunk to a new file.
    This worked more or less OK, depending on the delay. However, in reality I'll need to write the first 2 columns to the file and only after that the 3rd and 4th columns. So I think I'll need to read the file twice - the first time take the first 2 columns and save them to the file, then repeat the loop, take the other 2 columns and save them... (a sketch of avoiding the second read appears after the attachments below).
    Regarding the free memory: I see it drops a bit during the run and goes up again once I run the VI another time.
    The Read bytes approach reads a specified number of bytes in each chunk until it finishes reading the whole file. Only then does it save the chunks to the new file. No parsing is done here (just for the example), just reading & writing, to see if the free memory stays the same.
    I used 2 methods for saving - the string subset function and the replace substring function.
    When using replace substring (the disabled part) the free memory was 100% stable, but it worked very slowly.
    When using the string subset function the data was saved VERY fast but some free memory was consumed.
    The reading part also consumed some free memory. The rate depended on the delay I used.
    Which method looks better?
    What do you recommend changing?
    Attachments:
    Read lines approach.vi (17 KB)
    Read bytes aproach.vi (17 KB)
    Test file.txt (1 KB)
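    The attached VIs are LabVIEW-specific, but the "read the file twice" concern above can be avoided in general by writing the two column pairs to two separate output files in a single pass and concatenating them afterwards. A rough sketch of that idea in Java (file names are made up; it assumes whitespace-separated columns):
    import java.io.*;
    public class SplitColumns {
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
            PrintWriter firstTwo = new PrintWriter(new BufferedWriter(new FileWriter("cols12.txt")));
            PrintWriter lastTwo = new PrintWriter(new BufferedWriter(new FileWriter("cols34.txt")));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] cols = line.trim().split("\\s+");
                    if (cols.length < 4) {
                        continue; // skip malformed rows in this sketch
                    }
                    // Columns 1-2 go to one file, columns 3-4 to the other, in one pass.
                    firstTwo.println(cols[0] + "\t" + cols[1]);
                    lastTwo.println(cols[2] + "\t" + cols[3]);
                }
            } finally {
                reader.close();
                firstTwo.close();
                lastTwo.close();
            }
        }
    }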

  • Very Big Cell when export in Excel

    Dear Tech guys,
    I use VS 2008 with VB and CR 2008.
    The Crystal report and the PDF export are OK, but when I export the report to Excel I have the problems below.
    The report is a delivery note with 7 columns and many rows.
    1. On all pages the page numbers are lost, except for the last page.
    2. After the last row, Excel has a very tall row (height > 300). Because of this, Excel creates a new empty page, and at the bottom of that new page I see the page number (Page 5 of 5).
    Can you help me with this problem?
    I have had this problem since the last update (Service Pack 3).
    Visual Studio 2008: 9.030729.1 SP 1
    Crystal Reports 2008: 12.3.0.601
    Thank you in advance.
    Best regards,
    Navarino Technology Dept.

    Dear all, good morning from Greece.
    First of all, I would like to thank you for your quick response.
    Dear Ludek Uher,
    1. Yes, this is from a .NET (3.5) application with VB.
    2. I do the export via code.
    3. From the CR designer I have the same problem.
    Dear David Hilton,
    The photo is not working.
    I found the option "On Each Page" in the CR designer and changed it. Now I get the page number on every page, but I can see that something is wrong with the page height and with the end of the page in the report.
    I will try to show you the problem of the Excel file, after the option "On Each Page":
    Header........................
                      Field1     field2    field3 ......
    row1 .......
    row2 ......
    row3.....
    row56 ......
    {end of page 1}
    {new page 2}
    row57
    row58
    row59
    Page 1 of 4 (the footer of the first page should be on the first page, but it is shown on the second page)
    row60
    row61
    row62
    {end of page 2}
    {new page 3}
    row110
    row111
    row112
    Page 2 of 4 (the footer of the second page should be on the second page, but it is shown on the third page)
    row140
    row141
    row142
    {end of page 3}
    {new page 4}
    and go on.....
    I hope this helped.
    If I change the margins in Page Break Preview in Excel, the pages are OK. So I think something conflicts with the end of the page; the report does not detect the end of the page.
    If there is a way to send you the file or screen shots, please tell me how.
    Thank you in advance again.
    Best regards,
    Navarino Technology Dept.

  • Big file export issue

    Hello,
    We have developed a pair of import/export plug-ins in order to support an in-house format. For several reasons, which are not explained here, we had to develop import/export plug-ins instead of a single format plug-in. Our in-house file format is designed for aerial and satellite images and supports very large files, which can be more than 100,000 x 100,000 pixels in size.
    The import plug-in works fine with large images but, unfortunately, we cannot export these images because the export plug-in is grayed out in the drop-down list when a large image is loaded. We have tried several versions of Photoshop up to CS6, but the problem remains. We haven't found any attribute in the export plug-in to indicate that it supports large images.
    Has anyone got an idea ?
    Thanks,
    Bruno

    Heh, we also run into the same issue with our Geographic Imager plugin when exporting georeferenced files that exceed the 30,000 pixel limit in either height or width (yeah, a common case for aerial or satellite data). PS indeed disables the menu and I'm unaware of any workaround for it (I'd also love to know if there is one). What's more important, though, is that Photoshop doesn't actually disable the export plugin in this case - you can still run it through scripts or actions. And this is why for us specifically this is not really a big deal, because we provide access to all our functionality, including Export, via our own panel.
    Here http://forums.adobe.com/thread/745904 Chris mentioned a PIPL property that was supposed to exist that limits the exported file size, but I think the overall conclusion was that it didn't make it into the release, so unless he or Tom enlightens us here about another magic property, there may be no better solution to it.
    ivar

  • What Is A big file?

    Hi All. I'm making a smart folder for big files when the question hit me. When does a file become a big file? 100MB? 500MB? 1GB? I would love to hear anyone's opinions.

    It's all relative. It's fair to define the relative size of a file in terms of entropy and utility, both of which are measurable (entropy empirically, utility through observation); a small sketch for estimating entropy empirically is shown below.
    The entropy is a measure of the randomness of the data. If the data is regular, it should be compressible. High entropy (1 bit of information per bit of storage) means a good use of space; low entropy (near 0 per bit) means waste.
    Utility is simple: what fraction of the bits in a file are ever used for anything. Even if a file has high entropy, if you never refer to a part of it, its existence is useless.
    You can express the qualitative size of a file as the product of the entropy and the utility. Zero entropy means the file contains just about no information, so regardless of how big it is, it's larger than it needs to be. If you never access even one bit of a file, it's also too big, regardless of how many bits are in it. If you use 100% (1 bit per bit) of a file but its entropy is 0.5 per bit, then the file is twice as big as it needs to be to represent the information contained in it.
    An uncompressed bitmap is large, whereas a run-length encoded bitmap that represents the exact same data is small.
    An MP3 file is based on the idea that you can throw away information in a sound sample to make it smaller and still generate something that sounds very similar to the original: two files represent the same sound, but the MP3 is smaller because you can sacrifice some bits by lowering the precision with which you represent the original (taking advantage of the fact that human perception of the differences is limited).
    So, 100M would seem like a ridiculous size for a forum post, but it sounds small for a data file from a super-high resolution mass spectrometer.
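    A rough, non-authoritative sketch of "measuring entropy empirically": estimate the Shannon entropy per byte from a file's byte-frequency histogram (in Java; the class name is illustrative). A result near 8 bits per byte corresponds to the "1 per bit" end of the scale above, a result near 0 to pure redundancy:
    import java.io.FileInputStream;
    import java.io.IOException;
    public class FileEntropy {
        public static double bitsPerByte(String path) throws IOException {
            long[] counts = new long[256];
            long total = 0;
            FileInputStream in = new FileInputStream(path);
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    for (int i = 0; i < n; i++) {
                        counts[buf[i] & 0xff]++;
                    }
                    total += n;
                }
            } finally {
                in.close();
            }
            double entropy = 0.0;
            for (int i = 0; i < 256; i++) {
                if (counts[i] > 0) {
                    double p = (double) counts[i] / total;
                    entropy -= p * (Math.log(p) / Math.log(2.0));
                }
            }
            return entropy; // 8.0 = maximally random bytes, 0.0 = a single repeated byte
        }
        public static void main(String[] args) throws IOException {
            System.out.println(bitsPerByte(args[0]) + " bits of information per byte");
        }
    }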

  • Big File vs Small file Tablespace

    Hi All,
    I have a doubt and just want to confirm which is better: using a bigfile tablespace instead of many small datafiles for a tablespace, or using big datafiles for a tablespace. I think it is better to use a bigfile tablespace.
    Kindly help me figure out whether I am right or wrong, and why.

    GirishSharma wrote:
    Aman.... wrote:
    Vikas Kohli wrote:
    With respect to performance i guess Big file tablespace is a better option
    Why ?
    If you allow me, I would like to paste the text below from the doc link in my first reply:
    "Performance of database opens, checkpoints, and DBWR processes should improve if data is stored in bigfile tablespaces instead of traditional tablespaces. However, increasing the datafile size might increase time to restore a corrupted file or create a new datafile."
    Regards
    Girish Sharma
    Girish,
    I find it interesting that I've never found any evidence to support the performance claims - although I can think of reasons why there might be some truth to them and could design a few tests to check. Even if there is some truth in the claims, how significant or relevant might they be in the context of a database that is so huge that it NEEDS bigfile tablespaces ?
    Database opening: how often do we do this - does it matter if it takes a little longer - will it actually take noticeably longer if the database isn't subject to crash recovery? We can imagine that a database with 10,000 files would take longer to open than a database with 500 files if Oracle had to read the header blocks of every file as part of the database open process - but there's been a "delayed open" feature around for years, so maybe that wouldn't apply in most cases where the database is very large.
    Checkpoints: critical in the days when a full instance checkpoint took place on the log file switch - but (a) that hasn't been true for years, (b) incremental checkpointing made a big difference to the I/O peak when an instance checkpoint became necessary, and (c) we have had a checkpoint process for years (if not decades) which updates every file header when necessary rather than requiring DBWR to do it.
    DBWR processes: why would DBWn handle writes more quickly? The only idea I can come up with is that there could be some code path that has to associate a file id with an operating system file handle of some sort, and that this code does more work if the list of files is very long: very disappointing if that's true.
    On the other hand, I recall many years ago (8i time) crashing a session when creating roughly 21,000 tablespaces for a database because some internal structure relating to file information reached the 64MB hard limit for a memory segment in the SGA. It would be interesting to hear if anyone has recently created a database with the 65K+ limit for files - and whether it makes any difference whether that's 66 tablespaces with about 1,000 files each, or 1,000 tablespaces with about 66 files each.
    Regards
    Jonathan Lewis
