How the data buffer cache is handled

Hi all,
I have a question about how the buffer cache is managed. I read that a server process reads blocks from the data files and places them in the buffer cache if they are not already there. Suppose two users in different sessions issue the same statement at the same time, the statements are picked up by different server processes, and on checking both find that the block is not present in the SGA, so both go and read the block. Isn't that unnecessary, or am I wrong? Please let me know how this is handled.
Regards
Suresh

Hi tinku,
I understand what you mean, but my problem is different.
Suppose there are two sessions.
The first session issues a statement:
Select * from emp where empno = 12;
and the second session issues:
select * from emp where empno in (12,13);
Because these two statements are different, they are parsed separately. After parsing, each server process checks whether the block containing empno 12 is already in memory; if it is not, it reads the block from the data file and places it in the buffer cache. Suppose both server processes find at the same time that the block is not in the SGA and both try to read it from the data file; there would then be two copies of the block in the SGA, wouldn't there? I don't think that is what actually happens, so I need to know how the server processes coordinate with each other to read only the necessary blocks, without redundancy.
Regards
Suresh
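For what it's worth, Oracle serializes this through the cache buffers chains latches and per-buffer state: the first process allocates a buffer and pins it while the I/O is in flight, and the second process finds that in-flight buffer and simply waits for the read to complete instead of issuing its own read. On 10g and later that wait is visible as the 'read by other session' event (older releases counted it under 'buffer busy waits'). A hedged way to observe it on a live instance (requires access to the V$ views):

```sql
-- Sessions currently waiting for a block another session is reading in
SELECT sid, event, p1 file#, p2 block#, wait_time
FROM   v$session_wait
WHERE  event = 'read by other session';

-- Cumulative count of such waits since instance startup
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event = 'read by other session';
```

If these views show activity, it means the second reader really did wait on the first rather than performing a duplicate read.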

Similar Messages

  • Data Buffer Cache!

    Hi Guru(s),
    Can anyone please tell me how we can check the size of the data buffer cache?
    Please help.
    Regards,
    Rajeev K

    Hi,
    Here is what I would do:
    First, let's have a look at what views we have available to us:
    SQL> select table_name from dict where table_name like '%SGA%';
    TABLE_NAME
    DBA_HIST_SGA
    DBA_HIST_SGASTAT
    DBA_HIST_SGA_TARGET_ADVICE
    V$SGA
    V$SGAINFO
    V$SGASTAT
    V$SGA_CURRENT_RESIZE_OPS
    V$SGA_DYNAMIC_COMPONENTS
    V$SGA_DYNAMIC_FREE_MEMORY
    V$SGA_RESIZE_OPS
    V$SGA_TARGET_ADVICE
    GV$SGA
    GV$SGAINFO
    GV$SGASTAT
    GV$SGA_CURRENT_RESIZE_OPS
    GV$SGA_DYNAMIC_COMPONENTS
    GV$SGA_DYNAMIC_FREE_MEMORY
    GV$SGA_RESIZE_OPS
    GV$SGA_TARGET_ADVICE
    19 rows selected.

    v$sgainfo looks pretty promising, what does it contain?
    SQL> desc v$sgainfo
    Name                                                              Null?    Type
    NAME                                                                       VARCHAR2(32)
    BYTES                                                                      NUMBER
    RESIZEABLE                                                                 VARCHAR2(3)

    And how many rows are in it?
    SQL> select count(*) from v$sgainfo;
      COUNT(*)
            12

    Not too many, so let's look at them all, with a little formatting:
    SQL> select name,round(bytes/1024/1024,0) MB, resizeable from v$sgainfo order by name;

    Hope that helps,
    Rob
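An even quicker check, assuming you have access to the V$ views: the buffer cache shows up as a single row in v$sgastat and in v$sga_dynamic_components:

```sql
-- Size of the default buffer cache, in MB
SELECT name, round(bytes/1024/1024) mb
FROM   v$sgastat
WHERE  name = 'buffer_cache';

-- The same figure from the dynamic-components view
SELECT component, round(current_size/1024/1024) mb
FROM   v$sga_dynamic_components
WHERE  component = 'DEFAULT buffer cache';
```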

  • Data Buffer Cache Error Message

    I'm using a load rule that builds a dimension on the fly and getting the following error: "Not enough memory to allocate the Data Buffer Cache [adDatInitCacheParamsAborted]". I've got 4 other databases which are set up the same as this one and I'm not getting this error. I've checked all the settings and I think they're all the same. Anyone have any idea what this error could mean? I can be reached at [email protected]

    Hi,
    Same issue, running Vista too.  This problem is recent.  It may be due to the last itunes update.  itunes 11.2.23

  • Data buffer cache has undo information...??

    Does the data buffer cache in the SGA hold any undo information?

    920273 wrote:
    In the SGA we have the data buffer cache. If I am updating a block in the buffer cache, is any undo information maintained in the buffer cache for the change I have made to the block?
    Let's take an example: I am updating a block, changing the value '2' to '4', and DBWR has not written it to the data files. In this case, is any undo information maintained in the buffer cache?

    Hmmm... When you update the value 2 to 4 and it is still uncommitted, the transaction is active and the modified block stays in the buffer cache. Of course, if the cache is full or a checkpoint is performed, this modified (even uncommitted) data is also written to the data files. So you should not assume that DBWR has not written to the data files; you cannot predict when the cache will fill up or when a checkpoint will occur. And yes, the undo generated for the change is stored in undo segment blocks, which are cached in the buffer cache just like data blocks.
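As a hedged way to see undo-related buffers in the cache: v$bh tags every cached buffer with a block class, and on common releases class# 1 is an ordinary data block while the higher class numbers correspond to undo segment headers and undo blocks (treat the exact numbering as release-dependent):

```sql
-- Count cached buffers by block class; the high-numbered classes
-- are undo segment headers/blocks (numbering is release-dependent)
SELECT class#, count(*) AS buffers
FROM   v$bh
GROUP  BY class#
ORDER  BY class#;
```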

  • Data Buffer Cache Quality

    Hi All,
    Can somebody please tell me some ways in which I can improve the data buffer cache quality? Presently it is 51.2%. The DB is 10.2.0.2.0.
    I want to know what factors I need to keep in mind if I want to increase DB_CACHE_SIZE.
    Also, I want to know how I can find out the cache hit ratio.
    Further, I want to know which are the most frequently accessed objects in my DB.
    Thanks and Regards,
    Nick.

    Nick-- wud b DBA wrote:
    Hi Aman,
    Thanks. Can you please give the appropriate query for that?
    And moreover when i'm giving:
    SQL>desc V$SEGMENT-STATISTICS; It is giving the following error:
    SP2-0565: Illegal identifier.
    Regards,
    Nick.

    LOL, dude, I put that in by mistake. It is a dash (-) but we need an underscore (_).
    About the query, it varies with what you really mean by "most used object". If you mean the objects that are undergoing lots of reads and writes, this may help:
    SELECT Rownum AS Rank, Seg_Lio.*
    FROM   (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name,
                   St.VALUE, 'LIO' AS Unit
            FROM   V$segment_Statistics St
            WHERE  St.Statistic_Name = 'logical reads'
            ORDER  BY St.VALUE DESC) Seg_Lio
    WHERE  Rownum <= 10
    UNION ALL
    SELECT Rownum AS Rank, Seq_Pio_r.*
    FROM   (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name,
                   St.VALUE, 'PIO Reads' AS Unit
            FROM   V$segment_Statistics St
            WHERE  St.Statistic_Name = 'physical reads'
            ORDER  BY St.VALUE DESC) Seq_Pio_r
    WHERE  Rownum <= 10
    UNION ALL
    SELECT Rownum AS Rank, Seq_Pio_w.*
    FROM   (SELECT St.Owner, St.Obj#, St.Object_Type, St.Object_Name,
                   St.VALUE, 'PIO Writes' AS Unit
            FROM   V$segment_Statistics St
            WHERE  St.Statistic_Name = 'physical writes'
            ORDER  BY St.VALUE DESC) Seq_Pio_w
    WHERE  Rownum <= 10;

    But if you are looking for the objects which are most heavily involved in waits, this query may help:

    select *
    from (select decode(grouping(a.object_name), 1, 'All Objects', a.object_name) as "Object",
                 sum(case when a.statistic_name = 'ITL waits'
                          then a.value else null end) "ITL Waits",
                 sum(case when a.statistic_name = 'buffer busy waits'
                          then a.value else null end) "Buffer Busy Waits",
                 sum(case when a.statistic_name = 'row lock waits'
                          then a.value else null end) "Row Lock Waits",
                 sum(case when a.statistic_name = 'physical reads'
                          then a.value else null end) "Physical Reads",
                 sum(case when a.statistic_name = 'logical reads'
                          then a.value else null end) "Logical Reads"
          from   v$segment_statistics a
          where  a.owner like upper('&owner')
          group  by rollup(a.object_name)) b
    where (b."ITL Waits" > 0 or b."Buffer Busy Waits" > 0);

    This query's reference: http://www.dba-oracle.com/t_object_wait_v_segment_statistics.htm
    So it would depend upon that on what ground you want to get the objects.
    About the cache increase, are you seeing any wait events related to buffer cache or DBWR in the statspack report?
    HTH
    Aman....
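Nick also asked how to find the cache hit ratio. The usual sketch, computed from the v$sysstat counters (the ratio is a rough indicator only; a low value is not by itself a problem):

```sql
-- Buffer cache hit ratio = 1 - physical reads / (db block gets + consistent gets)
SELECT round((1 - phy.value / (cur.value + con.value)) * 100, 2)
         AS "Buffer Cache Hit Ratio %"
FROM   v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    cur.name = 'db block gets'
AND    con.name = 'consistent gets';
```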

  • Data Buffer Cache Limit

    Is there any way I could signal the data buffer cache to write all data to the data files once the amount of dirty blocks reaches, say, 50 MB?
    I am processing BLOBs, one blob at a time, some of which exceed 100 MB in size, and the difficult thing is that I cannot write to disk until the whole blob is finished, as it is one transaction.
    Well, if anyone is going to suggest open, process, close, commit... well, I tried that, but it also gives the error "no free buffers in buffer pool", and this comes for a file twice the size of the buffer: a 100 MB file when db_cache_size is 50 MB.
    Any ideas?

    Hello,
    I am using Oracle 9.0.1.3.1.
    I am getting error ORA-00379: no free buffers available in buffer pool DEFAULT for block size 8K.
    My init.ora file is:
    # Copyright (c) 1991, 2001 by Oracle Corporation
    # Cache and I/O
    db_block_size=8192
    db_cache_size=104857600
    # Cursors and Library Cache
    open_cursors=300
    # Diagnostics and Statistics
    background_dump_dest=C:\oracle\admin\iasdb\bdump
    core_dump_dest=C:\oracle\admin\iasdb\cdump
    timed_statistics=TRUE
    user_dump_dest=C:\oracle\admin\iasdb\udump
    # Distributed, Replication and Snapshot
    db_domain="removed"
    remote_login_passwordfile=EXCLUSIVE
    # File Configuration
    control_files=("C:\oracle\oradata\iasdb\CONTROL01.CTL", "C:\oracle\oradata\iasdb\CONTROL02.CTL", "C:\oracle\oradata\iasdb\CONTROL03.CTL")
    # Job Queues
    job_queue_processes=4
    # MTS
    dispatchers="(PROTOCOL=TCP)(PRE=oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=oracle.aurora.server.SGiopServer)"
    # Miscellaneous
    aq_tm_processes=1
    compatible=9.0.0
    db_name=iasdb
    # Network Registration
    instance_name=iasdb
    # Pools
    java_pool_size=41943040
    shared_pool_size=33554432
    # Processes and Sessions
    processes=150
    # Redo Log and Recovery
    fast_start_mttr_target=300
    # Sort, Hash Joins, Bitmap Indexes
    pga_aggregate_target=33554432
    sort_area_size=524288
    # System Managed Undo and Rollback Segments
    undo_management=AUTO
    undo_tablespace=UNDOTBS

  • ANN: Learn how to invalidate cache and handle exceptions

    http://otn.oracle.com/sample_code/tech/java/codesnippet/webcache/index.html
    This set of new OracleAS Web Cache how-tos illustrates how to invalidate cached content using esi:invalidate and PL/SQL invalidation tags. Also, learn how to handle ESI exceptions using the esi:try, esi:accept, and esi:except tags.
    Cheers,
    -Srikanth

    You may use either of two ways:
    1. Use the interface. Go to the Navigator, go to the particular page where that portlet is, and edit it. Go to Properties/Cache. See the very bottom of the page for how to clear the cache for that page.
    2. See details of the api function for cache invalidation.
    wwpro_api_invalidation.execute_cache_invalidation;

  • Database buffer Cache

    Hi Guru,
    Can anyone tell me the actual definition of the data buffer cache and the log buffer cache, and how they work in Oracle 10g?
    Please
    Regards,
    Rajeev,India
    Edited by: 970371 on Nov 8, 2012 7:06 PM

    vlethakula wrote:
    The data buffer cache contains the blocks which are read from the physical data files; that is, the database buffer cache contains buffers that hold the blocks read from disk.
    The log buffer contains changes made to the database.
    e.g.: You try to update a row; the changes made to that row are written in the form of change vectors to the log buffer, and from there, on certain rules (like commit), the LGWR background process writes those changes from the log buffer to the redo log files.
    The block which is modified in the database buffer cache will be written to the physical files by the DBWR process after certain rules are met (like a checkpoint).

    The reason I didn't give the explanation, or the links containing the same from the docs, is that I wanted the OP to come up with some sort of his own understanding of the two caches first.
    Aman....

  • Buffer cache of SGA

    Can anyone please clarify my doubt: which parameter defines the size of the db buffer cache in the SGA? Does db_cache_size define the size directly, or is it the size of the cache of standard blocks (specified by the db_block_size parameter)?

    DB_BLOCK_BUFFERS specifies the number of blocks to allocate for the data buffer cache. This parameter's value is multiplied by DB_BLOCK_SIZE to calculate the size of the data buffer cache.
    DB_CACHE_SIZE specifies the size value itself, directly in units of KB, MB or GB. This parameter alone is enough to determine the data buffer cache size.
    DB_BLOCK_BUFFERS can only create a buffer cache in units of blocks, based on the single parameter DB_BLOCK_SIZE. On the other hand, multiple data buffer caches can be created by using the DB_nK_CACHE_SIZE parameters, where n is the block size for that buffer cache. So, for example, one can allocate X MB of buffer cache with an 8K block size and Y MB of buffer cache with 16K blocks. This helps when you have tablespaces of varying block sizes (this is not possible using DB_BLOCK_BUFFERS, as DB_BLOCK_SIZE is not modifiable).
    DB_CACHE_SIZE can work along with the SGA_TARGET parameter (which decides the SGA size). If DB_CACHE_SIZE is 0, its value varies based on usage. If a value is set, then that value becomes a minimum. This is not possible using DB_BLOCK_BUFFERS.
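As a sketch of the DB_nK_CACHE_SIZE point, assuming a database with an 8K standard block size (the tablespace and datafile names here are made up for illustration):

```sql
-- Give the standard (8K) cache a 512 MB minimum
ALTER SYSTEM SET db_cache_size = 512M;

-- Carve out a separate 128 MB cache for 16K blocks...
ALTER SYSTEM SET db_16k_cache_size = 128M;

-- ...which a 16K-block tablespace will then use
CREATE TABLESPACE ts_16k
  DATAFILE '/u01/oradata/ts_16k01.dbf' SIZE 1G
  BLOCKSIZE 16K;
```

Note that the db_16k_cache_size parameter must be set before a 16K-block tablespace can be created.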

  • Data being fetched bigger than DB Buffer Cache

    DB Version: 10.2.0.4
    OS : Solarit 5.10
    We have a DB with 1gb set for DB_CACHE_SIZE . Automatic Shared Memory Management is Disabled (SGA_TARGET = 0).
    If a query is fired against a table which is going to retrieve 2 GB of data, will that session hang? How will Oracle handle this?

    Tom wrote:
    If the retrieved blocks get automatically removed from the buffer cache after they are fetched, as per the LRU algorithm, then Oracle should handle this without any issues. Right?

    Yes. There is no requirement that the "size of a fetch" (e.g. selecting 2 GB worth of rows) fit completely in the db buffer cache (only 1 GB in size).
    As Sybrand mentioned, everything in that case will be flushed as newer data blocks are read... and those will be flushed again shortly afterwards as even newer data blocks are read.
    The cache hit ratio will thus be low.
    But this will not cause Oracle errors or problems; performance simply degrades as the volume of data being processed exceeds the capacity of the cache.
    It is like running a very large program that requires more RAM than is available on a PC. The "extra RAM" comes from the swap file on disk. The program will be slow, as its memory pages (some on disk) need to be swapped into and out of memory as needed. It would run faster if the PC had sufficient RAM, but the OS is designed to deal with exactly this situation, where more RAM is needed than is physically available.
    It is a similar situation when processing larger data chunks than the buffer cache has capacity for.

  • How to remove an object from Buffer Cache

    Hi,
    I have a simple question. How can I remove an object from the Buffer Cache in Oracle 10gR2 ?
    I am doing some tuning tasks in a shared development database, so I can't do "alter system flush shared_pool" because it will affect other people who are running their queries. So I want to remove from Buffer Cache only the objects that I know that I am the only reader. I can see the objects that I want to be removed by querying the V$BH view.
    By the way, I did some "alter system flush shared_pool" and my objects were not removed from the Buffer Cache, and they are not in the "Keep".
    Thanks In Advance,
    Christiano

    Furthermore, you can use CACHE | NOCACHE at the table level to indicate how you want Oracle to handle the data blocks of said table.
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2215507
    CACHE | NOCACHE | CACHE READS
    Use the CACHE clauses to indicate how Oracle Database should store blocks in the buffer cache. If you specify neither CACHE nor NOCACHE, then:
    In a CREATE TABLE statement, NOCACHE is the default
    In an ALTER TABLE statement, the existing value is not changed.
    CACHE For data that is accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the most recently used end of the least recently used (LRU) list in the buffer cache when a full table scan is performed. This attribute is useful for small lookup tables.
    As a parameter in the LOB_storage_clause, CACHE specifies that the database places LOB data values in the buffer cache for faster access.
    Restriction on CACHE You cannot specify CACHE for an index-organized table. However, index-organized tables implicitly provide CACHE behavior.
    NOCACHE For data that is not accessed frequently, this clause indicates that the blocks retrieved for this table are placed at the least recently used end of the LRU list in the buffer cache when a full table scan is performed. NOCACHE is the default for LOB storage.
    As a parameter in the LOB_storage_clause, NOCACHE specifies that the LOB value either is not brought into the buffer cache or is brought into the buffer cache and placed at the least recently used end of the LRU list. The latter is the default behavior.
    Restriction on NOCACHE You cannot specify NOCACHE for an index-organized table.
    CACHE READS CACHE READS applies only to LOB storage. It specifies that LOB values are brought into the buffer cache only during read operations but not during write operations.
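A minimal sketch of the clause in use (the table name is hypothetical):

```sql
-- Keep full-scan blocks of a small lookup table at the MRU end of the LRU list
ALTER TABLE lookup_codes CACHE;

-- Verify the attribute
SELECT table_name, cache FROM user_tables WHERE table_name = 'LOOKUP_CODES';

-- Revert to the default behavior
ALTER TABLE lookup_codes NOCACHE;
```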

  • 10G NEW FEATURE-HOW TO FLUSH THE BUFFER CACHE

    Product: ORACLE SERVER
    Date written: 2004-05-25
    10G NEW FEATURE - HOW TO FLUSH THE BUFFER CACHE
    ===============================================
    PURPOSE
    This note describes the Oracle 10g new feature that allows the buffer cache to be flushed manually.
    Explanation
    Introduced as a new feature in Oracle 10g, this lets you clear all data in the SGA's buffer cache by running a single command.
    The "alter system" privilege is required for this operation.
    The command to flush the buffer cache is shown below.
    Caution: this operation can affect database performance, so use it with care.
    SQL> alter system flush buffer_cache;
    Example
    Query x$bh to check the contents of the buffer cache; the x$bh view exposes the buffer cache header information.
    First, create a test table, insert some rows, and then query the dbarfil column (relative file number of the block) and file# from x$bh.
    1) Create the test table
    SQL> Create table Test_buffer (a number)
    2 tablespace USERS;
    Table created.
    2) Insert into the test table
    SQL> begin
    2 for i in 1..1000
    3 loop
    4 insert into test_buffer values (i);
    5 end loop;
    6 commit;
    7 end;
    8 /
    PL/SQL procedure successfully completed.
    3) Check the object_id
    SQL> select OBJECT_id from dba_objects
    2 where object_name='TEST_BUFFER';
    OBJECT_ID
    42817
    4) Query x$bh for the DBARFIL (file number of the block) values of the buffers currently in the cache.
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    TS# FILE# DBARFIL DBABLK CLASS STATE MODE_HELD   OBJ
      9    23      23   1297     8     1         0 42817
      9    23      23   1298     9     1         0 42817
      9    23      23   1299     4     1         0 42817
      9    23      23   1300     1     1         0 42817
      9    23      23   1301     1     1         0 42817
      9    23      23   1302     1     1         0 42817
      9    23      23   1303     1     1         0 42817
      9    23      23   1304     1     1         0 42817
    8 rows selected.
    5) Flush the buffer cache as follows and re-run the query above.
    SQL> alter system flush buffer_cache;
    SQL> select ts#,file#,dbarfil,dbablk,class,state,mode_held,obj
    2 from x$bh where obj= 42817;
    6) Check that the state column in x$bh is now 0.
    A state of 0 means a free buffer. Confirming that state is 0 after the flush verifies that the flush was indeed performed manually by the command.
    Reference Documents
    <NOTE. 251326.1>

    I am also having the same issue. Can this be addressed, or does BEA provide 'almost'
    working code for the bargain price of $80k/cpu?
    "Prashanth " <[email protected]> wrote:
    >
    Hi ALL,
    I am using the wl:cache tag for caching purposes. My requirement is such that I have to flush the cache based on user activity.
    I have tried all the combinations, but could not achieve the desired result.
    Can somebody guide me on how we can flush the cache?
    TIA, Prashanth Bhat.

  • How to convert data buffer of type "oracle::occi::OCCI_SQLT_NUM" to double

    Hello All,
    I am using bulk reading by utilizing the 'setDataBuffer()' method and fetching N records at a time.
    It works well for fields of Oracle type VARCHAR, when I set
    rs->setDataBuffer(col, buff, oracle::occi::OCCI_SQLT_CHR, size, lens, inds);
    and later iterate over and interpret the data within 'buff' like this:
    char* text = ((char*)buff) + size*i; // length is in the lens array
    When the column's data type is NUMBER, however, I am stuck.
    Indicating that data type to the data buffer works well:
    rs->setDataBuffer(col, buff, oracle::occi::OCCI_SQLT_NUM, size, lens, inds);
    I would now like to extract a value of C++ double type from that buffer.
    But I do not have an idea how to handle the data within buff. I suppose it has the NUMBER stored inside in some internal format (just as described in the Oracle C++ Call Interface Programmer's Guide, section 'Description of External Datatypes', NUMBER).
    Gestures like casting to double, or trying to interpret the buffer's contents as oracle::occi::Number, do not produce sensible results.
    Could you help me understand how I can obtain the double value from that buffer in a bulk read?

    I have the following code, which has been working for me. (Assume num, flt, dbl and str are arrays of ub4, float, double and character buffers sized for ROWS rows, lenC1..lenC4 are the corresponding length arrays, and show/showln are output helpers.) The trick is to request the external types OCCIINT / OCCIBFLOAT / OCCIBDOUBLE instead of OCCI_SQLT_NUM, so OCCI performs the conversion from NUMBER for you:
    showln ("Array Fetch");
    sql = "SELECT c1, c2, c3, c4 FROM nd17_otab";
    stmt->setSQL ( sql );
    OCCIResultSet *rs = stmt->executeQuery();
    rs->setDataBuffer (1, num, OCCIINT,       sizeof(ub4),    lenC1);
    rs->setDataBuffer (2, flt, OCCIBFLOAT,    sizeof(float),  lenC2);
    rs->setDataBuffer (3, dbl, OCCIBDOUBLE,   sizeof(double), lenC3);
    rs->setDataBuffer (4, str, OCCI_SQLT_CHR, CHRS,           lenC4);
    if ( rs->next(ROWS) )
      show ("Number of Rows Fetched: ") << rs->getNumArrayRows () << endl;
    showln ("Fetched Records");
    for (int j = 0; j < ROWS; j++ ) {
      show ("C1: ") << num[j] << endl;
      show ("C2: ") << flt[j] << endl;
      show ("C3: ") << dbl[j] << endl;
      show ("C4: ") << str[j] << endl;
    }
    stmt->closeResultSet(rs);
    The c1, c2, c3, c4 columns in table nd17_otab are of type NUMBER, BINARY_FLOAT, BINARY_DOUBLE & VARCHAR2.

  • Fiori Enhancment - BSP how to clear buffer/cache?

    Hi All,
    So I'm trying to make some enhancements to a Fiori app. I have downloaded it and re-uploaded it as a new BSP application.
    I wanted to make some changes to the BSP, so I changed the code directly, in SAP.
    I thought I must be doing something wrong, because my changes weren't having any effect.
    To further test this I wrote some nonsense which should have broken it... but it still ran fine!
    To test further again I went to SICF and deactivated the node!! And it STILL worked fine!
    I opened new sessions in my browser, and new incognito windows, and closed it all and reopened it all, but it STILL works fine!
    There is clearly some kind of caching/buffering going on here. I've found that the tables O2PAGDIR and O2PAGDIRT have buffered values (found this out from ST02).
    But how do I clear it?! I turned buffering off in these tables to see if that worked, made another change, and tried to reload the table... but STILL the page loads with no changes.
    Please, can anyone tell me how to clear the buffer/cache that means that when I make UI changes in a BSP it doesn't show them?!
    Thank you
    Lindsay

    Hi Mauro,
    I overcame this issue in a variety of ways:
    Firstly, I do all the customisation of the BSP applications locally on my machine.
    Then I upload the whole BSP using the program /UI5/UI5_REPOSITORY_LOAD.
    In order to make sure the display you see in your browser is the up-to-date version, there are various cache clearing things you can do:
    Program /UI5/RESET_CACHEBUSTER - this has no UI, and takes only a second to run.
    Transaction /UI5/THEME_TOOL: double-click on "Invalidate Cache" to refresh the theme cache (if you have made theme changes).
    Then you have the two model caches mentioned above by Ashish - if you change the Gateway service but aren't seeing those changes, you should run these to ensure the model is up to date.
    Browser caches: make sure your browser cache is cleared. For Chrome this is easy: hit F12 (to open DevTools), click on the cog icon, and tick the box "Disable cache (while DevTools is open)" - then keep DevTools open while refreshing the page.
    The best way to enhance the BSP applications is locally on your machine, doing all your testing locally before uploading.
    I hope this helps, let me know how you get on.
    Lindsay

  • How can I Cache the data I'm reading from a collection of text files in a directory using a TreeMap?

    How can I cache the data I'm reading from a collection of text files in a directory using a TreeMap? Currently my program reads the data from several text files in a directory and saves that information in a text file called output.txt. I would like to cache this data in order to use it later. How can I do this using the TreeMap class? The data I'd like to cache (as keys/values) is: (date from the file, time of the file, current time).
    import java.io.*;

    public class CacheData {
      public static void main(String[] args) throws IOException {
        String target_dir = "C:\\Files";
        String output = "C:\\Files\\output.txt";
        File dir = new File(target_dir);
        File[] files = dir.listFiles();
        // open the PrintWriter before the loop
        PrintWriter outputStream = new PrintWriter(output);
        for (File textfile : files) {
          if (textfile.isFile() && textfile.getName().endsWith(".txt")) {
            BufferedReader inputStream = null;
            try {
              inputStream = new BufferedReader(new FileReader(textfile));
              String line;
              while ((line = inputStream.readLine()) != null) {
                System.out.println(line);
                // write the line to output.txt
                outputStream.println(line);
              }
            } finally {
              if (inputStream != null) {
                inputStream.close();
              }
            }
          }
        }
        // close the output stream after the loop
        outputStream.close();
      }
    }

    How can I cache the data I'm reading from a collection of text files in a directory using a TreeMap? Currently my program reads the data from several text files in a directory and then saves that information in a text file called output.txt. I would like to cache this data in order to use it later. How can I do this using the TreeMap class?
    I don't understand your question.
    If you don't know how to use TreeMap why do you think a TreeMap is the correct solution for what you want to do?
    If you are just asking how to use TreeMap, then there are PLENTY of tutorials on the internet, and the Java API docs describe the methods that are available.
    TreeMap (Java Platform SE 7 )
    Are you sure you want a map and not a tree instead?
    https://docs.oracle.com/javase/tutorial/uiswing/components/tree.html
