Splitting data packages in CMOD

Hello All,
We are currently using a standard delivered SAP extractor. However, we have written additional logic in CMOD for this extractor. As a result of the CMOD logic, one record can potentially expand into hundreds of records, which increases the size of the initial data package enormously. Is there a way in CMOD to split the final data package into multiple data packages and send them to BW one at a time?
Thanks,
BQ

If you change the data after the extraction process has taken place, then no, you can't.
Maybe you can include your logic in the extractor itself, so that all the information is selected during the extraction process...
Regards

Similar Messages

  • How to break up data package in CMOD

    Hello All,
    We are currently using a standard delivered SAP extractor. However, we have written additional logic in CMOD for this extractor. As a result of the CMOD logic, one record can potentially expand into hundreds of records, which increases the size of the initial data package enormously. Is there a way in CMOD to split the final data package into multiple data packages and send them to BW one at a time?
    Thanks,
    BQ

    Hi Nagesh,
    We thought of reducing the data package size, but that would increase the number of data packages to over 1,500, which degrades our performance a lot. We only want to split the data packages that are extremely large; we don't want to affect the size of the data packages that are fine. We need to create multiple data packages, because otherwise the load fails with an out-of-memory error.
    Like I mentioned in my initial message, one record generated by the standard extractor could potentially generate hundreds of additional records through the CMOD code.
    For example, the standard extractor delivers the following record:
    1  ABC X
    CMOD results:
    1  ABC X
    1  ABC Y
    1  ABC Z
    .... and so on and so forth
    I am hoping there is some kind of ABAP coding in the CMOD that could split the packages up and send them over to BW separately.
    Thanks,
    BQ
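
    For context, this kind of record expansion usually lives in the customer exit include (ZXRSAU01 for transaction data), which is called once per data package via EXIT_SAPLRSAP_001. A minimal sketch of the pattern BQ describes, assuming a hypothetical extract structure zoxd_s with a field FLAG (everything here is illustrative, not BQ's actual code):

      DATA: ls_data TYPE zoxd_s,
            lt_new  TYPE STANDARD TABLE OF zoxd_s.

      LOOP AT c_t_data INTO ls_data.
        " each extracted record spawns further variants (X -> Y, Z, ...)
        ls_data-flag = 'Y'.
        APPEND ls_data TO lt_new.
        ls_data-flag = 'Z'.
        APPEND ls_data TO lt_new.
      ENDLOOP.
      APPEND LINES OF lt_new TO c_t_data.

    Because the exit only ever receives and returns a single package (c_t_data), there is no supported way to hand the enlarged table back to BW as several smaller packages; the package size can only be influenced upstream, e.g. via the InfoPackage or the source system's package size settings, which is why reducing the package size came up earlier in the thread.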

  • Creation of data packages due to a large amount of datasets leads to problems

    Hi Experts,
    We have built our own generic extractor.
    When data packages are created (due to the large number of datasets), different problems occur.
    For example:
    Datasets are doubled and appear twice, once in package one and a second time in package two. Since those datasets are not identical, information is lost while uploading them to an ODS or cube.
    What can I do? SAP will not help, since this is a generic DataSource.
    Any suggestion?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following:
    a) Since the ODS is standard, no datasets are deleted within the transformation; they are aggregated.
    b) Uploading a huge amount of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore with an automatic split of the datasets into data packages
    c) Both ways should have the same result within the ODS.
    OK. Thanks for that.
    So far I have only checked the data within the PSA. In the PSA the number of datasets is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • Data Transfer Process (several data packages due to a huge amount of data)

    Hi,
    a)
    I've been uploading data from ERP via PSA to an ODS and an InfoCube.
    Due to a huge amount of data in ERP, BI splits the data into two data packages.
    When processing the data into the ODS, the system deletes a few datasets.
    This is not done in step "Filter" but in "Transformation".
    General question: how can this be?
    b)
    As described in a), the data is split by BI into two data packages due to the amount of data.
    To avoid this behaviour I entered a few more selection criteria in the InfoPackage.
    As a result I uploaded the data several times, each time with different selection criteria in the InfoPackage.
    Finally I have the same data in the ODS as in a), but this time without data being deleted in step "Transformation".
    Question: what is the general behaviour of BI when splitting data into several data packages?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following:
    a) Since the ODS is standard, no datasets are deleted within the transformation; they are aggregated.
    b) Uploading a huge amount of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage and several uploads
       b2) without selection criteria in the InfoPackage, and therefore with an automatic split of the datasets into data packages
    c) Both ways should have the same result within the ODS.
    OK. Thanks for that.
    So far I have only checked the data within the PSA. In the PSA the number of datasets is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten

  • Unicode export: Table-splitting and package splitting

    Hi SAP experts,
    I know there are a lot of forums related to this topic, but I have some new questions, hence this new thread.
    We are in the process of doing a Unicode conversion in our landscape (a CRM 7.0 system based on NW 7.01), and we are running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB, and hence we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table-splitting and parallel export/import to reduce the downtime.
    However, we have some doubts whether this table-splitting has actually worked in our scenario, as the export executed for nearly 28 hours.
    The steps followed by us:
    1.) Doing the export preparation using SAPINST
    2.) Doing the table-splitting preparation, by creating a table input file with entries in the format <tablename>%<no. of splits> (see the illustrative file below). Also, we have used the latest R3ta file and the dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) using SAPINST.
    3.) Starting the export using SAPINST.
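
    For readers unfamiliar with the format: the table input file is a plain text file with one <tablename>%<no. of splits> entry per line. A purely illustrative example; the split counts and the table names other than PRCD_CLUST (which is discussed below) are made up:

      PRCD_CLUST%36
      CDCLS%20
      EDI40%10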
    Some observations and questions:
    1.) After completion of the table-splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created, and on what basis are they created?
    2.) I will take the example of the table PRCD_CLUST (a cluster table) in our environment, which we had split. 29 *.WHR files were created for this particular table. The number of splits given for this table was 36, and the table size is around 72 GB. We noticed that the first 28 .WHR files for this table had lots of records, but the last (29th) .WHR file had only 1 record. We also noticed that the packages/splits for the first 28 splits were created quite fast, but the 29th one took a long time (several hours) to complete. Lots of packages (around 56, of 1 GB each) were generated for this 29th split, and there was only one R3load running for it, generating packages one by one.
    3.) Our question here is: is there any thumb rule for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified in the table-splitting input screen?
    4.) What exactly is the difference between table-splitting and package-splitting? Are they both effective together?
    If you have any questions or need any clarifications and further inputs, please let me know.
    It would be great if we could get some insights on this whole procedure. We know a lot of things are taken care of by SAPINST itself in the background, but we just want to be certain that we have done the right thing and that this is the way it should work.
    Regards,
    Santosh Bhat

    Hi,
    First of all, please ignore my very first response... I accidentally posted a response meant for some other thread. Sorry for that.
    Now coming to your questions...
    > 1.) Can package splitting and table-splitting be used together? If yes or no, what exactly is the procedure to be followed? As I observed, the packages also have entries for the tables that we decided to split. So does package splitting or table-splitting override the other, and only one of the two can be effective at a time?
    Package splitting and table splitting work together, because they serve different purposes.
    My way of doing it is...
    When I do the package split I choose packageLimit 1000 and also split out the tables (the ones I selected for table split) into separate packages (one package per table). I do it because that helps me track those tables.
    Once the above is done, I follow it up with R3ta and the WHERE-splitter for those tables.
    This is followed by the manual migration monitor to do the export/import. As mentioned in the previous reply above, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, etc.
    > 2.) If you are well versed with the table splitting procedure, could you describe the exact procedure, maybe in brief?
    Well, I would say run R3ta (it will create multiple SELECT queries), followed by the WHERE-splitter (which will just split each of the SELECTs into multiple WHR files).
    It would be best to go through some documents on table splitting and let me know if you have a specific query. Don't miss the role of the hints file.
    > 3.) Also, I have mentioned the version of R3ta and the library file in my original post. Is this likely to be an issue? Also, is there a thumb rule to decide on the number of splits for a table?
    The rule is: use the executable of the kernel version supported by your system version. I am not well versed with 7.01 and 7.2 support; to give you an example, I should not use a 700 R3ta on a 640 system, although it works.
    > 1.) After completion of the table-splitting preparation, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created, and on what basis are they created?
    If you ask for 10 splits, you will get 10 splits, or in some cases 11; the reason might be the field that is used to split the table (the WHERE clause). But I am not 100% sure about it.
    > 2.) I will take the example of the table PRCD_CLUST (a cluster table) in our environment, which we had split. 29 *.WHR files were created for this particular table. The number of splits given for this table was 36, and the table size is around 72 GB. We noticed that the first 28 .WHR files for this table had lots of records, but the last (29th) .WHR file had only 1 record. We also noticed that the packages/splits for the first 28 splits were created quite fast, but the 29th one took a long time (several hours) to complete. Lots of packages (around 56, of 1 GB each) were generated for this 29th split, and there was only one R3load running for it, generating packages one by one.
    Not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 unique values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". For example, suppose my table is split by company code and I have 10 company codes: even if I ask for 20 splits, I will get only 10 splits (WHRs).
    Yes, the 29th file will always have fewer records. If you open the 29th WHR you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses, a kind of safety net which allows you to prepare the splits even before the downtime has started. These two WHRs ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration.
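    Purely for illustration, a WHR file boils down to one WHERE condition per split; with a hypothetical split key KNUMV, the first and last splits carry the open-ended clauses described above (the exact file layout depends on the R3ta version):

      WHERE "KNUMV" < '0000100000'                            (first split: open lower bound)
      WHERE "KNUMV" >= '0000100000' AND "KNUMV" < '0000200000'
      ...
      WHERE "KNUMV" >= '0002800000'                           (last split: open upper bound)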
    > 3.) Our question here is: is there any thumb rule for deciding on the number of splits for a table? Also, during the export, is there anything that needs to be specified in the table-splitting input screen?
    I am not aware of any thumb rule. For a first iteration you might choose something like 10 splits for 50 GB, 20 for 100 GB. If any of the tables overshoots the window, you can try increasing or decreasing the number of splits for that table. For me, a couple of times the total export/import time improved by reducing the splits of some tables (I suppose I was over-splitting those tables).
    Regards,
    Neel
    Edited by: Neelabha Banerjee on Nov 30, 2011 11:12 PM

  • FI-SL data package too large on delta

    Hi Guys,
    I'm loading data from the FI-SL totals extractor (3FI_SL_xx_TT).
    There is no problem with the delta init, but when we try to load a regular delta there is a problem with the size of the packages; the package sizes are totally irregular: 70,158, 52,398, 299,784, 299,982, 57,243, ... So I'm getting a TSV_TNEW_PAGE_ALLOC_FAILED error on the transfer from PSA to InfoCube.
    Unfortunately we have to load a heavy bunch of data in the delta, because the business wants this data monthly, so I'm not able to load it in small requests every day.
    To make things worse, I have the start routine that carries the balance forward (SAP Note 577644 - DataSource 0EC_PCA_3 and FI-SL line item DataSources).
    The extractor is not taking the package size of the InfoPackage into consideration (1,000 KB / 2 processes / 10 packages per IDoc info).
    Config:
    - BI 7.0 SP 23 / AIX / Oracle
    - ECC: AIX/Oracle, PI_BASIS 2008_1_700 000 / SAP_ABA 700 015 / SAP_BASIS 700 015 / SAP_APPL 600 013
    I checked SAP Note 917737 - "Reducing the request size for totals record DataSources", but it does not apply to us...
    Does someone have any idea how to fix this?
    Thanks in advance,
    Alex

    Chubbyd4d wrote:
    Hi masters,
    I have a package of 11,000 lines of code with around 70 procedures and functions in my DB.
    I used arrays to handle the global data, and almost every procedure and function in the package uses the arrays.
    The package was creating problems in debugging and gave me the "program too large" error, so the idea came up to split the package into smaller packages.
    However, the problems that I would like to discuss are:
    - What is the advantage of splitting the package into a few smaller packages? What is the impact on memory?
    - If I chunk the package into a few packages, will it be easier to use GTTs instead of arrays for the global data? What is the impact on performance, since a GTT uses I/O?

    One of our larger packages is over 20,000 lines and has around 500 procedures, on a 10.2 database.
    There are no problems or complaints about it being too large.
    As for splitting packages, that's entirely up to you. Packages are designed to put all related functionality in one place. If you feel a need to split it into separate packages, then perhaps consider grouping related functionality at a smaller granularity than you previously had.
    In relation to memory, smaller packages will obviously take up less space, but if you are going to be calling all the packages anyway, then they'll all get loaded into memory and still take up about the same amount of memory (give or take a little).
    GTTs are generally a better idea than arrays if you are dealing with large amounts of data or you need to run queries against that data. If you need to effectively query the data, then you'll probably end up with worse performance processing an array in a query style than the I/O overhead of a GTT. Of course, you have to look at the individual processes and weigh up the pros and cons of each.

  • Write new line in Data Package fails

    Hi all,
    I'm trying to assign cost distribution codes in my start routine for a BW cube.
    The start routine derives the cost center from a master data table in which the HR position, cost center and FTE % are held.
    E.g. I'm pulling an employee record from DataSource 0HR_PA_0; this employee record is assigned to a position which is split across 2 cost centers.
    DataSource 0HR_PA_0 only has one record per month, but I'd like the cube to be loaded with 2 records, one for each cost center distribution code and FTE %. How would I do that?
    I created a start routine in which I specified to find the cost center from the master data object (zposcost) and, if it finds more than one, to write an extra line. This doesn't work; it still writes only one line to the cube.
    Please help!
    This is part of the code:
    DATA:
      gd_/bic/mzposcost   LIKE /bic/mzposcost,
      gd_/bi0/mcostcenter LIKE /bi0/mcostcenter,
      gl_data_package     TYPE _ty_s_sc_1.

    * IF MORE THAN 1 FOR THE DATE RANGE, WRITE A NEW LINE IN THE DATA PACKAGE
      MOVE-CORRESPONDING <source_fields> TO gl_data_package.
      MOVE gd_/bic/mzposcost-as_prcnt   TO gl_data_package-/bic/zhr_pcnt3.
      MOVE gd_/bic/mzposcost-costcenter TO gl_data_package-costcenter.

    LOOP AT il_data_package INTO gl_data_package.
      MOVE-CORRESPONDING gl_data_package TO <source_fields>.
      APPEND <source_fields> TO SOURCE_PACKAGE.
    ENDLOOP.
    Edited by: Markus de Graaff on Jun 8, 2010 2:37 PM

    Hi,
    Actually, someone edited the code by commenting out the following bit; I believe this is the loop that should eventually assign the values to the internal table:
    * MOVE THE DATA TO AN INTERNAL TABLE.
        LOOP AT SOURCE_PACKAGE ASSIGNING <SOURCE_FIELDS>.
          move-corresponding <SOURCE_FIELDS> to ld_data_package.
          append ld_data_package.
        ENDLOOP.
        LD_DATA_PACKAGE[] = SOURCE_PACKAGE[].
    When I uncomment it, it says ld_data_package is not a structure or an internal table with a header line.
    Any suggestions? I would really like to solve this, please!
    M.
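
    For readers hitting the same wall: the snippets above only ever append one line per source record, because the derived values are overwritten inside the loop. A minimal sketch of the intended expansion, assuming (hypothetically) that the source structure _ty_s_sc_1 has a POSITION field and that /bic/mzposcost carries POSITION, COSTCENTER and AS_PRCNT, as the post suggests:

      DATA: lt_result  TYPE STANDARD TABLE OF _ty_s_sc_1,
            ls_result  TYPE _ty_s_sc_1,
            lt_poscost TYPE STANDARD TABLE OF /bic/mzposcost,
            ls_poscost TYPE /bic/mzposcost.
      FIELD-SYMBOLS <ls_src> TYPE _ty_s_sc_1.

      LOOP AT SOURCE_PACKAGE ASSIGNING <ls_src>.
        " all cost-center assignments for this record's position
        SELECT * FROM /bic/mzposcost INTO TABLE lt_poscost
               WHERE position = <ls_src>-position.
        IF lt_poscost IS INITIAL.
          APPEND <ls_src> TO lt_result.          " no split: keep record as is
        ELSE.
          LOOP AT lt_poscost INTO ls_poscost.    " one output line per assignment
            ls_result = <ls_src>.
            ls_result-costcenter     = ls_poscost-costcenter.
            ls_result-/bic/zhr_pcnt3 = ls_poscost-as_prcnt.
            APPEND ls_result TO lt_result.
          ENDLOOP.
        ENDIF.
      ENDLOOP.
      SOURCE_PACKAGE[] = lt_result[].            " hand back the expanded package

    The key point is building a second table and replacing SOURCE_PACKAGE[] at the end, rather than appending to the table that is currently being looped over.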

  • How do we control the data package size that comes into the DSO?

    Hi experts,
    I have this scenario:
    Initial information (numbers are not real):
    I have 10 contracts in CRM (one-order documents).
    Each contract, when extracted, becomes 50 records.
    Running BW 3.x.
    (1) Now I start the data extraction in BW; I will receive 5 packets, split like the following:
    DP1: 100 records (contracts 1 and 2)
    DP2: 100 records (contracts 3 and 4)
    DP3: 50 records (contract 5)
    These records are stored in the PSA.
    (2) Then, it seems, the system keeps the same package size and sends these DPs to the DSO like the following:
    DP1 -> 100 records -> DSO
    DP2 -> 100 records -> DSO
    DP3 -> 50 records -> DSO
    What I want:
    I have a special case, and I want to be able to do the following, starting from (2).
    Instead of sending:
    DP1 -> 100 records -> DSO
    DP2 -> 100 records -> DSO
    DP3 -> 50 records -> DSO
    I want to send:
    DP1 -> 10 records -> DSO
    DP2 -> 10 records -> DSO
    DP3 -> 10 records -> DSO
    DP25 -> 10 records -> DSO
    Do I have control over the data package size (number of records)?
    Can the DPs between the DataSource and the DSO be different from the ones between the source system and the DataSource?
    Can I even go further and do some kind of selection, to be able to send like the following:
    DP1 -> all records from item 01 to 10 of contract 1 -> DSO
    DP2 -> all records from item 11 to 20 of contract 1 -> DSO
    DP3 -> all records from item 01 to 10 of contract 2 -> DSO
    DP4 -> all records from item 11 to 20 of contract 2 -> DSO
    DPn -> all records from item 11 to 20 of contract 10 -> DSO
    Thanks!

    Hi,
    If you are using an InfoPackage, try the setting in the InfoPackage: in the Scheduler menu at the top,
    choose "DataS: Default data transfer", where you can change the package size of the data.
    If you are using a DTP, you can specify the package size on the Extraction tab.
    Hope this helps.
    Thanks,
    Arun
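
    As background, and as a hedged sketch rather than something stated in this thread: the source-system default that the InfoPackage setting overrides is stored in table ROIDOCPRMS on the source system (maintained via SBIW, "General Settings"). A quick way to inspect it, assuming you replace the logical system name below with your own:

      DATA ls_prms TYPE roidocprms.
      SELECT SINGLE * FROM roidocprms INTO ls_prms
             WHERE slogsys = 'BWCLNT100'.   " hypothetical logical system name
      WRITE: / ls_prms-maxsize.             " default package size in KB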

  • My new iPhone 4 not connecting to data package

    Hello,
    I just bought my new iPhone 4 recently from Apple, and I subscribed to a 5 GB data package with the service provider in the UAE. The service is split between my iPad and the new iPhone, but unfortunately the iPhone is still not connecting to the internet service. I thought there was a problem with the micro SIMs, but when I removed the SIM from the iPhone and checked it in my friend's iPhone 4, it worked perfectly.
    Any suggestions as to why my iPhone refuses to connect? I received the phone less than 5 days ago.
    My iOS version is 4.3.5.
    Thanks

    I would just call the company you are with and get them to fix it.

  • Identify the last data package in start routine

    Hi Everyone
    We have a start routine in a transformation. We need to do some special processing in the start routine only when the last data package is executing. How can we determine in the start routine whether the current package is the last one or not? Any pointers in this direction are appreciated.

    Hi,
    You can get the packet ID from DATAPACKID in the start routine and the end routine. But I'm not so sure how to identify the last packet ID; alternatively, you can store this packet ID somewhere else and read the same value back in the end routine, if your logic (processing) permits doing it in the end routine instead of the start routine.
    METHODS
      start_routine
        IMPORTING
          request        TYPE rsrequest
          datapackid     TYPE rsdatapid
        EXPORTING
          monitor        TYPE rstr_ty_t_monitors
        CHANGING
          SOURCE_PACKAGE TYPE tyt_SC_1
        RAISING
          cx_rsrout_abort.
    hope it helps...
    regards.
    Raju
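
    A minimal sketch of Raju's suggestion, assuming the start and end routine of one data package run in the same roll area (the memory ID 'ZPKG_ID' is just a made-up name):

      * In the start routine: remember the current package id.
        EXPORT datapackid FROM datapackid TO MEMORY ID 'ZPKG_ID'.

      * In the end routine: read it back and branch on it.
        DATA lv_datapackid TYPE rsdatapid.
        IMPORT datapackid TO lv_datapackid FROM MEMORY ID 'ZPKG_ID'.
        IF sy-subrc = 0.
          " special processing keyed on the package id
        ENDIF.

    Note that this still does not tell you whether the current package is the last one: packages can be processed in parallel, so "last" is only known once the whole request is through, which is why such logic often ends up in a separate process-chain step instead.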

  • Problems with the O2 BlackBerry data package on my Curve 3G

    I have already informed O2 about this, but they claim that I should use the BlackBerry support services, and nothing there helps me!
    I got my BlackBerry Curve 3G on September 9th this year, and I added the BlackBerry Data Package bolt-on to my phone on September 16th. I then received a text to say they had taken £5 from my credit and it would be up and running within the next 24 hours. It's now September 19th, my BBM is not working at all, and I am extremely upset with the service and behaviour I have received from both O2 and BlackBerry.
    Is there any way you can help? If this fails, I shall be forced to go back to the shop where I got my BlackBerry and ask for their help.
    Many thanks, Jade.


  • Data package is missing in the return structure

    Hi BW Folks,
    I have an issue with ODS activation. While activating the data in the ODS object, I am getting the following error message:
    "Activation of data records from ODS object XXXX terminated.
    Data package XXXXX contains errors with status 9 in table 'XX', but this data package is missing in the return structure.
    In detail: the data package is entered in the return structure as incorrect."
    Can anyone provide me with a solution? Thanks in advance. Have a nice time!
    Regards,
    Nani.

    Hi,
    Check these links:
    Re: Status 9 error when activating an ODS in a Process Chain
    ODS activation error - status 9
    Error while data loading - terminated with status 9
    Hope it helps.
    Regards,
    CK
    Assign points if useful.

  • DTP does not fetch all records from the source, fetches only records in the first data package

    Fellas,
    I have a scenario in my BW system where I pull data from a source using a direct access DTP (it does not extract from the PSA; it extracts from the source).
    The source is a table in the Oracle DB; using a DataSource and a direct access DTP, I pull data from this table into my BW InfoCube.
    The DTP's package size has been set to 100,000, and whenever this load is triggered, a lot of data records from the source table are fetched in various data packages. This has been working fine, and works fine now as well.
    But, very rarely, the DTP fetches 100,000 records in the first data package and fails to pull the remaining data records from the source.
    It ends with the message "No more data records found", even though we have records waiting to be pulled. This DTP in the process chain does not even fail, and continues to the next step with a "green" status.
    Have you faced a similar situation in any of your systems? What is the cause? How can this be fixed?
    Thanks in advance for your help.
    Cheers
    Shiva

    Hello Raman & KV,
    Thanks for your Suggestions.
    Unfortunately, I will not be able to implement any of your suggestions, because I am not allowed to change the DTP settings.
    So I am working on finding the root cause of this issue, and I came across SAP Note 1506944 - "Only one package is always extracted during direct access", which says this is a program error.
    Hence, I am checking further with SAP on this and will share their insights once I hear back from them.
    Cheers
    Shiva

  • Same set of records not in the same data package of the extractor

    Hi All,
    I have got one scenario. While extracting records from ECC, based on some condition, I want to add some more records into the extraction. To be more clear: based on some condition I want to add additional lines of data with APPEND c_t_data.
    For example:
    I have a set of records with the same company code, the same contract, the same delivery leg and different pricing legs.
    If the delivery leg and pricing leg are 1, then I want to add one line of record.
    There will be several records with the same company code, contract, delivery leg and pricing leg. In the extraction logic I will extract with i_t_data[] = c_t_data[], then sort by company code, contract, delivery and pricing leg, then DELETE ADJACENT DUPLICATES to get one record; based on this record, with some condition, I will populate the new line of record that my business needs.
    My concern is:
    If the same set of records overshoots the data package size, how do I handle this? Is there any option?
    My data package size is 50,000. Suppose I get a set of records (i.e. same company code, contract, delivery leg and pricing leg) starting at the 49,999th record. If there are 10 records with the same characteristics, the extraction will happen in 2 data packages, and then the delete-duplicates logic above will go wrong. How can I handle this scenario? Would a delta-enabled function module help me tackle this? I want to do it only in extraction, as a DataSource enhancement.
    Anil.
    Edited by: Anil on Aug 29, 2010 5:56 AM

    Hi,
    You will have to do an enhancement of the DataSource.
    Please follow the link below.
    You can write your logic to add the additional records in the CASE statement for your DataSource; a sketch of that pattern follows below.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/c035c402-3d1a-2d10-4380-af8f26b5026f?quicklink=index&overridelayout=true
    Hope this will solve your issue.
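
    A minimal sketch of that CASE pattern in the customer exit include (commonly ZXRSAU01 for transaction data); the DataSource name, the extract structure zoxd_0001 and the field names are hypothetical stand-ins for Anil's actual objects:

      CASE i_datasource.
        WHEN 'ZMY_DATASOURCE'.
          DATA: lt_data TYPE STANDARD TABLE OF zoxd_0001,
                ls_data TYPE zoxd_0001,
                ls_new  TYPE zoxd_0001.
          lt_data[] = c_t_data[].
          SORT lt_data BY comp_code contract delivery_leg pricing_leg.
          DELETE ADJACENT DUPLICATES FROM lt_data
                 COMPARING comp_code contract delivery_leg pricing_leg.
          LOOP AT lt_data INTO ls_data
               WHERE delivery_leg = '1' AND pricing_leg = '1'.
            ls_new = ls_data.
            " ... derive the additional line the business needs ...
            APPEND ls_new TO c_t_data.
          ENDLOOP.
      ENDCASE.

    Note that this does not remove Anil's package-boundary concern: the exit is called once per data package and only ever sees the current c_t_data, so a group of equal records that straddles two packages cannot be deduplicated here.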

  • Can we have a Family Data Package, Please?

    I have 4 people on our wireless service with Verizon. Until recently, only I had a data plan. Then I got my kids new phones, and they got data plans. Now my husband wants a data plan when he becomes eligible for a new phone in January. All these plans are getting REALLY expensive. How about coming out with a family data share plan, similar to the shared-minutes plan?
    Thank you!

    TheGreatOne wrote:
    spottedcatfish wrote:
    They didn't feel like coercing people to add texting packages was an appropriate move, so instead they adjusted the price point. We should all feel deeply blessed that instead of requiring a texting package as well as a data package on most high-quality phones, they just made it so affordable that I'm sure most people grab it by default. Fortunately for Verizon, the profit margin on texting is even higher than on data, because even if you use it a ton, it still costs them next to nothing to provide it to you.
    Even if Verizon had required messaging packages instead of data packages, you would still most likely get people complaining. People would probably then say something like, "Oh, I don't ever use messaging on my phone. I don't need it."
    And rightly so. Nobody should be forced to buy anything they don't need or use. Is that not common sense?
