Large log file to add to JList

Hello,
I have a rather large log file (> 250,000 lines) which I would like to add to a JList -- I can read the file okay but, obviously, can't find a container which will hold that many items. Can someone suggest how I might do this?
Thanks.

NegativeSpace13579 wrote:
I can't add that many items to a Vector or Array to pass to the JList.
You fail to describe the problem you run into. Why can't you add that many items? Do you get any error message?
Please read http://catb.org/~esr/faqs/smart-questions.html to learn how to ask questions the smart way.
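For what it's worth, a JList never needs its items in a Vector: it only needs a ListModel, and a read-only model backed by the lines read from the file is enough. Below is a minimal sketch (not from this thread; the class names are my own) assuming the lines fit in memory -- 250,000 short lines is typically only a few tens of MB -- and that one representative prototype line can size the cells:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import javax.swing.*;

/** Read-only ListModel that exposes the lines of a log file to a JList. */
class LogFileListModel extends AbstractListModel<String> {
    private final List<String> lines;

    LogFileListModel(String path) throws IOException {
        // Load every line up front; the model itself adds no per-item overhead.
        lines = Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
    }

    @Override public int getSize() { return lines.size(); }
    @Override public String getElementAt(int index) { return lines.get(index); }
}

public class LogViewer {
    public static void main(String[] args) throws IOException {
        LogFileListModel model = new LogFileListModel(args[0]);
        SwingUtilities.invokeLater(() -> {
            JList<String> list = new JList<>(model);
            // A prototype value stops JList from measuring all 250,000 cells
            // one by one when computing its preferred size.
            list.setPrototypeCellValue("2024-01-01 00:00:00 a representative log line");
            JFrame frame = new JFrame("Log viewer");
            frame.add(new JScrollPane(list));
            frame.setSize(800, 600);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        });
    }
}

If even that is too much memory, the same AbstractListModel can instead keep only the byte offset of each line and read lines on demand in getElementAt.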

Similar Messages

  • Data Services Designer 14 - Large Log files

    Hello,
    We're running several jobs with Data Services Designer 14, and everything works fine.
    But today a problem occurred: after finishing a big job, the Designer on one client produced a very large log file (8 GB) in the Data Services Designer folder.
    Is it possible to delete these log files automatically or restrict the maximum size of the created log files in the designer?
    What's the best way?
    Thanks!

    You can set to automatically delete the log files based on number of days.
    I have done this in XI 3.2, but as per the document, this is how it can be done in DS 14.0.
    In DS 14.0, this is handled in CMC.
    1. Log into the Central Management Console (CMC) as a user with administrative rights to the Data Services application.
    2. Go to the “Applications” management area of the CMC. The “Applications” dialog box appears.
    3. Right-click the Data Services application and select Settings. The “Settings” dialog box appears.
    4. In the Job Server Log Retention Period box, enter the number of days that you want to retain the following:
    • Historical batch job error, trace, and monitor logs
    • Current service provider trace and error logs
    • Current and historical Access Server logs
    The software deletes all log files beyond this period. For example:
    • If you enter 1, then the software displays the logs for today only. After 12:00 AM, these logs clear and the software begins saving logs for the next day.
    • If you enter 0, then no logs are maintained.
    • If you enter -1, then no logs are deleted.
    Regards,
    Suneer.

  • Ridiculously large log file

    I was wondering why /library was using 3GB, and I found the culprit. /Library/Logs/Console/501/console.log.6 is an 848.6 MB file! I have all good intentions of deleting it, but I am wondering how a log file got to be that big. I am also wondering what is in it, but I cannot find a program that will not crash on opening such a big file.

    maciscool
    This terminal command will reset that log to zero length, which will not upset any log rotation. Copy and paste the following into the Terminal window, followed by a return:
    >| /Library/Logs/Console/501/console.log.6
    I don't think running MacJanitor is going to help, since the console logs are not rotated by the normal periodic tasks; rather, they appear to be rotated on a new login.
    Edit: I can confirm this. I just logged out (not Fast User Switching, but properly) and logged back in. My console logs were rotated.

  • Resources For Large Log File

    I wrote a LogFile class that opens a BufferedWriter on the specified file and writes to it when called. The constructor is as follows:
    m_bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(m_strLogFile, true)));
    m_logFile = new File(m_strLogFile);
    I have a max log file size set to 2 megs (at which point the file is archived and reset). This limit has worked great. However, now that my system is seeing more usage, I'd like to raise the max limit to 5 megs. I've done that and all appears ok. My question is whether the JVM uses more resources for the 5 meg file than the 2 meg, or does it only keep track of a pointer to the end of the file and append to that. I'm hoping the latter is the case, so that the file size is irrelevant. However, if there might be performance difficulties writing to a file that size, I'll keep the limit at 2 megs.
    Michael

    I have a max log file size set to 2 megs (at which point the file is archived and reset).
    I suppose it really depends on how you are doing that. However, unless you explicitly set a buffer size somewhere that is 2 megs, normal file I/O uses a small fixed buffer regardless of the file size: a BufferedWriter in Java defaults to an 8 KB buffer, and OS-level buffers are typically in the 512-byte to 4 KB range. The buffer does not grow with the file, so appending to a 5 meg file costs no more than appending to a 2 meg one.
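    To make the archive-and-reset pattern concrete, here is a rough sketch (hypothetical names, not Michael's actual class) showing that the size check can rely on File.length() rather than on anything held in memory:

    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    /** Appending logger that archives and restarts the file past a size limit. */
    class RotatingLogFile {
        private static final long MAX_BYTES = 5L * 1024 * 1024; // the 5 meg limit
        private final File file;
        private BufferedWriter out;

        RotatingLogFile(String path) throws IOException {
            file = new File(path);
            out = new BufferedWriter(new FileWriter(file, true)); // append mode
        }

        synchronized void log(String line) throws IOException {
            out.write(line);
            out.newLine();
            out.flush(); // so file.length() below reflects what was written
            if (file.length() >= MAX_BYTES) {
                out.close();
                // Archive the full log under a timestamped name and start over.
                File archive = new File(file.getPath() + "." + System.currentTimeMillis());
                if (!file.renameTo(archive)) {
                    throw new IOException("could not archive " + file);
                }
                out = new BufferedWriter(new FileWriter(file, true));
            }
        }
    }

    Only the writer's fixed buffer lives in memory, so raising the limit from 2 megs to 5 megs changes disk usage, not JVM resources.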

  • Large log files

    I had a bit of a panic a few days ago when my available hard drive space went below 700MB. Last I had looked it was around 10GB. So I spent the last few days clearing out old applications and files that I don't need and have made some progress, but not enough.
    I downloaded OmniDiskSweeper to help and found that my private/var/log space is pretty bloated: private is at 31.5GB. Within private/var/log the two biggest items are the 'asl' folder and 'system.log.1'
    I went into Console to look up system.log.1 and found 7.6 gigs of the following lines repeating:
    Feb 28 09:55:33 Dwend-MacBookPro [0x0-0x3af3af].com.apple.Safari[3499]: *** set a breakpoint in malloc_error_break to debug
    Feb 28 09:55:33 Dwend-MacBookPro [0x0-0x3af3af].com.apple.Safari[3499]: Safari(3499,0xb0baa000) malloc: *** error for object 0x2238c4f4: Non-aligned pointer being freed
    Can I just delete this log? I thought about reinstalling Safari just to make sure this won't happen again, but it looks like a one time thing on Feb 28, so maybe that's not necessary? Help!??

    Any idea what causes this and how to prevent it from recurring? I deleted over 200 GB of logs yesterday after running completely out of space.
    These boards were very helpful in identifying the problem, so thanks!

  • Private strand flush not complete: how to find the optimal size of redo log files

    Hi,
    I am using Oracle 10.2.0 on a Unix system and getting "Private strand flush not complete" in the alert log file. I know this happens because a checkpoint has not completed.
    I need to increase the size of the redo log files or add a new group to the database; "log file switch (checkpoint incomplete)" is among the top 5 wait events.
    I can't change any database parameters. I have three redo log groups, and the log files are 250MB each. I want to know a suitable size to avoid the problem.
    select * from v$instance_recovery;
    RECOVERY_ESTIMATED_IOS = 625
    ACTUAL_REDO_BLKS = 9286
    TARGET_REDO_BLKS = 9999
    LOG_FILE_SIZE_REDO_BLKS = 921600
    LOG_CHKPT_TIMEOUT_REDO_BLKS = (null)
    LOG_CHKPT_INTERVAL_REDO_BLKS = 9999
    FAST_START_IO_TARGET_REDO_BLKS = (null)
    TARGET_MTTR = 0
    ESTIMATED_MTTR = 9
    CKPT_BLOCK_WRITES = 112166207
    OPTIMAL_LOGFILE_SIZE = (null)
    ESTD_CLUSTER_AVAILABLE_TIME = (null)
    WRITES_MTTR = 0
    WRITES_LOGFILE_SIZE = 0
    WRITES_LOG_CHECKPOINT_SETTINGS = 219270206
    WRITES_OTHER_SETTINGS = 0
    WRITES_AUTOTUNE = 3331591
    WRITES_FULL_THREAD_CKPT = 5707793
    Please suggest a way to find a suitable size to avoid this problem.
    thanks
    umesh

    How often should a database archive its logs
    Re: Redo log size increase and performance
    Please read the above threads and the great replies by HJR sir; if you want the conceptual background, they are worth adding to your notes.
    "If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to under sized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager Database Control."
    Source: http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10752/build_db.htm#19559
    Please also see ML Doc 274264.1 (REDO LOGS SIZING ADVISORY) for tips on calculating the optimal size for redo logs in 10g databases.
    Source: Re: Redo Log Size in R12
    HTH
    Girish Sharma

  • MR11 log file

    Hi,
    While running MR11 for the GR/IR clearing account, a log file is generated and the document number is 5400000010. Where is this log file stored by default, and how can it be displayed? Besides this, F.13 automatic clearing clears those GR/IR records whose balance is 0, and difference amounts are cleared through F-03 by choosing document numbers from the GR and IR under the same purchase order. But F.13 does not clear matching GR and IR values in spite of the same PO number, even though these values are easily traceable in the normal balance view through FBL3N. Why are these values not cleared through F.13?
    Regards,
    Samrat

    Immediate action items:
    0. Check the log file auto-growth settings too, and check whether they are practical and whether the disk still has space.
    1. If the disk holding the log file is full, add a log file (in the database property page) on another disk where you have planned to keep log files, in case you can't afford to take the db down. Once that is done, you can plan to truncate data out of the log file and remove the extra file if this was a first-time issue. If it happens again and again, review your capacity planning.
    2. You can consider shrinking the log files when no backup is running and no maintenance job (rebuild/reorg indexes, update stats) is executing, as those will block the shrink. If the db is small, copying files from prod to DR is not latency-prone, and the shrink is still not happening, you can try changing the recovery model, shrinking, and then reconfiguring log shipping after reverting the recovery model.
    3. Also check whether someone mistakenly placed some old files and forgot to remove them, causing the disk-full issues.
    4. For a permanent solution, monitor the environment for capacity and allocate enough space for the log file disks. Also consider tweaking the log backup frequency from the default to suit your environment.
    Santosh Singh

  • Crystal Report Server Database Log File Growth Out Of Control?

    We are hosting Crystal Report Server 11.5 on Microsoft SQL Server 2005 Enterprise.  Our Crystal Report Server SQL 2005 database file size = 6,272 KB, and the log file that goes with the database has a size = 23,839,552.
    I have been reviewing the Application Logs and this log file size is auto-increasing about 3-times a week.
    We backup the database each night, and run maintenance routines to Check Database Integrity, re-organize index, rebuild index, update statistics, and backup the database.
    Is it "Normal" to have such a large LOG file compared to the DATABASE file?
    Can you tell me if there is a recommended way to SHRINK the log file?
    Some technical documents suggest first truncating the log, and then using the DBCC SHRINKFILE command:
    USE CRS
    GO
    --Truncate the log by changing the database recovery model to SIMPLE
    ALTER DATABASE CRS
    SET RECOVERY SIMPLE;
    --Shrink the truncated log file to 1 gigabyte
    DBCC SHRINKFILE (CRS_log, 1000);
    GO
    --Reset the database recovery model.
    ALTER DATABASE CRS
    SET RECOVERY FULL;
    GO
    Do you think this approach would help?
    Do you think this approach would cause any problems?

    My bad, you didn't put the K on the 2nd number.
    Looking at my SQL Server, that's crazy big; my logs are in the K's, like 4-8.
    I think someone enabled some type of debugging on your SQL Server. It's more of a Microsoft issue, as our product doesn't require it, judging from my own SQL DBs.
    Regards,
    Tim

  • Do 9i primary and standby redo log files require the same size?

    Hi,
    We have 9.2.0.6 Oracle RAC (2 nodes) with Data Guard configured (physical standby).
    I want to increase the redo log file size, but I can't do this on the primary and standby sides at the same time.
    Is there a rule that primary and standby database instances must have the same size redo log files?
    If I increase only the primary redo log files, is there any side effect? I tried this on a test system: I increased all primary redo log files (for each group with status 'INACTIVE', drop the redo log group and add it back larger, switch logfile, ...), but I couldn't change the standby side. The system still works well. Is this a correct solution or not? How can I increase the redo log files on both sides?
    Thank you for your help.

    Thank you for your help. I found the answer to this issue:
    http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14239/manage_ps.htm#i1010448
    Consequently, when you add or drop an online redo log file at the primary site, it is important that you synchronize the changes in the standby database by following these steps:
    1. If Redo Apply is running, you must cancel Redo Apply before you can change the log files.
    2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.
    3. Add or drop an online redo log file:
    To add an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;
    To drop an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
    4. Repeat the statement you used in Step 3 on each standby database.
    5. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
    bye..

  • MS_SQL Shrinking Log Files.

    Hi Experts,
    We have checked the documentation received from SAP (SBO Customer Portal), based on the EarlyWatch Alert.
    As per the SAP requisition, we minimized the size of the test database log file through MS SQL Management Studio (restricted file size: 10 percent, 10 MB).
    Initially it was 50 percent, 1000 MB.
    My doubt is:
    Will any problem occur in the future because of this change to the log files?
    Kindly help me.
    Based on your reply, I will update the live production database.
    By
    kart

    The risk in shrinking a log file is fairly small. Current hardware and software are much more reliable than before; when you shrink your log file, you just lose some history that nobody even knows has any value.
    On the contrary, if you keep a very large log file, it may cause more trouble than it does good.
    Thanks,
    Gordon

  • "Share over Wan" - passworded but log files say differently?

    In a desperate attempt to get backup features to work on my TC, I enabled "Share over WAN". Thinking that I've got more than enough security with disk passwords, I didn't automatically think there'd be a problem.
    I then looked at the log files on my TC a day later and saw successful logins from IPs other than mine, but all within the same subdomain.
    Does "Share over WAN" supersede the disk passwords? I've tried accessing from other subdomains (my work) and always get prompted for passwords. Should I be worried about these successful logins, or ignore them as something like successful pings?
    I've, of course, now turned off "Share over WAN".

    awkwood wrote:
    Cheers omp!
    I have one suggestion: your count_lines method will be quite slow on large log files.
    Rather than use readlines you can optimize the read operations like so:
    line_count = 0
    File.open(identifier.to_cache_path) do |f|
      while (block = f.read(1024))
        line_count += block.count("\n")
      end
    end
    The speed boost makes it comparable to shelling out to wc.
    Thanks for the suggestion; I just committed it.

  • Streaming log file analyzer?

    Hi there,
    We host several FLV files with Akamai and recently enabled the log file delivery service, so we now have Akamai-generated log files in W3C format. I was assuming I could use WebTrends to analyze these, but after looking at them briefly, they show different events like play, stop, seek, etc., and I don't know whether WebTrends would be able to process all of that.
    Our most basic requirement is to see how many times each video was viewed. If we could get more detailed analysis, like video X gets viewed on average for 2:00 but video Y only gets viewed for 20 seconds, that would be great as well.
    Does anyone have any suggestions for the best software to analyze these files?
    Thanks,
    Matt
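    In case no off-the-shelf analyzer handles the streaming events, the basic requirement (a play count per video) is small enough to script. Below is a hedged sketch, assuming the log's #Fields: header exposes an event column and a stream URI column; the field indices are placeholders to be matched against the real header:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    /** Tallies "play" events per stream in a W3C extended log file. */
    public class PlayCounter {
        // Hypothetical positions; check the #Fields: directive in the actual
        // Akamai logs and adjust these to match.
        private static final int EVENT_FIELD = 4;
        private static final int STREAM_FIELD = 6;

        public static void main(String[] args) throws IOException {
            Map<String, Integer> plays = new HashMap<>();
            for (String line : Files.readAllLines(Paths.get(args[0]))) {
                if (line.startsWith("#")) continue; // skip W3C header directives
                String[] fields = line.split("\\s+");
                if (fields.length > Math.max(EVENT_FIELD, STREAM_FIELD)
                        && "play".equals(fields[EVENT_FIELD])) {
                    plays.merge(fields[STREAM_FIELD], 1, Integer::sum);
                }
            }
            plays.forEach((video, count) -> System.out.println(video + "\t" + count));
        }
    }

    Average view duration (the 2:00 vs. 20 seconds comparison) would additionally need a per-request duration or stop-time field, if the logs carry one.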


  • Tracking down .log file creators?

    I have had disk space getting eaten up, and using "WhatSize" I discovered a number of files in the private/var folder set (see the discussion of something similar in Nov 2008). The response to that query was to track down what is writing the files in the first place.
    I can locate the files: a series called swapfile which builds exponentially to a size of 1 GB, but there is no indication of what app is creating them.
    I have also found a similar set with the suffix .asl which ostensibly were created by Adobe Photoshop CS3 -- great, except I don't have that installed on my iBook. I do have Photoshop CS, but the recent creation dates (last week) of the files don't match up with my last use of Photoshop (two months ago).
    Has anyone any suggestions?

    Hmm, Niel may have some ideas, but several things strike me as being odd with what is happening on your machine. First, those swap files should have been cleared out with a simple restart, you should not have had to remove them by hand. In fact, from what I've read you really aren't supposed to.
    Second, if you are running 10.5.7 and have Activity Monitor set to show all processes, you really ought to see aslmanager at some point--it is supposed to run "as needed" when called by syslogd. For more information about it, see this note at MacFixIt:
    http://www.macfixit.com/article.php?story=20090122213555897
    And if everything is running along normally, I don't think kernel_task should be using much of your system resources. At the moment, with only two programs of my own running, it is using 1.2% of the CPUs and 71 MB of RAM. If kernel_task is hogging your resources I would think something is wrong, perhaps a bad driver or other system-level extension.
    If you haven't done so, you might try launching Console from your Utilities folder. Make sure the Log List is showing (there's an icon in the top left corner to hide and show it), expand the Log Database Queries at the top, and select All Messages. You'll see a lot of messages about what your computer did to get started, and then other messages will start to appear, all time-stamped. See whether you start to get a lot of messages about something that isn't working (which would account for the too-large log files), or whether the kernel is working very hard at something or other, and what that might be (which would tell you why kernel_task is using a lot of resources).
    Francine Schwieder

  • Dropping log file in standby database

    Please,
    I need a help for the following issue:
    I'm writing technical documentation on various events that occur in a Data Guard configuration. I just dropped a redo log group on the primary database, and when I try to drop the equivalent log group on the standby database I get the following error:
    SQL> alter database drop logfile group 3;
    alter database drop logfile group 3
    ERROR at line 1:
    ORA-01156: recovery in progress may need access to files
    This is the current state of the redo log groups on the standby database:
    SQL> select group#,members,status from v$log;
    GROUP# MEMBERS STATUS
    1 3 CLEARING_CURRENT
    3 3 CLEARING
    2 3 CLEARING
    Even when I run the following command on the standby, I also get an error:
    SQL> ALTER DATABASE CLEAR LOGFILE GROUP 3;
    ALTER DATABASE CLEAR LOGFILE GROUP 3
    ERROR at line 1:
    ORA-01156: recovery in progress may need access to files
    Can someone tell me how, in a Data Guard configuration, to drop a redo log file on the primary database and its counterpart on the standby database?
    I'm working on 10g Release 2, on Windows.
    Thank you

    Oracle Data Guard Concepts and Administration, Release 2 (ref B14239) is my source, but it doesn't work when trying to drop a standby group or logfile member.
    For example, if the primary database has 10 online redo log files and the standby database has 2, and then you switch over to the standby database so that it functions as the new primary database, the new primary database is forced to archive more frequently than the original primary database.
    Consequently, when you add or drop an online redo log file at the primary site, it is important that you synchronize the changes in the standby database by following these steps:
    1. If Redo Apply is running, you must cancel Redo Apply before you can change the log files.
    2. If the STANDBY_FILE_MANAGEMENT initialization parameter is set to AUTO, change the value to MANUAL.
    3. Add or drop an online redo log file:
    ■ To add an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE ADD LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log' SIZE 100M;
    ■ To drop an online redo log file, use a SQL statement such as this:
    SQL> ALTER DATABASE DROP LOGFILE '/disk1/oracle/oradata/payroll/prmy3.log';
    4. Repeat the statement you used in Step 3 on each standby database.
    5. Restore the STANDBY_FILE_MANAGEMENT initialization parameter and the Redo Apply options to their original states.
    Thanks

  • Installer Log File

    When I installed Office, it seemed that an installer log file was created on the HD. I dragged it somewhere else, but later, when I installed other programs (KeyServer, Acrobat, etc.), other installer log files were again created on the HD. I got tired of dragging all these away, so I combined the files together and put them back onto the HD. But why has the text file automatically changed into a Unix executable file now? Is it alright, and will other installer log files just add onto it later?
    MacBook   Mac OS X (10.4.8)

    You can delete them, or put them into either ~/Library/Logs/ or /Library/Logs/. They're not essential unless you have installation issues and want to provide the data to the developer whose SW you installed.
