GC.log File Size Limitation and Rotation

Hi,
Is it possible to limit the size of a GC.log file and to rotate it without causing havoc?
Best regards,
Katzensee
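For reference, recent HotSpot JVMs (6u34 / 7u2 and later) have built-in GC log rotation flags, so no external rotation tool is needed; a sketch, with the path and sizes as examples only:
-Xloggc:/var/log/myapp/gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=5
-XX:GCLogFileSize=10M
On Java 9 and later, the unified logging equivalent would be something like -Xlog:gc*:file=/var/log/myapp/gc.log:time:filecount=5,filesize=10m.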

James Roller1 wrote:
You say, if you use 1024 x 768 images... that's just it: I'm just using the images without resizing. Some may be 4 MB files when dragged in; others may be 300 KB. I am not doing anything to them. Should I? Does that help? Or does IBA "handle all that" when I drag and drop?
It's not well documented. I get the impression that, if you drop an image that has resolution higher than 2048 x 1536, it is down-sampled to that size. You should at least crop the image outside IBA before dropping it in. If you want to save space, down-sample the image first to 1024 x 768 (after cropping).
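If you want to batch-downsample before dropping images in, the sips tool built into OS X can do it from Terminal; a sketch, assuming JPEGs in the current folder (note it overwrites files in place, so work on copies):
sips -Z 1024 *.jpg
The -Z flag resamples so the largest dimension is at most 1024 pixels, preserving the aspect ratio.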
James Roller1 wrote:
Also for the video: we're using iMovie and we just "Share" to iTunes and choose Large. Should I be choosing a lower setting or, again, does IBA not care and will take the Large and resize it?
Instead of sharing to iTunes, export the movie to iPad format (File -> Export) and select 640 x 480. That'll give you reasonable quality at around 12 MB per minute.
James Roller1 wrote:
I have to say I chuckled at the wisdom and simplicity of the 2 Volume suggestion. I'd like to keep it as one, and I'd like to sell it on the store, but your idea is ingenious in its simplicity: just make 2!
Thanks! Two's better'n one
Michi.

Similar Messages

  • Nio ByteBuffer and memory-mapped file size limitation

    I have a question/issue regarding ByteBuffer and memory-mapped file size limitations. I recently started using NIO FileChannels and ByteBuffers to store and process buffers of binary data. Until now, the maximum individual ByteBuffer/memory-mapped file size I have needed to process was around 80MB.
    However, I now need to begin processing larger buffers of binary data from a new source. Initial testing with buffer sizes above 100 MB results in IOExceptions (java.lang.OutOfMemoryError: Map failed).
    I am using 32-bit Windows XP; 2 GB of memory (typically 1.3 to 1.5 GB free); Java version 1.6.0_03; with -Xmx set to 1280m. Decreasing the Java heap max size down to 768m does make it possible to memory-map larger buffers to files, but never bigger than roughly 500 MB. However, the application that uses this code contains other components that require the -Xmx option to be set to 1280.
    The following simple code segment, executed by itself, will produce the IOException for me when executed using -Xmx1280m. If I use -Xmx768m, I can increase the buffer size up to around 300 MB, but never to a size that I would think I could map.
    try {
        String mapFile = "C:/temp/" + UUID.randomUUID().toString() + ".tmp";
        FileChannel rwChan = new RandomAccessFile( mapFile, "rw" ).getChannel();
        ByteBuffer byteBuffer = rwChan.map( FileChannel.MapMode.READ_WRITE,
                                            0, 100000000 );
        rwChan.close();
    } catch( Exception e ) {
        e.printStackTrace();
    }
    I am hoping that someone can shed some light on the factors that affect the amount of data that may be memory mapped to/in a file at one time. I have investigated this for some time now and based on my understanding of how memory mapped files are supposed to work, I would think that I could map ByteBuffers to files larger than 500MB. I believe that address space plays a role, but I admittedly am no OS address space expert.
    Thanks in advance for any input.
    Regards- KJ

    See the workaround in http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038
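    On a 32-bit JVM the real limit is usually the contiguous virtual address space left over after the heap, not file size; one common approach (a sketch, not the only fix, with a hypothetical class name and window size) is to map the file in smaller windows instead of one large buffer:
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class ChunkedMap {
        static final long WINDOW = 64L * 1024 * 1024; // 64 MB per mapping

        public static void main(String[] args) throws Exception {
            RandomAccessFile raf = new RandomAccessFile(args[0], "rw");
            FileChannel ch = raf.getChannel();
            long size = ch.size();
            // Walk the file one window at a time so no single mapping needs
            // a huge contiguous address range.
            for (long pos = 0; pos < size; pos += WINDOW) {
                long len = Math.min(WINDOW, size - pos);
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, pos, len);
                // ... process buf here ...
                // Each mapping is only released when the buffer is GC'd;
                // see the bug report above for unmap workarounds.
            }
            ch.close();
            raf.close();
        }
    }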

  • Get Total DB size , Total DB free space , Total Data & Log File Sizes and Total Data & Log File free Sizes from a list of server

    How do I get the SQL Server total DB size, total DB free space, total data & log file sizes, and total data & log file free space from a list of servers?

    Hi Shivanq,
    To get a list of databases, their sizes, and the space available in each on the local SQL instance:
    dir SQLSERVER:\SQL\localhost\default\databases | Select Name, Size, SpaceAvailable | ft -auto
    This article is also helpful for you to get DB and Log File size information:
    Checking Database Space With PowerShell
    I hope this helps.
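    For the "list of servers" part, a sketch that loops over a server list and reports per-file sizes via sys.master_files (assuming Invoke-Sqlcmd is available and a servers.txt file with one instance name per line; per-file free space additionally needs FILEPROPERTY(name, 'SpaceUsed') run inside each database):
    $query = "SELECT DB_NAME(database_id) AS DatabaseName,
                     type_desc AS FileType,
                     name AS LogicalName,
                     size * 8 / 1024 AS SizeMB
              FROM sys.master_files;"
    foreach ($server in Get-Content .\servers.txt) {
        "==== $server ===="
        Invoke-Sqlcmd -ServerInstance $server -Query $query | Format-Table -AutoSize
    }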

  • Log file size in Sun Directory Server

    Does anyone have an idea of how the Sun Directory Server's log file size will increase with respect to the actions performed?
    Can someone provide data regarding this? If someone has a representative scenario and supporting data w.r.t. log file size, it would be helpful.
    Thanks,

    AFAIK no, it's based on time: "At a certain time, or after a specified interval, the server rotates your access logs."
    More info in Archiving Log Files in [http://docs.sun.com/app/docs/doc/820-7985/gczxv?l=en&a=vie]
    It should be easy to write such a script to run as a daemon in the logs directory. Here is a sketch (the paths are placeholders and the size threshold and polling interval are examples):
    #!/bin/sh
    MAX_SIZE=104857600    # rotate once the log exceeds 100 MB
    LOG=<ws-install-dir>/https-<instance>/logs/access
    while true
    do
        size=`ls -l "$LOG" | awk '{print $5}'`
        if [ "$size" -gt "$MAX_SIZE" ]; then
            <ws-install-dir>/https-<instance>/bin/rotate
        fi
        sleep 300
    done

  • Log file size in Sun Access Manager

    Does anyone have an idea of how the Sun Access Manager's log file size will increase with respect to the actions performed?
    Can someone provide data regarding this? If someone has a representative scenario and supporting data w.r.t. log file size, it would be helpful.
    Thanks,

    I would like to back up the log files daily (for future reference).
    I need to know the following:
    1) Which log files need to be backed up? Do I need to take all am*.* files (around 3.5 GB in size)?
    2) Ideally, I believe that only a few MB of data goes into these am*.* files, but I cannot store 3.5+ GB of logs every day
    3) I observed that these am*.* files have every day's activity appended to them, so I would like to enable log rotation
    Please let me know how I can proceed.
    Thanks

  • MessageBox log file size

    Hi, 
    In our prod environment, the MessageBox data file is within the recommended limit (2 GB), but the log file is 32 GB. Is this a reason to worry, or is it normal? I couldn't find any recommendations on this.
    Thank you very much!

    This is not normal.
    IMO your BizTalk database jobs are not running. Make sure your BizTalk SQL Server jobs are enabled and SQL Server Agent is running.
    Please have a look at the
    How to Configure the Backup BizTalk Server Job article to enable the jobs.
    The BizTalk backup job is responsible for keeping the log file size within the limit.
    You can try shrinking the log file using the following SQL command:
    USE BiztalkMsgBoxDb;
    GO
    -- Truncate the log by changing the database recovery model to SIMPLE.
    ALTER DATABASE BiztalkMsgBoxDb
    SET RECOVERY SIMPLE;
    GO
    -- Shrink the truncated log file to 2 MB.
    DBCC SHRINKFILE (BiztalkMsgBoxDb_Log, 2);
    GO
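    -- If you rely on the Backup BizTalk Server job afterwards, remember to set
    -- the recovery model back to FULL: ALTER DATABASE BiztalkMsgBoxDb SET RECOVERY FULL;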
    I would recommend reading the following articles:
    BizTalk Environment Maintenance from a DBA perspective 
    BizTalk Databases: Survival Guide
    Hope this helps.
    Greetings,
    Naushad Alam
    alamnaushad.wordpress.com

  • Is there a file size limitation to using this service?

    I am working on a large PDF file (26 MB) and I need to re-size the original and mess with the margins. I don't believe there is an easy way to do this in Adobe Acrobat 6.0 Pro. It sounds like I have to convert the file back to a Word document, do the adjustments there, and then produce a new PDF. I have two questions:
    Is there a file size limitation to using this service?
    Will a PDF to Word doc conversion maintain the format of the original PDF?
    Thanks
    Tim

    Good day Tim,
    There is a 100MB file size limitation for submitting files to the ExportPDF service.  As for the quality of the conversion, from our FAQ entitled Will Adobe ExportPDF convert both text and formatting information?:
    Adobe ExportPDF is capable of exporting high quality information, but the quality of your Word or Excel document depends on the quality of the PDF file you start with. For instance, if your PDF file was originally authored in Microsoft Word or Excel and converted to PDF using the PDFMaker functionality of Adobe Acrobat®, your PDF file contains a rich set of information that can be captured by Adobe ExportPDF. This includes relative positioning of tables, images, and even multi-column text, as well as page, paragraph, and font attributes.
    If your PDF file was originally authored using simpler PDF generation methods, such as “print to PDF” or “scan to PDF” options, Adobe ExportPDF will convert any recognizable text and then use sophisticated conversion intelligence to preserve as much of the page layout as possible.
    Please let us know if you have any other questions!
    Kind regards,
    David

  • How to set up PopProxy* log file size ?

    Dear All,
    Does anybody know how to set up the MMP PopProxy* log file size and rollover time?
    ./imsimta version
    Sun Java(tm) System Messaging Server 7.0-3.01 64bit (built Dec 9 2008)
    libimta.so 7.0-3.01 64bit (built 09:24:13, Dec 9 2008)
    Steve

    SteveHibox wrote:
    Does anybody know how to set up MMP PopProxy* log file size and rollovertime?
    Details on these settings are available here:
    http://wikis.sun.com/display/CommSuite6U1/Communications+Suite+6+Update+1+What%27s+New#CommunicationsSuite6Update1What%27sNew-MMPLogging
    Regards,
    Shane.

  • PDF file size limited to graphics memory in Reader?

    I've created a form (in LiveCycleDS) that allows for an unlimited number of photos to be loaded into it. I put an image into a subform that is duplicated every time a user clicks a button thus creating an unlimited number of images that can hold photos. I then extended it with Reader Extensions.
    I'm running into a problem when I try to load a large number of photos into the form. It gets to about 47 MB of images when it locks up Adobe Reader. I've been able to bring up other applications after the lock-up, and when I switch back to Reader the artifacts of the other application are displayed within Reader.
    What is the practical limit to the size of a file created with LiveCycle?  Is it tied to the amount of graphics memory a computer has?  My machine has 2GB of RAM while my video card has only 384MB.  I haven't been able to figure out if there is a file size limitation.

    It's entirely normal for file size to increase. PDFs are much more compressed than print streams. For some printers the print size is almost constant (a huge bitmap); for others, it's a collection of graphical items and the size varies enormously. There's rarely anything you can do.

  • Increase redolog file size - Merits and Demerits

    Hi
    Currently we are on Oracle version 9.2.0.7.0, with redo log file sizes (mirrlog and origlog) of 100 MB.
    Now we are planning to increase the size to 200 MB so that we can reduce the number of archive log files.
    Can you please let me know what the drawbacks of a bigger redo log file size would be?
    And also, what is the step-by-step process to increase the size of the redo log files?
    Thank you

    > I understand what you are saying but in our situation our backup policy is one time online backup  and one time offline backup in a week.....Online backup is on Thu and Offline backup is on Sunday.......
    >
    > In case of system crash if needed we would need to apply archive log files; If we have lesser number of archive logs; recover database would be faster.......correct me if am wrong.
    You are wrong.
    Ok, let's see an example:
    You took your backup on sunday midnight and your DB needs recovery on wednesday.
    Meanwhile you created, say, 800 MB worth of redo log data per day.
    That sums up to (Monday, Tuesday, Wednesday) 3 x 800 MB = 2,400 MB that need to be recovered.
    Going with your current setup (100 MB redo log size), the largest archive log file can be 100 MB, which makes 24 files to restore and recover.
    After changing the redo log size to, say, 200 MB, you only have 12 files to restore and recover.
    But you know what? It's still 2,400 MB of data.
    Since you will likely not put every archivelog file to its own tape, but rather change the tape each day (just an assumption) or maybe don't use manually operated tapes at all, the little latency overhead in handling tapes doesn't count in to your overall recovery time.
    All in all you still need to feed the same amount of data to the recovery process.
    Apart from this:
    if you're discussing short recovery times, then you'd never perform just two data backups a week.
    You'd make online backups every day - maybe incremental ones.
    You'd use the flash recovery area.
    An additional thing often overlooked: in many cases the ultimate performance killer for a restore/recovery scenario is not the technology in use.
    It's that when the moment comes, the DBA is not sure anymore what to do.
    He wonders:
    Where the good backups are.
    How to get them back from the 3rd party backup tool.
    How to check them.
    Where to get a different storage system because the original one is broken.
    How to figure out what needs recovery
    How the tools work
    Ensuring that you always master the theory and the how-to of restore and recovery - that's how you make it quick and painless (and without data loss).
    regards,
    Lars
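    As for the actual resizing steps (not covered above), the usual sequence is to add new, larger groups and retire the old ones; a sketch with example group numbers and paths - check V$LOG and V$LOGFILE for your real layout first, and list two members per group if you mirror (mirrlog/origlog):
    SELECT group#, bytes, status FROM v$log;
    ALTER DATABASE ADD LOGFILE GROUP 4
        ('/oracle/origlogA/log4.dbf', '/oracle/mirrlogA/log4.dbf') SIZE 200M;
    ALTER DATABASE ADD LOGFILE GROUP 5
        ('/oracle/origlogB/log5.dbf', '/oracle/mirrlogB/log5.dbf') SIZE 200M;
    -- Switch until an old group shows INACTIVE in V$LOG, then drop it:
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    ALTER DATABASE DROP LOGFILE GROUP 1;
    -- Repeat for the remaining old groups, then delete their files at OS level.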

  • Log files in bdump and udump

    Respected Sir
    It's very serious because log files in bdump and udump are created frequently and they take too much space.
    The problem is that I am not able to understand or solve this problem. I am pasting a few logs here.
    BDUMP->
    Dump file h:\oracle\product\10.2.0\admin\orcl2\bdump\orcl2_lgwr_1364.trc
    Thu Jul 30 09:15:58 2009
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 1
    CPU : 4 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:3481M/4094M, Ph+PgF:5024M/5975M, VA:1290M/2047M
    Instance name: orcl2
    Redo thread mounted by this instance: 1
    Oracle process number: 6
    Windows thread id: 1364, image: ORACLE.EXE (LGWR)
    *** 2009-07-30 09:15:58.687
    *** SERVICE NAME:() 2009-07-30 09:15:58.671
    *** SESSION ID:(166.1) 2009-07-30 09:15:58.671
    Media recovery not enabled or manual archival only 0x10000
    Maximum redo generation record size = 156160 bytes
    Maximum redo generation change vector size = 150672 bytes
    *** 2009-07-30 10:34:04.875
    Media recovery not enabled or manual archival only 0x10000
    *** 2009-07-30 10:34:28.312
    Media recovery not enabled or manual archival only 0x10000
    *** 2009-07-30 11:23:36.000
    Media recovery not enabled or manual archival only 0x10000
    *** 2009-07-30 18:06:53.718
    Media recovery not enabled or manual archival only 0x10000
    *** 2009-07-30 22:02:26.734
    Media recovery not enabled or manual archival only 0x10000
    *** 2009-07-31 04:55:48.312
    Media recovery not enabled or manual archival only 0x10000
    UDUMP
    Dump file h:\oracle\product\10.2.0\admin\orcl2\udump\orcl2_ora_192.trc
    Wed Jul 29 12:34:17 2009
    ORACLE V10.2.0.1.0 - Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Windows Server 2003 Version V5.2 Service Pack 1
    CPU : 4 - type 586, 1 Physical Cores
    Process Affinity : 0x00000000
    Memory (Avail/Total): Ph:3653M/4094M, Ph+PgF:5205M/5975M, VA:1325M/2047M
    Instance name: orcl2
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 149
    Windows thread id: 192, image: ORACLE.EXE (SHAD)
    *** SERVICE NAME:() 2009-07-29 12:34:17.703
    *** SESSION ID:(159.1) 2009-07-29 12:34:17.703
    kccsga_update_ckpt: num_1 = 8, num_2 = 0, num_3 = 0, lbn_2 = 0, lbn_3 = 0
    Successfully allocated 3 recovery slaves
    Using 364 overflow buffers per recovery slave
    Thread 1 checkpoint: logseq 317, block 2, scn 2070858
    cache-low rba: logseq 317, block 3
    on-disk rba: logseq 317, block 248, scn 2071389
    start recovery at logseq 317, block 3, scn 0
    ----- Redo read statistics for thread 1 -----
    Read rate (ASYNC): 122Kb in 0.35s => 0.34 Mb/sec
    Total physical reads: 4096Kb
    Longest record: 8Kb, moves: 0/201 (0%)
    Longest LWN: 10Kb, moves: 0/63 (0%), moved: 0Mb
    Last redo scn: 0x0000.001f9b5c (2071388)
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 1
    Average hash chain = 92/92 = 1.0
    Max compares per lookup = 1
    Avg compares per lookup = 598/690 = 0.9
    *** 2009-07-29 12:34:24.015
    KCRA: start recovery claims for 92 data blocks
    *** 2009-07-29 12:34:24.343
    KCRA: blocks processed = 92/92, claimed = 92, eliminated = 0
    *** 2009-07-29 12:34:24.390
    Recovery of Online Redo Log: Thread 1 Group 1 Seq 317 Reading mem 0
    ----- Recovery Hash Table Statistics ---------
    Hash table buckets = 32768
    Longest hash chain = 1
    Average hash chain = 92/92 = 1.0
    Max compares per lookup = 1
    Avg compares per lookup = 690/690 = 1.0
    Thanks

    You will still get lots of messages. Which messages you get depends on many things, some of which are changes you can make in the init files. For others, Oracle only knows to get rid of them if people log support calls. Whether they decide to get rid of them is mysterious, but probably not likely on a version soon to stop development.
    Don't get mad at Hemant, he's just an experienced guy trying to help you, for free. You can get mad at me though, because my cares are elsewhere. You can get mad at Oracle too, because this is evidence of sloppiness on their part. But be nice to the support people, it's not their fault, they're being paid to help you. Though you can get mad at them if they don't help you after you've paid them, and you've jumped through hoops to get their attention.
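    One concrete knob for the original complaint: the MAX_DUMP_FILE_SIZE parameter caps the size of individual trace files in bdump/udump (it does not stop them being created); a sketch - SCOPE=BOTH assumes you run with an spfile:
    ALTER SYSTEM SET max_dump_file_size = '10M' SCOPE = BOTH;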

  • SQL LOG FILE SIZE INCREASING

    Hi DBA's
    The SQL log file occupies more and more disk space on the server; the overall database size is 8 GB.
    How do I decrease the SQL LDF file size on the server? Please explain the suitable steps to perform.
    Thanks
    DBA

    use master
    go
    dump transaction <YourDBName> with no_log
    go
    use <YourDBName>
    go
    DBCC SHRINKFILE (<YourDBNameLogFileName>, 100)
    -- where 100 is the size you may want to shrink it to in MB; change it to your needs
    go
    -- then you can call this to check that all went fine
    dbcc checkdb(<YourDBName>)
    Andy,
    What is the point in asking the user to use NO_LOG when you did not even mention what this evil command will do? It's seriously not required, the reason being that the initial size of the log file is set to 8 GB.
    Plus, what is the point in running CHECKDB?
    I don't agree with any part you pointed out.
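    For the record, on SQL Server 2005 and later the safer sequence is to back up the log and then shrink it, since WITH NO_LOG is deprecated (and removed in SQL Server 2008); the names and paths below are examples:
    BACKUP LOG [YourDBName] TO DISK = N'D:\Backup\YourDBName_log.trn';
    GO
    USE [YourDBName];
    GO
    DBCC SHRINKFILE (YourDBName_log, 100);  -- target size in MB
    GO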

  • Log file size

    We have a DNS server running on Solaris 9. It's generating huge logs, hence the /var/adm/messages file is very big. Is there any way to create a separate log file for each day, or can I restrict the log file size for a single file?
    Thank you

    Hmmm,
    For what type environment is this DNS server used for? How many domains/delegated domains are configured on the host?
    I think by default BIND allows 1000 recursive lookup connections. (That is already plenty and if you have that amount of legitimate traffic you will have to add more DNS servers and configure the nodes accordingly)
    Is the server listed as a Name Server for your domain and used externally for name resolution for your domain host entries, maybe the SOA?
    nslookup (enter)
    set type=ns (enter)
    your_domain_name (i.e. your_domain.com) (enter)
    Or
    dig -q NS your_domain.com
    If the affected server returns in the list, it is NEVER EVER a good idea to allow recursive lookups.
    My guess is that you are subject to a denial of service, unless you host a fairly large environment with 1000s of hosts.
    Change the recursive-client connections back (your system cannot handle 5000 recursive lookups, and your system utilization shows this.)
    Then configure
    category queries { your_query_file; }; in your named.conf
    restart BIND
    Use rndc to change the trace level to 1
    Let it run for 2 -5 min and stop BIND entirely
    Then run something like:
    cat your_query_file | cut -d'/' -f2 | sort | uniq -c | more (depends on the log file format; better yet, use nawk)
    take a quick look to see if there is one IP that is hammering your system.
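    On the original question of rotating /var/adm/messages itself: Solaris 9 ships logadm(1M) for exactly this; a sketch that writes an entry to /etc/logadm.conf keeping 7 old copies, rotating at 10 MB, compressing rotated copies, and HUPping syslogd so it reopens the log - verify the flags against your man page:
    logadm -w /var/adm/messages -C 7 -s 10m -z 0 \
        -a 'kill -HUP `cat /var/run/syslog.pid`'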

  • SQL log file size is extending rapidly

    Hello All,
    We are using ECC 6.0, our database is SQL 2005 & operating system is Windows NT 4x AMD64 L.
    Our database log file size is increasing rapidly; its size is now more than all 4 data files combined (about 300 GB).
    Last week I tried to shrink the log file but it didn't work.
    Now little space remains on the disk; please help me.
    The system has now started giving a dump at login time, and the dump is like "START_CALL_SICK".
    I am attaching the dump error text file.
    Please help; why is this happening?
    Thanks in advance
    Mahendra

    Hi,
    I have backed up the log file & shrunk the file but it didn't work for me
    What is the result? It shrinks the log and releases all the space (for all committed transactions).
    How can I add another log file?
    Can I delete the old log file after adding a new log file?
    You can add another log file by following the steps below, but in your case this is not the right solution, because you already have a large log file configured for your database (its size is more than all 4 data files, about 300 GB).
    Open SQL Server Management Studio > Expand databases > Right-click on the database > Select Files > Click on Add > Enter the input parameters (logical file name, path, initial size, etc.) and click OK.
    If the system is not allowing you to shrink the log file, it means you have active transactions in the system which are continuously using your log file.
    Regards,
    Nick Loy
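    One quick check before trying to shrink again (SQL Server 2005 and later): ask the engine why it cannot truncate the log; a sketch, with the database name as a placeholder:
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = '<YourECCDatabase>';
    -- LOG_BACKUP means a log backup is due; ACTIVE_TRANSACTION means an open
    -- transaction is pinning the log.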

  • Premiere Pro CC 2014 file size limits?

    Hi, a friend needs to create a 37-hour uncompressed AVI file (by combining an AVI of pictures and an MP3 of audio of a performance) and is wondering if it can be done using Adobe Premiere Pro CC 2014, i.e. are there any file size limits? Any comments much appreciated.

    Would be interesting to know how you are going to store that. 37 hours of HD uncompressed in an AVI wrapper requires around 24 TB of free disk space on a single volume (1920 x 1080 pixels x 3 bytes x 30 fps is roughly 178 MB/s, about 626 GB per hour, so 37 hours lands in the 23-24 TB range). That means you would need something like 12 x 3 TB drives in RAID 6 + 1 hot spare, requiring at least a 16-port RAID controller for those 13 disks, just for the output file. Due to fill-rate degradation, that is cutting it close. Additionally, at least an X79 or X99 motherboard with a 2011 socket would be necessary.
    Next question is, who would be crazy enough to marathon-watch 37 hours of a performance?
    You may consider another workflow, not uncompressed and not 37 hours.
