Switch log file is too slow

hello,
my database is 10gR2 and the specified size of the online redo log files is 50 MB.
The switching of the log file seems too slow:
even though the size of the archived log file has reached around 50,029, there is still no log switch.
Why doesn't the log switch occur immediately once the size reaches that limit (50,029)?
What setting do I need to change?
any help... any guess??
thank you..

Hello,
The frequency of the log switch depends on the activity of the Database.
You may check the alert log for messages like these:
Thread 1 cannot allocate new log, sequence ...
Checkpoint not complete
While Oracle still needs the information in a redo log to complete the checkpoint, it cannot reuse that log. So it can happen that the database doesn't switch the redo log until the checkpoint is complete.
If you have this kind of message, the following link may help you:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:69012348056
Often, it's solved by adding redo log files.
The Parameter fast_start_mttr_target may also help the Database to optimize the checkpoint:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams068.htm
Otherwise, if you don't have this message (checkpoint not complete ...) but you want log switches to happen more frequently (even if the redo log is not completely full), there is the parameter archive_lag_target. It can force a log switch at least every n seconds:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams009.htm
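For instance, a rough sketch (the 900-second value is only an illustration, and SCOPE=BOTH assumes you use an spfile): first check how often switches currently happen, then cap the maximum time between switches:
-- log switches per hour over the last day
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
FROM v$log_history
WHERE first_time > SYSDATE - 1
GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER BY 1;
-- force a log switch at least every 900 seconds (15 minutes)
ALTER SYSTEM SET archive_lag_target = 900 SCOPE=BOTH;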
NB: It's not recommended to have log switches too frequently; in general you shouldn't have more than 3 or 4 log switches every hour.
Hope this helps.
Best regards,
Jean-Valentin
Edited by: Lubiez Jean-Valentin on Mar 5, 2011 11:44 AM

Similar Messages

  • Switch log file

    Hi Everyone,
    We are using a 9i database and, during an import, we temporarily added 2 new log groups (each 2 GB in size) to improve the import performance. After the completion of the import we planned to drop those newly added log groups. In order to do that we issued the command "alter system switch logfile",
    but it has been running for more than 8 hrs. We have also increased the log_archive_max_processes value, but it is still running.
    Could anyone please suggest how to improve the speed of the switch logfile operation?
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
    PL/SQL Release 9.2.0.8.0 - Production
    CORE 9.2.0.8.0 Production
    TNS for HPUX: Version 9.2.0.8.0 - Production
    NLSRTL Version 9.2.0.8.0 - Production
    Regards,
    Jai

    Hi,
    Please find below the last few lines of the alert log:
    Thu Mar 7 15:06:15 2013
    alter database datafile 7 autoextend off
    Thu Mar 7 15:06:15 2013
    Completed: alter database datafile 7 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 8 autoextend off
    Completed: alter database datafile 8 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 9 autoextend off
    Completed: alter database datafile 9 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 10 autoextend off
    Completed: alter database datafile 10 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 11 autoextend off
    Completed: alter database datafile 11 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 12 autoextend off
    Completed: alter database datafile 12 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 13 autoextend off
    Completed: alter database datafile 13 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database datafile 14 autoextend off
    Completed: alter database datafile 14 autoextend off
    Thu Mar 7 15:06:15 2013
    alter database tempfile 1 autoextend off
    Completed: alter database tempfile 1 autoextend off
    Thu Mar 7 16:02:53 2013
    ARCH: Evaluating archive log 1 thread 1 sequence 7
    Thu Mar 7 16:03:17 2013
    ARCH: Evaluating archive log 1 thread 1 sequence 7
    Thu Mar 7 16:26:01 2013
    alter database datafile 7 autoextend off
    Thu Mar 7 16:26:01 2013
    Completed: alter database datafile 7 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 8 autoextend off
    Completed: alter database datafile 8 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 9 autoextend off
    Completed: alter database datafile 9 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 10 autoextend off
    Completed: alter database datafile 10 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 11 autoextend off
    Completed: alter database datafile 11 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 12 autoextend off
    Completed: alter database datafile 12 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 13 autoextend off
    Completed: alter database datafile 13 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database datafile 14 autoextend off
    Completed: alter database datafile 14 autoextend off
    Thu Mar 7 16:26:01 2013
    alter database tempfile 1 autoextend off
    Completed: alter database tempfile 1 autoextend off
    Fri Mar 8 04:21:08 2013
    ALTER SYSTEM SET log_archive_max_processes=6 SCOPE=BOTH;
    Fri Mar 8 05:05:43 2013
    alter database drop logfile group 4
    Fri Mar 8 05:05:43 2013
    ORA-350 signalled during: alter database drop logfile group 4...
    Fri Mar 8 05:07:13 2013
    ARCH: Evaluating archive log 1 thread 1 sequence 7
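    For reference, ORA-350 generally means the group being dropped still needs to be archived (or is still ACTIVE/CURRENT). A small sketch of the usual check before retrying the drop (group 4 is taken from the alert log above; this assumes the archiver is able to complete):
    -- a redo group can only be dropped once it is INACTIVE and archived
    SELECT group#, thread#, status, archived FROM v$log;
    -- if the group is still CURRENT or ACTIVE, switch away from it and checkpoint first
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;
    -- then retry the drop
    ALTER DATABASE DROP LOGFILE GROUP 4;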

  • Alert <SID>.log file size too big, how to keep it under control

    alert_<SID>.log file size is too big. How do I keep it under control?
    -rw-r--r-- 1 oracle dba 182032983 Aug 29 07:14 alert_g54nha.log

    Metalink Note:296354.1
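    In short, the alert log can be renamed or removed at the OS level while the instance is running; Oracle simply starts a new one on the next write. A small sketch to find where it lives (the mv/gzip commands are illustrative):
    -- location of the alert log in 9i/10g (background_dump_dest); on 11g it is under the ADR (see v$diag_info)
    SELECT value FROM v$parameter WHERE name = 'background_dump_dest';
    -- then rotate it at the OS level, for example:
    --   mv alert_g54nha.log alert_g54nha.log.old
    --   gzip alert_g54nha.log.old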

  • Switching log file

    Hi All,
    Greeting of the day...
    please suggest on the below...
    I need to do an alter system switch logfile if the log file has not switched for 15 minutes.
    thanks,
    baskar.l

    I'd suggest you not do it manually, but rather use the [FAST_START_MTTR_TARGET|http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams068.htm#REFRN10058] parameter.
    The FAST_START_MTTR_TARGET initialization parameter is used to specify the amount of time (in seconds) a database should take to perform a crash or instance recovery. The value set for the FAST_START_MTTR_TARGET initialization parameter is internally converted to a set of parameters that modify the operation of Oracle in such a way that recovery time is as close to this estimate as possible.
    The FAST_START_IO_TARGET, LOG_CHECKPOINT_INTERVAL, and LOG_CHECKPOINT_TIMEOUT initialization parameters should not be used when using the FAST_START_MTTR_TARGET initialization parameter. Setting these parameters to active values obstructs the normal functioning of the FAST_START_MTTR_TARGET initialization parameter, resulting in unexpected results.
    The maximum value that can be set for the FAST_START_MTTR_TARGET initialization parameter is 3600. If a value greater than 3600 is set, Oracle automatically rounds it to 3600.
    Edited by: Kamran Agayev A. on Aug 4, 2009 12:06 PM
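    A minimal sketch of the FAST_START_MTTR_TARGET suggestion above (the 300-second target is just an example; SCOPE=BOTH assumes an spfile):
    -- let Oracle drive checkpointing toward a recovery-time goal
    ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE=BOTH;
    -- watch how the instance tracks against that target
    SELECT target_mttr, estimated_mttr, writes_mttr FROM v$instance_recovery;
    -- if a purely time-based switch is really required, archive_lag_target (in seconds) does that instead:
    -- ALTER SYSTEM SET archive_lag_target = 900 SCOPE=BOTH;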

  • File.length() too slow?

    I'm working on an application that needs to process directory listings on a network fileserver, looking for files and directories, and getting basic info about the files found (size, last modified date).
    I'm trying to use java.io.File.list(java.io.FileFilter) with a java.io.FileFilter returning file.isDirectory() or file.isFile() to get a list of just files or directories, and then get the rest of the file information later for each of the returned Files. However, when it gets to a directory with a lot of files (13000+), it seems to be unacceptably slow. I have tracked it down to File.length(), taking up to 80ms per file (!), which amounts to only about 13 files per second.
    It's not a problem of the platform (Win XP); a directory listing containing all the information I need takes less than 3 seconds for this big directory, while getting the same through the Java APIs (calling isDirectory(), isFile(), length() and lastModified() within the FileFilter callback) takes ages.
    Is there a better way to get a directory listing, without being orders of magnitude slower than necessary? I can think of calling the native dir command and parsing the output, but that is a mess...

    I have tracked this down to the native implementation of File.length() - the VC++ runtime function _stati64 which they use to get the file length is too slow.
    Why don't they use the Windows API? I have tested that getting the file size using GetFileAttributesEx() is at least 50x faster than _stati64 for my file!

  • File sharing too slow

    Hi,
    I have 2 Macs connected via a hub. My internet connection is very fast and everything is OK, but file sharing is too slow. 1 GB takes 8 hours to be transferred.
    Any fix?
    Thanks!

    yup, same here. A 9 GB file shared via AirPort Extreme from my Mac Pro (8 proc) to my 17" MacBook Pro was going to take 4 hrs?????? Doing it via FireWire now, 10 minutes.
    Is there a network setting I'm missing, as both machines obviously have 802.11n and the Extreme is set up for N (B/G compatible)??
    jas

  • How do I fix a file where relinking or updating files is too slow and generating a 60+ MB PDF from a 4 MB InDesign file?

    I have an 18-page file linking several PDF images, and working with this file is horribly slow. These PDF images (15 total) easily take a minute to relink or update (the PDFs are at most 2 MB, but the majority are less than 1 MB). The InDesign file is only 4 MB, but when I export or print to PDF I get a 60-70 MB file. Exporting or printing these 18 pages easily takes 30 min. I can't use the file with a higher display quality unless I have the patience of watching a plant grow on that day. I've read most of the reasons why InDesign would be slow. At work I'm operating Windows 7, InDesign 6, and I have 32MB of RAM. Extremely frustrating and reminding me of the days of dial-up internet. I've already done a SAVE AS and tried to reduce the linked files' image size, and it has not helped. I was able to get my 72 MB PDF file down to 6 MB by using Adobe Acrobat and getting rid of the hidden information, but I lose a lot of quality in the linework. I am guessing it's because these PDFs are generated from Revit or AutoCAD that they might be holding onto some hidden info. So how do I make this file work faster (relink, export, PDF etc)?

    Just to note as well, InDesign only links to files, there's no embedding; you would see a 4 MB file, but if it's linked to a 10 MB PDF and it has to include all of that PDF in the final file, then it will be circa a 14 MB file.
    What you have is a vector-heavy PDF and there's no real way to reduce that without losing quality.
    You may have better joy by opening the linked PDFs in Acrobat and using File>Print and choosing Adobe PDF (or another method) to basically refry the CAD PDFs.
    I'd do this with a copy of all the files though, just in case something goes drastically wrong.
    Frankly, I feel it's the nature of how CAD PDFs are made that's causing the downfall here; refrying and printing these files to PDF removes a lot of the complicated things happening in CAD files and makes a very dulled-down PDF version.
    Once you've done this, then try File>Export from InDesign to make your PDF.

  • Lenovo Beacon file transfer too slow

    dear all, I'm a newbie here and I need your help. I own a Beacon with 2x1TB HDD in RAID 0 mode, and when I move data to the Beacon the max file transfer speed is about 5 MB/s. In my opinion this is tooooo slow. Can anyone help me, or tell me about your file transfer speed to the Beacon? My configuration: Lenovo Beacon + Netgear 5-port Gigabit Switch (GS305) + Win7 PC. Everything is LAN wired. I already tried another PC (a notebook) but with the same result. Thank you!

    The Beacon uses the HTTP protocol, and we use Filezilla as a benchmark comparison (note: Filezilla uses the FTP protocol).
    Test config: Router, Netgear WGR614 V9 with LAN speed 100 Mbps; Thinkpad notebook (Win7) wire-connected to the router, Beacon wire-connected to the router.
    Test result: single file speed 5 MBps and three files 10 MBps with the Beacon; single file 11.6 MBps, multiple files 11 MBps with Filezilla.

  • Reading from XML file is too slow

    I am trying to read some values from an XML file; it takes about 1 or 2 minutes to finish reading, and my XML file has about 4000 XML elements. Does anyone know if this is normal or if something is wrong? How could I make it faster?
    Thank you

    fine if it helps others... I hope NI will not be angry *fg*
    thanks for your bug report; I had not tested the sub.vi until now.
    Exchanging the OR with an AND solves the problem with the endless loop, but error checking will not work (the loop only stops if no error AND no start tag is found).
    Changing the loop termination condition and moving the NOT from the error condition to the no_starttag_flag does both: it correctly stops the loop when an error occurs OR no further elements are found.
    I attached the new sub.vi for versions 7.0 and 7.1, and also put some colors in the logo, for your convenience.
    catweazle
    Attachments:
    xmlFile_GetElements_(Array).vi (76 KB)
    xmlFile_GetElements_(Array).vi (65 KB)

  • RAC 10.2.0.4, event gc cr block busy & log file switch

    hello everybody,
    I would like to know if there is any dependency between gc cr block busy and a log switch on one node of the RAC cluster.
    I had a select whose completion time was 12 secs instead of 1; the start time of the select is the start time of the log switch on that node.
    But when I looked into the active session history, the session running that select had been waiting on gc cr block busy instead of log file switch completion.
    While looking through Google resources I noticed that "The gc current block busy and gc cr block busy wait events indicate that the remote instance received the block after a remote instance processing delay. In most cases, this is due to a log flush".
    I would be really grateful if anybody could pin down the dependency I've mentioned and explain the cause of the issue, as I cannot quite get why the select took so long.
    Thank you in advance!

    Did you mean "log file switch"?
    That is: log file switch (checkpoint incomplete), log file switch (archiving needed), log file switch/archive, log file switch (clearing log file), log file switch completion or log switch/archive?
    However, an instance can always wait... if you find high wait values, you may need to tune your database.
    Please show us:
    - Top 5 Wait Events
    SQL> alter session set nls_date_format='YYYY/MM/DD HH24:MI:SS';
    SQL> select name, completion_time from V$ARCHIVED_LOG order by completion_time;
    Check how often you switch logfiles to the archive log... on every log file switch you may find "log file switch" waiting.
    I see... you have no high DML activity.
    But please also check the top segments/objects and queries on the AWR report (for example: Segments by Physical Writes).
    Just investigate.
    Good Luck
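    As an additional sketch (the time range below is purely illustrative), ASH can show whether the session was really waiting on gc cr block busy or on a log-file event during that log switch:
    SELECT sample_time, session_id, sql_id, event, time_waited
    FROM v$active_session_history
    WHERE sample_time BETWEEN TIMESTAMP '2011-03-05 11:00:00'
                          AND TIMESTAMP '2011-03-05 11:05:00'
      AND event IN ('gc cr block busy', 'gc current block busy',
                    'log file switch completion', 'log file sync')
    ORDER BY sample_time;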

  • Reader 10.1 update fails, creates huge log files

    Last night I saw the little icon in the system tray saying an update to Adobe Reader was ready to be installed.
    I clicked it to allow the install.
    Things seemed to go OK (on my Windows XP Pro system), although very slowly, and it finally got to copying files.
    It seemed to still be doing something and was showing that it was copying file icudt40.dll.  It still displayed the same thing ten minutes later.
    I went to bed, and this morning it still showed that it was copying icudt40.dll.
    There is no "Cancel" button, so this morning I had to stop the install through Task Manager.
    Now, in my "Local Settings\TEMP" directory, I have a file called AdobeARM.log that is 2,350,686 KB in size and a file MSI38934.LOG that is 4,194,304 KB in size.
    They are so big I can't even look at them to see what's in them.  (Too big for Notepad.  When I tried to open the smaller log file, AdobeARM.log, with Wordpad it was taking forever and showing only 1% loaded, so after five minutes, I terminated the Wordpad process so I could actually do something useful with my computer.)
    You would think the installer would be smart enough to stop at some point when the log files begin to get enormous.
    There doesn't seem to be much point to creating log files that are too big to be read.
    The update did manage to remove the Adobe Reader X that was working on my machine, so now I can no longer read PDF files.
    Maybe I should go back Adobe Reader 9.
    Reader X never worked very well.
    Sometimes the menu bar showed up, sometimes it didn't.
    PDF files at the physics e-print archive always loaded with page 2 displayed first.  And if you forgot to disable the look-ahead capability, you could get banned from the e-print archive site altogether.
    And I liked the user interface for the search function a lot better in version 9 anyway.  Who wants to have to pop up a little box for your search phrase when you want to search?  Searching is about the most important and routine activity one does, other than going from page to page and setting the zoom.

    Hi Ankit,
    Thank you for your e-mail.
    Yesterday afternoon I deleted the > 2 GB AdobeARM.log file and the > 4.194 GB
    MSI38934.LOG file.
    So I can't upload them.  I expect I would have had a hard time doing so
    anyway.
    It would be nice if the install program checked the size of the log files
    before writing to them and gave up if the size was, say, three times larger
    than some maximum expected size.
    The install program must have some section that permits infinite retries or
    some other way of getting into an endless loop.  So another solution would be
    to count the number of retries and terminate after some reasonable number of
    attempts.
    Something had clearly gone wrong and there was no way to stop it, except by
    going into the Task Manager and terminating the process.
    If the install program can't terminate when the log files get too big, or if
    it can't get out of a loop some other way, there might at least be a "Cancel"
    button so the poor user has an obvious way of stopping the process.
    As it was, the install program kept on writing to the log files all night
    long.
    Immediately after deleting the two huge log files, I downloaded and installed
    Adobe Reader 10.1 manually.
    I was going to turn off Norton 360 during the install and expected there
    would be some user input requested between the download and the install, but
    there wasn't.
    The window showed that the process was going automatically from download to
    install. 
    When I noticed that it was installing, I did temporarily disable Norton 360
    while the install continued.
    The manual install went OK.
    I don't know if temporarily disabling Norton 360 was what made the difference
    or not.
    I was happy to see that Reader 10.1 had kept my previous preference settings.
    By the way, one of the default settings in "Web Browser Options" can be a
    problem.
    I think it is the "Allow speculative downloading in the background" setting.
    When I upgraded from Reader 9 to Reader 10.0.x in April, I ran into a
    problem. 
    I routinely read the physics e-prints at arXiv.org (maintained by the Cornell
    University Library) and I got banned from the site because "speculative
    downloading in the background" was on.
    [One gets an "Access denied" HTTP response after being banned.]
    I think the default value for "speculative downloading" should be unchecked
    and users should be warned that one can lose the ability to access some sites
    by turning it on.
    I had to figure out why I was automatically banned from arXiv.org, change my
    preference setting in Adobe Reader X, go to another machine and find out who
    to contact at arXiv.org [I couldn't find out from my machine, since I was
    banned], and then exchange e-mails with the site administrator to regain
    access to the physics e-print archive.
    The arXiv.org site has followed the standard for robot exclusion since 1994
    (http://arxiv.org/help/robots), and I certainly didn't intend to violate the
    rule against "rapid-fire requests," so it would be nice if the default
    settings for Adobe Reader didn't result in an unintentional violation.
    Richard Thomas

  • Problem to send result from log file, the logfile is too large

    Hi SCOM people!
    I have a problem when monitoring a log file on a Red Hat system: I get an alert telling me that the log file is too large to send (see the alert context below). I guess the problem is that the server logs too much within the 5-minute interval between SCOM checks.
    Any ideas how to solve this?
    Date and Time: 2014-07-24 19:50:24
    Log Name: Operations Manager
    Source: Cross Platform Modules
    Event Number: 262
    Level: 1
    Logging Computer: XXXXX.samba.net
    User: N/A
     Description:
    Error scanning logfile / xxxxxxxx / server.log on values xxxxx.xxxxx.se as user <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser>; The operation succeeded and cannot be reversed but the result is too large to send.
    Event Data:
    < DataItem type =" System.XmlData " time =" 2014-07-24T19:50:24.5250335+02:00 " sourceHealthServiceId =" 2D4C7DFF-BA83-10D5-9849-0CE701139B5B " >
    < EventData >
      < Data > / xxxxxxxx / server.log </ Data >
      < Data > xxxxx.xxxxx.se </ Data >
      < Data > <SCXUser><UserId>xxxxxx</UserId><Elev></Elev></SCXUser> </ Data >
      < Data > The operation succeeded and cannot be reversed but the result is too large to send. </ Data >
      </ EventData >
      </ DataItem >

    Hi Fredrik,
    At any one time, SCX can return 500 matching lines. If you're trying to return > 500 matching lines, then SCX will throttle your limit to 500 lines (that is, it'll return 500 lines, note where it left off, and pick up where it left off next time log files
    are scanned).
    Now, be aware that Operations Manager will "cook down" multiple regular expressions to a single agent query. This is done for efficiency purposes. What this means: If you have 10 different, unrelated regular expressions against a single log file, all of
    these will be "cooked down" and presented to the agent as one single request. However, each of these separate regular expressions, collectively, are limited to 500 matching lines. Hope this makes sense.
    This limit is set because (at least at the time) we didn't think Operations Manager itself could handle a larger response on the management server itself. That is, it's not an agent issue as such, it's a management server issue.
    So, with that in mind, you have several options:
    If you have separate RegEx expressions, you can reconfigure your logging (presumably done via syslog?) to log your larger log messages to a separate log file. This will help "cook down", but ultimately, the limit of 500 RegEx results is still there; you're
    just mitigating cook down.
    If a single RegEx expression is matching > 500 lines, there is no workaround to this today. This is a hardcoded limit in the agent, and can't be overridden.
    Now, if you're certain that your regular expression is matching < 500 lines, yet you're getting this error, then I'd suggest contacting Microsoft Support Services to open an RFC and have this issue escalated to the product team. Due to a logging issue
    within logfilereader, I'm not certain you can enable tracing to see exactly what's going on (although you could use command line queries to see what's happening internally). This is involved enough where it's best to get Microsoft Support involved.
    But as I said, this is only useful if you're certain that your regular expression is matching < 500 lines. If you are matching more than this, this is a known restriction today. But with an RFC, even that could at least be evaluated to see exactly the
    load > 500 matches will have on the management server.
    /Jeff

  • Archived log files not registered in the Database

    I have Windows Server 2008 R2.
    I have Oracle 11g R2.
    I configured a primary and a standby database on 2 physical servers; please find below the verification:
    I am using DG Broker.
    Recently I did a failover from the primary to the standby database.
    Then I did REINSTATE DATABASE to return the old primary to standby mode.
    Then I did a switchover again.
    My problem is that the archived logs are not registered and not applied.
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    I did alter system switch logfile and then issued the following statement to check, and I found the same number on both primary and standby; it has not changed:
    SQL> select max(sequence#) from v$archived_log;
    MAX(SEQUENCE#)
             16234
    Can anybody help please?
    Regards

    Thanks for the reply.
    What I mean is: after I do an alter system switch logfile, I can see the archived log file being generated on the physical disk, but when I run
    select MAX(SEQUENCE#) FROM V$ARCHIVED_LOG;
    the sequence number does not change, whereas it should increase by 1 whenever I do a switch logfile.
    However, I did as you asked; please find the result below:
    SQL> alter system switch logfile;
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> /
    System altered.
    SQL> SELECT DB_NAME,HOSTNAME,LOG_ARCHIVED,LOG_APPLIED_02,LOG_APPLIED_03,APPLIED_TIME,LOG_ARCHIVED - LOG_APPLIED_02 LOG_GAP_02,
      2  LOG_ARCHIVED - LOG_APPLIED_03 LOG_GAP_03
      3  FROM (SELECT NAME DB_NAME FROM V$DATABASE),
      4  (SELECT UPPER(SUBSTR(HOST_NAME, 1, (DECODE(INSTR(HOST_NAME, '.'),0, LENGTH(HOST_NAME),(INSTR(HOST_NAME, '.') - 1))))) HOSTNAME FROM V$INSTANCE),
      5  (SELECT MAX(SEQUENCE#) LOG_ARCHIVED FROM V$ARCHIVED_LOG WHERE DEST_ID = 1 AND ARCHIVED = 'YES'),
      6  (SELECT MAX(SEQUENCE#) LOG_APPLIED_02 FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES'),
      7  (SELECT MAX(SEQUENCE#) LOG_APPLIED_03 FROM V$ARCHIVED_LOG WHERE DEST_ID = 3 AND APPLIED = 'YES'),
      8  (SELECT TO_CHAR(MAX(COMPLETION_TIME), 'DD-MON/HH24:MI') APPLIED_TIME FROM V$ARCHIVED_LOG WHERE DEST_ID = 2 AND APPLIED = 'YES');
    DB_NAME  HOSTNAME      LOG_ARCHIVED  LOG_APPLIED_02  LOG_APPLIED_03  APPLIED_TIME  LOG_GAP_02  LOG_GAP_03
    EPPROD   CORSKMBBOR01         16252           16253          (null)  15-JAN/12:04          -1      (null)
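    One possible explanation (only a guess based on the failover/reinstate described above): after the failover the database has a new incarnation, so new archives start again from sequence 1 while the old rows with sequence 16234 remain in V$ARCHIVED_LOG, and MAX(SEQUENCE#) therefore looks frozen. A sketch to check:
    -- restrict the query to the current incarnation
    SELECT MAX(sequence#)
    FROM v$archived_log
    WHERE resetlogs_change# = (SELECT resetlogs_change# FROM v$database);
    -- and verify every archive destination is VALID
    SELECT dest_id, status, error FROM v$archive_dest_status WHERE status <> 'INACTIVE';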

  • Big Log Files resulting in Out Of Memory of server partition

    Hi Forte users,
    Using the Component/View Log from EConsole on a server partition triggers an Out-Of-Memory of the server partition when the log file is too big (a few MB).
    Does anyone know how to change the log file name or clean the log file of a server partition running interpreted with Forte 2.0H16?
    Any help welcome,
    Thanks,
    Vincent Figari


  • RE: Big Log Files resulting in Out Of Memory of server partition

    To clean a log on NT, you can open it with Notepad, select all and delete, add a space and save as... with the same file name.
    On Unix, you can simply truncate the file by redirecting empty output to it, e.g.:
    # > forte_ex_2390.log
    (Should work on NT too, but I never tried it.)
    Hope that will help

    So try treating your development box like a production box for a day and see if the problem manifests itself.
    Do a load test and simulate massive numbers of changes on your development box.
    Are there any OS differences between production and development?
    How long does it take to exhaust the memory?
    Does it just add new jsp files, or can it also replace old ones?
