Limit amavis.log size

My amavis.log is getting pretty large (81.7 MB). Where do I specify the size limit for amavis.log?

This should help:
- http://download.oracle.com/docs/cd/B32110_01/core.1013/b28944/appendix.htm#i1012194
Particularly the bit on rotating the log files.
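If your host has logrotate (typical on Linux; a sketch only - the log path and rotation policy below are assumptions, so adjust them for your system), a stanza like this keeps the file bounded:

    # install a minimal logrotate policy for amavis.log (run as root; path/policy assumed)
    printf '%s\n' \
      '/var/log/amavis.log {' \
      '    weekly' \
      '    rotate 4' \
      '    compress' \
      '    missingok' \
      '    notifempty' \
      '}' > /etc/logrotate.d/amavis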
Cheers,
Mick

Similar Messages

  • Listener.log size limit on Linux 64-bit

    Hi!
    We have a listener.log file growing very fast because of a very active database. Every month or two I truncate that file to free up disk space, but this time I forgot to truncate it for a while.
    The file grew to 4294967352 bytes and stopped at that size. Everything is working as it should with the listener service - only the listener.log file isn't being updated.
    I've tried to search for more information about the listener.log size limit but haven't found an answer that satisfies me.
    Where can I find more information on why my listener.log file is limited to 4294967352 bytes?
    I suppose this is some OS limit, but how can I check that?
    It is a Linux 64-bit OS with Oracle 10.2.0.4.
    Thanks in advance and best regards,
    Marko Sutic

    Ah, yes... thanks Sybrand for the reminder - my brain just stopped working :)
    Just resolved my problem:
    LSNRCTL> set current_listener LISTENER_DB
    Current Listener is LISTENER_DB
    LSNRCTL> set log_file listener_db1
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.2.10.40)(PORT=1521)))
    LISTENER_DB parameter "log_file" set to listener_db1.log
    The command completed successfully
    LSNRCTL> set log_file listener_db
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.2.10.40)(PORT=1521)))
    LISTENER_DB parameter "log_file" set to listener_db.log
    The command completed successfully
    LSNRCTL>
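    An alternative sketch (not from this thread; the listener name and file names below are assumptions) is to pause logging, move the file aside, and resume. The ~4 GB ceiling is commonly reported as a listener limitation (logging stops once the file passes 2^32 bytes), not an OS limit:
        # rotate listener.log by toggling log_status (names/paths are placeholders)
        printf 'set current_listener LISTENER_DB\nset log_status off\n' | lsnrctl
        mv listener_db.log listener_db.log.old
        printf 'set current_listener LISTENER_DB\nset log_status on\n' | lsnrctl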
    Regards,
    Marko

  • Logfile size limit reached - logging stopped

    From time to time, quite often, a pop-up window appears on my screen and says: "Logfile size limit reached - logging stopped." I can only press the "Accept" button or close it, but it appears again some time later. I haven't found any information on the web.

    Hello,
    I had this problem too, but I found the cause. It comes from the extension 'IPLogger 1.6'.
    This extension stores your IP address in a logfile for every connection, so after some time the file is full and the message "Logfile size limit reached" is displayed.
    The best solution is to remove this extension completely and replace it with a similar one, like 'External IP'.
    Now the problem is solved for me :)
    P.S. This should be considered a bug in the IPLogger extension: the logfile should be cleared when full instead of displaying this annoying message!

  • Weblogic 8.1 Server log size increase in Production environment

    Hi,
    Issue: One of the log files is growing beyond the size specified in the configuration file, resulting in an application outage.
    Issue description:
    We are having problems with the log size on the WebLogic 8.1 server. FileMinSize is set in config.xml.
    New log files such as MYsvr.log00001, MYsvr.log00002, MYsvr.log00003, MYsvr.log00004, etc. are generated correctly when the maximum file size is reached. But at the same time, one of the files keeps growing past the limit set in the configuration file, e.g. the MYsvr.log00001 file is 800 MB in size while the other files (MYsvr.log00002, MYsvr.log00003, etc.) are 10 MB.
    This increase in size of the log has been resulting in an application outage.
    More Details:
    1. Server: BEA Weblogic 8.1 server
    2. Log size is fine in other environments. This is a problem only in the production environment.
    3. The entry in the config.xml is as follows:
    <Server ListenPort="6313" Name="MYsvr" NativeIOEnabled="true" TransactionLogFilePrefix="./">
      <ServerStart Name="MYsvr"/>
      <Log FileMinSize="10000" FileName="MYsvr.log" Name="MYsvr"
           NumberOfFilesLimited="true" RotationType="bySize"/>
      <SSL Name="MYsvr"/>
      <ServerDebug Name="MYsvr"/>
      <WebServer Name="MYsvr"/>
      <ExecuteQueue Name="default" ThreadCount="15"/>
      <KernelDebug Name="MYsvr"/>
    </Server>
    Could you please help with this issue ?
    Thank you.

    Can someone please provide a solution for this issue?

  • Amavis.log - Please point me to a reference explaining amavis.log entries

    Where is the best reference to explain the entries in the /var/log/amavis.log file?
    I am experiencing mixed results with filtering, where mail from the address [email protected] sometimes gets filtered but should never be filtered. Here are the log entries:
    [ap100:/var/log] mlmladmi% grep act-us.info amavis.log
    Jun 16 09:05:31 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) ESMTP< XFORWARD PROTO=SMTP HELO=act-us.info\r\n
    Jun 16 09:05:31 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) ESMTP< MAIL FROM:<[email protected]> SIZE=8775\r\n
    Jun 16 09:05:31 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) lookup (debug_sender) => undef, "[email protected]" does not match
    Jun 16 09:05:31 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) ESMTP> 250 2.1.0 Sender [email protected] OK
    Jun 16 09:05:31 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) ESMTP::10024 /var/amavis/amavis-20060616T090012-25030: <[email protected]> -> <[email protected]> Received: SIZE=8775 from ap100.mlml.calstate.edu ([127.0.0.1]) by localhost (ap100.mlml.calstate.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 25030-10 for <[email protected]>; Fri, 16 Jun 2006 09:05:31 -0700 (PDT)
    Jun 16 09:05:31 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) Checking: [131.118.224.83] <[email protected]> -> <[email protected]>
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) wbl: checking sender <[email protected]>
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) query_keys: [email protected], partners-bounces@, act-us.info, .act-us.info, .info, .
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) lookup_hash([email protected]), no matches
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) lookup_re([email protected]), no matches
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) lookup (blacklist_sender) => undef, "[email protected]" does not match
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) query_keys: [email protected], partners-bounces@, act-us.info, .act-us.info, .info, .
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) lookup_hash([email protected]), no matches
    Jun 16 09:05:34 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) lookup (whitelist_sender) => undef, "[email protected]" does not match
    Jun 16 09:05:35 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) SPAM-TAG, <[email protected]> -> <[email protected]>, No, hits=0.287 tagged_above=-999 required=6 tests=BAYES_00, FORGEDRCVDHELO, INFO_TLD
    Jun 16 09:05:35 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) FWD via SMTP: [127.0.0.1]:10025 <[email protected]> -> <[email protected]>
    Jun 16 09:05:35 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) oneresponse_forall <[email protected]>: success, dsn_needed=0, '250 2.6.0 Ok, id=25030-10, from MTA: 250 Ok: queued as 5577754C266'
    Jun 16 09:05:35 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) Passed, <[email protected]> -> <[email protected]>, Message-ID: <001101c6915e$9ac80170$0428280a@JosiePC>, Hits: 0.287
    Jun 16 09:05:35 ap100.mlml.calstate.edu /usr/bin/amavisd[25030]: (25030-10) Passed CLEAN, <[email protected]> -> <[email protected]>, Hits: 0.287, tag=-999, tag2=6, kill=22, L/Y/0/0
    Jun 16 09:42:07 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) ESMTP< XFORWARD PROTO=SMTP HELO=act-us.info\r\n
    Jun 16 09:42:07 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) ESMTP< MAIL FROM:<[email protected]> SIZE=8120\r\n
    Jun 16 09:42:07 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) lookup (debug_sender) => undef, "[email protected]" does not match
    Jun 16 09:42:07 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) ESMTP> 250 2.1.0 Sender [email protected] OK
    Jun 16 09:42:07 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) ESMTP::10024 /var/amavis/amavis-20060616T094019-29102: <[email protected]> -> <[email protected]> Received: SIZE=8120 from ap100.mlml.calstate.edu ([127.0.0.1]) by localhost (ap100.mlml.calstate.edu [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 29102-07 for <[email protected]>; Fri, 16 Jun 2006 09:42:07 -0700 (PDT)
    Jun 16 09:42:07 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) Checking: [131.118.224.83] <[email protected]> -> <[email protected]>
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) wbl: checking sender <[email protected]>
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) query_keys: [email protected], partners-bounces@, act-us.info, .act-us.info, .info, .
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) lookup_hash([email protected]), no matches
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) lookup_re([email protected]), no matches
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) lookup (blacklist_sender) => undef, "[email protected]" does not match
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) query_keys: [email protected], partners-bounces@, act-us.info, .act-us.info, .info, .
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) lookup_hash([email protected]), no matches
    Jun 16 09:42:10 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) lookup (whitelist_sender) => undef, "[email protected]" does not match
    Jun 16 09:42:11 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) SPAM-TAG, <[email protected]> -> <[email protected]>, No, hits=0.287 tagged_above=-999 required=5 tests=BAYES_00, FORGEDRCVDHELO, INFO_TLD
    Jun 16 09:42:11 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) FWD via SMTP: [127.0.0.1]:10025 <[email protected]> -> <[email protected]>
    Jun 16 09:42:11 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) oneresponse_forall <[email protected]>: success, dsn_needed=0, '250 2.6.0 Ok, id=29102-07, from MTA: 250 Ok: queued as 6607B54D348'
    Jun 16 09:42:11 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) Passed, <[email protected]> -> <[email protected]>, Message-ID: <[email protected]>, Hits: 0.287
    Jun 16 09:42:11 ap100.mlml.calstate.edu /usr/bin/amavisd[29102]: (29102-07) Passed CLEAN, <[email protected]> -> <[email protected]>, Hits: 0.287, tag=-999, tag2=5, kill=22, L/Y/0/0
    [ap100:/var/log] mlmladmi%
    Thanks
    Jeff
    Xserve G5   Mac OS X (10.4.6)  
    PowerBook G4 Titanium   Mac OS X (10.4.3)  

    http://www.ijs.si/software/amavisd/
    http://freshmeat.net/projects/amavisd-new/
    and of course, look at your config file:
    /etc/amavisd.conf
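    As a quick check of how verbose amavisd logging currently is (a sketch; assumes the stock amavisd-new $log_level setting, where 0 is quietest and 5 most verbose):
        # show the configured amavisd-new log verbosity
        grep -E '^[[:space:]]*\$log_level' /etc/amavisd.conf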
    Jeff

  • Limit application log?

    Is there any setting I can use to limit the size of the application log? I know we can back up the application log by copying and then deleting it. Instead of this, is there any other setting that will delete the application log automatically after it reaches a certain size?

    Mike, there is no functionality to limit the logfile size, but there are some other options:
    1. CLEARLOGFILE TRUE/FALSE - with this configuration file setting, the log file is cleared whenever the OLAP server is restarted.
    2. Run a MaxL script periodically to clear the log file with the 'alter system' command, as sketched below.
    Hope this helps. -Maneesh Hari
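    A sketch of the MaxL route mentioned above (the essmsh invocation, host, and credentials are all placeholders - check the MaxL reference for your Essbase version):
        # clear the Essbase server log via the MaxL shell (login details are placeholders)
        printf '%s\n' \
          "login admin identified by 'password' on 'localhost';" \
          'alter system clear logfile;' \
          'logout;' | essmsh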

  • How can I limit the vertical size of the plot legend?

    Hi all,
    in my program I use a waveform graph and its plot legend.
    My problem is that the vertical size of the plot legend grows beyond my frame and screen if I add too many plots to the graph!
    Is there a possibility to limit the vertical size of the plot legend and/or to use a vertical scrollbar in the plot legend?
    I use LV 8.2.1 .
    Thanks
    daHans

    You can write to the "Active Plot" property node. The example given in the thread I linked to before shows using this. Did you take a look at that example?
    I'm only suggesting the alternative of an "Active Plot" control if you're trying to give the user the ability to manipulate one of the plots (like color, point style, etc). If you have a lot of plots, and the plot legend is too big, you can provide a numeric control where the user selects the plot, and then additional controls to set the properties for that plot. Not as intuitive as the plot legend, but if that's what you've gotta do, that's what you've gotta do. Attached is a simple example (LabVIEW 8.20).
    Attachments:
    plot.vi ‏21 KB

  • Limit on the size of the flat file in SAP Application Server

    Hi Gurus,
    My requirement is to upload the accounts payable data to the SAP Application Server.
    The requirement says that if the size of the file exceeds SAP's limit (is there any limit on file size in SAP?), I should upload another file with the remaining records.
    Is there any limit on the size of files uploaded to the SAP Application Server? If there is a limit, what is its value?
    I guess the file size will depend on the Basis configuration and the free space of the Application Server folder.
    How do I check the free space in the Application Server folder and then proceed with placing the file in that folder?
    Thanks & Regards,
    Kiran Kumar K

    The limitation (if any) will be at the OS level; it's nothing to do with SAP as such. Asking your Basis team to provide you with an area with plenty of room will be the easiest option. How big are your files? Most OSs can handle files of many GB. It sounds to me like the "requirement" was written by someone without technical knowledge... That said, the free-space check itself is straightforward, as sketched below.
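    Something like this from the application server's OS shell would do (the directory is an assumption - use wherever your file will actually land):
        # show free space on the filesystem holding the target directory
        df -h /usr/sap/trans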

  • DeserializeJSON - is there a limit on the size of the JSON data that can be converted?

    I have some valid JSON data that's being converted successfully by DeserializeJSON... until it gets to a certain size, or at least that's what appears to be happening. The breaking point seems to be somewhere in the neighborhood of 35,000 characters, about 35 KB. When the conversion fails, it fails with a "JSON parsing failure: Unexpected end of JSON string" message, even though the same JSON is deemed valid by tools like this one: http://www.freeformatter.com/json-validator.html.
    So, is there a limit on the size of the JSON data that can be converted by DeserializeJSON?
    Thanks!

    Thanks Carl.
    The JSON is being submitted in its entirety, confirmed by Fiddler.  And it's actually being successfully saved to a SQL Server nvarchar(MAX) field too.  I can validate that saved JSON.
    I'm actually grabbing the JSON to convert directly from the SQL Server, and your comments / thoughts led me down the path of resolution.
    Turns out that the JSON was being truncated before it ever reached the DeserializeJSON command; it was the cfquery pull that was doing the truncating. The fix was to enable "long text retrieval (CLOB)" for this datasource in CF Admin. I'd never run into that before, nor did I know that this setting existed.
    Thanks again for your comments!

  • Is there a limit on the size of the input for the Solve Linear Equations block?

    Hello,
    I'm trying to figure out why the Solve Linear Equations block will properly function with some sets of data and why it won't with others. What my program is doing is taking a signal and comparing it with a batch of sine and cosine waves to try and pick out patterns in the data. I have different sample sizes and it seems to work when I throw 3900 points at it. However, I have another set with 4550 points and it gives me incorrect amplitudes for my sinusoids.  Is there some limit to the size of the matrices that I can give this block? Or is there some other workaround that still allows me to keep all of my data?
    Thanks,
    David Joseph

    Well, the best way to show what I expect is to look at the entire program. It's pretty evident from the graphs that something isn't right. What is supposed to happen is that the runout amplitudes are found, and then those sinusoids are subtracted from the initial data, leaving tooth-to-tooth data and noise. When I use the larger arrays, it seems as though not all of the data gets through (count the peaks on the product gear runout graph vs. the initial one) and the amplitudes are much too small, such that nothing is really taken out and the tooth-to-tooth data looks like the initial data.
    Also, we will also be using an FFT, but it will be limited to only determining the frequencies we should check. I've fought with the fft blocks quite a bit and I just prefer to not use them. Plus, the guy I'm writing this for wants exact answers and does not want to pad or resample the data or use windows.
    The exact number of data points isn't important (ie. 4550 vs 4551) since I use the array size block to index the for loop.
    As for typical values, they can change a lot based on materials. But, the original 3900 data point sets and the 4550 data point sets used practically identical gears. So, use the original 3900 sets I've included as references (check the RO array block numbers to compare).
    I've included 3 3900 samples, 3 4550 samples, and 3 4550 samples that have been truncated to 3900 or so as constants on the block diagram.
    Also, the check for additional runouts (like 3 per rev, 4 per rev, etc..) is optional, but if you choose to use it, use positive integers only.
    I don't know how much of this program will make sense and I have wires running everywhere... so good luck. Keep in mind I'm only a student and I hadn't touched LabVIEW until about 2 or 3 months ago.
    Thanks,
    David Joseph
    Attachments:
    Full example.vi ‏139 KB

  • Is there a limit on the size of Exchange mailbox?

    One of my users has 8 GB of data stored in their Exchange server mailbox. (Yes, on the server, not in a local PST.)
    Since the iPad can store up to 1,000 recent messages, is there a limit on the total size of those 1,000 emails stored locally on the iPad, assuming the user has a 64 GB new iPad?

    no

  • Is there a limit on the size of SDHC card that can be read with the iPad camera connection kit?

    Is there a limit on the size of SDHC card that can be read with the iPad camera connection kit?

    I've successfully connected 32 GB SDHC and CF cards, so if there is an upper limit, it's at least 32 GB.
    I know SDXC will not work.
    With the cards that don't work, have they been formatted correctly? The camera connection kit will only read cards holding images. (Well, it'll only see the images.) And those images have to have a file name of exactly 8 characters (DSC_2342, for example) and they have to be in a folder named DCIM.
    Anything else it won't read.
    I put a photo on there called 'Christmas' and the connection kit wouldn't see it. I put a photo on there in the DCIM folder named XMAS2342 and it saw that.
    So it's possible that those cards weren't read because they weren't speaking the right language.

  • What is the limit on database size in Oracle 10g Standard Edition / Standard Edition One?

    Hi All,
    What is the limit on database size in Oracle 10g Standard Edition and Standard Edition One? I see an Oracle white paper saying that the limit is 500 GB. Is this correct? If so, what happens once the limit is reached?
    Please help.
    Shiju

    What white paper would that be? I can't see any limit in the Oracle Database 10g Editions comparisons.
    C.

  • Urgent: Huge difference between total redo log size and archive log size

    Dear DBAs
    I have a concern regarding size of redo log and archive log generated.
    Is the equation below correct?
    total size of redo generated by all sessions = total size of archive log files generated
    I am experiencing a situation where, when I compare the total size of redo generated by all sessions with the size of the archive logs generated, there is a huge difference.
    My total all-session redo size is 780 MB, while my archive log directory has consumed 23 GB.
    Before I started measuring, I cleared the archive directory and began monitoring from a specific point in time.
    Environment: Oracle 9i Release 2
    How I tracked the sizing information is below
    logon as SYS user and run the following statements
    DROP TABLE REDOSTAT CASCADE CONSTRAINTS;
    CREATE TABLE REDOSTAT (
    AUDSID NUMBER,
    SID NUMBER,
    SERIAL# NUMBER,
    SESSION_ID CHAR(27 BYTE),
    STATUS VARCHAR2(8 BYTE),
    DB_USERNAME VARCHAR2(30 BYTE),
    SCHEMANAME VARCHAR2(30 BYTE),
    OSUSER VARCHAR2(30 BYTE),
    PROCESS VARCHAR2(12 BYTE),
    MACHINE VARCHAR2(64 BYTE),
    TERMINAL VARCHAR2(16 BYTE),
    PROGRAM VARCHAR2(64 BYTE),
    DBCONN_TYPE VARCHAR2(10 BYTE),
    LOGON_TIME DATE,
    LOGOUT_TIME DATE,
    REDO_SIZE NUMBER
    )
    TABLESPACE SYSTEM
    NOLOGGING
    NOCOMPRESS
    NOCACHE
    NOPARALLEL
    MONITORING;
    GRANT SELECT ON REDOSTAT TO PUBLIC;
    CREATE OR REPLACE TRIGGER TR_SESS_LOGOFF
    BEFORE LOGOFF
    ON DATABASE
    DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN
    INSERT INTO SYS.REDOSTAT
    (AUDSID, SID, SERIAL#, SESSION_ID, STATUS, DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, DBCONN_TYPE, LOGON_TIME, LOGOUT_TIME, REDO_SIZE)
    SELECT A.AUDSID, A.SID, A.SERIAL#, SYS_CONTEXT ('USERENV', 'SESSIONID'), A.STATUS, USERNAME DB_USERNAME, SCHEMANAME, OSUSER, PROCESS, MACHINE, TERMINAL, PROGRAM, TYPE DBCONN_TYPE,
    LOGON_TIME, SYSDATE LOGOUT_TIME, B.VALUE REDO_SIZE
    FROM V$SESSION A, V$MYSTAT B, V$STATNAME C
    WHERE
    A.SID = B.SID
    AND
    B.STATISTIC# = C.STATISTIC#
    AND
    C.NAME = 'redo size'
    AND
    A.AUDSID = sys_context ('USERENV', 'SESSIONID');
    COMMIT;
    END TR_SESS_LOGOFF;
    /
    Now, the total sum of REDO_SIZE (B.VALUE) is far less than the archive log size, and this is at a time when no other user is logged in except myself.
    Is there anything wrong with the query for collecting redo information, or are there some hidden processes that don't provide redo information on a per-session basis?
    I have seen a similar implementation to the above at many sites.
    Kindly provide a mechanism by which I can trace which user generated how much redo (or archive log) on a session basis. I want to track which users/processes are causing high redo generation.
    If I don't find a solution I will raise an SR with Oracle.
    Thanks
    [V]

    You can query v$sess_io, column block_changes, to find out which session is generating how much redo, as sketched below.
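    For example (a sketch, not from the original reply; block_changes is a rough proxy for redo volume, not an exact byte count):
        # rank current sessions by block changes as a rough redo indicator
        echo 'select sid, block_changes from v$sess_io order by block_changes desc;' | sqlplus -s "/ as sysdba"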
    The following query gives you the session redo statistics:
    select a.sid, b.name, sum(a.value)
      from v$sesstat a, v$statname b
     where a.statistic# = b.statistic#
       and b.name like '%redo%'
       and a.value > 0
     group by a.sid, b.name;
    If you want, you can look only at the 'redo size' statistic for the current sessions.
    Jaffar

  • Why is the flashback log size smaller than the archived log?

    Hi, all. Why is the flashback log size smaller than the archived log?

    Lonion wrote:
    Hi, all. Why is the flashback log size smaller than the archived log?
    They are different things.
    Flashback log size depends on the parameter DB_FLASHBACK_RETENTION_TARGET, i.e. how much history you want to keep.
    An archive log file is a dumped copy of an online redo log file; it can be the size of the online redo log file or less, depending on how full the online redo log was when the switch occurred.
    Some more information:
    Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under Oracle's control. In the environment described at the link below, flashback log files during normal database activity had a size of 8200192 bytes, which is very close to the redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup activities, and flashback log file sizes can also differ during write-intensive activity.
    Source: http://dba-blog.blogspot.in/2006/05/flashback-database-feature.html
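    To see the corresponding numbers for your own database (a sketch for SQL*Plus; V$FLASHBACK_DATABASE_LOG shows the retention target and current flashback log usage):
        # compare flashback log usage with the configured retention target
        echo 'select retention_target, flashback_size, estimated_flashback_size from v$flashback_database_log;' | sqlplus -s "/ as sysdba"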
