Logfile size

I want to shrink my logfile member size, so I followed these steps:
first, add two new groups, each with one logfile member;
second, switch the logfile to the new groups;
third, drop the old logfile groups.
But the drop fails with an error (reported as 'ORA-00312') saying the group being dropped must be archived first. How can I archive this group?

Hi,
You can use any of these, depending on the requirement:
alter system archive log all;
alter system archive log current;
alter system archive log sequence 104;
Regards
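For context, a minimal sketch of the whole shrink workflow (the group numbers, paths, and sizes below are illustrative, not from the original post; confirm in V$LOG that a group is INACTIVE and archived before dropping it):
SQL> -- add new, smaller groups (example paths and sizes)
SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/db/redo04.log') SIZE 50M;
SQL> ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/db/redo05.log') SIZE 50M;
SQL> -- move the current log position off the old groups and checkpoint
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;
SQL> -- archive anything unarchived, then verify before dropping
SQL> ALTER SYSTEM ARCHIVE LOG ALL;
SQL> SELECT group#, status, archived FROM v$log;
SQL> ALTER DATABASE DROP LOGFILE GROUP 1;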

Similar Messages

  • Logfile size limit reached - logging stopped

From time to time, quite often, a pop-up window appears on my screen and says: "Logfile size limit reached - logging stopped". I can only push the "Accept" button or close it, but it appears again some time later. I haven't found any information on the web.

    Hello,
I had this problem too, and I found the cause: it comes from the extension 'IPLogger 1.6'.
This extension stores your IP address in a logfile for every connection, so after some time the file is full and the message "Logfile size limit reached" is displayed.
The best solution is to remove this extension completely and replace it with a similar one, like 'External IP'.
Now the problem is solved for me :)
P.S. This should be considered a bug in the IPLogger extension: the logfile should be cleared when full instead of displaying this annoying message!

  • How to specify the logfile size at the time of adding a member

    Hi All,
    I am in the process of upgrading Oracle 9.0 to 10.1.
    I am following the manual upgrade process. As per the recommendation from the pre-upgrade information script, I need to recreate the redo log files.
    Logfiles: [make adjustments in the current environment]
    --> E:\ORACLE\ORADATA\PRODB229\REDO03.LOG
    .... status="INACTIVE", group#="1"
    .... current size="1024" KB
    .... suggested new size="10" MB
    --> E:\ORACLE\ORADATA\PRODB229\REDO02.LOG
    .... status="INACTIVE", group#="2"
    .... current size="1024" KB
    .... suggested new size="10" MB
    --> E:\ORACLE\ORADATA\PRODB229\REDO01.LOG
    .... status="CURRENT", group#="3"
    .... current size="1024" KB
    .... suggested new size="10" MB
    WARNING: one or more log files is less than 4MB.
    Create additional log files larger than 4MB, drop the smaller ones and then upgrade.
    I can add a redo member with the command below, but I am not able to specify the size as 10M. I did some googling but had no luck with that.
    SQL> ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\oradata\prodb229\REDO01.rdo' TO GROUP 1;
    but it fails when I include a size:
    SQL> ALTER DATABASE ADD LOGFILE MEMBER 'E:\oracle\oradata\prodb229\REDO01.rdo' TO GROUP 2 SIZE 10M;
    ERROR at line 1:
    ORA-00933: SQL command not properly ended
    ~Thnx

    If you add a logfile member to an existing group, you cannot specify the size for that file.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1004.htm#i2079942
    <quote>
    ADD [STANDBY] LOGFILE MEMBER Clause: Use the ADD LOGFILE MEMBER clause to add new members to existing redo log file groups. Each new member is specified by 'filename'. If the file already exists, it must be the same size as the other group members, and you must specify REUSE. If the file does not exist, Oracle Database creates a file of the correct size. You cannot add a member to a group if all of the members of the group have been lost through media failure.
    </quote>
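    Since the size belongs to the group, not the member, the usual workaround is to add whole new groups at the target size and retire the small ones. A minimal sketch, with an illustrative group number and path:
    SQL> -- SIZE is valid on ADD LOGFILE GROUP, not on ADD LOGFILE MEMBER
    SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('E:\oracle\oradata\prodb229\REDO04.rdo') SIZE 10M;
    SQL> ALTER SYSTEM SWITCH LOGFILE;
    SQL> ALTER SYSTEM CHECKPOINT;
    SQL> -- drop a small group only once V$LOG shows it INACTIVE
    SQL> ALTER DATABASE DROP LOGFILE GROUP 1;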

  • I want to increase logfile size?

    Good day,
    Frequent log switches occur in the database. The following message displays in the OEM recommendations:
    "Increase the size of the log files to 327 M to hold at least 20 minutes of redo information."
    Kindly tell me how to increase the file size of the logfile?
    Thanks & Regards.

    To increase the size, create new log files, checkpoint the old ones, and drop them:
    sql> select * from v$log;
    sql> select * from v$controlfile_record_section where type = 'REDO LOG';
    sql> alter database add logfile group 4 '/app/oracle/oradata/xxxx/redo04.log' size 350M reuse;
    sql> alter system switch logfile;
    sql> alter system checkpoint;
    Then drop the old logfiles:
    sql> alter database drop logfile group 1;
    Note: if there is more than one group, add the same number of new groups and then delete all the older groups.

  • Increased logfile size and now get cannot allocate new log

    Because we were archiving up to 6 and 7 times per minute, I increased our logfile size from 13M to 150M. I also increased the number of groups from 3 to 5.
    Because we want to ensure recoverability within a certain timeframe, I also have a script that runs every 15 minutes and issues 2 commands: ALTER SYSTEM SWITCH LOGFILE; and ALTER SYSTEM CHECKPOINT;
    I am now seeing in my alert.log file the following, almost every time we do a log switch.
    Thread 1 cannot allocate new log, sequence 12380
    Private strand flush not complete
    No other changes have been made to the database.
    Why would it be doing this now?
    Should I not be doing both the ALTER SYSTEM SWITCH LOGFILE and the ALTER SYSTEM CHECKPOINT?
    Is there something else I should be looking at?
    Any suggestions/answers would be greatly appreciated.
    Db version: 11.1.0.7
    OS: OEL 5.5

    Set the FAST_START_MTTR_TARGET parameter to the instance recovery time, in seconds, that you want.
    ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES=<n> (up to 10); this will make sure that the redo logs are copied faster.
    Redo log file sizing can influence performance, because DBWR, LGWR, and ARCH are all working during high-DML periods.
    Too small an online redo log file size can cause slowdowns from excessive DBWR and checkpointing behavior. A high checkpointing frequency and "log file switch (checkpoint incomplete)" waits can also cause slowdowns.
    Add additional log writer processes (LGWR).
    Ensure that the archived redo log filesystem resides on a separate physical disk spindle.
    Put the archived redo log filesystem on super-fast solid-state disks.
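    A minimal sketch of the first two settings mentioned above (the values are only examples, not recommendations; pick an MTTR that matches your recovery SLA):
    SQL> -- target instance recovery time of 5 minutes (illustrative)
    SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300 SCOPE=BOTH;
    SQL> -- extra archiver processes so archiving keeps up with switches
    SQL> ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4 SCOPE=BOTH;
    SQL> -- after some representative load, see what the advisor suggests
    SQL> SELECT target_mttr, estimated_mttr, optimal_logfile_size FROM v$instance_recovery;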

  • STANDBY LOGFILE SIZE

    Hi
    How big should the standby log files be, especially on 10g, when configuring Data Guard?
    Regards
    Thunder2777

    The size of the current standby redo log files must exactly match the size of the current primary database online redo log files. From:
    http://docs.oracle.com/cd/B19306_01/server.102/b14239/create_ps.htm
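    A minimal sketch of checking the primary size and adding a matching standby redo log (the group number, path, and size are illustrative; the same document also suggests one more standby group than you have online groups):
    SQL> -- on the primary: confirm the online redo log size
    SQL> SELECT group#, bytes/1024/1024 AS mb FROM v$log;
    SQL> -- add a standby redo log of exactly that size (example path)
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u03/oradata/stby/srl10.log') SIZE 100M;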

  • Logfile Recommended size over error - LMS 3.1

    Log file             Directory                           File Size (KBytes)  Recommended Size Limit (KBytes)  File System Utilization%
    jrm.log              C:\PROGRA~1\CSCOpx\log              107084              1024                             Less than 1%
    stdout.log.*         C:\PROGRA~1\CSCOpx\mdc\tomcat\logs  57591               102400                           Less than 1%
    dbbackup.log         C:\PROGRA~1\CSCOpx\log              11772               1024                             Less than 1%
    psu.log.*            C:\PROGRA~1\CSCOpx\log              4832                1024                             Less than 1%
    restorebackup.log.*  C:\PROGRA~1\CSCOpx\log              3075                51200                            Less than 1%
    I have already configured the recommended file sizes and file rotations in my LMS, but I still keep getting these errors. Can you help me get rid of these RED alarms?

    C:\Program Files\CSCOpx\bin>perl "c:\Program Files\CSCOpx\bin\logrot.pl" -v
    Tue Dec  1 19:53:50 2009: INFO: Read variable backup_dir --> D:\Log\
    Tue Dec  1 19:53:50 2009: INFO: Read variable delay --> 60
    Tue Dec  1 19:53:50 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\dbbackup.log, Backup File = D:\Log\\dbbackup.log
    Tue Dec  1 19:53:50 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:53:50 2009: INFO: Rolling logfile archive.
    Tue Dec  1 19:53:50 2009: INFO: Archiving C:\PROGRA~1\CSCOpx\log\dbbackup.log to D:\Log\\dbbackup.log.0.
            1 file(s) copied.
    Tue Dec  1 19:53:51 2009: INFO: Rotating C:\PROGRA~1\CSCOpx\log\dbbackup.log.
    Tue Dec  1 19:53:51 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\jrm.log, Backup File = D:\Log\\jrm.log
    Tue Dec  1 19:53:51 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:53:51 2009: INFO: Rolling logfile archive.
    Tue Dec  1 19:53:51 2009: INFO: Archiving C:\PROGRA~1\CSCOpx\log\jrm.log to D:\Log\\jrm.log.0.
            1 file(s) copied.
    Tue Dec  1 19:54:01 2009: INFO: Compressing D:\Log\\jrm.log.0 using gzip -f
    Tue Dec  1 19:54:08 2009: INFO: Rotating C:\PROGRA~1\CSCOpx\log\jrm.log.
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\EDS.log, Backup File = D:\Log\\EDS.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\log\EDS.log because it is not big enough.
    file size = 2745636, conf size = 19118080
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\CSPortal.log, Backup File = D:\Log\\CSPortal.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\log\CSPortal.log because it is not big enough.
    file size = 132588, conf size = 1048576
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\EDS-GCF.log, Backup File = D:\Log\\EDS-GCF.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\log\EDS-GCF.log because it is not big enough.
    file size = 26184, conf size = 1048576
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\psu.log, Backup File = D:\Log\\psu.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\log\psu.log because it is not big enough.
    file size = 894079, conf size = 7122944
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\mdc\tomcat\logs\stdout.log, Backup File = D:\Log\\stdout.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\mdc\tomcat\logs\stdout.log because it is not big enough.
    file size = 56244907, conf size = 111889408
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\CSDiscovery.log, Backup File = D:\Log\\CSDiscovery.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\log\CSDiscovery.log because it is not big enough.
    file size = 1084297, conf size = 1084416
    Tue Dec  1 19:54:08 2009: INFO: Logfile = C:\PROGRA~1\CSCOpx\log\CSDeviceSelector.log, Backup File = D:\Log\\CSDeviceSelector.log
    Tue Dec  1 19:54:08 2009: INFO: Attempting to use C:\PROGRA~1\CSCOpx\bin\logrot_stat.exe to obtain logfile size.
    Tue Dec  1 19:54:08 2009: INFO: Not archiving C:\PROGRA~1\CSCOpx\log\CSDeviceSelector.log because it is not big enough.
    file size = 99013, conf size = 1326080
    C:\Program Files\CSCOpx\bin>

  • FAST_START_MTTR_TARGET - Optimal File Size

    I have set FAST_START_MTTR_TARGET to 3600 (1 hour). When I query the v$instance_recovery view, it says that 17929 MB is my suggested log file size. The logs are currently sized at 300MB. This can't be correct, can it? I can't have a 17GB log file. This is Oracle JD Edwards, though... but what else should I look at? I know 300MB is too small.

    kirkladb wrote:
    I have set FAST_START_MTTR_TARGET to 3600 (1 hour)... when I query the v$instance_recovery view it says that 17929 MB is my suggested log file size... I know 300MB is too small.
    I believe you have set FAST_START_MTTR_TARGET to too high a value, i.e. 1 hour. Do you really want to wait up to an hour for your database to come up after an instance crash? That is not a good setting; I would rather start with the default value.
    Because you set it to 1 hour, Oracle thinks it can defer the incremental checkpoint (flushing dirty blocks from the buffer cache to disk) for that long, and that is why you are seeing such a high value (17929 MB) in v$instance_recovery.
    Please note that OPTIMAL_LOGFILE_SIZE is very dynamic in nature and changes depending on the load on the system, so you cannot really rely on this value if your system experiences variable load.
    Look at the load on the database and the number of redo log file switches per second or minute. If they are too frequent, for example 2-3 logfile switches per 20 minutes, that is considered over-switching. For optimal performance, Oracle recommends that a logfile switch should not happen more often than every 15-20 minutes.
    So a good approach is to first check your alert.log to find out how many log switches are happening, and then decide whether to increase the logfile size.
    Please check with your organization how long they are willing to wait while the instance does its recovery, and then change the value from the default to something higher.
    Edited by: 909592 on Mar 29, 2012 11:45 AM
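    To put numbers on the switch frequency before resizing, a rough sketch (v$log_history holds one row per completed switch):
    SQL> -- log switches per hour over the last 24 hours
    SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
         FROM v$log_history
         WHERE first_time > SYSDATE - 1
         GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
         ORDER BY 1;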

  • Number of logfile switches

    Hi,
    I have many batch jobs that run nightly on my database.
    I have 4 logfile groups; the logfile size is 100M.
    The batches do many inserts, updates, and deletes, so the time between switches is very small, 15 or 20 seconds.
    I will create a standby database on a remote host, and I'm afraid of seeing a 100 MB file arrive over the network (40 MB/s) every 15 seconds.
    Is reducing the switch frequency (at night only), for example by increasing the logfile size, a solution?
    Thanks in advance for your help.
    Nicolas.

    The longest interval between checkpoints is set by log_checkpoint_timeout. The default setting is 1800 seconds (30 minutes), so this is not likely to be the cause of your problem.
    You can reduce the frequency of switches by having larger logfiles.
    The volume of traffic to your standby database is governed by the activity on your main database, whether it goes in a large number of small files or a smaller number of larger files.
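    A quick way to measure the current switch interval before choosing a new size (a rough sketch; the 100-row window is arbitrary):
    SQL> -- average minutes between the last 100 log switches
    SQL> SELECT ROUND((MAX(first_time) - MIN(first_time)) * 24 * 60 / (COUNT(*) - 1), 1) AS avg_minutes
         FROM (SELECT first_time FROM v$log_history ORDER BY first_time DESC)
         WHERE ROWNUM <= 100;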

  • Optimal online redo log file size

    Hello to all,
    I have just installed 11gR2 (patch set 2).
    I found this in the documentation:
    "The Redo Logfile Size Advisor can be used to determine the least optimal online redo log file size based on the current FAST_START_MTTR_TARGET setting and MTTR statistics."
    Which means that the Redo Logfile Size Advisor is enabled only if FAST_START_MTTR_TARGET is set.
    For now this installation is not a production instance (only for my testing); I am still on 10g.
    My question is:
    is it important to set FAST_START_MTTR_TARGET in 11g, or is that all old info? Because from my query I see OPTIMAL_LOGFILE_SIZE isn't set:
    SELECT TARGET_MTTR, ESTIMATED_MTTR, WRITES_MTTR, WRITES_LOGFILE_SIZE, OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY;

    TARGET_MTTR  ESTIMATED_MTTR  WRITES_MTTR  WRITES_LOGFILE_SIZE  OPTIMAL_LOGFILE_SIZE
    0            20              0            0
    and
    SELECT a.group#, b.member, a.status, a.bytes
    FROM v$log a, v$logfile b
    WHERE a.group# = b.group#;

    GROUP#  MEMBER                        STATUS    BYTES
    6       /u03/oradata/TEST/redo06.log  CURRENT   1048576000
    5       /u03/oradata/TEST/redo05.log  INACTIVE  1048576000
    4       /u03/oradata/TEST/redo04.log  INACTIVE  1048576000
    3       /u03/oradata/TEST/redo03.log  INACTIVE  1048576000
    2       /u03/oradata/TEST/redo02.log  INACTIVE  1048576000
    1       /u03/oradata/TEST/redo01.log  INACTIVE  1048576000
    Thanks for any doc or info you can point me to.

    for now this installation is not a production instance (only for my testing); I am still on 10g
    my question is:
    is it important to set FAST_START_MTTR_TARGET in 11g, or is that all old info?

    There is no difference for FAST_START_MTTR_TARGET between 10g and 11g.
    For 10g:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14237/initparams068.htm#i1127412
    For 11g:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams079.htm
    Both documents have the exact same text, which means it works exactly the same in 11g as in 10g.
    But don't use this one (it is not an official documentation link, and its text differs from the official docs):
    http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10755/initparams069.htm
    Regards
    Girish Sharma
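    Since the advisor stays disabled until FAST_START_MTTR_TARGET is set (per the quote in the question above), a minimal sketch of enabling it (the 300-second value is just an example):
    SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300 SCOPE=BOTH;
    SQL> -- OPTIMAL_LOGFILE_SIZE should populate after some workload
    SQL> SELECT target_mttr, estimated_mttr, optimal_logfile_size FROM v$instance_recovery;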

  • Resizing the logfile in the primary database

    Hi,
    If I increase the size of redo log files on my primary site, would this propagate the new size of redo across to standby ?
    Thanks & Regards
    Manoj

    If you are not using standby redo logs at your standby database, simply go ahead. If you do maintain standby redo logs at the standby site, then increase the logfile size at the standby site as well. It won't cause a problem, but you will receive error messages in the alert log of the standby database when the primary redo size and the standby redo size differ. Oracle's tendency is to expect the same redo size at both ends when standby redo logs are in place.
    Jaffar
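    A minimal sketch of matching the standby redo logs after resizing the primary (the group number, path, and size are illustrative; redo apply must be stopped while you swap them, and STANDBY_FILE_MANAGEMENT may need to be set to MANUAL for the drop):
    SQL> -- on the standby: pause redo apply while replacing standby redo logs
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    SQL> ALTER DATABASE DROP STANDBY LOGFILE GROUP 10;
    SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 10 ('/u03/oradata/stby/srl10.log') SIZE 200M;
    SQL> -- resume redo apply
    SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;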

  • Transaction log full

    The transaction log is full in the production system. When I tried to log in to the SAP system, it showed the error message 'SNAP_NO_NEW_ENTRIES'.
    Our system is DB2 on AIX. Can anybody help us with a step-by-step procedure to resolve the issue?
    The best answer will be rewarded.
    Thanks
    Imran khan

    You have to increase the total log space in order to enlarge the database log. Please do not forget that the logs must fit in the underlying file system, e.g. /db2/<SID>/dir_log, so you might have to increase that as well using SMITTY.
    (DB6) [IBM][CLI Driver][DB2/AIX64] SQL0964C  The transaction log for the database is full.  SQLSTATE=57011
    [root] > su - db2<sid>
    1> db2 get db cfg for <SID> | grep -i logfilsiz
    Log file size (4KB)                         (LOGFILSIZ) = 16380
    2> db2 get db cfg for <SID> | grep -i logprimary
    Number of primary log files                (LOGPRIMARY) = 20
    3> db2 get db cfg for <SID> | grep -i logsecond
    Number of secondary log files               (LOGSECOND) = 40
    So we have a maximum log space of 16,380 pages * 4,096 bytes * 60 files = 4,025,548,800 bytes (about 4 GB). This needs to be increased by raising LOGPRIMARY and/or LOGSECOND (the arithmetic assumes LOGFILSIZ = 16,380 4KB pages as above; query DB2 for your actual value!).
    4> db2 update db cfg for <SID> using logsecond 80 immediate
    DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
    SQL1363W One or more of the parameters submitted for immediate modification were not changed dynamically. For these configuration parameters, all applications must disconnect from this database before the changes become effective.
    5> db2 get db cfg for <SID> | grep -i logprimary
    Number of primary log files                (LOGPRIMARY) = 20
    6> db2 get db cfg for <SID> | grep -i logsecond
    Number of secondary log files               (LOGSECOND) = 80
    7> db2stop
    02/20/2007 09:17:12     0   0   SQL1064N  DB2STOP processing was successful.
    SQL1064N  DB2STOP processing was successful.
    8> db2start
    02/20/2007 09:17:19     0   0   SQL1063N  DB2START processing was successful.
    SQL1063N  DB2START processing was successful.
    -> Please keep in mind that the SAP system needs to be down when restarting DB2.
    Check via snapshot:
    9> db2 get snapshot for database on <SID>
    Log space available to the database (Bytes)= 2353114756 (≈ 2,353 MB)
    Log space used by the database (Bytes)     = 4329925244 (≈ 4,330 MB)
    Maximum secondary log space used (Bytes)   = 2993640963
    Maximum total log space used (Bytes)       = 4330248963
    Secondary logs allocated currently         = 46
    Appl id holding the oldest transaction     = 9
    So now our total log space is about 6.5 GB. See SAP Note 25351 for details.
    GreetZ, AH

  • Control file corrupted

    Hi, my database was operating in noarchivelog mode. I do have a backup from last night, but all three control files seem to be corrupted. Is there any way I can create a new control file and synchronise it with the rest of the files? If yes, can you please tell me the steps involved in creating a new controlfile, as I don't have any idea how to do that. Thanks a lot.

    Hi,
    Set ORACLE_SID to your SID name and connect to SQL*Plus:
    SQL> conn / as sysdba
    Start your database in nomount stage:
    SQL> startup nomount
    Then type the following command:
    SQL> CREATE CONTROLFILE REUSE DATABASE "your database name" RESETLOGS
    MAXLOGFILES 5 --optional
    MAXLOGMEMBERS 3 --optional
    MAXDATAFILES 14 --optional
    MAXINSTANCES 1 --optional
    MAXLOGHISTORY 226 --optional
    LOGFILE
    GROUP 1 'your logfile path' SIZE your logfile size,
    GROUP 2 'your logfile path' SIZE your logfile size
    DATAFILE
    'your datafile path',
    'your datafile path';
    After that, open the database with RESETLOGS:
    SQL> ALTER DATABASE OPEN RESETLOGS;
    Then shut down the database:
    SQL> shutdown
    Now multiplex the control file and mention its path in the init file, and take a complete closed backup (back up your datafiles, control file, and logfiles).
    Then start up the database:
    SQL> startup
    Now your database is ready to use.
    Here is an example:
    SQL> CREATE CONTROLFILE REUSE DATABASE "ORCL"
    MAXLOGFILES 5
    MAXLOGMEMBERS 3
    MAXDATAFILES 14
    MAXINSTANCES 1
    MAXLOGHISTORY 226
    LOGFILE
    GROUP 1 'E:\ORACLE\ORADATA\ORCL\REDO01.LOG' SIZE 100M,
    GROUP 2 'E:\ORACLE\ORADATA\ORCL\REDO02.LOG' SIZE 100M,
    GROUP 3 'E:\ORACLE\ORADATA\ORCL\REDO03.LOG' SIZE 100M
    DATAFILE
    'E:\ORACLE\ORADATA\ORCL\UNDOTBS01.DBF',
    'E:\ORACLE\ORADATA\ORCL\EXAMPLE01.DBF',
    'E:\ORACLE\ORADATA\ORCL\INDX01.DBF',
    'E:\ORACLE\ORADATA\ORCL\TOOLS01.DBF',
    'E:\ORACLE\ORADATA\ORCL\USERS01.DBF',
    'E:\ORACLE\ORADATA\ORCL\OEM_REPOSITORY.DBF',
    'E:\ORACLE\ORADATA\ORCL\CWMLITE01.DBF',
    'E:\ORACLE\ORADATA\ORCL\DRSYS01.DBF',
    'E:\ORACLE\ORADATA\ORCL\ODM01.DBF',
    'E:\ORACLE\ORADATA\ORCL\XDB01.DBF',
    'E:\ORACLE\ORADATA\ORCL\USERS02.DBF',
    'E:\ORACLE\ORADATA\ORCL\USERS03.DBF',
    'E:\ORACLE\ORADATA\ORCL\USERS04.DBF';
    SQL>ALTER DATABASE OPEN RESETLOGS;
    And one more thing:
    To rename the database, change REUSE DATABASE to SET DATABASE in the CREATE CONTROLFILE script shown above.
    Regards
    S.Senthil Kumar

  • Dr. Watson Error on exiting weblogic server 6.1

    Hi,
    I sometimes get a Dr. Watson error when I stop my WebLogic server.
    The details are given below.
    I have weblogic 6.1
    Windows 2000 5.00.2195 Service pack 2
    Memory=392mb
    The message from Dr. Watson is:
    "Dr. Watson was unable to attach to the process. It is possible that the process exited before Dr. Watson could attach to it.
    Windows 2000 error code=87. The parameter is incorrect."
    The message from the console is:-
    <Jan 15, 2002 5:08:10 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '9' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:10 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '0' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:11 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '1' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:11 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '2' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:11 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '3' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:12 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '4' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:12 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '5' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:12 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '6' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:12 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '7' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '8' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '10' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '11' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '14' for queue: 'default'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '0' for queue: '__weblogic_admin_html_queue'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '1' for queue: '__weblogic_admin_html_queue'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '0' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:13 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '1' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:14 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '2' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:14 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '3' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:14 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '4' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:14 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '5' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:14 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '6' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    <Jan 15, 2002 5:08:14 PM CST> <Critical> <Kernel> <Execute Thread: 'ExecuteThread: '7' for queue: '__weblogic_admin_rmi_queue'' stopped.>
    C:\bea\wlserver6.1>goto finish
    C:\bea\wlserver6.1>cd config\mydomain
    C:\bea\wlserver6.1\config\mydomain>ENDLOCAL
    C:\bea\wlserver6.1\config\mydomain>

    I talked with Sun today about this issue, and they stated that the error I received is the result of a bug. Sun Bug ID: 6650667
    Synopsis: Cannot repack the changelog after trimming. libdb: DB_ENV->log_put: record larger than maximum file
    We can see the following lines in your error log:
    [05/Jan/2010:00:05:03 -0500] - DEBUG - conn=-1 op=-1 msgId=-1 - libdb: DB_ENV->log_put: record larger than maximum file size (15956384 > 10485760)
    [05/Jan/2010:00:05:03 -0500] - Repacking backend 'changelog', LDAP entries error Invalid argument (22).
    [05/Jan/2010:00:05:03 -0500] - Repacking backend 'changelog' ended.
    There is no fix, but a workaround is listed as follows:
    Work Around:
    Increase nsslapd-db-logfile-size in dse.ldif (in the entry "cn=config,cn=ldbm database,cn=plugins,cn=config").
    This attribute is not present by default in the file. Its default value is 10485760, which matches the 10485760-byte limit shown in your error log and in the bug workaround.
    Make the change as directed above, perform the repack again, and let me know the results.
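    For illustration, the dse.ldif entry would look roughly like this (the 20 MB value is only an example; stop the server before editing dse.ldif):
    dn: cn=config,cn=ldbm database,cn=plugins,cn=config
    nsslapd-db-logfile-size: 20971520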

  • Invitations not being sent out on Calendar Server 6.3

    We would be grateful for any help on this. We can give further details if needed; please just let us know.
    Thanks very much.
    Issue:
    To our customers the calendar server appears to be fully operational and without error. However, when a customer creates a new event and invites other people to attend, no invites are sent out. The event creator, however, believes all is in order, as no error messages appear and the list of invitees appears on the event. This was working recently. We have looked for things that might have changed (e.g. disk space, size of files, corruption, recent patching, licenses expiring, etc.) but have not yet found anything.
    Some details of our system:
    ./cal/sbin/csversion
    Oracle Communications Calendar Server 6.3-27.01 (built Feb 15 2011)
    SunOS xxxx 5.10 Generic_142909-17 sun4u sparc SUNW,Sun-Fire-V440
    In /var/opt/SUNWics5/logs/watcher.log, we get these notices and errors when we restart the service.
    [20/Feb/2012:12:35:56 +1300] (Notice) Received request to restart: store admin http
    [20/Feb/2012:12:35:56 +1300] (Notice) Connecting to watcher ...
    [20/Feb/2012:12:36:00 +1300] (Notice) 7698
    [20/Feb/2012:12:36:00 +1300] (Notice) Stopping http server 7704 ..... done
    [20/Feb/2012:12:36:02 +1300] (Notice) Stopping http server 7705 ... done
    [20/Feb/2012:12:36:02 +1300] (Notice) admin server is not running
    [20/Feb/2012:12:36:02 +1300] (Notice) Stopping store server 7700 ............................................................... timeout
    [20/Feb/2012:12:37:03 +1300] (Error) Cannot stop server store with SIGTERM, now retrying with SIGKILL
    [20/Feb/2012:12:37:03 +1300] (Notice) Stopping store server 7700 ..... done
    [20/Feb/2012:12:37:05 +1300] (Notice) Starting store server .... 7718
    [20/Feb/2012:12:37:05 +1300] (Notice) Checking store server status ..... ready
    [20/Feb/2012:12:37:07 +1300] (Notice) Starting admin server ....Watched 'csadmind' process 7725 exited abnormally
    7725
    [20/Feb/2012:12:37:08 +1300] (Notice) Starting http server ....[20/Feb/2012:12:37:08 +1300] (Notice) Received request to restart: store admin http
    [20/Feb/2012:12:37:08 +1300] (Notice) Connecting to watcher ...
    ... 7727
    [20/Feb/2012:12:37:13 +1300] (Notice) 7698
    [20/Feb/2012:12:37:13 +1300] (Error) store failed twice in 600 seconds, will not perform restart
    Watched 'csadmind' process 7844 exited abnormally
    What we've done:
    We have run db_verify and csdb -v check on the databases. ics50deletelog.db did have corruption, so we fixed that by using db_dump and db_load to export then import the databases.
    We also purged the ics50deletelog.db database using cspurge afterwards when the dump and load didn't seem to have had an effect.
    We tried raising the log level in /opt/SUNWics5/cal/config/ics.conf, but didn't get any more details when restarting the service (through /etc/init.d/sunwics5 restart).
    Before we did the db_dump, db_load and purge:
    Calendar database version: 4.0.0
    Sleepycat Software: Berkeley DB 4.2.52: (December 3, 2003)
    Total database size in bytes: 1465503744
    Total number of calendars: 14964
    Total number of events: 830842
    Total number of tasks: 17351
    Total number of alarms: 57181
    Total number of gse entries: 9
    Total number of master component entries: 24100
    Total number of deletelog entries: 1779967
    Total logfile size in bytes: 79262
    Session database version: 3.0.0 [BerkeleyDB]
    Total database size in bytes: 0
    Total logfile size in bytes: 0
    Counter database version: 1.0.0 [Memory Mapped Files]
    Total database size in bytes: 0
    After the db_dump, db_load and purge:
    Calendar database version: 4.0.0
    Sleepycat Software: Berkeley DB 4.2.52: (December 3, 2003)
    Total database size in bytes: 1201864704
    Total number of calendars: 14964
    Total number of events: 830845
    Total number of tasks: 17351
    Total number of alarms: 55116
    Total number of gse entries: 9
    Total number of master component entries: 24100
    Total number of deletelog entries: 1779967
    Total logfile size in bytes: 5251612
    Session database version: 3.0.0 [BerkeleyDB]
    Total database size in bytes: 0
    Total logfile size in bytes: 0
    Counter database version: 1.0.0 [Memory Mapped Files]
    Total database size in bytes: 0
    Thanks again.

    It sounds like we are talking about event invitation notifications: the notifications that get sent when someone invites you to an event. Whether you receive the notification or not depends on your preference stored in LDAP. This can be set in Convergence by navigating to Options -> Calendar -> Notifications (check "Notify me via email of new invitations or invitation changes"). The actual LDAP attributes stored in the user's entry are below.
    icsExtendedUserPrefs: ceNotifyEnable=1
    icsExtendedUserPrefs: ceNotifyEmail=[email protected]
    In the following cases, the user (attendee) will not get notifications even if the above preference is set.
    a> Creating an event with attendees in the past. CS6, by default, suppresses notifications for past events. You can enable notifications for past events by setting the config "ine.pastnotification.enable" to "yes" in ics.conf (see the sketch after this reply).
    b> If the client creating the event requests the server not to send notifications (the smtpNotify wcap param for storeevents.wcap). Outlook Connector uses this wcap param, so you won't get server notifications for events invited using Outlook Connector; Outlook Connector sends its own notifications directly.
    I think you are looking for this kind of notification. The process responsible for sending these notifications is "csadmind" on the Frontend, so you need to check admin.log on the Frontend to see whether it is sending notifications or not.
    Edited by: dabrain on Feb 21, 2012 9:26 AM
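    For reference, the toggle mentioned in (a) would sit in /opt/SUNWics5/cal/config/ics.conf roughly like this (an assumption about the exact line, based on the parameter name given above; restart the calendar services after editing):
    ine.pastnotification.enable = "yes"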
