Want to reduce the log switch time interval!

Friends,
I know that the standard log switch interval is 20-30 minutes, i.e., it is generally recommended that a switch from redolog 1 to redolog 2 (or redolog 2 to redolog 3) happen within 20-30 minutes.
But on my production server the logfile switches about every 60 minutes, even during peak hours. My question: how can I make the logfile switch to the next logfile every 20-30 minutes?
My database configuration is:
Oracle Database 10g (version 10.2.0.1.0) on AIX 5.3
SQL> show parameter fast_start_mttr_target
NAME                       TYPE        VALUE
-------------------------- ----------- -----
fast_start_mttr_target     integer     600
Each of my redo log files is 50 MB.
In this situation, please advise how I can reduce my log switch interval.

You could either
a. Recreate your redo log files with a smaller size, an action I would not recommend
OR
b. Set the instance parameter ARCHIVE_LAG_TARGET to 1800
ARCHIVE_LAG_TARGET specifies (in seconds) the maximum time after which a log switch is forced, if one has not already occurred because the online redo log file filled up.
You should be able to use ALTER SYSTEM to change this value.
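For example, a minimal sketch (assuming an 1800-second, i.e. 30-minute, target; SCOPE = BOTH assumes an spfile is in use):
-- Force a log switch at most every 30 minutes (1800 seconds),
-- even if the current online redo log is not yet full.
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE = BOTH;
-- Verify the new setting.
SHOW PARAMETER archive_lag_target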
Hemant K Chitale
http://hemantoracledba.blogspot.com

Similar Messages

  • Reducing Initial Startup time in Flex 1.5

    How can we reduce the initial startup time of a Flex application? Suppose we have a large application and we want to reduce its startup time. Is it possible to divide the application into multiple SWF files so that (initially) only the SWFs that are actually required are downloaded to the client? How can we achieve this, i.e., architect the application so that initially only some SWFs (those needed on the client side) are downloaded, and the rest are downloaded only when they are actually required by the client?
    Any help would be appreciated!
    Thanks

    Thanks for your reply!
    Yes, I want to create a similar look. I have two questions regarding this:
    1) I understand that we can achieve such a look by using creationPolicy="queued". My question is: suppose we are using a TabNavigator with 4 tabs, each tab has creationPolicy="queued", and we are using MXML custom components of 20 KB each. On first load, will only the first view be loaded on the client side (20 KB), or will all 4 views (80 KB) be loaded? This is really important because I want to reduce the initial startup time.
    2) How can we use a component that shows the size of the content downloaded (as used at http://www.merhl.com/)? Any sample code?
    Thanks once again!

  • Reduce export import time

    Hi all,
    I want to reduce the total time spent during export/import.
    Restrictions:
    1) Cross platform
    2) exp and imp from a lower version to a higher version
    3) No 10g database
    Basically I want to run exp and imp in parallel so that the total time spent on this activity is reduced. I thought of doing schema-level exp/imp in parallel, but I am afraid of the dependencies.
    Is there any other way to achieve the same, or if I go with the approach above, can anyone provide some valuable input?
    I am trying to automate this so that it becomes a one-time effort and from then on the script does everything on its own.
    Thanks and regards
    Neeraj

    Hi all,
    The data volume is between 60 GB and 150 GB.
    If I use a pipe on Unix between exp and imp, what happens if my exp is slower than the import at some point in time (for whatever reason)? Will the import wait for the contents coming into the pipe from the export, or will the import fail?
    I can consider creating the indexes using a flat file. Is there any way to get only the indexes in the flat file? I mean, if I use the INDEXFILE option for import, it gives me the "CREATE TABLE..." statements too, which means the import utility reads the full dump file; I want only the "CREATE INDEX..." statements in the flat file.
    What about the schema-level export and import? Any valuable input or proper steps from anyone out there? Are there any restrictions while importing the schemas?
    Thanks and regards

  • Reducing the time interval in the file adapter to write a flat file at a location

    Hi all,
    I have a scenario where I have to write a flat file (XXX.txt) to a location. Before doing that, I have to check whether XXX.txt already exists. If it doesn't exist, I write XXX.txt there; if it already exists, I have to wait until that XXX.txt file gets deleted.
    The receiver file adapter has an option, File Construction Mode = Create, which does the same thing. But the problem is that it takes too long (more than 5 minutes), which is not at all acceptable in my case (1 minute would be OK).
    Is there any way to reduce the time interval using the same option? Or is there any workaround for achieving the same scenario?
    Any help would be appreciated.
    Thanks in advance.
    Anil

    Anil,
    As far as I know, this is not possible, because we do not do anything from our end; XI does the processing and creates the file for you. But you might be sending a large file at a time, so you have to improve the performance of your scenario. Check these URLs on how to improve performance in XI:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/70ada5ef-0201-0010-1f8b-c935e444b0ad
    Improving performance in XI
    Maximum Permitted File Size in XI
    ---Satish

  • I want to find the time intervals between two times

    Hi everybody,
    My question: I will give a start time and an end time as input, and I want as output the time intervals between the start time and the end time.
    For example, I give 7 to 11 as input.
    The table has data like:
    7.00.00  to 7.29.59   --> prog 1
    8.00.00  to 9.29.59   --> prog 2
    10.00.00 to 10.29.59  --> prog 3
    I want to find the time gaps between the programs and display output like:
    7.30.00  - 7.59.59    --> interval 1
    9.30.00  - 9.59.59    --> interval 2
    10.30.00 - 11.00.00   --> interval 3

    Hi,
    Declare a temporary work area (e.g. wa_tmp) and loop over the data; a rough sketch (table and field names are placeholders):
    LOOP AT it_data INTO wa_data.
      " the gap starts one second after the current block ends
      wa_tmp-from_time = wa_data-to_time + 1.
      " read the next row to find where the gap ends
      READ TABLE it_data INTO wa_next INDEX sy-tabix + 1.
      IF sy-subrc = 0.
        wa_tmp-to_time = wa_next-from_time - 1.
        APPEND wa_tmp TO it_final.
      ENDIF.
    ENDLOOP.
    Hope it helps.

  • I don't want to enter a password each time I log on

    I do not want to enter a password each time I log on to my computer.

    Hunter1744, welcome to the forum.
    If you are using Win7, go to Start. In the "Search programs and files" box at the bottom, type "cmd" and hit Enter; this will open a terminal window. Type "netplwiz" at the prompt and hit Enter; this will bring up a box titled User Accounts. Uncheck the box next to "Users must enter a user name and password to use this computer." It will ask for the password and to verify it. Once you do this, you should not have to log on with a password in the future.
    When requesting help, you should always include the make/model of the computer and/or monitor; this information is necessary for us to review their specifications.
    Please click "Accept as Solution" if your problem is solved.
    Signature:
    HP TouchPad - 1.2 GHz; 1 GB memory; 32 GB storage; WebOS/CyanogenMod 11(Kit Kat)
    HP 10 Plus; Android-Kit Kat; 1.0 GHz Allwinner A31 ARM Cortex A7 Quad Core Processor ; 2GB RAM Memory Long: 2 GB DDR3L SDRAM (1600MHz); 16GB disable eMMC 16GB v4.51
    HP Omen; i7-4710QH; 8 GB memory; 256 GB San Disk SSD; Win 8.1
    HP Photosmart 7520 AIO
    ++++++++++++++++++
    **Click the Thumbs Up+ to say 'Thanks' and the 'Accept as Solution' if I have solved your problem.**
    Intelligence is God given; Wisdom is the sum of our mistakes!
    I am not an HP employee.

  • Getting the time when the online log switches and when the archiver starts and ends

    Hi,
    I am using "Oracle Database 10g Release 10.2.0.2.0 - 64bit Production".
    Is there a possibility to check the exact time
    - when an online redo log switch occurs
    - when the archiver starts and finishes archiving the online logs
    I would like to tune archiver performance.
    Thanks for any answers
    Groxy

    In my current configuration I am getting a lot of waits (processes waiting for commit).
    I would like to see whether there is any relation between the archiving process and the waits in my application. The redo logs currently switch up to 6 times per hour. My idea is to increase the redo log size from 150 MB to e.g. 300 MB and give the archiver less priority, so there are no peaks that consume a significant amount of disk I/O, and the archiving work is spread out.
    Groxy
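    A minimal sketch of how to see those times from the data dictionary (assuming the standard V$ views; FIRST_TIME in V$LOG_HISTORY marks the switch, and COMPLETION_TIME in V$ARCHIVED_LOG marks when archiving of that sequence finished; the archiver's start time is not exposed directly):
    -- When each online redo log switch occurred
    SELECT sequence#, first_time
      FROM v$log_history
     ORDER BY first_time DESC;
    -- When the archiver finished writing each archived log
    SELECT sequence#, first_time, completion_time
      FROM v$archived_log
     ORDER BY completion_time DESC;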

  • Forcing log switch every minute.

    Hi,
    I want to force a log switch every minute; how can I do it?
    What should the value of fast_start_mttr_target be?
    Does a checkpoint force a log switch?
    Do I only need to reduce the size of the redo logs?
    How can I make sure that a log switch happens after a particular time period, e.g. 1 or 2 minutes?
    I want to force a log switch every minute because I want to send the archived redo logs to the standby database so that no more than 1 minute of database changes is lost. I am using 10g R2 on Windows 2003 Server.
    I am unable to find a solution. Any help?

    Hi,
    > I want to force a log switch every minute; how can I do it?
    Yes, with the ARCHIVE_LAG_TARGET parameter:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref934
    > What should the value of fast_start_mttr_target be?
    The incremental (normal) checkpoint, concerned with fast instance recovery and downtime, was introduced in Oracle 8; the feature is enabled with the initialization parameter FAST_START_MTTR_TARGET in 9i. With fast_start_mttr_target set, the database writer tries to keep the number of dirty blocks in the buffer cache low enough to guarantee rapid recovery in the event of a crash. It frequently updates the file headers to reflect the fact that there are no dirty buffers older than a particular SCN.
    http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtunin004.htm#sthref1110
    > Does a checkpoint force a log switch?
    A log switch forces a checkpoint; a checkpoint never forces a log switch.
    > Do I only need to reduce the size of the redo logs?
    It depends on your SLA and how much data you can risk, but it will affect your database performance. The recommendation is to size the logs so that they fill (and switch) after about 20 minutes; it is a trade-off of risk vs. performance.
    > How can I make sure that a log switch happens after a particular time period, e.g. 1 or 2 minutes? I want to force a log switch every minute because I want to send the archived redo logs to the standby database so that no more than 1 minute of database changes is lost. I am using 10g R2 on Windows 2003 Server. I am unable to find a solution. Any help?
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref934
    Khurram
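    A minimal sketch of the parameter change being described (assuming a 60-second target; SCOPE = BOTH assumes an spfile is in use):
    -- Force a log switch at most every 60 seconds,
    -- even if the current online redo log is not yet full.
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 60 SCOPE = BOTH;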

  • Redo log switching

    11gR2
    I found that the log switched every few minutes (2-3 minutes) at peak periods. I will make the recommendation for larger redo logs, such that switches will happen every 20 to 30 minutes.
    I wanted to know: what would be the negative side of large redo logs?

    >
    11gR2
    I found that the log switched every few minutes (2-3 minutes) at peak periods. I will make the recommendation for larger redo logs, such that switches will happen every 20 to 30 minutes.
    I wanted to know: what would be the negative side of large redo logs?
    >
    Patience, grasshopper!
    If it ain't broke, don't fix it. So first make sure it is broken, or about to break.
    Unless you have an emergency on your hands, you don't want to implement a change like that without careful examination of your current log file usage and history.
    You need to provide more information, such as the typical size of your log files, the number of log groups, the number of members in each group, the log archive policy, etc.
    1. How often do these 'peak periods' occur? Fewer than 5 or 6 times a day? Or dozens of times?
    2. How long do they last? A few minutes? Or a few hours?
    3. What is the typical, non-peak rate of switches? This is really the baseline you need to compare things to.
    4. What has the switch pattern been over the last few weeks or months?
    5. What has the growth in DB activity been over the last few weeks or months? What do you expect over the next few months?
    6. What is your goal in reducing the frequency of log switches?
    Basic negatives include a longer time to archive each log file (the fewer logs in each group, the bigger the impact) and a longer time to recover if you ever need to: with large log files there is more for Oracle to wade through to find the relevant data when restoring the DB to a given point in time.
    Your suggestion of every 20-30 minutes means 2 to 3 switches per hour. If you currently switch 10 or 12 times per hour, you are making a very big change.
    Although you don't want to 'tweak' the logs unnecessarily, you also don't want to make such a large change in one step.
    Everything in moderation. If your current switch rate is 10 or 12 times per hour, you may want to first cut it to maybe 1/2 to 1/3, that is, to 4 or 5 times per hour. It all depends on the answers to questions like those above. If you post the answers, it will help anyone trying to advise you.
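    A minimal sketch for establishing that baseline (assuming access to V$LOG_HISTORY; it counts log switches per hour over recent history):
    -- Log switches per hour, most recent hours first
    SELECT TRUNC(first_time, 'HH24') AS switch_hour,
           COUNT(*)                  AS switches
      FROM v$log_history
     GROUP BY TRUNC(first_time, 'HH24')
     ORDER BY switch_hour DESC;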

  • Reduce logging of cronie and systemd-timesyncd

    Hello,
    I can't figure out how to reduce the logging of these two daemons. The log on my server looks like this:
    Dec 15 22:01:01 smecpi crond[23935]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 21:33:06 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.008s/0.001s/-31ppm
    Dec 15 21:01:01 smecpi CROND[23929]: pam_unix(crond:session): session closed for user root
    Dec 15 21:01:01 smecpi CROND[23930]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 21:01:01 smecpi crond[23929]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 20:58:58 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.008s/0.002s/-30ppm
    Dec 15 20:24:50 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.009s/0.001s/-30ppm
    Dec 15 20:01:01 smecpi CROND[23924]: pam_unix(crond:session): session closed for user root
    Dec 15 20:01:01 smecpi CROND[23925]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 20:01:01 smecpi crond[23924]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 19:50:41 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.001s/0.008s/0.001s/-31ppm
    Dec 15 19:16:33 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.001s/0.009s/0.001s/-30ppm
    Dec 15 19:01:01 smecpi CROND[23918]: pam_unix(crond:session): session closed for user root
    Dec 15 19:01:01 smecpi CROND[23919]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 19:01:01 smecpi crond[23918]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 18:53:39 smecpi systemd[1]: Started Cleanup of Temporary Directories.
    Dec 15 18:53:39 smecpi systemd[1]: Starting Cleanup of Temporary Directories...
    Dec 15 18:42:25 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.001s/0.008s/0.001s/-31ppm
    Dec 15 18:08:17 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.001s/0.008s/0.001s/-30ppm (ignored)
    Dec 15 18:01:01 smecpi CROND[23909]: pam_unix(crond:session): session closed for user root
    Dec 15 18:01:01 smecpi CROND[23910]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 18:01:01 smecpi crond[23909]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 17:34:08 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.001s/0.009s/0.001s/-30ppm
    Dec 15 17:01:01 smecpi CROND[23904]: pam_unix(crond:session): session closed for user root
    Dec 15 17:01:01 smecpi CROND[23905]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 17:01:01 smecpi crond[23904]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 17:00:00 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.009s/0.000s/-30ppm (ignored)
    Dec 15 16:25:52 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.001s/0.008s/0.000s/-30ppm
    Dec 15 16:01:01 smecpi CROND[23899]: pam_unix(crond:session): session closed for user root
    Dec 15 16:01:01 smecpi CROND[23900]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 16:01:01 smecpi crond[23899]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 15:51:44 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.008s/0.000s/-30ppm
    Dec 15 15:17:35 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.008s/0.002s/-30ppm
    Dec 15 15:01:01 smecpi CROND[23894]: pam_unix(crond:session): session closed for user root
    Dec 15 15:01:01 smecpi CROND[23895]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 15:01:01 smecpi crond[23894]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 14:43:27 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.008s/0.002s/-30ppm
    Dec 15 14:09:19 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.008s/0.002s/-30ppm
    Dec 15 14:01:02 smecpi CROND[23889]: pam_unix(crond:session): session closed for user root
    Dec 15 14:01:02 smecpi CROND[23890]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 14:01:02 smecpi crond[23889]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 13:35:11 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.008s/0.002s/-30ppm
    Dec 15 13:01:02 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.008s/0.002s/-30ppm
    Dec 15 13:01:01 smecpi CROND[23883]: pam_unix(crond:session): session closed for user root
    Dec 15 13:01:01 smecpi CROND[23884]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 13:01:01 smecpi crond[23883]: pam_unix(crond:session): session opened for user root by (uid=0)
    Dec 15 12:26:54 smecpi systemd-timesyncd[114]: interval/delta/delay/jitter/drift 2048s/-0.000s/0.008s/0.002s/-30ppm
    Dec 15 12:01:01 smecpi CROND[23877]: pam_unix(crond:session): session closed for user root
    Dec 15 12:01:01 smecpi CROND[23878]: (root) CMD (run-parts /etc/cron.hourly)
    Dec 15 12:01:01 smecpi crond[23877]: pam_unix(crond:session): session opened for user root by (uid=0)
    It's annoying for at least two reasons. Firstly, it makes it hard to catch important information from other services (nginx, ssh, system maintenance, log rotation...), and secondly, it consumes resources (unnecessary writes and storage).
    I've also tried to find out how to change the timesyncd sync interval, but without any success. It would be fine for me to run it even only about once a day.
    Reducing the logging of cron is more complicated, since I need it for e.g. backups (using rdiff-backup). But it would be nice to have only warnings or errors reported, along with the backup job's stdout.
    There are no special settings in /etc/systemd/journald.conf (only volatile storage) or in timesyncd.conf (nothing there).
    Units are as follows:
    /usr/lib/systemd/systemd-timesyncd.service
    [Unit]
    Description=Network Time Synchronization
    Documentation=man:systemd-timesyncd.service(8)
    ConditionCapability=CAP_SYS_TIME
    ConditionVirtualization=no
    DefaultDependencies=no
    RequiresMountsFor=/var/lib/systemd/clock
    After=systemd-remount-fs.service systemd-tmpfiles-setup.service systemd-sysusers.service
    Before=time-sync.target sysinit.target shutdown.target
    Conflicts=shutdown.target
    Wants=time-sync.target
    [Service]
    Type=notify
    Restart=always
    RestartSec=0
    ExecStart=/usr/lib/systemd/systemd-timesyncd
    CapabilityBoundingSet=CAP_SYS_TIME CAP_SETUID CAP_SETGID CAP_SETPCAP CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER
    PrivateTmp=yes
    PrivateDevices=yes
    ProtectSystem=full
    ProtectHome=yes
    WatchdogSec=1min
    [Install]
    WantedBy=sysinit.target
    /usr/lib/systemd/cronie.service
    [Unit]
    Description=Periodic Command Scheduler
    [Service]
    ExecStart=/usr/bin/crond -n
    ExecReload=/usr/bin/kill -HUP $MAINPID
    KillMode=process
    Restart=always
    [Install]
    WantedBy=multi-user.target
    Last edited by Kotrfa (2014-12-16 10:18:51)


  • How to reduce the Logout time in IDM

    Hi everyone,
    Please help me solve this issue.
    When I click on the logout button of the end-user interface in Sun IDM, it takes 20 seconds to come back to the login page (login.jsp), which causes a lot of delay in the process. I want to reduce this time. I don't know exactly what the backend process is.
    Kindly help me if you know any solution for this.
    Thank you in advance.

    Srikanth,
    It would be better to use database table(s) to store your audit trail information than an XML file, which could grow beyond a manageable size and then require housekeeping efforts, file rolling, etc. It would also be much easier to query user-related information out of the logs (Pareto charts for failed vs. successful login attempts, etc.) if the data were stored in a database.
    Have you considered modifying the Relogin.jsp page link in the portal's sub-menu bar? How about making your own version of this web page and changing the link pointer? Keep in mind that whatever result you come up with here will need a significant update for NetWeaver UME compatibility in version 12.0.
    Regards,
    Jeremy

  • Data Guard - scheduling a manual log switch to have minimal lag

    Hi,
    I need some suggestions.
    Recently I configured a physical standby for my 10g production database in maximum performance mode.
    Now, to reduce the lag between the primary and the standby, I put in a cron job which does a manual log switch (alter system switch logfile) every 30 minutes,
    and it is doing the job!
    Does this have any impact on my databases?
    Looking forward to your invaluable response.
    Regards
    Noushad
    DBA

    Maybe you should let Oracle do its job: configure this parameter, which means that no buffer will remain dirty (in the cache) for more than 1800 (or whatever value you want) seconds:
    LOG_CHECKPOINT_TIMEOUT = 1800 :p
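    A minimal sketch of setting this, plus the ARCHIVE_LAG_TARGET alternative discussed in the earlier messages above, which forces a time-based log switch directly (SCOPE = BOTH assumes an spfile is in use):
    -- Checkpoint-based approach suggested above
    ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 1800 SCOPE = BOTH;
    -- Alternative: force a log switch at most every 1800 seconds
    ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800 SCOPE = BOTH;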


  • [svn:osmf:] 11045: Increasing timer interval in attempt to fix unit test on the build server.

    Revision: 11045
    Author:   [email protected]
    Date:     2009-10-21 02:32:45 -0700 (Wed, 21 Oct 2009)
    Log Message:
    Increasing timer interval in attempt to fix unit test on the build server.
    Modified Paths:
        osmf/trunk/framework/MediaFrameworkFlexTest/org/osmf/composition/TestParallelViewableTrait.as

    In general theory, one now has the Edit button for their posts, until someone/anyone Replies to it. I've had Edit available for weeks, as opposed to the old forum's ~ 30 mins.
    That, however, is in theory. I've posted, and immediately seen something that needed editing, only to find NO Replies, yet the Edit button is no longer available, only seconds later. Still, in that same thread, I'd have the Edit button from older posts, to which there had also been no Replies even after several days/weeks. Found one that had to be over a month old, and Edit was still there.
    Do not know the why/how of this behavior. At first, I thought that maybe there WAS a Reply, that "ate" my Edit button, but had not Refreshed on my screen. Refresh still showed no Replies, just no Edit either. In those cases, I just Reply and mention the [Edit].
    Also, it seems that the buttons get very scrambled at times, and Refresh does not always clear that up. I end up clicking where I "think" the right button should be and hope for the best. Seems that when the buttons do bunch up they can appear at random around the page, often three atop one another, and maybe one way the heck out in left-field.
    While I'm on a roll, it would be nice to be able to switch between Flattened and Threaded Views on the fly. Each has a use, and having to go to Options and then come back down to the thread is a very slow process. Jive is probably incapable of this, but I can dream.
    Hunt

  • Dataguard log switch question

    I wonder if anyone can help me with a question.
    I am new to Data Guard and only recently set up my first implementation of a primary and standby Oracle 11g database.
    It's all set up correctly, i.e., no sequence gaps showing, no errors in the alert logs, and I have successfully tested a switchover and switch back.
    I wanted to re-test that the archive logs were going across to the standby database OK; unfortunately I performed an alter system switch logfile on the standby database instead of the primary.
    No errors are reported anywhere, and there are no archive log sequence gaps or errors in the alert logs, but I am wondering if this will cause a problem the next time I have to fail over to the standby database.
    Apologies for my lack of knowledge; I am new to Data Guard, have only been a DBA for a couple of years, and have not had time yet to read the 500-page Data Guard book.
    Thanks in advance

    First you have to know what happens when a log switch occurs, either manually or internally.
    All data and changes are in the online redo log files; once a log switch occurs, whether automatic or forced, that information from the online redo log files is dumped to the archives.
    Now tell me: where is the online redo? There is no concept of online redo data on a standby; in the case of real-time apply you have only standby redo log files, and you cannot even switch the standby redo log files.
    So this command does not work on the standby; it is applicable only to online redo log files. Online redo exists and is active only on the primary.
    So there is nothing to worry about. Just make sure the environment is in sync prior to performing a switchover.
    Hope this helps.
    Why are all your questions unanswered? Close them and keep the forum clean.
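    A minimal sketch of that sync check (assuming the standard V$ views; compare the highest archived sequence on the primary with the highest applied sequence on the standby):
    -- On the primary: last archived sequence per thread
    SELECT thread#, MAX(sequence#) AS last_archived
      FROM v$archived_log
     GROUP BY thread#;
    -- On the standby: last applied sequence per thread
    SELECT thread#, MAX(sequence#) AS last_applied
      FROM v$archived_log
     WHERE applied = 'YES'
     GROUP BY thread#;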
