Archive generation time gap

Hi,
The database is 10.2.0.4 with Data Guard. Can I set the archive generation time lag to at least twice an hour?
Regards,
Upul Indika

Upul Indika wrote:
Hi,
The database is 10.2.0.4 with Data Guard. Can I set the archive generation time lag to at least twice an hour?
Regards,
Upul Indika

Use the DELAY attribute on the remote destination. The manual below explains it in detail.
The DELAY attribute is optional. By default there is no delay.
The DELAY attribute indicates the archived redo log files at the standby destination are not available for recovery until the specified time interval has expired. The time interval is expressed in minutes, and it starts when the redo data is successfully transmitted to, and archived at, the standby site.
The DELAY attribute may be used to protect a standby database from corrupted or erroneous primary data. However, there is a tradeoff because during failover it takes more time to apply all of the redo up to the point of corruption.
The DELAY attribute does not affect the transmittal of redo data to a standby destination.
If you have real-time apply enabled, any delay that you set will be ignored.
Changes to the DELAY attribute take effect the next time redo data is archived (after a log switch). In-progress archiving is not affected.
You can override the specified delay interval at the standby site, as follows:
For a physical standby database:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;
Source:
http://docs.oracle.com/cd/B28359_01/server.111/b28294/log_arch_dest_param.htm#i77472
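As an illustration only (the destination number `DEST_2` and the service name `standby_db` are hypothetical; adjust them to your own LOG_ARCHIVE_DEST_n configuration), a 30-minute apply delay on the standby destination could be set like this:

```sql
-- Hypothetical destination and service name: adjust to your setup.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=standby_db DELAY=30 VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
  SCOPE=BOTH;

-- The delay takes effect at the next log switch. To ignore it on the
-- standby side, use the NODELAY clause shown above.
```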

Similar Messages

  • Same work item in the work flow is generating 2 mails with a time gap.

Hi,
I have a workflow object where the same work item in the workflow is generating two mails with a time gap between them.
Please suggest how I should resolve this problem.
Thanks in advance.

Hello Rob & Rick,
Many thanks for your inputs.
I am working on Smart Forms for the first time, so it will still take some time to find a solution with the inputs you have suggested.
I will keep this thread posted, and if I am not able to complete the task, I will close the thread.

  • PR generated time

    All,
Suppose a requestor from any department raises a PR with a purchasing group.
The concerned buyer will then open transaction ME5A and see the open PRs.
1. Is this the standard practice?
2. In this case the buyer runs ME5A only every morning, so there will be a time gap if the requestor created the PR at noon. Is there any other solution, without workflow? What is the best practice?
3. Can we see the time the PR was generated (not just the date, but the exact time)?
Please advise.
Regards

    Sandeep,
If you are not seeing the Versions tab in ME53N, you first have to configure version management for PRs in SPRO - Materials Management - Purchasing - Version Management.
After activating version management you will see the Version tab in ME53N, next to the Source of Supply tab in the PR.
Hope this helps.
Regards,
Manish Agarwal
Don't forget to award points if the answer is useful.

  • Processing the multiple sender xml one by one in a time gap to RFC

Dear Experts,
I have to process multiple sender XML files one by one, from FTP to RFC, with a time gap between them.
For example:
I place 10 XML files in an FTP path at once; PI picks up all 10 files at the same time and processes them to the RFC at the same time.
Is there any way to process the files one by one through PI, with a time gap, to the RFC?
That is, PI needs to process the 10 files one by one: once the first file has been processed successfully from FTP to RFC, the next file should be processed after a time gap, to avoid errors in the RFC.
Kindly suggest your ideas or share some links on how to process these multiple files.
Best Regards,
Monikandan.

    Hi Monikandan,
You can use CE BPM with PI 7.1, but first check Anupam's suggestion in the thread below:
    reading file sequentially from FTP using SAP PI file adapter
    Regards,
    Nabendu.
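In PI itself this is solved through adapter configuration or BPM, as the reply above suggests; purely as a language-neutral illustration of the requested pattern (process files oldest-first, one at a time, pausing between them), here is a small Python sketch. The handler callback stands in for the actual FTP-to-RFC step and is hypothetical:

```python
import time
from pathlib import Path
from typing import Callable, Iterable, List

def process_sequentially(files: Iterable[Path],
                         handler: Callable[[Path], None],
                         gap_seconds: float = 5.0) -> List[Path]:
    """Process files one at a time, oldest first, pausing between files.

    The next file is attempted only after the handler call for the
    previous file returned without raising, mirroring the requested
    "one by one, with a time gap" FTP-to-RFC behaviour.
    """
    processed: List[Path] = []
    ordered = sorted(files, key=lambda p: p.stat().st_mtime)
    for i, f in enumerate(ordered):
        handler(f)  # hypothetical stand-in for the FTP-to-RFC step
        processed.append(f)
        if i < len(ordered) - 1:
            time.sleep(gap_seconds)  # the requested time gap
    return processed
```

The key design point is that the pause and the next pickup happen only after the previous handler call succeeds, which is exactly the sequencing the RFC side needs.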

  • Can you change the time gap in songs with iPhone

I really want to change the time gap between each of my songs on my iPhone 4, but I don't know if I can, or how. Can anyone help? :)

What "once a day" really means is "every twenty-four hours, starting from the last time they were updated". Just manually update them tomorrow morning and you should find that the time has changed.

  • Looking for the block CD Generate Time Profiles for MPC simulation.vi

Hello everyone! I am trying to implement MPC in LabVIEW. I have downloaded some code that shows the implementation. In that code I see a block named "CD Generate Time Profiles for MPC simulation.vi". I have searched for that block a lot but could not find it. Can anyone help me out (exactly under which section will I find that block), or tell me how to specify the set-point profile for the MPC simulation problem?

    The VIs related to generate profile can be found in:
    C:\Program Files (x86)\National Instruments\LabVIEW 2011\vi.lib\addons\Control Design\_MPC\Reference Profile
    or
C:\Program Files\National Instruments\LabVIEW 2011\vi.lib\addons\Control Design\_MPC\Reference Profile
    You can look at examples in:
    C:\Program Files (x86)\National Instruments\LabVIEW 2011\examples\Control and Simulation\Control Design\MPC
    C:\Program Files\National Instruments\LabVIEW 2011\examples\Control and Simulation\Control Design\MPC
    to verify how to use those VIs.
    Barp - Control and Simulation Group - LabVIEW R&D - National Instruments

  • Change archive from time 'sysdate-8' crosscheck;

    change archive from time 'sysdate-8' crosscheck;
What does this statement do?

CROSSCHECK compares the archived-log records in the RMAN repository against the files actually present on disk: records whose files are missing are marked EXPIRED, and files that are found are marked AVAILABLE. The FROM TIME 'sysdate-8' clause limits the check to logs generated in the last eight days. The command does not delete anything; it only updates the status flags in the repository.
I would love to be corrected if this is wrong.
Thanks.
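For reference, a minimal sketch of the two forms of this command (the older CHANGE syntax from the question, and the current CROSSCHECK syntax):

```sql
-- Older (8i-era) syntax, as quoted in the question:
CHANGE ARCHIVELOG FROM TIME 'SYSDATE-8' CROSSCHECK;

-- Current syntax: validate the last 8 days of archived-log records
-- against the files actually on disk. Missing files are marked
-- EXPIRED, found files AVAILABLE; nothing is deleted.
CROSSCHECK ARCHIVELOG FROM TIME 'SYSDATE-8';
```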

  • Time gap

Hello. I have a problem with a time gap in a process.
There is a parallel flow: one branch waits to receive a confirmation and the second invokes some web services.
Sometimes the process freezes between activities and continues after 45 seconds. If the confirmation message arrives, the client receives a timeout, but after 45 seconds the process picks the message up from dehydration and continues.
For example, the process enters a scope, then enters a sequence within one second, and the invoke happens 45 seconds later (I can see this in the audit). The other branch is still waiting for its message.
The problem happens occasionally, in different places.
Have you seen this before?
In the log we found:
<ERROR> <default.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "delivery": [com.oracle.bpel.client.ServerException: Waiting for response has timed out. The conversation id is LocalGUID:86ad73a168de5c5c:-374a595b:11512c692ec:-77f0. Please check the process instance for detail.]
Is this the error that caused the time gap? Or is it only information about a message that could not be delivered because of the time gap?

This problem was caused by the server environment. The process runs on a clustered server with two nodes. When the second message followed the first immediately and landed on a different node than the first, the nodes were not synchronized, and the second message was answered with a timeout.

  • My JMS Consumer receives messages at uneven time gaps

    I have two applications, say APP1 and APP2. Both of these are connected through JMS Bridges.
    APP1 is hosting the PUSH bridge to APP2.
There is an uneven delay in the messages received at APP2.
I have checked the logs and found that there is a time gap between APP1 posting a message on the JMS queue and APP2 receiving it.
This is not happening for all the messages moving between these applications, but it is consistently recurring.
Many of the messages are received at APP2 within 2 seconds, but some take up to 30 seconds to show up.
There is no network load between these applications.
    I want to understand why this is happening? and how to resolve this?
    Could anyone help me out?
    Thanks,
    Aditya

    Are you using OpenMQ (also known as Glassfish MQ, Java System Message Queue) ? If yes, are you using OpenMQ 4.4's JMS Bridge feature ?

  • Why time gap?

    Hi Friends,
This is a support issue, in an IDoc-to-file scenario. The time stamp of the sender says the message was sent at 13:52, but in XI (Integration Engine) it was processed at 14:31. Why this time gap? Generally, how much time does it take to process and send a message to the receiver system? What are the possible reasons for this time gap?
    Thanks,
    Suresh

Generally it won't take that much time to process an IDoc.
Possibilities can be:
1. Network latency issues
2. High load on the XI server
3. Check that you are looking at the correct IDoc (compare the sent and received IDocs)
    Regards
    Rajesh

  • Please provide me some real time gaps(SD) in projects and their solutions

Hi everybody,
Please provide me some real-time gaps (SD) encountered in projects, and their solutions, for interview preparation.
    Thanks and Regards

  • File naming, archiving and time management

    I've posted on this subject before, but I have a new twist that I'd like to get some feedback on.
    I usually import my photos, keeping the master (now called original) file name until the end of the calendar year.  At the end of the year, I like to change the original name for classification and archiving purposes.  By then, I've usually made all of the deletions for the year, so I feel comfortable renaming the photos with some sort of counter or index.  My preferred classification system is: "Custom Name"/"Image Date_"/"Counter" (0000).
    The problem that I'm experiencing is that it is impossible to rename my originals using this format without some inaccuracies if I try to name them all at once without readjusting the computer's internal time zone settings.  I live on the east coast, so if I have a photo shot at 10:30 pm PDT on 2011-03-14, it gets named with a date of 2011-03-15, which obviously isn't accurate for when that photo was shot.  Well, it is accurate based on East Coast Time, but I want the file to be renamed with the date that it was shot, where it was shot, not where my computer currently resides.  Of course, I could rename the batch of 2011 photos in segments, but that would mean multiple quits/reopens from Aperture in order to change the time zone appropriately.
It seems that my only choices are either to rename my photos at the time of import, using the correct time zone settings on my computer, or not to use this renaming format. Neither of these options is very appealing, since this renaming format is my preferred method.
    I guess my question is: does anyone have any insights or advice on either how to better work around this problem, or if not, other renaming methods that they like to use for archival and organizational purposes?  I know there are many to choose from, but I'm looking for something simple, which also provides direct information about the image, should I want to reference my Originals (which I do outside of Aperture from time to time).
    Thanks for adding to this discussion...
    mac

    Allen,
    SierraDragon wrote:
    mac-
    Personally I create a folder for each Project and copy pix from CF card into those folders. Then I import from the backup hard drive into Aperture using the folder name as the Project name.
    Usually each Project includes only one day or less, and I may have YYMMDD_JonesWed_A, YYMMDD_JonesWed_B, etc. for a large or multiday shoot. I do not let any Project contain more than ~400 Nikon D2x RAW+JPEG files.
    Projects are just that and never put into folders other than by month and/or year, just a forever chronological list. All organizing is done via Albums and Keywords. JonesWed_2011 is a keyword that can be an Album instantly when needed; bride is a keyword; wed is a keyword; flower is a keyword; etc.
    I use wedding just as an example. The process applies to all kinds of shoots.
    I use the 1-9999 Nikon auto-numbering of image files, and never rename image files except  sometimes during export. That way original image names can always be found across mass storage devices in the future independent of any application.
    -Allen
    SierraDragon wrote:
    Usually each Project includes only one day or less, and I may have YYMMDD_JonesWed_A, YYMMDD_JonesWed_B, etc. for a large or multiday shoot. I do not let any Project contain more than ~400 Nikon D2x RAW+JPEG files.
    Why do you keep the photo count in a project to around 400 files or so?  Is it detrimental to speed, or are there other considerations that have led you to work this way?
    SierraDragon wrote:
    Projects are just that and never put into folders other than by month and/or year, just a forever chronological list. All organizing is done via Albums and Keywords. JonesWed_2011 is a keyword that can be an Album instantly when needed; bride is a keyword; wed is a keyword; flower is a keyword; etc.
    So, you are saying that you sometimes put projects into folders by month and/or year?  Or, do you just keep all projects at the top level of the hierarchy?  The only folders I use are at the top of my hierarchy, and they are by year, 2002, 2003, 2004...2012.  I then keep all of my projects in the appropriate year.  I used to keep folders that were named things like, "Travel", "Occasions"..., but this became problematic when I had overlap, and images could fit in more than one designated folder.
    SierraDragon wrote:
    I use the 1-9999 Nikon auto-numbering of image files, and never rename image files except  sometimes during export. That way original image names can always be found across mass storage devices in the future independent of any application.
    It sounds as though you don't actually rename your images at all, but rather just keep the original names.  I don't like to do this because after deletions, it creates gaps in my sequence, and I also end up with multiple images with the same name.  I like for each image to have its own unique identifier by name.
    I'm considering importing the images using a version name, where the version is named by the image date.  I'll keep the original file name intact until the end of the year, and then, should I decide to rename my files, I could base my renaming system off of the version name.  This will automatically capture the date of the image without being reliant on my computer's time zone settings.

  • Recover database from archive log: Time based recovery

    Hi,
Could you please help with the following?
I had a power outage on my machine running Oracle 9i on Solaris 9.
Oracle mounts but fails to open.
When it is started, it shows this error message:
    SQL> startup;
    ORACLE instance started.
    Total System Global Area 9457089744 bytes
    Fixed Size 744656 bytes
    Variable Size 3154116608 bytes
    Database Buffers 6291456000 bytes
    Redo Buffers 10772480 bytes
    Database mounted.
    ORA-01113: file 4 needs media recovery
    ORA-01110: data file 4: '/opt/oracle/oradata/sysdb/indx/indx01.dbf'
    So I tried to recover the database
    SQL> recover database;
ORA-00279: change 1652948240 generated at 12/03/2007 13:09:08 needed for thread 1
ORA-00289: suggestion : /opt/oracle/oradata/nobilldb/archive/1_183942.dbf
ORA-00280: change 1652948240 for thread 1 is in sequence #183942
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00279: change 1652948816 generated at 12/03/2007 13:09:19 needed for thread 1
ORA-00289: suggestion : /opt/oracle/oradata/nobilldb/archive/1_183943.dbf
ORA-00280: change 1652948816 for thread 1 is in sequence #183943
ORA-00278: log file '/opt/oracle/oradata/nobilldb/archive/1_183942.dbf' no longer needed for this recovery
The power outage was at 16:00, and the suggested recovery archive log '/opt/oracle/oradata/nobilldb/archive/1_183942.dbf' is from 11 am.
Each time I apply the next sequence it gives the same message and asks for the next one. I have more than 900 archived logs from 11 am to 4 pm, each about 100 MB, and it takes about a minute per log before the next prompt appears.
How can I start my recovery from, say, 15:45 onwards, until 16:15?
I have all the archived logs in the proper destination.
The database is still not open and it has been applying archived logs for 5 hours now; please help me with this.
    Thanks in advance

    Wrong forum. Post your question in the following forum:
    General Database Discussions
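As an aside to the original question: the repeated "Specify log" prompts can be automated so that SQL*Plus applies each suggested archived log without waiting for input. A minimal sketch:

```sql
-- Apply every suggested archived log automatically instead of
-- answering ~900 "Specify log" prompts by hand.
SET AUTORECOVERY ON;
RECOVER DATABASE;

-- (Alternatively, answer AUTO at the first "Specify log" prompt.)
-- After recovery completes:
ALTER DATABASE OPEN;
```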

  • SAP PI - Windows OS Generated Time Stamp

    Hi Experts,
PI should pick up files from the FTP location based on the file creation time (this is not a time stamp in the file name but the one generated by the Windows OS). Is this possible in SAP PI?
    Regards,
    Aniruddha

    Hi,
This is possible with the NFS transport protocol, using the Processing Sequence "By Date" option:
● Processing Sequence (for transport protocol File System (NFS))
If you used placeholders when specifying the file name, define the processing sequence of the files:
○ By Name: Files are processed alphabetically by file name.
○ By Date: Files are processed according to their time stamp in the file system, starting with the oldest file.
    (http://help.sap.com/saphelp_nw04/helpdata/en/e3/94007075cae04f930cc4c034e411e1/content.htm)
    Regards,
    Naveen.

  • Automatic Time Card Generation not generating Time Card

    Hi,
I am running the program 'Automatic Time Card Generation'. The log file shows the message
'The process did not generate any timecards for the Payroll, Payroll Period selected.'
And there is no time card for the given payroll month.
Any suggestions?
    Many many thanks in advance.
    Regards,
    Subrat

Please check Metalink note 282547.1 to see if it is relevant to your issue.
