Archive restrictions

We have two databases, a primary and a standby. I need to know the restrictions on archives, i.e. if I create a tablespace on the primary and then generate and apply the archive, will the same tablespace be present on the standby? Similarly for datafiles, granting privileges, and so on.

973363 wrote:
we have two databases, a primary and a standby. I need to know the restrictions on archives, i.e. if I create a tablespace on the primary and then generate and apply the archive, will the same tablespace be present on the standby? Similarly for datafiles, granting privileges, and so on.
No need to add tablespaces on the standby. If the Data Guard setup is done correctly, the tablespaces will be created on the standby by applying the redo from the archive logs.
The disaster recovery standby database will be in MOUNT stage and can only be logged into as the SYS user, so don't worry about the privileges.
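For reference, a minimal sketch (not from the original thread; object names and the datafile path are only examples) of how this behaves when STANDBY_FILE_MANAGEMENT is set to AUTO on the standby: a tablespace created on the primary appears on the standby once the redo containing the CREATE TABLESPACE is shipped and applied.
    -- On the standby (mounted, managed recovery running):
    show parameter standby_file_management              -- should report AUTO
    alter system set standby_file_management = auto scope=both;
    -- On the primary (datafile path is only an example):
    create tablespace test_ts datafile 'I:\oradata\test_ts01.dbf' size 100m;
    alter system switch logfile;                         -- ship and apply the redo
    -- Back on the standby, verify once the log has been applied:
    select name from v$tablespace where name = 'TEST_TS';
    select name from v$datafile order by creation_time desc;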
Please keep the forum clean by marking your post as Answered or Helpful if your question is answered.
Thanks & Regards,
SID
(StepIntoOracleDBA)
Email : [email protected]
http://stepintooracledba.blogspot.in/
http://www.stepintooracledba.com/

Similar Messages

  • Exchange 2013 - Archive RESTRICTION

    Hello, all my Exchange MS experts,
    I have a customer requirement: my customer wants to restrict all users in their organization from deleting archived items from their archive mailboxes. Can we achieve this through a security option or from ADSI Edit? I suggested they go with In-Place Hold, but they do not want that feature; instead, if a user tries to delete any archived item, they should get a message saying "you don't have appropriate permission" / unable to delete.
    I have opened a Microsoft advisory case; they told me to give them some time to work on this requirement for testing, and they will come back to me.
    Friends, if you have any suggestions, please pass them on. I much appreciate your valuable input.

    Exchange doesn't work that way. Using Litigation Hold / Single Item Recovery is the only supported method.
    Please note: my posts are provided "AS IS" without warranty of any kind, either expressed or implied.

  • Restrict users from archiving PST to local computer

    Hi all,
    I would like to restrict users from archiving emails in Outlook to the local computer. We have a serious problem: users archive emails to the local computer, and then they can copy those emails to external devices, or attach the local PST file to their personal Outlook profile and forward the emails to external recipients. We have run into a serious problem now, and I am trying to resolve it by restricting users from archiving emails to their local computers. Is there any way I can do this?
    Only designated users (from the support team) should be able to archive Outlook emails, and they should save them to a central file server.
    Please share your thoughts. Thank you all for taking the time to read this and for your suggestions.

    Hi Friend,
    Use the Group Policy feature and enable the "DisablePST" registry value; it will prevent users from creating new PST files and even removes the Archive function from their Outlook interface.
    Registry path to disable PST file creation (Group Policy):
    HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook
    For a brief explanation of the various restrictions on PST files, see:
    https://www.simple-talk.com/sysadmin/exchange/using-group-policy-to-restrict-the-use-of-pst-files/
    Note: improve community discussions by marking answers as helpful; otherwise, respond back for further help.
    Thanks
    Clark Kent

  • Restrict User to Print Document in Archive Link

    Dear Friends,
    I have uploaded and linked a scanned document to an SAP FI document via transaction code OAWD.
    Say document 1700000145,
    Company Code BP01,
    Fiscal Year 2011,
    Object BKPF.
    The user is able to see the linked document in FB03 from Services for Object > Attachment List.
    I want to restrict the user from printing the document.
    I checked the authorization objects "S_WFAR_KPR" and "S_WFAR_OBJ", but this did not help.
    Can you please guide me on how to handle this scenario?
    With Warm Regards
    Mangesh Pande

    Please, can anyone reply to my question?

  • Archive rows from one table to another, in batches of N at a time

    I am trying to set up an archiving process to move rows out of a large table into an archive table. I am using Oracle 10gR2, and I do not have the partitioning option. The current process is a simple loop (apologies for the layout; I can't see a way of formatting code neatly for posting):
    for r in (select ... from table where ...) loop
        insert into arch_table (...) values (r.a, r.b, ...);
        -- error handling
        delete from table where rowid = r.rowid;
        -- error handling
        commit_count := commit_count + 1;
        if commit_count >= N then
            commit;
            commit_count := 0;
        end if;
    end loop;
    I know this is not a good approach - we're looking at fixing it because it's getting the inevitable "snapshot too old" errors, apart from anything else - and I'd like to build something more robust.
    I do need to only take N rows at a time - firstly, because we don't have the space to create a big enough undo tablespace to do everything at once, and secondly, because there is no business reason to insist that the archiving is done in a single transaction - it's perfectly acceptable to do "some at a time" and having multiple commits makes the process restartable while at the same time making some progress on each run.
    My first thought was to use rownum in the where clause to just do a bulk insert then a bulk delete:
    insert into archive_table (...) select ... from table where ... and rownum < N;
    delete from table where ... and rownum < N;
    commit;
    (I'd need some error logging clauses in there to be able to report errors properly, but that's OK).
    However, I can't find anything that convinces me that this type of use of rownum is deterministic - that is, the delete will always delete the same rows that the insert inserted (I could imagine different plans for the insert and the delete, which meant that the rows were selected in a different order). I can't think of a way to prove that this couldn't happen.
    Alternatively, I could do a bulk collect to select the rows in batches, then do a bulk insert followed by a rowid-based delete. That way there's a single select, so there's no issue of mismatches, but this would potentially use a lot of client memory to hold the row set.
    Does anybody have any comments or suggestions on the best way to handle this? I'd prefer a solution along the lines of the first suggestion (use rownum in the where clause) if I could find something I could be sure was reliable. I just have a gut reaction that it "should" be possible to do something like this in a single statement. I've looked briefly at esoteric uses of the merge statement to do the insert/delete in a single statement, but couldn't find anything.
    It's a problem that seems to come up a lot in discussions, but I have never yet seen a decent discussion of the various tradeoffs. (Most solutions I've seen tend to either suggest "bite the bullet and do it in one transaction and give it enough undo" or "use features of the data (for example, a record ID column) to roughly partition the data into manageable sizes", neither of which would be particularly easy in this case).
    Thanks in advance for any help or suggestions.
    Paul
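    For the "error logging clauses" mentioned above, here is a minimal sketch of the 10gR2 DML error logging feature (table names and the predicate are placeholders, not the poster's real schema). It logs failing rows into an error table instead of aborting the statement, although it does not by itself resolve the rownum-determinism concern raised above.
    -- One-time setup: creates ERR$_ARCHIVE_TABLE alongside the target table.
    begin
        dbms_errlog.create_error_log(dml_table_name => 'ARCHIVE_TABLE');
    end;
    /
    -- Batch insert that records, rather than raises, row-level errors.
    insert into archive_table
        select * from source_table
         where created < sysdate - 30
           and rownum <= 1000
      log errors into err$_archive_table ('archive batch 1')
      reject limit unlimited;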

    Actually, you also have a problem in that you get a PLS-00436 error because you can't reference individual attributes of a record in a FORALL. (I think this restriction might have been eased in 11g, but as I'm on 10g I have to live with it :-().
    However, your code did give me an idea - I can just bulk-select ROWID and then do an insert ... select * where rowid = (the selected rowid). I need to consider how efficient this will be, in a bit more detail, and do some tests, but as I'm doing rowid-based selects it should be reasonably good. Here's some sample code:
    -- create table pfm_test as select * from dba_objects;
    -- insert into pfm_test select * from pfm_test;
    -- insert into pfm_test select * from pfm_test;
    -- insert into pfm_test select * from pfm_test;
    -- update pfm_test set created = sysdate - (rownum/24);
    -- commit;
    -- create table pfm_test_archive as select * from pfm_test where 1=0;
    create or replace procedure archive_data (days number, batch number) as
        cursor c is select rowid from pfm_test where created < sysdate - days;
        type t_rowid_arr is table of rowid;
        l_rowid_arr t_rowid_arr;
        i number := 0;
    begin
        loop
            -- Re-open the cursor on each pass: the rows archived in the previous
            -- pass have been deleted, so the first "batch" rows are always new.
            open c;
            fetch c bulk collect into l_rowid_arr limit batch;
            close c;
            i := i + 1;
            dbms_output.put_line('Batch ' || i || ': ' || l_rowid_arr.count || ' records');
            exit when l_rowid_arr.count = 0;
            forall j in l_rowid_arr.first .. l_rowid_arr.last
                insert into pfm_test_archive
                select * from pfm_test where rowid = l_rowid_arr(j);
            forall j in l_rowid_arr.first .. l_rowid_arr.last
                delete from pfm_test where rowid = l_rowid_arr(j);
            commit;
        end loop;
    end;
    -- exec archive_data(17000,1000);
    Now to look at error handling for FORALL statements...
    Thanks for the help.
    Paul.
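    On the error handling question left open above, here is a minimal sketch (reusing the poster's pfm_test / pfm_test_archive names; the batch size and cutoff are only examples) of FORALL ... SAVE EXCEPTIONS, which lets the bulk insert continue past individual failing rows and report them afterwards via SQL%BULK_EXCEPTIONS:
    declare
        cursor c is select rowid from pfm_test where created < sysdate - 17000;
        type t_rowid_arr is table of rowid;
        l_rowid_arr t_rowid_arr;
        bulk_errors exception;
        pragma exception_init(bulk_errors, -24381);   -- ORA-24381: error(s) in array DML
    begin
        open c;
        fetch c bulk collect into l_rowid_arr limit 1000;
        close c;
        begin
            forall j in 1 .. l_rowid_arr.count save exceptions
                insert into pfm_test_archive
                select * from pfm_test where rowid = l_rowid_arr(j);
        exception
            when bulk_errors then
                -- Report each failed element; the rest of the batch was still processed.
                for k in 1 .. sql%bulk_exceptions.count loop
                    dbms_output.put_line('Element ' || sql%bulk_exceptions(k).error_index ||
                                         ' failed with ORA-' || sql%bulk_exceptions(k).error_code);
                end loop;
        end;
        commit;
    end;
    /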

  • Restrict Which Users Can Enter Data In List Form in SharePoint Foundation 2013

    Is there a way to restrict which users can enter data in particular fields in a list item entry form?
    We are using a SharePoint Foundation 2013 list and calendar to manage vacation time. We need to restrict non-supervisor users from entering a value in a certain field in the vacation request form.
    Here is how the system works now:
    1. Employees complete the vacation request form (which creates a list item)
    2. An email is sent to their supervisor to either approve or decline the request
    3. Approved requests are automatically entered onto the vacation calendar
    We have restricted the list so that only supervisors can edit items (the pending vacation requests). The problem is that all users can mark their own requests as approved when they fill out the request form in the first place. Is there a way to restrict
    which users can enter data in particular fields on a list item entry form?

    Thanks for the suggestion. We ended up 1) hiding the approval column and 2) creating a second list, workflow, etc. The user no longer sees the approval column when filling out the form. Requests are now submitted to list A. Workflow #1 copies the request
    to List B, then deletes the item from List A. Once the request is added to List B, Workflow #2 emails the user that the request has been received and emails the supervisor that a request needs to be approved. Only supervisors have editing permissions on List
    B. Approved requests are automatically added to the vacation calendar (the calendar view of List B).
    We found the following site to be helpful in learning how to hide the list column:
    http://community.bamboosolutions.com/blogs/bambooteamblog/archive/2013/06/03/how-to-hide-a-sharepoint-list-column-from-a-list-form.aspx

  • How do I upload an 'On my Mac' email archive in Mail to an 'On my computer' folder in Outlook for Mac??

    I've installed Office for Mac with Outlook and want to import an archive of 'On my Mac' emails from Mail to a new folder created in Outlook's 'On my computer' section.
    When I recently upgraded to Mavericks, I was able to import this archive back in from a system backup I'd made beforehand. I thought I would be able to do the same again now, but Outlook's import function won't play ball, not even after I have led it to the 'Mailboxes' folder of MBOX data that I've saved onto my desktop. Outlook's 'import' button remains greyed out and unresponsive, even after I've opened the 'Mailboxes' folder to expose the MBOX data.
    I now know I am not alone in being unable to complete this task by the only method suggested in Outlook's 'Help' that is open to me - ie, to "import messages in MBOX format". I got nowhere at all trying the alternative, to "Import Apple Mail messages", where, after clicking 'File / Import', the options encountered bore little resemblance to those described in 'Help' - eg, to import, specifically, "Entourage information from an archive or earlier version" rather than the more general "Information from another application", whilst there was subsequently no mention whatsoever of "Apple Mail" after clicking "the right arrow to continue".
    I've since learned from trawling internet forums that Apple and Microsoft MBOX folders are not the same, which is presumably at the heart of this. There are also various solutions proffered, involving either uploading the archive over an IMAP connection (which I do have, to Gmail) to a shared email account, or using proprietary software that either converts email data from Mail to Outlook friendly or enables it to be uploaded from one to the other. However, none of the simpler solutions are relevant (they tend to be restricted to particular mail clients, etc) whilst the others require programming familiarity and skills that I don't possess.
    It's hard to believe that Microsoft could produce this suite expressly for Macs and, in the case of the Outlook component, implicitly to replace its Mac counterparts, yet for it to be incapable of doing so in the crucial, basic area of importing pre-existing Mac email data.
    Whilst I have taken this up with Microsoft, I'm just hoping there might be someone out there in the Mac community who might know the answer . . .

    Connect in recovery mode and restore, you'll get the option to rest the passcode during this process:
    iOS: Unable to update or restore and iPhone and iPod touch: Wrong passcode results in red disabled screen
    If you cannot remember the passcode, you will need to restore your device using the computer with which you last synced it. This allows you to reset your passcode and resync the data from the device (or restore from a backup). If you restore on a different computer that was never synced with the device, you will be able to unlock the device for use and remove the passcode, but your data will not be present. Refer to Updating and restoring iPhone, iPad and iPod touch software.

  • Vendor master Archiving

    Hi Experts,
    I am trying to archive the vendor master. The process I am following is:
    1. In XK06, set the deletion flag for the vendor.
    2. Then go to transaction F58A. There I maintained the variant name as vendor_del; after that, I clicked Maintain and entered the required vendor to be archived.
    3. When I try to leave that screen, the following error message appears:
    Links stored incompletely:
    Message no. FG166
    Diagnosis
    Customer/vendor links are to be taken into consideration when archiving or deleting customers or vendors. To do this, link data from the database is required which in this case has not yet been put together.
    System Response
    Neither archiving nor deletion was carried out.
    Procedure
    Run program SAPF047 in the background at a time when no customer/vendor changes are being made. This program puts together the link information in table KLPA so that links (for example, specification of an alternative payee or dunning recipient) can be taken into consideration during deletion or archiving. You can find more information in the documentation for SAPF047.
    This is the message I am getting in Help.
    4. After that I tried to link that table KLPA, but was not able to.
    So please guide me through the exact process to do this.
    Thanks,
    Das

    Please read the documentation of program SAPF058V:
    Program SAPF058V allows you to automatically set archivable vendor master data for archiving. The same checks are carried out as with archiving.
    Firstly a proposal list is issued. You can set and save the archive flag in this list.
    If the program is started in the background, the archive flag is set automatically for all archivable data.
    Archiving itself is only permitted for master records where the archiving flag is set. If a proposed archivable data record is to actually be archived, the archiving flag must be set.
    You can restrict the quantity of master records to be checked by selecting
    o   Vendor numbers
    o   Minimum number of days in the system
    o   Company codes, if FI data is to be considered
    o   Purchasing organization, if MM data is to be considered
    o   System criteria (see below)
    Requirements
       Program modes
    The program can be run in three different modes:
    1.  Only general data is set for archiving (A segments)
    2.  Only application-specific data is set for archiving (B segments)
    3.  General and application-specific data is set for archiving
    Check whether data can be archived
    Mode 1: The A segments selected must be set for archiving, and no dependencies to B segments may exist. For example, no company code data may exist for a vendor if the general data is to be archived.
    Mode 2: The application-specific data must satisfy the relevant application criteria:
    FI: Company code data is archivable if no special G/L figures or transaction figures and no open or cleared items exist. In addition, the archive flag must be set at company code level.
    MM: Purchasing organization data is archivable if the archive flag is set at this level.
    Mode 3: The A or B segments selected must be set for archiving. The A and B segments of a record are deleted if the archive flag is set in the A segment. The B segment only of a record is deleted if the archive flag is set in the B segment (in this case, the A segment is only copied to the archive). In addition, the application-specific data must also satisfy the criteria from mode 2.
    The A segments are archived if all of the dependent B segments are archived and the vendor is not referred to anywhere else. If this is not the case, the A segments are copied into the archive.
    For all modes: Vendors cannot be archived if they are referred to on a general/company code level from other vendors (for example, financial address or alternative payee). This check can be deselected however (use the field "FI link validation off") if you know that all the vendors that refer to the first vendor are to be archived in this or a following run.

  • PO archiving

    My settings in configuration are residence time 1 = 730 and residence time 2 = 365.
    This means that if there is no change to a PO for 730 days, the deletion indicator will be set.
    Then, after a further 365 days, the PO will be deleted from the database.
    1. I guess that during the residence time 2 period we will not be able to make any changes to the PO. Is this correct?
       If yes, then why is residence time 2 there?
    2. I manually deleted a PO line item. In the next archiving run (runs every month) this PO is marked as eligible for archiving, which means I am not able to make any changes to this PO.
    Where can I find the settings for this?
    Please advise.

    A deletion indicator set manually or via the preprocessing program in archiving does not restrict the user from making any changes.
    Please read OSS Note 948493 - Residence time in new reports RM06EV47, RM06BV47.
    There SAP explains how the preprocessing and write programs interact with the residence time.

  • How to apply Software Restriction policy for specific user in local group policy object ?

    I am working on implementing a user-based software restriction policy programmatically for the local Group Policy object.
    If I create a policy through the Domain Controller, I have the option for a software restriction policy in the user configuration, but in the Local Group Policy Editor I don't have that option.
    When I look at the changes made by the policy applied from the Domain Controller in the registry, it modifies registry values for specific users under the path HKEY_USERS\(SID of user)\Software\Policies\Microsoft\Windows\Safer\CodeIdentifiers.
    There is also a registry.pol stored in the SYSVOL folder on the Domain Controller. When I make the same changes in the registry to block any other application, the application gets blocked.
    I achieved what I wanted, but is it right to modify registry values?
    PS: I am using the IGroupPolicyObject API.

    I achieved what I wanted but is it right to modify registry values ?
    You can also modify a registry-based policy programmatically. Check this:
    http://blogs.msdn.com/b/dsadsi/archive/2009/07/23/working-with-group-policy-objects-programmatically-simple-c-example-illustrating-how-to-modify-a-registry-based-policy.aspx

  • Is there a way to restrict the number of attempts for a remediated question using advanced actions?

    I have the following slides in my project:
    content slide 1
    content slide 2
    question slide 1
    question slide 2
    Question slide 1 is a question about content slide 1. Question slide 2 is a question about content slide 2. I would like to restrict the total number of attempts to two for each question. If question 1 is answered incorrectly on the first attempt, the learner would be returned to content slide 1 for review. Clicking the next button will take the learner back to the missed quiz question and allow them a second attempt to answer it correctly. If they answer it incorrectly again, it is scored as incorrect and the learner is taken to question slide 2.
    Can this be done or does remediation keep repeating until the learner answers the question correctly?
    If that is the case, can I achieve my objective by using advanced actions? And, if so, can you provide step by step instructions on how to do this?

    I think it could be possible, but giving you step-by-step instructions would, sorry, take a lot of time. Have you used advanced actions already? My archived blog has a lot of use cases and tutorials, but I think it is not fair to ask on a forum for step-by-step instructions for each use case you want to create. The most important thing will be to make sure that the user always remains in the quiz scope; you can use the new system variable cpInQuizScope while testing. There is no system variable for attempts at the question level, only one at the quiz level, so you'll have to create a user variable to track the attempts for each question. A big problem is that when you leave a question slide without using the remediation workflow, the attempts are considered finished. For that reason, I personally would prefer not to use the default question slides. You could try a combination of remediation and advanced actions; I never did test that.
    Lilybiri

  • Windows Explorer misreads large-file .zip archives

       I just spent about 90 minutes trying to report this problem through
    the normal support channels with no useful result, so, in desperation,
    I'm trying here, in the hope that someone can direct this report to some
    useful place.
       There appears to be a bug in the .zip archive reader used by Windows
    Explorer in Windows 7 (and up, most likely).
       An Info-ZIP Zip user recently reported a problem with an archive
    created using our Zip program.  The archive was valid, but it contained
    a file which was larger than 4GiB.  The complaint was that Windows
    Explorer displayed (and, apparently believed) an absurdly large size
    value for this large-file archive member.  We have since reproduced the
    problem.
       The original .zip archive format includes uncompressed and compressed
    sizes for archive members (files), and these sizes were stored in 32-bit
    fields.  This caused problems for files which are larger than 4GiB (or,
    on some system types, where signed size values were used, 2GiB).  The
    solution to this fundamental limitation was to extend the .zip archive
    format to allow storage of 64-bit member sizes, when necessary.  (PKWARE
    identifies this format extension as "Zip64".)
       The .zip archive format includes a mechanism, the "Extra Field", for
    storing various kinds of metadata which had no place in the normal
    archive file headers.  Examples include OS-specific file-attribute data,
    such as Finder info and extended attributes for Apple Macintosh; record
    format, record size, and record type data for VMS/OpenVMS; universal
    file times and/or UID/GID for UNIX(-like) systems; and so on.  The Extra
    Field is where the 64-bit member sizes are stored, when the fixed 32-bit
    size fields are too small.
       An Extra Field has a structure which allows multiple types of extra
    data to be included.  It comprises one or more "Extra Blocks", each of
    which has the following structure:
           Size (bytes) | Description
          --------------+------------
                2       | Type code
                2       | Number of data bytes to follow
            (variable)  | Extra block data
       The problem with the .zip archive reader used by Windows Explorer is
    that it appears to expect the Extra Block which includes the 64-bit
    member sizes (type code = 0x0001) to be the first (or only) Extra Block
    in the Extra Field.  If some other Extra Block appears at the start of
    the Extra Field, then its (non-size) data are being incorrectly
    interpreted as the 64-bit sizes, while the actual 64-bit size data,
    further along in the Extra Field, are ignored.
       Perhaps the .zip archive _writer_ used by Windows Explorer always
    places the Extra Block with the 64-bit sizes in this special location,
    but the .zip specification does not demand any particular order or
    placement of Extra Blocks in the Extra Field, and other programs
    (Info-ZIP Zip, for example) should not be expected to abide by this
    artificial restriction.  For details, see section "4.5 Extensible data
    fields" in the PKWARE APPNOTE:
          http://www.pkware.com/documents/casestudies/APPNOTE.TXT
       A .zip archive reader is expected to consider the Extra Block type
    codes, and interpret accordingly the data which follow.  In particular,
    it's not sufficient to trust that any particular Extra Block will be the
    first one in the Extra Field.  It's generally safe to ignore any Extra
    Block whose type code is not recognized, but it's crucial to scan the
    Extra Field, identify each Extra Block, and handle it according to its
    type.
       Here are some relatively small (about 14MiB each) test archives which
    illustrate the problem:
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_V.zip
          http://antinode.info/ftp/info-zip/ms_zip64/test_4g_W.zip
       Correct info, from UnZip 6.00 ("unzip -lv"):
    Archive:  test_4g.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_V.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    Archive:  test_4g_W.zip
     Length   Method    Size  Cmpr    Date    Time   CRC-32   Name
    4362076160  Defl:X 14800839 100% 05-01-2014 15:33 6d8d2ece  test_4g.txt
    (In these reports, "Length" is the uncompressed size; "Size" is the
    compressed size.)
       Incorrect info, from (Windows 7) Windows Explorer:
    Archive        Name          Compressed size   Size
    test_4g.zip    test_4g.txt         14,454 KB   562,951,376,907,238 KB
    test_4g_V.zip  test_4g.txt         14,454 KB   8,796,110,221,518 KB
    test_4g_W.zip  test_4g.txt         14,454 KB   1,464,940,363,777 KB
       Faced with these unrealistic sizes, Windows Explorer refuses to
    extract the member file, for lack of (petabytes of) free disk space.
       The archive test_4g.zip has the following Extra Blocks: universal
    time (type = 0x5455) and 64-bit sizes (type = 0x0001).  test_4g_V.zip
    has: PKWARE VMS (type = 0x000c) and 64-bit sizes (type = 0x0001).
    test_4g_W.zip has: NT security descriptor (type = 0x4453), universal
    time (type = 0x5455), and 64-bit sizes (type = 0x0001).  Obviously,
    Info-ZIP UnZip has no trouble correctly finding the 64-bit size info in
    these archives, but Windows Explorer is clearly confused.  (Note that
    "1,464,940,363,777 KB" translates to 0x0005545500000400 (bytes), and
    "0x00055455" looks exactly like the size, "0x0005" and the type code
    "0x5455" for a "UT" universal time Extra Block, which was present in
    that archive.  This is consistent with the hypothesis that the wrong
    data in the Extra Field are being interpreted as the 64-bit size data.)
       Without being able to see the source code involved here, it's hard to
    know exactly what it's doing wrong, but it does appear that the .zip
    reader used by Windows Explorer is using a very (too) simple-minded
    method to extract 64-bit size data from the Extra Field, causing it to
    get bad data from a properly formed archive.
       I suspect that the engineer involved will have little trouble finding
    and fixing the code which parses an Extra Field to extract the 64-bit
    sizes correctly, but if anyone has any questions, we'd be happy to help.
       For the Info-ZIP (http://info-zip.org/) team,
       Steven Schweda

    > We can't get the source (info-zip) program for test.
       I don't know why you would need to, but yes, you can:
          http://www.info-zip.org/
          ftp://ftp.info-zip.org/pub/infozip/src/
    You can also get pre-built executables for Windows:
          ftp://ftp.info-zip.org/pub/infozip/win32/unz600xn.exe
          ftp://ftp.info-zip.org/pub/infozip/win32/zip300xn.zip
    > In addition, since other zip application runs correctly. Since it should
    > be your software itself issue.
       You seem to misunderstand the situation.  The facts are these:
       1.  For your convenience, I've provided three test archives, each of
    which includes a file larger than 4GiB.  These archives are valid.
       2.  Info-ZIP UnZip (version 6.00 or newer) can process these archives
    correctly.  This is consistent with the fact that these archives are
    valid.
       3.  Programs from other vendors can process these archives correctly.
    I've supplied a screenshot showing one of them (7-Zip) doing so, as you
    requested.  This is consistent with the fact that these archives are
    valid.
       4.  Windows Explorer (on Windows 7) cannot process these archives
    correctly, apparently because it misreads the (Zip64) file size data.
    I've supplied a screenshot of Windows Explorer showing the bad file size
    it gets, and the failure that occurs when one tries to use it to extract
    the file from one of these archives, as you requested.  This is
    consistent with the fact that there's a bug in the .zip reader used by
    Windows Explorer.
       Yes, "other zip application runs correctly."  Info-ZIP UnZip runs
    correctly.  Only Windows Explorer does _not_ run correctly.

  • Database Started In Restricted Mode...

    Dear Experts,
    We are using Oracle 10gR2 on Windows Server 2008. Our database size is approximately 2 TB. Now we are going to set up Data Guard for our primary. We started the database with the changed parameters for Data Guard, created an spfile from the pfile, and started the database with the spfile. But after some time I noticed that the database had started automatically in restricted mode. Data Guard had previously been configured on this database; due to some problem that setup is not working and the logs are not being applied on the standby server. I thought this might be the reason the database keeps going into restricted mode automatically, so I set log_archive_dest_state_2='DEFER'. I have now scanned the whole alert.log file and found that the same restricted-mode situation has been happening since Jan 2010. Please suggest a solution for how I can overcome this problem.
    The alert.log from when I started the database recently:
    ALTER DATABASE CLOSE NORMAL
    ORA-1507 signalled during: ALTER DATABASE CLOSE NORMAL...
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    ARCH: Archival disabled due to shutdown: 1089
    Shutting down archive processes
    Archiving is disabled
    Archive process shutdown avoided: 0 active
    Fri Apr 12 19:12:39 2013
    Starting ORACLE instance (normal)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =97
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    processes = 800
    sessions = 885
    sga_max_size = 10737418240
    __shared_pool_size = 1325400064
    __large_pool_size = 16777216
    __java_pool_size = 16777216
    __streams_pool_size = 0
    sga_target = 8589934592
    control_files = G:\ORADATA\CONTROL01.CTL, G:\ORADATA\CONTROL02.CTL, G:\ORADATA\CONTROL03.CTL
    db_block_size = 8192
    __db_cache_size = 7214202880
    compatible = 10.2.0.3.0
    log_archive_config = DG_CONFIG=(orcl,stdby)
    log_archive_dest_1 = LOCATION=I:\archive_log VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl
    log_archive_dest_2 = SERVICE=stdby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdby
    log_archive_dest_state_1 = enable
    log_archive_max_processes= 30
    log_archive_format = ARC%D_%s_%R.%T
    fal_client = orcl
    fal_server = stdby
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = D:\oracle\product\10.2.0\flash_recovery_area
    db_recovery_file_dest_size= 2147483648
    standby_file_management = AUTO
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    undo_retention = 5400
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=orclXDB)
    local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.12.86)(PORT=1521))
    job_queue_processes = 10
    audit_file_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\ADUMP
    background_dump_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\BDUMP
    user_dump_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP
    core_dump_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\CDUMP
    db_name = orcl
    open_cursors = 500
    pga_aggregate_target = 10737418240
    PMON started with pid=2, OS id=14328
    PSP0 started with pid=3, OS id=15420
    MMAN started with pid=4, OS id=9092
    DBW0 started with pid=5, OS id=12360
    DBW1 started with pid=6, OS id=18112
    LGWR started with pid=7, OS id=18360
    CKPT started with pid=8, OS id=18448
    SMON started with pid=9, OS id=17160
    RECO started with pid=10, OS id=18512
    CJQ0 started with pid=11, OS id=19324
    MMON started with pid=12, OS id=14380
    Fri Apr 12 19:12:40 2013
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=13, OS id=2996
    Fri Apr 12 19:12:40 2013
    starting up 1 shared server(s) ...
    Fri Apr 12 19:12:41 2013
    ALTER DATABASE MOUNT
    Fri Apr 12 19:12:45 2013
    Setting recovery target incarnation to 2
    Fri Apr 12 19:12:45 2013
    Successful mount of redo thread 1, with mount id 1340307369
    Fri Apr 12 19:12:45 2013
    Allocated 15937344 bytes in shared pool for flashback generation buffer
    Starting background process RVWR
    RVWR started with pid=17, OS id=5524
    Fri Apr 12 19:12:45 2013
    Database mounted in Exclusive Mode
    Completed: ALTER DATABASE MOUNT
    Fri Apr 12 19:12:46 2013
    ALTER DATABASE OPEN
    Fri Apr 12 19:12:46 2013
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=18, OS id=11964
    ARC1 started with pid=19, OS id=13472
    ARC2 started with pid=20, OS id=17960
    ARC3 started with pid=21, OS id=18548
    ARC4 started with pid=22, OS id=15660
    ARC5 started with pid=23, OS id=15548
    ARC6 started with pid=24, OS id=14720
    ARC7 started with pid=25, OS id=15780
    ARC8 started with pid=26, OS id=17992
    ARC9 started with pid=27, OS id=17988
    ARCa started with pid=28, OS id=19436
    ARCb started with pid=29, OS id=16104
    ARCc started with pid=30, OS id=6656
    ARCd started with pid=31, OS id=6900
    ARCe started with pid=32, OS id=10568
    ARCf started with pid=33, OS id=16992
    ARCg started with pid=34, OS id=14372
    ARCh started with pid=35, OS id=18084
    ARCi started with pid=36, OS id=5788
    ARCj started with pid=37, OS id=4940
    ARCk started with pid=38, OS id=18816
    ARCl started with pid=39, OS id=14588
    ARCm started with pid=40, OS id=16820
    ARCn started with pid=41, OS id=8068
    ARCo started with pid=42, OS id=18736
    ARCp started with pid=43, OS id=8316
    ARCq started with pid=44, OS id=5952
    ARCr started with pid=45, OS id=16304
    ARCs started with pid=46, OS id=14884
    Fri Apr 12 19:12:46 2013
    ARC0: Archival started
    ARCt started with pid=47, OS id=19408
    Fri Apr 12 19:12:46 2013
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Thread 1 opened at log sequence 117446
    Current log# 10 seq# 117446 mem# 0: G:\ORADATA\REDO10A.LOG
    Current log# 10 seq# 117446 mem# 1: H:\ORADATA\REDO10B.LOG
    Fri Apr 12 19:12:46 2013
    ARC0: Becoming the 'no FAL' ARCH
    Fri Apr 12 19:12:46 2013
    ARC0: Becoming the 'no SRL' ARCH
    Fri Apr 12 19:12:46 2013
    ARC8: Becoming the heartbeat ARCH
    Fri Apr 12 19:12:47 2013
    Successful open of redo thread 1
    Fri Apr 12 19:12:47 2013
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Fri Apr 12 19:12:47 2013
    SMON: enabling cache recovery
    Fri Apr 12 19:12:48 2013
    Successfully onlined Undo Tablespace 1.
    Fri Apr 12 19:12:48 2013
    SMON: enabling tx recovery
    Fri Apr 12 19:12:48 2013
    Database Characterset is WE8MSWIN1252
    Opening with internal Resource Manager plan
    where NUMA PG = 1, CPUs = 16
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=48, OS id=10048
    Fri Apr 12 19:12:51 2013
    Completed: ALTER DATABASE OPEN
    Fri Apr 12 19:12:51 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_mmon_14380.trc:
    ORA-19815: WARNING: db_recovery_file_dest_size of 2147483648 bytes is 99.56% used, and has 9486336 remaining bytes available.
    Fri Apr 12 19:12:51 2013
    You have following choices to free up space from flash recovery area:
    1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
    then consider changing RMAN ARCHIVELOG DELETION POLICY.
    2. Back up files to tertiary device such as tape using RMAN
    BACKUP RECOVERY AREA command.
    3. Add disk space and increase db_recovery_file_dest_size parameter to
    reflect the new space.
    4. Delete unnecessary files using RMAN DELETE command. If an operating
    system command was used to delete files, then use RMAN CROSSCHECK and
    DELETE EXPIRED commands.
    Dump file d:\oracle\product\10.2.0\admin\orcl\bdump\alert_orcl.log
    Fri Apr 12 19:13:26 2013
    ORACLE V10.2.0.4.0 - 64bit Production vsnsta=0
    vsnsql=14 vsnxtr=3
    Windows NT Version V6.0 Service Pack 1
    CPU : 16 - type 8664, 16 Physical Cores
    Process Affinity : 0x0000000000000000
    Memory (Avail/Total): Ph:55999M/65533M, Ph+PgF:183151M/193356M
    Fri Apr 12 19:13:26 2013
    Starting ORACLE instance (restrict)
    LICENSE_MAX_SESSION = 0
    LICENSE_SESSIONS_WARNING = 0
    Picked latch-free SCN scheme 3
    Autotune of undo retention is turned on.
    IMODE=BR
    ILAT =97
    LICENSE_MAX_USERS = 0
    SYS auditing is disabled
    ksdpec: called for event 13740 prior to event group initialization
    Starting up ORACLE RDBMS Version: 10.2.0.4.0.
    System parameters with non-default values:
    processes = 800
    sessions = 885
    sga_max_size = 10737418240
    __shared_pool_size = 1325400064
    __large_pool_size = 16777216
    __java_pool_size = 16777216
    __streams_pool_size = 0
    sga_target = 8589934592
    control_files = G:\ORADATA\CONTROL01.CTL, G:\ORADATA\CONTROL02.CTL, G:\ORADATA\CONTROL03.CTL
    db_block_size = 8192
    __db_cache_size = 7214202880
    compatible = 10.2.0.3.0
    log_archive_config = DG_CONFIG=(orcl,stdby)
    log_archive_dest_1 = LOCATION=I:\archive_log VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=orcl
    log_archive_dest_2 = SERVICE=stdby NOAFFIRM ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdby
    log_archive_dest_state_1 = enable
    log_archive_max_processes= 30
    log_archive_format = ARC%D_%s_%R.%T
    fal_client = orcl
    fal_server = stdby
    db_file_multiblock_read_count= 16
    db_recovery_file_dest = D:\oracle\product\10.2.0\flash_recovery_area
    db_recovery_file_dest_size= 2147483648
    standby_file_management = AUTO
    undo_management = AUTO
    undo_tablespace = UNDOTBS1
    undo_retention = 5400
    remote_login_passwordfile= EXCLUSIVE
    db_domain =
    dispatchers = (PROTOCOL=TCP) (SERVICE=orclXDB)
    local_listener = (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.12.86)(PORT=1521))
    job_queue_processes = 10
    audit_file_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\ADUMP
    background_dump_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\BDUMP
    user_dump_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\UDUMP
    core_dump_dest = D:\ORACLE\PRODUCT\10.2.0\ADMIN\ORCL\CDUMP
    db_name = orcl
    open_cursors = 500
    pga_aggregate_target = 10737418240
    PMON started with pid=2, OS id=18668
    PSP0 started with pid=3, OS id=17668
    MMAN started with pid=4, OS id=16896
    DBW0 started with pid=5, OS id=18204
    DBW1 started with pid=6, OS id=16976
    LGWR started with pid=7, OS id=14552
    CKPT started with pid=8, OS id=4480
    SMON started with pid=9, OS id=10236
    RECO started with pid=10, OS id=11832
    CJQ0 started with pid=11, OS id=18844
    MMON started with pid=12, OS id=15320
    Fri Apr 12 19:13:27 2013
    starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
    MMNL started with pid=13, OS id=14944
    Fri Apr 12 19:13:27 2013
    starting up 1 shared server(s) ...
    Fri Apr 12 19:13:27 2013
    alter database orcl mount exclusive
    Fri Apr 12 19:13:31 2013
    Setting recovery target incarnation to 2
    Fri Apr 12 19:13:31 2013
    Successful mount of redo thread 1, with mount id 1340305111
    Fri Apr 12 19:13:31 2013
    Allocated 15937344 bytes in shared pool for flashback generation buffer
    Starting background process RVWR
    RVWR started with pid=17, OS id=12880
    Fri Apr 12 19:13:31 2013
    Database mounted in Exclusive Mode
    Completed: alter database orcl mount exclusive
    Fri Apr 12 19:13:31 2013
    alter database open
    Fri Apr 12 19:13:31 2013
    Beginning crash recovery of 1 threads
    parallel recovery started with 15 processes
    Fri Apr 12 19:13:31 2013
    Started redo scan
    Fri Apr 12 19:13:31 2013
    Completed redo scan
    735 redo blocks read, 211 data blocks need recovery
    Fri Apr 12 19:13:31 2013
    Started redo application at
    Thread 1: logseq 117446, block 97761
    Fri Apr 12 19:13:31 2013
    Recovery of Online Redo Log: Thread 1 Group 10 Seq 117446 Reading mem 0
    Mem# 0: G:\ORADATA\REDO10A.LOG
    Mem# 1: H:\ORADATA\REDO10B.LOG
    Fri Apr 12 19:13:31 2013
    Completed redo application
    Fri Apr 12 19:13:32 2013
    Completed crash recovery at
    Thread 1: logseq 117446, block 98496, scn 7310601900
    211 data blocks read, 211 data blocks written, 735 redo blocks read
    Fri Apr 12 19:13:33 2013
    LGWR: STARTING ARCH PROCESSES
    ARC0 started with pid=33, OS id=18508
    ARC1 started with pid=34, OS id=14692
    ARC2 started with pid=35, OS id=17728
    ARC3 started with pid=36, OS id=12476
    ARC4 started with pid=37, OS id=19228
    ARC5 started with pid=38, OS id=11552
    ARC6 started with pid=39, OS id=7576
    ARC7 started with pid=40, OS id=6244
    ARC8 started with pid=41, OS id=18468
    ARC9 started with pid=42, OS id=18492
    ARCa started with pid=43, OS id=10352
    ARCb started with pid=44, OS id=15516
    ARCc started with pid=45, OS id=18216
    ARCd started with pid=46, OS id=5660
    ARCe started with pid=47, OS id=18756
    ARCf started with pid=48, OS id=17828
    ARCg started with pid=49, OS id=8696
    ARCh started with pid=50, OS id=17736
    ARCi started with pid=51, OS id=16736
    ARCj started with pid=52, OS id=13208
    ARCk started with pid=53, OS id=12012
    ARCl started with pid=54, OS id=19180
    ARCm started with pid=55, OS id=16632
    ARCn started with pid=56, OS id=17588
    ARCo started with pid=57, OS id=11948
    ARCp started with pid=58, OS id=18416
    ARCq started with pid=59, OS id=17888
    ARCr started with pid=60, OS id=2144
    ARCs started with pid=61, OS id=14392
    Fri Apr 12 19:13:33 2013
    ARC0: Archival started
    ARCt started with pid=62, OS id=19264
    Fri Apr 12 19:13:33 2013
    ARC1: Archival started
    ARC2: Archival started
    ARC3: Archival started
    ARC4: Archival started
    ARC5: Archival started
    ARC6: Archival started
    ARC7: Archival started
    ARC8: Archival started
    ARC9: Archival started
    ARCa: Archival started
    ARCb: Archival started
    ARCc: Archival started
    ARCd: Archival started
    ARCe: Archival started
    ARCf: Archival started
    ARCg: Archival started
    ARCh: Archival started
    ARCi: Archival started
    ARCj: Archival started
    ARCk: Archival started
    ARCl: Archival started
    ARCm: Archival started
    ARCn: Archival started
    ARCo: Archival started
    ARCp: Archival started
    ARCq: Archival started
    ARCr: Archival started
    ARCs: Archival started
    ARCt: Archival started
    LGWR: STARTING ARCH PROCESSES COMPLETE
    Fri Apr 12 19:13:33 2013
    Thread 1 advanced to log sequence 117447 (thread open)
    Thread 1 opened at log sequence 117447
    Current log# 8 seq# 117447 mem# 0: G:\ORADATA\REDO08A.LOG
    Current log# 8 seq# 117447 mem# 1: H:\ORADATA\REDO08B.LOG
    Successful open of redo thread 1
    Fri Apr 12 19:13:33 2013
    ARC3: Becoming the 'no FAL' ARCH
    Fri Apr 12 19:13:33 2013
    MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
    Fri Apr 12 19:13:33 2013
    ARC3: Becoming the 'no SRL' ARCH
    Fri Apr 12 19:13:33 2013
    ARCr: Becoming the heartbeat ARCH
    Fri Apr 12 19:13:33 2013
    SMON: enabling cache recovery
    Fri Apr 12 19:13:33 2013
    Successfully onlined Undo Tablespace 1.
    Fri Apr 12 19:13:33 2013
    SMON: enabling tx recovery
    Fri Apr 12 19:13:33 2013
    Database Characterset is WE8MSWIN1252
    Opening with internal Resource Manager plan
    where NUMA PG = 1, CPUs = 16
    replication_dependency_tracking turned off (no async multimaster replication found)
    Starting background process QMNC
    QMNC started with pid=63, OS id=14896
    Fri Apr 12 19:13:34 2013
    Completed: alter database open
    Fri Apr 12 19:13:34 2013
    ALTER SYSTEM disable restricted session;
    Fri Apr 12 19:13:35 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_mmon_15320.trc:
    ORA-19815: WARNING: db_recovery_file_dest_size of 2147483648 bytes is 99.56% used, and has 9486336 remaining bytes available.
    Fri Apr 12 19:13:35 2013
    You have following choices to free up space from flash recovery area:
    1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
    then consider changing RMAN ARCHIVELOG DELETION POLICY.
    2. Back up files to tertiary device such as tape using RMAN
    BACKUP RECOVERY AREA command.
    3. Add disk space and increase db_recovery_file_dest_size parameter to
    reflect the new space.
    4. Delete unnecessary files using RMAN DELETE command. If an operating
    system command was used to delete files, then use RMAN CROSSCHECK and
    DELETE EXPIRED commands.
    Fri Apr 12 19:13:36 2013
    Error 12514 received logging on to the standby
    Fri Apr 12 19:13:36 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_arc0_18508.trc:
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    FAL[server, ARC0]: Error 12514 creating remote archivelog file 'stdby'
    FAL[server, ARC0]: FAL archive failed, see trace file.
    Fri Apr 12 19:13:36 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_arc0_18508.trc:
    ORA-16055: FAL request rejected
    ARCH: FAL archive failed. Archiver continuing
    Fri Apr 12 19:19:33 2013
    Error 12514 received logging on to the standby
    Fri Apr 12 19:19:33 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_arcr_2144.trc:
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    PING[ARCr]: Heartbeat failed to connect to standby 'stdby'. Error is 12514.
    Fri Apr 12 19:22:52 2013
    PING[ARCr]: Heartbeat failed to connect to standby 'stdby'. Error is 12514.
    Fri Apr 12 20:09:34 2013
    Error 12514 received logging on to the standby
    Fri Apr 12 20:09:34 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_arcr_2144.trc:
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    PING[ARCr]: Heartbeat failed to connect to standby 'stdby'. Error is 12514.
    Fri Apr 12 20:13:50 2013
    Sat Apr 13 01:54:52 2013
    Error 12514 received logging on to the standby
    Sat Apr 13 01:54:52 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_arcr_2144.trc:
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    PING[ARCr]: Heartbeat failed to connect to standby 'stdby'. Error is 12514.
    Sat Apr 13 12:35:12 2013
    Error 12514 received logging on to the standby
    Sat Apr 13 12:35:12 2013
    Errors in file d:\oracle\product\10.2.0\admin\orcl\bdump\orcl_arcr_2144.trc:
    ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
    PING[ARCr]: Heartbeat failed to connect to standby 'stdby'. Error is 12514.
    Sat Apr 13 12:37:01 2013
    ALTER SYSTEM SET log_archive_dest_state_2='DEFER' SCOPE=MEMORY;
    Sat Apr 13 12:38:23 2013
    Unable to allocate flashback log of 1946 blocks from
    current recovery area of size 2147483648 bytes.
    Current Flashback database retention is less than target
    because of insufficient space in the flash recovery area.
    Flashback will continue to use available space, but the
    size of the flash recovery area must be increased to meet
    the Flashback database retention target
    Use ALTER SYSTEM SET db_recovery_file_dest_size command
    to add space. DO NOT manually remove flashback log files
    to create space.
    Sat Apr 13 12:38:41 2013
    ALTER SYSTEM SET db_recovery_file_dest_size='5G' SCOPE=MEMORY;
    Sat Apr 13 12:38:41 2013
    db_recovery_file_dest_size of 5120 MB is 39.82% used. This is a
    user-specified limit on the amount of space that will be used by this
    database for recovery-related files, and does not reflect the amount of
    space available in the underlying filesystem or ASM diskgroup.
    Sat Apr 13 15:42:21 2013
    ORA-01555 caused by SQL statement below (SQL ID: 2zb70pwz9p06q, Query Duration=1365847941 sec, SCN: 0x0001.b3c0f735):
    Sat Apr 13 15:42:21 2013
    SELECT * FROM RELATIONAL("ORISSA_TRANSACTION"."SELLER_BIOMETRICS_TBL")
    Sat Apr 13 15:48:37 2013
    Thread 1 advanced to log sequence 117452 (LGWR switch)
    Current log# 10 seq# 117452 mem# 0: G:\ORADATA\REDO10A.LOG
    Current log# 10 seq# 117452 mem# 1: H:\ORADATA\REDO10B.LOG
    Sat Apr 13 20:31:00 2013
    The value (30) of MAXTRANS parameter ignored.
    kupprdp: master process DM00 started with pid=31, OS id=16196
    to execute - SYS.KUPM$MCP.MAIN('SYS_EXPORT_SCHEMA_30', 'ORISSA_TRANSACTION', 'KUPC$C_1_20130413203100', 'KUPC$S_1_20130413203100', 0);
    kupprdp: worker process DW01 started with worker id=1, pid=66, OS id=16200
    to execute - SYS.KUPW$WORKER.MAIN('SYS_EXPORT_SCHEMA_30', 'ORISSA_TRANSACTION');
    Sun Apr 14 00:00:11 2013
    Thread 1 advanced to log sequence 117453 (LGWR switch)
    Current log# 8 seq# 117453 mem# 0: G:\ORADATA\REDO08A.LOG
    Current log# 8 seq# 117453 mem# 1: H:\ORADATA\REDO08B.LOG
    Sun Apr 14 00:00:52 2013
    Thread 1 advanced to log sequence 117454 (LGWR switch)
    Current log# 9 seq# 117454 mem# 0: G:\ORADATA\REDO09A.LOG
    Current log# 9 seq# 117454 mem# 1: H:\ORADATA\REDO09B.LOG

    As you show in the alert.log:
    PROBLEM
    Unable to allocate flashback log of 1946 blocks from
    current recovery area of size 2147483648 bytes.
    Current Flashback database retention is less than target
    because of insufficient space in the flash recovery area.
    Flashback will continue to use available space, but the
    size of the flash recovery area must be increased to meet
    the Flashback database retention target
    Use ALTER SYSTEM SET db_recovery_file_dest_size command
    to add space. DO NOT manually remove flashback log files
    to create space.
    SOLUTION
    Fri Apr 12 19:12:51 2013
    You have following choices to free up space from flash recovery area:
    1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
    then consider changing RMAN ARCHIVELOG DELETION POLICY.
    2. Back up files to tertiary device such as tape using RMAN
    BACKUP RECOVERY AREA command.
    3. Add disk space and increase db_recovery_file_dest_size parameter to
    reflect the new space.
    4. Delete unnecessary files using RMAN DELETE command. If an operating
    system command was used to delete files, then use RMAN CROSSCHECK and
    DELETE EXPIRED commands.
    Choose one and try again. Post if you have more issues.
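    As a concrete example of the suggestions above, a minimal sketch (the 20G figure is only an example, not a recommendation): check how full the recovery area is, then either grow it or let RMAN reclaim archived logs the standby has already applied.
    -- How full is the flash recovery area, and how much is reclaimable?
    select space_limit, space_used, space_reclaimable
      from v$recovery_file_dest;
    select file_type, percent_space_used, percent_space_reclaimable
      from v$flash_recovery_area_usage;
    -- Option 3: give the recovery area more room (value is an example).
    alter system set db_recovery_file_dest_size = 20g scope=both;
    -- Option 1, from RMAN: allow logs already applied on the standby to be reclaimed.
    -- RMAN> configure archivelog deletion policy to applied on standby;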

  • How do I stop Firefox from archiving conversations after I send a message to someone?

    When someone sends me an email and I reply, the conversation automatically goes to the archive. I can retrieve it by clicking on the 'undo archive' link, but I want it to stop archiving things in the first place. It seems to do it no matter who the conversation is with, meaning it's not restricted to personal emails from Gmail or Yahoo or any particular company I'm doing business with. It's been happening for a few months. I do not remember changing the archive setting and I cannot find any way to access the archive setting. Thanks for your help.

    ''carroled [[#question-1048761|said]]''
    <blockquote>
    When someone sends me an email and I reply, the conversation automatically goes to the archive. I can retrieve it by clicking on the 'undo archive' link, but I want it to stop archiving things in the first place. It seems to do it no matter who the conversation is with, meaning it's not restricted to personal emails from Gmail or Yahoo or any particular company I'm doing business with. It's been happening for a few months. I do not remember changing the archive setting and I cannot find any way to access the archive setting. Thanks for your help.
    </blockquote>
    Thanks, Phillipp. I found the info I need to fix the problem.

  • How to restrict lync user to change their presence

    Is it possible to restrict users from setting their presence manually in the Lync client?
    Some users in the organization manually change their presence, which sometimes causes a lot of panic.

    Yes, you can get it from the back end database. Take a look here:
    http://blogs.technet.com/b/dodeitte/archive/2011/05/11/how-to-get-the-last-time-a-user-registered-with-a-front-end.aspx
    and
    http://gallery.technet.microsoft.com/office/List-last-time-a-User-b905cf5f
    If this helped you please click "Vote As Helpful" if it answered your question please click "Mark As Answer"
    Georg Thomas | Lync MVP
    Blog www.lynced.com.au | Twitter
    @georgathomas
    Lync Edge Port Check (Beta)
    This forum post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
