What order are archive logs restored in when RMAN 'recover database' is issued?

OK, you have a run block that has restored your level-0 RMAN backup.
Your base datafiles are down on disk.
You are about to start point-in-time recovery, let's say until this morning at 07:00.
run {
  set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
  allocate channel d1 type disk;
  allocate channel d2 type disk;
  allocate channel d3 type disk;
  allocate channel d4 type disk;
  recover database;
}
So the above runs: it analyses the earliest SCN required for recovery, checks for incremental backups (none here), works out the archive log range
required, and starts to restore the archive logs. All as expected, and it works.
My question: is there a particular order in which RMAN restores the archive logs, and is the restore / recover process carried out exactly as written in the run block?
i.e. will all archive logs required by the run block be restored first and the database then rolled forward, or is there something in RMAN that says: restore these archive logs, now roll forward, restore some more?
When we were doing this, the order in which the archive logs came back seemed random, though obviously constrained by the run block. Is this an area we need to tune to make recoveries faster in situations where incrementals are not available?
Any inputs on experience welcome. I am now drilling into the documentation for any references there.
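One thing I have turned up while drilling: RMAN can be asked up front which backups a recovery would need, without restoring anything, via the PREVIEW option of RESTORE. A sketch only, using the same until time as above; I have not yet compared its listing order against an actual restore:

run {
  set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
  # report, without restoring, the backup sets and archive logs this recovery would use
  restore database preview summary;
}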
Thanks

Hi there, thanks for the response. I checked this, and here are the numbers / time stamps from an example.
This is from interpreting the output of the 'list backup of archivelog' commands.
Backupset = 122672
==============
Archive log sequence 120688, low time: 25th May 15:53:07, next time: 25th May 15:57:54
Piece1 pieceNumber=123368 9th June 04:10:38 <-- catalogued by us.
Piece2 pieceNumber=122673 25th May 16:05:18 <-- original backup on production.
Backupset = 122677
==============
Archive log sequence 120683, low time: 25th May 15:27:50, next time: 25th May 15:32:24 <-- lower sequence number restored after the above.
Piece1 pieceNumber=123372 9th June 04:11:34 <-- catalogued by us.
Piece2 pieceNumber=122678 25th May 16:08:45 <-- original backup on production.
So the above would show that the catalog command can influence the piece numbering, and therefore the restore order, if piece number is the key as you say. I will need to review why they were backed up in a different order on production; I would have expected RMAN to use the backup set numbering and then the piece number within the set / availability.
Question: you mention archive logs are restored, applied and deleted in batches if the volume of archive logs is large enough to be spread over multiple backup sets. What determines the batches in terms of size / number?
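(For what it's worth, the RECOVER command syntax seems to expose a size-based control over exactly this batching; a sketch from the syntax diagrams, not something we have benchmarked:)

run {
  set until time "TO_DATE('2010/06/08_07:00:00','YYYY/MM/DD_HH24:MI:SS')";
  # restore archive logs in batches of at most 10G, apply them, delete them, then fetch the next batch
  recover database delete archivelog maxsize 10g;
}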
Thanks for the inputs. That answers some questions.

Similar Messages

  • Finding the order of archive logs in RAC to NON-RAC manual cloning

    Hi,
    I am in the process of automating a RAC to non-RAC clone using the traditional method [ not using RMAN ].
    Here are the steps:
    On Source:
    Record the sequence number and thread of the instance
    put the database in begin backup mode
    copy the files
    disable the backup mode
    record the sequence number and thread of the instance
    prepare the order of the archive logs which will be needed for recovery ==> Here is where I need help
    On Target:
    Restore the backup
    Issue recovery using the order of archive logs identified ==> Failing as order is not correct.
    I am finding the archive logs based on FIRST_TIME# and preparing the order from that, which is not correct. I think I should be using SCN, but I am not sure what the criteria should be.
    Could you please let me know on what basis we can find the order of the archive logs that will be asked for during the recovery?
    Your quick response is appreciated. Thanks in advance for your help.
    Thanks
    Suneel

    Yes, when we execute 'recover database using backup controlfile until cancel', it will prompt for the redo log sequences.
    But as I mentioned before, I am trying to script everything and cannot manually enter the sequence numbers required. I need to know which archives and threads it is going to ask for [ and in what order ] so that the recovery command can be scripted in shell.
    Please let me know if there is a way to find the order of the archive logs which need to be applied during recovery [ if this were non-RAC, we could simply go by sequence number, but since it is RAC, I need to know the thread number as well ].
    Thanks
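    Media recovery merges the redo threads by SCN, so one way to pre-compute the apply order is to sort the candidate logs by FIRST_CHANGE# rather than FIRST_TIME#. A sketch only; the two bind variables are placeholders for the SCNs your script would record at begin/end backup time:

    select thread#, sequence#, name, first_change#
    from v$archived_log
    where next_change# > :begin_backup_scn      -- redo generated during or after the backup
      and first_change# <= :recovery_stop_scn
    order by first_change#, thread#;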

  • What order are find results in?

    When I use Spotlight and then select the "show in finder" option, what order are the results supposed to be in? Alphabetical, you'd think, if "name" was selected. But there seem to be many sublists in a large set of results, each restarting some sort of order every so many lines. And why wouldn't clicking "name" again simply re-sort these? It seems the only sort that affects the entire list in a predictable manner is sort by "type". Have I inadvertently selected something wrong, or am I missing something? This makes Spotlight much less useful to me.
    Anyone notice this?
    Mark


  • What locks are required on a table when Oracle is processing an UPDATE

    What locks are required on a table when Oracle is processing an
    UPDATE statement?

    Welcome to the forum!
    Whenever you post provide your 4 digit Oracle version.
    >
    What locks are required on a table when Oracle is processing an
    UPDATE statement?
    >
    Here is a relevant quote from the 'Lock Modes' section of the doc that Ed Stevens provided
    >
    Assume that a transaction uses a SELECT ... FOR UPDATE statement to select a single table row. The transaction acquires an exclusive row lock and a row share table lock. The row lock allows other sessions to modify any rows other than the locked row, while the table lock prevents sessions from altering the structure of the table. Thus, the database permits as many statements as possible to execute.
    >
    The above describes the locks taken when you, the user, tell Oracle to lock the row. A plain UPDATE statement acquires the same kinds of locks on its own: an exclusive (TX) lock on each modified row, plus a row exclusive (SX) table-level lock that still allows other DML on other rows.
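    You can watch these locks yourself; a sketch using a hypothetical demo table (emp/empno are placeholders):

    -- session 1: modify one row and leave the transaction open
    update emp set sal = sal + 100 where empno = 7369;

    -- session 2: TX is the row-transaction lock; the TM (table) lock shows lmode 3 = row exclusive (SX)
    select sid, type, id1, lmode
    from v$lock
    where type in ('TM', 'TX');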

  • On Provision or reProvision, what order are the resources proccessed in?

    Howdy,
    I am fighting a race condition. Occasionally, when an end user claims their computing accounts, an active sync thread is undoing some of the claiming process. I have done a lot of work to eliminate race condition potential, and am now rather stumped. However, I have come to wonder if the Provision workflow service is the culprit. The toUpdate list has LDAP first, a few DB resources, then Lighthouse. It is the LDAP active sync process that is stepping on the toes of the account claiming. Does this list get processed linearly, in order? If so, this might explain what is happening. However, I cannot find any information on the order of execution....
    Does anyone know?
    Thanks,
    Jim


  • Rebuilding config server.  what/how to archive and restore?

    Greetings,
    I've recently realized that I've got to either re-jumpstart a server (because the root disk isn't mirrored, and because some other packages didn't get installed), or manually mirror the root disk (using SVM).
    What I'm after here is info on what I need to archive, how to do that, and how to restore.
    Details on the host: it's my primary master for my MMR setup, and is also the config server for when I set up other DS machines.
    I'm guessing that I've got to archive off o=NetscapeRoot, and possibly the config stuff (but not necessarily the suffix, since it should be able to be initialized from one of my other masters, right?).
    What tools are best used for this? Once the backup is made and the server is rebuilt, is it as simple as just restoring the archived trees? What tools are used for that?
    TIA,
    Patrick

    I should think:
    1) "tar" of everything under $LDAP_SERVER_ROOT, when both admin and ldap server are shutdown, make sure you check its content via "tar tvf ..." that all alias/*.db and all files under "db" are there.
    2) db2ldif for both NetscapeRoot and all userRoot(s) when both are down "ideally".
    I have a cron job for this "db2ldif_backup.sh" which runs daily at 02:00 when both are "up".
    # db2ldif_backup.sh
    # Execute these for iPlanet Directory Server
    IDS5_PATH=/var/Sun/mps
    BASEDN="dc=example,dc=com"
    LDIF_FILE=/home/ldap/full_backup.ldif
    NETSCAPEROOT_LDIF_FILE=/home/ldap/NetscapeRoot.ldif
    # only run while ns-slapd is up
    if [ -n "`ps -ef | grep 'ns-slapd' | grep -v grep`" ]
    then
        YYYY=`date +'%Y'`
        cd $IDS5_PATH/slapd-`hostname`/ldif
        rm -f $YYYY*.ldif
        # export the user suffix, then keep a stable copy
        ../db2ldif -n UserRoot
        cp $YYYY*.ldif $LDIF_FILE
        echo "Full user data backup written to $LDIF_FILE"
        rm -f $YYYY*.ldif
        # export the config suffix (o=NetscapeRoot)
        ../db2ldif -n NetscapeRoot
        cp $YYYY*.ldif $NETSCAPEROOT_LDIF_FILE
        echo "Netscape Config data backup written to $NETSCAPEROOT_LDIF_FILE"
        chmod 600 $NETSCAPEROOT_LDIF_FILE
    fi
    Note: saveconfig and restoreconfig are doing the same for NetscapeRoot.
    3) db2bak
    Gary

  • In what order are results from gather_database_stats 'LIST AUTO' returned?

    11.2.0.1.0. The results from the following seem to have some order to them (groupings of objects returned alphabetically). I didn't see anywhere in the docs an indication of what gets returned in what order (empty stats, stale stats). I looked at the table stats for one near the top of the list, and it looks like the table had stats gathered recently, so I'm confused. I ran this from SQL*Plus:
    set serveroutput on size unlimited;
    DECLARE
      ObjList dbms_stats.ObjectTab;
    BEGIN
      dbms_stats.gather_database_stats(objlist => ObjList, options => 'LIST AUTO');
      IF ObjList.COUNT > 0 THEN  -- guard: FIRST..LAST raises an error on an empty list
        FOR i IN ObjList.FIRST .. ObjList.LAST
        LOOP
          dbms_output.put_line(ObjList(i).ownname || '.' || ObjList(i).ObjName
            || ' ' || ObjList(i).ObjType || ' ' || ObjList(i).partname);
        END LOOP;
      END IF;
    END;
    /

    Enlightened answer: "So bug off and work with what you get"
    The report was run on a database that doesn't have any auto stats collection occurring. There are 4000 items returned in the list. One of the items looks OK when you look at dba_tables (row count correct and last_analyzed recent). I'm just trying to figure out why this table was included in the list. It might be that the way stats were collected for it isn't updating some mechanism that gather_database_stats utilizes. From what I can tell, the order is 1) tables with no stats, 2) indexes with no stats, 3) stale tables.
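    One way to cross-check a table that looks freshly analyzed is the STALE_STATS flag, which is driven by the same table-monitoring data that LIST AUTO appears to consult (the schema name below is a placeholder):

    select owner, table_name, stale_stats, last_analyzed
    from dba_tab_statistics
    where owner = 'SCOTT'          -- placeholder schema
    order by stale_stats, table_name;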

What order are Nodes Synchronized in within the same NodeCollection?

    Basically, is there a guarantee of synchronization order within a NodeCollection? I assume that there is and that it is based on the order in which the nodes were created. For instance, in the SimpleChatModel the node creation order is:
    HISTORY_NODE_EVERYONE
    HISTORY_NODE_PARTICIPANTS
    HISTORY_NODE_HOSTS
    TYPING_NODE_NAME
    When synchronizing, will I receive all of the messages in HISTORY_NODE_EVERYONE before I receive *ANY* messages for HISTORY_NODE_PARTICIPANTS? Will HISTORY_NODE_PARTICIPANTS send all of its messages before HISTORY_NODE_HOSTS sends its messages? When I say "sent" I don't so much mean the actual network traffic (I don't care about the actual packet order of arrival; that is your job, AFCS developers); what I really care about is whether itemReceive for HISTORY_NODE_PARTICIPANTS can get executed while HISTORY_NODE_EVERYONE is still receiving messages.
    To take this a step higher in the hierarchy, to the NodeCollection level: does a NodeCollection need to be fully synchronized before another NodeCollection (that called subscribe after the first one) can get any itemReceive messages? If NodeCollection A subscribes before NodeCollection B, is A guaranteed to get synchronized before B? Will A finish synchronizing before B gets its first message?
    I am starting to run into an issue where I need to make sure certain nodes are synchronized before I start receiving messages in other nodes. I can of course check for the dependencies on reception and simply defer execution until the dependency is synced, but this will create a lot more code failure points than I really wanted. I was hoping for a simpler way of doing this.
                             Ves

    Hi Ves,
      The AFCS dev guide mentions this in passing (3.1.5.1) but doesn't go into quite enough detail - we'll make a note to improve this for the next go-round.
      Essentially, all items for a given CollectionNode (from all of its nodes) are synched in the order they were published, irrespective of which node they were published to. You can actually put a breakpoint down in MessageManager.receiveItems to watch this: all items for the entire collection are received in one big blob, then pushed onto an array, then sorted according to their timestamps and order, then sent as itemReceive up to the CollectionNode.
      So yes, you could easily end up with an itemReceive for EVERYONE, followed by an itemReceive for HOSTS, then another one for EVERYONE. We make sure that they essentially come in the same order they did for people who were actually in the room at the time. For example, in the chat pod, if I asked a question on the HOSTS node, which they answered on the public EVERYONE node, I'd want the items to be received in the right order, or the collective history wouldn't make sense.
    For separate CollectionNodes, synchronization is kept separate (one at a time), since they're the result of individual subscribe() requests. The advice here is that if you have a series of items with some dependency on interleaved order of arrival, you should make sure they're part of the same CollectionNode (which makes sense). If you need A to finish synching before B begins, put those items on different CollectionNodes and call their subscribe() methods in that order. So you can have it either way, depending on how you want to set it up.
    hope that helps
       nigel

  • In what order are msgs consumed with single sess & multiple async consumers

    Sun MQ will serialize delivery of messages when you have a single session and multiple asynchronous consumers created from that session. I am trying to find out the order in which the messages will be delivered by the session to the message listeners.
    So if I have 10 consumers on 10 destinations, each with 100 messages, in what order will messages be passed to the message listeners?
    Thanks
    Aspi Engineer
    Putnam Investments

    In our testing, we have found that the consumerFlowLimit implementation will guarantee that you'll receive your messages out of order. What happens is that messages are put in the consumer buffer, where they are no longer available for round-robin delivery to the other consumers (even if those are idle). I'd recommend setting consumerFlowLimit=1, which minimizes the impact of this, but every other message will still be out of order (i.e. consumer #1 grabs messages 1 and 2, consumer #2 grabs messages 3 and 4, etc., so they will process 1 and 3 together, and 2 and 4 together). I consider this a very serious bug in OpenMQ and wish we could disable the consumer flow buffer altogether to get guaranteed processing order from the queue.
    Edited by: Pancetta on Jun 24, 2010 9:33 AM

  • Best way of deleting archive logs on standby when backup taken on primary

    Hello,
    I have an 11gR2 Data Guard configuration without the broker.
    The main problem in my particular situation arises from the FRA on the primary being much larger than on the standby (90GB vs 30GB), and in some situations I'm constrained by FRA space on the standby.... I'm going to align the FRAs, but in the meantime....
    can anyone confirm that with a configuration as in the subject, in maximum performance mode and without the broker, I should consider some sort of "external" mechanism for archive log deletion on the physical standby?
    In fact I'm setting on primary
    - retention policy = redundancy 1
    - archivelog deletion = applied on standby (without using any "backed up...." option)
    Considering the standby it seems from docs that:
    - retention policy always wins over deletion policy (so no archive eligible for deletion is deleted if retention is not satisfied)
    - I can set on standby archivelog deletion = applied on standby (without using any "backed up...." option)
    But it is not clear to me what effect this config has when the database is itself a standby with no other cascaded standby databases.
    The manual says this has to be met:
    1) The archived redo log files have been applied to the required standby databases.
    ---> what does this mean when the DB is itself a standby: met or not met, given that I have no other cascaded standby dependent on this one?
    2) The logs are not needed by the BACKED UP ... TIMES TO DEVICE TYPE deletion policy. If the BACKED UP policy is not set, then this condition is always met.
    ---> I should be ok
    So the last question for the standby is:
    - what to set the retention policy to?
    I have not understood whether setting it to "NONE" means no retention at all, or retention forever....
    Sometimes I read that there is no policy; in other examples, that files will be retained forever (because no file is ever considered obsolete).... or at least that is how I understood it...
    I would like a sort of policy on the standby where the archive logs can be deleted as soon as they have been applied during the continuous media recovery phase...
    It seems I can't find a self-contained set of configurations across the Data Guard setup....
    Thanks in advance.
    Gianluca

    I agree with you.... It is a temporary contingency....
    I already stressed the fact that sizes between sites must be pretty much identical...
    We are moving towards a production Data Guard setup, migrating from a Windows config (without it) to a Linux-based config....
    but this contingency made me think about these considerations regarding policies and automatic Data Guard operations.
    I also stressed equal growth capability for both sites' file systems, including data tablespaces, so that, for example in case of high fragmentation, we avoid a limit situation where:
    1) a full file system for DR datafiles causes the automatic media recovery to abend
    2) this makes the FRA grow on DR, because archive logs are shipped but no longer applied
    3) the FRA reaches 100% on DR and the primary can no longer ship
    4) archive logs cannot be automatically deleted on the primary because they have not been shipped
    5) the FRA fills up on the primary and the DB hangs....
    Thanks anyway for input
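    For anyone landing here later: the documented standby-side pair that comes closest to "delete as soon as applied" seems to be the one below (a sketch, 11gR2 RMAN syntax, not verified in this exact cascade-free configuration). TO NONE disables the retention policy entirely (RMAN marks nothing obsolete) rather than retaining forever; the deletion policy is then what makes applied logs in the standby FRA eligible for automatic deletion when space is needed.

    # run against the physical standby
    CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
    CONFIGURE RETENTION POLICY TO NONE;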

  • Are archived logs useful for crashed instance?

    When the machine powers off and is started again, the database will perform instance recovery. Will this kind of recovery use archived logs? I see that if the database was running in noarchivelog mode before, then instance recovery does not use archived logs (there are none).

    Suppose that Oracle commits, and the committed data are already written to the log file but not yet written back to the datafile, and the log file has been archived; at this moment the OS crashes. When Oracle starts up again, will it use the online redo, and some of the latest archived logs, to recover the committed data? And if the archived log is accidentally lost, how can Oracle recover? Or will Oracle not reuse an online redo log until its data have been written back to the datafile, under archivelog mode?
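    The last guess is the key point: Oracle will not reuse (overwrite) an online redo log group until the checkpoint covering its changes has completed, so crash/instance recovery always finds everything it needs in the online logs and never reads archived logs. A quick way to see how much online redo a crash recovery would currently have to replay:

    -- how much work instance recovery would do right now
    select recovery_estimated_ios,
           actual_redo_blks,    -- redo blocks crash recovery would need to read
           estimated_mttr       -- estimated recovery time, in seconds
    from v$instance_recovery;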

  • What causes areas to be grayed out when viewing in Acrobat 9?

    What causes text to be grayed out when viewing in Acrobat 9?
    Documents which were fine in Acrobat 9 suddenly have areas blacked or grayed out. The same thing happens if they are opened in Acrobat Reader. Is there a setting that causes this?

    It could be that the text is an image.
    Try going to Edit > Preferences > Page Display > Page Content and Information and check the box that says "Display large images".

  • Using LDAP Query in Active Directory to see what users are still logged ?

    Any suggestions for an LDAP query that I can use in AD to see who is still logged into the network?
    It would be great to distinguish those logged in behind a screen lock (meaning they aren't really at their PC) from users actually using their PCs.
    Thanks in advance!

    I recently posted a framework for checking all machines to see who is logged into them. You can take that and adjust it as you need.
    https://social.technet.microsoft.com/Forums/en-US/fb2ef90a-ba15-41bf-8e6c-95d32256225b/how-do-i-run-this-query-from-a-text-file-list?forum=ITCG

  • Control what songs are on my Ipod touch when I'm out of WiFi coverage

    My iTunes library is much larger than my 4th-generation iPod touch. With iTunes Match, can I control what songs are on my iPod touch when I will be out of WiFi coverage?
    Thanks

    When I leave my house, where I have WiFi coverage, I want to select what songs I can play while out of coverage. Because I have only 8 GB on my iPod I can't download everything for offline use. I want to be able to choose what I have available when I'm offline. How can I do that?

  • Archive log shipping error in Logical standby database

    Hi,
    We have a primary and logical standby database configuration, both RAC, in production. Both databases are on 10.2.0.4.
    Every day around 2 AM we have an issue with some archive logs not being transferred to the logical standby server. I have good exposure to logical standby / LogMiner / GoldenGate and can say the fal_server / fal_client configuration is correct.
    Can you guys please give some input on what can cause this?
    We take an RMAN backup of both databases around 1:30 using the control file instead of a recovery catalog, and I suspect this takes a lock on the control file and causes the issue.
    I would be glad if you could help me out with this.
    Thanks a lot!
    Pradeep
    Below is the configuration of archive/fal on both databases.
    PIP1REP2 is the logical database (instance 2) where the apply process is running/configured.
    Primary server configuration on (PIP11) (same on the other instances):
    SQL> sho parameter fal
    NAME                          TYPE     VALUE
    fal_client                    string   PIP1REP2
    fal_server                    string   PIP11
    SQL> sho parameter archive
    NAME                          TYPE     VALUE
    log_archive_dest_1            string   LOCATION=+ARCH01
    log_archive_dest_2            string   SERVICE=pip1rep2 lgwr async
    log_archive_dest_state_1      string   enable
    log_archive_dest_state_2      string   ENABLE
    log_archive_local_first       boolean  TRUE
    log_archive_max_processes     integer  10
    log_archive_min_succeed_dest  integer  1
    log_archive_start             boolean  FALSE
    log_archive_trace             integer  0
    remote_archive_enable         string   true
    Logical server configuration
    SQL> sho parameter fal
    NAME                          TYPE     VALUE
    fal_client                    string   PIP1REP
    fal_server                    string   pip11, pip12, pip13, pip14, pip15
    SQL> sho parameter archive
    NAME                          TYPE     VALUE
    log_archive_dest_1            string   LOCATION=+ARCH01/PIP1REP VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=pip1rep
    log_archive_dest_2            string   LOCATION=+ARCH01/PIP1REP/ARCH_DEST2_STANDBY_LOGS VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=pip1rep
    log_archive_dest_state_1      string   enable
    log_archive_dest_state_2      string   enable
    log_archive_local_first       boolean  TRUE
    log_archive_max_processes     integer  20
    log_archive_min_succeed_dest  integer  1
    log_archive_trace             integer  0
    remote_archive_enable         string   true
    standby_archive_dest          string   +ARCH01

    >
    Do you think creating a recovery catalog will resolve it?
    >
    Yes, I do. (But I do not know for sure.)
    If it helps: when I got to this part, I decided to back up either the primary or the standby, but not both.
    I decided on primary only, and documented how to recreate my standby without downtime.
    I hope this helps.
    regards.
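    A diagnostic that may also help here: the logical standby keeps its own registry of received logs, so you can check whether the 2 AM logs were never registered or merely arrived late (run on the logical standby):

    select thread#, sequence#, first_time, next_time, applied
    from dba_logstdby_log
    order by first_time;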
