Does the Linux filesystem undermine Oracle's recovery mechanism?

I've been an Oracle DBA for 10 years and have been using Oracle
on Linux for several months, but am not a Linux expert by any
means. A client told me something about the filesystem Linux uses
(ext2?) that I find hard to believe. Can anyone shed some light on
this?
The claim is that the Linux filesystem does not implement
synchronous writes correctly. The implication is that when a user
commits a transaction and Oracle flushes the redo log to disk,
Oracle may think the redo information has been successfully
written when in fact it's still sitting in a buffer somewhere
waiting to write. If a drive failure occurs, the redo might never
get written, but meanwhile the user has already been informed
their transaction has been committed.
Oracle does not flush data block buffers to disk when you commit
a transaction. Only the redo is flushed. If the instance were to
fail, Oracle reads the redo when you restart the instance and
performs instance recovery automatically.
If the Linux filesystem does not implement synchronous writes
correctly, then the recovery mechanisms in Oracle are
compromised; indeed, a successful commit is not a guarantee of
data permanence.
It's hard to believe that this could be true; I don't see how
Oracle Corporation could put so much effort into porting their
flagship products to Linux if data permanence cannot be
guaranteed.
Is my client mistaken in their understanding of the Linux
filesystem? Any insights from the Linux gurus out there would be
greatly appreciated!
Regards,
Roger Schrag
Database Specialists, Inc.

Roger Schrag (guest) wrote:
: I've been an Oracle DBA for 10 years and have been using
: Oracle on Linux for several months, but am not a Linux expert
: by any means. A client told me something about the filesystem
: Linux uses (ext2?) that I find hard to believe. Can anyone shed
: some light on this?
: The claim is that the Linux filesystem does not implement
: synchronous writes correctly. The implication is that when a
: user commits a transaction and Oracle flushes the redo log to
: disk, Oracle may think the redo information has been
: successfully written when in fact its still sitting in a
: buffer somewhere waiting to write. If a drive failure occurs,
: the redo might never get written, but meanwhile the user has
: already been informed their transaction has been committed.
The problem doesn't lie with Linux - fsync() and O_SYNC are
supported and, AFAIK, behave correctly.
The problem is that Oracle doesn't appear to use them. The redo
logs aren't fsync'ed on commit, nor do they appear to be opened
with O_SYNC.
Data loss will result, as you point out, if the RDBMS doesn't
tell the operating system to save the data synchronously.
Try strace(1)-ing the RDBMS background processes and confirm
this for yourself.
I'm not in a position to progress this with Oracle. Someone
obviously should; otherwise the next article on ZDNet could be
"Linux causes massive Oracle data loss".

Similar Messages

  • Where does one find the Oracle Best Practice/recommendations for how to DR

    What is the Oracle Best Practice for install/deployment and configuration of ODI 11g for Disaster Recovery?
    We have a project that is using Oracle ODI 11g (11.1.1.5).
    We have configured all the other Oracle FMW components as per the Oracle DR EDG guides. Basically using the Host ip name/aliasing concept to ‘trick’ the secondary site into thinking
    it is primary and continue working with minimal (or no) manual reconfiguration. But will this work for ODI? The FMW DR guide has sections for SOA, WebCenter and IdM, but nothing for ODI.
Since ODI stores so much configuration information in the Master Repository, when this DB gets 'data guarded' to the secondary site and promoted to primary, ODI will still think it is at the 'other' site. Will this break the actual agents running the scenarios?
    Where does one find the Oracle Best Practice/recommendations for how to DR ODI properly?
    We are looking for a solution that will allow a graceful switchover/failover with minimal manual re-configuration.

    user8804554 wrote:
    Hi all,
    I m currently testing external components with Windows Server and I want to test Oracle 11g R2.
The only resource I have is this website and the only binaries seem to be for Linux OS.
You have one other HUGE resource that, while it won't answer your current question, you'd better start getting familiar with if you are going to use Oracle. That is the complete and official documentation, found at tahiti.oracle.com
    >
    Does anybody know how I can upgrade my Oracle 11.1.0.7 version to the R2 release?
    Thanks,
    Bertrand

  • Does anybody know how Oracle load large N-Triple file into Oracle 11g R1?

Does anybody know how Oracle loads a large N-Triple (NT) file into Oracle 11g R1 by using SQL*Loader, according to their benchmark results?
Their benchmark results indicate they have overcome the large data set problem.
    http://www.oracle.com/technology/tech/semantic_technologies/htdocs/performance.html
It means they have loaded LUBM 8000 (1.068 billion+ triples) into Oracle successfully, but no detailed steps are provided. For instance, did they use a 32-bit or 64-bit platform? Was only one NT file used per dataset, or several NT files?
Does any exception occur during the loading process if the NT file is beyond 60GB? When using Jena to generate an NT file against LUBM(8000), the size of the NT file will definitely be beyond 60GB.
Should we divide such a large NT file into several smaller ones? Is that a good approach? I'm hesitant to do so!

    A Linux 32-bit platform was used for bulk-load of LUBM-8000 1.106 billion (before duplicate elimination) RDF triples into Oracle.
    Multiple gzipped N-Triple files were used to hold the LUBM-8000 data. zcat was used on all these files together to send the complete data into a named pipe. SQL*Loader used this named pipe as the input data file to load the data into a staging table in Oracle. Once the staging table was loaded, the sem_apis.bulk_load_from_staging_table API was used to load the data into Oracle Semantic Store.
    (Additional details in http://www.oracle.com/technology/tech/semantic_technologies/htdocs/performance.html )
    Thanks.

  • Does BPEL PM use Oracle HTTP Server

    Hi,
    Can anyone throw light on how BPEL PM invokes external partner links ? Does it use the Oracle HTTP server to send out the invoke request ?
    TIA

Maybe you should define what you mean by an internal service, as you can connect to DB, JMS, AQ, WS, etc. I'm assuming you mean a web service; therefore the mechanism is basically HTTP. This does not mean that you need OHS. Also, internal services can go through an HTTP server if you desire.
    SOA Suite uses the same technology for both internal and external services. It is only your network that restricts connectivity. SOA Suite can connect directly to external services but this generally isn't recommended as it is a security risk.
    If OHS is killed then you should fail over to another OHS, if this configuration is not in place you will lose connectivity to external services.
Have a look at this doc; it shows Oracle's recommendations for enterprise deployments.
    http://download.oracle.com/docs/cd/E12839_01/core.1111/e12036/toc.htm
    cheers
    James

  • Oracle on Linux vs. Oracle on NT (Performance)

    Does someone have information about some benchmarks studies of Oracle on Linux vs. Oracle on NT?
    Thanks so much,
    Andre

    I looked at this for a data warehouse project. Same hardware (multi-boot) with various Linux distros (Red Hat, SuSE, and Mandrake) vs. NT4. Loaded the same data in all configurations.
    Results: performance was better on all Linuxen that I tested than it was on NT, but not a big enough difference to choose on this basis alone.
    I did, however, see that all the Linux variations were MUCH more stable than NT. I do almost all my development on a Linux server now, just because I got tired of rebooting NT every time my mouse moved. However, thus far most of the production with this warehouse is still being done on NT--so it CAN be made to work, I just don't understand why anybody wants to do it the hard way.
    Subjectively, I thought Red Hat was the most inconvenient Linux to work with, SuSE was easy, and Mandrake was nice but it wasn't officially supported by Oracle. I've also had good experiences with SuSE support, but Red Hat support just ignored me. That's just me; I don't know how others have fared with the various support departments.
    I did this about a year ago, so it was 8.1.6 Oracle and whatever Linux versions were current at the time (all had 2.2.* kernels); I have not revisited this with 9i and the only Linux distribution I'm using now is SuSE 7.2.

  • Recovery Mechanism in Solaris

    Hi to all,
I am new to Solaris (coming from the HP-UX world) and I was wondering if there is some tool in the Solaris world for making an exact image of the system and using it afterwards to restore the system as it was at the moment the image was taken.
HP-UX has such a tool, called ignite make_tape_recovery, which is very handy for this purpose.
Is there something like this in Solaris?

    dejan.stojcevski wrote:
    Thanks a lot Ivan.
    This answered my question.
    I will search around to learn some more about flash archives and see what they can do too.
Anyway, a little comparison with HP's make_tape_recovery:
1. make_tape_recovery creates a bootable tape. No need to boot from an installation CD. You boot directly from the tape. ufsdump does not do this.
2. make_tape_recovery does not require you to partition the underlying root disk. It does this automatically. ufsdump does not have this functionality.
3. make_tape_recovery is a fully automated backup/recovery mechanism, meaning after you boot from the tape you can return in around 1 hour and you will have a completely recovered system. ufsdump requires mounting/unmounting of slices.
This sounds a lot like SCO's root/boot floppy/tape restore solution.
Yet I think that this comparison is not correct, because Sun's ufsdump and HP's make_tape_recovery are two different types of software (different philosophy). Sun's ufsdump is like HP's fsbackup utility - a tool for full file system backups. HP's make_tape_recovery <=> Sun's ??? (flash archives maybe?)
I don't think Sun has anything like this, and the closest you could get would be a Flash archive or a Jumpstart server. And then you would still have to do a restore after the machine has been booted up.
The closest you could get to something like this in the Sun world would probably be "Bare Metal Restore" from Veritas, now Symantec.
    alan

  • Does Linux do anything About SSD Performance Degradation?

    I have been thinking about getting an SSD for my laptop but just read this article on performance degradation of SSDs: http://www.anandtech.com/show/2738/8
    I typically use ext4. Does anyone know if any of the Linux filesystems/kernels/etc do anything about SSD performance degradation? Has anyone found a solution to this problem? Maybe new SSDs don't have the issue?

    My OCZ Vertex 2 SSDs slowed down just a tiny bit soon after installing them, but TRIM seems to be working well and they don't seem to be slowing down any further. I'm running them on ext4 with journaling enabled, and only the log files are being directed to temp files.  Everything else is going to the SSDs, and they are being used just like regular hard drives.

  • Linux clusters and oracle

    Hello gentlemen!
    Does anybody have experience with linux clusters and oracle?
    Does this combination exist? Is linux great in this role?
    Any links on this topic?
    Any help would be greatly appreciated.
    Best regards,
    San.

    This contradicts the information in the Oracle® Database Release Notes 10g Release 2 (10.2) for Linux x86-64:
    In Oracle Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide, Chapter 2, "Preinstallation," in the section "Oracle Clusterware Home Directory," it incorrectly lists the path /u01/app/oracle/product/crs as a possible Oracle Clusterware home (or CRS home) path. This is incorrect. A default Oracle base path is /u01/app/oracle, and the Oracle Clusterware home must never be a subdirectory of the Oracle base directory.
A possible CRS home directory is in a path outside of the Oracle base directory. For example, if the Oracle base directory is /u01/app/oracle, then the CRS home can be an option similar to one of the following:
/u01/crs/
    /u01/crs/oracle/product/10/crs
    /crs/home
    This issue is tracked with Oracle bug 5843155.

  • File does not exist :C:/oracle/product/10.2.0/db/apache/apache/htdocs/emd/m

    Dear All,
I am unable to log in on the Apex login page.
In the Apache log, the following error is found:
    [ecid:1213173391:172.20.233.149:1788:1840:1.0]
    File does not exist :C:/oracle/product/10.2.0/db/apache/apache/htdocs/emd/main/
What could be the problem? I restarted the Apache server and still found the same error.
Please help me out with this.
Thanks a lot
    Edited by: khaja on Jan 18, 2009 12:48 PM

    Hi,
    Probably a mistake in your dads.conf file.
    ( Also check the APEX_PUBLIC_USER in the dads.conf file )
    Kind regards,
    Iloon

  • Need help in oracle data recovery

    Friends ,i need help in oracle data recovery.
    I had an oracle 8i database running on windows.
    For some reason Windows operating system crashed.
    It is not booting up.
I don't have current backups, but my database's physical files are on the disk.
The control file, datafiles, and redo log files are there.
    Is there any way I can recover my database?
    Please help in this issue.
    regards
    Ajith

    HI citrus,
    thanks for the reply.
    I have installed database 9i on the same PC after O/S reinstallation.
You are saying that I need to keep the Oracle root folder the same as in my old installation, copy the control files, redo logs, and data files into exactly the same folders as in the old database, and then start the database?
    thank you for your patience and support.
    regards.,
    Ajith

  • How long does restoring iPhone 3GS take in recovery mode through iTunes?

    how long does restoring iPhone 3GS take in recovery mode through iTunes?

You are right, that is a weird issue. Try taking the battery out, then connect to iTunes; it should then work properly. If not, have the power on/off button fixed, because it is essential for this procedure.

  • Does Studio Creator support Oracle ADF Faces and other components?

    Hi everyone
    According to:
    http://www.oracle.com/technology/products/jdev/htdocs/partners/addins/exchange/jsf/doc/faq.html
    "Although ADF Faces is "vanilla" JSF we have not been able to run with Java Studio Creator Build 04.06.2. We are working with Sun to resolve the issues in Java Studio Creator."
    Does anyone know if Oracle ADF faces now work with Studio Creator and if so which version.
    In addition I am also looking for JSF visualization components. In addition to advanced 3-D graphs (send as PNG to client) I am also looking for components to visualize the structure of a website. These all need to work with Studio Creator.

    Importing 3rd party libraries used to be complicated. The .complib stuff was added precisely to make it easy.
    It does make packaging slightly harder for third party -vendors-, since there's one extra step, but this makes everything easier for (the much larger number of) users of the third party components, since the packaging format specifies a bunch of stuff that we used to have to ask of users when trying to add the jar and associated metadata into the IDE.
    The complib stuff is documented, so if you're producing a 3rd party JSF library, or if you really want to use one that hasn't yet been packaged, you can do the steps yourself.
See http://wiki.java.net/bin/view/People/EdwinGoei -- the first couple of links describe the process. Yes, we're working with third-party vendors to get this done for their component sets, and yes, there are talks with other IDE vendors to standardize all this.
    -- Tor
    http://blogs.sun.com/tor

  • TS4036 where does icloud store my mac osx recovery key?

    where does icloud store my mac osx recovery key for file vault?

    Note that you have to have specifically chosen to store the key with Apple at the time you created the FileVault encryption, and chosen security questions so that you can identify yourself when you come to retrieve it. If you did not store it with Apple in this way and do not have a record of it elsewhere then you cannot proceed. All this has nothing to do with iCloud and the process is not automatic.

  • Linux flavors for oracle 10gR2 RAC setup

    Hi,
    Oracle10gR2
Which alternate Linux flavors (most similar to RHEL 5.5) can be used for an Oracle10gR2 RAC setup?
    Regards

    Hi,
while other Linux versions are also certified, it is best to use Oracle Linux: http://www.oracle.com/us/technologies/linux/index.html
This is based on RHEL (there is also a 5.5 version).
    Regards
    Sebastian

Does impdp create an Oracle user if not existing

    Hi
Does impdp create an Oracle user if the Oracle schema does not exist when running with remap_schema?
    regards
    kb

    Yes,
    This is from documentation
    "If the schema you are remapping to does not already exist, the import operation creates it, provided the dump file set contains the necessary CREATE USER metadata and you are importing with enough privileges."
    http://download.oracle.com/docs/cd/B14117_01/server.101/b10825/dp_import.htm#sthref324
