sendfilev() and file system interaction

When transmitting a file over a TCP socket with sendfilev(), where the source is a regular file on a UFS file system, what exactly takes place in the file system locking subsystem?
For example, if an HTTP server employs sendfilev() and the file is locked so that it can be read properly, how does one update that file on a heavily trafficked site? God forbid it's a Solaris 9 .iso and a user is downloading it over a less-than-broadband connection: the file would be "locked" against updates for a potentially long time.
/kris
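On most Unix file systems, including UFS, a plain read (or an in-flight sendfilev()) does not hold a lock that blocks writers for the duration of the transfer; ordinary file access takes only short-lived, per-operation locks. The usual way to update a file that is being served is to write a new copy and rename it into place: the rename is atomic, and downloads already in progress keep reading the old inode until they finish. A minimal sketch of that pattern in Python (illustrative only; sendfilev() itself is a C API, and the reader here just stands in for an in-flight transfer):

```python
import os
import tempfile

# Create the file being served (stand-in for the Solaris 9 .iso).
d = tempfile.mkdtemp()
path = os.path.join(d, "solaris9.iso")
with open(path, "w") as f:
    f.write("old contents")

reader = open(path, "rb")   # a download in progress
first = reader.read(4)      # the client has already received part of the file

# Update by writing a new copy and renaming it into place (atomic on POSIX).
tmp = path + ".new"
with open(tmp, "w") as f:
    f.write("new contents")
os.replace(tmp, path)

# The in-progress download still sees the ORIGINAL data: the old inode
# stays alive until its last open file descriptor is closed.
rest = first + reader.read()
reader.close()
```

A request opened after the rename gets the new contents, while the slow client finishes the old file undisturbed.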


Similar Messages

  • How to bulk import data into CQ5 from MySQL and file system

    Is there an easy way to bulk import data into CQ5 from MySQL and file system?  Some of the files are ~50MB each (instrument files).  There are a total of ~1,500 records spread over about 5 tables.
    Thanks

    What problem are you having writing it to a file?
    You can't use FORALL to write the data out to a file; you can only loop through the entries in the collection one by one and write them out to the file like that.
    FORALL can only be used for SQL statements.
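    A minimal sketch of that loop-and-write approach in PL/SQL (illustrative only: DATA_DIR is a hypothetical directory object, and l_rows stands in for your collection):

```sql
DECLARE
  TYPE t_rows IS TABLE OF VARCHAR2(4000);
  l_rows t_rows := t_rows('row one', 'row two');  -- stand-in for your data
  l_out  UTL_FILE.FILE_TYPE;
BEGIN
  -- DATA_DIR is a hypothetical directory object; create it first with
  -- CREATE DIRECTORY and grant yourself read/write on it.
  l_out := UTL_FILE.FOPEN('DATA_DIR', 'export.txt', 'w');
  FOR i IN 1 .. l_rows.COUNT LOOP  -- one entry at a time; FORALL cannot do this
    UTL_FILE.PUT_LINE(l_out, l_rows(i));
  END LOOP;
  UTL_FILE.FCLOSE(l_out);
END;
/
```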

  • APEX 2.1 and server  file system interaction methods

    Hello experts
    I am building a web application using APEX 2.1 that would be used by a few hundred users as a secure and auditable portal to a word document file library.
    The main use case is that an authenticated user will find links to documents to which he/she has been granted access at various levels (read-only or read/write).
    All the files are stored on the server file system.
    I have a couple of questions:
    1. A link to "some file" (a plain anchor tag) placed in an HTML region source does not work. Is this normal?
    2. Will I have to import the entire file library (very large in size and number of files) into APEX so I can access the files from APEX, or can I simply use the simple anchor HTML tag above?
    Many thanks. Zemus.
    Edited by: zemus on Jun 4, 2009 1:25 PM

    I think I answered your email. Also, you can have a look at these threads:
    Re: How to display bfile in Apex report
    Re: Linking to display an external file
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------

  • Oracle cache and File System cache

    At a checkpoint, the Oracle buffer cache is written to disk. But if an Oracle database uses file system datafiles, it is likely that the data still sits in the file system cache. I don't understand how Oracle keeps the data consistent.

    Thanks for your feedback. I am almost clear about this issue now, except for one point that needs to be confirmed: do you mean that on Linux or UNIX we can, if required, set "direct to disk" at the OS level, while on Windows it is "direct to disk" by default, so we do not need to set it manually?
    And I have a further question. If a database is stored on a SAN disk, say a volume from a disk array, and the array can take a block-level snapshot of a volume, we need to implement an online backup of the database. The steps are: alter tablespace begin backup, alter system suspend, then take a snapshot of the volume which stores all the database files, including datafiles, redo logs, archived redo logs, control file, server parameter file, network parameter files, and password file. Do you think this backup has integrity or not? Please note that we do not flush the file system cache before these steps. Let's assume the SAN cache is flushed automatically. Can I assume it has integrity because the redo writes are synchronous?
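    The sequence described above can be sketched as a SQL*Plus script (a sketch only: the snapshot step is whatever command your array vendor provides, and ALTER DATABASE BEGIN BACKUP, which covers all tablespaces at once, requires 10g or later):

```sql
-- Put the datafiles into hot-backup mode, then freeze I/O before the snapshot.
ALTER DATABASE BEGIN BACKUP;  -- or ALTER TABLESPACE ... BEGIN BACKUP per tablespace
ALTER SYSTEM SUSPEND;

-- <take the array/volume snapshot here, outside the database>

ALTER SYSTEM RESUME;
ALTER DATABASE END BACKUP;

-- Archive the current redo log so the snapshot can be rolled forward:
ALTER SYSTEM ARCHIVE LOG CURRENT;
```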

  • iOS 8.3 and file system - lost access

    Since updating to iOS 8.3 I've lost access to my iPad's file system using iFunbox and various file explorers using both PC and Mac.
    It seems that a sandbox is being enforced in a way that even prevents reading of files or deletion of junk files (which kind of defeats the purpose of a sandbox).
    I can't find a way around this, but it also means that I can't back up maps in Minecraft, which destroys the multi-platform nature of the app and renders hundreds of hours of work over months useless!
    Effects of the problem:
    1) I cannot access my stored data
    2) I cannot delete junk files including file clutter created by apps - this means that a factory reset will be needed more frequently
    3) I cannot manually backup files that are missed by automatic backup
    4) The ability to use custom maps on Minecraft has gone
    5) I cannot track what data apps are storing
    Is there a workaround for this problem presumably until Apple fixes it (all ver

    After upgrading to iOS 8.3, some third-party programs, such as iMobie's PhoneClean 4, no longer have access to the file system.
    Some applications, such as Spotify, write a lot of data to storage. Because of the case above, the device runs out of space, and sometimes there is an urgent need to eliminate these unnecessary temporary files.
    I find it intolerable that Apple does not propose a solution to this problem or provide a tool that allows us iPad users to delete these temporary files.
    I agree that security is important, but it is also true that, with daily use, our devices keep running out of space.
    Therefore, I ask that Apple give me a solution to this problem.
    Greetings.

  • Oracle DB and File system backup configuration

    Hi,
    As I understand from the help documents and guides, brbackup, brarchive and brrestore are the tools used for backing up and restoring the Oracle database and the file system. We have TSM (Tivoli Storage Manager) in our infrastructure for managing backups centrally. Before configuring the backup with TSM, I want to test the backup/restore configuration locally, i.e. storing the backup on the local file system and then restoring from there. Our backup strategy is a full online backup on the weekends and incremental backups on the weekdays. Given this, the following are the things I want to test.
    1. Full online backup (to local file system)
    2. Incremental online backup (to local file system)
    3. Restore (from local file system)
    I found help documents to be very generic and couldn't get any specific information for the comprehensive configuration to achieve this. Can someone help with end to end configuration?
    We are using SAP Portal 7.0 (NW2004s) with Oracle 10g database hosted on AIX server.
    Helpful answers will be rewarded
    Regards,
    Chandra


  • Database shut down and file systems are still busy

    We have been copying our production DB over to a reporting DB for the last 3 years. The process we use on the reporting database is: shutdown abort, restart it, then shutdown immediate. Then we unmount the file systems, copy production over using a FlexClone on the NetApp, remount the file systems, and start the database back up. We used to shut down the listener (which does not share the Oracle binaries on the unix system) but lately have left the listener up because other DBs use it. Every so often, after shutting down the report database in prep for the copy, the file system that contains the binaries is busy and we can't unmount the Oracle binaries.
    So my question is: how does the listener know whether the database is up or down? Is it reading something on the volumes we're trying to unmount to see if the database is up? We have users that schedule jobs at odd hours, and I was wondering whether them hitting the listener, even though the database is down, is keeping the file systems busy.
    Any help is appreciated.

    Mi**** wrote:
    "So my question is how does the listener know if the database is up or down?"
    If/when the DB responds, the listener knows it is up; otherwise the listener reports an error to the client.
    "Is it reading something on the volumes we're trying to unmount to see if the database is up? We have users that schedule jobs at odd hours, and I was wondering whether them hitting the listener, even though the database is down, is keeping the file systems busy."
    The listener is NEVER involved in any ongoing packet exchange between client & DB.
    The listener takes the original connection request and, if the DB is up, passes the request to the DB.
    After this initial handoff, the listener has no further involvement in packet exchange between DB & client.

  • CC configuration causes freezing and file system corruption

    I have received no response for:
    http://forums.adobe.com/message/6074118
    It is a major issue, can I file an issue/fix request somehow?
    Basically it appears that the default configuration when installing the complete product suite causes the OS (OSX Mavericks) to freeze unpredictably and render the system unusable unless shut down and rebooted. This may result in file system corruption and data loss requiring complete reinstall of OS and all products.
    Some have mentioned 'drive' is causing this, but I am not familiar with that component and do not see any options to remove or disable such a product in CC.
    I do not want any non-essential add-ons (like Bridge?) - just a reliable production configuration so I can get work done.
    The risk of having the system 'locked up' or file system corrupted is unacceptable.
    Thanks,
    Mike

    Hi mkrjf,
    Please enable root account and try to use Photoshop and let me know if still the same behavior.
    Root: http://support.apple.com/kb/PH14281
    Regards,
    Romit Sinha

  • RMAN copies to ASM and file system

    I am on 11g Rel 1 trying to do an RMAN backup with 2 copies of my backup. One copy should go to ASM storage and the other should go to a regular file system.
    My RMAN script is:
    run {
    configure default device type to disk;
    set backup copies 2;
    configure device type disk parallelism 16 backup type to compressed backupset;
    configure channel device type disk format = '+ASMDG2/backups/%d_%T_%U_dump0.bck, /orabackups/oracle/%d_%T_%U_dump0_offload.bck' MAXPIECESIZE 60G;
    configure controlfile autobackup on;
    backup incremental level=0 database plus archivelog;
    delete noprompt obsolete redundancy 1;
    }
    exit;
    When the backup runs I get:
    RMAN-03009: failure of backup command ... etc.
    ORA-15124: ASM file name '+ASMDG2/backups/%d_%T_%U_dump0.bck, /orabackups/oracle/%d_%T_%U_dump0_offload.bck' contains an invalid alias name ...
    What is the correct syntax to get the copies to be able to go to ASM disk and regular disk locations?

    OK, I see that in the documentation this time. I'll try that for tonights backup and let you know how it works. I'm a bit concerned still because it looks like it wrote fine to the +ASMDG2 location.  I would have thought the whole backup would fail but I guess it was able to read the first path...                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
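    For reference, the syntax in the documentation gives each duplexed copy its own format string, separated by a comma outside the quotes (shown here with the paths from the original script; verify against your release's RMAN reference):

```sql
CONFIGURE CHANNEL DEVICE TYPE DISK
  FORMAT '+ASMDG2/backups/%d_%T_%U_dump0.bck',
         '/orabackups/oracle/%d_%T_%U_dump0_offload.bck'
  MAXPIECESIZE 60G;
```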

  • ASM and file system combination

    Hi everyone -
    OK, this may sound a bit crazy, but it's the situation I'm facing and I really need your help and opinion before I do something stupid.
    I am using ASM on a SAN to manage my DATA tablespace (it contains all the user data, including BLOBs). The ASM takes up all the disk space on the SAN except for what's left over from the disks allocated for the SAN OS. Basically, I have 941GB of free space on the disks allocated to the SAN OS. Now I need more space for my DATA tablespace, but I can't afford to buy more disks at the moment (spent all my money on this SAN). I want to use the 941GB of unused disk space from the SAN OS disks for my DATA tablespace. But these disks are obviously not using ASM, and my current DATA tablespace is completely inside ASM.
    So the 6 million dollar question is . . . Can I add a "filesystem" datafile to an existing ASM tablespace? For example:
    Tablespace DATA is currently managed by ASM. Can I do
    SQL> alter tablespace DATA add datafile '/ora1/oradata/orcl/DATA_FS_01.DBF' size 1024M;
    So essentially I'll have part of the tablespace managed by ASM and another part residing on a traditional file system datafile.
    What do you think? Is this possible? Advisable? The worst thing I could do? Ok to do?
    Thanks in advance
    a


  • Partition scheme and file systems in dual boot environment [Solved]

    This machine is a general purpose desktop computer used by a single user. The plan is to run Arch primarily and have Ubuntu for backup. I have some experience with Arch and various other linux distros and usually just use a single partition for / with a separate /home partition. On my primary machine, however, I've accumulated a few disks and am wondering how I could best make use of what I have. I may eventually add a 256gb SSD or replace the 80gb drive with it, but currently this is what I have:
    8gb SSD
    80gb SSD
    80gb HDD
    1.5tb HDD
    18gb RAM
    Previously I kept almost everything on the 80gb SSD, including the home directory, with individual folders for large files (VMs, music, video) stored on the 1.5tb drive and symlinked into my home directory. This worked very well, except that my home directory still tended to grow fairly large over time as things accumulated (one culprit was my bitcoin wallet). If I did this again, it would be Arch on the 80gb SSD (probably using BTRFS), Ubuntu on the 80gb HDD (probably EXT4 or BTRFS), and a few different partitions on the 1.5tb drive, mostly BTRFS, depending on what will be stored.
    My other thought was to install both operating systems to the 80gb SSD for maximum speed while putting the home directories on the 80gb HDD or on separate partitions of the 1.5tb drive.
    In either case I'll probably make a /scratch mounted as a tmpfs to make better use of RAM.
    So questions:
    How much does speed matter generally for a home partition?
    Is BTRFS a good choice for this kind of system? More trouble than it's worth?
    Would an LVM make sense in this case? I'd assume there would be performance issues if I tried mixing SSD's and HDD's in an LVM.
    What should I use the small 8gb SSD for?
    Sorry if I've mixed terminology some here, I'm still learning!
    Last edited by spurious_access (2013-07-12 15:51:37)

    1) For most user data, speed is not really of consequence.  But what can matter is the speed of all the configuration files located in your home folder.  They are small, so decreased seek times are going to have a positive effect here.  This is the same reasoning as to why you would want the system configuration files in /etc to be on the SSD (not to mention you'd have a hell of a time separating /etc).
    2) Btrfs is f*cking awesome! 
    3) LVM could make sense, but it depends on what you want specifically.  The only reason you might have a performance hit with LVM2 is if you tried to apply some kind of RAID to it.  That is, striping and mirroring in LVM2 terms.  Obviously, vastly different speeds are going to be limited by whatever is slowest.  But the nice thing about LVM2 is that you can tell lvcreate to ensure that your logical volume is created all on one disk.  But btrfs is actually designed to handle disks of different sizes and, to some extent, different speeds.
    4) If the 8GB SSD is fast, then maybe you could use it as a nice root filesystem.  I honestly don't think I have ever had my root get any larger than 5GB.  But then I don't run a full DE either, but I also don't keep my system as mega-slim as possible like Trilby.  It might make sense to LVM over the two SSDs and maybe have a separate volume group over the two HDDs.  Or have one btrfs filesystem spanning the two SSDs and one spanning the two HDDs. 
    Ultimately, the needs of your personal usage style are going to dictate what the actual best configuration of your machine is.  But this hodge-podge of disks is definitely not a terrible thing, and can actually probably be used quite efficiently.  It is totally up to you.
    As a side note (in regard to my "btrfs is f*cking awesome" comment above), I tried btrfs about a year ago.  It was nice, but it was not quite where I had hoped.  But after doing something stoopid on my machine, and having to redo part of my setup, I decided to just rsync my system to another disk, create a btrfs pool, then rsync it back.  I couldn't be happier with my decision.  It is truly amazing the rate at which btrfs is progressing.  New features, more stability, faster speeds, it is all coming along quite nicely.

  • Windows 8.1 and disk and file system repair failure issues

    Recently I have seen many customers' computers showing a "Windows needs to restart to fix disk issues" error reported from the Action Center. Every computer that has received this error since the release of Windows 8.1 has, upon restarting, either locked up or continued to give the same error as before. The way we have been fixing it is backing up the data, reloading Windows, then restoring the data from an external drive while the OS is up and running (as UEFI will not keep file integrity in 8 or 8.1). Most of the time I have noticed that when a customer installs any 3rd-party software the error returns. Is there a patch for this, or is the easiest way just to disable notifications from the Action Center?

    Hi,
    You can first try the Startup Repair option in WinRE to check the issue.
    If you just want to hide such notifications, refer to this guide:
    How to Turn On or Off Action Center Messages in Windows 8
    http://www.eightforums.com/tutorials/22285-action-center-messages-turn-off-windows-8-a.html
    Kate Li
    TechNet Community Support

  • Time Machine and file system errors?

    Hardware:
    - 15" MBP 2GHz (mid 2006) 10.5
    - WD MyBook Premium 320gb
    Problem:
    Time Machine backs up to a partition on the MyBook (drive is GUID partition table, partition is Mac OS Extended). Immediately after a TM backup to a reformatted partition, Disk Utility may or may not detect invalid sibling links, invalid key lengths, or other errors. If it does not detect them after the initial backup, subsequent backups of recently changed files will cause said errors. This is consistent; its occurrence is not a "maybe".
    Disk Utility cannot fix these errors. fsck_hfs has had success in fixing these errors once. As I type fsck_hfs is running a second time, so we'll see the results from that. The first run produced the following:
    fsck_hfs -ypr /dev/disk1s2
    *Invalid node structure*
    (4, 44282)
    *Invalid volume file count*
    (It should be 832071 instead of 802858)
    *Invalid volume directory count*
    (It should be 181916 instead of 177095)
    *Invalid volume free block count*
    (It should be 13931374 instead of 14042565)
    *Volume Header needs minor repair*
    (2, 0)
    Questions:
    - Of what is this behavior indicative?
    - Is the MyBook necessarily bad?
    - Is TM somehow the cause?
    - Could bad sectors on the MyBook be the cause?
    - Is the drive within the MBP the source of these problems (Disk Utility has not previously found any errors...)?
    I'm really at a loss...

    Boot from the OS X installer disc that came with the computer. After the chime press and hold down the "D" key until the diagnostic screen appears.
    Booting From An OS X Installer Disc
    1. Insert OS X Installer Disc into the optical drive.
    2. Restart the computer.
    3. Immediately after the chime press and hold down the "C" key.
    4. Release the key when the spinning gear below the dark gray Apple logo appears.
    5. Wait for installer to finish loading.

  • Help with partitions and file systems [SOLVED]

    Hi, I have been using Ubuntu for a while, and now I want to move to Arch. I've tried it on a PC and I like it, so I want to make the change.
    But before installing Arch, I have two doubts. I read the Beginners' Guide and also the Installation Guide. They say it is better to have different partitions for /, /boot, /home, /usr, /var, and /tmp.
    Usually, I have always used something like this:
        * /boot (32 MB)
        * /swap (512 MB)
        * /root (6 to 8 GB)
        * /home (approx. 80 GB)
    Is it really better to also have partitions for /var, /usr, and /tmp, or only some of them? And in that case, what size should I give them? I don't want to make them too small, but I don't want to waste disk space either.
    And that takes me to my second question: which filesystem is better for each partition? In many places I read that JFS is good for /var, or that XFS is better for /home and big files.
    I thought about using something like this:
        * /boot (ext2)
        * / (JFS)
        * swap
        * /home (XFS)
    Is this a good design, or should I use another filesystem like ReiserFS, etc.? And for the /var, /usr, and /tmp partitions, which one should I use?
    Thank you
    PS: This PC is going to be a desktop PC.
    PS2: Sorry for my bad English; it is not my native language.
    Reason for edit: added the swap partition. I forgot it.
    Last edited by Thalskarth (2008-12-21 20:11:00)

    thanks everybody for the help,
    kaola_linux wrote: @Thalskarth - it's better to have /var, especially if you're using ext3 or other filesystems designed for larger files as the partition for /home and /.  Having a separate partition for /var would be nice (for backup purposes, and for reinstalling without downloading every package all over again). 5 GB would be sufficient for your /var; anyway, you can always resize it to your needs.
    So a 5 GB partition in ReiserFS would be OK for /var.
    Inxsible wrote:
    A lot of people also use XFS, which is known to have better performance with huge files. I think ext3 offers a good balance... because I am never sure whether my home partition will have all huge files or not. Same with my external drive, so I just use ext3.
    If you have a specific partition for movies or some video/audio editing that you do, you may want to consider XFS too. I don't do all that, so I have never used XFS. I wouldn't know the exact performance difference between ext3 and XFS.
    Yes, in many places I read that XFS is better for big files. But I couldn't find out what "big file" means. Does it mean a 200 MB file, or a 4.4 GB one?
    The same applies to ReiserFS: what is a small file? A 1 MB one or a 4 KB one?
    I have always used ext3; I thought of XFS and JFS just to give them a try.
    amranu wrote: I have no idea about better filesystems; all my partitions are ext3 (soon to be ext4).
    Inxsible wrote: One thing that makes me want to keep ext3 is that ext4 is coming out (soon?) and you can upgrade from 3 to 4 without having to reformat and make backups of your current data.
    Is it really coming soon? I didn't consider ext4 because many places said it had been in development for many years... maybe they were a bit out of date.
    Edit: I searched the wiki and it says that since 11 October 2008, ext4 is "stable" and has been included since kernel 2.6.28 as a stable release. Is it too early to try it, or is it better to wait a while?
    thanks.
    And, has anyone tried JFS?
    Last edited by Thalskarth (2008-12-03 00:14:56)

  • OAM and file system versions not matching

    OAM displays versions 4 and 5 for various patches.
    The file system shows version 6, as reported by adident, present in $AP_TOP/patch/115/import/US.
    We have applied a series of patches.
    R12

    Please post the details of the application release, database version and OS.
    "OAM displays versions 4 and 5, for various patches"
    What are those patches? Application/Database/OS patches?
    "The file system shows version 6, as reported by adident, present in $AP_TOP/patch/115/import/US"
    Version 6 of what?
    "we have applied a series of patches"
    What are those patches?
    Thanks,
    Hussein
