Fit a node into a block in the file system

I am developing an external R-tree, and it is optimal to fit one node into one disk block.
The question is: how can I make sure that each node fits into a single block, instead of using half of the first block and half of the second block, when using the file system?
And more: if a node needs 2.6 blocks, should I use 3 blocks and leave 0.4 of a block empty, so that the next node can start from a new block?

You should be able to obtain some pretty good performance improvements if your cache implementation
is tuned to the block size of the underlying file system.
If your cache pages are larger than the block size, then you may end up having to perform multiple disk
head moves to fill a cache page or to flush it to disk (this is especially true if the file is fragmented).
If your cache pages are smaller than the block size, then you may end up having to read the same block
multiple times when a larger page size would have resulted in a single read.
Most of this is only really an issue if you force writes to the disk (i.e. you don't let the OS cache the writes), and
whether you do that depends on the task you are performing.
Don't even consider serialization (unless you have to), as it is relatively CPU intensive and produces
data of an unknown size. This means that you cannot perform the tuning.
Many database-style programs read and cache pages (or frames) of memory. The application then fills
in the pages with whatever data is needed. (Basically, it takes the same amount of time to flush a whole block to
disk as it does to flush part of a block to disk.)
matfud
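To make this concrete, here is a minimal sketch of block-aligned node storage, assuming a 4 KB file-system block size; the class and method names are illustrative, not from the post above. Each node is padded to a whole number of blocks, so a node that needs 2.6 blocks occupies 3, and the next node always starts on a block boundary.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Sketch: store each R-tree node at a block-aligned offset in the node file,
    // padding it to a whole number of blocks (assumed block size: 4096 bytes).
    public class BlockAlignedNodeFile {
        private static final int BLOCK_SIZE = 4096; // assumption; match your file system's block size

        private final RandomAccessFile file;

        public BlockAlignedNodeFile(String path) throws IOException {
            this.file = new RandomAccessFile(path, "rw");
        }

        // Round a byte count up to a whole number of blocks (e.g. 2.6 blocks -> 3 blocks).
        static long blocksNeeded(long bytes) {
            return (bytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
        }

        // Write a serialized node starting at the given block index; the unused tail of the
        // last block is written as zero padding so the next node starts on a fresh block
        // boundary. Returns the number of blocks the node occupies.
        public long writeNode(long blockIndex, byte[] nodeBytes) throws IOException {
            long blocks = blocksNeeded(nodeBytes.length);
            byte[] padded = new byte[(int) (blocks * BLOCK_SIZE)];
            System.arraycopy(nodeBytes, 0, padded, 0, nodeBytes.length);
            file.seek(blockIndex * BLOCK_SIZE);
            file.write(padded);
            return blocks;
        }

        // Read back all the blocks a node occupies with a single sequential read.
        public byte[] readNode(long blockIndex, long blocks) throws IOException {
            byte[] buf = new byte[(int) (blocks * BLOCK_SIZE)];
            file.seek(blockIndex * BLOCK_SIZE);
            file.readFully(buf);
            return buf;
        }
    }

Note that this only aligns offsets within the file; whether those offsets coincide with physical disk blocks is up to the file system. Wasting the tail of the last block is the usual trade-off for being able to read every node with one aligned request.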

Similar Messages

  • Upon boot up my computer always goes into a "checking file system on C"

    Upon boot up my computer always goes into a "checking file system on C". It seems to have started since last Tuesday, after the large update.

    This is the second part of my message, which should stop the file system check.
    Click Start, click All Programs, and then click Accessories.
    Right-click Command Prompt, and then click Run as administrator.
    At the window that opens, type the following commands, pressing "Enter" after each one; please note the spaces.
    chkntfs /d
    chkntfs /c C:
    chkntfs /x C:
    Reboot the computer; it should go directly into Windows without checking the file system.
    Please confirm if it starts normally, and let me know if the drive is "Dirty" or not.
    Please mark my post as SOLVED if it has resolved your problem. It helps others with similar situations.

  • IPhoto File System

    I thought this was a pretty simple process.
    I heard that you can make it so that when you import photos into iPhoto it will leave the pictures and/or folders in their original file system and NOT import them into iPhoto's file system.
    I was really excited about this because I HATE how iPhoto organizes photos...I can never find one if I look for it in Finder.
    So I unchecked the button under Advanced so that I could keep my own file system/organization.
    I cleared out all the photos from iPhoto and deleted all files in the iPhoto Library folder.
    I have several folders, about 15, which have names such as "My Birthday" or "Sister's Birthday" and corresponding pictures in the folders. These folders are inside one folder on my Desktop.
    I thought that if I simply dragged that single folder into the iPhoto window it would import all the folders, BUT KEEP THEM THERE. I thought that it would use that folder as reference.
    But this did not happen. It still copied all the photos into iPhoto's goofy file system/organization.
    Please tell me there is a way to do what I want and that I'm not misunderstanding this feature.

    Thanks that cleared a few things up.
    I imported that single folder off the desktop into iPhoto. Now if I go into my iPhoto library and into Originals I'm presented with the year and if I go into a specific year then I can find all my files easily...in my own file organization.
    The year thing is no big deal, even though one year says 1969. I can deal with it.
    However, each individual picture file is now an alias.
    The whole reason I wanted to do this from the beginning was to:
    1) Get MY photos in MY own organization.
    2) Save disk space
    You see, because I don't like iPhoto's organization at all I've been keeping two different "libraries".
    I have the folder on my desktop with ALL the pictures in MY own organization.
    Then in iPhoto I have ALL the same photos, but in iPhoto's organization. I don't use iPhoto for editing at all so I never get confused about whether iPhoto or my desktop folder has the newest version.
    So a quick run through:
    I'll have taken pictures with my digital camera. I pop the card into the reader and copy the files into the desktop folder and name/organize them etc. Then I will copy that folder into iPhoto.
    The result is that I have my own "library" on my desktop to quickly access photos, and then I have the same exact photos in iPhoto, where I can easily mail them and show them off.
    Do you understand how I have this set up...? I know it's a little confusing.
    As you can see, having these two libraries doubles my disk space usage. So I want to only have the iPhoto library, but in my OWN file organization so that I can find it easily.
    Now back to the alias thing...
    It seems if I delete the folder off the desktop then the alias will be looking at nothing and then I won't have any photos. yikes!
    Hopefully you followed this.
    All I want is to be able to use iPhoto, but have its library set up so that I can organize it.
    Thanks again. Major points for you if you can help me some more.

  • IPhoto file system problems (unique)

    A while ago I moved all my pictures from a 2007 MacBook to a 2006 iMac. I bought the iMac from the school I attended and it was running an older version of iPhoto. Instead of "Events" they were "Rolls." Anyway, I've since updated to iPhoto '11 9.2.1.
    On my MacBook I frequently opened the iPhoto library to access photos for emailing or uploading rather than going through iPhoto. I've never modified any folders or photos or anything else within the file system, only in iPhoto.
    Ever since I updated iPhoto, however, something strange has occurred. The file tree used to mimic the events in iPhoto so it was easy to locate the pictures I wanted. Now, it is very confusing. All previous events from my MacBook are the same, but new photos aren't. After importing photos and organizing them into events, the file system doesn't reflect it. Instead, my photos within the file system are all over the place. The organization is based on the date of importing and not the events in iPhoto, which frustrates me to no end since I am a bit OCD with file organization.
    Anyway, does anyone know what I can do? It drives me crazy. In iPhoto itself, the photos are organized by event exactly like I want. In the file system in Finder, however, it is not. You might ask why I care.... I just do. Obviously something is wrong and/or different than before.
    If this has been answered before, I apologize. I've tried to do some research so if you know where, just point me in the right direction. Thank you SO much for any help!

    Answered here: https://discussions.apple.com/message/8925249#8925249
    Unfortunately I couldn't find it until I asked it and it was suggested on the sidebar. Sorry everyone!

  • Cluster File system local to Global

    I need to convert a local highly available file system to a global file system. The client needs to share data within the cluster, and the solution I offered him was this.
    Please let me know if there is a better way to do this. The servers are running 2 failover NFS resource groups sharing file systems to clients. Currently the file systems are configured as HAStoragePlus file systems.
    Thanks

    Tim, thanks much for your reply. I will be doing this as a global file system. Currently, the HA file systems are shared out from only one node and I intend to keep it that way. The only difference is I will make the local HA file systems global.
    I was referring to the Sun Cluster concepts guide, which mentions:
    http://docs.sun.com/app/docs/doc/820-2554/cachcgee?l=en&a=view
    "A cluster file system to be highly available, the underlying disk storage must be connected to more than one node. Therefore, a local file system (a file system that is stored on a node's local disk) that is made into a cluster file system is not highly available"
    I assume i need to remove the file systems from hastp and make them as global? Please let me know if the understanding is correct...
    Thanks again.

  • How to bulk import data into CQ5 from MySQL and file system

    Is there an easy way to bulk import data into CQ5 from MySQL and file system?  Some of the files are ~50MB each (instrument files).  There are a total of ~1,500 records spread over about 5 tables.
    Thanks

    What problem are you having writing it to a file?
    You can't use FORALL to write the data out to a file, you can only loop through the entries in the collection 1 by 1 and write them out to the file like that.
    FORALL can only be used for SQL statements.

  • How to add more disk space into /   root file system

    Hi All,
    Linux  2.6.18-128
    Can anyone please let us know how to add more disk space to the "/" root file system?
    I have added a new hard disk with 20 GB of space.
    [root@rac2 shm]# df -h
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/hda1             965M  767M  149M  84% /
    /dev/hda7             1.9G  234M  1.6G  13% /var
    /dev/hda6             2.9G   69M  2.7G   3% /tmp
    /dev/hda3             7.6G  4.2G  3.0G  59% /usr
    /dev/hda2              18G   12G  4.8G  71% /u01
    LABLE=/               2.0G     0  2.0G   0% /dev/shm
    /dev/hdb2             8.9G  149M  8.3G   2% /vm
    [root@rac2 shm]#

    Dude! wrote:
    I would actually question whether or not more disks increase the risk of a disk failure. One disk can break as likely as one of two or more disks.
    Simple stats.  Buying 2 lottery tickets instead of one, gives you 2 chances to win the lottery prize. Not 1. Even though the odds of winning per ticket remains unchanged.
    2 disks buy you 2 tickets in The-Drive-Failure lottery.
    Back in the 90's, BT (British Telecom) had an 80+ node OPS cluster built with Pyramid MPP hardware. They had a dedicated store of SCSI disks for replacing failed disks - as there were disk failures fairly often due to the number of disks. (A Pyramid MPP chassis looked like a Xmas tree with all the SCSI drive LEDs, and BT had several.)
    In my experience - one should rather expect a drive failure sooner, than later. And have some kind of contingency plan in place to recover from the failure.
    The use of symbolic links instead of striping the file system protects from the complete loss of the whole enchilada if a volume member fails, but it does not reduce the risk of losing data.
    I would rather buy a single ticket for the drive failure lottery for a root drive, than 2 tickets in this case. And using symbolic links to "offload" non-critical files to the 2nd drive means that its lottery ticket prize is not a non-bootable server due to a toasted root drive.

  • Mounting the Root File System into RAM

    Hi,
    I had been wondering, recently, how one can copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
    I have now succeeded, at least it seems, in performing such a task. There are still some issues.
    For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
    A custom kernel hook is used to (overall):
    Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
    Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
    Copy the non-volatile content into the ramdisk.
    Remount by binding each of these two mountpoints in the new root, so that we can have access to both volumes in the new ramdisk root itself once the root is changed, to synchronize back any modified RAM content to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
    A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
    To integrate this hook into a initramfs, a preset is needed.
    I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
    The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
    When shutting down, the ramdisk contents are synchronized back with the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
    I chose to use unison to synchronize between the volatile and the non-volatile mediums. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain, and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you synch manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), do not execute any other command before synchronization has completed, because ~/.bash_history, for example, would be updated, and unison would not update this file.
    Some prerequisites exist (by default):
        Packages: unison(, cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g and the nfs-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
        Referencing paths:
            The variables:
                source=
                temporary=
            ...should have the same value in all of these files:
                "/etc/initcpio/hooks/ram"
                "/root/Documents/rootfs/unmount-root-fs.sh"
                "/root/.rsync/exclude.txt"    -- Should correspond.
            This is needed to sync the RAM disk back to the hard disk.
        I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
    Here are all the accepted and used parameters:
        Parameter | Allowed Values | Default Value | Considered Values | Description
        root | Default (UUID=+, /dev/disk/by-*/*) | None | Any string | The source root
        rootfstype | Default of "-t <types>" of "mount" | "auto" | Any string | The FS type of the source root.
        rootflags | Default of "-o <options>" of "mount" | None | Any string | Options when mounting the source root.
        ram | Any string | None | "1" | If this hook should be run.
        ramfstype | Default of "-t <types>" of "mount" | "auto" | Any string | The FS type of the RAM disk.
        ramflags | Default of "-o <options>" of "mount" | "size=50%" | Any string | Options when mounting the RAM disk.
        ramcleanup | Any string | None | "0" | If any left-overs should be cleaned.
        ramcleanup_source | Any string | None | "1" | If the source root should be unmounted.
        ram_transfer_tool | cp,find,cpio,rsync,unison | unison | cp,find,cpio,rsync | What tool to use to transfer the root into RAM.
        ram_unison_fastcheck | true,false,default,yes,no,auto | "default" | true,false,default,yes,no,auto | Argument to unison's "fastcheck" parameter. Relevant if ram_transfer_tool=unison.
        ramdisk_cache_use | 0,1 | None | 0 | If unison should use any available cache. Relevant if ram_transfer_tool=unison.
        ramdisk_cache_update | 0,1 | None | 0 | If unison should copy the cache to the RAM disk. Relevant if ram_transfer_tool=unison.
    This is the basic setup.
    Optionally:
        I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
        I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
    Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- do the Arch forums allow attachments?)
    I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
    Of course, use what you need from the provided files.
    Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
        find+cpio:  2:10s (2:12s on slower hardware)
        unison:      3:10s - 4:00s
        cp:             4 minutes (31 minutes on slower hardware)
        rsync:        4:40s (55 minutes on slower hardware)
        Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
    These are the remaining issues:
        find+cpio option does not create any destination files.
        (On some older hardware) When booting up, the source disk is not always detected.
        When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
    Inconveniences:
        Unison needs to perform an update detection at each startup.
        The initramfs' ash does not expand wildcard characters, which is needed to use "cp".
    That's about what I can think of for now.
    I will gladly try to answer any questions.
    I don't consider myself a UNIX expert, so I would like to know your suggestions for improvement, especially from those who consider themselves experts.
    Last edited by AGT (2014-05-20 23:21:45)

    How did you use/test unison? In my case, unison, of course, is used in the cpio image, where there are no cache files, because unison has not been run yet in the initcpio image, before it had a chance to be used during boot time, to generate them; and during start up is when it is used; when it creates the archives. ...a circular dependency.
    Yet, files changed by the user would still need to be traversed to detect changes. So, I think that even providing pre-made cache files would not guarantee that they would be valid at start up, for all configurations of installation.
    -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM), after they have been created, and used next time by copying them in the initcpio image during each start up. I think $HOME would need to be set.
    Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
    Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed that it was faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U at 1.9 GHz CPU with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 at 2 GHz CPU with 2 cores/2 threads and 4 GB of RAM.
    I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
    Grub initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
    I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
    Last edited by AGT (2014-05-20 16:49:35)

  • How to save a file into file system from database?

    Hi!
    I am using APEX 2.2.1. I have created a report showing the filename and id of a table. The table has a BLOB column containing the file. I want to hyperlink the filename, so that I can go ahead and save the file into my file system. But I don't know what to give in the URL field of the filename column attributes. How do I get the file id of a file already in the database so I can link to it and use it? Any ideas please.
    Regards,
    Deepa.

    This is a reference for the file download:
    http://download-uk.oracle.com/docs/cd/B31036_01/doc/appdev.22/b28839/up_dn_files.htm#CJAHDJDA
    I have a demo page showing this as well:
    http://htmldb.oracle.com/pls/otn/f?p=31517:15
    Denes Kubicek

  • How to copy NCLOB value(Contains Word Document) into file system

    How do I copy an NCLOB value (containing a Word document) into the file system, or display it in SQL*Plus?

    The UTL_FILE package will only write to a text file, which does not work for the NCLOB value (it contains images as well as text).
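    As a sketch of an alternative, client-side route (not UTL_FILE), a small JDBC program can stream the LOB out of the database and write it to a file. The table and column names below (DOCS, DOC_ID, DOC_CONTENT) and the connection details are hypothetical placeholders:

    import java.io.FileWriter;
    import java.io.Reader;
    import java.io.Writer;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch only: stream an NCLOB column to a local file, 8 K characters at a time.
    public class NclobToFile {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT doc_content FROM docs WHERE doc_id = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // Copy the LOB's character stream to a file on the client's file system.
                        try (Reader in = rs.getNClob(1).getCharacterStream();
                             Writer out = new FileWriter("doc_42.txt")) {
                            char[] buf = new char[8192];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                out.write(buf, 0, n);
                            }
                        }
                    }
                }
            }
        }
    }

    If the stored content is really binary (such as a Word document), a BLOB column read with getBlob(...).getBinaryStream() and copied to a FileOutputStream is the more natural fit; an NCLOB is intended for character data.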

  • LONG_RAW conversion (using OLE) to file system then import into BLOB column

    To whom it may concern (I will call you "hero" or "savior" if you can solve this for me),
    I posted this message in another thread, but then later thought it would be better in its own thread.
    I was wondering how to extract the Adobe Acrobat Reader files (PDF) from an OLE component stored in a LONG_RAW database column. I was able to successfully extract the MS-Word, MS-Excel, MS-Powerpoint, and MS-Project files to the operating system. I was not able to extract the Adobe PDF files. The PDF files do not return an error message, nor do they get written out to the file system.
    Leonid previously posted some suggestions on how to convert the data. Can anyone expand on tasks #4 and #5 (see below)?
    You can use this library to do the following task:
    1. Get the type of an object stored inside OLE Item. Get information about the OLE server.
    GetCLSID, GetProgID, IsIDispatchSupported
    2. Get information about the object
    IsLinked, GetSourceDisplayName
    3. Get information from the object in formats supported by its IDataObject interface.
    EnumFormatEtc, RegClipFormat, GetData
    I suppose it's the best way to extract pictures!
    4. Save the object into an external file through the IPersistFile interface. It works only if the IPersistFile::Save method is implemented by the OLE server. Unfortunately, some OLE servers don't support this method and don't even return a correct error code.
    SaveToFile
    Works with: MS Word, MS Excel
    Does not work with: Adobe Acrobat (I'm not sure; maybe I've done something wrong), MS Photo Editor
    5. Extract the 'Contents' stream from a structured storage /a compound document/.
    ExtractToFile
    It isn't a documented way, but it can be used to extract PDF files.
    I have tried the following two scenarios for the PDF extraction:
    -- First Attempt
    ret:=OLEXTRA.SaveToFile( application, v_FILENAME_WO_EXT || '.pdf' );
    if ret <> 0 then
    show_message( 'SaveToFile Error code='||ret );
    end if;
    -- application does not return an error message
    -- application does not save the PDF file to the operating system.
    -- Second Attempt
    ret:=OLEXTRA.ExtractToFile( application, 'stg.pdf', 'fname.pdf' );
    if ret <> 0 then
    show_message( 'ExtractToFile Error code='||ret );
    end if;
    -- application does not return an error message.
    -- application does not save fname.pdf to the operating system
    -- application saves the stg.pdf to file system, but cannot open in Adobe
    Thanks in advance for the help,
    Mike

    There is no way from PL/SQL in Forms - you'll have to call out to C or Java code to do it.
    You can link Pro*C or OCI code into Forms as a user exit, in which case it can share the Forms connection, but that's not for the faint-hearted.
    Using Java - called from the Java Importer feature in 6i+ - you'll end up creating a separate connection.
    Frankly, if you can do it in a generic enough way through a VB executable, stick with that.
    Forms can get images in and out of a LONG RAW using read_image_file and write_image_file, but that does not extend to arbitrary binary files - just images.
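    For the Java route mentioned above, here is a minimal JDBC sketch of the raw extraction step; the table and column names (DOC_TABLE, DOC_ID, OLE_DATA) and connection details are hypothetical. What it writes out is still the OLE compound document, not a bare PDF; the embedded 'Contents' stream would still have to be pulled out of it (which is what OLEXTRA.ExtractToFile attempts).

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Sketch only: dump a LONG RAW column to a local file as a byte stream.
    public class LongRawToFile {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT ole_data FROM doc_table WHERE doc_id = ?")) {
                ps.setInt(1, 1);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        // LONG RAW columns must be read as a stream, in column order.
                        try (InputStream in = rs.getBinaryStream(1);
                             OutputStream out = new FileOutputStream("doc_1.ole")) {
                            byte[] buf = new byte[8192];
                            int n;
                            while ((n = in.read(buf)) != -1) {
                                out.write(buf, 0, n);
                            }
                        }
                    }
                }
            }
        }
    }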

  • Backup into file system

    Hi
    Setting backup-storage with the following configuration is not generating backup files under the said location - we are pumping a huge volume of data, and the data (a few GB) is not getting backed up into the file system - can you let me know what it is that I am missing here?
    Thanks
    sunder
    <distributed-scheme>
         <scheme-name>distributed-Customer</scheme-name>
         <service-name>DistributedCache</service-name>
         <!-- <thread-count>5</thread-count> -->
         <backup-count>1</backup-count>
         <backup-storage>
         <type>file-mapped</type>
         <directory>/data/xx/backupstorage</directory>
         <initial-size>1KB</initial-size>
         <maximum-size>1KB</maximum-size>
         </backup-storage>
         <backing-map-scheme>
              <read-write-backing-map-scheme>
                   <scheme-name>DBCacheLoaderScheme</scheme-name>
                   <internal-cache-scheme>
                   <local-scheme>
                        <scheme-ref>blaze-binary-backing-map</scheme-ref>
                   </local-scheme>
                   </internal-cache-scheme>
                   <cachestore-scheme>
                        <class-scheme>
                             <class-name>com.xxloader.DataBeanInitialLoadImpl
                             </class-name>
                             <init-params>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>{cache-name}</param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>com.xx.CustomerProduct
                                       </param-value>
                                  </init-param>
                                  <init-param>
                                       <param-type>java.lang.String</param-type>
                                       <param-value>CUSTOMER</param-value>
                                  </init-param>
                             </init-params>
                        </class-scheme>
                   </cachestore-scheme>
                   <read-only>true</read-only>
              </read-write-backing-map-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
    <scheme-name>blaze-binary-backing-map</scheme-name>
    <high-units>{back-size-limit 1}</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <expiry-delay>{back-expiry 0}</expiry-delay>
    <cachestore-scheme></cachestore-scheme>
    </local-scheme>

    Hi
    We did try out with the following configuration
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>off-heap</type>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    With this configuration, the residual main memory consumption is about 15 GB.
    When we changed this configuration to
    <near-scheme>
         <scheme-name>blaze-near-HeaderData</scheme-name>
    <front-scheme>
    <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>{front-size-limit 0}</high-units>
    <unit-calculator>FIXED</unit-calculator>
    <expiry-delay>{back-expiry 1h}</expiry-delay>
    <flush-delay>1m</flush-delay>
    </local-scheme>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <scheme-ref>blaze-distributed-HeaderData</scheme-ref>
    </distributed-scheme>
    </back-scheme>
    <invalidation-strategy>present</invalidation-strategy>
    <autostart>true</autostart>
    </near-scheme>
    <distributed-scheme>
    <scheme-name>blaze-distributed-HeaderData</scheme-name>
    <service-name>DistributedCache</service-name>
    <partition-count>200</partition-count>
    <backing-map-scheme>
    <partitioned>true</partitioned>
    <read-write-backing-map-scheme>
    <internal-cache-scheme>
    <external-scheme>
    <high-units>20</high-units>
    <unit-calculator>BINARY</unit-calculator>
    <unit-factor>1073741824</unit-factor>
    <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>50MB</maximum-size>
    </nio-memory-manager>
    </external-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>
    com.xx.loader.DataBeanInitialLoadImpl
    </class-name>
    <init-params>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>{cache-name}</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>com.xx.bean.HeaderData</param-value>
    </init-param>
    <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>SDR.TABLE_NAME_XYZ</param-value>
    </init-param>
    </init-params>
    </class-scheme>
    </cachestore-scheme>
    </read-write-backing-map-scheme>
    </backing-map-scheme>
    <backup-count>1</backup-count>
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    <autostart>true</autostart>
    </distributed-scheme>
    Note backup storage is file-mapped
    <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
    <directory>/data/xxcache/blazeload/backupstorage</directory>
    <file-name>{cache-name}.store</file-name>
    </backup-storage>
    We still see that the process's residual main memory consumption is 15 GB, and we also see that the /data/xxcache/blazeload/backupstorage folder is empty.
    We wanted to check where the backup storage maintains the information - we would like to offload this to a flat file.
    Appreciate any pointers in this regard.
    Thanks
    sunder

  • I am on an iMac with OS X 10.8.4 and cannot get into iPhoto; I keep receiving the message "The library could not be opened because the file system of the library's volume is unsupported."

    I am on an iMac with OS X 10.8.4 and cannot get into iPhoto; I keep receiving the message "The library could not be opened because the file system of the library's volume is unsupported."

    Where is the Library? It needs to be on a disk formatted Mac OS Extended (Journaled)

  • Fsck on HFS+ File System and bad block counter

    Hi community,
    does the fsck command also check for bad blocks in single user mode? Is there a command available in Mac OS X, similar to the dumpe2fs command for ext2/3 file systems, to print out the actual number of bad blocks?
    Thx & Bye Tom

    +does the fsck command also check for bad blocks in single user mode?+
    No.
    +Is there a command available in Mac OS X, similar to the dumpe2fs command for ext2/3 file systems, to print out the actual number of bad blocks?+
    Not a built-in command. If your device is supported by smartmontools, that's the way to go. It's possible that hfsdebug may do something like what you want, but it doesn't scan for unmapped bad blocks.

  • How to upload a file into file system on server?

    We want to allow our partners to upload .csv files to our Unix box. How do we upload a file into the file system? I might be wrong, but after reading the documentation, I think the OA Framework only allows saving the file in the database. Please advise.
    Thanks,

    Hi,
    I was making a very silly mistake and found it. We need to create a new entity object before we try to fill it with data. So what we can do is, in the Controller, call a method in the AM that creates a new entity object (this should be done in processRequest, so that the VO gets initialized when the page loads).
    Now you can fill the entity with data in the page (i.e. browse and choose the file), and then, when the page is submitted, call a method in the AM again, but now in processFormRequest, to do the commit.
    code example
    In the AM class
    /** Creates a new CV record. To be called from processRequest when the page loads. */
    public void createCVRecord()
    {
        XxPersonCVVOImpl vo = getXxPersonCVVO1();
        if (!vo.isPreparedForExecution())
        {
            vo.executeQuery();
        }
        Row row = vo.createRow();
        vo.insertRow(row);
        // Required per OA Framework Model Coding Standard M69
        row.setNewRowState(Row.STATUS_INITIALIZED);
    } // end createCVRecord()
    /** The commit method. To be called from processFormRequest when the page is submitted. */
    public void apply()
    {
        getTransaction().commit();
    }
    Or else there is a different way... please refer to the following thread for more details:
    File Upload
    Thanks to Martin, who pointed out the mistake.
