Filesystem-based Hashtable

I need to store a lot of data in an associative memory. Does anyone know of a HashMap / Hashtable or other associative-memory equivalent that uses the file system instead of RAM to store the data? (The data only needs to be stored during one Java session, but it is a few GB large, so only the file system can serve as storage space.)
So far I have found JDBMHashtable on SourceForge; however, its implementation is lousy and very slow.
Cheers,
Andreas

Depending on your data-access needs, perhaps something like this would work?

import java.util.HashMap;

class BufferedFileHashMap {
     private int bufferSize;
     private HashMap mymap;
     private volatile int additionCount = 0;
     private int buffered = 0;

     public BufferedFileHashMap(int bufferSize){
          mymap           = new HashMap(bufferSize);
          this.bufferSize = bufferSize;
     }

     public Object put(Object key, Object obj){
          if(additionCount == bufferSize){
               buffered++;
               // serialize mymap to a file and start a new one
               mymap = new HashMap();
               additionCount = 0; // reset the counter for the fresh map
          }
          additionCount++;
          return mymap.put(key, obj);
     }

     public Object get(Object key){
          // check the current in-memory map first ...
          Object obj = mymap.get(key);
          if(obj != null)
               return obj;
          // save the current map to a temp file and read the
          // previously serialized maps back in, one at a time
          for(int k = 0; k < buffered; k++){
               // read the next serialized map into mymap, then check it
               obj = mymap.get(key);
               if(obj != null)
                    break;
          }
          // read the current map back in from its temp location
          return obj;
     }
}

public class Test {
     public static void main(String[] args){
          BufferedFileHashMap map = new BufferedFileHashMap(1000); // buffer 1000 objects
     }
}

Although, just a thought (I've never done this before): will the HashMap also serialize all the objects it contains? Hmm, perhaps it won't, and that could be the real reason this approach is a problem anyway.
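As for the "serialize mymap to a file" step that is only sketched in the comments above: here is a minimal illustration of what those helpers could look like using plain Java object serialization. The class and method names (MapSpiller, writeMapToDisk, readMapFromDisk) are just placeholders made up for the sketch, and it assumes every key and value in the map implements Serializable.

import java.io.*;
import java.util.HashMap;

class MapSpiller {
     // Hypothetical helper: spill one buffered map out to its own file.
     // Assumes every key and value in the map implements Serializable.
     static void writeMapToDisk(HashMap map, File file) throws IOException {
          ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file));
          try {
               out.writeObject(map); // writes the map together with its entries
          } finally {
               out.close();
          }
     }

     // Hypothetical helper: read a previously spilled map back in.
     static HashMap readMapFromDisk(File file) throws IOException, ClassNotFoundException {
          ObjectInputStream in = new ObjectInputStream(new FileInputStream(file));
          try {
               return (HashMap) in.readObject();
          } finally {
               in.close();
          }
     }
}

To the closing question: HashMap itself is Serializable and writes out its entries, so writeObject(map) will serialize the contained keys and values too; it only fails (with a NotSerializableException) if one of them isn't Serializable. The bigger drawback of this scheme is that a get() for a key in an old buffer has to deserialize entire maps from disk, which will be slow for a few GB of data.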

Similar Messages

  • Only after deleting filesystem based cache files new content is shown

    Hi,
    it seems that new content on my pages ends up in the filesystem cache on the middle tier. I have no influence on this behaviour via the (caching-related) page settings.
    Nor does invalidating the cache on the Access tab have any effect.
    Only when I remove the middle-tier cache files (in ..Apache\modplsq\cache) does my new content show up on the page.
    Has anyone seen this behaviour?
    Oracle says that this is normal behaviour and that I should look into the documentation to change it...
    I'm lost though...
    Thanx Dave Ruzius

    Dave,
    What do you mean by not having access to influence the caching - do you mean you don't have admin access to the pages in question ?
    I suggest you read this document to understand the caching mechanisms for pages.
    Bottom line is, there are 3 caching modes (prior to 10g) for pages..
    - cache page defn only
    - cache page and content for [n] minutes
    - don't cache
    From what you're describing, it sounds like the page has been cached with "cache page and content for [n] minutes". This will cause the content to be cached in the file cache and the local browser cache, but NOT in the webcache. The content will be cached for a period of time defined by the page designer and will only expire after that period. There should be a refresh link on the page when it's cached using this method (unless the page designer has removed it).
    Does that help ?

  • Increase root filesystem in ldom

    Hi
    I have an LDom with a flat file /ldoms/ldom1-data on the backend, used as a virtual disk (c0d2s0) for data in the LDom. Now this virtual disk is running out of space in the LDom. How can I increase the size of this filesystem in the LDom, or is my only option to create a bigger file in the primary domain, export it to the LDom as a new virtual disk, partition it in the LDom, and copy the data from the old disk to the new one?
    Appreciate any inputs
    TIA

    Hello
    I don't think you can do this on sol10u6 without any patches, assuming that you don't have SVM, just a slice.
    Solaris Does Not Automatically Handle an Increase in LUN Size (Doc ID 1382180.1)
    How to expand a UFS Lun in Solaris 10 (Doc ID 1451858.1)
    And the previous doc I gave you has the example for UFS too, but with sol11; mixing these docs a bit, I think you will be able to do it, but first don't forget to take a backup, just in case something goes wrong.
    How to Increase the Size of a Vdisk and Filesystem on a LDom Guest Domain (Doc ID 1549604.1)
    —snip —
    Procedure to increase size of a UFS filesystem based on SMI  label in a guest domain running Solaris 11
    Another way is to add a new, bigger LUN, mirror onto it, and then remove the small LUN (for this you need SVM or ZFS in the guest).
    Regards
    Eze

  • How to check the information in a backup device?

    For example, suppose I have a disk dump device and I use it for backups: maybe a full backup, maybe a log backup, maybe an incremental backup, or even backups of different databases to the same dump device.
    How can I then extract information about how many different types of backup are on this device? Is there any command that can list all the backup info?

    Disk dump devices are typically useful for dumps to a tape drive, i.e., the tape drive address rarely (if ever) changes ... so an operator just has to switch out tapes on a regular basis while the regularly scheduled jobs dump to the same device every time.
    For filesystem-based dumps you're better off skipping the use of dump devices and just generating a dump file name on the fly.
    Keep in mind that you do NOT have to dump to a dump device.  You can dump straight to a file ...
    ===========================
    dump database mydb to "/dump_dir/MYDS.mydb.20140321.211537.dbdump"
    ===========================
    ... and you can even go so far as to build the dump command 'on the fly' with T-SQL, eg:
    ===========================
    declare @dumpcmd varchar(500), @dbname varchar(30)
    select  @dbname = db_name()
    select  @dumpcmd = 'dump database '+@dbname+' to "/dump_dir/'+@@servername+'.'+@dbname+'.'+
               convert(varchar,getdate(),112)+'.'+
               str_replace(convert(varchar,getdate(),108),':',null)+'.dbdump"'
    exec(@dumpcmd)
    ===========================
    NOTE: I'm not sitting at an ASE at the moment so you'll need to doublecheck the syntax of the above T-SQL.

  • What are the main causes of slowdown (long term)?

    I've had my MacBook for about 3½ years now, and it's starting to slow down. (Hardware specs are in sig.)
    Yup, I know it's normal. It's happened with every computer I've owned, regardless of platform.
    *_WHAT I CURRENTLY DO ABOUT IT_*
    I routinely run Maintenance (http://www.titanium.free.fr/pgs2/english/maintenance.html), a free app that—yep, you guessed it!—performs maintenance on your Mac. It uses a mix of maintenance scripts that are built-in to OS X (OS X runs them silently, but this allows them to be run on demand), and some of its own methods for cleaning a Mac to keep it running smoothly.
    I've found that Maintenance does help—I definitely recommend it!—but the older my MacBook has become, it seems the less Maintenance can do to mitigate the slowdown that comes with age.
    *_MY CURRENT BELIEF_*
    I don't think it's because the hardware is old by today's standards, because in my experience if you reformat any machine it will operate significantly faster, even after restoring all of the software that was removed by the format.
    I've become increasingly suspicious about the hard drive's free space's effect on a system's performance. I've always known that it has a noticeable effect—less swap space for virtual memory—but I'm beginning to wonder if the effect of a mostly-full hard drive is much more than I have previously imagined. Just a feeling.
    *_MY QUESTIONS_*
    *#1: What are the leading causes of slowdown (long term)?*
    *#2: What are the solutions to each of the above problems?*
    • acknowledging that there may not be a solution to every problem, and
    • some solutions may solve a problem “more completely” than others
    Edited for clarity by Tony R.

    • I was under the impression that Disk Utility's “Repair Disk” would fix all non-permissions-related errors (other than physical damage, obviously). If they can be fixed by a backup-reformat-restore, then I take it they are software (filesystem) based problems. If so, why can't software solve the problem without a reformat?
    Disk Utility can and does repair permissions and basic disk errors. However, it does not rebuild directory data.
    • How exactly do directory structure errors happen, and what other consequences do they have (if any)?
    The article on Alsoft's website should help:
    What is directory damage?
    It is worth noting that in the future, when Mac OS X supports the ZFS disk format, directory damage will be a thing of the past.
    However, as you said:
    Yep, I'm at 11.2% free disk space. Thanks for the info!
    I think this is your main issue. Try moving your iTunes library and movies to an external drive, then see how your Mac performs.
    Also, a good way to reclaim some space is to delete all the language localisations you don't use. It is amazing that iWeb alone can install a few hundred megabytes of foreign-language templates that you will never use.
    I used to use Mike Bombich's Delocalizer to do this, but that app is no longer available, so I use Monolingual now.
    Edit: as you have a MacBook, you are lucky in that it is really straightforward to install a bigger internal hard drive.
    You can now get 500GB 2.5-inch SATA hard drives. You could take out the old drive, hook it up to a USB-to-SATA cable, and install a bigger hard drive, then connect the USB cable to your Mac. Then boot from your Leopard disc and use Disk Utility to clone your old drive onto your new drive.
    Message was edited by: Tim Haigh

  • Why do i need udev+udisks+udisks2+gvfs installed to dynamic mount?

    hi there,
    yes, i have used the search function on that, but still have unanswered questions.
    1.
    why do i need udev+udisks+udisks2+gvfs installed to dynamically mount internal (ntfs, ext4) partitions ?
    If one of these packages is missing, mounting an internal drive with "pcmanfm" is not possible.
    I know how to statically mount these drives via "fstab", but I want to mount them only when I need access.
    2.
    why are my removable devices not automatically mounted in "pcmanfm" when plugged in?
    I have another OS (Lubuntu) running and this automatically recognizes when a cd is inserted or a usb stick is plugged in.
    I have tried installing the package "gvfs-afc" and rebooting; still no USB stick to be seen. But when I enter:
    sudo blkid -c /dev/null
    the USB stick is listed as "sdb1".
    I am using 64bit arch linux 3.9.3-1 with openbox+lxde.

    jasonwryan wrote:
    You don't. You need udev for a whole lot of other stuff, so leave that aside. To automount removable media, you can just use udisks and a helper like ud{iskie,evil}.
    For an ntfs partition, you will also need that driver.
    Comparing it with the Lubuntu; I am sure there is a lot more cruft preinstalled that makes this happen. In Arch, you just install what you need.
    The udev page has the details.
    So I have uninstalled the gvfs+udisks2 packages, rebooted, installed udevil-git and rebooted again.
    No partition is shown in the file manager now. I really don't get it. The udev wiki says udev needs rules, but my "/etc/udev/rules.d" folder is empty.
    The udisks wiki says that udisks and udisks2 are incompatible, that only one is needed, and that udisks2 should be installed for GNOME systems and udisks for Xfce, but I have LXDE installed. So it is not working with udisks and LXDE (pcmanfm); when I try to install udisks2 additionally, it also does not work. Uninstalling udisks is also not possible because of the dependency on libfm and so on...
    Here is my /etc/udevil/udevil-user-harry.conf:
    # udevil configuration file /etc/udevil/udevil.conf
    # This file controls what devices, networks, and files users may mount and
    # unmount via udevil (set suid).
    # IMPORTANT: IT IS POSSIBLE TO CREATE SERIOUS SECURITY PROBLEMS IF THIS FILE
    # IS MISCONFIGURED - EDIT WITH CARE
    # Note: For greater control for specific users, including root, copy this
    # file to /etc/udevil/udevil-user-USERNAME.conf replacing USERNAME with the
    # desired username (eg /etc/udevil/udevil-user-jim.conf).
    # Format:
    # OPTION = VALUE[, VALUE, ...]
    # DO NOT USE QUOTES except literally
    # Lines beginning with # are ignored
    # To log all uses of udevil, set log_file to a file path:
    #log_file = /var/log/udevil.log
    # Approximate number of days to retain log entries (0=forever, max=60):
    log_keep_days = 10
    # allowed_types determines what fstypes can be passed by a user to the u/mount
    # program, what device filesystems may be un/mounted implicitly, and what
    # network filesystems may be un/mounted.
    # It may also include the 'file' keyword, indicating that the user is allowed
    # to mount files (eg an ISO file). The $KNOWN_FILESYSTEMS variable may
    # be included to include common local filesystems as well as those listed in
    # /etc/filesystems and /proc/filesystems.
    # allowed_types_USERNAME, if present, is used to override allowed_types for
    # the specific user 'USERNAME'. For example, to allow user 'jim' to mount
    # only vfat filesystems, add:
    # allowed_types_jim = vfat
    # Setting allowed_types = * does NOT allow all types, as this is a security
    # risk, but does allow all recognized types.
    # allowed_types = $KNOWN_FILESYSTEMS, file, cifs, smbfs, nfs, curlftpfs, ftpfs, sshfs, davfs, tmpfs, ramfs
    allowed_types = $KNOWN_FILESYSTEMS, file, ntfs, vfat
    # allowed_users is a list of users permitted to mount and unmount with udevil.
    # Wildcards (* or ?) may be used in the usernames. To allow all users,
    # specify "allowed_users=*". UIDs may be included using the form UID=1000.
    # For example: allowed_users = carl, UID=1000, pre*
    # Also note that permission to execute udevil may be limited to users belonging
    # to the group that owns /usr/bin/udevil, such as 'plugdev' or 'storage',
    # depending on installation.
    # allowed_users_FSTYPE, if present, is used to override allowed_users when
    # mounting or unmounting a specific fstype (eg nfs, ext3, file).
    # Note that when mounting a file, fstype will always be 'file' regardless of
    # the internal fstype of the file.
    # For example, to allow only user 'bob' to mount nfs shares, add:
    # allowed_users_nfs = bob
    # The root user is NOT automatically allowed to use udevil in some cases unless
    # listed here (except for unmounting anything or mounting fstab devices).
    allowed_users = harry, root
    # allowed_groups is a list of groups permitted to mount and unmount with
    # udevil. The user MUST belong to at least one of these groups. Wildcards
    # or GIDs may NOT be used in group names, but a single * may be used to allow
    # all groups.
    # Also note that permission to execute udevil may be limited to users belonging
    # to the group that owns /usr/bin/udevil, such as 'plugdev' or 'storage',
    # depending on installation.
    # allowed_groups_FSTYPE, if present, is used to override allowed_groups when
    # mounting or unmounting a specific fstype (eg nfs, ext3, file). For example,
    # to allow only members of the 'network' group to mount smb and nfs shares,
    # use both of these lines:
    # allowed_groups_smbfs = network
    # allowed_groups_nfs = network
    # The root user is NOT automatically allowed to use udevil in some cases unless
    # listed here (except for unmounting anything or mounting fstab devices).
    allowed_groups = storage
    # allowed_media_dirs specifies the media directories in which user mount points
    # may be located. The first directory which exists and does not contain a
    # wildcard will be used as the default media directory (normally /media or
    # /run/media/$USER).
    # The $USER variable, if included, will be replaced with the username of the
    # user running udevil. Wildcards may also be used in any directory EXCEPT the
    # default. Wildcards will not match a /
    # allowed_media_dirs_FSTYPE, if present, is used to override allowed_media_dirs
    # when mounting or unmounting a specific fstype (eg ext2, nfs). For example,
    # to cause /media/network to be used as the default media directory for
    # nfs and ftpfs mounts, use these two lines:
    # allowed_media_dirs_nfs = /media/network, /media, /run/media/$USER
    # allowed_media_dirs_ftpfs = /media/network, /media, /run/media/$USER
    # NOTE: If you want only the user who mounted a device to have access to it
    # and be allowed to unmount it, specify /run/media/$USER as the first
    # allowed media directory.
    # IMPORTANT: If an allowed file is mounted to a media directory, the user may
    # be permitted to unmount its associated loop device even though internal.
    # INCLUDING /MNT HERE IS NOT RECOMMENDED. ALL ALLOWED MEDIA DIRECTORIES
    # SHOULD BE OWNED AND WRITABLE ONLY BY ROOT.
    allowed_media_dirs = /media, /run/media/$USER
    # allowed_devices is the first criteria for what block devices users may mount
    # or unmount. If a device is not listed in allowed_devices, it cannot be
    # un/mounted (unless in fstab). However, even if a device is listed, other
    # factors may prevent its use. For example, access to system internal devices
    # will be denied to normal users even if they are included in allowed_devices.
    # allowed_devices_FSTYPE, if present, is used to override allowed_devices when
    # mounting or unmounting a specific fstype (eg ext3, ntfs). For example, to
    # prevent all block devices containing an ext4 filesystem from being
    # un/mounted use:
    # allowed_devices_ext4 =
    # Note: Wildcards may be used, but a wildcard will never match a /, except
    # for "allowed_devices=*" which allows any device. The recommended setting is
    # allowed_devices = /dev/*
    # WARNING: ALLOWING USERS TO MOUNT DEVICES OUTSIDE OF /dev CAN CAUSE SERIOUS
    # SECURITY PROBLEMS. DO NOT ALLOW DEVICES IN /dev/shm
    allowed_devices = /dev/*
    # allowed_internal_devices causes udevil to treat any listed block devices as
    # removable, thus allowing normal users to un/mount them (providing they are
    # also listed in allowed_devices).
    # allowed_internal_devices_FSTYPE, if present, is used to override
    # allowed_internal_devices when mounting or unmounting a specific fstype
    # (eg ext3, ntfs). For example, to allow block devices containing a vfat
    # filesystem to be un/mounted even if they are system internal devices, use:
    # allowed_internal_devices_vfat = /dev/sdb*
    # Some removable esata drives look like internal drives to udevil. To avoid
    # this problem, they can be treated as removable with this setting.
    # WARNING: SETTING A SYSTEM DEVICE HERE CAN CAUSE SERIOUS SECURITY PROBLEMS.
    # allowed_internal_devices =
    # allowed_internal_uuids and allowed_internal_uuids_FSTYPE work similarly to
    # allowed_internal_devices, except that UUIDs are specified instead of devices.
    # For example, to allow un/mounting of an internal filesystem based on UUID:
    # allowed_internal_uuids = cc0c4489-8def-1e5b-a304-ab87c3cb626c0
    # WARNING: SETTING A SYSTEM DEVICE HERE CAN CAUSE SERIOUS SECURITY PROBLEMS.
    # allowed_internal_uuids =
    # forbidden_devices is used to prevent block devices from being un/mounted
    # even if other settings would allow them (except devices in fstab).
    # forbidden_devices_FSTYPE, if present, is used to override
    # forbidden_devices when mounting or unmounting a specific fstype
    # (eg ext3, ntfs). For example, to prevent device /dev/sdd1 from being
    # mounted when it contains an ntfs filesystem, use:
    # forbidden_devices_ntfs = /dev/sdd1
    # NOTE: device node paths are canonicalized before being tested, so forbidding
    # a link to a device will have no effect.
    forbidden_devices =
    # allowed_networks determines what hosts may be un/mounted by udevil users when
    # using nfs, cifs, smbfs, curlftpfs, ftpfs, or sshfs. Hosts may be specified
    # using a hostname (eg myserver.com) or IP address (192.168.1.100).
    # Wildcards may be used in hostnames and IP addresses, but CIDR notation
    # (192.168.1.0/16) is NOT supported. IP v6 is supported. For example:
    # allowed_networks = 127.0.0.1, 192.168.1.*, 10.0.0.*, localmachine, *.okay.com
    # Or, to prevent un/mounting of any network shares, set:
    # allowed_networks =
    # allowed_networks_FSTYPE, if present, is used to override allowed_networks
    # when mounting or unmounting a specific network fstype (eg nfs, cifs, sshfs,
    # curlftpfs). For example, to limit nfs and samba shares to only local
    # networks, use these two lines:
    # allowed_networks_nfs = 192.168.1.*, 10.0.0.*
    # allowed_networks_cifs = 192.168.1.*, 10.0.0.*
    allowed_networks = *
    # forbidden_networks and forbidden_networks_FSTYPE are used to specify networks
    # that are never allowed, even if other settings allow them (except fstab).
    # NO REVERSE LOOKUP IS PERFORMED, so including bad.com will only have an effect
    # if the user uses that hostname. IP lookup is always performed, so forbidding
    # an IP address will also forbid all corresponding hostnames.
    forbidden_networks =
    # allowed_files is used to determine what files in what directories may be
    # un/mounted. A user must also have read permission on a file to mount it.
    # Note: Wildcards may be used, but a wildcard will never match a /, except
    # for "allowed_files=*" which allows any file. For example, to allow only
    # files in the /share directory to be mounted, use:
    # allowed_files = /share/*
    # NOTE: Specifying allowed_files_FSTYPE will NOT work because the fstype of
    # files is always 'file'.
    allowed_files = *
    # forbidden_files is used to specify files that are never allowed, even if
    # other settings allow them (except fstab). Specify a full path.
    # Note: Wildcards may be used, but a wildcard will never match a /, except
    # for "forbidden_files = *".
    # NOTE: file paths are canonicalized before being tested, so forbidding
    # a link to a file will have no effect.
    forbidden_files =
    # default_options specifies what options are always included when performing
    # a mount, in addition to any options the user may specify.
    # Note: When a device is present in /etc/fstab, and the user does not specify
    # a mount point, the device is mounted with normal user permissions using
    # the fstab entry, without these options.
    # default_options_FSTYPE, if present, is used to override default_options
    # when mounting a specific fstype (eg ext2, nfs).
    # The variables $USER, $UID, and $GID are changed to the user's username, UID,
    # and GID.
    # FOR GOOD SECURITY, default_options SHOULD ALWAYS INCLUDE: nosuid,noexec,nodev
    # WARNING: OPTIONS PRESENT OR MISSING CAN CAUSE SERIOUS SECURITY PROBLEMS.
    default_options = nosuid, noexec, nodev, noatime
    default_options_file = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID, ro
    # mount iso9660 with 'ro' to prevent mount read-only warning
    default_options_iso9660 = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID, ro, utf8
    default_options_udf = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID
    default_options_vfat = nosuid, noexec, nodev, noatime, fmask=0022, dmask=0022, uid=$UID, gid=$GID, utf8
    default_options_msdos = nosuid, noexec, nodev, noatime, fmask=0022, dmask=0022, uid=$UID, gid=$GID
    default_options_umsdos = nosuid, noexec, nodev, noatime, fmask=0022, dmask=0022, uid=$UID, gid=$GID
    default_options_ntfs = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID, utf8
    default_options_cifs = nosuid, noexec, nodev, uid=$UID, gid=$GID
    default_options_smbfs = nosuid, noexec, nodev, uid=$UID, gid=$GID
    default_options_sshfs = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID, nonempty, allow_other
    default_options_curlftpfs = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID, nonempty, allow_other
    default_options_ftpfs = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID
    default_options_davfs = nosuid, noexec, nodev, uid=$UID, gid=$GID
    default_options_tmpfs = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID
    default_options_ramfs = nosuid, noexec, nodev, noatime, uid=$UID, gid=$GID
    # allowed_options determines all options that a user may specify when mounting.
    # All the options used in default_options above must be included here too, or
    # they will be rejected. If the user attempts to use an option not included
    # here, an error will result. Wildcards may be used.
    # allowed_options_FSTYPE, if present, is used to override allowed_options
    # when mounting a specific fstype (eg ext2, nfs).
    # The variables $USER, $UID, and $GID are changed to the user's username, UID,
    # and GID.
    # If you want to forbid remounts, remove 'remount' from here.
    # WARNING: OPTIONS HERE CAN CAUSE SERIOUS SECURITY PROBLEMS - CHOOSE CAREFULLY
    allowed_options = nosuid, noexec, nodev, noatime, fmask=0022, dmask=0022, uid=$UID, gid=$GID, ro, rw, sync, flush, iocharset=*, utf8, remount
    allowed_options_nfs = nosuid, noexec, nodev, noatime, ro, rw, sync, remount, port=*, rsize=*, wsize=*, hard, proto=*, timeo=*, retrans=*
    allowed_options_cifs = nosuid, noexec, nodev, ro, rw, remount, port=*, user=*, username=*, pass=*, password=*, guest, domain=*, uid=$UID, gid=$GID, credentials=*
    allowed_options_smbfs = nosuid, noexec, nodev, ro, rw, remount, port=*, user=*, username=*, pass=*, password=*, guest, domain=*, uid=$UID, gid=$GID, credentials=*
    allowed_options_sshfs = nosuid, noexec, nodev, noatime, ro, rw, uid=$UID, gid=$GID, nonempty, allow_other, idmap=user, BatchMode=yes, port=*
    allowed_options_curlftpfs = nosuid, noexec, nodev, noatime, ro, rw, uid=$UID, gid=$GID, nonempty, allow_other, user=*
    allowed_options_ftpfs = nosuid, noexec, nodev, noatime, ro, rw, port=*, user=*, pass=*, ip=*, root=*, uid=$UID, gid=$GID
    # mount_point_mode, if present and set to a non-empty value, will cause udevil
    # to set the mode (permissions) on the moint point after mounting If not
    # specified or if left empty, the mode is not changed. Mode must be octal
    # starting with a zero (0755).
    # mount_point_mode_FSTYPE, if present, is used to override mount_point_mode
    # when mounting a specific fstype (eg ext2, nfs).
    # NOT SETTING A MODE CAN HAVE SECURITY IMPLICATIONS FOR SOME FSTYPES
    mount_point_mode = 0755
    # don't set a mode for some types:
    mount_point_mode_sshfs =
    mount_point_mode_curlftpfs =
    mount_point_mode_ftpfs =
    # Use the settings below to change the default locations of programs used by
    # udevil, or (advanced topic) to redirect commands to your scripts.
    # When substituting scripts, make sure they are root-owned and accept the
    # options used by udevil (for example, the mount_program must accept --fake,
    # -o, -v, and other options valid to mount.)
    # Be sure to specify the full path and include NO OPTIONS or other arguments.
    # These programs may also be specified as configure options when building
    # udevil.
    # THESE PROGRAMS ARE RUN AS ROOT
    # mount_program = /bin/mount
    # umount_program = /bin/umount
    # losetup_program = /sbin/losetup
    # setfacl_program = /usr/bin/setfacl
    # validate_exec specifies a program or script which provides additional
    # validation of a mount or unmount command, beyond the checks performed by
    # udevil. The program is run as a normal user (if root runs udevil,
    # validate_exec will NOT be run). The program is NOT run if the user is
    # mounting a device without root priviledges (a device in fstab).
    # The program is passed the username, a printable description of what is
    # happening, and the entire udevil command line as the first three arguments.
    # The program must return an exit status of 0 to allow the mount or unmount
    # to proceed. If it returns non-zero, the user will be denied permission.
    # For example, validate_exec might specify a script which notifies you
    # of the command being run, or performs additional steps to authenticate the
    # user.
    # Specify a full path to the program, with NO options or arguments.
    # validate_exec =
    # validate_rootexec works similarly to validate_exec, except that the program
    # is run as root. validate_rootexec will also be run if the root user runs
    # udevil. If both validate_exec and validate_rootexec are specified,
    # validate_rootexec will run first, followed by validate_exec.
    # The program must return an exit status of 0 to allow the mount or unmount
    # to proceed. If it returns non-zero, the user will be denied permission.
    # Unless you are familiar with writing root scripts, it is recommended that
    # rootexec settings NOT be used, as it is easy to inadvertently open exploits.
    # THIS PROGRAM IS ALWAYS RUN AS ROOT, even if the user running udevil is not.
    # validate_rootexec =
    # success_exec is run after a successful mount, remount, or unmount. The
    # program is run as a normal user (if root runs udevil, success_exec
    # will NOT be run).
    # The program is passed the username, a printable description of what action
    # was taken, and the entire udevil command line as the first three arguments.
    # The program's exit status is ignored.
    # For example, success_exec might run a script which informs you of what action
    # was taken, and might perform further actions.
    # Specify a full path to the program, with NO options or arguments.
    # success_exec =
    # success_rootexec works similarly to success_exec, except that the program is
    # run as root. success_rootexec will also be run if the root user runs udevil.
    # If both success_exec and success_rootexec are specified, success_rootexec
    # will run first, followed by success_exec.
    # Unless you are familiar with writing root scripts, it is recommended that
    # rootexec settings NOT be used, as it is easy to inadvertently open exploits.
    # THIS PROGRAM IS ALWAYS RUN AS ROOT, even if the user running udevil is not.
    # success_rootexec =
    I have no idea what to do next; the only way it works is with the combination I mentioned in the title of this post. Any suggestions for solving this problem?

  • Staging Area location

    Hello,
    I am having trouble with the staging area on my Enterprise Manager 12c server. When I try to download an assembly, it downloads to the default staging location, which is /u01/app/oracle/product/12.1.0/omshome_1/gc_inst/em/EMGC_OMS1/sysman//stage.
    Now when I try to apply the download so it gets moved to the 'software library upload file locations' (which is at /u02/app/oracle/oms_shared/), I get the following error in the job:
    Applying update of Oracle Virtual Templates and Oracle Virtual Assemblies type with entity ID : 99AED19240CD3922650F2A9255726868 and description : Database assembly for 11.2.0.3.0 Single Instance database with PSU1 on Linux 64-bit using filesystem based storage
    Apply failed: Exception: This operation requires atleast 3606937479 bytes in staging area (/u01/app/oracle/product/12.1.0/omshome_1/gc_inst/em/EMGC_OMS1/sysman//stage).
    This, of course, is true because there isn't enough free space in /u01/app/oracle/product/12.1.0/omshome_1/gc_inst/em/EMGC_OMS1/sysman//stage.
    What I would like to do is move the staging area to a location on /u02 where there is enough free space, but for the life of me I cannot find where to change this. Does anybody know how to do this?
    Any help would be greatly appreciated.
    Kind regards,
    Bert ten Cate

    Hi Bert,
    You can add an additional Software Library location under Setup, Provisioning and Patching, Software Library. Create a new directory on the OS with the correct protection, then use Add in the EM console to add this new location.
    See http://docs.oracle.com/cd/E24628_01/doc.121/e24473/softwarelib.htm for additional info.
    Eric

  • Create a database from 32bit rman backup with NEW name on a 64bit system

    Hello,
    I'm more of a programmer than a database admin, but I need to set up a 10gR2 APEX development server based on our production instance in a short time.
    For that I want to use an RMAN backup of the production machine (SLES 10 32-bit, DB10GR2+ASM, APEX 3.1).
    Because of the limited time I have, I want to use an existing machine (SLES 11 64-bit, DB10GR2+ASM) as the destination.
    On this machine I want to create a new instance from the backup I took. I have already read note 881395.1, but I'm quite unsure about a few things:
    a) It's VERY IMPORTANT that the development instance has a different name than the production instance.
    If I follow 881395.1 and set the environment variable ORACLE_SID to a new name, will that be all? Or will there be problems using another SID because of references in the controlfile or something like that?
    b) It's VERY IMPORTANT that the datafiles aren't stored in the same path and with the same names as on the production instance. It is another server, but there is already a copy of the database there (our DBA tried to set up a standby solution on this machine) and I don't want to disturb it!
    How can I rename the datafiles before restoring the backup from scratch?
    I found this: set newname for datafile 1 to '<filesystem based filename>';
    Will this help? Do I need to rename every single file? Or is it possible to set a path into which all datafiles will be restored?
    c) What about the 32-bit to 64-bit thing? Will this work? Or do I need to convert the database? How? RMAN convert?
    Or could RMAN convert somehow be used to realize the whole thing?
    Thank you very much for your support!
    Regards
    Daniel


  • How to use web service as data source for forge or endeca ?

    hi,
    Is there any way to use a web service as a data source in a pipeline?
    I have a requirement to get data from a web service and dump it into Endeca. For this I need some idea of how it can be achieved. Is it possible to do so?

    Crawling is part of the CAS, which is kind of the data-ingest process in Endeca, as you would know. There are a bunch of OOTB crawlers available, like the filesystem-based and XML crawlers, etc., and there could be a case where you have to write your own; that's what I am suggesting, because the OOTB XML crawl expects a specific format which isn't really mentioned in the documentation. Please refer to the CASDevGuide for more information. CAS, PlatformServices and ToolsAndFrameworks should be running when you start off with Endeca; you can see that from
    ps -ef | grep java
    on your machine

  • Best way to copy/move files in Java...

    I am trying to write or use Java to create directories on the filesystem, based on user information submitted to a servlet. Using the File object and calling the mkdir() method, this works really well. I would like to add the functionality of then moving several files from an existing directory to this new one.
    I can certainly write some code that uses Runtime.getRuntime().exec("cp a.html b.html") to issue a system command, but if there's a better way, I'd appreciate it.
    Thanks,
    Jay

    If you are implying that he use the renameTo() utility
    of the File class, then that will only work when the
    underlying system allows it - e.g., no renames across
    mount points, hard drives, partitions, etc. Copy the
    file yourself (with 7 or 8 lines of code) and avoid
    the whole thing.
    I would be willing to accept these restrictions, but I never would have guessed that the renameTo() method would copy the file. That doesn't seem to make sense.
    So you think I should just write code that issues a system command. I was hoping to avoid that because I dev and test on my Windows machine, then deploy to Unix. Is there nothing out there that handles file copies?
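    For what it's worth, the "copy the file yourself (with 7 or 8 lines of code)" suggestion above boils down to a small stream-copy loop. Here is a minimal sketch (the class and method names are just placeholders for illustration) that behaves the same on Windows and Unix:

    import java.io.*;

    public class FileCopy {
         // Copies src to dst byte for byte; dst is overwritten if it already exists.
         public static void copy(File src, File dst) throws IOException {
              InputStream in = new FileInputStream(src);
              OutputStream out = new FileOutputStream(dst);
              try {
                   byte[] buf = new byte[8192];
                   int len;
                   while ((len = in.read(buf)) > 0) {
                        out.write(buf, 0, len);
                   }
              } finally {
                   in.close();
                   out.close();
              }
         }
    }

    A "move" is then just copy(src, dst) followed by src.delete(), which sidesteps renameTo()'s restriction on crossing mount points or drives.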

  • RAC to ASM or CFS.....need to make decision

    Hello,
    Currently we have RAC on HPUX using Raw devices. As we upgrade our Oracle apps to 10g database, we want to move from the Raw devices to either ASM or a clustered file system. We have oracle consultants in house chanting ASM, ASM, ASM. However our unix admins are chanting CFS, CFS, CFS. As a long time DBA, I am torn. What has your experience been with either or both? I really need input because I haven't used ASM in production. I don't want to just make the decision based on what I am familiar with which would be CFS. I want to do what is best for the system. ANY and all responses are welcome.
    Lori

    FWIW, my thoughts on your dilemma are:
    1. If you have no CFS in place today (or even if you do, but you're not using it for Oracle DB files), then I wouldn't convert to using it. I firmly believe that ASM will be here to stay and it is already Oracle's recommended configuration. While I don't think they'll drop support for filesystem-based databases, I do believe that support for RAW will eventually be dropped in a few releases.
    2. Adding CFS introduces another layer of complexity to an already complex system. It also introduces another vendor to the stack and, potentially, additional cost to license the CFS product. ASM puts all the eggs into the Oracle basket. While your grandmother may tell you putting all the eggs in one basket is bad, I think it's probably the way to get the best support from our friends at the big red O.
    3. ASM is relatively new-ish when compared to CFS, but it has a good track record and I'd venture to guess that there are more ASM installations supporting RAC than single instance databases. That should make you feel more comfortable about using ASM.
    So, if it were mine to choose, I'd choose ASM today. Good luck!
    Note: I am an Oracle consultant too, but you can view that as someone that's seen lots of different environments...not a bad thing when you're looking for survey answers :)

  • MQ, XA and WLI

    I'm trying to use MQ in a WLI process (JPD) with full XA support, without success.
    My configuration is as follows:
    - WebLogic 8.1 SP3, running an Integration domain
    - WebSphere MQ 5.3 CSD07, running on the same physical box (necessary for XA)
    - I have a startup class that loads the appropriate com.ibm.mq.jms.MQQueueConnectionFactory and com.ibm.mq.jms.Queue classes (pre-populated with the correct info) into the JNDI tree.
    - The connection mode is set to bindings for the local queue managers, as required for XA.
    - We have a library that uses the pre-loaded QCFs and Queues, wrapped as a Java Control, for use in the JPDs. (The library is used to interact with the equivalent C++ library that exists on OS/390, VMS, Windows and Unix(s).)
    - XA works fine with our Oracle JDBC connections, so I know that XA is enabled.
    Using either the library or the control in a non-transactional manner works fine, whether from a JPD, EJB or Servlet. It can send and receive with no issues.
    But when trying to use this setup with XA, things go wrong.
    The behaviour is as follows:
    1. In a very simple JPD, without any explicit transaction boundaries (the JPD is the transaction), messages are placed on the queue as soon as QueueSender.send() is executed, but left in an uncommitted state. That's what I expect. Once the JPD ends, the message remains in an uncommitted state (with no connection handles left open on the queue) and MQ eventually cleans up the uncommitted message. (I've also tried the same thing using a perform node and the appropriate JMS code in it, bypassing the afore-mentioned library, with the same result.)
    1a) Modifying the above behaviour a bit: if I call the QueueSender.close() method after the send(), the connection is indeed closed, but the message remains uncommitted. If I don't call the QueueSender.close() method, the connection stays open until the next garbage collection cycle, and the message remains uncommitted forever.
    2. I tried creating an explicit transaction around the node doing the send(), and received a java.io.NotSerializableException, since the MQ classes (com.ibm.mq.jms.*) are not serializable.
    3. I tried, out of desperation, creating the control as a factory; same result as #1.
    4. I've looked at the MQControl, but it doesn't seem to do XA at all. As soon as the MQControl sent the message, the message was on the queue and committed, even though the process had not yet ended. Throwing an exception later in the JPD did NOT cause the message to roll back. My conclusion (please prove me wrong!) is that the MQControl is not XA-aware.
    It seems that in scenarios 1-3 MQ knows that the message is in a transaction (in a syncpoint, in MQ-speak) but never receives the commit.
    I really don't want to use the Foreign JMS provider, since I've got a lot of queues, and the filesystem-based JNDI tree is the only option I have to get at the MQ QCFs and Queues... and that's really ugly. I also don't want to use the Messaging Bridge, again because of the sheer number of queues, and the pain of configuring and maintaining this beast.
    Any ideas? What am I missing?
    thanks
    mike

    Have you found any solution to your problem?
    I know that the WLI team often deals with MQ integration issues, and I know that MQ/XA integration is something that comes up often for them. Someone from WLI support should be able to help.
    One solution may be to use the WLI JMS control, using the IBM JMS API for MQSeries. As I understand it, this control supports XA. However, I'm not a WLI expert.
    If that fails, another solution would be to use the JTA API to enlist the MQ JMS provider directly. However, this can get tricky, but if you want to do it, you need to:
    -- Create XAConnection and XASession objects for MQ when connecting
    -- Use weblogic.transaction.TransactionHelper to get access to the current Transaction object.
    -- Enlist the MQSeries XAResource object (which you get from the XASession object) with the transaction before sending.
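    A rough sketch of those three steps in Java is below. It assumes the MQ XA queue connection factory and the queue are already available, e.g. from the JNDI tree mentioned earlier, and the exact way to obtain the current Transaction from weblogic.transaction.TransactionHelper can differ between WebLogic versions, so treat this as an outline rather than tested code:

    import javax.jms.*;
    import javax.transaction.Transaction;
    import javax.transaction.xa.XAResource;
    import weblogic.transaction.TransactionHelper;

    public class MqXaSend {
         // Sketch: enlist MQ's XAResource in the current container transaction
         // so the send commits or rolls back together with the JPD.
         public static void send(XAQueueConnectionFactory xaFactory,
                                  Queue queue, String text) throws Exception {
              XAQueueConnection conn = xaFactory.createXAQueueConnection();
              try {
                   // 1. Create the XAConnection and XASession objects for MQ
                   XAQueueSession xaSession = conn.createXAQueueSession();
                   // 2. Get the current Transaction object from WebLogic
                   //    (the exact accessor may vary by WebLogic version)
                   Transaction tx = TransactionHelper.getTransactionHelper().getTransaction();
                   // 3. Enlist the MQ XAResource with the transaction before sending
                   XAResource xares = xaSession.getXAResource();
                   tx.enlistResource(xares);
                   QueueSession session = xaSession.getQueueSession();
                   QueueSender sender = session.createSender(queue);
                   sender.send(session.createTextMessage(text));
                   sender.close();
              } finally {
                   conn.close();
              }
         }
    }

    In a real JPD you would of course add error handling, but the key point is that the MQ XAResource is enlisted in the WebLogic transaction before the send, so the message stays uncommitted until the JPD's transaction commits.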

  • Sharing Forms Object Libraries

    We are redeveloping a Forms 4.5 application in Forms 6i Rel2 using 9i database and want to make use of shared object library components for shared code etc. Whilst we can share other library components, every time we try to share any object library components through the Forms Administration/Module Access dialogue we get the error "FRM-10021 Unable to find this module." Of course Help simply reveals "Cause:Internal system error. Action:     Contact your DBA or an Oracle support representative."
    The owner of any module can still load the object library but no-one else can see the module(s) or make use of them.
    It is possible for two different users to create an object with the same name, as revealed in the table system.tool__module but it is still impossible to share the object.
    I have searched both this forum and the Forms documentation but cannot find any help on this problem.
    Thanks in anticipation.
    NoelH

    Storing modules in the database and referencing them from there is no longer supported in the upcoming Forms 9i release. If you
    can and want to stay at the leading edge, you should start moving to filesystem-based subclassing!
    Just my 2 cents - I know it's not helping with the direct problem, but maybe it's something to think about that could also solve your issues ;-)!
    Cheers,
    Stefan Mueller

  • How to clean the plex?

    How to clean the plex?

    Mate,
    we need more details; is your filesystem based on Veritas Volume Manager and File System [Storage Foundation]?
    The terminology "plex" suggests this. If so, post the output of this command, substituting the actual volume name
    and disk group name:
    vxprint -g diskgroup_name -qht vol_name
    If it's Solaris Volume Manager [Solstice] or ZFS, I will let the other gurus answer, as my speciality is Veritas.

  • Pacman-optimize: Is this bad?

    ==> md5sum'ing the old database...
    ==> copying /var/lib/pacman...
    ==> md5sum'ing the new database...
    ==> checking integrity...
    pacman-optimize: integrity check FAILED, reverting to old database
    What could cause this error? I ran this after noticing that pacman would freeze (and segfault) when trying to uninstall the big-ass kde group of packages. How can I fix this sort of problem?

    phrakture wrote: Yes. There are things being worked on, but a lot of people don't experience any slowdown... for me, I never notice any problems...
    IMHO the reason most people don't experience any slowdown is that they run Arch Linux on a desktop with lots and lots of memory so their pacman db never comes out of cache. And even if it does, their 7200 rpm disks with blazing fast access times can handle this.
    I know this has been reported over and over, but this is what it looks like on a notebook with an average 5400 rpm drive with an ext3 partition:
    [gali@neutrino ~]$ time pacman -Ss rosegarden
    extra/rosegarden 4.1.0-1
    audio and MIDI sequencer, score editor, and general-purpose music
    composition and editing application.
    real 0m27.105s
    user 0m0.394s
    sys 0m0.660s
    It used to be even worse (around 45 seconds), but I found another pacman directory under /var/lib/pacman/, so I had the entire repository tree nearly duplicated. I really don't know what caused this or whether it was intentional, but guessing it wasn't, I just removed it. :-) du -hs /var/lib/pacman now comes out with only 42 MB instead of the original 65 MB.
    Moreover, when installing a package, after the last line of "installing packagexyz... done", it takes another 15 or 20 seconds for pacman to finish.
    And there are people out there still using 4200 rpm drives in their older notebooks...
    Please, don't get me wrong. I'm totally happy with Arch, this is just a minor annoyance. But I think it deserves attention.
    I was a longtime Windows user with only very passive Linux usage (for several different reasons, like some gaming in the past, my sound card not being supported, etc.), but in July I finally switched over. For occasional use, Debian and Ubuntu were okay for me. But on a day-to-day basis, I just somehow couldn't feel at home there. Maybe because I felt the system was totally out of my control. (How ironic, since I switched over from Windows. :-))
    Now I'm using Arch for a month or so and never felt happier in Linux.
    phrakture wrote:But the next release of pacman won't change the current system, but it will improve performance
    Well I'm certainly more than happy to hear this. :-)
    Maybe pacman could use a plugin system for its database, and the user could choose between filesystem-based storage, sqlite3, or anything else available as a plugin. I'm quite sure it couldn't take so long to search through a 40 MB sqlite DB, even on a slower hard drive like mine. It shouldn't be too hard to implement either, if pacman uses a healthy amount of abstraction when retrieving data from its database... But I haven't looked at the source code, so I don't know. And I guess it would break the omnipresent KISS principle...
    Hey, it was just an idea. :-)
