Does the OCFS2 file system get fragmented?

We are running Production & Testing RAC databases on Oracle 9.2.0.8 RAC on Red Hat 4.0 using OCFS2 for the cluster file system.
Every week we refresh our test database by deleting its datafiles and cloning our standby database to the test database. The copying of the datafiles from the standby mount points to the test database mount points (on the same server) seems to take longer each time we do it.
My question is: can the OCFS2 file system become fragmented over time by the constant deletion and copying of the datafiles, and if so, is there a way to defragment it?
Thanks
John

Hi,
Yes, it will get fragmented if you constantly delete and copy datafiles on OCFS2. You can choose a suitable block size and cluster size for the actual application when formatting the volume, which can reduce file fragmentation.
Regards
Terry
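To illustrate Terry's point, block and cluster size are chosen at mkfs time. A sketch only; the device name, label and slot count below are placeholders, not values from the thread:

```shell
# Larger clusters suit big, preallocated files such as Oracle datafiles
# and reduce fragmentation from repeated delete/copy cycles.
# -b = block size, -C = cluster size, -N = node slots, -L = volume label
mkfs.ocfs2 -b 4K -C 1M -N 4 -L ocfs2_data /dev/sdb1
```

OCFS2 of this vintage has no online defragmenter, so if a volume does fragment badly the practical remedy is to back up, reformat (possibly with a larger cluster size), and restore.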

Similar Messages

  • Confusion with OCFS2 file system for OCR and voting disk (RHEL 5, Oracle 11g)

    Dear all,
    I am in the process of installing Oracle 11g 3 Node RAC database
    The environment in which I have to do this implementation is as follows:
    Oracle 11g.
    Red Hat Linux 5 x86
    Oracle Clusterware
    ASM
    EMC Storage
    250 Gb of Storage drive.
    SAN
    As of now i am in the process of installing Oracle Clusterware on the 3 nodes.
    I have performed these tasks for the cluster installs.
    1. Configure Kernel Parameters
    2. Configure User Limits
    3. Modify the /etc/pam.d/login file
    4. Configure Operating System Users and Groups for Oracle Clusterware
    5. Configure Oracle Clusterware Owner Environment
    6. Install CVUQDISK rpm package
    7. Configure the Hosts file
    8. Verify the Network Setup
    9. Configure the SSH on all Cluster Nodes (User Equivalence)
    10. Enable the SSH on all Cluster Nodes (User Equivalence)
    11. Install Oracle Cluster File System (OCFS2)
    12. Verify the Installation of Oracle Cluster File System (OCFS2)
    13. Configure the OCFS2 (/etc/ocfs2/cluster.conf)
    14. Configure the O2CB Cluster Stack for OCFS2
    But after this point I am a little confused about how to proceed. The next steps are to format the disks and mount OCFS2, create the software directories, and so forth.
    I asked my system admin to provide two partitions so that I could format them with the OCFS2 file system.
    He wrote back to me saying:
    "Is this what you want before I do it?
    /dev/emcpowera1 is 3GB and formatted OCFS2.
    /dev/emcpowera2 is 3GB and formatted OCFS2.
    Are those big enough for you? If not, I can re-size and re-format them
    before I mount them on the servers.
    The SAN is shared storage. /dev/emcpowera is one of three LUNs on
    the shared storage, and it's 214GB. Right now there are only two
    partitions on it - the ones I listed below. I can repartition the LUN any
    way you want it.
    Where do you want these mounted at:
    /dev/emcpowera1
    /dev/emcpowera2
    I was thinking this mounting technique would work like so:
    emcpowera1: /u01/shared_config/OCR_config
    emcpowera2: /u01/shared_config/voting_disk
    Let me know how you'd like them mounted."
    Please recommend what I should convey to him so he does exactly that.
    My second question: as we are using ASM, which I am going to configure after the clusterware installation, should I install Openfiler?
    Please refer to the environment information I provided above and make recommendations.
    As of now I am using Jeffrey Hunter's guide to install the entire setup. Do you think the install guide fits my environment?
    http://www.oracle.com/technology/pub/articles/hunter_rac11gr1_iscsi.html?rssid=rss_otn_articles
    Kind regards
    MK

    Thanks for your reply Mufalani,
    You have solved half of my query, but I am still stuck on what kind of mount point I should ask the system admin to create for the OCR and voting disk. Should I go with the mount points he mentioned?
    Let me put forward a few more questions here.
    1. Is 280 MB each OK for the OCR and voting disks?
    2. Should I ask the system admin to create four voting disk mount points and two for the OCR?
    3. As mentioned by the system admin:
    /u01/shared_config/OCR_config
    /u01/shared_config/voting_disk
    Is this OK for creating the OCR and voting disks?
    4. Can I use the OCFS2 file system for formatting the disks instead of using them as raw devices?
    5. As you mentioned that Openfiler is not needed for configuring ASM, could you provide links that will guide me in creating the partition disks, voting disks and OCR disks? I could not locate them in the documentation or elsewhere. I did find a couple, but was unable to identify one suitable for my environment.
    Regards
    MK
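Answering question 4 above in sketch form: yes, OCFS2 can hold the OCR and voting disk. The commands below are illustrative only; the device name comes from the thread, but block/cluster sizes, the slot count and the mount options are assumptions to verify against the install guide:

```shell
# Format a partition with OCFS2 (small cluster size is fine for the
# small OCR/voting files; -N must cover all three nodes):
mkfs.ocfs2 -b 4K -C 4K -N 3 -L ocr_vote /dev/emcpowera1

# Mount on every node via /etc/fstab. "datavolume" forces O_DIRECT,
# which Oracle requires for OCR/voting files on OCFS2, and "_netdev"
# delays the mount until the network (and the O2CB stack) is up:
#   /dev/emcpowera1  /u01/shared_config  ocfs2  _netdev,datavolume,nointr  0 0
mount /u01/shared_config
```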

  • File system getting full // dev_icmmon rapidly increasing

    Hello All,
    The /sapmnt/SID/profile file system is getting full;
    in this file system, dev_icmmon is rapidly increasing.
    Output of more dev_icmmon:
    [Thr 258] **** SigHandler: signal 1 received
    [Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2 [icxxsec_mt.c 728]
    [Thr 258] **** SigHandler: signal 1 received
    [Thr 01] *** ERROR => IcmReadAuthFile: could not open authfile: icmauth.txt - errno: 2 [icxxsec_mt.c 728]
    please help me to resolve the issue
    Regards
    Mohsin M

    I have killed the icmon process; the problem is now solved.

  • Can't mount OCFS2 file system after Public IP modify

    Guys,
    We have an environment with 2 nodes with RAC database version 10.2.0.1. We need to modify the Public IP and VIP of the Oracle CRS. So we did the following steps:
    - Alter the VIP
    srvctl modify nodeapps -n node1 -A 192.168.1.101/255.255.255.0/eth0
    srvctl modify nodeapps -n node2 -A 192.168.1.102/255.255.255.0/eth0
    - Alter the Public IP
    oifcfg delif -global eth0
    oifcfg setif -global eth0/192.168.1.0:public
    - Alter the IPs of the network interfaces
    - Update /etc/hosts
    When we start the Oracle CRS, the components start OK. But when we reboot the second node, the OCFS2 file system doesn't mount. The following errors occur:
    SCSI device sde: 4194304 512-byte hdwr sectors (2147 MB)
    sde: cache data unavailable
    sde: assuming drive cache: write through
    sde: sde1
    parport0: PC-style at 0x378 [PCSPP,TRISTATE]
    lp0: using parport0 (polling).
    lp0: console ready
    mtrr: your processor doesn't support write-combining
    (2746,0):o2net_start_connect:1389 ERROR: bind failed with -99 at address 192.168.2.132
    (2746,0):o2net_start_connect:1420 connect attempt to node rac1 (num 0) at 192.168.2.131:7777 failed with errno -99
    (2746,0):o2net_connect_expired:1444 ERROR: no connection established with node 0 after 10 seconds, giving up and returning errors.
    (5457,0):dlm_request_join:786 ERROR: status = -107
    (5457,0):dlm_try_to_join_domain:934 ERROR: status = -107
    (5457,0):dlm_join_domain:1186 ERROR: status = -107
    (5457,0):dlm_register_domain:1379 ERROR: status = -107
    (5457,0):ocfs2_dlm_init:2007 ERROR: status = -107
    (5457,0):ocfs2_mount_volume:1062 ERROR: status = -107
    ocfs2: Unmounting device (8,17) on (node 1)
    When we force the mount with the command below, these errors occur:
    # mount -a
    mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /ocfs2. Check 'dmesg' for more information on this error.
    What happens is that OCFS2 is still trying to connect using the old Public IP. My question is: how do I change the public IP in OCFS2?
    regards,
    Eduardo P Niel
    OCP Oracle

    Hi, that is correct. Check the /etc/ocfs2/cluster.conf file; the configuration may still reference the old IP. You can also check the /etc/hosts file to verify that the host names are defined correctly.
    Luck
    Have a good day.
    Regards,
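For reference, the node IPs that o2net binds to live in /etc/ocfs2/cluster.conf, not in the CRS configuration, so srvctl/oifcfg changes never touch them. A hypothetical fragment, with node names and addresses patterned on the log above:

```
node:
        ip_port = 7777
        ip_address = 192.168.2.131
        number = 0
        name = rac1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.132
        number = 1
        name = rac2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
```

After editing the file identically on both nodes, unmount the OCFS2 volumes and restart the stack (/etc/init.d/o2cb offline ocfs2, unload, load, online ocfs2) so o2net rebinds to the new addresses.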

  • OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS OVMSRVR. Its server is not in a cluster. [Thu Dec 04 01:33:10 EST 2014]

    Hi Guys,
    I'm trying to create a repository. I have a single-node OVM server and have presented two LUNs (Hitachi HUS110, direct-attached via FC).
    I've created a server pool and unchecked the clustered server pool option. I see the LUNs (as physical disks in Oracle VM Manager), but when creating the repository I get this error:
    "OVMRU_002030E Cannot create OCFS2 file system with local file server: Local FS OVMSRVR. Its server is not in a cluster. [Thu Dec 04 01:33:10 EST 2014]"
    Did I miss any steps? I appreciate anyone's input.
    Regards,
    Robert

    Hi Robert,
    did you actually add the OVS to the server pool that you created?

  • Ocfs2 can not mount the ocfs2 file system on RedHat AS v4 Update 1

    Hi there,
    I installed ocfs2-2.6.9-11.0.0.10.3.EL-1.0.4-1.i686.rpm onto Red Hat Linux AS v4 Update 1. The installation looks OK. I configured OCFS2 (at this stage I only added one node to the cluster), then loaded and started it accordingly. Then I partitioned the disk and ran mkfs.ocfs2 on the partition. Everything seems OK.
    [root@node1 init.d]# ./o2cb status
    Module "configfs": Loaded
    Filesystem "configfs": Mounted
    Module "ocfs2_nodemanager": Loaded
    Module "ocfs2_dlm": Loaded
    Module "ocfs2_dlmfs": Loaded
    Filesystem "ocfs2_dlmfs": Mounted
    Checking cluster ocfs2: Online
    Checking heartbeat: Not active
    But here you can check if the partition is there:
    [root@node1 init.d]# fsck.ocfs2 /dev/hda12
    Checking OCFS2 filesystem in /dev/hda12:
    label: oracle
    uuid: 27 74 a6 70 32 ad 4f 77 bf 55 8e 3a 87 78 ea cb
    number of blocks: 612464
    bytes per block: 4096
    number of clusters: 76558
    bytes per cluster: 32768
    max slots: 2
    /dev/hda12 is clean. It will be checked after 20 additional mounts.
    However, mount -t ocfs2 /dev/hda12 just does not work.
    [root@node1 oracle]# mount -t ocfs2 /dev/hda12 /oradata/m10g
    mount.ocfs2: No such device while mounting /dev/hda12 on /oradata/m10g
    [root@node1 oracle]# mount -L oracle
    mount: no such partition found
    It looks like mount just cannot see the OCFS2 partition somehow.
    I cannot find much info on Metalink or anywhere else; has anyone come across this issue before?
    Regards,
    Eric
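"mount.ocfs2: No such device" at mount time usually means the O2CB cluster stack is not fully online for the cluster named in /etc/ocfs2/cluster.conf. A hedged checklist, using the init script as shipped with ocfs2-tools of that era:

```shell
# Verify the stack; "Checking heartbeat: Not active" on its own is
# normal -- the heartbeat only starts once a volume is mounted.
/etc/init.d/o2cb status

# Make sure O2CB was configured to start the same cluster name that
# appears in /etc/ocfs2/cluster.conf (default "ocfs2"), then restart:
/etc/init.d/o2cb configure
/etc/init.d/o2cb load
/etc/init.d/o2cb online ocfs2

# Retry, and check the kernel log for the real reason on failure:
mount -t ocfs2 /dev/hda12 /oradata/m10g
dmesg | tail
```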

    I have been having a similar problem.
    However, when I applied your fix I ended up with another problem:
    (20765,0):ocfs2_initialize_osb:1179 max_slots for this device: 4
    (20765,0):ocfs2_fill_local_node_info:851 I am node 0
    (20765,0):dlm_request_join:756 ERROR: status = -107
    (20765,0):dlm_try_to_join_domain:906 ERROR: status = -107
    (20765,0):dlm_join_domain:1151 ERROR: status = -107
    (20765,0):dlm_register_domain:1330 ERROR: status = -107
    (20765,0):ocfs2_dlm_init:1771 ERROR: status = -12
    (20765,0):ocfs2_mount_volume:912 ERROR: status = -12
    ocfs2: Unmounting device (253,7) on (node 0)
    Now the odd thing about this bit of log output (/var/log/messages)
    is the fact that this is only a 2 node cluster and only one node has
    currently mounted the file system in question. Now, I am running
    the multipath drivers with my qla2xxx drivers under SLES9-R2.
    However, at worst that should only double everything
    (2 nodes x 2 paths through the SAN).
    How can I get more low level information on what is consuming
    the node slots in ocfs2? How can I force it to "disconnect" nodes
    and recover/cleanup node slots?
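On the node-slot question: the slot count is a property of the volume and can be inspected or changed offline with the standard tools. A sketch; the device name is a placeholder, the volume must be unmounted cluster-wide, and the exact debugfs request should be checked against your ocfs2-tools version:

```shell
# Show which node numbers currently own the slots:
debugfs.ocfs2 -R "slotmap" /dev/mapper/myvol

# Raise the maximum number of node slots if the cluster outgrew it:
tunefs.ocfs2 -N 4 /dev/mapper/myvol

# fsck.ocfs2 will also clear stale slot state left by a crashed node:
fsck.ocfs2 /dev/mapper/myvol
```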

  • Where does the svc system get its infos from?

    The longer I work with Solaris 10, the more I dread this awful "services" construction and its unwieldy administration.
    I have a newly setup machine with two zones, one created right after the other, meaning they should be identical.
    /usr is, as usual, lofs'ed into both zones, and contains a /usr/local/samba dir with a handcompiled 3.0.30 release.
    I have not created any manifests as I intended to simply delete the 'onboard' samba service and start the new version
    via the classical init script.
    I boot both zones.. and one starts "/usr/local/samba/smbd -D -f /etc/smb.conf", and the other starts (or tries to) "/usr/sfw/sbin/smbd -D".
    I inspected the manifest xmls for the network/samba service, and in both zones the entry lists /usr/sfw as start method.
    svccfg listprop network/samba also reports /usr/sfw for both zones.
    Where did the one zone get the info to start the /usr/local samba installation, and where did it get the (correct) conf file parameter?
    Why does svccfg report a different startup method property than it really runs on svcadm enable?
    I have grepped through all of /lib/svc, /etc/svc and /var/svc and nowhere is any reference to the /usr/local file nor the conf parameters.
    Swapping the svc repository.db file between both zones also swaps the problem around - so it must be somewhere in there,
    but as it's no longer a human readable file, I'm out of luck there. How did the /usr/local path and parameters get into that DB file?
    Does the service system search the filesystem for known daemon binary names and other guesswork? And if it does such voodoo,
    why didn't it do the same in the other identical zone?
    [rant] Did we really need to get this svc "feature"? Until now I have seen no advantage from it, and only incurred a huge load of inconvenience,
    be it the additional work of creating manifests for new services or debugging problems that I never had in a decade or two of init script usage.
    We have all seen for many years how well this registry/service stuff worked on Windows... [/rant]

    > I boot both zones.. and one starts "/usr/local/samba/smbd -D -f /etc/smb.conf", and the other starts (or tries to) "/usr/sfw/sbin/smbd -D".
    > I inspected the manifest xmls for the network/samba service, and in both zones the entry lists /usr/sfw as start method.
    I take it you mean the full path with /usr/sfw in it, not "/usr/sfw" all by itself....
    What do you get for:
    svcprop -p start/exec network/samba:default
    That should display the start string it uses.
    > Where did the one zone get the info to start the /usr/local samba installation, and where did it get the (correct) conf file parameter?
    I can only assume that some program inserted that string into the service. As you said, it's not in the manifest, so it didn't come from there.
    > Why does svccfg report a different startup method property than it really runs on svcadm enable?
    > I have grepped through all of /lib/svc, /etc/svc and /var/svc and nowhere is any reference to the /usr/local file nor the conf parameters.
    > Swapping the svc repository.db file between both zones also swaps the problem around - so it must be somewhere in there,
    > but as it's no longer a human readable file, I'm out of luck there. How did the /usr/local path and parameters get into that DB file?
    > Does the service system search the filesystem for known daemon binary names and other guesswork? And if it does such voodoo,
    > why didn't it do the same in the other identical zone?
    > [rant] Did we really need to get this svc "feature"? Until now I have seen no advantage from it, and only incurred a huge load of inconvenience,
    > be it the additional work of creating manifests for new services or debugging problems that I never had in a decade or two of init script usage. [/rant]
    As with many such things, the advantages are greater when the environment is more complex. For starting a single job, you won't see the benefits as much.
    Of course, the milestone services still run startup scripts out of /etc/rc[S23].d, so those are perfectly valid places to run a startup script.
    Darren
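Concretely, the start method that svcadm actually runs can be read from, and written back to, the repository with the stock SMF tools. A sketch using the FMRI from the thread (the smbd path shown is just the value the manifest claims, quoted as an example):

```shell
# Read the start method stored in the repository (this, not the XML
# manifest on disk, is what svcadm enable executes):
svcprop -p start/exec network/samba:default

# Overwrite it, then push the change into the running snapshot:
svccfg -s network/samba:default setprop start/exec = astring: '"/usr/sfw/sbin/smbd -D"'
svcadm refresh network/samba:default
svcadm restart network/samba:default
```

Without the refresh, the running snapshot keeps the old value, which is one way svcprop and the actually-executed command can disagree.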

  • While doing PGI I system getting Error

    Dear SD Experts,
    While doing PGI in the delivery, the system gives the error "The batches are not defined for delivery item 000010".
    Error message: Message no. VL605
    I already defined batches for the given item. It appears under the material and is specified in the batch split tab.
    Kindly suggest where I went wrong and what configuration I missed.
    Regards,
    Manzoor Ahmad

    hi
    Select the stock against the batch, which should be equal to the delivery quantity
    (that is, the delivery quantity should equal the picking quantity).
    After selecting it manually, press F3 to go back.
    The system will give a pop-up message; answer Yes,
    and the batches against the quantity will be copied into the delivery document.
    Go back again and you will come to the overview of the delivery screen.
    In the picking tab page you will find all the picked quantity; if not, enter it manually,
    and save the delivery document without pressing PGI.
    Hope this clears your issue.
    balajia

  • File system getting full and Server node getting down.

    Hi Team,
    We are currently running on IBM Power 6 with the AIX operating system.
    In our environment the development system's file system is getting full, and the development system is getting slow to access.
    Can you please let me know what exactly the problem is, which command shows the file system sizes, and how to resolve the issue, for example by deleting core files? Please help me.
    Thanks
    Manoj K

    Hi      Orkun Gedik,
    When I executed the commands df -lg and find . -name core, nothing was displayed, but when I execute df it shows me the information below (original output, with the SID masked):
    Filesystem    512-blocks      Free %Used    Iused %Iused Mounted on
    /dev/fslv10     52428800  16279744   69%   389631    15% /usr/sap/SID
    The server0 node is giving the problem; it is going down all the time.
    And if I check the std_server0.out file in /usr/sap/SID/<Instance>/work for that server node, the following information is written in it:
    framework started for 73278 ms.
    SAP J2EE Engine Version 7.00   PatchLevel 81863.450 is running! PatchLevel 81863.450 March 10, 2010 11:48 GMT
    94.539: [GC 94.539: [ParNew: 239760K->74856K(261888K), 0.2705150 secs] 239760K->74856K(2009856K), 0.2708720 secs] [Times: user=0.00 sys=0.36, real=0.27 secs]
    105.163: [GC 105.164: [ParNew: 249448K->80797K(261888K), 0.2317650 secs] 249448K->80797K(2009856K), 0.2320960 secs] [Times: user=0.00 sys=0.44, real=0.23 secs]
    113.248: [GC 113.248: [ParNew: 255389K->87296K(261888K), 0.3284190 secs] 255389K->91531K(2009856K), 0.3287400 secs] [Times: user=0.00 sys=0.58, real=0.33 secs]
    Please advise.
    thanks in advance
    Manoj K
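A hedged AIX-flavoured sketch of the usual triage (the SID path follows the thread; the retention threshold is an assumption):

```shell
# Per-filesystem usage in GB blocks (AIX df):
df -g /usr/sap/SID

# Which directories are eating the space:
du -sm /usr/sap/SID/* | sort -rn | head

# Old core dumps are the classic culprit; list them before deleting:
find /usr/sap/SID -name core -type f -exec ls -l {} \;
# ...and only after confirming they are dumps, e.g.:
# find /usr/sap/SID -name core -type f -mtime +7 -exec rm {} \;
```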

  • Root file system getting full SLES 10 SAP ERP6

    Hi Gurus
    I am having an unusual problem. My / file system is getting full and I can't pin down what is causing it. I have checked the logs in /var/log/messages. I don't know what is writing to /; I didn't copy anything directly onto /.
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda5             7.8G  7.5G     0 100% /
    SLES 10 64 bit  ERP6 SR2
    Has anyone had a similar problem?
    Any ideas are welcome.

    cd /
    xwbr1:/ # du -hmx --max-depth=1
    1       ./lost+found
    48      ./etc
    1       ./boot
    1       ./sapdb
    1       ./sapmnt
    1430    ./usr
    0       ./proc
    0       ./sys
    0       ./dev
    85      ./var
    8       ./bin
    1       ./home
    83      ./lib
    12      ./lib64
    1       ./media
    1       ./mnt
    488     ./opt
    56      ./root
    14      ./sbin
    2       ./srv
    1       ./tmp
    1       ./sapcd
    1       ./backupdisk
    1       ./dailybackups
    1       ./fulloffline
    cd /root
    xwbr1:~ # du -hmx --max-depth=1
    1       ./.gnupg
    1       ./bin
    1       ./.kbd
    1       ./.fvwm
    2       ./.wapi
    12      ./dsadiag
    1       ./.gconf
    1       ./.gconfd
    1       ./.skel
    1       ./.gnome
    1       ./.gnome2
    1       ./.gnome2_private
    1       ./.metacity
    1       ./.gstreamer-0.10
    1       ./.nautilus
    1       ./.qt
    1       ./Desktop
    1       ./.config
    1       ./Documents
    1       ./.thumbnails
    38      ./.sdtgui
    1       ./sdb
    1       ./.sdb
    4       ./.mozilla
    56      .
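For what it's worth, the du output above only accounts for roughly 2.2 GB of the 7.5 GB that df reports. That gap usually means either data hidden under a mount point (written there before the real filesystem was mounted) or a deleted file still held open by a process (visible as "(deleted)" in lsof +L1 output). The du-and-sort technique itself, demonstrated on a scratch tree so it is safe to run anywhere:

```shell
# Build a scratch tree with one obviously large subdirectory, then rank
# subdirectories by size, largest first -- same idea as the du runs above.
tmp=$(mktemp -d)
mkdir -p "$tmp/big" "$tmp/small"
dd if=/dev/zero of="$tmp/big/filler" bs=1024 count=4096 2>/dev/null
report=$(du -k --max-depth=1 "$tmp" | sort -rn)
echo "$report"
rm -rf "$tmp"
```

On the real system the equivalent is du -mx --max-depth=1 / | sort -rn, where -x keeps du from descending into other mounted filesystems.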

  • File systems and fragmentation

    I'm a converted Ubuntu user, and I love Arch to death now... but here's my question.
    When I was using ubuntu I read a post on lifehacker.com that said
    Defrag – Nope. Linux file systems do not have a need to be defragmented.
    Original link: http://lifehacker.com/5817282/what-kind … y-linux-pc
    Since I've always used ext4, and I still can't find anything on needing to defrag an ext3 or ext4 filesystem, I assume it still holds true. However, I recently read this article on the wiki: https://wiki.archlinux.org/index.php/Ma … filesystem
    and decided to try out XFS on my /home partition. So I checked out the XFS page; all it has is a small section on fragmentation, how to defrag it, and how to check how fragmented it is.
    I also remember seeing somewhere on the wiki (although I can't remember where) that JFS needed it as well (it might not have been JFS, I'm not entirely sure).
    So is this something I should worry about as I used to on Windows? Are ext3 and ext4 really as immune as they seem to be? And what are your recommendations on file systems to use?

    Zarcjap wrote:And what are your recommendations on file systems to use?
    If you're asking which FS you should use, there are already a few threads about this e.g. https://bbs.archlinux.org/viewtopic.php?id=135330
    It depends on what are you going to use it for.
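On the XFS side, the wiki's point is only that the tools exist, not that scheduled defrag runs are needed. A hedged sketch; device and paths are placeholders:

```shell
# Report the fragmentation factor of an XFS filesystem (read-only):
xfs_db -r -c frag /dev/sdb1

# Reorganise extents online, on the mounted filesystem, if it is high:
xfs_fsr -v /home

# On ext3/ext4, per-file extent counts can still be inspected with
# filefrag, and e4defrag can defragment individual ext4 files in the
# rare case it matters:
filefrag /home/user/bigfile
```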

  • Where does the file path get reported from?

    When looking at ZAM software information, it'll show the product name and then like:
    c:\program files\something\
    Where does that path information come from?
    The windows registry?
    Or does it actually come from the location of the .exe itself?

    kjhurni,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://support.novell.com/forums/

  • File system getting full

    Hi All,
    When I check the path
    /usr/sap/ccms/wilyintroscope/traces
    it is occupying a lot of space. Can you help us understand how to reorganise it?
    IndexRebuilder.sh would only rebuild the index, right, not the logs? Thanks.

    Hi
    This can be done with scripts (check the Introscope Enterprise Manager documentation).
    You can also set the Enterprise Manager property file accordingly to manage the space, for example to act when /usr/sap/ccms is above the threshold value.
    jansi
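One common approach, independent of the Introscope property settings, is simply to age out old trace files with find. Demonstrated below on a scratch directory so it is safe to run anywhere; for the real case you would point it at /usr/sap/ccms/wilyintroscope/traces, and the 7-day retention and *.trace names are assumptions to align with your policy:

```shell
# Create one "old" and one "fresh" file, then delete files older than 7 days.
tmp=$(mktemp -d)
touch "$tmp/old.trace" "$tmp/fresh.trace"
touch -d "10 days ago" "$tmp/old.trace"   # GNU touch: backdate the file
find "$tmp" -type f -mtime +7 -delete
ls "$tmp"                                 # prints: fresh.trace
```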

  • Unix shell: Environment variable works for file system but not for ASM path

    We would like to switch from file system to ASM for data files of Oracle tablespaces. For the path of the data files, we have so far used environment variables, e.g.,
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    This works just fine (from shell scripts, PL/SQL packages, etc.) if ORACLE_DB_DATA denotes a file system path, such as "/home/oracle", but doesn't work if the environment variable denotes an ASM path like "+DATA/rac/datafile". I assume that it has something to do with "+" being a special character in the shell; however, escaping it as "\+" didn't work. I tried with both bash and ksh.
    Oracle managed files (e.g., set DB_CREATE_FILE_DEST to +DATA/rac/datafile) would be an option. However, this would require changing quite a few scripts and programs. Therefore, I am looking for a solution with the environment variable. Any suggestions?
    The example below is on a RAC Attack system (http://en.wikibooks.org/wiki/RAC_Attack_-OracleCluster_Database_at_Home). I get the same issues on Solaris/AIX/HP-UX on 11.2.0.3 also.
    Thanks,
    Martin
    ==== WORKS JUST FINE WITH ORACLE_DB_DATA DENOTING FILE SYSTEM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA=/home/oracle
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 20:57:09 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> !ls -l ${ORACLE_DB_DATA}/bma.dbf
    -rw-r----- 1 oracle asmadmin 2105344 Aug 24 20:57 /home/oracle/bma.dbf
    SQL> drop tablespace bma including contents and datafiles;
    ==== DOESN’T WORK WITH ORACLE_DB_DATA DENOTING ASM PATH ====
    collabn1:/home/oracle[RAC1]$ export ORACLE_DB_DATA="+DATA/rac/datafile"
    collabn1:/home/oracle[RAC1]$ sqlplus "/ as sysdba"
    SQL*Plus: Release 11.2.0.1.0 Production on Fri Aug 24 21:08:47 2012
    Copyright (c) 1982, 2009, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
    Data Mining and Real Application Testing options
    SQL> CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
    CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON
    ERROR at line 1:
    ORA-01119: error in creating database file '${ORACLE_DB_DATA}/bma.dbf'
    ORA-27040: file create error, unable to create file
    Linux Error: 2: No such file or directory
    SQL> -- works if I substitute manually
    SQL> CREATE TABLESPACE BMA DATAFILE '+DATA/rac/datafile/bma.dbf' SIZE 2M AUTOEXTEND ON;
    Tablespace created.
    SQL> drop tablespace bma including contents and datafiles;

    My revised understanding is that it is not a shell issue with replacing +, but an Oracle problem. It appears that Oracle first checks whether the path starts with a "+" or not. If it does not (file system), it performs the normal environment variable resolution. If it does start with a "+" (ASM case), Oracle does not perform environment variable resolution. Escaping, such as "\+" instead of "+" doesn't work either.
    To be more specific regarding my use case: I need the substitution to work from SQL*Plus scripts started with @script, PL/SQL packages with execute immediate, and optionally entered interactively in SQL*Plus.
    Thanks,
    Martin
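Given that finding, one workaround is to let the shell perform the substitution before SQL*Plus ever sees the statement, for example by generating the SQL through an unquoted here-document. The sqlplus invocation is illustrative; the expansion itself is plain shell behaviour and works identically for "+DATA/..." paths:

```shell
export ORACLE_DB_DATA="+DATA/rac/datafile"

# The shell expands ${ORACLE_DB_DATA} inside the unquoted here-document,
# so Oracle receives a literal ASM path and never sees the variable:
sql=$(cat <<EOF
CREATE TABLESPACE BMA DATAFILE '${ORACLE_DB_DATA}/bma.dbf' SIZE 2M AUTOEXTEND ON;
EOF
)
echo "$sql"
# To run it: echo "$sql" | sqlplus -s "/ as sysdba"
```

Inside interactive SQL*Plus, DEFINE variables (referenced as &ORACLE_DB_DATA) are substituted by SQL*Plus itself and have no such restriction, which may suit the @script and interactive cases.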

  • How to run something on shutdown before file systems unmounted

    I've been trying to get kexec working with systemd, following the advice on this wiki page:
    https://wiki.archlinux.org/index.php/Kexec#Systemd
    Unfortunately, the suggested unit file does not work for me.  The problem is that no matter what I do, my /boot file system gets unmounted before kexec gets run (so that it cannot find the kernel).  I've tried adding most subsets of the following to the Unit section of my kexec-load@.service file, all to no avail:
    Before=shutdown.target umount.target final.target
    RequiresMountsFor=/boot/vmlinuz-linux
    Wants=boot.mount
    After=boot.mount
    If I run "systemctl start kexec-load@linux" followed by "systemctl kexec", everything works fine.  But if I forget to do the former, then the kexec fails.
    Anyway, I'm finding it super frustrating not to be able to control the order of events on shutdown.  There has to be a way to do it, but absolutely nothing I've tried seems to make any difference.  On startup I can obviously control things, so I could have some dummy service that does nothing on start but on stop calls kexec.  But I don't want kexec called on an ordinary reboot or poweroff, only after "systemctl kexec."
    If someone could tell me how to do what I need to do, I would much appreciate it.
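One way to sidestep the shutdown-ordering fight entirely: kexec -l copies the kernel and initramfs into memory, so if the load happens at boot, /boot is no longer needed at shutdown and systemctl kexec simply uses the staged image. A hypothetical template unit along those lines (file name, paths and kexec options are assumptions to adapt):

```
# /etc/systemd/system/kexec-load@.service
[Unit]
Description=Load %i kernel for kexec
Documentation=man:kexec(8)
RequiresMountsFor=/boot

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/kexec -l /boot/vmlinuz-%i --initrd=/boot/initramfs-%i.img --reuse-cmdline

[Install]
WantedBy=multi-user.target
```

Enabled as kexec-load@linux.service, the load runs once per boot; an ordinary reboot or poweroff is unaffected, because only systemctl kexec consults the loaded image.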

