Cannot mount cgroup: already mounted or busy

I am unable to mount cgroups on a fresh boot of a recent installation, fully up-to-date.
# mkdir /cgroups
# mount -t cgroup none /cgroups
mount: none already mounted or /cgroups busy
According to lxc-config, I have cgroups enabled, and `dmesg | grep -i cgroup` confirms that the relevant subsystems are initialized. I am not finding anything added to dmesg.log, kernel.log, everything.log, etc., by the mount command.
Any ideas?
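One thing that may be worth checking before digging through logs: mounting -t cgroup with no -o option asks for every controller in a single hierarchy, and the kernel refuses with exactly this "busy" wording if any controller is already attached to another hierarchy (many distros mount some under /sys/fs/cgroup at boot). A minimal sketch of the checks, with the controller names purely as examples:
# grep cgroup /proc/mounts              # which hierarchies are already mounted, and where
# cat /proc/cgroups                     # available controllers and their hierarchy IDs
# mount -t cgroup -o cpu,memory cgroup /cgroups   # request only the controllers you need
If the controllers you want are already bound elsewhere, reuse that existing mountpoint instead of mounting a second copy.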

Similar Messages

  • Mount fat32 on x86: already mounted or busy

    hi. i suppose mounting fat32 on x86 is well-documented, but i'm having trouble, so i'd appreciate some help.
    i'm dual booting with solaris 10 and xp. my partitions are as follows:
    1st physical drive
    [boot sector] 55 MB
    [c:] label XP (NTFS)
    solaris partitions...
    2nd physical drive
    [d:] label Data (FAT32) (about 17 GB)
    i'd like to mount my FAT32 drive. according to man mount and man pcfs, i thought i'd do something like:
    bash-3.00# mount -F pcfs /dev/dsk/c1t0d0p0:d /mnt/data
    which yields:
    mount: /dev/dsk/c1t0d0p0:d is already mounted or /mnt/data is busy
    there's nothing in the /mnt/data folder. as far as i can tell, the drive isn't mounted. am i naming the wrong device? i'm reading this "device-special:logical drive" stuff and i guess it's not making sense. i've included some more info below.
    hopefully after this i can mount the drive in the vfstab, but if not i suppose i'll be back.
    bash-3.00# iostat -nxp
    extended device statistics
    r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
    3.6 1.0 30.5 4.5 0.0 0.0 2.9 4.9 0 2 c0d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 5.7 0 0 c0d1
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 fd0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t0d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t1d0
    0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 unknown:vold(pid507)
    bash-3.00# mount
    / on /dev/dsk/c0d0s0 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1980000 on Sun Feb 19 20:27:38 2006
    /devices on /devices read/write/setuid/devices/dev=4380000 on Sun Feb 19 20:27:24 2006
    /system/contract on ctfs read/write/setuid/devices/dev=43c0001 on Sun Feb 19 20:27:24 2006
    /proc on proc read/write/setuid/devices/dev=4400000 on Sun Feb 19 20:27:24 2006
    /etc/mnttab on mnttab read/write/setuid/devices/dev=4440001 on Sun Feb 19 20:27:24 2006
    /etc/svc/volatile on swap read/write/setuid/devices/xattr/dev=4480001 on Sun Feb 19 20:27:24 2006
    /system/object on objfs read/write/setuid/devices/dev=44c0001 on Sun Feb 19 20:27:24 2006
    /lib/libc.so.1 on /usr/lib/libc/libc_hwcap1.so.1 read/write/setuid/devices/dev=1980000 on Sun Feb 19 20:27:37 2006
    /dev/fd on fd read/write/setuid/devices/dev=4680001 on Sun Feb 19 20:27:38 2006
    /tmp on swap read/write/setuid/devices/xattr/dev=4480002 on Sun Feb 19 20:27:38 2006
    /var/run on swap read/write/setuid/devices/xattr/dev=4480003 on Sun Feb 19 20:27:39 2006
    /export/home on /dev/dsk/c0d0s7 read/write/setuid/devices/intr/largefiles/logging/xattr/onerror=panic/dev=1980007 on Sun Feb 19 20:27:44 2006
    thanks!!!

    thanks martin
    i think i was trying to mount a SCSI drive instead of the IDE that I have... thanks for clearing that up. i was also frustrated with the :c part, I think I forgot about the 1st partition part.
    i'm under the impression that c0d0p1:c is my first physical drive, which is NTFS because of:
    bash-3.00# mount -F pcfs /dev/dsk/c0d0p1 /mnt/data
    mount: /dev/dsk/c0d1p1:c is not a DOS filesystem.
    but then why would c0d1p1 also yield:
    bash-3.00# mount -F pcfs /dev/dsk/c0d1p1 /mnt/data
    mount: /dev/dsk/c0d0p1, c0d0p2 is not a DOS filesystem.
    then again c0d0p0, c0d0p2, and c0d1p0 yield the same...
    is there a way to list the devices present, or do i have to throw knives at the wind? does anyone understand the naming scheme of cN and dN? man pcfs and man mount don't help a whole lot.
    i don't see anything helpful in the vfstab
    #device device mount FS fsck mount mount
    #to mount to fsck point type pass at boot options
    fd - /dev/fd fd - no -
    /proc - /proc proc - no -
    /dev/dsk/c0d0s1 - - swap - no -
    /dev/dsk/c0d0s0 /dev/rdsk/c0d0s0 / ufs 1 no -
    /dev/dsk/c0d0s7 /dev/rdsk/c0d0s7 /export/home ufs 2 yes -
    /devices - /devices devfs - no -
    ctfs - /system/contract ctfs - no -
    objfs - /system/object objfs - no -
    swap - /tmp tmpfs - yes -
    i also found this script that outputs the devices... but it doesn't see any fat32 partitions =(
    bash-3.00# for f in /dev/rdsk/* ; do echo "$f = $(fstyp $f 2>/dev/null )"; done
    /dev/rdsk/c0d0p0 =
    /dev/rdsk/c0d0p1 =
    /dev/rdsk/c0d0p2 =
    /dev/rdsk/c0d0p3 =
    /dev/rdsk/c0d0p4 =
    /dev/rdsk/c0d0s0 = ufs
    /dev/rdsk/c0d0s1 =
    /dev/rdsk/c0d0s10 =
    /dev/rdsk/c0d0s11 =
    /dev/rdsk/c0d0s12 =
    /dev/rdsk/c0d0s13 =
    /dev/rdsk/c0d0s14 =
    /dev/rdsk/c0d0s15 =
    /dev/rdsk/c0d0s2 =
    /dev/rdsk/c0d0s3 =
    /dev/rdsk/c0d0s4 =
    /dev/rdsk/c0d0s5 =
    /dev/rdsk/c0d0s6 =
    /dev/rdsk/c0d0s7 = ufs
    /dev/rdsk/c0d0s8 =
    /dev/rdsk/c0d0s9 =
    /dev/rdsk/c0d1p0 =
    /dev/rdsk/c0d1p1 =
    /dev/rdsk/c0d1p2 =
    /dev/rdsk/c0d1p3 =
    /dev/rdsk/c0d1p4 =
    /dev/rdsk/c0d1s0 =
    /dev/rdsk/c0d1s1 =
    /dev/rdsk/c0d1s10 =
    /dev/rdsk/c0d1s11 =
    /dev/rdsk/c0d1s12 =
    /dev/rdsk/c0d1s13 =
    /dev/rdsk/c0d1s14 =
    /dev/rdsk/c0d1s15 =
    /dev/rdsk/c0d1s2 =
    /dev/rdsk/c0d1s3 =
    /dev/rdsk/c0d1s4 =
    /dev/rdsk/c0d1s5 =
    /dev/rdsk/c0d1s6 =
    /dev/rdsk/c0d1s7 =
    /dev/rdsk/c0d1s8 =
    /dev/rdsk/c0d1s9 =
    /dev/rdsk/c1t0d0p0 =
    /dev/rdsk/c1t0d0p1 =
    /dev/rdsk/c1t0d0p2 =
    /dev/rdsk/c1t0d0p3 =
    /dev/rdsk/c1t0d0p4 =
    /dev/rdsk/c1t0d0s0 =
    /dev/rdsk/c1t0d0s1 =
    /dev/rdsk/c1t0d0s10 =
    /dev/rdsk/c1t0d0s11 =
    /dev/rdsk/c1t0d0s12 =
    /dev/rdsk/c1t0d0s13 =
    /dev/rdsk/c1t0d0s14 =
    /dev/rdsk/c1t0d0s15 =
    /dev/rdsk/c1t0d0s2 =
    /dev/rdsk/c1t0d0s3 =
    /dev/rdsk/c1t0d0s4 =
    /dev/rdsk/c1t0d0s5 =
    /dev/rdsk/c1t0d0s6 =
    /dev/rdsk/c1t0d0s7 =
    /dev/rdsk/c1t0d0s8 =
    /dev/rdsk/c1t0d0s9 =
    /dev/rdsk/c1t1d0p0 =
    /dev/rdsk/c1t1d0p1 =
    /dev/rdsk/c1t1d0p2 =
    /dev/rdsk/c1t1d0p3 =
    /dev/rdsk/c1t1d0p4 =
    /dev/rdsk/c1t1d0s0 =
    /dev/rdsk/c1t1d0s1 =
    /dev/rdsk/c1t1d0s10 =
    /dev/rdsk/c1t1d0s11 =
    /dev/rdsk/c1t1d0s12 =
    /dev/rdsk/c1t1d0s13 =
    /dev/rdsk/c1t1d0s14 =
    /dev/rdsk/c1t1d0s15 =
    /dev/rdsk/c1t1d0s2 =
    /dev/rdsk/c1t1d0s3 =
    /dev/rdsk/c1t1d0s4 =
    /dev/rdsk/c1t1d0s5 =
    /dev/rdsk/c1t1d0s6 =
    /dev/rdsk/c1t1d0s7 =
    /dev/rdsk/c1t1d0s8 =
    /dev/rdsk/c1t1d0s9 =
    /dev/rdsk/c2t0d0p0 = fstyp: cannot stat or open </dev/rdsk/c2t0d0p0>
    /dev/rdsk/c2t0d0p1 = fstyp: cannot stat or open </dev/rdsk/c2t0d0p1>
    /dev/rdsk/c2t0d0p2 = fstyp: cannot stat or open </dev/rdsk/c2t0d0p2>
    /dev/rdsk/c2t0d0p3 = fstyp: cannot stat or open </dev/rdsk/c2t0d0p3>
    /dev/rdsk/c2t0d0p4 = fstyp: cannot stat or open </dev/rdsk/c2t0d0p4>
    /dev/rdsk/c2t0d0s0 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s0>
    /dev/rdsk/c2t0d0s1 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s1>
    /dev/rdsk/c2t0d0s10 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s10>
    /dev/rdsk/c2t0d0s11 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s11>
    /dev/rdsk/c2t0d0s12 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s12>
    /dev/rdsk/c2t0d0s13 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s13>
    /dev/rdsk/c2t0d0s14 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s14>
    /dev/rdsk/c2t0d0s15 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s15>
    /dev/rdsk/c2t0d0s2 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s2>
    /dev/rdsk/c2t0d0s3 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s3>
    /dev/rdsk/c2t0d0s4 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s4>
    /dev/rdsk/c2t0d0s5 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s5>
    /dev/rdsk/c2t0d0s6 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s6>
    /dev/rdsk/c2t0d0s7 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s7>
    /dev/rdsk/c2t0d0s8 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s8>
    /dev/rdsk/c2t0d0s9 = fstyp: cannot stat or open </dev/rdsk/c2t0d0s9>
    thanks again for your help
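    a quick sketch of the naming that caused the confusion here (the device path below is an example, not necessarily the right one for this box): cN is the controller, tN the SCSI target (IDE disks have no tN), dN the disk, pN the fdisk partition (p0 meaning the whole disk), and sN a Solaris slice. pcfs mounts go through the p0 node plus a logical-drive letter, where :c is the primary DOS partition and :d, :e, ... are logical drives inside an extended DOS partition, so something like:
    bash-3.00# mount -F pcfs /dev/dsk/c0d1p0:c /mnt/data
    if fstyp shows no pcfs on any p device at all, as in the listing above, the FAT32 disk may simply not be visible to this Solaris install, in which case no :letter suffix will help.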

  • "none already mounted or /dev/pts busy" udev 050-2

    Just did a clean install on a new hard drive. Using udev. Haven't made any changes from default, except added "devfs=nomount" to the grub boot line.
    I get this error message during boot up:
    "none already mounted or /dev/pts busy"
    when I issue the "mount" command, I do not see /dev/pts listed. If I try "mount /dev/pts" I get the same complaint: "none already mounted or /dev/pts busy".
    1. What is pts? Why would it be busy?
    2. How do I solve this complaint?
    Thank you,
    John R
    [edit] was directed to bugs.archlinux.org, a report exists there.

    johnr wrote:What is pts? Why would it be busy?
    Is this line in your /etc/fstab?:
    /dev/pts devpts defaults 0 0
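    For reference, a devpts entry normally carries all six fstab fields, and the line quoted above looks like it lost its first (device) column. A typical complete entry would be something like (a sketch, not taken from the poster's system):
    none  /dev/pts  devpts  defaults  0  0
    If an init script already mounts /dev/pts and fstab then tries to mount it again, you get exactly the "none already mounted or /dev/pts busy" complaint; it is harmless, and removing the duplicate mount silences it.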

  • Mount: /dev/sda2 already mounted or /u01 busy

    Installed new OELinux 4.7 with disk partitions but am unable to mount them.
    [oracle@localhost sbin]$ ./fdisk -l (gives no results)
    [root@localhost /]# mount /dev/sda3 /u01
    mount: /dev/sda3 already mounted or /u01 busy
    [root@localhost /]# cd /sbin
    *[root@localhost sbin]# ./fdisk -l*
    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Device Boot Start End Blocks Id System
    /dev/sda1 * 1 19 152586 83 Linux
    /dev/sda2 20 2630 20972857+ 8e Linux LVM
    /dev/sda3 2631 5241 20972857+ 8e Linux LVM
    /dev/sda4 5242 60801 446285700 5 Extended
    /dev/sda5 5242 7852 20972826 8e Linux LVM
    /dev/sda6 7853 10463 20972826 8e Linux LVM
    /dev/sda7 10464 13074 20972826 8e Linux LVM
    *[root@localhost sbin]# df -h*
    Filesystem Size Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol01
    20G 3.5G 16G 19% /
    /dev/sda1 145M 15M 123M 11% /boot
    none 3.0G 0 3.0G 0% /dev/shm
    /dev/mapper/VolGroup00-LogVol02
    20G 78M 19G 1% /home
    [root@localhost /]# mount /dev/sda2 /u01
    mount: /dev/sda2 already mounted or /u01 busy
    [root@localhost /]#

    On a system without LVM, a filesystem is created inside a partition. fdisk is used to list the partitions on the disks. Because the filesystems are inside the partitions, you can use the name of the partition to mount it.
    On a system with LVM, a filesystem is created inside a logical volume, not in a partition. The partitions (fdisk -l) are used as physical volumes (pvdisplay), which are added to a volume group (vgdisplay), in which logical volumes can be created (lvdisplay). The filesystem is created in the logical volume. Because of this, only the logical volumes can be used to mount the filesystems.
    LVM adds an abstraction layer between filesystems and partitions. This is extremely handy because it's easy to add a disk (which becomes a physical volume) to a volume group, which makes space available that can be added to any logical volume in the volume group. When that's done, the filesystem in the logical volume can be enlarged with resize2fs, even online. Without LVM, that is very hard to do at best.
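    In concrete terms, that means mounting the logical volume device rather than /dev/sda2 or /dev/sda3. A minimal sketch (the LV name u01 is an example; the VolGroup00 naming is borrowed from the df output above):
    # pvs                                   # partitions acting as physical volumes
    # vgs                                   # volume groups built from them
    # lvs                                   # logical volumes that actually hold filesystems
    # lvcreate -L 20G -n u01 VolGroup00     # carve out a new LV for /u01 if one doesn't exist yet
    # mkfs -t ext3 /dev/VolGroup00/u01
    # mount /dev/VolGroup00/u01 /u01
    Mounting /dev/sda2 directly fails with "already mounted or busy" precisely because LVM has already claimed the partition as a physical volume.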

  • [SOLVED] mount: /dev/sda3 is already mounted or /mnt busy

    Hello all, I've spent several hours researching this to no avail.  I'm trying to mount my partitions, starting with my root partition (/dev/sda3) mounted at /mnt.  It seems no matter what partition I try to mount, it gives me the same error: mount: /dev/sda3 is already mounted or /mnt busy.  I've tried the fuser command and killed all processes associated with this device, but new processes seem to pop up, making it busy once more.
    Hopefully somebody has some knowledge of what could be causing this one.
    Thanks, and I hope to resurrect an Arch workstation once more!
    -Aaron
    Last edited by guitarxperience (2012-10-09 14:52:21)

    guitarxperience wrote:
    ewaller wrote:
    guitarxperience wrote:
    I'd love to post that, but can you tell me how to post it when I'm running the install on another computer? ...I just don't know if there's a way to route the output to a URL or something; after all, it is connected to the internet. But there is no browser as far as I know.
    Why, yes! 
    community/wgetpaste 2.20-1 [installed]
    A script that automates pasting to a number of pastebin services
    ewaller@odin:~ 1004 %ls -l | wgetpaste
    Your paste can be seen here: https://gist.github.com/3842491
    ewaller@odin:~ 1005 %
    Looks like a groovy tool. Can you tell me how to get it on my installation setup? I don't seem to have apt-get or anything like that available to install things.
    Oh yeah, this is Arch, so I would use pacman. But the community database, along with core and extra, does not exist... when I run pacman.

  • Root.sh diskgroup already mounted in another lock name space

    I have successfully installed oracle grid infrastructure 11.2.0.3 and have created an oracle database on an asm instance on top of it.
    I installed the database as a RAC database on a single node, db-test-mi-1, postponing the addition of a second node until a later time.
    Now I have a second node, db-test-mi-2, that I need to add to the database.
    I have successfully performed
    cluvfy stage -pre nodeadd -n db-test-mi-2 -verbose
    without any errors.
    When I try to extend grid infrastructure to the new node, with
    ./addNode.sh "CLUSTER_NEW_NODES={db-test-mi-2}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={db-test-mi-2-vip}"
    it works fine, but the script /oracle/app/11.2.0/grid/root.sh, from the grid home of the second node, generates an error
    ORA-15032: not all alterations performed
    ORA-15017: diskgroup "DATADG" cannot be mounted
    ORA-15003: diskgroup "DATADG" already mounted in another lock name space
    This is because of course the ASM instance on the first node is locking the disk group.
    How can I extend ASM to the second node??
    Any help greatly appreciated

    Hi,
    Please refer
    Troubleshooting 11.2 Grid Infrastructure root.sh Issues (Doc ID 1053970.1)
    Thanks,
    Rajasekhar

  • Is Windows 7 already mounted on Boot Camp in OS X Mountain Lion

    Hi guys,
    I just wanted to ask: is Windows 7 64-bit already mounted on Boot Camp in OS X Mountain Lion? If it is, do I need to buy a disk for Windows 7, just in case?

    Of course you have to buy it.  And then you have to install it.  Did you really think you'd get it for free?

  • ORA-15003: diskgroup "DATA" already mounted in another lock n

    Recently installed 11.2.02 GRID in an attempt to upgrade an 11.1.0.6 ASM instance.
    GRIDHOME=/u01/app/oracle/product/11.2.0/grid
    The GRID install/ASM upgrade went in fine, but when I set my env variables to point to the new ASM/GRID home and try to start the ASM instance:
    SQL> startup mount;
    ASM instance started
    Total System Global Area 284565504 bytes
    Fixed Size 1299428 bytes
    Variable Size 258100252 bytes
    ASM Cache 25165824 bytes
    ORA-15032: not all alterations performed
    ORA-15003: diskgroup "DATA" already mounted in another lock name space

    SQL> shutdown immediate;
    ORA-15100: invalid or missing diskgroup name
    ASM instance shutdown
    SQL> startup nomount;
    ORA-29701: unable to connect to Cluster Manager
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    [oracle@vastu ~]$ asmcmd
    ASMCMD-08103: failed to connect to ASM; ASMCMD running in non-connected mode
    ASMCMD> exit
    [oracle@vastu ~]$ sqlplus '/ as sysdba'
    SQL*Plus: Release 11.1.0.6.0 - Production on Sat Feb 2

  • Network destination already mounted when start Time Capsule backup after reboot

    Just installed a new Time Capsule to replace my router, and noticed that when I reboot my home machine the initial attempt to back up always fails. (Running OS X 10.7.2 on an early 2011 15-inch MBP.)
    Here is the Time Machine log:
    Starting standard backup
    Attempting to mount network destination URL: afp://[email protected]/Data
    Mounted network destination at mountpoint: /Volumes/Data using URL: afp://[email protected]/Data
    Failed to eject volume /Volumes/Data (FSVolumeRefNum: -105; status: -47; dissenting pid: 0)
    Waiting 60 seconds and trying again.
    Network destination already mounted at: /Volumes/Data
    Failed to eject volume /Volumes/Data (FSVolumeRefNum: -105; status: -47; dissenting pid: 0)
    Waiting 60 seconds and trying again.
    Network destination already mounted at: /Volumes/Data
    Failed to eject volume /Volumes/Data (FSVolumeRefNum: -105; status: -47; dissenting pid: 0)
    Giving up after 3 retries.
    Backup failed with error: 21
    I then get a popup from Time Machine that says "The backup disk image "/Volumes/Data/Wayne's Macbook Pro.sparsebundle" could not be accessed (error -1)."
    I see "Time Machine Backups" on my desktop, and can access the drive via Finder.
    If I use Finder to go into "Volumes" I can eject "Time Machine Backups" and the "Data" directory. If I then reselect the Time Capsule from the Time Machine settings window, the backup works. Backups continue to work until the next time I reboot the machine.
    Here is a log of a good backup run:
    Starting standard backup
    Attempting to mount network destination URL: afp://[email protected]/Data
    Mounted network destination at mountpoint: /Volumes/Data using URL: afp://[email protected]/Data
    QUICKCHECK ONLY; FILESYSTEM CLEAN
    Disk image /Volumes/Data/Wayne's MacBook Pro.sparsebundle mounted at: /Volumes/Time Machine Backups
    Backing up to: /Volumes/Time Machine Backups/Backups.backupdb
    967.8 MB required (including padding), 1.74 TB available
    Copied 1707 files (95.9 MB) from volume Macintosh HD.
    852.7 MB required (including padding), 1.74 TB available
    Copied 614 files (45 KB) from volume Macintosh HD.
    Starting post-backup thinning
    No post-back up thinning needed: no expired backups exist
    Backup completed successfully.
    Ejected Time Machine disk image.
    Ejected Time Machine network volume.
    Is the problem that something is causing the system to mount the Data drive at startup, and then Time Machine tries to mount it again? Any suggestions? I've done some searching in the support forums and FAQs but can't find anything that seems to match my situation. (No doubt due to my inexperience with Time Capsule / Time Machine. <grin>)
    Many thanks for any suggestions and/or links to suggested solutions so I don't have to take any manual actions after a reboot.
    Wayne

    At least if yours is a new Time Capsule you can beat up on Apple Support to get them to fix it.   So far no-one seems to have aroused Apple's interest in this issue.   I expect Apple have judged it less critical than whatever other teething problems have shown up with the launch of iCloud.  Apple are probably overwhelmed.
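    As a stopgap, the manual ejecting described above can be scripted so it does not have to be repeated after every reboot. A rough sketch, assuming the share really is sitting stale at /Volumes/Data as the log shows:
    diskutil unmount /Volumes/Data     # drop the stale mount left over from startup
    tmutil startbackup                 # Time Machine remounts the destination itself
    This only works around the symptom; whatever mounts Data at login (a login item or an auto-mounted Finder favourite) is what actually needs to be removed.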

  • Time machine back up canceled Network destination already mounted

    I need some help here. I have nearly torn out my remaining hair.
    For 5 years I have successfully backed up my iMac, using Time Machine, to a direct-attached WD MyBook 2TB running RAID. The MyBook has run out of space, so I recently acquired a WD My Book Live Duo and am running it as router-attached storage. I have struggled for 2 weeks but backups keep failing. The latest message is as follows: "
    Starting standard backup
    Attempting to mount network destination URL: afp://;AUTH=No%20User%[email protected]/TimeMachine
    Mounted network destination at mountpoint: /Volumes/TimeMachine using URL: afp://;AUTH=No%20User%[email protected]/TimeMachine
    Network destination already mounted at: /Volumes/TimeMachine
    Backup canceled."
    I have tried powering off and on, and reformatting. I have even gone back to the WD MyBook to check that Time Machine is still working, and it picks up just fine.
    Any help appreciated.

    Unfortunately you are pretty much on your own here, because TM is not designed to back up over a network unless, of course, you are using a Time Capsule; otherwise you will have to deal with WD. BTW, while you have had decent luck with your prior WD, many users do not share that experience with OS X: their HDs are fine, but the enclosures WD uses tend to be problematic. My suggestion is to connect the device locally to the iMac, and also to replace the WD with a higher-quality EHD such as an OWC Mercury Elite Pro.

  • X4100 mount command for second drive shows device already mounted

    I have a new x4100 system and I created a new partition and ext3 filesystem on the second drive. When I try to use that partition I get the error:
    mount: /dev/sdb1 already mounted
    Since this is a new system, that partition has never been used and I can't see anywhere that the device is in use. I tried the same thing on other systems and I get the same error. Has anyone seen this problem and if so how was it resolved?
    thanks

    Post Unix/Terminal queries to its forum under OS X Technologies.
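    A few generic checks that often explain a phantom mount like this on a freshly partitioned disk (a sketch; the device names assume the second drive really is /dev/sdb):
    grep sdb /proc/mounts     # is the partition actually mounted somewhere?
    cat /proc/mdstat          # is software RAID holding the disk?
    dmsetup ls                # is device-mapper (dmraid/multipath) claiming it?
    Anything that holds the block device, such as leftover RAID or multipath metadata on a factory-installed drive, makes mount report it as already mounted or busy.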

  • Error Message from SLD - "Selected client already has a business system..."

    I have successfully created and tested a File-to-IDOC scenario from a file system to ERP ECC 6.0 (client 101, system name sapdbt02).
    Right now, I am creating another File-to-IDOC scenario from a different file system to the same ERP ECC 6.0.
    At the SLD, I would need to create two new Technical Systems and two new Business Systems.
    However, when creating a Business System of the ABAP type for the ERP ECC 6.0, after selecting the same client and system name as in the first scenario, I got this error message:
    "Selected client already has a business system associated, please select a new client or system"
    I am puzzled that I would need to configure a new client for the same system ID for another File-to-IDOC scenario
    going to the same ERP system
    Please advise.
    Best Regards
    Freddy Ng

    For example:
    Create a business system for the R3 system in SLD as suggested by Stefan. I assume that you have one R3 system which is either sending or receiving data from XI.
    Assign the system to your scenarios; you can use a common IDOC/RFC/XI channel for sending data to the R3 system.
    A separate R3 business system is required in case you have to send data to a different R3 system.
    If you have different files coming from different file systems, in that case it is better to create different business services and use them.
    chirag

  • Hello. I have a problem with OEL 6.5 and ocfs2. When I mount ocfs2 with mount -a command all ocfs2 partitions mount and work, but when I reboot no ocfs2 partitions auto mount. No error messages in log. I use DAS FC and iSCSI FC.

    Hello.
    I have a problem with OEL 6.5 and ocfs2.
    When I mount ocfs2 with mount -a command all ocfs2 partitions mount and work, but when I reboot no ocfs2 partitions auto mount. No error messages in log. I use DAS FC and iSCSI FC.
    fstab:
    UUID=32130a0b-2e15-4067-9e65-62b7b3e53c72 /some/4 ocfs2 _netdev,defaults 0 0
    #UUID=af522894-c51e-45d6-bce8-c0206322d7ab /some/9 ocfs2 _netdev,defaults 0 0
    UUID=1126b3d2-09aa-4be0-8826-0b2a590ab995 /some/3 ocfs2 _netdev,defaults 0 0
    #UUID=9ea9113d-edcf-47ca-9c64-c0d4e18149c1 /some/8 ocfs2 _netdev,defaults 0 0
    UUID=a368f830-0808-4832-b294-d2d1bf909813 /some/5 ocfs2 _netdev,defaults 0 0
    UUID=ee816860-5a95-493c-8559-9d528e557a6d /some/6 ocfs2 _netdev,defaults 0 0
    UUID=3f87634f-7dbf-46ba-a84c-e8606b40acfe /some/7 ocfs2 _netdev,defaults 0 0
    UUID=5def16d7-1f58-4691-9d46-f3fa72b74890 /some/1 ocfs2 _netdev,defaults 0 0
    UUID=0e682b5a-8d75-40d1-8983-fa39dd5a0e54 /some/2 ocfs2 _netdev,defaults 0 0

    What is the output of:
    # chkconfig --list o2cb
    # chkconfig --list ocfs2
    # cat /etc/ocfs2/cluster.conf
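    The reason those three outputs matter: on OEL/RHEL 6, filesystems flagged _netdev in fstab are not mounted by the early mount -a but by the netfs init script, after the network and the o2cb cluster stack are up. A hedged sketch of the usual enablement (standard ocfs2-tools service names; verify against your install):
    # chkconfig o2cb on
    # chkconfig ocfs2 on
    # chkconfig netfs on
    # service o2cb configure     # choose to load and start the cluster stack on boot
    If netfs or o2cb is not enabled, the _netdev mounts are simply never attempted at boot, which would match the "no error messages in the log" symptom.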

  • Integration server already has a business system

    Dear All,
    I got a problem while creating the sender business system for a FILE-IDOC scenario. I have created the r3_800 business system for the IDOC receiver with logical system name APOCLNT800. After that I tried to create a business system for the FILE sender with logical system name XI7CLNT001. But I am getting the error "The selected integration server already has a business system with logical name XI7CLNT001. Select a different integration server or change the logical name."
    Do I need to create another logical system?
    Any help would be appreciated. Thanks in advance.
    Regards
    Vinay

    In addition to my previous post, refer to the following blogs. They will help you understand how ABAP-based systems in the landscape are already registered in SLD during post-installation. Just check whether the BS you are trying to create in SLD is already there.
    /people/michal.krawczyk2/blog/2005/03/10/registering-a-new-technical-system-in-sld--abap-based
    /people/praveen.kurni3/blog/2010/10/15/creating-a-technical-system-of-type-was-abap-in-sld133-are-we-sure-about-the-objects-that-get-created

  • "Your request cannot be processed. Contact your Business Objects ....."

    Can anyone help with this error message?
    Description:
    : URL: /PerformanceManagement/jsp/dm-getpage.jsp? url=AfQJDRWQ3WNOtAoCRk5MQ4U
    Stack Trace: {"Error":{"error":"null","PageError1":"Your request cannot be processed. Contact your Business Objects administrator.","PageError6":"Description: ","PageError9":"Error: ","PageError8":"Stack Trace: ","stacktrace":"","desc":"URL: /PerformanceManagement/jsp/dm-getpage.jsp? url=AfQJDRWQ3WNOtAoCRk5MQ4U","type":4}}

    Hi,
    I'm on a 3.1 SP7 version, and sometimes I face this problem.
    I'm not sure, but when I have many users exporting reports to Excel, this problem is more likely.
    I read a workaround, but I didn't try it (because my problem is intermittent):
    - Increase the cache size in the initconfig.properties file up to 4096 (500 by default)
    If this step doesn't work, then try:
    - Increase the stack size of the Performance Management servers, changing (command line) -JTCss 1024 (default value) to -JTCss 2048
    Another observation: this problem only appears in our environment with IE8; if we try with Firefox the behaviour is fine.
