Sapmnt mount point not available

Hi All
Need some quick inputs.
We have a distributed environment where SAP is on Windows 2003 and the database is on Linux.
After changing the IP on both systems, the Windows server came up fine, but on the Linux DB server the sapmnt mount point was not.
While accessing sapmnt it was showing ?? ?? in place of the file permissions and user/group.
Unfortunately we have no backup of sapmnt on Linux.
I guess sapmnt is the source and there are soft links to /usr/sap/SID.
Can anybody let me know how to proceed here?
Strangely, the systems were running fine when I started SAP.
Regards
Ajay
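Since /usr/sap/SID on the Linux side is built from soft links into sapmnt, "??" in place of owner and permissions usually means the links' target has become unreachable (a stale mount after the IP change). As a minimal sketch, assuming nothing about the real system: the paths below are throwaway stand-ins created in a temp directory, not actual SAP paths, but they show how such dangling links can be detected.

```python
# Sketch: detect dangling symlinks, the way /usr/sap/<SID> entries dangle
# when the sapmnt share behind them becomes unreachable. All paths below
# are hypothetical stand-ins built in a temp directory.
import os
import tempfile

def dangling_links(root):
    """Return all symlinks under root whose targets no longer resolve."""
    bad = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            p = os.path.join(dirpath, name)
            if os.path.islink(p) and not os.path.exists(p):
                bad.append(p)
    return bad

with tempfile.TemporaryDirectory() as tmp:
    exe = os.path.join(tmp, "sapmnt", "SID", "exe")
    os.makedirs(exe)
    link = os.path.join(tmp, "usr_sap_SID_exe")
    os.symlink(exe, link)
    assert dangling_links(tmp) == []   # share reachable: link resolves
    os.rmdir(exe)                      # simulate the share going away
    assert dangling_links(tmp) == [link]
```

On the real host the equivalent check is `ls -lL /usr/sap/SID` after confirming the share is mounted; remounting sapmnt (and fixing the mount source after the IP change) normally restores the listing without touching the links themselves.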

HI All,
Not sure about Samba; actually, DNS is not configured here.
Now I am facing a strange issue. I copied the exe files from a similar server with the same application and kernel, and copied the global and profile directories from the Windows system to the Linux DB server.
But SAP didn't start with this.
I reverted the changes; the systems are all running except the Java and PI systems.
Here is the log for PI:
dev_ms
[Thr 11604] Sun Jul 04 23:41:30 2010
[Thr 11604] ms/http_max_clients = 500 -> 500
[Thr 11604] MsSSetTrcLog: trc logging active, max size = 52428800 bytes
systemid   562 (PC with Windows NT)
relno      7110
patchlevel 0
patchno    37
intno      20020600
make       multithreaded, Unicode, 64 bit, optimized
pid        13540
[Thr 11604] ***LOG Q01=> MsSInit, MSStart (Msg Server 1 13540) [msxxserv.c   2174]
[Thr 11604] ***LOG Q0I=> NiIBindSocket: bind (10013: WSAEACCES: Permission denied) [nixxi.cpp 3587]
[Thr 11604] *** ERROR => NiIBindSocket: SiBind failed for hdl 25/sock 1536
    (SI_EPORT_INUSE/10013; I4; ST; 0.0.0.0:3907) [nixxi.cpp    3587]
[Thr 11604] *** ERROR => MsSCommInit: NiBufListen(3907) (rc=NIESERV_USED) [msxxserv.c   11627]
[Thr 11604] *** ERROR => MsSInit: MsSCommInit (internal) [msxxserv.c   2233]
[Thr 11604] *** ERROR => MsSInit failed, see dev_ms for details
[msxxserv.c   7114]
[Thr 11604] ***LOG Q02=> MsSHalt, MSStop (Msg Server 13540) [msxxserv.c   7173]
=============
dev_disp
sysno      07
sid        DP1
systemid   562 (PC with Windows NT)
relno      7110
patchlevel 0
patchno    44
intno      20020600
make       multithreaded, Unicode, 64 bit, optimized
profile    G:\usr\sap\DP1\SYS\profile\DP1_DVEBMGS07_Mngnetmcdev
pid        11636
Sun Jul 04 23:41:31 2010
kernel runs with dp version 117000(ext=117000) (@(#) DPLIB-INT-VERSION-117000-UC)
length of sys_adm_ext is 584 bytes
SWITCH TRC-HIDE on ***
***LOG Q00=> DpSapEnvInit, DPStart (07 11636) [dpxxdisp.c   1248]
     shared lib "dw_xml.dll" version 44 successfully loaded
     shared lib "dw_xtc.dll" version 44 successfully loaded
     shared lib "dw_stl.dll" version 44 successfully loaded
     shared lib "dw_gui.dll" version 44 successfully loaded
     shared lib "dw_mdm.dll" version 44 successfully loaded
     shared lib "dw_rndrt.dll" version 44 successfully loaded
     shared lib "dw_abp.dll" version 44 successfully loaded
     shared lib "dw_sym.dll" version 44 successfully loaded
rdisp/softcancel_sequence :  -> 0,5,-1
use internal message server connection to port 3907
rdisp/dynamic_wp_check : 1
rdisp/calculateLoadAverage : 1
Sun Jul 04 23:41:43 2010
WARNING => DpNetCheck: NiHostToAddr(www.doesnotexist0134.qqq.nxst) took 12 seconds
Sun Jul 04 23:42:00 2010
WARNING => DpNetCheck: NiAddrToHost(1.0.0.0) took 17 seconds
***LOG GZZ=> 2 possible network problems detected - check tracefile and adjust the DNS settings [dpxxtool2.c  6385]
MtxInit: 30000 0 0
DpSysAdmExtInit: ABAP is active
DpSysAdmExtInit: VMC (JAVA VM in WP) is active
DpIPCInit2: write dp-profile-values into sys_adm_ext
DpIPCInit2: start server >Hostname_DP1_07                      <
DpShMCreate: sizeof(wp_adm)          42864     (2256)
DpShMCreate: sizeof(tm_adm)          5796848     (28840)
DpShMCreate: sizeof(wp_ca_adm)          64000     (64)
DpShMCreate: sizeof(appc_ca_adm)     64000     (64)
DpCommTableSize: max/headSize/ftSize/tableSize=500/16/584064/584080
DpShMCreate: sizeof(comm_adm)          584080     (1144)
DpSlockTableSize: max/headSize/ftSize/fiSize/tableSize=512/48/65600/90416/156064
DpShMCreate: sizeof(slock_adm)          156064     (104)
DpFileTableSize: max/headSize/ftSize/tableSize=3800/16/364864/364880
DpShMCreate: sizeof(file_adm)          364880     (80)
DpShMCreate: sizeof(vmc_adm)          40896     (2152)
DpShMCreate: sizeof(wall_adm)          (41664/36752/64/192)
DpShMCreate: sizeof(gw_adm)     48
DpShMCreate: sizeof(j2ee_adm)     3936
DpShMCreate: SHM_DP_ADM_KEY          (addr: 000000000AB30050, size: 7210208)
DpShMCreate: allocated sys_adm at 000000000AB30060
DpShMCreate: allocated wp_adm_list at 000000000AB330C0
DpShMCreate: allocated wp_adm at 000000000AB332B0
DpShMCreate: allocated tm_adm_list at 000000000AB3DA30
DpShMCreate: allocated tm_adm at 000000000AB3DA80
DpShMCreate: allocated wp_ca_adm at 000000000B0C4E80
DpShMCreate: allocated appc_ca_adm at 000000000B0D4890
DpShMCreate: allocated comm_adm at 000000000B0E42A0
DpShMCreate: allocated slock_adm at 000000000B172C40
DpShMCreate: allocated file_adm at 000000000B198DF0
DpShMCreate: allocated vmc_adm_list at 000000000B1F1F50
DpShMCreate: allocated vmc_adm at 000000000B1F2000
DpShMCreate: allocated gw_adm at 000000000B1FBFD0
DpShMCreate: allocated j2ee_adm at 000000000B1FC010
DpShMCreate: allocated ca_info at 000000000B1FCF80
DpShMCreate: allocated wall_adm at 000000000B1FCFA0
DpCommAttachTable: attached comm table (header=000000000B0E42A0/ft=000000000B0E42B0)
DpSysAdmIntInit: initialize sys_adm
rdisp/test_roll : roll strategy is DP_NORMAL_ROLL
dia token check not active (10 token)
MBUF state OFF
DpCommInitTable: init table for 500 entries
DpFileInitTable: init table for 3800 entries
DpSesCreateTable: created session table at 000000000F6A0050 (len=161328)
DpRqQInit: keep protect_queue / slots_per_queue 0 / 2001 in sys_adm
rdisp/queue_size_check_value :  -> off
EmInit: MmSetImplementation( 2 ).
MM global diagnostic options set: 0
<ES> client 0 initializing ....
<ES> InitFreeList
<ES> block size is 4096 kByte.
<ES> Info: em/initial_size_MB( 32762MB) not multiple of em/blocksize_KB( 4096KB)
<ES> Info: em/initial_size_MB rounded up to 32764MB
Using implementation view
<EsNT> Using memory model view.
<EsNT> Memory Reset disabled as NT default
<ES> 8190 blocks reserved for free list.
ES initialized.
DpVmcSetActive: set vmc state DP_VMC_ENABLED
DpVmcSetActive: set vmc state DP_VMC_ACTIVE
DpVmcInit2: o.k.
MPI: dynamic quotas disabled.
MPI init: pipes=4000 buffers=1279 reserved=383 quota=10%
J2EE server info
  start = TRUE
  state = STARTED
  pid = 7736
  argv[0] = G:\usr\sap\DP1\DVEBMGS07\exe\jstart.EXE
  argv[1] = G:\usr\sap\DP1\DVEBMGS07\exe\jstart.EXE
  argv[2] = pf=G:\usr\sap\DP1\SYS\profile\DP1_DVEBMGS07_Hostname
  argv[3] = -DSAPSTART=1
  argv[4] = -DCONNECT_PORT=64997
  argv[5] = -DSAPSYSTEM=07
  argv[6] = -DSAPSYSTEMNAME=DP1
  argv[7] = -DSAPMYNAME=Hostname_DP1_07
  argv[8] = -DSAPPROFILE=G:\usr\sap\DP1\SYS\profile\DP1_DVEBMGS07_Hostname
  argv[9] = -DFRFC_FALLBACK=ON
  argv[10] = -DFRFC_FALLBACK_HOST=localhost
  start_lazy = 0
  start_control = SAP J2EE startup framework
DpJ2eeStart: j2ee state = STARTED
rdisp/http_min_wait_dia_wp : 1 -> 1
***LOG CPS=> DpLoopInit, ICU ( 3.4 3.4 4.1) [dpxxdisp.c   1635]
***LOG Q1K=> MsIAttachEx: StoC check failed, Kernel not compatible with system (rc=5) [msxxi.c      814]
ERROR => not allowed to connect to message server via port 3907 [dpxxdisp.c   12156]
ERROR => Please check your configuration (profile parameter rdisp/msserv_internal) [dpxxdisp.c   12157]
DpHalt: shutdown server >Hostname_DP1_07                      < (normal)
DpIJ2eeShutdown: send SIGQUIT to SAP J2EE startup framework (pid=7736)
ERROR => DpProcKill: kill failed [dpntdisp.c   408]
DpIJ2eeShutdown: j2ee state = SHUTDOWN
DpHalt: stop work processes
Sun Jul 04 23:42:06 2010
DpHalt: stop gateway
DpHalt: stop icman
DpHalt: terminate gui connections
DpHalt: wait for end of work processes
DpHalt: wait for end of gateway
DpHalt: waiting for termination of gateway ...
Sun Jul 04 23:42:07 2010
ERROR => [DpProcDied] Process died  (PID:17280  HANDLE:1112  rc:0x0) [dpnttool2.c  147]
DpHalt: wait for end of icman
DpHalt: waiting for termination of icman ...
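The dev_ms trace above fails to bind port 3907 (SI_EPORT_INUSE on the rdisp/msserv_internal port), and dev_disp then refuses to attach to the message server. A quick, portable way to check whether a TCP port is actually free is simply to try binding it; this is only a sketch of that check, not SAP tooling.

```python
# Check whether a TCP port (e.g. 3907, the internal message-server port
# from the trace above) can currently be bound on this host.
import socket

def port_is_free(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True        # bind succeeded: nothing holds the port
        except OSError:
            return False       # bind failed: in use (or blocked by policy)

# Demo: an ephemeral listener makes its own port busy.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
assert port_is_free(srv.getsockname()[1]) is False
srv.close()
```

On Windows, `netstat -ano | findstr 3907` shows which PID holds the port; note that WSAEACCES can also be returned when a firewall or reserved port range blocks the bind, rather than a plain port conflict.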

Similar Messages

  • USB flash disk mount points not created [SOLVED]

    I am having a problem where my USB flash disk is not always being mounted after being detected by KDE. I have just installed Arch 2007.05, updated it and installed KDEmod. Here's what happens when I plug in a device.
    1. I insert the device, KDE detects the device and shows the KDE Mount Daemon dialog. "Open in New Window" or "Do Nothing" are the two available options.
    2. At this point, my device shows up in system:/media (in Konqueror), but it is not mounted. When I place the cursor over the device in Konqueror a popup with the device's label and the text "Unmounted Removable Medium" appears. There is no corresponding directory in the /media directory in my file system.
    3. In the KDE Daemon window I select "Do Nothing" by either double-clicking it or selecting it and clicking OK.
    4. My device does not get mounted. This is annoying since most of the time I just want it mounted so I can do some command-line work.
    If in the KDE Daemon window if I select "Open in New Window" the device is mounted and the corresponding directory appears in my /media directory. Also, if I select "Do Nothing" but then later navigate into the contents of the device in Konqueror, at that point the device is mounted and the corresponding directory appears in my /media directory. When my device is plugged in, but not mounted there is a "Mount" option in the right-click context menu, but I want it to automatically mount.
    On my other machine with openSUSE, the drive is mounted even if you choose "Do Nothing" in the KDE Daemon window. That way seems more correct, because if I plug a device in, I most likely want it mounted. (I know there are some cases where this might not be true, but those cases are certainly not the normal use cases.) Now I have to open Konqueror just to get the drive to mount. It seems that the drive should be mounted no matter what you choose in the KDE Daemon window.
    Now, is this normal KDE or Arch behavior or is it something that can be adjusted or fixed?
    Thanks in advance.
    Last edited by jbromley (2007-10-22 07:22:08)

    Turing - imagine that! For some reason it never occurred to me to check the properties of the unmounted volume. Well, you learn something every day.
    In the meantime, I decided that maybe "Do Nothing" should do just that and I should perhaps add a service to mount the drive, but not bring up any windows. Here's what I did.
    1. Create a mount_vol.sh shell script with the following contents.
    #!/bin/bash
    udi=$(dcop kded mediamanager properties "$1" 2>/dev/null | head -n 1)
    if test -n "$udi"; then
        dcop kded mediamanager mount "$udi" >/dev/null 2>&1
    fi
    2. chmod +x mount_vol.sh.
    3. Move the script somewhere like /usr/local/bin.
    4. Plug in some removable device, in the KDE Daemon window click "Configure..."
    5. In the KDE Control Module dialog that appears click "Add"
    6. Give the service a name like "Mount Removable Medium", click the X icon to select a service icon, select "Unmounted Removable Medium" under "Available medium types" and click the right arrow. For the command use "/usr/local/bin/mount_vol.sh %u".
    7. Click OK.
    Now when the KDE Daemon action dialog appears, there will be the option to just mount it. You can of course use this same script for other medium types.
    Regards.

  • Numeric Pointer not available

    Dear all,
    I am trying to configure my generic DataSource for delta update through a numeric pointer, but only timestamp and calendar day are available.
    Does anybody out there know this case?
    Many thanks in advance.
    Regards
    Tobias

    A numeric data pointer is a field in your table that contains sequential numbers.  The pointer works by noting down the last pointer value transferred.  The next delta takes the values appended to the table since the last pointer value.
    If you have a timestamp and/or date field, that would be better because they are already there (you don't have to change the table). If you have a timestamp field (not a time field but a time-and-date field), then I would use that, since it is the most accurate.
    Brian
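    Brian's description can be sketched in a few lines. This is illustrative logic only, not SAP extractor code: the delta notes the highest counter value transferred, and the next run fetches only rows above it.

```python
# Toy model of a numeric-pointer delta: rows carry a monotonically
# increasing 'counter'; each run takes only rows above the saved pointer.
def delta_extract(table, last_pointer):
    new_rows = [r for r in table if r["counter"] > last_pointer]
    new_pointer = max((r["counter"] for r in new_rows), default=last_pointer)
    return new_rows, new_pointer

rows = [{"counter": 1}, {"counter": 2}, {"counter": 3}]
batch, pointer = delta_extract(rows, 1)          # last run stopped at 1
assert [r["counter"] for r in batch] == [2, 3]   # only the appended rows
assert pointer == 3                              # noted for the next run
```

    The same shape applies to timestamp or calendar-day pointers; only the comparison field changes.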

  • Show focus points not available

    My pictures used to all have the option of showing the focus points in Aperture. I would click on the icon for focus points to toggle it on and off.
    Starting last year, the icon for the focus points is not even there under the Info tab (same camera and lenses). I just received a new camera body, but same lenses.
    Pictures still do not have the icon for focus points. Going to the menu and choosing "Show Focus Points" does no good because the icon is not there (but the icons for jpeg/raw, metering, and WB are there). If I go to some old pictures, the icon is there.
    I re-downloaded the program, trashed the preference files, etc.
    Cameras: Canon 40D, Canon 7D Mk II. Lenses: Canon 17-55 f/2.8 and Canon 70-200 f/4, non-IS.
    Any help would be appreciated.
    Terry

    I just checked one of my photos in Aperture for you... it is the only reason I've kept Aperture, to be honest! Mine still works on my raw photo, which is from a 5D Mk III. I'm in the process, though, of moving all my photos to external hard drives and only keeping current photos on the laptop due to storage issues. I'm also switching to Bridge, ACR and PSCC. But when I open projects from Aperture off of the hard drive, I get a message that says I need to upgrade the library, and it opens in Aperture. I don't want to be bound to Aperture anymore. What if I don't have Aperture on my next Mac? So now how do I get my photos unhostaged? Sorry about the rant... just very frustrated if I have to go through some long drawn-out process to get this straightened out.

  • Name : leopard Type : Volume  Disk Identifier : disk1s2 Mount Point

    Early in October, after installing SL and the first upgrade, I began experiencing colour flashes on transitions on slideshow projects created in iMovie '09. I contacted Apple Support and began working with one of the technicians. She reported that after viewing a short sample slideshow/movie that I sent her, Apple engineers concluded that something in SL was interfering with the video card. So far, no fix from Apple.
    One workaround suggested by a reader on these support discussions was to install Leopard on an external drive and work with iMovie from there. My contact person helped me connect and set up the external drive. She also talked me through steps that allowed me to continue using iMovie '09, my photos in iPhoto, and the iTunes Music Library on my internal drive to build my slideshows and have all of this 'show up' on the external drive. Really, all I was doing on the external drive was using iMovie to export my project and create a *.mov file.
    Until three days ago, I hadn't tried to access the external drive for about a month and I found that I couldn't. I called Apple Support again for help. In the end, the person I was working with suggested that he would get in touch with my first contact to see if written instructions could be sent to me so that I could repeat her process. At the same time, he suggested I ask for help here. When I first noticed the problem, the icon for the external drive was on my desktop; today it isn't. Below is the information that is relevant (I hope).
    *+From Disc Utility: Information on External Drive+*
    Name : leopard
    Type : Volume
    Disk Identifier : disk1s2
    Mount Point : Not mounted
    File System : Mac OS Extended (Journaled)
    Connection Bus : USB
    Device Tree : IODeviceTree:/PCI0@0/USB7@1D,7
    Writable : Yes
    Universal Unique Identifier : CED756E6-8A72-33BB-B79B-469908923E54
    Capacity : 999.16 GB (999,157,620,736 Bytes)
    Owners Enabled : No
    Can Turn Owners Off : Yes
    Can Be Formatted : Yes
    Bootable : Yes
    Supports Journaling : Yes
    Journaled : No
    Disk Number : 1
    Partition Number : 2
    *+Information from First Aid/Repair Disc+*
    Invalid node structure
    The volume leopard could not be verified completely.
    Error: Disk Utility can’t repair this disk. Back up as many of your files as possible, reformat the disk, and restore your backed-up files.
    *+Information at bottom of Disc Utility Window:+*
    Mount Point : Not mounted Capacity : 999.16 GB (999,157,620,736 Bytes)
    Format : Mac OS Extended (Journaled) Available : -
    Owners Enabled : - Used : -
    Number of Folders : - Number of Files : -
    I have no idea what is wrong or what to do now. I do know that the first thing is to reformat the drive, but I don't know how to do that. The second thing will be to install all the upgrades so that both internal and external drives are exactly the same - that I can manage! The third thing is to figure out how to get back to the original set up.
    If anyone can help or point me to sites that will tell me how to do any or all of this, I will be most appreciative.

    {quote:}I wanted help with two things: (1) getting the external HD back up and (2) reestablishing the connections that had been made so that iMovie, iPhoto, and iTunes pointed back to those applications and libraries on the internal HD. In other words, before I lost the connection to the external HD, when I clicked on iMovie on the external HD, what opened was iMovie on the internal HD, with all of my projects available. Same with the other two.{quote}
    To be honest I'm not sure why you were opening applications from the external HD and pointing them to the library on the internal HD. It seems like it would be a lot simpler and easier just to open and run everything right from the internal HD. Some folks will open and run the application from their internal HD and then point it to the library on the external HD, but you need to be very careful doing so, because if the external HD fails, you lose your data.
    {quote:}Until about 3 hours ago, I couldn't use Disk Utility to erase the external HD and start over because the icon wasn't showing in the Devices panel. I have been at this all day and finally the icon did appear and I erased the external drive. After 4 attempts to reinstall Leopard OS, that, too finally worked. I have now reinstalled iLife '09 and all of the necessary upgrades.{quote}
    Not sure why you lost the connection to the external HD; it could be just a bad connection, or an early warning sign of a failing drive?
    {quote:}I knew that if I erased the external HD I would lose all of the connections I had between the two drives. That was why I was hoping for a way to fix it without erasing. So now I need help with (2) above: reestablishing those connections. I'm afraid I don't understand partitioning - what it is or what it does. If you have time to explain it to me, I will certainly give it a try.{quote}
    I think you need to run everything from your internal HD and only use the external HD as a backup drive, either with Time Machine or, on the great advice of elmac, using SuperDuper.
    As for partitioning and formatting, I can give you some articles to study, but I would suggest not fussing or experimenting unless you had a second external HD to play around with.
    Dennis

  • Can I used mount point with AlwaysOn

    Hi,
    I have a Windows failover cluster with 10 database drives due to the nature of the design. I need to migrate to AlwaysOn, which is fine. Can I take advantage of mount points after configuring AlwaysOn?
    Thanks you. 

    Stan,
    Hope you are having a good day so far
    Thanks, long day - just getting back.
    the way I interpreted the question "the WSFC has 10 database drives" - I assumed that it implies that the drives are clustered.
    I don't know if they are on shared storage or not. WSFCs can use local storage for certain things, such as alwayson availability groups, tempdb, etc.
    so, let's say the WSFC has two nodes, NODEA and NODEB, and the mounted drives are CLUSTERED (drives G and H).
    Ok, so base drives and mount points off of the base. Got it.
    Node A has a stand-alone SQL instance with database files on drives G and H. In a scenario where Node A is restarted, the drives will fail over to Node B and stay there.
    Mmmmmm.... It depends on what the potential owner of the nodes are for the drives. I also wouldn't use clustered drives for local instances - just my opinion.
    But when Node A comes back up, SQL Server cannot start, since it will be missing the G and H drives.
    Then I wouldn't allow the drives to fail over. To be honest, in this scenario I also wouldn't have them as clustered drives. Setting their possible owners to only be node A will in essence make them always be on node a.
    so, installing a stand-alone SQL instance on clustered drives is not a good solution (I do not know if SQL would allow that in the first place). Of course, shared storage or regular mount points (not clustered) should be fine.
    Bingo.
    -Sean

  • R3load cannot export more than 100 mount points for Oracle?

    We have a DB with more than 390 sapdata### mount points (HP-UX PA-RISC). They are truly mount points, NOT directories under some other mount point.
    After the export using R3load (i.e., no DB-specific method), the keydb.xml generated for the import contains only sapdata1 to sapdata100.
    Is there any limit here for R3load?
    Thanks!

    R3load doesn't copy the filesystem structure but unloads the content of the database after having checked the size of it and then distributes it across the files.
    Why do you have so many different mountpoints? Is there a technical reason behind? Just curious...
    Markus

  • Mount Points in Windows 2012 R2 Cluster not displaying correctly

    Hi,
    Try as I might, I can't get Mount Points displayed properly in a Windows 2012 R2 cluster.
    In Windows 2008 R2, I added them, and they appear under Storage, as for Example, Cluster Disk 7: Mounted Volume (S:\SYSDB). (I may have had to bring them offline/online).
    In Windows 2012 R2, they are showing up as, for example, '\\?\Volume{7c636157-e7e9-11e4-80dc0005056873123}'
    In the error log it shows up as :
    Cluster disk resource 'Cluster Disk 7' contains an invalid mount point. Both the source and target disks associated with the mount point must be clustered disks, and must be members of the same group.
    Mount point 'SYSDB\' for volume '\\?\Volume{7c636106-e7e9-11e4-80dc-005056873123}\' references an invalid target disk. Please ensure that the target disk is also a clustered disk and in the same group as the source disk (hosting the mount point).
    Now I've checked the error, and in
    https://technet.microsoft.com/en-au/library/dd353925(v=ws.10).aspx it says
    "The mounted disk and the disk it is mounted onto must be part of the same clustered service or application. They cannot be in two different clustered services or applications, and they cannot be in the general pool of Available Storage in the cluster."
    So I have created an 'Other Server' role. When I right-click on the role and go to 'Add Storage', Cluster Disk 6 (the root volume) displays S:\, and Cluster Disk 9 (hosting the mount point) says Mount Point(s): (S:\SYSDB). I select both and add them, but alas, the mount point still shows up as '\\?\Volume{7c636106-e7e9-11e4-80dc-005056873123}\', not S:\SYSDB or Cluster Disk 6: Mounted Volume (S:\SYSDB), as I would expect.
    They are both clustered disks (iSCSI). I would expect that when it says in the "same group", both being added to the same role would put them in the same group.

    Hi,
    Thank you for your response. That's (sort of) good to know, but it seems to be a step backwards from Windows 2008 R2, where you would actually have the meaningful Mounted Volume: (S:\SYSDB) displayed, instead of the meaningless '\\?\Volume{7c636157-e7e9-11e4-80dc0005056873123}' GUID. Obviously, before you do anything, you need to cross-reference the disk number in 'Disk Management'; it would be better if it was displayed correctly in Failover Cluster Manager in the first place.
    Secondly, the GUID is somewhat misleading. In Windows 2008 R2, for example, it appears as though the same GUID was displayed on each node (e.g. using Mountvol.exe). In Windows 2012 R2, it appears as though different GUIDs are displayed on each node, e.g.
    Node 1.
    \\?\Volume{7c6368a4-e7e9-11e4-80dc-005056873123}\
            S:\SYSDB\
    Node 2.
    \\?\Volume{97cc0d34-e7e9-11e4-80db-0050568724c4}\
            S:\SYSDB\
    But the GUID in Failover Cluster Manager remains the same (you can't really cross-reference what you see in FCM with Mountvol).
    Strangely enough, when I check the registry in 'MountedDevices' on Node 1, both of the GUIDs are displayed (even though only one is displayed in MountVol.exe), referencing the same disk ID listed in Diskpart.exe. I can see this mentioned in
    https://support.microsoft.com/en-us/kb/959573, where it says:
    "A volume can have multiple unique volume names (and thus multiple GUIDs) when it is used by multiple running installations of Windows. This could happen in the following scenarios and in similar scenarios where multiple installations of Windows have accessed the volume:
    Using a volume on a shared disk in a cluster where multiple nodes have accessed the volume."
    Oh well, that's progress I guess.

  • Installing Oracle RAC problem: Could not determine /tmp mount point

    Folks,
    Hello. I am installing Oracle 11gR2 RAC using 2 Virtual Machines (rac1 and rac2 whose OS are Oracle Linux 5.6) in VMPlayer and according to the tutorial
    http://appsdbaworkshop.blogspot.com/2011/10/11gr2-rac-on-linux-56-using-vmware.html
    I am installing the Grid Infrastructure. I am on step 7 of 10 (verify Grid installation environment) and get this error:
    "Free Space: Rac2: /tmp"
    Cause: Could not determine mount point for location specified.
    Action: Ensure location specified is available.
    Expected value: n/a
    Actual value: n/a
    I have checked the free space using the command:
    [root@Rac2 /]# df -k /tmp
    Output:
    Filesystem     1k-blocks     used     Available     Use%     Mounted on
    /dev/sda1     30470144     7826952     21070432     28%     /
    As you see above, the free space is enough, but the installer could not determine the mount point for /tmp.
    Do any folks understand how to determine the mount point for directory /tmp?
    Thanks.

    Folks,
    Hello. I have found the file .bash_profile under /home/ora11g/; the file is hidden. However, I edited the file using the command:
    [ora11g@Rac2 ~]$ vi /home/ora11g/.bash_profile
    I add the 2 lines into the file as below:
    TMP=/tmp; export TMP
    TMPDIR=$TMP; export TMPDIR
    I save the file .bash_profile and reboot Oracle Linux 5.6 and check again on step 7 of 10 in the Installer, but the problem is still not solved.
    Can any folk help to solve the strange problem "could not determine mount point for /tmp" ?
    Thanks.
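    For the question above ("how to determine the mount point for /tmp"), the mount point of a path can be found by walking upward until os.path.ismount() is true, which is essentially why `df` maps /tmp to / in the output shown. A small sketch of that idea (not OUI's own check):

```python
# Walk up from a path until we hit a mount point (or the root).
import os

def mount_point(path):
    path = os.path.realpath(os.path.abspath(path))
    while not os.path.ismount(path):
        parent = os.path.dirname(path)
        if parent == path:      # reached the filesystem root
            break
        path = parent
    return path

# /tmp is either its own mount or, as in the df output above, lives on /.
assert os.path.ismount(mount_point("/tmp"))
```

    Note that TMP/TMPDIR in .bash_profile only tell the installer where to put scratch files; they don't change which filesystem /tmp is mounted on.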

  • DPM 2012 R2 All backups fail with recovery point volume not available after resizing OS disk

    I resized the C drive partition of my DPM server (the data is on a separate dedicated array) and everything failed. I put it back but everything is still failing with the same problem as here:
    DPM 2007: The recovery point volume is not available?
    I've run chkdsk for a couple and both have given the following after doing 3 stages:
    Windows has scanned the file system and found no problems.
    No further action is required.
    The backups still fail and the only error information I can find in the event logs is:
    Backup job for datasource: Online\<VM name> on production server: <host FQDN> failed.
    How can I find out what the problem actually is?
    Is it possible to resize the C drive containing a DPM install? If so what can it be safely resized to? It easily met the minimum requirements and I don't know of anything in the documentation that says resizing the OS disk may cause issues.
    Preparing your environment for System Center 2012 R2 Data Protection Manager (DPM)

    Hi,
    Resizing the boot partition (usually C:) should not affect DPM in any way. DPM writes directly to its volumes contained in the storage pool and not through the mount points on the C: drive. So is the DPM UI showing a missing volume next to some data sources? Try doing a DPM disk rescan and see if that removes the missing-volume flag.
    Regards, Mike J. [MSFT]

  • I am extremely upset. I purchased my iPad in SA and I am traveling in Greece. When I want to make use of the free apps, I get a message that the app is not available in the SA store. What is the point of having an iPad if you cannot use it worldwide?

    I am extremely upset. I purchased my iPad in SA and now I am in Greece. I cannot download free apps as I get a message that the apps are not available in the SA store and only in US stores. When I change to the US store the same thing happens. What is the point of having an iPad if I cannot use it worldwide??? I feel that I wasted my money purchasing it as I specifically purchased it to use when I travel. How can I get access to all the available apps and why are they restricted.

    You can use your iPad worldwide. However, each AppleID is tied to
    a specific country's store. To use the AppStore in any country, you
    must be in that country and have a credit/debit card issued by a financial
    institution in that country with a verified billing address in that country.
    It is the developer's choice which AppStores he makes his app available
    from, and some countries prohibit certain apps.
    To make a purchase from the US store (including downloading a free app
    available in the US store), you must be in the US and have card issued
    in the US with verified billing address in the US.
    You can use your purchases from the SA store worldwide, but you
    cannot make purchases in other than the SA store unless you meet
    the aforesaid conditions.

  • HT201372 I am trying to create a bootable USB drive for Yosemite and the terminal is telling me /Volumes/Untitled is not a valid volume mount point.  HELP!!!

    I erased the USB drive and partitioned it as directed in order to create a bootable drive.  When I type in the sudo command in the terminal, it is telling me that /Volumes/Untitled is not a valid volume mount point.  HELP!!!

    FYI
    My process just completed; here is what I see when it is done (the first part you saw in my earlier post). It took about 15-20 minutes.
    To continue we need to erase the disk at /Volumes/Recovery.
    If you wish to continue type (Y) then press return: y
    Erasing Disk: 0%... 10%... 20%... 30%...100%...
    Copying installer files to disk...
    Copy complete.
    Making disk bootable...
    Copying boot files...
    Copy complete.
    Done.

  • Nfs mount point does not allow file creations via java.io.File

    Folks,
    I have mounted an nfs drive to iFS on a Solaris server:
    mount -F nfs nfs://server:port/ifsfolder /unixfolder
    I can mkdir and touch files, no problem. They appear in iFS as I'd expect. However, if I write to the nfs mount via a JVM using java.io.File, I encounter the following problems:
    Only directories are created, unless I include the user that started the JVM in the oinstall unix group with the oracle user, because it's the oracle user that writes to iFS, not the user creating the files!
    I'm trying to create several files in a single directory via java.io.File, BUT only the first file is created. I've tried putting waits in the code to see if it is a timing issue, but it doesn't appear to be. Writing via java.io.File to either a native directory or a native nfs mount point works OK, i.e. a JUnit test against the native file system works, but not against an iFS mount point. Curiously, the same unit tests running on a PC with a Windows drive mapping to iFS work OK!! So why not via a unix NFS mapping?
    many thanks in advance.
    C

    Hi Diep,
    have done as requested via Oracle TAR #3308936.995. As it happens, the problem is resolved. The resolution was to not create the file via java.io.File.createNewFile() before adding content via an OutputStream. If the file creation is left until the content is added, as shown below, the problem is resolved.
    Another quick question: is link creation via 'ln -fs' and 'ln -f' supported against an nfs mount point to iFS? (At the operating-system level, rather than adding a folder path relationship via the Java API.)
    many thanks in advance.
    public void createFile(String p_absolutePath, InputStream p_inputStream) throws Exception
    {
        File file = new File(p_absolutePath);
        // Oracle TAR Number: 3308936.995
        // Uncomment the line below to cause the failure java.io.IOException: Operation not supported on transport endpoint
        //     at java.io.UnixFileSystem.createFileExclusively(Native Method)
        //     at java.io.File.createNewFile(File.java:828)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.createFile(OracleTARTest.java:43)
        //     at com.unisys.ors.filesystemdata.OracleTARTest.main(OracleTARTest.java:79)
        //file.createNewFile();
        FileOutputStream fos = new FileOutputStream(file);
        byte[] buffer = new byte[1024];
        int noOfBytesRead = 0;
        while ((noOfBytesRead = p_inputStream.read(buffer, 0, buffer.length)) != -1)
        {
            fos.write(buffer, 0, noOfBytesRead);
        }
        p_inputStream.close();
        fos.flush();
        fos.close();
    }

  • Your mount point has no more space and your SYSTEM tablespace is full: how do you resize or add a file?

    Your mount point has no more space and your SYSTEM tablespace is full. How do you resize or add a datafile when you have no other mount point? What steps do you follow?

    Answers in your duplicated thread: Some inter view Questions Please give prefect answer  help me
    You can get all the answers at http://tahiti.oracle.com.
    You underestimate job recruiters; a simple cross-check is enough to distinguish people with experience from people who memorize 'interview answers'.
    Don't expect to get a job just because you memorize answers for 'job interviews', get real life experience.

  • Mount point /proc/bus/usb does not exist

    Does anybody know why this mount point would not be there? There is indeed a directory at that location...

    hi,
    kleptophobiac wrote:Does anybody know why this mount point would not be there? There is indeed a directory at that location...
    sarah31 wrote:perhaps it is the syntax in your fstab?
    Nope, it's a mystery, concerning the scsi kernel only, AFAIK. Some devs have rebuilt with some module variations, but no clue so far where it comes from.
    It goes like this:
    usbcore loads usbfs (when it shouldn't), at a time when /proc is not mounted. usbfs tries to create the dir and throws that error. Later in the boot process, hotplug corrects that. So it's a kind of annoyance, but it does feel uncomfortable as long as no one knows where it comes from :-/
    -neri
