Slow file sharing

Hi there,
I'm having problems sharing files over the Internet.
When I'm connected to the same router as my Mac mini Lion server with my MacBook Air, the files share very well, but when I connect from outside, on another network, it connects but is very slow.
Is there any extra configuration I need to perform?
Also, I can't connect to the WebDAV folder on the server from my iPad.
What do I need to configure so that it connects?
Thanks,
Andres

OS X primarily uses two protocols for file sharing:
1.  Server Message Block (SMB), primarily for sharing with Windows users.
2.  Apple Filing Protocol (AFP), primarily for sharing with other OS X users.
Some ISPs block SMB traffic.  AFP traffic is not usually blocked, but for large files over the Internet it can be problematic; FTP is more efficient in those situations.
Having said that, I do not think your file-sharing protocol is the problem.  I think it is your server's upload speed.  If the Internet connection at your server has an upload (to the Internet) speed of 0.4 Mbps, that will appear as about 45 KB/s in a Finder window when transferring files.  That is slow.  When you are pulling files from the server, the limiting factor is the server's upload speed.
If the Internet connection at your outside location has an upload (to the Internet) speed of 1 Mbps, then your upload speed is the limiting factor when sending files back to the server.  Upload speeds are usually the limiting factor, and yours are so low that you cannot expect good performance.  I would say you need upload speeds of at least 5 Mbps.
Also try pinging your remote server, provided there is no firewall blocking pings.  If you see high latency in the ping times, or if pings are being dropped, that would affect performance as well.
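To put rough numbers on the reasoning above, the conversion from link speed to what a Finder copy shows can be checked in Terminal (a minimal sketch; the 0.4 Mbps and 100 MB figures are just illustrative, and server.example.com stands in for your server's real address):

# Convert an upload speed in Mbps to KB/s (divide by 8); real-world
# throughput will be a little lower because of protocol overhead.
echo "0.4 * 1000 / 8" | bc -l      # ≈ 50 KB/s
# Estimated minutes to move a 100 MB file over that uplink
echo "100 * 8 / 0.4 / 60" | bc -l  # ≈ 33 minutes
echo "100 * 8 / 5 / 60" | bc -l    # at 5 Mbps: under 3 minutes
# Check round-trip latency and packet loss to the server's public address
ping -c 10 server.example.com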

Similar Messages

  • Collaboration On Shared Code File Slow

    I heard about all of the hype of collaboration, and I had a Java project that my friend and I had been working on. We both set up Java Studio Enterprise 8 on Fedora Core 5 workstations, and one of us ran the collaboration server.
    After signing in and starting a shared code file, we noticed that code-line synchronization is extremely slow, with update times on the order of 5,000 - 10,000 milliseconds. Is this normal? Is there any way to configure the server so that changes made by one user are reflected to the other users more quickly? Or is it just "Java slow" and there isn't anything that can be done? I was personally expecting something closer to real time (e.g. 50-250 ms) rather than several thousand milliseconds.

    The following are known performance issues that may be relevant to your situation:
    http://collab.netbeans.org/issues/show_bug.cgi?id=62291
    http://collab.netbeans.org/issues/show_bug.cgi?id=69392
    You can try using later versions of the IDE:
    NetBeans 5.0: http://www.netbeans.org/downloads/index.html (Collab is not bundled with NB5 product; you will need to connect to the update center and download and install the module).
    Or Sun Java Studio Enterprise 8.1 Beta (http://developers.sun.com/prodtech/javatools/jsenterprise/downloads/index.jsp)

  • Slow access to DBF files on shared folder in Win7 Prof

    Hi,
    I have a LAN with 5 XP workstations and one Windows 7 x64 computer with a shared folder. There is no firewall and no antivirus software on this machine. The folder contains some DBF files, and a DOS application running on the 5 workstations uses these shared
    DBF files. When the shared folder was hosted on an XP machine the application worked normally and access to the files was smooth. After changing the "server", access never finishes; browsing, reading and writing data is a horror.
    Have you any ideas?
    Regards.

    Hi,
    Could you please give more details regarding this sentence: "After changing the 'server' the access is never ending. Browsing/reading/writing data is a horror."
    How does the DOS application work here?
    When you configured the application to access the DBF files on Windows 7, did you receive any specific errors?
    Since the application worked well on the Windows XP workstations, would you consider using Process Monitor to compare the same process on Windows 7?
    Troubleshooting with Process Monitor
    Best regards
    Michael Shao
    TechNet Community Support

  • Why is there a delay of up to 5 minutes when sharing a file?

    See title.

    Do you mean a delay with the person receiving the email from Adobe that a file has been shared?
    An alternative is to copy the link and email it yourself to the person you are sharing the file with. If you try this alternative are you still seeing a delay?
    Does this happen every time you share a file, or did it just happen once?
    The Adobe email could possibly be slow (I am not able to reproduce this). Alternatively, your email server, or the recipient's, may have had a temporary glitch.

  • Extremely slow deleting / modifying of files

    Hi all,
    I have a strange problem with windows 7 in combination with server 2008 R2.
    Situation.
    1] I copy a bunch of files to the server (UNC path or mapped drive) from Windows 7 (32- or 64-bit Professional).
    2] I then want to delete the files I've just copied (the same issue occurs with modification), and it is extremely slow: only about 1 file per second,
    sometimes even slower.
    When I perform the same actions from Vista or XP it is extremely fast. For Vista I had to switch off Remote Differential Compression.
    I did this for Windows 7 as well, but there was no improvement. Deleting files from the Windows 7 machine is also fast when the files were copied to the server from the Vista or XP
    machine. I've tried several new machines with Windows 7, but all show the same problem.
    Any help is welcome,
    with kind regards
    Roland

    Try Fred Langa's fix @ http://windowssecrets.com/2010/10/14/01:
    "<big>Easy steps to a significantly faster network
    </big> <small>
    </small> To start, if you're in a homegroup, exit it. (Need info? See MS's Help
    page , "Leave a homegroup.") Once every PC has left the homegroup, it no longer exists.
    Next, to disable HomeGroup and change encryption levels, open the Control Panel, select
    Network and Internet, then Network and Sharing Center. Once there, in the left-hand pane choose
    Change advanced sharing settings.
    It's a two-click fix, as shown in Figure 1.
    Figure 1. Two quick radio-button clicks are all it takes to restore classic networking speed to Windows 7.
    In the Home or Work section of the dialog box, scroll down to File Sharing Connections and click the radio button for
    Enable file sharing for devices that use 40- or 56-bit encryption. (Note: You need this setting anyway, if you want to share files between Win7 and an XP box. More on this to come.)
    Next, a bit further down under HomeGroup connections, select
    Use user accounts and passwords to connect to other computers.
    Save the changes, and you're done!"

  • Slow write transfer speed on shared xsan folder - osx 10.8.3

    XSAN3 - OS X 10.8.3
    I am getting slow transfer speeds when saving to a shared xsan folder using the smb protocol.
    Writes are about 15 MB/s.  Reads are about 85 MB/s.
    Some transfers to the shared xsan folder will fail and it doesn't matter if I am reading or writing.
    If I share a folder from the system drive, the speeds are much higher and I do not have any transfer failures.
    Writes are about 76 MB/s.  Reads are about 99 MB/s.
    If I transfer files using the file server I am sharing from, I get write speeds close to 90 MB/s to the xsan volume.
    Using the AJA Test Tool:
    Maximum write speeds to the xsan volume are about 600 MB/s
    Maximum read speeds from the xsan volume are about 700 MB/s
    The file server and metadata controllers are Mac Pro 5.1 Mid 2012.
    Xsan Storage is the Vtrak EX30.
    It is configured according to documentation supplied by Apple and Promise.
    All Xsan clients are dual nic with nic 1 configured for public and nic 2 configured for the private.
    I have checked the Xsan Admin tool and all metadata IPs appear correct for all clients; they are all using the private IP.

    What template did you use for the creation of the SAN?  Recall, if you did a video template but you are transferring generic office files, your performance will suffer.  This is due to the block allocation size calculations.  The SAN may be tailored for specific types of writes.  If you are doing something outside of the expectation, you may experience negative performance.

  • MacMini 7,1 (OSX server) network file transfer slower to its SSD than its attached Drobo?

    Hi all,
    I have a MacMini 7,1 (OSX 10.10.3, Server 4.1) that I use as a file server via a shared folder on a thunderbolt-attached Drobo 5D. I'm posting here after noticing that wired read/write speeds from workstations to a shared folder on the MacMini's internal SSD [64Mbps (8MBps) write / 743Mbps (93MBps) read] are slower than to the Drobo plugged into it [750Mbps (94MB/s) read / 180Mbps (22.5MB/s) write]. What's going on?
    The Drobo has three drives:
    ST3200644NS (95MB/s sequential read/write)
    2x WD2002FYPS-0 (92.6MB/s sequential read 82.7 MB/s sequential write)
    It consistently (checked 3 workstations) gets around 750Mbps (94MB/s) read / 180Mbps (22.5MB/s) write speeds on wired (Cat5e) connecting via SMB on OSX Yosemite/ Windows 7. We have all gigabit switches/routers. I've checked all the wires. All Cat5e.
    Configuration goes: Drobo (Thunderbolt) > MacMini 7,1 > Netgear JGs524 (24port switch) > Trendnet TEG-S160g > Workstation. There's a Sonicwall TZ 215, Comcast router and ip gateway as well, although I don't believe they are between the server and workstations.
    Would anyone please recommend what I should check next? Are these write speeds typical? What other information would be helpful to provide?
    No Droboapps are installed. I've also posted to the Drobo community forums.

    Bump. Anyone? Should I repost this to the MacMini forum?

  • Excel 2007 and Smartview 11.1 file open slow

    We are in the process of upgrading our HFM system to 11.1 and SmartView 11.1.1.3, and as part of this process we are starting to use Excel 2007. I am encountering an interesting problem when opening files in Excel 2007 using SmartView 11.1, and I am wondering whether anyone else has seen this problem and found a solution.
    First I converted an Excel 2003 file to Excel 2007 and saved it as an .xlsx file. I am then able to open the file, and it takes about 30 seconds to open. The file has about 15,000 formulas across four spreadsheets. If I click on each spreadsheet individually and use the "Refresh" button, I can save the file in about 10 seconds and then close it; when I open the file again it still opens in 30 seconds. Now, using the same file, I click "Refresh All" and the file processes fine. I am able to save and close the file in about 10 seconds. However, when I try to open the file again it takes about 20 minutes to open, and then 20 minutes to close.
    This only happens with two or three files. I have another file that is 40 spreadsheets and about 3,000 formulas that is not affected when I click Refresh or Refresh All.
    Any help would be appreciated.
    Rich

    Hi
    On Metalink I found some notes about this issue:
    "Background
    SmartView communicates over HTTP using .NET. This just means that requests and responses are sent in a standard XML format that is tucked in the HTTP headers. The mechanism is the same as when your internet browser requests a file or submits a form (simplification). A standard Microsoft component that is part of Internet Explorer is used for this.
    There are three components (four if you include the network) in a SmartView refresh:
    -the client
    -the web server
    -the HFM application server
    It is easiest to troubleshoot when they are on separate machines. (Running SmartView on the server to see whether it works better is a useful test of what effect the network is having, but doesn't show any other bottlenecks.) If you run Task Manager on these three machines showing the Performance tab, you will see initial activity on the client as Excel clears the cells and the request is converted into XML. Then you see the web server get busy as it receives and parses the request. Then the application server is busy as it sends the data, then the web server is busy as it compiles the results into XML again, and finally the client parses the XML, pushes the data into cells and runs an Excel calculation. This last stage is the longest of all. Overall, it is typical to see the client doing 90-95% of the work, so client performance is a very important factor. Look out for Excel's memory usage increasing and RAM running short. Often, user dissatisfaction with refresh times can be alleviated by giving users more RAM or a faster PC.
    The situation we are dealing with here, however, is when the chain of events described above is interrupted. In this case, you will see the client or the web server wait in vain for a response from the next machine in the chain. The case of invalid XML is similar, but the response is truncated so that the XML is not well-formed when it arrives. XML consists of matching pairs of tags and is not well-formed when a closing tag is missing. This can be caused by certain network settings such as packet size (see later). The end result in either case is the same: as Excel cleared the cells before sending the request, cells are left blank or with zeroes.
    Your job now is to work out why the particular component is failing.
    Client
    Check the HTTP time out in the registry:
    [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
    "ReceiveTimeout"=dword:00dbba00
    "KeepAliveTimeout"=dword:0002bf20
    "ServerInfoTimeout"=dword:0002bf20
    The time-out limit is five minutes for versions 4.0 and 4.01 and 60 minutes for versions 5.x and 6; for Windows Internet Explorer 7 it is 30 seconds. To change the settings, add the above keys. The values are hexadecimal, not decimal, and correspond to 180,000 milliseconds (3 minutes) for the keep-alive and server-info time-outs and 240 minutes for the receive time-out. See
    - http://support.microsoft.com/kb/813827
    - http://support.microsoft.com/kb/181050
    Once you make those changes, my recommendation is that you reboot the client machine.
    Web server
    Look for errors in logs showing that the web app didn't respond, disconnected, or timed out waiting for another application (i.e. the HFM application server). Eliminate load balancers etc. during testing in order to isolate the problem and to ensure that the user always goes to the same web server; otherwise logs can be hard to interpret. See also the IIS Tuning Guide for IIS settings. We cannot recommend particular settings, as each client must establish the optimal settings for their own environment after performance testing. If necessary, engage the services of a suitably qualified consultant.
    Application Server
    HFM application server performance is very dependent on whether the data is already in RAM or not. One symptom, where the first refresh fails but the second one succeeds, is probably a sign that the cache needed to be populated and once that had been done (by the first refresh) it went faster. HFM loads whole subcubes (Scenario, Year, Value and Entity) into cache, so asking for lots of accounts in the same subcube is fast, but asking for lots of entities will inevitably slow things down as it runs out of RAM and starts unloading subcubes to make room for more. The HFM event log (HsvEventLog.log) shows this happening, but don't forget that the messages you see are a normal consequence of the way HFM works and are not a problem unless you see a hugely elevated number of them during the period the refresh occurs. Another sign would be page file thrashing.
    If there are several servers in a cluster, use sticky sessions in HFM to make sure tests always go the same server, or hit one server directly rather than using the cluster.
    Solutions here involve giving more RAM to the server (probably not possible because of the 2 GB limit) and changing the worksheet design to restrict the number of subcubes accessed. A separate consolidation server, so that reads are not competing with intensive consolidations and calculations, would be a good thing.
    Network
    To find out if packets get fragmented, use
    ping -l size -f
    to send a big packet with the 'don't fragment' flag. E.g.
    Ping -l 1024 -f oracle.com
    If you see a reply like Reply from 141.146.8.66: bytes=1024 time=200ms TTL=245 then the size you specified is below the packet limit of the network. But if you use, for example, 2000 in place of 1024, you get Packet needs to be fragmented but DF set, then you know you have exceeded the limit. Combine this investigation with a Fiddler trace (https://www.fiddler2.com/fiddler2/) to see what size data messages are being sent and received.
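    As a reference point: on a standard 1500-byte Ethernet MTU, the largest ping payload that avoids fragmentation is 1472 bytes (1500 minus the 20-byte IP and 8-byte ICMP headers), so a quick bracketing test looks like this (Windows ping syntax as above; oracle.com is just the example host):
    ping -l 1472 -f oracle.com
    (normal replies expected if the path MTU is 1500)
    ping -l 1473 -f oracle.com
    (expect "Packet needs to be fragmented but DF set")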
    Troubleshooting Guidance
    If you seriously want to go into this in more detail, then it will take a considerable effort on your and your IT Department's part. My general advice would be:
    1. During this process, fix your environment so that it is predictable. Register a single HFM application server and connect to that. Use a single web server and don't go through Apache or any other redirector. Make sure we know which machine to go to for logs. This would be a temporary measure and it doesn't stop the other users using the full, load balanced system so long as it doesn't involve your server.
    2. As far as possible keep other users from using it during testing, unless you are specifically testing for performance under load. We want to see what activity is due to SmartView and not random other jobs.
    3. Use PerfMon on all three machines: client, web server and application server.
    4. Clear Logs before testing and note the times of each test so the logs can be correlated.
    5. Log CPU activity, memory usage, network throughput, page faults, thread count etc. I'm no expert on this so get advice from someone who is.
    6. In addition to the performance logs, from the IIS Server get:
    * HTTP.SYS Error Log - %windir%\System32\LogFiles\HTTPERR (Default location; configurable)
    * IIS Website Log - %windir%\System32\LogFiles\W3SVC# (Default location; configurable)
    * Event Log (both System and Application) in EVT and TXT (tab separated) formats
    7. From HFM Application server get:
    * HsvEventLog.log (in <HFM install>\Server working Folder)
    * Event Log (both System and Application) in EVT and TXT (tab separated) formats
    8. Be aware of all timeout settings in your environment (i.e. IIS & the network) and time your tests. This can show whether it is an ASP timeout, script timeout, client side timeout or whatever. Note that the Web Session Timeout setting for SmartView in HFM's Server and Web Configuration is a separate issue from IIS timeouts. Web Session Timeout controls the length of idle time before SmartView makes the user log in again. Default 20 mins.
    9. Run Fiddler traces to directly observe the XML and check for HTTP failures.
    Further resources
    - Tuning IIS 6.0 for HFM
    - SmartView Performance FAQ
    Both are available from Support.
    Author: Andrew Rowland
    Version: 11.x (Fri, 30 October 2009)"
    regards
    Alexander

  • File Adapter: Pick from shared folder in a domain network

    Dear friends,
    Is there any possibility of picking up the file using NFS from a shared folder? The shared folder system is also in the same domain. If so, please tell me what prerequisites need to be configured,
    such as host file entries, port openings, etc.
    Regards
    Vijay

    >
    vijay Kumar wrote:
    > Hi ,
    >
    > Thanks a lot. If we create an Active Directory account with a password,
    > can we access the file?
    > So that I can confirm to the business that the only means of accessing the file is FTP and no other options are available.
    >
    >
    > Regards
    > Vijay
    The simple answer is that the folder needs to be on the XI server itself if you want to access it via NFS.
    Otherwise FTP would be the choice. I am not sure how it would behave if you are able to mount the external folder onto the XI server. Ask somebody from your network team whether that is a possibility and try to access it via NFS; otherwise you will have to go with FTP only.
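    If the network team can export the directory over NFS, mounting it on the XI host is the usual way to test this. A minimal sketch, assuming a hypothetical export /exports/inbound on a host named fileserver01 and root access on the XI server:
    # Check what the remote host exports
    showmount -e fileserver01
    # Mount the export so the file adapter can read it as a local path
    mkdir -p /mnt/inbound
    mount -t nfs fileserver01:/exports/inbound /mnt/inbound
    # If the mount works, point the sender channel at /mnt/inbound; otherwise use FTP.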

  • Questions on organizing/sharing music files on 3 PC networked drive

    I just added a 1 TB network drive to my 3-PC home network. I am planning to move my iTunes music folder from the older, smaller network drive; I exceeded the 160 GB size of the old drive when converting all of the WMA lossless files to Apple Lossless while keeping the old WMA files (I still think the Windows Media Player system is superior). With the new addition to our network, I am planning on the 3 PCs sharing music files. (My wife has been using iTunes for a while; I have only recently gone over to the dark side.)
    What is the best way to organize the files??
    - How to shuffle all music without the Christmas & holiday folder playing.
    - How to segregate each user's individual files.
    Any advice or suggestions are appreciated, I know it would save a lot of time and pain instead of the trial and error method.

    Thanks, Jason & Mike. This is how I resolved the problem. It turned out the disk with the data was in NTFS format. Neither my eMac nor my wife's Carbon iMac (prehistoric, in other words) was able to read it, on repeated attempts. In the process I discovered (to my surprise) that the external hard drive attached to my eMac (a 320 GB My Book) was formatted as FAT32, and that the PC laptop (a ThinkPad) read from it and wrote to it without any problem. (Is FAT32 less than optimal for use with a pre-Intel Mac?) So I just copied the needed files from the NTFS disk to the FAT32 disk via the ThinkPad. When I reconnected the FAT32 drive to the eMac, it turned out to be easy [with some fiddling] to put the titles of the new music into the iTunes library without having to copy the music itself into iTunes. Making playlists does still seem to have to be done manually. Sometimes the "New Playlist from Selection" option automatically gives a name (the original name) to the album from which the music came, and sometimes it doesn't; the rationale isn't clear to me. Another problem: often the track sequence is jumbled (this is after I added the whole collection of music folders to my iTunes library in one fell swoop). Usually it was possible to correct this by copying track names for albums one album at a time from the external HD (in Columns view, selecting all the tracks in each folder simultaneously) directly into the playlist; the playlist name then always had to be copied or typed in separately. I don't know if I'm missing an easier route. Also, I see that I have version 4.7.1 of iTunes, so I'll download a newer version; I don't know whether the newer version will help with what I want to do.

  • In iMovie when I try to share a project to a file on an external hard drive it will work the first time but subsequently it says it is sharing but gives no time and file name and in fact just sits there not sharing the file.

    I have iMovie 10.0.4. I want to transfer some old DV tape videos to MPEG-4 files to be stored on a 4 TB external drive. I successfully imported the video into iMovie, created a project by moving all captured files into a project, and the first time I shared the project to a file it worked as expected. Every subsequent try to transfer another video fails. When I click Save it warns that the file is too big for iCloud, so I click "Create on this computer". The progress circle comes up with no indication of progress. The Show Activity drop-down just says sharing to "file", with no file name as in the first one, and it indicates no time remaining. In fact it just sits there, apparently doing nothing. I've tried it half a dozen times and deleted everything and started over. It transfers the first project to a file and then nothing on additional tries. What do I have wrong; am I missing something?

    In iTunes 11, uncheck the setting in the iTunes Preferences panel under "Advanced > Copy files to iTunes Media folder when adding to library".

  • Screen sharing and file sharing not working on LAN

    This is not a Mavericks specific problem since it existed before I upgraded to Mavericks.  I have a LAN that consists of 3 MBPs and 1 MacPro.
    Local screen sharing and file sharing works fine on all of these computers except for one MBP.  From this MBP I am able to connect to all of the other computers for screen sharing and file sharing.  But I can't connect to this particular MBP for screen sharing or file sharing from any of the other Macs.
    This MBP shows up in the left hand side column under "Shared" on all the other computers.  But I am unable to make a connection from any computer to the problematic MBP.  I get the message, "There was a problem connecting to the server "xxxxxxxx".  The server may not exist or is unavailable at this time".
    I have tried toggling screen sharing and file sharing off and on with no effect.  One possibly pertinent oddity: under the Screen Sharing setup, the indicator next to "Screen Sharing: On" is green, but the message under it states:
    "Other users can access your computer's screen
    at vnc://johnmacbookproretina/ or by looking for “John MacBook Pro” in the Finder sidebar."
    The problem is that my computer name is "John MacBook Pro" and the LocalHostName is "JohnMBPr.local".  I believe that at some point in the past the computer name and/or LocalHostName was "johnmacbookproretina".
    Similarly, under File Sharing I am informed:
    "Other users can access shared folders on this computer, and administrators all volumes, at “afp://johnmacbookproretina” or “smb://johnmacbookproretina”."
    I would appreciate any help I could get to resolve this perplexing issue.  Thanks.
    John

    I've finally been able to solve my own problem.
    File sharing and screen sharing on the problematic MBP was being blocked by firewall rules that had been set up several weeks ago when I had installed a trial version of DoorStop X firewall software.  I had previously trashed the app when the 30 day trial ended.  I did not realize that the software had left a set of firewall rules in the Library/StartupItems folder.  Among other things, the rules were set to deny access to the computer on the ports used for file sharing and screen sharing.  Once I trashed the DoorStopStartup folder in the StartUpItems folder all the connection problems resolved with a computer restart.
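    For anyone hitting something similar: on OS X releases that still ship ipfw (10.9 Mavericks and earlier), leftover startup items and firewall rules can be checked from Terminal, roughly like this (folder names vary by product; DoorStopStartup is the one from this case):
    # List third-party startup items left behind by uninstalled software
    ls /Library/StartupItems
    # Show active ipfw rules and look for denies on the sharing ports
    # (AFP 548, SMB 445, screen sharing/VNC 5900)
    sudo ipfw list | egrep '548|445|5900'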

  • No folios or subscription tile in shared IPA file

    This is a follow on to Chris Huber's question posted at 7:52 am today. He has resolved his library view issues and successfully downloaded two folios to his iPad mini. He has emailed me the newest IPA file to test on my iPad Air.
    I now have the same problems he earlier encountered. The app downloads to Newsstand, but there is no subscription tile or folios showing up in the library view. The splash screen launches, but then the library view is empty. I am syncing per the guidelines, as I've done many times before on the apps that I have built, but no luck with the shared IPA file.
    Any suggestions?
    Thanks,
    Steve

    Bart,
    Here is the link to Chris Huber's original question:
    http://forums.adobe.com/message/6304522#6304522
    I am using the same .ipa file, sent to me via email, on an iPad Air versus his Mini.
    Steve

  • Error 0(Native: listNetInterfaces:[3]) and error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

    Hi Gurus,
    I'm trying to upgrade my test 9.2.0.8 RAC to 10.1. I cannot upgrade to 10.2 because of RAM limitations on my test RAC. The 10.1 Clusterware software was successfully installed and the daemons are up, with the OCR and voting disk created. Then, at the end of the RAC software installation, root.sh needs to be run. When I ran root.sh, it gave the error: while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory. I have libpthread.so.0 in /lib. I looked it up on Metalink and found Doc ID 414163.1. I unset LD_ASSUME_KERNEL in vipca (unsetting LD_ASSUME_KERNEL was not required in srvctl because there was no LD_ASSUME_KERNEL in srvctl). Then I tried to run vipca manually and received the following error: Error 0(Native: listNetInterfaces:[3]). I'm able to see xclock and xeyes, so it's not a problem with X.
    OS: OEL5 32 bit
    oifcfg iflist
    eth0 192.168.2.0
    eth1 10.0.0.0
    oifcfg getif
    eth1 10.0.0.0 global cluster_interconnect
    eth1 10.1.1.0 global cluster_interconnect
    eth0 192.168.2.0 global public
    cat /etc/hosts
    192.168.2.3 sunny1pub.ezhome.com sunny1pub
    192.168.2.4 sunny2pub.ezhome.com sunny2pub
    192.168.2.33 sunny1vip.ezhome.com sunny1vip
    192.168.2.44 sunny2vip.ezhome.com sunny2vip
    10.1.1.1 sunny1prv.ezhome.com sunny1prv
    10.1.1.2 sunny2prv.ezhome.com sunny2prv
    My questions are:
    Should ping to sunny1vip and sunny2vip already be working? As of now they don't.
    If you look at oifcfg getif, I initially had eth1 10.0.0.0 global cluster_interconnect and eth0 192.168.2.0 global public; then I created eth1 10.1.1.0 global cluster_interconnect with setif. Should it be 10.1.1.0 or 10.0.0.0? I looked at a subnet calculator and it says that for 10.1.1.1 the subnet is 10.0.0.0. In Metalink they had used 10.10.10.0, and hence I used 10.1.1.0. (See the quick subnet check after this post.)
    Any ideas on resolving this issue would be very much appreciated. I have been searching the Oracle forums, Google and Metalink, but all of them refer to Doc ID 414163.1 and it doesn't seem to work. Please help. Thanks in advance.
    Edited by: ayyappa on Aug 20, 2009 10:13 AM
    Edited by: ayyappa on Aug 20, 2009 10:14 AM
    Edited by: ayyappa on Aug 20, 2009 10:15 AM
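    Regarding the second question above (10.1.1.0 vs 10.0.0.0 for the cluster_interconnect): the subnet that oifcfg should register depends on the netmask actually configured on eth1, not on the address by itself. A quick check, assuming the ipcalc utility that ships with RHEL/OEL and the addresses from this post:
    # With a /24 netmask the private addresses sit in 10.1.1.0
    ipcalc -n 10.1.1.1 255.255.255.0     # prints NETWORK=10.1.1.0
    # With the classful /8 default they sit in 10.0.0.0
    ipcalc -n 10.1.1.1 255.0.0.0         # prints NETWORK=10.0.0.0
    # See which netmask eth1 is really using before deciding
    ifconfig eth1 | grep -i mask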

    A step forward towards resolution, but I need some help from the gurus.
    root# cat /etc/hosts
    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    192.168.2.3 sunny1pub.ezhome.com sunny1pub
    192.168.2.4 sunny2pub.ezhome.com sunny2pub
    10.1.1.1 sunny1prv.ezhome.com sunny1prv
    10.1.1.2 sunny2prv.ezhome.com sunny2prv
    192.168.2.33 sunny1vip.ezhome.com sunny1vip
    192.168.2.44 sunny2vip.ezhome.com sunny2vip
    root# /u01/app/oracle/product/crs/bin/oifcfg iflist
    eth1 10.0.0.0
    eth0 192.168.2.0
    root# /u01/app/oracle/product/crs/bin/oifcfg getif
    eth1 10.0.0.0 global cluster_interconnect
    eth0 191.168.2.0 global public
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    ****ORACLE_HOME environment variable not set!
    ORACLE_HOME should be set to the main directory that contain oracle products. set and export ORACLE_HOME, then re-run.
    root# export ORACLE_BASE=/u01/app/oracle
    root# export ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
    root# export ORA_CRS_HOME=/u01/app/oracle/product/crs
    root# export PATH=$PATH:$ORACLE_HOME/bin
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    VIP does not exist.
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny1pub -o $ORACLE_HOME -A 192.168.2.33/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny2pub -o $ORACLE_HOME -A 192.168.2.44/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
    VIP exists.: sunny1vip.ezhome.com/192.168.2.33/255.255.255.0
    root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny2pub -a
    VIP exists.: sunny2vip.ezhome.com/192.168.2.44/255.255.255.0
    Once I executed the add nodeapps command as root on node 1, I got "VIP exists" from config nodeapps on node 2 as well. The two config commands above returned the same values on both nodes. After this I executed root.sh on both nodes and did not receive any errors; it said CRS resources are already configured.
    My questions to the gurus are as follows:
    Should ping on vip work? It does not work now.
    srvctl status nodeapps -n sunny1pub(same result for sunny2pub)
    VIP is not running on node: sunny1pub
    GSD is not running on node: sunny1pub
    PRKO-2016 : Error in checking condition of listener on node: sunny1pub
    ONS daemon is not running on node: sunny1pub
    [root@sunny1pub ~]# /u01/app/oracle/product/crs/bin/crs_stat -t
    Name Type Target State Host
    ora....pub.gsd application OFFLINE OFFLINE
    ora....pub.ons application OFFLINE OFFLINE
    ora....pub.vip application OFFLINE OFFLINE
    ora....pub.gsd application OFFLINE OFFLINE
    ora....pub.ons application OFFLINE OFFLINE
    ora....pub.vip application OFFLINE OFFLINE
    Will crs_stat and srvctl status nodeapps -n sunny1pub work after I upgrade my database, or should they be working already? I just chose to install the 10.1.0.3 software, and after running root.sh on both nodes I clicked OK and the End of Installation screen appeared. Under installed products I see the 9i home, the 10g home and the CRS home. Under the 10g home and the CRS home I see the cluster nodes (sunny1pub and sunny2pub), so it looks like the 10g software is installed.
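    One hedged observation on the crs_stat output above: the VIP, GSD and ONS resources are registered but OFFLINE, and a VIP normally won't answer pings until its nodeapps have been started. Using the same node names and paths as in this post, the start and re-check would look like:
    /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl start nodeapps -n sunny1pub
    /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl start nodeapps -n sunny2pub
    # Resources should now show ONLINE, after which the VIPs should respond to ping
    /u01/app/oracle/product/crs/bin/crs_stat -t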

  • Cannot open shared object file: on Informatica Power Centre (8.1.1) Installation

    Hi Friends,
    I am trying to install Informatica Power Centre 8.1.1, and when I invoke the installer it gives the error below:
    My OS is Redhat Linux (64 bit).
    What's the issue?
    Configuring the installer for this system's environment...
    awk: error while loading shared libraries: libdl.so.2: cannot open shared object file: No such file or directory
    dirname: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    /bin/ls: error while loading shared libraries: librt.so.1: cannot open shared object file: No such file or directory
    basename: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    dirname: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    basename: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    Launching installer...
    grep: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
    /tmp/install.dir.28135/Linux/resource/jre/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
    [oracle@obidb PowerCenter_8.1.1_SE_for_Linux_x64_64Bit]$
    Regards,
    DB

    Hi;
    I am in the process of installing R12.1.1 on RHEL5 (64-bit). All the prerequisites have been done. I did the installation twice but I am facing the same issue when I try to run adconfig.sh on the appsTier, where I get the same kind of error. Do you want to run adconfig.sh on the appsTier or the dbTier?
    If you run it on the appsTier, be sure you source the env file as the applmgr user.
    Regards
    Helios
