Question about Cluvfy stage -pre dbinst

I'm putting together a three-node RAC with Oracle 11gR2 under Solaris 10. I've installed the Grid Infrastructure and checked it out with 'crsctl check crs' and 'srvctl status asm'; all looks good. The next step is 'cluvfy stage -pre dbinst -n all -osdba dba -verbose', and it has produced a disconcerting inconsistency.
When I run the pre-db-installation check from Node 1, cluvfy returns complete success, no worries. However, when I run the same command from Nodes 2 and 3 -- where I would naively expect to get the same complete success -- cluvfy reports failure. It says that CRS is not installed on any of the nodes.
Does any expert here know whether this represents a problem in the node configuration that I need to address before proceeding with the installation? Or is it simply not meaningful to run cluvfy from Nodes 2 and 3?
Thanks in advance for any insight you can provide.

Thanks for the quick attention. Yes, I have checked reachability. Not only was the installation of Grid Infrastructure from Node 1 to Nodes 2 and 3 successful, but I also used crs_stat to confirm that from each node the other two nodes appear online (I assume that's what you mean by 'checking reachability').
The error output from cluvfy concerns /opt/oraInventory/ContentsXML/inventory.xml, which is present on Node 1 but missing on the other two nodes. Since the Grid Infrastructure installation succeeded, I'm surprised the file isn't found. I am going to open an SR about this and mark this question as answered.
Since you asked, here are the details of the error from cluvfy. Again: there is no error on Node 1, but on 2 and 3, when I run './cluvfy stage -pre dbinst -n all -osdba dba -verbose', all tests pass up to 'Checking CRS integrity...' and then there is a series of errors, which I've copied and pasted below:
----- CLUVFY OUTPUT FOLLOWS -----
Checking CRS integrity...
ERROR:
/opt/oraInventory/ContentsXML/inventory.xml (No such file or directory)
ERROR:
CRS is not installed on any of the nodes
Verification cannot proceed
CRS integrity check failed
Checking node application existence...
ERROR:
/opt/oraInventory/ContentsXML/inventory.xml (No such file or directory)
ERROR:
CRS is not installed on any of the nodes
Verification cannot proceed
Checking if Clusterware is installed on all nodes...
ERROR:
/opt/oraInventory/ContentsXML/inventory.xml (No such file or directory)
ERROR:
CRS is not installed on any of the nodes
Verification cannot proceed
PRVF-9676 : Clusterware is not installed on all nodes checked : "scl-sae-db-03"
PRVF-9652 : Cluster Time Synchronization Services check failed
Checking time zone consistency...
ERROR:
CRS home "{0}" is not a valid directory
Pre-check for database installation was unsuccessful on all the nodes.
----- CLUVFY OUTPUT ENDS -----
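For anyone hitting the same errors: every failed check above hinges on the central inventory, so comparing the inventory setup across nodes localizes the problem quickly. A minimal sketch (node names are illustrative; on Solaris the inventory pointer normally lives in /var/opt/oracle/oraInst.loc):

# run from Node 1 as the Grid Infrastructure owner; node names are illustrative
for n in node1 node2 node3; do
  echo "== $n =="
  ssh $n cat /var/opt/oracle/oraInst.loc                      # where each node thinks the central inventory is
  ssh $n ls -l /opt/oraInventory/ContentsXML/inventory.xml    # the file cluvfy fails to find
done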

Similar Messages

  • Cluvfy stage -pre (fails)

    I have successfully completed installation of a 2-node Oracle 12c RAC on Red Hat Linux 6.3.
    I was trying to add a node, but
    ]$ cluvfy stage -pre nodeadd -n node3
    failed with the following error:
    ERROR:
    Reference data is not available for verifying prerequisites on this operating system distribution
    Verification cannot proceed
    I downloaded the latest version of CVU from oracle.com, but still got the same error.
    If I ignore the check and move on to the node-addition script,
    /u01/app/12.1.0/grid/oui/bin/  ./addNode.sh
    I can't even find the script ./addNode.sh.
    Any ideas?

    newbieDBA wrote:
    Yes, I read that.
    So is it a dead end? No way out?
    I tried to change the version string in /proc/version,
    but it wouldn't let me, even as root; it failed with a sync error.
    How do I get out of this?
    Please suggest.
    Thanks
    If it's a bug, it's a bug! Check for the bug fix on My Oracle Support.
    Aman....
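    A hedged suggestion for anyone with the same symptoms: CVU picks its reference data based on the distribution it detects, and on distributions it does not recognize you can tell it which one to assume via the CV_ASSUME_DISTID environment variable (verify on My Oracle Support that this is sanctioned for your exact release). Also, in 12c the node-addition script moved out of oui/bin:
    # tell CVU which distribution to assume, then re-run the check
    export CV_ASSUME_DISTID=OEL6
    cluvfy stage -pre nodeadd -n node3 -verbose
    # in 12c addnode.sh lives under the addnode directory, not oui/bin
    ls /u01/app/12.1.0/grid/addnode/addnode.sh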

  • Question about XY Stage Stats

    Hello, my name is David Charlot, and I'm trying to use an NI 7344 motion controller and a MID-7604 motor driver to control a microscope XY stage. I want to make sure I am controlling the stage correctly. The manufacturer told me that the stage has a 2 mm pitch lead screw and a linear encoder resolution of 2 nm/pulse. I don't know how that translates to MAX's steps/revolution and encoder counts per revolution. Any help would be most appreciated. Thanks.
    David C.

    The breakdown of these stages goes something like this:
    The stepper is most likely a standard 1.8 deg/step type. The encoder will have XXX pulses/revolution and is connected to a second shaft coming directly from the motor. While the stage has a 2 mm pitch (2 mm of travel per revolution of the screw), this gives XXX pulses per 2 mm of travel. There is probably a gearing ratio, most likely a worm gear. Worm ratios are usually in the range of 20:1 to 100:1, giving XXX*100 pulses per 2 mm of travel. With the 2 nm/pulse reported by the stage supplier, and assuming a 100:1 gear ratio, the encoder has XXX = 10^4 pulses/rev. This is incredibly high; I have never heard of anything higher than 2000 pulses/rev.
    I suggest contacting the supplier and asking for specs on the encoder itself, which will be reported in pulses/rev (or counts/rev), not in length/pulse. This is the number you specify in MAX.
    For the steps/rev:
    MAX will read the microstep setting on the 7604 and will assume a standard 1.8 deg stepper, which is 200 steps/rev. So, if you set the microstepping on the 7604 to 10x, the 7344 controller will read 10*200 = 2000 steps/rev. Now, if you want to include gear ratios, you must multiply them in manually in MAX. Normally this is not done, since maximum velocities are specified at the motor. If you use a rotation stage, then you might want to multiply them in. It's a matter of taste.
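    To make that estimate concrete, here is the arithmetic behind it (assuming the 100:1 worm ratio, which is only a guess until the supplier confirms the real gearing):
    \[ \frac{2\ \text{mm per screw rev}}{2\ \text{nm per pulse}} = 10^{6}\ \text{pulses per screw rev}, \qquad \frac{10^{6}\ \text{pulses per screw rev}}{100\ \text{motor revs per screw rev}} = 10^{4}\ \text{pulses per motor rev} \]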

  • Quick question about photos on Pre

    Is there any way to have a photo displayed on the Pre also show the file name, i.e. (joescar.jpg)? The only data I can see is the album name and photo number (1/23) of the photos in that album.
    Thanks for any help.
    Post relates to: Pre p100eww (Sprint)

    Hello and welcome to the forums;
    Unfortunately the Photos application is not set up to display file names at this time.
    If you would like to see this added in a future update, I recommend visiting www.palm.com/feedback where our developers monitor customer requests for new features or changes to existing ones.
    Hope this helps,
    TreoAide

  • Question about "Harry Potter" Pre-Release

    I am contemplating buying the "Harry Potter and the Goblet of Fire" album from the Music Store. However, in the description it says something about the digital booklet being available AFTER November 14. Does anyone know if this means that if I buy it today, I won't get the digital booklet? Has anyone bought the album or know whether this is the case or not? Also, am I correct in understanding that the album is only available this week? Thanks for any help.

  • Some questions about the integration between BIEE and EBS

    Hi, dear experts,
    I'm a newbie to BIEE. These days I have been taking a look at the BIEE architecture and components. My next project involves BIEE development based on an EBS application, and I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both BIEE 10g and 11g be integrated with EBS R12?
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical tables are needed, how is the data transferred from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    4) During the data-transfer phase, if there is a very large volume of data to move, how is completeness maintained? For example, if 1 million rows are being transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, will they see the new 50% of the data in the reports? Is there any transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information you can share.
    Thanks in advance.

    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both BIEE 10g and 11g be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as a reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources such as Siebel CRM, EBS, PeopleSoft, JD Edwards, etc.
    2) In the BIEE Administration Tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    This is independent of the source. It is OBIEE modeling: you create the RPD with all its layers. If you build from scratch, you will have to create all the layers yourself; if BI Apps is used, you get a pre-built RPD along with the other pre-built components.
    3) If the physical tables are needed, how is the data transferred from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer: Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings, mostly for use with Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to use only ODI in future releases.
    4) During the data-transfer phase, if there is a very large volume of data to move, how is completeness maintained? Will users see partially loaded data in reports?
    Users will still see the old data, because it is good practice to turn on caching and purge the cache after every load.
    Refer to http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html and the many other documents available online.
    Hope this helps

  • Questions about Contracts and New Phones

    I've had some questions about how upgrading and adding new lines to a contract works, and I haven't been able to find the answers through Google. First, some background information: we started this most recent contract in September of 2011, and as such the current contract will end in September of 2013. According to the phone information portal, all three of the devices on my account will be eligible for an upgrade this Saturday (May 4th, 2013). My daughter's birthday happens to coincide with that date, so we were planning on surprising her by taking her to buy a new phone, and we are going to allow our son to upgrade his phone as he has been asking to for a while.
    Is there any way of locking/limiting the amount of data allowed for each phone on a shared plan? I'm worried that my daughter will go way over our limit and not pay attention, to the point where she will end up costing us a very large amount of money.
    Does adding a new line require the start of a new contract?
    My son was interested in purchasing the "Samsung Galaxy S4" and has noticed that pre-orders for that phone are currently available. Is it possible to pre-order the phone on that date using the upgrade so that the price will be reduced?
    If the answer to number 2 is a no, does adding the new line take away the upgrades from other phones or would he be able to go to the store past the release date and upgrade then?
    Thanks for taking the time to read through all of this. Any help that I receive will be extremely appreciated.

    1. Yes. There is an additional fee per line you wish to do this for. When you add the line you can choose this option and set the data amount it is allowed to use.
    2. Yes. Each line is a separate contract with its own expiration date.
    3. The contract starts when you sign up for it. The amount will be pro-rated on the plan if it is not started until part-way through your billing cycle, if that is what you are asking. Doing the pre-order just guarantees you the phone in case they sell out.
    4. No it will not take any upgrades away as each line has its own termination date. You can move upgrades around between lines though.

  • Question about download file in OAS4

    Question about downloading files in OAS4:
    I use Oracle Application Server 4.0.7 on Windows NT 4.0 SP6, and I am developing a document system with the PL/SQL Cartridge, using its upload/download support.
    According to the documentation, upload/download in the PL/SQL Cartridge is based on the Application Server's Content Service.
    The problem: when I upload an HTML or MS Word file, it is stored in a LONG RAW column, but on download the browser tells me it cannot find an application to open the file, and I have to select an application from a list to open it.
    Normally the browser would open MS Word automatically when downloading a .doc file, or open a new browser window to view an HTML file.
    Watching the download on the client browser, the Content-Type is always "application/octet-stream", and the downloaded file also loses its extension, so the browser cannot open it automatically.
    I figured that if I set the correct Content-Type, the browser would know how to open the file, so I used the owa_content.set_content_type procedure to set the .doc file to "application/msword", but the web server still returns "application/octet-stream".
    I don't know how to solve this. Please help.
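    For what it's worth, with the standard PL/SQL Web Toolkit the usual approach is to emit the MIME type and a filename hint yourself before streaming the content. A minimal sketch, assuming the Toolkit's owa_util and htp packages are available (whether the OAS 4.0.7 Content Service honors these headers is a separate question; the procedure name and parameter below are illustrative):
    -- hypothetical download procedure; table/column handling is elided
    CREATE OR REPLACE PROCEDURE download_doc (p_name IN VARCHAR2) IS
    BEGIN
       -- send the real MIME type instead of application/octet-stream,
       -- keeping the HTTP header open so more header lines can follow
       owa_util.mime_header('application/msword', FALSE);
       -- give the browser back the file name (and extension) it lost
       htp.p('Content-Disposition: attachment; filename="' || p_name || '"');
       owa_util.http_header_close;
       -- ... stream the LONG RAW content here ...
    END download_doc;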

    I have a Tecra M2 and rely on your email update to ensure I have the latest drivers on my machine.
    When I received a Toshiba support email on 14 April 2005 giving reference to a QFE from Microsoft I assumed it would be necessary for my Tecra.
    I was very confused when I found that this QFE, and subsequent ones posted on 16 April 2005, relate to pre-SP2 critical updates that are not required if one has already taken your earlier advice of updating to Service Pack 2; at the very least your narrative should mention this. I find it very difficult to believe that your updates are two-plus years out of date.

  • Runcluvfy.sh stage pre fails with node reachability on 1 node only

    Having a frustrating problem: a 2-node RAC system on RHEL 5.2, installing 11.2.0.1 grid/clusterware. I'm performing the following pre-check command from node 1:
    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
    I'm getting the following error, and it cannot write the trace information:
    [grid@node1 grid]$ sudo chmod -R 777 /tmp
    [grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose
    WARNING:
    Could not access or create trace file path "/tmp/bootstrap/cv/log". Trace information could not be collected
    Performing pre-checks for cluster services setup
    Checking node reachability...
    node1.mydomain.com: node1.mydomain.com
    Check: Node reachability from node "null"
      Destination Node                      Reachable?
      node2                       no
      node1                       no
    Result: Node reachability check failed from node "null"
    ERROR:
    Unable to reach any of the nodes
    Verification cannot proceed
    Pre-check for cluster services setup was unsuccessful on all the nodes.
    [grid@node1 grid]$
    [grid@node1 grid]$ echo $CV_DESTLOC
    /home/grid/software/grid/11gr2/grid
    I've verified the following:
    1) there is user equivalence between the nodes for user grid
    2) /tmp is read/writable by user grid on both nodes
    3) Setting the CV_DESTLOC appears to do nothing - it seems to go back to wanting to write to /tmp
    4) ./runcluvfy.sh comp nodecon -n node1,node2 -verbose succeeds with no problem
    And the weirdest thing of all, when I run ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose from node 2, it succeeds without errors.
    What am I missing? And TIA..

    I made a copy of runcluvfy.sh and commented out all the rm -rf commands so that it would at least save the trace files. I re-ran it and got the trace output below - not entirely helpful to me, but do any gurus out there see anything?
    [main] [ 2010-04-20 15:48:38.275 CDT ] [TaskNodeConnectivity.performTask:354]  _nw_:Performing Node Reachability verification task...
    [main] [ 2010-04-20 15:48:38.282 CDT ] [ResultSet.traceResultSet:341]
    Target ResultSet BEFORE Upload===>
            Overall Status->UNKNOWN
    [main] [ 2010-04-20 15:48:38.283 CDT ] [ResultSet.traceResultSet:341]
    Source ResultSet ===>
            Overall Status->OPERATION_FAILED
            node2-->OPERATION_FAILED
            node1-->OPERATION_FAILED
    [main] [ 2010-04-20 15:48:38.283 CDT ] [ResultSet.traceResultSet:341]
    Target ResultSet AFTER Upload===>
            Overall Status->OPERATION_FAILED
            node2-->OPERATION_FAILED
            node1-->OPERATION_FAILED
    [main] [ 2010-04-20 15:48:38.284 CDT ] [ResultSet.getSuccNodes:556]  Checking for Success nodes from the total list of nodes in the resultset
    [main] [ 2010-04-20 15:48:38.284 CDT ] [ReportUtil.printReportFooter:1553]  stageMsgID: 8302
    [main] [ 2010-04-20 15:48:38.284 CDT ] [CluvfyDriver.main:299]  ==== cluvfy exiting normally.
    I'm still baffled why the precheck is successful from the second node. And, in fact, all other cluvfy checks that I've run succeed from both nodes.
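    A hedged checklist that has explained this kind of one-node asymmetry for others: cluvfy's node reachability test ultimately relies on ping, which needs the setuid bit to work for non-root users, and the trace location can be redirected away from /tmp/bootstrap via the CV_TRACELOC environment variable:
    # on the node where the check fails, as the grid user
    ls -l /bin/ping      # look for the setuid bit: -rwsr-xr-x ... root
    ping -c 2 node2      # does plain ping work for a non-root user?
    # redirect cluvfy's trace output somewhere writable, then re-run
    export CV_TRACELOC=/home/grid/cvtrace
    mkdir -p $CV_TRACELOC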

  • Runcluvfy.sh stage -pre crsinst: error Unable to reach any of the nodes

    Hi all,
    Well, I've gone through the pre-reqs in preparation for installing 11g clusterware on RHEL 5.3.
    I'm at the point where I'm trying to run:
    ./runcluvfy.sh stage -pre crsinst -n node1 -verbose
    I get this:
    Performing pre-checks for cluster services setup
    Checking node reachability...
    Node reachability check failed from node "node1 ".
    Check failed on nodes:
    node1
    ERROR:
    Unable to reach any of the nodes.
    Verification cannot proceed.
    Pre-check for cluster services setup was unsuccessful on all the nodes.
    Right now I just want to install a one-node RAC system (I will add servers later as I get them online).
    I've verified that ssh is working (thinking it may be trying to connect to itself by ssh). I have the keys generated and installed...if I connect ssh as the oracle user back to the same machine, it gets me right on with no prompts for passwords.
    nslookup on node1 looks great.
    This box has 2 cards....eth0 and eth1. Right now in the /etc/hosts file, I have node1 to the IP for eth0, and node1-priv set for the IP address eth1.
    I do have a little trouble understanding what node1-vip is supposed to do or what it should be set to. I found that an IP address one higher than eth0's wasn't being used, and set node1-vip to that.
    (Can someone explain a little more about the vip host? Is it supposed to somehow point to node1's IP address on eth0 like the regular entry does?)
    Since this is a one box, one node install...hoping clusterware and checks are just looking at the /etc/hosts file. I've tried playing around, and setting node1-vip to be the same as node1 (IP)...that doesn't work either.
    One thing I can guess 'might' be wrong. Does runcluvfy use "ping"? I found the oracle user cannot ping this box from this box. The box (node1) can be pinged from outside the box...it is registered on DNS, I can ssh into it no problem, and again, oracle can ssh into himself on same box with keys properly generated).
    I've been looking around, and I just don't see much of what to look at to troubleshoot with this error, I guess everyone gets past the verification the first time with no host unreachable errors?
    I'm a bit weak when it comes to networking. Any help greatly appreciated...suggestions, links...etc!!
    cayenne

    Ok...looks like this was the problem. It appears the SA's, per newer policy, had turned off "ping" for any other user on the box besides root.
    I took a shot in the dark and had them turn it back on (since ssh'ing and the other items I checked seemed to work outside the runcluvfy script). They turned ping back on, and the nodes from the script are now reachable and test positive for equivalency.
    Performing pre-checks for cluster services setup
    Checking node reachability...
    Check: Node reachability from node "node1"
    Destination Node Reachable?
    node1 yes
    Result: Node reachability check passed from node "node1".
    Checking user equivalence...
    Check: User equivalence for user "oracle"
    Node Name Comment
    node1 passed
    Result: User equivalence check passed for user "oracle".
    Pre-check for cluster services setup was unsuccessful on all the nodes.
    I'm guessing that last line...was due to not having clusterware running on any other boxes?
    Anyway, will try to config. RAC, and get things installed.
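    For anyone else who lands here: the usual mechanism behind "ping turned off for non-root users" is a stripped setuid bit on the ping binary, so the check and (policy permitting) the fix look like this (a sketch; paths vary by distro):
    # as root: the lowercase 's' in -rwsr-xr-x is the setuid bit
    ls -l /bin/ping
    chmod u+s /bin/ping   # restore it so non-root users such as oracle can ping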

  • Few questions about upgrading database

    Hi everyone,
    greetings of the day
    I have a few questions about upgrading a database.
    In export and import mode:
    1. Can we have a new name for the target database?
    2. I think we need to create tablespaces; do we need to create users as well?
    3. If we are upgrading from a 9i to a 10g database, is there any activity to be performed other than creating a new SYSAUX tablespace?
    4. How do we get a consistent export?
    In DBUA mode (on the same machine only):
    1. Do we need to shutdown / startup restrict the database?
    2. How can we move the files to the new location?
    3. Can we change the db_name of the database?
    4. Can we use the old database as well?
    In a manual upgrade using the catupgrd scripts:
    1. Can we rename the db_name?
    2. Can we use the old database as well?
    3. How do we move the database files to the new location?
    4. Can we perform this kind of upgrade on a different server?
    5. When we do startup upgrade in the new home, how does it identify the old database in order to upgrade it?
    Thanks

    udayjampani wrote:
    Hi everyone,
    greetings of the day
    Pl post details of source and target database versions, along with your OS details.
    I have a few questions about upgrading a database.
    In export and import mode:
    1. Can we have a new name for the target database?
    Yes.
    2. I think we need to create tablespaces; do we need to create users as well?
    You can create users, but it is not necessary. You need to pre-create tablespaces only if their characteristics/locations on the target are different than on the source.
    3. If we are upgrading from a 9i to a 10g database, is there any other activity to be performed?
    Not that I am aware of - see the steps in the Upgrade Guide - http://docs.oracle.com/cd/B19306_01/server.102/b14238/expimp.htm
    4. How do we get a consistent export?
    Ensure the database is started in restricted mode, so users will not be able to access the database during the export.
    In DBUA mode (on the same machine only):
    1. Do we need to shutdown / startup restrict the database?
    No - DBUA will do this automatically for you.
    2. How can we move the files to the new location?
    After the upgrade you can move the datafiles wherever you want - use the ALTER DATABASE RENAME FILE command (http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_1004.htm#i2082829).
    3. Can we change the db_name of the database?
    I do not believe this is possible with DBUA.
    4. Can we use the old database as well?
    No - the database will be upgraded by DBUA - there is no "old" database.
    In a manual upgrade using the catupgrd scripts:
    1. Can we rename the db_name?
    Yes.
    2. Can we use the old database as well?
    No - the scripts will upgrade the database - there is no "old" database.
    3. How do we move the database files to the new location?
    See above.
    4. Can we perform this kind of upgrade on a different server?
    Pl elaborate on what you mean by this. You can copy the existing database to a different server (assuming a compatible OS) and upgrade it there.
    HTH
    Srini
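    To make the "consistent export" advice concrete, here is a minimal sketch using the original export utility against a restricted database (file names are illustrative; CONSISTENT=Y makes the whole dump a single point-in-time image):
    -- in SQL*Plus as SYSDBA: block ordinary users during the export
    SQL> shutdown immediate
    SQL> startup restrict
    -- then from the shell: CONSISTENT=Y exports all tables as of the same SCN
    $ exp system/<password> FULL=Y CONSISTENT=Y FILE=full_db.dmp LOG=full_db.log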

  • Questions about SRM PO in Classic scenario

    Hello All
    I have a number of questions about the SRM PO in classic scenario.
    1) If the backend PO is changed in ECC (i.e. if any quantity is added), can we have an approval workflow for the change?
    We currently have release strategies for other POs in ECC. How do we accommodate only the PO changes?
    Our requirement is to have no approval initially, when the PO is created, but only for the changes.
    2) If the PO is sent as XML to the vendor, is it possible to capture the PO response in ECC? What are the prerequisites for this to happen? Is SAP XI required for this?
    3) If the PO is cancelled or reduced, does the balance go back to the SRM sourcing cockpit?
    We are using SRM 7.0
    Regards
    Kedar

    Hi,
    1) If the backend PO is changed in ECC (i.e. if any quantity is added), can we have an approval workflow for the change? We currently have release strategies for other POs in ECC. How do we accommodate only the PO changes? Our requirement is to have no approval initially, when the PO is created, but only for the changes.
    Solution: In ECC 6.0, if the PO is changed and a release strategy exists in ECC 6.0, the change follows the ECC 6.0 approval route.
    2) If the PO is sent as XML to the vendor, is it possible to capture the PO response in ECC? What are the prerequisites for this to happen? Is SAP XI required for this?
    Solution: XI is mandatory.
    3) If the PO is cancelled or reduced, does the balance go back to the SRM sourcing cockpit?
    Solution: Once a PO has been created in ECC 6.0 for the PR in the sourcing cockpit, cancelling or reducing it will not update the sourcing cockpit in SRM.
    E.g. a PR for 100 units sits in the SRM sourcing cockpit, and you have created a PO for 40 units in ECC 6.0; for the remaining 60 units of the PR, you can create another PO in ECC 6.0.
    Regards
    Ganesh

  • Question about autodiscover in case of multiple bound namespace

    Hi Experts,
    I have a question about the autodiscover behaviour. Let's assume we have the below infrastructure:
    SiteA :
    MBX-Server-SiteA : Member of a DAG
    CAS-Server-SiteA : outlook anywhere url = siteA.domain.com
    SiteB :
    MBX-Server-SiteB : Member of a DAG
    CAS-Server-SiteB-1 : outlook anywhere url = siteB-1.domain.com
    CAS-Server-SiteB-2: outlook anywhere url = siteB-2.domain.com
    We have DB-1 and DB-2 that have copies in both MBX servers.
    My question is: how does Exchange select the access URLs to return during the autodiscover process? I know it depends on where the mailbox is hosted, but I can't find details about the process in the TechNet articles.
    Thanks.

    There are quite a few details involved in the Autodiscover process. First of all, probably the most important thing is that there are two stages to the whole process.
    In the first stage, the client is simply concerned with getting the address of a CAS server that will help it further. We'll keep things simple and assume your scenario, where the client is located inside the network and is domain-joined. The details of the LDAP query are covered in this link. (At the time I was investigating this, I actually went ahead and ran the queries using the ldp.exe client against the Configuration partition of the respective AD domain; it's worth seeing the actual responses.) An interesting trick here is the 'keywords' attribute that's stamped on those SCP entries. The reason behind it is that you don't want a client located in one site to go halfway across the globe to connect to a CAS server when there's one available in its own site. One simple way to get the 'keywords' attribute stamped is through the Set-ClientAccessServer cmdlet, using the -AutodiscoverSiteScope parameter. In your example, you'd probably want to run it against the CAS server in Site A and specify the name of the corresponding AD site ('SiteA'), and correspondingly against the 2 CAS servers in Site B (using 'SiteB'). Once the client has got the response to its query, it will attempt to select one server that's handling the site it's in (essentially it will filter the results based on 'keywords' -contains <client-site>). Now that we have our endpoint, we can go to stage 2.
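    For reference, stamping the site scope is one line per CAS with the cmdlet mentioned above, run from the Exchange Management Shell (server and site names are taken from your scenario):
    Set-ClientAccessServer -Identity "CAS-Server-SiteA"   -AutodiscoverSiteScope "SiteA"
    Set-ClientAccessServer -Identity "CAS-Server-SiteB-1" -AutodiscoverSiteScope "SiteB"
    Set-ClientAccessServer -Identity "CAS-Server-SiteB-2" -AutodiscoverSiteScope "SiteB"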
    In stage 2, the client will actually use EWS to query the Autodiscover service itself running on our target server. There are two possible interfaces for accessing the Autodiscover service: POX (Plain Old XML) and SOAP. POX targets the ../Autodiscover.xml URLs, while SOAP uses the ../Autodiscover.svc URLs. Details about this, including some hardcoded parts, are here. What happens next is detailed in point 3, section 2.1 The Autodiscover Process, here. This last link is the key to the whole process:
    "It provides a list of CAS that has AutodiscoverSiteScope information set for the Associated AD site of the Database where the client Mailbox is located."
    In other words, the CAS is smart enough to return the URLs belonging to a CAS server in the AD site where the client's mailbox database is active.
    My advice is to test this in your scenario. Tests can be done first-hand: Outlook's tray-icon Test E-Mail AutoConfiguration can be used, or alternatively, if you want to see the details of the communication, SoapUI works for the SOAP access method; for POX there's an extension called 'Postman for Chrome'. I used all of these in my tests back when I was fighting conflicting results from the articles around the web about Autodiscover.
    That's a long way of saying we get the AutoD URL of a CAS server closest to the workstation, which then provides the configuration to use, namely the URLs closest to the mailbox  :)
    By all means look at the SOAP response, but Outlook will only use POX. Lync does SOAP, along with other 3rd-party apps.
    Cheers,
    Rhoderick
    Microsoft Senior Exchange PFE
    Blog:
    http://blogs.technet.com/rmilne 
    Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.

  • Question about using new battery in old Powerbook

    I have a pre-Intel PowerBook G4, and the battery is pretty much toast (it lasts about 15 minutes now). I have ordered a new battery for it, and I have this question about using it:
    Am I smarter to keep the new strong battery out of the PB most days (as I usually work with it plugged in at home) and just pop it in when I know I will be out surfing on batteries? Or is it just as good living in my laptop 24/7 and only occasionally being called upon to do its job?
    Current bad Battery Information below:
    Battery Installed: Yes
    First low level warning: No
    Full Charge Capacity (mAh): 1144
    Remaining Capacity (mAh): 1115
    Amperage (mA): 0
    Voltage (mV): 12387
    Cycle Count: 281
    thanks folks, Shereen

    Hi, Shereen. Every Powerbook battery wants to be used — drained and then recharged — at least every couple of weeks. If you've always used your Powerbook on AC power nearly all the time, and not followed that pattern of discharging and recharging the battery every week or two, it's possible that your use habits have shortened the lifespan and prematurely diminished the capacity of your old battery. Of course it's also possible that your battery is merely old, as a battery's capacity also diminishes with age regardless of how it's used. You didn't say how old the battery is in years, so this may or may not be an issue. I mention it only because it can be an issue.
    For general information on handling a battery for the longest possible lifespan, see this article. My advice on the basis of that article and long experience reading these forums is that it would be OK to do as you propose, but I doubt that you'd derive any significant benefit from it. You would still want to be sure of putting the new battery through a charge/discharge cycle every week or two, even if you didn't have a reason to use the Powerbook away from home or your desk, because sitting unused outside the computer is just as bad for a battery as sitting unused inside it. And you should never remove the battery from your computer when it's completely or almost completely discharged and let it sit that way any longer than a day or two.

  • Question about setting track level

    Hi, I finally got my new MacBook Pro and MOTU 8pre and have just today tried a bit of recording, just to get used to it. I am at the moment using GarageBand. I have a question about track levels. In GarageBand, when I select a track to record, there is a slider for input level, or an option of clicking the automatic level control. But for some reason they are both that color of grey you get when something cannot be selected on a computer. How do I activate this? It's strange, as I did actually manage to record the track; I just can't seem to adjust the level. I thought maybe it has to do with the MOTU, and that I can only set the track level there, and that GarageBand's level settings are (rightfully) disabled? Maybe that's what I should be doing, but I'm just a bit confused. Anyway, thanks in advance to whoever takes the time to help out a newbie.

    Ah, ok. Good to know. Thanks for the info.
