LDAP script for 11500

I configured two LDAP services on the CSS 11500, and I use the LDAP keepalive script to check whether the service is alive:
service LDAP1-ldap
  ip address 192.168.1.23
  port 389
  keepalive type script ap-kal-ldap "192.168.1.23"
  active
When I check the status, I get the message:
Script Error: Script error in line: 40
When I try to play the script, I get:
Error in script playback line:40
>>>socket waitfor ${SOCKET} "0a0100" 2000 raw
brgsikalb02#
Script Playback cancelled.
What is the problem, and how can I fix it?

The script waits for a response from the server that contains a sequence of bytes that looks like this: 0a0100
If we do not receive it within 2 seconds, we fail.
Try to get a sniffer trace to see what is going on.
Gilles.
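For reference, here is a minimal stand-alone check (a Python sketch, not the CSS script language) that does the same thing the keepalive does; it can be useful for probing the server from another host while you collect the sniffer trace. The hex strings are the ones used by the stock ap-kal-ldap script, and the host is the address from the service definition above.

import socket

# Hex payloads taken from the stock ap-kal-ldap script.
BIND_REQUEST = bytes.fromhex("300c020102600702010204008000")   # anonymous bind request
UNBIND_REQUEST = bytes.fromhex("30050201034200")                # unbind request
SUCCESS_MARKER = bytes.fromhex("0a0100")                        # resultCode: success

def ldap_keepalive(host, port=389, timeout=2.0):
    """Return True if the server answers the bind with a success code within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(BIND_REQUEST)
            response = s.recv(4096)        # wait up to 2 seconds for the bind response
            if SUCCESS_MARKER not in response:
                return False
            s.sendall(UNBIND_REQUEST)      # leave the server cleanly, like the script does
            return True
    except OSError:
        return False                       # connect, send or receive failure -> keepalive fails

if __name__ == "__main__":
    print(ldap_keepalive("192.168.1.23"))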

Similar Messages

  • CSS keepalive script for LDAP (Novell)

    I need an advanced script for the Cisco CSS11000 for an LDAP keepalive. The problem is that the built-in script is too rudimentary: all it does is check the TCP 389 connection to the servers plus some expected bind response code "0A, 01, 00". But what happens for us is that when the LDAP server (Novell) is doing a DS repair, where the server is too busy to handle a real LDAP call but still answers on TCP 389, the CSS thinks it is still alive.
    We want a smarter script that behaves like a real LDAP client and sends a real LDAP request instead of a simple TCP 389 check. Does anyone have any idea?
    Thanks in advance,
    Dave

    With the CSS script language you can send binary data and receive a binary response.
    If you know which port to send the request to, what the binary data is, and what the expected binary response is, we can easily write a script for you.
    The easiest way to get the binary info is to make an LDAP query and capture it with a sniffer.
    Also capture the response.
    Make sure to do a query that will always result in the same response.
    Once you have this data, you can try to update the ldap script yourself [hint: use the raw keyword when sending the data], as in the sketch below.
    Or post the info here and we will try to make a script for you.
    Gilles.
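    If you already have a sniffer trace saved as a pcap, a small helper like the one below can extract the hex for you. This is a rough sketch that assumes Python with scapy installed and a capture file named ldap.pcap (both hypothetical); it prints each TCP payload on port 389 as a hex string, which is the format the script's "socket send ... raw" and "socket waitfor ... raw" commands expect.

    from scapy.all import rdpcap, TCP, Raw

    # Print every LDAP (port 389) TCP payload in the capture as a hex string.
    packets = rdpcap("ldap.pcap")
    for pkt in packets:
        if pkt.haslayer(TCP) and pkt.haslayer(Raw):
            tcp = pkt[TCP]
            if 389 in (tcp.sport, tcp.dport):
                direction = "request " if tcp.dport == 389 else "response"
                print(direction, bytes(pkt[Raw]).hex())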

  • CSS keepalive script for LDAP

    I am trying to write a script to detect the status of an LDAP server on a CSS. I figured out that I should capture the binary send and receive data of the LDAP query, and I have captured the request and response packets. But I have no idea which part of the binary data I should put into the stock LDAP keepalive script, or how. Could someone point me in the right direction?
    Thanks a lot.
    Daniel

    Just look at the existing ldap script
    CSS11503-2# sho script ap-kal-ldap
    !no echo
    ! Filename: ap-kal-ldap
    ! Parameters: HostName
    ! Description:    "Lightweight Directory Access Protocol v3"
    !   This script will connect to an LDAP server and attempt to
    !   "bind request" to the server.  Once the server gives a
    !   positive response we will disconnect (RFC-2251).
    ! Bind Response Code we will search for is: 0x0a 0x01 0x00
    ! Failure Upon:
    !   1. Not establishing a connection with the host.
    !       2. Failure to receive the above response code.
    ! Make sure the user has a qualified number of arguments
    if ${ARGS}[#] "NEQ" "1"
            echo "Usage: ap-kal-ldap \'Hostname\'"
            exit script 1
    endbranch
    ! Defines:
    set HostName "${ARGS}[1]"
    set EXIT_MSG "Connection Failed"
    ! Connect to the remote host (use default timeout)
    socket connect host ${HostName} port 389 tcp 2000
    set EXIT_MSG "Send: Failure"
    ! Send a Bind Request to the remote host.  This is simply a standard
    ! "capture" of a bind request in hex.  This should work for all standard
    ! version 3 LDAP servers.
    socket send ${SOCKET} "300c020102600702010204008000" raw
    set EXIT_MSG "Recieve: Failure"
    ! Expect to receive a standard response from the host.  This should
    ! be equal to a SUCCESS response code:
    socket waitfor ${SOCKET} "0a0100" 2000 raw
    set EXIT_MSG "Send: Failure"
    ! Send an exit "Unbind Request" to the remote host so that they
    ! are not left hanging.
    socket send ${SOCKET} "30050201034200" raw
    no set EXIT_MSG
    socket disconnect ${SOCKET}
    exit script 0
    CSS11503-2#
    The socket send line is the command that sends the binary data (this includes everything inside the TCP payload, i.e. after the TCP header).
    The socket waitfor line is the command that inspects the received data and considers the response valid if the given sequence is seen somewhere in the TCP payload of the response.
    Gilles.
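    For orientation, here is a rough decode of the three hex strings the stock script uses, written as a small Python snippet. The byte-by-byte comments are based on standard LDAP BER encoding (they are not taken from Cisco documentation), and they show where the request and response bytes from your own capture would be substituted.

    # Decode of the hex strings used by ap-kal-ldap (standard LDAP BER structure).
    bind_request = bytes.fromhex(
        "300c"      # SEQUENCE, length 12           -- LDAPMessage
        "020102"    # INTEGER 2                     -- messageID
        "6007"      # [APPLICATION 0], length 7     -- BindRequest
        "020102"    # INTEGER 2                     -- LDAP version field
        "0400"      # OCTET STRING, empty           -- name (anonymous)
        "8000"      # [0], empty                    -- simple authentication, no password
    )
    expected_fragment = bytes.fromhex(
        "0a0100"    # ENUMERATED 0                  -- resultCode: success, inside the BindResponse
    )
    unbind_request = bytes.fromhex(
        "3005"      # SEQUENCE, length 5            -- LDAPMessage
        "020103"    # INTEGER 3                     -- messageID
        "4200"      # [APPLICATION 2], empty        -- UnbindRequest
    )
    # To test a different (real) query, put the hex of your captured request payload into
    # the "socket send" line and a byte sequence that always appears in the captured
    # response into the "socket waitfor" line, exactly as these constants are used above.
    print(bind_request.hex(), expected_fragment.hex(), unbind_request.hex())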

  • ALUI Gateway Not Returning Scripts for Subset of Users

    We have a problem where the ALUI gateway is not returning some .NET scripts for a subset of users. We have the ALUI 6.5 portal and are using the .NET accelerator 3.1.
    The situation is that this subset of users requests one of our portal pages via https, which then reaches through our firewall to our remote server running the .NET portlet. The .NET page is served and returned to the users correctly and quickly, but this particular subset of users does not see the result rendered in their browsers for about 3 minutes. Viewing the HTML source in the browser, as well as tools like Fiddler, shows the page is indeed in the browser, but it is stuck trying to request some .NET scripts, and only displays the page when those requests time out.
    The .NET scripts that are problems are both WebResource.axd and ScriptResource.axd, which in some cases are in our .NET portlets because of the .NET framework itself, but in other cases are there only because of the ALUI portal itself, when it munges the .NET portlet to handle multiple server forms and validators and such. These .axd scripts are gatewayed so that the client browser requests them through the ALUI gateway, which in turn requests them through our firewall to our remote server -- which always serves these scripts correctly and quickly according to the IIS logs. The problem seems to lie in the ALUI gateway: it receives these scripts correctly and quickly, but it does not return them to this subset of users. Instead the ALUI gateway seems to process for about 3 minutes and eventually returns an HTML error page, which of course the client never sees since it is expecting javascript, but we can capture the error page via Fiddler and it is just telling us there was a timeout -- the client browser just notes that there is a javascript error.
    The really bizarre part is that this only happens for a subset of users, which amounts to about 20% of our users. There are two things that delineate these users that we have found so far. First, these users have email addresses that are 27 - 30 characters long, and the email address is our login id. Note that both shorter and longer email addresses are OK, so this is not a simple length limit on email addresses, as it might sound at first. Secondly, these users have to be in a particular branch of our ldap store, which means they are replicated across to the portal in a particular group. We can move these "bad" users into another branch of our ldap store, and once they are replicated to the portal they work fine; if we move them back, they return to not working. We cannot find any other difference in our ldap branches or in the corresponding ALUI groups, plus it is only the ones in that particular branch with email lengths in that very specific range.
    The gatewayed requests for these scripts vary by user, since the PTARGS in the gatewayed request include the integer userid, but that does not seem to matter because we can have a "good" user successfully request the script with a "bad" user's id, and we can have a "bad" user fail to successfully request the script with a "good" user's id. That seems to point to the authentication cookie being the differentiating factor that determines whether or not a gatewayed request for one of these script files will succeed or fail. So far we have only seen the problem with these particular .NET axd scripts, but that may simply be because we don't have many, if any, other scripts or resources that need to be gatewayed, since we usually put resources on our imageserver -- these being different because .NET and/or the ALUI portal puts these references in there for us whether we like it or not. Long-term we can re-architect our .NET portlets to not have these axd scripts, although, as mentioned earlier, we also see the ALUI portal put these axd scripts in our portlets as part of its munging process -- so that is not completely in our control. We do need to test whether this subset of users can successfully request other gatewayed resources or not -- this is actually the first time I thought of that test case, so all I can say right now is that it's the axd scripts we know are problems, but it may or may not be a bigger problem.
    One last comment, as we appear to have found a work-around, but it does not make sense at all, and it's not our preferred solution, so we still very much believe there is a problem elsewhere -- most likely in the ALUI gateway, but possibly somehow related to authentication in a way we do not understand. Our work-around, which so far seems to work, is to make our remote server be accessed via https instead of http -- which matches the way the client browsers call our portal (https). Again, at first that doesn't make sense, since this is only a problem for a small subset of users -- obviously calling our remote server via http works successfully for all other users, so it's not simply a case that we must match protocols or it won't work. We also use http successfully for our calls to the remote server for portlets that are Java, although it's possible that they don't have any gatewayed resources. But we would also prefer not to use https for our internal calls in our own network, as there is no need for the extra overhead -- and by the way, our dev and qa environments do use http even for these .NET portlets and do not have the same problem. What's different in our production environment? The only things that should be different are that we have multiple portal servers and multiple remote servers that are load balanced (not sure that's the right term for how the remote servers are combined) -- and of course we have a firewall between them that does not exist in dev or qa.
    So we would very much appreciate any thoughts on this, whether you've seen something like it before, or just have some additional insight into the gateway and/or authentication process that seems to be the issue.
    Thanks, Paul Wilson

    We've run into this problem when using the Microsoft ReportViewer control. In our case, we found that the portal gateway malformed the urls containing webresource.axd, so the browser was unable to get the correct address of the files. Note that there are usually multiple links to the axd files; they return different resources depending on the query string they get.
    To solve the problem, we ended up with a bit of a hack solution, but it works well. We extracted the resources we needed from the ReportViewer control's assembly using Reflector, and then published them on the image server. The next piece was to override the Render method of the page that hosted the control. In our custom version of Render, we parsed the html of the page, and replaced the contents of the src= elements with pt:images// links. These processed just fine in the portal's transformer, and our resources started showing up.
    Our Render looks something like the following code sample. The "HACKReportViewerControlPortalImageGatewayFix" class has all of the code to do the parsing. In this case, it is specific to the ReportViewer, because it has some special considerations for parsing the urls. My bet is that your code will be quite custom as well; therefore, I've not included this piece of code. The important piece below is the invocation of MyBase.Render, which tells the page to render all of its contents. Once that method is done, all of the HTML for the page is in the writer. The ModifyImageTags method then parses the html, doing the necessary replacements. Finally, the modified html is written to the page's writer, so it can be output following the normal .NET processes. Also note that when parsing for urls to replace, don't do all of them; just look for the ones containing axd.
    (VB.NET)
    Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
              Dim fixer As New HACKReportViewerControlPortalImageGatewayFix
              MyBase.Render(fixer.GetWriter)
              writer.Write(fixer.ModifyImageTags())
    End Sub
    This works great for images. However, if you are dealing with javascripts, I'm not sure if this will work for you, as some .NET controls send different scripts depending on the browser. For example, in IE you get more buttons on the toolbar for the ReportViewer, so you get more javascript too. When using FF, you get fewer buttons, and less script. We didn't have a problem with the scripts, so we haven't needed to solve this one.
    As for timing, this type of solution doesn't take much to put together. You are really just doing some string parsing and replacements. If you are a regex ninja, it's probably even easier. We had our solution working in a day or two.
    An added benefit to this solution is that you are putting fewer bytes through the portal's gateway, and sending that traffic to the image server instead.
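    As a language-agnostic illustration of the parsing step described above (written here in Python; the URL map and the pt:images// target paths are hypothetical examples), the replacement can be as simple as a lookup on the exact .axd URLs observed in the rendered page:

    import re

    # Map the exact .axd URLs seen in the rendered HTML to copies of those
    # resources published on the image server (values are hypothetical).
    URL_MAP = {
        "/WebResource.axd?d=abc123&t=634": "pt:images//myproject/Blank.gif",
    }

    AXD_SRC = re.compile(r'src="(?P<url>[^"]*\.axd[^"]*)"')

    def modify_image_tags(html):
        # Only rewrite src attributes that contain ".axd"; leave everything else as rendered.
        def swap(match):
            url = match.group("url")
            return 'src="%s"' % URL_MAP.get(url, url)
        return AXD_SRC.sub(swap, html)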

  • Creating SQL-Loader script for more than one table at a time

    Hi,
    I am using OMWB 2.0.2.0.0 with Oracle 8.1.7 and Sybase 11.9.
    It looks like I can create SQL-Loader scripts for all the tables or for one table at a time. If I want to create SQL-Loader scripts for 5-6 tables, I have to either create scripts for all the tables and then delete the unwanted ones, or create the scripts one table at a time and then merge them.
    Is there a simple way to create migration scripts for more than one, but not all, tables at a time?
    Thanks,
    Prashant Rane

    No, there is no multi-select for creating SQL-Loader scripts.
    You can either create them separately, or create them all and then discard the ones you do not need.

  • Error while executing script for sharepoint online (office 365) - the remote server returned an error: (503) server unavailable

    I am getting an error while executing a script for SharePoint Online (Office 365): the remote server returned an error: (503) server unavailable.
    I am creating many site collections by reading records from a SharePoint list using PowerShell in a SharePoint Online tenant (Office 365).
    A few site collections are created, then I get the above error; the failing record is skipped, a few succeeding records are processed, and then the error occurs again.
    The pattern looks like this:
    success
    success
    success
    success
    Error
    success
    success
    success
    success
    success
    success
    error
    success

    Hi,
    As it is an online environment, the easiest way to troubleshoot this issue is to contact Office 365 Support and see if there is any useful information in the log files on the server side:
    https://support.office.com/en-us/article/Contact-Office-365-for-business-support-32a17ca7-6fa0-4870-8a8d-e25ba4ccfd4b?ui=en-US&rs=en-US&ad=US
    Best regards
    Patrick Liang
    TechNet Community Support

  • Custom calculation script for checkboxs

    Hello,
    Can anyone help me out with a custom calculation script for this? I have two mutually exclusive checkboxes that, when checked, should populate data into other text fields.
    If Checkbox1 is checked:
    Company1=Warehouse Alpha
    Address1=1234 Any Street
    City/State/Zip1= Los Angeles, CA 90020
    Contact Name1= Mr. Nice Guy
    Phone Number1= 213-854-8565
    Email1=[email protected]
    If Checkbox2 is checked:
    Company2=Warehouse Beta
    Address2= 5678 Awesome Blvd.
    City/State/Zip2= San Bernardino, CA 96545
    Contact Name2= Mr. Handsome
    Phone Number2= 909-824-8265
    Email2=[email protected]
    Thanks,
    Bryan

    So you have two check boxes and you want them to be mutually exclusive. Name them the same and give each a different export value. Try it and observe what happens as you check the different check boxes.
    You have described what happens if either box is checked, but what happens when a checked box is unchecked?
    Scripts can be placed in many locations. I would use a Mouse Up action on both check boxes, with the same script for both.
    I would assume you are using the following names for the fields to populate:
    Company
    Address
    CityStateZip
    ContactName
    PhoneNumber
    Email
    // Mouse Up action for both check boxes;
    // clear the fields first:
    this.getField("Company").value = "";
    this.getField("Address").value = "";
    this.getField("CityStateZip").value = "";
    this.getField("ContactName").value = "";
    this.getField("PhoneNumber").value = "";
    this.getField("Email").value = "";
    // test the export value of the box that was clicked ("Off" when it was unchecked);
    if (event.target.value == 1) {
    this.getField("Company").value = "Warehouse Alpha";
    this.getField("Address").value = "1234 Any Street";
    this.getField("CityStateZip").value = "Los Angeles, CA 90020";
    this.getField("ContactName").value = "Mr. Nice Guy";
    this.getField("PhoneNumber").value = "213-854-8565";
    this.getField("Email").value = "[email protected]";
    }
    if (event.target.value == 2) {
    this.getField("Company").value = "Warehouse Beta";
    this.getField("Address").value = "5678 Awesome Blvd.";
    this.getField("CityStateZip").value = "San Bernardino, CA 96545";
    this.getField("ContactName").value = "Mr. Handsome";
    this.getField("PhoneNumber").value = "909-824-8265";
    this.getField("Email").value = "[email protected]";
    }
    // end Mouse Up action for both check boxes;

  • Sharepoint warmup script for https sites

    We want to warm up an https site which is based on SharePoint 2010.
    When we run some sample PowerShell scripts we get an access forbidden error, so we are not able to warm up the https site.
    It is slow on first load, so we need a warmup script for https sites.
    sharepointer

    Just ensure that the service account you use to trigger the PowerShell scripts has access to IIS and SharePoint. Most often, the SharePoint farm account would be used for scheduling the warm-up scripts on the WFE server.
    I trust that answers your question...
    Thanks
    C
    http://www.cjvandyk.com/blog

  • WSUS script for pending reboot possible addition - How

    Hi, I found a script for pending reboots and the script works perfectly. My problem is that the script reports pending reboots only for the master WSUS server, not for the replica servers. Can I modify this script to report pending reboots for all replica servers in one place (the WSUS master server), or must I run this script on every replica server? This is the script:
    [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
    if (!$wsus) {
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
    }
    $computerScope = new-object Microsoft.UpdateServices.Administration.ComputerTargetScope;
    $computerScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $updateScope = new-object Microsoft.UpdateServices.Administration.UpdateScope;
    $updateScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $computers = $wsus.GetComputerTargets($computerScope);
    $report = @()
    $computers | foreach-object {
        $computer = $_.FullDomainName
        $updatesForReboot = $_.GetUpdateInstallationInfoPerUpdate($updateScope)
        $updatesForReboot | foreach-object {
            $temp = "" | Select Computer,Update
            $temp.Computer = $computer
            $temp.Update = ($wsus.GetUpdate($_.UpdateId)).Title
            $report += $temp
        }
    }
    $report | Select "Computer","Update" | Export-Csv -Path c:\..PendingReboot.csv -Delimiter 1 -NoTypeInformation

    The modified script works great. I get a report from all replica servers and the master server after today's new updates. I added a mail option, and this is what I ended up with:
    [reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
    if (!$wsus) {
        $wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer();
    }
    $computerScope = new-object Microsoft.UpdateServices.Administration.ComputerTargetScope;
    $computerScope.IncludeDownstreamComputerTargets = $true
    $computerScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $updateScope = new-object Microsoft.UpdateServices.Administration.UpdateScope;
    $updateScope.IncludedInstallationStates = [Microsoft.UpdateServices.Administration.UpdateInstallationStates]::InstalledPendingReboot;
    $computers = $wsus.GetComputerTargets($computerScope);
    $report = @()
    $computers | foreach-object {
        $computer = $_.FullDomainName
        $updatesForReboot = $_.GetUpdateInstallationInfoPerUpdate($updateScope)
        $updatesForReboot | foreach-object {
            $temp = "" | Select Computer,Update
            $temp.Computer = $computer
            $temp.Update = ($wsus.GetUpdate($_.UpdateId)).Title
            $report += $temp
        }
    }
    $report | Select "Computer","Update" | Export-Csv -Path c:\yourpath...PendingReboot.csv -Delimiter 1 -NoTypeInformation
    # mail the resulting CSV
    $smtpServer = "your mail server"
    $att = "c:\yourpath...PendingReboot.csv"
    $msg = new-object Net.Mail.MailMessage
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)
    $msg.From = "[email protected]"
    $msg.To.Add("[email protected]")
    $msg.Subject = "Pending Reboot"
    $msg.Body = "Your msg"
    $msg.Attachments.Add($att)
    $smtp.Send($msg)

  • Running Permission Scripts for App-V packages in VDI environment

    Hi
    We use App-V 5.0 SP1 in a VDI environment.
    We have a major problem with packages' permissions.
    Our users don't have administrative privileges on their machines.
    As the option for "Security Descriptors" is discontinued, the only way to give permissions to a folder in a package is to use VFSCACLS.vbs as a startup script for the package.
    This way, the first time users launch an application they're prompted to reopen it, and the second time they can use the application with the needed permissions.
    The problem:
    The script saves those permission changes under LOCALAPPDATA\AppV...
    Therefore, every time the users log off the folder is deleted (VDI...), and they must run the script again after logon to get the permissions back!
    We cannot roam the LOCALAPPDATA\AppV folder, as its size can be dozens of GBs.
    Folder permissions via group policy are also not a solution, as the folder name changes every time we upgrade a package, and it's impossible to keep up with hundreds of packages.
    So either we're missing something critical in the architecture of a VDI environment, or there's a standard solution for these situations.
    Would love to get some help
    Thanks
    Tamir Levy

    Hi Nicke
    that's what I did! The problem is that I find myself, over and over again, wanting to sequence packages in App-V 5.0 and being forced to sequence them in App-V 4.6.
    I really hope that wasn't the App-V team's goal: announcing App-V 5.0 and telling us it doesn't support many things, so we will still need App-V 4.6 forever.
    I have to maintain 2 different App-V environments with 4 different servers, 4 different sequencers and 2 clients on each computer. It doesn't make any sense for me to be forced to stay with both versions forever.
    Correct me if I'm wrong, but App-V 4.6 is a legacy application; its new versions only add support for newer operating systems and nothing more. I won't be surprised if the next version of MDOP doesn't come with App-V 4.6 anymore and Microsoft announces it's unsupported very soon.
    Every time I open a ticket with MS Support, the best answer I get is "It's a known issue, we can't tell when it will be fixed".
    Can you help me further? Move it forward to other people on the inside? At least agree with me that something is not as expected in App-V 5.0... :(
    I love the technology, I believe in it, I kind of depend on it, and I only want it to be better.
    Tamir Levy

  • One script for multiple loaded movie clips

    Hello,
    I am sure that this has been asked or answered before, but I could not locate the correct response.
    Problem:
    There are 20 movie clips loaded onto the stage through ActionScript. I have 20 different onPress scripts to start the drag for each (which also contain custom variables).
    I have one single onRelease script which is to be used for each, but I do not wish to write 20 custom handler scripts.
    Can I somehow use one single generic script for the onRelease, so that no matter which clip was released it goes through this one script?
    Thanks
    D

    like this...
    activate
    set the_folder to choose folder with prompt "Select the folder you want to add folders to..."
    tell application "Finder"
    set the_name to "Name"
    set the_count to 3
    repeat with this_num from 1 to the_count
    set new_num to this_num as string
    if (count new_num) is 1 then set new_num to "0" & new_num
    make new folder at the_folder with properties {name:the_name & " " & new_num}
    end repeat
    end tell

  • Getting error while running script for online backup

    Hi,
    I am running a script for online backup but ended up with the error below.
    *ERROR* [Backup Worker Thread] com.day.crx.core.backup.Backup Failed to create temporary directory
    Please help out in resolving this.
    Thanks in Advnace.
    Maheswar

    Hi mahesh,
    If you are using the backup feature from the CRX console, I mean http://localhost:4502/crx/config/backup.jsp, I can say that we also had some problems with this functionality.
    First of all, check the permissions, because if you look at the source code there is a line which creates a File object using the path you specified for the repository backup.
    File targetDir = new File(req.getParameter("targetDir", listDir.getParentFile().getAbsolutePath()));
    You need to make sure that proper read/write access has been granted for this path.
    Another possibility is that a hotfix may already have been prepared if you are using CQ5.4. Please refer to the following link:
    http://dev.day.com/content/kb/home/Crx/CrxSystemAdministration/CRXOnlineBackup.html
    and also to this one:
    http://dev.day.com/content/docs/en/crx/current/release_notes/overview.html which mentions hotfix #34797, which was applied to the backup.jsp file.
    Regards,
    kasq

  • Please help me resolve the Lync server 2013 deployment error: "An error occurred while applying SQL script for the feature BackendStore."

    I am getting an error in "Step 2 - Setup or Remove Lync Server Components" of "Install or Update Lync Server System" step.
    "An error occured while applying SQL script for the feature BackendStore. For details, see the log file...."
    Additionally, all previous steps such as: Prepare Active Directory, Prepare first Standard Edition server, Install Administrative Tools, Create and publish topology are done without any errors. The user that I used to setup the Lync server is member of:
    Administrators
    CSAdministrator
    Domain Admins
    Domain Users
    Enterprise Admins
    Group Policy Creator Owners
    RTCComponentUniversalServices
    RTCHSUniversalServices
    RTCUniversalConfigReplicator
    RTCUniversalServerAdmins
    Schema Admins
    I have tried to re-install everything and set it up again many times, but the same error still occurs. Please see the log below and give me any ideas/solutions to tackle this problem.
    ****Creating DbSetupInstance for 'Microsoft.Rtc.Common.Data.BlobStore'****
    Initializing DbSetupBase
    Parsing parameters...
    Found Parameter: SqlServer Value lync.lctbu.com\rtc.
    Found Parameter: SqlFilePath Value C:\Program Files\Common Files\Microsoft Lync Server 2013\DbSetup.
    Found Parameter: Publisheracct Value LCTBU\RTCHSUniversalServices;RTC Server Local Group;RTC Local Administrators;LCTBU\RTCUniversalServerAdmins.
    Found Parameter: Replicatoracct Value LCTBU\RTCHSUniversalServices;RTC Server Local Group.
    Found Parameter: Consumeracct Value LCTBU\RTCHSUniversalServices;RTC Server Local Group;RTC Local Read-only Administrators;LCTBU\RTCUniversalReadOnlyAdmins.
    Found Parameter: DbPath Value D:\CsData\BackendStore\rtc\DbPath.
    Found Parameter: LogPath Value D:\CsData\BackendStore\rtc\LogPath.
    Found Parameter: Role Value master.
    Trying to connect to Sql Server lync.lctbu.com\rtc. using windows authentication...
    Sql version: Major: 11, Minor: 0, Build 2100.
    Sql version is acceptable.
    Validating parameters...
    DbName rtcxds validated.
    SqlFilePath C:\Program Files\Common Files\Microsoft Lync Server 2013\DbSetup validated.
    DbFileBase rtcxds validated.
    DbPath D:\CsData\BackendStore\rtc\DbPath validated.
    Effective database Path: \\lync.lctbu.com\D$\CsData\BackendStore\rtc\DbPath.
    LogPath D:\CsData\BackendStore\rtc\LogPath validated.
    Effective Log Path: \\lync.lctbu.com\D$\CsData\BackendStore\rtc\LogPath.
    Checking state for database rtcxds.
    Checking state for database rtcxds.
    State of database rtcxds is detached.
    Attaching database rtcxds from Data Path \\lync.lctbu.com\D$\CsData\BackendStore\rtc\DbPath, Log Path \\lync.lctbu.com\D$\CsData\BackendStore\rtc\LogPath.
    The operation failed because of missing file '\\lync.lctbu.com\D$\CsData\BackendStore\rtc\DbPath\rtcxds.mdf'
    Attaching database failed because one of the files not found. The database will be created.
    State of database rtcxds is DbState_DoesNotExist.
    Creating database rtcxds from scratch. Data File Path = D:\CsData\BackendStore\rtc\DbPath, Log File Path= D:\CsData\BackendStore\rtc\LogPath.
    Clean installing database rtcxds.
    Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
    ****Creating DbSetupInstance for 'Microsoft.Rtc.Common.Data.RtcSharedDatabase'****
    Initializing DbSetupBase
    Parsing parameters...
    Found Parameter: SqlServer Value lync.lctbu.com\rtc.
    Found Parameter: SqlFilePath Value C:\Program Files\Common Files\Microsoft Lync Server 2013\DbSetup.
    Found Parameter: Serveracct Value LCTBU\RTCHSUniversalServices;RTC Server Local Group.
    Found Parameter: DbPath Value D:\CsData\BackendStore\rtc\DbPath.
    Found Parameter: LogPath Value D:\CsData\BackendStore\rtc\LogPath.
    Trying to connect to Sql Server lync.lctbu.com\rtc. using windows authentication...
    Sql version: Major: 11, Minor: 0, Build 2100.
    Sql version is acceptable.
    Validating parameters...
    DbName rtcshared validated.
    SqlFilePath C:\Program Files\Common Files\Microsoft Lync Server 2013\DbSetup validated.
    DbFileBase rtcshared validated.
    DbPath D:\CsData\BackendStore\rtc\DbPath validated.
    Effective database Path: \\lync.lctbu.com\D$\CsData\BackendStore\rtc\DbPath.
    LogPath D:\CsData\BackendStore\rtc\LogPath validated.
    Effective Log Path: \\lync.lctbu.com\D$\CsData\BackendStore\rtc\LogPath.
    Checking state for database rtcshared.
    Reading database version for database rtcshared.
    Database version for database rtcshared - Schema Version5, Sproc Version 0, Update Version 1.
    Thanks and Regards,
    Thanh Le

    Thanks Lạc Phạm
    I had a similar issue. I ended up uninstalling and reinstalling, but had the same issue; then I changed the drive, but still the same issue. It turned out to be an I/O issue. After adjusting my I/O it fixed our issue and the installation went on without any problem.
    If anyone is using KVM, here are the details:
    We just set the option cache='writeback'
    following this article http://www.ducea.com/2011/07/06/howto-improve-io-performance-for-kvm-guests/ and http://itscblog.tamu.edu/improve-disk-io-performance-in-kvm/ and this fixed my issue, thanks.

  • An error occurred while applying SQL script for the feature BackendStore.

    Hello,
    I am running my AD in Windows Azure VMs. I created a new A3 VM (4 cores, 7 GB memory) with Windows Server 2012 R2, added port 1433 for MSSQL, made it a member of the domain, and planned to install the first Lync Server 2013 on it.
    In "Setup or Remove Lync Server Components" of "Install or Update Lync Server System", I got the red text "An error occurred while applying SQL script for the feature BackendStore."
    I have not enabled the monitoring and archiving server in Topology Builder. I added "Network Service" and assigned "Full Control" in the Security Permissions of "C:\CsData" and "C:\LyncShare".
    I executed the SQL Setup Wizard and upgraded every instance to 2012.
    Please guide.
    Thanks, Divyaprakash Koli

    Please check that you have enough disk space on the disk where the folders are.
    Check the View Log for detailed log information.
    The following link is a similar thread you can refer to:
    http://social.technet.microsoft.com/Forums/lync/en-US/a3cb9ab0-7451-4df5-af96-3d2784d1b075/an-error-occurred-while-applying-sql-script-for-the-feature-backendstore-for-details-see-the-log?forum=lyncdeploy
    Lisa Zheng
    TechNet Community Support

  • I am trying to update my iTunes to 10.5.1 so that I can upgrade my 3GS phone but am getting the following error message when trying to install iTunes: Install step failed: Run pre upgrade script for apple mobile device support. Contact software manufacturer for assistance

    I am trying to update my iTunes to 10.5.1 so that I can upgrade my 3GS phone, but am getting the following error message when trying to install iTunes: "Install step failed: Run pre upgrade script for apple mobile device support. Contact software manufacturer for assistance." I am on a MacBook Pro running Mac OS X 10.5.8. Has anyone seen this before, and how can I get it resolved?
    Thanks for your help in advance....

    Did you ever figure out the problem? "Contact Software Manufacturer"?? That sounds ominous... I've got the same issue and I'm pretty durn aggravated right about now....
    Thanks!
