CSS keepalive script for LDAP (Novell)

I need an advanced LDAP keepalive script for the Cisco CSS11000. The problem is that the built-in script is too rudimentary: all it does is check the TCP 389 connection to the servers and look for the expected bind response code "0A, 01, 00". What happens for us is that when the LDAP server (Novell) is running a DS repair, it is too busy to handle a real LDAP call but still answers on TCP 389, so the CSS thinks it is still alive.
We want a smarter script that behaves like a real LDAP client and sends a real LDAP request instead of a simple TCP 389 probe. Does anyone have any ideas?
Thanks in advance,
Dave

With the CSS script language you can send binary data and receive a binary response.
If you know what port to send the request to, what the binary data are, and what the expected binary response is, we can easily do a script for you.
The easiest way to get the binary info is to make an LDAP query and capture it with a sniffer.
Also capture the response.
Make sure to do a query that will always result in the same response.
Once you have this data, you can try to update the ldap script yourself [hint: use the raw keyword when sending the data].
Or post the info here and we will try to make a script for you.
Gilles.
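
For illustration, here is a minimal sketch of the two lines that would change once you have the capture (everything else can stay as in the stock ap-kal-ldap script quoted further down in this thread). The hex shown is the stock anonymous bind request from that script; you would substitute the hex of your own captured request and a byte sequence that always appears in its good response:
! Send the captured LDAP request (the full TCP payload) as raw hex
socket send ${SOCKET} "300c020102600702010204008000" raw
! Consider the server alive only if this hex sequence appears in the reply
socket waitfor ${SOCKET} "0a0100" 2000 raw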

Similar Messages

  • CSS keepalive script for LDAP

    I am trying to write a script for detecting the status of an LDAP server on a CSS. I figured out that I should capture the binary send and receive data of an LDAP query, and I captured the request and response packets. But I have no idea which part of the binary data (and how) I should put into the stock LDAP keepalive script. Could someone point me in the right direction?
    Thanks a lot.
    Daniel

    Just look at the existing ldap script
    CSS11503-2# sho script ap-kal-ldap
    !no echo
    ! Filename: ap-kal-ldap
    ! Parameters: HostName
    ! Description:    "Lightweight Directory Access Protocol v3"
    !   This script will connect to an LDAP server and attempt to
    !   "bind request" to the server.  Once the server gives a
    !   positive response we will disconnect (RFC-2251).
    ! Bind Response Code we will search for is: 0x0a 0x01 0x00
    ! Failure Upon:
    !   1. Not establishing a connection with the host.
    !       2. Failure to receive the above response code.
    ! Make sure the user has a qualified number of arguments
    if ${ARGS}[#] "NEQ" "1"
            echo "Usage: ap-kal-ldap \'Hostname\'"
            exit script 1
    endbranch
    ! Defines:
    set HostName "${ARGS}[1]"
    set EXIT_MSG "Connection Failed"
    ! Connect to the remote host (use default timeout)
    socket connect host ${HostName} port 389 tcp 2000
    set EXIT_MSG "Send: Failure"
    ! Send a Bind Request to the remote host.  This is simply a standard
    ! "capture" of a bind request in hex.  This should work for all standard
    ! version 3 LDAP servers.
    socket send ${SOCKET} "300c020102600702010204008000" raw
    set EXIT_MSG "Recieve: Failure"
    ! Expect to receive a standard response from the host.  This should
    ! be equal to a SUCCESS response code:
    socket waitfor ${SOCKET} "0a0100" 2000 raw
    set EXIT_MSG "Send: Failure"
    ! Send an exit "Unbind Request" to the remote host so that they
    ! are not left hanging.
    socket send ${SOCKET} "30050201034200" raw
    no set EXIT_MSG
    socket disconnect ${SOCKET}
    exit script 0
    CSS11503-2#
    The socket send line is the command that sends the binary data (this includes everything inside the TCP payload - after the TCP header).
    The socket waitfor line is the command that inspects the received data and considers the response valid if the sequence is seen somewhere in the TCP payload of the response.
    Gilles.
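    Once the script is modified, you attach it to the service the same way as the other scripted keepalives in this thread; the service name and address below are invented for the example:
    service ldap-server-1
    ip address 10.1.1.5
    keepalive type script ap-kal-ldap "10.1.1.5"
    keepalive frequency 15
    active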

  • CSS Keepalive Script

    Hi,
    I am writing a keepalive script which puts the service in active mode or in suspended mode, depending on the content of a web page requested by the script. It works very well from active to suspended, but once in suspended mode the keepalive script is not run anymore, and therefore it cannot detect the page that should put the service back online! Is there a way for the keepalive to continue, even if a service is suspended?
    Thank you
    Yves Haemmerli

    Yes, I traced what the script does and it is clear to me that the keepalive stops if the service is put in suspended mode. I agree with you: if the service is down, the keepalive continues every retry period.
    But I solved my problem in the following way: I created a second service, which uses another script (actually a subset of the first script) that also monitors the test pages on the server. This second script always exits with return code 0 (successful) and therefore never stops running. As soon as the second script recognizes the character string "PORTALUP" in the test page, it sets the first service to active mode, which restarts the keepalive scheduling. It works perfectly, and it allows putting a server into maintenance mode (suspend) from the server itself, without stopping existing user flows.
    As this is a workaround, it would be better if the CSS would continue running the keepalive in suspended mode...
    Yves Haemmerli
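    As a rough sketch of that workaround at the configuration level, the two services could look like the following. Names, addresses and script arguments are invented for the example; the watchdog script itself (the subset that always exits 0 and re-activates the first service when it finds PORTALUP in the test page) is not shown here:
    service portal-1
    ip address 10.1.1.20
    keepalive type script ap-kal-portal "10.1.1.20 /keepalive/test.htm"
    keepalive frequency 15
    active
    service portal-1-watchdog
    ip address 10.1.1.20
    keepalive type script ap-kal-portal-watch "10.1.1.20 /keepalive/test.htm"
    keepalive frequency 15
    active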

  • Looking for the adjoin.sh script for LDAP

    Anyone have a link to download?
    Thanks,
    Brad

    Chris, have you googled for XML and weather conditions? I
    think you could also find more detailed information through weather
    dot com by looking for "developers", if I remember
    correctly. But you can also take a look at weather-related websites;
    they usually give you fairly fundamental, self-explanatory
    instructions on how to set it up on your website.
    quote:
    Originally posted by:
    Newsgroup User
    First and foremost this probably isn't the right forum to
    post such a topic
    in but everyone in this forum is so helpful to me that it
    just made more
    sense. Anyhow I would like to get an XML feed from
    weather.com to put on
    this site I am building so I can allow users to view the
    weather conditions
    for wherever they choose. I am certainly not an XML
    programmer and even
    though I have weather.com's software development kit for
    their XML data
    feeds on the weather I think it would take me forever to
    create something
    right? Anyhow I use PHP on my server and I'm wondering if
    anyone knows of
    any good pre-made scripts to allow me to easily add the XML
    weather to my
    site...
    Thanks!
    Chris Jumonville

  • Cisco CSS 11503 ntp keepalive script

    We have set up a new Owner/Service/Group for load-balancing NTP traffic to 2 NTP servers. It all appears to work fine apart from failure of one of the servers' NTP service. I've currently set up a simple ping keepalive, which works fine if one of the servers fails, but this keepalive won't detect if the server's NTP service fails. I'm running 8.20 code. My question is: has anybody created a working keepalive script for NTP traffic for the CSS?

    Hi Daniel,
    I had looked at that script, but it doesn't suit my needs. The script uses TCP port 37 for its keepalives, whereas our NTP servers use UDP port 123.
    Regards
    Noel
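    A possible starting point, following the same sniff-and-replay idea used by the other scripts in this thread, is a script that sends a minimal SNTP client request (48 bytes: 0x1b followed by 47 zero bytes) over UDP 123 and waits for part of the reply. This is only a sketch: the UDP socket syntax mirrors the TCP examples above, and the "1c" expected below (LI=0, version 3, server mode) is an assumption, so verify both against a capture from your own NTP servers:
    !no echo
    ! Filename: ap-kal-ntp  (sketch only)
    ! Parameters: HostName
    if ${ARGS}[#] "NEQ" "1"
            echo "Usage: ap-kal-ntp \'Hostname\'"
            exit script 1
    endbranch
    set HostName "${ARGS}[1]"
    set EXIT_MSG "Connection Failed"
    socket connect host ${HostName} port 123 udp 2000
    set EXIT_MSG "Send: Failure"
    ! Minimal SNTPv3 client request: 0x1b then 47 zero bytes (96 hex digits total)
    socket send ${SOCKET} "1b0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000" raw
    set EXIT_MSG "Receive: Failure"
    ! First byte of a typical server reply - replace with bytes from your own capture
    socket waitfor ${SOCKET} "1c" 2000 raw
    no set EXIT_MSG
    socket disconnect ${SOCKET}
    exit script 0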

  • Looking for ACE Probe TCL script specific for LDAPS

    Hello Everyone,
    I have searched the forum, and I am having difficulty finding an example of how to modify the LDAP TCL probe from port 389 to the secure LDAP port 636.
    Could someone kindly point me or provide me the modified TCL script if you happen to have it.
    During my search I also found a config that someone had provided, which contained the following probe:
    probe tcp LDAPS_Probe
      port 636
    probe tcp LDAP_Probe
      port 389
    I was trying to figure out whether this is a modified TCL script for LDAP or a modified TCP TCL script specific to port 636.
    This is how I applied the script for LDAP port 389.
    script file 1 LDAP_PROBE
    probe scripted LDAP_PROBE_389
      interval 5
      passdetect interval 30
      receive 5
      script LDAP_PROBE
    serverfarm host SF-LDAP-389
      description SF LDAP Port 389
      predictor leastconns
      probe LDAP_PROBE_389
      rserver LDAP-RS1-389
        inservice
    I will be more than glad to provide you any additional information that you need.
    As always thanks for your input.
    Raman Azizian
    SAIC/NISN Network services

    Normally you would engage a TCL developer or Cisco Advanced Services to develop a custom script for anything other than what Cisco provides in canned scripts. If you are comfortable with TCL you can do it yourself. Here is an example of the LDAP script modified to initiate the connection via SSL. The default port is 389; when you implement it you would specify 636.
    #!name = LDAP_PROBE
    # Description:
    #    LDAP_PROBE opens a TCP connection to an LDAP server, sends a bind request, and
    #    determines whether the bind request succeeds.  LDAP_PROBE then closes the
    #    connection with a TCP RST.
    #    If a port is specified in the "probe scripted" configuration, the script probes
    #     each suspect on that port. If no port is specified, the default LDAP port 389
    #     is used.
    # Success:
    #   The script succeeds if the server returns a bind response indicating success
    #    (status code 0x0a0100) to the bind request.
    #   The script closes the TCP connection with a RST following a successful attempt.
    # Failure:
    #   The script fails due to timeout if the response is not returned.  This
    #    includes a failure to receive ARP resolution, a failure to create a TCP connection
    #    to the port, or a failure to return a response to the LDAP bind request.
    #   The script also fails if the server bind response does not indicate success.
    #    This specific error returns the 30002 error code.
    #   The script closes any attempted TCP connection, successful or not, with a RST.
    #  PLEASE NOTE:  This script expects the server LDAP bind response to specify length
    #   in ASN.1 short definite form.  Responses using other length forms (e.g., long
    #   definite length form) will require script modification to achieve success.
    # SCRIPT version: 1.0       April 1, 2008
    # Parameters:
    #   [DEBUG]
    #      username - user login name
    #      password - password
    #      DEBUG        - optional key word 'DEBUG'. default is off
    #         Do not enable this flag while multiple probe suspects are configured for this
    #         script.
    # Example config :
    #   probe scripted USE_LDAP_PROBE
    #         script LDAP_PROBE
    #   Values configured in the "probe scripted" configuration populate the
    #   scriptprobe_env array.  These may be accessed or manipulated if desired.
    # Documentation:
    #    A detailed discussion of the use of scripts on the ACE is included in
    #       "Using Toolkit Command Language (TCL) Scripts with the ACE"
    #    in the "Load-Balancing Configuration Guide" section of the ACE documentation set.
    # Copyright (c) 2005-2008 by Cisco Systems, Inc.
    # debug procedure
    # set the EXIT_MSG environment variable to help debug
    # also print the debug message when debug flag is on
    proc ace_debug { msg } {
        global debug ip port EXIT_MSG
        set EXIT_MSG $msg
        if { [ info exists ip ] && [ info exists port ] } {
            set EXIT_MSG "[ info script ]:$ip:$port: $EXIT_MSG "
        }
        if { [ info exists debug ] && $debug } {
            puts $EXIT_MSG
        }
    }
    # main
    # parse cmd line args and initialize variables
    ## set debug value
    set debug 0
    if { [ regsub -nocase "DEBUG" $argv "" argv] } {
        set debug 1
    }
    ace_debug "initializing variable"
    set EXIT_MSG "Error config:  script LDAP_PROBE \[DEBUG\]"
    set ip $scriptprobe_env(realIP)
    set port $scriptprobe_env(realPort)
    # if port is zero then use the well-known ldap port 389
    if { $port == 0 } {
        set port 389
    }
    # PROBE START
    # open connection
    ace_debug "opening socket"
    set sock [  socket -sslversion all -sslcipher RSA_WITH_RC4_128_MD5 $ip $port ]
    fconfigure $sock -buffering line -translation binary
    # send a standard anonymous bind request
    ace_debug "sending ldap bind request"
    puts -nonewline $sock [ binary format "H*" 300c020101600702010304008000 ]
    flush $sock
    #  read string back from server
    ace_debug "receiving ldap bind result"
    set line [read $sock 14]
    binary scan $line H* res
    binary scan $line @7H6 code
    ace_debug "received $res with code $code"
    #  close connection
    ace_debug "closing socket"
    close $sock
    #  make probe fail by exit with 30002 if ldap reply code != success code  0x0a0100
    if {  $code != "0a0100" } {
        ace_debug " probe failed : expect response code \'0a0100\' but received \'$code\'"
        exit 30002
    }
    ## make probe success by exit with 30001
    ace_debug "probe success"
    exit 30001
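    To apply the SSL-modified script to the secure port, the probe would then name port 636 explicitly, mirroring the port-389 configuration quoted above (the probe name here is just an example):
    script file 1 LDAP_PROBE
    probe scripted LDAPS_PROBE_636
      port 636
      interval 5
      passdetect interval 30
      script LDAP_PROBE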

  • CSS Script for checking RADIUS Service

    Hi,
    We are using CSS 11501 boxes for load-sharing RADIUS (NAC) requests between different ACS Servers.
    How can I configure a keepalive method for checking the RADIUS service on the ACS Servers ?
    If this needs to be a script then Can anyone provide some hints\tips ?
    Thanks,
    Naman

    This needs to be a script.
    The best way would be to sniff a request/response from a known user [or fake user], then extract the udp header + payload in hex format, then create a CSS script to send the hex formatted query and to verify that the hex formatted response matches the server response.
    I believe the ap-kal-dns script uses a similar approach so you can look at it to get an idea of what you have to do.
    Gilles.
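    As an illustration of that approach, a sketch of such a script could look like the one below. The request and response hex strings are placeholders that must come from your own sniffer capture (replaying a fixed Access-Request will normally produce an identical response, which is what makes the comparison work), and the port is assumed to be 1812 - older servers may listen on 1645 instead:
    !no echo
    ! Filename: ap-kal-radius  (sketch only)
    ! Parameters: HostName
    if ${ARGS}[#] "NEQ" "1"
            echo "Usage: ap-kal-radius \'Hostname\'"
            exit script 1
    endbranch
    set HostName "${ARGS}[1]"
    set EXIT_MSG "Connection Failed"
    socket connect host ${HostName} port 1812 udp 2000
    set EXIT_MSG "Send: Failure"
    ! Replay the captured Access-Request (full UDP payload in hex, from your capture)
    socket send ${SOCKET} "REPLACE-WITH-CAPTURED-REQUEST-HEX" raw
    set EXIT_MSG "Receive: Failure"
    ! Match a hex sequence that always appears in the Access-Accept/Reject for that user
    socket waitfor ${SOCKET} "REPLACE-WITH-CAPTURED-RESPONSE-HEX" 2000 raw
    no set EXIT_MSG
    socket disconnect ${SOCKET}
    exit script 0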

  • Cannot find the Novell Connection Manager for LDAP

    Novell Connection Manager for Java/LDAP
    Cannot find the Novell Connection Manager for LDAP in download
    I am trying to connect through a Java client to the Apache Directory Studio LDAP server. I have downloaded the classes from the download page (see link below), but I can't see the NovellConnectionManager class anywhere in this download when I use the Open Freely application to view the jar details.
    LDAP Classes for Java
    Environment: Windows 7

    Hi MentalSuplex, and a warm welcome to the forums!
    Don't know about Airport cards for it, but other options...
    http://eshop.macsales.com/item/Sonnet%20Technology/N80211PCI/
    Maybe this one, ask them...
    http://eshop.macsales.com/item/Newer%20Technology/MXP802NPCI/
    I use these...
    http://eshop.macsales.com/item/Newer%20Technology/MXP2802NU2C/
    http://eshop.macsales.com/item/Edimax/EW7711UMN/

  • CSS Keepalive for two applications on same IIS server

    We have 2 IIS servers running two MOSS 2007 applications. The app owners want keepalives set up for each application, where each application uses the same IP and just a different hostname. So I need an example of doing keepalives using a hostname / URI (index.html for example) rather than the IP address.
    I'm inheriting a system and don't want to break what's working so any examples would be greatly appreciated.
    Thanks
    Jim

    Jim,
    This is how I utilize 1 server with multiple web sites. The web server is configured to respond to HTTP host headers. For this to work, DNS needs to be correct. Each web site has a keepalive file for the script to check.
    service connections-1
    ip address 1.1.1.1
    keepalive port 80
    redundant-index 8001
    keepalive type script ap-kal-httptag "connections1.company.com /keepalive/lb.htm connections1"
    keepalive frequency 15
    active
    service datacentacc-1
    ip address 1.1.1.1
    keepalive port 80
    redundant-index 8030
    keepalive type script ap-kal-httptag "datacenteraccess1.company.com /keepalive/lb.htm datacenteraccess1"
    keepalive frequency 15
    active
    DNS entries for the individual web sites will point to the servers. I.E. connections1.company.com = 1.1.1.1
    datacenteraccess1.company.com = 1.1.1.1
    The Content rules will be the following.
    content connections
    vip address 2.2.2.2
    advanced-balance sticky-srcip
    redundant-index 8000
    add service connections-1
    protocol tcp
    port 80
    url "//connections.company.com/*"
    active
    content datacenteraccess
    vip address 2.2.2.2
    advanced-balance sticky-srcip
    redundant-index 8025
    add service datacentacc-1
    port 80
    protocol tcp
    url "//datacenteraccess.company.com/*"
    active
    Rich

  • Help w/CSS Keepalive for Exchange Outlook Web

    Hi,
    We're CSS newbies here and need some suggestions on the best keepalive methods for Exchange Outlook Web Access. We currently have a couple of Exchange web servers behind a CSS 11503. We need to monitor the health of the services beyond ports 80 and 443, perhaps at the HTTPS page level - "https://vip address/exchange/logon.asp"
    Thanks in advance for the help,
    Howard

    Howard,
    there is currently no way to do an HTTPS keepalive.
    The reason is that the CSS can't do SSL.
    Only the SSL module can.
    So, you will have to limit the keepalive to TCP.
    For port 80, you can create an HTTP keepalive.
    Use the command 'keepalive type [tcp|http]' inside the service configuration to select your keepalive.
    If choosing http, you need to provide a URL with the command 'keepalive uri "url"'.
    Regards,
    Gilles.
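    For the port-80 side, the service configuration would then look roughly like this (the address is invented and the path comes from the question above; if the logon page only answers over HTTPS or redirects, point the uri at a plain test page instead):
    service owa-server-1
    ip address 10.1.1.30
    keepalive type http
    keepalive port 80
    keepalive uri "/exchange/logon.asp"
    keepalive frequency 15
    active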

  • CSS 11503 keepalive scripts

    Is it possible to configure a keepalive script to detect text returned from a ColdFusion web page - e.g. "server available"? Not the header, but the actual content of the page.

    Gilles:
    I know this is two years old, but I need some help with the issue in this post. I need a script keepalive to verify the content of a page. I tried what you mentioned here, but my service won't come up. My setup is this:
    SERVICE
    service serbancasawebback
    type ssl-accel-backend
    add ssl-proxy-list bhdssl
    keepalive type script ap-kal-httptagban
    protocol tcp
    port 80
    ip address 192.168.249.23
    active
    The script I used is as follows:
    !no echo
    ! Filename: ap-kal-httptagban
    ! Parameters: HostName WebPage HostTag
    ! Description:
    ! This script will connect to the remote host and do an HTTP
    ! GET method upon the web page that the user has asked for.
    ! This script also adds a host tag to the GET request.
    ! Failure Upon:
    ! 1. Not establishing a connection with the host.
    ! 2. Not receiving an HTTP status "200 OK"
    if ${ARGS}[#] "NEQ" "3"
    echo "Usage: ap-kal-httptagban \'192.168.249.23 /bancasa/start.swe?SWECmd=Logoff www2.bhd.com.do\'"
    exit script 1
    endbranch
    ! Defines:
    set HostName "${ARGS}[1]"
    set WebPage "${ARGS}[2]"
    set HostTag "${ARGS}[3]"
    ! Connect to the remote Host
    set EXIT_MSG "Connection Failure"
    socket connect host ${HostName} port 80 tcp
    ! Send the GET request for the web page
    set EXIT_MSG "Send: Failed"
    socket send ${SOCKET} "GET ${WebPage} HTTP/1.0\nHost: ${HostTag}\n\n"
    ! Wait for a good status code
    set EXIT_MSG "Waitfor: Failed"
    socket waitfor ${SOCKET} "SWE Internal Error" 2000
    no set EXIT_MSG
    socket disconnect ${SOCKET}
    exit script 0
    Notice this is an SSL back-end service. The web page the user should request is:
    https://www2.bhd.com.do/bancasa/start.swe?SWECmd=Logoff
    If it returns the page with the error, then it is down.
    I'm not sure I have all the arguments OK or in the correct format. Also, I'm a little confused regarding what the HostTag should be.
    Can you please verify what I have wrong ?
    Thanks

  • Best / most popular software or scripts for adding search function to website?

    I'm trying to find a good piece of software or script for implementing a site search function into our website.  I am relatively knowledgeable in Dreamweaver and can write CSS and XHTML at the fairly intermediate to advanced level, as well as work with JavaScript and js files, but I don't really know much ASP or "by hand" Java coding.  There are so many scripts and software packages out there for adding a site search that it's hard to sort through and narrow down.  I was hoping to find reviews of the popular ones or "top 10 lists" of some sort that would help me pinpoint a good one, but can't find anything like that.  These are the primary needs of the website:
    --Has under 50 searchable pages that won't change much and probably won't exceed 50. There are product part numbers and descriptions for some 250-300 part numbers, spread across only 24 of those pages.  The remaining pages are important but no part numbers-- About Us, News, Where to Buy, History, Featured Products, etc.  The product pages are very much like an online store but we don't sell directly on the site (only thru distributors/reps).
    --We are trying to keep the price under about $50, or use a free solution
    --The pages are all static XHTML+CSS pages, but our server can run ASP (we have another website for one of our other product line divisions, on the same server, with many more products beyond 1000, which was programmed completely in ASP by an outside company about 4 years ago).  We self-host both sites.
    --Our server can't run PHP
    --The search capabilities need only be rather basic-- a keyword search with a results page that uses the same design template as the rest of the site.  It would be nice, but not mandatory, to have a search filter and/or a drop down menu to enable selectively searching only certain parts of the site, or only product/part number search vs. general search, etc. (but again, not mandatory).
    --I do have a sitemap page already on it, if that matters or helps
    Some of the ones I've found so far that looked the most promising include:  Zoom Search Engine (http://www.wrensoft.com/zoom/), Site Search Pro (http://www.site-search-pro.com/), and FX Site Search which is a DW Extension I found in the exchange (http://www.felixone.it/extensions/prod/mxssen.asp)-- (that one looks possibly technically challenging or requiring more ASP skill, though)
    Forgive me if there is a better area to post this, if so let me know.

    For a static site, your options are:
    Google ~  http://www.google.com/sitesearch/
    Freefind ~ http://www.freefind.com/
    Zoom from Wrensoft ~ http://www.wrensoft.com/zoom/
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists 
    http://alt-web.com/

  • ALUI Gateway Not Returning Scripts for Subset of Users

    We have a problem where the ALUI gateway is not returning some .NET scripts for a subset of users. We have the ALUI 6.5 portal and are using the .NET accelerator 3.1.
    The situation is that this subset of users request one of our portal pages via https, which then reaches through our firewall to our remote server which is running the .NET portlet. The .NET page is served and returned to the users correctly and quickly, but this particular subset of users do not see the result rendered in their browsers for about 3 minutes. A view html source in the browser, as well as tools like Fiddler, show the page is indeed in the browser, but it is stuck trying to request some .NET scripts, and only displays the page when those requests timeout.
    The .NET scripts that are problems are both WebResource.axd and ScriptResource.axd, which in some cases are in our .NET portlets because of the .NET framework itself, but in other cases they are there only because of the ALUI portal itself, when it munges the .NET portlet to handle multiple server forms and validators and such. These .axd scripts are gatewayed so that the client browser requests them through the ALUI gateway, which in turn requests them through our firewall to our remote server -- which always serves these scripts correctly and quickly according to the IIS logs. The problem seems to lie in the ALUI gateway, as it is receiving these scripts correctly and quickly, but it is not returning them to this subset of users. Instead the ALUI gateway seems to be processing for about 3 minutes, and eventually returns an html error page, which of course the client never sees since it is expecting javascript, but we can capture the error page via Fiddler and its just telling us there was a timeout -- the client browser just notes that there is a javascript error.
    The really bizarre part is that this only happens for a subset of users, which amounts to about 20% of our users. There are 2 things that delineate these users that we have found so far. First, these users have email addresses that are 27 - 30 characters long, and the email address is our login id. Note that both shorter and longer email addresses are OK, so there is not some limit to email addresses like this might sound like at first. Secondly, these users have to be in a particular branch of our ldap store, which means they are replicated across to the portal in a particular group. We can move these "bad" users into another branch of our ldap store and once they are replicated to the portal then they work fine, and then if we move them back they return to not working. We cannot find any other difference in our ldap branches or in the corresponding ALUI groups, plus its only the ones in that particular branch with the email lengths in that very specific range.
    The gatewayed requests for these scripts vary by user since the PTARGS in the gatewayed request include the integer userid, but that does not seem to matter because we can have a "good" user successfully request the script with a "bad" user's id, and we can have a "bad" user fail to successfully request the script with a "good" user's id. That seems to point to maybe the authentication cookie being the differentiating factor that determines whether or not a gatewayed request for one of these script files will succeed or fail. So far we have only seen the problem with these particular .net axd scripts, but that may simply be because we don't have many, if any, other scripts or resources that need to be gatewayed since we usually put resources on our imageserver -- these being different because .NET and/or the ALUI portal puts these references in there for us whether we like it or not. Long-term we can re-architect our .NET portlets to not get have these axd scripts, although as mentioned earlier, we also see the ALUI portal put these axd scripts in our portlets as part of their munging process -- so that is not in our control completely. We do need to test if this subset of users can successfully request other gatewayed resources or not -- this is actually the first time I thought of that test case, so all I can say right now is its axd scripts that we know are problems, but it may or may not be a bigger problem.
    One last comment, as we appear to have found a work-around, but it does not make sense at all, and its not our preferred solution, so we still very much believe there is a problem elsewhere -- most likely in the ALUI gateway, but possibly somehow related to authentication that we do not understand. Our work-around that so far seems to work is to make our remote server be accessed via https instead of http -- which matches the way the client browsers call our portal (https). Again that first doesn't make sense, since this is only a problem for a small subset of users -- obviously calling our remote server via http works successfully for all other users, so its not just is a simple case that we must match protocols or it won't work. We also use http successfully for our calls to the remote server for portlets that are Java, although its possible that they don't have any gatewayed resources. But we also would just prefer to not use https for our internal calls in our own network as there is no need for the extra overhead -- and by the way our dev and qa environments do use http even for these .NET portlets and do not have the same problem. What's different in our production environment? The only things that should be different are that we have multiple portal servers and multiple remote servers that are load balanced (not sure that's the right term for how the remote servers are combined) -- and of course we have a firewall between them that does not exist in dev or qa.
    So we would very much appreciate any thoughts on this, whether you've seen something like it before, or just have some additional insight into the gateway and/or authentication process that seems to be the issue.
    Thanks, Paul Wilson

    We've ran into this problem when using the Microsoft ReportViewer control. In our case, we found that the portal gateway malformed the urls containing webresource.axd, so the browser was unable to get the correct address to the files. Note that there are usually multiple links to the axd files, they return different resources depending on the query string they get.
    To solve the problem, we ended up with a bit of a hack solution, but it works well. We extracted the resources we needed from the ReportViewer control's assembly using Reflector, and then published them on the image server. The next piece was to override the Render method of the page that hosted the control. In our custom version of Render, we parsed the html of the page, and replaced the contents of the src= elements with pt:images// links. These processed just fine in the portal's transformer, and our resources started showing up.
    Our Render looks something like the following code sample. The "HACKReportViewerControlPortalImageGatewayFix" class has all of the code to do the parsing. In this case, it is specific to the report viewer, because it has some special considerations for parsing the urls. My bet is that your code will be quite custom as well. Therefore, I've not included this piece of code. The important piece below is the invocation of MyBase.Render, which tells the page to render all of its contents. Once that method is done, all of the HTML for the page is in the writer. The ModifyImageTags method then parses the html, doing the necessary replacements. Finally, the modified html is written to the page's writer, so it can be output following the normal .net processes. Also note that when parsing for urls to replace, don't do all of them, just look for the ones containing axd.
    (VB.NET)
    Protected Overrides Sub Render(ByVal writer As System.Web.UI.HtmlTextWriter)
              Dim fixer As New HACKReportViewerControlPortalImageGatewayFix
              MyBase.Render(fixer.GetWriter)
              writer.Write(fixer.ModifyImageTags())
    End Sub
    This works great for images. However, if you are dealing with javascripts, I'm not sure if this will work for you - as some .NET controls send different scripts depending on the browser. For example, in IE, you get more buttons on the toolbar for the ReportViewer, so you get more javascript too. When using FF, you get less buttons, and less script. We didn't have a problem with the scripts, so we haven't needed to solve this one.
    As for timing, this type of solution doesn't take much to put together. You are really just doing some string parsing and replacements. If you are a regex ninja, it's probably even easier. We had our solution working in a day or two.
    An added benefit to this solution is that you are putting less bytes through the portal's gateway, and sending that traffic to the image server instead.

  • Is it mandatory to run wcConfigure script for UCM integration????

    Hi,
    I'm looking to integrate an already running instance of UCM version 10.1.3.5.1 with WebCenter. I have tried the integration steps by following the documentation and was able to do it. But the wcConfigure script makes lots of configuration changes in our UCM instance, which are not acceptable.
    Is it necessary to run the wcConfigure script for UCM integration?
    Is there any other option to integrate UCM without running this script?
    Thanks,
    Varun

    Hey Varun,
    Each Contribution Folder in UCM has a set of metadata associated with it. You can change this metadata in the UCM interface by going to the particular folder and selecting Information (or Folder Information) from the actions drop down. This will display the metadata associated with that folder including security group and account. Select Update from the actions drop down on that page to change this info.
    Whenever an item (folder or content) is created under another folder it will automatically inherit the default metadata set for that folder. There is also an option to propagate the changes you make to the metadata to content directly underneath a particular folder. That option is also in the actions dropdown on the folder information screen.
    The other thing to check is that the user who is logged into Webcenter has privileges in UCM to the security group and account that you define for the folders and content. By default UCM is not going to be hooked up to the embedded WLS LDAP so you would need to create the user locally in UCM via the User Admin applet or (preferably) integrate UCM with the WLS LDAP. (See Chapter 7 in the Managing Security guide http://download.oracle.com/docs/cd/E10316_01/cs/cs_doc_10/documentation/admin/managing_security_10en.pdf for LDAP integration instructions)
    Hope that helps,
    Andy Weaver - Senior Software Consultant
    Fishbowl Solutions < http://www.fishbowlsolutions.com?WT.mc_id=L_Oracle_Consulting_amw_OTN_WC >

  • Script for Windows

    Hi Experts,
    We have a GoldenGate environment replicating data from one source to many target environments; both source and target environments are Oracle only. To monitor the processes and exceptions we use shell scripts in both environments.
    Now we have a new requirement to replicate data from Oracle to SQL Server, and we have installed GoldenGate on a Windows NT server for the target environment. On the source we use a shell script for monitoring the manager and extract processes, so that if the mgr or extract process is abended we immediately get an alert mail through the mailx functionality. Now we need the same approach for the Windows SQL Server target as well. Can anyone please guide us on how to achieve this monitoring requirement in a Windows environment, as on Unix/Solaris? Is there any option in Windows for writing scripts like shell scripts and sending alert mails? Kindly suggest.
    Thanks in Advance.
    AT

    I get the scheduler to run this batch file, dbcgen.bat
    rem Set Correct Starting Directory
    D:
    cd \gw8\admin\utility\dbcopy\win32
    cscript //NoLogo dbcgen.vbs > dbcrun.bat
    dbcrun.bat
    and the dbcgen.vbs does the rest
    ' Script to generate DBCRun.Bat file
    ' Result needs to be something like this
    ' D:\gw8\admin\UTILITY\DBCOPY\WIN32\dbcopy.exe /i 12-01-2010 /w n:\gw5po c:\gw5back
    Dim cDate
    cDBCPath = "D:\GW8\Admin\Utility\DBCopy\Win32\DBCopy.exe"
    cSrcPath = "N:\GW5PO"
    cTgtPath = "C:\GW5Back"
    dDate = Now() - 1
    'WScript.Echo dDate
    cDay = cStr(Day(dDate))
    if Len(cDay) = 1 then
    cDay = "0" & cDay
    end if
    'WScript.Echo cDay
    cMonth = cStr(Month(dDate))
    if Len(cMonth) = 1 then
    cMonth = "0" & cMonth
    end if
    'WScript.Echo cMonth
    cYear = cStr(Year(dDate))
    'WScript.Echo cYear
    cDate = cMonth & "-" & cDay & "-" & cYear
    'WScript.Echo cDate
    WSCript.Echo cDBCPath & " /i " & cDate & " /w " & cSrcPath & " " & cTgtPath
    set cDate = Nothing
    You can add/alter the various startup flags to suit yourself
    Cheers Dave
    Dave Parkes [NSCS]
    Occasionally resident at http://support-forums.novell.com/
