Web service gen'd wrong?

Using JDev 10.1.3.0.4.3673...
The goal of this is to pass an array of objects from a Java/Oracle web service to a .NET client. I'm posting here because it seems there may be a JDev issue with how a web service is created.
OK, I create a stateless session bean. I also create a bean defining a drivelayout object and generate its getters/setters. The session bean has a method that takes rows from a DB, puts the data into a drivelayout object, adds that object to an ArrayList, then repeats for the next row found. I then use toArray to take the data from the ArrayList holding the drivelayout objects and return the array as the method's result.
I then generate a web service for the session bean. No problems gen'ing the service.
If I gen the service as document/literal I get a serialization error when I test the service using the servlet gen'd in OC4J. This "should" work, but...
If I gen the service as document/wrapped I get correct output: an XML doc indicating an array of drivelayout elements, with the elements of drivelayout. Visual Studio 2003 can create a valid web reference to the web service and parse the data passed in correctly.
The problem is that, for interop, I've been under the impression that doc/literal is supposed to be the way to go.
One strange thing is that the WSDL gen'd by JDev does show doc/literal:
<binding name="DriveSchedRTNSoapHttp" type="tns:DriveSchedRTN">
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<operation name="readlayouts">
<soap:operation soapAction="http://createsched//readlayouts"/>
<input>
<soap:body use="literal" parts="parameters"/>
</input>
Since the whole thing seems to work, I guess I'm happy, but I'm wondering if something will get "fixed" in a future release, causing this to break.
Let me know if you need any supporting docs: code, WSDL, etc.
Thanks

Hi,
I don't think that JDev is generating the service incorrectly. Doc-wrapped is really just a special form of doc-lit, intended to make the input parameters of a procedure call look more like one big single input "document". There's no hard-and-fast way to spot a doc-wrapped service, other than to look at the input message for an operation. If it has one part named parameters, defined by an element rather than a complexType, then you've got a wrapped input.
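To make the distinction concrete, a wrapped input message in the WSDL looks something like this (a hypothetical fragment modeled on the posted binding, not the actual generated WSDL):

```xml
<!-- hypothetical fragment modeled on the posted binding -->
<message name="DriveSchedRTN_readlayouts">
  <!-- a single part named "parameters", defined by an element: a wrapped input -->
  <part name="parameters" element="tns:readlayouts"/>
</message>
```

A part defined with type="..." (a complexType), or an input with multiple or differently named parts, would not be a wrapped input.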
So I think you should be OK with respect to interoperability. If you have any doubts, then you can use the JDev HTTP Analyzer and WS-I testing tools integration to verify that the WSDL document and SOAP messages sent and received match the WS-I interop standards.
Hope that helps,
Alan.

Similar Messages


  • Web server generating wrong url.

    I am getting following error message in errors file.
    warning ( 5394): for host 10.215.152.49 trying to POST <root>/servletWindchillAuthGW/wt.enterprise.URLProcessor/URLTemplateAction, send-file reports: can't find <root>/servletWindchillAuthGW/wt.enterprise.URLProcessor/URLTemplateAction (File not found)
    Interesting observation is one forward slash is missing in the above url.
    There should be a "/" between servlet and WindchillAuthGW; hence the error. Can someone tell me what the reason is?

    The web server didn't "generate" that URI; that's the URI the client requested. If that URI is wrong, the most likely explanation is a typo in the referring page.

  • What am I doing wrong when running multiple applications with the Web Server enabled?

    I am running multiple VIs (applications) on the same NT workstation and the web server is enabled for all applications. However, I am able to view only one panel from the browser. Only the first application that is started can be viewed. What am I doing wrong?

    Only one application can act as a web server on the default HTTP port 80.
    To solve this, either run all VIs in a single application or use different
    HTTP ports for each application. For example, one application can use the
    default port 80 and the others can use ports 8080, 8081, 8082, etc.
    In the browser, you enter the URL for non-default ports as
    http://hostname:8080/...
    HTH
    Jean-Pierre Drolet
    LabVIEW, C'est LabVIEW

  • How do I use Sun Web Server 7.0u1 reverse proxy to change public URLs?

    Some of our installations use the Sun Web Server 7.0 (update 1, usually)
    for hosting some of the public resource and reverse-proxying other parts
    of the URI namespace from other backend servers (content, application
    and other types of servers).
    So far every type of backend server served a unique part of the namespace
    and there was no collision of names, and the backend resources were
    published in a one-to-one manner. That is, a backend resource like, say,
    http://appserver:8080/content/page.html would be published in the internet
    as http://www.publicsite.com/content/page.html
    I was recently asked to research whether we can rename some parts of
    the public URI namespace, to publish some or all resources as, say,
    http://www.publicsite.com/data/page.html while using the same backend
    resources.
    Another request, possibly related in solution, was to make a tidy URL for the
    first page the user opens on the site. That is, in the current solution, when
    a visitor types the URL "www.publicsite.com" in his or her browser, our web
    server returns an HTTP 302 redirect to the actual first-page URL, so the
    browser sends a second request (and changes the URL in its location bar).
    One customer said that it is not "tidy". They don't want the URL to change
    right upon first rendering the page. They want the root page to be rendered
    instantly, in the first HTTP request.
    So far I have not been able to solve these problems. I believe these problems
    share a solution, because both rely on the ability to control the actual URI
    strings that Sun Web Server requests from the backend servers.
    Some details follow, now:
    It seems that the reverse proxy (Service fn="service-passthrough") takes
    only the $uri value which was originally requested by the browser. I didn't
    yet manage to override this value while processing a request, not even if
    I "restart" a request. Turning the error log up to "finest" I see that even
    when making the "service-passthrough" operation, the Sun Web Server
    still remembers that the request was for "/test" (in my test case below);
    it does indeed ask the backend server for an URI "/test" and that fails.
    [04/Mar/2009:21:45:34] finest (25095) www.publicsite.com: for host xx.xx.xx.83
    trying to GET /content/MainPage.html while trying to GET /test, func_exec reports:
    fn="service-passthrough" rewrite-host="true" rewrite-location="true"
    servers="http://10.16.2.127:8080" Directive="Service" DaemonPool="2b1348"
    returned 0 (REQ_PROCEED)
    My obj.conf file currently has simple clauses like this:
    # this causes /content/* to be taken from another (backend) server
    NameTrans fn="assign-name" from="/content" name="content-test" nostat="/content"
    # this causes requests to site root to be HTTP-redirected to a certain page URI
    <If $uri =~ '^/$'>
        NameTrans fn="redirect"
            url="http://www.publicsite.com/content/MainPage.html"
    </If>
    <Object name="content-test">
    ### This maps http://public/content/* to http://10.16.2.127:8080/content/*
    ### Somehow the desired solution should instead map http://public/data/* to http://10.16.2.127:8080/content/*
        Service fn="service-passthrough" rewrite-host="true" rewrite-location="true" servers="http://10.16.2.127:8080"
        Service fn="set-variable" set-srvhdrs="host=www.publicsite.com:80"
    </Object>
    I have also tried "restart"ing the request like this:
        NameTrans fn="restart" uri="/data"
    or desperately trying to set the new request URI like this:
        Service fn="set-variable" uri="/magnoliaPublic/Main.html"
    Thanks for any ideas (including a statement on whether this can be done at all
    in some version of Sun Web Server 7.0 or its open-sourced siblings) ;)
    //Jim

    Some of our installations use the Sun Web Server 7.0 (update 1, usually)
    please plan on installing the latest service pack - 7.0 Update 4. these updates address potentially critical bug fixes.
    I was recently asked to research whether we can rename some parts of
    the public URI namespace, to publish some or all resources as, say,
    http://www.publicsite.com/data/page.html while using the same backend
    resources.
    >
    now, if all the resources are under, say, /data, then how will you know which pages need to be sent to which back-end resources? i guess you probably meant to check that /data/page.html should go to <back-end>/content/page.html
    yes, you could do something like
    - edit your corresponding obj.conf (<hostname>-obj.conf or obj.conf depending on your configuration)
    <Object name="default">
    <If $uri = "/page/">
    # move this NameTrans SAF (the map directive - which is for reverse proxy) within the <If> clause
    NameTrans ... fn="map"
    </If>
    </Object>
    and you could do https-<hostname>/bin/reconfig (dynamic reconfiguration) to check whether this is what you wanted. also, you might want to move config/server.xml <log-level> to finest while you do your configuration. this way, you would get enough information on what is going on within your server logs.
    finally, when you are satisfied, you might have to run the following commands to make your manual change into the admin config repository.
    <install-root>/bin/wadm pull-config user=admin config=<hostname> <hostname>
    <install-root>/bin/wadm deploy-config --user=admin <hostname>
    you might want to check this out for more info on how you could use the <If> else condition to handle your requirement.
    http://docs.sun.com/app/docs/doc/820-6599/gdaer?a=view
    finally, you might want to refer to this doc, which gives an overview of ws7 request processing. this should provide you with some pointers as to what these different directives mean.
    http://docs.sun.com/app/docs/doc/820-6599/gbysz?a=view
    >
    One customer said that it is not "tidy". They don't want the URL to change
    right upon first rendering the page. They want the root page to be rendered
    instantly, in the first HTTP request.
    please check out the rewrite / restart SAF. this should help you.
    http://docs.sun.com/app/docs/doc/820-6599/gdada?a=view
    please understand that - like with most web servers - ordering of directives is very important within obj.conf. so, you might want to make sure that the obj.conf directive ordering does what you want it to do.
    It seems that the reverse proxy (Service fn="service-passthrough") takes
    only the $uri value which was originally requested by the browser. I didn't
    yet manage to override this value while processing a request, not even if
    I "restart" a request. Turning the error log up to "finest" I see that even
    when making the "service-passthrough" operation, the Sun Web Server
    still remembers that the request was for "/test" (in my test case below);
    it does indeed ask the backend server for an URI "/test" and that fails.
    now, you are headed in totally the wrong direction. web server 7 includes a highly integrated reverse proxy solution compared to 6.1. unlike 6.1, you don't have to download a separate plugin. however, you will need to manually migrate your 6.1-based reverse proxy settings into 7.0. please check out this blog link on how to set up a reverse proxy
    http://blogs.sun.com/amit/entry/setting_up_a_reverse_proxy
    feel free to post to us if you need any further help
    you are probably better off - starting fresh
    - install ws7u4
    - use the GUI or CLI to create a reverse proxy and map one-to-one - say, /content
    http://docs.sun.com/app/docs/doc/820-6601/create-reverse-proxy-1?a=view
    if you don't plan on using the ws7 integrated web container (the ability to process jsp/servlets), then you could disable java support as well. this should reduce your server memory footprint
    <install-root>/bin/wadm disable-java user=admin config=<hostname>
    <install-root>/bin/wadm create-reverse-proxy user=admin uri-prefix=/content server=http://<your-back-end-server> config=<hostname> --vs=<hostname>
    <install-root>/bin/wadm deploy-config --user=admin <hostname>
    now, you can check out the regular expression processing and <If> syntax from our docs and try it out within the <https-<hostname>/config/<hostname>-obj.conf> file and restart the server. please note that once you disable java, the ws7 admin server creates <vs>-obj.conf and you need to edit this file, and not the default obj.conf, for your changes to be read by the server.
    >
    I have also tried "restart"ing the request like this:
    NameTrans fn="restart" uri="/data"
    ordering is very important here... you need to do something like
    <Object name="default">
    <If not $restarted>
    NameTrans fn="restart" from="/" uri="/foo"
    </If>
    </Object>
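Pulling the pieces of this reply together, a sketch of the /data-to-/content renaming might look like the following. This is an untested sketch: the regex capture ($1), the exact placement in the default object, and the interaction with the existing reverse-proxy directives all need to be verified against the WS7 docs.

```
<Object name="default">
# Rewrite the public /data prefix to the backend's /content prefix;
# the $restarted guard keeps the restarted request from being rewritten again.
<If not $restarted and $uri =~ "^/data(.*)$">
NameTrans fn="restart" uri="/content$1"
</If>
# existing NameTrans/Service directives for the reverse proxy follow here
</Object>
```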

  • FLV File does not play when published to my web server

    Hi. I have created a short video in VC3 and selected the FLV option to create a web page. When I uploaded the entire file content to my web server (a Windows 2003 server hosted at Network Solutions), the FLV file will not play. Now... if I do the same with a WMV file, the video plays.
    You can view the file at
    http://www.airforcehomeseller.com/videos/va_education/VATutorials/index.htm
    We have another video published in WMV format, although the video vs. audio sync is out of whack at the moment, at
    http://www.airforcehomeseller.com/videos/Energy%20Pricing%20101/index.htm
    What am I doing wrong?
    Any help would be greatly appreciated.
    G-II

    Hi again
    You should probably double-check to ensure the standard.js file was also copied in when the HTML page was imported. It should be there but it never hurts to double check.
    You might also perform a double-check to ensure things ended up in the correct folder (if you are organizing your project into folders). It could be that you moved the HTML page into a different folder and orphaned the JavaScript file.
    Cheers... Rick
    Helpful and Handy Links
    Captivate Wish Form/Bug Reporting Form
    Adobe Certified Captivate Training
    SorcerStone Blog
    Captivate eBooks

  • I want to remove the COnlineBank, OnlineBank and csample applications from my application and web server.

    How can I safely do this on Solaris? I've tried "iasdeploy removeapp" and "iasdeploy removemodule" with no luck. I want to clear all of this demo stuff out to make both the web and app server look more production-like.
    Thanks
    Eric

    Eric,
    these applications are AppLogics, which are not controlled by the
    iasdeploy tool. Unfortunately, there is no tool available to remove
    these applications for you. What you can do is manually remove all
    their entries, but be careful if you do that. If you remove the wrong
    keys you might break your iAS installation, so be warned, and make sure
    you back up the whole iAS registry before attempting to remove anything.
    Eric Coleman wrote:
    >
    I want to remove the COnlineBank, OnlineBank and csample applications
    from my application and web server.
    How can I safely do this on Solaris? I've tried "iasdeploy removeapp"
    and "iasdeploy removemodule" with no luck. I want to clear all of
    this demo stuff out to make both the web and app server look more
    production like.
    Thanks
    Eric
    Han-Dat Luc ([email protected])
    Senior Consultant
    SUN Professional Services (iPlanet)
    Sun Microsystems Australia Pty Ltd

  • NI Application Web Server refuses to be enabled

    I'm trying to deploy a web service made in LabVIEW 2010 and it fails to deploy, saying that the NI Application Web Server is not running....
    So I connect to http://localhost:3580, log in as Admin (blank password) and click the web servers page. There I set the port of the Application Web Server to 8080, click the enable checkbox... and hit the apply button - and the only thing that happens is that the browser shows the Error on page symbol in its status bar.... There is no "Yes" showing next to the Enable checkbox like there is for the system web server...
    So - if I start from scratch again and do the same, but also click the 32 bit radio button prior to hitting the apply button - what happens? Well, then I get an error dialog:
    The service itself (32 bit; the 64 bit is listed but not as started) is running according to the services control panel...
    I'm stuck.... So does anyone know what's missing here? Is there something I'm doing wrong, or something that needs to be done prior to activating the application service?
    MTO

    Uninstalling and reinstalling 32-bit LabVIEW 2010 on two different machines revealed that the problem only showed up on my 64-bit Vista machine - not the 32-bit Windows 7 machine...
    Looking at the services running on the machine with the problem, I could see that it was running the 32-bit version of the Web Application Server and had a 64-bit version installed but set to disabled... - but - only the latter was set to depend on the NI Web Server service. This looked a bit strange, as the 32-bit version on the 32-bit machine was dependent on the NI Web Server....
    So - I disabled the 32-bit NI Web Application Server service, enabled the 64-bit one... and voila - I am now able to configure the Web Application Server to start.
    So why are both the 32- and 64-bit Web Application services installed, with only the 32-bit one running - but not properly? Is this what happens to everyone, but everyone fixes it by switching to the 64-bit version... or is there something that causes the installation to get messed up?
    MTO

  • How to do auto URL redirect in sun web server ?

    Hi, I need to do an auto URL redirect on my Sun web server. Currently I'm setting up some rules for the reverse proxy in the obj.conf file, and the syntax looks like:
    <Object name="reverse-proxy-/test">
    <If $internal and $uri =~ "index.html">
    NameTrans fn="redirect" from="/" uri="/examples/abc.html"
    </If>
    Route fn="set-origin-server" server="http://localhost:8989"
    </Object>
    The situation is:
    1) When users browse "http://localhost/examples/abc.html" it redirects to abc.html
    2) When users browse "http://localhost/test" it redirects to the localhost admin GUI (http://localhost:8989/admingui/admingui/serverTaskGeneral)
    My desired output is that whenever users browse "http://localhost/test", it redirects to the abc.html page.
    The syntax might be wrong. So, does anyone know how to fix this? I keep trying but nothing has worked. Please help me.
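For what it's worth, a minimal sketch of the redirect using the documented from/url parameters of the redirect SAF (rather than uri), placed in the default object where NameTrans directives are processed, might look like this - paths taken from the post, placement an assumption to verify against the WS7 docs:

```
<Object name="default">
<If $uri =~ "^/test$">
# redirect /test to the target page; note url=, not uri=
NameTrans fn="redirect" url="http://localhost/examples/abc.html"
</If>
</Object>
```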

    Moderator action: Moved from Servers General Discussion.
    db

  • SUN Java System Web Server 7.0U1 How to install certificate chain

    I am trying to install a certificate chain using the SUN Java Web Server 7.0U1 HTTPS User interface. What I have tried so far:
    1. Created a single file using the vi editor containing the four certificates in the chain by cutting and pasting each certificate (Begin Certificate ... End Certificate), where the top certificate is the server cert (associated with the private key), then the CA that signed the server cert, then the next CA, then the root CA. Call this file cert_chain.pem
    2. Go to Certificates Tab/Server Certificates
    3. Choose Install
    4. Cut and paste contents of cert_chain.pem in the certificate data box.
    5. Assign to httplistener
    6. Nickname for this chain is 'server_cert'
    7. Select httplistener and assign server_cert (for some reason, this is not automatically done after doing step 5).
    8. No errors are received.
    When I display server_cert (by clicking on it), only the first certificate of the chain is displayed and only that cert is provided to the client during the SSL handshake.
    I tried to do the same, except using the Certificate Authority tab, since this gave the option of designating the certificate as a CA or chain during installation. When I selected "chain," I got the same results when I reviewed the certificate (only the first cert in the file is displayed). This tells me that entering the chain in PEM format is not acceptable. I tried this method since it worked fine with the F5 BIG-IP SSL appliance.
    My question is what format/tool do I need to use to create a certificate chain that the Web Server will accept?

    turrie wrote:
    1. Created a single file using vi editor containing the four certificates in the chain by cutting and pasting each certificate (Begin Certificate ... End Certificate) where the top certificate is the server cert (associated with the private key), then the CA that signed the server cert, then the next CA, then the root CA. Call this file cert_chain.pem
    In my opinion (I may be wrong), cutting and pasting multiple begin/end blocks like
    --- BEGIN CERTIFICATE ---
    ... some data....
    --- END CERTIFICATE ---
    --- BEGIN CERTIFICATE ---
    ... some data....
    --- END CERTIFICATE ---
    is NOT the way to create a certificate chain.
    I have installed a certificate chain (it had one BEGIN CERTIFICATE and one END CERTIFICATE only, and still held two certificates) and I used the same steps you mentioned, and it installed both certificates.
    some links :
    https://developer.mozilla.org/en/NSS_Certificate_Download_Specification
    https://wiki.mozilla.org/CA:Certificate_Download_Specification
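One avenue worth trying: iPlanet/Sun web servers have historically expected a certificate chain packaged as a single PKCS#7 object rather than as concatenated PEM blocks, which may be why only the first certificate is picked up. OpenSSL can repackage the poster's cert_chain.pem that way (file names are the poster's; whether the Install dialog of your update accepts PKCS#7 should be checked against the docs):

```shell
# Repackage a concatenated PEM chain as a single PKCS#7 object
openssl crl2pkcs7 -nocrl -certfile cert_chain.pem -out cert_chain.p7b

# Inspect the bundle to confirm every certificate in the chain is present
openssl pkcs7 -in cert_chain.p7b -print_certs -noout
```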

  • DECODING MAIL FROM WEB SERVER IN PLAIN TEXT FORMAT (THE MAIL BEING SENT BY A LABVIEW APPLICATION)

    Hi All,
    I have a LabVIEW application that sends mail every hour automatically.
    The mail has to be decoded from the web server by another application. But when that application decodes the data in the mail sent by the LabVIEW application, it finds some funny characters inside that cannot be handled by the decoding application.
    (When I open the mail there is no problem.) But our goal is to decode the mail from the web server.
    Why are the extra characters appearing when decoding from the server? Is it because of the HTML format?
    Is there an option to send the mail in plain text format (not as an attachment)?
    In Outlook we can change the setting (Tools->Options->Send->Mail sending format->... here we can set HTML format/Plain Text format).
    Can I change the sending option to plain text format like that, at send time, in my LabVIEW application?
    Thanks...

    smercurio_fc wrote:
    Then it sounds to me like this other application is not decoding the attachment correctly, especially if you looked at the attachment yourself after you received it and verified it's correct.
    No, no, smercurio. This is character encoding here. In older versions of LabVIEW you could specify what character encoding to use when sending an email through the SMTP VIs. But that gave problems, since people in certain locales used characters that were not transferred right when the wrong encoding was specified, and that encoding stuff is not understood by most people at all, so a wrongly selected encoding was the rule rather than the exception. In newer versions of LabVIEW the SMTP VIs handle the encoding automatically based on the currently used locale on the system.
    This change is documented in the Upgrade Notes of LabVIEW and probably happened around LabVIEW 7.1 or 8.0.
    A decent mail client will recognize the encoding and convert it back to whatever is necessary before presenting it to the user. The OP's server application obviously isn't a smart mail client, but probably just some crude text file parser that has no notion of proper mail character encoding and how to deal with it.
    I would suppose that there is a chance to dig into the SMTP VIs themselves and try to manipulate or disable that encoding altogether, but that may open a whole can of worms somewhere else. The proper way would be to process the incoming mail with a character-encoding-aware mail client before passing it to the text parser. On Unix, setting up something like this would be fairly trivial.
    Rolf Kalbermatter
    Message Edited by rolfk on 01-23-2008 10:21 AM
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • Problem with NAT? can get to web server internally but not externally

    We are trying to set up our helpdesk software website so external users
    can access it. However, we have been unsuccessful. We don't have any
    issues accessing it internally from our 10.1.1.X LAN.
    We have had our ISP setup a public DNS "A" record of
    customerservice.amerinet-gpo.com which resolves to 198.88.234.40 and that
    appears to be working.
    Next we added a NAT entry to our firewall to take 198.88.234.40 traffic and
    map it to the local IP of 10.1.1.23, which is our local address for the
    webserver running the helpdesk software.
    We also made sure that BM filters are allowing traffic on ports 80 and
    443 to the local IP as well.
    We have 4 other webservers (on separate servers from our helpdesk
    software website) that are exposed to the outside in this same manner and
    all work fine.
    The helpdesk website is on Windows 2003 server SP1 running IIS 6.0. Our
    firewall server is NetWare 6 SP5 and BM 3.7 SP3.
    I have tried to just telnet to the public IP of 198.88.234.40 on port 80
    and it times out. I can't understand why, and I have checked my entries on
    BM and even deleted and redid them 3 times to make sure I didn't make a
    mistake. I even have another web server on that block NAT'd the same way
    and it works (198.88.234.36); if you telnet to it on port 80 it connects
    right away.
    What else can I try? Any insight would be greatly appreciated!
    Thanks,
    SCOTT

    > > ok, the easiest way to calculate valid addresses is to use an IP subnet
    > > calculator. The one I like the most is the free utility by Wildpackets
    > > http://www.wildpackets.com/products/...tcalc/overview
    > >
    > > Anyway, with a 255.255.255.248 network mask the valid IP addresses
    > > associated with the primary address of your BM server are in the range:
    > > 198.88.234.33-198.88.234.38
    > > therefore .40 isn't included. Actually .40 is the subnet identifier of a
    > > separate subnet. The addresses from .33 to .38 are the ones you can use.
    > >
    > > --
    > > Cat
    > > NSC Volunteer Sysop
    >
    > I was mistaken, the subnet for that block is 255.255.255.240, so I was
    > told by our ISP that our range is 198.88.234.32 to 198.88.234.47, or
    > 14 usable IPs since the first and last are unusable.
    >
    > We have 3 different IP blocks from our ISP, the above 198.88.234.32 one
    > with the 255.255.255.240 subnet, then a 199.217.136.184 with a
    > 255.255.255.248 subnet and finally a 198.88.233.1 with a 255.255.255.248
    > subnet.
    >
    > So I think we should be able to use the 198.88.234.40 address.
    >
    > SCOTT
    >
    I was really hoping that we had the wrong subnet in BM for the
    198.88.234.32 block! When I read your post last night, I thought that's
    gotta be it... sadly, I checked and it does have it as 255.255.255.240 when
    I look in inetcfg under Bindings. I even checked our Cisco router as
    well to make sure it had the subnet correct, since this is the first time
    I've tried to use an IP above 198.88.234.36. The router looked fine as
    well. Is there anyplace else that this could be wrong, maybe a config
    file on BM or something?
    Thx,
    SCOTT V.
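For reference, the /28 mask arithmetic being debated above checks out directly (octet values taken from the thread):

```shell
mask=240   # last octet of 255.255.255.240 (a /28 mask)
host=40    # last octet of 198.88.234.40
net=$(( host & mask ))             # network address octet
bcast=$(( net | (255 ^ mask) ))    # broadcast address octet
echo "network .$net, broadcast .$bcast, usable .$((net + 1))-.$((bcast - 1))"
# → network .32, broadcast .47, usable .33-.46
```

So 198.88.234.40 is a usable host address in that block, consistent with SCOTT's conclusion; the earlier .33-.38 range only applied to the mistaken 255.255.255.248 (/29) mask.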

  • Not able to access schema from a Web Server

    I have not been successful in using a schema from a web server. I am using j2sdk1.4.2_06 and JDOM-b10. I have been successful in accessing the schema using a file path, but not with a web address. The server is Windows Server 2003; I created a virtual directory using IIS under Inetpub.
    Let the URL be: http://server/research_schema/client.xsd
    This would be accessible only on the intranet.
    Here is some XML:
    <client xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:noNamespaceSchemaLocation="http://server/research_schema/client.xsd">
    <execute matlab="false" commit="false" return_server_msg="false">
    <t1>
    <load_batch_data>
    <username>test</username>
    <password>test</password>
    <database>db</database>
    <get_next_batch>true</get_next_batch>
    </load_batch_data>
    </t1>
    </execute>
    </client>
      Here is the code that sets the parser to be validating, and sets the schema location, I am using JDOM:
    builder = new SAXBuilder("org.apache.xerces.parsers.SAXParser", true);
    builder.setFeature(
      "http://apache.org/xml/features/validation/schema",
      true);
    builder.setProperty(
       "http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation",
            "http://server/research_schema/client.xsd");
    doc = builder.build(source);
    The error I get is a JDOMException saying the client element is not found, but I know the client element is there; I can see it in the XML the sender writes out to a file.
    I see many examples on the internet using a URL for the schema location, and it is usually on the internet and not just an intranet, so it does work for someone.
    I'd appreciate any help.
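    One way to narrow this down is to separate "can the JVM load the schema at all?" from "does the document validate?". A "client element not found" validation error often really means the schema itself could not be fetched. Below is a minimal sketch using plain JAXP (built into the JDK, no JDOM) with the schema inlined as a string so it is self-contained; in the real setup the commented-out line with the post's URL would be used instead, and newSchema() would fail fast if the URL is unreachable:

```java
import java.io.StringReader;

import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class SchemaCheck {
    public static void main(String[] args) throws Exception {
        // A tiny no-namespace schema, inlined so the example is self-contained.
        String xsd =
            "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
          + "<xs:element name='client' type='xs:string'/>"
          + "</xs:schema>";
        String xml = "<client>hello</client>";

        SchemaFactory sf =
            SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        // With a reachable web server this would instead be, e.g.:
        //   sf.newSchema(new java.net.URL("http://server/research_schema/client.xsd"));
        // If the URL cannot be fetched, newSchema() throws here, which is a
        // clearer failure than a confusing validation error during the parse.
        Schema schema = sf.newSchema(new StreamSource(new StringReader(xsd)));
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(new StringReader(xml)));
        System.out.println("document is valid");
    }
}
```

    If the URL-based newSchema() call fails while a browser on the same machine can open the .xsd, the usual suspects are proxy settings in the JVM or IIS not serving .xsd with a readable MIME type.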

    Steve, et al.,
    My apologies for using this in lieu of email, but I have been searching and searching for the answer to your questions from last summer concerning the correct method for getting the local xsd file to be correctly accessed from the xml file when using JDOM to parse with validation.
    I did not see successful resolution of the thread from last summer, but this one seems closely related and suggests that you were either instructed to give up, or gave up on your own and went to the solution of placing the xsd file on a server.
    Maybe I've got my head screwed on wrong, but I, like you, would like to find a successful way to make xml processing work for a JWS-provisioned application the same as it would if I just sent the clients a big jar file and told them to unjar it to some convenient local directory. In that scenario xsi:noNamespaceSchemaLocation = "itsrighthere.xsd" works as expected.
    Can anyone tell us what the correct method of specification is in the JWS context?
    Thank you.

  • How to Add a New CGI-Handler to Web Server?

    I need to setup a website that will have pages coded in "LiveCode" (http://livecode.com/), an English-like scripting language, that can be used as an 'easier-to-program' replacement for PHP. It has its own interpreter ("livecode-server"), which is supposed to work under Apache as a cgi script. The idea is that the programmer writes web pages in LiveCode, composed of HTML with embedded LiveCode scripting, and these pages are saved using their own extension (".lc"). When a ".lc" page is requested, Apache should send the page for processing by the 'livecode-server' cgi.
    The installation instructions for 'livecode-server' are quite simple. They state that it should be able to be easily installed via an '.htaccess' file, like this:
    1) in the website's root directory, add a ".htaccess" file with the following directives:
    Options ExecCGI
    AddHandler livecode-script .lc
    Action livecode-script /cgi-bin/livecode-server
    2) put the 'livecode-server' executable inside your 'cgi-bin' folder
    These instructions are obviously for users in a shared host environment, but I thought I should be able to do the same with OS X Web Server. But... where can I find a site's "cgi-bin" folder?
    Any guidance is truly appreciated.

    I tried using a custom httpd_livecode.conf file, and storing it in /Library/Server/Web/Config/apache2/other, but without success. To be totally honest, I'm not certain as to whether the configuration is not loading, or whether there is just something wrong with my syntax, and right now I just don't have the time to troubleshoot.
    I did manage to get it working with the .htaccess file, which was quite easy, once I knew where to put the executable - which, as you mentioned, had to be in /Library/Server/Web/Data/CGI-Executables.
    So, for future reference, the 'recipe' for adding a new cgi handler via .htaccess is:
    1) place the cgi executable (usually a language parser or interpreter) into /Library/Server/Web/Data/CGI-Executables
    2) make sure the parser/interpreter (and any accompanying files or directories) is readable and executable (permissions usually set to '755')
    3) in your site folder, add a .htaccess file with the following contents:
    Options ExecCGI
    AddHandler script-type .extensionType
    Action script-type /cgi-bin/script-interpreter
    4) last of all, in Server.app, in the Advanced settings for your site, make sure you have enabled "Allow overrides using '.htaccess' files"
    In the case of LiveCode, my .htaccess file looks like this:
    Options ExecCGI
    AddHandler livecode-script .lc
    Action livecode-script /cgi-bin/livecode-server
    And the 'livecode-server' executable, as well as its accompanying files, was downloaded directly from the LiveCode website and placed in /Library/Server/Web/Data/CGI-Executables.
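    For anyone who still wants the global-config route that didn't load above: a sketch of what a conf file dropped into /Library/Server/Web/Config/apache2/other might contain. This is an untested assumption, not a verified recipe; it presumes the stock CGI-Executables location from the posts above and Apache 2.4-style access directives:

```apache
# httpd_livecode.conf -- sketch only; adjust paths to your install
ScriptAlias /cgi-bin/ "/Library/Server/Web/Data/CGI-Executables/"
<Directory "/Library/Server/Web/Data/CGI-Executables">
    Options +ExecCGI
    Require all granted
</Directory>
AddHandler livecode-script .lc
Action livecode-script /cgi-bin/livecode-server
```

    The functional difference from the .htaccess version is only scope: this applies server-wide instead of per-site, and avoids the per-request .htaccess lookup.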

  • When viewing documents on a Web Server, IE automatically opens a blank page and then opens the document in the default program. The blank screen should close automatically but isn't happening.

    So here is the situation: my company uses third-party scanning software that puts scanned or created documents onto a web server. We use a program that grants a user access to view the documents on the web server. The user has to log into the system
    using credentials; we have Kerberos doing this. Once the user is logged in, they navigate to the document they want to view or edit. Say the user wants to view a .pdf file: they click the document, and it is supposed to open in the default
    program associated with the file extension, e.g. .doc opens Word, .xls opens Excel. While the document is loading in the default program, Internet Explorer opens a blank page; this page is supposed to prompt the user to open or save the document.
    (I changed the registry to have the selected file types open automatically without this prompt.) The blank page is still staying open, though it is supposed to close automatically after the prompt, or after the document is opened.
    So when a user is looking at several documents, the screen floods with blank Internet Explorer pages. This problem only occurs for users with Windows 7 PCs; XP users do not have it at all. My PC is the only one in the company
    that is Windows 7 64-bit, and it works properly, with the blank IE page closing automatically. I'm wondering what can be done to fix this: is it as simple as registering some .dll files, or do I need to make a registry change? Any help would be greatly
    appreciated.

    What is the registry change, exactly? It sounds like one that is supposed to avoid the prompt. I know that there used to be such a thing, before the new prompt, and I know some users think there should be such a thing for any program, but I'm not
    sure that expectation is valid. I think this may also be related to a concurrent change in the OS. E.g. in XP we had the File Types Options editor, but starting in Vista that was removed. You might get some more clues on this tack from NirSoft's
    FileTypesMan utility.

    The registry change that I did was to remove the open and save prompt. I will be sure to look into that. Our company isn't worried about the users automatically opening the files since they are all trusted. They also do not have the ability to
    install anything, so if they were to download a potentially harmful file they could never install it. We monitor downloads as well; if they download files off the internet that aren't work related, we remove their download capabilities.

    Sounds like a scenario for using ProcMon to take two traces and compare them for their essential differences.

    I talked to my boss about using this tool and he is worried about it. I'm only an intern at my office and do not have the authority to use a program without their knowledge or consent. I will run the idea by them again and see if it's okay for me
    to use it. Maybe I'll make a presentation about how it can help us with our problem and how much time and money it will save, because I really want to get to the bottom of this.

    Probably the most significant thing is going to be the Content-Type that the files are being served with. You could use the Developer Tools Network capture to check on that, and ProcMon could help explain what happens after that. A related
    workaround, to change the symptom when the Content-Type is not what is expected, is to disable MIME Sniffing (e.g. in Security options, Miscellaneous section). In fact, perhaps that could explain your machine's difference from everyone else?
    MIME Sniffing is on by default, and was previously known as "save according to type, not extension" (or some such wording).

    I will be completely honest: I have no idea how to use Developer Tools (F12). Yes, I could google it, but when I tried using it on the blank pages it never worked, and I have no idea what I'm looking at. I should mention that I do not have access
    to the web server. Our programming and development side only has access to our AS400, which is where the web server is located. I do have MIME Sniffing enabled and so does everyone else. I may be wrong, but I believe it is needed in order to open
    the file extension in the correct default program, yes? I thought MIME Sniffing would see a .docx file and then find the appropriate program to open it in. I may be wrong, but I know it's used somehow along these lines.
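    Since MIME sniffing keeps coming up: it is the browser guessing a type from the file's first bytes when the server's declared Content-Type is missing or generic, not the mechanism that maps extensions to desktop programs. The JDK happens to expose both kinds of guess, which makes for a small self-contained demo of the distinction (this is the JDK's guessing logic, not IE's exact algorithm):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URLConnection;

public class MimeSniffDemo {
    public static void main(String[] args) throws Exception {
        // Extension-based guess: roughly what a correctly configured server
        // would declare in its Content-Type header.
        String byName = URLConnection.guessContentTypeFromName("report.pdf");

        // Content-based "sniffing": the type inferred from the file's leading
        // bytes ("%PDF-" is the PDF magic number), which is what a browser
        // falls back to when the declared type is absent or unhelpful.
        InputStream in =
            new ByteArrayInputStream("%PDF-1.4".getBytes("US-ASCII"));
        String byContent = URLConnection.guessContentTypeFromStream(in);

        System.out.println("by name:    " + byName);
        System.out.println("by content: " + byContent);
    }
}
```

    If the two disagree for the documents on the AS400-hosted server (e.g. everything served as text/html or application/octet-stream), that mismatch would be consistent with the stray blank pages, and it is something the server-side team could check without any client access.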
