MapListener network overhead?

If I have an application with a number of classes, all of which want to be advised of updates to an object, should each of those classes implement MapListener and each call cache.addMapListener(...)? Or should a single object implement MapListener and maintain a list of listening objects within the application? If I have 10 classes listening for updates, I don't want 10 copies of the event to be sent from the cache to the app. If Coherence only sends one update to each JVM per event and then dishes out copies of that update to interested listeners, then it's fine for each class to implement MapListener.
Thanks,
Andrew

Hi Andrew,
A MapListener is registered with an instance of an ObservableMap, which is implemented by a NamedCache within a CacheService. In your application, if the instances of these classes in a JVM are using the same NamedCache instance, then using a shared MapListener would enable your application to sequence or otherwise coordinate the event handling. If they each added their own unique MapListener, the registered listeners would be dispatched in the order in which they were added (FIFO). Note: Coherence will prevent the same instance of a MapListener from being registered with the same ObservableMap more than once. Even if these application classes use distinct instances of a NamedCache, the registered listeners would still be dispatched by the CacheService, so as long as those NamedCaches share the same CacheService there will be no additional network overhead to report the same event.
It is also an advantage to share a MapListener that is registered with a Filter, because you avoid repeating the filter evaluation cost each time an event is dispatched.
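For illustration only (this sketch is not from the original reply; the cache name and the UpdateHandler callback interface are invented), a single MapListener registered once per JVM that fans events out to interested application classes might look like this:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// One listener instance per JVM; application classes subscribe to it instead of
// each calling cache.addMapListener(...) themselves.
public class SharedCacheListener implements MapListener {

    /** Hypothetical application-level callback; not part of the Coherence API. */
    public interface UpdateHandler {
        void onUpdate(Object key, Object oldValue, Object newValue);
    }

    private final List<UpdateHandler> handlers = new CopyOnWriteArrayList<UpdateHandler>();

    public void addHandler(UpdateHandler h)    { handlers.add(h); }
    public void removeHandler(UpdateHandler h) { handlers.remove(h); }

    public void entryInserted(MapEvent evt) { dispatch(evt); }
    public void entryUpdated(MapEvent evt)  { dispatch(evt); }
    public void entryDeleted(MapEvent evt)  { dispatch(evt); }

    private void dispatch(MapEvent evt) {
        // One event arrives from the cache service; fan it out locally, in order.
        for (UpdateHandler h : handlers) {
            h.onUpdate(evt.getKey(), evt.getOldValue(), evt.getNewValue());
        }
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("example-cache"); // invented cache name
        SharedCacheListener listener = new SharedCacheListener();
        cache.addMapListener(listener); // registered once for the whole JVM
        listener.addHandler(new UpdateHandler() {
            public void onUpdate(Object key, Object oldValue, Object newValue) {
                System.out.println("key " + key + " -> " + newValue);
            }
        });
    }
}

Each application class then calls addHandler(...) on the shared listener rather than registering its own MapListener, so the cache still delivers one event per JVM and the fan-out happens locally.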
Regards,
Harv

Similar Messages

  • APEX vs Forms 6i - Processor/System/Network Overhead

    We have been developing and deploying applications using Forms 6i for some years, have moved to web forms and are now developing in APEX. The IT department of a client of ours has asked us to provide the relative performance merits and impact on CPU performance of each of the three technologies, with particular focus on the server on which the Oracle database is running, in order to determine a basis for charging.
    Assuming that the application is the same, i.e. there is a common set of PL/SQL commands across the deployment technologies, would it be true to say that this would be relatively the same for Forms and web forms, since these are generally deployed with separate forms or application servers, but higher for APEX, since APEX PL/SQL commands are required to build web pages before they are sent (in this case) to the Oracle HTTP server? If so, are there any figures available to substantiate this case?
    Taking this one step further: given that there is a network overhead for each of the deployments (in addition to the database overhead), has anyone conducted an analysis of the relative efficiencies of the three in presenting the same content? Or any insight as to what that might be? This could potentially be offset against an increase in database server cycles, if the former is true.
    Thanks very much for your help.
    Regards, Malcolm

    This will be hard to quantify without running your own tests, but based on feedback from other customers, the server resources required for APEX are somewhere in the neighborhood of 1/3 to 1/10 of those required for Forms. This is especially true for memory, since every Forms client requires a dedicated server connection whereas APEX uses connection pooling. So, let's say you have 1,000 Forms users with an average memory requirement of 5 MB per client (just guessing here); that's roughly 5 GB of RAM just for client connections. The typical number of sessions in an APEX deployment of that size is 10-20, i.e. 50-100 MB of RAM for client connections. The CPU impact of rendering APEX pages is very insignificant compared to the CPU required for most of the queries your developers will write. One of the busiest internal APEX instances has over 200,000 page views per day and runs on a 4-processor machine.
    Regarding network traffic, I'm not sure, but you could measure the Forms traffic with Wireshark. You can probably estimate your average page view for an APEX application to be somewhere between 35 and 50 KB, excluding CSS, JavaScript and images, which should only need to load on the first page view. I highly doubt either client-server Forms or web forms are less than that.
    Thanks,
    Tyler

  • Screen Sharing: reducing network overhead?

    After setting up screen sharing, are there some ways of keeping the overhead low in terms of CPU and hard drive load?
    I noticed, after getting screen sharing going, a fair amount of hard drive chatter on the computer whose screen was being shared; not sure about the computer using that screen...
    I'll be operating a desktop Mac via screen sharing from a MacBook Pro... doing a little 3D rendering and file management... maybe batch processing.
    thanks for any info on this!
    ray

    QT is still on 1.5 Mbps, which I had set before trying the tips suggested here. iChat video still has the 500 Kbps setting I gave it as advised.
    Oddly, screen sharing is now available even though I am back on wireless not ethernet. This all seems a bit unpredictable and unreliable for something which is supposed to be 'easy' screen sharing!
    Thanks again for the help people have given which has (hopefully) fixed this.

  • Can you use a network share as location for users' home

    I am running a Mac mini with Mountain Lion and Server.app as a home server. For added storage space I have a NAS. I would like to set up the Open Directory network users' home share on the NAS. Is that even possible?
    I am able to mount the network share on the mini using either AFP or NFS, and I can also add the NAS share as a home-folder-enabled share in Server.app and select it as the location for a user's home folder, but that only results in the user not being able to log on to his account. Is there something that I am missing, or is what I am trying to accomplish simply not possible?

    It used to be officially supported to use NFS for clients to access their network home directories, but as of Lion (and Mountain Lion) this is no longer the case.
    Hence the need, as I mentioned, to 're-share' the volume via AFP. As you implied, this does unfortunately impose an additional network overhead, as traffic has to go as follows.
    client ----> AFP ----> Server -----> NFS -----> NAS
    What you could consider, to help at least a little, is to connect the server to the NAS on a totally separate network from the one the server uses to talk to the clients. The main network would be between the server and the clients, and you would use a second Ethernet connection just to link the NAS and the server. As the clients will not need to talk directly to the NAS, this will not be a problem. This would at least mean that each network carries only one kind of traffic, either AFP or NFS but not both, and that the server can use both at full speed at the same time.
    The Mac Pro still has two built-in 1Gbps Ethernet ports but you can also get either a USB3 Ethernet adaptor or a Thunderbolt to Ethernet adaptor (I have used one of these on a Mac mini server).
    If you did not already have the NAS, then someone starting from new would be better off either getting a Thunderbolt RAID, which can be directly attached to the Mac server, or, at the higher end, going the traditional route of setting up a SAN and using an FDDI connection.
    Note: You can now get Thunderbolt to FDDI interfaces.

  • Can you use AirPrint to print to hard wired printer via your WiFi network?

    Recently I bought an Officejet 6700 Premium e-All-in-One to serve all my PCs in the house and to use the AirPrint functionality for all the iPhones & iPads.
    The printer is located in the basement office, where there is poor wireless reception, and is therefore connected with a wire (Ethernet) to a wireless router one level up on the ground floor. The wireless router covers & connects everything (PCs, iPhones & iPads) in the rest of the house.
    I do not seem to be able to set up AirPrint when the printer is connected through Ethernet to the local network. Only when I unplug the Ethernet cable and connect the printer wirelessly (with poor reception) to the wireless network do I get AirPrint to work... As soon as I plug in the Ethernet cable, the Wi-Fi of the printer shuts off and AirPrint does not function anymore.
    Is there any way for me to keep the printer "Ethernet connected" through the wire and use AirPrint?
    I understand that AirPrint is wireless, but why can it not be wireless from the iPhone/iPad to the wireless router and then go through the Ethernet cable to the hard-wired printer? It's all on the same network with proper IP addresses etc. I do not understand why the printer also needs to connect wirelessly to the router to enable AirPrint...
    thanks for helping out,

    I came across this discussion when looking for a solution to a similar problem. We have an AirPrint certified printer that does not have built-in Wi-Fi. Using a USB cable it worked well with our iMac, but we also wanted to use AirPrint with our iOS devices. In our situation, we cannot run an Ethernet cable from our router (a 4th generation Airport Extreme) to either our iMac or printer. With help from Apple Support, here is what became our solution.
    Using an Ethernet cable, we connected an AirPort Express router (2012 model) to the printer to provide a Wi-Fi capability to the printer. AirPrint apparently will not work with a USB cable. Because the AirPort Express was being repurposed, we reset it using what Apple refers to as a “Factory” or “Default Settings reset.” With the reset completed, the Express became visible within Airport Utility, showing a connection to the Extreme. Within Airport Utility, click on the Airport Express to open its status window; click on the Edit button, then the Wireless tab, and then select “Join a wireless network” within the Network Mode options. This made the Express into a “wireless bridge” between the printer and the Extreme, allowing our iOS devices to see the AirPrint printer. Our iMac also connects wirelessly to the printer.
    Within Airport Utility, we could have selected “Extend a wireless network” as the Network Mode, however because we rely on a wireless connection between the Extreme and Express, doing so has the potential of reducing throughput among Wi-Fi devices because of the resulting network overhead. We had been using Printopia to allow our iOS devices to work with an earlier non-AirPrint printer. To avoid conflicts we turned Printopia off, accessing it within the iMac’s System Preferences. AirPrint is easier to use and provides better printer output than did Printopia.
    When our HP printer sleeps, the iMac computer wakes it up as needed, however the sleeping printer becomes invisible to the iOS devices. We hope to fix this weakness, possibly by resetting the printer's sleep setting.

  • Multimaster replication survival & overhead

    Hi
    We are planning a system which should have a very high survivability rate. To this end we are considering using 3 Oracle servers (8.1.7), located in 3 different sites, replicated in an asynchronous multimaster configuration with a short refresh time. The nature of the application is that many small update transactions are happening constantly.
    Several questions:
    1. We are trying to assess the required bandwidth of the site-to-site network connection we will need to support this configuration. What is the approx. network overhead of a small (~500 bytes) transaction?
    2. Provided we come up with a smart enough conflict resolving mechanism, will this system survive? It is vital that committed transactions will never, ever, be lost.
    3. Can the 3-database architecture be left in an inconsistent state? For example, can a committed transaction in database A be propagated to database B, and then A fail before it can be propagated to database C, leaving inconsistencies in the working databases B & C (at least until A is restored)?
    4. How difficult would it be to return such a system to a normal state after one (or more) sites fail?
    Thanks,
    Idan

    Quote - Originally posted by Patricia McElroy ([email protected]):
    "This requires a bit more detail - what indicates that replication is hung... Is there anything in any of the database logs that indicates a problem? You indicate that even changes to the local database cannot be made - is this true also for non-replicated tables? Also, run "Select * from dba_repprop;" and confirm that the method for propagation is indeed ASYNCHRONOUS."
    Thanks Patricia,
    here are the details:
    1. I have 2 DB servers at remote locations running Oracle 8.1.6 on Solaris 2.7.
    2. When the link fails between these two servers, I am not able to put the transactions in the deferred queue, even though I have scheduled the job and the link as well.
    3. It does not allow me to commit an insert statement on the local server when the link is down.

  • What will be tomorrows network infrastructure?

    Due to historical reasons, many different network infrastructures co-exist in today's networks, such as Ethernet, SONET, ATM, etc. However, if we are building a completely new infrastructure for a metro or large campus network, which technology would be the most appropriate? What is the Cisco expert's opinion?

    I would answer this question differently depending on whether the network is to be a service provider's network or an enterprise network, since I could achieve a different cost model based on single versus multiple customers on the same network. Also, the end needs of each customer are different and, if completely defined, could lead to a different cost model for the services rendered.
    Generally, a service provider's network requires support for TDM and data protocols, requires stringent Service Level Agreements, and generally either owns its own fiber or has a lower cost per mile for fiber than an enterprise. For this reason, a SONET/SDH-based network with data capabilities provides the most cost-effective way to transport the variety of circuits and streams from the end user's location to the point-of-presence (POP), and a WDM-based system is used for the long haul and inter-POP traffic. Data can be Ethernet over SONET/SDH or WDM, or it can be done with RPR (802.17).
    The enterprise is usually seen as more fiber constrained, and therefore uses a metro WDM (Coarse or Dense) to transport SAN and Ethernet connections between buildings. The ability for the enterprise to convert most legacy traffic into Ethernet, and the ability to combine Ethernet ports into ever-larger trunk speeds, can also lead to very cost effective Ethernet over dark fiber networks up to the new 10 Gigabit per second standard. RPR systems can also be used to extend SONET/SDH recovery mechanisms to Layer 2 and 3 networks.
    With voice and video moving to native IP, and the ability to tunnel most other legacy traffic over IP, Metro Ethernet systems over dark fiber, or extended over WDM or SONET/SDH systems are beginning to be the most cost effective way for businesses to connect within the campus or metro today.
    The question we have to ask whenever we build a network is what will the next 10 years bring us. My vision is one of lots of devices with wireless mobility, the mixing of SIP and HTTP services to the mobile devices, and lots of high speed Global Area Networking overhead to figure out who and where you are. What is your vision of the traffic and protocols you will be required to support in ten years?

  • Premiere crashes when importing over a network

    I've been trying to import video files from my capture computer. Both machines are on the same domain and connected through a gigabit switch.
    Each time I import a file, the connection to the capture machine gets lost and I either have to restart the capture computer, or prove to my edit computer that the connection exists. While it's importing (or exporting if I've made it that far) I have to keep an explorer window open and continually open up folders on that networked (mapped) drive. Sometimes, I even have to play a file from there.
    Here's the details on the computers
    Capture (purchased before my time):
    Motherboard: Asus a8n32-delux
    Proc: AMD something
    OS: XP
    Edit:
    Motherboard: Asus Sabertoothx58
    Proc: i7
    OS: Win7-64
    The project is NTSC DV Widescreen 48kHz. I'm not sure what other information you need.
    The files are about 90 GB (20 MB/s) Blackmagic 8-bit YUV AVIs.

    Have you tried copying the file using Explorer from the capture machine to the edit machine and placing it on a local drive for the edit machine?
    This would help you establish if the problem is with Premiere or with your network.
    Most people here will tell you not to edit a file opened over the network, but to copy the file to the local drive before opening in Premiere.
    Although your gigabit switch might imply speed, it's nowhere near as fast as access from a fast HDD to the processor in the same machine. There is a massive amount of network overhead that isn't involved in a DMA transfer from an HDD.
    You may be experiencing firewall issues. Does Premiere have read/write access on the other PC? Premiere creates hundreds of I/O threads when you're editing.

  • JMS message overhead

    Hi,
    I am interested in using JMS in a wireless environment where message size costs money. Does anybody know what the overhead is, in terms of bytes added, of using JMS? For example, if my message is 100 bytes, how many extra bytes will JMS add to the original message?
    Thanks
    Nick

    The inactive JMS subscription itself incurs no network overhead, but the underlying "idle" connection between the client JVM and the server JVM does periodically generate small "ping"-like messages. This ping is used to check if the socket is still OK. I think a ping is sent out every 60 seconds, and the period is configurable via WebLogic internal (undocumented) settings.
    Note that WebLogic multiplexes all JVM-to-JVM traffic over a single socket connection, so figure 1 ping message per 60 seconds per remote client JVM...
    Tom, BEA
    Ron wrote:
    > Is there much network overhead for JMS subscribers? How do you quantify this?
    >
    > Example,
    >
    > If we have a few publishers and many subscribers in a system where messages are sent, let's say, once a week, and the subscribers are constantly listening, how much network overhead is involved in that listening?

  • PL/SQL procedure is 10x slower when running from weblogic

    Hi everyone,
    we've developed a PL/SQL procedure performing reporting - the original solution was written in Java but due to performance problems we've decided to switch this particular piece to PL/SQL. Everything works fine as long as we execute the procedure from SQL Developer - the batch processing of 20000 items finishes in about 80 seconds, which is a serious improvement compared to the previous solution.
    But once we call the very same procedure (on exactly the same data) from WebLogic, the performance seriously drops - instead of 80 seconds it suddenly runs for about 23 minutes, which is more than 10x slower. And we don't know why this happens :-(
    We've profiled the procedure (in both environments) using DBMS_PROFILER, and we've found that if the procedure is executed from WebLogic, one of the SQL statements runs noticeably slower and consumes about 800 seconds (90% of the total run time) instead of 0.9 seconds (2% of the total run time), but we're not sure why - in both cases this query is executed 32742 times, giving 24 ms vs. 0.03 ms on average.
    The SQL is:
    SELECT personId INTO v_personId FROM (
            SELECT personId FROM PersonRelations
            WHERE extPersonId LIKE v_person_prefix || '%'
    ) WHERE rownum = 1;
    Basically it returns an ID of the person according to some external ID (or the prefix of the ID). I do understand why this query might be a performance problem (LIKE operator etc.), but I don't understand why this runs quite fast when executed from SQL Developer and 10x slower when executed from WebLogic (exactly the same data, etc.).
    We're using Oracle 10gR2 with WebLogic 10, running on a separate machine - there are no other intensive tasks, so there's nothing that could interfere with the Oracle process. According to the 'top' command, the wait time is below 0.5%, so there should be no serious I/O problems. We've even checked the JDBC connection pool settings in WebLogic, but I doubt this issue is related to JDBC (and everything looks fine anyway). The statistics are fresh and the results are quite consistent.

    The setup is quite simple - the database is running on a dedicated database server (development only). Generally there are no 'intensive' tasks running on this machine, especially not when the procedure I'm talking about was executed. The application server (WebLogic 10) is running on a different machine, so it does not interfere with the database (in this case it was my own workstation).
    No, the procedure is not called 20000x - we have a table with a batch of records we need to process, with a given flag (say processed=0). The procedure reads them using a cursor and processes the records one by one. By 'processing' I mean computing some sums, updating other tables, etc., and finally switching the record to processed=1. I.e. the procedure looks like this:
    CREATE PROCEDURE process_records IS
        v_record records_to_process%ROWTYPE;
    BEGIN
         OPEN records_to_process;
         LOOP
              FETCH records_to_process INTO v_record;
              EXIT WHEN records_to_process%NOTFOUND;
              -- process the record (update table A, insert a record into B, delete from C, query table D ....)
              -- and finally mark the row as 'processed=1'
         END LOOP;
         CLOSE records_to_process;
    END process_records;
    The procedure is actually part of a package and the cursor 'records_to_process' is defined in the body. One of the queries executed in the procedure is the SELECT mentioned above (the one that jumps from 2% to 90%).
    So the only thing we actually do in WebLogic is
    CallableStatement cstmt = connection.prepareCall("{call ProcessPkg.process_records}");
    cstmt.execute();
    and that's it - there is only one JDBC call, so the network overhead shouldn't be a problem.
    There are 20000 rows we use for testing - we just update them to 'processed=0' (and clear some of the other tables). So actually each run uses exactly the same data, the same code paths and produces the very same results. Yet when executed from SQL Developer it takes 80 seconds and when executed from WebLogic it takes 800 seconds :-(
    The only difference I've just noticed is that when using SQL Developer we're using PL/SQL notation, i.e. "BEGIN ProcessPkg.process_records; END;", instead of "{call }", but I guess that's irrelevant. And yet another difference - WebLogic uses the JDBC driver from 10gR2, while SQL Developer is bundled with the JDBC driver from 11g.
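    For reference, here is a self-contained sketch of the two invocation forms mentioned above (the Connection is assumed to come from the WebLogic data source; everything except ProcessPkg.process_records is invented for the example):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;

    public class InvokeProcessRecords {

        // The "{call ...}" escape syntax used from WebLogic in the post.
        static void viaCallSyntax(Connection connection) throws SQLException {
            CallableStatement cstmt = connection.prepareCall("{call ProcessPkg.process_records}");
            try {
                cstmt.execute();
            } finally {
                cstmt.close();
            }
        }

        // The anonymous-block notation SQL Developer uses; trying both from the
        // application is a cheap way to rule out call-syntax differences.
        static void viaAnonymousBlock(Connection connection) throws SQLException {
            CallableStatement cstmt = connection.prepareCall("BEGIN ProcessPkg.process_records; END;");
            try {
                cstmt.execute();
            } finally {
                cstmt.close();
            }
        }
    }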

  • Lookout OPC Client – Asynchronous I/O and Update Rate serious problems (Sequence of data)

    I am using the Lookout OPCClient driver to connect to AB PLCs (EtherNet/IP protocol) and power measurement equipment (Modbus TCP protocol). The OPC server is the NI OPC Servers. The data that are read out from PLCs and PMs are energy meter readings, energy counters, power, voltage, current, frequency, power factor and el. energy quality measurements (THD). That energy meter readings are being stored in SQL database.
    I am experiencing a serious problem regarding the accuracy of the meter readings. Several times per day, randomly, meter readings lose the time sequence. For example, the sequence is: 167, then a few seconds later 165, 166. In other words, the present value is followed by two previous (old) values. That creates a serious problem in our application, which expects a naturally rising sequence of counter values.
    Analyzing further, I isolated the problem to the connection between the Lookout OPCClient and the OPC server. I made a simple application in Lookout 6.7 (opcproc.lkp, attached) with OPCClient parameters: NIOPCServers, OPC2, Asynchronous I/O, Update rate: 10000, Deadband: 0.0, that reads just one tag from the NI OPC Servers demo application (simdemo.opf).
    Using the OPC diagnostic tool from NI OPC Servers I recorded the sequence of OPC requests and responses. I found out that the OPCClient sends an "IOPCAsyncIO2::Refresh2()" call every 2.5 sec, which is a request to refresh all items in one OPC group. A few milliseconds later the OPC server responds with the callback function "IOPCDataCallback::OnDataChange() (Device Refresh)" that actually refreshes the data.
    This periodic sequence is intrinsic to the OPCClient and cannot be disabled or changed (to my knowledge). It is periodically interrupted by an "IOPCDataCallback::OnDataChange()" caused by the Update rate parameter of the OPCClient (the client is subscribed to the server for periodic updates of changed items).
    In the case of the demo application, for every 4 refresh callbacks caused by refresh requests (every 2.5 sec) there is one update subscription callback determined by the Update rate (10 sec).
    QUESTION 1:
    What is the purpose of the update subscription and the Update rate when we get fresh values every 2.5 sec anyway?
    PROBLEM
    The problem arises when there is a large number of items in the OPC group. In that case the OPC server starts to queue refresh requests, because they cannot be fulfilled within 2.5 sec due to the large number of I/O points that must be scanned. At the same time, update subscription callbacks are running at the period determined by the Update rate. I observed in my production system that regular update callbacks have a higher priority than refresh callbacks from the queue. That causes the loss of the time sequence of the data: after an update callback with fresh data, one or two refresh callbacks from the queue with old (invalid) data sometimes follow. By adjusting the Update rate parameter (1 hour, 2 hours ...) I can postpone the collision of data refreshes, but I cannot eliminate it. Furthermore, the 2.5 sec automatic refreshes are a large burden for systems with many I/O points.
    QUESTION 2:
    Is there a way to disable the automatic refresh request every 2.5 sec and just use the update requests determined by the Update rate?
    QUESTION 3:
    Is there a way (or parameter) to change the period of automatic refresh (2.5 sec)?
    This problem occurs with Lookout 6.5, 6.6 and 6.7, so I would say it is intrinsic to the OPCClient. If I use synchronous I/O requests there is no automatic refresh, but that is not an option for large systems.
    Thanks!
    Alan Vrana
    System engineer
    SCADA Projekt d.o.o.
    Picmanova 2
    10000 ZAGREB
    CROATIA
    T +385 1 6622230
    F +385 1 6683463
    e-mail [email protected]
    Alan Vrana
    SCADA Projekt d.o.o.
    ZAGREB, Croatia
    Attachments:
    opcproc.zip ‏4 KB

    The physical connection from LV to the switch is (I believe) a copper crossover to a fiber converter into a switch. Then fiber from the switch to the end device (relay). The relay has all of the typical Modbus registers and has been verified by inducing signals into the system, measured/polled in LabVIEW and observed in the Variable Monitor. I am working with LV 8.2 and 8.5.
    An OPC server would only add an additional translation of addressing within the configuration. The only real drawback would be the network overhead required to do this processing, and it would not be representative of the end design configuration.
    I will reiterate my question in another way:
    I must answer a question from management relating to data collection, test results and analysis: how often are we polling the client in relation to the outcomes measured? At this time I cannot point to any configuration in the setup and execution that directs the data framing rate. I only measure the traffic and work with the results. This needs to be clearly identified, based on the relay's Modbus/TCP design capability of supporting a fixed number of client requests per second.
    For testing purposes, I would like to be able to stress the system to failure and prove its capabilities with measured data. The present problem is that I have no basis for establishing varying polling rates that affect the measured data transmission.
    This raises another question.  What handles the Variable Monitor data requests and how is this rate determined?
    Thanks for your interest in my efforts.
    Steve

  • Slow performance for large queries

    Hi -
    I'm experiencing slow performance when I use a filter with a very large OR clause.
    I have a list of users whose uids are known, and I want to retrieve attributes for all of them. If I do this one at a time, I pay the network overhead each time, and this becomes a bottleneck. However, if I try to get information about all users at once, the query runs ridiculously slowly - about 10 minutes for 5000 users.
    The syntax of my filter is: (|(uid=user1)(uid=user2)(uid=user3)(uid=user4).....(uid=user5000))
    I'm trying this technique because it's similar to good design for Oracle - minimizing round trips to the database.
    I'm running LDAP 4.1.1 on a Tru64 OS - v5.1.
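    As a rough illustration of the single-query technique described above (not from the original thread; the directory host, base DN and returned attributes are placeholders), a JNDI client could build the OR filter and issue one search like this:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class BulkUidLookup {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder host

            InitialDirContext ctx = new InitialDirContext(env);

            // Build (|(uid=user1)(uid=user2)...) from the known uid list.
            String[] uids = { "user1", "user2", "user3" };
            StringBuilder filter = new StringBuilder("(|");
            for (String uid : uids) {
                filter.append("(uid=").append(uid).append(")");
            }
            filter.append(")");

            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            controls.setReturningAttributes(new String[] { "uid", "cn", "mail" }); // placeholder attributes

            NamingEnumeration<SearchResult> results =
                    ctx.search("ou=People,dc=example,dc=com", filter.toString(), controls); // placeholder base DN
            while (results.hasMore()) {
                System.out.println(results.next().getAttributes());
            }
            ctx.close();
        }
    }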

    This is a performance/tuning forum for iPlanet Application Server. You'd have better luck with this question on the Directory forum.
    The directory folks don't have a separate forum dedicated to tuning, but they answer performance questions in the main forum all of the time.
    David

  • Error with SSL Message

    Hello Guys,
    I am implementing a solution wherein I need to post an HTTP request to a secure server. I am using the following mechanism to talk to the SSL server, but when I run the program on my local machine I get the following error. Can you guys please help me out, since I have limited knowledge of the security API and I need to get this done in a very short time. Please help me understand the necessary steps required to resolve this issue.
    Thanks
    Code
    try {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        tunnelHost = "<my proxy server >";      // proxy host/port redacted by the poster
        tunnelPort = "<proxy server port>";
        tunnel = new Socket(tunnelHost, tunnelPort);
        doTunnelHandshake(tunnel, host, port, username, password);
        socket = (SSLSocket) factory.createSocket(tunnel, host, port, true);
        socket.addHandshakeCompletedListener(
            new HandshakeCompletedListener() {
                public void handshakeCompleted(HandshakeCompletedEvent event) {
                    System.out.println("\t CipherSuite:" + event.getCipherSuite());
                    System.out.println("\t SessionId " + event.getSession());
                    System.out.println("\t PeerHost " + event.getSession().getPeerHost());
                }
            });
        socket.startHandshake();
        socket.close();
        tunnel.close();
    } catch (Exception e) {
        e.printStackTrace();
    }

    private void doTunnelHandshake(Socket tunnel, String host, int port, String username, String password)
            throws IOException {
        OutputStream out = tunnel.getOutputStream();
        String AuthString = new String("NORTHAMERICA\\" + username + ":" + password);
        byte[] AuthBytes = AuthString.getBytes();
        char[] AuthChar = Base64encode(AuthBytes);
        String test = String.valueOf(AuthChar);
        String ProxyAuthorization = new String("Proxy-Authorization: Basic " + test);
        String msg = "CONNECT " + host + ":" + port + " HTTP/1.0\n"
            + "User-Agent: Java SSL Sample\n"
            + "Host: FSM Gateway\n"
            + "Proxy-Connection: Keep-Alive\n"
            + "Pragma: No-Cache\n"
            + ProxyAuthorization
            + "\r\n\r\n";
        byte b[];
        try {
            b = msg.getBytes("ASCII7");
        } catch (UnsupportedEncodingException ignored) {
            /*
             * If ASCII7 isn't there, something serious is wrong, but
             * Paranoia Is Good (tm)
             */
            b = msg.getBytes();
        }
        out.write(b);
        out.flush();
        byte reply[] = new byte[200];
        int replyLen = 0;
        int newlinesSeen = 0;
        boolean headerDone = false; /* Done on first newline */
        InputStream in = tunnel.getInputStream();
        boolean error = false;
        while (newlinesSeen < 2) {
            int i = in.read();
            if (i < 0) {
                throw new IOException("Unexpected EOF from proxy");
            }
            if (i == '\n') {
                headerDone = true;
                ++newlinesSeen;
            } else if (i != '\r') {
                newlinesSeen = 0;
                if (!headerDone && replyLen < reply.length) {
                    reply[replyLen++] = (byte) i;
                }
            }
        }
        /*
         * Converting the byte array to a string is slightly wasteful
         * in the case where the connection was successful, but it's
         * insignificant compared to the network overhead.
         */
        String replyStr;
        try {
            replyStr = new String(reply, 0, replyLen, "ASCII7");
        } catch (UnsupportedEncodingException ignored) {
            replyStr = new String(reply, 0, replyLen);
        }
        /* We asked for HTTP/1.0, so we should get that back */
        if (!replyStr.startsWith("HTTP/1.0 200")) {
            throw new IOException("Unable to tunnel through "
                + tunnelHost + ":" + tunnelPort
                + ". Proxy returns \"" + replyStr + "\"");
        }
        System.out.println("tunneling Handshake was successful!");
    }
    Exception is javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
    at com.sun.net.ssl.internal.ssl.InputRecord.b(Unknown Source)
    at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.a(Unknown Source)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.j(Unknown Source)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
    at SSLSocketClient.doIt(SSLSocketClient.java:166)
    at SSLSocketClient.main(SSLSocketClient.java:54)
    Debug information is
    keyStore is :
    keyStore type is : jks
    init keystore
    init keymanager of type SunX509
    trustStore is: C:\Program Files\Java\j2re1.4.2_06\lib\security\cacerts
    trustStore type is : jks
    init truststore
    adding as trusted cert:
    Subject: CN=Baltimore CyberTrust Code Signing Root, OU=CyberTrust, O=Baltimore, C=IE
    Issuer: CN=Baltimore CyberTrust Code Signing Root, OU=CyberTrust, O=Baltimore, C=IE
    Algorithm: RSA; Serial number: 0x20000bf
    Valid from Wed May 17 09:01:00 CDT 2000 until Sat May 17 18:59:00 CDT 2025
    adding as trusted cert:
    Subject: CN=Entrust.net Secure Server Certification Authority, OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS incorp. by ref. (limits liab.),
    Issuer: CN=Entrust.net Secure Server Certification Authority, OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS incorp. by ref. (limits liab.),
    Algorithm: RSA; Serial number: 0x374ad243
    Valid from Tue May 25 11:09:40 CDT 1999 until Sat May 25 11:39:40 CDT 2019
    adding as trusted cert:
    Subject: CN=Baltimore CyberTrust Root, OU=CyberTrust, O=Baltimore, C=IE
    Issuer: CN=Baltimore CyberTrust Root, OU=CyberTrust, O=Baltimore, C=IE
    Algorithm: RSA; Serial number: 0x20000b9
    Valid from Fri May 12 13:46:00 CDT 2000 until Mon May 12 18:59:00 CDT 2025
    adding as trusted cert:
    Subject: CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Net
    Issuer: CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For authorized use only", OU=VeriSign Trust Net
    Algorithm: RSA; Serial number: 0x9b7e0649a33e62b9d5ee90487129ef57
    Valid from Thu Sep 30 19:00:00 CDT 1999 until Wed Jul 16 18:59:59 CDT 2036
    adding as trusted cert:
    init context
    trigger seeding of SecureRandom
    done seeding SecureRandom
    tunneling Handshake was successful!
    Socket is 15e83f9[SSL_NULL_WITH_NULL_NULL: Socket[addr=/10.0.1.38,port=80,localport=2133]]
    %% No cached client session
    *** ClientHello, TLSv1
    RandomCookie: GMT: 1115833203 bytes = { 119, 0, 234, 70, 240, 74, 55, 9, 64, 89, 133, 251, 64, 160, 105, 25, 113, 219, 252, 65, 240, 228, 184, 117, 235,
    Session ID: {}
    Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_
    Compression Methods:  { 0 }
    main, WRITE: TLSv1 Handshake, length = 73
    main, WRITE: SSLv2 client hello message, length = 98
    main, handling exception: javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
    main, SEND TLSv1 ALERT: fatal, description = unexpected_message
    main, WRITE: TLSv1 Alert, length = 2
    main, called closeSocket()
    javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
    at com.sun.net.ssl.internal.ssl.InputRecord.b(Unknown Source)
    at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.a(Unknown Source)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.j(Unknown Source)
    at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(Unknown Source)
    at SSLSocketClient.doIt(SSLSocketClient.java:166)
    at SSLSocketClient.main(SSLSocketClient.java:54)

    No, it is not correct.
    The socket creation should be given the proper host and port. The resource on the host is something you should ask your server for (HTTP GET ... whatever).
    https://abc.com:443 is equal to https://abc.com, as the default port for https is 443. The host variable should be "abc.com" and the port 443, and the rest is negotiated at the application level (HTTP GET /XYZ [is not the proper syntax]).
    Further, with this description, the first URL (https://server/resource:port) does not make any sense.
    Your problem in the first place is probably the host and port parameters (specifically, the port has been set to 80, which is most likely wrong). You also need to consider the other point regarding newlines and CRs when building the proxy authentication header, but your debug logs suggest that your test proxy server accepts it.
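    As a minimal sketch of that point (not part of the original reply; "abc.com" is the placeholder host used above), a direct connection to the TLS port looks like this:

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class DirectTlsCheck {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
            // Connect to the TLS port itself; pointing this at a plain-HTTP port (e.g. 80)
            // reproduces the "Unrecognized SSL message, plaintext connection?" error above.
            SSLSocket socket = (SSLSocket) factory.createSocket("abc.com", 443);
            try {
                socket.startHandshake();
                System.out.println("Handshake OK: " + socket.getSession().getCipherSuite());
            } finally {
                socket.close();
            }
        }
    }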

  • Business delegate and Session facade design patterns

    Can anyone tell me what the difference is between the Business Delegate and Session Facade design patterns?

    1. Session Facade decouples client code from Entity beans by introducing a session bean as a middle layer, while Business Delegate decouples client code from the EJB layer (Session beans); see the sketch after this list.
    2. SF reduces network overhead, while BD reduces maintenance overhead.
    3. With SF alone, any change in a Session bean forces a client code change, while with BD the client is totally separate from the Session beans because the BD layer insulates the client from the EJB layer.
    4. In an SF-only scenario the client coder has to know about EJB programming, whereas with the BD pattern no EJB specialization is needed.
    5. SF emphasizes the separation of verb and noun responsibilities, while BD emphasizes the separation of the client (presentation) and the EJB layer.
    Can anybody please suggest more differences?
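    To illustrate point 1 (a sketch only; the OrderFacade interface and class names are invented, and in a real deployment the facade would be obtained via a JNDI lookup), a Business Delegate wraps the Session Facade behind a plain Java type so the client never touches EJB APIs:

    import java.util.List;

    public class OrderDelegate {

        /** Stand-in for the remote Session Facade (a session bean) the delegate wraps. */
        public interface OrderFacade {
            List<String> listOrders(String customerId) throws Exception;
        }

        private final OrderFacade facade;

        public OrderDelegate(OrderFacade facade) {
            // Injected here to keep the sketch self-contained; normally looked up via JNDI.
            this.facade = facade;
        }

        public List<String> listOrders(String customerId) {
            try {
                // One coarse-grained call to the facade, which is where the
                // network-overhead saving in point 2 comes from.
                return facade.listOrders(customerId);
            } catch (Exception e) {
                // Translate remote/EJB exceptions so the client never sees EJB types.
                throw new RuntimeException("Order lookup failed", e);
            }
        }
    }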

  • How to load Java properties file dynamically using weblogic server

    Hi,
    We are using a Java properties file in our Java code. Properties in this file are modified frequently. If I keep the properties file on the project classpath or as part of the WAR, I will have to redeploy the application after each change.
    We are using WebLogic Server.
    Can you please suggest how this properties file can be loaded at WebLogic server startup? And in that case, how should the properties file be referred to in Java code?
    What is the best practice for this?
    Thanks,
    Parshant

    Another alternative is to keep the property file in a pre-defined location. Write a class which reads the properties from the file and returns the one requested by the caller, and deploy this class. Whenever you have to change the properties, just update the property file on the server and the next call to fetch the property should return the updated value.
    The downside of this approach is file I/O every time. To overcome that you can "cache" the properties in a hash map. Basically, when a property is requested, first check the hash map; only if it is not found there, read it from the property file and also update the hash map. The next time, the same property will be returned from the hash map itself. The hash map will be cleared at every server restart since it lives in memory. You will also need to build a method to clear the hash map when you update the values in the property file on the server.
    This solution is suitable for small files and for cases where the network overhead of calling a DB needs to be avoided.
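    A minimal sketch of that approach (the file path, class name and property keys are placeholders):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;
    import java.util.concurrent.ConcurrentHashMap;

    // Reads properties from a fixed location outside the WAR and caches them in memory.
    public class CachedProperties {

        private static final String FILE = "/opt/app/config/app.properties"; // placeholder path
        private static final ConcurrentHashMap<String, String> CACHE = new ConcurrentHashMap<String, String>();

        public static String get(String key) throws IOException {
            String cached = CACHE.get(key);
            if (cached != null) {
                return cached;                   // served from memory, no file I/O
            }
            Properties props = new Properties();
            InputStream in = new FileInputStream(FILE);
            try {
                props.load(in);                  // hit the disk only on a cache miss
            } finally {
                in.close();
            }
            String value = props.getProperty(key);
            if (value != null) {
                CACHE.put(key, value);
            }
            return value;
        }

        /** Call after editing the file on the server so new values are picked up. */
        public static void clear() {
            CACHE.clear();
        }
    }

    A call such as CachedProperties.get("some.key") reads the file only on the first request for that key; CachedProperties.clear() forces a reload after the file is edited on the server.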
