Protocols in XI architecture

Could you please describe the various protocols (such as HTTP) used in the XI architecture?

Hi Gabriel,
The protocol used for communication is the XI message protocol. Check this link:
http://help.sap.com/saphelp_nw04/helpdata/en/b6/0b733cb7d61952e10000000a11405a/content.htm
regards
Ramesh P

Similar Messages

  • Help with SMPP protocol handler integration in URL architecture using Java

    Hi,
    I am a final-year BS.SE student working on a research project: extending the URL framework with a new component, an SMPP protocol handler built on the Java SMPP API 3.4. By developing this component, any application could become an SMS-powered application through our custom protocol handler for SMPP.
    Could you please help me with this project? Specifically:
    - How can this be implemented, and what should the design of the project look like?
    - How should the coding of the project be structured?
    - How will the SMPP listeners be integrated into the URL framework?
    Any material, development ideas, or sample code would be much appreciated.
    Thanks in advance.
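Since the URL framework dispatches on the scheme via URLStreamHandler, a minimal sketch of a custom smpp: handler might look like the following. The class name SmppStreamHandler and the default port are our own assumptions; a real handler would open an SMPP bind session via the SMPP API inside openConnection() instead of the placeholder connection shown here.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

// Hypothetical sketch of a custom smpp: URL protocol handler.
// A real implementation would open an SMPP bind session (e.g. via the
// Java SMPP API 3.4) inside openConnection().
public class SmppHandlerSketch {

    public static class SmppStreamHandler extends URLStreamHandler {
        @Override
        protected URLConnection openConnection(URL url) throws IOException {
            return new URLConnection(url) {
                @Override
                public void connect() throws IOException {
                    // Placeholder: bind to the SMSC here using the SMPP library.
                    connected = true;
                }
            };
        }

        @Override
        protected int getDefaultPort() {
            return 2775; // commonly used SMPP port
        }
    }

    public static void main(String[] args) throws Exception {
        // Passing the handler explicitly sidesteps global registration via
        // the java.protocol.handler.pkgs system property.
        URL url = new URL(null, "smpp://smsc.example.com/", new SmppStreamHandler());
        System.out.println(url.getProtocol() + " " + url.getHost() + " " + url.getDefaultPort());
    }
}
```

To register the handler globally instead, you name its package in the java.protocol.handler.pkgs system property, after which plain new URL("smpp://...") calls resolve to it.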

    System.setProperty("proxySet", "true");
    The "proxySet" property does nothing. Remove it.
    System.setProperty("proxyHost", proxyHost.trim());
    System.setProperty("proxyPort", proxyPort.stringValue());
    Those are for HTTP and they are obsolete; the current names are http.proxyHost/http.proxyPort. But as you are using FTP:
    System.setProperty("ftp.proxyHost", proxyHost.trim());
    System.setProperty("ftp.proxyPort", proxyPort.stringValue());

  • Performance issues with LOV bindings in 3-tier BC4J architecture

    We are running BC4J and JClient (Jdeveloper 9.0.3.4/9iAS 9.0.2) in a 3-tier architecture, and have problems with the performance.
    One of our problems is comboboxes with LOV bindings. The view objects that provide data for the LOV bindings contain simple queries on tables with only 4-10 rows, and there are no view links or entity objects on these views.
    To create the LOV binding and to set the model for the combobox takes about 1 second for each combobox.
    We have tried most of the tips in http://otn.oracle.com/products/jdev/tips/muench/jclientperf/index.html, but they do not seem to help with our problem.
    The performance is OK (if not great) when the same code is running as 2-tier.
    Does anyone have any good suggestions?

    I can recommend that you look at the following two bugs in Metalink: Bug 2640945 and Bug 3621502
    They are related to the disabling of the TCP socket-level acknowledgement which slows down remote communications for EJB components using ORMI (the protocol used by Oracle OC4J) to communicate between remote EJB client and server.
    A BC4J Application Module deployed as an EJB suffers this same network latency penalty due to the TCP acknowledgement.
    A customer sent me information (that you'll see there as a part of Bug# 3621502) like this on a related issue:
    We found our application runs very slow in 3-Tier mode (JClient, BC4J deployed
    as EJB Session Bean on 9iAS server 9.0.2 enterprise edition). We spent a lot
    of time to tune up our codes but that helped very little. Eventually, we found
    the problem seemed to happen on TCP level. There is a 200ms delay in TCP
    level. After we read some documents about Nagle Algorithm,  we disabled a
    registry key (TcpDelAckTicks) in windows2000  on both client and server. This
    makes our program a lot faster.
    Anyway, we think we should provide our clients a better solution other than
    changing windows registry for them, for example, there may be a way to disable
    that Nagle's algorithm through java.net.Socket.setTcpNoDelay(true), in BC4J,
    or anywhere in our codes. We have not figured out yet.
    Bug 2640945 was fixed in Oracle Application Server 10g (v9.0.4) and it now disables this TCP Acknowledgement on the server side in that release. In the BugDB, I see backport patches available for earlier 9.0.3 and 9.0.2 releases of IAS as well.
    Bug 3621502 is requesting that the same disabling also be performed on the client side by the ORMI code. I have received a test patch from development to try out, but haven't had the chance yet.
    The customer's workaround in the interim was to disable this TCP Acknowledgement at the OS level by modifying a Windows registry setting as noted above.
    See Also http://support.microsoft.com/default.aspx?kbid=328890
    "New registry entry for controlling the TCP Acknowledgment (ACK) behavior in Windows XP and in Windows Server 2003", which documents that the registry entry to disable this acknowledgement has a different name in Windows XP and Windows Server 2003.
    Hope this info helps. It would be useful to hear back from you on whether this helps your performance issue.
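The application-side idea mentioned in the bug notes (java.net.Socket.setTcpNoDelay(true)) can be sketched as follows. Note that this disables Nagle's algorithm on the Java side, while the TcpDelAckTicks registry change addresses delayed ACK at the OS level; the loopback socket here exists only to make the demo self-contained, since in the BC4J case the socket belongs to the ORMI layer.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: disabling Nagle's algorithm on a client socket, as suggested in
// the bug notes. The loopback ServerSocket is only a self-contained demo.
public class NoDelaySketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0); // ephemeral local port
             Socket client = new Socket("localhost", server.getLocalPort())) {
            client.setTcpNoDelay(true); // send small writes immediately, no coalescing
            System.out.println("TCP_NODELAY = " + client.getTcpNoDelay());
        }
    }
}
```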

  • Ask the Experts: IOS-XR Fundamentals and Architecture

    Welcome to the Cisco Support Community Ask the Expert conversation. 
    Learn and ask questions about IOS-XR Fundamentals and Architecture.
    November 18, 2014 through November 28, 2014.
    Cisco IOS XR Software is a modular and fully distributed network operating system for service provider networks. Cisco IOS XR creates a highly available, highly secure routing platform.
    It distributes processes across the control, data, and management planes with their own access controls and delivers routing-system scalability, service isolation, and manageability.
    This is a Q&A extension of the Live expert Webcast.
    Cisco subject matter experts Sudeep, Raj, and Sudhir will focus on IOS-XR fundamentals, including:
    High-Level Overview of Cisco IOS XR
    Cisco IOS XR Infrastructure
    Configuration Management
    Cisco IOS XR Monitoring and Operations
    Cisco IOS XR Security
    Introduction to different IOS-XR platforms
    Sudeep Valengattil is a customer support engineer in High-Touch Technical Services at Cisco specializing in service provider technologies and platforms. Sudeep has experience on XR platforms such as the ASR 9000, CRS, NCS, and GSR. Sudeep has more than 9 years of experience in the IT industry and holds CCIE certification (36098) in Service Provider.
    Sudhir Kumar is a customer support engineer in High-Touch Technical Services at Cisco specializing in service provider technologies and platforms. His areas of expertise include Cisco CRS, ASR 9K and Cisco XR 12000 Series Routers. Sudhir has more than 10 years of experience in the IT industry and holds CCIE certification (35219) in Service provider and Routing and switching.
    Raj Pathak is a customer support engineer in High-Touch Technical Services at Cisco specializing in service provider technologies and platforms. He serves as a support engineer for technical issues supporting Cisco IOS XR Software customers on Cisco CRS and Cisco XR 12000 Series Routers. Raj has more than 8 years of experience in the IT industry and holds CCIE certification (38760) in routing and switching.
    For more information about this topic, visit the Expert Corner > Knowledge Sharing
    Remember to use the rating system to let the experts know if you have received an adequate response.

    Hi Charles,
    To answer your question,
    LPTS acts only on packets/traffic ingressing the router and destined for the router itself ("for-us" packets). It provides an internal forwarding table that routes control/management protocol packets destined for the local router to the right application for further processing. When a packet enters an interface, the network processor performs a lookup to determine whether the packet is destined for us; if so, it forwards the packet to LPTS. For example, ICMP packets coming in on an interface with a destination IP of the router itself would be processed by LPTS. LPTS also transparently provides a policing function for this traffic.
    Key facts about LPTS
    1. LPTS is an always-on feature. No user configuration is needed to enable it.
    2. LPTS applies only to traffic entering the router and destined for the local router, covering control-plane and management-plane traffic.
    3. Packets originated by the router and transit traffic are not processed by LPTS.
    4. LPTS polices the incoming traffic based on pre-defined policer rates.
    Here is an output snippet showing the LPTS entries.
    RP/0/RP0/CPU0:CRS-C#sh lpts pifib hard police loc 0/0/cpu0
    Tue Nov 25 23:32:10.666 EDT
    Node 0/0/CPU0:
    Burst = 100ms for all flow types
    FlowType               Policer  Type    Cur. Rate  Def. Rate  Accepted  Dropped
    unconfigured-default   100      Static  500        500        0         0
    L2TPv2-fragment        185      Static  700        700        0         0
    Fragment               106      Static  1000       1000       0         0
    OSPF-mc-known          107      Static  20000      20000      44818     0
    OSPF-mc-default        111      Static  5000       5000       11366     0
    Do let us know if you have any further queries.
    Regards,
    Sudeep Valengattil

  • Help needed in architecture

    We have to implement a module which is visualized as a Java process. A huge volume of transactions is expected to come in, and these need to be processed at high speed (a response time of 5 seconds or so). This module should be able to support non-Java clients (in this case, the module is being invoked from the non-stop Tandem server). As part of deciding on the architecture, we need to choose the protocol used for communication - HTTP, TCP, IIOP, etc.
    The various options we have are
    1. Implement the module as a Web service
    2. As EJB session beans
    3. RMI (is this really an option?)
    4. A socket connection
    5. Any other?
    Please give your suggestions.
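For option 4, one language-neutral sketch is a plain TCP service with a simple length-prefixed wire format - any non-Java client (including the Tandem side) can produce the same framing. Class and message names below are illustrative only, and the echo reply stands in for real transaction processing.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch of option 4: a TCP service using a 4-byte big-endian length prefix
// followed by a UTF-8 payload, so non-Java clients can interoperate.
// The "ACK:" echo is a placeholder for real transaction processing.
public class SocketServiceSketch {

    public static String roundTrip(String payload) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral demo port
            Thread service = new Thread(() -> {
                try (Socket s = server.accept();
                     DataInputStream in = new DataInputStream(s.getInputStream());
                     DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
                    byte[] body = new byte[in.readInt()];   // read framed request
                    in.readFully(body);
                    byte[] reply = ("ACK:" + new String(body, StandardCharsets.UTF_8))
                            .getBytes(StandardCharsets.UTF_8);
                    out.writeInt(reply.length);             // frame the reply
                    out.write(reply);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            service.start();

            // Client side of the same framing.
            try (Socket c = new Socket("localhost", server.getLocalPort());
                 DataOutputStream out = new DataOutputStream(c.getOutputStream());
                 DataInputStream in = new DataInputStream(c.getInputStream())) {
                byte[] msg = payload.getBytes(StandardCharsets.UTF_8);
                out.writeInt(msg.length);
                out.write(msg);
                byte[] reply = new byte[in.readInt()];
                in.readFully(reply);
                return new String(reply, StandardCharsets.UTF_8);
            } finally {
                service.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("TXN-001"));
    }
}
```

The length-prefix framing is the key design choice: it makes message boundaries explicit on a byte stream, which every listed alternative (web service, EJB, RMI) otherwise solves for you at higher cost.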

    Are you the same person as in this thread?: http://forum.java.sun.com/thread.jspa?threadID=688792

  • Operation of Search architecture.

    Good morning MS community,
    So far I have gone through the SharePoint Server 2013 search architecture:
    http://technet.microsoft.com/en-us/library/cc263199%28v=office.15%29.aspx
    Afterward, I interpreted the diagram into the description below. This is the operation of SharePoint 2013 search as I understand it.
    My question is: "Could you please advise me whether I have understood the SharePoint 2013 'search architecture' correctly, based on my analysis below?"
    ******************* My analysis of "Search operation " *****************************
    1. The user enters a string of information that he/she wants to look for. The string is sent to the query processing component.
    2. Query processing components will:
    - Analyze the Search queries.
    - Perform linguistic processing. Such as "word-breaking" and "stemming".
    - Optimize precision, recall, relevancy.
    - Submit to index component.
    * Later on, the index component will gather data from 2 sources:
    - The crawl and content processing components.
    - The analytics process.
    3. Gathering data from the crawl and content processing components.
    3.1.
    Crawl component will collect crawling content sources: invoke connectors/ protocol handler to interact with content source to retrieve data.
    - Type of data:
    + Actual data.
    + Metadata.
    - At the same time, the crawl components will:
    + Store information about crawled items on crawled DB. Crawl component will write the following information to crawl Database:
    * Last crawl time.
    * Last crawl ID.
    * Type of updating during last crawl.
    + Track crawl history.
    => After successfully gathering information from the content source, the crawled data is sent to the "content processing component".
    3.2. Content processing component:
    - Transform crawled items into "artifacts" that can be included in "search index".
    + How to transform: performing some operation such as "document parsing
    and property mapping".
    - Perform linguistic process, such as "language detection + entity extraction".
    - At the same time, content processing component will write information about "links + URL" to the links database.
    => When finished, the content processing component sends the data to the index component.
    4. Gathering data from Analytic process.
    * Explain about components in this part:
    4.1. Analytic process component.
    - Operations: search analytic + usage analytic.
    - Aims: improve search relevance, create search reports, and generate recommendations and deep links.
    4.1.1. Search analytic:
    - Extracting information
    + Link
    + Number of times clicked.
    + Anchor text.
    + Data related to people.
    + Metadata.
    4.1.2. Usage analytic:
    - Analyze usage log information (retrieved from the front-end via the event store).
    => Generate usage and statistics reports, stored in the "analytics reporting database".
    4.2. Analytic reporting database.
    - Result of usage analytics: usage + statistic reports.
    4.3. Link database.
    - Store information of:
    + search clicks.
    + Number of times people click on a search result from the search result page.
    4.4. Event store:
    - Usage events ( such as number of times viewed).
    *** operation of "analytic processing component ***
    - Get information from "link DB", "analytic DB", "event store".
    - Extracting information (links, number of times an items clicked, anchor text, data related to people, metadata …)
    - Send information to "index components".
    - At the same time, write new information to "link DB" + "analytic DB".
    5. "index and query process"
    - Receive processed items from "content processing components" "analytic processing components"
    => Write those items to an index file.
    - Send back results to "query processing component".
    6. The "query processing component" returns the result set, based on the processed query, back to the front-end.
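To make steps 1-2 and 5-6 concrete, here is a toy model in Java: a crude suffix-stripping "stemmer" and an in-memory inverted index stand in for the real query processing and index components. None of the names below are SharePoint APIs; this is only an illustration of the flow.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the search flow: word-breaking + stemming in the "query
// processing component", then a lookup in the "index component".
public class SearchPipelineSketch {

    // Word-breaking plus a toy stemmer (strips a trailing "s").
    public static List<String> processQuery(String query) {
        List<String> terms = new ArrayList<>();
        for (String token : query.toLowerCase().split("\\s+")) {
            terms.add(token.endsWith("s") ? token.substring(0, token.length() - 1) : token);
        }
        return terms;
    }

    public static void main(String[] args) {
        // Stand-in for the search index built by the content processing component.
        Map<String, List<String>> invertedIndex = new HashMap<>();
        invertedIndex.put("report", List.of("doc1", "doc3"));
        invertedIndex.put("sale", List.of("doc2", "doc3"));

        // Steps 1-2: the user's string is broken and stemmed into terms.
        List<String> terms = processQuery("Sales Reports");

        // Steps 5-6: each term is looked up and the results returned.
        for (String term : terms) {
            System.out.println(term + " -> " + invertedIndex.getOrDefault(term, List.of()));
        }
    }
}
```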

    Hi,
    Search in SharePoint 2013 has been re-architected. You could refer Figure 1 from the link below for big picture of Search process and components:
    http://msdn.microsoft.com/en-us/library/office/jj163300(v=office.15).aspx
    For some basic concepts, you could refer to:
    http://technet.microsoft.com/en-us/library/jj219738(v=office.15).aspx
    If you would like more detailed information about search, you might need to contact a Microsoft support engineer.
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Remote Monitoring Latest Best Practice Architecture

    Hi guys,
    I've developed a few remote monitoring systems in the past. One of them used a PXI RT target and the rest used cRIOs. The approach and architecture were based on things I'd read on ni.com and this forum. In the process, there were many difficulties and some extensive troubleshooting exercises I needed to do. While the systems worked and met the users' requirements, the results didn't meet my own expectations. I was hoping the systems could be expanded (adding more cRIOs or PXIs) with ease and little or no re-programming effort. Anyway, 2-3 years have passed and opportunities with similar requirements have emerged. So, I would like to start thinking about the architecture at an early stage (i.e. now).
    In my past systems, I've used Shared Variables (SV) a lot - and it gave much much headache too. Some of the troubles I had were:
    1. I can't decide whether to lump all SV in one library and host them in one system, or to separate them into various libraries and systems... neither do I know what's the best approach, as I've read too many 'suggestions' and 'advices',
    2. Some of the SVs are from custom controls, and the controls are type-defs. When running the VI in RT with these SVs on the development platform, everything works smoothly, but when I compiled and deployed, the program didn't run. After extensive troubleshooting, I found that this had something to do with these SVs - because when I removed the type-defs from the custom controls and recreated my SVs, everything worked fine. I suspect this may have something to do with how I deploy, but after trying several approaches, the problem still persisted.
    3. The best and most common of all is unstable connectivity - it works today, but that doesn't guarantee it will work tomorrow. When the host PC changes, the same problems resurface. I read somewhere that I need to read or interface with the .alias file, but this works sometimes and other times the same problem persists.
    Attached is the most common architecture that I've used. I would like to move away from SVs as much as possible. If the application is 1:1, there's no problem, as I can easily use TCP/IP and Network Streams. However, my doubts and headaches come in when the RT:host communication is either 1:N, N:N or N:1. I've read on ni.com that there are various newer approaches to this, such as AMC (derived from UDP) and Web Services (or was it HTTP).
    I really appreciate it if you guys share your thoughts and advices here, please?
    Shazlan
    Attachments:
    Remote Mon Sys - Arch.pdf ‏27 KB

    Nick,
    I was not talking about the mgmt0 interface. The vlan that you are testing will have a link blocked between the two 3750 port-channel if the root is on the nexus vPC pair.
    Logically your topology is like this:
    Logically your topology is like this:

             Nexus Pair
              /      \
         3750-1------3750-2

    Since you have this triangle setup, one of the links will be in a blocking state for any VLAN configured on these devices.
    When you talk about vPC and L3, do you mean L3 routing protocols or just inter-VLAN routing?
    Inter-VLAN routing is fine. Running L3 routing protocols over the peer link and forming an adjacency with an upstream router using L2 links is not recommended. The following link should give you an idea of what I am talking about:
    http://bradhedlund.com/2010/12/16/routing-over-nexus-7000-vpc-peer-link-yes-and-no/
    HSRP is fine.
    As mentioned, the purpose of the tracking feature is to avoid black-holing traffic. It completely depends on your network setup. I don't think you would need to track all the interfaces.
    JayaKrishna

  • [CS3 Win] Doc-Observer, new architecture

    Hi all,
    so I finally get around to porting stuff to CS3. While things have gone smoothly until now, I'm obviously getting to the tougher part: the new command architecture.
    I've written a command to import text from a file. I guess I don't really need my own command here, but in CS2 I could do all the work needed at undo in the Undo() method of my command. Needless to say, that is obsolete now. So I figured I'd just broadcast changes on my own PMIID in the command's DoNotify() and use a doc observer with LazyUpdate(), since that will tell me of changes not only at Do() but also at Undo()/Redo() - I hope so at least. Unfortunately, attaching the observer according to the porting guide/API reference crashes InDesign.
    Here's some code; I've stripped it of nil-pointer checks for brevity.

    From DoNotify():

    IDocument* doc = Utils<ILayoutUIUtils>()->GetFrontDocument();
    InterfacePtr<IMCMImportTextData> importData(this, UseDefaultIID());
    InterfacePtr<ISubject> docSubject(doc, UseDefaultIID());
    ListLazyNotificationData<PMString>* lnData = new ListLazyNotificationData<PMString>;
    lnData->ItemChanged(importData->GetPath());
    docSubject->ModelChange(kMCMImportTextCmdBoss, IID_IMCMIMPORTTEXTDATA, this, lnData);

    IMCMImportTextData is an interface aggregated on my command boss to allow for data exchange; in this case it's a path to a file (I still need to change that to an IDFile due to the changes to PMString). This seems to work.

    Now the attach method on my doc observer - at which point InDesign crashes when opening the document:

    InterfacePtr<ISubject> iDocSubject(iDocument, UseDefaultIID());
    if (!iDocSubject->IsAttached(this, IID_IMCMIMPORTTEXTDATA, ISubject::kRegularAttachment)) {
        iDocSubject->AttachObserver(this, IID_IMCMIMPORTTEXTDATA, ISubject::kRegularAttachment);
    }
    if (!iDocSubject->IsAttached(this, IID_IHIERARCHY, ISubject::kRegularAttachment)) {
        iDocSubject->AttachObserver(this, IID_IHIERARCHY, ISubject::kRegularAttachment);
    }

    iDocument is an IDocument* passed as a parameter; I've also tried the various AttachmentTypes. If I do it the old-fashioned way (iDocSubject->AttachObserver(this, IID_IMCMIMPORTTEXTDATA, IID_MCMDOCOBSERVER)) it works, but then obviously only Update() will be called, not LazyUpdate(), which I'd prefer and need. I used IID_IHIERARCHY for test purposes, commenting out my own IID - unfortunately with the same result.
    Now I'm stuck - I'm not sure whether I'm missing something basic here, whether I just didn't get the documentation, or what.
    Thanks in advance for any help,
    Bernt

    The selection filter was mentioned for a very specific use case - Jelle had to notify an external application. For details see that other thread, but for you that approach is probably wrong anyway.
    Typically you follow the selection in order to update a widget - if not, please explain.
    For that case you would aggregate a selection observer (e.g. derived from ActiveSelectionObserver* ) on your widget or its parent panel. In the observer, you would _not_ override Update(), instead use the various Handle() methods.
    That notification should be sufficient for simple selection changes on page items, the selection should not need to follow multiple protocols. If there is a deeper sense behind your quoted enormous collection of protocols, then you might need a Suite with SelectionExt* which would preprocess all those notifications and reduce them to a single custom protocol. You'll probably need a Suite anyway unless you can reuse an existing one, to obtain a value for your widget.
    In other words, all the _DOCUMENT protocols are not related to the selection; they are just side effects. IID_ISELECTIONFILTER makes no sense. All those PATHSELECTION protocols are only relevant if you want to watch every change to shapes. The reference point again is a side effect - I think of scrolling / the active spread; it is very unlikely that you really need that notification.
    Btw, do not check for IsAttached() during AutoAttach. It just suppresses an ASSERT, better fix the reason (unbalanced Attach / Detach).
    * = search the SDK.
    Dirk

  • Display a PDF in a web architecture

    All interested Flex developers,
    I was asked to post this entire issue to Flex forum.
    Flash Web applications cannot directly display PDF images (unlike PNG, JPEG, GIF images).
    Our network architecture is:
    user browser <---- Flex web app <---- SOAP/WSDL protocol <---- backend PDF file server
    The Flex web app receives a Base64 PDF and decodes to a ByteArray[]
    A) Can anyone map a PDF ByteArray to JPEG or PNG?
    B) Can we write a temporary file.PDF in the web server?
    Then we could display the temporary file using a web iFrame.
    C) Does someone have a neat idea to display a PDF in our web apps without prompting the user?
    thanks,
    Medical Flash developers

    I agree that option A, mapping PDF bytes to bitmap bytes, would be a pretty intensive task.
    For option B,  there's a good post here about how you can upload a bytearray by loading the data manually into a FileReference: http://www.pavlasek.sk/devel/?p=10.  He even gives some code for a Java servlet that will accept the ByteArray.
    -Mike
    ramblingdeveloper.com
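On the server side of option B, persisting the decoded bytes to a temporary file is straightforward in Java; the web tier can then expose the resulting file to the iFrame. The names below are illustrative, and the placeholder bytes are not a valid PDF.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of option B's server side: persist the decoded PDF bytes to a
// temporary file that the web tier can then serve into an iFrame.
public class TempPdfSketch {

    public static Path storeTempPdf(byte[] pdfBytes) throws IOException {
        Path tmp = Files.createTempFile("report-", ".pdf");
        Files.write(tmp, pdfBytes);
        tmp.toFile().deleteOnExit(); // clean up when the server JVM exits
        return tmp;
    }

    public static void main(String[] args) throws IOException {
        byte[] placeholder = "%PDF-1.4 minimal".getBytes(); // not a real PDF
        Path p = storeTempPdf(placeholder);
        System.out.println(Files.size(p) + " bytes at " + p.getFileName());
    }
}
```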

  • MII Implementation Architecture

    Hi,
    Due to service maintenance costs for each server deployed, I've been asked to try limit the number of servers used for a multi-site MII roll out.  The MII system requirements are for Operator/Management Production Reports with a view for system integration at a later stage.  The current architecture options on the table are:
    1. Local - one instance per production site (Typical and my usual approach)
    2. Regional - one instance serving four or five Production Sites within close proximity (50 kms)
    3. Central - one instance serving all Production Sites Globally.
    While I've successfully been able to convince the necessary parties that option 3 is not an option, I'm finding it difficult to build up a convincing case for option 1 over option 2 (other than this is the official/preferred way - money talks I'm afraid ).
    My immediate reluctance for the regional approach is because:
    1. Increased communication overhead will impact on performance (esp. if interactive screen)
    2. Increased risk of communication failure to source production systems (located on each site).
    Point 1 is easy to test and measure, but Point 2 is what I'm having difficulty quantifying for this evaluation. This will be a 12.x installation, so Query Data Buffering will be available (Tag and SQL), but I haven't used it extensively within a production environment, so I'm not sure whether it's a recommended route to rely on. I'm also of the mind that it's better to avoid the problem than to "fix" it. Also, while the buffering is great for an integration/transactional environment, it doesn't help much with an operator screen/report - from the perspective of the operator waiting for data.
    Does anyone have any experience/views on the Regional Approach, in particular my concerns on the communication failure, or am I being over paranoid?
    Thanks.

    Hi, Lawrence.  Here's my view, for what it's worth...
    Since you're paying a license for each site anyway, it isn't a "license-based" cost decision - it's largely a question of the cost of administering multiple MII instances/servers and related hardware.  In the 11.X era, this cost was reasonably low.  With 12.X, it has increased a bit with the more frequent need for NW patches and management (or so I've been told by a few customers who I trust greatly).
    A few key considerations are performance/responsiveness, availability, and overall application manageability.  As I recall, the networking infrastructure in S.A. can be a challenge in some remote locations, with limited bandwidth ISDN or DSL connections.  If there will be a lot of "trending" views by the users, mostly against data local to their site, you'll be wasting an enormous amount of network bandwidth (and response time) shipping data up to the regional or central server and then all the way back to the user.  Also, there is always the question of availability, and the likelihood of a local server on a local network being down versus a central/regional server with intermittent outages is important to consider.
    One of the "hidden features" of MII that offers a good compromise solution is the "Virtual Server" (a special type of connector, not something like VMware). This approach allows you to have MII systems at each site handling communications to historians/databases, but also regional or central servers that can utilize these data connections remotely. Customers have benchmarked performance and generally found that accessing a historian from a regional server, for example, is far more efficient and faster if you use a Virtual Server connection to the historian than if you connect to it directly from the regional server. The reason is often that the binary protocol MII uses is more efficient/lean than the vendors' underlying protocols. Of course, you may find different results, but it is something to consider.
    Similarly, you might want to consider application segmentation/partitioning, whereby you could create very ad-hoc "engineering" applications on the local MII server at each site, and do the more "corporate oriented" dashboards, reports, and ERP integration activities on the regional or central servers.  This way you can get the best of both worlds.

  • Need help on TCP communiction architecture. Patterns, books?

    Hi
    I am currently developing a Java ME application that uses TCP for communication. Unfortunately, the binary protocol I have to implement on the client is half-duplex, whereas TCP is full-duplex. So I had to artificially limit and synchronize transmission and reception by using sync blocks, locks on mutex objects, and message queue decoupling. While implementing the protocol, new cases appeared step by step where I had to tweak the existing locking a bit. Now the protocol seems to work (no stress/system test yet), but to me the architecture is not as good as it could be and is probably a bit fragile. For the next release/refactoring I'm looking for a better way to solve the problem.
    There are different possible message flows:
    - Client transmits, Server sends ACK
    - Client is idle, Server requests data, Client sends ACK, Client sends response, Server sends ACK
    - Exceptions must be handled: Reception timeout, bad packet format etc. Then the packets must be resent...
    Right now, I have a main thread that will use a queue to store messages. When transmission starts, it will send messages. If it fails, it will roll back the queue and send again. Then it waits on a queue for the ACK using a timeout. There is a second thread that is used for reception. It waits on a blocking inputstream read, parses the messages and puts them into a reception queue.
    In this current architecture there are some problems: when receiving data, sometimes I have to handle and interpret the packets right away, without storing them in a queue, because I need to send an ACK immediately, which in turn depends on the outcome of the handler. But this could mess up an ongoing transmission/reception, so I have to synchronize it using a flow-control mutex object, which I don't think is a good way. A second problem is that when I'm sending the response messages to a server request, the reception thread is locked, because the response sending is already called by the reception thread itself and I'm deep down in the call stack. My work-around is to start a new thread to transmit the packets and wait for an ACK so the reception thread can go on receiving the ACK.
    But I don't have a good feeling about this.
    Is there anyone with a good approach or any hints on books or patterns about this? I bet there must be other people before me having the same problems... :)
    Thanks a lot!
    /Jan

    stelzbock wrote:
    Hi, thank you all for the valuable comments. I'll try to get to a bookstore to take a deeper look into that book, even though it's about Unix and I don't have anything to do with any *NIX OSes. But I could probably use some of the general architecture ideas.
    jschell wrote:
    You have a protocol and messages.
    The protocol layer handles the send/receive the message layer handles what messages to send and expected responses.
    Hmm, interesting, is the message layer higher than the protocol layer? I mean, what tasks do the message layer cover?
    Yes. Specifics depend on details of the actual specification.
    The message layer might use three methods provided by the protocol layer.
    - Send request and receive response
    - Send request
    - Receive response.
    The message layer constructs the message while the protocol layer handles the CRC (as an example.)
    "I need to send an ACK immediately which again depends on the outcome of the handler."
    I would like to think that you are mis-interpreting that.
    Normally something like that would be something like the following
    - Get the message
    - Verify the CRC
    - Send the ACK if the CRC is valid.
    "In a case like that it is still part of the protocol rather than the message flow."
    Generally, I would agree. But what I am doing is an extremely resource-optimized embedded protocol that runs via TCP but could also run over RS232 or CAN or anything else. Its packet header contains only 3 bytes, and the protocol supports dedicated messages that are used, for instance, to set a configuration item on a remote device. In this case, the message content must be parsed and interpreted before sending an ACK, because the ACK acknowledges not only the header but the payload as well.
    Still depends on the specifics.
    But then the ACK is a response and nothing more. At the message level you can sequence it as
    - Hold protocol (method provided by protocol that dedicates socket to this flow)
    - Send request and receive response
    - process request
    - Send Ack response (protocol layer can actually encapsulate ACK or message layer can do it.)
    - Release protocol (releases the socket.)
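The split described above can be sketched in a few lines of Java. This is an illustrative sketch only, not any real API: the class and method names are invented, and CRC32 stands in for whatever checksum the actual specification uses. The point is that the protocol layer owns framing and CRC verification (so a CRC-level ACK/NAK can be decided there), while the message layer only decides what to send.

```java
// Hypothetical sketch of the layering discussed above: protocol layer owns
// framing/CRC, message layer owns message construction. Names are invented.
import java.util.zip.CRC32;

class ProtocolLayer {
    // Frame = payload with a 4-byte CRC appended by the protocol layer.
    byte[] frame(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        long c = crc.getValue();
        byte[] out = new byte[payload.length + 4];
        System.arraycopy(payload, 0, out, 0, payload.length);
        for (int i = 0; i < 4; i++) out[payload.length + i] = (byte) (c >>> (8 * i));
        return out;
    }

    // Verify the CRC. This lives in the protocol layer, so a CRC-level
    // ACK/NAK can be sent without parsing the payload at all.
    boolean verify(byte[] frame) {
        if (frame.length < 4) return false;
        byte[] payload = java.util.Arrays.copyOf(frame, frame.length - 4);
        long stored = 0;
        for (int i = 0; i < 4; i++)
            stored |= (frame[frame.length - 4 + i] & 0xFFL) << (8 * i);
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue() == stored;
    }
}

class MessageLayer {
    private final ProtocolLayer protocol = new ProtocolLayer();

    // The message layer decides *what* to send; framing is delegated.
    byte[] buildSetConfigRequest(String key, String value) {
        return protocol.frame((key + "=" + value).getBytes());
    }
}
```

When the spec requires the ACK to cover the payload as well (as in the embedded protocol described above), the message layer would call back down through the protocol layer to send the ACK after interpreting the content, rather than the protocol layer sending it on CRC success alone.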

  • Which is more in line with MVC architecture with Struts?

    Hello all
    When using the MVC Model 2 architecture, the JSP's are the view, servlets the control, and the beans are the model. If we say that a control method should represent a specific use case, then in theory, you should be able to call the control method from any interface to request that a specific use case be performed, whether it be over a simple socket connection receiving bytes, or using HTTP.
    However, when using jsp's/servlets, if thr servlet is the control, then it means that the interface must make the request using HTTP and contain a request/response object. But supposing you wanted to change the interface to request the same use case, but makes an http request but supplying XML (instead of several request parameters) which contains the request data, you cannot then simply use the same servlet use-case.
    So what is the solution? If you write another servlet to handle the different request format (XML) it copies a lot of the control code from the other servlet which is a bit messy. Or, would it be correct to write a seperate Controller class (standard Java class), which would contain a set of related use cases, and are called by the servlet. Each use case (which would be a method call in the controller class), would take in its parameter list the exact type and data it needs to complete the use case. In this case the servlets are simply pulling data from the HttpRequest object, converting them to the correct java type to be passed to the controller class you create.
    This introduces an extra layer; the servlet now sits between the request interface and control. It means that the control methods can be called from any type of interface, but is it the right way of doing things, and how would the new control objects be held in the servlet?
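The extra layer described above can be sketched as a plain Java controller with typed parameters, fronted by thin adapters, one per request format. This is only an illustration under invented names (there is no real `AccountController` here); a real servlet adapter would pull the values from `HttpServletRequest`, and an XML adapter would parse the document, but both would call the same controller method.

```java
// Illustrative sketch: one use case = one method with typed parameters,
// no HTTP types. All names here are invented for the example.
import java.util.Map;

class AccountController {
    String openAccount(String owner, double initialDeposit) {
        if (initialDeposit < 0) throw new IllegalArgumentException("negative deposit");
        return "account opened for " + owner + " with " + initialDeposit;
    }
}

// Adapter for a form-style request (simulating servlet request parameters).
// The string-to-type conversion lives here, not in the controller.
class FormAdapter {
    private final AccountController controller;
    FormAdapter(AccountController c) { this.controller = c; }

    String handle(Map<String, String> params) {
        return controller.openAccount(params.get("owner"),
                                      Double.parseDouble(params.get("deposit")));
    }
}
```

An XML adapter would be a second small class doing its own parsing and calling `openAccount` with the same typed arguments, so none of the control code is duplicated.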
    Please could someone give their opinion on which they think is the best way of architecting this?
    Many thanks,
    Shaun.

    Shaun,
    I'm going through the same issues as I try to build my own MVC framework. Struts is useful, but does not cover everything. If you're interested, I've found that the book "Core J2EE Patterns - Best Practices and Design Strategies" by Alur, Crupi and Malks is very helpful. It contains design patterns for all the various tiers. It does not describe a framework, just a set of patterns from which you can pick and choose.
    In the example you describe, one of the applicable patterns is the "Session Facade" which is basically a high-level business interface. The goal is to hide the complexity of the entire business API from the client. The book recommends each facade to correspond to a related set of use cases. e.g. methods in one facade could include OpenAccount, CloseAccount, GetBalance etc. Implementation would be Java classes.
    This facade should be independent of the request protocol and could be used for HTTP, by a Java application, by a web service etc. Usually the facade classes would be located close to the business objects to minimize network delay and traffic.
    In your example, the controller servlet (Struts Action) would invoke services from the Session Facade.
    You're right about this introducing an extra layer. Depending on your present and future needs, you can end up with others such as abstracting the persistence layer. The trade-off is between up-front effort and future flexibility.
    You ask how to reference the new objects. In my case, the initialization servlet calls a factory class method to get references to the facades. These references are stored in an application-specific object that is added as a ServletContext attribute for use by other controller servlets.
    I know this doesn't fully answer your question, but hopefully it helps a little.
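A minimal Session Facade in the spirit of the pattern described above might look like the following. The account operations, the in-memory storage, and the factory are all invented for illustration; a real facade would sit in front of your business objects, and the reference would typically be stored as a ServletContext attribute as described.

```java
// Sketch of a Session Facade: one coarse-grained interface per related set
// of use cases, hiding the business objects behind it. Names are invented.
import java.util.HashMap;
import java.util.Map;

interface AccountFacade {
    void openAccount(String id);
    void deposit(String id, double amount);
    double getBalance(String id);
}

class InMemoryAccountFacade implements AccountFacade {
    private final Map<String, Double> balances = new HashMap<>();

    public void openAccount(String id) { balances.put(id, 0.0); }

    public void deposit(String id, double amount) {
        balances.merge(id, amount, Double::sum);
    }

    public double getBalance(String id) {
        return balances.getOrDefault(id, 0.0);
    }
}

// Factory in the style described above: controllers look the facade up once
// (e.g. via a ServletContext attribute) instead of constructing it themselves.
class FacadeFactory {
    private static final AccountFacade INSTANCE = new InMemoryAccountFacade();
    static AccountFacade accountFacade() { return INSTANCE; }
}
```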

  • Net8 Protocol

    Where do I find the Net8 protocol architecture and complete documentation? Is it a published protocol or not?


  • Oracle RAC 2 node architecture-- Node -2 always gets evicted

    Hi,
    I have an Oracle RAC DB with a simple 2-node architecture (host RHEL 5.5 x86_64). The problem we are facing is that whenever there is a network failure on either node, it is always node 2 that gets evicted (rebooted). We do not see any abnormal errors in the alert.log file on either node.
    The steps followed and results are:
    **Node-1#service network restart**
    **Result: Node-2 evicted**
    **Node-2# service network restart**
    **Result: Node-2 evicted**
    I would like to know why node 1 never gets evicted even if the network is down or restarted on node 1 itself. Is this normal?
    Regards,
    Raj

    Hi,
    Please find the output below:
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 50% heartbeat fatal, removal in 14.120 seconds
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) is impending reconfig, flag 132108, misstime 15880
    2011-06-03 16:36:02.817: [    CSSD][1216194880]clssnmPollingThread: local diskTimeout set to 27000 ms, remote disk timeout set to 27000, impending reconfig status(1)
    2011-06-03 16:36:05.994: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 760 > margin 750 cur_ms 1480138014 lastalive 1480137254
    2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:07.493: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:08.084: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 850 > margin 750 cur_ms 1480140104 lastalive 1480139254
    2011-06-03 16:36:09.831: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 75% heartbeat fatal, removal in 7.110 seconds
    2011-06-03 16:36:10.122: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 880 > margin 750 cur_ms 1480142134 lastalive 1480141254
    2011-06-03 16:36:11.112: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 860 > margin 750 cur_ms 1480143124 lastalive 1480142264
    2011-06-03 16:36:12.212: [    CSSD][1132276032]clssnmvSchedDiskThreads: DiskPingMonitorThread sched delay 950 > margin 750 cur_ms 1480144224 lastalive 1480143274
    2011-06-03 16:36:12.487: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:12.487: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:13.840: [    CSSD][1216194880]clssnmPollingThread: local diskTimeout set to 200000 ms, remote disk timeout set to 200000, impending reconfig status(0)
    2011-06-03 16:36:14.881: [    CSSD][1205705024]clssgmTagize: version(1), type(13), tagizer(0x494dfe)
    2011-06-03 16:36:14.881: [    CSSD][1205705024]clssgmHandleDataInvalid: grock HB+ASM, member 2 node 2, birth 21
    2011-06-03 16:36:17.487: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:17.487: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:22.486: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:22.486: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: [network] failed recv attempt endp 0x2eb80c0 [0000000001fed69c] { gipcEndpoint : localAddr 'gipc://prddbs01:80b3-6853-187b-4d2e#192.168.7.1#33842', remoteAddr 'gipc://prddbs02:gm_prddbs-cluster#192.168.7.2#60074', numPend 4, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x1e10, pidPeer 0, flags 0x2616, usrFlags 0x0 }, req 0x2aaaac308bb0 [0000000001ff4b7d] { gipcReceiveRequest : peerName '', data 0x2aaaac2e3cd8, len 10240, olen 0, off 0, parentEndp 0x2eb80c0, ret gipc
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos op : sgipcnTcpRecv
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos dep : Connection reset by peer (104)
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos loc : recv
    2011-06-03 16:36:23.162: [ GIPCNET][1205705024]gipcmodNetworkProcessRecv: slos info: dwRet 4294967295, cookie 0x2aaaac308bb0
    2011-06-03 16:36:23.162: [    CSSD][1205705024]clssgmeventhndlr: Disconnecting endp 0x1fed69c ninf 0x2aaab0000f90
    2011-06-03 16:36:23.162: [    CSSD][1205705024]clssgmPeerDeactivate: node 2 (prddbs02), death 0, state 0x80000001 connstate 0x1e
    2011-06-03 16:36:23.162: [GIPCXCPT][1205705024]gipcInternalDissociate: obj 0x2eb80c0 [0000000001fed69c] { gipcEndpoint : localAddr 'gipc://prddbs01:80b3-6853-187b-4d2e#192.168.7.1#33842', remoteAddr 'gipc://prddbs02:gm_prddbs-cluster#192.168.7.2#60074', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x1e10, pidPeer 0, flags 0x261e, usrFlags 0x0 } not associated with any container, ret gipcretFail (1)
    2011-06-03 16:36:32.494: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:37.493: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:37.494: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:40.598: [    CSSD][1216194880]clssnmPollingThread: node prddbs02 (2) at 90% heartbeat fatal, removal in 2.870 seconds, seedhbimpd 1
    2011-06-03 16:36:42.497: [    CSSD][1226684736]clssnmSendingThread: sending status msg to all nodes
    2011-06-03 16:36:42.497: [    CSSD][1226684736]clssnmSendingThread: sent 5 status msgs to all nodes
    2011-06-03 16:36:43.476: [    CSSD][1216194880]clssnmPollingThread: Removal started for node prddbs02 (2), flags 0x20000, state 3, wt4c 0
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: Initiating sync 178830908
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssscUpdateEventValue: NMReconfigInProgress val 1, changes 57
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: local disk timeout set to 27000 ms, remote disk timeout set to 27000
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: new values for local disk timeout and remote disk timeout will take effect when the sync is completed.
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmDoSyncUpdate: Starting cluster reconfig with incarnation 178830908
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetupAckWait: Ack message type (11)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetupAckWait: node(1) is ALIVE
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908), indicating EXADATA fence initialization complete
    2011-06-03 16:36:43.476: [    CSSD][1237174592]List of nodes that have ACKed my sync: NULL
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmWaitForAcks: Ack message type(11), ackCount(1)
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: Node prddbs01, number 1, is EXADATA fence capable
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssscUpdateEventValue: NMReconfigInProgress val 1, changes 58
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: local disk timeout set to 27000 ms, remote disk timeout set t:
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Sending Event(2), type 2, incarn 178830907
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Node[1] state = 3, birth = 178830889, unique = 1305623432
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmQueueClientEvent: Node[2] state = 5, birth = 178830907, unique = 1307103307
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleSync: Acknowledging sync: src[1] srcName[prddbs01] seq[73] sync[178830908]
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmSendAck: node 1, prddbs01, syncSeqNo(178830908) type(11)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmStartNMMon: node 1 active, birth 178830889
    2011-06-03 16:36:43.476: [    CSSD][1247664448]clssnmHandleAck: src[1] dest[1] dom[0] seq[0] sync[178830908] type[11] ackCount(0)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmStartNMMon: node 2 active, birth 178830907
    2011-06-03 16:36:43.476: [    CSSD][1240850064]NMEVENT_SUSPEND [00][00][00][06]
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSendSync: syncSeqNo(178830908), indicating EXADATA fence initialization complete
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmUpdateEventValue: CmInfo State val 5, changes 190
    2011-06-03 16:36:43.476: [    CSSD][1237174592]List of nodes that have ACKed my sync: 1
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmSuspendAllGrocks: Issue SUSPEND
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmWaitForAcks: done, msg type(11)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion:node1 product/protocol (11.2/1.4)
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion: properties common to all nodes: 1,2,3,4,5,6,7,8,9,10,11,12,13,14
    2011-06-03 16:36:43.476: [    CSSD][1237174592]clssnmSetMinMaxVersion: min product/protocol (11.2/1.4)
    2011-06-03 16:36:43.476: [    CSSD][1240850064]clssgmQueueGrockEvent: groupName(IG+ASMSYS$USERS) count(2) master(1) event(2), incarn 22, mbrc 2, to member 1, events 0x0, state 0x0
    2011-06-03 16:36:43.477: [    CSSD][1237174592]clssnmSetMinMaxVersion: max product/protocol (11.2/1.4)
    2011-06-03 16:36:43.477: [    CSSD][1237174592]clssnmNeedConfReq: No configuration to change
    etc.etc....
    Let me know if any other logfile required. No unususal messages on /var/log/messages.
    Regards,
    Raj

  • Application Architecture Suggestions

    Hi,
    I'm currently writing a library/application which will integrate with a number of systems. The application will be used by other systems to generate HTML forms, for which there are definitions stored in a database, and to store the information saved in the form by system users.
    I'm new to Java EE and was wondering if anybody could point me in the general direction of technologies (or architecture) which I should be using (I'll define some rules below for what is required) and suggest tools which I could use. I'm currently using JBoss AS 1.5 along with JDK 1.6, MySQL 1.5 and Eclipse for an IDE.
    Here are some requirements for the system:
    - Must have a web interface to allow users to define a form (which will be stored in a database)
    - Must be able to dispense forms for use within a system
    - Must either be hosted in the application (i.e. a code library + database hosted in a system) or be a stand alone application which has its own database and which can tightly integrate with a system)
    - Must appear to the user that the form is tightly integrated in the system (e.g. looks the same, does not need to open in a pop-up window...)
    - Must be able to populate form fields with information stored in the system (I was thinking of web services for this part)
    - Must be able to store information entered in form in the system (e.g. a date of birth...) (Again I was thinking of web services for this part)
    I could do with some suggestions on possible architectures of how this should all fit together.
    I know this is a big ask but I'm close to giving up because I'm not sure where to start.
    Thanks in advance,
    J.Love

    J.Love wrote:
    Hi,
    I'm currently writing a library/application which will integrate with a number of systems. The application will be used by other systems to generate HTML forms, for which there are definitions stored in a database, and to store the information saved in the form by system users.
    I'm new to Java EE and was wondering if anybody could point me in the general direction of technologies (or architecture) which I should be using (I'll define some rules bellow for what is required) and to suggest tools which I could use. I'm currently using JBoss AS 1.5 along with the JDK 1.6, MySQL 1.5 and Eclipse for an IDE.
    JBoss AS 1.5? You should at least be on 4.2.x or 5.x.
    Here are some requirements for the system:
    - Must have a web interface to allow users to define a form (which will be stored in a database)
    Pick an MVC framework of your choosing: Spring MVC, JSF 2.0, Grails, etc. When you say "users to define a form", what does that mean? That users will drag and drop UI elements in a designer and then publish a web form for others to use?
    - Must be able to dispense forms for use within a system
    Again, I'm not clear on what you mean here.
    - Must either be hosted in the application (i.e. a code library + database hosted in a system) or be a stand alone application which has its own database and which can tightly integrate with a system
    JBoss can handle this for you. Package everything appropriately as a WAR or EAR, preferably with a build tool such as Ant or Maven.
    - Must appear to the user that the form is tightly integrated in the system (e.g. looks the same, does not need to open in a pop-up window...)
    Again, is this a form the user has designed, or simply one the user fills in that you have designed?
    - Must be able to populate form fields with information stored in the system (I was thinking of web services for this part)
    "Information stored in the system" can mean the filesystem, a database, or a network resource somewhere else. The web service part is simply how clients will invoke your services (do a search on service-oriented architecture). Personally, I would opt not for a full web service but for something simpler, like REST.
    - Must be able to store information entered in the form in the system (e.g. a date of birth...) (Again I was thinking of web services for this part)
    Separate the protocol and format you are using (e.g., SOAP web service, JSON over REST, XML over HTTP POST) from the storage requirement. You will generally have a controller receive a request and forward it to a service (or have the service receive the request directly if there is no UI), which then delegates to a data access object to fetch and store data. See the MVC pattern generally and the data access object pattern specifically.
    I could do with some suggestions on possible architectures of how this should all fit together.
    See the above. MVC is your first high-level separation. Then you likely want a service tier (see SOA). If you think it is merited, you can implement your service and controller tiers with the Command pattern, though this is optional. Don't worry so much about protocols and standards. Get things working first. Then worry about how clients will communicate with you.
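The controller -> service -> DAO flow outlined above can be sketched in compressed form, independent of any web framework. The form-storage domain and every name below are hypothetical, chosen only to match this thread's subject; a real controller would additionally decode JSON/XML/HTTP parameters before delegating.

```java
// Hypothetical controller -> service -> DAO sketch; all names are invented.
import java.util.HashMap;
import java.util.Map;

interface FormDao {                      // data access object: storage only
    void save(String formId, String data);
    String load(String formId);
}

class InMemoryFormDao implements FormDao {
    private final Map<String, String> store = new HashMap<>();
    public void save(String formId, String data) { store.put(formId, data); }
    public String load(String formId) { return store.get(formId); }
}

class FormService {                      // service tier: business rules
    private final FormDao dao;
    FormService(FormDao dao) { this.dao = dao; }

    void submit(String formId, String data) {
        if (data == null || data.isEmpty())
            throw new IllegalArgumentException("empty submission");
        dao.save(formId, data);
    }
}

class FormController {                   // protocol-facing layer
    private final FormService service;
    FormController(FormService s) { this.service = s; }

    String handleSubmit(String formId, String payload) {
        service.submit(formId, payload);
        return "stored";
    }
}
```

Swapping `InMemoryFormDao` for a JDBC-backed implementation later touches only the DAO, which is the point of the layering: get it working in memory first, then move storage to the database.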
    I know this is a big ask but I'm close to giving up because I'm not sure where to start.
    Hence, take it in small pieces. Get things working in memory. Then save things to the database. Then accept requests to do so from a web page. Then start choosing how you want things to work as a whole. If you are totally stuck, bottom-up often works. At some point, you will have to revisit things and go top-down. There's no right or wrong answer, just start coding and have an idea in your head (or on a napkin) of how things should flow and work. Use bubbles and arrows and nouns and verbs. Then translate those one at a time into working code. Wash, rinse, repeat.
    Thanks in advance,
    J.Love
    Best of luck.
    - Saish
