Best practice: Extend client re-connect attempts when cluster down?

Hi,
We have a basic extend TCP client that uses a few continuous queries to display data updated in a cluster. Due to the nature of the system, it is possible for a client to be started before the cluster (and extend proxies) are available. When this happens, the client throws the usual connection error while trying to establish the initial connection, but then doesn't attempt to reconnect after that. Once the cluster is up, the client has to be restarted. Is there any way to configure the client to re-attempt connections at specified intervals?
Regards.

Hi,
Maybe you can catch the exception, pause programmatically, and then restart along these lines:
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
public class ShutdownInvocable extends AbstractInvocable {
    public static class ShutdownTask implements Runnable {
        public void run() {
            CacheFactory.shutdown();
        }
    }
    public void run() {
        new Thread(new ShutdownTask()).start();
    }
}
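For the pause-and-retry part on the client side, a minimal sketch along these lines might work (the cache name "example" and the 10-second interval are just placeholders, not from your setup):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RetryingClientStartup {
    // Keep trying to obtain the cache until the extend proxy is reachable.
    public static NamedCache connectWithRetry(String cacheName, long retryMillis)
            throws InterruptedException {
        while (true) {
            try {
                // getCache() triggers the initial proxy connection for an extend client
                return CacheFactory.getCache(cacheName);
            } catch (RuntimeException e) {
                System.err.println("Connection failed, retrying in "
                        + retryMillis + " ms: " + e.getMessage());
                CacheFactory.shutdown(); // release any half-initialized services
                Thread.sleep(retryMillis);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        NamedCache cache = connectWithRetry("example", 10000L);
        // ... register continuous queries once connected
    }
}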
Hope it helps.
Regards,
Cris

Similar Messages

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on Windows Server OS and providing the database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage as all the Report Developers had a standard System DSN called "APP_DATA" which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD we did not have to change any "Database Connection" info as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL queries to the DB, with the fewest issues and the best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-Party Vendor into the mix may lead to support issues if we have problems with results or speeds of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some information.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by using Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that stays the same across all environments.
    The database name will then resolve differently depending on the environment and therefore point to a different database.
    The second option is to change the connection in .rpt files in an automated way, such as with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of .rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; a few lines of code can change all the reports in one pass.
    After several Linux-to-Oracle implementations I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but with larger volumes you will see the difference.

  • Best Practices to configure the connectivity with Bank and EBS

    Hi All,
    We are working on a requirement where Middleware(SOA) has to read the file from Oracle E-biz server and send it to BANK.
    For the connectivity setup between Oracle E-Biz and Middleware (SOA), and between Middleware (SOA) and the bank, do we need to use the oracle user, or can we use any user to communicate between E-Biz and the bank?
    Could you please share best practices/documents, if you have any, on how we should establish the connectivity and set up the workflow with the bank and E-Business Suite.
    Thanks & Regards
    Narendra

    Hi Narendra,
    Oracle User and the FTP user are 2 different users.
    I'm assuming you'll be reading the file from R12 through File Adapter and writing it to Bank using FTP Adapter.
    Oracle User is able to login into R12, do some operations, submit some concurrent programs/requests based on responsibilities and generate the file to be transferred (like in my case it did by running a Concurrent Request). The file so generated should be placed at a location from where File Adapter can read it within the BPEL process. Now to read the file, the user that is used is a SOA server user (again different from R12 user). This is the same user that you use to login into your SOA server physical box. Hence to be able to read the file, your file should have appropriate privileges (we set that as 777) so that it can be read by the SOA process (using SOA user).
    FTP user, on the other hand, is the user that allows connection to Bank FTP server. This has absolutely no connection with R12. Bank who hosts the FTP server must give you the FTP user details that you'll use inside your FTP JNDI Configuration on Weblogic. When you deploy and run your process (you don't deploy adapter), it picks up the connection details from FTP JNDI properties that you defined in weblogic.
    Hence both the users can be different, and I don't think any best practices are required or exist for this.
    Regards,
    Neeraj Sehgal

  • Best practice for client-server(Socket) application

    I want to build a client-server application
    1) On startup.. client creates connection to Server and keeps reading data from server
    2) Server keeps on sending different messages
    3) Based on messages(Async) from server client view has to be changed
    I tried different approaches and ended up facing an IllegalStateException while updating the GUI.
    So what is the best way to do this?
    Please give a working example.
    Thanks,
    Vijay
    Edited by: 844427 on Jan 12, 2012 12:15 AM
    Edited by: 844427 on Jan 12, 2012 12:16 AM

    Hi EJP,
    Thanks for the suggestion ,
    Here is sample code :
    public class Lobby implements LobbyModelsChangeListener {
        Stage stage;
        ListView<String> listView;
        ObservableList ol;

        public Lobby(Stage primaryStage) {
            stage = primaryStage;
            ProxyServer.startReadFromServer();             // connects to the socket server
            ProxyServer.addLobbyModelChangeListener(this); // so that any data from the server is fed to Lobby
            init();
        }

        private void init() {
            ProxyServer.getLobbyList(); // send request
            ol = FXCollections.observableArrayList("Loading Data...");
            ol.addListener(new ListChangeListener() {
                @Override
                public void onChanged(Change change) {
                    listView.setItems(ol);
                }
            });

            Group root = new Group();
            stage.setScene(new Scene(root));
            listView = new ListView<String>();
            listView.maxWidth(stage.getWidth());
            listView.setItems(ol);
            listView.getSelectionModel().setSelectionMode(SelectionMode.SINGLE);
            listView.setOnMouseClicked(new EventHandler<MouseEvent>() {
                @Override
                public void handle(MouseEvent t) {
                    // ListView lv = (ListView) t.getSource();
                    new NewPage(stage);
                }
            });
            root.getChildren().add(listView);
        }

        @Override
        public void updateLobby(LobbyListModel[] changes) {
            // listView.getItems().clear();
            String[] ar = new String[changes.length];
            for (int i = 0; i < changes.length; i++) {
                if (changes[i] != null) {
                    System.out.println(changes[i].getName());
                    ar[i] = changes[i].getName();
                }
            }
            ol.addAll(ar);
        }
    }

    // ProxyServer.java -- ProxyServer implements Runnable
    public void run() {
        // ... code to read data from server
        // build an array LobbyListModel[] ltm based on data from the server
        fireLobbyModelChangeEvent(ltm);
    }

    void addLobbyModelChangeListener(LobbyModelsChangeListener aThis) {
        this.lobbyModelsChangeListener = aThis;
    }

    private void fireLobbyModelChangeEvent(LobbyListModel[] changes) {
        LobbyModelsChangeListener listner = (LobbyModelsChangeListener) lobbyModelsChangeListener;
        listner.updateLobby(changes);
    }
    Exception:
    java.lang.IllegalStateException: Not on FX application thread; currentThread = Thread-5
        at line ol.addAll(ar);
    But the ListView is getting updated with the new data... so I'm not sure if this is the right way to proceed...
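    One alternative I could try, posting the mutation to the FX Application Thread via Platform.runLater, would look roughly like this (same ol list and LobbyListModel type as above; just a sketch, not tested):
    import javafx.application.Platform;

    @Override
    public void updateLobby(LobbyListModel[] changes) {
        final String[] ar = new String[changes.length];
        for (int i = 0; i < changes.length; i++) {
            if (changes[i] != null) {
                ar[i] = changes[i].getName();
            }
        }
        // ObservableList / scene-graph updates must happen on the FX Application Thread
        Platform.runLater(new Runnable() {
            @Override
            public void run() {
                ol.addAll(ar);
            }
        });
    }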
    Thanks,
    Vijay

  • What is the best practice to get database connection?

    What are the best practices to follow for database connections?

    The driver can be loaded explicitly with the Class.forName method; for example, the following statement loads Sun's JDBC-ODBC bridge driver:
    Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
    Then use the getConnection method of the DriverManager class to establish a connection to the data source:
    Connection con = DriverManager.getConnection(url);
    This statement establishes a connection to the data source specified by url. If the connection succeeds, it returns a Connection object, con; all subsequent operations on this data source go through con.
    Executing a query: this describes querying with a Statement object. Running a SQL query first requires creating a Statement object. The following statement creates a Statement object named guo:
    Statement guo = con.createStatement();
    On the Statement object, the executeQuery method runs a query. Its argument is a String containing a SQL SELECT statement, and its return value is a ResultSet object:
    ResultSet result = guo.executeQuery("SELECT * FROM A");
    This statement returns all rows of table A in result.
    Only after processing the ResultSet can the query results be shown to the user. The ResultSet contains the table returned by the query, holding all of the results. It must be processed row by row, but the columns within each row can be read in any order. The ResultSet's getXXX methods convert the SQL data types in the result set to Java data types.
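    Putting those steps together, here is a minimal present-day sketch (the driver URL, credentials and table name below are placeholders; modern JDBC drivers register themselves, so the explicit Class.forName call is usually unnecessary):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JdbcQueryExample {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL"; // placeholder data source
            // try-with-resources closes the connection, statement and result set for us
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement guo = con.createStatement();
                 ResultSet result = guo.executeQuery("SELECT * FROM A")) {
                while (result.next()) {
                    System.out.println(result.getString(1)); // getXXX converts SQL types to Java types
                }
            }
        }
    }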

  • What is the best practice to perform DB Backup on Sun Cluster using OSB

    I have a query on OSB 10.4.
    I want to configure OSB 10.4 on 2 Node Sun Cluster where the oracle database is running.
    When I'm performing a DB backup, the backup job should not fail if node 1 fails. What is the best practice to achieve this?

    Hi,
    Each host that participates in an OSB administrative domain must also have some pre-configured way to resolve a host name to an IP address. Use DNS, NIS, etc. to do this.
    Specify the cluster IP in OSB, so that OSB always looks for the cluster IP instead of the physical IPs of each node.
    Explanation:
    Whether it is a 2-node or a 4-node setup, when the cluster software is installed on these nodes we configure a cluster IP so that when one node fails the cluster IP automatically moves to another node.
    This cluster IP is what we specify, whether it is an RMAN backup or an application JDBC connection. Failing over to the second/another node is the job of the cluster IP, so wherever we have a cluster configuration we should specify the cluster IP in all the failover-sensitive places.
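    As a hedged illustration of the JDBC side of that advice (the host name, service name and credentials below are placeholders), the application connects through the cluster address rather than any individual node:
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ClusterIpConnection {
        // Connect via the cluster virtual IP/hostname so a node failure stays transparent
        public static Connection connect() throws Exception {
            String url = "jdbc:oracle:thin:@//cluster-vip.example.com:1521/ORCLSVC";
            return DriverManager.getConnection(url, "app_user", "app_password");
        }
    }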
    Hope it helps..
    Thanks
    LaserSoft

  • Best practices with external drives connected to Time Capsule?

    I'm enjoying my new MBP but the storage limitations are a challenge with twenty+ years of data. I know it's not technically sanctioned by Apple but in another forum I found a discussion about successfully connecting an external drive to the Time Capsule. I tried this for a few months and had some problems, so I wonder if anyone here might have some insight into which of my drives would be best to use in this way and what, if anything, I must do to bless it for the task.
    The drive most highly recommended in the forums at the time is the WD My Book. After buying it I learned that Windows users install a package and update the driver for use with Time Machine, which Mac users are not able to do. The driver is available on the web but it's said to cause problems with Mountain Lion, which I can confirm. It's USB 3.0 and powered, though, which my two alternatives are not. It's also 2 TB and presently in two partitions. I also have a Seagate Free Agent GoFlex 500, which gets extremely hot with any use, and an Iomega eGo Helium 500 I've been using for backups. Both are USB 2.0.
    My goal is to put my iTunes Library and Time Capsule on the network (running 2.4 GHz and 5 GHz, averaging 50 Mbps). When I did this for a few months with the WD, both its shares frequently dropped off my Desktop, as if powered down or something. Also there were the issues with Time Machine many others are having. Right now our video collection sits on the Time Capsule. Would it make sense to move our video to the WD and run iTunes and Time Machine on it? If so, do you have any tips for formatting, partitioning, or drivers (or anything else) to help this run smoothly? The WD works fine when hard-wired, and the TC, too, is fine without the WD at the party.
    With apologies for the tome.

    kevlee64 wrote:
    Thanks, but the drive is connected directly to the back of the Time Capsule by an ethernet cable.
    Ah, that's a different colored horse.   
    You have a NAS (Network Attached Storage) drive, not a USB external drive connected to the USB port.
    Many of those need software/firmware updates to work with Lion.  Check with Iomega.

  • Best practice for making database connection to Forms 10 apps?

    Hi
    To upgrade our Forms applications we are moving from version 3 to 10.
    Our old system runs Forms applications and the connection to the database is based on the individual user. This means that any tables or views used require that the user has specific access granted to them. We have a bespoke system to manage this which generates scripts (GRANT statements) based on lists of tables and users and their appropriate access.
    I have concerns that managing the table access for thousands of individual users in the Forms 10 environment is going to be technically difficult, especially with RADs to consider. Is it feasible to generate and frequently refresh RAD scripts to maintain the current list of users and their permissions?
    I am trying to decide if it is better to:
    A) Connect with the same database user (such as "APP_USER") which has access to everything
    or
    B) Connect with individual usernames/passwords
    Currently, the individual user database passwords are generated weekly and users have means to obtain them (once signed in) rather than setting and remembering them. Some views refer to the Oracle system parameter "USER" to decide what data is returned so this functionality would need to be preserved.
    Any help is greatly appreciated, especially if you can tell me if option A or B is how you connect at your site.

    Thanks for the advice so far.
    It would appear that connecting with individual usernames is not a fundamental error, which I was concerned about.
    Will it still be necessary to create and refresh RAD scripts, or is this only an issue when using OID? We have OID here already because we have a website using Oracle Portal. The sign-on process for this connects to Active Directory for authentication.
    I do not like the idea of having to schedule a refresh of RAD scripts, perhaps 3 times a day, just to keep it current. I do not think the RADs are expected to change as frequently as this, but perhaps other forum members have experience of this?

  • Best practice for client migration from SMS 2003 to SCCM 2012 R2?

    Can anyone advise what the most recommended method is for migrating SMS 2003 clients (Windows 7) to SCCM 2012 R2?
    Options are:
    1. SMS package to deploy ccmclean.exe to uninstall SMS 2003 client. Deploy SCCM 2012 R2 client via SCCM console push.
    2. Simply push SCCM client from console installing over existing SMS 2003 client.
    I am finding some folks saying this is ok to do but others saying no it is not supported by Microsoft. Even if it appears to work I do not want to do it if it is not supported by Microsoft. Can anyone provide a definitive answer?
    3. Use a logon script or Group Policy preference to detect the SMS client and delete it using CCMclean and then install the SCCM client.
    Appreciate any feedback. Thank you.

    Hi,
    >>I am finding some folks saying this is ok to do but others saying no it is not supported by Microsoft. Even if it appears to work I do not want to do it if it is not supported by Microsoft. Can anyone provide a definitive answer?
    I haven't seen any official document that covers the option 2 method; only a direct upgrade from SCCM 2007 to SCCM 2012 is documented.
    I think you could use a logon script or Group Policy preference to detect the SMS client and delete it using CCMclean.exe. Then use SCCM 2012 to discover these computers and push clients to them.
    Best Regards,
    Joyce

  • What is the best practice for PXI controller, connect to the company network and install antivirus? Special Subnet?

    I need your suggestions and common practices. 

    Hello TomMex,
    Thanks for posting. If what you are looking for are suggestions for how to use your PXI controller in regards to some of the issues you mentioned, then here are my suggestions. For networking purposes, you can consider your PXI controller the same as any other computer; you should be able to connect it to your network just fine and it will be able to see other computers and devices that are on the same subnet. Antivirus software in general should be fine for your system until you want to install new NI software, at which point you may want to disable it to avoid issues during installation. Does this answer your question? Let me know, thanks!
    Regards,
    Joe S.

  • AnyConnect Client shows 'Connected' even when it is not

    Hello,
    We have one Windows 8 user, with AnyConnect Client version 3.1.05152 (but we've seen this with a previous version). He generally does not disconnect his client, so it will time out at some point during the night, according to our logs. However, several hours later, the client still indicates that it is connected. Has anyone else seen this, and found a solution for it?
    Thanks
    MJ

    You can return a new Mac within 14 days of purchase.
    Return it and get another one.
    A new Mac comes with 90 days of free tech support from AppleCare.
    AppleCare: 1-800-275-2273

  • Best practices; how to reduce the wait when making live changes

    I am getting tired of waiting 20 seconds or so every time that I save a change to one file. How can I put changes to the server, and disregard all unchanged documents? I would be thrilled if I could just get rid of this wait, it slows my whole thought process down.

    Pacoan wrote:
    > I am getting tired of waiting 20 seconds or so every time that I save a change to one file. How can I put changes to the server, and disregard all unchanged documents? I would be thrilled if I could just get rid of this wait, it slows my whole thought process down.
    Are you working live from the server? In Site Management, set up your Local Info to point to your local hard disk, and your Remote Info to point to the live web site. The synchronize command will then do what you want.
    See the help button within the Site Manager.
    Harvey

  • Best Practice: Application runs on Extend Node or Cluster Node

    Hello,
    I am working within an organization wherein the standard way of using Coherence is for all applications to run on extend nodes which connect to the cluster via a proxy service. This practice is followed even if the application is a single, dedicated JVM process (perhaps a server, perhaps a data aggregator) which can easily be co-located with the cluster (i.e. on a machine on the same network segment as the cluster). The primary motivation behind this practice is to protect the cluster from a poorly designed / implemented application.
    I want to challenge this standard procedure. If performance is a critical characteristic then the "proxy hop" can be eliminated by having the application code execute on a cluster node.
    Question: Is running an application on a cluster node a bad idea or a good idea?

    Hello,
    It is common to have application servers join as cluster members as well as Coherence*Extend clients. It is true that there is a bit of extra overhead when using Coherence*Extend because of the proxy server. I don't think there's a hard and fast rule that determines which is a better option. Has the performance of said application been measured using Coherence*Extend, and has it been determined that the performance (throughput, latency) is unacceptable?
    Thanks,
    Patrick

  • Automatic Proxy Failover for Extend Client Connections

    Hi
    I looked at the documentation but this is still unclear to me. We have a C++ application doing continuous puts/putAlls on a Coherence cluster through a set of storage-disabled proxy nodes. (I am guessing this is referred to as an 'active' client?)
    Clients:
    Multiple C++ processes doing puts and putAlls via multiple proxy nodes
    Proxies:
    6 nodes acting purely as proxies without storage
    Servers:
    6 Storage nodes
    Each client has the addresses of all proxy nodes and ports. We are running a failover test where we kill a proxy node and see if the client fails over to next proxy that is alive. From what we see, this is not happening. Can someone explain what happens when a proxy server fails? I read in one of the forum responses that
    "For active client, when a request to proxy failed, the client will automatically connect to the next proxy server. But the reconnection only occurs the next request to proxy. It’s up to the client to retry the failed request."
    What does "retry the failed request" mean? - Is it - retry the PUT or PUTALL() that failed or retry getting the instance of the cache in C++ once I catch the socket failure exception in my code?
    Any pseudo code you can furnish would be very helpful
    Thank you
    Sairam
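    A hedged Java sketch of the catch-and-retry pattern that quoted guidance describes (the C++ client would catch coherence::net::messaging::ConnectionException at the same point; cache name, key/value and retry interval are placeholders):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class RetryingPut {
        public static void putWithRetry(String cacheName, Object key, Object value, int maxAttempts)
                throws InterruptedException {
            for (int attempt = 1; ; attempt++) {
                try {
                    NamedCache cache = CacheFactory.getCache(cacheName);
                    cache.put(key, value);
                    return; // request succeeded
                } catch (RuntimeException e) {
                    // The connection to the current proxy dropped; re-issuing the request
                    // lets the remote cache service connect to the next configured proxy.
                    if (attempt >= maxAttempts) {
                        throw e;
                    }
                    Thread.sleep(1000L); // placeholder back-off before retrying
                }
            }
        }
    }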

    As soon as we kill the proxy server that the client is connected to, we are getting the following socket disconnect exception, although other proxy nodes are up and running. What am I missing?
    terminate called after throwing an instance of 'coherence::lang::throwable_spec<coherence::net::messaging::ConnectionException, coherence::lang::extends<coherence::io::pof::PortableException, std::runtime_error>, coherence::lang::implements<void, void, void, void, void, void, void, void, void, void, void, void, void, void, void, void>, coherence::lang::throwable_spec<coherence::io::pof::PortableException, coherence::lang::extends<coherence::lang::RuntimeException, std::runtime_error>, coherence::lang::implements<coherence::io::pof::PortableObject, void, void, void, void, void, void, void, void, void, void, void, void, void, void, void>, coherence::lang::throwable_spec<coherence::lang::RuntimeException, coherence::lang::extends<coherence::lang::Exception, std::runtime_error>, coherence::lang::implements<void, void, void, void, void, void, void, void, void, void, void, void, void, void, void, void>, coherence::lang::throwable_spec<coherence::lang::Exception, coherence::lang::extends<coherence::lang::Object, std::exception>, coherence::lang::implements<void, void, void, void, void, void, void, void, void, void, void, void, void, void, void, void>, coherence::lang::TypedHandle<coherence::lang::Object const> >::hierarchy>::hierarchy>::hierarchy>::bridge'
      what():  coherence::net::messaging::ConnectionException: coherence::component::util::TcpInitiator::TcpConnection@0xf511730{Id=0x0000012D76A6F7DB0A9869A922AC93E0ABB1489FC9E126BAC29CF570C15A218E, Open=1, LocalAddress=NULL, RemoteAddress=PosixRawSocketAddress[family=2]}: socket disconnect
        at virtual coherence::lang::TypedHandle<coherence::net::messaging::Response> coherence::component::net::extend::AbstractPofRequest::Status::getResponse()(AbstractPofRequest.cpp:189)
        at coherence::component::net::extend::AbstractPofRequest::Status::getResponse()
        at coherence::component::net::extend::AbstractPofRequest::Status::waitForResponse(long)
        at coherence::component::net::extend::PofChannel::request(coherence::lang::TypedHandle<coherence::net::messaging::Request>, long)
        at coherence::component::net::extend::PofChannel::request(coherence::lang::TypedHandle<coherence::net::messaging::Request>)
        at coherence::component::net::extend::RemoteNamedCache::BinaryCache::put(coherence::lang::TypedHandle<coherence::lang::Object const>, coherence::lang::TypedHolder<coherence::lang::Object>, long, bool)
        at coherence::component::net::extend::RemoteNamedCache::BinaryCache::put(coherence::lang::TypedHandle<coherence::lang::Object const>, coherence::lang::TypedHolder<coherence::lang::Object>)
        at coherence::util::WrapperCollections::AbstractWrapperMap::put(coherence::lang::TypedHandle<coherence::lang::Object const>, coherence::lang::TypedHolder<coherence::lang::Object>)
        at coherence::util::ConverterCollections::ConverterMap::put(coherence::lang::TypedHandle<coherence::lang::Object const>, coherence::lang::TypedHolder<coherence::lang::Object>)
        at coherence::component::net::extend::RemoteNamedCache::put(coherence::lang::TypedHandle<coherence::lang::Object const>, coherence::lang::TypedHolder<coherence::lang::Object>)
        at coherence::component::util::SafeNamedCache::put(coherence::lang::TypedHandle<coherence::lang::Object const>, coherence::lang::TypedHolder<coherence::lang::Object>)
        at CoherenceCache::insertData(std::string const&, std::string const&, std::string const&, unsigned long)
        at SessionManager::executeCacheOperation(int, std::string const&, std::string const&)
        at KeyPublisher::publishCycle()
        at VECLFunctor<KeyPublisher>::operator()()
        at VEThread::_run(void*)
        <stack frame symbol unavailable>
        on thread "Thread-1"
    Caused by: coherence::io::IOException: socket disconnect
        at virtual coherence::lang::size32_t coherence::net::Socket::readInternal(coherence::lang::octet_t*, coherence::lang::size32_t)(Socket.cpp:333)
        at coherence::net::Socket::readInternal(unsigned char*, unsigned int)
        at coherence::net::Socket::SocketInput::read(coherence::lang::SubscriptHandle<coherence::lang::Array<unsigned char>, unsigned char, unsigned int>, unsigned int, unsigned int)
        at coherence::io::BufferedInputStream::fillBuffer()
        at coherence::io::BufferedInputStream::read()
        at coherence::component::util::TcpInitiator::readMessageLength(coherence::lang::TypedHandle<coherence::io::InputStream>)
        at coherence::component::util::TcpInitiator::TcpConnection::TcpReader::onNotify()
        at coherence::component::util::Daemon::run()
        at coherence::lang::Thread::run()
        on thread "ExtendTcpCacheService:coherence::component::util::TcpInitiator:coherence::component::util::TcpInitiator::TcpConnection::TcpReader"See below our proxy and client configs
    Client:
    <remote-cache-scheme>
          <scheme-name>extend-dist</scheme-name>
          <service-name>ExtendTcpCacheService</service-name>
          <initiator-config>
            <tcp-initiator>
              <remote-addresses>
                <socket-address>
                  <address system-property="tangosol.coherence.proxy.address">10.152.105.169</address>
                  <port system-property="tangosol.coherence.proxy.port">9099</port>
                </socket-address>
              </remote-addresses>
             <remote-addresses>
                <socket-address>
                  <address system-property="tangosol.coherence.proxy.address">10.152.105.171</address>
                  <port system-property="tangosol.coherence.proxy.port">9099</port>
                </socket-address>
              </remote-addresses>
             <remote-addresses>
                <socket-address>
                  <address system-property="tangosol.coherence.proxy.address">10.152.105.170</address>
                  <port system-property="tangosol.coherence.proxy.port">9099</port>
                </socket-address>
              </remote-addresses>
             <remote-addresses>
                <socket-address>
                  <address system-property="tangosol.coherence.proxy.address">10.152.105.172</address>
                  <port system-property="tangosol.coherence.proxy.port">9099</port>
                </socket-address>
              </remote-addresses>
             <remote-addresses>
                <socket-address>
                  <address system-property="tangosol.coherence.proxy.address">10.152.105.173</address>
                  <port system-property="tangosol.coherence.proxy.port">9099</port>
                </socket-address>
              </remote-addresses>
              <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
              <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
          </initiator-config>
        </remote-cache-scheme>
    Proxy:
    <!--
        Proxy Service scheme that allows remote clients to connect to the
        cluster over TCP/IP.
        -->
        <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <thread-count system-property="tangosol.coherence.extend.threads">25</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address system-property="tangosol.coherence.extend.address">localhost</address>
                <port system-property="tangosol.coherence.extend.port">9099</port>
              </local-address>
            </tcp-acceptor>
            <outgoing-message-handler>
              <request-timeout>10s</request-timeout>
            </outgoing-message-handler>
          </acceptor-config>
          <autostart>true</autostart>
        </proxy-scheme>
    ...
    Thanks
    Sairam
    Edited by: SKR on Jan 12, 2011 3:09 PM

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVidia drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I recover a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well. I'm just not sure what to do to revert the kernel or the system when an installation goes bad like this.
    Thanks for your attention.

    There is often a common misconception: A snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. LVM snapshot is a general purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade, then if you are satisfied with the result, you would delete the snapshot.
    The advantage of a snapshot is that it can be used for a live filesystem or volume while changes are written to the snapshot volume. Hence it is called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity, to have a consistent status of all data at a certain point in time while still allowing changes to happen, for example to perform a filesystem backup. A snapshot is no substitute for a disaster recovery in case you lose your storage media. A snapshot only takes seconds, and initially does not copy or back up any data, unless data changes. It is therefore important to delete the snapshot if it is no longer required, in order to prevent duplication of data and restore file system performance.
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM operates on the whole logical volume.
    Keep in mind, however, that you cannot use LVM or Btrfs with the boot partition, because the GRUB boot loader, which loads the Linux kernel, cannot deal with LVM or Btrfs before loading the Linux kernel (catch-22).
    I think the following is an interesting and fun to read introduction explaining basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf
