ExternalizableLite outside of cluster and licensing?

Hi!
To avoid creating redundant temporary objects, I would like to pass objects extracted from a Coherence cache on to a client over Java RMI "as is", by letting the classes implement both ExternalizableLite (for optimal storage and transfer within the cluster) and the standard Java Externalizable interface (for the RMI transport out of the cluster). However, I believe I will then have a problem deserializing them in my client unless I have coherence.jar available there, since the ExternalizableLite interface is otherwise missing.
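A minimal sketch of the dual-interface idea (class and field names are made up for illustration; the Coherence side is indicated only in comments so the sketch compiles without coherence.jar — with coherence.jar on the classpath you would additionally declare `implements com.tangosol.io.ExternalizableLite`, whose readExternal(DataInput)/writeExternal(DataOutput) signatures the methods below already match):

```java
import java.io.*;

// Hypothetical value class. With coherence.jar available, add:
//   implements com.tangosol.io.ExternalizableLite
// alongside java.io.Externalizable; the DataInput/DataOutput methods
// below have the signatures that interface expects.
public class Person implements Externalizable {
    private String name;
    private int age;

    public Person() {}                       // public no-arg ctor required by Externalizable
    public Person(String name, int age) { this.name = name; this.age = age; }

    // ExternalizableLite-style methods (DataInput/DataOutput)
    public void readExternal(DataInput in) throws IOException {
        name = in.readUTF();
        age  = in.readInt();
    }
    public void writeExternal(DataOutput out) throws IOException {
        out.writeUTF(name);
        out.writeInt(age);
    }

    // java.io.Externalizable delegates to the same field layout: since
    // ObjectInput extends DataInput (and ObjectOutput extends DataOutput),
    // the casts select the DataInput/DataOutput overloads above.
    @Override public void readExternal(ObjectInput in) throws IOException {
        readExternal((DataInput) in);
    }
    @Override public void writeExternal(ObjectOutput out) throws IOException {
        writeExternal((DataOutput) out);
    }

    public String getName() { return name; }
    public int getAge()     { return age; }
}
```

Because the two interfaces' method signatures only differ in the stream types, one class can satisfy both, and the standard-Java side delegates so both transports serialize the same fields.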
My question is whether, according to the Tangosol license (I am very bad at reading legal language, so I had better sort it out this way!), I am allowed to install coherence.jar on the client, for the sole purpose of deserializing my objects passed over standard Java RMI, without paying a license fee for the client machines.
If the answer to the first question is yes, am I also allowed to repackage the ExternalizableLite class into one of my own jar files, to avoid having to ship the whole coherence.jar to the client?
The ideal solution (if the intention is that this single interface may be used on machines outside the cluster without a license for them) would be for Tangosol to package it in a separate "coherence_export.jar" file or something similar...
Best Regards
Magnus

Magnus -
This month, we are announcing some licensing changes that should make this extremely easy and efficient for you. Prior to our 3.2 release, the client needed to be licensed using a "client" or "local" license to use any of our code. With the new licensing model, that particular type of license will be available for no additional fee.
Please don't take this as a "legal" answer, but simply as an explanation of the direction in which we are headed. Your account manager (Charlie, I believe) will have all the materials to walk you through it within a few weeks' time.
Peace,
Cameron.

Similar Messages

  • When setting up a converged network in VMM, the cluster and live migration virtual NICs are not working

    Hello Everyone,
    I am having issues setting up a converged network in VMM. I have been working with MS engineers to no avail, and I am very surprised at their level of expertise: they had no idea what a converged network even was. I had far more experience than these guys, and they said there was no escalation track, so I am posting here in hopes of getting some assistance.
    Everyone including our consultants says my setup is correct. 
    What I want to do:
    I have servers with 5 NICs and want to use 3 of the NICs for a team, then configure cluster, live migration and host management as virtual network adapters. I have created all my logical networks and a port profile with the uplink defined as the team and the networks selected, then created the logical switch and associated the port profile. When I deploy the logical switch and create the virtual network adapters, the logical switch works for VMs and my management NIC works as well. The problem is that the cluster and live migration virtual NICs do not work. The correct VLANs get pulled in for the corresponding networks, and if I run Get-VMNetworkAdapterVlan it shows cluster and live migration in VLANs 14 and 15, which is correct. However, the NICs do not work at all.
    I finally decided to do this via the host in PowerShell and everything works fine, which means this is definitely an issue with VMM. I then imported the host into VMM again, but now I cannot use any of the objects I created in VMM and have to use a standard switch.
    I am really losing faith in VMM fast. 
    Hosts are 2012 R2 and VMM is 2012 R2 all fresh builds with latest drivers
    Thanks

    Have you checked our whitepaper http://gallery.technet.microsoft.com/Hybrid-Cloud-with-NVGRE-aa6e1e9a for how to configure this through VMM?
    Are you using static IP address assignment for those vNICs?
    Are you sure you are teaming the correct physical adapters, where the VLANs are trunked through the connected ports?
    Note: if you create the teaming configuration outside of VMM and then import the hosts into VMM, VMM will not recognize the configuration.
    The details should be all in this whitepaper.
    -kn
    Kristian (Virtualization and some coffee: http://kristiannese.blogspot.com )

  • Cluster and Read Write XML

    In my applications I allow users to save their settings. I used to do this in an INI file, so I wrote a VI that could write any cluster, and it worked OK. When I discovered that in newer versions of LabVIEW you can read/write from/to XML, I changed immediately because it has some advantages for me, but I am having some trouble.
    Every time I want to save a cluster I have to use
    Flatten To XML -> Escape XML -> Write XML
    and to load
    Read From XML -> Unescape XML -> Unflatten From XML.
    I also do other important things each time I save or load settings, so it seems reasonable to put all of this into just two subVIs (one for loading, one for saving). The problem is that I want to use them with any cluster.
    What I want can be summarized as follows:
    - SaveSettings.vi
    --Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    --Outputs:
    ---Error Out
    - LoadSettings.vi
    --Inputs:
    ---Filename
    ---AnyCluster
    ---Error In
    --Outputs:
    ---DataFromXML
    ---Error Out
    I have tried using variants and references, but I was not able to make a generic subVI (a subVI that doesn't know what is in the cluster). I don't want to use a polymorphic VI, because that would demand one load/save pair per type of settings.
    Thanks,
    Hernan

    If I am correct, you still need to wire the data type to Variant To Data. How can you put into a subVI ALL that is needed to handle the read/write of the cluster? I don't want to put any Flatten To XML or Unflatten From XML outside it.
    The solution I came up with for INI files was passing a reference to a cluster control, but it is really uncomfortable because I have to iterate through all the items.
    When a VI has an "Anything" input, is there any way to wire that input to a control and have it remain "Anything"?
    Thanks
    Hernan

  • How to run the Start_Script outside the cluster

    Hi
    We have a 3-node cluster with 4 resource groups (RGs) created. All is working fine.
    In the situation where the application stops for some reason, I have created the probe script to only page the team and not do anything else. The probe script always returns 0.
    I have also suppressed the PMF monitoring as described at http://blogs.sun.com/TF/entry/disabling_pmf_action_script_with
    This is the start of my start script
    =====================================
    #!/bin/ksh
    while getopts 'R:G:' opt
    do
        case "${opt}" in
            R) RESOURCE=${OPTARG};;
            G) RESOURCEGROUP=${OPTARG};;
        esac
    done
    sleep 60 &
    /usr/cluster/bin/pmfadm -s ${RESOURCEGROUP},${RESOURCE},0.svc
    [...rest of your code to start the application...]
    ===============================================
    The reason , we did this way is because
    1> When the application daemon goes down, only the master process goes down; all the child processes are still up and running. The child processes do not need the main daemon as long as they are up; they only need the main daemon if they need to restart.
    2> If we go for a complete restart of the resource, it will kill all the child processes before starting them. This is a concern for us, since restarting the child processes requires vendor interaction at times, as most of them are TCP/IP connections to various applications (we are an interface-engine shop).
    3> However, we can start just the main daemon, and all the previous child processes can connect back to it.
    The question is how? I tried running the start script outside the cluster, passing the Resource and RG parameters, but it threw this error:
    "pmfadm: ",,0.svc" No such <nametag> registered", and I think this is because I am running it outside the cluster, so it does not have access to the pmfadm process. However, it does start the daemon, but I do not know whether that will start PMF monitoring.
    4> So, how can I start the process outside the cluster?
    5> Also, if I start the process outside the cluster, will it start PMF monitoring?
    thanks
    Dajiba
    Edited by: Cluster_newbie on Jul 29, 2008 11:06 AM
    Edited by: Cluster_newbie on Jul 29, 2008 11:08 AM

    Thanks for the answer; that's good then. OK, now why I need to run it outside SC:
    1> Ours is an eGate environment, and the process started by my START_SCRIPT is a bootstrap kind of daemon process. This process in turn launches multiple TCP/IP, ODBC and FTP processes, and they run independently once they are launched.
    2> The daemon process is what we are probing. This daemon process can sometimes go down, and that is more or less normal; however, the child processes continue to run. In a non-cluster environment, we just start the daemon again and all works fine.
    3> However, if I take the restart option from the cluster when the daemon dies, it will first STOP the data service (stop all the child processes), then start it. My requirement is that I should NOT stop the data service, because stopping the DS will result in killing the child processes too. My requirement is just to launch the START_SCRIPT so that only the daemon is started. If the child processes are still running, the daemon will not launch any duplicate child processes.
    I could make the stop script more intelligent, but it is very difficult to know in what situation the stop was called. If I knew it was from the RESTART option, I could then skip killing the child processes, but I guess that is very difficult to know.
    4> So I thought, if the daemon goes down, I will start the daemon outside SC and my probe script will still do its job. I have anyway disabled PMF monitoring, so I do not care about PMF.
    Does that make sense?
    Do you know any other way of starting the start_script within SC, without doing a shutdown and start?
    thanks
    Dajiba

  • Some errors in the cluster alert log, but the cluster and database work normally

    hi.everybody
    I use RAC+ASM; RAC has two nodes and ASM has 3 disk groups (+DATA, +CRS, +FRA). The database is 11gR2, and ASM uses ASMLib, not raw devices.
    When I start the cluster, or when it auto-starts after rebooting the OS, the cluster alert log shows some errors, as follows:
    ERROR: failed to establish dependency between database rac and diskgroup resource ora.DATA.dg
    [ohasd(7964)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
    I do not know what causes these errors, but the cluster and database start and run normally.
    I do not know whether these errors will affect the service.
    thanks everybody!

    Anyone with the same question?

  • NAT outside to inside and inside to outside (in 8.4(2) version)

    Thanks a lot; I have attached a diagram here.
    Requirement:
    I need to pass traffic through from outside to inside and from inside to outside.
    I have also attached a diagram with the IP addresses.
    Also, tell me one thing: is NAT only for private-to-public, or also for public-to-private?

    Hi,
    I think I replied to your earlier post as well.
    As per your query, you can NAT any kind of IP (public or private) into any kind (public or private).
    For bidirectional traffic, you always need static NAT.
    When you want unidirectional traffic, you can use dynamic NAT/PAT.
    For the inside-to-outside traffic, you can use this NAT:
    object network LAN
    subnet 0.0.0.0 0.0.0.0
    nat (inside,outside) dynamic interface
    For outside-to-inside traffic, you would typically only want access to certain servers, such as internally hosted web servers.
    For this, you can use static PAT/NAT:
    object network host
    host 10.10.10.10
    nat (inside,outside) static interface service tcp 3389 3389
    access-list outside_inside permit tcp any host 10.10.10.10 eq 3389
    This will enable you to take RDP access to your PC from the internet.
    Is this what you want?
    Thanks and Regards,
    Vibhor Amrodia

  • I have two Creative Cloud accounts, occurred by accident, I wish to merge them and have all my details and licenses under one account, how do I do that?

    I have two Creative Cloud accounts, occurred by accident, I wish to merge them and have all my details and licenses under one account, how do I do that?
    This is due to the different changes Adobe has gone through and unfortunately I used different emails at certain times.

    Under one Adobe ID you can have only one CC subscription. If by accident you have purchased two CC subscriptions under one account, one needs to be cancelled & refunded; please contact Adobe Support at
    http://adobe.ly/yxj0t6
    Regards
    Rajshree

  • I tried to update the App that I purchased but the message told me that my Apple account is not valid for use outside of US and I must switch back to US store to able to do it. How can I switch the account from foreign countries back to US?

    I tried to update the App that I purchased but the message told me that my Apple account is not valid for use outside of US and I must switch back to US store to able to do it. How can I switch the account from foreign countries back to US?

    On your phone (from http://support.apple.com/kb/ht1311):
    Change your iTunes Store country
    Sign in to the account for the iTunes Store region you'd like to use. Tap Settings > iTunes & App Stores > Apple ID: > View Apple ID > Country/Region.
    Follow the onscreen process to change your region, agree to the terms and conditions for the region if necessary, and then change your billing information.

  • I live outside the US and am confused whether I should get a locked or unlocked iPhone 4S

    I live outside the US and probably will not move there either. I am confused about whether I should purchase a locked or unlocked iPhone 4S. I travel frequently but buy a SIM card for each country I travel to, so I want a phone into which I can insert any SIM card. At the same time, I also want it to work in the US if I move there. Please help, fast!

    Did you read the specs I linked to? All iPads have a SIM card slot for local SIM cards, regardless of where in the world. Again, a local carrier must offer local iPad data plans and a local SIM card.
    The Verizon, etc. carrier designations only have significance in the USA, where two network types are available, GSM and CDMA. In the USA, if you subscribe to a carrier using CDMA, then you must purchase the iPad designed to use CDMA. But again, and per the specifications, any post-iPad 2 iPad, GSM or CDMA, will work worldwide on GSM networks.

  • After bringing the cluster and ASM up, I'm not able to start the RDBMS instance

    I am not able to bring the database up.
    Cluster and ASM is up and running.
    ORA-01078: failure in processing system parameters
    LRM-00109: could not open parameter file '/oracle/app/oracle/product/11.1.0/db_1/dbs/initTMISC11G.ora'

    That is where the confusion is: I could not find it under $ORACLE_HOME/dbs.
    However, the log file indicates that it is using an spfile, which is located at:
    spfile = +DATA_TIER/dintw10g/parameterfile/spfile_dintw10g.ora
    Now I guess it checks only the dbs directory.
    Could you please tell me how I can bring it up now?
    Just for information: the ASM instance is up.

  • Using ASM in a cluster, and new snapshot feature

    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    How is the RDBMS database connecting to the ASM instance? Which process, or what type of connection, is it using? It's apparently not using a listener, although the instance name is part of the database file path. How does this work, please?
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    As of 11g Release 2, ASM provides a snapshot feature. I assume this can be used for backups, but then each database should use its own disk group, and I will still need to use alter database begin backup, correct?
    Thanks!

    Markus Waldorf wrote:
    Hi,
    I'm currently studying ASM and trying to find some answers.
    From what I understand and experienced so far, ASM can only be used as a local storage solution. In other words it cannot be used by network access. Is this correct?
    Well, you are missing one point: it depends entirely on the architecture you are going to use. If you use ASM on a single node, it is available right there. If it is installed for a RAC system, an ASM instance runs on each node of the cluster, managing the storage that lies on the shared storage. The ASMB process is responsible for exchanging messages, taking responses and pushing the information back to the RDBMS instance.
    How is the RDBMS database connecting to the ASM instance? Which process or what type of connection is it using? It's apparently not using a listener, although the instance name is part of the database file path. How does this work, please?
    A listener is not needed, Markus, since its job is to create server processes, which is NOT the job of the ASM instance. The ASM instance connects the client database to itself immediately, when the first request comes from that database to perform any operation on a disk group. As I mentioned above, ASMB then carries the request/response traffic onwards.
    How does ASM work in a cluster environment? How does each node in a cluster connect to it?
    Each node has its own ASM instance running locally. In the case of RAC, the ASM SID is +ASMn, where n is 1, 2, ..., up to the number of nodes in the cluster.
    As of 11g Release 2, ASM provides a snapshot feature. I assume this can be used for backups, but then each database should use its own disk group, and I will still need to use alter database begin backup, correct?
    You are probably talking about the ACFS snapshot feature of 11.2 ASM. This is not for taking a backup of a disk group; it is more like an OS-level backup of the file system created on an ASM ACFS mount point. Oracle provides this feature so that you can, for example, take a backup of an Oracle home running on an ACFS mount point; in the case of an OS-level failure, such as someone deleting a folder from that mount point, you can get it back from an ACFS snapshot. For a disk group, the only backup available is the metadata backup (and restore), but that does not get the database's data back for you. For a database-level backup, you would still need to use RMAN.
    HTH
    Aman....

  • Question about the Cluster and Column width in Graphs in Illustrator CS2

    How do they relate to each other? How do the percentages work together? Is there any information about how to mathematically figure out how much area each cluster width takes? If you make a graph and set both the cluster and column widths to 100%, the entire area is filled. If you make the column width 50%, the bar sits exactly in the center of the cluster width, but if you make the cluster width 75%, the bars move closer together.

    Gregg,
    The set of bars that represent each row of data in the graph spreadsheet is called a "cluster".
    Let's let
       W = the width of the whole area inside the graph axes
       R = the number of rows of data
       C = the number of columns of data
    Then the maximum potential space to use for each row of data is W/R. The cluster width controls how much of this potential space is assigned to each row. This cluster space is then divided by C, giving the maximum potential width of each bar. The maximum potential width for each bar is then multiplied by the column width percentage to get the actual width of each bar.
    So the actual width of each bar is (((W/R) * Cluster width) / C) * Column width.
    The graphs illustrated below have three rows of data, with two columns each. The solid colored areas in the background have the same cluster width as the main graph, but they all have a column width of 100%. This lets you see more easily how the gradient bars that have a column width of 80% are using up 80% of their potential width.
    Notice that as the Cluster percentage gets lower, the group of bars that represent a row of data get farther apart, so that you can more easily see the "clumping" of rows. When the Cluster percentage is 100%, the columns within each row are no closer to each other than they are to the columns in other rows.
    As the Column percentage gets lower, the bars for each data value occupy less of the space within their row's cluster, so that spaces appear between every bar, even in the same row.
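    The arithmetic above can be checked with a small calculation (the plot width and percentages below are made-up illustration values, not taken from the post):

```java
// Worked example of the bar-width formula:
//   actual bar width = (((W / R) * cluster%) / C) * column%
public class BarWidth {
    static double barWidth(double w, int rows, int cols,
                           double clusterPct, double columnPct) {
        double rowSpace = w / rows;                  // potential space per row of data
        double clusterSpace = rowSpace * clusterPct; // portion assigned to the cluster
        double maxBar = clusterSpace / cols;         // potential width per bar
        return maxBar * columnPct;                   // actual bar width
    }

    public static void main(String[] args) {
        // 300-pt graph area, 3 rows, 2 columns, 75% cluster width, 80% column width
        System.out.println(barWidth(300, 3, 2, 0.75, 0.80)); // 30.0
    }
}
```

    For instance, a 300-pt graph area with 3 rows, 2 columns, 75% cluster width and 80% column width gives (((300/3) * 0.75) / 2) * 0.80 = 30 pt per bar.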

  • ECC 6.0 installation on HPUX Cluster and oracle database

    Hi,
    I need to install ECC 6.0 on an HP-UX cluster with an Oracle database. Can anybody please advise me or send me any documentation?
    Thanks,
    venkat.

    Hi Venkat,
    Please download installation guide from below link:-
    https://websmp105.sap-ag.de/instguides
    Regards,
    Anil

  • Solaris 8 patches added to Solaris 9 patch cluster and vice versa

    Has anyone noticed this? The Solaris 8 and 9 patch cluster READMEs show that Solaris 9 patches have been added to the Solaris 8 cluster and Solaris 8 patches to the Solaris 9 cluster. What's the deal? I haven't found any information about whether or not this is a mistake.

    Desiree,
    Solaris 9's kernel patch 112233-12 was the last revision for that particular patch number. The individual zip file became so large that the patch was subsequently supplanted by 117191-xx, which in turn, when its zip file became very large, was supplanted by 118558-xx.
    Consequently you will never see any newer revision than 112233-12 of that particular patch.
    What does uname -a show you for that system?
    Solaris 8 SPARC was similarly affected, for the 108528, 117000 and 117350 kernel patches.
    If you have login privileges to Sunsolve, see Infodoc 76028.

  • Hardware key and license key

    Hi All,
    I have a question about the hardware key and license key for SAP on Solaris.
    We would like to replace the existing hard disk with a larger one.
    I would like to know whether we can use the same license generated before, or whether we need to generate a new one.
    Regards,
    Cheung

    Hi Wing,
    I have no explicit documentation about this, only personal experience. You can look at note 94998, and perhaps this PDF: https://websmp106.sap-ag.de/~sapidb/011000358700005997252005
    Best Regards,
    JC Llanes.

Maybe you are looking for

  • Query regarding kernel upgrade

    Hi All, Soon we are going for a kernel upgrade; the OS is HP-UX. 1) Do we need to download the latest version of SAPCAR for extracting the *.SAR files? Please have a look at the steps involved in the kernel upgrade and let me know if I am missing something. 1) Take backup

  • ORA-06512 at stringline string

    Has anyone ever dealt with this? If so, what is it, and how do I fix it? Also... Cause: Backtrace message as the stack is unwound by unhandled exceptions. What is a backtrace?

  • Low volume in my MuVo

    Hi. I have a Creative MuVo TX, 52 MB. A great MP3 player, except that I feel the volume on it is very low. When I listen to my songs, I have to set the volume to about 30 to be able to hear decently. And if I go above 30, there i

  • How many sensors BAM supports within a process?

    Hello everyone, I'm somewhat new to BAM. I'm doing my integration with BPEL 10.1.3.4 & BAM, and I want to know whether it supports introducing a sensor for each of the PartnerLink invocations in the process. Currently my process has 3 PartnerLinks, two of

  • Changes in Project

    Hi, I want to track down the changes in a project (Project System module). Which T-code or report is available in the system to track all changes in the project? Thanks & Regards, Subrato Chowdhury