No processFunction method for FuBA Z_BLOG_RFC on host host name

Hi,
I am following the blog:
http://www.sdn.sap.com/irj/scn/weblogs;jsessionid=(J2EE3417200)ID0395011850DB00589505960689226617End?blog=/pub/wlg/12240
But when I test my application from R/3, I get the above-mentioned exception.
Any help will be appreciated.
Thanks.
Regards.
Rajat Jain

My problem got resolved by adding a weak reference to the JCO library in the application-j2ee-engine.xml file of the EAR project.
It looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE application-j2ee-engine SYSTEM "application-j2ee-engine.dtd">
<application-j2ee-engine>
     <reference
          reference-type="weak">
          <reference-target
               provider-name="sap.com"
               target-type="library">com.sap.mw.jco</reference-target>
     </reference>
     <provider-name>sap.com</provider-name>
     <fail-over-enable
          mode="disable"/>
</application-j2ee-engine>

Similar Messages

  • Checking for valid (and working) host name/ip

Hi,
I have posted this in another topic, but I think it was the wrong forum... so if you want to try to help me:
    http://forum.java.sun.com/thread.jspa?threadID=603987

No, that was the right forum, although it gets low traffic. This is not the right forum for that question. Your best bet is the "Java Programming" or "New To Java" forums.
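As a side note on the original question: a minimal sketch of checking whether a host name is valid (resolvable) and working (reachable) with the standard java.net API might look like this (the class and method names are my own, for illustration):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostCheck {

    /** True if the name resolves via DNS or the hosts file. */
    public static boolean resolves(String host) {
        try {
            InetAddress.getByName(host);
            return true;
        } catch (UnknownHostException e) {
            return false;
        }
    }

    /** True if the resolved host answers an echo probe within timeoutMs.
        Note: isReachable is often blocked by firewalls, so false is not conclusive. */
    public static boolean reachable(String host, int timeoutMs) {
        try {
            return InetAddress.getByName(host).isReachable(timeoutMs);
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(resolves("localhost"));
        System.out.println(resolves("no-such-host.invalid"));
    }
}
```

Resolution only proves the name exists; reachability additionally needs the host to be up and the probe not filtered.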

  • Virtual Host Name for CM

    Hi ,
    R12.1.3 & 11gR2 DB
I have a quick question. When we have a virtual host name setup, what should be the node name for the ICM in the Concurrent Manager Administer form?
In one of our instances, for the ICM and the other managers, it is the virtual host name.
In another, the ICM uses the physical host name and the other managers use the virtual host name.
Which is correct? We are having an issue with the ICM on both nodes: every minute it goes down and comes back up automatically.
    Regards
    Sourabh Gupta

    Hi Hussein,
Thanks a lot. I have already reviewed these documents:
    Is Auto Failover With Virtual Hostnames For Concurrent Processing Servers Supported In 11i Or R12? (Doc ID 456540.1)
    Adding an Alias Hostname with Oracle E-Business Suite Release 12 (Doc ID 603883.1)
    Concurrent Processing - CONC-SM TNS FAIL In Internal Concurrent Manager (ICM) Log File With Virtual Hosts Configuration [ID 961216.1]
    ICM Not Starting: Node Name not Registered [ID 602942.1]
I am curious what the node name in the Concurrent Manager Administer form should be.
Should it be the virtual host name or the physical host name?
I have checked with a few people. Some say the ICM node name should be the physical server name and the other CM names can be the virtual name; I am not sure on what basis they are saying this.
A few say the node name should be the physical host for every CM.
I am not sure what the correct and best configuration is.
Regards
Sourabh Gupta

  • Logical host name Question

Hi,
I am in the process of creating a 2-node cluster on 2 domains. This is for failover clustering.
I would appreciate it if someone could shed some light, confirm whether I am doing this right, and answer some of the questions I have.
We have created 2 resource groups (RGs) and we are in the process of creating the logical host names. Here are the steps that I think I am going to execute:
    1> The name of the logicalhost name resources are LH-01 , LH-02
    The name of the logical host name are virtualhost-01 , virtualhost-02
    The name of the domains are domainapp-01 , domainapp-02
    2> Create entry in /etc/hosts for the new logical host name and IP address
Question: Is there any command that can be used to add entries to this file, instead of adding them manually? I am not a sysadmin and I do not know how many more places I have to add them.
    3>Create logicalhostname resources by following commands
    a>clreslogicalhostname create -g RG-01 -h virtualhost-01 LH-01
    b>clreslogicalhostname create -g RG-02 -h virtualhost-02 LH-02
    4>Edit the /etc/nsswitch.conf
Question: I could not understand why we have to do this and what needs to be modified. Do I have to comment out the lines that say
    host: cluster files
    rpc : files
5> Question: Do I also need to create a shared address resource? I really did not understand the concept of a shared address resource.
I would really appreciate it if someone could shed some light on this. I have gone through the documents, but they did not clear it up for me.
    Thanks
    Edited by: Cluster_newbie on Jun 25, 2008 4:06 PM

Actually, scinstall puts this "hosts: cluster files" entry in; I don't remember what the logic was.
Here is some explanation about the shared address, which is used by scalable applications like Apache:
    The SUNW.SharedAddress Resource Type
    Resources of type SUNW.SharedAddress represent a special type of IP
    address that is required by scalable services. This IP address is configured
    on the public net of only one node with failover capability, but provides a
    load-balanced IP address that supports scalable applications that run on
multiple nodes simultaneously. This subject is described in greater detail [...]
    I hope this helps
    Regards

  • Migrate Server 2008 Certificate Authority To New Server Different Host Name?

Our internal CA is installed on one of our Exchange servers. Exchange is being migrated from 2010 to 2013, so all current Exchange servers will be decommissioned and replaced with new Hyper-V VMs running either the Server 2008 R2 or 2012 R2 OS. The old VM containing Exchange 2010 and the current CA will go away, since we cannot afford to use the server resources and tie up the required Windows license to keep that server running doing nothing but acting as a CA.
    So, we will either need to move the CA to the new replacement Exchange 2013 server or some other existing server that's being used for something else (maybe one of the domain controllers).
What is the best way to handle this? I don't think the migration from Exchange 2010 to 2013 allows reusing the same host name on the replacement server, and if we move the CA to another existing server, it will also be on a server with a new host name anyway.
    Can we migrate the CA to a new server with a different host name?
    What about reissuing all the active certificates from the current server to replace them with new certificates from the new server and then decommissioning the original CA?  Can this be automated in some way?
    Which way is best and how would it be done?

When following the instructions in the guide, I would also consider switching to new "sanitized" URLs in certificates (CDP and AIA) in case you have used the default URLs until now. That's what I did when migrating W2K3 CAs (with default URLs) to W2K8 R2.
Per default, the LDAP and HTTP URLs point to the CA server itself (HTTP) or to an LDAP object that has the same name as the current server.
    Migrating to a new server, you need to make sure that CRLs will still be published to the old locations - thus the new CA server would write to the LDAP object that corresponded to the old server, and HTTP might be fixed by redirecting the DNS record. (Make
    sure this is documented so that nobody thinks it might be a good idea to delete the old object as it "references a server that does not exist anymore".)
    As you have to fiddle with URLs anyway, I would then add new URLs having neutral names - that is URLs not including the name of the CA server. The new CA instance would then 1) publish to the old locations but not add these to certificates anymore and 2)
    publish to new sanitized locations and add them to CDP and AIA URLs.
    Elke

  • Mail server config inconsistent when host name differs from domain name

    Hi,
I've installed Leopard Server on host server.example.com but want to serve email for example.com; the host name for incoming mail should be mx01.example.com. DNS is set up properly, resolving server.example.com and mx01.example.com (both forward and reverse, mx01 having a separate IP), and mx01.example.com is set as the MX for example.com. Both IPs are routed to the internal server by the firewall.
    In Server Admin, server.example.com is set as computer name (also resolved to the proper internal IP by our internal DNS), and in mail settings -> General, domain name is set to example.com and host name to mx01.example.com (so that the mail server identifies itself properly when connected on port 25).
    Normal mail traffic works perfectly: Users can send and receive mail from/to addresses like <user_shortname>@example.com. I've got problems with mailing lists, though:
First, group-based mailing lists (for groups created in Workgroup Manager) have the wrong addresses: <group_shortname>@server.example.com. Those addresses are linked when you view the group in Directory.app. You can send mails to e.g. [email protected] when connecting to the server via telnet on port 25, but get an error email back ([email protected]: mail for server.example.com loops back to myself, Reporting-MTA: mx01.example.com).
Second, mailman-based mailing lists also have the wrong addresses: <listname>@server.example.com, and at the bottom of the mails received from the lists there are also links to the list's web page with the URL mx01.example.com instead of the host's name server.example.com.
mx01 should only be used for incoming SMTP, but if this isn't possible, I could revert to using server.example.com for the MX record as well. But the email addresses should definitely use the domain name and not the full server name as the domain part. This is a typical email server setup and I can't understand why Leopard Server handles this differently. Does anyone have a workaround for this, or could tell me how to set up an email system properly?
    Thanks a lot in advance!!
    Cheers, Thomas

Well, you pointed me in the right direction: changing the mail server's host name helped a bit. I just dumped the name mx01.example.org. The host is called server.example.org, and in mail settings the domain name is set to example.org while the host name is set to server.example.org. The MX record will have to point to server.example.org, then.
The server now accepts emails to group-based mailing lists à la <group_shortname>@example.org. In Directory.app, the link to the group's mailing list still opens a new mail to <group_shortname>@server.example.org, though.
For mailman-based mailing lists, the link to the list's listinfo page at the bottom of emails received from the list is now correctly pointing to server.example.org. The list's address shown there is still <list_name>@server.example.org.

  • Email host name for mailbean

Hi, I have a problem finding my own email host name when I am trying to send an email from JSP. I have tried many names such as "mail.gmail.com", "yahoo.com", and the one included in the code below. I think I have a misunderstanding about this host name; could anyone explain it for me please? Thanks a lot.
Below is my code:
/*
 * Mail.java
 * Created on 2006-08-14, 2:59 PM
 */
package myclass;

import java.io.*;
import java.util.*;
import javax.mail.*;
import javax.mail.internet.*;

/**
 * @author weiming514
 */
public final class MailBean extends Object implements Serializable {

    private String to = null;
    private String from = null;
    private String subject = null;
    private String message = null;

    public static Properties props = null;
    public static Session session = null;

    static {
        /* Setting properties for the SMTP host */
        props = System.getProperties();
        props.put("mail.smtp.host", "hostname");
        session = Session.getDefaultInstance(props, null);
    }

    /* Setter methods */
    public void setTo(String to) {
        this.to = to;
    }

    public void setFrom(String from) {
        this.from = from;
    }

    public void setSubject(String subject) {
        this.subject = subject;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    /* Sends the email */
    public void sendMail() throws Exception {
        if (!this.everythingIsSet())
            throw new Exception("Could not send email.");
        try {
            MimeMessage message = new MimeMessage(session);
            message.setRecipient(Message.RecipientType.TO, new InternetAddress(this.to));
            message.setFrom(new InternetAddress(this.from));
            message.setSubject(this.subject);
            message.setText(this.message);
            Transport.send(message);
        } catch (MessagingException e) {
            throw new Exception(e.getMessage());
        }
    }

    /* Checks whether all properties have been set */
    private boolean everythingIsSet() {
        if ((this.to == null) || (this.from == null) ||
                (this.subject == null) || (this.message == null))
            return false;
        if ((this.to.indexOf("@") == -1) ||
                (this.to.indexOf(".") == -1))
            return false;
        if ((this.from.indexOf("@") == -1) ||
                (this.from.indexOf(".") == -1))
            return false;
        return true;
    }
}
    Error:
    unknown smtp host: hostname

> and the one included in the code below.
You have written "hostname" in the code below. Since "hostname" is certainly NOT a valid SMTP host name, that's why you get the error: unknown smtp host: hostname.
What did you expect? You must provide a valid SMTP host name, NOT "hostname". You see the difference, right?
Try your ISP's (Internet Service Provider) SMTP host name.
It should be something like: smtp.somename.com.
    That's the one you must set in your email client (Outlook, Thunderbird, Eudora, Lotus Notes, whatever) to send your emails.
Next time please paste your code between [code] tags with the help of the code button just above the edit message area.
    Regards
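To make the fix concrete: the only change the bean above needs is a real SMTP host in the mail.smtp.host property. A small sketch of that property setup, using only the standard library (smtp.example.com is a placeholder, not a working server):

```java
import java.util.Properties;

public class SmtpProps {

    /** Builds JavaMail-style properties for a given SMTP host. */
    public static Properties forHost(String smtpHost) {
        Properties props = new Properties();
        // Must be a real, resolvable host, e.g. your ISP's SMTP server
        props.put("mail.smtp.host", smtpHost);
        return props;
    }

    public static void main(String[] args) {
        Properties p = forHost("smtp.example.com"); // placeholder host
        System.out.println(p.getProperty("mail.smtp.host"));
    }
}
```

A Session created from these properties would then be passed to the MimeMessage, exactly as in the original code.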

  • Best method for timestamping? (for later use with perl script)

What is the best method I can use to timestamp events in Linux for later use with a Perl script?
I am performing some energy measurements, where I am running several tasks with 20 seconds in between. Before I start any execution of tasks, I always place an initial delay so I can start the script and the measurement device.
My problem is that I don't know exactly how long that first delay is. To solve this, I thought I could use date commands to timestamp all tasks, or at least to timestamp the first delay.
    Here is example of what I am doing:
    1st delay
    task 1
    20s
    task 2
    20s
    task 3..... etc
    What would be the best to use?

    logger.
It posts messages straight to the system log. You can see the message, in all its glory, using tools like journalctl. You will see the message, the date, the time, the host name, the user name, and the PID of logger when it ran.
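If the harness itself can emit the timestamps, another option is to print ISO-8601 stamps that are trivial to parse later; a rough sketch of the idea in Java (the wrapper class is illustrative, and the sleep stands in for the real 20 s gap):

```java
import java.time.Instant;

public class Stamp {

    /** Prefixes a message with an ISO-8601 UTC timestamp, easy to parse later. */
    public static String stamp(String msg) {
        return Instant.now().toString() + " " + msg;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(stamp("initial delay start"));
        Thread.sleep(100); // stand-in for the 20 s gap between tasks
        System.out.println(stamp("task 1 start"));
    }
}
```

Each line then starts with a machine-readable instant, so the length of the first delay falls out of simple subtraction in the later script.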

  • Best method for networking with ubuntu linux

    Hi,
    I'm setting up an ubuntu linux fileserver, and I was wondering what the best method for filesharing with my mac is. I'm currently running osx 10.4.11, though I may be upgrading to 10.5 soon. I'll be running SMB networking for a couple of other computers, but I'd prefer something a bit more robust that can handle file permissions etc.

Mac OS X supports NFS out of the box. Configuration isn't documented.
I recall Apple got rid of NetInfo Manager in Leopard, so the configuration will be different. Perhaps more Unix-like.
Mac OS X supports the Unix Network File System (NFS). However, it leaves out
the GUI.
    This page show you how to use NetInfo Manager:
http://mactechnotes.blogspot.com/2005/09/mac-os-x-as-nfs-server.html#c116822171340271068
    NFS Manager can both setup NFS shares and connect to NFS shares.
    http://www.bresink.com/osx/NFSManager.html
    Once you figure out how NFS Manager configures the NFS shares, you can
    use Applications > Utilities > NetInfo Manager to create more shares.
    You will either have to coordinate Unix Userid number and Unix Group Id number or use the mapall option on the share.
    To find out your Mac OS X userid and group id do:
    applications > utilities > terminal
    ls -ln
    ls -l
    # lists the NFS share on your mac
    showmount -e localhost
    #list NFS shares on a remote host
    showmount -e remote-ip-address
    Once you see what NFS Manager does, you will be able to use NetInfo Manager to manage a connection. In Mac OS 10.4 you can configure the /etc/exports control file. See man exports for the details. Before that you had to have the data in NetInfo manager. When Mac OS X came out, many common Unix control files were not present. Instead the data had to be in NetInfo manager. Over time Apple has added more and more standard Unix control files.
    ======
    You do know about the need to match userids & groupids.
    # display uid and gid
    ls -ln
    sudo find / -user short-user-name -exec ls '-l' {} \;
    # on Mac OS X
you will need to go into NetInfo Manager, select user, and find your short-user-name. Change the uid and gid.
    #on Linux look in
    /etc/passwd
    /etc/group
    # with care...
    # change 1000:20 to your values for uid:gid
    sudo find / -user short-user-name -exec chown 1000:20 {} \;
The manual for Tenon MachTen UNIX (which Apple checked when doing Mac OS
X) says that one should create the file /etc/exports, which will cause
portmap, mountd and nfsd to launch at startup via the /etc/rc file. The
file must not contain any blank lines or comments, and each line has the
syntax
    directory -option[, option] hostlist
where 'directory' is the pathname of the directory that can be exported,
and 'hostlist' is a space-separated list of hostnames that can access the
directory. For example
    /usr -ro foo bar
    /etc
    /Applications
    /User/gladys gladys
The client then uses a command like
    /sbin/mount -t type [-rw] -o [options] server:directory mount_point
    where 'type' is 'nfs', 'server' the name of the server, 'directory' the
    server directory mounted, and 'mount_point' the client mount point. See
    'man mount' for details.
    I haven't tried the above, but it would be nice to know if it works on Mac OS X.
    Hans Aberg
    This will give you some hints on NFS. Post back your questions.
    Robert

  • Standardise definition & selection method for ALL third party plugin presets!!!

    Over at LogicProHelp there is a very useful topic relating to Logic presets - in which many users have spent time saving individual third party plugin settings as LOGIC presets... which is enormously helpful when selecting instruments etc because they can be selected using a keystroke etc and found within the Logic Library browser window - which is of course searchable!
    These presets have then been uploaded to the forum for others to download.
    http://www.logicprohelp.com/viewtopic.php?p=367267#367267
    I posted on the topic recently and thought it worth mentioning over here on the Apple forum.
    Here's what I said - does anyone agree/disagree? Is this something that Apple should work on? For me it's a no brainer but I'd like to know what others think....
"IMO, 3rd party instruments that rely on mouse-clicks or anything more than one button press to change a preset are incredibly badly designed. It's massively frustrating and impedes creative flow to have to use a mouse, or press several keyboard buttons, just to move from one sound to another. Native Instruments interfaces are amongst the worst offenders - you even have to MANUALLY specify which presets you want to use with specific MIDI Program Change messages - because the latter are the only way I know of using anything other than the NI interface to change sounds in their plugins.
    The Logic Channel Strip settings saved along with 3rd party Plugin settings saved as Logic Presets have proved a recent revelation for me.
    Now I can change instrument presets using a keystroke, a midi message, almost anything I want.
    And then there's the Logic Library browser - now that so many sounds are saved as Logic Presets, the Logic browser window has become really powerful - being able to search my entire library for "bass" or a particular instrument name - REGARDLESS of which third party plugin is required to play the sound - IT JUST LOADS AND PLAYS!
    Maybe I'm in the minority but I think of a sound I want first, NOT which instrument I should load and then browse through - the latter is kind of backwards for me.
    I really think that we users should pressure Apple and plugin developers to provide Logic (and indeed other DAWs) presets with ALL of their products because the amount of effort on their part is minimal compared with countless end-users doing this task manually over and over. The current situation is ridiculous.
    DAWs are incredibly powerful these days but the lack of a plugin preset/sound library definition standard is quite crippling to work flow.
    I mean if there were a STANDARD LIBRARY DEFINITION such as with Native Instrument libraries or Apple loops where sounds are defined universally and supplied as such in a common preset it would revolutionise sound discovery/selection within DAWs.
    Kind of like a document management system that applies to ALL plugins, by ALL developers and the installer for a plugin would then add all its presets to the management system/common library"

    Sid Merrett wrote:
    Can you give me an example of a plugin that doesn't work with the Logic save preset function?
    Sure, I could give you lists of plugins for each of those particular scenarios I addressed.
In that specific case, for one example, Spectrasonics instruments for a few years did not support the host's save/recall settings function (at least in Logic), and when contacting them and asking why Stylus RMX could not recall settings saved via Logic's preset system, I was given the explanation that it wasn't supported for technical reasons. Basically, it only saved/recalled a small handful of settings; the rest were lost, meaning it was impossible for quite some time to use Logic's save/load settings feature to save user presets, you had to use the plugin's own internal save features. (Yes, the instrument's state was saved in the song, but *wasn't* able to be saved via the preset handling, for technical reasons.)
A year or so later, and in later Logic versions, they finally got round to making it work, as the various parts of the host and instruments were upgraded to handle larger data requirements. Spectrasonics instruments are examples of instruments with very specific, custom browser features, many of which you lose if you save data using Logic's functionality - not to mention the effort of saving, naming and organising 8000+ patches per instrument. Plus you have many levels of data - you often load a loop into a part, and a part into a group of parts, and they are different types of data. The host's save/load feature only saves the state of the plugin - it can't be used to load Loop 14 into part 2, for example. It's the whole plugin, or nothing. More workarounds in terms of how you organise your loading/saving of different types of data.
There are other instruments too, and ones I've beta tested where I've got into the technical details and had to have the preset format changed or simplified because hosts had difficulties storing the data correctly. It's quite a complex thing, in the main, and different instruments' implementations of preset handling mean that the whole thing is full of workarounds, compromises, or plain failure to use the feature completely.
    Other instruments, such as Minimonsta, lose functionality when saving via the host - for example, you only get the meta patches and patch morphing functions when saving via the plugin's own file format, and that stuff doesn't happen when saving via the host.
I could go on and on about this - every developer has different requirements, and they mostly all either do it very basically and rely on the host, or bypass the host's save/load functionality and implement their own which they can be in control of and that will work across hosts and across platforms. For instance, there is little point having a beautifully organised Oddity library if all those sounds are now unavailable to me if I choose to work in Live...
    That's why the whole preset thing is a complicated mess and isn't liable to be sorted soon.
    There is a developer over on the PC trying to make a system to unify this stuff between plugins - it's PC only right now, and I forget the details this second, but it's acknowledgement that people recognise this whole thing is a mess and looking at ways to improve the situation.
    Sid Merrett wrote:
    I hadn't spotted that - thanks. You can't use the Logic Library window to search for Plugin Settings, only Channel Strip Settings.
You *can* use it to search for plugin presets, but only within the context of a single plugin. I.e., if I select the ES2, I can see, and search, the ES2 plugin settings. But often I want "a bass sound"; I don't necessarily know that I want an "ES2 bass sound", or a "Minimonsta bass sound", or whatever.
The point being, I just want a sound that will work, and I don't know where it will come from. Forcing me to choose an instrument first can be counterproductive. How many times have you needed a sound, and you go "OK, there might be something in pluginX that'll work?" - you load up pluginX, flip through the sounds, don't find anything useful, then go "OK, let's try pluginY", no... pluginZ, etc. etc.
    I miss Soundiver in many ways. *Ideally*, I'd like one central method of storing my presets which will work cross-plugin, cross-host, cross-format and cross-platform that is a standard that all developers and hosts support, and that offers the main core features that developers need, from simple, single patches with names, up to full meta data, cataloging, organising, author and notes tagging and so on. You can still give the users a choice to use the individual plugin and individual plugin's gui for patch selection, but you can get far more standardised if you want, with the advantages that gives - and you don't have to export patches into the system, as it's developer supported, all presets for your instruments would be instantly available in the library, just as they are in the instrument directly.
    But it's difficult to get developers to agree on much, these days - the most likely thing to happen is someone somewhere creates a cross-platform standard for this that gains momentum and that developers and hosts want, or *need* to support.

  • No 'processFunction' method found on RFM call - JNDI questions

I have been struggling with calling a Java RFM from ABAP.
    ABAP makes the call with the name of the function all in upper case, apparently.  (Can anyone confirm that?)  Unfortunately, the bean name is in mixed case, and after deployment, the RFC engine is unable to find the function.  It was my belief that I could define a JNDI name for the Enterprise Bean>Session Bean>JNDI name.  However, when I do so, and then deploy, the name doesn't seem to appear in the JNDI directory using Visual Administrator.
Q1:  Is there some series of operations that must take place in order for the bean's JNDI name to get placed into the directory, other than deployment?  It feels like the JNDI directory isn't really getting updated at deployment, but only after I bounce the server...
    At any rate, after much thrashing, restarting servers, etc., lo and behold! the JNDI name showed up in the directory root, and apparently the RFC engine was able to find it, because he quit dumping me off with 'function not found'.  However, now he's saying that the processFunction method is missing (which it's not). I'm wondering if I've generated the correct JNDI entry, and if not, what should I do to fix it?
Q2.  Here are the JNDI directory root entries for my JNDI name - do they look right?  (The strings of periods don't exist - just trying to show indentation.) How does this entry lead the RFC engine to my bean?
    Z_RFC_EXAMPLE  (which is my JNDI name)
    ....[Class Name]: com.sap.engine.interfaces.cross.ObjectReferenceImpl
    ....[Object Value]: 95
    I also found these entries in the JNDI directory: localejbs [Context]
    ...Z_RFC_EXAMPLE
........[Class Name]: <package>.Z_rfc_exampleLocalHomeImpl0
    ........[Object Value]: NON Serializable Object 
    I also found:
    rfcaccessejb [Context]
    ....Z_RFC_EXAMPLE
    ......[Class Name]: javax.naming.Reference
    ......[Object Value]: Reference Class Name: localejbs/Z_RFC_EXAMPLE
    Q3:  Are these entries enough for the RFC engine to handle a call from ABAP for Z_RFC_EXAMPLE and invoke my bean Z_rfc_exampleBean?  If so, how come he can't find my processFunction() method?  If not, how do I specify things for deployment so that he can find the right bean and method?
    Sure hope someone out there can help on this; it's very frustrating...

    Hi David,
    > ABAP makes the call with the name of the function all
    > in upper case, apparently.  (Can anyone confirm
    > that?)
    Function module names are always uppercase in ABAP, so the call will be in uppercase, too.
    >  Unfortunately, the bean name is in mixed
    > case, and after deployment, the RFC engine is unable
    > to find the function.  It was my belief that I could
    > define a JNDI name for the Enterprise Bean-->Session
    > Bean-->JNDI name. 
    This is true, the bean name doesn't matter actually, if you define the JNDI name of the bean as the uppercase name of the function module, which should be called. If the function module in the ABAP system is "RFC_CALL_J2EE", then the JNDI name of the bean, which handles the call, should also be "RFC_CALL_J2EE".
    >However, when I do so, and then
    > deploy, the name doesn't seem to appear in the JNDI
    > directory using Visual Administrator.
    >
> Q1:  Is there some series of operations that must
> take place in order for the bean's JNDI name to get
> placed into the directory, other than deployment?
> It feels like the JNDI directory isn't really
> getting updated at deployment, but after I bounce
> the server...
No, the server application (containing the beans you want to be called) must be deployed without errors <b>and</b> must be running. If both are true, the JNDI entries "contained" in the server application are added to the JNDI registry. You can check whether an application is deployed and running using the "Deploy" service of the Visual Admin.
    >
    > At any rate, after much thrashing, restarting
    > servers, etc., lo and behold! the JNDI name showed up
    > in the directory root, and apparently the RFC engine
    > was able to find it, because he quit dumping me off
    > with 'function not found'.  However, now he's saying
    > that the processFunction method is missing (which
    > it's not). I'm wondering if I've generated the
    > correct JNDI entry, and if not, what should I do to
    > fix it?
    >
    If the processFunction method is searched, the RFC provider service is able to find the bean, so the JNDI entry is correct.
> Q2.  Here are the JNDI directory root entries for my
> JNDI name - do they look right?  (The strings of
> periods don't exist - just trying to show
> indentation.) How does this entry lead the RFC engine
> to my bean?
    >
    > Z_RFC_EXAMPLE  (which is my JNDI name)
    > ....[Class Name]:
    > com.sap.engine.interfaces.cross.ObjectReferenceImpl
    > ....[Object Value]: 95
    >
    >
    > I also found these entries in the JNDI directory:
    > localejbs [Context]
    > ...Z_RFC_EXAMPLE
> ........[Class Name]:
> <package>.Z_rfc_exampleLocalHomeImpl0
    > ........[Object Value]: NON Serializable Object 
    >
    > I also found:
    >
    > rfcaccessejb [Context]
    > ....Z_RFC_EXAMPLE
    > ......[Class Name]: javax.naming.Reference
    > ......[Object Value]: Reference Class Name:
    > localejbs/Z_RFC_EXAMPLE
    >
    Yes, they're ok. The RFC provider service uses the "rfcaccessejb/Z_RFC_EXAMPLE" for the lookup. Since the service is able to find the bean, this problem is solved.
    > Q3:  Are these entries enough for the RFC engine to
    > handle a call from ABAP for Z_RFC_EXAMPLE and invoke
    > my bean Z_rfc_exampleBean?  If so, how come he can't
    > find my processFunction() method?  If not, how do I
    > specify things for deployment so that he can find
    > the right bean and method?
    >
    There are (at least) two important things for the method search to succeed:
    1. The method signature has to be:
    public <anyReturnType> processFunction(com.sap.mw.jco.JCO.Function function) throws <exceptionList>
    The return type and exception list don't matter, since they are not relevant for the method search via reflection; only the method name and the single JCO.Function parameter are checked.
    2. The server application classloader must "know" JCO. So please ensure that you have added the reference to the JCO library to your EAR project as well, NOT only to your EJB module project.
    Hope that helps.
    Regards
    Stefan
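    Stefan's two points can be illustrated with a self-contained sketch of the reflective method search the RFC provider service performs. `Function` below is just a stand-in for `com.sap.mw.jco.JCO.Function` (so the snippet compiles without the JCO library), and the bean class name is illustrative:

    ```java
    import java.lang.reflect.Method;

    public class MethodSearchDemo {
        // Stand-in for com.sap.mw.jco.JCO.Function, only so this sketch is
        // self-contained; the real service resolves the type from the JCO
        // library, which is why the EAR needs the library reference.
        static class Function {}

        // Bean-like class with the required signature shape: any return type,
        // any throws clause, exactly one Function parameter.
        static class Z_rfc_exampleBean {
            public String processFunction(Function function) throws Exception {
                return "called";
            }
        }

        // Mimics the provider's lookup: find "processFunction" taking one
        // Function argument; return type and exceptions play no role.
        static Method find(Class<?> beanClass) throws NoSuchMethodException {
            return beanClass.getMethod("processFunction", Function.class);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(find(Z_rfc_exampleBean.class).getName());
        }
    }
    ```

    If the stand-in type were loaded from a different classloader than the one the service uses, the lookup would fail the same way the real library mismatch does.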

  • I/O Speed for the HP H221 Host Bus Adapter- Need IO/s or IOPS

    Please point me to HP specifications with I/O Speed for the HP H221 Host Bus Adapter. 
    650931-B21
    HP H221 Host Bus Adapter
    The parameter needed is the IO/s or IOPS, i.e. the number of reads/s or writes/s that can be achieved.

    Phil.
    Dimensions (excluding bracket)
    6.6 x 2.7 x 0.6 in (16.8 x 6.86 x 1.6 cm)
    Disk Drive and Enclosure Protocol Support
    6 Gbps SAS (Serial Attached SCSI)
    6 Gbps SATA (Serial Advanced Technology Attachment)
    Architecture
    Serial Attached SCSI (SAS)
    Serial Advanced Technology Attachment (SATA)
    SAS Connectors
    2x4 external SFF-8088 (Mini-SAS)
    Data Transfer Method
    600 MB/s bandwidth (1200 MB/s, full duplex) per physical lane, for combined throughput of up to 48 Gb/s
    PCIe Bus Speed
    PCIe 3.0 x8 (8 GT/s maximum theoretical bandwidth) or PCIe 2.0 x8 (5 GT/s maximum theoretical bandwidth) depending on the model
    Port Transfer Rate
    24 Gb/s per x4 Wide SAS Port (4 x 6 Gb/s)
    Memory Addressing
    64-bit, supporting servers memory greater than 4 GB
    Upgradeable Firmware
    4-MB flashable ROM online/offline
    http://www8.hp.com/h20195/v2/GetHTML.aspx?docname=​c04322800
    REO
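    The QuickSpecs above quote bandwidth, not IOPS; HP does not publish an IOPS number for this HBA, since random-I/O rates are set by the attached drives rather than the adapter. A back-of-the-envelope upper bound at a given transfer size can still be derived from the lane bandwidth (a sketch, not an HP figure):

    ```java
    public class IopsEstimate {
        // Upper-bound I/O rate for a link of the given bandwidth (MB/s)
        // at a fixed transfer size (KB). This is a ceiling imposed by the
        // link alone; real IOPS depend on the drives and the workload.
        static long maxIops(double bandwidthMBps, double blockKB) {
            return (long) (bandwidthMBps * 1024 / blockKB);
        }

        public static void main(String[] args) {
            // One 6 Gb/s SAS lane moves roughly 600 MB/s after 8b/10b
            // encoding; at a 4 KB transfer size that is the per-lane ceiling.
            System.out.println(maxIops(600, 4) + " IOPS ceiling per lane at 4 KB");
        }
    }
    ```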

  • What is Licensing Method for SCCM and SCOM 2012

    What is Licensing Method for SCCM and SCOM 2012
    We have 75,000 clients, so we are going to implement an SCCM and SCOM 2012 environment in Azure. We plan to have one Central Administration Site and three primary sites (to manage these clients), hosted in Azure. I need to know how the licensing
    works for this environment and how many licenses we need. Is it charged per client, per site, or per environment? Please advise.
    Thank you
    Fazal
    Fazal(MCTS)

    Hi,
    Running the SCCM and SCOM servers themselves in Azure to manage clients outside Azure is not really supported, if I remember correctly:
    http://blogs.technet.com/b/configmgrteam/archive/2013/10/23/configmgr-and-endpoint-protection-support-for-windows-azure-vms.aspx
    Licensing is explained here.
    http://www.microsoft.com/licensing/about-licensing/SystemCenter2012-R2.aspx
    In short, for a client OS you need a CAL per client; this is included in the Core CAL and Enterprise CAL suites.
    Servers are licensed per CPU and can also be licensed for all virtual servers on a host, as covered in the guide above.
    P.S. As a side note, a CAS is not really necessary in most cases if you don't have more than 100,000 clients; it adds a lot of complexity and I would avoid it if I could.
    Regards,
    Jörgen
    -- My System Center blog ccmexec.com -- Twitter
    @ccmexec

  • RE: [REQ] Assisted management tools/methods for planshierarchies(imp/ex

    Hi William, and all forté subscribers.
    Thanks for your answers.
    To discuss on honest basis, let's say we don't plan to buy any product yet.
    But the 'small' forté code management is a source of many problems.
    We're interfacing the 'workspaces handling' with text-based source control
    tools, and this can allow proper 'control'. Shame we can't build forté
    applications from proper text files :)
    Our issues are more about understanding the hierarchy between projects when
    it's not documented. Therefore, it's quite a reverse-engineering point of view,
    even though we stay at a "forte TOOL code" level.
    For your information, we have Select, used as " reference " model, and as
    interface to Express.
    For the Issues I've expressed, here's my (current) status.
    I learned 'b-tree' repositories have become standard for 3.0 (despite 2.0 had many),
    and therefore we're extremely waiting for 'R.4.0' code management.
    We've got a way to "visualize" plan dependencies from an exported workspace,
    but this doesn't allow 'update' (it lacks forté's workspace cross-verification facility).
    We've got another way to export a 'workspace' in both 'wex' and set of 'pex's,
    and building the import script from the wex file using an intelligent grep/sed script.
    Thanks for all your suggestions,
    J-Paul GABRIELLI
    Integration Engineer
    De: William Poon[SMTP:[email protected]]
    Date d'envoi: vendredi 10 juillet 1998 16:58
    A: 'Jean-Paul GABRIELLI'
    Cc: John Apps; Forte -- Development
    Objet: RE: [REQ] Assisted management tools/methods for plans hierarchies(imp/exports, supplier plans management & external APIs control)
    Hi Jean-Paul,
    One of our consultants has forwarded your message to me. I am
    extremely interested in learning more about your requirement. I am the
    lead engineer in the Compaq-Digital enterprise organization building
    component based development tools. Following are some of my thoughts as
    well as questions.
    William Poon, Compaq-Digital
    -----Original Message-----
    From: John Apps
    Sent: Friday, July 03, 1998 6:40 AM
    To: Forte -- Development
    Subject: FW: [REQ] Assisted management
    tools/methods for plans hierarchies (imp/exports, supplier plans
    management & external APIs control)
    From the Forte-Users newslist: I think this person is
    looking for CBD tools?! Perhaps William has an answer out of his
    research?
    Cheers,
    From: Jean-Paul
    GABRIELLI[SMTP:[email protected]]
    Reply To: Jean-Paul GABRIELLI
    Sent: Friday, July 03, 1998 11:28
    To: '00 Forte Mailinglist'
    Subject: [REQ] Assisted management tools/methods
    for plans hierarchies (imp/exports, supplier plans management & external
    APIs control)
    Hi,
    I'm looking for cross-projects investigation tools, to
    provide graphical understanding and management of
    forté source code. Viewing/updating plan dependencies,
    as well as managing external APIs calls are my
    requirements.
    I am not clear about this question. But I will give my best shot. Are
    you looking for some form of "source profiling" or "reverse engineering"
    tool where by it reads the Forte TOOL code and turn that into UML which
    then can be displayed in graphical form. My understanding is that
    SELECT's Enterprise has this capability for C/C++ code. They also work
    closely with Forte so they might have something that will work with TOOL
    code.
    In order to manage international developments between
    sites, applications have been split into 'components'.
    Therefore, and to keep it simple, each component comes
    out of a separate dedicated repository.
    At integration and build time, those sets of source code
    are merged together, supplier plans updated and
    forté applications built.
    Controlling UUIDs at export time keeps simple the reload
    of new delivered versions.
    But my issue is in the physical process to actually
    (get) deliver(ed) those sets of source code.
    Forté fscript allows exporting classes, plans, or
    workspaces.
    Only 'plan exports' (pex) can provide a way to only
    deliver plans which have to be delivered.
    (i.e. without test projects, stubs or third party
    products plans which could be part of the workspace).
    Therefore, whereas an export script can easily be
    automated (list plans, and then for each plan find it
    and then export it with a meaningful name), the import
    process can't, because of plan dependencies.
    In order to assist that process, I would like to know if
    any of you did find ways to :
    1) Display in a tree view the plans hierarchy for a given
    workspace, or for a given repository baseline
    I don't think you can do it in Forte 3.0. But my I understanding is
    that they will have this capability in Forte 4.0. But Forte people will
    have more information.
    2) Export from a given workspace plans as separate
    files, as well as related import script (with proper sequence)
    3) Get from a set of pex files a plans hierarchy and a
    proper import script.
    Current workaround has been first to 'batch load' all
    the pex files until having them all loaded
    (the first pass loads top providers, then more and more as
    dependencies are resolved).
    Another one has been spending time grep'ing from pex
    files the 'includes' lines and designing on paper
    the tree. But that's long and evolving.
    Thanks for ideas and suggestions,
    J-Paul GABRIELLI
    Integration Engineer
    France
    To unsubscribe, email '[email protected]' with
    'unsubscribe forte-users' as the body of the message.
    Searchable thread archive
    <URL:http://pinehurst.sageit.com/listarchive/>
    To unsubscribe, email '[email protected]' with
    'unsubscribe forte-users' as the body of the message.
    Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>
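    The import-ordering problem discussed above (a pex file can only load after its supplier plans) is a topological sort over the 'includes' dependencies. A minimal sketch, assuming the dependencies have already been grep'ed out of the pex files into a map from each plan to its suppliers; the plan names are hypothetical:

    ```java
    import java.util.*;

    public class PlanImportOrder {
        // Returns plans ordered so every supplier precedes its dependents,
        // i.e. a sequence that loads all pex files in a single pass.
        // Assumes the plan graph is acyclic, as Forte requires.
        static List<String> importOrder(Map<String, List<String>> suppliers) {
            List<String> order = new ArrayList<>();
            Set<String> visited = new HashSet<>();
            for (String plan : suppliers.keySet()) {
                visit(plan, suppliers, visited, order);
            }
            return order;
        }

        private static void visit(String plan, Map<String, List<String>> suppliers,
                                  Set<String> visited, List<String> order) {
            if (!visited.add(plan)) return;  // already handled
            // Depth-first: record all supplier plans before this one.
            for (String dep : suppliers.getOrDefault(plan, List.of())) {
                visit(dep, suppliers, visited, order);
            }
            order.add(plan);
        }

        public static void main(String[] args) {
            // Hypothetical hierarchy grep'ed from pex 'includes' lines.
            Map<String, List<String>> deps = new LinkedHashMap<>();
            deps.put("AppPlan", List.of("ServicesPlan", "DomainPlan"));
            deps.put("ServicesPlan", List.of("DomainPlan"));
            deps.put("DomainPlan", List.of());
            System.out.println(importOrder(deps));
        }
    }
    ```

    The same ordering also yields the "proper sequence" for the generated import script asked about in point 2 of the question.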

    Maybe Hamish Speirs can explain it - it was his post in another thread that gave me the idea and commands to try (see http://forums.novell.com/forums/nove...r-10038-a.html).
    We had a confluence of changes at the beginning of the semester (Sept) that no doubt helped contribute to the problem and yet also mask the real cause to a certain extent.
    1. The Thawte cert expired and was replaced with a new cert - Thawte does not support doing renewals on NetWare. This happened around the start of Sept.
    2. School semester begins. Thousands of students return.
    3. We use Pcounter for pay-for-print and it uses httpstk to provide a webpage for students to authorize print jobs.
    4. Printing activity in general goes way up.
    5. All printers are Secure.
    6. Apache, iPrint and httpstk all use the same Thawte certificate
    7. The print server was also hosting the netstorage service which also uses the Thawte cert (via apache).
    8. The print server was recently (August) virtualized (via p2v using the excellent Portlock Storage Manager)
    Eventually I built a new NetWare vm to host print services and got a new cert so at least the netstorage and print services were no longer running together. I suspected at that point that the likely source of the abends was NetStorage since Nile and SSL were almost always involved in the abends.
    After the separation the issues continued - so it wasn't NetStorage's fault. Desperate searching of the 'net led to H.'s post. The rest is history!
    It has now been 9 days of uptime without a single Nile/SSL-related abend (I had one abend in Pcounter, but services survived).
    Ron
    "Seasoned Greasings and Happy New Rear!"

  • Error in Pre- Export methods for a request

    Hi All,
    I am getting an error while releasing a request; it gives
    "Error in Pre-Export methods for a request DM0k.....". Could you please tell me why, and how I can rectify it?
    Thanks.
    NA

    Please go through the link below:
    [SAP Transport Request |http://www.sap-basis-abap.com/sapbs008.htm]
    thanks
    G. Lakshmipathi
