No JobServers are configured for ISJobServerGroup. (COR-10715)

Hello,
I have a problem when doing data profiling in Information Steward:
No JobServers are configured for ISJobServerGroup. (COR-10715)
com.bobj.mm.sdk.SDKException: No JobServers are configured for ISJobServerGroup. (COR-10715)
I searched the forums and marketplace for this error and tried all the suggestions, such as creating a Job Server with the "Data Services Server Manager", but it still didn't work.
For information: I installed BO Platform 4.0, Data Services 4.0 SP1 and then Information Steward 4.0, with a SQL Server 2008 database.
Any help will be appreciated...
Thank you.

Hi laksh, thank you for your reply.
laksh89 wrote:
hi,
you have to configure your server and then associate it with the IS repository.
What I've done so far:
Created the central, profiler and local repositories using the Data Services Repository Manager.
Configured the repositories (Central, Profiler and Local) in the BO Central Management Console.
Created a group for the central repository and assigned a user in the Data Services Management Console.
Created a local Job Server and a profiler Job Server with the Data Services Server Manager.
Added the central repository in the Data Services Designer and activated it.
laksh89 wrote:
> set the environment variable first in command line and  in service manager run
> $ cd $LINK_DIR/bin/
About this, I don't know how to run the Server Manager. When I go to $LINK_DIR/bin/ from the command prompt, I can't find a Server Manager application; I only found al_jobserver, al_jobservice, etc.
laksh89 wrote:
> $ . ./al_env.sh
> $ ./svrcfg
I didn't find those either.
laksh89 wrote:
> enter 3: configure server
> enter c: add server, then give a name and specify the port of the server
> enter a: add to the repository connection; here you have to specify the connection info
>
> once everything is done press q and then x to exit the service manager.
>
> Also, when you installed Data Services, please make sure you selected Job Server under server components in the "select features" option, and that MDS and VDS were chosen during the DS installation, as they are unchecked by default.
When I installed Data Services I did select the Job Server feature, but I didn't find MDS and VDS during the installation.
I'm sorry, this is all quite new to me..
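For reference, a consolidated hedged sketch of the Unix-side session laksh89 describes (on Windows these steps are done through the graphical Data Services Server Manager instead, which matches what you saw: there is no svrcfg binary in LINK_DIR\bin on Windows):

$ cd $LINK_DIR/bin
$ . ./al_env.sh        # load the Data Services environment variables
$ ./svrcfg             # start the interactive Server Manager
    3  -> Configure Job Server
    c  -> add a Job Server (give it a name and a port)
    a  -> add a repository connection (enter the connection info)
    q  -> back to the main menu
    x  -> exit; restart the Data Services service afterwards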

Similar Messages

  • No Logged on Office Users are configured for IRM

    Setup:
    AD RMS on 2012 configured with https Crypto 1
    Exchange 2010 SP3 RU6
    Office 2013
    I can open OWA and use IRM to protect a document but any Office application I open and try to protect gives me an error like this:
    No Logged on Office Users are configured for Information Rights Management (IRM).
    I have googled and taken a look at all of the options out there and everything seems to be configured correctly.
    Any ideas or other troubleshooting tips I could try?

    Hi JerHiggs123,
    It seems that this is a known bug. 
    Please try the following fix:
    Open regedit.
    Go to HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Common\Identity\Identities\[email protected]
    Delete the registry value "SignedOut" (REG_DWORD, data 00000001).
    If there is no such value, add it with the name "SignedOut", type REG_DWORD and data 00000001.
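    The same fix can be scripted with the built-in reg.exe; a hedged sketch (the [email protected] subkey is a placeholder for whatever identity subkey exists on your machine):

    reg delete "HKCU\Software\Microsoft\Office\15.0\Common\Identity\Identities\[email protected]" /v SignedOut /f

    ...or, if the value does not exist, create it:

    reg add "HKCU\Software\Microsoft\Office\15.0\Common\Identity\Identities\[email protected]" /v SignedOut /t REG_DWORD /d 1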

  • Geekbench results are different for 12-core

    Hello,
    I'm using a 12-core Mid 2010 Mac Pro at 2.66 GHz. After having some performance troubles I ran Geekbench and found that the very same system can produce very different benchmark results. Here are my results (http://browse.geekbench.ca/geekbench2/view/519313):
    Integer (processor integer performance): 17404 / 19222
    Floating Point (processor floating point performance): 33928
    Memory (memory performance): 4194
    Stream (memory bandwidth performance): 4178
    Another system with the same Processor, Motherboard, OS, etc. shows this result (http://browse.geekbench.ca/geekbench2/view/313784)
    Integer (processor integer performance): 22938 / 23771
    Floating Point (processor floating point performance): 40492
    Memory (memory performance): 4707
    Stream (memory bandwidth performance): 6299
    The only difference between them: I have 32 GB RAM and the faster system has 24 GB RAM. But is it possible that this alone causes such a performance difference?

    I've bought it... and now, the result is much faster... http://browse.geekbench.ca/geekbench2/view/519415
    Integer (processor integer performance): 22033 / 22447
    Floating Point (processor floating point performance): 37994
    Memory (memory performance): 4598
    Stream (memory bandwidth performance): 5186
    Unbelievable, but true! I know that Geekbench didn't make my system faster, only the results better... 
    So there's only a slight difference left, which could probably be eliminated by using only 3 memory slots...

  • Stock RAM Configuration for 15" MacBook Pro 2.4GHz Intel Core 2 Duo

    Hello,
    I have a new 15" MacBook Pro 2.4GHz Intel Core 2 Duo on it's way to me and I need to order a RAM upgrade. I know the specifications of what I need, but am getting conflicting information from the retailer on how the stock models are configured for RAM. The total amount is 2GHz. Does anyone know if this amount is provided by one or two RAM modules?
    "Technological change is like an axe in the hands of a pathological criminal.” (Albert Einstein, 1941),
    Dr. Z.

    You'll find the RAM is in a 2 x 1GB configuration. Unfortunately both modules need to come out for you to upgrade to 4GB RAM in a 2 x 2GB configuration. It's still cheaper than going with Apple, which softens the blow a little.

  • OER Artifact Store Setup and Configuration for CVS

    Hello,
    My question is related to proper configuration of a CVS based Artifact Store in Oracle Enterprise Repository.
    I've attempted to configure a CVS Artifact Store from within OER's Asset Editor (as described on page 27 of the OER Configuration Guide & pages 92-93 of the OER Admin Guide.) I have also ensured that this new Artifact Store is selected in the dropdown for the Submission Upload Artifact Store system setting on the OER Admin page. However, my configuration settings for the Store appear to be incorrect and I haven't found a CVS example that has been thorough enough to infer the proper settings.
    So I'm hoping someone can assist me who has been through configuring a CVS Artifact Store for OER. I'll try to provide detailed information below with the hope that it may be of assistance.
    First, analogous CVS settings that are configured for my standard CVS plug-in in Oracle Workshop. These settings are for the pserver protocol, but I think they will provide some value to someone who has experience in configuring a CVS Artifact Store.
    The standard Eclipse CVS plug-in settings for our enterprise repository location:
    Connection Type: pserver
    User: sampleuser
    Password: Pa55wd
    Host: dev003
    Repository Path: /cvs/Integration
    This translates to repository location --> :pserver:sampleuser:Pa55wd@dev003:/cvs/Integration
    (Which is the root of our enterprise CVS repository)
    Now…within this repository location above there is a module (Development/OER-POC) that is located in:
    /cvs/Integration/Development/OER-POC
    …and checked out into a project called "Sandbox" located in the default workspace in Oracle Workshop.
    Additionally, within the organization we also have HTTP access to CVS. For example, one XSD from the module above has an HTTP URI of:
    http://dev003:8080/viewcvs/viewcvs.cgi/Development/OER-POC/src/schemas/ExtOfAddrRef/v1/ExtOfAddrRef.xsd?cvsroot=Integration
    Now as I have attempted to properly set up the configuration for the OER Artifact Store I have "translated" the above information into the following entries on the Artifact Store setup screen:
    Name: CVS Enterprise Store
    Type: Raw SCM
    Hostname: dev003
    SCM Location: Integration (??? Not sure if this has been inferred correctly. If not what should be specified here.)
    SCM Type: CVS
    Download Path URI Suffix: cvsroot=Integration (??? Not sure if this correct based in previous information?)
    Download Path URI: (??? Not sure what should be specified here. I have inferred several logical options but they have not worked.)
    Finally, when I referenced page 62 of the OER Core Registrar's Guide PDF, the "Additional Development documentation" link (http://devwiki.flashline.com/index.php/B02831) states:
    • "All files from an SCM will be URL addressable. The SCM (or a third party) must provide a way to get a particular file based on a URL. In other words, we are not going to use any client libraries to write code that will retrieve us a file from an SCM. "
    • "Added concept of a 'download path' to an artifact store. For example, consider our development environment. Eclipse will have SCM information (i.e. cvs.flashline.com), eclipse/cvs project information (i.e. projects/framework/modules/com.flashline.geneva.rbac), and file/cvs file information (i.e. /code/com/flashline/geneva/rbac/base/RoleContextPersistBroker.java?rev=1.66). Using this info, a fileinfo's uri can be set. The artifact store will then allow us to specify a download base path such as http://cvs.flashline.com/viewcvs/viewcvs.cgi/."
    To conclude my questions are:
    1) Based on the comments in the Registrar's Guide it seems clear that the intent of an Artifact Store is purely for the support of downloading the physical artifact that corresponds to an OER asset. I would conclude that "Raw SCM" based Artifact Stores do not intend to support direct check-ins for the various SCM systems. (rather assets/artifacts in Eclipse would be manually checked in from within the IDE environment). If someone could confirm whether this is correct that would be much appreciated.
    2) Based on the information I supplied for the example enterprise CVS repository...what would the appropriate settings be for these fields on the Artifact Store setup screen:
    a) SCM Location
    b) Download Path URI Suffix
    c) Download Path URI
    3) Since the "CVS" SCM Type does NOT specify fields for username and password (unlike when you select other potential SCM Types in the Store setup screen); how should one handle credentials in CVS repositories?
    Thanks in advance for any assistance.
    ~Todd

    Hello user642477,
    I'm facing the same problem.
    It seems to me that Oracle's guidelines don't give enough information. I'll try to fix it, and if I manage to get it working I'll be in touch...
    By the way, how were you able to browse to the link http://devwiki.flashline.com/index.php/B02831? When I try it, a "page cannot be displayed" message appears.
    Regards
    felipe

  • Drop a database which is configured for alwayson

    Hi All,
    Can I drop a database that is configured for AlwaysOn without first removing it from the availability group?
    In my environment I have a few databases configured for AlwaysOn, and they all belong to the same availability group. If I remove all the databases from the availability group there is no need to retain the group, so what are the steps I should follow to remove the availability group from both the primary and secondary replicas?
    Thanks in Advance,
    Kranthi

    Check these
    https://msdn.microsoft.com/en-us/library/hh213326.aspx
    https://msdn.microsoft.com/en-us/library/ff878113.aspx
    http://blogs.msdn.com/b/psssql/archive/2012/06/13/how-it-works-drop-availability-group-behaviors.aspx
    Regards, Ashwin Menon My Blog - http://sqllearnings.com
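    In outline, the usual T-SQL sequence (a hedged sketch; the AG and database names are placeholders, run on the primary replica):

    -- A database cannot be dropped while it belongs to an availability group:
    -- remove it from the group first (repeat per database).
    ALTER AVAILABILITY GROUP [MyAG] REMOVE DATABASE [MyDb];
    -- Once the group is no longer needed, drop it; this removes the AG
    -- definition from the primary and the secondary replicas.
    DROP AVAILABILITY GROUP [MyAG];
    -- Now the database can be dropped. Copies left on the former secondaries
    -- remain in RESTORING state and must be dropped there separately.
    DROP DATABASE [MyDb];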

  • Need help configuring for POF

    I am trying to use POF to serialize one specific named cache only. The client nodes are configured for near caches with no local storage. I ran into a problem where I got error log complaints that another node in the cluster was not configured for POF serialization for the DistributedCache service. So, I created a new PofDistributedCache service for use by the POF cache. That changed my errors but didn't get me very far.
    Q1: If I have mixed POF / non-POF caches, do they need to use different DistributedCache services?
    Q2: Does the server (back-cache) also need a <serializer> block?
    Q3: Does the server need all the object classes and the classes needed to (de)serialize the objects?
    --Larkin
    Client side coherence-cache-config.xml:
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>pof-*</cache-name>
                   <scheme-name>default-near-pof</scheme-name>
                    <init-params>
                         <init-param>
                              <param-name>front-size-limit</param-name>
                              <param-value system-property="foo.coherence.default.front-size-limit">0</param-value>
                         </init-param>
                    </init-params>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>*</cache-name>
                   <scheme-name>default-near</scheme-name>
                    <init-params>
                         <init-param>
                              <param-name>front-size-limit</param-name>
                              <param-value system-property="foo.coherence.default.front-size-limit">0</param-value>
                         </init-param>
                    </init-params>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <near-scheme>
                   <scheme-name>default-near</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <scheme-ref>default-distributed</scheme-ref>
                        </distributed-scheme>
                   </back-scheme>
              </near-scheme>
              <near-scheme>
                   <scheme-name>default-near-pof</scheme-name>
                   <front-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </front-scheme>
                   <back-scheme>
                        <distributed-scheme>
                             <scheme-ref>default-distributed-pof</scheme-ref>
                        </distributed-scheme>
                   </back-scheme>
              </near-scheme>
              <local-scheme>
                   <scheme-name>default-local</scheme-name>
                   <high-units>{front-size-limit 0}</high-units>
              </local-scheme>
              <!--
                   This config file is for client use only. The back-cache will not
                   provide any local storage to the cluster.
              -->
              <distributed-scheme>
                   <scheme-name>default-distributed</scheme-name>
                   <service-name>DistributedCache</service-name>
                   <local-storage>${coherence.back-cache.storage}</local-storage>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
              </distributed-scheme>
              <distributed-scheme>
                   <scheme-name>default-distributed-pof</scheme-name>
                   <service-name>PofDistributedCache</service-name>
                   <local-storage>${coherence.back-cache.storage}</local-storage>
                   <backing-map-scheme>
                        <local-scheme>
                             <scheme-ref>default-local</scheme-ref>
                        </local-scheme>
                   </backing-map-scheme>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                   </serializer>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Server side coherence-cache-config.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
          <caching-scheme-mapping>
               <cache-mapping>
                    <cache-name>pof-*</cache-name>
                    <scheme-name>default-distributed-pof</scheme-name>
               </cache-mapping>
               <cache-mapping>
                    <cache-name>*</cache-name>
                    <scheme-name>default-distributed</scheme-name>
               </cache-mapping>
          </caching-scheme-mapping>
          <caching-schemes>
               <distributed-scheme>
                    <scheme-name>default-distributed</scheme-name>
                    <service-name>DistributedCache</service-name>
                    <backing-map-scheme>
                         <local-scheme>
                              <scheme-ref>default-local</scheme-ref>
                         </local-scheme>
                    </backing-map-scheme>
                    <autostart>true</autostart>
               </distributed-scheme>
               <distributed-scheme>
                    <scheme-name>default-distributed-pof</scheme-name>
                    <service-name>PofDistributedCache</service-name>
                    <backing-map-scheme>
                         <local-scheme>
                              <scheme-ref>default-local</scheme-ref>
                         </local-scheme>
                    </backing-map-scheme>
                    <autostart>true</autostart>
               </distributed-scheme>
               <local-scheme>
                    <unit-calculator>BINARY</unit-calculator>
                    <scheme-name>default-local</scheme-name>
               </local-scheme>
          </caching-schemes>
    </cache-config>

    Hi Larkin,
    llowrey wrote:
    I am trying to use POF to serialize one specific named cache only. The client nodes are configured for near caches with no local storage. I ran into a problem where I got error log complaints that another node in the cluster was not configured for POF serialization for the DistributedCache service. So, I created a new PofDistributedCache service for use by the POF cache. That changed my errors but didn't get me very far.
    Q1: If I have mixed POF / non-POF caches, do they need to use different DistributedCache services?
    Yes. You can control POF/old-style serialization on a service-by-service basis only.
    Q2: Does the server (back-cache) also need a <serializer> block?
    It is not relevant on the near cache. The scheme defining the back cache (and invocation service and replicated cache schemes) needs to have the serializer specified.
    Q3: Does the server need all the object classes and the classes needed to (de)serialize the objects?
    If you want to deserialize the objects, then certainly it does. But with POF you don't necessarily need to deserialize entries from partitioned caches to define indexes or run entry processors/aggregations on them. You can leverage PofExtractors and PofNavigators to do all your server-side logic, although for complex data access it may be less efficient. You do need the key classes (on the NamedCache caller side) to be able to do operations on a partitioned cache, though.
    Best regards,
    Robert
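    In other words, a hedged sketch of the server-side scheme with the serializer Robert mentions added (the POF configuration file name is a placeholder):

    <distributed-scheme>
         <scheme-name>default-distributed-pof</scheme-name>
         <service-name>PofDistributedCache</service-name>
         <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                   <init-param>
                        <param-type>string</param-type>
                        <param-value>my-pof-config.xml</param-value>
                   </init-param>
              </init-params>
         </serializer>
         <backing-map-scheme>
              <local-scheme>
                   <scheme-ref>default-local</scheme-ref>
              </local-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>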

  • PFIs on E series configured for input, by default?

    Are the PFIs on the E series configured, by default, for input?
    I work with a 6071E board and am trying to trigger analog acquisition with an external trigger on PFI0/TRIG1, but it's not working.
    Of course, there could be any number of reasons for this (especially comedi software...ugh), but I wanted to make sure I don't have to explicitly configure the PFIs for input to use them as such.
    thanks, jon

    The lines are configured for input by default. According to Appendix C of the E Series User Manual, at system power-on and reset both the PFI and DIO lines are set to high-impedance by the hardware. This means that the device circuitry is not actively driving the line either high or low.
    Regards,
    Justin Britten

  • 6602: Want to route a dedicated DIO (0-7) Line configured for output to a RTSI line

    PXI-6602: I want to use a dedicated DIO (0-7) line configured for output to trigger all 8 counters on the 6602 card. The counters are configured for two-signal edge-separation measurement. I have tried to use Route Signal.vi to route PFI n (0-7) to the RTSI bus with no luck.

    You should be able to trigger counters on the 6602 using the Digital Lines DIO (0-7).
    Use the Set Attribute VI and set the attribute value type to Enabled and attribute ID to Start Trigger.
    Wire the output of Set Attribute Task ID to the Task ID input of the Route Signal VI. Select the start trigger for the Signal Name input, PFI n for Signal Source input and PFI line Number for Signal Source Line Number input. Try this and see if this works.
    Regards,
    Bharat Sandhu
    Applications Engineering
    National Instruments."
    Penny

  • The e-mail message could not be sent. Make sure the outgoing e-mail settings for the server are configured correctly

    I have a 2 server SharePoint farm.
    All outgoing emails were working fine.
    I just restarted both servers, and now none of the emails are being sent. I am using the OOB publishing workflow and it shows this message:
    The e-mail message could not be sent. Make sure the outgoing e-mail settings for the server are configured correctly.
    Even if I set up an alert on a list, it doesn't send email.
    I have checked that the outgoing email setting in CA is defined, and as I said it was working fine before, but after the restart it shows this error.
    What could be the cause of this and how do I fix it?
    EDIT
    I removed the outgoing mail server in CA, added it again and restarted IIS, but emails from the OOB workflow are still not being sent. Emails from alerts are being sent, though. I don't know what to do now.

    This is really weird. It works when an alert is set but workflows don't send emails... Can you create a simple 1-step workflow in SPD to send an email when a specific field is set? See if this sends an email.
    AJ
    I created a test workflow which sends an email to the user, and it is also not sending email. But I am getting email from SharePoint regarding "variation" page changes, as well as the alerts I mentioned above.
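    As a further diagnostic, a hedged PowerShell sketch (server names are placeholders; run in the SharePoint Management Shell on a farm server) to see which SMTP server each web application actually uses and to test SMTP delivery outside SharePoint:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
    # Which outgoing SMTP server is each web application configured to use?
    Get-SPWebApplication -IncludeCentralAdministration |
        Select-Object Url, { $_.OutboundMailServiceInstance.Server.Address }
    # Bypass SharePoint entirely: can this box reach the SMTP server at all?
    Send-MailMessage -SmtpServer "mail.contoso.local" -From "sp@contoso.local" `
        -To "admin@contoso.local" -Subject "SMTP test from SharePoint server"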

  • Reports are not posting with report repository webserver configured for Single Signon

    Hi Everyone,
    We have configured Single Signon on our Test environment (UADB1) using Sun Authentication Manager. Everything went well; we can log in using our LDAP accounts, except for one thing: the reports are not posting to the report repository.
    Our setup goes like this: we use only one web server for login and for report repository purposes. An SSL certificate was configured on the web server and we are using https in the report node. Both URLs https://dv001.test.com:8450 and http://dv001.test.com:8400 were configured for Single Signon.
    Report Node Definition
    Node Name: uadb1
    URL: https://dv001.test.com:8450/psreports/uadb1
    Connection Information
    https
    URI Port: 8450
    URI Host: dv001.test.com
    URI Resource: SchedulerTransfer/uadb1
    Below is the error I am getting. If I use another web server, one without the Single Signon configuration, as the report repository, the reports post fine. So I am thinking this has something to do with the Single Signon setup and SSL. Any idea? Thanks.
    PSDSTSRV.2093190 (10) [06/13/10 01:05:43 PostReport](3) 1. Process Instance: 9499/Report Id: 8465/Descr: Process Scheduler System Purge
    PSDSTSRV.2093190 (10) [06/13/10 01:05:43 PostReport](3) from directory: /psft/pt849/appserv/prcs/UADB1/log_output/AE_PRCSYSPURGE_9499
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](1) (JNIUTIL): Java exception thrown: java.net.SocketException: Unexpected end of file from server
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](3) HTTP transfer error.
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](3) Post Report Elapsed Time: 0.2300
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](1) =================================Error===============================
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](1) Unable to post report/log file for Process Instance: 9499, Report Id: 8465
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](2) Process Name: PRCSYSPURGE, Type: Application Engine
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](2) Description: Process Scheduler System Purge
    PSDSTSRV.2093190 (10) [06/13/10 01:05:44 PostReport](2) Directory: /psft/pt849/appserv/prcs/UADB1/log_output/AE_PRCSYSPURGE_94

    Duplicated thread : Reports not posting if using Single Signon webserver as report repo
    Nicolas.

  • #554 5.4.4 SMTPSEND.DNS.MxLoopback; DNS records for this domain are configured in a loop ##

    Hi,
    This is my first post here. 
    My Exchange server has lately been facing a peculiar problem. I get the error message posted below when sending mail to any outside domain. However, when I restart the server the mails can be resent to those addresses without any issue. After a certain time the issue pops up again and I am forced to restart the server once more. I am running Exchange 2007 on Windows 2003.
    Generating server: name.mydomain.com
    [email protected]
    #554 5.4.4 SMTPSEND.DNS.MxLoopback; DNS records for this domain are configured in a loop ##
    [email protected]
    #554 5.4.4 SMTPSEND.DNS.MxLoopback; DNS records for this domain are configured in a loop ##
    Original message headers:
    Received: from name.mydomain.com ([1xx.xxx.xxx.xx5]) by MHDMAILS.mouwasat.com
     ([1xx.xxx.xxx.xx5]) with mapi; Wed, 19 Oct 2011 08:56:29 +0300
    From:  <[email protected]>
    To: <[email protected]>
    CC: "Al Alami,Tareq" <[email protected]>
    Date: Wed, 19 Oct 2011 08:56:27 +0300
    Subject: RE:   
    Thread-Topic:   
    Thread-Index: AcyAQ5tu8z9CvBfdT5+1pcGQkk6x0AIuwczAAAGZjeABQyW5sAADeeJQAAETNDA=
    Message-ID: <[email protected]>
    References: <[email protected]com>
     <[email protected]com>
    Accept-Language: en-US
    Content-Language: en-US
    X-MS-Has-Attach: yes
    X-MS-TNEF-Correlator:
    acceptlanguage: en-US
    Content-Type: multipart/related;
                boundary="_004_EEC8FA6B3B286A4E90D709FECDF51AA06C0588CA11namedomain_";
                type="multipart/alternative"
    MIME-Version: 1.0

    On Sun, 23 Oct 2011 15:05:15 +0000, Jobin Jacob wrote:
    >Even after removing my domain from the send connector I continue to receive the error. I would like to say I do have a firewall, Cyberoam. However, its configuration has not changed. I did try an MX lookup and found the following.
    >
    >Could there be any other solution to this issue?
    Sure, but it's necessary to ask a lot of questions since none of us
    know how your organization is set up.
    I see you also have "Use the External DNS Lookup settings on the
    transport server" box checked. How have you configured the "External
    DNS Lookups" on the HT server's property page? Is there any good
    reason why you aren't just using your internal DNS servers? If the
    internal DNS servers are configured to resolve (or forward) queries
    for "external" domains then there's no reason to use that checkbox. In
    most cases checking that box is a mistake.
    http://technet.microsoft.com/en-us/library/aa997166(EXCHG.80).aspx
    The behavior you describe (it works for a while and then fails;
    restarting the server returns it to a working state) sure sounds like
    some sort of DNS problem.
    Rich Matheisen
    MCSE+I, Exchange MVP
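    (For reference, a hedged sketch of the DNS check Rich is hinting at, run from the Exchange server itself; the domain and DNS server IP are placeholders. The first query uses the server's default DNS, the second asks a specific internal server, so the two answers can be compared:)

    nslookup -type=MX recipientdomain.com
    nslookup -type=MX recipientdomain.com 10.0.0.10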

  • The e-mail message cannot be sent. Make sure the outgoing e-mail settings for the server are configured correctly.

    Hi,
    I have SP 2013, and it was working properly: workflows sent emails perfectly. Yesterday I decided to build a fresh site collection, so I deleted the old one and created a new one. Now workflows are unable to send emails. I'm sure it has nothing to do with my mail server because no changes happened to it, and I'm sure about the settings for outgoing mail in Central Administration. All I did was enter the mail server IP in the outgoing mail field and an email address below it.
    in the workflow status the status says "error" followed by this message: "The e-mail message cannot be sent. Make sure the outgoing e-mail settings for the server are configured correctly."
    Any help please?

    Hi,
    According to your post, my understanding is that you failed to send email after building a fresh site collection.
    I recommend verifying that you have entered your SMTP server name correctly.
    You can add it as an FQDN, e.g. ServerName.DomainName.
    Here are some similar articles for your reference:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/f0605f59-0baa-49c9-854e-0fb369a9e5a0/cant-seem-to-get-emails-to-send-in-sharepoint
    http://alpesh.nakars.com/blog/sharepoint-outgoing-email-issue/
    Best Regards,
    Linda
    Linda Li
    TechNet Community Support

  • The email message cannot be sent. Make sure the outgoing email settings for the server are configured properly

    I have an issue when running a workflow. It gives me the following error: "The email message cannot be sent. Make sure the outgoing email settings for the server are configured properly". It doesn't send me any alerts, and the workflow fails at the end with the above error message.

    Hi,
    I agree with Bistesh, but if the issue still exists after the outgoing e-mail settings are configured properly, it may result from your antivirus. Please refer to the following steps:
    Open the McAfee console and go to the Access Protection window.
    Click Anti-Virus Standard Protection and edit the "prevent mass mailing worms from sending emails" rule.
    Now we need to know which processes are being blocked, so check the McAfee log located at
    C:\Documents and Settings\All Users\Application Data\McAfee\DesktopProtection\AccessProtectionLog.txt
    You may find entries for DtExec, DtExecUI and DatabaseMail90; these processes need to be entered in the exclusion list of the selected rule.
    Reset IIS and the SharePoint Timer service to check whether this works for you.
    Here are some similar issues you can use as a reference:
    http://social.technet.microsoft.com/Forums/en-US/667f0d61-4914-43fa-80c1-8cf430b113bb/workflow-email-not-working-but-normal-email-alerts-working-fine?forum=sharepointgeneralprevious
    http://techsuite.wordpress.com/2008/12/08/workflow-history-the-email-message-cannot-be-sent-make-sure-the-outgoing-email-settings-for-the-server-are-configured-properly/
    Best Regards,
    Lisa chen

  • Which BC4J / JDBC pooling configurations are global for a JVM?

    There are several documents and postings saying that some of the BC4J / JDBC pooling properties are global for the JVM.
    So if I have 3 WAR files with 3 different settings in bc4j.xcfg (transaction factory, pooling settings), some of the settings are ignored after the first AM pool is instantiated. This could be the reason for some "unreproducible" problems we have.
    So please provide us a list of which properties, in addition to the ones below, are global per JVM and which are really taken from bc4j.xcfg.
    1. The thread "Getting the Connection object out of ApplicationModule" says about the TransactionFactory:
    Please note that this property is a static BC4J property, meaning that the value of this property when the first ApplicationModule is created is the value which will be used. If you have multiple ApplicationPool(s) then it is necessary to define the property in all configurations, dynamically, or as a system property.
    2. http://www.oracle.com/technology/products/jdev/tips/muench/ampooling/index.html says about jbo.ampool.monitorsleepinterval
    Since there is only a single application monitor pool monitor per Java VM, the value that will effectively be used for the AM pool monitor polling interval will be the value found in the AM configuration read by the first AM pool that gets created. To make sure this value is set in a predictable way, it is best practice for all application modules to use the same Pool Polling Interval value.
    and
    3. Since the tuning parameters for all ADF database connection pools - regardless of <JDBCURL,Username> value - will be set based on the parameters found in the configuration for the first AM pool that is created, to ensure the most predictable behavior it is best practice to leave the values of the parameters in the Connection Pooling section of the Pooling and Scalability tab at their default values - so that no entry for them is written into the bc4j.xcfg file - and to instead set the desired values for the database connection pooling tuning parameters as Java system parameters in your J2EE container.
    Sounds like this means the parameters: jbo.initpoolsize, jbo.maxpoolsize, jbo.poolmonitorsleepinterval, jbo.poolmaxavailablesize, jbo.poolminavailablesize, jbo.poolmaxinactiveage ?
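    If so, a hedged sketch of what point 3 recommends, i.e. passing them container-wide as JVM system properties instead of per-WAR bc4j.xcfg entries (the values and start class are placeholders):

    java -Djbo.initpoolsize=5 \
         -Djbo.maxpoolsize=25 \
         -Djbo.poolminavailablesize=5 \
         -Djbo.poolmaxavailablesize=25 \
         -Djbo.poolmaxinactiveage=600000 \
         -Djbo.poolmonitorsleepinterval=600000 \
         com.example.ContainerStart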
    4. And http://oracle-web.petersons.com/bc4jdoc/bc_aappmodpooling.htm says: There is one connection pool manager for each business logic tier's Java VM. Connections remain in the pool until the Java VM stops running.
    Thanks, Markus

    Just another funny observation regarding BC4J parameter settings in 9.0.5.2:
    Setting jbo.ampool.sessioncookiefactoryclass in the jboserver.properties is ignored. Setting in bc4j.xcfg works.
    rgds, Markus
