SD EDI with many partners / best practice

I need some input on how best to approach a large EDI project.
We will be accepting orders from about 80 customers.  Each one will be drop-shipping products from our warehouse to their customers.  I've set up the configuration for one location to take advantage of the EDPAR external/internal customer conversion table.  The IDoc uses the sold-to party as the KU-type partner profile name (e.g., 237661), which allows me to use the EDPAR conversion.  I'm now able to get the IDoc processed through to the finished order.
Question:  How do I scale this?  Is this the best way to handle 80 partners?  If so, I will need one EDI translation per sold-to.  Should we really be hard-coding a sold-to account number as the partner profile name at the EDI translation level, or is there a more generic way to handle this?
It seems like there should be a way to say "the partner profile for this customer group is EDIGRP01" and then use the incoming sold-to (external customer number) to determine which IDoc partner profile to use, or to use user exits to make that logic happen.  I want to use the configurable best practices here, but it sure seems like a lot of work, with hard-coded account numbers to boot.
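To make the idea concrete, here is a rough ABAP sketch of what I picture such a user exit doing. This is heavily hedged: EDPAR and its maintenance transaction VOE4 are standard, but the exit itself (e.g., one of the inbound ORDERS exits in enhancement VEDA0001), its signature, and the exact EDPAR key handling would all need to be verified.

* Rough, unverified sketch of the user-exit idea. In the real exit the
* sold-to segment comes from the IDoc data; it is declared here only to
* keep the sketch self-contained. Note that EDPAR is also keyed on the
* converting customer (KUNNR), which is glossed over for illustration.
DATA: ls_e1edka1 TYPE e1edka1,     " sold-to partner segment (PARVW 'AG')
      lv_intern  TYPE edpar-inpnr. " internal customer number

IF ls_e1edka1-parvw = 'AG'.
  SELECT SINGLE inpnr FROM edpar
    INTO lv_intern
    WHERE parvw = ls_e1edka1-parvw
      AND expnr = ls_e1edka1-partn.  " external number sent by partner
  IF sy-subrc = 0.
    ls_e1edka1-partn = lv_intern.    " swap in the internal sold-to
  ENDIF.
ENDIF.

That way a single generic profile such as EDIGRP01 could serve the whole customer group, with EDPAR carrying the per-customer mapping instead of the EDI translation layer.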
Thank you for your thoughts.

Reynolds, the partner profiles only identify the message type and process code, and then the function module that posts the IDocs. These partner numbers are not used anywhere else. For creating sales orders you need the sales area, the sold-to and ship-to numbers, and the material numbers.
These values are converted using EDPAR and the customer-material info records.
Could you please explain why the validation of the customer number is required?
If you really need customer validation, it happens automatically during sales order creation when the sales area or the material number is determined.
A question for you: what configuration did you do for the automatic conversion of external partner numbers into internal customer numbers? Is this used for outbound IDocs as well? I am building some outbound order acknowledgment messages where I need the external partner numbers to be passed in the IDoc and EDI message, but the automatic translation is not taking place - and it is not happening for inbound either. Could you please tell me what I am missing?
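For what it's worth, what I am attempting on the outbound side is essentially the reverse EDPAR lookup below - a rough, unverified sketch. The field names are from the standard table, but the surrounding user exit depends on the message type, and the variable names are illustrative only:

* Unverified sketch: translate the internal customer number back to
* the external partner number for an outbound IDoc (e.g. ORDRSP).
DATA: lv_kunnr   TYPE kunnr,        " internal sold-to on the order
      lv_extern  TYPE edpar-expnr,
      ls_e1edka1 TYPE e1edka1.

SELECT SINGLE expnr FROM edpar
  INTO lv_extern
  WHERE kunnr = lv_kunnr
    AND parvw = 'AG'
    AND inpnr = lv_kunnr.            " internal number to translate
IF sy-subrc = 0.
  ls_e1edka1-partn = lv_extern.      " send the external number instead
ENDIF.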
Please mail me at [email protected]
Thanks for the help.
Regards,
Praveen

Similar Messages

  • Working with many sequences- best practice

    Hi.
    I've just started using Adobe Premiere CS6. My goal is to create a two-hour movie based on 30 hours of raw GoPro footage recorded on a recent vacation.
    Now my question is: what is the best practice for working with so many sequences/movie clips?
    Do you keep one heavy project file with all the clips?
    Or do you make small chapters that each contain x sequences/are x minutes long, and combine all of these at the end?
    Or how would you do it best, so that it's easiest to work with?
    Thanks a lot for your help.
    Kind regards,
    Lars

    I'll answer your second question first, as it's more relevant to the topic.
    You should export in the very highest quality you can, based on what you started with.
    The exception to this is if you have some end medium in mind. For example, it would be best to export at 30 FPS if you are going to upload it to YouTube.
    On the other hand, if you just want it as a video file on your computer, you should export at 50 FPS, because that retains the smooth, higher framerate.
    Also, if you are making slow-motion scenes from that higher framerate, then export at the correspondingly lower framerate (for example, if you slow a scene down to 50% speed, your export should be at 25 FPS).
    About my computer:
    It was for both, but I built it more with gaming in mind as I wasn't as heavily into editing then as I am now.
    Now, I am upgrading components based on the editing performance gains I could get rather than gaming performance gains.

  • SOLMAN with India Baseline Best Practice solution

    Hello,
    With the India Baseline best-practice solution route, you need to go through the activation process, which creates the configuration objects for the selected scenarios. How can I integrate this process with SOLMAN in place? As I understand it, in a SOLMAN-based project implementation we need to create a project and do the configuration via SOLMAN for the individual nodes.
    Regards,
    Manish.

    Hi,
    Our current solution is to have two NICs and have both connect to the server. They then have two different IPs, and we have DHCP give out the IP as the gateway. The only problem with that is that we cannot control the automated change of the gateway IP if the main connection fails.
    We are also willing to look into other hardware solutions that could control this.
    Regards,
    Rudi

  • RBAC with OIM/OIA - Best practice

    Just wondering what the RBAC architecture with OIM and OIA should be, as per best practices, when the number of applications is huge, e.g. >1000.
    Normally, we create one or more OIM access policies and corresponding user groups for automated provisioning of users to target applications, and further integrate OIM with OIA to govern user access by aligning the OIA policies with the OIM access policies.
    This is fine when the number of applications is manageable. But what if the number of applications rises to more than 1,000, or 5,000? What would our approach be to handle that?

    A fine topic that has been discussed many times over the years in this forum.
    It is also something I have spent far more time on than is actually healthy, so there are a couple of articles on my blog about the subject:
    http://iamreflections.blogspot.com/2010/10/oim-vs-tim-basic-rbac.html
    http://iamreflections.blogspot.com/2010/09/rbac-vs-abac.html
    http://iamreflections.blogspot.com/2010/08/role-based-group-memberships-in-oim.html
    http://iamreflections.blogspot.com/2010/08/primary-limitation-of-oim-access.html
    The basic answer is that you have to build your own RBAC framework once things leave the very basic state.
    Hope this helps
    /Martin

  • Dealing with complex rules - best practice?

    hello,
    I am currently involved in a project to develop BRFplus rules for a new social security benefit.
    Some of the legislation is complex. For example:
    if ( A = true ) AND ( ( B = true ) OR ( C = true ) OR ( D = true ) OR ( ( E = true ) AND ( F = true ) ) ) then
       Result = true
    else
       Result = false
    endif
    My question is: what is the best/most efficient way of developing this rule?
    Is it to develop one rule with a very complex condition?
    What are the alternative/preferred approaches?
    Thanks in advance.

    You can create "named" rules for each sub-rule (the condition A, B, C,...) and then use them in a master rule. This would improve the readability and hence maintenance by breaking the complexity down into simpler modules and would also let you re-use the sub-rules at other places.
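    To make that concrete outside of BRFplus itself (where the sub-rules would be named expressions reused by a master rule), the same decomposition expressed in ABAP might look like this - the names are invented for illustration, and xsdbool( ) assumes a 7.40+ system:

    " Illustrative decomposition of the legislation rule into named parts.
    DATA: a TYPE abap_bool, b TYPE abap_bool, c TYPE abap_bool,
          d TYPE abap_bool, e TYPE abap_bool, f TYPE abap_bool.

    DATA(base_condition)  = xsdbool( a = abap_true ).
    DATA(any_alternative) = xsdbool( b = abap_true
                                  OR c = abap_true
                                  OR d = abap_true
                                  OR ( e = abap_true AND f = abap_true ) ).
    DATA(result) = xsdbool( base_condition  = abap_true
                        AND any_alternative = abap_true ).

    Each named condition then has a single, testable meaning, and a change in the legislation touches only the sub-rule concerned.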

  • Use both iPhoto and Aperture with one library-best practice?

    I'd like to use both iPhoto and Aperture, but have both programs use/update just one photo library.  I have the latest versions of both programs, but was wondering if the optimum approach would be to:
    a) point Aperture to the existing iPhoto library and use that as the library for both programs,
    or
    b) import the entire iPhoto library into a new Aperture library, delete the iPhoto library, and point iPhoto to use the Aperture library.
    I should point out that up to now I've been using iPhoto exclusively, and have close to 20K photos in the iPhoto library, tagged with Faces, organized into various albums, etc., if that makes a difference...
    Appreciate any advice!
    Thanks,
    Dave

    Thanks Frank!  I'll try it that way.
    Appreciate the help!

  • Slow starup of Java application - best practices for fine tuning JVM?

    We are having problems with a Java application which takes a long time to start up.
    In order to understand our question we better start with some background info. You will find the question(s) after that.
    Background:
    The setup is as follows:
    In a client-server solution we have a Windows XP fat client running Java 1.6.0_18
    (Sun JRE). The fat client contains a lot of GUI and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE, Standard Edition, installed (Java SE 6 update 10 or better). Pretty much usual stuff so far.
    We have done a lot of profiling of the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK, with good response times. So far, we haven't found any evidence of shaky network connections, endless loops in the Java client code, or the like.
    Still, things are not good. Starting the application takes a long time. Too long.
    There are many complicating factors, but here is what we think we have observed:
    There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC things are really, really bad, and it sometimes takes up to 30-40 seconds to start the application (until we arrive at the start GUI of our app).
    If we run our application, close it down, and then restart without rebooting, things are a lot better. It then usually takes something like 15-20 seconds, which is "acceptable". Not good, but acceptable.
    Any ideas why?
    I have googled it, and some links seem to suggest that the reason could be the disk cache, where the vital jars are already in the disk cache on the warm start. Does that make any sense? Virus scanners presumably run in both cases.
    People still think that 15-20 seconds of startup on the warm start is an awfully long time, even though there is a lot, a lot, of functionality in the application.
    We got a suggestion to use IBM's JRE, as it can do some tricks (not sure what) our Sun JRE can't do concerning the warm- and cold-start problem. But that is not an option for us. And no one has come up with any really good suggestions for the Sun JRE so far.
    On the Java Quick Starter (JQS): it supposedly improves initial startup time for most Java applets and applications. Might that be helpful? People on the internet seem more interested in uninstalling the thing than actually installing it, though. And it seems very proprietary - it doesn't look like we can give our own jar files to it?
    We could obviously try to "hide" the problem in some way and make the app "seem" quicker, since perceived performance can be just as good as actual performance. But that does seem like a bad solution. So for the cold start we will probably try reading the jar files up front, thereby getting them into the disk cache before our application starts, and see if that helps us - something like the sketch below.
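    Roughly this (the "lib" directory name and buffer size are assumptions; kept Java 5/6 compatible):

    // Rough sketch: sequentially read every jar in the application's
    // lib directory so the OS file cache is warm before real startup.
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FilenameFilter;
    import java.io.IOException;

    public class JarWarmup {
        public static void main(String[] args) throws IOException {
            File[] jars = new File("lib").listFiles(new FilenameFilter() {
                public boolean accept(File dir, String name) {
                    return name.endsWith(".jar");
                }
            });
            if (jars == null) {
                return; // no lib directory - nothing to warm up
            }
            byte[] buf = new byte[64 * 1024];
            for (File jar : jars) {
                FileInputStream in = new FileInputStream(jar);
                try {
                    while (in.read(buf) != -1) {
                        // discard; we only want the bytes in the disk cache
                    }
                } finally {
                    in.close();
                }
            }
        }
    }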
    Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance.
    http://java.sun.com/javase/technologies/performance.jsp
    http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    and the use of JVM flags ("-Xms", "-Xmx", etc.).
    And here comes the question... da da...
    Concerning the various suggested reading material:
    it is very much appreciated, but we would like to ask people here whether it is possible to get more specific pointers to where the gold might be buried.
    I.e., in an ideal world we would have time to read and understand all of these documents in depth. However, in this less-than-ideal world we are also doing a lot of very time-consuming profiling of our own Java code.
    E.g., Java garbage collection is a huge subject, and so are JVM settings. Sure, in the end we will probably have to do all of this very thoroughly. But for now we are hoping for some heuristics on what other people do when facing a problem like ours.
    Young generation, large memory pages, garbage collection threads, etc. all sound interesting - but what would you start with?
    If you don't have enough info to decide, what kind of profiling would you run, and which JVM settings would you then adjust in your trials?
    In this pressed-for-time scenario, ignorance is not bliss, but it makes it hard to pinpoint the JVM parameter(s) to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
    Actually, if we can establish that fine-tuning these parameters is a good idea, it will also be much easier to allocate the time for doing so - reading, experimenting, etc. - in our project.
    So, all in all, what kind of performance improvement can we hope for? 5 of the 20 seconds on the warm start? Or is it 10% nitpicking? What's the ballpark figure for what we can hope to achieve here, given our setup? What do you think, based on the above?
    Maybe someone out there has fine-tuned JVM parameters in a similar PC environment, with similar fat clients? "Fine-tuning so-and-so gave 5 seconds, so start your work with these one or two parameters"?
    Something like that - some best practices? That's what we are hoping for.
    best wishes
    -Simon

    Thanks for the helpful answers from both you and kajbj.
    The app doesn't use shared network drives.
    What are you doing between main starting to execute and the UI being displayed?
    Basically, calculating what to show in the UI. Accessing the server - not so much; there are some reads from a cache, but the profiling doesn't indicate that this should be a problem. Sure, I could shift the startup work to some other slot, but so far I haven't found a place where the end user wouldn't be annoyed.
    Caching of something would seem most obvious. Normal VM stuff seems unlikely.
    With profiling I basically find that "everything" takes a lot longer in the cold-start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? Don't you guys think the Java Quick Start approach, with more jars in the disk cache, will give something? And how should that be done - what do people do? I.e., for the class loader I read something about
    1. Bootstrap class loader
    2. Extensions class loader
    3. System class loader
    and I am wondering if this has something to do with the cold-start problem?
    The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
    So, should we move our app classes to ext? Put them in one jar file? (We have many.) Any best practice about that?
    Otherwise it seems to me that it must be about fine-tuning the JVM.
    I imagine that it is a question about:
    1. the right heap size
    2. the right garbage collection scheme
    Googling heap size for XP, CHE22 writes:
    "You are right; -Xms1600M works well, but -Xms1700M bombs."
    Is that one best practice, or what?
    On garbage collection there are numerous posts, and much "masters of the Java black art" IMHO. And according to the profiling, GC is not really that much of a problem anyway. Still,
    based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me", or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    we read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
    The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory-allocation-intensive jobs]
    So, is setting -Xms, -Xmx and -XX:+AggressiveHeap
    best practice? What kind of performance improvement should we expect?
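    Just so we are talking about the same thing, a concrete launch line with such flags would look something like the following - the values are pure guesses for a 32-bit XP client, not recommendations (note that the quoted -Xmx3800m/-Xmn2g figures come from a large-memory server example and would not fit a typical desktop):

    java -Xms256m -Xmx512m -Xmn128m -jar fatclient.jar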
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500 µs for the first few values, then quickly drops to 130 µs, before falling again to 70 µs, where it stays for 30 minutes.
    For this specific issue, I greatly improved my performance by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
    // later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However,
    "finding the sweet spots for compilation thresholds has been tricky, so we're still experimenting with the recompilation policy. Work on it continues."
    sounds like there is no such straightforward path; it all depends...
    OK, it's good when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
    I have the feeling that we might not be able to improve performance that way?
    Any comments?
    Otherwise I was wondering about
    -XX:CompileThreshold=50 and -Xmaxjitcodesize (large, but how large?)
    Somehow, we still feel that someone out there must have experienced similar problems. But obviously there is no guarantee that that someone will surf by here!
    In C++ we used to just write everything ourselves. Here it seems to be a question of the right use of other people's stuff.
    Where you are kind of hoping for a shortcut, so you don't have to read an endless number of documents, but can find one short document that actually addresses your problem... well.
    -Simon
    Edited by: simoncpm on Mar 15, 2010 3:43 PM
    Edited by: simoncpm on Mar 15, 2010 3:53 PM

  • SAP BI4 SP2 Patch 7 Webi Connection to BW Best Practice

    We are working with version 4.0 SP2 Patch 7 of BI4 and developing some reports with Webi, and we are wondering which is the best method to access BW data.
    At the moment we are using BICS, because we have read in more than a few places that this is the best method to consume BW data, since it brings improvements in performance, hierarchies, etc., but I don't know if this is really true.
    Is BICS the best method to access BW data; is this the way recommended by SAP?
    In the filter panel of a Webi document we can't use the "OR" clause; is it really not possible to use this clause?
    When we work with hierarchies and change the hierarchy for the dimension value, or vice versa, the report throws an AnswerPrompts API error (30270).
    When we work with BEx queries containing variables and try to merge such a variable with a report prompt (from another query), executing the queries shows an error indicating that one prompt has no value.
    Has anyone experienced these problems too? Has anyone found solutions to these issues?
    Best Regards
    Martin.

    Hi Martin
    In BI 4.0, BICS is the method to access BW, not universes; .UNVs based on BW are there for legacy support.
    Please look at this forum thread, with links, on best practices for BI 4.0 and BW; if you do a search on SDN you can find many threads on this topic.
    How to access BEx directly in WEBI 4.0
    Regards
    Federica

  • CE Benchmark/Performance Best Practice Tips

    We are in the early stages of starting a CE project where we expect a high volume of web service calls per day (e.g., customer master service, material master service, pricing service, order creation service, etc.).
    Are there any best-practice guidelines which could be taken into account to avoid possible performance problems within the web service "infrastructure"?
    Should master data normally residing in the backend ECC server be duplicated outside ECC?
    E.g., if individual reads of the master data in the backend system take 2 seconds per call, would it be more efficient to duplicate the master data on the SAP AS Java server, or elsewhere, when the master data is expected to be read thousands of times each day? (At 2 seconds per read, 5,000 reads already amount to nearly 3 hours of cumulative backend time per day.)
    Also, what kind of benchmarking tools (SAP std or 3rd party) are available to assess the performance of the different layers of the infrastructure during integration + volume testing phases?
    I've tried looking for any such documentation on SDN, OSS, help.sap.com, but to no avail.
    Many thanks in advance for any help.
    Ali Crawshaw

    Hi Ali,
    For performance and benchmarking, have you had a look at Wily Introscope?
    The following presentation has some interesting information: [Wily Introscope supports CE 7.1|http://www.google.co.za/url?sa=t&source=web&ct=res&cd=7&ved=0CCEQFjAG&url=http%3A%2F%2Fwww.thenewreality.be%2Fpresentations%2Fpdf%2FDay2Track6%2F265CTAC.pdf&ei=BUGES-yyBNWJ4QaN7KzXAQ&usg=AFQjCNE9qA310z2KKSMk4d42oyjuXJ_TfA&sig2=VD1iQvCUmWZMB5OB-Z4gEQ]
    With regard to best-practice guidelines: if you are using PI for service routing, try to keep to asynchronous services as far as possible - asynchronous with acknowledgments if need be. Make sure your CE Java AS is well tuned according to the SAP best practices.
    Will you be using SAP Global Data Types (GDTs) for your service development? If so, the one performance tip I have regarding the use of GDTs is to keep your GDT structures as small (in number of fields) as possible, as large GDT structures have an impact on memory consumption at runtime.
    Cheers
    Phillip

  • OVD best practices for app-specific views?

    I have a requirement to create app-specific views of joined (OID+AD) LDAP directory data. It occurs to me that logically I could take two approaches, laid out below as options 1 and 2, although I'm not sure how to actually build option 2. I've listed the adapters I'd construct, the adapter type, and the name/purpose of each. The end product of each option is two join adapters that present different app-specific views derived from the same source LDAP data. Each join adapter would be consumed by a different app and would present different subsets and transformations of that directory data.
    OPTION1:
    1 ldap oid1
    2 ldap ad1
    3 ldap oid2
    4 ldap ad2
    5 join oid1+ad1 (for app1)
    6 join oid2+ad2 (for app2)
    OPTION2:
    1 ldap oid1
    2 ldap ad1
    3 ? oid2 (a transformed subtree derived from oid1)
    4 ? ad2 (a transformed subtree derived from ad1)
    5 ? oid3 (a transformed subtree derived from oid1)
    6 ? ad3 (a transformed subtree derived from ad1)
    7 join oid2+ad2 (for app1)
    8 join oid3+ad3 (for app2)
    With option 1, I would create two OID and two AD adapters, repeating the connectivity configuration for each; once deployed, each adapter establishes its own pool of LDAP connections to the source LDAP servers. This is a little clunky as you scale beyond the initial two app-specific views, and it leaves me concerned about how well this model scales, considering that each LDAP adapter sets up its own pool of connections. I.e., with 5 app-specific views to construct, I'd have 5 OID and 5 AD pools... which seems to somewhat defeat the whole point of pooling.
    Option 2 is predicated on the idea of creating one single LDAP adapter in OVD for OID and another single one for AD, then creating secondary adapters which pull and transform data from those two primary source adapters. No matter how many secondary OID and AD adapters I create, only the two primary adapters actually hold pooled connections to OID and AD. The advantage here is clearly in how we manage and limit the number of pools we set up. But I'm not sure what kind of adapter to use for oid2/3 and ad2/3. I looked at using a join adapter configured not to actually join anything, just pulling from a single primary adapter, but I couldn't see any way to change the subtree being pulled from the primary adapter. The alternative might be to create LDAP adapters that connect back to oid1 and ad1 - a loopback approach - but that gets us into pools on top of pools. Again, a little clunky.
    Any thoughts or recommendations with regard to best practices here?

    I haven't done this, so I haven't solved the problem as such. But those organizations who I've seen mention it either just get free apps via this process:
    http://support.apple.com/kb/HT2534
    or use a corporate credit card with the accounts. You can use a single credit card for all the accounts, to the best of my knowledge. There's also a Volume Purchase Plan for businesses which can simplify matters:
    http://www.apple.com/business/vpp/
    I believe that a redemption code obtained through this program can be used to set up an iTunes Store account, but I'm not certain.
    Regards.

  • Request info on Archive log mode Best Practices

    Hi,
    Could anyone, from personal experience, share with me the best practices for maintaining archiving on any version of Oracle? Please tell me:
    1) Whether to place archive logs and redo log files on the same disks?
    2) How many LGWR processes to use.
    3) Checkpoint frequency.
    4) How to maintain the speed of a server running in archivelog mode.
    5) Errors to look for.
    Thanks,

    1. Use a separate mount point for the archive logs, e.g. /archv.
    2. Start with 1 and check the performance.
    3. This depends upon the redo log file size. Size your redo log files so that at most 5-8 log switches happen per hour; try to keep it below 5 per hour.
    4. Check the redo log file size.
    5. Watch the space allocation of the archive log mount point. Back up the archive logs with RMAN and delete the backed-up archive logs from the archive destination.
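    For point 5, the usual RMAN command is the following (standard RMAN syntax; fit it to your retention policy):

    RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;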
    Regards
    Asif Kabir

  • Hyperion Essbase Best Practices Document...

    Hi All,
    Is there any document out there that has the best practices for Hyperion Essbase development? I am looking for methodologies, naming conventions, and similar information - wondering if the Hyperion Essbase gurus have any such document that outlines the best approaches for Essbase outline development. Searching this forum for "best practice" yields a lot of threads on specific issues, but I couldn't find a document.
    Any pointer is most appreciated.
    Regards,
    Upendra.

    Various consulting organizations have different guidelines, each with their own strengths and weaknesses. To get them to cough it up without bringing them in for a paid project might be difficult, but not impossible.
    I agree with Doug here. Many of these consulting companies have developed their best practices over a number of years and view them as a competitive advantage over the other consulting firms. I would be highly surprised if you managed to get hold of such a document very easily.
    That being said, those same consultants share information here in bits and pieces, so you can learn at least some of the best practices here (along with best-practice tips from consultants/developers/customers who don't have an 'official' best practices guide).
    Tim Tow
    Applied OLAP, Inc

  • Best practice in gallerys

    We are making a gallery that will have thumbnails and larger images - simply click the thumb, and the focus goes to its larger counterpart. When the large image loads, the thumbs follow. The question is: should we load the thumbnails from separate images (i.e., much smaller images, scaled to less than 75 px), or duplicate the large image at thumbnail size? What are the pros/cons? It makes sense that scaling down the big one would be pretty fast, as it is already loaded - but wouldn't that be a heavy resource user?
    We are using CS3.


  • Best practice for application help for a custom screen?

    Hi,
    The system is NetWeaver 7.0 SP15 with E-Recruiting.
    We have some custom SAP GUI transactions and have written Word documents with screen prints and explanations. I would like to make the procedure document accessible from the custom transaction, or at least provide custom help text that includes a link to the full documents.
    Can anyone help me out with options and best practices for providing customized application help for custom SAP GUI transactions?
    Thanks,
    Margaret

    Hello Margaret,
    sorry, I thought you might still be in a design or proof-of-concept phase where the decision for the technology is still adjustable.
    If the implementation is already done, things change, of course. The standard in-system documentation surely doesn't fit your needs, as including screenshots won't work well.
    I would solve the task the following way:
    I'd make a web or PDF document out of the Word document and put it on a web resource - as you run E-Recruiting, you probably have the infrastructure for that.
    I would then just put a button into the transaction and open a web container to show the document.
    I am not sure this solution really qualifies as "best practice", but SAP does the same when you call the help for an application from the help menu. This is implemented in function module SAPGUIHC_OPEN_HELP_CENTER. I'd just copy it, throw out what I don't need, and hard-code the URL to call.
    Perhaps someone can offer a better solution, but I think this works, at least without exaggerated costs.
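    As a rough illustration of the button idea (unverified sketch: the URL is a placeholder, and CALL_BROWSER should be checked in your release - SAPGUIHC_OPEN_HELP_CENTER, as mentioned, is the fuller template):

    " Sketch: open the procedure document from the custom transaction.
    DATA lv_url TYPE c LENGTH 255
      VALUE 'http://intranet.example.com/docs/ztransaction_help.pdf'.

    CALL FUNCTION 'CALL_BROWSER'
      EXPORTING
        url    = lv_url
      EXCEPTIONS
        OTHERS = 1.
    IF sy-subrc <> 0.
      MESSAGE 'Help document could not be opened' TYPE 'I'.
    ENDIF.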
    Kind Regards
    Roman

  • Best practice in database

    Dan,
    I would appreciate help with the following question.
    Which is the best practice in the development and deployment of database access:
    1. Creating an external SQL database resource? If so, please indicate when the DataSource is created in WebLogic Server.
    2. Creating a remote JDBC connection?
    Thanks and best regards,

    Hi,
    Others will have different ideas that are probably more useful, but I personally like "green field" opportunities like you're describing.
    One thing you have to figure out is what technology you want to develop and maintain your components in. Once built, they can be exposed as web services, Java POJOs, EJBs, .NET assemblies and databases which Oracle BPM can consume. Pick a technology that your team is most comfortable with.
    A best practice preference would be to use a Service Bus as the intermediary layer between Oracle BPM and the components consumed if you own one. If you don't, Oracle BPM will need to consume the components directly.
    I'd use Oracle BPM for what it was intended for. Sometimes I see the architecture "flipped" where the customer wants a third party UI to drive instances through the process via the API. While this will work, it's a lot of extra work to rebuild what Oracle BPM does a good job of OOTB.
    Dan

Maybe you are looking for

  • Error in IDOC-XI-IDOC scenario

    Hi folks, Let me explain the issue. This is an IDOC-XI-IDOC scenario. No BPM’s involved. SAX parsing used to parse the IDOCs. We had around 35000 IDOCs coming in from Brazil system, and they are all stuck in SMQ2 (Inbound queue) in XI. The first mess

  • Audio in MP4 from captivate

    I have converted a presentation to MP4 but some of my audio has not transferred. Audio transferred with video demostrations (Narration) and training simulation(added on backend  after simulation was recorded,). Audio did not transfer with basic slide

  • SQL Question - Interesting One

    I need to write a SINGLE SQL statement which has two different queries, if the first query retrieves records then there is no need to execute the second one, if the first query doesn't retrieve the records then I need to make use of second query to r

  • Query string is null while dispatching the request in Websphere for Endeca pages

    Hi, When AssemblerPipelineServlet forward the request with dispatcher the query string is missing from the request in Websphere 7.0 only(works fine in JBOSS). This is happening only for the pages those are created in the Experience Manager. Ex: /stor

  • Emails lost in cyberspace???

    I've read on the .mac website that a "small" percentage of .mac users have experienced troubles the last days with their mail accounts, being impossible to retreive their new emails. Supposedly, those emails would arrive with some delay... someday. T