Best practices for exposing an AM (OAF 11.5.10) as a web service to external systems

IHAC (I have a customer) who is developing extensions to their E-Business Suite install base using OAF 11.5.10, and they have approached me with questions on how they could expose some of the business services they have developed (mainly AMs and VOs) as web services to be consumed from a BPEL/web service framework. The BPEL engine is SeeBeyond, not Oracle's.
I have outlined two ways, but since I have not developed anything on OAF I have no idea whether either is possible.
First: migrate the ADF BC (BC4J) projects from OAF to JDeveloper 10.1.3, create a simple session bean facade layer over them, and then more or less just right-click and deploy it as a web service.
Second: use a web service library such as Axis directly in JServ to expose them on the target server.
Does anyone have any best practices on this topic?
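A minimal sketch of what the session bean facade in the first option might delegate to, assuming a hypothetical application module definition and configuration name (the BC4J calls are standard; the AM, VO, and attribute names are not from an actual project):

    // Hedged sketch: a coarse-grained facade method over an ADF BC (BC4J)
    // application module. AM, VO, and attribute names are hypothetical.
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.Row;
    import oracle.jbo.ViewObject;
    import oracle.jbo.client.Configuration;

    public class OrderServiceFacade {

        // One coarse-grained operation keeps the web service contract
        // simple for the external BPEL consumer.
        public String getOrderStatus(String orderNumber) {
            ApplicationModule am = Configuration.createRootApplicationModule(
                    "xxcust.oracle.apps.model.OrderServiceAM", "OrderServiceAMLocal");
            try {
                ViewObject vo = am.findViewObject("OrderHeadersVO");
                vo.setWhereClause("ORDER_NUMBER = :OrderNum");
                vo.defineNamedWhereClauseParam("OrderNum", null, null);
                vo.setNamedWhereClauseParam("OrderNum", orderNumber);
                vo.executeQuery();
                Row row = vo.first();
                return row == null ? null : (String) row.getAttribute("Status");
            } finally {
                Configuration.releaseRootApplicationModule(am, true);
            }
        }
    }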

For recognition and stable functionality of USB devices, your B&W G3 should be running at least OS 8.6. The downloadable OS 8.6 Update can be run on systems running OS 8.5/8.5.1. If, after updating to 8.6, your flash drive still isn't recognized, I'd recommend downloading the OS 9.1 Update for the purpose of extracting its newer USB support drivers, using the downloadable utility "TomeViewer." These OS 9.1 USB support files can be extracted directly to your OS 8.6 Extensions folder and are fully compatible with the slightly older OS software. It worked for me when OS 8.6's USB support files lacked a broad enough database to support my first USB flash drive.

Similar Messages

  • Best Practice???  Change from internal boot disk to external disk

    I have a mini running 10.5.6 Server and it currently boots off its internal disk. I was hoping to get some feedback/input from others on a good process to convert the system from its internal boot disk to an external boot disk (FireWire).
    I want minimum downtime during the conversion and, of course, a complete snapshot on the new boot disk. Lastly, I do not have a local keyboard and console on the system, although I could connect one if that made things much easier.
    In general, I am thinking of the following:
    1) Boot from the Leopard Server CD.
    2) Use Disk Utility to restore from the internal boot disk to the new external boot disk.
    3) Choose the new boot disk as the startup disk.
    4) Reboot onto new disk.
    Is Disk Utility the best bet? Meaning, will it work this way if the drives are different sizes?
    Or should I try to clone the disk from the running system (assuming I shut off services first) using SuperDuper or Carbon Copy Cloner? But I believe those do not copy over all log files, etc.
    Or does anyone have a quick overview of a methodology they have used in the past, or would suggest as a better process than the ones I described?
    In Summary:
    1) From the CD, clone using Disk Utility and change the boot disk.
    2) From the running OS, clone using SuperDuper or CCC and then reboot onto the new disk.
    3) Something else??
    Thank you in advance.

    Your approach to cloning the machine is fine. Disk Utility is fine and booting from the CD is the best method. Simply use the Restore function.
    But why on earth would you want to boot from an external FireWire drive? First, there is the issue of speed. You have a mini; let's assume it is a one-generation-back Intel. It has an internal SATA drive on a 1.5 Gbps connection. You want to move that to a 400 Mbps FireWire bus? Next, beyond the speed issues, you have a persistence issue. You are taking the boot volume and moving it to a transitory bus. One of FireWire's greatest strengths is easy connection/disconnection. Persistence is not its strong point.
    Next, if your plan is to move the boot volume to some form of FireWire RAID, then you are penalizing yourself even more. The mini has one FireWire port. If you are using two devices and creating a mirror RAID, then you need to daisy-chain. Talk about points of failure, asynchronous startup time, bus blocking, etc. Not wise.
    Plus, I cannot count how many external FireWire devices have burnt up in the effort to have small footprints. LaCie and the "let's put a drive in a metal case with no fan" approach = melted drive. Western Digital and LaCie with the "let's make a completely non-reusable external power brick that either breaks in a light breeze or falls out when the heavy guy walks by the server" approach.
    If you are looking at a FireWire RAID enclosure, then you are missing the objective of speed, as you are limited by the 400 Mbps bus. It is nice to say that you have a four-drive SATA II RAID case running RAID 5, but you are defeating the purpose of why you bought the RAID. RAID 5 can provide a substantial increase in I/O performance, but that goes out the window because of the slow bus.
    If your argument is that "this is a server and my bottleneck is Ethernet," that too does not hold up. You are likely running on a gigabit network.
    For system details check out http://developer.apple.com/documentation/HardwareDrivers/Conceptual/Macmini_0602/Articles/architecture.html.
    Take this with a grain of salt. You caught me on a grumpy day as yesterday I dealt with a melted external firewire drive.
    My advice is to buy real server-class hardware. What is your objective? Drive redundancy? Capacity? A mini is a great dev server, not a production server. This is your data, presumably the data that makes your business function. Don't trust it to a single platter, and don't trust it to a consumer-level, disposable system. I am not trying to malign the mini. It is a fine machine for its role; its role, however, is not to be a production file server. Now, as a web server, we are talking about a different situation.
    Ok, I am rambling. Hope this helps in some way.

  • Best practice to print to HP-6908 connected to TC from external location (WAN).

    I am able to Bonjour to the TC and access the disk image partition. How do I connect to the printer? The IP is dynamic, but I have a custom DynDNS.com forward. Thanks in advance!

    This forum is focused on consumer level products.  For your question you may have better results in the HP Enterprise forum here.
    Bob Headrick,  HP Expert
    I am not an employee of HP, I am a volunteer posting here on my own time.
    If your problem is solved please click the "Accept as Solution" button ------------V
    If my answer was helpful please click the "Thumbs Up" to say "Thank You"--V

  • Web service, best practice

    Hi,
    I would need some oppionions on best practices for a WS interface.
    Let's say I have a system with five different states on an entity; call the states A, B, C, D and E. It is not possible to change from any state to any other state; there are certain rules.
    Should the knowledge of these transition rules live in the service consumer or in the service itself? What I'm looking for is what kind of operations I should expose:
    setState(State aState)
    or
    changeToStateA()
    changeToStateC()
    And so on... In the first case, all knowledge of the state transitions must be in the service consumer. In the second case this is not needed, as the operation itself takes care of it.
    Are there any guidelines on this?
    Thanks,
    Mattias

    Services should be idempotent and stateless.
    That means that transitions and workflow should be the responsibility of the client.
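    For comparison, a minimal sketch of keeping the rules in the service, with a single guarded operation; the states and the allowed-transition table are entirely hypothetical:

        // Hedged sketch: the service owns the transition rules, so the
        // consumer needs no knowledge of them. All names are hypothetical.
        import java.util.EnumSet;
        import java.util.Map;
        import java.util.Set;

        public class EntityStateService {

            public enum State { A, B, C, D, E }

            // Each state maps to the set of states it may legally move to.
            private static final Map<State, Set<State>> ALLOWED = Map.of(
                    State.A, EnumSet.of(State.B, State.C),
                    State.B, EnumSet.of(State.D),
                    State.C, EnumSet.of(State.D, State.E),
                    State.D, EnumSet.of(State.E),
                    State.E, EnumSet.noneOf(State.class));

            private State current = State.A;

            // setState(...) style, but the rules stay server-side; the
            // changeToStateA() style would wrap this with fixed targets.
            public void changeToState(State target) {
                if (!ALLOWED.get(current).contains(target)) {
                    throw new IllegalStateException(
                            "Illegal transition " + current + " -> " + target);
                }
                current = target;
            }
        }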

  • Best practice for using messaging in medium to large cluster

    What is the best practice for using messaging in a medium to large cluster, in a system where all the clients need to receive all the messages and some of the messages can be really big (a few megabytes, maybe more)?
    I will be glad to hear any suggestions or to learn from others' experience.
    Shimi

    Publish/subscribe, right?
    Lots of subscribers plus big messages means lots of network traffic.
    It's a wide-open question, no?
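    For what it's worth, a minimal JMS publish/subscribe sketch; the JNDI names are illustrative, and for multi-megabyte payloads you would normally publish a small reference to the data rather than the bytes themselves:

        // Hedged sketch: JMS topic publisher. Every subscriber to the topic
        // receives every message. JNDI names are placeholders.
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.jms.Topic;
        import javax.naming.InitialContext;

        public class ClusterEventPublisher {
            public static void main(String[] args) throws Exception {
                InitialContext jndi = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
                Topic topic = (Topic) jndi.lookup("jms/ClusterEvents");
                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(topic);
                    // Publish a pointer (URL or storage key) instead of a
                    // multi-megabyte body to keep network traffic down.
                    TextMessage message = session.createTextMessage("payload-ref: http://fileserver/blobs/1234");
                    producer.send(message);
                } finally {
                    connection.close();
                }
            }
        }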

  • FTP best practice with XI

    What is the best practice for using an FTP adapter when the FTP server is in a vendor (external) network? If the connection parameter is set to permanent versus file-per-transfer, what is the major difference in performance and security? The vendor FTP has a username and password for each interface from the SAP system. If I use file-per-transfer, every time data is sent, FTP places it in a directory using that username and password. Will that affect performance?
    Thanks
    Ricky

    > In per-file transfer, is there any way to avoid checking the username and password every time so that performance can be increased?
    Don't think this can be done. If you do not want the user ID and password check, the FTP server would have to allow anonymous login, and then in the file adapter you can select Anonymous Login.
    > And what was the slight performance issue you were talking about in per-file transfer?
    As the file adapter connects to the FTP server every time it polls (in the case of a sender file adapter) or every time it wants to transfer a file (in the case of a receiver file adapter), there will be a slight overhead compared to a permanent connection (don't think we would even be able to see the difference at runtime). But don't think it will be an issue at all.
    Regards
    Bhavesh
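    To make the per-file connection overhead concrete, a sketch of the two connection styles using Apache Commons Net; this is plain Java for illustration, not the XI file adapter's internals, and host/credentials are placeholders:

        // Hedged illustration of "file per transfer" vs "permanent" connections.
        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import org.apache.commons.net.ftp.FTPClient;

        public class FtpTransferStyles {

            // Per-file: connect + login + transfer + disconnect for every file.
            static void perFile(String host, String user, String pass,
                                String name, String data) throws Exception {
                FTPClient ftp = new FTPClient();
                ftp.connect(host);          // TCP handshake every time
                ftp.login(user, pass);      // credential check every time
                ftp.storeFile(name, new ByteArrayInputStream(
                        data.getBytes(StandardCharsets.UTF_8)));
                ftp.logout();
                ftp.disconnect();
            }

            // Permanent: one session reused for many files; only the
            // transfer itself repeats.
            static void permanent(FTPClient ftp, String name, String data)
                    throws Exception {
                ftp.storeFile(name, new ByteArrayInputStream(
                        data.getBytes(StandardCharsets.UTF_8)));
            }
        }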

  • Best Practice of Parallel Development Lifecycles

    Dear all,
    Looking for some best practice information regarding parallel development lifecycles, used either for a system upgrade or a phased implementation. I need to understand how to change object ownership when merging the two lifecycles back into one production system.
    Any help and/or direction is appreciated.
    Thanks,
    Joshua

    Hi, there are many levels involved in PDL.
    Please have a look at these white papers:
    http://www.infoworld.com/pdf/whitepaper/SAPParallelDevelopmentManagementByNewmerix103107.pdf
    http://help.sap.com/saphelp_nwce10/helpdata/en/45/4b5d276e891192e10000000a1553f7/content.htm
    http://www.agilejournal.com/news/397-mks-integrity-and-sap-linking-software-and-production-processes-at-continental-automotive-systems
    All the best
    nag

  • Installation of Best Practice Scenarios for SAP IS Retail

    Hi All
    Not sure if I posted this in the right forum.
    We have a requirement to install the Best Practices scenarios for SAP IS Retail on an ECC 6.0 system, and the country version is for the UK. We have downloaded the relevant components. I wanted to know further information on how to proceed with them. Is it similar to applying a Support Package?
    Any inputs would be appreciated.
    Regards,
    Ershad Ahmed

    You will need to install them using transaction SAINT in client 000.

  • Is a system copy to development a best practice?

    ("Refresh", as I use it here, refers to a system copy.)
    We typically refresh our sandbox and pre-production test systems from production. Not often, since it's a non-trivial amount of work (although our functional/business teams don't seem to realize that), but we've got the process down. The production DBs also total around 8-9 TB, and it's not practical to store too many copies of that. The full environment includes the core ERP 6 system (HR, PY, FI, CO, parts of PLM), SRM 5, BI 7, and two portals (employee and vendor), each with development, QA/test, pre-production, and production.
    We have never refreshed our development environment, mainly because of the version history and test data it holds.
    We recently had a project consultant request that we refresh development from production, and claim that doing so was a "best practice". The project might significantly reconfigure FI, but everything else is out of scope.
    Although refreshing the development system/environment can be done, should it be? What kind of circumstances might make it a good idea? Is it considered a "best practice" by others? Why?
    (Cross posted to the [ASUG Systems Management forum|http://hosted.jivesoftware.asug.com/community/sig_communities/business_integration__technology_%26_infrastructure/systems_management_sig].)
    Thanks,
       Dave

    Well, I cannot view the ASUG discussion, but here are my 2 cents:
    cons:
    - version history has to be saved prior to the copy
    - cost (additional disk space, backups, cpu/memory)
    - the production data will age pretty fast, so the copy has to be done on a regular basis
    - lots of interfaces to other systems will have to be corrected, risk of connecting to prod systems during the copy process
    - the dev system will be unavailable for several days during the copy
    pros:
    - real data on the dev system, no need to manually generate data
    - most performance issues can be discovered at design time, given the hardware is comparable to the prod system
    In my opinion, regularly copying the prod system to dev is not a best practice. Personally I would never do it because of the cost and the amount of work needed.
    Best regards, Michael

  • Where to find best practices for tuning data warehouse ETL queries?

    Hi Everybody,
    Where can I find some good educational material on tuning ETL procedures for a data warehouse environment? Everything I've found on the web regarding query tuning seems to be geared only toward OLTP systems. (For example, most of our ETL queries don't use a WHERE clause, so the vast majority of operations are table scans and index scans, whereas most index-tuning sites are striving for index seeks.)
    I have read Microsoft's "Best Practices for Data Warehousing with SQL Server 2008 R2," but I was only able to glean a few helpful hints that don't also apply to OLTP systems:
    1) it is often better to recompile stored procedure query plans in order to eliminate variances introduced by parameter sniffing (i.e., better to use the right plan than to save a few seconds by sometimes using a cached plan);
    2) partition tables that are larger than 50 GB;
    3) use minimal logging to load data precisely where you want it as fast as possible;
    4) it is often better to disable non-clustered indexes before inserting a large number of rows and then rebuild them immediately afterward (sometimes even for clustered indexes, but test first; see the sketch below);
    5) rebuild statistics after every load of a table.
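    For instance, the disable/load/rebuild pattern from hint 4 might look like this; a minimal JDBC sketch, where the table, index, and connection details are hypothetical:

        // Hedged sketch: disable a non-clustered index before a large load,
        // rebuild it afterward, then refresh statistics. Names are hypothetical.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class BulkLoadWindow {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:sqlserver://dwhost;databaseName=DW;integratedSecurity=true";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    st.execute("ALTER INDEX IX_FactSales_Customer ON dbo.FactSales DISABLE");
                    // TABLOCK helps qualify the insert for minimal logging (hint 3).
                    st.execute("INSERT INTO dbo.FactSales WITH (TABLOCK) "
                             + "SELECT * FROM staging.FactSales");
                    st.execute("ALTER INDEX IX_FactSales_Customer ON dbo.FactSales REBUILD");
                    st.execute("UPDATE STATISTICS dbo.FactSales");
                }
            }
        }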
    But I still feel like I'm missing some very crucial concepts for performant ETL development.
    BTW, our office uses SSIS, but only as a glorified stored procedure execution manager, so I'm not looking for SSIS ETL best practices. Except for a few packages that pull from source systems, the majority of our SSIS packages consist of numerous "Execute SQL" tasks.
    Thanks, and any best practices you could include here would be greatly appreciated.
    -Eric

    Online ETL solutions are really among the most challenging, and to handle them efficiently you can read my blogs on online DWH solutions, which explain how to configure an online DWH solution for ETL using the MERGE command of SQL Server 2008, and also cover some important concepts relevant to any DWH solution, such as indexing, de-normalization, etc.:
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927061-data-warehousing-workshop-1-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927103-data-warehousing-workshop-2-4-
    http://www.sqlserver-performance-tuning.com/apps/blog/show/12927173-data-warehousing-workshop-3-4-
    Kindly let me know if any further help is needed
    Shehap (DB Consultant/DB Architect) Think More deeply of DB Stress Stabilities

  • Integrating Multiple systems - best practice

    Hi,
    I need to integrate the following scenario; let me know the best practice, with steps.
    The scenario is end-to-end synchronous:
    A system (supports open tech, i.e. a web service client) => SAP PI <=> Oracle (call stored procedure) => update in ECC => response back to system A (the source)
    Thanks
    My3

    Hi Mythree,
    First get the request from the web service into PI and map it to the stored procedure using a synchronous send step (Sync Send1) in BPM, and get the response back. Once you get the response back from Oracle, use another synchronous send step: take the response from the database and map it to ECC (use either a proxy or an RFC), which is Sync Send2 in the BPM, and get the updates back. Once you get that response, send the response back to the source system. The steps in the BPM would be like this:
    Start --> Receive --> Sync Send1 --> Sync Send2 --> Send --> Stop
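    For reference, the Oracle leg of such a flow boils down to a synchronous stored procedure call, which is what Sync Send1 does through the JDBC adapter; a minimal plain-JDBC sketch, where the procedure name, parameters, and connection details are hypothetical:

        // Hedged sketch: the synchronous request/response exchange that
        // Sync Send1 performs against Oracle. All names are hypothetical.
        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Types;

        public class OracleSyncCall {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521/ORCL", "user", "pass");
                     CallableStatement cs = con.prepareCall("{call get_order_status(?, ?)}")) {
                    cs.setString(1, "ORDER-42");           // request payload from system A
                    cs.registerOutParameter(2, Types.VARCHAR);
                    cs.execute();
                    String response = cs.getString(2);     // mapped onward to ECC (Sync Send2)
                    System.out.println(response);
                }
            }
        }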
    These blogs might be useful for this integration:
    /people/siva.maranani/blog/2005/05/21/jdbc-stored-procedures
    /people/luis.melgar/blog/2008/05/13/synchronous-soap-to-jdbc--end-to-end-walkthrough
    Regards,
    ---Satish

  • One-time import from external database - best practices/guidance

    Hi everyone,
    I was wondering if there was any sort of best practice or guideline on importing content into CQ5 from an external data source.  For example, I'm working on a site that will have a one-time import of existing content.  This content lives in an external database, in a custom schema from a home-grown CMS.  This importer will be run once - it'll connect to the external database, query for existing pages, and create new nodes in CQ5 - and it won't be needed again.
    I've been reading up a bit about connecting external databases to CQ (specifically this: http://dev.day.com/content/kb/home/cq5/Development/HowToConfigureSlingDatasource.html), as well as the Feed Importer and Site Importer tools in CQ, but none of it really seems to apply to what I'm doing. I was wondering if there exist any guidelines for this kind of process. It seems like something like this would be fairly common, and a requirement in many basic site setups. For example:
    Would I write this as a standalone application that gets executed from the command line? If so, how do I integrate that app with all of the OSGi services on the server? Or,
    do I write it as an OSGi module, or a servlet? If so, how would I kick off the process? Do I create a JSP that posts to a servlet?
    Any docs or writeups that anyone has would be really helpful.
    Thanks,
    Matt
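    In case it helps anyone searching later, a rough sketch of the standalone-application route, assuming plain JDBC against the legacy schema and Jackrabbit's JcrUtils for the repository connection; every URL, credential, table, and path below is a placeholder:

        // Hedged sketch of a one-time import: read rows from the legacy
        // database and write page nodes into the JCR repository.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import javax.jcr.Node;
        import javax.jcr.Repository;
        import javax.jcr.Session;
        import javax.jcr.SimpleCredentials;
        import org.apache.jackrabbit.commons.JcrUtils;

        public class OneTimeImporter {
            public static void main(String[] args) throws Exception {
                Repository repo = JcrUtils.getRepository("http://localhost:4502/crx/server");
                Session session = repo.login(
                        new SimpleCredentials("admin", "admin".toCharArray()));
                try (Connection db = DriverManager.getConnection(
                        "jdbc:mysql://legacy-host/cms", "user", "pass");
                     Statement st = db.createStatement();
                     ResultSet rs = st.executeQuery("SELECT slug, title, body FROM pages")) {
                    Node parent = session.getNode("/content/mysite/imported");
                    while (rs.next()) {
                        // One page node per legacy row (simplified structure).
                        Node page = parent.addNode(rs.getString("slug"), "cq:Page");
                        Node content = page.addNode("jcr:content", "cq:PageContent");
                        content.setProperty("jcr:title", rs.getString("title"));
                        content.setProperty("text", rs.getString("body"));
                    }
                    session.save();
                } finally {
                    session.logout();
                }
            }
        }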

    Matt,
    the vault file format is just an XML representation of what's in the repository, and the same as the package format. In fact, if you work on your projects with Eclipse and Maven instead of CRX DE Lite, you will become quite used to that format throughout your project.
    Ruben

  • Best Practices for Configuration Manager

    What links/documents are available that summarize the best practices for Configuration Manager?
    Applications and Packages
    Software Updates
    Operating System Deployment
    Hardware/Software Inventory

    Hi,
    I think these may help you:
    system center 2012 configuration manager best practices
    SCCM 2012 task-sequence best practices
    SCCM 2012 best practices for deploying application
    Configuration Manager 2012 Implementation and Administration
    Regards, Ibrahim Hamdy

  • Best Practice on Not Exposing your internal FQDN to the outside world

    Exchange Server 2010, sitting in the DMZ, internet facing. The server is currently using the default receive connector. This exposes the internal FQDN to the outside world (EHLO). Since you should not (can't) change the FQDN on your default receive connector, what is the best practice here?
    The only solution I can see is the following:
    1. Change the Network on the Default Receive Connector to only internal IP addresses.
    2. Create a new Internet receive connector on port 25 for external IP addresses (not sure what to put in the Network tab?) and use my external FQDN for EHLO responses (e.g. mail.domain.com).
    3. What do I pick for Auth and Permissions: TLS and Anonymous only?
    Michael Maxwell

    Yes, it fails PCI testing/compliance. I shouldn't be able to see my internal server and domain. I understand that is the recommendation, but my client doesn't want to host in the cloud or go with a Trend IHMS (trust me, I like that better, but it's not my choice). I have to work with the deck of cards dealt to me. Thanks, I just want a solution with what I have now.
    Michael Maxwell
    Understood. I won't go into the value of those tests. :)
    If the customer is really concerned about exposing the internal name, then create a new receive connector with a different FQDN (and corresponding cert) for anonymous connections, as you mention above. Know that it also means internal clients can connect to the server on port 25 as well if you don't have the ability to scope it to a set of IP addresses (i.e. an SMTP gateway).
    The internal names of the servers will also be in the internet headers of messages sent out:
    http://exchangepedia.com/2008/05/removing-internal-host-names-and-ip-addresses-from-message-headers.html
    http://www.msexchange.org/kbase/ExchangeServerTips/ExchangeServer2007/SecurityMessageHygiene/HowtoremoveinternalservernamesandIPaddressesfromSMTPheaders.html
    Please Note: My Posts are provided “AS IS” without warranty of any kind, either expressed or implied.

  • Best practice for exposing internal data to external world?

    Currently we have our Internet server sitting in our corporate DMZ taking website and web service requests from the outside world. Class libraries with compiled connection strings exist on that server. That server then has a connection through the firewall to the database server. I'm told that this is no longer the secure/recommended best practice.
    I'm told to consider having that Internet server make requests not of the database server, but rather of a layer in between (application server, intranet server, whatever) that has those same web UI methods exposed, and then THAT server (being inside the firewall) connects to the database server.
    Is this the current recommended best practice for having external users interact with internal data? It seems like a lot of hoops: the outside app queries web UI methods on the Internet server, which in turn queries the same (duplicated) methods on the intranet server, which then talks to the database.
    I'm just trying to determine the simplest practice, but also what is appropriately secure for our ASP.NET applications and services.
    Thanks.
    Thanks.

    IMO this has little to do with SOA and everything to do with DMZs. What you are trying to stop is the same comm protocol that accessed the web site also accessing the database. As long as you fulfil that, then great. WCF can help here because it helps with configuring the transport of calls. Another mechanism is to use identities, but IMO it's easier to use firewalls and transports.
    http://pauliom.wordpress.com
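    To illustrate the shape of the layered pattern (plain Java here purely for illustration, not ASP.NET/WCF; hosts and paths are hypothetical): the DMZ tier holds no connection string and simply relays requests to an internal application tier.

        // Hedged sketch: a DMZ-tier handler that relays to an internal app
        // tier instead of connecting to the database itself.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class DmzRelay {
            // Only this internal endpoint is reachable through the inner
            // firewall; the database port is not open to the DMZ at all.
            private static final String APP_TIER = "http://app-internal.example.local/api/orders/";
            private final HttpClient client = HttpClient.newHttpClient();

            public String getOrder(String id) throws Exception {
                HttpRequest req = HttpRequest.newBuilder(URI.create(APP_TIER + id)).GET().build();
                HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
                return resp.body();   // relayed to the external caller unchanged
            }
        }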
