Question regarding best practice

Hello Experts,
What is the best way to deploy NWGW?
We recently architected a solution to install the 7.4 ABAP stack, which comes with Gateway, and chose the central Gateway hub scenario in a 3-tier setup. Is this all that's required in order to connect this hub Gateway to the business systems, i.e. ECC? Or do we also have to install the Gateway add-on on our business systems in order to expose their development objects to the hub? I'm very interested in understanding how others are doing this and what has worked best in your own experience. My assumption was that creating a trusted connection between the Gateway hub and the business system would suffice to expose the development objects from the business system to the hub, so that the Gateway services can be created in the hub from them. Is this a correct assumption? Happy to receive any feedback, suggestions and thoughts.
Kind regards,
Kunal.

Hi Kunal,
My understanding is that in the hub scenario you still need to install an add-on in the backend system (IW_BEP). If your backend system is already a 7.40 system, then I believe that add-on (or its equivalent) should already be there.
I highly recommend you take a look at "SAP Gateway deployment options in a nutshell" by Andre Fischer.
Hth,
Simon

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, but I just wondered if there is a preferred method for achieving this. What I mean is ...
    - Is it OK to have everything on one switch but set each respective port group to have a primary and a failover NIC, i.e. FT, iSCSI and all the others fail over (this would give you a sort of backup in situations where you have limited physical NICs)?
    - Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this seems like a failover risk to me.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • New to ColdFusion - Question regarding best practice

    Hello there.
    I have been programming in Java/C#/PHP for the past two years or so, and as of late have really taken a liking to ColdFusion.
    The question that I have is around the actual separation of code, and whether there are any best practices that are preached for this language. While I was learning Java, I was taught that it's best to have several layers in your code, for example: front end (JSPs or ASP) -> business objects -> DAOs -> database (a minimal sketch of this layering appears below). All of the code that I have written using these three languages has followed this simple structure, for the most part.
    As I dive deeper into ColdFusion, most of the examples that I have seen from veterans of this language don't really incorporate much separation. And I'm not referring to the simple "here's what this function does" type of examples online where most of the code is written in one file; I've been able to see projects that have been created with this language.
    I work with a couple of developers who have been writing ColdFusion for a few years and posed this question to them as well. Their response was something to the effect of, "I'm not sure if there are any best practices for this, but it doesn't really seem like there's much of an issue making calls like this."
    I have searched online for any type of best practices or discussions around this and haven't seen much of anything.
    I do still consider myself somewhat of a noobling when it comes to programming, but matters of best practice are important to me for any language that I learn more about.
    Thanks for the help.
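    For reference, here is a minimal Java sketch of the front end -> business objects -> DAOs -> database layering described above; the class and method names are purely illustrative and not taken from any real project:
    // Each class would normally live in its own file; they are shown together for brevity.
    class UserDao {                              // data access layer: talks to the database
        String findNameById(int id) {
            return "user-" + id;                 // stand-in for a real SQL query
        }
    }
    class UserService {                          // business layer: application logic only
        private final UserDao dao = new UserDao();
        String greeting(int id) {
            return "Hello, " + dao.findNameById(id);
        }
    }
    // The front end (a JSP, CFM template or servlet) calls only UserService,
    // never UserDao or the database directly.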

    Frameworks for web applications can require a lot of overhead, more than you might normally need when programming in ColdFusion. I have worked with frameworks, including Fusebox. What I discovered is that when handing a project over to a different developer, it took them over a month before they were able to fully understand the Fusebox framework and then program in it comfortably. I decided not to use Fusebox on other projects for this reason.
    For maintainability, sometimes it's better not to use a framework: while there are a number of ColdFusion developers, those that know the Fusebox framework are in the minority. When using a framework, you always have to consider the time needed to learn it and successfully implement it. A lot of it depends on how much of your code you want to reuse. One thing you have to consider is, if you need to make a change to the web application, how many files will you have to modify? Sometimes it's more files with a framework than if you just write code without one.
    While working on a website for Electronic Component sourcing, I encountered this dynamic several times.
    Michael G. Workman
    [email protected]
    http://www.usbid.com
    http://ic.locate-ic.com

  • Architecture/Design Question with best practices?

    Should I have a separate web server and WebLogic instance for the application and for IAM?
    If yes, then how will the two communicate? For example, should I have a WebGate on each server so they can communicate with each other?
    Is there any reference that helps in deciding how to design this? And if I have separate WebLogic domains, one for the application and one for IAM, how will session management work, etc.?
    How does the design generally happen in an IAM project?
    Help appreciated.

    The standard answer: it depends!
    From a technical point of view, it sounds better to use the same "middleware infrastructure", BUT then the challenge is to find the latest WebLogic version that is certified by both the IAM applications and the enterprise applications. This will pull down the WebLogic version, since the IAM application stack is certified against older versions of WebLogic.
    From a security point of view (access, availability): do you have the same security policy for the enterprise applications and the IAM applications (a component of your security architecture)?
    From an organisational point of view: who owns WebLogic, the enterprise applications and the IAM applications? At one of my customers, applications and infrastructure/security sit in two different departments; having a common WebLogic domain didn't fit the organization.
    My short answer would be: keep them separated; this will save you a lot of technical and political challenges.
    Didier.

  • Information regarding Best Practices required

    Dear Friends,
        Happy New Year......
    I'm working as part of the BI Excellence team in a reputed company.
    I just want to propose to a client that they install the BI Best Practices (scenario: SCM). In order to do that, I need to present the advantages of, and the difference between, the Best Practices (specific to BI) and a general implementation.
    When I search help.sap.com, it generally speaks about the time consumption and guidelines of the overall SAP Best Practices.
    Can anyone help me with respect to BI (from blueprint to go-live) and the timeline differences between an SAP BI Best Practices implementation and a general implementation?
    For example, with a specific scenario like SCM, taking a cube for IM and describing the start-to-end implementation process and its timeline: how does the same differ when we go with an SAP BI Best Practices installation?
    Please provide your valuable suggestions, as I don't have any implementation experience.
    Requesting your valuable guidance.
    Regards
    Santhosh kumar.N

    Hi,
    http://help.sap.com/saphelp_nw2004s/helpdata/en/f6/7a0c3c40787431e10000000a114084/frameset.htm
    http://help.sap.com/bp_biv370/html/Bw.htm
    Hope it helps........
    Thanks & Regards,
    SD

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed documentation and posts, I find that there is not that much information available with regard to best practices for the Redwood Scheduler in an SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV -> QAS -> PROD. Presentations from the help.sap.com web site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point-and-shoot (just be careful where you aim!) functionality is described as an advantage of the product. There is an SAP note (#895253) on making Redwood highly available. I am open to comments, inputs and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e. applications, job streams, scripts, events, locks). To date, I have seen a presentation where customer objects are named starting with Z_. I like to include the object type in the name (e.g. EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock), keeping in mind the character length limitation of 30 characters. I also have an associated issue with event naming, given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, we need to include the environment in the event name. The downside here is that we lose transportability for the job stream: we need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development and quality, and a separate one for production; this avoids confusion).
    Regarding transporting/replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks etc. It is also easy and quick to create them afresh in each system; only complicated job chains can be time consuming to create.
    In normal cases the testing of background jobs mostly happens only in the SAP quality instance, with the final scheduling in production. So it is very much possible to just export the verified script/job chain from the Cronacle quality instance and import it into the Cronacle production instance (use of the Cronacle shell is recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, processing server and licensing information in a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM and hence you have only one process server.
    Regarding naming conventions, it is recommended to create a centrally accessible naming convention document and then follow it. For example, in my company we use a job naming convention like Z_AAU_MM_ZCHGSTA2_AU01_LSV, where A is for the APAC region, AU is for Australia (country), MM is for Materials Management and ZCHGSTA2_AU01_LSV is free text as provided by the batch job requester.
    For other Redwood Cronacle-specific objects you can also derive naming conventions based on SAP instances; for example, if you want all related scripts/job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, it is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood is there to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager, creating a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal

  • Regarding Best Practices Documents

    Hi All,
    How do I search for and download SAP Best Practices documents?
    Thanks in Advance
    Pavan

    Hi Pavan,
    Please go to the URL: http://help.sap.com/
    At the top centre of the page, you will find the SAP Best Practices tab.
    In there, you have Overview, Baseline packages, Industry packages and Cross-industry packages.
    Click on the desired option and you can download the Best Practices.
    Given below is the Best Practices URL for the industry package for Automotive: Dealer Business Management:
    http://help.sap.com/bp_dbmv1600/DBM_DE/html/index.htm
    (This is for your reference only).
    Hope this helps!
    Regards,
    Shilpa

  • Question on best practice to extend schema

    We have a requirement to extend the directory schema. I wanted to know what the standard practice is:
    1) Is it good practice to manually create an LDIF so that it can be run on every deployment machine at every stage?
    2) Or should the schema be created through the console the first time, and the LDIF file from this machine copied over to the schema directory of the target server?
    3) Should the custom schema be appended to the 99user.ldif file, or is it better to keep it in a separate LDIF?
    Any info would be helpful.
    Thanks
    Mamta

    I would say it's best to create your own schema file. Call it 60yourname.ldif and place it in the schema directory. This makes it easy to keep track of your schema in a change control system (e.g. CVS). The only problem with this is that schema replication will not work - you have to manually copy the file to every server instance.
    If you create the schema through the console, schema replication will occur - schema replication only happens when schema is added over LDAP. The schema is written to the 99user.ldif file. If you choose this method, make sure you save a copy of the schema you create in your change control system so you won't lose it.
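    To make the "add schema over LDAP" option concrete, here is a hedged Java/JNDI sketch; the host, credentials, OID and attribute name are placeholders, and a hand-maintained 60yourname.ldif stays the simpler option if you don't need schema replication:
    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.directory.BasicAttribute;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.ModificationItem;

    public class AddCustomSchema {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://ds.example.com:389");   // placeholder host
            env.put(Context.SECURITY_AUTHENTICATION, "simple");
            env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
            env.put(Context.SECURITY_CREDENTIALS, "secret");              // placeholder

            DirContext ctx = new InitialDirContext(env);
            try {
                // Adding a value to cn=schema over LDAP is what triggers schema
                // replication; the server persists the definition to 99user.ldif.
                String attrDef = "( 1.3.6.1.4.1.99999.1.1 NAME 'myCustomAttr' "
                        + "DESC 'example attribute' "
                        + "SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )";
                ModificationItem[] mods = {
                    new ModificationItem(DirContext.ADD_ATTRIBUTE,
                            new BasicAttribute("attributeTypes", attrDef))
                };
                ctx.modifyAttributes("cn=schema", mods);
            } finally {
                ctx.close();
            }
        }
    }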

  • Question on best practice for NAT/PAT and client access to firewall IP

    Imagine that I have this scenario:
    Client(IP=192.168.1.1/24)--[CiscoL2 switch]--Router--CiscoL2Switch----F5 Firewall IP=10.10.10.1/24 (only one NIC, there is not outbound and inbound NIC configuration on this F5 firewall)
    One of my users is complaining about the following:
    When clients receive traffic from the F5 firewall (apparently the firewall is doing PAT, not NAT), the clients see IP address 10.10.10.1.
    Do you see this as a problem? Should I make another IP address range available and do NAT properly so that clients will not see the firewall IP address? I don't see this situation as a problem, but please let me know if I am wrong.

    Hi,
    Static PAT is the same as static NAT, except it lets you specify the protocol (TCP or UDP) and port for the local and global addresses.
    This feature lets you identify the same global address across many different static statements, so long as the port is different for each statement (you CANNOT use the same global address for multiple static NAT statements).
    For example, if you want to provide a single address for global users to access FTP, HTTP and SMTP, but these are actually all different servers on the local network, you can specify static PAT statements for each server that use the same global IP address but different ports.
    And for PAT you cannot use the same pair of local and global address in multiple static statements between the same two interfaces.
    Regards
    Bjornarsb

  • Question on best practice/optimization

    So I'm working with the Custom 4 dimension, and I'm going to be reusing the highest member in the dimension under several alternate hierarchies. Is it better to drop the top member under each of the alternate hierarchies, or to create a single new member and copy the value from the top member to the new base one?
    Ex:
    TotC4
    --Financial
    ----EliminationA
    ------EliminationA1
    ------EliminationA2
    ----GL
    ------TrialBalance
    ------Adjustments
    --Alternate
    ----AlternateA
    ------Financial
    ------AdjustmentA
    ----AlternateB
    ------Financial
    ------AdjustmentB
    In total there will be about 8 alternate adjustments (it's for alternate translations, if you're curious).
    So should I repeat the entire Financial hierarchy under each alternate rollup, or just write a rule saying FinancialCopy = Financial? It seems like it would be a trade-off between performance and database size, but I'm not sure if this is even substantial enough to worry about.

    You are better off to have alternate hierarchies where you repeat the custom member in question (it would become a shared member). HFM is very fast at aggregating the rollups. This is more efficient than creating entirely new members which would use rules to copy the data from the original member.
    --Chris

  • Advice needed regarding best practice

    Hi - curious whether what I have set up now should be changed to best utilize Time Machine. I have an iMac with a 750GB drive (a small chunk is partitioned for Vista); let's assume I have 600GB dedicated to the Mac.
    I have two FireWire external drives: a 160GB and a 300GB.
    Currently, I have my iTunes library on the 300GB drive as well as a few FCE files. I have made the 160GB the Time Machine drive. Would I be better off moving my iTunes library to the internal HD and then using the 300GB drive as the Time Machine drive? As I have it now, I don't think my iTunes library is getting backed up. In an ideal situation, is it safe to assume your Time Machine disk should be at least as large as, if not larger than, the internal HD? Thanks.
    Steve

    Steve,
    I would recommend using a drive that is 2x the size of the files you are going to back up. This is specifically in the event that you make changes to the files and Time Machine starts backing up the new files that you have created. It will back up once every hour, and it will only make a backup copy of files that you have modified. If you are backing up your home folder and you are using FCE, I would say backing up to the 160GB drive would be sufficient. If you were planning on backing up your home folder and your iTunes library, I would recommend the 300GB drive. The only reason you would need a backup drive 2x the size of your HD is if you were backing up your entire drive.

  • Question on best practice....

    Friends,
    Final Cut Studio Pro 5/Soundtrack Pro 1.0.3
    Powerbook G4, 2GB Ram
    I have a DV session recorded over 6 hours that I need some assistance with. The audio for the session was recorded in two instances....via a conference "mic" plugged into a Marantz PDM-671 audio recorder onto compactflash (located in the front of the room by the presenter(s)) AND via the built-in mics on our Sony HDR-FX1 video camera. Needless to say, the audio recording on the DV tape is not very good (presenters' voice(s) are distant with lots of "noise" in the foreground), while the Marantz recording is also not great...but better.
    Since these two were not linked together or started recording at the same time, the amount/time of recording doesn't match. I'm looking for either of the following:
    (a) Ways to clean up or enhance the audio recording on the DV tape so that the "background" voices of the presenters are moved to the foreground and able to be amplified properly.
    OR
    (b) A software/resource that would allow me to easily match my separate audio recording from the Marantz to the DV tape video, so I could clean up the "better" of the two audio sources, but match the audio and video without having our speakers look like they're in a badly dubbed film.
    Any advice or assistance you could give would be great. Thanks.
    -Steve
    Steven Dunn
    Director of Information Technology
    Illinois State Bar Association
    Powerbook G4   Mac OS X (10.4.6)   2GB RAM

    Hello Steven,
    What I would do in your case, since you have 6 hours, is to edit the show with the audio off the DV camera. Then, as painful as this will be, take the better audio from the recorder and sync it back up until it "phases" with the audio from the DV camera. One audio track will have the DV camera audio on it. Create another audio track, import the audio from the recorder and place it on the second audio track. Find the exact "bite" of audio and match it to the start of the DV camera audio clip. Now slip/slide the recorder audio until the sound starts to "phase". This will take a while, but in the end it works when the original camera audio was recorded from across the room. Good luck.

  • A question about Best Practices

    I'm currently working on a project and have run into a bit of a structure debate.
    Our project works with a relational database.
    Hence we have classes that model certain sections of the DB.
    We wish to create a Data Access Object to interface the model classes to the DB. To enforce consistency in programming, we were thinking of using a DAOInterface that would define all the methods (i.e. load(), save(), etc.).
    This leads to one issue: because each model is different, our interface would need to declare arguments and return values as Object.
    Which means a lot of casting... ugh, ugly.
    However, the solution to this problem is to create an interface for each DAO object, which defeats the purpose... because now any developer on the team can sneak a method in without being standard across the board.
    I was hoping my fellow developers might be able to share their experiences with this problem and provide recommendations.
    thanks
    J.

    You can declare "marker" interfaces for your DO Classes to be included in the interface for the DAO Class.
    public interface DAOInterface {
        DOInterface create(DOPrimaryKeyInterface key) throws DAOException;
    }
    public interface DOInterface {
    }
    public interface DOPrimaryKeyInterface {
    }
    It still involves casting, but at least not from Object - and it does enforce the "contract."
    As to keeping other developers from screwing it up, that's called Team Management and is out of the purview of this forum. ;D
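    As a side note, a generics-based variant (plain Java 5+, not part of the reply above) removes the remaining casts while still enforcing the contract; Customer, CustomerDao and the Long key below are illustrative, and DAOException is the one from the snippet above:
    // Every DAO implements the same contract, but with concrete types,
    // so callers never cast from Object.
    public interface Dao<T, K> {
        T load(K key) throws DAOException;
        void save(T entity) throws DAOException;
    }
    // Illustrative model class and DAO implementation.
    class Customer { }
    class CustomerDao implements Dao<Customer, Long> {
        @Override
        public Customer load(Long key) throws DAOException {
            return new Customer();   // real database access would go here
        }
        @Override
        public void save(Customer entity) throws DAOException {
            // real database access would go here
        }
    }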

  • Questions about best practice

    I have this existing functionality
    public class EmployeeUtility {
        public EmpDetail getEmployeeDetail(Employee employee) {
            int accountKey = employee.getAccountKey();
            String name = employee.getName();
            int age = employee.getAge();
            String sex = employee.getSex();           // assuming matching getters exist on Employee
            String address = employee.getAddress();
            return getDetail(accountKey, name, age, sex, address);
        }
    }
    I am making this class a web service. In order to make this call easier for the consumer, I am changing the parameter to
    public EmpDetail getEmployeeDetail(String empKey) { ... }
    because empKey is the key used to get all the other details like account key, sex, name and address. The bad part is an extra database call to get the account key, sex, name and address based on empKey. Remember, the Employee class above has many more variables than what I have shown. I don't want the consumers of my web service to bang their heads in order to get the info they want.
    Is this the right approach? Basically I believe my web service should be very flexible and easy to use.
    Thanks
    m
    Edited by: shet on May 21, 2010 7:13 AM

    shet wrote:
    "The bad part is an extra database call to get the account key, sex, name, address based on empKey." If I understand correctly, earlier you were receiving an employee object with more information than just a key. Now that you have changed the parameter to a key, the purpose of the web service has changed as well. So you shouldn't be bothering much. Why do you think that this is an issue?
    "I don't want my consumer of my webservice to bang their head in order to get the info they want." What actually are you looking for?
    "Is this the right approach. Basically I believe my webservice should be very flexible and easy to use." What flexibility and ease of use are you looking for?
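    One way to read that advice in code: keep the original object-based method and expose a thin key-based facade to web-service consumers, so the extra lookup happens once on the server side. A rough sketch; EmployeeRepository and findByKey are hypothetical placeholders for however the lookup by empKey is actually implemented:
    public class EmployeeService {
        private final EmployeeUtility utility = new EmployeeUtility();
        private final EmployeeRepository repository = new EmployeeRepository(); // hypothetical DAO

        // Consumers pass only the key; the service pays the extra database call once.
        public EmpDetail getEmployeeDetail(String empKey) {
            Employee employee = repository.findByKey(empKey); // hypothetical lookup by empKey
            return utility.getEmployeeDetail(employee);
        }
    }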

  • Azure Search Best Practice

    I have a few questions regarding best practices for implementing Azure Search. I am working on loading titles and usernames into Azure Search with a unique ID as the key; searches would be on various matching words in the titles or usernames:
    - Is there a detailed article or whitepaper that discusses best practice?
    - Should we always use filter instead of search to improve response time?
    - I don't know how things work under the hood; is it possible to turn off certain features, for example scoring profiles, to improve response time?
    - Can I run a load test on GET queries? How many different GET queries? Does the same query get cached?
    - I have set up an indexer with an Azure SQL data source and a data change policy against a DateModified column in the table. This indexer runs every 5 minutes. I'd imagine an index is necessary on this column in the SQL table? Also, when the indexer runs, does it check all documents in the search index against the data source?
    Thanks in advance,
    Ken

    We don't have an end-to-end whitepaper that covers all of this yet. Here are notes on your specific questions; feel free to add more questions as details come up:
    Filter vs search: in general, more selective queries (where the combination of filter + search matches fewer documents of the overall index) will run faster, since we need to score and sort fewer documents. As for whether you choose filter vs search: if you want an exact boolean predicate, use a filter; if you want soft search criteria (with linguistics and such applied to it), use search.
    Scoring profiles are off by default. They only kick in if you explicitly create a scoring profile in the index and either reference it in queries or mark it as default. With no scoring profiles present, scoring of documents is based on the properties of the search text and document text.
    Yes, you can do your perf testing using GET for search requests. While the same query doesn't get cached, the underlying data ends up being warmer. A good pattern is to have a bunch of keywords and have your test create different searches with 2-3 words each (or whatever is typical in your scenario) based on those keywords.
    For the SQL table question, yes, it's better if you have an index on the column you use as the high-watermark so SQL doesn't need to do a table scan each time the indexer runs. The larger the table, the more important this is.
    This posting is provided "AS IS" with no warranties, and confers no rights.
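    For the load-testing point, here is a minimal Java 11+ sketch of issuing one of those GET search requests against the REST API; the service name, index name, field name in the filter, api-version and query key are all placeholders, so check the values your service actually uses:
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class SearchGetExample {
        public static void main(String[] args) throws Exception {
            String service = "myservice";                 // placeholder service name
            String index = "titles";                      // placeholder index name
            String apiKey = "<query-key>";                // placeholder query key
            String apiVersion = "2015-02-28";             // use the api-version your service supports

            // Soft search terms plus an exact filter, per the filter-vs-search note above.
            String search = URLEncoder.encode("john smith", StandardCharsets.UTF_8);
            String filter = URLEncoder.encode("userName eq 'jsmith'", StandardCharsets.UTF_8);

            String url = "https://" + service + ".search.windows.net/indexes/" + index
                    + "/docs?api-version=" + apiVersion
                    + "&search=" + search + "&$filter=" + filter;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("api-key", apiKey)
                    .header("Accept", "application/json")
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }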
