Bring CRM to clients and partners: DMZ architecture best practices

Hi, we need to provide access to CRM from the internet for clients and partners,
so we need to know the best practices for the architecture design.
We have several doubts about these aspects:
- We will use SAP Portal, SAP Gateways and Web Dispatchers with a DMZ:
       do you have any examples of this kind of architecture?
- The new users will be added in 3 steps: 1,000, 10,000 and 50,000:
       how can we regulate the load on the internal system? Is that possible?
- The system can't show any problems to the clients:
       we need a 24x7 system, because the clients are big clients.
- At the moment we have 1,000 internal users.
Thanks

I use the Panel Close? filter event, discard it, and use the event to signal to my other loops/modules that my software should shut down. I normally do this either via user events or, if I'm using a queued state machine (which I generally do for each of my modules), by enqueueing a 'shutdown' message so that each VI closes its references (e.g. hardware/file) and stops its loop.
If it's just a simple VI, I can sometimes be lazy and use local variables to tell a simple loop to exit.
Finally, once all of the modules have finished, use the FP.Close method to close the top-level VI, and the application should leave memory (once everything else has finished running).
This *seems* to be the most recommended way of doing things but I'm sure others will pipe up with other suggestions!
The main thing is discarding the Panel Close? event and using it to signal the rest of your application to shut down. You can keep your global for 'stopping' the other loops - just write a True to it inside the Panel Close? event - but a better method is to use some sort of communication mechanism (queue/event) to tell the rest of your application to shut down.
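LabVIEW is graphical, so there is no direct text equivalent of the pattern above, but purely as an illustration of the same idea - broadcast a 'shutdown' message to each module's queue and let each module close its own references before stopping its loop - here is a rough Java analog. All names here are hypothetical; this is not LabVIEW code and is not tied to any NI API.

import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Textual analog of the queued-state-machine shutdown described above: the
// "Panel Close?" handler is replaced by requestShutdown(), which posts a
// SHUTDOWN message to every module's queue; each module then closes its own
// resources and exits its loop.
public class ShutdownDemo {
    enum Message { DO_WORK, SHUTDOWN }

    static class Module implements Runnable {
        final BlockingQueue<Message> queue = new ArrayBlockingQueue<>(16);
        final String name;
        Module(String name) { this.name = name; }

        @Override
        public void run() {
            try {
                while (true) {
                    Message msg = queue.take();          // wait for the next message/state
                    if (msg == Message.SHUTDOWN) {
                        closeReferences();               // e.g. hardware/file handles
                        break;                           // stop the loop
                    }
                    // ... handle DO_WORK and other states here ...
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        void closeReferences() {
            System.out.println(name + ": references closed, loop stopped");
        }
    }

    // Equivalent of discarding "Panel Close?" and broadcasting a shutdown message.
    static void requestShutdown(List<Module> modules) {
        modules.forEach(m -> m.queue.add(Message.SHUTDOWN));
    }

    public static void main(String[] args) throws InterruptedException {
        List<Module> modules = List.of(new Module("DAQ"), new Module("Logger"));
        List<Thread> threads = modules.stream().map(Thread::new).toList();
        threads.forEach(Thread::start);

        requestShutdown(modules);                        // user closed the window
        for (Thread t : threads) t.join();               // wait for all modules to finish
        // In LabVIEW, this is where FP.Close would be called on the top-level VI.
    }
}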
Certified LabVIEW Architect, Certified TestStand Developer
NI Days (and A&DF): 2010, 2011, 2013, 2014
NI Week: 2012, 2014
Knowledgeable in all things Giant Tetris and WebSockets

Similar Messages

  • When I share a file to YouTube, where does the output file live? I want also to make a DVD. And is this a best practice or is there a better way?

    I also want to make a DVD, but can't see where the .mov files are.
    And is this a best practice or is there a better way to do this, such as with a master file?
    thanks,
    /john

    I would export to a file saved on your drive as H.264, same frame size, then upload that to YouTube.
    I have never used FCP X to make a DVD, but I assume that it will build the needed VOB/MPEG-2 source material for the disc.
    I used to use Toast & iDVD. Toast is great.
    To "see" the files created by FCP 10.1.1 for YouTube, right-click (Control-click) the Library icon in your Movies folder, choose Show Package Contents, then look in "project"/share.

  • JAX-RPC Client - java.rmi.RemoteException:/getPort best practices

    We are working on Java web services (JAX-RPC style), and while consuming a Java web service we sometimes get a 'Remote Exception'. I have generated the client-side code with the WebLogic ant task "clientgen".
    1: Exception
    java.rmi.RemoteException: SOAPFaultException - FaultCode [{http://schemas.xmlsoap.org/soap/envelope/}Server] FaultString [Failed to invoke end component {service implementation class name} (POJO), operation= {webmethode name}
    -> Failed to invoke method
    ] FaultActor [null] Detail [<detail><java:string xmlns:java="java.io">java.lang.NullPointerException
    </java:string></detail>]; nested exception is:
    weblogic.wsee.jaxrpc.soapfault.WLSOAPFaultException: Failed to invoke end component {service implementation class name} (POJO), operation={webmethode name}
    -> Failed to invoke method
    {Package name}.ManagementPortType_Stub.createXXX(xxxPortType_Stub.java:37) // This line is clientgen generated code
    (From this line it is clear that the clientgen-generated code failed to get the web service port.)
    2: The following is our implementation to invoke the web service:
    ManagementService service = new ManagementService_Impl("WSDL URL");
    ManagementPortType port = service.getManagerHTTPPort();
    port.getServiceName();
    Our code executes the first two statements for every web service request, and from our observation these two statements (the service and port creation) take a long time to execute, which sometimes leads to the 'Remote Exception' (when there are many concurrent requests for the web service).
    3: My questions:
    1> Why does it take so long to initialize the service and port objects?
    2> Is there any problem if I share the "port" object across multiple requests?
    3> What are the best practices for this type of implementation?
    Help would be greatly appreciated !

    Hi,
    Thanks for your reply.
    My service is deployed and working fine.
    The NPE occurs because {Package name}.ManagementPortType_Stub is null and the code is executing the createXXX() method on it.
    Anyway, I can't do anything here because this is clientgen-generated code.
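
    Regarding question 2> above, a common pattern is to pay the WSDL-parsing cost only once by caching the service object instead of recreating it for every request. Below is a minimal sketch using the clientgen-generated names quoted in this thread (ManagementService, ManagementService_Impl, getManagerHTTPPort()); the WSDL URL and the throws clauses are assumptions, so adjust them to match your generated code, and verify whether your generated stub is thread-safe before sharing a single port across threads (if in doubt, cache the service and create a port per thread/request).

    // Hypothetical caching helper; the Management* types are the clientgen-generated
    // classes mentioned in this thread, so the sketch assumes they are on the classpath.
    public final class ManagementClientFactory {

        private static final String WSDL_URL = "http://host:7001/service?WSDL"; // assumption: your WSDL URL

        // Created lazily and reused: building the *_Impl service is what parses the
        // WSDL and is the expensive step observed above.
        private static volatile ManagementService service;

        private ManagementClientFactory() { }

        private static ManagementService getService() throws javax.xml.rpc.ServiceException {
            if (service == null) {
                synchronized (ManagementClientFactory.class) {
                    if (service == null) {
                        service = new ManagementService_Impl(WSDL_URL);
                    }
                }
            }
            return service;
        }

        // Comparatively cheap once the service exists; call this per request (or per
        // thread) if you are unsure whether the generated stub is safe to share.
        public static ManagementPortType getPort() throws javax.xml.rpc.ServiceException {
            return getService().getManagerHTTPPort();
        }
    }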

  • Mac Pro 10.7 Server DMZ best practice

    The Mac Pro has 2 gigE, what is the best practice for lion server and DMZ?
    Should I ignore one port and put the server in the DMZ, firewalling from the LAN to the server (a pain for file sharing), or use one port for the DMZ and one into the LAN?
    I have been trying to use the two ports, but Lion Server seems to want to bind to only one address (10.1.1.1 DMZ or 192.168.1.1 LAN).
    Does anyone have a best practice for this? I am using a Cisco ASA 5500 for the firewall.
    Thank you

    If you put your server in a DMZ, all traffic will be sent to it unfiltered, in which case the server firewall would be your only line of defense against attack.
    For better security, set firewall rules in the Cisco that pass traffic to the ports you want open and deny traffic on all other ports. You can also restrict access to specific ports by allowing or denying specific IP addresses or address blocks in the firewall settings.

  • Oracle SLA Metrics and System Level Metrics Best Practices

    I hope this is the right forum...
    Hey everyone,
    This is what I am looking for. We have several SLAs set up, we have defined many business metrics, and we are trying to map them to system-level metrics. One key area for us is Oracle. I was wondering if there is a best practice guide out there for SLAs when dealing with Oracle or, even better, for system-level metric best practices.
    Any help would be ideal please.

    Hi
    Can you also include the following in the FAQ?
    1) ODP.NET, if installed prior to this beta version - what is the best practice? Uninstall it prior to getting this installed, etc.?
    2) As multiple Oracle homes have become the norm these days - this being a client-only install, it should probably be non-intrusive and non-invasive. Hope that is getting addressed.
    3) Is this a precursor to future happenings, like some of the app servers evolving to support .NET natively and so on?
    4) Where is BPEL in this scheme of things? Is that getting added to this also so that Eclipse and .NET VS 2003 developers can use some common Webservice framework??
    Regards
    Sundar
    It was interesting to see options for changing the spelling of Webservice [ the first one was WEBSTER]..

  • OVM Repository and VM Guest Backups - Best Practice?

    Hey all,
    Does anybody out there have any tips/best practices on backing up the OVM Repository as well as (of course) the VMs? We are using NFS exclusively and have the ability to take snapshots at the storage level.
    Some of the main points we'd like to do ( without using a backup agent within each VM ):
    backup/recovery of the entire VM Guest
    single file restore of a file within a VM Guest
    backup/recovery of the entire repository.
    The single file restore is probably the most difficult/manual. The rest can be done manually from the .snapshot directories, but when we're talking about having hundreds and hundreds of guests within OVM...this isn't overly appealing to me.
    OVM has this lovely manner of naming its underlying VM directories after some ambiguous number which has nothing to do with the name of the VM (I've been told this is changing in an upcoming release).
    Brent

    Please find below the response from the Oracle support on that.
    In short :
    - First, "manual" copies of files into the repository is not recommend nor supported.
    - Second we have to go back and forth through templates and http (or ftp) server.
    Note that when creating a template or creating a new VM from a template, we're tlaking about full copies. No "fast-clone" (snapshots) are involved.
    This is ridiculous.
    How to Back up a VM:1) Create a template from the OVM Manager console
    Note: Creating a template requires the VM to be stopped (this is required because the if the copy of the virtual disk is done with the running will corrupt data) and the process to create the template make changes to the vm.cfg
    2) Enable Storage Repository Back Ups using the step above:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-storage-repo-config.html#vmusg-repo-backup
    2) Mount the NFS export created above on another server
    3) Them create a compress file (tgz) using the the relevant files (cfg + img) from the Repository NFS mount:
    Here is an example of the template:
    $ tar tf OVM_EL5U2_X86_64_PVHVM_4GB.tgz
    OVM_EL5U2_X86_64_PVHVM_4GB/
    OVM_EL5U2_X86_64_PVHVM_4GB/vm.cfg
    OVM_EL5U2_X86_64_PVHVM_4GB/System.img
    OVM_EL5U2_X86_64_PVHVM_4GB/README
    How to restore a VM: 1) Upload the compressed file (tgz) to an HTTP, HTTPS or FTP server
    2) Import to the OVM manager using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-repo.html#vmusg-repo-template-import
    3) Clone the Virtual machine from the template imported above using the following instructions:
    http://docs.oracle.com/cd/E27300_01/E27309/html/vmusg-vm-clone.html#vmusg-vm-clone-image

  • UDDI and deployed Web Services Best Practice

    Which would be considered a best practice?
    1. To run the UDDI Registry in it's own OC4J container with Web Services deployed in another container
    2. To run the UDDI Registry in the same OC4J container as the deployed Web Services

    The reason you don't see your services in the drop-down is that CE does lazy initialization of EJB components (which gives you a faster startup time for the server itself), but your services are still available to you. You do not need to redeploy each time you start the server. One thing you could do is create a logical destination (in NWA) for each service and use the "search by logical destination" button. You should always see your logical names in that drop-down, and you can use them to invoke your services. Hope it helps.
    Rao

  • What is the Account and Contact workflow or best practice?

    I'm just learning the use of the web services. I have written something to upload my customers into accounts using the web services. I now need to include a contact for each account, and I'm trying to understand the workflow. It looks like I need to first call the web service to create the account, then call a separate web service to create the contact, including the account's ID with the contact so that they are linked. Is this correct?
    Is there a place I can go to find the "best practices" for work flows?
    Can I automatically create the contact within my call to create the account in the web service?
    Thanks,

    Probably a poor choice of words. Sorry.
    So basically, I have gotten further, but I just noticed a related problem.
    I'm using Web Services (WS) 1.0. I insert an account, then, in a separate WS call, I insert my contacts for the account. I include the AccountID and a user-defined key from the Account when creating the Contact.
    When I look at my Contact on the CRMOD web page, it shows the appropriate links back to the Account. But when I look at my Account on the CRMOD web page, it shows no Contacts.
    So when I say workflow or best practice, I was hoping for guidance on how to properly write my code to accomplish all of the necessary steps - as in: this is how you insert an account with a contact(s) so that it updates the appropriate IDs and shows up properly on the CRMOD web pages.
    Based on the above, it looks like I need to, as the next step, take the ContactID and update the Account with it so that there is a bi-directional link.
    I'm thinking there is a better way in doing this.
    Here is my pseudocode:
    AccountInsert()
    AccountID = NewAcctRec
    ContactInsert(NewAcctRec)
    ContactID = NewContRec
    AccountUpdate(NewContRec)
    Thanks,
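
    For what it's worth, here is a minimal Java sketch of the sequence in the pseudocode above. The CrmClient interface and its method names are purely hypothetical wrappers standing in for the generated WS 1.0 AccountInsert/ContactInsert/AccountUpdate calls - they are not the actual CRM On Demand API - and the point is only the ordering and how the IDs flow between the calls.

    // Hypothetical thin wrapper over the generated WS 1.0 stubs; names are illustrative only.
    interface CrmClient {
        String insertAccount(String accountName);                       // returns the new AccountId
        String insertContact(String contactName, String accountId);     // returns the new ContactId
        void updateAccount(String accountId, String primaryContactId);  // writes the contact link back
    }

    public class AccountContactLoader {
        private final CrmClient crm;

        public AccountContactLoader(CrmClient crm) {
            this.crm = crm;
        }

        // Mirrors the pseudocode: insert the account, insert the contact with the
        // account's ID, then update the account with the contact's ID so the
        // relationship is visible from both sides in the CRMOD UI.
        public void load(String accountName, String contactName) {
            String accountId = crm.insertAccount(accountName);              // AccountInsert()  -> NewAcctRec
            String contactId = crm.insertContact(contactName, accountId);   // ContactInsert(NewAcctRec) -> NewContRec
            crm.updateAccount(accountId, contactId);                        // AccountUpdate(NewContRec)
        }
    }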

  • Grid Control and SOA suite monitoring best practice

    Hi there,
    I’m trying to monitor a SOA implementation on Grid Control.
    Are there any best practices for this?
    Thanks,     
    Nisti

    If they use it to access and monitor the database without making any other changes, then it should be fine. But if they start scheduling stuff like oradba mentioned above, then that is where they will clash.
    You do not want a situation where different jobs are running on the same database from different setups by different teams (cron, DB Control, dbms_job, Grid Control).
    Just remember there will be additional resource usage on the database/server with both running, and the Grid Control repository cannot be in the same database as the DB Console repository.

  • GRC AACG/TCG and CCG control migration best practice.

    Are there any best practice documents which illustrate the step-by-step migration of AACG/TCG and CCG controls from the development instance to production? Also, how should one take a backup of the same?
    Thanks,
    Arka

    There are no automated out-of-the-box tools to migrate anything from CCG. In AACG/TCG you can export and import Access Models (including the Entitlements) and Global Conditions. You will have to manually set up roles, users, path conditions, etc.
    You can't clone AACG/TCG or CCG.
    Regards,
    Roger Drolet
    OIC

  • Deployment (Redistrubution) CR2008 - bloated (large) files and what's the best practice

    Post Author: basit
    CA Forum: Deployment
    I am a rookie to CR; I started deployment (CR XII) with my POS program written in .NET VB 2005, and the MSI was really large (85 MB). I bought the latest version (CR2008) after reading that it's much smaller...
    I have a dilemma and frankly don't know which one of the packages to bundle with my redistribution, or how to do this.
    You are given three options:
    1) The 25 MB exe version - this is great as it's very much smaller, yet when executed it requires you to insert a serial number; how can I use this without having the client insert my serial number?
    2) The MSI version, which is almost twice the size of the above
    3) The ClickOnce version, which is like the one above but has some language folders
    Please assist; thanking you in advance

    Post Author: Justin Azevedo
    CA Forum: Deployment
    You don't need to insert your serial number. Just press NEXT and go right past it, leaving the field blank. This only works with CR2008; the XI runtime needed this number.

  • Setup internal and external DNS namespaces best practice

    Are an external namespace (e.g. companydomain.com) and an internal namespace (e.g. corp.companydomain.com or companydomain.local) able to run on the same DNS server (using Microsoft Windows DNS servers)?
    MS said it is highly recommended to use a subdomain to handle the internal namespace - say corp.companydomain.com if the external namespace is companydomain.com. How should this be set up? Should I create my AD DS domain as corp.companydomain.com directly,
    or as companydomain.com and then create a subdomain corp?
    Thanks in advance.
    William Lee
    Hong Kong

    Are an external namespace (e.g. companydomain.com) and an internal namespace (e.g. corp.companydomain.com or companydomain.local)
    able to run on the same DNS server (using Microsoft Windows DNS servers)?
    Yes, it is technically feasible. You can have both of them running on the same DNS server(s); just make sure only your public DNS zone is published for external resolution.
    MS said it is highly recommended to use a subdomain to handle the internal namespace - say corp.companydomain.com
    if the external namespace is companydomain.com. How should this be set up? Should I create my AD DS domain as corp.companydomain.com directly, or as companydomain.com and then create a subdomain corp?
    What is recommended is to avoid a split-DNS setup (where your internal and external DNS names are the same), because it introduces extra complexity and confusion when managing it.
    My own recommendation is to use .local for the internal zone and .com for the external one.
    This posting is provided "AS IS" with no warranties or guarantees , and confers no rights.

  • Re: OVM Repository and VM Guest Backups - Best Practice?

    Hi,
    I have also been looking into how to back up an OVM Repository, and I'm currently thinking of doing it with the OCFS2 reflink command, which is what is used by the OVMM 'Thin Clone' option to create a snapshot of the virtual disk's .img file. I thought I could create a script that reflinks all the virtual disks to a separate directory within the repository, then export the repository via OVMM and back up all the snapshots. All the snapshots could then be deleted once the backup is complete.
    The VirtualMachines directory could also be backed up in the same way, or I would have thought it would be safe to back up this directory directly, as the cfg files are very small and change infrequently.
    I would be interested to hear from anyone who has any experience of doing a similar thing, or has any advice about whether this would be doable.
    Regards,
    Andy

    Yes, that is one common way to perform backups. Unfortunately, you'll have to script those things yourself. Some people also use the xm command to pause the VM for just the time it takes to create the reflinks (especially if there is more than one virtual disk in the machine and you want to make sure they are consistent).
    You can read a bit more about it here:
    VM Manager and VM Server - backup and recovery options
    There is a great article (in German, but you can understand the scripts) about this: http://www.trivadis.com/fileadmin/user_upload/PDFs/Trivadis_in_der_Presse/120601_DOAG-News_Snapshot_einer_VM_mit_Oracle_VM_3.pdf
    I have also blogged about an alternative way where you clone a running machine and then back up that cloned (and stopped) machine:
    http://portrix-systems.de/blog/brost/taking-hot-backups-with-oracle-vm/
    cheers
    bjoern

  • Server 2012 with HQ and 2 branch locations - Best practice?

    Hello. I'm trying to plan a domain for a company with 1 headquarters and 2 branch offices.
    Currently both branch offices are communicating with headquarters via site-to-site VPN. However, most if not all of our services are cloud based. The routers in each location are performing DHCP and DNS. The speed at each location is good: 10 Mbps at the branches and 70 Mbps at HQ. Both branch offices have about 30 people at them; HQ has around 80-100.
    We want to implement a domain so there is user authentication for accessing the computers, and preferably a print server at HQ for HQ printers. We may want to move DHCP to a domain controller. We also want to utilize Group Policy.
    My questions are the following:
    1. Since we are primarily cloud based, would putting a DC on Amazon EC2 or another product be advisable?
    2. Should I put RODCs at the remote locations?
    3. If I have redundant DCs at HQ on 2 different XenServers and have credential caching, would having only the 2 DCs at HQ be advisable?

    1. Makes sense to put DCs in each location. That way, even when the network to the internet is down, resources are still available locally.
    2. It depends on the applications you are installing in the remote offices. Most applications are okay, but some require access to a writable domain controller.
    3. Two DCs at HQ is the minimum you should have there, then one in each remote site.
    . : | : . : | : . tim

  • Populating users and groups - design considerations/best practice

    We are currently running a 4.5 Portal in production. We are doing requirements/design for the 5.0 upgrade.
    We currently have a stored procedure that assigns users to the appropriate groups based on the domain info and role info from an ERP database after they are imported and synched up by the authentication source.
    We need to migrate this functionality to the 5.0 portal. We are debating whether to provide it via a custom Profile Web Service. It was recommended during ADC and other presentations that we should stay away from using the database security/membership tables directly and use the EDK/PRC instead.
    Please advise on the best way to approach this issue (with details). We need to finalize the best approach ASAP.
    Thanks.
    Vanita

    So the best way to do this is to write a custom Authentication Web Service.  Database customizations can do much more damage and the EDK/PRC/API are designed to prevent inconsistencies and problems.
    Along those lines, they also make it really easy to rationalize data from multiple backend systems into the organization you'd like for your portal. For example, you could write a Custom Authentication Source that connects to your NT Domain and gets all the users and groups, then connects to your ERP system and does the same work your stored procedure would do. It can then present this information to the portal in the way that the portal expects and let the portal maintain its own database and information store.
    Another solution is to write an External Operation that encapsulates the logic in your stored procedure but uses the PRC/Server API to manipulate users and group memberships.  I suggest you use the PRC interface since the Server API may change in subtle ways from release to release and is not as well documented.
    Either of these solutions would be easier in the long term to maintain than a database stored procedure.
    Hope this helps,
    -Akash

Maybe you are looking for

  • Error Message no. FICUSTOM098

    Dear Experts I am getting the below error while posting the downpayment in FB01 T.code, my client doesn't want to use the F-48 to make the down payment. so i used the FB01 down payment since parking & posting option is available for FB01. No Funds Ma

  • IPhone 4 Car Connection (USA Spec Intergration) And Docking

    Hi, I'm likely going to be getting the new iPhone 4 when it rolls out on Verizon. First off, please don't tell me how big of a mistake I'm making and alike. AT&T is terrible in my area, and most of the people I talk to (work or friends) have Verizon.

  • Installing windows 7 on mac book pro late 2013

    Installing Windows 7 on Mac Book Pro Late 2013 Background Mac Book Pro Late 2013 uses a USB 3.0 hub for the external ports, the keyboard and mouse devices. Windows 7 does not have the drivers to support USB 3.0 ports so these must be integrated into

  • SQL query using Group by and Aggregate function

    Hi All, I need your help in writing an SQL query to achieve the following. Scenario: I have table with 3 Columns. There are 3 possible values for col3 - Success, Failure & Error. Now I need a query which can give me the summary counts for distinct va

  • Changes to Variable in Customer exit

    Hi Experts, We have two variables in the ready for input query. First  Variable : Customer exit variable which gets populated based on user login details. Second Variable: Input ready variable. The values  will be populated depending on the first var