Question on ESB Best Practice

Hi,
I would like to know the best practice for the following scenario.
1) I have to call different web services based on message content, through the ESB.
2) I see two options: either develop one ESB service that routes to the different web services based on content, or create a separate ESB service for each web service.
Can anyone tell me which would be best with respect to performance and other considerations?
Thanks,
Jack

I don't think I'm experienced enough, but I'd guess it depends on many things.
- First, about logic
Consider where you want to place the logic and how to manage it.
You can place routing logic (routing to different endpoints) in BPEL, but it is hard to manage there. When you put the routing logic in the ESB, you can change it without redeploying BPEL; it is easier to update ESB routing rules than BPEL processes.
- Performance
I don't think one ESB service instead of three is a bottleneck, but it is worth measuring.
You could even create a separate subprocess to hold the routing logic, but I think that amounts to the same thing as a single ESB service.

Similar Messages

  • ESB best practices

I have the following questions on ESB; I'd appreciate your input on these:
1. Does one ESB service map to one or multiple operations in the service provider's WSDL?
2. How can we implement security if we want to restrict the operations exposed in the ESB WSDL to certain clients?
3. We are planning to expose the routing service to the consumers as a synchronous call; should the ESB define the schema for the consumers, or should the ESB use the schema provided by the consumers (assuming more than one consumer)?
    4. Is it a bad practice to use DB Adapters?
    5. Is it necessary to use Service Registry?
    Thanks for any help on these

"Best practice" is well overstated; it is a term that theorists bandy around to justify why they spend so much time talking about it instead of doing it.
In the real world you have to take a bit from experience and a bit from so-called best practice, because sometimes best practice does not fit your use case.
I would like to understand your 2nd approach. Are you saying your services can call the ESB directly, e.g. via web services?
If this is the case I would go with your first option, as it provides guaranteed delivery and more options for failover: you can have your process run in an XA transaction so the message is not dequeued until it has been enqueued on the next step.
    cheers
    James

  • Wireless authentication network design questions... best practices... etc...

    Working on a wireless deployment for a client... wanted to get updated on what the latest best practices are for enterprise wireless.
Right now, I've got the corporate SSID integrated with AD authentication on the back end via RADIUS.
I would like to implement certificates in addition to the user-based authentication so we have some level of two-factor authentication.
If a machine is lost, I don't want a certificate alone to allow an unauthorized user access to the wireless network. I also don't want poorly managed AD credentials (written on a sticky note, for example) opening up the network to an unauthorized user either. Is it possible to do an AND condition, so that both are required to get access to the wireless network?

There really isn't true two-factor authentication you can do with just RADIUS unless it's ISE and you're doing EAP Chaining. One workaround that works with ACS or ISE is to use the "Was machine authenticated" condition. This only works for domain computers. The way Microsoft works :) is that you have a setting for user or computer authentication; it does not mean user AND computer. When a Windows machine boots up, it sends its system name first and then the user credentials. Machine authentication happens only once, at boot; user authentication happens every time a full authentication is required.
Check out these threads; they explain it pretty well.
    https://supportforums.cisco.com/message/3525085#3525085
    https://supportforums.cisco.com/thread/2166573
    Thanks,
    Scott
Help out others by using the rating system and marking answered questions as "Answered"

  • Questions VLAN design best practices

    As per best practices for VLAN design:
    1) Avoid using VLAN 1 as the “blackhole” for all unused ports.
    2) In the local VLANs model, avoid VTP (use transparent mode).
    Point 1
In a big network, I'm using VLAN 1 as the blackhole VLAN. I'd like to confirm that, even if we're not complying with best practices, we're still doing fine:
a) all trunk ports on all switches have the allowed VLANs explicitly assigned.
b) almost all ports on all switches are assigned to specific data/voice VLANs, even if shut down
c) the remaining ports (some unused SFP ports, for example) are shut down
d) we always tag the native VLAN (vlan dot1q tag native)
So, no data is flowing anywhere on VLAN 1. In our situation, is it safe to use VLAN 1 as the blackhole VLAN?
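For reference, the relevant configuration looks roughly like this in IOS (a sketch only, with hypothetical interface and VLAN numbers):
! practices (a) and (d): prune trunks explicitly and tag the native VLAN
vlan dot1q tag native
interface GigabitEthernet1/0/48
 switchport mode trunk
 switchport trunk allowed vlan 10,20
! practices (b) and (c): park ports in a specific VLAN and shut down unused ones
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 10
 shutdown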
    Point 2
Even if we're using the local VLANs model, we have VTP in place. What is the reasoning behind this best practice? As already said, we allow only specific VLANs on trunk ports (it's part of our network policy), so we do not have undesired layer 2 loops to deal with.
Any thoughts?
    Bye
    Dario

    We are currently using VTP version 3 and migrating from Rapid-PVST to MST.
The main reason for having VTP in place (at least for us) is the ability to assign ports to the correct VLAN at each site simply by looking at the propagated VLAN database, and to manage that database centrally.
    We also avoid using the same VLAN ID at two different sites.
However, I did find something to look into more deeply: with MST and VTP, a remote switch can be root for a VLAN it doesn't even use or have active ports in, and this doesn't feel right.
    An example:
1) switch1 and switch528 share a link with allowed VLAN 100
2) switch1 is the root for instances 0 and 1
3) VLAN 100 is assigned to instance 1
4) VLAN 528 is not assigned to any particular instance, so it goes under instance 0
5) VLAN 528 is the local data VLAN for switch528 (switch501 has VLAN 501)
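For reference, the MST mapping described above was configured roughly like this (a sketch; the region name and revision number are made up):
spanning-tree mode mst
spanning-tree mst configuration
 name REGION1
 revision 1
 instance 1 vlan 100
! on switch1 only: lower the priority so it becomes root for instances 0 and 1
spanning-tree mst 0-1 priority 24576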
switch528#sh spanning-tree vlan 528
    MST0
      Spanning tree enabled protocol mstp
      Root ID    Priority    24576
                 Address     1c6a.7a7c.af80
                 Cost        0
                 Port        25 (GigabitEthernet1/1)
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
      Bridge ID  Priority    32768  (priority 32768 sys-id-ext 0)
                 Address     1cde.a7f8.4380
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
    Interface           Role Sts Cost      Prio.Nbr Type
    Gi0/1               Desg FWD 20000     128.1    P2p Bound(PVST)
    Gi0/2               Desg FWD 20000     128.2    P2p Edge
    Gi0/3               Desg FWD 200000    128.3    P2p Edge
    Gi0/4               Desg FWD 200000    128.4    P2p
    Gi0/5               Desg FWD 20000     128.5    P2p Edge
    switch1#sh spanning-tree vlan 501
    MST0
      Spanning tree enabled protocol mstp
      Root ID    Priority    24576
                 Address     1c6a.7a7c.af80
                 This bridge is the root
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
      Bridge ID  Priority    24576  (priority 24576 sys-id-ext 0)
                 Address     1c6a.7a7c.af80
                 Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
    Interface           Role Sts Cost      Prio.Nbr Type
    Should I worry about this?

  • Question about Mobilink best practice

    Hello,
    I have the following data workflow:
    Around 20 tables synchronize upload only.
    7 tables synchronize download only.
    2 tables have bidirectional sync.
    I was wondering if it could be a good idea to create 3 schema models, instead of one.
    This way, the upload, which is critical, could run independently of the download.
    Please, tell me if this is a good practice.
    Thank you
    Arcady

    Hi Arcady,
    No, you cannot run multiple instances of dbmlsync against the same SQL Anywhere database concurrently. If you try this, you will see the error:
    SQL statement failed: (-782) Cannot register 'sybase.asa.dbmlsync' since another exclusive instance is running
    dbmlsync client accesses must be serialized against the same database.
    I was wondering if it could be a good idea to create 3 schema models, instead of one.
    This way, the upload, which is critical, could run independently of the download.
    See: DocCommentXchange - Upload-only and download-only synchronizations
    It's up to you and what you really prefer to manage - you can do all of the work in one model (and create synchronization scripts that do the "right" work), or you can create three separate models and synchronize them separately. If you're "more concerned" (i.e. want to synchronize more often) about the upload-only tables then you can create a separate model and use dbmlsync -uo or the UploadOnly (uo) extended option for that specific model.
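For example, a separate upload-only model could then be synchronized on its own schedule along these lines (a sketch; the connection string and publication name are hypothetical):
dbmlsync -c "eng=remote_db;uid=dba;pwd=sql" -n upload_pub -uo -o upload.log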
A reminder that if you do end up splitting your one model into multiple models, all of the models have to be kept in synchronization with the MobiLink synchronization server in order for dbmlsync to advance the remote database transaction log truncation offset
(i.e. in order for delete_old_logs to continue to work and remove offline logs, all of the logs must be synchronized for all synchronization subscriptions).
    Regards,
    Jeff Albion
    SAP Active Global Support

  • Quick Question: CS6 Installation Best Practice

    Hi Guys
    I have CS5 and CS5.5 Master Collection running on my PC (Win7 64bit SP1, Intel Core i7 2.67ghz, 24 gig RAM) and I've just taken ownership of the CS6 upgrade. When I loaded MC CS5.5 I had a bunch of errors, which turned out to be related to the update not needing to replace certain existing components of CS 5.
    Should I just insert the disk and run or is there anything I should do to prepare this time? Any advice is welcome. I'd like to avoid uninstalling the previous versions but if that's the advice, I'll run with it.
    Regards,
    Graham

If you need Flash Builder and it is very important for you, then go to Control Panel and uninstall CS5.5. When you start the uninstall, a screen with all the product names will come up; just check Flash Builder and uncheck everything else, and it will remove just Flash Builder.
But forget it if you don't need it or don't use it.

  • Question About CRM Best Practices Configuration Guide...

In the CRM Connectivity (C71) Configuration Guide, Sections 3.1.2.2.2 and 3.1.2.2.3, it mentions two clients, Client 000 and the application client. What are these two clients? I assumed Client 000 was my CRM client, but that sounds the same as what the application client should be.
    http://help.sap.com/bp_crmv340/CRM_DE/BBLibrary/Documentation/C71_BB_ConfigGuide_EN_US.doc

    Keith,
    Client 000 is not the application client.
The client used in the middleware (e.g. the CRM quality client - R/3 quality client, or the CRM production client - R/3 production client) is the application client.
You have to do the configuration once in client 000 and once in the client you created, which is used for the middleware connectivity.
    regards,
    Bapujee

  • Best Practices for NCS/PI Server and Application Monitoring question

    Hello,
I am deploying a virtual instance of Cisco Prime Infrastructure 1.2 (1.2.1.012) on an ESX infrastructure, in an enterprise environment. I have questions about the best practices for monitoring this appliance. I am looking to monitor application failures (services down, DB issues) and "hardware" (I understand this is a virtual machine, but statistics on the filesystem and CPU/memory are good).
Firstly, I have enabled the snmp-server via the CLI and set the SNMP trap host destination. I have created a notification receiver for the SNMP traps inside the NCS GUI and enabled the "System" alarm type, which includes alarms like NCS_DOWN and "PI database is down". I am trying to understand the difference between enabling SNMP-SERVER HOST via the CLI and setting the notification destination in the GUI. Also, how can I generate an NCS_DOWN alarm in my lab? Running NCS stop does not generate any alarms, and I have not been able to find much information on how to generate this as a test.
Secondly, how and which processes should I be monitoring from the management station? I cannot easily identify the main NCS processes in the output of ps -ef when logged into the shell as root.
    Thanks guys!

    Amihan_Zerrudo wrote:
1.) What is the cost of having the scope in a <jsp:useBean> tag set to 'session'? I am aware that there is a list of scopes like page, application, etc., and that if I use 'session' my variable will live for as long as that session is alive. (Did I get this right?)
You should look to the functional requirements rather than the costs. If the bean needs to be session scoped (e.g. to maintain the logged-in user), then make it so. If it just needs to be request scoped (e.g. single-page form data), then keep it request scoped.
2.) If the JSP page where I use that <useBean> is to be accessed hundreds of times a day, will it tax my server resources? Right now I am using the Sun Glassfish Server.
It will certainly consume resources; just supply enough CPU speed and memory to the server. You cannot expect a webserver running on a 500MHz Pentium with 256MB of memory to flawlessly serve 100 simultaneous users in the same second, but you may expect it to serve 100 users per 24 hours.
3.) Can you suggest best practice in memory management given the architecture I described above?
Just write code so that it doesn't unnecessarily eat memory; only allocate memory if your application needs to. You should let the hardware depend on the application requirements, not the application depend on the hardware specs.
4.) Also, I have implemented connection pooling in my architecture, but my application is to be used by thousands of clients every day. Can the Sun Glassfish Server take care of that, or will I have to purchase a powerful server?
Glassfish is just application server software; it is not server hardware. Your concerns are rather hardware related.

  • Static NAT refresh and best practice with inside and DMZ

I've been out of the firewall game for a while and now have been re-tasked with some configuration, both updating ASAs to 8.4 and making some new services available. So I've dug into refreshing my knowledge of NAT operation, and I have a question based on best practice and would like a sanity check.
    This is a very basic, I apologize in advance. I just need the cobwebs dusted off.
    The scenario is this: If I have an SQL server on an inside network that a DMZ host needs access to, is it best to present the inside (SQL server in this example) IP via static to the DMZ or the DMZ (SQL client in this example) with static to the inside?
I think the convention is to present the higher-security resource into the lower-security network. For example, when a service from the DMZ is made available to the outside/public, the real IP from the higher-security interface is mapped to the lower.
    So I would think the same would apply to the inside/DMZ, making 'static (inside,dmz)' the 'proper' method for the pre 8.3 and this for 8.3 and up:
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    Am I on the right track?

    Hello Rgnelson,
It is not related to the security level of the zone; instead, it is about what the behavior should be. What I mean is, for
    nat (inside,dmz) static yy.yy.yy.yy
- Any traffic hitting the translated address yy.yy.yy.yy on the dmz zone will be redirected to the host xx.xx.xx.xx on the inside interface.
- Traffic initiated from the real host xx.xx.xx.xx will be translated to yy.yy.yy.yy if the host accesses any resources on the DMZ interface.
If you reverse it to (dmz,inside), the behavior will be reversed as well; so if you need to translate an address from the DMZ interface going to the inside interface, you should use (dmz,inside).
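In other words, the reversed case would look like this (the same object NAT pattern as above, with placeholder addresses):
object network dmzClientIP
 host zz.zz.zz.zz
 nat (dmz,inside) static ww.ww.ww.ww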
For your case I would do what is common: since the server is in the INSIDE zone, you should configure
    object network insideSQLIP
    host xx.xx.xx.xx
    nat (inside,dmz) static yy.yy.yy.yy
    At this time, users from the DMZ zone will be able to access the server using the yy.yy.yy.yy IP Address.
    HTH
    AMatahen

  • Best Practice to Assign Network

    Hi Experts,
I have a question: what is the best practice for assigning networks, header assignment or activity assignment?
I have a requirement which asks for WBS-level cost and revenue posting during settlement. I followed the standard design of having a 1st-level WBS and assigned a network to it. I also have 2nd-level WBS elements linked to the 1st-level WBS, which hold the activities. Does this suffice for the settlement requirement?
    Thanks
    Rajesh

    Hi,
The question needs some clarification.
A header-assigned network is used in assembly processing, i.e. when the project is generated automatically from a sales order; in that case each sales order line item will have one network assigned to it. Alternatively, there is the activity-assigned network, which acts as an intermediary between the WBS and the activities.
Regarding the project profile: if you assign the network to the project definition, there will be only one network in the project structure; if you assign it to the WBS element, each WBS element will have one network.
You also mentioned settlement; that question needs more elaboration.
    regards
    sameer

  • Agent determination best practices

    Hi, workflow dev team!
    Another question about workflow best practices.
I'm building a workflow. The first step of each workflow, at least as I see it, is to determine the agents for all steps in the workflow and store them in a multiline workflow container element, and after that to use this container data when executing each step inside the workflow.
    The questions are:
1. Suppose I filled a workflow container with a list of all possible agents and their positions (in order to supply more user-friendly approve/decline notifications) for the different steps inside this workflow (username & position). As I understand, I can access a specific line of the multiline container with &AGENTS[&INDEX&]& inside the EXPRESSION field (not via ABAP code in a class method), but how can I access a specific column of the container? E.g. &AGENTS[1]& will return «USJOHN — Junior Manager»; is it possible to retrieve the username and position from the multiline element and pass them to the task as two separate container elements?
2. Is it possible to query a specific line of a multiline container element according to some logical condition inside the EXPRESSION field (not via ABAP code in a class method), or do I have to implement that logic inside an ABAP class method?
    And the last question, probably, less connected to the questions asked above.
    3. If I need to define agents for the task, what is the best approach to do that:
    — define the task as General Task and insert the relevant agents as multiline container element via EXPRESSION field;
— or play with Agent Assignment (try to implement the agent determination logic there) and not define the task as a general one.
    Thanks.

    Ronen already had great input, but here are some additional points:
- In general I would try to avoid a "let's get all the agents at the beginning of the workflow into a multiline container, and try to pick the correct ones for the individual steps" approach. Of course, if your aim is just to display the list of agents somewhere (task description etc.), this might be an OK approach.
- Rules are better than expressions for agent determination (at least in most cases). If the approvers change during the process, you can (as a WF admin) just click the "Execute agent rule again" function, and the system will find the correct new approver for the work item. Otherwise you need to manipulate the container element(s) (and first figure out who the approver is).
Answer for 3: You don't need to use only expressions/container elements; try to check whether you can use rules. In some rare cases you might consider using the task's agent assignment (the same place where you set the task as a general task). Let's say you want to send the work item to all users that have a certain authorization role. You can set the role as a "restricting entity" at the task level (the same place where you set the general task). This means that only the users who have the role are possible agents for the task. And now, if you leave the agent determination part of the workflow template empty (no expressions, no rules, etc.), you are done!
    Regards,
    Karri

  • Best practice: Developing report in Rich Client or InfoView?

    Hi Experts,
    I have a question on the best practice of developing webi reports.
    From what I know, a Webi report can be created in Rich Client and then exported to one or more folders. From InfoView, the report can also be changed, but the change is only local to the folder.
    To simplify development and maintenance, I believe both creation and change should be done solely in either Rich Client or InfoView. However, some features are only available in InfoView, not in Rich Client. One example is hyperlink for another Webi report. As a second step, I can add the extra features in InfoView after the export. However, if I change the report in Rich Client and re-export it, the extra features added via InfoView (e.g. report hyperlink) will be overwritten.
    As I'm new to BO, may I have some recommendations on the best practice for building reports? For instance:
    1) Only in Rich Client - no adding of feature via InfoView
    2) First in Rich Client, then in InfoView - extra features need to be added again after each export
    3) Only in InfoView -  all activities done in InfoView, no development in Rich Client
    4) Others?
    Any advice is much appreciated.
    Linda
    Edited by: Linda on May 26, 2009 4:28 AM

    Hi Ramaks, George and other experts,
    Thanks a lot for your replies.
    For my client, the developers will build most of the reports for regular users to view. However, some power users may also create their own reports to meet ad-hoc reporting requirements.
    It's quite unlikely for my client to develop reports based on Excel or CSV data files. And we need to use features such as hyperlink for documents (which is not available in Rich Client). Based on these considerations, I'm thinking of doing all development in InfoView (both developers and power users). Do you foresee any issue if I go for this approach?
    Thanks in advance.
    Linda

  • CAS array internal DNS IP address best practice

    Hi, Just a question about a best practice approach for DNS and CAS arrays.
I have an Exchange 2010 Org. I have two CAS/HUB servers and two MBX servers. My external DNS (mail.mycompany.biz) host record points to a public IP address which is NAT'd to the internal IP address of my NLB CAS cluster. I maintain a split-brain DNS. Should the internal DNS entry for mail.mycompany.biz also point to the public IP address, or should it point to the internal IP address of the NLB cluster?

A few comments:
The reason you have split DNS is to do exactly this sort of thing: inside users hit the inside IP and outside users hit the outside IP. You'll have to look at your overall network design to see if it makes sense for users to take the shortest route to the services, or if there is value in knowing all users simply take the same path.
You should not be using the same DNS name for your web services (e.g. OWA) as you are for your CAS array. This can cause very long connection delays on Outlook clients, not to mention overall confusion in your design. Many orgs will use something like "outlook.domain.com" for the Client Access Array and "mail.domain.com" for the web services. Only the latter of these two needs to be exposed to the internet.
Keep in mind, Exchange 2013 dramatically changes this guidance. There is no more CAS array, and the recommended design is to use dedicated namespaces for each web service.
    Mike Crowley | MVP
    My Blog --
    Planet Technologies

  • Best Practices for Export

    I have recently begun working with a few AIC-encoded home movie files in FCPX. My goal is to compress them using h.264 for viewing on computer screens. I had a few questions about the best practices for exporting these files, as I haven't worked with editing software in quite some time.
    1) Is it always recommended that I encode my video in the same resolution as its source? For example, some of my video was shot at 1440x1080, which I can only assume is anamorphic. I originally tried to export at 1920x1080 but then changed my mind as I assumed the 1440x1080 would just stretch naturally. Does this sound right?
2) FCPX is telling me that a few of my files are in 1080i. I'd like to encode them in 1080p as it tends to look better on computer screens. In FCPX, is it as simple as dragging my interlaced footage into a progressive timeline and then exporting? I've heard about checking the "de-interlace" box under clip settings and then doubling the frame rate, but that seemed to make my video look worse.
    3) I've heard that it might be better practice to export my projects as master files and then encode h.264 in Compressor. Is there any truth to this? Might it be better for the interlaced to progressive conversion as well?
    Any assistance is greatly appreciated.

1) Yes. 1440x1080 will display as 1920x1080.
2) Put everything in a 1080p project.
3) Compressor will give you more options for control. The H.264 export from FCPX uses a very high data rate and makes large files.

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
We want to be able to develop in the test environment, but since it is an RMAN-duplicated database, every night the runtime-only APEX install will overlay the development installation, and the production versions of the apps will overlay the development versions. However, we still want to have up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
1.) Find a way to clone the database (RMAN or something else) that will leave the existing APEX environment intact. If that is doable, we can modify our nightly refresh procedure to refresh the data but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
To clarify, you do have the ability to export/import, happily, at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in situations where changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development, and you need to get the change for page 3 in to resolve a critical production issue. How do you do this without also sending pages 2, 5 and 6 in their current state, if you have to move the application all at once? The point is that you absolutely are going to need version control at the page level, not at the application level.
Moreover, the only supported way of exporting items is via the GUI. While practically everyone doing serious APEX development has moved on to either PL/SQL or utility hacks, Oracle still has not released a supported method for doing this. I have no idea why; maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
As to which version-control tool you use on the back end, the short answer is that it really doesn't matter; as far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular, but if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically commit exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository and import into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, qa, integration, staging, production, etc. A sketch of that promotion flow appears below.
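For instance (a sketch only; the repository paths and application ID are made up, and f101.sql follows the usual APEX export naming):
# nightly job: commit the automated export of application 101 into the dev repository
cd /repos/apex-dev
git add f101.sql
git commit -m "Nightly export of application 101"
# at promotion time: copy the approved export into the production repository and commit
cp /repos/apex-dev/f101.sql /repos/apex-prod/f101.sql
cd /repos/apex-prod
git add f101.sql
git commit -m "Promote application 101 to production"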
    -Joe
    Edited by: Joe Upshaw on Feb 5, 2013 10:31 AM
