Architecture question, global VDI deployment

I have an architecture question regarding the use of VDI in a global organization.
We have a pilot VDI Core with a remote MySQL database and 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would connect to desktops hosted at their physical location. What we don't want is to have to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from a single pane of glass, with multiple Desktop Provider groups representing the geographical locations.
Is it possible to just set up additional VDI Secondaries in the remote locations? What are the pros and cons of that?
Thanks

Yes, simply bind a dedicated interface (IP address) for each domain on your web server, one per domain.
Ensure each web server is listening on its own interface and it will work fine.
"Paul S." <[email protected]> wrote in message
news:407c68a1$[email protected]..
>
Hi,
We want to host several applications which will be accessed as:
www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
Is it possible to have a separate WebLogic domain for each application, all listening on ports 80 and 443?
Thanks,
Paul
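The suggestion above - one interface per domain, all listening on the same port - can be sketched with plain sockets (Python for brevity; 127.0.0.1 and 127.0.0.2 stand in for the server's per-domain IPs, and the second loopback address assumes a Linux-style 127.0.0.0/8 loopback):

```python
import socket

def bind_listener(ip, port):
    """Bind a TCP listening socket to one specific local address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((ip, port))  # binding to a concrete IP claims only that interface
    s.listen(5)
    return s

# Two "domains", each on its own interface address but the same port.
# On Linux the whole 127.0.0.0/8 block is routed to loopback, so both
# binds succeed even though the port number is identical.
first = bind_listener("127.0.0.1", 0)      # port 0 = pick any free port
port = first.getsockname()[1]
second = bind_listener("127.0.0.2", port)  # same port, different IP
print(first.getsockname(), second.getsockname())
first.close()
second.close()
```

The same principle is what lets two WebLogic domains each own port 80, as long as each listen address is a distinct IP rather than 0.0.0.0.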

Similar Messages

  • Query related to UPN Suffix in Hierarchical domain architecture in Active Directory deployment

    This is a query about UPN suffixes in a hierarchical domain architecture in an Active Directory deployment.
    We use an LDAP query (filter uPNSuffixes=* against the parent domain DN) to retrieve the UPN suffixes configured in the AD domain. This returns the UPN suffixes configured for the entire domain tree (the UPN suffixes of the parent domain and all the child domains) in the hierarchy. The AD Domains and Trusts configuration lists all the UPN suffixes as part of the DNS root domain.
    For one of our implementations, we need to distinguish between the UPN suffixes belonging to the parent and child domains and map each UPN suffix to its respective domain in the hierarchy. As the UPN suffixes are stored as part of the root domain in the AD Domains and Trusts configuration, it is not clear how to retrieve the information specific to each domain in the hierarchy.
    It would be helpful if you could provide pointers on how to obtain the above mapping for the UPN suffixes in a hierarchical domain setup.
    Thank you,
    Durgesh

    By default, you can use only the domain name as the UPN suffix for user accounts you create within the domain. It is possible to add extra UPN suffixes, but these are added at the forest level and are not specific to a domain.
    Ahmed MALEK
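    The point above can be sketched in code (Python here for brevity; the domain and suffix names are hypothetical). Each domain's DNS name acts as its implicit default suffix, while the extra suffixes live once at the forest level, so the directory only supports mapping them to every domain at once:

```python
def upn_suffix_map(domain_dns_names, forest_suffixes):
    """Map each UPN suffix to the domain(s) it could belong to.

    A domain's own DNS name is its implicit default suffix; extra
    suffixes are stored once at the forest level, so the best the
    directory itself gives you is "valid in every domain".
    """
    mapping = {dns: [dns] for dns in domain_dns_names}
    for suffix in forest_suffixes:
        mapping[suffix] = list(domain_dns_names)
    return mapping

# Hypothetical forest: a parent domain, two children, two extra suffixes.
domains = ["corp.example.com", "emea.corp.example.com", "apac.corp.example.com"]
extras = ["example.com", "mail.example.com"]
print(upn_suffix_map(domains, extras))
```

    Any finer-grained suffix-to-domain mapping therefore has to come from your own convention, not from the directory.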

  • Oracle VM Server for SPARC - network multipathing architecture question

    This is a general architecture question about how to best setup network multipathing
    I am reading the "Oracle VM Server for SPARC 2.2 Administration Guide" but I can't find what I am looking for.
    From reading the document it appears it is possible to:
    (a) Configure IPMP in the Service Domain (pg. 155)
    - This protects against link-level failure but won't protect against the failure of an entire Service LDOM?
    (b) Configure IPMP in the Guest Domain (pg. 154)
    - This will protect against Service LDOM failure but moves the complexity to the Guest Domain
    - This means that there are two (2) VNICs in the guest though?
    In AIX, "Shared Ethernet Adapter (SEA) Failover" presents a single NIC to the guest but can tolerate failure of a single VIOS (~Service LDOM) as well as link-level failure in each VIO Server.
    https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/shared_ethernet_adapter_sea_failover_with_load_balancing198?lang=en
    Is there not a way to do something similar in Oracle VM Server for SPARC that provides the following:
    (1) Two (2) Service Domains
    (2) Network Redundancy within the Service Domain
    (3) Service Domain Redundancy
    (4) Simplify the Guest Domain (ie single virtual NIC) with no IPMP in the Guest
    Virtual disk multipathing appears to work as one would expect (at least according to the documentation, pg. 120). I don't need to set up mpxio in the guest, so I'm not sure why I would need to set up IPMP in the guest.
    Edited by: 905243 on Aug 23, 2012 1:27 PM

    Hi,
    there's link-based and probe-based IPMP. We use link-based IPMP (in the primary domain and in the guest LDOMs).
    For the guest LDOMs you have to set the phys-state linkprop on the vnets if you want to use link-based IPMP:
    ldm set-vnet linkprop=phys-state vnetX ldom-name
    If you want to use IPMP with vsw interfaces in the primary domain, you have to set the phys-state linkprop in the vswitch:
    ldm set-vswitch linkprop=phys-state net-dev=<phys_iface_e.g._igb0> <vswitch-name>
    Bye,
    Alexander.

  • Running MII on a Wintel virtual environment + hybrid architecture questions

    Hi, I have two MII Technical Architecture questions (MII 12.0.4).
    Question 1: Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
    Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris. Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants which require more horsepower. While we have a preference for Solaris UNIX-based technologies in our main data center, where our central MII instance will run, in our plants the preference seems to be for Wintel technologies. Does anybody know of any caveats or watch-outs around running MII in a hybrid architecture, with a Solaris UNIX-based head and the legs running on Wintel?
    Thanks for your help
    Michel

    This is a great source for the ins/outs of SAP Virtualization:  https://www.sdn.sap.com/irj/sdn/virtualization

  • Architectural question

    A little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...

    > Little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method can then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    There actually is such a method: if you look at the FacesBean base class, there is a beforeRenderResponse() method that is called before the corresponding page is actually rendered.
    > Reason I bring up this question is that I have to do a query in the constructor in a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...
    This is definitely a valid concern. In Creator releases prior to Update 6 of the Reef release, however, there were use cases in which the beforeRenderResponse method would not actually get called (the most important one being when you navigated to a new page, which is a VERY common use case :-).
    If you are using Update 6 or later, as a side effect of other bug fixes that were included, the beforeRenderResponse method is reliably called every time, so you can put your pre-rendering logic in this method instead of in the constructor. However, there is still a wrinkle to be aware of -- if you navigate from one page to another, the beforeRenderResponse of both the "from" and "to" pages will be executed. You will need to add some conditional logic to ensure that you only perform your setup work if this is the page that is actually going to be rendered (hint: call FacesContext.getCurrentInstance().getViewRoot().getViewId() to get the context-relative path to the page that will actually be displayed).
    One might argue, of course, that this is the sort of detail that an application should not need to worry about, and one would be absolutely correct. This usability issue will be dealt with in an upcoming Creator release.
    Craig McClanahan
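    The guard described above can be sketched outside JSF (Python stands in for the backing bean; in real code current_view_id would come from FacesContext.getCurrentInstance().getViewRoot().getViewId()):

```python
def before_render(current_view_id, my_view_id, setup):
    """Run page-setup work only for the page actually being rendered.

    During navigation both the "from" and "to" pages get a pre-render
    callback, so each page guards on its own view id.
    """
    if current_view_id == my_view_id:
        setup()
        return True
    return False

ran = []
before_render("/grades.jsp", "/grades.jsp", lambda: ran.append("query"))
before_render("/grades.jsp", "/home.jsp", lambda: ran.append("query"))
print(ran)  # the expensive query ran once, only for the rendered page
```

    The view-id names here are made up; the point is only that the expensive work is conditional rather than living in the constructor.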

  • BPEL/ESB - Architecture question

    Folks,
    I would like to ask a simple architecture question.
    We have to invoke partner web services which are rpc/encoded, from SOA Suite 10.1.3.3. Here the role of SOA Suite is simply to facilitate communication between an internal application and the partner services; as a result, SOA Suite doesn't have any processing logic. The flow is simply:
    1) The internal application invokes the SOA Suite service (a wrapper around the partner service) and the result is processed.
    2) SOA Suite translates the incoming message, communicates with the partner service, and returns the response to the internal application.
    Please note that at this point there is no plan to move the processing logic from the internal application to SOA Suite. Based on the above details, I would like to get some recommendations on which technology/solution from SOA Suite is most efficient to facilitate this communication.
    Thanks in advance,
    Ranjith

    You can look at the design pattern called Channel Adapter.
    Here is how you should design it: the processing logic remains in the application; however, you design and build a channel adapter as a BPEL process. The channel adapter transforms your input into the web-service-specific format and invokes the endpoint. You need this channel adapter if your internal application doesn't have the capability to make web service calls.
    Hope this helps.
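    A minimal sketch of that channel-adapter shape (Python; the field names and the stub endpoint are made up for illustration - the real adapter would be a BPEL process making the rpc/encoded call):

```python
def transform(internal_msg):
    """Translate internal field names into the partner's contract."""
    return {"CustomerRef": internal_msg["customer_id"],
            "OrderLines": internal_msg["items"]}

def channel_adapter(internal_msg, invoke_endpoint):
    """No processing logic lives here: translate, invoke, translate back."""
    response = invoke_endpoint(transform(internal_msg))
    return {"status": response["Ack"]}

def fake_partner_service(payload):
    """Stand-in for the partner's rpc/encoded web service."""
    return {"Ack": "OK" if payload["OrderLines"] else "EMPTY"}

print(channel_adapter({"customer_id": 7, "items": ["widget"]},
                      fake_partner_service))
```

    The adapter stays thin on purpose: if processing logic starts creeping into it, it has stopped being a channel adapter.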

  • Architecture Question...brain teasing !

    Hi,
    I have an architecture question in Grid Control. So far Oracle Support hasn't been able to figure it out.
    I have two management servers, M1 and M2;
    two VIPs (virtual IPs), V1 and V2;
    and two agents, A1 and A2.
    The scenario:
    M1 ----> M2
     |        |
     V1      V2
     |        |
     A1      A2
    The repository at M1 is configured as primary and sends archive logs to M2. On failover, I have it set up to make M2 the primary repository, and all works well!
    Under normal conditions, A1 talks to M1 through V1 and A2 talks to M2 through V2. No problem so far!
    If M1 dies, V1 forwards A1 to M2; or if M2 dies, V2 forwards A2 to M1. How would this work?
    I think (haven't tried it yet) that if I configure the OMSes with the same username and registration passwords, copy all the wallets from M1 to M2 and from A1 to A2, and just change V1 to V2, it might work. Would this work?
    Please advise!

    An SLB is not an option for us here!
    Can we just repoint A1 to M2 using a DNS CNAME change?

  • Inheritance architecture question

    Hello,
    I have an architecture question.
    We have different types of users in our system: normal users, company "users", and some others.
    In theory they all extend the normal user, but I've read a lot about performance issues with join-based inheritance mapping.
    How would you suggest designing this?
    Expected are around 15k normal users, a few hundred company users, and a few hundred of each other user type.
    Inheritance mapping? Which type?
    No inheritance, appending all attributes to one class (and leaving those not used by the user type null)?
    Other ways?
    thanks
    Dirk

    Sorry dude, but there is only one way you are going to answer your question: research it. And that means try it out. Create a simple prototype where you have your inheritance structure and generate 15k of user data in it - then see what the performance is like with some simple test cases. Your prototype could be promoted to be the basis of the end product if the results are satisfying. If you know what you are doing this should only be a couple of hours of work - very much worth your time, because it is going to potentially save you many refactoring hours later on.
    You may also want to experiment with different persistence providers by the way (Hibernate, Toplink, Eclipselink, etc.) - each have their own way to implement the same spec, it may well be that one is more optimal than the other for your specific problem domain.
    Remember: you are looking for a solution where the performance is acceptable - don't waste your time trying to find the solution that has the BEST performance.
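    In the spirit of the prototype suggested above, here is a tiny sketch using Python's bundled sqlite3 as a stand-in persistence layer (table and column names are made up) to contrast the two mappings being weighed - joined-table inheritance versus one wide table:

```python
import sqlite3

# Joined-table inheritance: shared columns live once in users, subtype
# columns in a second table that every subtype load must JOIN against.
db = sqlite3.connect(":memory:")
db.executescript("""
  CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
  CREATE TABLE company_users (id INTEGER PRIMARY KEY REFERENCES users(id),
                              company TEXT);
  -- single-table alternative: every subtype in one wide table,
  -- columns unused by a given user type left NULL
  CREATE TABLE users_flat (id INTEGER PRIMARY KEY, name TEXT,
                           type TEXT, company TEXT);
""")
db.execute("INSERT INTO users VALUES (1, 'dirk')")
db.execute("INSERT INTO company_users VALUES (1, 'acme')")
db.execute("INSERT INTO users_flat VALUES (1, 'dirk', 'company', 'acme')")

# Loading one company user: the joined mapping pays for a JOIN every
# time, the single table does not -- same data, different query shape.
joined = db.execute(
    "SELECT u.name, c.company FROM users u "
    "JOIN company_users c ON c.id = u.id WHERE u.id = 1").fetchone()
flat = db.execute(
    "SELECT name, company FROM users_flat WHERE id = 1").fetchone()
print(joined, flat)
```

    Scaling this up to 15k generated rows and your real query mix is exactly the couple-of-hours prototype the answer recommends.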

  • Deploying 4260 into Architecture Question

    Hello,
    I have been tasked with updating/evaluating/integrating a Cisco 4260 into an inline state on our current network. Currently it is in promiscuous mode spanning traffic, but no profiles or device management are set to actively block traffic. Inline, however, are two existing ASA 5520s in a redundant active/standby pair. My question is: is it possible to bring 1 IPS into the equation and have it cabled inline to both ASAs? From my understanding there are 6 interfaces on the Cisco 4260, one being the management interface, and for inline mode to work the interfaces have to work as interface pairs. This leads me to believe that either one or the other ASA can be cabled inline, but not both at the same time, based on having only 1 IPS. Is this statement correct? If not, please provide details on potential cabling of this device in this scenario.
    Thank you,
    Charles

    Hi Charles,
    You may connect the IPS 4260 to both ASAs without a problem. As the ASAs are running in an active/standby failover pair, traffic will only pass through one ASA at a time.
    You may configure interface pairs or inline VLAN pairs in order to save space.
    http://tools.cisco.com/squish/f7C75
    http://tools.cisco.com/squish/8cC04
    I hope it helps.
    regards,
    Itzcoatl Espinosa

  • Enterprise Manager 11g Sybase Plugin architecture question

    Hi,
    I have successfully installed and configured Grid 11g on Red Hat Enterprise 5.5, and deployed and configured agents to Solaris and Linux environments... so far so good.
    However, we're going to test the Sybase ASE plugin to monitor ASE with EM. My question is a simple one and I think I know the answer, but I'd like to see what you guys think of this.
    We'd like to go with a single centralised agent rather than one agent/plugin per Sybase machine, at least for the tests. No doubt there may be cons to this approach (the first one clearly being a single point of failure - well, we can live with this for now). My instinct is to install the Oracle agent/plugin on a machine other than the grid machines themselves; however, the question arose - why not install the ASE plugin on the grid infrastructure machines' own agents? Pros and cons?
    The architecture we have currently: a repository database configured to fail over between 2 Red Hat boxes; 2 OMSes, 1 running on each of these boxes, configured behind an SLB using an NFS-based shared upload directory; and one 'physical agent' running on each box. Simple for now. But I have the feeling that, given the Sybase servers will communicate or be interrogated via the Sybase plugin directly with the grid infrastructure machines, this places load etc. on them and in case of problems might interfere with the healthy running of the grid. Or am I being over-cautious?
    John
    Edited by: user1746618 on 12-Jan-2011 09:01

    Well, I have followed the common-sense approach and avoided the potential problem by installing on a remote server and configuring the plugin on that.
    It seems to be working fine and keeps the install base clean.

  • Can JWS do this...? Architecture question

    Hi,
    I'm designing the architecture for a school board that is moving a COBOL system to a Java App Server system. Within the system there are approx 180 'Modules' - each module being a set of screens that allow a user to accomplish a task. For example, the Teacher Grading module allows a teacher to access their student records and maintain the student's grades.
    I'm looking to use JWS for deployment of the front end but am unsure if JWS will support the framework I want to put in place.
    From a UI perspective, as there are so many modules in the system I want to design the architecture in a way that allows each module to plug in to the existing framework.
    The front end would consist of a container application that would house each module in a sort of tabbed view. As each module is added to the system so the user would see a new tab in the UI that housed the new module (depending on whether the users had permissions to access the module).
    So the front end container would display the modules that the user can access, adding new ones as they are defined.
    Question:
    Is the server's JNLP file for the application static? If it were amended to include new jars, would this cause problems on the client side post initial installation, or would it take it in its stride and just download the new jars as required?
    What I want to do:
    I was hoping that I could just amend the JNLP file on the server to include the new Module (jar file); the Container app could then just get a list of class names from the App server that were applicable to the User. The Container app could then instantiate the class objects, and JWS would automatically download any jars that were missing (using lazy loading) and subsequently add the new Module (jar) to its list of versioned jars to update when required.
    Question:
    Is this possible to do using JWS?
    Another possibility:
    1) Main application Container gets installed using JWS
    2) User starts app and signs on
    3) Container talks to App Server and determines which Modules the User has permissions to use
    4) Container downloads missing or new Modules - jar files (maybe using javax.jnlp.DownloadService?)
    5) Want JWS to subsequently evaluate downloaded Modules (and main app) for any updates - though this would happen at step 2.
    Question:
    Would an individual Module's jar file/s need to be referenced in the JNLP file to download it using javax.jnlp.DownloadService? I am thinking it would.
    Things to note: it is not possible to define the app with all 180 Modules embedded, as it may take years to recode all 180 COBOL Modules in Java, and the system is being implemented iteratively. Also, few, if any, users will have access to all 180 Modules. Users are part of Groups (Teachers, Superintendents, Substitutes, etc.) and each Group only has access to a certain set of screens (Modules).
    Any advice would be appreciated, as I would like to be aware of any potential problems before I define the architecture.
    cheers
    Ray

    You are completely free to dynamically generate the JNLP file if you wish via a regular Java servlet. In fact Sun has available a simple servlet called JnlpDownloadServlet, which you'll find in the jnlp-servlet.jar file in your JDK installation. So you could create the servlet and pass it arguments giving the user id, and it could generate the JNLP with the modules that this user has access to. You would then probably also generate arguments passed to the main() function which would tell your app what classes (modules) to load.
    The disadvantage of this approach is that your server has to keep track of what modules this user can use, and he would probably have to use the web site and another servlet to configure it. (Assuming the user has any control over what modules he can access).
    However there may be a better way for you to proceed. If you create a static JNLP file that contains ALL the modules but with the download="lazy" option, then all modules will be in the JNLP but not downloaded unless necessary.
    Then you can explicitly download the bits you want to use; the DownloadService class will let you do this: http://java.sun.com/products/javawebstart/docs/javadoc/index.html
    You will need to have all the modules listed in the JNLP file. Whenever a user starts up the app it will refresh their copy of the JNLP file. Pass the list of available modules to the main() function within the JNLP file so that the app knows they are there.
    If necessary have some arguments that indicate permissions on the modules within the JNLP file that are passed to main(). (e.g. --module=mymod1.jar,perm=teachers,students )
    You may want to create a ClassLoader that accesses each jar file directly (passing in the URL, once it is downloaded via DownloadService) and load information directly from each jar, e.g. have an info.properties file in the "root directory" of every jar file that explains what the entry point or points are for this module. That avoids having to pass even more info in the JNLP file (e.g. --entryPoint=com.foo.MyModule1), or else having some guessable naming scheme for classes. That is a good thing, at least for entry points, so that the module is completely self-contained, describing its own entry points. That is an approach I've used before. But you wouldn't use it for permissions, because then you would have to download a module before you could tell whether you need it.
    Now the application itself can manage its own modules and resources using whatever criteria you desire. (You could even give the user some control). If all your modules have a standard interface for launching them, you can dynamically load those classes on demand. Use DownloadService to download the jar for that module, and then use Class.forName() to access the entry point for the module. Use the java.util.prefs.Preferences class if you need to keep track of anything on the client side about modules.
    If it were me, I'd have the code be able to work without webstart as well which is easier for local debugging. That shouldn't be a problem.
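    The self-describing-module idea - each jar carrying an info.properties that names its own entry points - can be sketched like this (Python for brevity; the property names are assumptions for the sketch, not part of any JWS spec):

```python
def parse_entry_points(info_properties_text):
    """Read a module's entry-point classes from its own metadata.

    Mimics an info.properties file at the root of each module jar; the
    "entryPoints" key name is an assumption made for this sketch.
    """
    props = {}
    for line in info_properties_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    # comma-separated list of fully qualified class names
    return [c.strip() for c in props.get("entryPoints", "").split(",") if c.strip()]

sample = """
# shipped inside teacher-grading.jar (hypothetical module)
entryPoints = com.school.grading.GradingModule
requiredGroup = Teachers
"""
print(parse_entry_points(sample))
```

    In the Java version the launcher would read this entry from the downloaded jar and hand the class name to Class.forName(), so the JNLP file never needs to know entry points at all.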

  • General architecture questions

    Hello,
    I am developing a web application and could use some architectural advice. I've done lots of reading already, but could use some direction from those who have more experience in multi-tier development and administration than I. You'll find my proposed solution listed below and then I have some questions at the bottom. I think my architecture is fairly standard and simple to understand--I probably wrote more than necessary for you to understand it. I'd really appreciate some feedback and practical insights. Here is a description of the system:
    Presentation Layer
    So far, the presentation tier consists of an Apache Tomcat Server to run Servlets and generate one HTML page. The HTML page contains an embedded MDI style Applet with inner frames, etc.; hence, the solution is Applet-centric rather than HTML-centric. The low volume of HTML is why I decided against JSPs for now.
    Business Tier
    I am planning to use the J2EE 1.4 Application Server that is included with the J2EE distribution. All database transactions would be handled by Entity Beans and for computations I'll use Session Beans. The most resource intensive computational process will be a linear optimization program that can compute large matrices.
    Enterprise Tier
    I'll probably use MySQL, although we have an Oracle 8 database at our disposal. A disadvantage of MySQL is that it won't have triggers until the next release, but maybe I can find a work-around for now. The advantage is that an eventual migration to Linux will be easier on the wallet.
    Additional Information
    We plan to use the system within our company at first, with probably 5 or fewer simultaneous users. Our field engineer will also have access from his laptop. That means he'll download the Applet-embedded HTML page from our server via the Internet. Once loaded, all navigation will be Applet-centered. Data transfer from the Applet to the Servlet will be via standard HTTP.
    Eventually we would like to give access of our system to a client firm. In other words, we would be acting as an application service provider and they would access our application via the Internet. The Applet-embedded HTML page would load onto their system. The volume would be low--5 simultaneous users max. All users are well-defined in advance. Again, low volume HTML generation--Applet-centric.
    My Questions
    1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7? Or should I forget the application server concept completely?
    2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application--perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called "DMZ" with Tomcat? Should it be on the same physical server machine as Tomcat?
    5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high-volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate it if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    Regards,
    Itchy

    Hi Itchy,
    Sounds like a great problem. You did an excellent job of describing it, too. A refreshing change.
    Here are my opinions on your questions:
    > 1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7? Or should I forget the application server concept completely?
    It always depends on your wallet, of course. I haven't used the Sun app server. My earlier impression was that it wasn't quite up to production grade, but that was a while ago. You can always consider JBoss, another free J2EE app server. It's gotten a lot of traction in the marketplace.
    > 2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application--perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    People sometimes forget that you can do J2EE with a servlet/JSP engine, JDBC, and POJOs (Plain Old Java Objects). You can use an object/relational mapping layer like Hibernate to persist objects without having to write JDBC code yourself. It allows transactions if you need them. I think it can be a good alternative.
    The advantage, of course, is that all those POJOs are working objects. Now you have your choice as to how to package and deploy them. RMI? EJB? Servlet? Just have the container instantiate one of your working POJOs and delegate to it. You can defer the deployment choice until later. Or do all of them at once. Your call.
    > 3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    I think so. A J2EE app server has both an HTTP server and a servlet/JSP engine built in. It might even be Tomcat in this case, because it's Sun's reference implementation.
    > 4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called "DMZ" with Tomcat? Should it be on the same physical server machine as Tomcat?
    I'd have Tomcat running in the DMZ, authenticating users, and forwarding requests to the J2EE app server running inside the second firewall. They should be on separate servers.
    > 5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high-volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    Tomcat's performance isn't so bad, so it should be able to handle the load.
    The bigger consideration is that the DMZ Tomcat has to listen on port 80 in order to be seen from the outside without opening another hole in your outer firewall. If you piggyback it on top of Apache you can just have those requests forwarded. If you give port 80 to the Tomcat listener, nothing else will be able to get it.
    > So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate it if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    > Regards,
    > Itchy
    There are smarter folks than me on this forum. Perhaps they'll weigh in. Looks to me like you're doing a pretty good job, Itchy. - MOD
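    The port-80 point ("nothing else will be able to get it") is plain TCP bind semantics, which is easy to demonstrate (Python, using an ephemeral port so the sketch needs no privileges):

```python
import socket

tomcat = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tomcat.bind(("127.0.0.1", 0))   # stand-in for the Tomcat listener on port 80
tomcat.listen(5)
ip, port = tomcat.getsockname()

apache = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    apache.bind((ip, port))     # second listener on the exact same ip:port
    conflict = False
except OSError:
    conflict = True             # EADDRINUSE: the first listener owns it
finally:
    apache.close()
    tomcat.close()

print("second bind failed:", conflict)
```

    Hence the two usual choices: give Tomcat port 80 outright, or let Apache own it and forward requests to Tomcat on another port.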

  • Questions on Patch Deployment - From older post.

    Almost a year ago I had a post with questions on the patch scan process.
    https://forums.novell.com/novell-pro...s-updates.html
    I have been reviewing my patch process again due to student laptops getting re-imaged this summer. I am hoping I can get some additional information based on the replies from that post.
    1. It was stated that monthly patch bundles were created and deployed. I am unsure how that is best accomplished. If I create an all-Microsoft (Windows 7, for example) patch bundle for each month, yet the workstations it is deployed to may not require a given patch, would this not cause the bundle to fail? If it just fails on that section, will the remaining patches continue to deploy?
    What is the best way to deploy a monthly patch bundle? In the past I would create a patch bundle through the Patch Management area for Windows 7 but assign it to only a single workstation, and then go back and assign it to the Windows 7 group as a "run on ref" option. Is it better to assign the patch bundle to all non-patched devices? If this is done, will a system that is re-imaged and no longer has the patch, or a new system created after the bundle was created, be automatically assigned said bundle?
    Any other good strategies for patching systems? I create custom bundles for Adobe, Java, and QuickTime to ensure I control how they are deployed. Java seems to be one that works better when older versions are not installed. This method seems to be working well for those products. It is my MS Windows updates that are way off the mark. I have most of my systems with 60 to 80 patches reported ready. No matter how many times I deploy the patches, they never seem to report as patched on the devices (even though the bundle reports back as successful). I am getting ready to start an SR, since I think this is an issue with the server given that so many of my systems are doing this. Good deployment procedures would be nice, since I really hate to mess with my universal Windows image when it is working so well; recreating it just to patch it with the latest Windows updates would be a pain.
    Thanks
    Richard


  • Newbie questions on Admin, Deployment & Answers

    Hi Gurus,
    Sorry to be raising questions that may have been asked before. I tried to search through the net/forums but couldn't find anything useful. I'd really appreciate any help/directions/links to the answers I seek.
    Proposed Development and Production environment
    - OBIEE Client Tools - Windows based
    - OBIEE Apps Server - Unix (HP-UX)
    - DB Server - Oracle 10g
    Based on our Dev R&D, we have one physical layer, 5 business models and 5 corresponding presentation layers.
    Currently we are facing some issues with the Unix setup, so we are proceeding with our development R&D on Windows while reinstalling the Unix box.
    Administration
    A1. Currently our Data Source Connection Type is ODBC 3.5. As we plan to have Dev and Prod on Unix, what is the best Connection Type to use?
    A2. Is there any configuration when we use Unix?
    Deployment
    B1. Since we are performing our Dev R&D on Windows, is it possible to move all our work from the current Windows environment into our Dev Unix environment?
    From the forum, I found there are 2 methods:
    B2. - copy the directories (C:\OracleBI\server\Repository and C:\OracleBIData\web\catalog) over for Windows. Is it possible to do the same from Windows to Unix?
    B3. - using Catalog Manager. How do I use this? (Couldn't find any documentation for it.)
    Answers
    C1. The total number of records is 9, but when we pull the data out in Answers the records are grouped, returning only 8 rows. Is there a config setting to return all the rows?
    I'd also appreciate it if you experts could share any tips/common practices for easier development/deployment, e.g. creating aliases instead of using the imported table directly.
    Thanks
    B

    A1. It's better to use Oracle native drivers connection for an Oracle database (OCI).
    A2. See [this thread|http://forums.oracle.com/forums/thread.jspa?threadID=845201].
    B1. Yes, no issues.
    B2. Yes, no problems. Just use FTP.
    B3. Open two instances of Catalog Manager (one for your Windows Server and one for your Unix Server) and then just do Copy/Paste between them. Although it's simpler and better to copy the whole catalog using FTP as you mentioned in B2.
    C1. Grouping is done by default because you have aggregable measures. Include the field that will break the records if you want to see them all. You can include a field in the criteria tab and hide it if you don't want to see it. This will force OBIEE to select it and show all records.
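    The grouping behaviour described in C1 can be illustrated outside OBIEE with a toy dataset (the column names and values below are invented for illustration): when only shared attributes plus an aggregable measure are selected, duplicate attribute combinations collapse into one row, and adding a unique field to the criteria restores the full row count.

    ```python
    # Toy illustration of C1: 9 source records, two of which share the
    # same (region, product) attribute values.
    from collections import defaultdict

    records = [
        ("East", "A", 10), ("East", "A", 5), ("East", "B", 7),
        ("West", "A", 3), ("West", "B", 8), ("West", "C", 2),
        ("North", "A", 4), ("North", "B", 6), ("South", "A", 9),
    ]

    # Selecting only (region, product) + SUM(measure): duplicates collapse.
    grouped = defaultdict(int)
    for region, product, qty in records:
        grouped[(region, product)] += qty
    print(len(grouped))   # 8 rows, not 9

    # Including a unique field (here a row index) breaks the grouping,
    # so all 9 records come back; in Answers that field can be hidden.
    with_id = {(i, r, p): q for i, (r, p, q) in enumerate(records)}
    print(len(with_id))   # 9 rows
    ```

    This is exactly why the suggested fix of adding (and hiding) a row-identifying field forces all records to be returned.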

  • Questions on SEFAutil deployment in Lync 2013

    Hello All,
    We have the following environment:
    Environment
    Background: 4 geographically dispersed sites, 15,000 users per site, 2 data centres per site, and an EE Lync 2013 FE pool with 6-8 FE servers per data centre.
    Questions
    Does SEFAUtil need a dedicated server for large deployment like ours ? 75,000 users worldwide. 
    Recommendation of dedicated server was on Lync 2010. With Lync 2013 official stand is that you can run it on any FE. 
    But considering the user base, what is official Microsoft recommendation ? 
    Based on the above, if it can be installed on FE's is it best to install it on multiple Front end servers or all FE servers? 
    I'd assume all FE pools are created as application pools and SEFAutil is installed on all FE servers, as you can use any of those servers to run the util as long as the server is part of the FE pool defined in the application pool. 
    What is the recommendation for SEFAUtil for a deployment with multiple geographically dispersed sites ? 
    Does it need to be installed on all sites ? 
    What is the official recommendation ? 
    Different ports for all the application pools if we are creating individual application pool for all FE Pools ? 
    Or Can same port be used for all application pools ? 
    What additional load does SEFAUtil create on the FE servers ? Depending on answer to #1. 
    Please advise. MANY THANKS.

    Does SEFAUtil need a dedicated server for large deployment like ours ? 75,000 users worldwide. 
    Yes
    The SEFAUtil tool can be run only on a computer that is part of a Trusted Application Pool, and UCMA 3.0 must be installed on that computer. To run the tool, a new Trusted Application with the SEFAUtil application ID must be created on that pool.
    Based on the above, if it can be installed on FE's is it best to install it on multiple Front end servers or all FE servers? 
    Wouldn't recommend that
    What is the recommendation for SEFAUtil for a deployment with multiple geographically dispersed sites ? 
    As long as the user is part of a Lync pool it will work, based on the trusted application pool setting.
    Different ports for all the application pools if we are creating individual application pool for all FE Pools ? 
    NO
    What additional load does SEFAUtil create on the FE servers? (Depending on the answer to #1.)
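    For reference, the trusted-application registration mentioned above is done in the Lync Server Management Shell along these lines (a sketch only: the pool FQDNs, site ID, and port below are placeholders to adapt to your topology):

    ```powershell
    # Create a trusted application pool for SEFAUtil and register the
    # application ID against it, then publish the topology change.
    New-CsTrustedApplicationPool -Identity sefautil.contoso.com `
        -Registrar fepool01.contoso.com -Site 1 `
        -RequiresReplication $false -ThrottleAsServer $true `
        -TreatAsAuthenticated $true
    New-CsTrustedApplication -ApplicationId sefautil `
        -TrustedApplicationPoolFqdn sefautil.contoso.com -Port 7489
    Enable-CsTopology
    ```

    The port only needs to be unique within each trusted application pool, which is why individual application pools per FE pool do not require different ports.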
