EAS/Architecture Question

Doing some research regarding EAS version 11. I've read on some other threads that Shared Services is not necessary for EAS. However, some other threads seem to indicate it is required for installation. Can anyone help clear this up for me? Thanks.
Drew Rushford

No, it is not ticked. I don't know if it makes a difference, but this environment was set up with EAS in Shared Services security mode. We were having another issue and thought it might be resolved if we went back to Native security, so we performed the necessary procedures to do so; that's how it got into Native security mode. Unfortunately, it didn't resolve our problem, but it's definitely in Native security mode now.
Sabrina

Similar Messages

  • Oracle VM Server for SPARC - network multipathing architecture question

    This is a general architecture question about how best to set up network multipathing.
    I am reading the "Oracle VM Server for SPARC 2.2 Administration Guide" but I can't find what I am looking for.
    From reading the document, it appears it is possible to:
    (a) Configure IPMP in the Service Domain (pg. 155)
    - This protects against link level failure but won't protect against the failure of an entire Service LDOM?
    (b) Configure IPMP in the Guest Domain (pg. 154)
    - This will protect against Service LDOM failure but moves the complexity to the Guest Domain
    - This means there are two (2) VNICs in the guest, though?
    In AIX, "Shared Ethernet Adapter (SEA) Failover" presents a single NIC to the guest but can tolerate the failure of a single VIOS (~Service LDOM) as well as link-level failure in each VIO Server.
    https://www.ibm.com/developerworks/mydeveloperworks/blogs/aixpert/entry/shared_ethernet_adapter_sea_failover_with_load_balancing198?lang=en
    Is there not a way to do something similar in Oracle VM Server for SPARC that provides the following:
    (1) Two (2) Service Domains
    (2) Network Redundancy within the Service Domain
    (3) Service Domain Redundancy
    (4) Simplify the Guest Domain (i.e. a single virtual NIC) with no IPMP in the Guest
    Virtual Disk Multipathing appears to work as one would expect (at least according to the documentation, pg. 120). I don't need to set up mpxio in the guest. So I'm not sure why I would need to set up IPMP in the guest.
    Edited by: 905243 on Aug 23, 2012 1:27 PM

    Hi,
    there's link-based and probe-based IPMP. We use link-based IPMP (in the primary domain and in the guest LDOMs).
    For the guest LDOMs you have to set the phys-state linkprop on the vnets if you want to use link-based IPMP:
    ldm set-vnet linkprop=phys-state vnetX ldom-name
    If you want to use IPMP with vsw interfaces in the primary domain, you have to set the phys-state linkprop in the vswitch:
    ldm set-vswitch linkprop=phys-state net-dev=<phys_iface_e.g._igb0> <vswitch-name>
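    For illustration, a minimal end-to-end sequence might look like this (device, domain, and address names are invented; check the syntax against your LDoms/Solaris release):

        # control domain: expose physical link state to both of the guest's vnets
        ldm set-vnet linkprop=phys-state vnet0 myguest
        ldm set-vnet linkprop=phys-state vnet1 myguest
        # inside the guest (Solaris 11 syntax): group the two vnets into one IPMP interface
        ipadm create-ip net0
        ipadm create-ip net1
        ipadm create-ipmp -i net0 -i net1 ipmp0
        ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4

    With each vnet backed by a vswitch in a different service domain, the group survives both link and service-domain failure.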
    Bye,
    Alexander.

  • Architecture question, global VDI deployment

    I have an architecture question regarding the use of VDI in a global organization.
    We have a pilot VDI Core w/remote MySQL setup with 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would need to connect to desktops hosted at their physical location. What we don't want is to have to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from one pane of glass, having multiple Desktop Provider groups to represent the geographical locations.
    Is it possible to just set up VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind an individual interface for each domain on your web server.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
    We want to host several applications which will be accessed as:
    www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
    Is it possible to have a separate WebLogic domain for each application, all listening
    to ports 80 and 443?
    Thanks,
    Paul

  • Running MII on a Wintel virtual environment + hybrid architecture questions

    Hi, I have two MII Technical Architecture questions (MII 12.0.4).
    Question1:  Does anyone know of MII limitations around running production MII in a Wintel virtualized environment (under VMware)?
    Question 2: We're currently running MII centrally on Wintel but are considering moving it to Solaris. Our current plan is to run centrally, but in the future we may want to install local instances of MII in some of our plants which require more horsepower. While we have a preference for Solaris UNIX-based technologies in our main data center where our central MII instance will run, in our plants the preference seems to be for Wintel technologies. Does anybody know of any caveats, watch-outs, or other concerns around running MII in a hybrid architecture, with a Solaris UNIX-based head and the legs running on Wintel?
    Thanks for your help
    Michel

    This is a great source for the ins/outs of SAP Virtualization:  https://www.sdn.sap.com/irj/sdn/virtualization

  • Architectural question

    Little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    The reason I bring up this question is that I have to do a query in the constructor of a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...

    Little architectural question: why is all the stuff that is needed to render a page put into the constructor of a backing bean? Why is there no beforeRender method, analogous to the afterRenderResponse method? That method could then be called if and only if a page has to be rendered. It seems to me that an awful lot of resources are wasted this way.
    There actually is such a method: if you look at the FacesBean base class, there is a beforeRenderResponse() method that is called before the corresponding page is actually rendered.
    Reason I bring up this question is that I have to do a query in the constructor in a page backing bean. Every time the backing bean is created the query is executed, including when the page will not be rendered in the browser...
    This is definitely a valid concern. In Creator releases prior to Update 6 of the Reef release, however, there were use cases when the beforeRenderResponse method would not actually get called (the most important one being when you navigated to a new page, which is a VERY common use case :-).
    If you are using Update 6 or later, as a side effect of other bug fixes that were included, the beforeRenderResponse method is reliably called every time, so you can put your pre-rendering logic in this method instead of in the constructor. However, there is still a wrinkle to be aware of: if you navigate from one page to another, the beforeRenderResponse of both the "from" and "to" pages will be executed. You will need to add some conditional logic to ensure that you only perform your setup work if this is the page that is actually going to be rendered (hint: call FacesContext.getCurrentInstance().getViewRoot().getViewId() to get the context-relative path to the page that will actually be displayed).
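    To make that conditional check concrete, here is a minimal sketch (the page path "/Page1.jsp" and the doQuery() helper are invented for illustration):

        public void beforeRenderResponse() {
            // Run the expensive setup only when this page is the one actually being rendered.
            String viewId = FacesContext.getCurrentInstance().getViewRoot().getViewId();
            if ("/Page1.jsp".equals(viewId)) {
                doQuery(); // hypothetical pre-rendering query
            }
        }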
    One might argue, of course, that this is the sort of detail that an application should not need to worry about, and one would be absolutely correct. This usability issue will be dealt with in an upcoming Creator release.
    Craig McClanahan

  • BPEL/ESB - Architecture question

    Folks,
    I would like to ask a simple architecture question.
    We have to invoke partner web services, which are rpc/encoded, from SOA Suite 10.1.3.3. Here the role of the SOA Suite is simply to facilitate communication between an internal application and the partner services; as a result the SOA Suite doesn't have any processing logic. The flow is simply:
    1) Internal application invokes SOA suite service (wrapper around partner service) and result is processed.
    2) SOA suite translates the incoming message and communicates with partner service and returns response to internal application.
    Please note that at this point there is no plan to move any processing logic from the internal application to the SOA Suite. Based on the above details I would like to get some recommendations on what technology/solution from the SOA Suite is most efficient to facilitate this communication.
    Thanks in advance,
    Ranjith

    You can go through the design pattern called Channel Adapter.
    Here is how you should design it: the processing logic remains in the application; however, you have to design and build a channel adapter as a BPEL process. The channel adapter transforms your input into the web-service-specific format and invokes the endpoint. You need this channel adapter if your internal application doesn't have the capability to make web service calls.
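    To make the shape of that channel adapter concrete, here is a minimal sketch in plain Java (the real implementation would be a BPEL process; all names and message formats below are invented):

        // Channel adapter sketch: translate the internal format, invoke the partner, return the reply.
        public class PartnerChannelAdapter {
            // Map the internal message into the partner's rpc/encoded request format.
            private String transform(String internalMessage) {
                return "<partnerRequest>" + internalMessage + "</partnerRequest>";
            }
            // Invoke the partner endpoint (stubbed here; in SOA Suite this is the BPEL invoke activity).
            private String invokeEndpoint(String partnerRequest) {
                return "<partnerResponse>ok</partnerResponse>";
            }
            // The facade the internal application calls: no processing logic, just translation plus invocation.
            public String send(String internalMessage) {
                return invokeEndpoint(transform(internalMessage));
            }
        }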
    Hope this helps.

  • HT2486 I am new to apple - sorry for ease of question. With my HP I have a label tab for email. I can create any title. How do I do this in apple

    Sorry for the ease of this question - I am new to Apple/Mac. With my HP I can label my emails and keep all messages from a certain person. How do I do that with a Mac Pro?

    Create a new smart mailbox. Details in Mail's help files. Once you straighten that out, see these:
    Switching from Windows to Mac OS X,
    Mac Basics—Tutorials on using a Mac,
    Mac OS X keyboard shortcuts,
    Anatomy of a Mac,
    MacTips,
    Switching to Mac Superguide, and
    Switching to the Mac: The Missing Manual, Mountain Lion Edition.
    Additionally, *Texas Mac Man* recommends:
    Quick Assist,
    Welcome to the Switch To A Mac Guides,
    Take Control E-books, and
    A guide for switching to a Mac.

  • Architecture Question...brain teasing !

    Hi,
    I have an architecture question about Grid Control that so far Oracle Support hasn't been able to figure out.
    I have two management servers M1 and M2.
    two VIPs (Virtual IPs), V1 and V2
    two Agents A1 and A2
    The scenario:
    M1 ----> M2
    |        |
    V1       V2
    |        |
    A1       A2
    Repository at M1 is configured as Primary and sends archive logs to M2. On the failover, I have it setup to make M2 as primary repository and all works well !
    Under normal conditions, A1 talks to M1 thru V1 and A2 talks to M2 thru V2. No problem so far !
    If M1 dies and V1 forwards A1 to M2, or
    if M2 dies and V2 forwards A2 to M1,
    how would this work?
    I think (I haven't tried it yet): what if I configure the OMSes with the same username and registration passwords, copy all the wallets from M1 to M2
    and A1 to A2, and just change V1 to V2? Would this work?
    Please advise!

    An SLB is not an option for us here!
    Can we just repoint A1 to M2 using a DNS CNAME change?
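    For illustration, such a repoint would amount to a DNS zone change along these lines (hostnames invented):

        ; before: agents resolve the OMS host to M1
        oms.example.com.   IN CNAME   m1.example.com.
        ; after M1 fails: point the same name at M2 so agents follow it
        oms.example.com.   IN CNAME   m2.example.com.

    The agents keep their configured upload URL; only the name resolution changes (subject to DNS TTLs and the agents re-resolving the name).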

  • Inheritance architecture question

    Hello,
    I have an architecture question.
    We have different types of users in our system: normal users, company "users", and some others.
    In theory they all extend the normal user. But I've read a lot about performance issues with join-based inheritance mapping.
    How would you suggest to design this?
    We expect around 15k normal users, a few hundred company users, and a few hundred of each other user type.
    Inheritance mapping? Which type?
    No inheritance, appending all attributes to one class (and leaving those not used by the user type null)?
    Other ways?
    thanks
    Dirk

    Sorry dude, but there is only one way you are going to answer your question: research it. And that means try it out. Create a simple prototype setup where you have your inheritance structure and generate 15k users' worth of data in it - then see what the performance is like with some simple test cases. Your prototype could be promoted to be the basis of the end product if the results are satisfying. If you know what you are doing this should only be a couple of hours of work - very much worth your time because it is going to potentially save you many refactoring hours later on.
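    If it helps as a starting point for such a prototype, here is a minimal JPA sketch of the two obvious mappings (entity names are invented; swap the strategy to compare both):

        import javax.persistence.*;

        @Entity
        @Inheritance(strategy = InheritanceType.JOINED) // or InheritanceType.SINGLE_TABLE
        public class AppUser {
            @Id @GeneratedValue
            private Long id;
            private String name;
        }

        @Entity // in its own source file
        public class CompanyUser extends AppUser {
            private String companyName; // with SINGLE_TABLE this becomes a nullable column on the shared table
        }

    JOINED keeps the tables normalized at the cost of joins; SINGLE_TABLE avoids joins but leaves unused columns null, which is exactly the "append all attributes to one class" option.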
    You may also want to experiment with different persistence providers by the way (Hibernate, Toplink, Eclipselink, etc.) - each have their own way to implement the same spec, it may well be that one is more optimal than the other for your specific problem domain.
    Remember: you are looking for a solution where the performance is acceptable - don't waste your time trying to find the solution that has the BEST performance.

  • General architecture questions

    Hello,
    I am developing a web application and could use some architectural advice. I've done lots of reading already, but could use some direction from those who have more experience in multi-tier development and administration than I. You'll find my proposed solution listed below and then I have some questions at the bottom. I think my architecture is fairly standard and simple to understand--I probably wrote more than necessary for you to understand it. I'd really appreciate some feedback and practical insights. Here is a description of the system:
    Presentation Layer
    So far, the presentation tier consists of an Apache Tomcat Server to run Servlets and generate one HTML page. The HTML page contains an embedded MDI style Applet with inner frames, etc.; hence, the solution is Applet-centric rather than HTML-centric. The low volume of HTML is why I decided against JSPs for now.
    Business Tier
    I am planning to use the J2EE 1.4 Application Server that is included with the J2EE distribution. All database transactions would be handled by Entity Beans, and for computations I'll use Session Beans. The most resource-intensive computational process will be a linear optimization program that can compute large matrices.
    Enterprise Tier
    I'll probably use MySQL, although we have an Oracle 8 database at our disposal. A disadvantage of MySQL is that it won't have triggers until the next release, but maybe I can find a work-around for now. The advantage is that an eventual migration to Linux will be easier on the wallet.
    Additional Information
    We plan to use the system within our company at first, with probably about 5 or fewer simultaneous users. Our field engineer will also have access from his laptop. That means he'll download the Applet-embedded HTML page from our server via the Internet. Once loaded, all navigation will be Applet-centered. Data transfer from the Applet to Servlet will be via standard HTTP.
    Eventually we would like to give access of our system to a client firm. In other words, we would be acting as an application service provider and they would access our application via the Internet. The Applet-embedded HTML page would load onto their system. The volume would be low--5 simultaneous users max. All users are well-defined in advance. Again, low volume HTML generation--Applet-centric.
    My Questions
    1). Is the J2EE 1.4 Application Server a good production solution for the conditions that I described above? Or is it better to invest in a commercial product like Sun Java System Application Server 7? Or should I forget the application server concept completely?
    2). If I use the J2EE Application Server, is this a good platform for running computational programs (via Session Beans)? Or is it too slow for that? How would it compare with using a standalone Java application--perhaps accessed from the Servlet via RMI? I guess using JNI with C++ in a standalone application would be the fastest, though a bit more complex to develop. I know it is a difficult question, but what is the most practical solution that strikes a balance between ease-of-programming and speed?
    3). Can the J2EE 1.4 Application Server be used for running the presentation tier (Servlets and HTML) internally on our intranet? According to my testing, it seems to work, but is it a practical solution to use it this way?
    4). I am running Tomcat between our inner and outer firewalls. The database would of course be completely inside both firewalls. Should the J2EE (or other) Application Server also be in the so-called "DMZ" with Tomcat? Should it be on the same physical server machine as Tomcat?
    5). Can Tomcat be used externally without the Apache Web Server? Remember, our solution is based on Servlets and a single Applet-embedded HTML page, so high-volume HTML generation isn't necessary. Are there any pros/cons or security issues with running a standalone Tomcat?
    So far I've got Tomcat and the J2EE Application Server running and have tested my small Servlet/Applet test solution on both. Both servers work fine, although I haven't tested any Enterprise Beans on the application server yet. I'd really appreciate it if anyone more experienced than I can comment on my design, answer some of my questions, and/or give me some advice or insights before I start full-scale development. Thanks for your help,
    Regards,
    Itchy

    Hi Itchy,
    Sounds like a great problem. You did an excellent job of describing it, too. A refreshing change.
    Here are my opinions on your questions:
    >
    My Questions
    1). Is the J2EE 1.4 Application Server a good
    production solution for the conditions that I
    described above? Or is it better to invest in a
    commercial product like Sun Java System Application
    Server 7 ? Or should I forget the application server
    concept completely?
    It always depends on your wallet, of course. I haven't used the Sun app server. My earlier impression was that it wasn't quite up to production grade, but that was a while ago. You can always consider JBoss, another free J2EE app server. It's gotten a lot of traction in the marketplace.
    2). If I use the J2EE Application Server, is this a
    good platform for running computational programs (via
    Session Beans)? Or is it too slow for that? How
    would it compare with using a standalone Java
    application--perhaps accessed from the Servlet via
    RMI? I guess using JNI with C++ in a standalone
    application would be the fastest, though a bit more
    complex to develop. I know it is a difficult
    question, but what is the most practical solution that
    strikes a balance between ease-of-programming and
    speed?
    People sometimes forget that you can do J2EE with a servlet/JSP engine, JDBC, and POJOs (Plain Old Java Objects). You can use an object/relational mapping layer like Hibernate to persist objects without having to write JDBC code yourself. It allows transactions if you need them. I think it can be a good alternative.
    The advantage, of course, is that all those POJOs are working objects. Now you have your choice as to how to package and deploy them. RMI? EJB? Servlet? Just have the container instantiate one of your working POJOs and delegate to it. You can defer the deployment choice until later. Or do all of them at once. Your call.
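    As a concrete (if simplified) illustration of the POJO-plus-Hibernate idea -- the Customer class, its mapping, and the sessionFactory are assumed to exist:

        // Persist a plain POJO through Hibernate; no EJB container involved.
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        Customer c = new Customer();      // hypothetical mapped POJO
        c.setName("Acme Field Services");
        session.save(c);                  // Hibernate issues the JDBC INSERT for you
        tx.commit();
        session.close();

    The same Customer object can later be handed to a servlet, exposed over RMI, or wrapped in a session bean without touching the persistence code.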
    3). Can the J2EE 1.4 Application Server be used for
    running the presentation tier (Servlets and HTML)
    internally on our intranet? According to my testing,
    it seems to work, but is it a practical solution to
    use it this way?
    I think so. A J2EE app server has both an HTTP server and a servlet/JSP engine built in. It might even be Tomcat in this case, because it's Sun's reference implementation.
    4). I am running Tomcat between our inner and outer
    firewalls. The database would of course be completely
    inside both firewalls. Should the J2EE (or other)
    Application Server also be in the so-called "DMZ" with
    Tomcat? Should it be on the same physical server
    machine as Tomcat?
    I'd have Tomcat running in the DMZ, authenticating users, and forwarding requests to the J2EE app server running inside the second firewall. They should be on separate servers.
    >
    5). Can Tomcat be used externally without the Apache
    Web Server? Remember, our solution is based on
    Servlets and a single Applet-embedded HTML page, so
    high volume HTML generation isn't necessary. Are
    there any pros/cons or security issues with running a
    standalone Tomcat?
    Tomcat's performance isn't so bad, so it should be able to handle the load.
    The bigger consideration is that the DMZ Tomcat has to listen on port 80 in order to be seen from the outside without opening another hole in your outer firewall. If you piggyback it on top of Apache you can just have those requests forwarded. If you give port 80 to the Tomcat listener, nothing else will be able to get it.
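    For reference, giving port 80 to a standalone Tomcat is just a connector setting in server.xml -- a minimal sketch (exact attributes vary by Tomcat version):

        <!-- server.xml: bind Tomcat's HTTP connector directly to port 80 -->
        <Connector port="80" protocol="HTTP/1.1" />

    Behind Apache, you would instead leave Tomcat on its AJP connector and let Apache own port 80 and forward requests to it.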
    >
    So far I've got Tomcat and the J2EE Application Server
    running and have tested my small Servlet /Applet test
    solution on both. Both servers work fine, although I
    haven't tested any Enterprise Beans on the application
    server yet. I'd really appreciate if anyone more
    experienced than I can comment on my design, answer
    some of my questions, and/or give me some advice or
    insights before I start full-scale development. Thanks
    for your help,
    Regards,
    Itchy
    There are smarter folks than me on this forum. Perhaps they'll weigh in. Looks to me like you're doing a pretty good job, Itchy. - MOD

  • Enterprise Manager 11g Sybase Plugin architecture question

    Hi,
    I have successfully installed and configured Grid Control 11g on Red Hat Enterprise Linux 5.5, and deployed and configured agents to Solaris and Linux environments... so far so good.
    However, we're going to test the Sybase ASE plugin to monitor ASE with EM. My question is a simple one and I think I know the answer but I'd like to see what you guys think of this.
    We'd like to go with a single centralised agent rather than one agent/plugin per Sybase machine, at least for the tests. No doubt there may be downsides to this approach (the first one clearly being a single point of failure - well, we can live with this for now). My instinct is to install the Oracle agent/plugin on a machine other than the Grid machines themselves, however the question arose: why not install the ASE plugin on the Grid infrastructure machines' agents themselves? Pros and cons?
    The architecture we currently have: a repository database configured to fail over between 2 Red Hat boxes; 2 OMS, one running on each of these boxes, configured behind an SLB using an NFS-based shared upload directory; and one 'physical agent' running on each box. Simple for now. But I have the feeling that, given the Sybase servers will communicate with or be interrogated by the Sybase plugin directly on the grid infrastructure machines, this will place load etc. on them and in case of problems might interfere with the healthy running of the grid. Or am I being overcautious?
    John
    Edited by: user1746618 on 12-Jan-2011 09:01

    Well, I have followed the common-sense approach and avoided the potential problem by installing on a remote server and configuring the plugin on it.
    It seems to be working fine and keeps the install base clean.

  • Three tier architecture questions

    Hello,
    My question is in regard to using TopLink in a three-tier architecture. If I wish to send an object A which has a collection of Bs, and B has a collection of Cs (a nested object structure with two or more levels of indirection): is the best solution to have the named query be part of a unit of work, so that even if somebody on the client side were to unknowingly modify one of the entity objects (a POJO), the shared session cache would not be affected?
    This is assuming the client side HTTP layer and the RMI/EJB layer are on different JVMs.
    One of the other suggestions I have heard is to retrieve it from the shared session cache directly, and in case I need to modify one or more of the objects, do a named-query lookup on that object alone, then register that object in a unit of work and commit the changes.
    Also, the indirection would have to be utilised before the data objects are sent to the Servlet layer, I presume? (That is, if I do an a.getAllOfBObjects() on the servlet side I would get a NullPointerException unless all of the Bs were already instantiated on the server side.) Also, when the objects are sent back to the server, do I do a registerObject on all the ones that have changed and then a deepMergeClone() before the uow.commit()?
    Thanks,
    Aswin.

    Aswin,
    If your client is remote to the EJB tier then all persistent entities are detached through serialization. In this architecture you do not need to worry about reading and modifying the shared instance as it never the one being changed on the client (due to serialization).
    Yes, you do need to ensure that all required indirect relationships are instantiated on the server prior to returning them from the EJB call.
    Yes, you do need to merge the changes of the detached instance when returned to the server. I would also recommend first doing a read for the entity being merged (by primary key) on the new UnitOfWork prior to the merge. This will handle the case where you are merging into a different node of the cluster then where you read as well as allowing you to check for the case where the entity no longer exists in the database (if the read returns null then the merge will result in an INSERT and this may not be desired).
    Here is an example test case that does this:
        public void test() throws Exception {
            Employee detachedEmp = getDetachedEmployee("Jill", "May");
            assertNotNull(detachedEmp);
            // Remove the first phone number
            PhoneNumber phone = detachedEmp.getPhoneNumber("Work");
            assertNotNull("Employee does not have a Work Phone Number",
                          detachedEmp.getPhoneNumber("Work"));
            detachedEmp.removePhoneNumber(phone);
            UnitOfWork uow = session.acquireUnitOfWork();
            Employee empWC = (Employee) uow.readObject(detachedEmp);
            if (empWC == null) { // Deleted
                throw new RuntimeException("Could not update deleted employee: " + detachedEmp);
            }
            uow.deepMergeClone(detachedEmp);
            uow.commit();
        }

        /**
         * Return a detached Employee found by provided first name and last name.
         * Its phone number relationship is instantiated.
         */
        public Employee getDetachedEmployee(String firstName, String lastName) {
            ReadObjectQuery roq = new ReadObjectQuery(Employee.class);
            ExpressionBuilder builder = roq.getExpressionBuilder();
            roq.setSelectionCriteria((builder.get("firstName").equal(firstName)).and(builder.get("lastName").equal(lastName)));
            Employee employee = (Employee) session.executeQuery(roq);
            employee.getPhoneNumbers().size(); // trigger indirection so the relationship is instantiated before serialization
            return (Employee) SerializationHelper.serialize(employee);
        }
    One other note: In these types of applications optimistic locking is very important. You should also make sure that the locking field(s) are mapped into the object and not stored only in the TopLink cache. This will ensure the locking semantics are maintained across the detachment to the client and the merge back.
    Doug

  • Architecture question...where to put the code

    Newbie here, so please be gentle and explicit (no detail is
    too much to give or insulting to me).
    I'm hoping one of you architecture/design gurus can help me
    with this. I am trying to use good principles of design and not
    have code scattered all over the place and also use OO as much as
    possible. Therefore I would appreciate very much some advice on
    best practices/good design for the following situation.
    On my main timeline I have a frame where I instantiate all my
    objects. These objects refer to movieClips and textFields etc. that
    are on a content frame on that timeline. I have all the
    instantiation code in a function called initialize() which I call
    from the content frame. All this works just fine. One of the
    objects on the content frame is a movieClip which I allow the user
    to go forward and backward in using some navigation controls.
    Again, the object that manages all that is instantiated on the main
    timeline in the initialize() function and works fine too. So here's
    my question. I would like to add some interactive objects on some
    of the frames of the movieClip I allow the user to navigate forward
    and backward in (let's call it NavClip). For example on frame 1 I
    might have a button, on frame 2 and 3 nothing, on frame 4 maybe a
    clip I allow the user to drag around etc. So I thought I would add
    a layer to NavClip where I will have key frames and put the various
    interactive assets on the appropriate key frames. So now I don't
    know where to put the code that instantiates these objects (i.e.
    the objects that know how to deal with the events and such for each
    of these interactive assets). I tried putting the code on my main
    timeline, but realized that I can't address the interactive assets
    until the NavClip is on the frame that holds the particular asset.
    I'm trying not to sprinkle code all over the place, so what do I
    do? I thought I might be able to address the assets by just
    providing a name for the asset and not a reference to the asset
    itself, and then address the asset that way (i.e.
    NavClip["interactive_mc"] instead of NavClip.interactive_mc), but
    then I thought that's not good since I think there is no type
    checking when you use the NavClip["interactive_mc"] form.
    I hope I'm not being too dim a bulb on this and have missed
    something really obvious. Thanks in advance to anyone who can help
    me use a best practice.

    1. First of all, the code should be:
    var myDraggable:Draggable=new Draggable(myClip_mc);
    myDraggable.initDrag();
    Where initDrag() is defined in the Draggable class. When you
    start coding functions on the timeline... that's asking for
    problems.
    >>Do I wind up with another object each time this
    function is called
    Well, no, but. That would totally depend on the code in the
    (Draggable) class. Let's say you would have a private static var
    counter (private static, so a class property instead of an instance
    property) and you would increment that counter using a
    setInterval(). The second time you enter the frame and create a new
    Draggable object... the counter starts at the last value of the
    'old' object. So, you don't get another object with your function
    literal but you still end up with a faulty program. And the same
    goes for listener objects that are not removed, tweens that are
    running and so on.
    The destroy() method in a custom class (=object, I can't
    stress that enough...) needs to do the cleanup, removing anything
    you don't need anymore.
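    A minimal AS2 sketch of such a class, with the cleanup concentrated in destroy() (details are illustrative):

        class Draggable {
            private var clip:MovieClip;
            function Draggable(target:MovieClip) {
                clip = target;
            }
            // Wire up press/release handlers so the clip can be dragged.
            function initDrag():Void {
                clip.onPress = function() { this.startDrag(); };
                clip.onRelease = function() { this.stopDrag(); };
            }
            // Cleanup: remove the handlers and the reference so nothing lingers in memory.
            function destroy():Void {
                delete clip.onPress;
                delete clip.onRelease;
                clip = null;
            }
        }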
    2. if myDraggable != undefined
    You shouldn't be using that, period. If you don't need the
    asset anymore, delete it using the destroy() method. Again, if you
    want to make sure only one instance of a custom object is alive,
    use the Singleton design pattern. To elaborate on inheritance:
    define the Draggable class (class Draggable extends MovieClip) and
    connect it to the myClip_mc using the linkage identifier in the
    library). In the Draggable class you can define an onUnload handler
    (an event fired when myClip_mc is removed using
    myClip_mc.removeMovieClip()...) and do the cleanup there.
    3. A destroy() method performs a cleanup of any assets we
    don't need anymore to make sure we don't end up with all kinds of
    stuff hanging around in the memory. When you extend the MovieClip
    Class you can (additionally) use the onUnload event. And with the
    code you posted, no it wouldn't delete the myClip_mc unless you
    program it to do so.

  • Replication Architecture Question

    Hello,
    I have a problem with identifying the architecture for my materialized view environment. Let me tell you what I want to achieve:
    I have a production database that is the source (master site) of my data. Portions of that data get replicated to materialized view sites on a daily basis. The materialized view sites are accessed through the internet. The problem is that after the refresh, more or less extensive calculations have to be done. During that time the materialized view site is not ready to be accessed via the internet, because some data tables are not yet filled (they get filled during the calculations). That means there is a downtime of the system.
    In order to eliminate this downtime I want to have 2 instances of the materialized view site, which get replicated sequentially, e.g. mv1 gets updated every 48 hours as well as mv2, but there is a time shift of 24 hours between the refresh of each materialized view site.
    The web application accesses either mv1 or mv2 (it accesses mv1 while mv2 is being updated, and vice versa).
    This approach will eliminate the downtime of the system.
    But now I have a little problem with understanding the materialized view concepts. If mv1 is refreshed, do all the entries in the materialized view logs at the master site get emptied? If so, then I have a problem when I want to refresh mv2, because then only the changes since the refresh of mv1 are present in the materialized view logs at the master site, so I will lose all the changes made to the database that were already replicated to mv1.
    Is there a way of achieving the scenario mentioned above? What I want to have are separate materialized view logs for each of the materialized view sites.
    Does anyone know how I can do this, or has an alternative approach that fits the above requirements?
    Any suggestions are welcomed.
    best regards
    Mirko

    Hi Justin,
    Thank you for your reply. If I understand you correctly, the Oracle replication mechanism automatically handles the situation of multiple materialized view sites that refresh from one master site at different times. Simply put, that means Oracle automatically stores all changes on the master site in one materialized view log per table, but keeps track of which entries were already sent to each individual materialized view site (just as if there were one materialized view log for each materialized view site). So I don't have to worry about anything when I want to refresh more than one materialized view site from one master site? All materialized view sites will get the correct data?
    If so then it sounds good.
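    For what it's worth, that setup boils down to one log per master table serving several fast-refreshable sites -- a minimal sketch (object names invented):

        -- master site: one materialized view log per replicated table
        CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

        -- on mv site 1 and mv site 2 alike, each over its own database link:
        CREATE MATERIALIZED VIEW orders_mv
          REFRESH FAST
          AS SELECT * FROM orders@master_site;

    Oracle purges a log entry only once every registered materialized view has refreshed past it.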
    I have another short question for clarification of the materialized view technique:
    What happens when the materialized view site refreshes and queries are sent to the materialized view site at the same time? What data will be used to perform the query?
    My assumption is that until the refresh group (all MVs are within one refresh group) has completed the update, the user that initiated the query sees the old data. Is that correct? Does that user notice (performance-wise) that the materialized view site is currently being refreshed?
    Regarding the calculation after refresh: yes, I have to do it after the refresh is made, because I have to call Java stored procedures that do the calculation, so I can't do it while creating/refreshing the materialized views.
    regards
    mirko

  • EDW Architecture question

    Hi experts.
    I was searching SAP Help for guidance on EDW architecture details, but I didn't find anything.
    We're developing a new BW (2004s version) project and the challenge is to have one physical layer with the EDW and another layer just for reporting.
    My question is about the communication between these two layers, given that they are on two different machines. If my reporting-layer InfoCubes are loaded from EDW-layer DSOs, how can I create a transformation linking both objects? Is it possible? Or is our concept not right?
    Thanks in advance.
    TP

    Physical layer with EDW: this will be the modeling area, and the data will reside in your InfoProviders (DSOs/Cubes etc.).
    Reporting: you will use the front end, i.e. the BI reporting tools, to create, edit, and use reports in BEx tools like Analyzer, WAD, and Report Designer. The data will come from your InfoProviders.
    My question is about the communication between these two layers, given that they are on two different machines.
    Do you mean to keep two production servers, where one will be used just for backup and the other for all users? BTW, users will be using reports only through the BW reporting tools, and they will be able to access them over the network.
    If my reporting-layer InfoCubes are loaded from EDW-layer DSOs, how can I create a transformation linking both objects? Is it possible? Or is our concept not right?
    Check these:
    Modeling:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/6ce7b0a4-0b01-0010-52ac-a6e813c35a84
    https://www.sdn.sap.com/irj/sdn/docs?rid=/webcontent/uuid/93eaff2d-0a01-0010-d7b4-84ac0438dacc
    Hope it helps..
