Commands per environment or per element

I am new to Exchange, but for a risk assessment I need to run a few shell commands to determine certain settings (around backups and auto-forwarding). However, I am unsure whether, in an environment with multiple mailbox servers/mailbox databases, these commands
need running once per environment, or once per server/database.
So, for example, the command: $dbs = Get-MailboxDatabase -Status
If there is more than one mailbox database, will this list them all in the output, or do I need to specify a mailbox database and run it once per mailbox database?
I have a similar command for auto-forwarding, which uses the Get-Mailbox command, but again I don't know whether it will enumerate all mailboxes in the environment, just the mailboxes on a given server, or just the mailboxes in a given database.

Any of the Get- commands will attempt to return all items in the directory unless there is an obvious focus for the command. For instance, your example Get-MailboxDatabase has no obvious focus, so it will return all databases. However, its associated
Get-MailboxDatabaseCopyStatus does have a focus: a specific mailbox database copy. Sadly, for most of the commands it is not obvious whether there is a focus or not, but if you look at them closely you can see that a mailbox database is a specific item,
and its copy status is how that item is currently running. The same is true for mailboxes: Get-Mailbox returns all mailboxes, but Get-MailboxStatistics gets the statistics for a specific mailbox.
There are exceptions to this. For instance, if you run Get-ReceiveConnector, the focus is the server, not the connectors, so you will get the receive connectors of the specific server you are running the command from - which fails
if you use a management server with only the management tools installed. Trial and error is going to uncover these exceptions. HTH ...
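For illustration, a minimal sketch of both checks, run once for the whole environment from the Exchange Management Shell (the switches and property names shown are standard Get-MailboxDatabase/Get-Mailbox output, but verify them against your Exchange version):

  # All mailbox databases in the organization; -Status adds the backup timestamps.
  $dbs = Get-MailboxDatabase -Status
  $dbs | Select-Object Name, Server, LastFullBackup, LastIncrementalBackup

  # All mailboxes in the organization (not just one server or database),
  # reporting any that have forwarding configured.
  Get-Mailbox -ResultSize Unlimited |
      Where-Object { $_.ForwardingAddress -or $_.ForwardingSmtpAddress } |
      Select-Object Name, ForwardingAddress, ForwardingSmtpAddress, DeliverToMailboxAndForward

In other words, you run these once per environment, not once per server or database.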

Similar Messages

  • Would it help Firefox to have me test beta flash player on Aurora, since I use click to play per element?

    I use Aurora to help the Mozilla team work out any bugs before a FF product is shipped. I see the "download flash beta" link just below the link to d/l Aurora.
    I always wonder if I should install this. My use of flash is limited, b/c I use click to play per element (extension). Under these conditions, would it help Mozilla to have this information in the telemetry reports sent from my browser? Or would using click to play per element defeat the purpose of testing the flash beta?

    Now that's a product worth looking at. I'll see if I want to install that later on this week.
    Bottom line is that I have picked up such a strong detestation (word?) for Flash that I really don't want to be a beta tester for Adobe. If it would help FF in any way, I'd run it if it didn't make me crash every day.
    At any rate, I'll take a look at Shumway. Thanks for the link. Never heard of that before. Hopefully it will see the light of day before the web has moved on from flash, if/when that day ever comes.

  • Can we provide a command link in a subview element

    Hi,
    When we include a page, we mention it under the <f:subview> tag.
    Can we specify a command link within the subview element?
    If we can, what should we give as the navigation case in faces-config's from-view-id: the subview page, or the main view page in which we included the subview page?

    I don't think so, because the AM would be the owner of the transaction, which is created in the current database only. Another DB might be difficult to maintain. However, I have not tried it myself.

  • Number of commands per partition

    I am new to this and am wondering how many commands I can add to one partition?

    Hi Kieran,
    According to your description, you are trying to figure out the optimal number of records per partition, right? As per my understanding, this number depends on your hardware: the better your hardware, the more records per partition.
    An earlier version of the SQL Server 2005 Analysis Services Performance Guide stated this:
    "In general, the number of records per partition should not exceed 20 million. In addition, the size of a partition should not exceed 250 MB."
    That said, the number of records is not the primary concern here. Rather, the main criteria are manageability and processing performance. Partitions can be processed in parallel, so the more there are, the more can be processed at once. However, the more partitions
    you have, the more things you have to manage. Here are some links which describe partition optimization:
    http://blogs.msdn.com/b/sqlcat/archive/2009/03/13/analysis-services-partition-size.aspx
    http://www.informit.com/articles/article.aspx?p=1554201&seqNum=2
    Regards,
    Charlie Liao
    TechNet Community Support

  • IFrame deprecated on Composition Environment, Which UI Element to use

    Hi All,
    I want to render and display XML content in a Web Dynpro iView. I searched the forum and found that the IFrame API is used for this purpose. The problem is that I am developing the Web Dynpro application on Composition Environment 7.1, and there the IFrame API is deprecated.
    Is there any other UI element in Composition Environment that can be used for the same purpose?
    Thanks
    Yasar

    Hi Srinivas,
    Thanks for your reply,
    I will display the XML Structure.
    I retrieve document information (such as creation date, created by, document name...) and document content from an RFC, and I want to display the header information and the content on the same Web Dynpro view if possible.
    Is it possible to store the XML content in the development component and to show the content as a URL iView in the portal content? If that could be done, I could display two iViews (one for the header information and the other for the document content) on the same page.
    Displaying the XML content in an external window is not preferred.
    Regards,
    Yasar

  • ALV Table: DROPDOWN-Column with different valuesets per row

    Hello,
    I tried to create a dropdown-by-index cell in a table with different valuesets in each row. So I created an attribute VALUESET of type WDR_CONTEXT_ATTR_VALUE_LIST in my node to provide a different valueset per element. In my ALV table I bound the property "valueset_fieldname" of the dropdown cell to the context attribute VALUESET:
      lo_column = lo_alv_model->if_salv_wd_column_settings~get_column( id = 'PRICE' ).
      CREATE OBJECT lo_drop_down_idx
        EXPORTING
          selected_key_fieldname = 'PRICE'.
      lo_drop_down_idx->set_valueset_fieldname( value = 'VALUESET' ).
      lo_column->set_cell_editor( lo_drop_down_idx ).
    Now I have the problem that the dropdown cell's list displays the proper number of values, but not the proper texts. My valueset looks, for example, like this:
    Value: A
    Text:  A
    Value: B
    Text:  B
    Value: C
    Text:  C
    Value: D
    Text:  D
    But my Dropdown-cell shows these values:
    A
    A
    A
    D
    Could you please help?

    Hello Lekha,
    thank you for your answer. I think there might be another reason for this problem. When I debug the view with the Web Dynpro debugger, the valueset in the context contains the correct values, but the dropdown shows wrong values.
    You also sent me a link with a code sample. In that code you use the following statement:
    lr_drp_idx->set_texts( 'VALUESET' ). This is a method of the class CL_WD_DROPDOWN_BY_IDX. I used the class cl_salv_wd_uie_dropdown_by_idx as I'm working with an ALV table. This class doesn't have the method set_texts; instead it has a method called 'set_valueset_fieldname'. Maybe this method has a bug?
    Regards,

  • Feedback on use of incubator command pattern

    Hi,
    We are currently prototyping some different solutions using coherence incubator (namely command pattern) and are looking for some feedback as to the viability and potential improvements to the solution.
    h3. Summary of Prototype
    The prototype does the following (i have a nice sequence diagram for this but don't see a way to attach it :():
    + client (e.g. through coherence extend) calls local api to save a "message" for a particular account (e.g. Account id = 1234). This calls namedcache.put and inserts an entry into the cache.
    + BackingMapListener is configured for the cache into which the client indirectly inserts. In the prototype this is a spring bean that extends AbstractMultiplexingBackingMapListener - which is fully "loaded" with all the required dependencies for the processing of the message (services, etc.).
    + The listener then registers a new context (using ContextManager) using a "grouping" id based on the sequence/ordering requirements. For example, say that each message against an account needs to be processed in order. The context would get instantiated with name = "1234", so that subsequent requests for account 1234 will get queued against the context with the same name whilst the previous request(s) are still processing. Messages for other accounts would register a different context name so they will get simultaneously processed.
    NB: The functionality of this listener can be paralleled to the sample in CommandPatternExample for one submission. I am not entirely clear where command submissions typically "tie in", but I am planning to kick them off from a backing map listener. I briefly explored using the 'com.oracle.coherence.common.events.dispatching.listeners.DelegatingBackingMapListener' to dispatch the commands, but am not entirely sure how this would tie in. As I understand it, the delegating backing map listener is used within the 'live objects' context and dispatches entries that implement LifecycleAwareEntry, but I am not sure how we would create the "custom contexts" we require (i.e. the identifier is not the key of the cache entry but rather a subset of it - e.g. account id versus account message id).
    + A command is then created to process the account message, which comprises:
    - the Account which needs to be processed (the value in the backing map listener contains the Account itself)
    - Any components that are required during processing (services, daos, etc - service might itself be injected with daos, etc.)
    + The newly instantiated command is then submitted to the CommandSubmitter for the appropriate contextIdentifier (the one returned for 1234 in our example).
    From some basic tests, the prototype is behaving as I desire - i.e. it queues and "synchronizes" the commands for the same context and also simultaneously processes commands assigned to different contexts asynchronously. That's great.
    However, there are a number of things I am exploring for the actual implementation. I believe most of these are typical concerns so I wonder if Oracle or anyone can provide some feedback from past experience/proposed recommendations:
    h3. Questions
    h4. 1. Grid/server-side Business Logic Deployment
    One of the things that has occurred to us is that ideally we would like to store the business processing logic (i.e. the heart of the processing within the command) either inside the grid or within a coherence node (i.e. made available through the classpath of the node startup).
    In our case we have a few different "processing models", but ideally the processor/command will simply determine the appropriate control flow (i.e. within the command - or maybe the appropriate lifecycle if we end up using that) and associated business logic off the attributes of the object to be processed. I am not sure if our use case is typical, but to be clear we have a fair bit of business logic to be performed within the 'command', each in separate modules. In implementation, most modules will be interacting with the grid for lookups, etc. but ideally that will be abstracted from the Processor/Command which will only know that it is using an 'accountService' - for e.g.
    Currently the business logic is "loaded" into the listener and "passed on" to the command through composition. Ideally we want the command to be lightweight, and the various "processing models" would either:
    a) be deployed to each node and somehow made "available" to the command during execution. We would need to work out how this becomes available to the execution environment; perhaps each 'Context' would wrap the processing details. However, even this is a bit too granular, as a single processing model will likely apply to many contexts.
    b) Perhaps the business logic/processing components are deployed to the cache itself. Then, within the command, attributes on the object would be consulted to determine which processing model to "apply", and a simple lookup could return the appropriate control flow/processor(s).
    c) Perhaps the different logic/flow is embedded in a different "lifecycle" for the event processing, and the appropriate lifecycle is detected by the listener and appropriately applied. Even with such a model we'd still like the various processing for each phase to be maintained on the server side if possible.
    Has anyone else done something like this and/or are there any thoughts about deploying the business logic to the grid this way? I see advantages/disadvantages with the different solutions, and some of them seem better for upgrades. For example if you upgrade the processing logic whilst requests are still coming in (clearly you would attempt to avoid this) and it is embedded into each node, what would happen if one node has been upgraded and a request comes to that node. Say one of the business logic modules performs a query against the cache which needs to consult another node (e.g. assuming you're using partitioned data) and that node has not received the upgrade and there's a conflict. In that regard perhaps deploying the different processing logic to a replicated cache makes more sense because once updated it should get pushed immediately to all nodes?
    Are these known concerns? I'm new to grid-side processing concepts, so just correct me if there's an obvious issue with this.
    h4. 2. Cleanup/Management of contexts
    One thing I noticed in my prototype is that the contexts I create don't really go away. We envision creating many contexts per day (let's just say a few hundred million to be safe),
    so ...
    a) how do people normally remove the contexts? Does the command framework sort this out behind the scenes? I can see the 'stop' method on the CommandExecutor removing the context, but from a quick follow-through the only scenario which seems to potentially call this is if the context version number has changed. Is there some way to change the version when we submit additional commands to the same context?
    b) Is there an issue with creating this many Contexts? As per earlier mention, to reduce overhead ideally the context will not be too heavy but any thoughts on our intended usage? We could use something like a hashing scheme to "bucket" the requests to contexts to reduce the total number of Contexts if required but this is not ideal.
    h4. 3. Creation of new Command Every time.
    In our scenario, each command needs to act upon a given object (e.g. one account). As I see it, this requires us to create a new Command for each message, because I do not see a way to 'pass in' the object to the execute method. Setting it to the context does not work either because we need to queue a few requests to each given context; I played with wrapping the object with GenericContext and setting the value but in reality we're submitting the commands whilst others are currently being processed so I don't see how this could work.
    Any thoughts on this? Do you agree we'll have to create a new command for every message to be processed? We'll likely have millions of Commands per day so this will make a difference for us (although if we eliminate the logic from q#1 or the dependencies are singletons it's not a big deal)
    h4. 4. Concurrency guarantees with the commandpattern
    I also want to confirm my understanding of the concurrency controls around the command pattern. Unlike an entry processor, which controls updates to the entry upon which it was invoked, the command pattern only guarantees concurrency against processing occurring within the context of the currently operating command. Commands submitted to the same context will be processed synchronously, but any entries which may have had a listener that spawned the command submission are in no way guarded. This latter point is pretty obvious, I believe, since there's no real link, but I just want to make sure my assumptions are correct.
    NB: in the scenario I am describing we do NOT need to update the original cache entry into which the account message was submitted. Instead other caches will be updated with results from additional processing logic so this is not that much of an issue for us.
    h4. 5. Confirmation of concerns with "straight" entry processor
    If we were to use a "straight" entry processor (versus command pattern which uses entry processor) which gets kicked off from a threadpool on a backing map listener (for example on insert or update), is it true that if a node were to go down, we would have issues with failover? NB: The reason we would kick off the entry processor from a threadpool would be to "simulate" asynchronous processing. As I see it, if we kicked off a thread on the listener and returned back to the client, nothing would "re-submit" the request if a node goes down. Is that correct?
    Alternatively, as I understand it, with an entry processor invoked from a client, it is the client coherence jar that receives the exception when a node goes down mid-process, and the coherence jar takes care of "re-sending" the request to another node. So - if the threadpool is managed by the client and the client kicks off an invoke in one of the threads - then I believe the client WILL re-submit the entry processor requests if the node goes down, through the coherence jar/extend. I am not sure of the details, but my point is that the client application does not have to provide any code for the "failover"; the coherence client jar performs this.
    h4. 6. Lifecycle
    I have not explored the "lifecycle" functionality available within the incubator - but as I understand it the main thing it could offer is that if we have many phases of the processing (as we do in most our use cases) - that the processing can be managed with the different lifecycles. NB: To be clear I am referring to 'live objects' with their own series of processing steps - not 100% if Lifecycle directly relates to 'live objects'. If a node goes down and is in the midst of processing 200,000 commands - the entire processing doesn't need to start over.. each request will need to go back to the previous completed phase of the lifecycle but may well avoid duplicated processing. All processing will need to be idempotent regardless, but lifecycles could avoid re-processing that was already complete.
    Is this correct?
    Other benefits?
    (e.g. configurable processing logic as alluded to in Q#1).
    Thanks very much

    Hi User 822486,
    When delving into a detailed prototype like the one you have below, it's often useful to understand the use cases and business requirements before jumping into a solution. I think it may be best for you to reach out to the Coherence organization within Oracle to further discuss these questions in detail so we can better guide you in the different ways to solve problems with Coherence and the incubator. I'll do my best to comment on your prototype and address the questions that you currently have:
    NB: The functionality of this listener can be paralleled to the sample in CommandPatternExample for one submission. I am not entirely clear where command submissions typically "tie-in" but I am planning to kick them off from a backingmaplistener. I briefly explored using the 'com.oracle.coherence.common.events.dispatching.listeners.DelegatingBackingMapListener' to dispatch the commands but not entirely how this would tie in. As I understand it the delegating backingmaplistener is used within the 'liveobjects' context and dispatches entries that implement the LifecycleAwareEntry but not sure how we would create "custom-contexts" as we require (i.e. the identifier is not for the key of the cache entry but rather a subset of that -e.g. account id versus account message id).
    Command submissions are just that: submissions to the command pattern for execution, and they can be triggered from anywhere since they run asynchronously. The DelegatingBackingMapListener and the associated eventing model provide you with the foundations for building an event-driven architecture on top of Coherence. It's used by both the Push Replication Pattern and the Messaging Pattern, which you could use as references if you wanted to go down the path of using the eventing model as well. It really comes down to your use case (which I don't have a lot of details on at the moment). An Entry that is a LifecycleAwareEntry can basically take action when its state is changed (an event occurs). As a completely bogus example, you could have an AccountMessageDispatcher object in a cache with a DelegatingBackingMapListener configured, and you could submit EntryProcessors to this dispatcher that give it a set of messages to perform for a set of accounts. The dispatcher could then, every time it is updated, submit commands for execution. In essence it's formalizing an approach to responding to events on entries - or server-side event-driven programming.
    h2. Grid/server-side business logic deployment
    Have you looked at the processing pattern at all? It's a framework for building compute grids on top of Coherence and may have more plumbing in place for you to achieve what you're looking for. I think it may be best for us to discuss your use case in more detail to understand the pros and cons of each approach before commenting further on a solution for you.
    h2. Cleanup and Management of contexts
    Contexts are marker interfaces so they can be incredibly lightweight which should allow you to create as many of them as you need. The biggest concern is ensuring that you have enough processing power in your grid to handle the volume of work you want to manage. This should be a simple matter of figuring out your load and sizing your cluster appropriately. The initial design of the command pattern was to have a set of well established contexts that would be used repeatedly. Given that the Command Pattern is primarily an example, you could extend the DefaultContextsManager to have an unregisterContext method.
    h2. Creation of new command every time
    I'm a little confused by your requirement here. Are you saying that you have a set of pre-defined operations that you want to apply to an account, for example incrementAccountBalanceBy1? If so, I don't understand why you couldn't submit the same command instance to a context multiple times. While I wouldn't recommend using statics, you could have a CommandFactory that, once instantiated, returned the same command each time you call getCommand. Usually, however, we expect that you'll have some additional data unique to each message that the command must execute. This could be handled by having a setter on your command for these properties.
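    Purely as an illustrative sketch (plain Java, not the Incubator's Command API; the AdjustBalanceCommand and Account names are made up), the idea of reusing one command class and feeding it per-message data through a setter looks like this:
      // Generic command interface, for illustration only.
      interface Command<T> {
          void execute(T target);
      }

      // One reusable command class; the per-message value arrives via a setter
      // (or the constructor) instead of a new command subclass per operation.
      class AdjustBalanceCommand implements Command<Account> {
          private long deltaCents;

          void setDeltaCents(long deltaCents) { this.deltaCents = deltaCents; }

          public void execute(Account account) {
              account.credit(deltaCents);   // business logic acts on the target
          }
      }

      class Account {
          private long balanceCents;
          void credit(long amount) { balanceCents += amount; }
      }
    The same instance (or one per submission from a factory) can then be reused, with only the setter-supplied data changing between messages.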
    h2. Concurrency Guarantees
    The Command Pattern guarantees that, for a given context, commands are processed synchronously in the order they are received. If you have multiple submitters sending commands to the same context, then the order in which the commands are processed will be based on the order in which they arrive at the node where the Context resides. A context is the control point that gives commands their ordering.
    h2. Confirmation of concerns with "straight" entry processor
    I'm not sure if I follow your question here. EntryProcessors are guaranteed to execute, even in the failure scenario (this is why they're backed up and why they must be idempotent). If you're referring to having a backing map listener handle your processing based on events, rather than submitting commands, then it's a matter of whether you're processing the events asynchronously or not. If you are synchronously processing things and your node dies while the BML is executing, you're right: a node failure at that point will result in "nothing happening" and the client will re-try. If, however, you're asynchronously handling the events from your BML, then you could lose state. This is why we use entries the way we do in the common event layer: we persist state on an entry that we can't lose when a node fails. This allows us to asynchronously process the data after the node has been updated.
    h2. Lifecycle
    With respect to lifecycle if you're referring to LifeCycleAwareEntry - this is a way of designating that an Entry in the cache can process events when modified/mutated. This may be better discussed by phone or in person.

  • Does Touch have "remove color cast" and adjust lighting - levels like Photoshop Elements?

    Does Touch have "remove color cast" and adjust lighting -> levels like Photoshop Elements?

    There isn't a "remove color cast" command per se but there are Levels and Curves adjustments so you can effectively remove/diminish colors yourself.

  • Automate command to index in 11G

    Hi,
    We're going through a manual migration of our 5.2 instances over to 11g, and have a big list of indexed attributes. It is a really tough job to run multiple commands per suffix across various environments. I was wondering if someone has a better way to help me out.
    Thanks, John.

    For initial system configuration, I usually put my index configurations in an LDIF and then replace all the non-system indexes with mine. As long as you do this before an LDIF import that's about all there is to it. Of course there are dsconf commands to do it too if you want to go that way. If you are initializing data from a binary backup, you just need to make sure the backup comes from a system with the same indexes. IIRC the restore will fail if that's not the case.
    If what you are talking about is a multi-system reindex utility intended to change existing indexes on multiple systems that are currently in service, you need to be very careful with that. Since reindexing puts the backend into readonly, that kind of tool will have the potential to put an entire topology into readonly, with an associated topology-wide write outage. We had a thread about that a while back. If this forum's search tools worked better, I'd probably be able to fish that out for you.

  • How to change the commands from a Form when a certain item is selected???

    Hello,
    I'm developing a J2ME application and I'm having some problems. I have several items on a form, and when I'm moving from one item to another I want the commands of the form to change. Is that possible? How can it be done? I've tried some ways but without success. Please help.
    Thanks

    There is no focus listener and there aren't any focus events in J2ME.
    If you want to have this effect, you need to have only one command per item,
    so that it won't appear in the 'Options' menu.
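    A minimal sketch of that approach (MIDP 2.0 lcdui; the form and item names are made up): give each item exactly one ITEM-type command and an ItemCommandListener, so the command is shown as the item's own action instead of in the screen-level 'Options' menu, and the listener tells you which item was active:
    import javax.microedition.lcdui.*;
    import javax.microedition.midlet.MIDlet;

    public class PerItemCommands extends MIDlet implements ItemCommandListener {
        private final Command editCmd = new Command("Edit", Command.ITEM, 1);

        protected void startApp() {
            Form form = new Form("Details");
            StringItem name = new StringItem("Name", "value", Item.BUTTON);
            StringItem address = new StringItem("Address", "value", Item.BUTTON);
            // One command per item: rendered as the item's default action,
            // not collected into the screen-level 'Options' menu.
            name.addCommand(editCmd);
            name.setItemCommandListener(this);
            address.addCommand(editCmd);
            address.setItemCommandListener(this);
            form.append(name);
            form.append(address);
            Display.getDisplay(this).setCurrent(form);
        }

        public void commandAction(Command c, Item item) {
            // 'item' tells you which item was active - the closest substitute
            // for a focus listener in MIDP.
        }

        protected void pauseApp() { }
        protected void destroyApp(boolean unconditional) { }
    }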

  • Lumia 520 win8.1 voice commands.

    After windows 8.1 update.
    Voice commands have now moved to the search button and the start button now does nothing. This means I have lost the search function. Is this by design or by accident?
    Also, to avoid disturbing my sleep, I turn off Wi-Fi at night as my emails are not that urgent. I don't want to mute the phone in case an urgent call comes in. It seems that Wi-Fi Sense overrides this and turns Wi-Fi back on. What is the quickest way of ensuring silence without going through numerous clicks?

    Thanks for the update, SonicBlue. Let's further verify if the issue is SW related or not. Try the following steps and see if the 'call' voice command will work:
    1. Reboot the phone. Press and hold the volume down & the power key for 10-15 seconds until it vibrates and restarts. This won't erase any of your files, but you will need to re-set up the date & time settings. 
    2. Since the phone is already running Lumia Cyan, try checking if the system apps have already been updated as well. Go to Settings > swipe to applications > store > check for updates. 
    3. Try to use the speech feature using another language pack. This will help verify if the issue is related to the English UK language pack that you're currently using or not. Make sure that the phone's regional settings are correctly set as well. Some commands only work within supported countries. You can check this link for info about the availability of voice commands per country: Feature and service availability - Go to Speech.
    Let us know the outcome. 

  • Session facade vs Command object pattern

    Hello,
    I am debating using the Command pattern as my primary strategy for implementing my J2EE enterprise app business logic and would like some advice.
    The general idea is to have only a few types of abstract commands (such as a ReadCommand and an UpdateCommand -- I might have a slightly finer granularity than this, I don't know) and implement each use case as a command instance. For example, I might have a command called GetOrderItemsCommand which is a ReadCommand and returns the list of order items (i.e., has a getItems() method) when execute()'d.
    The result of my design will be a very small set of stateless session beans with an execute() interface. No entity beans (will use Hibernate) and may very well eventually implement a Message bean interface with a similar executeAsynchronously() type method, if eventually necessary.
    I guess the popular alternative (or, more correctly, the norm) is to implement session facades for this.
    However, I am attracted to the Command pattern because of its simplicity, its rapid application development angle, its disconnectedness from EJB (e.g., a modification to a command does not mean touching any EJB at all, just the command class), and the ability to implement cross-cutting features (such as transactions or security, although I also plan to use an AOP solution at the POJO level). At the same time, it works as a delegate of an EJB, so it's accessible to all kinds of network clients.
    I am also familiar with its drawbacks (e.g., maintainability for large numbers of commands, per the literature, such as the Marinescu 2002 EJB Patterns book).
    So, very few types of stateless session beans (though there may be many instances) -- mainly there for remote client accessibility and the ability to leverage application-level transactions if needed.
    Nevertheless, I have never tried the command pattern approach and was curious if others had feedback or case studies.
    Best regards --

    You normally use Command to decouple your controller from the view and model tiers. Your take on making command objects 'polymorphic' is interesting, and an angle I had not thought of. In general, I implement command objects rather like the Action class in Struts.
    Each command implements an execute() method declared in the ICommand interface. The constructor of the command object ensures that a given command has all the variables and data required to execute properly. I also create a likes() method that returns a boolean. That way, I can add all my commands to a handler and iterate until one returns true on likes().
    - Saish
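    For reference, a minimal sketch of that handler/likes() arrangement (generic Java; the class names are illustrative, not taken from the post):
    import java.util.ArrayList;
    import java.util.List;

    // Each command knows whether it can handle a given request (likes) and how to run (execute).
    interface ICommand {
        boolean likes(Object request);
        void execute();
    }

    // The handler iterates its registered commands until one accepts the request.
    class CommandHandler {
        private final List<ICommand> commands = new ArrayList<ICommand>();

        void register(ICommand command) { commands.add(command); }

        boolean handle(Object request) {
            for (ICommand command : commands) {
                if (command.likes(request)) {
                    command.execute();
                    return true;
                }
            }
            return false;   // no registered command liked the request
        }
    }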

  • Manipulating elements in a cluster and passing to subVI and back again

    I have done some searching to see if there was already a question like this, but I didn't see one.  If there are others that are similar enough point me there, or tell me to keep looking. (Don't do that )   Otherwise, thanks for reading.
    Background:  I am maintaining VIs written in LV 8.0.  I am working with a Strict Type Def, which is a cluster, with some ring elements, numeric, and boolean elements, totaling 21 different elements.  In the main VI, there is an event structure.  In different events, different elements of the cluster are disabled.  In some of the events, I want to call another VI with the Strict Type Def.  The new VI will need to have the appropriate elements in the cluster disabled.  There is a possibility of some elements changing these or other properties that need to be passed up to the calling VI. 
    So the question:  Is there an easy way to pass the properties of the elements in the cluster other than passing each property separately or in a cluster?  It might not just be one property per element either.
    Thanks in advance.
    brian

    For complicated GUIs I use an Action Engine designed for the app.
    In the "init" action I cache the control refs I will use during the run.
    When it comes time to change how things look, I invoke the appropriate action and set all of the properties inside the AE.
    In your case you also have to cache the ref to the sub-VI's control, but the idea is the same. Just do the same thing to both and you should be OK. You COULD cache all of the changes and then apply them later, but that complicates things.
    I hope that helps,
    Ben

  • Maxl command to logout users for particular application

    Hello Everybody,
    Can you please help me on the below question.
    I have Essbase 11.1.2.2 and a MaxL script which copies application A to B for maintenance, but during this process the script logs out all the users from the system, even though those users don't access A or B, since they are working on other applications, e.g. C, D and E.
    Currently I see the below command for logging out all users in the maxl script.
    "alter system logout session all force"
    Can you please let me know how I can log out only the users accessing particular applications/databases (for example A and B) in the MaxL script, instead of logging out all the users from Essbase.
    Thanks for your help in advance.

    You can use...
    alter system logout session on application A force
    You will need to use one command per application.  Further details in the documentation for the same command (alter system):  http://docs.oracle.com/cd/E17236_01/epm.1112/esb_tech_ref/maxl_altsys.html  In particular, see the section titled "Session Specification" in the notes at the end.

  • How to apply a fade to all video, not just one element.

    Hi. Ignorant here. New to FCP X and pretty new to video editing.  I'm doing green screen stuff and I want to fade everything up from black. Trying to match the speed of the transitions is tough.  I'm assuming there's a function to control the whole video component, not just per element...'Little help?
    Thanks.

    Thanks Tom.  Yep, that works.  That was my next go-to but I'm reticent to create compound clips as I'd imagine there's a lack of individual control at that point.  Perhaps that just makes one plan one's workflow. I was hoping there was another way but hey, compound clips are easy to work with and can always be separated when you need to manipulate them individually.  Thanks much.
