Question regarding Command pattern

Hi!
I have a question regarding the Command pattern:
//Invoker as defined in GoF Design Patterns
public class SomeServer {
    //Receiver as defined in GoF Design Patterns.
    private Receiver receiver;

    //Request from a network client.
    public void service(SomeRequest request) {
        Command cmd = CommandFactory.createCommand(request);
        cmd.execute();
    }
}
The concrete command which implements the Command interface needs a reference to the Receiver in order to execute its operation, but how is the concrete command best configured? Should I send the Receiver along with the request as a parameter to the createCommand method, should I configure the receiver inside the CommandFactory, or should I send it as a parameter to the execute method? Since SomeServer acts as both client and invoker, SomeServer "knows" about the command's receiver. Is this a bad thing?
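For example, if I go with configuring the receiver inside the CommandFactory, I imagine something like this sketch (ConcreteCommand, the factory constructor and receiver.action are just placeholders, not real code; the static factory call above would then become an instance call):
//Sketch: the factory is configured with the Receiver up front,
//so SomeServer never has to hand it around.
public class CommandFactory {
    private final Receiver receiver;

    public CommandFactory(Receiver receiver) {
        this.receiver = receiver;
    }

    public Command createCommand(SomeRequest request) {
        //Inspect the request and build the matching concrete command,
        //giving it the receiver it needs to do its work.
        return new ConcreteCommand(receiver, request);
    }
}

class ConcreteCommand implements Command {
    private final Receiver receiver;
    private final SomeRequest request;

    ConcreteCommand(Receiver receiver, SomeRequest request) {
        this.receiver = receiver;
        this.request = request;
    }

    public void execute() {
        receiver.action(request); //placeholder for the receiver's real operation
    }
}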
Regards
/Fredrik

#!/bin/bash
DATE=$(date '+%y-%m-%d')
if find . | grep -q "$DATE" ; then
    echo "OK - Backup files found"
    exit 0
else
    echo "Critical - No Backups found today!"
    exit 2
fi
should work too and it's a bit shorter.
Please remember to mark the thread as solved.

Similar Messages

  • Question regarding the "mcxquery" and "dscl -mcxread" commands:

    Question regarding the mcxquery and dscl -mcxread commands:
Does anyone know why the mcxquery and the dscl . -mcxread commands don't show any info about MCX managed login items & printers? The System Profiler's "Managed Client" section does. I'd like to see info regarding managed printers and managed login items using the mcx tools. I have Mac users running 10.5.2 with both login items and printers that are pushed out to them via MCX. The System Profiler app shows all of their policies, but the dscl . -mcxread and mcxquery tools don't. Why not?
    -D
    Message was edited by: Daniel Stranathan

    How do you "call procedures/functions" without sql code? You need at least the call statement like
    {call myProc(?,?,?)}that you wrap into a CallableStatement.
    Other than that: when you switch off autocommit, you need to call commit/rollback at the end. Usually, if you don't commit/rollback a non-autocommitted connection, the transaction get's committed/rollbacked when you close the connection - that depends on the JDBC driver. But it's never a good idea to ommit the commit/rollback calls on a non-autocommit connection. Usually you enclose your code in a try/catch block like this:
    con.setAutocommit(false);
    try {
       con.commit();
    } catch (Exception e) {
       con.rollback();
    } finally {
        con.setAutocommit(true); //or:
        con.close();
    }
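For completeness, here is a minimal sketch of wrapping the call in a CallableStatement (the connection URL, procedure parameters and their types are just assumptions for illustration):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallProcExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - replace with your driver/URL/credentials.
        try (Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass")) {
            con.setAutoCommit(false);
            try (CallableStatement cs = con.prepareCall("{call myProc(?,?,?)}")) {
                cs.setInt(1, 42);                          // first IN parameter (assumed int)
                cs.setString(2, "some value");             // second IN parameter (assumed varchar)
                cs.registerOutParameter(3, Types.VARCHAR); // third parameter assumed to be OUT
                cs.execute();
                System.out.println(cs.getString(3));       // read the OUT parameter
                con.commit();
            } catch (Exception e) {
                con.rollback();
                throw e;
            }
        }
    }
}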

  • Feedback on use of incubator command pattern

    Hi,
    We are currently prototyping some different solutions using coherence incubator (namely command pattern) and are looking for some feedback as to the viability and potential improvements to the solution.
    h3. Summary of Prototype
The prototype does the following (I have a nice sequence diagram for this but don't see a way to attach it :( ):
    + client (e.g. through coherence extend) calls local api to save a "message" for a particular account (e.g. Account id = 1234). This calls namedcache.put and inserts an entry into the cache.
    + BackingMapListener is configured for the cache into which the client indirectly inserts. In the prototype this is a spring bean that extends AbstractMultiplexingBackingMapListener - which is fully "loaded" with all the required dependencies for the processing of the message (services, etc.).
    + The listener then registers a new context (using ContextManager) using a "grouping" id based on the sequence/ordering requirements. For example, say that each message against an account needs to be processed in order. The context would get instantiated with name = "1234", so that subsequent requests for account 1234 will get queued against the context with the same name whilst the previous request(s) are still processing. Messages for other accounts would register a different context name so they will get simultaneously processed.
NB: The functionality of this listener can be paralleled to the sample in CommandPatternExample for one submission. I am not entirely clear where command submissions typically "tie in", but I am planning to kick them off from a backing map listener. I briefly explored using the 'com.oracle.coherence.common.events.dispatching.listeners.DelegatingBackingMapListener' to dispatch the commands but am not entirely sure how this would tie in. As I understand it, the delegating backing map listener is used within the 'liveobjects' context and dispatches entries that implement LifecycleAwareEntry, but I am not sure how we would create "custom contexts" as we require (i.e. the identifier is not the key of the cache entry but rather a subset of that, e.g. account id versus account message id).
    + A command is then created to process the account message, which is comprised of
- the Account which needs to be processed (the value seen by the backing map listener contains the Account itself)
    - Any components that are required during processing (services, daos, etc - service might itself be injected with daos, etc.)
+ The newly instantiated command is then submitted to the CommandSubmitter for the appropriate contextIdentifier (the one returned when registering context "1234" in our example).
    From some basic tests, the prototype is behaving as I desire - i.e. it queues and "synchronizes" the commands for the same context and also simultaneously processes commands assigned to different contexts asynchronously. That's great.
    However, there are a number of things I am exploring for the actual implementation. I believe most of these are typical concerns so I wonder if Oracle or anyone can provide some feedback from past experience/proposed recommendations:
    h3. Questions
    h4. 1. Grid/server-side Business Logic Deployment
    One of the things that has occurred to us is that ideally we would like to store the business processing logic (i.e. the heart of the processing within the command) either inside the grid or within a coherence node (i.e. made available through the classpath of the node startup).
In our case we have a few different "processing models", but ideally the processor/command will simply determine the appropriate control flow (i.e. within the command - or maybe the appropriate lifecycle if we end up using that) and associated business logic off the attributes of the object to be processed. I am not sure if our use case is typical, but to be clear we have a fair bit of business logic to be performed within the 'command', each in separate modules. In implementation, most modules will be interacting with the grid for lookups, etc., but ideally that will be abstracted from the Processor/Command, which will only know that it is using an 'accountService', for example.
Currently the business logic is "loaded" into the listener and "passed on" to the command through composition. Ideally we want the command to be lightweight and the various "processing models" would either:
a) be deployed to each node and somehow "available" to the command during execution. Would need to work out how this would become available to the execution environment; perhaps each 'Context' would wrap the processing details. However, even this is a bit too granular, as likely a processing model will apply to many contexts.
    b) Perhaps the business logic/processing components are deployed to the cache itself. Then within the command attributes on the object would be consulted to determine which processing model to "apply" and a simple lookup could return the appropriate control flow/processor(s).
c) Perhaps the different logic/flow is embedded in a different "lifecycle" for the event processing and the appropriate lifecycle is detected by the listener and appropriately applied. Even with such a model we'd still like the various processing for each phase to be maintained on the server if possible.
    Has anyone else done something like this and/or are there any thoughts about deploying the business logic to the grid this way? I see advantages/disadvantages with the different solutions, and some of them seem better for upgrades. For example if you upgrade the processing logic whilst requests are still coming in (clearly you would attempt to avoid this) and it is embedded into each node, what would happen if one node has been upgraded and a request comes to that node. Say one of the business logic modules performs a query against the cache which needs to consult another node (e.g. assuming you're using partitioned data) and that node has not received the upgrade and there's a conflict. In that regard perhaps deploying the different processing logic to a replicated cache makes more sense because once updated it should get pushed immediately to all nodes?
Are these known concerns? I'm new to grid-side processing concepts so just correct me if there's an obvious issue with this.
    h4. 2. Cleanup/Management of contexts
One thing I noticed in my prototype is that the contexts that I create don't really go away. We are envisioning creating many contexts per day (let's just say a few hundred million to be safe)
    so ...
    a) how do people normally remove the contexts? Does the command framework sort this out behind the scenes? I can see the 'stop' method on the CommandExecutor removing the context, but from a quick follow-through the only scenario which seems to potentially call this is if the context version number has changed. Is there some way to change the version when we submit additional commands to the same context?
    b) Is there an issue with creating this many Contexts? As per earlier mention, to reduce overhead ideally the context will not be too heavy but any thoughts on our intended usage? We could use something like a hashing scheme to "bucket" the requests to contexts to reduce the total number of Contexts if required but this is not ideal.
    h4. 3. Creation of new Command Every time.
    In our scenario, each command needs to act upon a given object (e.g. one account). As I see it, this requires us to create a new Command for each message, because I do not see a way to 'pass in' the object to the execute method. Setting it to the context does not work either because we need to queue a few requests to each given context; I played with wrapping the object with GenericContext and setting the value but in reality we're submitting the commands whilst others are currently being processed so I don't see how this could work.
    Any thoughts on this? Do you agree we'll have to create a new command for every message to be processed? We'll likely have millions of Commands per day so this will make a difference for us (although if we eliminate the logic from q#1 or the dependencies are singletons it's not a big deal)
    h4. 4. Concurrency guarantees with the commandpattern
I also want to confirm my understanding of concurrency controls around the command pattern. Unlike an entry processor, which controls updates to the entry upon which it was invoked, the command pattern only guarantees concurrency against processing occurring within the context of the currently operating command. Commands submitted to the same context will be processed synchronously, but any entries which may have had a listener which spawned the command submission are in no way guarded. This latter point is pretty obvious I believe since there's no real link, but I just want to make sure my assumptions are correct.
    NB: in the scenario I am describing we do NOT need to update the original cache entry into which the account message was submitted. Instead other caches will be updated with results from additional processing logic so this is not that much of an issue for us.
    h4. 5. Confirmation of concerns with "straight" entry processor
    If we were to use a "straight" entry processor (versus command pattern which uses entry processor) which gets kicked off from a threadpool on a backing map listener (for example on insert or update), is it true that if a node were to go down, we would have issues with failover? NB: The reason we would kick off the entry processor from a threadpool would be to "simulate" asynchronous processing. As I see it, if we kicked off a thread on the listener and returned back to the client, nothing would "re-submit" the request if a node goes down. Is that correct?
    ALTERNATIVELY, As I understand it, with an entry processor invoked from a client, it is the client coherence jar that receives the exception when a node goes down mid-process and the coherence jar takes care of "re-sending" the request to another node. So - if the threadpool is managed by the client and the client kicks off an invoke in one of the threads - then I believe the client WILL re-submit the entry processor requests if the node goes down - through the coherence jar/extend - not sure on the details but my point is that the client application does not have to provide any code for the "failover" but the coherence client jar performs this.
    h4. 6. Lifecycle
I have not explored the "lifecycle" functionality available within the incubator - but as I understand it the main thing it could offer is that if we have many phases of the processing (as we do in most of our use cases), the processing can be managed with the different lifecycles. NB: To be clear I am referring to 'live objects' with their own series of processing steps - I'm not 100% sure if Lifecycle directly relates to 'live objects'. If a node goes down and is in the midst of processing 200,000 commands, the entire processing doesn't need to start over; each request will need to go back to the previous completed phase of the lifecycle but may well avoid duplicated processing. All processing will need to be idempotent regardless, but lifecycles could avoid re-processing that was already complete.
    Is this correct?
    Other benefits?
    (e.g. configurable processing logic as alluded to in Q#1).
    Thanks very much
    Edited by: 822486 on 21-Dec-2010 16:23
    Edited by: 822486 on 21-Dec-2010 16:59

    Hi User 822486,
When delving into a detailed prototype like the one you have below, it's often useful to understand the use cases and business requirements before jumping into a solution. I think it may be best for you to reach out to the Coherence organization within Oracle to further discuss these questions in detail so we can better guide you in the different ways to solve problems with Coherence and the incubator. I'll do my best to comment on your prototype and address the questions that you currently have:
NB: The functionality of this listener can be paralleled to the sample in CommandPatternExample for one submission. I am not entirely clear where command submissions typically "tie in", but I am planning to kick them off from a backing map listener. I briefly explored using the 'com.oracle.coherence.common.events.dispatching.listeners.DelegatingBackingMapListener' to dispatch the commands but am not entirely sure how this would tie in. As I understand it, the delegating backing map listener is used within the 'liveobjects' context and dispatches entries that implement LifecycleAwareEntry, but I am not sure how we would create "custom contexts" as we require (i.e. the identifier is not the key of the cache entry but rather a subset of that, e.g. account id versus account message id).
Command submissions are just that, submissions to the command pattern for execution, and they can be triggered from anywhere since they run asynchronously. The DelegatingBackingMapListener and the associated eventing model provide you with the foundations for building an Event Driven Architecture on top of Coherence. It's used by both the Push Replication Pattern and the Messaging Pattern, which you could use as references if you wanted to go down the path of using the eventing model as well. It really comes down to your use case (which I don't have a lot of details on at the moment). An Entry that is a LifecycleAwareEntry can basically take action when its state is changed (an event occurs). As a completely bogus example, you could have an AccountMessageDispatcher object in a cache with a DelegatingBackingMapListener configured, and you could submit EntryProcessors to this dispatcher that give it a set of messages to perform for a set of accounts. The Dispatcher could then, every time it is updated, submit commands for execution. In essence it's formalizing an approach to responding to events on entries - or server-side event-driven programming.
    h2. Grid/server-side business logic deployment
    Have you looked at the processing pattern at all? It's a framework for building compute grids on top of Coherence and may have more plumbing in place for you to achieve what you're looking for. I think it may be best for us to discuss your use case in more detail to understand the pros and cons of each approach before commenting further on a solution for you.
    h2. Cleanup and Management of contexts
    Contexts are marker interfaces so they can be incredibly lightweight which should allow you to create as many of them as you need. The biggest concern is ensuring that you have enough processing power in your grid to handle the volume of work you want to manage. This should be a simple matter of figuring out your load and sizing your cluster appropriately. The initial design of the command pattern was to have a set of well established contexts that would be used repeatedly. Given that the Command Pattern is primarily an example, you could extend the DefaultContextsManager to have an unregisterContext method.
    h2. Creation of new command every time
I'm a little confused by your requirement here. Are you saying that you have a set of pre-defined operations that you want to apply to an account, for example incrementAccountBalanceBy1? If so, I don't understand why you couldn't submit the same command instance to a context multiple times. While I wouldn't recommend using statics, you could have a CommandFactory that returns the same command instance each time you call getCommand once it has been instantiated. Usually, however, we expect that you'll have some additional data unique to each message that the command must execute. This could be handled by having a setter on your command for these properties.
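A bare-bones sketch of what I mean by the setter approach (plain Java only - the Incubator's actual Command interface is elided here, and all names below are illustrative, not your real types):
import java.io.Serializable;

public class ProcessAccountMessageCommand implements Serializable {

    // Minimal stand-ins for the domain types mentioned in the thread.
    public static class AccountMessage implements Serializable {
        final String accountId;
        final String payload;
        public AccountMessage(String accountId, String payload) {
            this.accountId = accountId;
            this.payload = payload;
        }
    }

    public interface AccountService {
        void process(AccountMessage message);
    }

    private AccountMessage message;            // data unique to each submission
    private transient AccountService service;  // processing dependency, set before execution

    public void setMessage(AccountMessage message) { this.message = message; }
    public void setService(AccountService service) { this.service = service; }

    public void execute() {
        service.process(message);
    }

    public static void main(String[] args) {
        ProcessAccountMessageCommand cmd = new ProcessAccountMessageCommand();
        cmd.setService(m -> System.out.println("processing " + m.accountId + ": " + m.payload));
        cmd.setMessage(new AccountMessage("1234", "hello"));
        cmd.execute();
    }
}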
    h2. Concurrency Guarantees
The Command Pattern guarantees that for a given context, commands are processed synchronously in the order they are received. If you have multiple submitters sending commands to the same context, then the order in which the commands are processed will be based on the order in which they arrive at the node where the Context resides. A context is the control point that gives commands their ordering.
    h2. Confirmation of concerns with "straight" entry processor
I'm not sure if I follow your question here. EntryProcessors are guaranteed to execute, even in the failure scenario (this is why they're backed up and why they must be idempotent). If you're referring to having a backing map listener handle your processing rather than submitting commands, then it's a matter of whether you're processing the events asynchronously or not. If you are synchronously processing things and your node dies while the BML is executing, you're right: a node failure at that point will result in "nothing happening" and the client will re-try. If, however, you're asynchronously handling the events from your BML, then you could lose state. This is why we use entries the way we do in the common event layer: we persist state on an entry that we can't lose when a node fails. This allows us to asynchronously process the data after the node has been updated.
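To make the distinction concrete, here is a minimal sketch of a client-invoked EntryProcessor (the cache name, key and idempotence check are assumptions; real code would typically also add POF serialization):
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

public class MarkProcessedProcessor extends AbstractProcessor {
    @Override
    public Object process(InvocableMap.Entry entry) {
        String value = (String) entry.getValue();
        if (value != null && !value.endsWith("|processed")) { // idempotence guard
            entry.setValue(value + "|processed");
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("account-messages");
        cache.put("1234-msg-1", "hello");
        // It is this client-side invoke that Coherence re-submits to another
        // owner if the node executing it fails mid-flight.
        Object result = cache.invoke("1234-msg-1", new MarkProcessedProcessor());
        System.out.println(result);
    }
}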
    h2. Lifecycle
    With respect to lifecycle if you're referring to LifeCycleAwareEntry - this is a way of designating that an Entry in the cache can process events when modified/mutated. This may be better discussed by phone or in person.

  • Client/server RMI app using Command pattern: return values and exceptions

    I'm developing a client/server java app via RMI. Actually I'm using the cajo framework overtop RMI (any cajo devs/users here?). Anyways, there is a lot of functionality the server needs to expose, all of which is split and encapsulated in manager-type classes that the server has access to. I get the feeling though that bad things will happen to me in my sleep if I just expose instances of the managers, and I really don't like the idea of writing 24682763845 methods that the server needs to individually expose, so instead I'm using the Command pattern (writing 24682763845 individual MyCommand classes is only slightly better). I haven't used the command pattern since school, so maybe I'm missing something, but I'm finding it to be messy. Here's the setup: I've got a public abstract Command which holds information about which user is attempting to execute the command, and when, and lots of public MyCommands extending Command, each with a mandatory execute() method which does the actual dirty work of talking to the model-functionality managers. The server has a command invoker executeCommand(Command cmd) which checks the authenticity of the user prior to executing the command.
    What I'm interested in is return values and exceptions. I'm not sure if these things really fit in with a true command pattern in general, but it sure would be nice to have return values and exceptions, even if only for the sake of error detection.
First, return values. I'd like each Command to return a result, even if it's just boolean true if nothing went wrong, so in my Command class I have a private Object result with a protected setter, public getter. The idea is, in the execute() method, after doing what needs to be done, setResult(someResult) is called. The invoker on the server, after running acommand.execute(), eventually returns acommand.getResult(), which of course is cast by the client into whatever it should be. I don't see a way to do this using generics though, because I don't see a way to have the invoker's return value as anything other than Object. Suggestions? All this means is, if the client were sending a GetUserCommand cmd I'd have to cast like User user = (User)server.executeCommand(cmd), or sending an AssignWidgetToGroup cmd I'd have to cast like Boolean result = (Boolean)server.executeCommand(cmd). I guess that's not too bad, but can this be done better?
    Second, exceptions. I can have the Command's execute() method throw Exception, and the server's invoker method can in turn throw that Exception. Problem is, with a try/catch on the client side, using RMI (or is this just a product of cajo?) ensures that any exception thrown by a remote method will come back as a java.lang.reflect.InvocationTargetException. So for example, if in MyCommand.execute() I throw new MySpecialException, the server's command invoker method will in turn throw the same exception, however the try/catch on the client side will catch InvocationTargetException e. If I do e.getCause().printStackTrace(), THERE be my precious MySpecialException. But how do I catch it? Can it be caught? Nested try/catch won't work, because I can't re-throw the cause of the original exception. For now, instead of throwing exceptions the server is simply returning null if things don't go as planned, meaning on the client side I would do something like if ((result = server.executeCommand(cmd)) == null) { /* deal with it */ } else { /* process result, continue normally */ }.
    So using the command pattern, although doing neat things for me like centralizing access to the server via one command-invoking method which avoids exposing a billion others, and making it easy to log who's running what and when, causes me null-checks, casting, and no obvious way of error-catching. I'd be grateful if anyone can share their thoughts/experiences on what I'm trying to do. I'll post some of my code tomorrow to give things more tangible perspective.
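To illustrate what I mean, here's a rough sketch of a typed command (all names hypothetical; over plain RMI the remote interface itself may still have to declare Object, which is part of my question):
import java.io.Serializable;

public abstract class TypedCommand<R> implements Serializable {
    private final String user;  // who is attempting to execute the command
    protected TypedCommand(String user) { this.user = user; }
    public String getUser() { return user; }
    public abstract R execute() throws Exception;
}

class GetUserNameCommand extends TypedCommand<String> {
    private final long userId;
    GetUserNameCommand(String user, long userId) { super(user); this.userId = userId; }
    @Override public String execute() { return "name-of-" + userId; } // stand-in for the real lookup
}

class Invoker {
    // The invoker's return type follows the command's type parameter, so no cast is needed.
    <R> R executeCommand(TypedCommand<R> cmd) throws Exception {
        // authenticity check of cmd.getUser() would go here
        return cmd.execute();
    }
}

class Demo {
    public static void main(String[] args) throws Exception {
        String name = new Invoker().executeCommand(new GetUserNameCommand("someUser", 7L)); // no cast
        System.out.println(name);
    }
}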

    First of all, thanks for taking the time to read, I know it's long.
    Secondly, pardon me, but I don't see how you've understood that I wasn't going to or didn't want to use exceptions, considering half my post is regarding how I can use exceptions in my situation. My love for exception handling transcends time and space, I assure you, that's why I made this thread.
    Also, you've essentially told me "use exceptions", "use exceptions", and "you can't really use exceptions". Having a nested try/catch anytime I want to catch the real exception does indeed sound terribly weak. Just so I'm on the same page though, how can I catch an exception, and throw the cause?
try {
    // remote call via the invoker goes here
} catch (Exception e) {
    Throwable t = e.getCause();
    // now what?
}
Actually, nested try/catches everywhere is not happening, which means I'm probably going to ditch cajo unless there's some way to really throw the proper exception. I must say however that cajo has done everything I've needed up until now.
    Anyways, what I'd like to know is...what's really The Right Way (tm) of putting together this kind of client/server app? I've been thinking that perhaps RMI is not the way to go, and I'm wondering if I should be looking into more of a cross-language RPC solution. I definitely do want to neatly decouple the client from server, and the command pattern did seem to do that, but maybe it's not the best solution.
    Thanks again for your response, ejp, and as always any comments and/or suggestions would be greatly appreciated.

  • How to implement command pattern into BC4J framework?

How do I implement the Command pattern in the BC4J framework? Does BC4J only support insert/update/delete/query functionality? Could it support an execute function, like salary calculation in an HR system or posting in a GL (general ledger) system? May I create a Java object named salaryCalc which uses view objects to get the salary by employee and then writes it to the database?
    Thanks.

    BC4J makes it easy to support the command pattern, right out of the box.
    You can write a custom method on your application module class, then visit the application module wizard and see the "Client Methods" tab to select which custom methods should be exposed for invocation as task-specific commands by clients.
    BC4J is not only for Insert,Update,Delete style applications. It is a complete application framework that automates most of the typical things you need to do while building J2EE applications. You can have a read of my Simplifying J2EE and EJB Development Using BC4J whitepaper to read up on an overview of all the basic J2EE design patterns that the framework implements for you.
    Let us know if you have more specific questions on how to put the framework into practice.
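For example, a custom service method on an application module might look roughly like this sketch (the class, view object instance and attribute names are only assumptions):
import oracle.jbo.Row;
import oracle.jbo.ViewObject;
import oracle.jbo.server.ApplicationModuleImpl;

// Hypothetical application module implementation class.
public class HrServiceImpl extends ApplicationModuleImpl {

    // Custom "command" method; expose it via the Client Methods tab so
    // clients can invoke it as a task-specific operation.
    public void calculateSalaries(String departmentId) {
        ViewObject employees = findViewObject("EmployeesView"); // assumed VO instance name
        employees.setWhereClause("DEPARTMENT_ID = :1");
        employees.setWhereClauseParam(0, departmentId);
        employees.executeQuery();
        while (employees.hasNext()) {
            Row emp = employees.next();
            Object currentSalary = emp.getAttribute("Salary");  // assumed attribute
            // ... apply the salary calculation here and write the result back
            //     with emp.setAttribute("Salary", newValue) ...
        }
        getDBTransaction().commit(); // persist the outcome of the "command"
    }
}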

  • Coherence 12 Command Pattern Not Working

We are migrating to the latest version of Coherence 12. We are using the Command Pattern in our code. Following is the error I am getting:
    com.oracle.coherence.common.finitestatemachines.annotation.OnEnterState(value=STARTING) is not compatible with the required methid signature 'Instruction methid(State,State,Context<State>)
    Can you please suggest what can be wrong?
    Also is there any place where I can download the latest incubator jars directly?
    Regards,
    Ashish

Added a Jira ticket for this; the issue can be tracked there as COHINC-94, so closing the thread.
    Ashish Garg

  • 3 questions regarding duplicate script

    3 questions regarding duplicate script
    Here is my script for copying folders from one Mac to another Mac via Ethernet:
    (This is not meant as a backup, just to automatically distribute files to the other Mac.
    For backup I'm using Time Machine.)
    cop2drop("Macintosh HD:Users:home:Desktop", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Documents", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Pictures", "zome's Public Folder:Drop Box:")
    cop2drop("Macintosh HD:Users:home:Sites", "zome's Public Folder:Drop Box:")
    on cop2drop(sourceFolder, destFolder)
    tell application "Finder"
    duplicate every file of folder sourceFolder to folder destFolder
    duplicate every folder of folder sourceFolder to folder destFolder
    end tell
    end cop2drop
    1. One problem I haven't sorted out yet: How can I modify this script so that
    all source folders (incl. their files and sub-folders) get copied
as corresponding destination folders (same names) under the Drop Box?
    (At the moment the files and sub-folder arrive directly in the Drop Box
    and mix with the other destination files and sub-folders.)
2. Every time before a duplicate starts, I have to confirm this message:
    "You can put items into "Drop Box", but you won't be able to see them. Do you want to continue?"
    How can I avoid or override this message? (This script shall run in the night,
    when no one is near the computer to press OK again and again.)
    3. A few minutes after the script starts running I get:
    "AppleScript Error - Finder got an error: AppleEvent timed out."
    How can I stop this?
    Thanks in advance for your help!

    Hello
    In addition to what red_menace has said...
    1) I think you may still use System Events 'duplicate' command if you wish.
    Something like SCRIPT1a below. (Handler is modified so that it requires only one parameter.)
    *Note that the 'duplicate' command of Finder and System Events duplicates the source into the destination. E.g. A statement 'duplicate folder "A:B:C:" to folder "D:E:F:"' will result in the duplicated folder "D:E:F:C:".
    --SCRIPT1a
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    with timeout of 36000 seconds
    tell application "System Events"
    duplicate folder sourceFolder to folder destFolder
    end tell
    end timeout
    end cop2drop
    --END OF SCRIPT1a
2) I don't know the said error -8068 thrown by Finder. It's likely a private Finder error code which is not listed in any of the public headers. And if it is a Finder thing, you may or may not see a different error, which would be more helpful, when using System Events to copy things into the Public Folder. Also, you may create a normal folder, e.g. named 'Duplicate', in the Public Folder and use it as the destination.
3) If you use rsync(1) and want to preserve extended attributes, resource forks and ACLs, you need to use the -E option. So at least 'rsync -aE' would be required. And I remember the looong thread that failed to tame rsync for your backup project...
    4) As for how to get POSIX path of file/folder in AppleScript, there're different ways.
    Strictly speaking, POSIX path is a property of alias object. So the code to get POSIX path of a folder whose HFS path is 'Macintosh HD:Users:home:Documents:' would be :
    POSIX path of ("Macintosh HD:Users:home:Documents:" as alias)
    POSIX path of ("Macintosh HD:Users:home:Documents" as alias)
    --> /Users/home/Documents/
    The first one is the cleanest code because HFS path of directory is supposed to end with ":". The second one also works because 'as alias' coercion will detect whether the specified node is file or directory and return a proper alias object.
    And as for the code :
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    It is to strip the trailing '/' from POSIX path of directory and get '/Users/home/Documents', for example. I do this because in shell commands, trailing '/' of directory path is not required and indeed if it's present, it makes certain command behave differently.
    E.g.
    Provided /a/b/c and /d/e/f are both directory, cp /a/b/c /d/e/f will copy the source directory into the destination directory while cp /a/b/c/ /d/e/f will copy the contents of the source directory into the destination directory.
    The rsync(1) behaves in the same manner as cp(1) regarding the trailing '/' of source directory.
    The ditto(1) and cp(1) behave differently for the same arguments, i.e., ditto /a/b/c /d/e/f will copy the contents of the source directory into the destination directory.
5) In case, here are revised versions of the previous SCRIPT2 and SCRIPT3, which require only one parameter. They will also append any error output to a file named 'cop2dropError.txt' on the current user's desktop.
    *These commands with the current options will preserve extended attributes, resource forks and ACLs when run under 10.5 or later.
    --SCRIPT2a - using cp(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
    set sh to "cp -pR " & quoted form of src & " " & quoted form of dst
    do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT2a
    --SCRIPT3a - using ditto(1)
    cop2drop("Macintosh HD:Users:home:Documents")
    on cop2drop(sourceFolder)
    set destFolder to "zome's Public Folder:Drop Box:"
    set src to (sourceFolder as alias)'s POSIX Path's text 1 thru -2
    set dst to (destFolder as alias)'s POSIX Path's text 1 thru -2
    set sh to "src=" & quoted form of src & ";dst=" & quoted form of dst & ¬
    ";ditto "${src}" "${dst}/${src##*/}""
    do shell script (sh & " 2>>~/Desktop/cop2dropError.txt")
    end cop2drop
    --END OF SCRIPT3a
    Good luck,
    H
    Message was edited by: Hiroto (fixed typo)

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the arrange window however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see on your position box that pops up) Now, will this MIDI note actually be played back at that specific sample point, or will it round the event to the closest tick? (example, if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a midi template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio, so it had a 1 sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, and faded in, and made the start point the region start position). I saved the region as a new audio file, and loaded it up in the exs sampler.
I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick, at 120, apparently). After bouncing all of these (cycle position remained fixed, only the MIDI region was moving) I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
    SO.
Not only can you move MIDI regions by sample, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick

  • Question regarding homehub and Open reach router -...

    Hi all,
      I had infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware though.
      I run both my BT openreach router and BT Home hub from the same power socket. The problem is, if I turn the plug on so both the Homehub and Openreach Router start up at the same time, the home hub will never get an Internet connection from the router. To solve this I have to turn the BT home hub on first and leave it for a minute, then start the router up and it all works fine. I'm just curious if this is the norm or do I have some faulty hardware?
  Secondly, I appreciate the estimated speed BT quotes isn't always accurate. I was quoted 49 Mbit/s down but received 38 Mbit/s down, which I was happy with. Recently though it has dropped to 30, and I am worried this might continue to drop over time; at present I am 20 Mbit/s down on the estimate. For the record, 30 Mbit/s is actually fine and probably more than I would ever need. If I could boost it somehow, though, I would be interested to hear from you.
    Thanks, .

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.

  • Question regarding IWDTree and context Value Node naming

    Hi,
    I have a question regarding the IWDTree / IWDTreeNodeType components.
    I have a context looking like this:
    Context
      + ResponseNode
        + PersonNode (1..1)
          + PersonAddressNode                    (empty node, placeholder)
          | + AdresNode (0..n)
          + PersonChildNode                      (empty node, placeholder)
          | + PersonNode (0..n)
          |   + PersonAddressNode                (empty node, placeholder)
          |     + AddressNode (0..n)
          + PersonParentsNode                    (empty node, placeholder)
            + PersonNode (0..n)
              + PersonAddressNode                (empty node, placeholder)
                + AddressNode (0..n)
    The context represents a person, a person's address, and a person's children and parents with their respective addresses.
    As a result, on different branches, a PersonNode and AddressNode can appear.
And for some strange reason, all PersonNodes and AddressNodes link to the same ResponseNode.PersonNode.PersonParentsNode.PersonNode and ResponseNode.PersonNode.PersonParentsNode.PersonNode.PersonAddressNode.AddressNode respectively, regardless of their branch...
    Is it illegal to have multiple PersonNode and AddressNode node names, and should they be named uniquely?

Generally, node names need to be unique inside the context; attributes in different nodes can have the same names. I wonder if the context structure you described will result in code without compile errors.
    The WD Tree can only be used with recursive context nodes or with a hierarchy of non-singleton child nodes.
    Can you give an example how your tree should look like at runtime?

  • Question regarding roaming and data usage

    I am currently out of my main country of service, and as such I have a question regarding roaming and data usage.
I am told that airplane mode is sufficient to keep the phone from roaming, but does this apply to any background data usage for applications and such?
If the phone is in airplane mode, is all use of the phone, including wifi and application use over wifi, free of any extra roaming charges?

    Ann154 wrote:
    If you are getting charged to use the wifi, then it is possible.  Otherwise no
Just to elaborate here: Ann154 is referring to access charges for wifi, which have nothing to do with Verizon - for example if you are using it on a plane, in a hotel, an internet cafe etc. that charges for wifi rather than it being free. Verizon does not charge you for (or indeed know about!) wifi usage, or any other usage that is not on their cellular network (such as using a foreign SIM in a global phone), so these charges, if any, will not show up on the Verizon bill app. Having it in airplane mode prevents all cellular data traffic, so you should be fine.

  • Question regarding MM and FI integration

    Hi Experts
    I have a question regarding MM and FI integration
Is the transaction key in OMJJ the same as the OBYC transaction key?
If yes, then why can't I see transaction key BSX in movement type 101?
    Thanks

No, they are not the same.  The movement type transaction (OMJJ) links the account key and account modifier to specific movement types.  Transaction code (OBYC) contains the account assignments for all material document postings, whether they are movement type dependent or not.  Account key BSX is not movement type dependent.  Instead, BSX is dependent on the valuation class of the material, so it won't show in OMJJ.
    thanks,

Question regarding 3G and wifi

I have a question regarding 3G and wifi. I have 3G activated as well as wifi. When I go to retrieve mail, for example, I get a pop up asking me if I want to connect to a wifi network… Should I have wifi and 3G activated at the same time, and why am I getting the pop up?
    Thanks

    You can have them on at the same time, but they will not be used at the same time for data. The order of preference for data is WiFi > 3G > EDGE > GPRS. You're getting the pop up, most likely, because you have Settings > Wi-Fi > Ask to Join Networks set to ON. You can set that to OFF, and the iPhone will still join known (i.e. previously used) WiFi networks automatically.

  • Question regarding Dashboard and column prompt

    My question regarding Dashboard and column prompt:
1) Dashboard prompts usually work only for columns which are in the subject area. In my report I've created some columns which are based on other columns. For example, I have a daysNumber column that is based on two other columns, as it calculates the difference of two dates. When I create a dashboard prompt I can't find this column there. I need to make a prompt on this column.
2) For one of the columns I have only two values, 1 and 0. When I create a prompt for this column, is it possible that the drop-down list shows 'Yes' for 1 and 'No' for 0 and still filters the request?

    Hi Toony,...
    I think there was another way of doing this...
    In the dashboard prompt go to Show option > select SQL Results from dropdown.
    There you need to write your Logical SQL like...
    SELECT CASE WHEN 1=0 THEN PERIODS.YEAR ELSE difference of date functionality END FROM SubjectAreaName
Here, Periods.Year is a column which already exists in the repository's presentation layer,
and 'difference of date functionality' is the code or formula of the column which you want to show in the drop-down...
    Also write the CASE WHEN 1=0 THEN PERIODS.YEAR ELSE difference of date functionality END code in fx of that prompt.
    I think it helps you in doing this..
    Just check and inform me if it works...
    Thanks & Regards
    Kishore Guggilla
    Edited by: Kishore Guggilla on Oct 31, 2008 9:35 AM

  • Questions regarding customisation/configuration of PS CS4

    Hello
    I have accumulated a list of questions regarding customising certain things in Photoshop. I don't know if these things are doable and if so, how.
    Can I make it so that the list of blending options for a layer is by default collapsed when you first apply any options?
    Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions

    NyanPrime wrote:
    <answered above>
    Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions
    2.  No.  It's a sore spot that got some forum time when Photoshop CS4 was first released, then again with CS5.  It's said that the rules change slightly when using full-screen mode, though I personally haven't tried it.
    3.  Not sure, since I haven't tried it.  However, you may want to explore the Edit - Keyboard Shortcuts... menu, if you haven't already.
4.  What buttons are you talking about?  Those you are creating in your document?  If so, choose the layer you want to lock in the LAYERS panel, then look at the little buttons just above the listing of the layers.
    5.  There are many, many options for positioning and sizing panels.  Most start with making a panel visible, then dragging it somewhere by its little tab.  One of the important features is that you can save your preferred layout as a named workspace.  Choose the Window - Workspace - New Workspace... to create a new named workspace (or to update one you've already created).  The name of that menu is a little confusing.  Once you have created your workspace, if something gets out of place, choose Window - Workspace - Reset YourNamedWorkspace to bring it back to what was saved.
    You'll find that panels like to "stick together", which helps with arranging them outside of the Photoshop main window.
As an example, I use two monitors, and this is my preferred layout.
    6.  No, it's not possible to affect the layer names Photoshop generates, as far as I know.  I have gotten in the habit of immediately naming them per their usage, so that I don't confuse myself (something that's getting easier and easier to do...).
    Hope this helps!
    -Noel
