SEQUENTIAL EXECUTION OF MULTIPROVIDER

Hi All,
Can anybody tell me a scenario in which I should go for sequential execution of a MultiProvider query instead of the default parallel execution?
Thanks!

Hi,
Check this out: http://help.sap.com/saphelp_nw04/helpdata/en/de/bcb73d73a5f972e10000000a114084/content.htm
It gives you complete information about this.

Similar Messages

  • Parallel query execution on Multiprovider with noncumulative KF

    Hello !
    We built a MultiProvider (MP) on three non-overlapping (disjoint) basis cubes, which are all stock cubes, partitioned by 0PLANT and copies of 0IC_C03.
    The Multiprovider-explain of TX RSRT says:
    "The MultiProvider query is executed sequentially (reason: NCUM)".
    OK so far; maybe it's not possible to execute queries in parallel on that MultiProvider even if it were desirable.
    But I found note 781921 which says as symptom:
    "If a MultiProvider query with non-cumulative key figures is processed in a parallel way, the system terminates due to a type conflict". That means that queries CAN be executed in parallel on MPs with noncumulative key figures.
    Does anybody know whether that type of query can run in parallel or not?
    Any advice is appreciated.
    Kind regards, Philipp

    Note 717451 solves this.

  • Sequential execution of task

    Hello,
    I've written two tasks, namely Task A and Task B as shown in the attachment.
    My program works this way:
    1. First, the 1st iteration of Task A executes. Upon completion, it waits 0.5 s, and the data from the 1st iteration of Task A is passed to Task B to be executed.
    2. Upon completion of the 1st iteration of Task B, it waits 0.5 s, and the data from the 1st iteration of Task B is passed back to Task A to be used in iteration no. 2.
    3. The iterations continue and stop when N = 600.
    I'm not quite sure how I should proceed to wire Task A and Task B together to complete my program. Would it help to use the Timed Loop.vi to combine these two tasks? Or are there better suggestions for my problem?
    Really appreciate your help in advance. Thank you
    Attachments:
    Sequential.jpg ‏24 KB

    Since the total number of iterations is known from the beginning, you might want to use a FOR loop. Place task A and task B in two cases of a case structure, and execute case A on even iterations and case B on odd iterations. Keep the result in a shift register to be fed to the task in the next iteration. Are the tasks much faster than 500 ms, or slower?
    You also need to define what kind of input task A should get on the first call and what should happen with the last output of task B.
    See how far you get and show us some real code once you have something wired up.
    LabVIEW Champion . Do more with less code and in less time .
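    The alternating-case pattern described above can be sketched in text form; this is a hypothetical Java analog (taskA/taskB are stand-ins for the real tasks, and a plain carried variable plays the role of the shift register):

    ```java
    public class AlternatingTasks {
        // Hypothetical stand-ins for Task A and Task B: each transforms the carried value.
        static int taskA(int in) { return in + 1; }
        static int taskB(int in) { return in * 2; }

        public static void main(String[] args) throws InterruptedException {
            int n = 6;             // total number of iterations (N = 600 in the original post)
            int carried = 0;       // plays the role of the shift register
            for (int i = 0; i < n; i++) {
                // Even iterations run task A, odd iterations run task B.
                carried = (i % 2 == 0) ? taskA(carried) : taskB(carried);
                Thread.sleep(500); // the 0.5 s wait between tasks
            }
            System.out.println(carried);
        }
    }
    ```

    The initial value of `carried` is the "input to task A on the first call" that the reply asks about.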

  • Sequential Execution of database statements in ABAP

    Hi,
    I want to update some rows of a user-defined purchase order table, and my requirement is to change the status in that table after I click on a particular record.
    Can we do that kind of event handling in ABAP?
    My table structure is like this
    Req_Cust_Num, Req_Date,PONum, Meterial_Num, Qty, Price,Supp_Cust_Num, Req_Delivery_Date and Status.
    At the time of insertion, the status of all records is InQueue.
    I want to update by a particular date and Supp_Cust_Num.
    Is there any option to update the status to Processed after clicking on the row?
    Regards,
    Pushparaju.B.

    Hi Pushparaj,
    I think this can be done like this:
    1. Get the particular record by using sy-ucomm.
    2. Then compare it with the table and update the particular record.
    with regards,
    S.Barane

  • Sequential mediator routing rules execution

    Have a mediator with three routing rules executed in a sequential fashion.
    The first routing rule simply writes to a file; the second and the third each do something else. The third (or second) routing rule fails, and supposedly all routing rules will roll back, since sequential execution runs in a single thread.
    Question: Does the data in the first routing rule get written out to its file?
    Or does this rollback in a sequential set of routing rules apply only to a transactional routing rule - i.e. writing to a database?
    Casey

    Hi Casey,
    According to the scenario you have mentioned, let's say the first routing rule writes a file, the second rule inserts into a table, and the third rule calls a web service.
    The first and second rules execute fine. If the third routing rule then fails, the second routing rule's transaction will be rolled back (if the data source is XA-enabled), but the file written by the first routing rule will still be there; that part is not rolled back.
    Hope this helps,
    N
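    A minimal Java sketch of why the file survives (plain file I/O is not enlisted in any transaction, so there is nothing to roll back when a later step fails; names are hypothetical):

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class NonTransactionalFile {
        public static void main(String[] args) throws IOException {
            Path out = Files.createTempFile("rule1-", ".txt");
            // "Routing rule 1": a plain file write, outside any XA transaction.
            Files.writeString(out, "written by routing rule 1");
            try {
                // Simulate "routing rule 3" failing afterwards.
                throw new IllegalStateException("routing rule 3 failed");
            } catch (IllegalStateException e) {
                // Nothing rolls the file back: it is still on disk with its content intact.
                System.out.println(Files.readString(out));
            }
            Files.delete(out);
        }
    }
    ```

    A database insert enlisted in an XA transaction would, by contrast, be undone by the transaction manager at this point.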

  • Parallel execution of interfaces

    Hi. In an ODI package I can place my interfaces and join each pair of them with two lines: 'ok' (successful) and 'ko' (unsuccessful). So I get sequential execution of these interfaces. How can I make ODI execute them in parallel?

    To do that, create a scenario from each of the interfaces (right-mouse button on the interface, Generate Scenario) and drag the scenario onto the package rather than the interface. This gives you an Execute Scenario tool, which you should set to execute asynchronously. Execute each of the interfaces, and then use an OdiWaitForChildSession tool to wait for the completion of the child sessions. If only some of the tasks you execute asynchronously are on the critical path, you can use keywords when you start the executions and in the wait tool: give the critical-path executions a keyword CP, and in the wait tool, wait on the keyword CP.
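    The fork-then-wait shape of that design (start children asynchronously, then block on their completion) looks like this hypothetical Java sketch, with an ExecutorService standing in for the ODI agent and Future.get() standing in for OdiWaitForChildSession:

    ```java
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ForkThenWait {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(3);
            // Start each "interface scenario" asynchronously.
            List<Future<String>> children = List.of(
                    pool.submit(() -> "interface-1 done"),
                    pool.submit(() -> "interface-2 done"),
                    pool.submit(() -> "interface-3 done"));
            // Wait for the completion of the child sessions before moving on.
            for (Future<String> f : children) {
                System.out.println(f.get());
            }
            pool.shutdown();
        }
    }
    ```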

  • Parallel execution and temporary tablespaces

    I have a large long running (1 hour) data warehouse query in a materialized view.
    If I parallelize it using the parallel hint then I run out of temporary tablespace.
    I've tried creating a bunch of temporary tablespaces and putting them into a temp tablespace group but it still runs out of space. Parallel execution seems to use up way more temp tablespace than sequential execution.
    I know it is a very general question, but what are the tips for parallelizing a long running query with respect to temporary tablespace management?
    I've tried searching on the interwebs but I don't find anything that addresses this particular issue.

    And here is the parallel explain plan:
    PLAN_TABLE_OUTPUT
    Plan hash value: 1293981491
    | Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
    | 0 | SELECT STATEMENT | | 81M| 17G| | 19232 (2)| 00:00:01 | | | |
    | 1 | TABLE ACCESS BY INDEX ROWID | MART$SC_SCORES | 1 | 13 | | 3 (0)| 00:00:01 | | | |
    |* 2 | INDEX UNIQUE SCAN | SSCS_SDCC_FK_I | 1 | | | 2 (0)| 00:00:01 | | | |
    | 3 | PX COORDINATOR | | | | | | | | | |
    | 4 | PX SEND QC (RANDOM) | :TQ10017 | 81M| 17G| | 19232 (2)| 00:00:01 | Q1,17 | P->S | QC (RAND) |
    |* 5 | HASH JOIN RIGHT OUTER BUFFERED | | 81M| 17G| | 19232 (2)| 00:00:01 | Q1,17 | PCWP | |
    | 6 | PX RECEIVE | | 41925 | 491K| | 2 (0)| 00:00:01 | Q1,17 | PCWP | |
    | 7 | PX SEND BROADCAST | :TQ10014 | 41925 | 491K| | 2 (0)| 00:00:01 | Q1,14 | P->P | BROADCAST |
    | 8 | PX BLOCK ITERATOR | | 41925 | 491K| | 2 (0)| 00:00:01 | Q1,14 | PCWC | |
    | 9 | INDEX FAST FULL SCAN | I_DWH_ZIP_ZIPCODE_EIDIID | 41925 | 491K| | 2 (0)| 00:00:01 | Q1,14 | PCWP | |
    |* 10 | HASH JOIN | | 81M| 16G| | 19218 (2)| 00:00:01 | Q1,17 | PCWP | |
    | 11 | JOIN FILTER CREATE | :BF0000 | 6414K| 159M| | 806 (2)| 00:00:01 | Q1,17 | PCWP | |
    | 12 | PX RECEIVE | | 6414K| 159M| | 806 (2)| 00:00:01 | Q1,17 | PCWP | |
    | 13 | PX SEND HASH | :TQ10015 | 6414K| 159M| | 806 (2)| 00:00:01 | Q1,15 | P->P | HASH |
    | 14 | PX BLOCK ITERATOR | | 6414K| 159M| | 806 (2)| 00:00:01 | Q1,15 | PCWC | |
    |* 15 | INDEX FAST FULL SCAN | I_DWH_ADDRESS_COMB_ZIP | 6414K| 159M| | 806 (2)| 00:00:01 | Q1,15 | PCWP | |
    | 16 | PX RECEIVE | | 80M| 14G| | 18397 (2)| 00:00:01 | Q1,17 | PCWP | |
    | 17 | PX SEND HASH | :TQ10016 | 80M| 14G| | 18397 (2)| 00:00:01 | Q1,16 | P->P | HASH |
    | 18 | JOIN FILTER USE | :BF0000 | 80M| 14G| | 18397 (2)| 00:00:01 | Q1,16 | PCWP | |
    |* 19 | HASH JOIN RIGHT OUTER BUFFERED | | 80M| 14G| | 18397 (2)| 00:00:01 | Q1,16 | PCWP | |
    | 20 | PX RECEIVE | | 42M| 409M| | 827 (2)| 00:00:01 | Q1,16 | PCWP | |
    | 21 | PX SEND HASH | :TQ10012 | 42M| 409M| | 827 (2)| 00:00:01 | Q1,12 | P->P | HASH |
    | 22 | PX BLOCK ITERATOR | | 42M| 409M| | 827 (2)| 00:00:01 | Q1,12 | PCWC | |
    | 23 | MAT_VIEW ACCESS FULL | MBI$CMN_ACTION_COST | 42M| 409M| | 827 (2)| 00:00:01 | Q1,12 | PCWP | |
    | 24 | PX RECEIVE | | 80M| 14G| | 17549 (1)| 00:00:01 | Q1,16 | PCWP | |
    | 25 | PX SEND HASH | :TQ10013 | 80M| 14G| | 17549 (1)| 00:00:01 | Q1,13 | P->P | HASH |
    |* 26 | HASH JOIN BUFFERED | | 80M| 14G| | 17549 (1)| 00:00:01 | Q1,13 | PCWP | |
    | 27 | PX RECEIVE | | 6312K| 794M| | 7519 (1)| 00:00:01 | Q1,13 | PCWP | |
    | 28 | PX SEND HASH | :TQ10010 | 6312K| 794M| | 7519 (1)| 00:00:01 | Q1,10 | P->P | HASH |
    |* 29 | HASH JOIN RIGHT OUTER BUFFERED | | 6312K| 794M| | 7519 (1)| 00:00:01 | Q1,10 | PCWP | |
    | 30 | VIEW | | 4443K| 80M| | 2125 (2)| 00:00:01 | Q1,10 | PCWP | |
    | 31 | HASH GROUP BY | | 4443K| 33M| 158M| 2125 (2)| 00:00:01 | Q1,10 | PCWP | |
    | 32 | PX RECEIVE | | 10M| 78M| | 950 (1)| 00:00:01 | Q1,10 | PCWP | |
    | 33 | PX SEND HASH | :TQ10007 | 10M| 78M| | 950 (1)| 00:00:01 | Q1,07 | P->P | HASH |
    | 34 | PX BLOCK ITERATOR | | 10M| 78M| | 950 (1)| 00:00:01 | Q1,07 | PCWC | |
    | 35 | TABLE ACCESS FULL | DWH$PHONE | 10M| 78M| | 950 (1)| 00:00:01 | Q1,07 | PCWP | |
    |* 36 | HASH JOIN | | 6312K| 680M| | 5392 (1)| 00:00:01 | Q1,10 | PCWP | |
    | 37 | PX RECEIVE | | 6329K| 36M| | 130 (2)| 00:00:01 | Q1,10 | PCWP | |
    | 38 | PX SEND HASH | :TQ10008 | 6329K| 36M| | 130 (2)| 00:00:01 | Q1,08 | P->P | HASH |
    | 39 | PX BLOCK ITERATOR | | 6329K| 36M| | 130 (2)| 00:00:01 | Q1,08 | PCWC | |
    | 40 | INDEX FAST FULL SCAN | PK_DWH_DEBTOR | 6329K| 36M| | 130 (2)| 00:00:01 | Q1,08 | PCWP | |
    | 41 | PX RECEIVE | | 6312K| 644M| | 5259 (1)| 00:00:01 | Q1,10 | PCWP | |
    | 42 | PX SEND HASH | :TQ10009 | 6312K| 644M| | 5259 (1)| 00:00:01 | Q1,09 | P->P | HASH |
    |* 43 | HASH JOIN RIGHT OUTER BUFFERED| | 6312K| 644M| | 5259 (1)| 00:00:01 | Q1,09 | PCWP | |
    | 44 | PX RECEIVE | | 3689K| 31M| | 4271 (1)| 00:00:01 | Q1,09 | PCWP | |
    | 45 | PX SEND HASH | :TQ10005 | 3689K| 31M| | 4271 (1)| 00:00:01 | Q1,05 | P->P | HASH |
    | 46 | VIEW | | 3689K| 31M| | 4271 (1)| 00:00:01 | Q1,05 | PCWP | |
    | 47 | HASH GROUP BY | | 3689K| 56M| 84M| 4271 (1)| 00:00:01 | Q1,05 | PCWP | |
    | 48 | PX RECEIVE | | 3689K| 56M| | 3653 (1)| 00:00:01 | Q1,05 | PCWP | |
    | 49 | PX SEND HASH | :TQ10003 | 3689K| 56M| | 3653 (1)| 00:00:01 | Q1,03 | P->P | HASH |
    |* 50 | HASH JOIN | | 3689K| 56M| | 3653 (1)| 00:00:01 | Q1,03 | PCWP | |
    | 51 | BUFFER SORT | | | | | | | Q1,03 | PCWC | |
    | 52 | PX RECEIVE | | 3 | 21 | | 1 (0)| 00:00:01 | Q1,03 | PCWP | |
    | 53 | PX SEND BROADCAST | :TQ10000 | 3 | 21 | | 1 (0)| 00:00:01 | | S->P | BROADCAST |
    | 54 | INLIST ITERATOR | | | | | | | | | |
    |* 55 | INDEX RANGE SCAN | I_DWH_PAYMENT_TYPE_EIDIID | 3 | 21 | | 1 (0)| 00:00:01 | | | |
    | 56 | PX BLOCK ITERATOR | | 28M| 242M| | 3648 (1)| 00:00:01 | Q1,03 | PCWC | |
    |* 57 | TABLE ACCESS FULL | DWH$PAYMENT | 28M| 242M| | 3648 (1)| 00:00:01 | Q1,03 | PCWP | |
    | 58 | PX RECEIVE | | 6312K| 589M| | 986 (2)| 00:00:01 | Q1,09 | PCWP | |
    | 59 | PX SEND HASH | :TQ10006 | 6312K| 589M| | 986 (2)| 00:00:01 | Q1,06 | P->P | HASH |
    |* 60 | HASH JOIN | | 6312K| 589M| | 986 (2)| 00:00:01 | Q1,06 | PCWP | |
    | 61 | PX RECEIVE | | 2937 | 172K| | 5 (20)| 00:00:01 | Q1,06 | PCWP | |
    | 62 | PX SEND BROADCAST | :TQ10004 | 2937 | 172K| | 5 (20)| 00:00:01 | Q1,04 | P->P | BROADCAST |
    |* 63 | HASH JOIN BUFFERED | | 2937 | 172K| | 5 (20)| 00:00:01 | Q1,04 | PCWP | |
    | 64 | PX RECEIVE | | 220 | 1540 | | 2 (0)| 00:00:01 | Q1,04 | PCWP | |
    | 65 | PX SEND HASH | :TQ10001 | 220 | 1540 | | 2 (0)| 00:00:01 | Q1,01 | P->P | HASH |
    | 66 | PX BLOCK ITERATOR | | 220 | 1540 | | 2 (0)| 00:00:01 | Q1,01 | PCWC | |
    | 67 | TABLE ACCESS FULL | DWH$MANDATOR | 220 | 1540 | | 2 (0)| 00:00:01 | Q1,01 | PCWP | |
    | 68 | PX RECEIVE | | 2937 | 152K| | 2 (0)| 00:00:01 | Q1,04 | PCWP | |
    | 69 | PX SEND HASH | :TQ10002 | 2937 | 152K| | 2 (0)| 00:00:01 | Q1,02 | P->P | HASH |
    | 70 | PX BLOCK ITERATOR | | 2937 | 152K| | 2 (0)| 00:00:01 | Q1,02 | PCWC | |
    | 71 | TABLE ACCESS FULL | DWH$PACKAGE | 2937 | 152K| | 2 (0)| 00:00:01 | Q1,02 | PCWP | |
    | 72 | PX BLOCK ITERATOR | | 6312K| 228M| | 980 (1)| 00:00:01 | Q1,06 | PCWC | |
    | 73 | TABLE ACCESS FULL | DWH$CASE | 6312K| 228M| | 980 (1)| 00:00:01 | Q1,06 | PCWP | |
    | 74 | PX RECEIVE | | 78M| 4199M| | 10016 (1)| 00:00:01 | Q1,13 | PCWP | |
    | 75 | PX SEND HASH | :TQ10011 | 78M| 4199M| | 10016 (1)| 00:00:01 | Q1,11 | P->P | HASH |
    | 76 | PX BLOCK ITERATOR | | 78M| 4199M| | 10016 (1)| 00:00:01 | Q1,11 | PCWC | |
    |* 77 | TABLE ACCESS FULL | DWH$ACTION | 78M| 4199M| | 10016 (1)| 00:00:01 | Q1,11 | PCWP | |
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------

  • Execution tracer idea - labview

    Hi, I have been working with LabVIEW for the past month, with hardware and software.
    I was wondering if there is a way to see the order of execution of a VI. It would be nice to see on the block diagram a trace system, numbering scheme, or drop-down flow that shows the order of operation. I think this is all done behind the scenes in LabVIEW. It might be nice to have a trace or label; maybe this already exists. I am getting the book for scientists and engineers, so maybe it is in there. Any help?
    For simple programs it's not an issue, but I have seen some complex VIs!
    I guess I am used to the old C style of programming, line by line.
    Maybe with time?
    J
    Solved!
    Go to Solution.

    NitinD wrote:
    Alternatively, help yourself by creating a simple "Log results VI" that is connected after every major operation you are doing. LabVIEW essentially divides the codes in clumps, if you can figure out what clumps LV is dividing your code into, probably your job will be easier.
    But I don't see how knowing the order of execution will help you. Even LabVIEW doesn't know it before hand. If your code is so execution-dependent perhaps you should use Frame Structures, that force sequential execution.
    Actually LabVIEW knows it pretty well, but things like timers, delays and such can and will change the execution order between successive runs sometimes.
    Also, it is quite useless information for debugging an application: if the execution order of nodes matters in a program, you have to force it (dataflow), or you have a potential race condition that can and will expose itself, at the latest, the moment your application is installed on the other side of the globe with no internet connection available to even attempt remote debugging.
    Just learn to live with it and use its advantages. It is not hard to write correct code. Things like global and local variables are an ideal way to cause race conditions, so if you make it a rule never to use them unless you have thought at least twice about why they are necessary and how to make sure there is no race condition, then you are already set for at least 50%.
    You can have race conditions in C too, but there you have to explicitly write multithreading code, which is kind of hard and is therefore only done when absolutely needed, and usually by people who understand exactly what the implications are; if they don't, they learn fast during development, or abandon the project sooner rather than later.
    Rolf Kalbermatter
    CIT Engineering Netherlands
    a division of Test & Measurement Solutions

  • How to call methods from within run()

    Seems like this must be a common question, but I cannot for the life of me, find the appropriate topic. So apologies ahead of time if this is a repeat.
    I have code like the following:
    public class MainClass implements Runnable {
        public static void main(String args[]) {
            Thread t = new Thread(new MainClass());
            t.start();
        }
        public void run() {
            if (condition)
                doSomethingIntensive();
            else
                doSomethingElseIntensive();
            System.out.println("I want this to print ONLY AFTER the method call finishes, but I'm printed before either 'Intensive' method call completes.");
        }
        private void doSomethingIntensive() {
            System.out.println("I'm never printed because run() ends before execution gets here.");
            return;
        }
        private void doSomethingElseIntensive() {
            System.out.println("I'm never printed because run() ends before execution gets here.");
            return;
        }
    }
    Question: how do you call methods from within run() and still have it be sequential execution? It seems that a method call within run() creates a new thread just for the method. BUT, this isn't true, because the Thread.currentThread().getName() names are the same inside run() and the "intensive" methods. So it's not like I can pause one until the method completes, because they're the same thread! (I've tried this.)
    So, moral of the story, is there no breaking down a thread's execution into methods? Does all your thread code have to be within the run() method, even if it's 1000 lines? Seems like this wouldn't be the case, but can't get it to work otherwise.
    Thanks all!!!
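    For what it's worth, making the code after t.start() wait until the thread has finished is normally done with Thread.join(); a minimal sketch (names hypothetical):

    ```java
    public class JoinDemo {
        public static void main(String[] args) throws InterruptedException {
            Thread t = new Thread(() -> System.out.println("intensive work done"));
            t.start();
            t.join(); // block the calling thread until t's run() has returned
            System.out.println("printed only after the thread finished");
        }
    }
    ```

    Inside run() itself, plain method calls already execute sequentially on the same thread, as the poster observed via Thread.currentThread().getName().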

    I (think I) understand the basics. What I'm confused about is whether the methods are synced on the class type or a class instance?
    The short answer is: the instance for non-static methods, and the class for static methods, although it would be more accurate to say against the instance of the Class for static methods.
    The locking associated with the "synchronized" keyword is all based around an entity called a "monitor". Whenever a thread wants to enter a synchronized method or block, if it doesn't already "own" the monitor, it will try to take it. If the monitor is owned by another thread, then the current thread will block until the other thread releases the monitor. Once the synchronized block is complete, the monitor is released by the thread that owns it.
    So your question boils down to: where does this monitor come from? Every instance of every Object has a monitor associated with it, and any synchronized method or synchronized block is going to take the monitor associated with the instance. The following:
      synchronized void myMethod() {...
    is equivalent to:
      void myMethod() {
        synchronized(this) {
      ...
    Keep in mind, though, that every Class has an instance too. You can call "this.getClass()" to get that instance, or you can get the instance for a specific class, say String, with "String.class". Whenever you declare a static method as synchronized, or put a synchronized block inside a static method, the monitor taken will be the one associated with the instance of the class in which the method was declared. In other words, this:
      public class Foo {
        synchronized static void myMethod() {...
    is equivalent to:
      public class Foo {
        static void myMethod() {
          synchronized(Foo.class) {...
    The problem here is that the instance of the Foo class is being locked. If we declare a subclass of Foo, and then declare a synchronized static method in the subclass, it will lock on the subclass and not on Foo. This is OK, but you have to be aware of it. If you declare a static resource of some sort inside Foo, it's best to make it private instead of protected, because subclasses can't really lock on the parent class (well, at least, not without doing something ugly like "synchronized(Foo.class)", which isn't terribly maintainable).
    Doing something like "synchronized(this.getClass())" is a really bad idea. Each subclass is going to take a different monitor, so you can have as many threads in your synchronized block as you have subclasses, and I can't think of a time I'd want that.
    There's also another, equivalent approach you can take, if this makes more sense to you:
      static final Object lock = new Object();
      void myMethod() {
        synchronized(lock) {
          // Stuff in here is synchronized against the lock's monitor
        }
      }
    This will take the monitor of the instance referenced by "lock". Since lock is a static variable, only one thread at a time will be able to get into myMethod(), even if the threads are calling into different instances.
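    The static-lock approach above, as a minimal runnable sketch (class and field names are hypothetical): every thread, regardless of which instance it calls through, contends for the single monitor of the object referenced by `lock`, so the increments never interleave.

    ```java
    public class StaticLockDemo {
        static final Object lock = new Object();
        static int counter = 0;

        static void increment() {
            synchronized (lock) { // one monitor shared by all instances and threads
                counter++;
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[4];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 10_000; j++) increment();
                });
                threads[i].start();
            }
            for (Thread t : threads) t.join();
            System.out.println(counter); // 4 threads x 10000 increments = 40000
        }
    }
    ```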

  • *RUNLOGIC call executes sometimes, but not others

    Hi all,
    I hope someone has insight into this problem.
    I have 2 applications, Sales & CostCenter, that each feed data into a Consol app. In both Sales & CostCenter, I use *DESTINATION_APP (both in default logic, and in a batch-mode logic) to post the values to Consol. All of this works fine.
    I also need the currency conversion logic in Consol to be triggered automatically, every time there is an update. I use *RUNLOGIC to do this, as the last step of the update.
    // From CostCenter -- Run the Consol curr conv
    *RUNLOGIC
    *APP=CONSOL
    *LOGIC=CurrConvCostCenter
    *ENDRUNLOGIC
    // From Sales -- Run the Consol curr conv
    *RUNLOGIC
    *APP=CONSOL
    *LOGIC=CurrConvSales
    *ENDRUNLOGIC
    Both of the CurrConv* logic files exist in Consol (and are identical except for a slightly different XDIM_MEMBERSET selection, before *INCLUDE'ing the core CurrConv logic).
    There is only this one *RUNLOGIC command in either of the logics -- no nesting, no sequential execution.
    My problem: it works perfectly fine from CostCenter, but not from Sales.
    I can see in the CostCenter logic debug log the complete CurrConvCostCenter data selections. On the other hand, in the Sales logic debug log, the log ends with the posting of LC values to Consol.
    When I remove all the earlier steps from the Sales logic, I can see that the problem is related to the fact that Consol's entity dimension is called "Entity" and in Sales it is "EntitySls".
    --> "Invalid Dimension : EntitySls"
    I tried referencing the Entity dim as follows (as per the admin guide) -- didn't work.
    // From Sales -- Run the Consol curr conv
    *RUNLOGIC
    *APP=CONSOL
    *DIMENSION Entity = %ENTITY_SET%
    *LOGIC=CurrConvSales
    *ENDRUNLOGIC
    --> "Invalid Dimension : EntitySls"
    I also tried the following (based on the Destination_App syntax), hoping it was an undocumented savior -- didn't work.
    // From Sales -- Run the Consol curr conv
    *RUNLOGIC
    *APP=CONSOL
    *RENAME_DIM EntitySls=Entity
    *LOGIC=CurrConvSales
    *ENDRUNLOGIC
    --> "Invalid Instruction: *RENAME_DIM EntitySls=Entity"
    And I can confirm that both CostCenter & Consol use the same "Entity" dimension, so that's why it works in one case and not the other.
    Does anyone have a solution?
    Regards,
    Tim

    Thanks for the replies, guys.
    Joost, I tried a few more *DIMENSION options (both EntitySls and Entity) and none of these seem to work.
    Marcel, that workaround is certainly a creative approach! If it comes to that extreme, I may use this option. However, I did want to include this in the default logic of the Sales app. My other options are either to replace EntitySls with Entity (a pretty massive undertaking at this point -- it'd be nice if this limitation were documented in the admin guide...) or else have the users execute the logic manually as a batch process.
    In the meantime, I recreated this in ApShell & filed a message with Support.
    What was also odd about this is that, when the ONLY command in the logic file is the RUNLOGIC command, I get an error message about invalid dimension. But when there are other commands (such as *DESTINATION_APP to transfer data from sales to consol), those execute cleanly... and then there's no error message relating to the RUNLOGIC. So it was a bit of a challenge to identify the problem.
    I haven't tested to see which other dimensions have this same limitation. My instinct tells me it's only entity/category/time that need to have the same dimension names, but I'll certainly test that before I design another set of apps where I want to do this auto-synchronization.
    If anyone else has info on that point, I'd be grateful to hear more.
    Regards,
    Tim

  • Converting of documents written in foreign languages

    Hello. How do I convert PDF documents to Word when they are written in foreign languages? I wanted to convert a document written in Czech, but the result was unreadable!

    Like Matt already said, you need the LabVIEW Development System(Evaluation/Licensed) to open the VI(s) you have.
    BTW, they are not documents; they are called Virtual Instruments (VIs), meaning a software program or its source code.
    A VI has 2 components, a Front Panel and a Block Diagram, both of them fully graphical. You may get a feel of VB when you see LV's FP, but the BD is also fully graphical in LV, unlike the text coding in VB.
    The programming paradigm of LabVIEW is based on the "Dataflow" model, whereas text-based programming languages generally follow a sequential execution model.
    - Partha
    LabVIEW - Wires that catch bugs!

  • Job TP_BROADCASTING_* and RSRD_BROADCAST_FOR_TIMEPOINT Program error

    Hi Experts,
    We have a job TP_BROADCASTING_* that is executed daily and contains the program RSRD_BROADCAST_FOR_TIMEPOINT as its second step; it is used for scheduling broadcasts. The first step contains a custom program which checks a table of (somewhat random) dates that determines whether the second step should run; otherwise the job aborts through BP_JOB_ABORT. This results in a cancelled job entry in SM37 every day (except on days when there is an entry in that table). We want to avoid having this cancelled job in the system report every day.
    One suggestion from my developer is to call the second (SAP) program from the custom program and remove it from the second step. Does that sound right? Do you have any other suggestions?

    Your error is explained in one of the WIKI pages:
    ERROR: 'Parallel processing not possible: no processing of X package(s)'
    To overcome Parallel Processing errors create a Process Chain which executes settings using report RSRD_BROADCAST_BATCH.
    Create the variants for report RSRD_BROADCAST_BATCH. Start transaction SE38 and execute the report. Select the setting(s) and save them as a variant.
    Include a process step in a process chain which executes an ABAP program. In the process step, choose the report RSRD_BROADCAST_BATCH and the variant (from step 1) in the process chain step.
    Simulate a sequential execution by adding an additional wait-step in between process steps calling report RSRD_BROADCAST_BATCH, e.g. by adding a process chain step which calls a simple Z-report that waits for a given timeframe using the ABAP statement WAIT UP TO XX SECONDS.

  • Checking multiple IF conditions

    I'm working on a stored procedure and having issues. I want to check two conditions and, if none of them is met, insert a new record. In the first two checks, if I find a match I will hold that value and update later in the script. If I can't find a match, the third step will add a new record to the location table and I will hold that value and use it to update later in the script. Any help would be appreciated.
    IF @Is_Flag = 'N'
    BEGIN
        IF EXISTS (SELECT * FROM Location WHERE Institution_Nm = @Institution_Nm)
        BEGIN
            SET @Location_Id = (SELECT Location_Id FROM Location WHERE Institution_Nm = @Institution_Nm)
        END
        ELSE
        BEGIN
            SET @Location_Id = (SELECT Location_Id FROM Research_Project_Detail
                                WHERE Research_Project_Id = @Research_Project_Id
                                  AND Term_Id = @Term_Id AND Is_Flag = 'N')
        END;
        UPDATE Location
           SET State_Id = @State_Id, Country_Id = @Country_Id, Institution_Nm = @Institution_Nm,
               ModifiedBy = @ModifiedBy, Modified_Dt = GETDATE()
         WHERE Location_Id = @Location_Id
    END
    ELSE -- CAN'T FIND ONE... INSERT NEW ONE and hold the ID
    BEGIN
        INSERT INTO Location (State_Id, Country_Id, Institution_Nm, CreatedBy, Created_Dt)
        VALUES (@State_Id, @Country_Id, @Institution_Nm, @ModifiedBy, GETDATE())
        SET @Location_Id = SCOPE_IDENTITY()
        SELECT SCOPE_IDENTITY() Location_Id
    END

    >> I'm working on a stored procedure and having issues. <<
    Yes; you have no idea how to write SQL, so you write COBOL or BASIC in SQL. Your code is not declarative, but classic procedural programming.
    You do not even know that rows are not records! This is fundamental. You are using IDENTITY because it looks like a record number on a 1950's magnetic tape. Your files had singular names because the unit of work in files is a record; in an RDBMS, we use plural names to show that tables are sets.
    You use flags because that is how you wrote in assembly language or COBOL. 
    But worse, you keep audit information in the same table! This is illegal. 
    >> I want to check two conditions and is none of them are met, Insert a new record [sic]. In the first 2 checks, If I find a match I will hold that value and update later in the script. <<
    No; in a declarative language we have no local variables and no concept of later sequential execution. 
    >> If I can't find a match, the 3rd step [sic] will add a new record [sic] to the Locations table and I will hold that value and use it to update later in the script. <<
    Do you know about the SAN (Standard Address Number) used in several industries? ISO country codes, state codes are NOT ids. The old Sybase GETDATE() is now the ANSI/ISO Standard CURRENT_TIMESTAMP in T-SQL; you did an illegal act with bad code. 
    >> Any help would be appreciated. <<
    You need a major education, not just a kludge. The schema needs to be thrown out and done correctly. Without DDL, we can only guess, but I think that once you fixed the schema, the statement would be something like this skeleton: 
    MERGE INTO Locations AS L
    USING 
       (SELECT X.* 
          FROM (VALUES (@in_state_code, @in_country_code, @in_institution_name))
    AS X (state_code, country_code, institution_name)) AS N 
    ON @in_research_project_id = L.research_project_id
       AND @in_term_id = L.term_id  
    WHEN MATCHED
    THEN UPDATE
      SET state_code = N.state_code, 
          country_code = N.country_code,
          institution_name = N.institution_name
    WHEN NOT MATCHED
    THEN INSERT (state_code, country_code, institution_name)
         VALUES (@in_state_code, @in_country_code, @in_institution_name);
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

  • CPU load unbalanced and too low in multi-processor computers

    I have a rather large test system 5-6000 VIs and we have noted some peculiar behaviour.
    On a single processor system the application can take up to 100% of the available resources.
    On a two processor system the application can take up to 50% of the available resources.
    On a four processor system the application can take up to 25% of the available resources.
    On an eight processor system the application can take up to 12-13% of the available resources.
    I think you can get the gist of what I stated above (I didn't get pictures for 1 and 2 CPUs, but I have observed the behaviour personally).
    We haven't written any specific multithreading code (we do launch many parallel processing loops dynamically, which should be a good thing).
    Why is the application behaving this way? Does anyone have a similar problem? Any tips?
    Thanks for any help you can offer.
    //David

    You really don't provide enough information.
    Is there really always something to do to keep a core busy at all times? Unless serious computation is involved, such as solving a hard math problem, a typical VI should not be using much CPU. (For example, if it needed a full core today, the same test could not even have run a couple of years ago, which is hard to believe.) Don't underestimate how much a modern computer can do using only 5% of the CPU at any given time. I still run an acquisition system on a 120MHz Pentium 1 and LabVIEW 4. There are no performance problems even though the hardware is orders of magnitude slower than even a modern Atom processor on a netbook.
    How many things can really occur at the same time? If you make your code overly sequential by lining everything up along error wires or by overuse of sequence structures, the code cannot be efficiently parallelized, no matter how hard the compiler tries.
    How many of the 6000 VIs are called concurrently? How many require full CPU? If this is a test system, I assume that there is interaction with a device under test? Are the tests sequential or parallel? How many devices are tested at the same time? How much is post-processing compared to waiting for results? Are the tests really CPU limited??? Really???
    Have you done any profiling? Did you identify the part that is most demanding? Could it be that a single greedy loop is consuming most of the CPU doing nothing?
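    The greedy-loop point is easy to sketch outside LabVIEW as well. Here is a minimal Python illustration (the `fake_device` check is made up for demonstration); the `time.sleep()` between attempts plays the role of a Wait (ms) inside a LabVIEW while loop, and removing it is exactly what pins a core at 100% doing nothing:

    ```python
    import time

    def poll_for_result(check, timeout_s=1.0, interval_s=0.01):
        """Poll check() until it returns a non-None value.

        Without the sleep between attempts this becomes a "greedy" loop
        that spins one CPU core at 100% while accomplishing nothing.
        """
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            value = check()
            if value is not None:
                return value
            time.sleep(interval_s)  # yield the CPU between polls
        raise TimeoutError("no result within timeout")

    # Hypothetical device that only answers on the third poll.
    calls = {"n": 0}
    def fake_device():
        calls["n"] += 1
        return "ready" if calls["n"] >= 3 else None
    ```

    With a 10 ms interval the loop wakes roughly 100 times a second, which is negligible CPU load; with no interval it would spin millions of times a second, which is the profile described above.
    
    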
    Except for lengthy simulations or complex data processing, a typical application should never be CPU limited. The timings should be fully software controlled such that the code runs the same, independent of the power of the computer.
    All that said, LabVIEW can easily keep all CPU cores busy if really needed and if programmed correctly and with parallel processing in mind. I have a complicated lengthy fitting code where I was able to keep 64 cores at near 100% (4 AMD 6274 CPUs with 16 cores each). As a more typical example, and compared to sequential execution, the parallelized code is 4.5x faster on a quad core I7 (if I disable hyperthreading in the bios, I get a 3.8x speed increase), so not only can it keep all four cores at 100%, it can even get a measurable boost from hyperthreading. On a six core I7, I get a 6.9x speedup, also keeping all cores busy.
    I am not going to look at a project with 6000 VIs, but please show us some profiling data. Find the VI that carries most of the load and attach it if you want.
    LabVIEW Champion. Do more with less code and in less time.

  • Flow and FlowN equivalent activities in OSB

    What are the equivalent activities in OSB for BPEL's Flow and FlowN?
    In other words, what activities are available in OSB for parallel processing of tasks a. when the number of parallel branches is static, b. when it is dynamic?

    Got this from Oracle Documentation!
    http://docs.oracle.com/cd/E23943_01/dev.1111/e15866/tasks.htm#OSBDV204
    There are two Split-Join patterns, the Static Split-Join and the Dynamic Split-Join.
    The Static Split-Join can be used to create a fixed number of message requests (as opposed to an unknown number). For instance, a customer places an order for a cable package that includes three separate services: internet service, TV service, and telephone service. In the Static use case, you could execute all three requests in separate parallel branches to improve performance time over the standard sequential execution.
    The Dynamic Split-Join can be used to create a variable number of message requests. For instance, a retailer places a batch order containing a variable number of individual purchase orders. In the Dynamic use case, you could parse the batch order and create a separate message request for each purchase. Like the Static use case, these messages can then be executed in parallel for improved performance.
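    Split-Joins are configured graphically in OSB, but both patterns map onto a familiar fan-out/fan-in idea. As a rough sketch only (the `call_service` stub and the service names are invented for illustration, not OSB APIs):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def call_service(request):
        # Stand-in for one parallel branch (e.g. an HTTP callout).
        return f"{request}: ok"

    # Static Split-Join: a fixed, known set of branches,
    # like the internet/TV/telephone triple-play order.
    with ThreadPoolExecutor() as pool:
        triple_play = list(pool.map(call_service, ["internet", "tv", "phone"]))

    # Dynamic Split-Join: the branch count depends on the incoming
    # message, like one request per purchase order in a batch.
    def process_batch(purchase_orders):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(call_service, purchase_orders))
    ```

    In both cases the branches run concurrently and the results are joined in order, which is the performance win over sequential execution described above.
    
    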
    Marking the thread as Answered!
