Parallel Processing - in conjunction with the TWS scheduler.

We have a .bat file that uses upshell.exe to execute a custom script we've created.
This custom script looks for files in the inbox and moves them to the relevant OpenBatches/OpenBatchesML folder. Once the files have been moved, it runs a parallel up-to-load process for all the data. However, because we run in parallel, the .bat script completes while FDM is still processing in the background; this is evident because the TBATCH table shows the batch as not 100% complete. That causes us a problem: the scheduler (TWS) thinks all processing has finished, and the subsequent downstream processing kicks off. As I understand it, this wouldn't be an issue if we'd used serial processing rather than parallel.
We've chosen parallel because of our data volumes and the limits of our batch window. Am I correct in assuming that, in principle, parallel will process data faster than serial? I am currently running performance tests on my own data to prove this.
The real issue is that we need to stop TWS from kicking off the downstream processes too early, and I need FDM to create a log file for batches that complete successfully. I'm assuming this should be done within an event script, but I'm not sure which one, as this is quite new to me.
Has anyone come across this issue themselves? If so, I'm looking for some guidance or examples of how you've managed to get around the problem.
Thanks in advance.

Hi,
Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
http://odiexperts.com/interface-parallel-execution-a-new-solution
http://odiexperts.com/processing-multiple-interface-through-single-package
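
For the original FDM/TWS question: one common workaround is to have the TWS job itself poll the TBATCH table and only exit once the batch is fully processed, then write a flag file that the downstream jobs depend on. Below is a minimal sketch of that idea in Python, assuming an ODBC DSN for the FDM repository; the column names BATCHID and BATCHSTATUS and the status value 100 are illustrative assumptions, not the documented TBATCH schema.

# Sketch: poll the FDM TBATCH table until the batch finishes, then
# write a flag file that TWS can use as a dependency for downstream jobs.
# BATCHID/BATCHSTATUS and the value 100 are assumed names, not the real schema.
import time
import pyodbc

def wait_for_batch(dsn, batch_id, poll_secs=30, timeout_secs=7200):
    conn = pyodbc.connect(dsn)
    cur = conn.cursor()
    deadline = time.time() + timeout_secs
    while time.time() < deadline:
        cur.execute("SELECT BATCHSTATUS FROM TBATCH WHERE BATCHID = ?", batch_id)
        row = cur.fetchone()
        if row and row[0] == 100:       # 100 = fully processed (assumed convention)
            return True
        time.sleep(poll_secs)
    return False

if __name__ == "__main__":
    if wait_for_batch("DSN=FDMRepo;UID=fdm;PWD=secret", "BATCH_001"):
        # TWS releases the downstream jobs only once this file exists
        open(r"C:\fdm\logs\BATCH_001.ok", "w").close()
    else:
        raise SystemExit("Batch did not complete within the timeout")

Called from the .bat wrapper after the up-to-load kicks off, this keeps the TWS job alive until FDM is genuinely done, and the flag file doubles as the success log for the batch.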

Similar Messages

  • Parallel processing in background using Job scheduling...

    (Note: please understand my question completely before redirecting me to parallel processing links on SDN. I have gone through most of them.)
    Hi ABAP Gurus,
    I have read a bit about parallel processing so far, but I have a doubt.
    I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
    Now, if all these records reside in one flat file and I process that flat file in my batch input program, I guess it will take days. So my boss suggested using parallel processing in SAP.
    From the SDN threads, it seems we would have to create a remote-enabled function module for it, and so on.
    But I have a different idea: divide these 5 million records into 10 flat files instead of just one, and then run the custom BDC program as 10 instances, each processing one flat file in the background using job scheduling.
    Can this also be called parallel processing?
    Please let me know if this sounds wise to you.
    Regards,
    Tushar.

    Thanks for your reply...
    So what do you suggest? How can I use parallel processing to transfer the 5 million records in one flat file using the custom BDC?
    I am posting my custom BDC code for the transfer below (this code creates material masters via BDC):
    report ZMMI_MATERIAL_MASTER_TEST
          no standard page heading line-size 255.
    include bdcrecx1.
    parameters: dataset(132) lower case default
                                 '/tmp/testmatfile.txt'.
    ***  DO NOT CHANGE - the generated data section - DO NOT CHANGE  ***
    *  If it is necessary to change the data section use the rules:
    *  1.) Each definition of a field consists of two lines
    *  2.) The first line shows exactly the comment
    *      '* data element: ' followed by the data element
    *      which describes the field.
    *      If you don't have a data element use the
    *      comment without a data element name
    *  3.) The second line shows the fieldname of the
    *      structure; the fieldname must consist of
    *      a fieldname and optionally the character '_' and
    *      three numbers and the field length in brackets
    *  4.) Each field must be type C.
    ***  Generated data section with specific formatting - DO NOT CHANGE  ***
    data: begin of record,
    * data element: MATNR
            MATNR_001(018),
    * data element: MBRSH
            MBRSH_002(001),
    * data element: MTART
            MTART_003(004),
    * data element: XFELD
            KZSEL_01_004(001),
    * data element: MAKTX
            MAKTX_005(040),
    * data element: MEINS
            MEINS_006(003),
    * data element: MATKL
            MATKL_007(009),
    * data element: BISMT
            BISMT_008(018),
    * data element: EXTWG
            EXTWG_009(018),
    * data element: SPART
            SPART_010(002),
    * data element: PRODH_D
            PRDHA_011(018),
    * data element: MTPOS_MARA
            MTPOS_MARA_012(004),
          end of record.
    data: lw_record(200).
    ***  End generated data section  ***
    data: begin of t_data occurs 0,
          matnr(18),
          mbrsh(1),
          mtart(4),
          maktx(40),
          meins(3),
          matkl(9),
          bismt(18),
          extwg(18),
          spart(2),
          prdha(18),
          MTPOS_MARA(4),
        end of t_data.
    start-of-selection.
    perform open_dataset using dataset.
    perform open_group.
    do.
    *read dataset dataset into record.
    read dataset dataset into lw_record.
    if sy-subrc eq 0.
    clear t_data.
    split lw_record
       at ','
    into t_data-matnr
          t_data-mbrsh
          t_data-mtart
          t_data-maktx
          t_data-meins
          t_data-matkl
          t_data-bismt
          t_data-extwg
          t_data-spart
          t_data-prdha
          t_data-MTPOS_MARA.
    append t_data.
    else.
    exit.
    endif.
    enddo.
    loop at t_data.
    *if sy-subrc <> 0. exit. endif.
    perform bdc_dynpro      using 'SAPLMGMM' '0060'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'RMMG1-MATNR'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=AUSW'.
    perform bdc_field       using 'RMMG1-MATNR'
                                 t_data-MATNR.
    perform bdc_field       using 'RMMG1-MBRSH'
                                 t_data-MBRSH.
    perform bdc_field       using 'RMMG1-MTART'
                                 t_data-MTART.
    perform bdc_dynpro      using 'SAPLMGMM' '0070'.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MSICHTAUSW-DYTXT(01)'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=ENTR'.
    perform bdc_field       using 'MSICHTAUSW-KZSEL(01)'
                                 'X'.
    perform bdc_dynpro      using 'SAPLMGMM' '4004'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '/00'.
    perform bdc_field       using 'MAKT-MAKTX'
                                 t_data-MAKTX.
    perform bdc_field       using 'BDC_CURSOR'
                                 'MARA-PRDHA'.
    perform bdc_field       using 'MARA-MEINS'
                                 t_data-MEINS.
    perform bdc_field       using 'MARA-MATKL'
                                 t_data-MATKL.
    perform bdc_field       using 'MARA-BISMT'
                                 t_data-BISMT.
    perform bdc_field       using 'MARA-EXTWG'
                                 t_data-EXTWG.
    perform bdc_field       using 'MARA-SPART'
                                 t_data-SPART.
    perform bdc_field       using 'MARA-PRDHA'
                                 t_data-PRDHA.
    perform bdc_field       using 'MARA-MTPOS_MARA'
                                 t_data-MTPOS_MARA.
    perform bdc_dynpro      using 'SAPLSPO1' '0300'.
    perform bdc_field       using 'BDC_OKCODE'
                                 '=YES'.
    perform bdc_transaction using 'MM01'.
    endloop.
    *enddo.
    perform close_group.
    perform close_dataset using dataset.
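
    To illustrate the splitting step Tushar describes (one large flat file divided into 10 smaller ones, each fed to its own background job), here is a minimal Python sketch; the path is the one from the post and the chunk count of 10 is his choice. For 5 million records a streaming variant would be kinder to memory, but the idea is the same.

    # Sketch: split one flat file into N roughly equal chunk files, each of
    # which can then be processed by a separate background BDC job.
    def split_file(src, n_chunks=10):
        with open(src, "r", encoding="utf-8") as f:
            lines = f.readlines()
        chunk_size = (len(lines) + n_chunks - 1) // n_chunks  # ceiling division
        for i in range(n_chunks):
            chunk = lines[i * chunk_size:(i + 1) * chunk_size]
            if not chunk:
                break
            with open(f"{src}.part{i + 1:02d}", "w", encoding="utf-8") as out:
                out.writelines(chunk)

    split_file("/tmp/testmatfile.txt", 10)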

  • Parallel processing in workflow with fork

    Hello,
    I have a case in the production system: the workflow has parallel processing with a fork, and this fork has two branches.
    Both branches are mandatory, with no other condition.
    Does anyone know of a scenario where the workflow proceeds with only one branch executed, even though both branches are mandatory?
    Thanks.

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package

  • Parallel Processing in ODI with Unique Requirement

    Hi All,
    I have a unique requirement: a scenario that I have to run multiple times in parallel with different user inputs.
    E.g. if the user inputs A, B, C, then I have to run the same scenario three times in parallel, with the input filter set to A, B and C respectively.
    If the user inputs A, B, C and D, then I have to run the same scenario four times, with inputs A, B, C and D in parallel.
    Does anyone know a way to achieve this?
    Thanks in advance for your suggestions...
    Regards

    Hi,
    Take a look at the following two articles. Using the concepts outlined in them, you should be able to achieve what you are trying to do.
    http://odiexperts.com/interface-parallel-execution-a-new-solution
    http://odiexperts.com/processing-multiple-interface-through-single-package
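
    As a cross-tool illustration of the pattern (not ODI-specific code): running the same scenario once per user input in parallel is just fanning out N invocations of the agent's startscen launcher. A hedged Python sketch follows; the scenario name, version, context, and especially the variable-passing argument are placeholders to adapt to your agent, not verified ODI syntax.

    # Sketch: launch the same ODI scenario once per user input, in parallel.
    # The command-line arguments are placeholders; check your agent's
    # startscen documentation for the exact variable-passing syntax.
    from concurrent.futures import ThreadPoolExecutor
    import subprocess

    def run_scenario(filter_value):
        cmd = ["./startscen.sh", "MY_SCENARIO", "001", "GLOBAL",
               f"-MY_PROJECT.FILTER_VAR={filter_value}"]  # placeholder syntax
        return subprocess.run(cmd).returncode

    inputs = ["A", "B", "C"]            # supplied by the user at runtime
    with ThreadPoolExecutor(max_workers=len(inputs)) as pool:
        print(list(pool.map(run_scenario, inputs)))  # one return code per run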

  • BPM Parallel Process with Exclusive Gateway

    Hi,
    I am facing an issue with an Exclusive Gateway in a parallel process.
    The issue is that the process always stays in the In-Progress state at the Parallel Join: the process stops there, and moreover there are no errors in the process. If I delete the Exclusive Gateway from the parallel branch, the process moves on to the next-level human task through the Parallel Join, i.e. it works fine.
    I have designed my process so that the first task is a Human Task, then a Parallel Split with two Human Tasks: one routed through the Exclusive Gateway and the other a simple approval. Finally I merge these two Human Tasks with a Parallel Join, trigger the approval Human Task, and close the process.
    Appreciate your quick suggestions to fix this issue.
    Thanks in advance,
    Dev...

    Hi Unni,
    Thanks for your reply.
    I have checked all the parallel tasks and all are in completed state. No errors.
    If I delete the Exclusive Gateway it works fine. I have checked the tasks step by step in NWA, and everything goes well.
    Please let me know if I have missed anything.
    Thanks in advance,
    Dev

  • Parallel process with reentrant VI have same value in both threads

    Has it been so long since I programmed LabVIEW that I forgot some basic stuff?
    I have a main VI which originally called dynamic processes in parallel.
    I then called the sub-VIs directly and still ran them in parallel, but they now have separate names.
    I use a QSM. Each parallel thread now has its own QSM because, although I was using a queue name for each dynamic queue, the data being extracted was shared between the two threads. If that statement is confusing, I shall explain.
    Two parallel processes call a re-entrant QSM. They have the same number of elements and matching sets.
    EX:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task2
    Task3                 Task3
    Task4                 Task4
    Task5                 Task5
    Task6                 Task6
    I was expecting each thread (each process) to extract from the queue the list of tasks as entered (Task1 through Task6). What the processes actually got was the following:
    Process 1          Process 2
    Task1                 Task1
    Task2                 Task3
    Task4                 Task5
    Task6                 default
    Each process was sending a different queue name to the QSM; each queue should have its own name.
    I need to get this running by tomorrow, no excuses! So I decided on a lame workaround: having two QSMs as well. That fixed it.
    In each parallel process (they are copies of each other with different names) there is a call to open a telnet session. I probed and placed breakpoints in the code. Although each process has a different name and the function that opens the telnet session is re-entrant, the very same telnet reference number is assigned to both processes.
    Why would they get the same reference number? I made all VIs down to (and including) Telnet Open Connection re-entrant (although it was not needed), and it still assigns the same reference number to each telnet session. What am I not seeing? What am I missing?
    Unfortunately, I cannot post the code, but it is not complicated: just two sessions with different IP addresses. I would expect different telnet session references...
    As a matter of fact, I need to try something silly...

    I should provide more details with the solution... I just have to stop saying "D'oh!"
    Okay, here goes...
    In the LVOOP code I am using Notifiers and Semaphores to ensure that a race condition cannot occur. This works well with previously written code.
    In this particular implementation, I have the same (or a similar) object being created more than once (twice at this time).
    Where I went wrong was to use a static name for a given object when creating the Notifier and Semaphore references. Since the same name was given, so was the reference; since the references were the same, so was the data, and so on.
    Such a silly mistake... now I know why a particular bird was called the dodo bird.
    Hope it makes a few people laugh, or helps another bird...
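
    The root cause generalizes beyond LabVIEW: any obtain-by-name API hands every caller the same underlying reference when the name is the same. A small Python sketch of that mechanism (illustrative only, not LabVIEW code):

    # Sketch: a name-keyed registry, like LabVIEW's named queues/notifiers.
    # Two "obtain" calls with the same name return the SAME object, so two
    # parallel processes end up consuming each other's data.
    _registry = {}

    def obtain_queue(name):
        return _registry.setdefault(name, [])

    q1 = obtain_queue("tasks")      # process 1
    q2 = obtain_queue("tasks")      # process 2: same name -> same queue!
    q1.append("Task1")
    print(q2.pop(0))                # process 2 steals process 1's element

    q3 = obtain_queue("tasks_p2")   # unique name -> independent queue (the fix)
    print(q3 is q1)                 # False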

  • Parallel process with a queue and file?

    Hello, first of all sorry for my bad English.
    I have been working for days on my project, where I have to demonstrate parallel processes while transferring information in different ways, along with their problems (like timing and so on).
    I chose to transmit the information to a parallel process by (1) a queue and by (2) a file (.txt). (Other ways are welcome; do you have one or two other ideas?)
    To solve this problem I made three while loops. The first is the original one, where the original information (a signal) is created and sent by queue and by file to the other two while loops, where the information is evaluated to recreate the same signal.
    At the end you can compare all the signals to see whether they are the same, and so answer the question about the parallelism of the processes.
    But my VI has some problems:
    the queue version works pretty well - it's almost parallel -
    but the file version doesn't run in parallel, and I have no idea how to solve it.
    I'm a newbie.
    Can someone correct my file so that both the file and queue versions run parallel with the original one, or tell me what I can or must do?
    Attachments:
    Queue_Data_Parallel_FORUM.vi (23 KB)

    A queue is technically never parallel, though you can have several if you really need parallelism. Other methods for transferring information between processes include Events, Action Engines and Notifiers (and why not web services).
    Due to limitations in the disk system you can only read/write one file at a time from one process, so I wouldn't recommend the file approach. If you use a RAM disk it might work.
    /Y
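
    To make the comparison concrete outside LabVIEW, here is a minimal Python sketch of the same experiment: one producer feeding a consumer through an in-memory queue while also writing to a file. The queue consumer sees each item almost immediately; the file reader only sees what has been flushed to disk, which is why the file variant lags behind.

    # Sketch: producer sends data via an in-memory queue (near-parallel)
    # and via a file (lags behind flush/poll timing).
    import queue, threading, time

    q = queue.Queue()

    def producer(path):
        with open(path, "w") as f:
            for i in range(5):
                q.put(i)            # visible to the queue consumer at once
                f.write(f"{i}\n")
                f.flush()           # without this, the file lags even more
                time.sleep(0.1)
        q.put(None)                 # sentinel: done

    def queue_consumer():
        while (item := q.get()) is not None:
            print("queue got", item)

    t1 = threading.Thread(target=producer, args=("signal.txt",))
    t2 = threading.Thread(target=queue_consumer)
    t1.start(); t2.start(); t1.join(); t2.join()
    print("file got", open("signal.txt").read().split())  # complete only afterwards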

  • Use of global data in transformation with parallel processing

    Hi,
    In an upgrade I have a global variable in a 3.5 transfer-rule routine. The global variable keeps count of an ID.
    The global variable is used in every data package, because the corresponding InfoPackage is set to PSA only, where the help says:
    "If you select this processing option and request processing is done serially during loading, the global data are maintained as long as the process with which the data were processed remains."
    In 7.0 I also use a global variable to do the same thing, but an InfoPackage is no longer available, because we use DTPs now.
    How can I store my global variable so that all parallel processing of the data packages uses the same global variable? Or can I set an option so that the DTP does serial processing, similar to the InfoPackage setting in 3.5?
    As a workaround I increased the package size to a very large figure, but I would be more comfortable with a sounder solution.
    thanks

    Hi Max,
    Try declaring your global variable in the start routine, between these lines:
    $$ begin of global - insert your declaration only below this line  -
    $$ end of global - insert your declaration only before this line   -
    Best Regards.
    Javier Gómez

  • Pointers for optimizing system performance (run time) while running DP process chain with parallel processing

    Hi Experts,
    We are running an APO DP process chain with parallel processing in our company, and we are experiencing some issues with the run time of the process chain. I need your help on the points below:
    - What are the ways we can optimize the process chain run time?
    - What special points do we need to take care of when parallel processing profiles are used in a process chain?
    - Is there a specific sequence to follow for the different processes in a process chain, if there is a best practice?
    - Are there any notes suggesting ways to improve system performance for APO version 7 with enhancement packs 1 and 2?
    Any help will be really appreciated.
    Regards

    Hi Neelesh,
    There are many ways to optimize the performance of process chains (background jobs) in an APO system.
    Firstly, I would recommend identifying the pain areas: the steps completing with the longest runtimes. Each step then has its own approach for decreasing the runtime.
    You may end up with steps such as InfoPackage executions, DTPs or DP mass-processing jobs running with long runtimes; target each of them differently and find ways to optimize. At the same time, the approach you follow should be feasible from a Basis perspective (system load and utilization) as well.
    As for parallel processing, you can use it for many different jobs and explore it further: loading an InfoCube, mass processing, InfoPackage execution, DTP, TSCOPY, etc.
    Check the below link for more info
    Performance problems in DP mass processing
    Let me know if you require further info.
    Regards,
    Raj

  • Strange responsetime with query triggering parallel processing

    Hi,
    Can anyone run the SQL below on a RAC 11g R2 and give me feedback on how long it took?
    In my environment it takes more than a minute, and I suspect there is a problem with the database and parallel processing.
    If I collect the values from the inner select and turn them into a list of comma-separated values, it runs fast.
    Replace SCHEMA_NAME with an existing schema name in your database before running.
    SELECT sa.*
      FROM gv$sqlarea sa
      WHERE sa.sql_id in (SELECT sql_id from gv$sqlarea where parsing_schema_name like '%SCHEMA_NAME%' and rownum < 100)
    Thanks!

    You are using the hint a bit incorrectly.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/sql_elements006.htm#SQLRF50805
    <quote>
    The NO_PARALLEL hint overrides a PARALLEL parameter in the DDL that created or altered the table. For example:
    SELECT /*+ NO_PARALLEL(hr_emp) */ last_name
    FROM employees hr_emp;
    </quote>
    Check Metalink note 267330.1 as well.
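
    The workaround the poster mentions (fetch the inner-select values first, then run the outer query with an explicit IN list) can be scripted. A sketch with the cx_Oracle driver, assuming reachable credentials; the bind variables are generated rather than concatenated into the SQL text:

    # Sketch: materialize the inner select first, then query gv$sqlarea
    # with an explicit IN list instead of the slow nested parallel plan.
    import cx_Oracle

    conn = cx_Oracle.connect("user/password@host:1521/service")
    cur = conn.cursor()
    cur.execute(
        "SELECT sql_id FROM gv$sqlarea "
        "WHERE parsing_schema_name LIKE '%SCHEMA_NAME%' AND rownum < 100")
    sql_ids = [r[0] for r in cur.fetchall()]

    if sql_ids:
        binds = ",".join(f":{i + 1}" for i in range(len(sql_ids)))
        cur.execute(f"SELECT * FROM gv$sqlarea WHERE sql_id IN ({binds})", sql_ids)
        for row in cur.fetchall():
            print(row[0])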

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile.
    For example, if I use a profile with 4 parallel processes and run a network heuristic over 5 location products, I get an incomplete planning answer.
    Is this expected behaviour? What are the good practices for using these profiles?
    Any advice appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in the case of external procurement it must first plan the distribution center and then the supplying plant; in the case of in-house production it must first plan the final product and then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks that are processed at the same time. This can upset the planning sequence: before the final product is planned in one block, the component may already have been planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually is good practice. You can put related location products in one job, and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada
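
    Ada's point about low-level codes is essentially a topological-ordering constraint: items at the same level can safely run in parallel, but the levels themselves must run one after another. A generic Python sketch of that scheduling idea (illustrative, not APO code):

    # Sketch: group products by low-level code, run the levels serially,
    # and parallelize only within a level, so no component is planned
    # before the product that generates its requirements.
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def plan(item):
        return f"planned {item}"        # stand-in for the heuristic run

    items = {"FinishedGood": 0, "SubAssembly": 1, "Component": 2}

    levels = defaultdict(list)
    for item, code in items.items():
        levels[code].append(item)

    for code in sorted(levels):         # levels strictly in sequence
        with ThreadPoolExecutor(max_workers=4) as pool:
            print(list(pool.map(plan, levels[code])))  # parallel within a level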

  • Job fail with Timeout for parallel process (for SID Gener.): 006000

    Hello all,
    I'm getting the error below and am not able to find any issue on the Basis side. Can anyone help with this?
    Job started
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Background process BCTL_4XU7J1JPLOHYI3Y5RYKD420UL terminated due to missing confirmation
    Process 000006 returned with errors
    Data pkgs 000001; Added records 1-; Changed records 0; Deleted records 0
    Log for activation request ODSR_4XUG2LVXX3DH4L1WT3LUFN125 data package 000001...000001
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Activation is running: Data target CRACO20A, from 1,732,955 to 1,732,955
    Overlapping check with archived data areas for InfoProvider CRACO20A
    Data to be activated successfully checked against archiving objects
    Parallel processes (for Activation); 000005
    Timeout for parallel process (for Activation): 006000
    Package size (for Activation): 100000
    Task handling (for Activation): Backgr Process
    Server group (for Activation): No Server Group Configured
    Parallel processes (for SID Gener.); 000002
    Timeout for parallel process (for SID Gener.): 006000
    Package size (for SID Gener.): 100000
    Task handling (for SID Gener.): Backgr Process
    Server group (for SID Gener.): No Server Group Configured
    Activation started (process is running under user *****)
    Not all data fields were updated in mode "overwrite"
    Data package has already been activated successfully (will be skipped)
    Process started
    Process started
    Process started
    Process started
    Process started
    Import from cluster of the data package to be activated () failed
    Process 000001 returned with errors
    Process 000002 returned with errors
    Process 000003 returned with errors
    Process 000004 returned with errors
    Errors occured when carrying out activation
    Analyze errors and activate again, if necessary
    Activation of M records from DataStore object CRACO20A terminated
    Report RSODSACT1 ended with errors
    Job cancelled after system exception ERROR_MESSAGE

    Thanks for the link TSharma, I will try that today.
    UPDATE:
    I ran a non-parallel Data Pump job and just let it run overnight. This time it finished after 9 hours. In this run I set the STATUS=300 parameter in the PARFILE, which echoes STATUS updates to standard out every 300 seconds (5 minutes).
    As before, it finished 99% of the export within 2 hours and just spat out WAITING statuses for the last 7 hours until it finished. The remaining tables it exported (a few hundred) were all very small or had zero rows. Something is clearly going on that is not normal: I've run this expdp before on clones of this database, and it usually takes about 2-2.5 hours to finish.
    The database is about 415 gigabytes in size.
    I will post what the trace finds, and I'm also opening a case with MOS.

  • Parallel processing issue withing same server

    Hi,
    I need to perform parallel processing within the same server, using the work processes available on that server.
    Please suggest whether this can be accomplished, and explain the design if possible.

    Hello Venkata,
    You can achieve parallel processing by using CALL FUNCTION ... STARTING NEW TASK <task name>.
    In this case the function module runs in asynchronous mode without stopping the calling program.
    For more details you can refer to the following link:
    https://wiki.sdn.sap.com/wiki/display/Snippets/Easilyimplementparallelprocessinginonlineandbatchprocessing
    Thanks,
    Augustin.
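
    For readers outside ABAP, STARTING NEW TASK is asynchronous dispatch with a completion callback: the caller fires off work units and keeps running, collecting results as each task reports back. A rough Python analog of that pattern (not the SAP API itself):

    # Sketch: the aRFC pattern in miniature - dispatch tasks asynchronously,
    # keep the calling program running, and handle results via callbacks
    # (the callback plays the role of RECEIVE RESULTS FROM FUNCTION).
    from concurrent.futures import ThreadPoolExecutor

    def work_unit(task_name):
        return f"{task_name} done"      # stand-in for the RFC function module

    def on_complete(fut):
        print(fut.result())

    pool = ThreadPoolExecutor(max_workers=4)
    for name in ("TASK1", "TASK2", "TASK3"):
        pool.submit(work_unit, name).add_done_callback(on_complete)

    print("caller continues without waiting")
    pool.shutdown(wait=True)            # eventually wait for all tasks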

  • Math Interface Toolkit in Matlab: Can I interact with a VI as a parallel process

    I'm curious whether it's possible to create a MATLAB MEX file using the LabVIEW Math Interface Toolkit (MIT) that can be called in MATLAB and accessed while it is running.
    I'd like it to work effectively like any other object in MATLAB: I'd like to be able to query the object while it's running, dump values out of a buffer, and even hook events if possible.
    As it currently stands, the VI-to-MEX setup seems to just let me call a VI, run it, and then drop out. I want to be able to continuously acquire and access the data as it comes in and interact with it from MATLAB (i.e. fully integrate the VI into my MATLAB code as a separate object).
    Is this possible in some form, or am I stuck dedicating the MATLAB interface to the VI whenever I want to call it?
    Thanks

    Hi GusLott,
    You are correct that when you call a VI, the command line will not continue until the output is obtained. This makes sense, since that is how a VI operates in LabVIEW (a subVI does not output until all outputs are obtained). I believe commands in MATLAB work the same way (correct me if I am wrong). On the other hand, this is not a disadvantage in LabVIEW, since you can run VIs in parallel. If you can create some way to run parallel threads in MATLAB, then you will be able to make two (or more) MEX calls at once.
    As far as object-oriented programming goes, there is LabVIEW OOP, but I do not think it has been tested in conjunction with the Math Interface Toolkit.
    Brian K.
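
    The pattern Brian describes (keep a blocking call running while the host environment stays interactive) looks the same in any language: run the blocking call on a worker thread and expose its output through a buffer you can poll. A Python sketch of the idea, with the loop standing in for the long-running VI:

    # Sketch: a blocking "VI" runs on a worker thread and streams samples
    # into a queue; the caller stays free to poll the buffer at any time.
    import queue, threading, time

    buffer = queue.Queue()

    def acquire(n):                     # stand-in for the long-running VI
        for i in range(n):
            buffer.put(float(i))
            time.sleep(0.05)

    worker = threading.Thread(target=acquire, args=(20,), daemon=True)
    worker.start()

    while worker.is_alive() or not buffer.empty():
        try:
            print("sample:", buffer.get(timeout=0.2))  # dump values mid-run
        except queue.Empty:
            pass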

  • Query0;Runtime error time limit exceeded,with parallel processing via RFC

    Dear experts,
    I have created a report on the 0CCA_C11 cube. When I run my report with a cost center group that contains many cost centers, the report executes for a long time and finally gives the messages
    "Error while reading data; navigation is possible" and
    "Query 0: Runtime error time limit exceeded, with parallel processing via RFC".
    Please tell me what the problem is and how I can solve it.
    Regards
    Shweta

    Hi,
    Execute the query in RSRT with the Execute and Debug option.
    Select the SQL statements option to see where exactly it's taking time.
    Let us know the details once you're done.
    Regards,
    Pra
