Problem in Dynamic Parallel processing using Blocks

Hi All,
My requirement is to have parallel approvals, so I am trying to use dynamic parallel processing through blocks. I can't use forks since the agents are determined at runtime.
I am using the ParForEach block and passing &AGENTS& as a multiline container element in the 'Parallel Processing' tab, with a user decision inside the block. Right now I am hardcoding 2 agents in the AGENTS multiline element. It is working, but I am getting 2 instances of the block. I understand that is because I am passing &AGENTS& in the Parallel Processing tab. I just need one instance of the block going to all the users in the &AGENTS& multiline element.
Please let me know how to achieve this. I have already searched the forum, but I couldn't find anything that suits my requirement.
Please help!
Regards,
Soumya

Yes, that's true: whenever you use a ParForEach block, a separate work item ID, i.e. a separate instance, is created for each entry in the table. Parallel processing with a single shared work item is not possible that way.
Instead, you can create a fork with 3 branches and define an end condition so that the fork only completes once all 3 branches are executed.
Before the fork step, determine all the agents and store them in an internal table. You can access a single internal table entry by its index value; check this [wiki|https://www.sdn.sap.com/irj/scn/wiki?path=/display/abap/accessingSingleEntryfromMulti-line+Element] on accessing a single entry from a multiline element.
Assign one agent to each task in the fork, as in the example below.
Since the end condition requires all three branches to be executed, the fork will wait until all the steps are completed.
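For example, the agent expression of each branch can address one line of the multiline element by index (AGENTS here stands for whatever your multiline agent element is called):

Branch 1 agent: &AGENTS[1]&
Branch 2 agent: &AGENTS[2]&
Branch 3 agent: &AGENTS[3]&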

Similar Messages

  • Dynamic parallel processing using a multiline container element

Hi All,
I just wanted to know how things work when we use dynamic parallel processing for a decision step. I came across a situation wherein a rule gets the approving user(s) and the work item should be sent to all those users. After getting an approval from all the users the workflow should proceed, or else it should terminate.
I was just wondering whether dynamic parallel processing will do this job or not. I had also thought of using forks, but as the number of approvers is decided at runtime, I don't think that is possible.
Any inputs?
Edit: We are working on CRM 5.0
Thanks,
Shounak M.

Hi Shounak,
Just do as Mike says: use the multiline element for a subflow.
The subflow consists of your user decision; if someone rejects it, remember that (this could be done by updating a small table using a method, by raising an event, or, as Mike suggested, by appending to a table).
In the top flow, after the multiline element step, determine whether someone rejected it (wait for the event, or read the table).
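A minimal sketch of the "remember it" method mentioned above, assuming a one-row-per-workflow Z-table ZWF_REJECT with fields WF_ID and REJECTED, and an importing parameter IV_WF_ID (all names are made up for illustration):

METHOD remember_rejection.
  DATA ls_rej TYPE zwf_reject.
  ls_rej-wf_id    = iv_wf_id.     " key of the top workflow instance
  ls_rej-rejected = abap_true.
  MODIFY zwf_reject FROM ls_rej.  " insert or update the marker row
ENDMETHOD.

The top flow then simply reads ZWF_REJECT after the multiline step to decide whether to terminate.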
    Kind regards, Rob Dielemans

  • Parallel processing using BLOCK step..

    Hi,
I have used parallel processing with a BLOCK step and a multiline container element. In the BLOCK step I have visibility of another container element generated by the block step (multiline container_LINE). The right number of parallel processes is created, as per the requirement, but the problem is that the value in multiline container_LINE is not getting passed to the send mail step. I have checked the binding and everything is OK. Please help.
    Sukumar.

    Hi
When I am sure that I am doing a binding properly but it still doesn't work, then:
1. I activate the workflow template definition again (no joke).
2. I write the magic word /$sync in the command line, to refresh the buffers.
3. I delete the strange binding defined with drag & drop and define it one more time using the old method from the former binding editors of R/3 4.6C systems: I take container elements from the lists of possible entries or write their names directly. I don't use drag & drop.
    Regards
    Mikolaj
    There are no problems, just "issues" and "improvement opportunities".

  • Dynamic Parallel Processing using Rule

    Hello,
I am using a user decision within a block (ParForEach type) step to send work items to multiple approvers in parallel.
For this I have created a multiline container element LI_APPROVERS and bound &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& to &_LI_APPROVERS_LINE& in the 'Parallel Processing' tab of the block.
In the user decision I am using Agent as Expression = &_LI_APPROVERS_LINE&. This works perfectly if I fetch the values into LI_APPROVERS via a background method before the block step is executed.
I want to know if we can do this using a rule within the user decision, meaning the approvers are determined by the rule (through a function module) at runtime instead of being fetched beforehand. I created a custom rule and tried passing it under Agents, but it didn't work. I do not know what bindings need to be done and how each line will be passed to the user decision to create a work item for each user.
Or should I remove the block step completely and directly use the user decision task with the Parallel Processing option under the Miscellaneous tab?
Can someone please explain how to achieve this using a rule and exactly what bindings are required.
    Thanks.

    Hi Anjan,
Yes, that's exactly what I want to know. I saw your response below in one of the threads but could not understand exactly how to do it. Can you please explain it?
"You have all your agents in one multiline container element in the workflow.
Then you take a block step with ParForEach.
Then create a custom rule which imports the multiline element of agents and a line number. In the rule you populate ACTOR_TAB with agents from that multiline container element; the logic takes the agent from the multiline container at index line_no.
Then you take an activity step. As its agent, use your custom rule with proper binding of the multiline element of agents, and for line_no you pass _***_line from the block container. Then work items are sent to n people in parallel."
This is my current design:
Activity returns agents in LI_APPROVERS.
At Block: I have the binding &LI_APPROVERS[&_WF_PARFOREACH_INDEX&]& --> &_LI_APPROVERS_LINE&
At UD: I have Agents as Expression = &_LI_APPROVERS_LINE&
I want to remove the activity step (which fetches the agents in the background) and replace it with a rule within the UD. What binding do I need from the rule to the workflow? How do I get the line_no from the rule, as you mentioned above?
    Thanks for your response.
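A minimal sketch of the kind of rule function module Anjan describes, assuming the rule container holds the multiline agent element LI_APPROVERS (of type WFSYST-AGENT, i.e. 'US' + user name) and a single-value index element LINE_NO; the FM name is made up:

FUNCTION z_rule_get_approver.
*  Standard rule interface:
*    TABLES     ac_container STRUCTURE swcont
*               actor_tab    STRUCTURE swhactor
*    EXCEPTIONS nobody_found
  INCLUDE <cntn01>.                  " workflow container macros

  DATA: lt_approvers TYPE STANDARD TABLE OF wfsyst-agent,
        lv_agent     TYPE wfsyst-agent,
        lv_line_no   TYPE i.

* Read both elements from the rule container
  swc_get_table ac_container 'LI_APPROVERS' lt_approvers.
  swc_get_element ac_container 'LINE_NO' lv_line_no.

* Return exactly one agent: the line addressed by LINE_NO
  READ TABLE lt_approvers INTO lv_agent INDEX lv_line_no.
  IF sy-subrc = 0 AND lv_agent IS NOT INITIAL.
    actor_tab-otype = lv_agent(2).   " e.g. 'US'
    actor_tab-objid = lv_agent+2.    " the user name
    APPEND actor_tab.
  ENDIF.

  IF actor_tab[] IS INITIAL.
    RAISE nobody_found.              " a rule must fail when it finds no agent
  ENDIF.
ENDFUNCTION.

In the rule definition, bind the workflow's multiline agent element to LI_APPROVERS and the index (e.g. &_WF_PARFOREACH_INDEX&) to LINE_NO.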

  • Multilevel dynamic approval process using precondition loop block

Hi,
I am trying to create a multilevel dynamic approval process using a precondition loop block. The structure of my process is:
Process -> 1) Sequential block containing the requestor action -> the processor of the requestor action is the initiator
            2) Precondition loop block containing
                    i) a loop decision action containing a business-logic callable object
                    ii) a loop body block containing the approver action -> the processor of the approver action is filled from a context parameter
The loop decision action implements the logic for the loop decision. Can anybody help me by suggesting the proper target for each of these actions, and the processor for the loop decision action?
Whenever I initiate the process, the requestor action is executed, but on completion of this action I get the message "No activity is currently selected"; that is, the process does not enter the precondition loop block.
Please guide me on the proper process flow and on how to adjust the roles and parameters.
    Thanks,
    Swaralipi

    Posted another thread on the same issue

  • Dynamic parallel processing of the same object using asynchronous method

    Hi,
    Please can anyone help me?
I have to send the same DMS document to several agents for parallel processing. The number of agents is not known until runtime. Each of them should process the document and at least change its status; in a next step I check whether it has been changed.
I use dynamic parallel processing of subworkflows. The key task of this subworkflow uses the standard method DOCUMENT.EDIT of object DRAW (standard transaction CV02N), which is asynchronous. The task is completed by the event DOCUMENT.CHANGED.
During the parallel processing the appropriate number of work items is generated. However, when the agent who processes the document first completes his work item, the event DOCUMENT.CHANGED is raised and all parallel work items are completed, even those of other agents who have not processed it yet.
    Any help would be appreciated.
    Thanks.
    Eva Vahalova

    Hi all,
The process is used to approve incoming invoices. Each scanned invoice is attached to a DMS document and then sent to one or several agents in parallel. People from several departments can approve the same invoice, for instance energy or mobile phone costs. We have no HR module fully implemented. Each agent may write some remarks and has to set the document status to either approved or rejected. This status is temporary, so the others see the original status for approving.
The process for incoming invoices was implemented by SAP consultants in 2003 on 4.6B and now runs on our 4.7 system. Now a new company has been established running on a new SAP system, ECC 6.0, and our accounting department and some agents will deal with invoices in both systems. Therefore the process should appear the same, or at least very similar. The majority of the old process was realized by programming, while I would like to use the workflow features that are available now and reduce the programming part.
As I see it, I will have to choose one of the solutions that Arghadip suggested.
I wonder if there is a possibility to use an asynchronous method and control the end of each work item by means of "Complete Work Item" or "Complete Execution" conditions. I have never used them, and I do not know how they work or what condition to use. Maybe a program exit might be used as well. For controlling the agents I think I will have to do some programming anyway, because a work item can be finished by a substitute too.
    Thanks for your help.
    Eva

  • Defining index in Dynamic parallel processing of workflow

    Hi all,
I am using the dynamic parallel processing feature of workflow for a particular multiline element, but I am not able to define the index in that particular task.
Consider that I have a multiline element "Material". For this element I need to loop the task over its n records, so I am using dynamic parallel processing. Now, in each work item generated, I need to show the particular material for that work item. I remember that we need to use an index, but I cannot recollect how it is defined.
    Could anyone help me in this regard?
    Thanks in advance

    Nikhil,
When you use dynamic parallel processing, the index is available in _WF_PARFOREACH_INDEX. A reference to the line of the multiline element is automatically generated for each work item created; you can see this in the binding editor for the step. In your case this will be "Material()". When you drag this element to the WF-to-step binding window, it is resolved as &MATERIAL[&_WF_PARFOREACH_INDEX&]&. Therefore you can get the material for each work item by defining "Material" in your task container (not as multiline) and doing the appropriate binding. If you actually need the index in your method, define a container element in your task with reference to type SWC_INDEX and bind it to &_WF_PARFOREACH_INDEX&.
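For example (the task element name INDEX is an assumption):

Task container: INDEX, referencing type SWC_INDEX
Binding (WF -> task): &_WF_PARFOREACH_INDEX& --> &INDEX&
                      &MATERIAL[&_WF_PARFOREACH_INDEX&]& --> &MATERIAL&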
    Cheers,
    Ramki Maley.
    Please reward points if the answer is helpful.
    For info on awarding points click on this link: https://www.sdn.sap.com/sdn/index.sdn?page=crp_help.htm

  • Parallel processing using NSOperation slows down application!

    For the "fun" of it, I'm creating a little application which matches images in iPhoto to my GPS logs (in GPX format) to geocode the photos.
    Currently, I've got it to the point where it will go through my images (about 1000, but only around 250 with matching GPS data) and gets the right data from the GPX files.
    The process was taking around 9 minutes to do, but with some alterations (using caching to eliminate GPX files outside the relevant date range) I've got it down to 3 minutes. The problem is that when I use multiple cores (Mac Pro 2x QX) it actually slows the process down to 3.5 minutes.
All the processors are running at 50% load (they were at 100% before my other alterations improved performance), but analysis shows 95% of the time being spent waiting in the XQuery routines.
    I imagined that this application would be perfect for multiple processors, as the matching section was processor intensive (complex XQuery expression) and appeared to be a very independent process.
    The shared resources passed to the function are:
    1) the path to the image being processed
    2) an array of GPX files (subclass of NSXMLDocument)
    Additionally, during the function it updates a dictionary related to the image (adds in the location data found).
    I've tried to eliminate the block by sending a copy of the GPX array in case the system is locking access to the array (although it is not mutable) while it is being enumerated (using Obj-C 2.0 enumeration).
Can anybody give me a hint as to what would be causing these supposedly independent processes to wait? My only guess is that the XQuery library isn't thread safe, but I can't find confirmation.
    Memory usage is being kept low, and there aren't any page swaps showing up.
    Thanks.
    Ray.

    etresoft wrote:
    Ray A wrote:
    Each photo is independent of the others, so I can't see any reason why multiple images couldn't be processed simultaneously.
    But there is only a single disk where all these images reside. But then, you said there was no disk activity. How are you accessing these images?
I was a little inaccurate in my earlier posts. The "images" I speak of are, at this stage, just entries from the iPhoto albumdata.xml file, from which I obtain the date information for each image.
The iPhoto file is read into memory and decomposed into an array of image data, from which the date is obtained and passed through to the XQuery methods.
    Perhaps not processor intensive (in terms of performing calculations) but reading from memory and comparing.
    Read from memory into where? Either it is on disk and you have serial access only or it has already been read into RAM.
    The GPX files are read into memory (i.e. RAM) when the programme launches, and converted into NSXMLDocuments. So there will be lots of memory access, but not disk access.
As mentioned earlier, I've duplicated the array containing the files… (realisation happening)…
    Perhaps it is access to the NSXMLDocument files that is causing the lock. I've duplicated the array, but not a deep copy down to the actual XML objects. I'll see if that makes a difference.
    You shouldn't have to do even that. If you are just querying, there is no modification needed. They can all share the same data.
    That's what I figured too, especially since it is not a mutable array. I didn't have any luck with deep copying as my first attempt caused the programme to run out of available memory.
    But in reply to your question, when running as a single thread the bottleneck is also the XQuery execution. It takes about 1 second to execute a query on each relevant GPX file for a match. About 10 seconds when done in parallel! :-o
    How big are those files? How many GPX files do you have.
    Will have around 100 when it's running properly, but at the moment I'm using about a dozen. Each file is around 1MB.
I would think you could merge all the data from all the GPX files first. Then start one thread to go through all the image files, pull out the search values, and put them into a queue. Then have all the cores create an operation each and grab data elements from the queue to process.
My initial thought was to simplify things by merging them all, but the issue with that is that the computer would need to scan one huge file for each image, rather than being able to omit the individual files that fall outside the relevant date range.
    Or, you could swap the problem space axes. You could divvy up the GPX data between all the cores. Then, for each image, look for a match in each subset - there should be only one. If NSXMLDocument is the bottleneck, this would solve it.
    May not be effective as it's only the relevant GPX file that takes a long time. The rest of the files are checked against to see if they cover the date in question, and skipped if not. Images without GPX data for their date (most of them since I'm using a test set of GPX files) get processed in the blink of an eye.
    But I appreciate the effort you're putting in to come up with possible solutions.
While I'd like to know the reasons it's not working, I've got it to the point where it's processing images (with matching GPX files) in less than half a second on a single thread, and putting the whole matching routine in a separate thread has given me UI responsiveness. So if it gets no better, at least the user (i.e. me) is kept entertained by the feedback while it runs.
    Ray.

  • Parallel processing using ABAP objects

    Hello friends,
I had posted in the performance tuning forum regarding a performance issue; I am reposting it here as it involves OO concepts.
Here is the link to the previous posting:
Link: [Independent processing of elements inside internal table]
Here is the scenario:
I have an internal table with 10 independent records, and I need to process them. The processing of one record doesn't have any influence on another. With a loop, the performance issue is that the 10th record has to wait until the first 9 records are processed, even though its output has no dependency on them.
Could someone suggest a way to improve the performance?
If I am not clear with the question, let me explain it more clearly:
An internal table has 5 numbers, say (1, 3, 4, 6, 7), and we are trying to find the square of each number.
In a loop, finding the square of 7 has to wait until 6 is completed, which is a waste of time.
This is related to parallel processing; I have referred to the parallel processing documents, but I want to do this conceptually.
I am not using the conventional procedural paradigm but object orientation: I have a method which performs this action. What am I supposed to do in that regard?
    Comradely ,
    K.Sibi

    Hi,
As exemplified by Edward, there is no RFC/asynchronous support for methods of ABAP Objects as such. You would indeed need to "wrap" your method or ABAP Object in a function module, which you can then call with the addition STARTING NEW TASK. Optionally, you can define a method that processes the results of the function module that is executed asynchronously, as demonstrated in Edward's program as well.
You do need some additional code to avoid the situation where your program takes all the available resources on the application server. Theoretically, you cannot bring the server or system down, as there is a system profile parameter that determines the maximum number of asynchronous tasks the system will allow. However, in a productive environment it is a good idea to limit the number of asynchronous tasks started from your program so that other programs can use some as well.
Function group SPBT contains a set of function modules to manage parallel processing. In particular, FM SPBT_INITIALIZE "initializes" a server group and returns the maximum number of parallel tasks, as well as the number of free ones at the time of initialization. The other FM of interest is SPBT_GET_CURR_RESOURCE_INFO, which can be called after the server group has been initialized, whenever you want to "fork" a new asynchronous task; it gives you the number of tasks that are free for parallel processing at the time of the call.
Below is a code snippet showing how these function modules could be used so that your program always leaves a minimum of 2 tasks available for other programs in the system.
  IF md_parallel IS NOT INITIAL.
    IF md_parallel_init IS INITIAL.
*----- Server group not initialized yet => initialize it and get the number of available tasks
      CALL FUNCTION 'SPBT_INITIALIZE'
        EXPORTING
          group_name                     = ' '
        IMPORTING
          max_pbt_wps                    = ld_max_tasks
          free_pbt_wps                   = ld_free_tasks
        EXCEPTIONS
          invalid_group_name             = 1
          internal_error                 = 2
          pbt_env_already_initialized    = 3
          currently_no_resources_avail   = 4
          no_pbt_resources_found         = 5
          cant_init_different_pbt_groups = 6
          OTHERS                         = 7.
      md_parallel_init = 'X'.
    ELSE.
*----- Server group initialized => check how many free tasks are available for parallel processing
      CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
        IMPORTING
          max_pbt_wps                 = ld_max_tasks
          free_pbt_wps                = ld_free_tasks
        EXCEPTIONS
          internal_error              = 1
          pbt_env_not_initialized_yet = 2
          OTHERS                      = 3.
    ENDIF.
    IF ld_free_tasks GE 2.
*----- We have at least 2 remaining available tasks => reserve one
      ld_taskid = ld_taskid + 1.
    ENDIF.
  ENDIF.
    You may also need to program a WAIT statement, to wait until all asynchronous tasks "forked" from your program have completed their processing. Otherwise, you might find yourself in the situation where your main program has finished its processing, but some of the asynchronous tasks that it started are still running. If you do not need to report on the results of these asynchronous tasks, then that is not an issue. But, if you need to report on the success/failure of the processing performed by the asynchronous tasks, you would most likely report incomplete results in your program.
    In the example where you have 10 entries to process asynchronously in an internal table, if you do not WAIT until all asynchronous tasks have completed, your program might report success/failure for only 8 of the 10 entries, because your program has completed before the asynchronous tasks for entries 9 and 10 in your internal table.
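A minimal sketch of that wait-and-collect pattern (gd_running and Z_PROCESS_ENTRY are made-up names; gd_running is incremented each time a task is started, and collect_result is the form registered via PERFORMING ... ON END OF TASK):

* Callback: runs when an asynchronous task returns
FORM collect_result USING p_taskname TYPE clike.
* Fetch the task's results, then free the bookkeeping slot
  RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_ENTRY'
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
  gd_running = gd_running - 1.
ENDFORM.

* In the main program, once all tasks have been started:
WAIT UNTIL gd_running = 0.   " lets the callbacks run and blocks until all are back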
Given the complexity of parallel processing, you would only consider it in a customer program for situations where you have many records to process (i.e., thousands, if not tens of thousands), where the processing of each record tends to take a long time (like creating a sales order or material via BAPI calls), and where you have a limited time window to process all of these records.
    Well, whatever your decision is, good luck.

  • Parallel processing using flow activity

Hey, I have two input directories in the same process for two different partner links, and I want to use them to load two flat files into two tables simultaneously. I tried using a flow activity to do this in parallel, but when the process is deployed to the console it takes one input file and fails, changing the state of the process to 'off'. Can anyone let me know what could be the reason? Thanks and regards.

    Hi can someone help me please

  • Parallel processing using arfc.

    Hi experts,
Does anyone have a presentation on how to proceed with parallel processing with aRFC?
Thanks in advance.

Look at this piece of code:
FORM start_onhand_extract_task.
* Wait until a free work process is available
  DO.
    IF g_num_running < g_avail_wps.
      EXIT.
    ENDIF.
    WAIT UP TO 5 SECONDS.
  ENDDO.

* Create the file name with the task number
  ADD 1 TO g_task_num.
  CONCATENATE p_file1 g_task_num INTO g_task_name.
  CONCATENATE g_filename g_task_name INTO task_tab-filename1.
  CONDENSE task_tab-filename1 NO-GAPS.
  task_tab-task_name = g_task_name.
  APPEND task_tab.

  CLEAR g_msg_text.
  CALL FUNCTION 'ZMIO_GET_MARD_DATA'
    STARTING NEW TASK g_task_name
    DESTINATION IN GROUP p_grp
    PERFORMING decrease_wp ON END OF TASK
    EXPORTING
      i_filename = task_tab-filename1
    TABLES
      i_matnr    = r_matnr
      i_werks    = r_werks
    EXCEPTIONS
      resource_failure      = 1
      communication_failure = 2
      system_failure        = 3.

  CASE sy-subrc.
    WHEN 0.
      ADD 1 TO g_num_running.
      g_num_submitted = g_num_submitted + 1.
    WHEN 1.
*     No free resources right now => log the task and wait for a slot
      error_rec-task_name = g_task_name.
      error_rec-filename1 = task_tab-filename1.
      APPEND error_rec.
      APPEND it_exp_t001w TO it_err_t001w.
      ADD 1 TO g_num_err.
      g_hold_num = g_num_running.
      WAIT UNTIL g_num_running < g_hold_num OR
                 g_hold_num = 0
                 UP TO 5 SECONDS.
    WHEN OTHERS.
      error_rec-task_name = g_task_name.
      error_rec-filename1 = task_tab-filename1.
      error_rec-msg_text  = g_msg_text.
      APPEND error_rec.
      APPEND it_exp_t001w TO it_err_t001w.
      ADD 1 TO g_num_err.
  ENDCASE.
ENDFORM.                    " start_onhand_extract_task
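The snippet registers decrease_wp as the end-of-task callback but doesn't show it; a minimal sketch of what it could look like (an assumption, not the original poster's code — it is what makes the WAIT UNTIL g_num_running < g_hold_num above eventually come true):

FORM decrease_wp USING p_taskname TYPE clike.
* Collect the asynchronous result, then free the bookkeeping slot
  RECEIVE RESULTS FROM FUNCTION 'ZMIO_GET_MARD_DATA'
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
  g_num_running = g_num_running - 1.
ENDFORM.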

  • OSB Parallel Processing using OSB

    Hi All,
    I am working on a proxy service based on WSDL-A.
Stage 1:
I have to do a service callout (sc1) to another business service with a particular parameter.
From the response of sc1 I need to pick two variables, var1 and var2, and use them in the requests of stages 2 and 3.
Stage 2:
I have to do another service callout (sc2) to the same business service with particular parameters, including var1 returned by sc1.
Stage 3:
I have to do another service callout (sc3) to the same business service with particular parameters, including var2 returned by sc1.
Stage 4:
I need to use the responses of sc1, sc2 and sc3 and transform them into the desired response.
My only concern is that I have to wait for stage 2 to complete before starting stage 3, even though stage 3 has no real dependency on stage 2.
How can I ensure that stage 2 and stage 3 are processed in parallel? Can anyone please explain how I can achieve that?

    Hi Edwin,
I was trying the example on your blog and I am running into an issue: I am not sure how you initialize the response. It always gives me an exception that the response variable is used before it is initialized.
If possible, can you send me the configuration jar to my email [email protected]?
Cheers, Nitin

  • Parallel Processing using common ssis variable

    Hi Guys,
I am stuck in a scenario where I have to execute 2 script tasks in parallel, in such a way that task A initializes the value of variable X and the other script task, B, reads it.
Since I have assigned a default wait time in script B before it actually tries to read variable X, script A should be completed by that time and the value should be assigned to variable X before it is actually read by script B.
Both these scripts are inside a sequence container.
But the actual problem is that when I execute the package, script A waits for the completion of script B, which is the opposite of what I expected.
I tried moving one script out of the sequence container as well, but that doesn't help.
Any suggestions on this behaviour?
    Regards,
    Harsh

    "execute 2 script tasks in  parallel in such a way that task A initializes value to variable X and the other script task B reads it"
    To make them truly parralel
    Do not use the sequence container and do not connect the Scripts but then there is no control what starts 1st.
    Sharing a variable is thus difficult.
    But based on what you do there is a dependency on task B to wait for A. So it is a mutually exclusive design.
    Arthur

  • Question on parallel processing using the STARTING NEW TASK keyword

    I have the following code in a program on my development system:
    call function 'FUNCTION_NAME'
        starting new task ld_taskname
        performing collect_output on end of task
          exporting
            pd_param1       = ld_param1
          tables
            pt_packet       = lt_packet.
    You'll notice in the code above I left out the following part of the function call:
    DESTINATION IN GROUP group
In my one-server development system the code above executes fine, but I get a 'REMOTE FUNCTION CALL ERROR' when I execute it in a multi-server QA system.
My question: is the missing DESTINATION addition required for this technique to work in a multi-server environment, or is it optional? I had assumed that, since it worked in my development environment, without the DESTINATION addition the system simply uses the current application server to start the new processes.
    Any input appreciated.
    Thanks,
    Lee

Hi Lee,
Just take F1 help on the CALL FUNCTION keyword and go to the variant
CALL FUNCTION func STARTING NEW TASK task
              [DESTINATION {dest|{IN GROUP {group|DEFAULT}}}]
              parameter_list
              [{PERFORMING subr}|{CALLING meth} ON END OF TASK].
This gives you very clear information about this keyword; no one could give you better information than this, I hope.
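Applied to the call from the question, the multi-server variant would look like this (a sketch: DEFAULT addresses the default RFC server group; server groups are maintained in transaction RZ12):

call function 'FUNCTION_NAME'
    starting new task ld_taskname
    destination in group default
    performing collect_output on end of task
    exporting
      pd_param1       = ld_param1
    tables
      pt_packet       = lt_packet.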
    Cheers
    Ram

  • Issue in completing the block step for parallel processing

    Hi,
I have created a workflow where I use a block step to send work items to multiple agents, with parallel processing enabled on the block step. The number of agents is determined at runtime. Let's say I have two items in my multiline container (two agent IDs). Inside the block I have put a user decision step, so work items go to the two approvers at the same time. But after both approvers have taken their decision, control still does not come out of the block step. I want control to leave the block step at that point and continue to the next step of the workflow.
    Please suggest any helpful solution for it.
    Regards,
    Smit Shah

I think there must be a binding problem; the binding must be something like
&USERID[&_WF_PARFOREACH_INDEX&]& -----> &_USERID_LINE&
from the WF container to the block container. When I checked in my system it behaves as you want: I also included a decision step inside the block, hard-coded the user IDs in the table USERID, made the above binding, and it works fine; for the decision agent I used an expression and assigned the value &_USERID_LINE&.
