Parallel Processing and Capacity Utilization

Dear Gurus,
We have the following requirement:
Work center A capacity is 1000. (Operations are similar)
Work center B capacity is 1500. (Operations are similar)
Work center C capacity is 2000. (Operations are similar)
1) For Product A: the production order quantity is 4500. Can we use all three work centers in parallel through the routing?
2) For Product B: the production order quantity is 2500. Can we use only work centers A and B in parallel through the routing?
If yes, please explain how.
Regards,
Rashid Masood

Maybe you can create a virtual work center VWCA = A+B+C (connected via a hierarchy with transaction CR22) and another VWCB = A+B, and route your products to the respective virtual work center.
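The arithmetic also works out: VWCA would combine 1000 + 1500 + 2000 = 4500 of capacity, exactly covering the order quantity of Product A, while VWCB would combine 1000 + 1500 = 2500, covering Product B.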

Similar Messages

  • Parallel processing and throttle

    Hi experts,
                  Where exactly are parallel processing and throttling used in XI?

    You can develop scenarios in BPM for parallel processing. A simple use case is splitting an incoming file into multiple files and sending them out in parallel; a block in ParForEach mode would be needed in this case.
    Search the help for more information.
    VJ

  • Employment percent and Capacity Utilization Level

    What is the significance of the Employment Percent field on infotype 0007 and the Capacity Utilization Level field on infotype 0008? Is either of them used by the payroll processor to determine salary and net pay if the CATS time system is not implemented?
    We want to pay part-time monthly employees, for example, at 80% of their salary if they have an 80% work schedule. We do not wish to change the salary, but can't figure out how these percent fields would change their pay. They don't seem to in our testing.
    Regards
    Janice Ishee

    If you are using indirect valuation of wage types and specify reduction method 2 in the wage type characteristics, the amount is reduced according to the capacity utilization level specified in IT0008.
    Subhash
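    For illustration (hypothetical figures, not from the thread): with reduction method 2 and a capacity utilization level of 80% in IT0008, an indirectly valuated amount of 5,000 would be reduced to 5,000 × 0.80 = 4,000.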

  • Parallel processing and time-out

    Hi all,
    I've got a problem doing a great number of postings.
    Since the time elapsed for these postings is too long, I tried to do it with a function module and "IN BACKGROUND TASK". There is also the alternative "STARTING NEW TASK".
    But I figured out that both variants start dialog work processes, and I think the standard time-out for dialog work processes is 300 seconds.
    Will this time-out kill the processes or not?
    And which alternative is best for doing some parallel processing?
    Thanks in advance
    regards
    Olli

    Hi Oliver,
    Some solutions here:
    1. You could increase the value of the dialog time-out (although this can only go to a maximum of 600 seconds). This parameter is in the SAP profiles (parameter name = rdisp/max_wprun_time).
    2. As suggested by Christian, decrease the amount of work within one LUW. You can do this by inserting a COMMIT WORK from time to time. The COMMIT WORK also resets the time-slice counter of the running dialog process (thus granting an extra time slice to work with). The downside is that if you have many related objects to modify, your ROLLBACK options become limited.
    3. Split the process into several tasks and put them to work in the background (by scheduling jobs for them).
    4. Program your own parallel handler (see the sample code below). With this you could process document by document (as if each were done separately). The number of dialog processes (minus 2) is the limit you could use.
    Sample code:
    * Declarations
    CONSTANTS:
      opcode_arfc_noreq TYPE x VALUE 10.
    DATA:
       server       TYPE msname,
       reason       TYPE i,
       trace        TYPE i VALUE 0,
       dia_max      TYPE i,
       dia_free     TYPE i,
       taskid       TYPE i VALUE 0,
       taskname(20) TYPE c,
       servergroup  TYPE rzlli_apcl.  " RFC server group (maintained in RZ12)
    * Check how many dialog work processes are currently free
    CALL 'ThSysInfo' ID 'OPCODE' FIELD opcode_arfc_noreq
                     ID 'SERVER' FIELD server
                     ID 'NOREQ'  FIELD dia_free
                     ID 'MAXREQ' FIELD dia_max
                     ID 'REASON' FIELD reason
                     ID 'TRACE'  FIELD trace.
    * Reserve two dialog processes for other users
    IF dia_free GT 1.
      SUBTRACT 2 FROM dia_free.
      SUBTRACT 2 FROM dia_max.
    ENDIF.
    * You must leave some dialogs free (otherwise no one can log on)
    IF dia_free LE 1.
      MESSAGE e000(38)
         WITH 'Not enough processes free'.
    ENDIF.
    * Prepare your run: build a unique task name
    ADD 1 TO taskid.
    WRITE taskid DECIMALS 0 TO taskname LEFT-JUSTIFIED.
    CONDENSE taskname.
    * Run your payload asynchronously in a new task
    CALL FUNCTION 'ZZ_YOUR_FUNCTION'
      STARTING NEW TASK taskname
      DESTINATION IN GROUP servergroup
      EXPORTING
    *   Your exporting parameters come here
      EXCEPTIONS
        communication_failure = 1
        system_failure        = 2
        resource_failure      = 3
        OTHERS                = 4.
    Of course you would put this within a loop and fire off your "payload" function for each document.
    You MUST check the number of free processes just before you run the payload.
    And as a last reminder: do NOT use the ABAP statement WAIT (it will disrupt the counting of free processes).
    Hope this will help you,
    Regards,
    Rob.
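    As a complement to option 3 above, here is a minimal sketch of scheduling one slice of the work as a background job via JOB_OPEN / SUBMIT ... VIA JOB / JOB_CLOSE; the report name ZPROCESS_PART and its parameters P_FROM/P_TO are hypothetical placeholders:
    DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZSPLIT_PART',
          lv_jobcount TYPE tbtcjob-jobcount,
          lv_from     TYPE i VALUE 1,     " hypothetical slice bounds
          lv_to       TYPE i VALUE 1000.
    * Open a new job definition
    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname          = lv_jobname
      IMPORTING
        jobcount         = lv_jobcount
      EXCEPTIONS
        cant_create_job  = 1
        invalid_job_data = 2
        jobname_missing  = 3
        OTHERS           = 4.
    IF sy-subrc = 0.
    * Add one report step that processes this slice of the documents
      SUBMIT zprocess_part
        WITH p_from = lv_from
        WITH p_to   = lv_to
        VIA JOB lv_jobname NUMBER lv_jobcount
        AND RETURN.
    * Close the job and release it for immediate start
      CALL FUNCTION 'JOB_CLOSE'
        EXPORTING
          jobname              = lv_jobname
          jobcount             = lv_jobcount
          strtimmed            = 'X'
        EXCEPTIONS
          cant_start_immediate = 1
          invalid_startdate    = 2
          jobname_missing      = 3
          job_close_failed     = 4
          job_nosteps          = 5
          job_notex            = 6
          lock_failed          = 7
          OTHERS               = 8.
    ENDIF.
    You would run this in a loop, one job per slice, so the slices execute in parallel in background work processes.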

  • How to get BI background jobs to utilize parallel processing

    Each step in our BI process chains creates exactly 1 active batch job (SM37), which in turn utilizes only 1 background process (SM50).
    How do we get the active BI batch job to use more than 1 background process, similar to parallel processing (RZ20) in an ERP system?

    Hi there,
    Have you checked the number of background and parallel processes? Take a look at SAP Note 621400 (Number of required BTC processes for process chains). This may be helpful:
    Minimum (with this setting, the chain runs more or less serially): number of parallel SubChains at the widest part of the chain + 1.
    Recommended: number of parallel processes at the widest part of the chain + 1.
    Optimal: number of parallel processes at the widest part of the chain + number of parallel SubChains at the widest part + 1.
    The optimal setting just avoids a delay when several SubChains are started in parallel at the same time. With such a process chain implementation and the recommended number of background processes, there can be a short delay at the start of each SubChain (depending on the frequency of the background scheduler, in general only ~1 minute).
    Attention: note that a higher degree of parallel processing, and therefore more batch processes, only makes sense if the system has sufficient hardware capacity.
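    For illustration (hypothetical numbers): a chain whose widest part runs 4 parallel processes and starts 2 SubChains would need at least 2 + 1 = 3 background processes by the minimum rule, 4 + 1 = 5 by the recommended rule, and 4 + 2 + 1 = 7 by the optimal rule.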
    I hope this helps, or it may lead you to further checks.
    Cheers,
    Karen

  • ABAP OO and parallel processing

    Hello ABAP community,
    I am trying to implement an ABAP OO scenario where I have to take into account parallel processing and processing logic in the sense of update function modules (type V1).
    The scenario is defined as follows:
    Frame class X creates an instance of class Y and an instance of class Z.
    Classes Y and Z should be processed in parallel, so class X calls classes Y and Z.
    Classes Y and Z call BAPIs and make different database changes.
    When class Y or Z has finished, the status of processing is written into a status table by caller class X.
    The processing logic within class Y and class Z should each be one SAP LUW in the sense of an update function module (type V1).
    Can I use events?
    (How) should I use "CALL FUNCTION ... IN UPDATE TASK"?
    (How) should I use "CALL FUNCTION ... STARTING NEW TASK"?
    What is the best method to realise that behaviour?
    Many thanks for your suggestions.

    Hello Christian,
    I will describe in detail how I have solved this problem. Maybe there is a newer way ... but it works.
    STEPS:
    I assume you have split your data into packages.
    1.) Create an RFC-enabled function module Z_WAIT.
    It returns OK or NOT OK.
    This FM does the following:
    DO.
    *  Call function TH_WPINFO until its result list has more
    *  than a certain number of lines (==> free tasks available).
    ENDDO.
    If it is OK ==> free tasks are available, so call your FM (RFC!) like this:
    CALL FUNCTION <FM>
      STARTING NEW TASK ls_tasknam  " Unique identifier!
      DESTINATION IN GROUP p_group
      PERFORMING return_info ON END OF TASK
      EXPORTING
        ...
      TABLES
        ...
      EXCEPTIONS
    *:--- Take care of the order of the exceptions!
        communication_failure = 3
        system_failure        = 2
        unforced_error        = 4
        resource_failure      = 5
        OTHERS                = 1.
    (An asynchronous call has no IMPORTING section; the results are fetched later with RECEIVE RESULTS.)
    *:--- Then you must check the difference between the started
    *:--- calls and the received calls. If the number exceeds a
    *:--- certain value limit_tasks:
    WAIT UNTIL called_task < limit_tasks UP TO 600 SECONDS.
    The value should not be greater than 20!
    Data description:
    PARAMETERS: p_group LIKE bdfields-rfcgr DEFAULT 'Server_alle'. " For example; use the F4 help if you have defined the report parameter as above.
    ls_tasknam ==> just the increasing number of RFC calls, as a character value.
    RETURN_INFO is a FORM routine in which you can check the results. Within this form you must call:
    RECEIVE RESULTS FROM FUNCTION <FM>
      TABLES
        ...  " The tables of your <FM>, in exactly the same order!
      EXCEPTIONS
        communication_failure     = 3
        system_failure            = 2
        unforced_error            = 4
        no_activate_infostructure = 1.
    Here you must count the received calls!
    And you can save them into an internal table for checking.
    I hope I could help you a little bit.
    Good luck
    Michael
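    For reference, here is a minimal, self-contained sketch of the pattern Michael describes; the RFC-enabled function module Z_PROCESS_PACKAGE, its parameter IV_PACKAGE, and the package count are hypothetical placeholders:
    REPORT zparallel_sketch.

    PARAMETERS p_group TYPE rzlli_apcl.  " RFC server group, e.g. maintained in RZ12

    DATA: gv_started  TYPE i,
          gv_received TYPE i.

    START-OF-SELECTION.
      DATA: lv_task TYPE c LENGTH 20,
            lv_pkg  TYPE i,
            lv_snap TYPE i.
      DO 10 TIMES.                        " one task per data package
        lv_pkg  = sy-index.
        lv_task = sy-index.
        CONDENSE lv_task.                 " unique task name: '1', '2', ...
        DO.
          CALL FUNCTION 'Z_PROCESS_PACKAGE'   " hypothetical RFC-enabled FM
            STARTING NEW TASK lv_task
            DESTINATION IN GROUP p_group
            PERFORMING on_task_done ON END OF TASK
            EXPORTING
              iv_package            = lv_pkg
            EXCEPTIONS
              communication_failure = 1
              system_failure        = 2
              resource_failure      = 3
              OTHERS                = 4.
          IF sy-subrc = 0.
            ADD 1 TO gv_started.
            EXIT.
          ENDIF.
          " No free work process: wait until one more task returns, then retry.
          lv_snap = gv_received.
          WAIT UNTIL gv_received > lv_snap UP TO 10 SECONDS.
        ENDDO.
      ENDDO.
      " Let the callbacks run until every started task has reported back.
      WAIT UNTIL gv_received >= gv_started UP TO 600 SECONDS.

    FORM on_task_done USING pv_task TYPE clike.
      " Collect the results of the finished task (tables/importing as in the FM).
      RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKAGE'
        EXCEPTIONS
          communication_failure = 1
          system_failure        = 2
          OTHERS                = 3.
      ADD 1 TO gv_received.
    ENDFORM.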

  • Parallel Process in a Process chain design

    Hi
    Based on what factors can we decide how many parallel data loads (processes) to include while designing a process chain?
    Thanks

    Hi Maxi,
    There is no hard and fast rule for that. For trial purposes you can add a specific number of parallel processes and schedule the chain; if there are not enough background processes available to fulfill your request, SAP will give you a warning, and there you can see how many processes are available.
    But if you go for the maximum number of parallel processes, it actually depends on how many processes are available at the time of process chain scheduling. Even if your server has enough processes, if they are utilized by other processes you will still get a warning while executing the process chain.
    So just check how many background processes exist on your server and then take an optimal decision.
    Regards,
    Durgesh.

  • Parallel Processing

    Hi,
    I am trying to implement parallel processing and am facing a problem wherein the function module contains the statement:
    submit (iv_repid) to sap-spool
                        with selection-table it_rspar_tmp
                        spool parameters iv_print_parameters
                        without spool dynpro
                        via job iv_name
                        number iv_number
                        and return.
    I call the function module as follows:
    CALL FUNCTION 'YFM_OTC_ZSD_ORDER'
        STARTING NEW TASK v_task_name
        DESTINATION 'NONE'
        PERFORMING receive_results ON END OF TASK
        EXPORTING
          iv_repid              = lv_repid
          iv_print_parameters   = v_print_parameters
          iv_name               = v_name
          iv_number             = v_number
        TABLES
          it_rspar_tmp          = t_rspar_tmp[]
        EXCEPTIONS
          communication_failure = 1
          OTHERS                = 2.
    But I keep getting the error: Output device "" unknown.
    Kindly advise.
    Thanks.

    I need the output of a report to be generated in the spool; I then retrieve it from the spool later on and display it along with another ALV in my current program.
    I have called the JOB_OPEN and JOB_CLOSE function modules. Between these 2 FM calls, I have written the code for the parallel processing (the same CALL FUNCTION 'YFM_OTC_ZSD_ORDER' ... STARTING NEW TASK call shown above).
    After this, I retrieve the data with function module RSPO_RETURN_SPOOLJOB.
    All the above steps work while I am in debugging mode: at the RFC call a new session opens, I execute that session completely, return to the main program execution, execute to the end, and I get the desired output.
    But in debug mode, if I reach the RFC and the new session opens, and I do not execute the FM there but instead go back to the main program and execute it directly, I can replicate the error: Output device "" unknown.
    So I guess it has something to do with the SUBMIT statement in the RFC.
    Any assistance would be great!
    Thanks!!
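    A likely cause is that the PRI_PARAMS structure handed to the RFC does not contain a valid output device; with DESTINATION 'NONE' the call runs without the user's dialog defaults. A minimal sketch of filling the print parameters explicitly before the call, assuming an output device 'LP01' exists in the system:
    DATA: ls_print_params TYPE pri_params,
          lv_valid        TYPE c LENGTH 1.

    " Build complete, validated print parameters without a user dialog.
    CALL FUNCTION 'GET_PRINT_PARAMETERS'
      EXPORTING
        destination            = 'LP01'   " assumed output device
        immediately            = ' '
        no_dialog              = 'X'
      IMPORTING
        out_parameters         = ls_print_params
        valid                  = lv_valid
      EXCEPTIONS
        archive_info_not_found = 1
        invalid_print_params   = 2
        invalid_archive_params = 3
        OTHERS                 = 4.
    IF sy-subrc = 0 AND lv_valid = 'X'.
      " Pass ls_print_params as iv_print_parameters to the RFC call.
    ENDIF.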

  • Troubleshooting the lockwaits for parallel processing jobs

    Hi Experts,
    I am facing difficulty tracing the job which is interfering with a business-critical job.
    The job in question uses parallel processing, and the other jobs running at the time do the same.
    If I see a lock wait for some process, which may be a dialog process or an update process spawned by these jobs, I have difficulty knowing which one is holding the lock and which one is waiting.
    So, is there any way to identify the dialog or update processes used for parallel processing by a particular background job?
    Please help me, as this is business critical and we have high visibility in this area.
    Any suggestions will be appreciated.
    Regards
    Raj

    Hi Raj,
    First of all, please indicate if you are using SAP Business One. If yes, then you need to check those locks in SQL Server Management Studio.
    Thanks,
    Gordon

  • Use of parallel processing profiles with SNP background planning

    I am using APO V5.1.
    In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile or not.
    For example, if I use a profile with 4 parallel processes and run a network heuristic to process 5 location products, I get an incomplete planning answer.
    Is this expected behaviour? What are the 'good practices' for using these profiles?
    Any advice is appreciated...

    Hello,
    I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
    For example, in the case of external procurement it must first plan the distribution center and then the supplying plant, and in the case of in-house production it must first plan the final product and then the components.
    If you use parallel processing, the data set, which is sorted by low-level code, is divided into several blocks that are processed at the same time. This can upset the planning sequence: for example, before the final product is planned in one block, the component is already planned in another block. When the final product is then planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
    If there are many location products, dividing the data set manually may be a good practice: put related location products in one job, and set up several background jobs to plan the different data sets.
    Best Regards,
    Ada

  • Report for calculating capacity utilization and efficiency

    Hi,
    We are following REM in our company. The production line is defined in the production version. During backflushing the production line is determined automatically, and hence backflushing is done against it.
    We calculate the capacity utilization using the formula:
    Capacity Utilization = (Backflushed Qty / Available Capacity) * 100
    My queries are:
    1. Is there any standard report to determine the capacity utilization of a production line?
    2. Is there any standard report to calculate the efficiency of a production line?
    Waiting for a reply.
    With regards,
    Afzal

    Hi Afzal,
    1. You have mentioned: Available capacity = std. time per piece * no. of working hrs.
    Let me explain with an example: suppose one piece takes 10 minutes. According to your formula, A.C. = 10 * 24 * 60 = 14400 per day, which is not correct.
    Normally 10 min/piece means 6 pieces/hr, and for 24 hrs that is 24 * 6 = 144 pieces.
    So it must be: Available capacity = working time / std. time per piece (in consistent units: 1440 min / 10 min = 144 pieces per day).
    2. You have mentioned: capacity utilized = total backflushed qty per day, which means you are calculating capacity utilization based on the input material.
    3. The formula Utilization = (Available capacity / Capacity utilized) * 100 is inverted: suppose available capacity per day = 100 and capacity utilized = 50, then Utilization = (100 / 50) * 100 = 200%, which is not correct; it should be only 50%. The formula must be Utilization = (Capacity utilized / Available capacity) * 100.
    My main doubt here is why you are calculating capacity based on the input material.
    Please explain your business process and the exact requirement so that I can help you out.
    Please check the formulas.
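    To make the corrected formulas concrete, a minimal sketch using the example numbers above (the backflushed quantity of 72 is an illustrative value):
    DATA: lv_std_time_min TYPE p DECIMALS 2 VALUE '10.00',   " std. time per piece (minutes)
          lv_work_min     TYPE p DECIMALS 2 VALUE '1440.00', " working time per day (minutes)
          lv_backflushed  TYPE p DECIMALS 2 VALUE '72.00',   " backflushed quantity per day
          lv_avail_cap    TYPE p DECIMALS 2,
          lv_utilization  TYPE p DECIMALS 2.

    lv_avail_cap   = lv_work_min / lv_std_time_min.        " 1440 / 10 = 144 pieces/day
    lv_utilization = lv_backflushed / lv_avail_cap * 100.  " 72 / 144 * 100 = 50 %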

  • Break the link between IT0007 and IT0008 for Capacity Utilization Level

    Hi Gurus!
    We have a system that connects the field Employment Percentage from IT0007 with the Capacity Utilization Level from IT0008. That means every change to the employment percentage also changes the percentage in the capacity utilization level.
    Our idea is to disconnect those two fields and make them independent, because the HR department prefers to enter this information manually twice (once in each infotype).
    Do you have any idea how I should do that?
    Thank you in advance!
    Abel

    I know what it means... The problem is what they are using this infotype for.
    For them, IT0007 holds the person's schedule, and the percentage can change (<100% means a part-time employee)... but in IT0008 they ALWAYS want 100%, because for them this percentage is based on how many of the hours recorded in IT0007 the employee actually works.
    In other words, if the employee has a work schedule rule of 8 hours a day but is contracted part-time for only 4 hours a day, IT0007 will show an employment percentage of 50%, but IT0008 will have a capacity utilization level of 100%, because he works 100% of the hours recorded in IT0007 (the 4 hours in Daily Working Hours).
    I know it is a little confusing, but it's the way they are working now... probably they will correct their processes in the future, but for now we must adapt to their requirements and work this way.
    Any idea will be welcome.
    Edited by: Abel Raya on Jan 27, 2011 11:21 AM

  • Issues with parallel processing in Logical Database PCH and PNP

    Has anyone encountered issues when executing programs in parallel that utilize the logical database PCH or PNP?
    Our scenario is the following:
    We have 55 concurrent jobs that execute a program using the logical database PCH at a given time. We load the PCHINDEX table with the code below.
          wa_pchindex-plvar = '01'.
          wa_pchindex-otype = 'S'.
          wa_pchindex-objid_low = index_objid.
          APPEND wa_pchindex TO pchindex.
    We have seen instances where, when the program is executed in parallel with each process having its own range of position IDs, some positions are dropped or some are added that are outside the range of the given process.
    For example:
    process 1 has a range of positions ID's 1-10
    process 2 has a range of positions ID's 11-20
    process 3 has a range of positions ID's 21-30
    Process 3 drops position 25 and adds position 46.
    Has anyone faced a similar issue?
    Thanks for your help.
    Best Regards,
    Duke

    Hi,
    first of all, you should read [Using Parallel Execution|http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/usingpe.htm#DWHSG024] in the documentation for your version - almost all of these topics are covered there.
    1. According to my server specification, how much DOP can I specify? - It depends not only on the number of CPUs. More important factors are the settings of PARALLEL_MAX_SERVERS and PARALLEL_ADAPTIVE_MULTI_USER.
    2. Which option for setting parallelism is good - using 'alter table A parallel 4' or passing parallel hints in the SQL statements? - It depends on your application. When setting PARALLEL on a table, all SQL dealing with that table is considered for parallel execution. So if parallel access to that table is normal for your app, it's OK. If you want to use PX on a limited set of SQL, then hints or session settings are more appropriate.
    3. We have batch processing jobs which load data into the tables from flat files (24*7) using SQL*Loader. Is it possible to parallelize this operation, and are there any negative effects if parallel is enabled? - Yes; refer to the documentation.
    4. Query or DML - which one performs best with the parallel option? - Both may take advantage of PX (with some restrictions for parallel DML), and both may run slower than the non-PX versions.
    5. What are the negative issues if the parallel option is enabled? - 1) An object checkpoint happens before starting a parallel FTS (true for >=10gR2; before that version a tablespace checkpoint was used). 2) More CPU and memory resources are used with PX - this may be both a benefit and an issue, especially with concurrent PX.
    6. What should be taken care of while enabling the parallel option? - Read the documentation - it contains almost all you need to know. Since you are using RAC, you should not forget about the method of PX slave load balancing between nodes. If you are on 10g, refer to the INSTANCE_GROUPS/PARALLEL_INSTANCE_GROUPS parameters; if you are using 11g, then properly configure services.

  • Batch processing and parallelism

    I have recently taken over a project that is a batch application that processes a number of reports. For the most part, the application is pretty solid from the perspective of what it needs to do. However, one of the goals of this application is to achieve good parallelism when running on a multi-CPU system. The application does a large number of calculations for each report, and each report is broken down into a series of data units. The threading model is such that only, say, 5 report threads are running, with each report thread processing, say, 9 data units at a time. When the batch process executes on a 16-CPU Sun box running Solaris 8 and JDK 1.4.2, the application utilizes on average 1 to 2 CPUs, with some spikes to around 5 or 8 CPUs. Additionally, the average CPU utilization hovers around 8% to 22%. Another oddity is that when the system is processing the calculations, and not reading from the database, the CPU utilization drops rather than increases. So the goal of good parallelism is not being met right now.
    There is a database involved in the app, and one of the things that concerns me is that the DAOs are implemented oddly. For one thing, these DAOs are implemented as either singletons or classes with all static methods. Some of these DAOs also have a number of synchronized methods. Each of the worker threads that processes a piece of the report data makes calls to many of these static and single-instance DAOs. Furthermore, there is what I'll call a "master DAO" that handles the logic of what work to process next and writes the status of the completed work. This master DAO does not handle writing the results of the data processing. When each data unit completes, the "master DAO" is called to update the status of the data unit and get the next group of data units to process for this report. This "master DAO" is both completely static and every method is synchronized. Additionally, there are some classes that perform data calculations that are also implemented as singletons, and their accessor methods are synchronized.
    My gut tells me that having each thread call a singleton, or a series of static methods, is not going to help you gain good parallelism. Being new to parallel systems, I am not sure that I am right in even looking there. Additionally, if my gut is right, I don't know quite how to articulate the reasons why this design will hinder parallelism. I am hoping that anyone with experience in parallel system design in Java can lend some pointers here. I hope I have been able to be clear while trying not to reveal much of the finer details of the application :)

    (Quoting the description of the DAO design from the original post.)
    What I've quoted above suggests to me that what you are looking at may actually be good for parallel processing. It could also be an attempt that didn't come off completely.
    You suggest that these synchronized methods do not promote parallelism. That is true, but you have to consider what you hope to achieve from parallelism. If you have 8 threads all running the same query at the same time, what have you gained? More strain on the DB and the possibility of inconsistencies in the data.
    For example:
    Scenario 1:
    Say you have a DAO retrieval that is synchronized, and the query takes 20 seconds (for the sake of the example). Thread A comes in and starts the retrieval. Thread B comes in and requests the same data 10 seconds later. It blocks because the method is synchronized. When Thread A's query finishes, the same data is given to Thread B almost instantly.
    Scenario 2:
    The method that does the retrieval is not synchronized. When Thread B calls the method, it starts a new 20-second query against the DB.
    Which one gets Thread B the data faster while using fewer resources?
    The point is that it sounds like you have a bunch of queries whose results are being used by different reports. It may be that the original authors set it up to fire off a bunch of queries and then start the threads that build the reports. Obviously the threads cannot create the reports unless the data is there, so the synchronization makes them wait for it. When the data gets back, the report thread can continue on to get the next piece of data it needs; if that isn't back yet, it waits there.
    This is actually an effective way to manage parallelism. What you may be seeing is that the critical path of data retrieval must complete before the reports can be generated. The best you can do is retrieve the data in parallel and let the report writers run in parallel once the data they need is retrieved.
    I think this is what was suggested above by matfud.

  • Changing Capacity Utilization for selected days before and after a CTM Run

    I have a number of resources with their respective capacity utilizations. I am running CTM, which requires the resource utilization for all the resources to be at 100%.
    Can I do that with 2 capacity variants for the resource, with 100% and X% utilization, and switch them as and when required? My problem is that when I create a capacity variant from, say, date A to date B, the period between these dates comes up as blocked. Is there a way to correct this?

    Please go through the help documents:
    http://help.sap.com/saphelp_scm41/helpdata/en/92/a57337e68ac526e10000009b38f889/content.htm
    http://help.sap.com/saphelp_scm41/helpdata/en/2e/847337613fbc40e10000009b38f8cf/frameset.htm
