SORT_AREA_SIZE and HASH_AREA_SIZE

Guys,
Could someone explain the purpose of altering the SORT_AREA_SIZE and HASH_AREA_SIZE database parameters?
I came across the following lines of code in one of my new application's packages:
ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL;
ALTER SESSION SET SORT_AREA_SIZE = xxx;
ALTER SESSION SET HASH_AREA_SIZE = yyy;
Thanks,
Bhagat

Thank you Sankar!
I have to perform an ETL operation that extracts data from a remote data source, sorts it, and loads it into our database, which involves a huge amount of hashing and sorting.
So I'm wondering how to dynamically set the value of SORT_AREA_SIZE, HASH_AREA_SIZE, or PGA_AGGREGATE_TARGET. Since our database is on 9i, I understand that adjusting the PGA_AGGREGATE_TARGET parameter would do the job for me.
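PGA_AGGREGATE_TARGET is a dynamic parameter, so as a hedged illustration it can be raised online without a restart (the 200M figure below is purely illustrative, not a recommendation):
ALTER SYSTEM SET pga_aggregate_target = 200M;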
I read in an article that:
The DBA may wish to consider dynamically changing the pga_aggregate_target parameter when any one of the following conditions is true:
Whenever the value of the v$sysstat statistic “estimated PGA memory for one-pass” exceeds pga_aggregate_target, the pga_aggregate_target should be increased.
==> How much should the increase be?
  1  select name,value
  2  from
  3  v$pgastat
  4  order by
  5* value desc
SQL> /
NAME                                                                  VALUE
bytes processed                                                  1.1742E+10
aggregate PGA target parameter                                    25165824
maximum PGA allocated                                              20128768
total PGA allocated                                                17669120
aggregate PGA auto target                                          12653568
total PGA inuse                                                    11108352
global memory bound                                                 1257472
maximum PGA used for auto workareas                                  688128
cache hit percentage                                                    100
total freeable PGA memory                                                 0
PGA memory freed back to OS                                               0
maximum PGA used for manual workareas                                     0
over allocation count                                                     0
extra bytes read/written                                                  0
total PGA used for manual workareas                                       0
total PGA used for auto workareas                                         0
SQL> select name c1,cnt c2,decode(total, 0, 0, round(cnt*100/total)) c3
  2  from
  3  (
  4  select name,value cnt,(sum(value) over ()) total
  5  from
  6  v$sysstat
  7  where
  8  name like 'workarea exec%'
  9  );
Workarea
Profile                                    Count Percentage
workarea executions - optimal          1,098,557        ###
workarea executions - onepass                  0          0
workarea executions - multipass                0          0

If I have an Oracle job that runs every minute or so to check the specified values and dynamically adjust pga_aggregate_target, would that be the optimal way to go about it? (A rough sketch of such a job follows below.)
Many Thanks !!!!
Regards,
Bhagat
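A minimal sketch of the polling job described above, using DBMS_JOB since DBMS_SCHEDULER only arrived in 10g. The 25% growth step and the trigger on the cumulative "over allocation count" statistic are illustrative assumptions (a production version would track the delta between runs), and the job owner needs direct SELECT grants on the v$ views plus the ALTER SYSTEM privilege:

VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job  => :jobno,
    what => 'DECLARE
               oc  NUMBER;
               tgt NUMBER;
             BEGIN
               -- a non-zero over allocation count means the target was too small
               SELECT value INTO oc FROM v$pgastat
               WHERE name = ''over allocation count'';
               IF oc > 0 THEN
                 SELECT value INTO tgt FROM v$parameter
                 WHERE name = ''pga_aggregate_target'';
                 -- grow the target by an assumed 25% (value is in bytes)
                 EXECUTE IMMEDIATE
                   ''ALTER SYSTEM SET pga_aggregate_target = ''
                   || ROUND(tgt * 1.25);
               END IF;
             END;',
    interval => 'SYSDATE + 1/1440');  -- re-evaluate roughly every minute
  COMMIT;
END;
/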

Similar Messages

  • Use of sort_area_size and sort_area_retained_size

    Hi,
could you please tell me the use of sort_area_size and sort_area_retained_size,
where to apply these parameters, and whether these parameters will help queries execute faster?
    Thanks
    regards
    Sathish

    Hello Sathish,
sort_area_size specifies the maximum amount of memory Oracle will use for a sort. After the sort is complete, Oracle releases all of the memory allocated for the sort.
If you have large sorts and the memory allocated for sorts is low, the temporary tablespace will be used for sorting, resulting in I/O and therefore performance degradation. You might also require more sort area for inserts and updates of bitmap indexes.
sort_area_retained_size specifies the amount of memory that is retained after a sort run completes. This memory is not released to the operating system.
Where to apply these parameters: look at the Statspack report and check for sorts on disk and use of the temporary tablespace. If the I/O for disk sorts is high, you would benefit from a larger sort area.
Yes, it would help your queries run faster, if disk sorts are taking place.
    Regards
    Sudhanshu
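A hedged way to see whether disk sorts are happening at all, using the cumulative counters in v$sysstat (statistic names as documented; values accumulate since instance startup):

select name, value
from v$sysstat
where name in ('sorts (memory)', 'sorts (disk)', 'sorts (rows)');

If 'sorts (disk)' grows noticeably between two samples, a larger sort area (or PGA target) is worth considering.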

  • SORT_AREA_SIZE and PGA_AGGREGATE_TARGET

    I am using SAP R/3 kernel 4.6D with oracle 10g.
I found that SORT_AREA_SIZE is still defined in the init.ora file even though PGA_AGGREGATE_TARGET is defined, and since the database is 10g, WORKAREA_SIZE_POLICY defaults to AUTO.
I would like to know whether SORT_AREA_SIZE is specifically required for SAP (I mean, does SAP anywhere in its application issue a statement like "alter session set workarea_size_policy = manual;"?); otherwise automatic PGA management is supposed to manage the scenario.
Shall I remove SORT_AREA_SIZE from the init.ora?
    Regards,

    SORT_AREA_SIZE is no longer required in SAP environments as soon as you set WORKAREA_SIZE_POLICY = AUTO.
    Kind regards
    Martin
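If the instance actually runs from an spfile, a hedged sketch of removing the leftover setting so it disappears at the next restart (with a plain init.ora, simply delete the line instead):

ALTER SYSTEM RESET sort_area_size SCOPE=SPFILE SID='*';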

  • SORT_AREA_SIZE and blob update in plsql?

    Can someone please answer the below questions I have.
    - The tables t1 and t2 both have around 1.5 million rows.
    - Table t1 has PK defined on col1 and col2
    - Table t2 has unique index defined on col1 and col2
1) Does the query below use SORT_AREA_SIZE while it runs?
2) When explain-planned, the query below uses a NESTED LOOP and a FULL TABLE SCAN on table2. I am currently running it in a cursor FOR loop to update a BLOB column on table1, and it is taking a long time. What are the alternative ways I can tune this query, or tune the PL/SQL, to update the BLOB columns in table t1?
SELECT rownum AS rec, t1.col1, t1.col2, t2.col1, t2.col2,
       replace(replace(t2.col1, 'str1', 'strn'), 'str2', 'strn')
FROM table1 t1, table2 t2
WHERE t1.col1 = t2.col1
AND t1.col2 = t2.col2

EXPLAIN PLAN OUTPUT
OPERATION          OPTIONS       OBJECT_NAME   POSITION
SELECT STATEMENT                               408
COUNT                                          1
NESTED LOOPS                                   1
TABLE ACCESS       FULL          TABLE2        1
INDEX              UNIQUE SCAN   SYS_C000001   2

It's a little hard to read your post. If you use any HTML tags, then all bbcode tags are ignored; perhaps you included some HTML tags when you posted?
Anyway, the *_AREA_SIZE settings depend on your Oracle version. From 9i onwards they are not used if WORKAREA_SIZE_POLICY is set to AUTO, in which case a percentage of PGA_AGGREGATE_TARGET is used instead.
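A quick hedged check of which regime a 9i+ session is actually running under, in SQL*Plus:

show parameter workarea_size_policy
show parameter pga_aggregate_target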

  • Best settings for sort_area_size and temporary tablespace??

    Hi, I'm trying to tune a data extract from a database which extracts records created by selects which concatenate various fields to form a record less than 100 characters long. The records are then loaded into a reporting cube.
    When it is run there are 8 instances of the extract package running which write to a separate file. When the procedures are finished Unix sorts them and merges them together.
    In terms of volume I am returning approx 280,000 records per day going back 4 years, with each instance of the procedure given a different date range.
    Currently the average procedure takes 4 hours to complete, but the sort and merge takes a further 6 hours.
I would like to let Oracle do the sort, as even though Oracle and Unix sort the records slightly differently, this shouldn't impact the load into the reporting cube too much.
What should I set sort_area_size to? It currently stands at 1 MB;
the shared pool size is 200 MB.
    Any advice is welcome!!

>> What should I set the sort_area_size to? It currently stands at 1 MB. <<
How would we know? You provide absolutely no information on user load or on how much free memory your server has available. And are the sessions in question connected using dedicated sessions or shared server connections?
Based on 280K 100-byte rows per day for 4 years, the sort is going to disk. I would make sure my temp tablespace was striped across as many disk units as possible. And you should probably verify that you have at least 22 GB of sort space available within the database. (I figured 200 days of data per year, not 365, so if you have data for 7 days a week the temp estimate has to go up.)
    HTH -- Mark D Powell --
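For a rough sanity check of that arithmetic: 280,000 rows/day x ~100 bytes x roughly 800 working days is about 22 GB, so the sort will not fit in any realistic sort_area_size. A hedged query to see how much temp space is actually configured:

select tablespace_name,
       round(sum(bytes)/1024/1024/1024, 1) as gb
from dba_temp_files
group by tablespace_name;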

  • Sort_area_size & hash_area_size on 10g

I am confused as to whether I need to have sort_area_size and hash_area_size set in my 10.2.0.3 database. I know the Oracle documentation states the following:
"Oracle does not recommend using the SORT_AREA_SIZE parameter unless the instance is configured with the shared server option. Oracle recommends that you enable automatic sizing of SQL working areas by setting PGA_AGGREGATE_TARGET instead." The same statement is made for hash_area_size as well.
I have SHARED_SERVERS set to 1. So do I need to have sort_area_size and hash_area_size configured? Or will pga_aggregate_target take over anyway?
    And does anyone have any suggestions on sizing pga_aggregate_target?
    Oracle Doc says 20% of SGA. And I have seen recommendations where SGA should be sized 60-80% of total memory. Seems extreme.

    It is extreme. The SGA can be anywhere from 5% - 50% of the total memory depending on the size of memory, size of the database, type of database application (OLTP, Warehouse, DSS, OLAP), and user load.
    What else is on the machine besides the database is a big factor in how much memory can be allocated.
Metalink has documents on sizing the SGA. You can size the PGA based on the expected concurrent user session count X an average memory per user, plus some extra for the unusual. How much memory you need per user depends on the application code, tool set, and coding style.
    It is a bit of a guessing game. I like to start small but probably adequate allocation and add more resource as the load grows based on statistics.
HTH -- Mark D Powell --
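A hedged starting point for the "average memory per user" term mentioned above is to sample what current sessions actually consume, e.g. from v$process (these columns exist from 9i on):

select count(*)                              as processes,
       round(avg(pga_used_mem)/1024/1024, 1) as avg_used_mb,
       round(max(pga_max_mem)/1024/1024, 1)  as max_ever_mb
from v$process;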

  • SORT_AREA_SIZE Question

    Can someone please answer the below questions I have.
    - The tables t1 and t2 both have around 1.5 million rows.
    - Table t1 has PK defined on col1 and col2
    - Table t2 has unique index defined on col1 and col2
1) Does the query below use SORT_AREA_SIZE while it runs?
2) When explain-planned, the query below uses a NESTED LOOP and a FULL TABLE SCAN on table2. I am currently running it in a cursor FOR loop to update a BLOB column on table1, and it is taking a long time. What are the alternative ways I can tune this query, or tune the PL/SQL, to update the BLOB columns in table t1?
    SELECT rownum as rec, t1.col1, t1.col2, t2.col1, t2.col2,
    replace(replace(t2.col1, 'str1','strn'), 'str2','strn')
    FROM table1 t1, table2 t2
    where t1.col1 = t2.col1 and t1.col2 = t2.col2

    What version of Oracle are you on?
    How many rows does the query return?
    Nested loops generally perform poorly when the amount of rows returned is very large.
    How long does the query take when you run it in sqlplus?
    Are statistics on the table current?
SORT_AREA_SIZE and HASH_AREA_SIZE control the maximum amount of memory those operations could use in Oracle 8i and earlier. Since your query is using NESTED LOOPS, I would guess that increasing your SORT_AREA_SIZE parameter will not speed up the query.
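If the statistics turn out to be stale, a hedged refresh for the full-scanned table (standard DBMS_STATS call; adjust owner and options to taste):

EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'TABLE2', cascade => TRUE);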

• Find duplicate records without using group by and having

I know I can delete duplicate records using an analytic function. I don't want to delete the records; I want to look at the data to try to understand why I have duplicates. I am looking at tables that don't have unique constraints (I can't do anything about it). I have some very large tables, so I am trying to find a faster way to do this.
    for example
myTable (
col1 number,
col2 number,
col3 number,
col4 number,
col5 number)
My key columns are col1 and col2 (this is not enforced in the database). So I want to get all the records that have duplicates on these fields and put them in a table to review. Below is a standard way to do it, but it requires two full table scans of very large tables (many, many gigabytes; one is 150 GB and not partitioned), a sort, and a hash join. Even if I increase sort_area_size and hash_area_size, it takes a long time to run.
create table mydup as
select b.*
from (select col1, col2, count(*)
      from myTable
      group by col1, col2
      having count(*) > 1) a,
     myTable b
where a.col1 = b.col1
and a.col2 = b.col2
I think there is a way to do this without a join, by using rank, dense_rank, row_number, or some other method. When I google this, all I get is how to "delete them". I want to analyze them, not delete them.

    create table mytable (col1 number,col2 number,col3 number,col4 number,col5 number);
    insert into mytable values (1,2,3,4,5);
    insert into mytable values (2,2,3,4,5);
    insert into mytable values (3,2,3,4,5);
    insert into mytable values (2,2,3,4,5);
    insert into mytable values (1,2,3,4,5);
    SQL> ed
    Wrote file afiedt.buf
      1  select * from mytable
      2   where rowid in
      3  (select rid
      4      from
      5     (select rowid rid,
      6              row_number() over
      7              (partition by
      8                   col1,col2
      9               order by rowid) rn
    10          from mytable
    11      )
    12    where rn <> 1
    13* )
    SQL> /
          COL1       COL2       COL3       COL4       COL5
             1          2          3          4          5
             2          2          3          4          5
SQL>

Regards
    Girish Sharma
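Since the goal is to review duplicates rather than delete them, a hedged variant of the same analytic idea returns every duplicated row, including the first occurrence, in a single scan and without a self-join:

select *
from (select m.*,
             count(*) over (partition by col1, col2) dup_cnt
      from mytable m)
where dup_cnt > 1
order by col1, col2;

Against the sample data above this returns all four rows sharing the keys (1,2) and (2,2).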

  • [sql performance] inline view , group by , max, join

    Hi. everyone.
I have a question with regard to "group by" inline views, max values, joins, and SQL performance.
I will give you simple table definitions in order for you to understand my intention.
    Table A (parent)
    C1
    C2
    C3
    Table B (child)
    C1
    C2
    C3
C4 number (a sequence number)
1. c1, c2, c3 are the key columns of table A.
2. c1, c2, c3, c4 are the key columns of table B.
3. Table A is the parent table of table B.
4. The c4 column of table B is a serial number: it increases from 1 in steps of 1 within each (c1, c2, c3) combination.
The following is a simplified example of the SQL query:
select .................................
from table_a,
     (select c1, c2, c3, max(c4)
      from table_b
      group by c1, c2, c3) table_c
where table_a.c1 = table_c.c1
and table_a.c2 = table_c.c2
and table_a.c3 = table_c.c3
The real query is not as simple as the above; more tables come after the FROM clause.
Table A and table B are big tables, each with more than 100,000,000 rows.
The response time of this SQL is very, very slow, as everyone can expect.
Are there any solutions or SQL tips for the slow response time?
I am considering adding a new column to table B in order to identify the row which has the max serial number. At this point, I am not sure whether adding a column is a good thing in every respect.
I will be waiting for your advice, and every response will be appreciated even if it is not the solution.
Have a good day.
HO.

For such big sources, check that:
1) you use full scans, hash joins, or at least merge joins
2) you scan your source data as little as possible; in the best case, each necessary table only once (for example, not using an EXISTS clause that effectively scans a whole table via an index scan)
3) how much time you are spending on sorts and hash joins (either from v$session_longops directly or from some tool that visualizes this info); if you are using workarea_size_policy = auto, you can probably switch to manual for this particular select and set sort_area_size and hash_area_size big enough to do as few sorts on disk as possible
4) if you have enough free resources, i.e. a big box, you can probably consider using some parallelism
5) if your full scans are taking a long time, check your db_file_multiblock_read_count; increasing it for this select will probably give some gain
6) run a trace and check what you are waiting on
7) most probably your problem is I/O bound, so you may be able to do something on the OS side to make I/O faster
8) if your query is now optimized as much as you can, the disks are running like mad, and you are using all RAM, then that is probably the most you can get out of your box :)
9) if nothing helps, then you can start thinking about precalculation, either using your idea about a derived column or some materialized views (a sketch of an analytic rewrite follows below)
10) I hope you have a test box; do at least point (9) on it first and see whether it helps.
    Gints Plivna
    http://www.gplivna.eu
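On the rewrite idea itself: if the intent is to fetch the whole table_b row carrying the max c4 per key, a hedged analytic sketch avoids the separate GROUP BY pass (the column list is illustrative, matching the simplified schema above):

select a.c1, a.c2, a.c3, c.c4
from table_a a,
     (select b.*,
             row_number() over
               (partition by c1, c2, c3 order by c4 desc) rn
      from table_b b) c
where a.c1 = c.c1
and a.c2 = c.c2
and a.c3 = c.c3
and c.rn = 1;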

  • DBMS_REDEFINITION problem!!

    Hi,
We are trying to sub-partition (on the id field) an already partitioned (on a date field) table having 1.8 billion rows (approx. 255 GB).
We are using the DBMS_REDEFINITION package to do so. Initially it failed due to a shortage of space in temp (we increased the size to 320 GB). After doing so, the temp space exhaustion problem was eliminated.
    Now we are facing the following problem:-
    ERROR at line 1:
    ORA-12008: error in materialized view refresh path
    ORA-04030: out of process memory when trying to allocate 65560 bytes
    (klcliti:kghds,kdblc memory)
    ORA-06512: at "SYS.DBMS_REDEFINITION", line 52
    ORA-06512: at "SYS.DBMS_REDEFINITION", line 1646
    ORA-06512: at line 2
We have increased the memory target to 31 GB, but with no gain.
    We have the following ulimit settings:-
    !ulimit -a
    core file size (blocks, -c) 0
    data seg size (kbytes, -d) unlimited
    scheduling priority (-e) 0
    file size (blocks, -f) unlimited
    pending signals (-i) 299007
    max locked memory (kbytes, -l) 32
    max memory size (kbytes, -m) unlimited
    open files (-n) 65536
    pipe size (512 bytes, -p) 8
    POSIX message queues (bytes, -q) 819200
    real-time priority (-r) 0
    stack size (kbytes, -s) 32768
    cpu time (seconds, -t) unlimited
    max user processes (-u) 16384
    virtual memory (kbytes, -v) unlimited
    file locks (-x) unlimited
What needs to be done to redefine the table?
Is there any other method which does the same thing more quickly?
    Please suggest.
    Thanks in advance,
    Regards,
    Amit

    Hi
You may try setting workarea_size_policy = MANUAL and then setting sort_area_size and hash_area_size to a generous size at session level before doing the dbms_redefinition:
Alter session set workarea_size_policy = MANUAL;
Alter session set sort_area_size = 209715200;
Alter session set hash_area_size = 209715200;
Then run dbms_redefinition.
Remember: the untunable part of the PGA can make a process exceed the per-process PGA threshold and go all the way to blowing up physical RAM, causing ORA-4030.

  • PGA STUCK MY WORIKG PLZ HELP

Hey all, my PGA returns this value when I issue the following:
select 1048576 + a.value + b.value pga_size
from v$parameter a, v$parameter b
where a.name = 'sort_area_size'
and b.name = 'hash_area_size';
Output:
1245184
I copied this query from an article which shows that every connected Oracle session will use about 1.1857 megabytes of RAM for the Oracle PGA.
When I issue show parameter area_size:
    NAME TYPE VALUE
    bitmap_merge_area_size integer 1048576
    create_bitmap_area_size integer 8388608
    hash_area_size integer 131072
    sort_area_size integer 65536
    workarea_size_policy string MANUAL
But I read in an article that we reserve 2 MB of Windows overhead, yet my PGA is smaller than 2 MB. Please tell me why it is so small. My computer has 2 GB of memory; what do you suggest?

    First of all I strongly suggest you adhere to basic Netiquette and stop typing subject lines in all CAPS.
    It is by many, including me, considered shouting and yelling, and consequently termed as rude.
    Secondly you really should start reading documentation.
The PGA does NOT consist of just sort_area_size and hash_area_size; it is a separate, per-process memory area, so your query is incorrect.
For PGA_AGGREGATE_TARGET to be used, workarea_size_policy must be set to AUTO, not to MANUAL.
    You would need to read the article more carefully, and/or start reading docs.
    So far all of your questions can be easily answered by reading the docs. If you don't want to read the docs, Oracle is not for you.
    Please be prepared (especially if you continue yelling and shouting) you are going to be ignored, at least by me.
    Sybrand Bakker
    Senior Oracle DBA

  • Values ignored if WORK_AREA_POLICY is AUTO

I have a doubt about the significance of the sort_area_size and hash_area_size parameters when WORKAREA_SIZE_POLICY is set to AUTO. As I understand it, the values of sort_area_size and hash_area_size are ignored when WORKAREA_SIZE_POLICY is set to AUTO. My concern is to make sure that each executing Oracle session has enough memory (RAM).
    I hope, my question is clear.
    Please, help in solving the doubt.
    regards

    I am going to assume you are using Oracle 10g.
If you set PGA_AGGREGATE_TARGET to a value and WORKAREA_SIZE_POLICY to AUTO, the sort and hash area size parameters are ignored.
    To find out if the application has adequate PGA memory, you can do one or all of the following
    1) Get an AWR report at normal / peak work loads and look at PGA statistics
    2) Run queries on v$pga_target_advice and v$pga_target_advice_histogram to find out the optimal settings.
    3) Generate 10046 traces for the session and view the trace file for sorts or joins that typically use the PGA.
    As long as you don't have multipass executions, the performance should not degrade substantially if that is your concern. One pass executions in many cases are unavoidable although optimal executions are most desired. Use v$sql_workarea and v$sql_workarea_histogram for this information.
    Increase your PGA memory to the max you can allocate on your system and review the results. Do some iterations with different settings and find the best setting for your application.
    A good source of information is the Oracle 10g Perf Tuning guide, Ch 7: Memory Config & Use
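A hedged example of reading v$sql_workarea_histogram, as suggested above, to see how many executions were optimal, one-pass, or multi-pass per work-area size band (this query pattern appears in the Oracle performance tuning guide):

select low_optimal_size/1024        as low_kb,
       (high_optimal_size + 1)/1024 as high_kb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
from v$sql_workarea_histogram
where total_executions <> 0;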

  • About PGA

    Hi All,
    According to my knowledge on PGA
    1) PGA is a memory structure but not part of SGA
    2) PGA is allocated for each session
    3) PGA is allocated based on the value of PGA_AGGREGATE_TARGET parameter
    My doubt is
Like the SGA, does the PGA also occupy a fixed amount of physical memory?
If so, how much memory is allocated to each session? (Equal to the value of PGA_AGGREGATE_TARGET per session? In that case all memory would be exhausted very soon.)
    Please explain me how PGA is allocated to each session.
    I am using 10.2.0.4
    Thanks
    Prasanna

    Hi kpskram,
    According to your knowledge on PGA;
    1) You are right.
    2) You are right.
3) PGA_AGGREGATE_TARGET is the target upper bound to which the total PGA can grow.
The amount for the SGA is allocated automatically at instance startup, but the PGA is not 100% allocated; it shrinks and grows as users log off and on. It covers the sort area; see also the parameters sort_area_size and hash_area_size, but if you set PGA_AGGREGATE_TARGET you do not need those.
There is a default amount allocated to each session's PGA; if I remember correctly it is something like 5 megabytes?
    Ogan
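Rather than guessing a per-session figure, a hedged query that shows what each session currently holds (the statistic name is as documented in v$statname):

select s.sid, s.username,
       round(st.value/1024/1024, 1) as pga_mb
from v$sesstat st, v$statname n, v$session s
where n.statistic# = st.statistic#
and s.sid = st.sid
and n.name = 'session pga memory'
order by st.value desc;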

  • DOUBT REGARDING SORT_AREA_SIZE

    Hello,
I am a junior DBA. When I executed a large query which uses indexes, many disk sorts were happening, so I want to increase SORT_AREA_SIZE.
If I want to increase BUFFER_CACHE_SIZE, I simply take the assistance of V$DB_CACHE_ADVICE to learn the next efficient value for BUFFER_CACHE_SIZE.
Is there anything similar for SORT_AREA_SIZE? I am not sure by how much I should increase the SORT_AREA_SIZE parameter. Can you please assist me with this?
    Regards,
    Vamsi

    Hi user581473,
if you are running on 9i or above, SORT_AREA_SIZE will not be effective at all as long as you have not set WORKAREA_SIZE_POLICY to MANUAL.
It defaults to AUTO!
As of 9i we have PGA_AGGREGATE_TARGET, which is used as a pool for the PGA memory needed for work areas.
Only if you set WORKAREA_SIZE_POLICY to MANUAL will the parameters SORT_AREA_SIZE, CREATE_BITMAP_AREA_SIZE, and HASH_AREA_SIZE be effective.
Otherwise the optimizer assumes that every sort can get 5% of PGA_AGGREGATE_TARGET for serial execution and 30% for parallel execution of the in-memory sort, and creates an execution plan based on this assumption.
This applies only to dedicated servers in 9i, and to shared servers as well as of 10g.
If you are running a large operation, you should set WORKAREA_SIZE_POLICY to MANUAL at session level and also adjust your *_AREA_SIZE parameters for that session.
That way reports and batch jobs use manual PGA management and all the others (OLTP) can use the default.
    Hope it helps,
    =;-)
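The closest analogue of V$DB_CACHE_ADVICE for this purpose is V$PGA_TARGET_ADVICE (populated when PGA_AGGREGATE_TARGET is set); a hedged example:

select round(pga_target_for_estimate/1024/1024) as target_mb,
       estd_pga_cache_hit_percentage            as est_hit_pct,
       estd_overalloc_count
from v$pga_target_advice
order by pga_target_for_estimate;

Pick the smallest target where the estimated hit percentage levels off and estd_overalloc_count is zero.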

  • Sort_area_size & pga_aggregate_target

    Hi,
There is an ETL load job which is failing frequently because of a TEMP tablespace issue. I have already extended the temp tablespace from about 6 GB to 37 GB.
The code is definitely an issue; however, we cannot possibly change it.
I wanted to know: what is the difference between sort_area_size and pga_aggregate_target? Can someone please explain these and the scenarios in which each parameter needs to be set?
    Current Environment:
    Oracle Database: 10.2.0.3
    Platform: HP-UX
    SQL> show parameter sort_area
    NAME TYPE VALUE
    sort_area_retained_size integer 1524288
    sort_area_size integer 1524288
    SQL> show parameter pga_aggregate
    NAME TYPE VALUE
    pga_aggregate_target big integer 524288000
    SQL> select sum(bytes)/1024/1024/1024 db_size_gb from dba_data_files;
    DB_SIZE_GB
    108.447266
    SQL> select sum(bytes)/1024/1024/1024 db_size_gb from v$tempfile;
    DB_SIZE_GB
    37.109375
    SQL>
    Regards
    Sudhanshu

pga_aggregate_target, when set, automatically sets workarea_size_policy = auto. It is used for sorts, hash joins, and similar operations, but it applies at the whole database level, and a single session cannot use more than ~5% of it. So if you have one session that performs many memory-intensive operations, it might be very useful to set workarea_size_policy = manual and set big sort_area_size/hash_area_size values. Available memory VERY MUCH affects large sorts and hash joins; for an exact example of how much, you can look at my article Long running Operations in Oracle (entries in v$session_longops) at http://www.gplivna.eu/papers/v$session_longops.htm, under the chapter "Hash joins can either fly or crawl".
    Gints Plivna
    http://www.gplivna.eu
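To make the ~5% figure concrete for this system: with pga_aggregate_target = 524288000 (500 MB), a single serial work area tops out around 25 MB, which a multi-gigabyte ETL sort will blow straight through. A hedged sketch of the session-level override described above, with purely illustrative sizes:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 209715200;  -- ~200 MB per sort run (illustrative)
alter session set hash_area_size = 209715200;  -- ~200 MB per hash join (illustrative)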
