Does analyzing all tables improve performance?

Will the SELECT, INSERT, UPDATE and DELETE commands work faster if I analyze the main tables every month?

It's unlikely that you will see month-on-month improvement. Once the tables get to a certain size the stats don't really get stale (if you add 10,000 records a month to a table with 10,000,000 rows, there's not really a lot of difference).
So the advice is to gather stats for your tables once. Then use the monitoring option for tables that you think may be volatile and refresh their stats if they have grown stale. Note that in 10g the monitoring option has been deprecated because DBMS_STATS now supports the option to explicitly refresh stale statistics.
Cheers, APC
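A minimal sketch of that approach (the schema name SCOTT is a placeholder; options values are standard DBMS_STATS usage):

-- Gather statistics once for the whole schema (SCOTT is a hypothetical schema name)
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', cascade => TRUE);

-- Later, refresh only the statistics Oracle has flagged as stale
-- (relies on table monitoring, which is on by default from 10g onwards)
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', options => 'GATHER STALE', cascade => TRUE);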

Similar Messages

  • Analyze all tables of one schema

    hi,
    the analyze table command is:
    analyze table hr.employees compute statistics;
    Now I want to analyze all tables in the hr schema.
    How can I do it in an easy way?
    thanks,
    Zhiwei.
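    One easy way (a sketch, not the only option) is to gather statistics for the whole schema in one call instead of analyzing table by table:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR', cascade => TRUE);
    -- cascade => TRUE also gathers index statistics, roughly what
    -- ANALYZE TABLE ... COMPUTE STATISTICS did per table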

    > the only way to disable monitoring (globally) is to set statistics_level=basic.
    Exactly.
    SYS@db102 SQL> select table_name,monitoring from dba_tables
      2  where table_name like 'TEST%';
    TABLE_NAME                     MON
    TEST_P                         YES
    TEST_MV                        YES
    TEST_F                         YES
    TEST11                         YES
    TEST                           YES
    TEST_TABLE                     YES
    TEST_PART                      YES
    TEST_LOB                       YES
    TEST_EMP                       YES
    TESTDECIMAL                    YES
    TEST3                          YES
    TEST1_DL2                      YES
    TEST01                         YES
    13 rows selected.
    SYS@db102 SQL> alter system set statistics_level=basic;
    System altered.
    SYS@db102 SQL> select table_name,monitoring from dba_tables
      2  where table_name like 'TEST%';
    TABLE_NAME                     MON
    TEST_P                         NO
    TEST_MV                        NO
    TEST_F                         NO
    TEST11                         NO
    TEST                           NO
    TEST_TABLE                     NO
    TEST_PART                      NO
    TEST_LOB                       NO
    TEST_EMP                       NO
    TESTDECIMAL                    NO
    TEST3                          NO
    TEST1_DL2                      NO
    TEST01                         NO
    13 rows selected.
    SYS@db102 SQL>
    That's what I meant by "in 10g you have to do it explicitly".
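    A quick sketch of restoring the default behaviour afterwards (TYPICAL re-enables monitoring for all tables):
    alter system set statistics_level=typical;
    -- MONITORING shows YES again in dba_tables once statistics_level is back to TYPICAL (or ALL)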

  • Analyzing all tables in schema

    Hello everyone,
    I used the command below to analyze all tables in the schema:
    EXEC DBMS_STATS.gather_schema_stats (ownname => 'CONTRACT', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
    When I look at the tables in dba_tables, none of them has its LAST_ANALYZED date changed to today. But when I ran the command below
    EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CONTRACT', tabname => 'CONT_NAME', method_opt => 'FOR ALL COLUMNS', granularity => 'ALL', cascade => TRUE, degree => DBMS_STATS.DEFAULT_DEGREE);
    I do see LAST_ANALYZED changed to today in dba_tables.
    If I need LAST_ANALYZED refreshed for all tables, do I need to run the above command for every table? There are more than 700 tables for this application.
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for IBM/AIX RISC System/6000: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    user3636719 wrote:
    EXEC DBMS_STATS.gather_schema_stats (ownname => 'CONTRACT', cascade => true, estimate_percent => dbms_stats.auto_sample_size);
    and
    EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'CONTRACT', tabname => 'CONT_NAME', method_opt => 'FOR ALL COLUMNS', granularity => 'ALL', cascade => TRUE, degree => DBMS_STATS.DEFAULT_DEGREE);
    are fundamentally different; you cannot compare them. In gather_schema_stats, Oracle used mostly defaults, decided that no tables needed new stats collected, and so didn't do anything. In the second call you changed method_opt, granularity, degree etc. from their default values (as set in your db perhaps), so the database went ahead and collected stats.
    You need to look up the manual and try to understand the default and non-default behavior of the parameters, and then make an educated decision. Changing stats randomly is generally not a great idea.
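    If the goal is simply to get LAST_ANALYZED refreshed on every table, one option (a sketch only, not necessarily the right policy) is to force a full gather rather than relying on staleness detection:
    EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'CONTRACT', options => 'GATHER', cascade => TRUE, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    -- options => 'GATHER' collects stats on all objects in the schema,
    -- while 'GATHER STALE' / 'GATHER AUTO' only touch objects Oracle considers stale or empty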

  • Improve performance with union all

    Hello there,
    Oracle Database 11g Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    SQL> show parameter optimizer
    ORA-00942: table or view does not exist
    I have the following query, using the following input variables:
    - id
    - startdate
    - enddate
    The query has the following format
    - assume that the number of columns is the same
    - t1 != t3 and t2 != t4
    select ct.*
    from (
      select t1.*
      from   tabel1 t1
        join tabel2 t2
          on t2.key = t1.key
      union all
      select t3.*
      from   tabel3 t3
        join tabel4 t4
          on t4.key = t3.key
    ) ct
    where ct.id = :id
      and ct.date >= :startdate
      and ct.date < :enddate
    order by ct.date
    It is performing really slowly; after the first read it performs fast.
    I tried the following thing, which was actually even slower!
    with t1c as (
      select t1.*
      from   tabel1 t1
        join tabel2 t2
          on t2.key = t1.key
      where t1.id = :id
        and t1.date >= :startdate
        and t1.date < :enddate
    ),
    t2c as (
      select t3.*
      from   tabel3 t3
        join tabel4 t4
          on t4.key = t3.key
      where t3.id = :id
        and t3.date >= :startdate
        and t3.date < :enddate
    )
    select ct.*
    from (
      select *
      from   t1c
      union all
      select *
      from   t2c
    ) ct
    order by ct.date
    So in words, I have a 'union all' construction reading from different tables with matching columns 'id' and 'date'.
    How can I improve this? Can it be improved? If you do not know the answer but maybe have a suggestion, I will be happy as well!
    Thanks in advance!
    Kind regards,
    Metroickha

    >
    So in words, I have a 'union all' construction reading from different tables with matching columns 'id' and 'date'.
    How can I improve this? Can it be improved? If you do not know the answer but maybe have a suggestion, I will be happy as well!
    >
    If you want to improve on what Oracle is doing you first need to know 'what Oracle is doing'.
    Post the execution plans for the query that show what Oracle is doing.
    Also post the DDL for the tables and indexes and the record counts for the tables and ID/DATE predicates.
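    For reference, a minimal way to capture a plan to post (a sketch; DBMS_XPLAN is standard in 11g):
    EXPLAIN PLAN FOR
    <your union-all query here, with the same bind variables>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- or, after actually running the query (needs the gather_plan_statistics hint or statistics_level=all):
    -- SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));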

  • Will Partitioning improve performance on Global Temporary Table

    Dear Guru,
    In one complicated module I am using a Global Temporary Table (GTT) for intermediate processing, i.e. it stores the required data in it, but the row count grows to 1,000,000 - 2,000,000.
    Can Partitioning or Indexing on Global Temporary Table improve the performance?
    Thanking in Advance
    Sanjeev

    Sounds like an odd use of a GTT to me, but I'm sure there are valid reasons...
    Presumably you are going to be processing all of these rows in some way? In which case I can't see how partitioning, even if it's possible (And I don't think it is) would help you.
    Indexes - sure, that might help, but again, if you are reading all/most of these rows anyway they might not help or even get used.
    Can you give a bit more detail about exactly what you are doing?
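    For what it's worth, an index on a GTT is created like any other (a sketch with hypothetical names); partitioning a GTT is not supported:
    CREATE GLOBAL TEMPORARY TABLE gtt_work (
      id      NUMBER,
      payload VARCHAR2(100)
    ) ON COMMIT PRESERVE ROWS;
    CREATE INDEX gtt_work_ix ON gtt_work (id);  -- the index data is itself session-private, like the GTT rows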

  • How improve performance on access path TABLE ACCESS BY INDEX ROWID ?

    I have a table MOVEMENT with about 26 million rows.
    select rowid from movement xxx
    where
    xxx.sTransType > 0
    AND xxx.sDevice < 1000
    AND xxx.sDevice >= 0
    AND (bitand(xxx.sSaleFlag,1) = 0 AND bitand(xxx.sSaleFlag,4) = 0)
    AND xxx.sArtClassRef < 100
    and xxx.tActionTime BETWEEN TO_DATE('13-05-2011 08:08:34', 'dd-mm-yyyy hh24:mi:ss') AND to_date('13-05-2011 14:08:34', 'dd-mm-yyyy hh24:mi:ss') ;
    PLAN_TABLE_OUTPUT
    Plan hash value: 679628763
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 34 | 10102 (1)| 00:02:02 |
    |* 1 | TABLE ACCESS BY INDEX ROWID| MOVEMENT | 1 | 34 | 10102 (1)| 00:02:02 |
    |* 2 | INDEX RANGE SCAN | MOVATIME_IX | 18489 | | 51 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    1 - filter("XXX"."SARTCLASSREF"<100 AND BITAND("XXX"."SSALEFLAG",1)=0 AND
    BITAND("XXX"."SSALEFLAG",4)=0 AND "XXX"."STRANSTYPE">0 AND "XXX"."SDEVICE"<1000
    AND "XXX"."SDEVICE">=0)
    2 - access("XXX"."TACTIONTIME">=TO_DATE('2011-05-13 08:08:34', 'yyyy-mm-dd
    hh24:mi:ss') AND "XXX"."TACTIONTIME"<=TO_DATE('2011-05-13 14:08:34', 'yyyy-mm-dd
    hh24:mi:ss'))
    there is index on tActionTime - MOVATIME_IX
    This query returns 12,203 rows, so I would have expected to see that number in the plan table, in the row with Id 1, column Rows.
    Final question: is it possible to optimize this query, and what are the next steps to do it?
    Thanks.

    >
    I thought that access path via ROWID's is the fastest way to get row
    >
    It is the fastest way to get the row - FROM THE TABLE.
    But the ROWIDs have to be gotten from the index. That is what the INDEX RANGE SCAN is doing. It is getting the ROWIDs needed and then the TABLE ACCESS BY INDEX ROWID is getting the rows.
    >
    I'm still confused by the COST values; TABLE ACCESS BY INDEX ROWID has a 200-times higher cost than INDEX RANGE SCAN,
    >
    The index entries for a range scan are in order so they are very compact. The actual rows might be all over the place.
    Have you ever used a library? Not the online ones - I mean the old-fashioned kind that actually has books printed on paper.
    If the librarian asks you, her helper, to go get all books whose title begins with the letter 'B' how would you do it?
    You could go back to the stacks and look at every book on every shelf for books with titles starting with 'B'. That is the same as a FULL TABLE SCAN.
    Or you could go to the card catalog, pull out the drawer (or drawers) that has 'B' on the label and look at the information on the card. Part of that information is the location of the actual book: section, stack; that is similar to the ROWID.
    The card catalog might get you to the right stack of books; then you have to search the stack sequentially to look for the book by name.
    A ROWID will get Oracle to the right block but then it has to find the right row.
    So the cost of getting ROWIDs from an index using a RANGE SCAN (where values are scanned in order) is a lot cheaper than actually getting the rows. The first two index entries needed might be right next to each other in order but the rows themselves might be far apart on the disk.
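    One thing worth testing (a sketch only; whether it helps depends on the data distribution): extend the index so that more of the filter predicates can be checked in the index before visiting the table, e.g.
    CREATE INDEX movatime_dev_ix ON movement (tActionTime, sDevice, sArtClassRef);
    -- hypothetical index; filter columns included in the index are evaluated
    -- during the range scan, so fewer TABLE ACCESS BY INDEX ROWID visits are needed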

  • Gather_schema_stats not picking up all tables

    My Oracle version is 9.2.0.5 and OS is unix on AIX 4.3
    I am trying to analyze all tables and indexes in my current schema using:
    execute dbms_stats.gather_schema_stats(ownname=>null, cascade=>true);
    However, out of the 29 tables that the current user owns, only 23 have LAST_ANALYZED set to the date when this was run. 6 of the tables don't have a LAST_ANALYZED date.
    What may be the reason that those 6 tables were skipped during analyze? They definitely belong to the same user, and they are not partitioned tables. Is there any place I can check?
    Thanks

    You hit Bug#2453682
    As a workaround, use dbms_stats.gather_table_stats on the object tables
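    A rough sketch of that workaround (looping over whatever tables were skipped; the filter on LAST_ANALYZED is an assumption):
    BEGIN
      FOR t IN (SELECT table_name FROM user_tables WHERE last_analyzed IS NULL) LOOP
        DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => t.table_name, cascade => TRUE);
      END LOOP;
    END;
    /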

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

           
    Hi,
    I am working on a distributed application with Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: the module pulls a message from the queue, processes it, then deletes it from the queue. The module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine in that they meet the logical requirements. A pretty typical queue scenario.
    Now, coming to the problem statement: since the queue is expected to be heavily loaded most of the time, I am trying to speed up processing over the whole message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I did multiple cycles of profiling, each time improving the identified "HOT" path/function.
    It all came down to a point where the Azure Queue pull and delete are the two most time-consuming calls. I improved the pull by batch-pulling 32 messages at a time (the maximum message count I can pull from an Azure queue at once, at the time of writing this question), and this paid off by reducing processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel so as to improve overall performance.
    pseudo code:
    // AzureQueue class encapsulates calls to Azure Storage Queue.
    // Assume nothing fancy inside, vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        // DoSomething does some background processing
        try { DoSomething(bMessage); }
        catch { /* log exception */ }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling results show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is eaten up by the Azure Queue delete call itself. Is there a better, faster way to perform delete/bulk delete etc.?
    With the implementation mentioned here, I get a speed of close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete overload I am calling? As of now, the queue has an overloaded method for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter one here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification in question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to use a parallel delete at the same time you run the work in DoSomething.  If DoSomething fails, add the message back into the queue.  This won't work for every application, and work that was in the queue
    near the head could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, make a threadpool queued delete after the work was successful.  Fire and forget.  However, if you're loading the processing at 25/sec, and 90% of time sits on the delete, you'd quickly accumulate delete calls for the threadpool until you'd
    never catch up.  At 70-80% duty cycle this may work, but the closer you get to always being busy could make this dangerous.
    I wonder if calling the delete REST API yourself may offer any improvements.  If you find the delete sets up a TCP connection each time, this may be all you need.  Try to keep the connection open, or see if the REST API can delete more at a time
    than the SDK API can.
    Or, if you have the funds, just have more VM instances doing the work in parallel, so the first machine handles 25/sec, the second at 25/sec also - and you just live with the slow delete.  If that's still not good enough, add more instances.
    Darin R.

  • Tip: Improving The Performance Of OLAP DML Table Inserts

    Quick Oracle OLAP Tip:
    If you need to write the contents of a variable, or a group of variables, to a relational table, you would normally use the SQL INSERT command. Normal practice is to loop round all of the variables' dimension values, inserting the variable values into the relational tables one by one, until the variable has been completely loaded into your database table.
    Oracle 9i OLAP however introduces two new commands, SQL PREPARE and SQL EXECUTE (http://download-west.oracle.com/docs/cd/B10501_01/olap.920/a95298/sql5.htm#1027902), that allow us to prepare our INSERT statement in such a way that it uses bind variables to pass values to the Oracle tables. Bind variables are generally a 'good thing' and reduce the amount of time Oracle has to spend parsing your SQL insert statements. In addition, you can specify additional options with SQL PREPARE to specify 'direct path' insertions (quicker as they bypass the normal SQL engine and directly load data into Oracle blocks), nologging (to eliminate redo log generation), and to nominate individual partitions to load data into. It's worth noting that there's an error in the current OLAP DML documentation that suggests that any OLAP DML insert operation into an Oracle table locks the entire table, preventing other AW processes from inserting into the table until you commit. This is actually incorrect, and full-table locking only occurs if you use the DIRECT=YES option, which locks the table in the same way that SQL*Loader locks the table as they both use the Direct Path API.
    However, an even better solution than using SQL EXECUTE and SQL PREPARE is to use the OLAP_TABLE feature in Oracle 9i (http://download-west.oracle.com/docs/cd/B10501_01/olap.920/a95295/olap_tab.htm#73729) to create a view against your AW variable, then use this view as the source for an "INSERT INTO table SELECT * FROM source" SQL statement, optionally using the /*+ APPEND */ hint if you want to carry out direct path insertions. By using OLAP_TABLE and having the SQL engine insert multiple variable values into our target table, rather than having an OLAP DML program loop through the variable and carry out multiple single-row insertions, we were able to increase our write performance by an order of magnitude compared to our earlier SQL INSERT approach. One thing to bear in mind though is that, if you are running many copies of the program concurrently, using direct path insertions may well cause lock contention, as each process will obtain an exclusive table lock while the direct insertion takes place. In the case of concurrent processes, it may be better to use conventional path insertions (but still use SQL PREPARE and EXECUTE, or OLAP_TABLE) as these only require row-level exclusive locks.
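    A rough illustration of that second approach (assuming a relational view AW_SALES_VIEW has already been defined over the AW variable with OLAP_TABLE, and a target table SALES_FACT exists):
    INSERT /*+ APPEND */ INTO sales_fact
    SELECT * FROM aw_sales_view;
    COMMIT;
    -- APPEND requests a direct-path insert; omit the hint for conventional-path
    -- inserts if several processes load the table concurrently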

    You can use the execution plan
    http://stackoverflow.com/questions/7359702/how-do-i-obtain-a-query-execution-plan
    and add indexes.
    Indexing according to the fields your queries ask for can improve performance greatly!
    You can use the statistics for building indexes:
    http://www.mssqltips.com/sqlservertip/2979/querying-sql-server-index-statistics/

  • Improve Performance of Dimension and Fact table

    Hi All,
    Can any one explain me the steps how to improve performance of Dimension and Fact table.
    Thanks in advance....
    redd

    Hi!
    There is much to be said about performance in general, but I will try to answer your specific question regarding fact and dimension tables.
    First of all, try to compress as many requests as possible in the fact table and do that regularly.
    Partition your compressed fact table physically based on for example 0CALMONTH. In the infocube maintenance, in the Extras menu, choose partitioning.
    Partition your cube logically into several smaller cubes based on for example 0CALYEAR. Combine the cubes with a multiprovider.
    Use constants on infocube level (Extras->Structure Specific Infoobject properties) and/or restrictions on specific cubes in your multiprovider queries if needed.
    Create aggregates of subsets of your characteristics based on your query design. Use the debug option in RSRT to investigate which objects you need to include.
    To investigate the size of the dimension tables, first use the test in transaction RSRV (Database Information about InfoProvider Tables). It will tell you the relative sizes of your dimensions in comparison to your fact table. Then go to transaction DB02 and conduct a detailed analysis on the large dimension tables. You can choose "table columns" in the detailed analysis screen to see the number of distinct values in each column (characteristic). You also need to understand the "business logic" behind these objects. The ones that have low cardinality, that is, relate to each other, should be located together. With this information at hand you can understand which objects contribute the most to the size of the dimension and separate the dimension.
    Use line item dimension where applicable, but use the "high cardinality" option with extreme care.
    Generate database statistics regularly using process chains or (if you use Oracle) schedule BRCONNECT runs using transaction DB13.
    Good luck!
    Kind Regards
    Andreas

  • What would be a sensible way to setup my Gigabit network? Will using both NICs improve performance at all?

    Hi everyone,
    I work in a small  digital agency, we do some post production, animation and some web dev.
    I'm looking to improve slightly our networking setup as it is a little disorganised and not always as fast as it should be. In the long term we're looking at spending sensible money on faster centralised storage, fibre etc, but for another year or so I need to do something more affordable based around what we already have.
    In brief, we have currently one Mac Pro Da Vinci grading workstation with attached highspeed storage RAID, and four Mac Pro edit machines for Premiere/After Effects. There are also various other machines and NAS devices, printers etc.
    Currently everything is on one unmanaged Gigabit switch. Each edit machine has a 4 disk RAID set of SATA disks, two of which are in what used to be the SuperDrive bays before we took them out. There is an SSD for the boot drive, and then one empty slot for transfering data in and out on other disks etc. When a project is ready to grade it's moved over to the storage attached to the grading machine over the network. That all works fairly well for us at the moment.
    However, with increasing frequency we're now working on projects with a few animators and editors at once, working over the network on the same files rendering them to one or other Mac Pro machine. I was wondering whether there is any way to improve our network architecture to make this a little faster (I know that ultimately we need a more expensive multi user centralised storage system of some sort, but in the short term..).
    I was considering a couple of options-
    1. Create a content only network on one NIC of each Mac Pro, into a dedicated switch, with jumbo frames turned on identically configured across all machines. Then on the second NIC join them into the 'everything else' network so they can access printers, the Internet, the slower NAS admin shares etc.
    or
    2. Configure a virtual ethernet device on each Mac aggregating the two NICs together (is that LACP?), taking both ethernet lines into the switch, and also using that switch for every other device (but connected in the normal manner).
    Is either of those a sensible way to go? I understand that there isn't any speed increase over link aggregated NICs when only doing one transfer, but is there any performance benefit at all if the machine is getting different files from different machines simultaneously, or if two machines are rendering over the network to one?
    I currently have one unmanaged Netgear JGS516 switch and one TP Link SG1024DE half-smart switch (has some useful functions but not what you'd expect from a more expensive model). Happy to buy a different switch if there's more management to be done, though I don't want to spend a huge amount at this moment.
    Any advice whatsoever gratefully received, I could be way off here
    Thanks all.

    Thanks Grant,
    My knowledge of Jumbo Frames is limited.
    A lot of what I know comes from here- http://www.smallnetbuilder.com/lanwan/lanwan-features/30201-need-to-know-jumbo-frames-in-small-networks?start=3
    One suggestion in there is to separate Jumbo and non-Jumbo supporting devices so that there is no impact on speed.
    I don't know anything about store-and-forward, I'll have to read up.
    I'm the same about Full Duplex

  • I have Airport Extreme as the base unit and an Express unit in a room separated by a brick wall to improve signal. Express shows a green light indicating signal but my iPad does not perform well. How can I improve other than hardwire ethernet connection?

    I have Airport Extreme as the base unit and an Express unit in a room separated by a brick wall to improve signal. Express shows a green light indicating signal but my iPad does not perform well. How can I improve other than hardwire ethernet connection?

    Other suggestions, and more info about the nature of the problem, may be in this Apple tech note.
    http://support.apple.com/kb/HT1365
    For example, you might find that the brick wall is not the only problem. There may be other devices pumping out enough wireless interference to be making things even worse.
    I agree with Bob Timmons that Ethernet is best and most reliable. And that powerline (which I use) is easier and potentially faster than wireless...but only if your power lines do not have electrical devices plugged in which produce electrical noise on the line. Powerline will be slower than Gigabit Ethernet.
    Ethernet cable is the only way to ensure that the signal goes directly there in a shielded way for a clear fast signal. Wireless and powerline are much slower because of all the other things the signal has to fight past to get to the other device.

  • Does making final field static improve performance or save memory?

    Are final fields given value at compile time by the compiler?
    private final String FINAL_FIELD = "31";
    Does making final field static improve performance or save memory?

    Actually it's final static fields with primitive or String type that are treated specially. For other final fields only extra compile time checking is produced, they are the same at run time.
    final static primitives are treated as compiler constants, and a literal value is substituted. No actual field is created.
    Myself, I consider an awareness of Java internals valuable. For example, if you were unaware of the above, the occasional misbehaviour of final static primitives referenced in other classes would be baffling. If your code references a final static primitive from another class it's compiled as a literal, not a reference, so if the value changes in the other class, the referring class will retain the old value until it is re-compiled.

  • Does syncing ipad or iPhone improve performance?

    If my iPad is acting a little buggy, will regular syncing improve performance?

    No, that's not what sync is for. It serves to make a backup of the device from which you can restore if damaged or replaced and to ensure that data such as contacts, calendars, bookmarks, etc., are synchronized across all your devices and computer.
    Given that if correctly set up, iTunes will keep a backup of EVERYTHING, all apps included, you can proceed to remove those you are not currently using in the device, freeing up room in its storage. The backup stored in iTunes allows you to copy the app back if needed later without having to download from Apple's servers. iTunes also lets you update all the apps on the computer, which will then update on the device(s) at next sync. Don't know on a PC, but on my Mac the apps update way faster and in parallel than if I do it on the devices.
    Lastly, sometimes the performance of the devices can be returned to the original level by making a complete backup, wiping the device clean ( Settings / General / Reset / Erase All Contents and Settings ) and restoring from iTunes' backup.

  • How to improve performance (insert, delete and search) of a table with large data

    Hi,
    I have a table which is used for maintaining history; it holds a large amount of data that keeps increasing or decreasing based on the business rules.
    I am getting performance issues with this table while searching for records or inserting new data into it. I have already used indexes on this table but am still facing a lot of performance issues.
    Also, we insert bulk data into this table.
    Can we have any solution to achieve this, any solutions are greatly appreciated.
    Thanks in Advance!

    Please do not duplicate your posts across forums.  It's considered bad practice and rude, as people will not know what answers you've already received and may end up duplicating the effort.
    Locking this thread - answer on other thread please
