DataPump impdp: Reorganizing a heap table into a compressed IOT table

Hello,
I'd like to reorganize a heap table into an IOT table; the heap table has a size of 25 GB.
After exporting the table with expdp, I drop the heap table and recreate it as an IOT using the COMPRESS 2 option. Because this table has a 9-column primary key, the optimal compression prefix is 2.
I then run impdp with the option ACCESS_METHOD=DIRECT_PATH (to avoid UNDO generation). The IOT index is not compressed. If I then do an ALTER TABLE TAB_IOT MOVE COMPRESS, the compression works, as shown by the reduction in segment size.
The question is: is there any way to force the compression at the import phase, to reduce steps and storage resources?
My environment is Oracle 11.1.0.7 on Linux x64.
Many thanks
Arturo
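
For reference, a minimal sketch of the steps described above (directory, dump file, and column names are hypothetical; the real table has its own nine PK columns):

expdp arturo/*** directory=dp_dir dumpfile=tab_iot.dmp tables=TAB_IOT

DROP TABLE tab_iot PURGE;
CREATE TABLE tab_iot (
  k1 NUMBER, k2 NUMBER, k3 NUMBER, k4 NUMBER, k5 NUMBER,
  k6 NUMBER, k7 NUMBER, k8 NUMBER, k9 NUMBER,
  payload VARCHAR2(100),
  CONSTRAINT tab_iot_pk PRIMARY KEY (k1, k2, k3, k4, k5, k6, k7, k8, k9)
) ORGANIZATION INDEX COMPRESS 2;

impdp arturo/*** directory=dp_dir dumpfile=tab_iot.dmp tables=TAB_IOT content=data_only table_exists_action=append access_method=direct_path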

DBMS_REDEFINITION
http://www.morganslibrary.org/reference/pkgs/dbms_redefinition.html
It is not going to happen in any respect with DataPump.
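
For the record, a minimal online-redefinition sketch (schema, table, and column names are hypothetical; COPY_TABLE_DEPENDENTS and error handling are omitted for brevity):

-- check the table qualifies for PK-based redefinition
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TAB_HEAP', DBMS_REDEFINITION.CONS_USE_PK);

-- interim table with the target organization: a key-compressed IOT
CREATE TABLE scott.tab_heap_int (
  k1 NUMBER, k2 NUMBER, k3 NUMBER, k4 NUMBER, k5 NUMBER,
  k6 NUMBER, k7 NUMBER, k8 NUMBER, k9 NUMBER,
  payload VARCHAR2(100),
  CONSTRAINT tab_heap_int_pk PRIMARY KEY (k1, k2, k3, k4, k5, k6, k7, k8, k9)
) ORGANIZATION INDEX COMPRESS 2;

EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'TAB_HEAP', 'TAB_HEAP_INT');
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'TAB_HEAP', 'TAB_HEAP_INT');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TAB_HEAP', 'TAB_HEAP_INT');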

Similar Messages

  • Heap tables and index organized tables

    I am performing a migration from MS SQL Server to Oracle 10gR2. In MSSQL all tables have a clustered PK index. Is it necessary to use index-organized tables for that migration, or are ordinary heap-organized tables enough? And what are the differences between those tables and the MSSQL tables?
    Thanks

    In Oracle, the typical table is a standard 'heap' table. Stuff goes into the heap table randomly, and randomly comes out.
    An Index Organized Table is somewhat similar to a clustered index in SS. It can have some performance advantages over heap tables - when the heap table has an associated index on the primary key.
    The IOT can also have some disadvantages, such as the need for an Overflow table to handle the extra data when a row doesn't conveniently fit in a block (implying multiple I/Os), and an extra translation table if bitmap indexes are required (implying extra I/Os).
    An unintelligent developer will generally believe that Oracle and SQL Server are the same - after all they both run SQL - and will attempt to port by a simple translation of syntax.
    An intelligent developer will test both styles of tables, during a port. Such a developer will also be quick to learn about the changes in internals (such as locking mechanisms) and will realize that different styles of coding are required for many application situations.
    I recommend reading Tom Kyte's books to get a handle on the pros and cons, as well as on testing techniques, to help a developer become intelligent.
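
    To make the difference concrete, a small Oracle sketch (a hypothetical table; the OVERFLOW segment is the one mentioned above):

    -- heap table: rows stored wherever space is found; PK enforced by a separate index
    CREATE TABLE emp_heap (
      empno NUMBER CONSTRAINT emp_heap_pk PRIMARY KEY,
      ename VARCHAR2(50),
      notes VARCHAR2(4000));

    -- IOT: rows live inside the PK index itself; columns after ENAME that
    -- do not fit in the index block are pushed to the overflow segment
    CREATE TABLE emp_iot (
      empno NUMBER CONSTRAINT emp_iot_pk PRIMARY KEY,
      ename VARCHAR2(50),
      notes VARCHAR2(4000))
    ORGANIZATION INDEX
    INCLUDING ename
    OVERFLOW;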

  • Efficient way to do the Purge / Delete activity on a SQL Server Heap table

    Hi,
    I have a huge heap table (sql 2008) on a staging database which is used to store log history for an application.
    The application is not directly using this heap table.
    The table has a Date column, and we have a purge plan to remove the records that are older than 1 year.
    In this scenario, which will help us expedite the purge process:
    creating a clustered index or a non-clustered index?
    Of course, I am planning to use the following script in order to avoid log file bloat and get rid of the blocking.
    Can someone help in this regard by providing suggestions?

    I personally wouldn't create a clustered index on the table.  Adding a clustered index has two problems in your scenario.
    Adding a clustered index will be time consuming and resource intensive.  Talk about log file bloat...
    A clustered index will result in poorer insert performance when compared to leaving the table as a heap and adding a non-clustered index.
    I would add the non-clustered index to the table on the date column you refer to, then purge data in small batches.  Although purging data in small batches might not be quite as fast as purging the data in a single batch, it won't be much slower and
    will allow you to have total control over your log file.
    The non-clustered index on the date column will be small since even the largest date datatypes only consume 10 bytes of space.  So for a table containing 5 billion records the non-clustered index would be only about 90 GB in size.
    As stated above you could then purge data in small batches and perform log backups between batches to control log file bloat or simply switch the database to simple recovery model.
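
    A minimal T-SQL sketch of that approach (table and column names are hypothetical):

    -- non-clustered index on the purge column, as suggested above
    CREATE NONCLUSTERED INDEX IX_LogHistory_LogDate ON dbo.LogHistory (LogDate);

    -- delete in small batches; take log backups between batches (or use
    -- SIMPLE recovery) to keep the log file under control
    DECLARE @rows int = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000) FROM dbo.LogHistory
        WHERE LogDate < DATEADD(YEAR, -1, GETDATE());
        SET @rows = @@ROWCOUNT;
    END;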

  • Create clustered table from heap table

    Hi
    I have a table with more than 15 million rows. I want to create a clustered table from this heap table and, as you know, I can't use CTAS for a clustered table.
    I created a clustered table and started inserting data into it (using the APPEND hint and parallel 16). Now, after waiting 10 hours, there's no result. What's the best way?
    Thanks

    a. You can use CTAS:
    SQL> select * from v$version;
    BANNER
    Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
    PL/SQL Release 9.2.0.1.0 - Production
    CORE    9.2.0.1.0       Production
    TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
    NLSRTL Version 9.2.0.1.0 - Production
    SQL>  create cluster clu(m_num number);
    Cluster created.
    SQL> create index clu_ind on cluster clu;
    Index created.
    SQL>  create table clu_tab
      2   (m_num)
      3   cluster clu(m_num)
      4   as select rownum from dual connect by level <=10;
    Table created.
    Please post:
    1. the whole cluster creation script
    2. your oracle version
    Amiel Davis

  • Table created by datapump imp/exp, change default tablespace?

    Is there a way, without changing the user's default tablespace, to force the table created by Data Pump import or export to go to a different tablespace than the user's default? Our policy is that users must declare the tablespace they want an object created in; therefore the user's tablespace is very small (6M), so if they accidentally created an object in this TS it would fail. When we run expdp, this default tablespace fills up and the export fails. I can't find a parameter in the documentation for giving an alternative tablespace for this table.

    When you run expdp, it creates a table under the user you logged in as. This table is used to keep track of the Data Pump process. I am looking for a way to direct Data Pump to create this table in a specific tablespace instead of the user's default.

  • Datapump import error on 2 partitioned tables

    I am trying to run impdp to import two tables that are partitioned and use LOB types... for some reason it always errors out. Has anyone seen this issue in 11g?
    Here is the info:
    $ impdp parfile=elm_rt.par
    Master table "ELM"."SYS_IMPORT_TABLE_05" successfully loaded/unloaded
    Starting "ELM"."SYS_IMPORT_TABLE_05": elm/******** parfile=elm_rt.par
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/AUDIT_OBJ
    Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 1 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-04030: out of process memory when trying to allocate 120048 bytes (session heap,kuxLpxAlloc)
    ORA-06512: at "SYS.KUPW$WORKER", line 1602
    ORA-06512: at line 2
    ORA-39014: One or more workers have prematurely exited.
    ORA-39029: worker 2 with process name "DW01" prematurely terminated
    ORA-31671: Worker process DW01 had an unhandled exception.
    ORA-04030: out of process memory when trying to allocate 120048 bytes (session heap,kuxLpxAlloc)
    ORA-06512: at "SYS.KUPW$WORKER", line 1602
    ORA-06512: at line 2
    Job "ELM"."SYS_IMPORT_TABLE_05" stopped due to fatal error at 13:11:04
    Contents of elm_rt.par (25 lines, 1340 characters; relevant portion):
    DIRECTORY=DP_REGRESSION_DATA_01
    DUMPFILE=ELM_MD1.dmp,ELM_MD2.dmp,ELM_MD3.dmp,ELM_MD4.dmp
    LOGFILE=DP_REGRESSION_LOG_01:ELM_RT.log
    DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
    CONTENT=METADATA_ONLY
    TABLES=RT_AUDIT_IN_HIST,RT_AUDIT_OUT_HIST
    REMAP_TABLESPACE=RT_AUDIT_IN_HIST_DAT01:RB_AUDIT_IN_HIST_DAT01
    REMAP_TABLESPACE=RT_AUDIT_IN_HIST_IDX04:RB_AUDIT_IN_HIST_IDX01
    REMAP_TABLESPACE=RT_AUDIT_OUT_HIST_DAT01:RB_AUDIT_OUT_HIST_DAT01
    PARALLEL=4

    Read MetaLink note 286496.1 (Export/Import DataPump Parameter TRACE - How to Diagnose Oracle Data Pump).
    This will help you generate a trace for the Data Pump job.
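
    For reference, that note enables tracing through the unofficial TRACE parameter; 480300 is the value commonly cited there for tracing the master and worker processes (verify against the note itself), e.g.:

    impdp parfile=elm_rt.par TRACE=480300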

  • Where can I get the SALES ORGANIZATION field and table

    Can anybody tell me where I can get the field and table name of "sales organization"?

    Dear Sandeep,
    You will find that VKORG is the Sales Organisation field.
    You will find it in every related SD module table, like VBAP, BKPF, etc.
    Hope it helps!
    Please reward if helpful!

  • Organization structure hierarchy table determination in CRM

    Hi there,
    Can you please let me know in which table of CRM the organization hierarchy structure is stored.
    Quick response will be greatly appreciated.
    Many thanks,
    Kate

    Hi,
    It is the CRM table HR1000 that stores the organization hierarchy structure.
    Please reward points.
    Sajan.M

  • How to exp/imp tables with different character sets between DB1 and DB2?

    On Solaris 2.7, the Oracle 8i database DB1 has the NLS_CHARACTERSET
    ZHS16CGB231280 and NLS_NCHAR_CHARACTERSET ZHS16CGB231280
    character sets.
    On another Linux 7.2 system, the Oracle 8i database DB2 is installed with
    NLS_NCHAR_CHARACTERSET US7ASCII and NLS_CHARACTERSET US7ASCII.
    The table contents of DB1 include some Chinese. I want to exp/imp
    tables of DB1 into DB2, but the Chinese does not display correctly
    in the SQL Worksheet tool. How do I do the exp/imp operation? Please
    help me. Thanks.

    The supported way to store GB231280-encoded characters is using a ZHS16CGB231280 database, or a database created using a superset of GB231280 such as UTF8. Can you not upgrade your target database from US7ASCII to ZHS16CGB231280?
    With US7ASCII and NLS_LANG set to US7ASCII, you are using the garbage-in, garbage-out (GIGO) approach. This may seem to work, but there are many hidden problems:
    1. Invalid SQL string function behaviour - LENGTH(), SUBSTR(), INSTR()
    2. Data can be corrupted when it is loaded into another database, e.g. EXP/IMP, DB links
    3. Communication with other clients will generate incorrect results, e.g. other Oracle products (Oracle Text, Forms), Java, HTML, etc.
    4. Linguistic sorts not available
    5. Queries using the standard WHERE clause may return incorrect results
    6. Extra coding overhead in handling character conversions manually.
    I recommend you to check out the FAQ and the DB Character set migration guide on the Globalization Support forum on OTN.
    Nat.
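
    As an illustration of the supported path, a sketch assuming the target is rebuilt as ZHS16CGB231280 and a Unix shell (table names are hypothetical):

    # on the source host: client character set matches DB1, so no lossy conversion
    export NLS_LANG=AMERICAN_AMERICA.ZHS16CGB231280
    exp scott/tiger file=db1_tables.dmp tables=emp

    # on the target host, after recreating DB2 as ZHS16CGB231280 (or a superset)
    export NLS_LANG=AMERICAN_AMERICA.ZHS16CGB231280
    imp scott/tiger file=db1_tables.dmp full=y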

  • HR organization unit levels table

    Hi,
    Is there any table that holds the organization levels and their nodes, i.e. org level 1, org level 2, ...?

    resolved

  • Datapump Export - multiple EXCLUDE patterns for TABLE

    I'm performing an export and I have two classes of tables (as in LIKE filters) that I wish to exclude. I've tried using multiple LIKE statements:
    EXCLUDE=TABLE:"LIKE 'FILTER1%'"
    EXCLUDE=TABLE:"LIKE 'FILTER2%'"
    However this way it appears the second EXCLUDE overwrites the first and only tables matching FILTER2% are excluded.
    Doing it like this has the same behavior, and only tables matching FILTER2% are excluded:
    EXCLUDE=TABLE:"LIKE 'FILTER1%'",TABLE:"LIKE 'FILTER2%'"
    The following are not syntactically correct but seemed worth trying
    EXCLUDE=TABLE:"LIKE 'FILTER1%' OR 'FILTER2%'"
    EXCLUDE=TABLE:"LIKE 'FILTER1%' OR LIKE 'FILTER2%'"
    Is there any way to accomplish what I'm trying to do here? This is 10.2.0.2.
    Thanks

    Hi,
    I can figure out a way for export, but not for import. If this is a user exporting their own tables, then you could use this:
    exclude=table:'IN(select table_name from user_tables where table_name like ''TAB1%'' or table_name like ''TAB2%'')'
    If you are doing this for multiple schemas, then you need to use something like:
    exclude=table:'IN(select table_name from dba_tables where table_name like ''TAB1%'' or table_name like ''TAB2%'')'
    This does not work for import since chances are, the tables don't exist, so the query will return no rows found.
    Dean
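
    Put together, a hedged parfile sketch for the export case (directory and schema names are hypothetical; inside a parameter file the clause does not need shell escaping):

    DIRECTORY=dp_dir
    DUMPFILE=filtered.dmp
    SCHEMAS=scott
    EXCLUDE=TABLE:"IN (SELECT table_name FROM user_tables WHERE table_name LIKE 'FILTER1%' OR table_name LIKE 'FILTER2%')"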

  • Can't see organization from hr_legal_entities table

    I try to run this query:
    SELECT *
    FROM hr_legal_entities hle
    WHERE hle.organization_id = SUBSTRB( USERENV( 'CLIENT_INFO' ), 1, 10 );
    I tried with 2 responsibilities: XX - AP USER and XX - AP MANAGER. Both return the same number from SUBSTRB( USERENV( 'CLIENT_INFO' ), 1, 10 ). But with XX - AP USER I can't see the row, while with XX - AP MANAGER I can. What can be the reason?

    Hi,
    XX - AP USER and XX - AP MANAGER differ only in:
    ENDELIG_BILAGSNR_SEKVENS
    PA_ALLOW_FLEXBUILDER_OVERRIDES - N (only XX - AP USER has it)
    PER_BUSINESS_GROUP_ID - 0 (only XX - AP MANAGER has it)
    On the test environment XX - AP USER has the same values as on production, but on production it's not working.

  • Import only some user tables

    Hi all,
    we have a full export backup early every morning, but some tables' data has unfortunately been deleted,
    while the structure of those tables is intact. Can anyone please suggest how I can import only some tables of a user from
    the daily full export backup? This has to be done immediately. A quick response will be highly appreciated.
    Best Regards

    user11153253 wrote:
    how do I import only some tables of a user from the daily full export backup?
    You can use this command; I suppose here you have the traditional exp/imp, not Data Pump:
    imp SYSTEM/password FROMUSER=scott TABLES=(emp,dept)
    imp SYSTEM/password PARFILE=params.dat
    The params.dat file contains the following information:
    FILE=scott.dmp
    IGNORE=n
    GRANTS=y
    ROWS=y
    FROMUSER=scott
    TABLES=(%d%,b%s)
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96652/ch02.htm#1012936
    http://download.oracle.com/docs/cd/B10500_01/server.920/a96652/ch02.htm

  • Datapump skipping partitioned tables in the database

    I ran expdp on Oracle 10.2.0.4.0 on the AIX 5.6 platform. The export runs well, exporting rows in the database, but when it comes to the partitioned tables it exports no rows for any of them. When I run a normal exp/imp, the partitioned tables are exported with all their rows.
    I used the following commands:
    expdp system/****** dumpfile=export_data.dmp directory=DATA_PUMP_DIR full=y logfile=export_dump.log
    Output for expdp on partitioned table:
    . . exported "SCOTT"."DEPT":"DEPT_2003_P1" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P10" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P11" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P12" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P2" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P3" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P4" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P5" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P6" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P7" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P8" 0 KB 0 rows
    . . exported "SCOTT"."DEPT":"DEPT_2003_P9" 0 KB 0 rows
    And for exp:
    exp system/****** file=export_dump.dmp full=y log=export_log1.log
    Result from the export log for partitioned tables:
    . . exporting partition DEPT_2005_P1 881080 rows exported
    . . exporting partition DEPT_2005_P2 1347780 rows exported
    . . exporting partition DEPT_2005_P3 2002962 rows exported
    . . exporting partition DEPT_2005_P4 2318227 rows exported
    . . exporting partition DEPT_2005_P5 3122371 rows exported
    . . exporting partition DEPT_2005_P6 3916020 rows exported
    . . exporting partition DEPT_2005_P7 4217100 rows exported
    . . exporting partition DEPT_2005_P8 4125915 rows exported
    . . exporting partition DEPT_2005_P9 1913970 rows exported
    . . exporting partition DEPT_2005_P10 1100156 rows exported
    . . exporting partition DEPT_2005_P11 786516 rows exported
    . . exporting partition DEPT_2005_P12 822976 rows exported
    I am not sure about this behaviour from Data Pump. My database is more than 800 GB, and we want to migrate it from AIX to Linux.
    Thanks

    Sorry, I just copied and pasted some extracts from my exp and expdp logs.
    For testing purposes I tried to run a Data Pump export of only one partitioned table in the database, and it goes through; but when I do the same in a full Data Pump export, these partitioned tables are exported with no rows.
    Export: Release 10.2.0.4.0 - 64bit Production on Tuesday, 02 August, 2011 12:18:47
    Copyright (c) 2003, 2007, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** dumpfile=DEPT.dmp tables=scott.dept logfile=dept1.log
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 48.50 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/COMMENT
    Processing object type TABLE_EXPORT/TABLE/RLS_POLICY
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/TRIGGER
    Processing object type TABLE_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SCOTT"."DEPT":"DEPT_2009_P6" 1.452 GB 7377736 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P7" 1.363 GB 6935687 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P6" 1.304 GB 6656096 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P7" 1.410 GB 7300618 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P7" 1.296 GB 6641073 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P6" 1.328 GB 6863885 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P6" 1.158 GB 6568075 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P5" 1.141 GB 5801822 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P5" 1.162 GB 6027466 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P7" 1.100 GB 6214680 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P6" 1.106 GB 5762303 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P5" 1.133 GB 5859492 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P5" 1.001 GB 5664315 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P5" 1.023 GB 5229356 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P8" 1.078 GB 5549666 rows
    . . exported "SCOTT"."DEPT":"DEPT_2007_P8" 940.3 MB 5171379 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P8" 989.0 MB 4920276 rows
    . . exported "SCOTT"."DEPT":"DEPT_2009_P8" 918.6 MB 4553523 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P6" 821.0 MB 5220879 rows
    . . exported "SCOTT"."DEPT":"DEPT_2008_P4" 766.6 MB 3832262 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P8" 747.9 MB 4753538 rows
    . . exported "SCOTT"."DEPT":"DEPT_2006_P7" 741.8 MB 4708242 rows
    . . exported "SCOTT"."DEPT":"DEPT_2010_P4" 734.2 MB 3713567 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P7" 661.4 MB 4217100 rows
    . . exported "SCOTT"."DEPT":"DEPT_2005_P8" 647.1 MB 4125915 rows
    . . exported "SCOTT"."DEPT":"DEPT_2011_P4" 677.8 MB 3428887 rows
    I also tried to run a normal schema-by-schema export with the exp system/password command and got my dump file, which is about 300 GB. When I run the imp system/password command and specify fromuser=<system> and touser=<schemas_in_the_dumpfile> separated by commas, it just comes up with this message:
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    Export file created by EXPORT:V10.02.01 via conventional path
    import done in WE8ISO8859P9 character set and AL16UTF16 NCHAR character set
    Import terminated successfully without warnings.
    No tables are imported.
    If I specify imp system/password file=dept_export.dmp full=y log=dept_imp.log with the same dump file, it imports data from the dump file into my database.
    I am not sure what could be wrong with my dump file or my imp command and its parameters.
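
    For comparison, FROMUSER normally names the schema that owned the exported tables, not SYSTEM; a hedged sketch with a hypothetical schema:

    imp system/password file=dept_export.dmp fromuser=scott touser=scott log=dept_imp.log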

  • Corresponding Sales Organization of a project, starting from table COVP.

    Hello,
    I need to find the corresponding sales organization of a project, starting from the table COVP (this is for an interface). The sales organization is in table PROJ, but to get there I need a connection. I thought about this path:
    COVP-OBJNR = AFVC-OBJNR - from afvc, take aufpl(Routing number of operations in the order)
    AFKO-AUFPL = AFVC-AUFPL - from afko, take pronr(Project definition)
    PROJ-PSPNR = AFKO-PRONR.
    And from proj, take the corresponding sales organization.
    What do you think? Am I giving the right connection and all the necessary elements?
    PS: This is only for network elements, for WBS elements I know the connection.
    Thank you,
    Efren
    Edited by: Efren23 on Feb 22, 2012 8:55 AM

    Hello,
    Thanks, Gokul. I've found the connections in this link: http://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/9659.
    The only connection that was missing was between COVP and AFVC. I think, and you agree, that the right condition is
    COVP-OBJNR = AFVC-OBJNR.
    I will leave the question unanswered until tonight; maybe others have other suggestions.
    Thanks,
    Efren
    Edited by: Efren23 on Feb 22, 2012 10:33 AM
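
    Expressed as plain SQL, the proposed join path would look roughly like this (a sketch only; an ABAP SELECT with the same ON conditions would be the real implementation):

    SELECT p.vkorg
      FROM covp c
      JOIN afvc v ON v.objnr = c.objnr   -- network activity for the CO object
      JOIN afko k ON k.aufpl = v.aufpl   -- order header via routing number
      JOIN proj p ON p.pspnr = k.pronr;  -- project definition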
