Migrate exp/imp into data pump

Hi Experts,
We have been using exp/imp to export about 150 GB of data in support of Streams.
As far as I know, Data Pump speeds up the export.
How do I migrate my exp/imp syntax to Data Pump?
My exp/imp commands are:
exp USERID=SYSTEM/tiger@test OWNER=tiger FILE=D:\Oraclebackup\CLS\exports\test.dmp LOG=D:\Oraclebackup\test\exports\logs\exportTables.log OBJECT_CONSISTENT=Y STATISTICS=NONE
imp USERID=SYSTEM/tiger FROMUSER=tiger TOUSER=tiger CONSTRAINTS=Y FILE=test.dmp IGNORE=Y COMMIT=Y LOG=importTables.log STREAMS_INSTANTIATION=Y
Thanks
Jim

You are right - expdp is faster and more useful than the classic exp utility.
A few things to note about expdp:
- the dump file is written locally on the database server (via a directory object), not on the client machine
- you must create a directory object in the database and
grant read, write privileges on it to the user
For Example:
create directory dump as 'd:\export\hr';
grant read,write on directory dump to hr;
Then we may do export:
expdp hr/hr DIRECTORY=dump DUMPFILE=test.dmp LOGFILE=exportTables.log
After the export you will see two files (the dump file and the log file) in the directory 'd:\export\hr'.
For other features, see expdp help=y and the Oracle documentation.
Edited by: Vladandr on 15.02.2009 22:07
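For reference, here is a hedged sketch of what the original exp/imp pair might look like in Data Pump syntax. The directory name dump_dir and the path are only placeholders, a directory object must exist (and be readable/writable) on both the source and the target server, and parameters such as EXCLUDE=STATISTICS (for STATISTICS=NONE) and TABLE_EXISTS_ACTION=APPEND (roughly IGNORE=Y) should be checked against your version's documentation. There is no direct switch for COMMIT=Y or STREAMS_INSTANTIATION=Y; Data Pump manages its own commits and, as far as I know, handles Streams instantiation SCNs itself - please verify in the Streams documentation.
-- as a DBA on each database (the path is only an example)
create directory dump_dir as 'D:\Oraclebackup\CLS\exports';
grant read, write on directory dump_dir to system;
expdp system/tiger@test SCHEMAS=tiger DIRECTORY=dump_dir DUMPFILE=test.dmp LOGFILE=exportTables.log EXCLUDE=STATISTICS
impdp system/tiger SCHEMAS=tiger DIRECTORY=dump_dir DUMPFILE=test.dmp LOGFILE=importTables.log TABLE_EXISTS_ACTION=APPEND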

Similar Messages

  • Exp/imp no data just structure uses a ton of disk space

    I am attempting to migrate a schema from one db instance to another and have run into a problem with disk space. All I want to migrate is the structure, so I am exporting without the data. Yet the tablespace datafiles grow to a size as if they actually held data. I don't have the disk space for this and would like to know if there is a way around the problem. Also, can anyone provide some insight into why the space is taken up even though no data is being imported?
    I am not a DBA, but our DBA suggested I include a compress=n parameter. However, this doesn't appear to solve the issue.
    Thanks,
    Jason

    Go Google for "Databee" and then search their home page for the DDL Wizard
    OK, forget that: just go visit http://www.ddlwizard.com/
    (It is tricky to see on their home page).
    Feed it your row-less export dump files and turn them into a bunch of 'create xxxx' statements. Edit those statements so they aren't asking for stupid INITIAL extent sizes. Run those statements. Then do a standard import using ignore=y. That will get your tables created empty and small and the subsequent import will get all your other schema objects back.
    The DBA that suggested compress=n is on the right track: compress=y will mean import will seek to create empty tables as big as the fully-populated table currently is, pre-allocating all the space it thinks the table will eventually need given the amount of data that might, one day, be inserted into it.
    Compress=n will seek to create the table with the smallest requested extent size and no more than that: row inserts will make it grow big later on, but that growth is left to happen when it needs to happen.
    The only other potential fly in the ointment is, as someone else said, that if you're importing into a tablespace that has been created 'extent management local uniform size 100M', then the mere fact of creating a completely empty table will cause 100M of space to be consumed. You would be much better off making sure your tablespaces are created 'extent management local autoallocate': then you start off with small space allocations and only grow when you really need to.
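    A minimal sketch of the structure-only exp/imp this reply describes (the SYSTEM password and the JASON schema name are placeholders for your own):
    exp system/password OWNER=jason FILE=structure_only.dmp ROWS=N COMPRESS=N LOG=exp_structure.log
    imp system/password FROMUSER=jason TOUSER=jason FILE=structure_only.dmp IGNORE=Y LOG=imp_structure.log
    With ROWS=N no data is written to the dump, and COMPRESS=N keeps the generated CREATE TABLE statements asking only for the originally defined INITIAL extent rather than one sized to hold all of the existing data.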

  • Data Pump execution time

    I am exporting 2 tables from my database as a preliminary test before doing a full export (as part of a server migration to a new server).
    I have some concerns about the time the export took (and therefore the time the corresponding import will take, which is typically considerably longer than the export).
    Estimate in progress using BLOCKS method...
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 19.87 GB
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
    Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    . . exported "SAPPHIRE"."A_SDIDATAITEM" 15.45 GB 88263813 rows
    . . exported "SAPPHIRE"."A_SDIDATA" 1.775 GB 14011593 rows
    Master table "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully loaded/unloaded
    Dump file set for SYSTEM.EXPORT_TABLES_LIMSLIVE is:
    E:\ORACLE\PRODUCT\10.2.0\ADMIN\LIMSLIVE\DPDUMP\EXP_TABLES_LIMSLIVE.DMP
    Job "SYSTEM"."EXPORT_TABLES_LIMSLIVE" successfully completed at 15:43:38
    These 2 tables alone took nearly an hour to export. The bulk of the time seemed to be on the line
    Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Q1. Is that really the line that was taking the time, or was the export actually working on the export of the table shown on the following line?
    Q2. Will such table stats be brought in on an import ? i.e. are table stats part of the dictionary therefore part of the SYS/SYSTEM schemas and will not be brought in on the import to my newly created target database ?
    Q3. Does anyone know of any performance improvements that can be made to this export / import? I am exporting with the 10.2.0.1 source Data Pump and will be importing with a new target 11gR2 Data Pump. From experimenting with the command line I have found that 10.2.0.1 does not support PARALLEL, so I am not able to use that on the export side (I should be able to use it on the 11gR2 import side).
    thanks,
    Jim

    Jim,
    Q1 (quoting): "What difference does it make knowing how long the metadata versus the actual data takes on export? Is this because you could decide to exclude some objects and manually create them before import?"
    You asked what was taking so long; this was just a test to see whether it was the metadata or the data. It may help us figure out whether there is a problem or not, and knowing what is slow helps narrow things down.
    Q (quoting): "With the old exp/imp utility I sometimes manually created the tablespaces and indexes in this manner; however, for Data Pump the metadata contains a lot more than just tablespaces and indexes - I couldn't imagine manually creating all the tables and grants, for example. I guess you can be selective about which objects you include / exclude in the export or import (via the INCLUDE and EXCLUDE settings)?"
    No, I'm not suggesting that you change your process, just trying to figure out what is slow. Also, old exp/imp and Data Pump treat metadata and data the same way. Just to clear things up: when you say content=metadata_only, it exports everything except the data - tablespaces, grants, users, tables, statistics, etc. When you say content=data_only, it exports only the data. You can use this method to export and import everything, but it's not the best solution for most cases. If you create all of your metadata and then load the data, any indexes on the tables need to be maintained while the data is being loaded, and that slows down the data-only job.
    Q2 (quoting): "If I do a DATA ONLY export I presume that means I need to manually pre-create every object I want imported into my target database. Does this mean every tablespace, table, index, grant etc. (not an attractive option)?"
    Again, I was not suggesting this method, just trying to figure out what was slow. If I were to do it this way, I would run impdp on the metadata-only dump file first, then run the import on the data-only dump file.
    Q3 (quoting): "If I use EXCLUDE=statistics does that mean I can simply regenerate the stats on the target database after the import completes (how would I do that?)"
    Yes, you can do that. There are different statistics gathering levels: you can collect them per table, per index, per schema, and I think per database. You want to look at the documentation for dbms_stats.gather...
    Dean
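    For illustration, regathering optimizer statistics after an import done with EXCLUDE=STATISTICS could look roughly like this (the schema name is only an example taken from the log above; see the DBMS_STATS documentation for the options that fit your case):
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SAPPHIRE', cascade => TRUE);
    END;
    /
    GATHER_TABLE_STATS and GATHER_DATABASE_STATS exist as well if you prefer to gather per table or for the whole database.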

  • Exp move all data

    Hello,
    I would like to move all data from one database (10.2.0.1, 32-bit) to a fresh 10.2.0.4 environment on 64-bit architecture, so I have some questions:
    - Can I use the EXP / IMP utilities for this migration? The data totals about 10 GB, and I can take the database down for the exp and imp.
    - Should I do a full (FULL=Y) export as the SYSTEM user and import it into the new environment?
    - Do I need to create any tablespaces, roles, or schemas on the new database first?
    Can you give some hints about this kind of migration?
    Thank you a lot,
    Jimmy,

    Q (quoting): "If I understand, you don't recommend using EXP with the FULL=Y option. Why?"
    Actually, a full database export and import can be used in all Oracle versions to transfer a database across platforms: imp full=y + ignore=y. However, an imp with full=y + ignore=y can take more time than importing each schema separately.
    Q (quoting): "On the target database I should first create all tablespaces, including temp tablespaces, create all users, and use the IMP utility with FROMUSER=XXX TOUSER=XXX to import?"
    If you use FROMUSER=XXX TOUSER=XXX, you have to create the tablespaces, roles, profiles and users, and grant everything just as in the source database.
    Q (quoting): "What about roles? Is it possible to export only roles using the EXP utility? Should I export anything else from the source database?"
    A user with the DBA or SYSDBA role can export full=y. You should export full from the source database; after that you can import full or schema by schema. You have a lot of schemas, so I think exp/imp full=Y is a good solution:
    1. Create the target database.
    2. Get the tablespace names from dba_tablespaces on the source database, then create the same tablespaces on the target database.
    3. Export FULL from the source database:
    exp system/password FULL=y FILE=exp_full.dmp LOG=exp_full.log
    4. Transfer the dump file to the target host in binary mode.
    5. Import with full=y + ignore=y:
    imp system/password FULL=y FILE=exp_full.dmp LOG=imp_full.log IGNORE=y
    Remark: you can use DATA PUMP, which is faster than EXP/IMP.
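    If you do go the Data Pump route instead (both databases are 10g, so expdp/impdp is available on each side), the full export/import might look roughly like this - the directory name and path are placeholders, and a directory object is needed on both servers:
    create directory dp_dir as '/u01/export';
    grant read, write on directory dp_dir to system;
    expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=exp_full.dmp LOGFILE=exp_full.log
    impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=exp_full.dmp LOGFILE=imp_full.log
    As with imp full=y + ignore=y, expect "already exists" messages for objects the new database already contains.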

  • Does data pump really replace exp/imp?

    Hi guys,
    I've read some people saying we should be using Data Pump instead of exp/imp. But as far as I can see, if I have a database behind a firewall at some other site, cannot connect to that database directly, and need to get some data across, then Data Pump is useless for me and I can only exp and imp the data.

    OracleGuy777 wrote:
    "...and I guess this means that data pump does not replace exp and imp."
    Well, depending on your database version, it does.
    "Original Export is desupported for general use as of Oracle Database 11g. The only supported use of original Export in 11g is backward migration of XMLType data to a database version 10g release 2 (10.2) or earlier. Therefore, Oracle recommends that you use the new Data Pump Export and Import utilities, except in the following situations which require original Export and Import:
    * You want to import files that were created using the original Export utility (exp).
    * You want to export files that will be imported using the original Import utility (imp). An example of this would be if you wanted to export data from Oracle Database 10g and then import it into an earlier database release."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e10701/original_export.htm#SUTIL3634
    Coming back to your problem: as already suggested, you have the NETWORK_LINK parameter, so you can export data from the source and import it directly into your target db without the need for any intermediate file.
    Nicolas.
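    As an illustration of the NETWORK_LINK approach (all names here are placeholders; the target must be able to open a SQL*Net connection to the source, which may not be possible through the firewall the original poster describes):
    -- on the target database
    create database link source_db connect to system identified by password using 'SOURCE_TNS_ALIAS';
    impdp system/password SCHEMAS=scott NETWORK_LINK=source_db DIRECTORY=DATA_PUMP_DIR LOGFILE=net_import.log
    No dump file is written anywhere; the DIRECTORY is only used for the log file.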

  • Exp/imp data pump

    I exported tables in 10g using the exp (conventional export) command.
    Can I use impdp (Data Pump) to do the import?

    There is no exclude or include option in the traditional export/import (exp/imp) utility. That is only available in Data Pump (expdp/impdp), to exclude or include database objects while exporting or importing.
    As a workaround, you can create the structure of the table in the target database/schema and then do the import; during the import, if the table is found in the target schema, that table will not be imported. So don't use ignore=y - if you do, the rows will be imported even when the table already exists. Hope you got what I mean.
    Regards,
    Sabdar Syed.
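    A small sketch of the workaround described above (names are placeholders): pre-create the tables you want left untouched in the target schema, then import with the default IGNORE=N:
    imp system/password FROMUSER=scott TOUSER=scott FILE=test.dmp LOG=importTables.log
    Tables that already exist are reported (IMP-00015) and their rows are not loaded; with IGNORE=Y the existing tables would instead have the dump's rows inserted into them.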

  • Migrating from 9i to 10g through exp/imp

    Hi,
    We need to migrate a database on 9i on HP-UX to 10g on IBM AIX. Can you please tell me whether I can export the data in 9i with the 9i export utility and import it into 10g using the 10g Data Pump utility for a faster import?
    We don't have the 10g software installed on the HP-UX platform.
    And what other alternatives do we have to reduce the time for the migration, as it's a critical production database?
    Thanks,
    Jayanta

    I believe that Justin is correct that if you use exp to unload the data you will need to use imp, and not impdp, to reload the data.
    To speed up the cross-platform exp/imp process I suggest you consider running multiple concurrent exp/imp jobs. You can generate a tables= list into a spool file using SQL, give each large table its own exp job, and bunch the small tables together.
    Depending on how your database objects are organized, you might also be able to export by owner, or by a combination of owner and tables= exports.
    You can use a full=y export with the rows=n option to grab the public synonyms, non-owning users, packages, etc. not brought in by the prior exp/imp.
    HTH -- Mark D Powell --
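    One possible way to spool such a table list from SQL*Plus, as suggested above (the owner name is only an example; split the resulting list across several exp parameter files):
    set pagesize 0 linesize 200 feedback off
    spool table_list.txt
    select table_name || ',' from dba_tables where owner = 'SCOTT' order by table_name;
    spool off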

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the exp/imp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32 GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used Data Pump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox that I should be considering?
    Thanks
    brian

    Hi,
    "I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities"
    Data Pump will be faster for a couple of reasons. It uses direct path to unload the data. Data Pump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11g you can also compress the dump files as you export (both data and metadata compression are available in 11g; I think metadata compression is available in 10.2). That would remove your zip step.
    As far as transportable tablespaces go, yes, this is an option. There are some requirements, but if it works for you, all you will be exporting is the metadata and no data; the data is copied from the source to the target by way of the datafiles. One of the biggest requirements is that the tablespaces need to be read only while the export job is running. This is true for both exp/imp and expdp/impdp.
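    A rough example of a parallel Data Pump export/import for a case like this (the directory, schema and degree are placeholders; PARALLEL above 1 needs Enterprise Edition, the %U template lets each worker write its own dump file, and full dump file compression requires 11g):
    expdp system/password SCHEMAS=app DIRECTORY=dp_dir DUMPFILE=app_%U.dmp PARALLEL=4 LOGFILE=exp_app.log
    impdp system/password SCHEMAS=app DIRECTORY=dp_dir DUMPFILE=app_%U.dmp PARALLEL=4 LOGFILE=imp_app.log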

  • How to consolidate data files using data pump when migrating 10g to 11g?

    We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large files in one file system. Both OSs are RHEL 5. How should I do that using Data Pump Export/Import? I know there is a "REMAP" option, but it only does one-to-one mapping. How can I map multiple old data files into one new data file?

    hi
    Data Pump can be terribly slow, so make sure you have as much memory as possible allocated to Oracle, but the bottleneck can be I/O throughput.
    Use the PARALLEL option, and also set these:
    * DISK_ASYNCH_IO=TRUE
    * DB_BLOCK_CHECKING=FALSE
    * DB_BLOCK_CHECKSUM=FALSE
    Set these high enough to allow for maximum parallelism:
    * PROCESSES
    * SESSIONS
    * PARALLEL_MAX_SERVERS
    more:
    http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
    that's it, patience welcome ;-)
    P.S.
    For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
    Edited by: g777 on 2011-02-02 09:53
    P.S.2
    breaking news ;-)
    I am playing with storage performance now, and I turned on the disk cache option (also called write-back cache) - it goes at least along with RAID 0 and RAID 5, and setting it does not lose any data on that volume - and it gave me a 1.5 to 2 times speed-up!
    Some say there's a risk of losing more data when an outage happens, but there's always such a risk, even if you can lose less. Anyway, if you can afford it (and with an import it's OK, as it is not production at that moment) I recommend trying it. It takes 15 minutes, but you can gain 2.5 hours out of 10 hours of normal importing.
    Edited by: g777 on 2011-02-02 14:52
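    Pulling the suggestions above together, a hedged sketch of the import side might look like this (names and the PARALLEL degree are placeholders; the checking/checksum parameters reduce corruption detection, so set them back to their original values after the import, and DISK_ASYNCH_IO is a static parameter that needs a restart):
    alter system set db_block_checking = FALSE;
    alter system set db_block_checksum = FALSE;
    impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp PARALLEL=8 LOGFILE=imp_full.log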

  • Migration from 10g to 12c using data pump

    Hi there. While I've used Data Pump at the schema level before, I'm rather new to full database imports.
    We are attempting a full database migration from 10.2.0.4 to 12c using the full database Data Pump method over a database link.
    The DBA has advised that we avoid moving SYSTEM and SYSAUX objects, but initially, when reviewing the documentation, it appeared that these objects would not be exported from the target system given TRANSPORTABLE=NEVER. Can someone confirm this? The export/import log refers to objects that I believed would not be targeted:
    23-FEB-15 19:41:11.684:
    Estimated 3718 TABLE_DATA objects in 77 seconds
    23-FEB-15 19:41:12.450: Total estimation using BLOCKS method: 52.93 GB
    23-FEB-15 19:41:14.058: Processing object type DATABASE_EXPORT/TABLESPACE
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"UNDOTBS1" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"SYSAUX" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"TEMP" already exists
    23-FEB-15 20:10:33.185: ORA-31684: Object type TABLESPACE:"USERS" already exists
    23-FEB-15 20:10:33.200:
    Completed 96 TABLESPACE objects in 1759 seconds
    23-FEB-15 20:10:33.208: Processing object type DATABASE_EXPORT/PROFILE
    23-FEB-15 20:10:33.445:
    Completed 7 PROFILE objects in 1 seconds
    23-FEB-15 20:10:33.453: Processing object type DATABASE_EXPORT/SYS_USER/USER
    23-FEB-15 20:10:33.842:
    Completed 1 USER objects in 0 seconds
    23-FEB-15 20:10:33.852: Processing object type DATABASE_EXPORT/SCHEMA/USER
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OUTLN" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"ANONYMOUS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"OLAPSYS" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"MDDATA" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"SCOTT" already exists
    23-FEB-15 20:10:52.368: ORA-31684: Object type USER:"LLTEST" already exists
    23-FEB-15 20:10:52.372:
    Completed 1140 USER objects in 19 seconds
    23-FEB-15 20:10:52.375: Processing object type DATABASE_EXPORT/ROLE
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"SELECT_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"EXECUTE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.255: ORA-31684: Object type ROLE:"DELETE_CATALOG_ROLE" already exists
    23-FEB-15 20:10:55.256: ORA-31684: Object type ROLE:"RECOVERY_CATALOG_OWNER" already exists
    any insight most appreciated.

    Schemas SYS, CTXSYS, MDSYS and ORDSYS are not exported by exp/expdp.
    Doc ID: Note:228482.1
    I suppose the 12c software was already installed and the database already created, it seems - so when you imported, you got these "already exists" errors.
    Whenever the database is created and the software installed, SYSTEM, SYS and SYSAUX are created by default.
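    If the goal is simply to quieten the expected ORA-31684 noise for objects that every new database already has, one hedged option is a parameter file along these lines (the database link, directory and schema list are placeholders; TABLE_EXISTS_ACTION=SKIP is the default and is shown only for clarity):
    full_imp.par:
    FULL=Y
    NETWORK_LINK=src_10g
    DIRECTORY=dp_dir
    LOGFILE=full_imp.log
    EXCLUDE=TABLESPACE
    EXCLUDE=SCHEMA:"IN ('OUTLN','ANONYMOUS','OLAPSYS','MDDATA')"
    TABLE_EXISTS_ACTION=SKIP
    impdp system/password PARFILE=full_imp.par
    The quoting in the EXCLUDE name clause is the reason for using a parameter file rather than the command line.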

  • Use exp/imp to migrate a database

    hi,
    I want to migrate a database from Windows to HP-UX.
    Can I use the following approach? Thanks!
    First, exp from Windows:
    1. exp the schema without data
    2. exp the whole database with data
    Second, imp to Unix:
    1. use dbca to create a database
    2. imp the schema
    3. imp the data into all schemas

    Yes. Just make sure that you do a binary file transfer between the Windows and Unix systems, otherwise you will end up with a corrupted dump.
    Even after the transfer, just to make sure, you can do a trial import with the INDEXFILE and SHOW options to check the integrity of the dump.
    You can probably merge steps 2 and 3 of the import into one step.
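    For example, the dump can be sanity-checked after the transfer with something like this (neither command loads any data; user names and file names are placeholders):
    imp system/password FULL=Y FILE=full.dmp SHOW=Y LOG=show_check.log
    imp system/password FULL=Y FILE=full.dmp INDEXFILE=precreate.sql LOG=indexfile_check.log
    SHOW=Y just lists the dump's contents, and INDEXFILE writes the DDL to precreate.sql; if either aborts with a corruption error, the transfer was not binary-clean.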

  • Reconfigure one data pump extract into 4 and several replicats

    Hi
    I have set up basic replication between two databases, source 9iR2 and target 11gR2 (same platform); the goal is to perform a zero-downtime migration.
    I started from a very basic configuration just to see how this works - one extract, one data pump and one replicat - but I have observed some lag, so I think I might need more data pumps and more replicats.
    How can I reconfigure the current setup into a multiple replicat and data pump configuration without needing to re-instantiate the target database? I have looked into the support note "How to Merge Two Extract And Two Replicate Group To One Each, from one source DB to a target DB" (Doc ID 1518039.1), but I need to do the reverse.
    Thanks in advance
    Edited by: user2877716 on Feb 5, 2013 4:04 AM
    Edited by: user2877716 on Feb 5, 2013 4:05 AM

    Good example in the Apress book. Basically, stop extract, let pump and replicat finish the current stream, then split data pump and replicat as needed. Figure out where to start each pump and replicat with respect to the CSN.
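    A very rough GGSCI outline of the splitting step, heavily simplified and with hypothetical group, trail and CSN values - treat it as a sketch to adapt, not a tested procedure:
    GGSCI> STOP EXTRACT ext1
    (let the existing pump and replicat drain the remaining trail data)
    GGSCI> ADD REPLICAT rep2, EXTTRAIL ./dirdat/bb
    (edit the parameter files so rep1 and rep2 each MAP a disjoint set of tables)
    GGSCI> START REPLICAT rep2, AFTERCSN 1234567
    The AFTERCSN value has to come from where the original stream stopped, which is exactly the "figure out where to start each pump and replicat with respect to the CSN" step mentioned above.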

  • Data Pump .xlsx into a SQL Server Table and the whole 32-Bit, 64-Bit discussion

    First of all...I have a headache!
    I found LOTS of Google hits when trying to data pump a .xlsx file into a SQL Server table, along with the whole discussion of the Microsoft ACE 64-bit driver versus the Microsoft Jet 32-bit driver.
    Specifically receiving this error...
    An OLE DB record is available.  Source: "Microsoft Office Access Database Engine"  Hresult: 0x80004005  Description: "External table is not in the expected format.".
    Error: 0xC020801C at Data Flow Task to Load Alere Coaching Enrolled, Excel Source [56]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.  The AcquireConnection method call to the connection manager "Excel Connection Manager"
    failed with error code 0xC0202009.
    Strangely enough, if I simply data pump ONE .xlsx file into a SQL Server table utilizing my SSIS package, it seems to work fine. If instead I try to be pro-active and allow for multiple .xlsx files by using a Foreach Loop Container and a variable
    @[User::FileName], it errors out... but not really, because it is indeed storing the rows in the SQL Server table. I did check all my Delay
    Why does this have to be sooooooo difficult???
    Can anyone help me out here in trying to set up an SSIS package in a rather restrictive environment to pump a .xlsx file into a SQL Server table? What in God's name am I doing wrong? Or is all this a misnomer? But if it's working, how do I disable the error so that it stops erroring out?

    Hi ITBobbyP,
    According to your description, you get the error message when you import data from a .xlsx file into a SQL Server database.
    The error can be caused by the following reasons:
    The Excel file is locked by another process. Please re-save the file under a different file name to see whether the issue is fixed.
    The ACE (Access Database Engine) is not up to date, as Vaibhav mentioned. Please download the latest ACE and install it from this link:
    https://www.microsoft.com/en-us/download/details.aspx?id=13255.
    The bitness of your Office installation and of the server do not match. To solve the problem, please refer to the following document:
    http://hrvoje.piasevoli.com/2010/09/01/importing-data-from-64-bit-excel-in-ssis/
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Migration using Data Pump confusion :(

    Hi,
    I have the task of migrating a database from Windows x64 to Linux x64, and from 10.2.0.4 to 11.2.0.4. Please correct my scenario:
    1. Install and configure binaries on new Linux host
    2. Create an empty database on the Linux host
    3. Perform a Data Pump export from the old Windows database.
    4. Import the dump into the new database on Linux.
    Are the general steps correct? My confusion is about the Data Pump utility. As I understand it, Data Pump doesn't export the SYS/SYSTEM schemas, right? I've performed a test migration with the following parameters:
    expdp ..... full=y
    impdp ..... full=y
    and I got a lot of errors similar to "object already exists" and so on. Am I correct in using full=y in both expdp and impdp? Maybe I should export the full database and then import only specified schemas (the application schemas)? What about privileges? Where are they stored in the database? Maybe when I import only the application schemas there will be a problem with privileges, grants, etc.?
    Thanks

    Yes, your steps are correct. When you create the new database, the SYS/SYSTEM objects are already populated, so you will see "object already exists" errors.
    Moving Data Using Oracle Data Pump
    HTH
    Srini
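    On the question of importing only the application schemas: a full expdp dump can be imported selectively with SCHEMAS=, for example (the schema names and directory are placeholders):
    expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full_%U.dmp PARALLEL=4 LOGFILE=exp_full.log
    impdp system/password SCHEMAS=app_owner,app_user DIRECTORY=dp_dir DUMPFILE=full_%U.dmp LOGFILE=imp_app.log
    Grants that reference roles or users not present in the target will fail, so roles and any other supporting accounts may still need to be created (or imported) first.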

  • Migrate oracle 9207 DB 8 TB size frm Solaris to AIX?dont want  Exp/Imp

    Hi guys, please help.
    I want to migrate an 8 TB Oracle database from Solaris 8 to AIX 5.
    In my last post on the same topic I was told to refer to the Metalink notes
    291024.1, Note:77523.1 and Note:277650.1.
    According to these notes, 'EXPORT/IMPORT IS THE ONLY OPTION TO MIGRATE FROM SOLARIS TO AIX'.
    I was not convinced, as in http://dba.ipbhost.com/index.php?showtopic=9523
    I read:
    "As both Solaris and AIX are UNIX O/Ss, cloning the DB is also possible from source box to target box"
    Also,
    what is the role of "if the endianness is the same"?
    Can you guys please comment on this again, as I'm really confused - exp/imp of 8 TB
    is going to take half of my life :)
    Please tell me if there is any other option in 9.2.0.7.
    Thanks

    You can run
    SELECT PLATFORM_NAME, ENDIAN_FORMAT
    FROM V$TRANSPORTABLE_PLATFORM;
    to check the endianness of the different platforms.
    -- Note: the view exists only in 10g and above.
    Datafiles from a different endian format can't simply be copied over and used. On 10g you have the option of using RMAN to convert the endianness.
    With that said, since Solaris and AIX are both big-endian, you can directly copy the datafiles over and clone the database without using exp/imp.
    This article has a list of how to clone:
    http://www.dba-oracle.com/oracle_tips_db_copy.htm
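    For completeness, when the endian formats do differ, the 10g RMAN conversion mentioned above looks roughly like this (the tablespace name and staging path are placeholders, and the platform string must match a PLATFORM_NAME from V$TRANSPORTABLE_PLATFORM):
    RMAN> CONVERT TABLESPACE users
          TO PLATFORM 'AIX-Based Systems (64-bit)'
          FORMAT '/stage/%U';
    Since Solaris SPARC and AIX are both big-endian, no conversion is needed in this case; the point of the query above is to confirm that before copying the datafiles.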
