Moving a schema

I am attempting to move a schema called "App" from one Oracle server to a completely separate Oracle server. It's a basic production/test environment where we need to test on production data. I've looked at the Data Pump functionality and I believe I've exported successfully based on the EXPDAT.log file, but I can't seem to import it into the other server.
I'm very new to the Oracle environment (nothing like flying by the seat of your pants), but I can't imagine it should be this difficult.
I have tried to import the entire database, schema and even a single table and they all continue to fail.
Here is a copy of the output from EM:
ORA-01403: no data found
ORA-06512: at "SYSMAN.MGMT_JOB_ENGINE", line 5698
ORA-06512: at "SYSMAN.MGMT_JOB_ENGINE", line 7706
ORA-06512: at "SYSMAN.MGMT_JOB_ENGINE", line 8493
ORA-01403: no data found
ORA-06512: at "SYSMAN.MGMT_JOBS", line 273
ORA-06512: at "SYSMAN.MGMT_JOBS", line 86
ORA-06512: at line 1
Earlier I was getting a "succeeded" status, but no data was actually being imported due to a failure... unfortunately I can't even get that far now.
Any assistance would be greatly appreciated.
Thank you,

tsmeed,
If possible, post your Data Pump export/import job. Here is a working example of an export/import with Data Pump. Make sure both the source and target users have the right privileges to do so (EXP_FULL_DATABASE/IMP_FULL_DATABASE).
Replace MYSCHEMANAME with your schema name.
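Both jobs below read and write through a directory object named EXPORT_DIR. If it does not already exist on the server, a minimal sketch of creating it (the OS path here is hypothetical):
create or replace directory EXPORT_DIR as '/u01/app/oracle/dpdump';
grant read, write on directory EXPORT_DIR to MYSCHEMANAME;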
declare
  h1   NUMBER;
begin
  begin
      h1 := dbms_datapump.open (operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'MYSCHEMA', version => 'COMPATIBLE');
  end;
  begin
     dbms_datapump.set_parallel(handle => h1, degree => 1);
  end;
  begin
     dbms_datapump.add_file(handle => h1, filename => 'MY_SCHEMA_EXPORT.log', directory => 'EXPORT_DIR', filetype => 3);
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  end;
  begin
     dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MYSCHEMANAME'')');
  end;
  begin
     dbms_datapump.add_file(handle => h1, filename => 'myschema_export.dmp', directory => 'EXPORT_DIR', filetype => 1);
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'ESTIMATE', value => 'BLOCKS');
  end;
  begin
     dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  end;
  begin
     dbms_datapump.detach(handle => h1);
  end;
end;
/
My import job; replace MYSCHEMANAME with your schema name.
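Before kicking off the import on the target, it can help to confirm the export job finished cleanly on the source; a quick check against the standard DBA_DATAPUMP_JOBS view (an aside, not part of the original example):
select owner_name, job_name, operation, state from dba_datapump_jobs;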
declare
h1   NUMBER;
begin
  begin
      h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'MYIMPORT', version => 'COMPATIBLE');
  end;
  begin
     dbms_datapump.set_parallel(handle => h1, degree => 1);
  end;
  begin
     dbms_datapump.add_file(handle => h1, filename => 'IMPORT.LOG', directory => 'EXPORT_DIR', filetype => 3);
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 0);
  end;
  begin
     dbms_datapump.add_file(handle => h1, filename => 'myschema_export.dmp', directory => 'EXPORT_DIR', filetype => 1);
  end;
  begin
     dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MYSCHEMANAME'')');
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC');
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1);
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'TRUNCATE');
  end;
  begin
     dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
  end;
  begin
     dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0);
  end;
  begin
     dbms_datapump.detach(handle => h1);
  end;
end;
/
Regards

Similar Messages

  • "No sensor values have been generated for this instance." BPELConsole msg.

    I've reworked the GoogleFlowWithSensors demo to use BAM sensors as a prelude to incorporating BAM into a real BPEL process we have that's much more complicated. I've moved the schema in the GoogleFlowWithSensors.wsdl file to an imported XSD file as directed in the literature. I created in BAM Architect a DataObject called ProcessExecutionTimestamp with 5 columns: instanceId, receiveInput_TS, invokePartnerLink_TS, receivePartnerLink_TS, and callbackClient_TS. I was able to successfully tie the BAM sensors to this DataObject.
    Every time I test the BPEL process in BPELConsole, I get the results back, but I get nothing but a "No sensor values have been generated for this instance." message in the BPELConsole "sensors" pane. I have found nothing in any BAM or BPEL log that indicates what could be going on.
    Thanks in advance.

    Hi,
    BPEL Console does not show data published to BAM sensors; to see it in the BPEL Console you have to use the database sensor type.
    The BPEL log (default.log) shows any errors encountered when sending data to BAM.
    Thanks

  • Facing issue with answers

    Hi, I'm new to OBIEE. My issue is that in the Administration Tool, when I move my schema from the BMM layer to the Presentation layer, all the logical tables and columns are available. But in Answers I'm not able to view the columns of one of my tables. Could anyone guide me on how to fix this?

    Hi Paul,
    Check in the Presentation layer in the Administration Tool whether you have permission to see those columns.
    To do that:
    Presentation layer --> Table --> Column --> Properties --> Permissions --> mark Show all users/groups
    If you don't have permission, those columns won't show for you in Presentation Services.
    Thanks
    Don

  • Expdp network link

    Hi,
    1. Can we use a network link in expdp from 11g (destination) to 10g (target) for data migration?
    2. Can we copy the user passwords from 10g and apply them to 11g?
    The statements below were obtained from the source (10g):
    grant connect,resource to HKHR identified by 'EAA43BC83A9C1BD4';
    grant connect,resource to CSHE identified by 'C8F09D04F6AD4B05';
    If we execute the above at 11g, can we get the same passwords in 11g as in 10g?
    Is it possible? Because we have nearly 500 users in the database.
    Any other method?
    Thanks & Regards,
    VN

    Yes, this all should work just fine. If you want the schemas moved over, just make sure that you use a privileged account. If you are having each person move their own schemas and the schemas are not privileged, then the user accounts and passwords are not moved over.
    If you are using full=y, then the schema running the Data Pump job needs to be privileged anyway, so no issue. If you are moving multiple schemas, then the same applies: the schema running a multi-schema job needs to be privileged, so no issue.
    The passwords will be copied over when the schema is created.
    Again, this should all work fine.
    Dean
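    For the network-link route in question 1, a minimal sketch of the piece it relies on: a database link created on the 11g destination that points back at the 10g source (all names and the password here are hypothetical); the Data Pump job then references it through its NETWORK_LINK parameter.
    create public database link src10g
      connect to system identified by "manager_pw"
      using 'SRC10G_TNS';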

  • Sanction Party List [SAP GTS] Reporting & HANA on SAP BW

    Dear All,
    Since our company will be installing HANA in our BW environment, does anyone know if Sanctioned Party List Screening Analysis related to SAP GTS will be available after the install in BW, OR does HANA need to be installed within the SAP GTS environment to gain access to this reporting?
    I've read the following information, see the links below, and I'm hoping it's not the latter.  Any guidance is always appreciated.  Thank you.
    Sanctioned Party List Screening Analysis - SAP HANA Live for SAP GTS - SAP Library
    Release Notes for SAP HANA Live for SAP Business Suite 1.0, Q3/2 - SAP Library

    Hi Claire,
    If the Sanctioned Party List screening reports need to be displayed, the GTS server needs to be added in HANA Studio. Basis can do the initial setting; all the table-related data will be moved as a schema into HANA and the reports will be available. No further installation of HANA is required.
    Thank you
    Karthi

  • ORA-27067: size of I/O buffer is invalid

    Hi Experts,
    Today we faced the below error in our ASM instance alert log file. The day before, we moved one schema's datafiles from one location to another; the move was successful and the schema is accessible and verified... We do not understand why exactly we got this error.
    error :"ORA-27067: size of I/O buffer is invalid"
    below is the alert log error.
    Thu Jan 02 05:43:26 2014
    NOTE: client +ASM:+ASM registered, osid 827, mbr 0x0
    Thu Jan 02 05:46:39 2014
    Errors in file /home/oracle/app/oracle/diag/asm/+asm/+ASM/trace/+ASM_ora_1188.trc:
    ORA-27067: size of I/O buffer is invalid
    Additional information: 2
    Thu Jan 02 05:48:59 2014
    NOTE: ASMB process exiting due to lack of ASM file activity for 302 seconds
    Thu Jan 02 06:45:58 2014
    Starting background process ASMB
    Thu Jan 02 06:45:58 2014
    ASMB started with pid=22, OS id=9021
    Thu Jan 02 06:45:58 2014
    NOTE: client +ASM:+ASM registered, osid 9023, mbr 0x0
    Thu Jan 02 06:47:06 2014
    Thanks...

    Are you trying to install a seeded database from a preconfigured template? If yes, there's a bug with such a configuration, if the seeded files are read-only.
    Here the statement from Oracle:
    Bug 3996124 Abstract: RMAN CANNOT RESTORE BACKUPSET ON READONLY MEDIA
    Bug 2835956 Abstract: RMAN CANNOT RESTORE FROM READ-ONLY BACKUPS
    When DBCA creates a database from a template, it uses RMAN on the seeded files Seed_Database.ctl
    and Seed_Database.dfb.
    The bug is that RMAN cannot restore a file that is read-only.
    Since the files Seed_Database.ctl and Seed_Database.dfb are not read/write, DBCA gives the error.
    Solution
    To implement the solution, please execute the following steps:
    1. Change the file permissions to 644 on the files.
    2. Run the installation again.

  • Non existent table still in sys.objects, how to fix that?

    I have moved a table from one schema to another (dbo.MY_TABLE to RAW.MY_TABLE) using ALTER SCHEMA... The problem is that when our ETL tries to remove all the constraints, it finds 2 records for dbo.MY_TABLE: one for the user table and the other for the PK of that table.
    The problem is that sys.objects points to dbo.MY_TABLE, which now doesn't exist anymore. How could I update the sys.objects table to correctly reflect that the table has moved schema?

    Hi,
    sys.objects lists user objects. If
    select * from sys.all_objects where name='MY_TABLE'
    gives you more than 1 table (look at the "type_desc" column), you DO have more than 1 table. One in "dbo" schema and one in the "raw" schema.
    Those 2 tables should have different object_id values. Object_id is unique within a database, and all dependent objects use that object_id to refer to the object. They do not use object_name, schema name, not even schema_id. Just object_id. If you want to
    see all objects, including system objects, query sys.all_objects.
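    To see exactly which schema each of those entries belongs to, a query along these lines (joining the standard sys.schemas catalog view) may help:
    SELECT s.name AS schema_name, o.name AS object_name, o.object_id, o.type_desc
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.name = 'MY_TABLE';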
    Regards,
    Vedran

  • Move FLOWS_FILES to new tablespace

    Hi
    are there any restrictions on moving the FLOWS_FILES schema to a new tablespace?
    I'm planning to use export/import, as I can see there is a LOB there.
    Is there a better way?

    I mistakenly thought LONG restrictions applied here. Moved wwv_flow_file_objects$ via ALTER TABLE ... MOVE with subsequent index rebuilds.
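    For reference, a minimal sketch of that approach; the target tablespace name (NEW_TS) is hypothetical and the LOB column name (BLOB_CONTENT) should be checked against your APEX release:
    alter table flows_files.wwv_flow_file_objects$
      move tablespace new_ts
      lob (blob_content) store as (tablespace new_ts);
    -- the move marks the table's indexes UNUSABLE; list them, then rebuild each one with ALTER INDEX ... REBUILD TABLESPACE
    select owner, index_name
      from dba_indexes
     where table_owner = 'FLOWS_FILES'
       and table_name  = 'WWV_FLOW_FILE_OBJECTS$'
       and status      = 'UNUSABLE';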

  • Performance slows down when moving from stage to test schema within same instance with same database table and objects

    We have created a stage schema and tested the application, which is working fine. When we move it to another schema for further testing (this schema is created using the same scripts that were used to create the objects in the staging schema), the performance of the application (developed in .NET) slows down drastically.
    Some of the stored procedures we have checked at the database/SQL Developer level give almost the same performance, but at the application level there is a lot of difference.
    Can you please help?
    We are using Oracle 11g Database.

    Are you using the Database Cloud Service?  You cannot create schemas in the Database Cloud Service, which makes me think you are not.  This forum is only for the Database Cloud Service.
    - Rick Greenwald

  • Moving Subpartitions to a duplicate table in a different schema.

    +NOTE: I asked this question on the PL/SQL and SQL forum, but have moved it here as I think it's more appropriate to this forum. I've placed a pointer to this post on the original post.+
    Hello Ladies and Gentlemen.
    We're currently involved in an exercise at my workplace where we are in the process of attempting to logically organise our data by global region. For information, our production database is currently at version 10.2.0.3 and will shortly be upgraded to 10.2.0.5.
    At the moment, all our data 'lives' in the same schema. We are in the process of producing a proof of concept to migrate this data to identically structured (and named) tables in separate database schemas; each schema to represent a global region.
    In our current schema, our data is range-partitioned on date, and then list-partitioned on a column named OFFICE. I want to move the OFFICE subpartitions from one schema into an identically named and structured table in a new schema. The tablespace will remain the same for both identically-named tables across both schemas.
    Do any of you have an opinion on the best way to do this? Ideally in the new schema, I'd like to create each new table as an empty table with the appropriate range and list partitions defined. I have been doing some testing in our development environment with the EXCHANGE PARTITION statement, but this requires the destination table to be non-partitioned.
    I just wondered if, for partition migration across schemas with the table name and tablespace remaining constant, there is an official "best practice" method of accomplishing such a subpartition move neatly, quickly and elegantly?
    Any helpful replies welcome.
    Cheers.
    James

    You CAN exchange a subpartition into another table using a "temporary" (staging) table as an intermediary.
    See :
    SQL> drop table part_subpart purge;
    Table dropped.
    SQL> drop table NEW_part_subpart purge;
    Table dropped.
    SQL> drop table STG_part_subpart purge;
    Table dropped.
    SQL>
    SQL> create table part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition p_1 values less than (10) (subpartition p_1_s_1 values ('A'), subpartition p_1_s_2 values ('B'), subpartition p_1_s_3 values ('C'))
      5  ,
      6  partition p_2 values less than (20) (subpartition p_2_s_1 values ('A'), subpartition p_2_s_2 values ('B'), subpartition p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create index part_subpart_ndx on part_subpart(col_1) local;
    Index created.
    SQL>
    SQL>
    SQL> insert into part_subpart values (1,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'A');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'B');
    1 row created.
    SQL> insert into part_subpart values (2,'C');
    1 row created.
    SQL> insert into part_subpart values (11,'A');
    1 row created.
    SQL> insert into part_subpart values (11,'C');
    1 row created.
    SQL>
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create table NEW_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  partition by range (col_1) subpartition by list (col_2)
      3  (
      4  partition n_p_1 values less than (10) (subpartition n_p_1_s_1 values ('A'), subpartition n_p_1_s_2 values ('B'), subpartition n_p_1_s_3 values ('C'))
      5  ,
      6  partition n_p_2 values less than (20) (subpartition n_p_2_s_1 values ('A'), subpartition n_p_2_s_2 values ('B'), subpartition n_p_2_s_3 values ('C'))
      7  )
      8  /
    Table created.
    SQL>
    SQL> create table STG_part_subpart(col_1  number not null, col_2 varchar2(30))
      2  /
    Table created.
    SQL>
    SQL> -- ensure that the Staging table is empty
    SQL> truncate table STG_part_subpart;
    Table truncated.
    SQL> -- exchanging a subpart out of part_subpart
    SQL> alter table part_subpart exchange subpartition
      2  p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL> -- exchanging the subpart into NEW_part_subpart
    SQL> alter table NEW_part_subpart exchange subpartition
      2  n_p_2_s_1 with table STG_part_subpart;
    Table altered.
    SQL>
    SQL>
    SQL> select * from NEW_part_subpart subpartition (n_p_2_s_1);
         COL_1 COL_2
            11 A
    SQL>
    SQL> select * from part_subpart subpartition (p_2_s_1);
    no rows selected
    SQL>
    I have exchanged subpartition p_2_s_1 out of the table part_subpart into the table NEW_part_subpart -- even with a different name for the subpartition (n_p_2_s_1) if so desired.
    NOTE : Since your source and target tables are in different schemas, you will have to move (or copy) the staging table STG_part_subpart from the first schema to the second schema after the first "exchange subpartition" is done. You will have to do this for every subpartition to be exchanged.
    Hemant K Chitale
    Edited by: Hemant K Chitale on Apr 4, 2011 10:19 AM
    Added clarification for cross-schema exchange.
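    A minimal sketch of that extra cross-schema step, reusing the object names from the example above; the second schema's name (REGION2) is hypothetical and the copy method is just one option:
    -- recreate the staging table, with its data, in the second schema
    create table REGION2.STG_part_subpart as select * from STG_part_subpart;
    -- then run the second exchange against the copy owned by the second schema
    alter table REGION2.NEW_part_subpart exchange subpartition n_p_2_s_1 with table REGION2.STG_part_subpart;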

  • What is the impact on an Exchange server when moving FSMO role and schema master into another DC?

    What is the impact on an Exchange server when moving the FSMO roles and schema master to another DC? What do we have to do on Exchange after performing such a task?
    I had 1 DC (Windows Server 2008 R2) and 1 Exchange 2010 SP3 server. I installed a new DC (Windows Server 2008 R2), then moved all the FSMO roles, including the schema master role, to the new DC. I checked to be sure that the new DC is a GC as well.
    I shut down the old DC and my Exchange server stopped working properly, especially Exchange Management Shell. It started working again after I brought the old DC back up.
    I am wondering why Exchange did not recognize the new DC, even after moving all the roles to it.
    I am looking forward to hearing from you guys.
    Thanks a lot

    If you only have 1 DC, you might need to cycle the AD Topology service after shutting one down.
    Also, take a look in the Windows logs; there should be an event where Exchange goes to discover Domain Controllers. Make sure both are listed there. You can probably force that by cycling AD Topology (this will take all services down, so be careful when you do it).
    Please remember to click “Mark as Answer” on the post that helps you, and to click “Unmark as Answer” if a marked post does not actually answer your question. This can be beneficial to other community members reading the thread

  • PLS-00201 error after moving a package to a new schema

    Hi,
    I've moved a package to a new schema and all the packages in the original schema that reference the moved package now fail to compile. The moved package has had a public synonym created and execute privileges assigned to the original schema by role. What am I missing? Using 11gR2, version 11.2.0.3.0.

    Privileges granted through roles do not apply to stored procedures and packages that are compiled with definer rights (the default).  You need to grant the original schema execute privileges on the new schema's package directly.
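    A minimal sketch of that direct grant; the schema and package names here are invented for the example:
    -- run as the new owner (or a DBA): grant execute directly, not through a role
    grant execute on new_schema.moved_pkg to original_schema;
    -- then recompile the dependent packages in the original schema
    alter package original_schema.dependent_pkg compile;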
    John

  • Moving Target DB and Abstract Schema

    I apologize in advance for seeming clueless. My explanation is this: There is no money. I have inexperienced staff. I've been away from building architectures too long to be specific. I can't buy a contractor. I need some advice.
    We are converting many Access applications to Java/J2EE/AnyRelationalDB. The way we have planned to approach this is to divide the DBs into various classes (say Personnel records, Vehicles, and so on). These DBs will be moving targets that will change as we are able to discover Access applications that add/change features to whatever class of DB we're working with at the moment.
    My goal is to eliminate changing each and every App every time some DB parameter changes (DBMS, changed attribute, ..., etc). I think EJB/abstract schemas will let me get a generic view of the DB and insulate the App from the very real possibility of changing DB parameters.
    I need some help verifying this or pointing me in a better direction.
    Thanks for your help,
    Bob

    I apologize in advance for seeming clueless. My
    explanation is this: There is no money. I have
    inexperience staff. I've been away from building
    architectures too long to be specific. I can't buy a
    contractor. I need some advice.
    We are converting many Access applications to
    Java/J2EE/AnyRerelationalDB The way we have planned
    to approach this is to divide the DBs into various
    classes, say Personnel records, Vehicles, ...so on).
    These DBs will be moving targets that will change as
    we are able to discover Access applications that
    add/change features to whatever class of DB we're
    working with at the moment.
    My first advice is that the description of your team doesn't bode well for the success of the project described in the second paragraph.
    Let me frame it in another context to illuminate how dubious this sounds:
    I want to build a house with curved glass walls and high vaulted ceilings, perched on a steep hillside. There is no money. I have inexperienced staff. I've been away from building houses too long to be specific. I can't buy a contractor.
    My goal is to eliminate changing each and every App
    everytime some DB parameter changes (DBMS, changed
    attribute, ..., etc). I think EBJ/abstract schemas
    will let me to get a generic view of the DB and
    insulate the App from the very real possibility of
    changing DB parameters.
    If you use an EJB layer that supports XDoclet or other portable CMP, yes, it will do this. However, it's not simple, and if your table structure changes significantly, your EJB will not work automatically. The fact of the matter is that EJB is pretty complex and requires a lot of esoteric knowledge. Many EJB projects have failed or produced terrible results. If you don't have any very capable developer/designers and/or have no developers with solid EJB experience, I would under no circumstances attempt this. EJB is often overkill anyway. The real point of EJB is to help with distributed computing, not to abstract away the DB schema.
    A simple approach that many people overlook is to use stored procedures. Stored procedures create a layer of abstraction between your code and the DB such that the DB can change without changing the code.
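    As an illustration of that last point, a minimal sketch of such a procedure; the table and column names are invented for the example:
    create or replace procedure get_employee_name (
      p_person_id in  number,
      p_full_name out varchar2
    ) as
    begin
      -- the application only ever calls this procedure; if the underlying
      -- table or columns change, only this body has to be rewritten
      select first_name || ' ' || last_name
        into p_full_name
        from personnel
       where person_id = p_person_id;
    end get_employee_name;
    /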

  • EJB, Moving Target DB, Abstract Schema

    I apologize in advance for seeming clueless. My explanation is this: There is no money. I have inexperienced staff. I've been away from building architectures too long to be specific. I can't buy a contractor. I need some advice.
    We are converting many Access applications to Java/J2EE/AnyRelationalDB. The way we have planned to approach this is to divide the DBs into various classes (say Personnel records, Vehicles, and so on). These DBs will be moving targets that will change as we are able to discover Access applications that add/change features to whatever class of DB we're working with at the moment.
    My goal is to eliminate changing each and every App every time some DB parameter changes (DBMS, changed attribute, ..., etc). I think EJB/abstract schemas will let me get a generic view of the DB and insulate the App from the very real possibility of changing DB parameters.
    I need some help verifying this or pointing me in a better direction.
    Thanks for your help,
    Bob

    I think your best option is to implement CMP entity beans with a facade of services (business logic) that access the beans as tables in a DB. The only advantage of doing this will be the DB vendor independence and transparency because you define static queries in a declarative way.
    I don't quite understand what you mean by DB parameters. But if you refer to changes to the database schema, like new tables, new fields or changes to existing fields, you still need to align those changes with the attributes in your application.
    Cheers

  • SDK Schemas have moved - but where?

    Probably because of the recent OTN web site reorganization, the XSD files for XML extensions have moved. Would someone please update http://wiki.oracle.com/page/SQL+Dev+SDK+How+Tos to point to the new locations of navigator.xsd, query.xsd, editors.xsd ...
    Thanks.

    Would someone (Sue Harper?) please give us an ETA on providing the info we developers need to develop extensions to SqlDeveloper?
    We have been waiting over a year to:
    1. Get the XSD files the poster refers to - the list of XSDs is on the page cited but all of the links take you to a generic download page and the XSDs are nowhere to be found. These must exist somewhere so it is very frustrating that no one on the development team will provide them.
    2. Get the API Javadocs so we can understand the java classes available and how to use them. As with #1, these must be available to the development team so why won't you release them to us?
    3. Get a working example of a Java extension. The lone example provided is not useful since it is really just an XML extension written in Java. A useful Java extension would show how to create the hooks to cause SqlDeveloper to perform callbacks to the Java extension code when certain user actions take place. Same here as with #1 and #2. It's hard to believe that someone on the dev team doesn't have the code for a simple Java extension with callbacks.
    JDeveloper has 'hook' elements in its example extension.xml files but there is no documentation for SqlDeveloper to show the equivalent.
    Please either provide the above requested items, provide an ETA on when you will provide the items or at least be gracious enough to tell us you won't provide the items.
    I'm sure there are many like myself that would love to start working with extensions but can't because you won't share information and data that almost assuredly already exists.
    What is your position on these issues?
