Alter data on datapump import 10G?
Hi,
I see this is possible with a call to REMAP_DATA in 11g. Is there an alternative or simple workaround for a Data Pump import in 10g? I have one field that occasionally needs to be toggled when doing the import. Right now, the person who wrote the refresh job exports the active partitions from prod and imports into dev. If the data needs to go into partition2, she updates the field, exports again, and then re-imports, putting it into the correct partition at that point. I thought I might be able to change the value on the import (or on the export, if possible). Does anyone have any suggestions?
Thanks, Pete
REMAP_DATA sounds like it would work if you were on 11g, but since you are on 10g, the only thing I can think of is a two-step process. First, create the table, then create a trigger on it that does the necessary data remap. Then load the data using the CONTENT=DATA_ONLY parameter. I'm not sure how this will perform, since the trigger will fire on every insert, but I think it could be made to work.
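As a sketch of that trigger approach (the table, column, and partition-key values below are made up for illustration, since the original post doesn't name them):

```sql
-- Hypothetical trigger: toggle the partitioning field before each row lands,
-- so the data goes into partition2 without a second export/import cycle.
CREATE OR REPLACE TRIGGER dev.trg_remap_part_key
BEFORE INSERT ON dev.mytable
FOR EACH ROW
BEGIN
  IF :NEW.part_key = 'P1' THEN
    :NEW.part_key := 'P2';
  END IF;
END;
/
```

With the table and trigger pre-created on the dev side, something like impdp ... tables=mytable content=data_only table_exists_action=append would load only the rows, and the trigger can be dropped once the refresh finishes.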
Dean
Similar Messages
-
[10G] DATAPUMP IMPORT creates the USER automatically
Product: ORACLE SERVER
Date written: 2004-05-28
[10G] DATAPUMP IMPORT creates the USER automatically
===================================
PURPOSE
The following introduces this feature of DATAPUMP.
Explanation
With the imp utility, the target user had to already exist.
DATAPUMP, however, creates the target user automatically
if it does not exist.
Example :
=============
There is a user named TEST in the source database:
TEST ( password : test , role : connect , resource )
Export the TEST schema using datapump:
expdp system/oracle dumpfile=exp.dmp schemas=TEST
Case I
=======
If the TEST user does not exist, the import creates it automatically.
impdp system/oracle dumpfile=exp.dmp
*************TEST does not exist*************************************
Import: Release 10.1.0.2.0 - Production on Friday, 28 May, 2004 1:02
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Data Mining option
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=exp.dmp
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."DEPT" 5.648 KB 4 rows
. . imported "TEST"."SALGRADE" 5.648 KB 10 rows
. . imported "TEST"."BONUS" 0 KB 0 rows
. . imported "TEST"."EMP" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 01:02
connect TEST/TEST (on target database)
=> connected
SQL> select * from session_roles;
ROLE
connect
resource
Case II
========
If the TEST user already exists in the target database, a warning message is raised
and the import continues.
impdp system/oracle dumpfile=exp.dmp
*************user TEST already exists************************************
Import: Release 10.1.0.2.0 - Production on Friday, 28 May, 2004 1:06
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Data Mining option
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=exp.dmp
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."DEPT" 5.648 KB 4 rows
. . imported "TEST"."SALGRADE" 5.648 KB 10 rows
. . imported "TEST"."BONUS" 0 KB 0 rows
. . imported "TEST"."EMP" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 01:06
You will receive an ORA-31684 error, but the import will continue.
Case - III
===========
If the TEST user already exists in the target database and you want to avoid the
warning (ORA-31684), use EXCLUDE=USER.
impdp system/oracle dumpfile=exp.dmp exclude=user
*********Disable CREATE USER statement as user TEST already exists***********
Import: Release 10.1.0.2.0 - Production on Friday, 28 May, 2004 1:11
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Data Mining option
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=exp.dmp exclude=user
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."DEPT" 5.648 KB 4 rows
. . imported "TEST"."SALGRADE" 5.648 KB 10 rows
. . imported "TEST"."BONUS" 0 KB 0 rows
. . imported "TEST"."EMP" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 01:11
If the TEST user does not exist and you use the EXCLUDE=USER parameter,
you will get an ORA-01917 error.
Reference Documents :
Note : 272132.1 DATAPUMP import automatically creates schema -
ORACLE 10g : Datapump-Import : Error: unknown parameter name 'REMAP_TABLE'
Hi,
I am working on Oracle 10g. When I executed the Data Pump import script on the UNIX box with the "REMAP_TABLE" option, I received the error "unknown parameter name 'REMAP_TABLE'". Can you please give me a solution?
Scripts :-
impdp eimsexp/xyz TABLES=EIMDBO.DIVISION DIRECTORY=DATAMART_DATA_PUMP_DIR DUMPFILE=expdp_DATAMART_tables_%u_$BATCH_ID.dmp LOGFILE=impdp_DATAMART_tables.log REMAP_TABLE=EIMDBO.DIVISION:EIM2DBO:YR2009_DIM_DIVISION
Note: the YR2009_DIM_DIVISION table is available in the target database and is not partitioned. EIMDBO.DIVISION is a partitioned table.
Thanks.
REMAP_TABLE was introduced in 11g, so the 10g impdp does not recognize the parameter. See the answer in your other post here:
ORACLE 10g : Datapump-Import : Error: unknown parameter name 'REMAP_TABLE'
Srini -
How to consolidate data files using data pump when migrating 10g to 11g?
We have one 10.2.0.4 database to be migrated to a new box running 11.2.0.1. The 10g database has too many data files scattered across too many file systems. I'd like to consolidate the data files into one or two large files in one file system. Both OSs are RHEL 5. How should I do that using Data Pump export/import? I know there is a "remap" option, but it only does one-to-one mapping. How can I map multiple old data files into one new data file?
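One possible sketch: REMAP_DATAFILE is indeed one-to-one, but REMAP_TABLESPACE can be repeated, so many source tablespaces can be collapsed into a single pre-created target tablespace, regardless of how the old data files were laid out. All names below are made-up examples, not from the original post:

```shell
# Pre-create one consolidated tablespace on the 11.2 target first (SQL*Plus):
#   CREATE TABLESPACE ts_new DATAFILE '/u01/oradata/db11g/ts_new01.dbf' SIZE 50G;
# Then remap every old tablespace into it; repeating the parameter makes the
# overall mapping many-to-one even though each entry is one-to-one.
impdp system DIRECTORY=dp_dir DUMPFILE=full_%U.dmp FULL=Y \
  REMAP_TABLESPACE=ts_old1:ts_new REMAP_TABLESPACE=ts_old2:ts_new
```

This consolidates segments rather than data files directly, which is usually what the scattered-file problem actually calls for.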
Hi,
Data Pump can be terribly slow; make sure you have as much memory as possible allocated to Oracle, but the bottleneck can be I/O throughput.
Use PARALLEL option, set also these ones:
* DISK_ASYNCH_IO=TRUE
* DB_BLOCK_CHECKING=FALSE
* DB_BLOCK_CHECKSUM=FALSE
set high enough to allow for maximum parallelism:
* PROCESSES
* SESSIONS
* PARALLEL_MAX_SERVERS
more:
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_perf.htm
that's it, patience welcome ;-)
P.S.
For maximum throughput, do not set PARALLEL to much more than twice the number of CPUs (two workers for each CPU).
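As a rough sketch, the import side of this advice could go into a parfile; the directory, file, and log names here are hypothetical:

```
DIRECTORY=dp_dir
DUMPFILE=exp_%U.dmp
LOGFILE=imp_parallel.log
PARALLEL=8
```

PARALLEL only helps if the DUMPFILE clause gives the workers enough files to read (hence the %U wildcard); start it with impdp system parfile=imp.par and keep PROCESSES/SESSIONS headroom as noted above.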
Edited by: g777 on 2011-02-02 09:53
P.S.2
breaking news ;-)
I am playing now with storage performance, and I turned on the disk cache option (also called write-back cache; it goes along with at least RAID 0 and RAID 5), and it gave me a 1.5x to 2x speed-up!
Some say there is a risk of losing more data when an outage happens, but there is always such a risk; this way you may just lose a bit more. Anyway, if you can afford it (and with an import it's OK, as it is not production at that moment), I recommend trying it. It takes 15 minutes to set up, but you can gain 2.5 hours out of 10 hours of normal importing.
Edited by: g777 on 2011-02-02 14:52 -
Hi,
I am facing a small problem with Data Pump import. I exported a schema using Data Pump export (expdp) and used the same dump file for import (impdp). Once the import was over, I saw that all procedures, triggers, etc. were created with double quotes, such as CREATE OR REPLACE PROCEDURE "ACCOUNTTRANSFERDETAIL_INSERT". So when doing a database difference, these all come up as mismatches. Is there any way to overcome this?
When I did an export and import in Oracle 9i, everything was OK.
Now our QA team is doing Oracle 10g export and import (exp & imp) activities, and I am using Oracle 10g Data Pump for export and import (expdp & impdp); this is where we are facing the problem. The comparison is between Oracle 10g (exp & imp) and Oracle 10g Data Pump (expdp & impdp).
For their schema there were no double quotes, but when I refreshed a schema using impdp, the object names were enclosed in double quotes.
I also checked the data dictionary; there too I see the object names enclosed in double quotes.
So if anybody has a solution, please revert.
Thanks
Regards,
Ganesh R
Somehow I don't think this is a problem with a download. You'll get a lot more eyeballs on your question if you ask in the right place.
Since Data Pump is part of the database, you might try asking your question in a Database forum (go up 2 levels and take a look at the screen). -
Datapump import doesn't filter out table when mentioned in 'exclude'
The Data Pump import below fails to filter out the COUNTRIES table. I thought that was what EXCLUDE was supposed to do? Any suggestions?
C:\Documents and Settings\>impdp remap_schema=hr:hrtmp dumpfile=hr1.dmp parfile='C:\Documents and Settings\para.par'
Import: Release 10.2.0.1.0 - Production on Monday, 18 January, 2010 15:00:05
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production
Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_01": /******** AS SYSDBA remap_schema=hr:hrtmp dumpfile=hr1.dmp parfile='C:\Documents and Settings\para.par'
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HRTMP"."COUNTRIES" 6.093 KB 25 rows
. . imported "HRTMP"."DEPARTMENTS" 6.640 KB 27 rows
. . imported "HRTMP"."EMPLOYEES" 15.77 KB 107 rows
. . imported "HRTMP"."JOBS" 6.609 KB 19 rows
. . imported "HRTMP"."JOB_HISTORY" 6.585 KB 10 rows
. . imported "HRTMP"."LOCATIONS" 7.710 KB 23 rows
. . imported "HRTMP"."REGIONS" 5.296 KB 4 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at 15:00:13
para.par
========
exclude=TABLES:"like '%COUNTRIES%'"
Hi,
The first thing I see is that you excluded TABLES (plural). The object type is not pluralized in Data Pump. The parfile should look like:
exclude=TABLE:"like '%COUNTRIES%'"
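Worth noting: the filter is best kept in a parfile precisely because the embedded quotes in the EXCLUDE clause would otherwise need shell or command-line escaping. With the corrected parfile saved, the original invocation is rerun unchanged:

```shell
impdp remap_schema=hr:hrtmp dumpfile=hr1.dmp parfile='C:\Documents and Settings\para.par'
```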
Dean -
Datapump import of tables will automatically rebuild indexes ?
dear all,
Does a Data Pump import of tables automatically rebuild the indexes on the associated tables?
Urgent response, please.
Yes, indexes are rebuilt.
From dba-oracle
Set indexes=n - index creation can be postponed until after the import completes by specifying indexes=n. If indexes for the target table already exist at the time of execution, import performs index maintenance as data is inserted into the table. Setting indexes=n eliminates this maintenance overhead. You can also use the indexfile=filename parameter to rebuild all the indexes at once, after the data is loaded. When editing the indexfile, add the NOLOGGING and PARALLEL keywords (where parallel degree = cpu_count - 1). -
Hi,
Oracle Version:10.2.0.1
Operating System:Linux
Can we run two Data Pump import processes at the same time, on the same dump file, on the same machine, in parallel?
Thanks & Regards,
Poorna Prasad.
Neerav999 wrote:
No, you cannot, because a Data Pump import starts by creating a master process, worker processes, a client process, and a shadow process; no import can be done until these processes are free.
I am not sure what you want to say here. Do you mean that we can't run two parallel import processes since the processes won't be free? Please see below,
E:\>time
The current time is: 18:24:17.04
Enter the new time:
E:\>impdp system/oracle directory=expdpdir dumpfile=dumpfile.dmp remap_schema=hr:hr2
Import: Release 11.1.0.6.0 - Production on Saturday, 06 February, 2010 18:24:25
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_02": system/******** directory=expdpdir dumpfile=dumpfile.dmp remap_schema=hr:hr2
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR2"."COUNTRIES" 6.375 KB 25 rows
. . imported "HR2"."DEPARTMENTS" 7.015 KB 27 rows
. . imported "HR2"."EMPLOYEES" 16.80 KB 107 rows
. . imported "HR2"."JOBS" 6.984 KB 19 rows
. . imported "HR2"."JOB_HISTORY" 7.054 KB 10 rows
. . imported "HR2"."LOCATIONS" 8.273 KB 23 rows
. . imported "HR2"."REGIONS" 5.484 KB 4 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_IMPORT_FULL_02" successfully completed at 18:24:44
And the second session:
E:\Documents and Settings\aristadba>time
The current time is: 18:24:19.37
Enter the new time:
E:\Documents and Settings\aristadba>impdp system/oracle directory=expdpdir dumpfile=dumpfile.dmp remap_schema=hr:hr
Import: Release 11.1.0.6.0 - Production on Saturday, 06 February, 2010 18:24:22
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** directory=expdpdir dumpfile=dumpfile.dmp remap_schema=hr:h
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "HR1"."COUNTRIES" 6.375 KB 25 rows
. . imported "HR1"."DEPARTMENTS" 7.015 KB 27 rows
. . imported "HR1"."EMPLOYEES" 16.80 KB 107 rows
. . imported "HR1"."JOBS" 6.984 KB 19 rows
. . imported "HR1"."JOB_HISTORY" 7.054 KB 10 rows
. . imported "HR1"."LOCATIONS" 8.273 KB 23 rows
. . imported "HR1"."REGIONS" 5.484 KB 4 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 18:24:44
Can you see the time difference? It's just 2 seconds between when the two jobs started!
Would you be kind enough to explain the statement you gave just now? There is a good chance I have not understood it.
HTH
Aman.... -
DataPump Import over Network and Parallel parameter
I see and understand the advantages of using the PARALLEL parameter with Data Pump and dump files. What I am trying to figure out is whether there is any advantage to the PARALLEL parameter when importing data over the network. I have noticed there are sweet spots for the number of parallel workers and dump files, depending on the data; sometimes ten workers and ten dump files are no faster, if not slower, than three. I have not been able to find a clear answer on this. Any advice or explanation would be appreciated.
Thanks
Reading data over the network is slower than the other options available with Data Pump.
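For reference, a network-mode import takes no dump files at all; it pulls rows through a database link named by NETWORK_LINK. The link, schema, and directory names below are hypothetical:

```shell
# Rows are pulled from the source through the prod_link database link;
# DIRECTORY is only needed for the log file in network mode.
impdp system NETWORK_LINK=prod_link SCHEMAS=hr PARALLEL=4 \
  DIRECTORY=dp_dir LOGFILE=net_imp.log
```

Because every worker shares the same network pipe, raising PARALLEL much past the link's throughput buys little.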
The performance of a parallel Data Pump import is controlled by I/O contention. You have to determine the best value for parallel Data Pump export and import by testing. -
I recently lost my iPhone 4 and have an iPhone 3 on loan until I get an upgrade in August. I want to sync my data (contacts most important) from my iTunes account. Is this possible? Is it possible to change the user ID on the borrowed phone to my ID?
There are no contacts in iTunes.
Your contacts should be on your computer, in whatever program you have selected to sync. Just sync them to the new phone.
Click Support at the top of this page, then click Manuals.
Changing the iTunes account on the iPhone is covered there, as is restoring the iPhone as new or from backup. -
How do we create a new database in Oracle 10g Express Edition?
Hello everybody. I am a B.Tech student and a new user of Oracle, so please help me: how do we create a new database in Oracle 10g Express Edition?
Hello. Oracle XE cannot create more than one instance (the other editions can), but like the other editions, XE lets you create database schemas: logical groupings of objects such as tables, views, and indexes created by a user. When you create an Oracle user, a schema of the same name is associated with it.
Use the SYS or SYSTEM account to create user accounts.
Syntax to create a user:
create user your_user
identified by password
default tablespace users;
grant connect, resource to your_user;
Edited by: rober584812 on Jun 25, 2010 9:03 PM -
I need help proving that the date tag on a photo stored in my iPhoto is from the date it was sent to my iPhone / the date it was imported into iPhoto, and that it is NOT the date the photo was actually taken. I received a photo via text on my iPhone, then I synced my iPhone to my MacBook, and now it is in iPhoto. I already know that the date in the tag that shows up in iPhoto is NOT the date the photo was actually taken. I need an article, literature, or something confirming the tag is from when it was sent to the iPhone and/or when it was imported. I greatly appreciate some assistance!
All I am trying to do is find something on a forum board, an article, etc., stating that the date showing in iPhoto could be the date it was imported, synced, or sent to me, and not the actual date taken.
The date on the photo could be anything because you can edit the date with iPhoto or any of 100 apps, free and paid for. So, the date on the photo will prove nothing, I'm afraid.
Regards
TD -
Data Loader - Only imports first record; remaining records fail
I'm trying to use Data Loader to import a group of opportunities. Every time I run the Data Loader it only imports the first record; all the other records fail with the message "An unexpected error occurred during the import of the following row: 'External Unique Id: xxxxxxx'". After running the Data Loader, I can modify the data file and remove the first record that was imported. Running the Data Loader again, the first row (previously the second row) imports successfully.
Any idea what could be causing this behavior?
We need a LOT more information, starting with the OS and the version of ID, including any applied patches.
Next we need to know if you are doing a single record per page or multiple records, whether the placeholders are on the master page, how many pages are in the document, and if they all have fields on them (some screen captures might be useful; embed them using the camera icon on the editing toolbar on the webpage rather than attaching, if it works [there seem to be some issues at the moment, though only for some people]).
What else is on the page? Are you really telling it to merge all the records, or just one?
You get the idea... Give a full description of what you have, what you are doing, and what you get instead of what you expect. -
I am trying to change the date on an imported home video clip. I entered 9/7/2012 and got 6/9/0339! I checked "modify original file", but it is not working. Any ideas?
Are you using the Adjust Date and Time option or the Batch Change option? If it's the former try the latter. If you get the same results launch iPhoto with the Option key held down and create a new, test library. Import the same video file and check to see if the same problem persists.
OT -
Brief about Console, data manager and Import manager
Hi,
I am new to MDM; we just started installing our MDM components, and I am an MM functional guy. Can anyone tell me about the Console, Data Manager and Import Manager? I need to know the following:
1. What are they?
2. How are the three interlinked?
3. Can data be loaded into the Data Manager directly, without an import?
4. Is there a standard way of creating a repository?
5. I need to do samples on vendor and customer harmonization, spend analysis, and material master harmonization.
I am asking for too much in this forum, but I know you guys are there to help me out.
Thanks in advance,
Suresh
MDM Console
The MDM Console allows the system manager to administer and monitor the MDM Server, and to create, maintain the structure of, and control access to the MDM repositories themselves. Records are not entered or managed with the MDM Console, but rather with client apps such as the MDM Client.
MDM Client (Data Manager)
The MDM Client is the primary client of the MDM Server, and is the program most users see on their PCs. It allows users to store, manage and update master data consisting of text, images, and other rich content, and to create taxonomies, families, and relationships. Other clients include the Import Manager, the Image Manager, and the Syndicator.
Import Manager
Allows you to import master data from most types of flat or relational electronic source files (e.g. Excel, delimited text, SQL, XML, and any ODBC-compliant source), and to restructure, cleanse, and normalize master data as part of the import process.
2. How are the three interlinked?
Suppose you are sending vendor information to the MDM repository by sending an IDoc from R/3. The IDoc is sent to XI, and XI generates an XML file that contains the data. The Import Manager then imports this file and places the data in the repository.
If you want to send data out of the MDM repository, the process goes the reverse way: the Syndicator generates a flat/XML file, the file is given to XI, and XI generates an IDoc out of it and sends it to R/3. This whole process ensures data consistency.
For details refer this link
http://help.sap.com/saphelp_mdm550/helpdata/en/43/D7AED5058201B4E10000000A11466F/frameset.htm