Tables described as "Retrofitted" during import
Hi,
I have a source module (Oracle Database 8.1.7), a target module (Oracle Database 10.2), and OWB 10g Release 2.
I cannot import directly from the source schema; instead I have to go through a read-only schema that has been granted access to the tables.
When I import the tables, the OWB screen shows their description as "retrofitted".
I am able to create the locations and to validate and generate the mappings successfully, but when I deploy the mappings I get an error:
ORA-06550: line 59, column 12:
PL/SQL: ORA-00942: table or view does not exist
I checked the DB links and the connectors, and I registered the modules successfully.
As it is a remote database, I have to access it through DB links, and there is no question of granting privileges.
Please suggest what exactly the problem might be.
Regards,
Kishan
Kishan,
you may want to create synonyms for the tables and views you use in your mappings.
Regards,
Jörg
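For example, Jörg's suggestion might look like this in the target schema (a sketch only; the link, user, and table names below are hypothetical):

```sql
-- Hypothetical names throughout: src_link, ro_user, customers.
-- A database link pointing at the read-only schema on the 8.1.7 source:
CREATE DATABASE LINK src_link
  CONNECT TO ro_user IDENTIFIED BY ro_password
  USING 'SRC817';

-- One synonym per imported table, so the PL/SQL that OWB generates
-- can resolve the table name at deployment time:
CREATE SYNONYM customers FOR ro_user.customers@src_link;
```

With the synonyms in place, the generated mapping code references local names instead of failing name resolution with ORA-00942.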
Similar Messages
-
What are the important standard tables used in the FI/CO module?
Moderator Message: Please avoid asking such queries.
Edited by: kishan P on Jan 24, 2012 12:37 PM
Hi Sanj,
Please go through the available information and then you can ask if there is any queries.
[Google|http://www.google.co.in/search?sourceid=chrome&ie=UTF-8&q=important+tables+in+fico+in+sap]
Regards,
Madhu. -
Controlling posting sequence during GL Import(GL_INTERFACE table)
I am putting multiple journal records with the same accounting date into the GL_INTERFACE table. I have enabled AutoPost.
My requirement is that, during journal import and the subsequent auto-posting, some particular journals must be posted before the others. Is there any column in GL_INTERFACE, other than the accounting date, that can be used to sequence the posting of the journals in GL?
This requirement comes from the fact that some of the journals add funds to an account and other journals use the funds from that account. My requirement is that when I do a journal import, GL should first post the journals that add funds and then post the journals that use those funds; otherwise I may get a funds-check failure.
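One avenue worth checking (a sketch; GROUP_ID is a standard GL_INTERFACE column, but the tagging column and the values below are hypothetical) is to give the funding journals and the spending journals different GROUP_ID values, so they can be imported, and therefore posted, as separate earlier and later batches:

```sql
-- Hypothetical tagging: assume REFERENCE21 marks the journal kind.
-- Funding journals go into an earlier import group...
UPDATE gl_interface SET group_id = 100 WHERE reference21 = 'FUNDING';
-- ...and spending journals into a later one.
UPDATE gl_interface SET group_id = 200 WHERE reference21 = 'SPENDING';
COMMIT;
```

Whether AutoPost actually processes group 100 before group 200 depends on how the Journal Import runs and the AutoPost criteria are set up, so this needs to be verified in a test environment.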
Please let me know of solutions...
Hi,
Thanks for the solution. My company has assigned me a similar kind of automation process, so if you could give me more input on this, that would be great. You can send me further mails at the following id: [email protected], or a reply in this forum is much appreciated. Waiting for your reply.
Thanks,
Nagaraj -
What is the importance of "table type" in SAP ABAP?
Hi,
I am Ahmed, an ABAP fresher.
I want to know the use and importance of "table type" in SAP ABAP, which appears in the Data Dictionary:
Data Dictionary
└── Data Types
    ├── Data Element
    ├── Structure
    └── Table Type
I want to know about table type. Please give me a brief idea.
Bye.
Hi,
Transparent Tables
A transparent table in the dictionary has a one-to-one relationship with a table in the database. Its structure in R/3 Data Dictionary corresponds to a single database table. For each transparent table definition in the dictionary, there is one associated table in the database. The database table has the same name, the same number of fields, and the fields have the same names as the R/3 table definition. When looking at the definition of an R/3 transparent table, it might seem like you are looking at the database table itself.
Transparent tables are much more common than pooled or cluster tables. They are used to hold application data. Application data is the master data or transaction data used by an application. An example of master data is the table of vendors (called vendor master data), or the table of customers (called customer master data). An example of transaction data is the orders placed by the customers, or the orders sent to the vendors.
Transparent tables are probably the only type of table you will ever create. Pooled and cluster tables are not usually used to hold application data but instead hold system data, such as system configuration information, or historical and statistical data.
Both pooled and cluster tables have many-to-one relationships with database tables. Both can appear as many tables in R/3, but they are stored as a single table in the database. The database table has a different name, different number of fields, and different field names than the R/3 table. The difference between the two types lies in the characteristics of the data they hold, and will be explained in the following sections.
Table Pools and Pooled Tables
A pooled table in R/3 has a many-to-one relationship with a table in the database (see Figures 3.1 and 3.2). For one table in the database, there are many tables in the R/3 Data Dictionary. The table in the database has a different name than the tables in the DDIC, it has a different number of fields, and the fields have different names as well. Pooled tables are an SAP proprietary construct.
When you look at a pooled table in R/3, you see a description of a table. However, in the database, it is stored along with other pooled tables in a single table called a table pool. A table pool is a database table with a special structure that enables the data of many R/3 tables to be stored within it. It can only hold pooled tables.
R/3 uses table pools to hold a large number (tens to thousands) of very small tables (about 10 to 100 rows each). Table pools reduce the amount of database resources needed when many small tables have to be open at the same time. SAP uses them for system data. You might create a table pool if you need to create hundreds of small tables that each hold only a few rows of data. To implement these small tables as pooled tables, you first create the definition of a table pool in R/3 to hold them all. When activated, an associated single table (the table pool) will be created in the database. You can then define pooled tables within R/3 and assign them all to your table pool (see Figure 3.2).
Pooled tables are primarily used by SAP to hold customizing data.
When a corporation installs any large system, the system is usually customized in some way to meet the unique needs of the corporation. In R/3, such customization is done via customizing tables. Customizing tables contain codes, field validations, number ranges, and parameters that change the way the R/3 applications behave.
Some examples of data contained in customizing tables are country codes, region (state or province) codes, reconciliation account numbers, exchange rates, depreciation methods, and pricing conditions. Even screen flows, field validations, and individual field attributes are sometimes table-driven via settings in customizing tables.
During the initial implementation of the system the data in the customizing tables is set up by a functional analyst. He or she will usually have experience relating to the business area being implemented and extensive training in the configuration of an R/3 system.
Table Clusters and Cluster Tables
A cluster table is similar to a pooled table. It has a many-to-one relationship with a table in the database. Many cluster tables are stored in a single table in the database called a table cluster.
A table cluster is similar to a table pool. It holds many tables within it. The tables it holds are all cluster tables.
Like pooled tables, cluster tables are another proprietary SAP construct. They are used to hold data from a few (approximately 2 to 10) very large tables. They would be used when these tables have a part of their primary keys in common, and if the data in these tables are all accessed simultaneously. The data is stored logically as shown in Figure 3.3.
Figure 3.3 : Table clusters store data from several tables based on the primary key fields that they have in common.
Table clusters contain fewer tables than table pools and, unlike table pools, the primary key of each table within the table cluster begins with the same field or fields. Rows from the cluster tables are combined into a single row in the table cluster. The rows are combined based on the part of the primary key they have in common. Thus, when a row is read from any one of the tables in the cluster, all related rows in all cluster tables are also retrieved, but only a single I/O is needed.
A cluster is advantageous in the case where data is accessed from multiple tables simultaneously and those tables have at least one of their primary key fields in common. Cluster tables reduce the number of database reads and thereby improve performance.
For example, as shown in Figure 3.4, the first four primary key fields in cdhdr and cdpos are identical. They become the primary key for the table cluster with the addition of a standard system field pageno to ensure that each row is unique.
Reward if helpful
Jagadish -
Transaction log growth during the import of huge data volume
Hi,
I have a table in a staging area with an identity primary key, into which I want to import some millions of rows. I need to load some nvarchar(100) fields and an nvarchar(2048) field. During the ETL execution, the transaction log of the staging-area database grows,
allocating another 50-60 GB, and then the related job fails.
Is it the right behaviour for the transaction log depending on row number and field dimension?
Do I need to perform any particular action to solve this issue?
Many thanks
Is it the right behaviour for the transaction log depending on row number and field dimension?
Do I need to perform any particular action to solve this issue?
Many thanks
It's expected behavior: in the full recovery model every row insert is logged, hence the transaction log fills up. Unless you are doing a bulk insert (or some other operation that supports minimal logging), the bulk-logged recovery model won't help. Also, you lose
point-in-time recovery while minimal logging is in effect.
Instead, leave the recovery model at full and do the inserts in batches; that would be best.
You must read
data loading performance guide
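A minimal sketch of the batching idea (table and column names here are hypothetical), so that each transaction stays small and the log space can be reused between log backups:

```sql
-- Move rows in chunks of 50,000 instead of one huge INSERT...SELECT.
DECLARE @batch INT = 50000;

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.stage_target (id, code, big_text)
    SELECT TOP (@batch) s.id, s.code, s.big_text
    FROM dbo.stage_source AS s
    WHERE NOT EXISTS (SELECT 1
                      FROM dbo.stage_target AS t
                      WHERE t.id = s.id);   -- skip rows already copied

    IF @@ROWCOUNT < @batch BREAK;           -- last (or empty) chunk done
END;
```

In full recovery the log still records every row, but frequent log backups between batches let the log space be reused instead of growing by tens of GB.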
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Key mapping during the transportation (lookup tables)
Hello,
For lookup tables with key mapping, during the transport from DEV -> QA -> Prod it is asked to copy the development repository "without master data". How can we do this? And how are workflows and matching strategies transported?
Thanks
Hi,
For lookup tables with key mapping, during the transport from DEV -> QA -> Prod it is asked to copy the development repository "without master data". How can we do this? And how are workflows and matching strategies transported?
When we are moving from dev to QA to prod normally the remote system to which MDM is interacting also moves to similar environments. In different environments the reference table data may not match and hence it is advised not to move with same data. You can do this by simply exporting the schema of the repository from dev to QA and so on. That is create a new repository in QA and Prod by using option of "Export from Schema".
Matching strategies and workflows are not supported by this schema transport in MDM 5.5.
They have to be created manually once the repository has been created.
Good news is MDM 7.1 supports this.
regards
Ravi -
Add the ability to manipulate image data early during the import process
Following many discussions about the inability to cleanly support aspect ratio in LR, it would be useful to have the ability to intervene (early) in the import process and manipulate the EXIF data. It is possible to attach a metadata preset during import, but presets do not give the ability to change the aspect-ratio and cropping tags.
This feature could have many other uses, such as allowing more complex naming schemes, copying location data, etc.
Perhaps your post would be better received in the Lightroom forum? (This is the SDK forum.)
Or perhaps better still, the feedback forum:
http://feedback.photoshop.com/photoshop_family/topics/import_actions
http://feedback.photoshop.com/photoshop_family/topics/new
SDK-wise: one can easily intervene via plugin and apply whatever transformations pre-import that one could think of...
Rob -
Strange names of folders during the import of photos from Windows
I am a new user of OS X coming from Windows. I imported my photos from a hard drive on Windows into iPhoto. I have all my old folders, but I also have new folders with "original" names, and they seem to be copies of the other folders with different resolutions. Is this normal, and can I delete those files without a problem?
Duplicate post, see:
https://forums.adobe.com/thread/1809862 -
Import Excel with Import Wizard / Problem during the import of "Actual Work" column
Hello,
I am trying to import an Excel File (*.xlsx) to Project Server 2013 through Project Professional 2013 and Import Wizard.
I am having trouble when I try to include the column "Actual Work". After selecting the column mapping I get a strange error message saying "The file cannot be opened", "Make sure that the file name and path are right",
and some other strange messages concerning compatibility, although I am using the same file, which works perfectly without the "Actual Work" column.
I have tried as well with csv and the outcome was the same.
Any help is welcome!
Thank you in advance,
Ioannis
Which DW are you using - DMX on Mac? It didn't have that option. This is not an Educational version issue - it's a Mac issue.
Murray --- ICQ 71997575
Adobe Community Expert
(If you *MUST* email me, don't LAUGH when you do so!)
==================
http://www.projectseven.com/go
- DW FAQs, Tutorials & Resources
http://www.dwfaq.com - DW FAQs,
Tutorials & Resources
==================
"Terry_Straehley" <[email protected]> wrote
in message
news:fm85u9$5ec$[email protected]..
> From a 11/06 post:
> <<Hi, My Dreamweaver MX Education Version does NOT have the Import to Excel option. I am using "Excel 2003 (11.6560.6568) SP2, Part of Microsoft Office Professional Edition 2003". If I start with a blank page in Dreamweaver, click on File, Import, I only have the options "XML into Template", "Word HTML", and "Tabular Data". Does anyone know why I don't have the "Import Excel"?>>
>
> This was not answered in the thread I copied it from. I have the same problem.
> Can someone answer the question?
-
Which temp tablespace will used during the import (impdp) ?
Oracle version : 11.2.0.3 on Solaris
We have created an empty schema called MLDT_FN and set its default temporary tablespace as TEMP_MLDT. We usually use SYSTEM user to do exports (expdp) and imports (impdp)
During a schema level (or table level) import to MLDT_FN schema, which temporary tablespace will be used ?
Will it be
the default temporary tablespace of SYSTEM user which runs this impdp job
or
the default temporary tablespace of the newly created MLDT_FN schema which is being imported into?
It will use the default temp tablespace of the schema.
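You can check the assigned temporary tablespaces beforehand, for example:

```sql
-- Compare the temp tablespace of the importing user and the target schema
SELECT username, temporary_tablespace
FROM   dba_users
WHERE  username IN ('SYSTEM', 'MLDT_FN');
```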
-
Hi,
While importing two tables from production to a lower environment, we are getting the below error:
Processing object type TABLE_EXPORT/TABLE/POST_INSTANCE/PROCACT_INSTANCE
ORA-39083: Object type PROCACT_INSTANCE failed to create with error:
ORA-01403: no data found
ORA-01403: no data found
Failing sql is:
BEGIN
SYS.DBMS_AQ_IMP_INTERNAL.IMPORT_QUEUE_TABLE('IFACE_DEFECT_QUEUE_TABLE',1,16793608,2,0,0,'Defect Interface Normal Queue', SYS.DBMS_AQ_IMP_INTERNAL.DBVER_10i, '00:00');COMMIT; END;
ORA-39083: Object type PROCACT_INSTANCE failed to create with error:
ORA-01403: no data found
ORA-01403: no data found
Failing sql is:
BEGIN
SYS.DBMS_AQ_IMP_INTERNAL.IMPORT_QUEUE_TABLE('IFACE_AEX_IN_QT',1,16801800,2,0,0,'', SYS.DBMS_AQ_IMP_INTERNAL.DBVER_10i, '00:00');COMMIT; END;
ORA-39083: Object type PROCACT_INSTANCE failed to create with error:
ORA-01403: no data found
ORA-01403: no data found
Failing sql is:
What do these errors mean?
Thanks
Prakash GR
Hi,
OS version -> Red Hat Enterprise Linux Server release 5.7
db version -> 11.2.0.3.4
import command used -> impdp directory=EXPORT_DIR2 dumpfile=tables.dmp remap_schema=odb:dummy table_exists_action=truncate logfile=import.log
Please let me know if you need any more details
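If the queue-table objects keep failing to import, one workaround sometimes used is to exclude the AQ queue tables from the Data Pump import and recreate them on the target with DBMS_AQADM (a sketch; the exclude syntax is standard impdp, but the payload type below is a placeholder — use the actual payload type from the source database):

```sql
-- Exclude the queue tables from the import, e.g.:
--   impdp ... exclude=TABLE:"IN ('IFACE_DEFECT_QUEUE_TABLE','IFACE_AEX_IN_QT')"
-- then recreate each one on the target:
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'DUMMY.IFACE_DEFECT_QUEUE_TABLE',
    queue_payload_type => 'DUMMY.DEFECT_MSG_TYPE');  -- placeholder payload type
END;
/
```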
Thanks
Prakash GR -
Length error occurred during the IMPORT statement.
I have a problem in a Z program. It works fine in 4.6B, but in ECC 5.0 it gives a dump saying:
Error analysis
An exception occurred. This exception will be dealt with in more detail
below. The exception, assigned to the class 'CX_SY_IMPORT_MISMATCH_ERROR', was
not caught, which led to a runtime error. The reason for this exception is:
The system found when importing that the target object was longer or
shorter than the object to be imported.
Hi Madhu,
I suggest posting this in the logistics forum for a better answer.
Software Logistics
Regards,
Debasis. -
Error Occurred During the import
Hi friends,
When I tried to import the SQL file of my already-built APEX application into my workspace, at the time of installation it returned an error like:
Error ERR-1029 Unable to store session info. session=1798024046224326 item=40006855470898
ORA-02091: transaction rolled back ORA-02291: integrity constraint (APEX_040100.WWV_FLOW_PAGE_DA_A_AR_FK) violated - parent key not found
OK. Suppose I try to import that same application via the back end through SQL Developer; then too I face the same error:
Error starting at line 231,323 in command:
commit
Error report:
SQL Error: ORA-02091: transaction rolled back
ORA-02291: integrity constraint (APEX_040100.WWV_FLOW_PAGE_DA_A_AR_FK) violated - parent key not found
02091. 00000 - "transaction rolled back"
*Cause: Also see error 2092. If the transaction is aborted at a remote
site then you will only see 2091; if aborted at host then you will
see 2092 and 2091.
*Action: Add rollback segment and retry the transaction.
Rollback
I don't know the reason behind this problem or what the problem could be. Because of it, I couldn't import the application into my workspace successfully.
Brgds,
Mini
Hi Mini,
Sorry you've hit this also. As described in Jitendra's linked thread, and in the threads linked from that one, this is caused by an inconsistent metadata state that can occur in dynamic actions in any version prior to 4.1.1. This was tracked as Oracle bug 1344144 and fixed in 4.1.1, such that the inconsistent metadata state can no longer occur and existing inconsistencies are corrected on upgrading or importing to 4.1.1.
Were you able to resolve this as outlined in the linked thread, identifying the problematic DAs and resetting the affected element? Also, are you able to import the application on apex.oracle.com (4.1.1) successfully?
Regards,
Anthony. -
IMP-00017 error during the import
IMP-00017: following statement failed with ORACLE error 2304:
"CREATE TYPE "T_CARTON_NBRS" TIMESTAMP '2009-12-16:12:04:50' OID '7296549639"
"DC2076E0430A6488C02076' "
" is table of varchar2(20);"
IMP-00003: ORACLE error 2304 encountered
ORA-02304: invalid object identifier literal
IMP-00017: following statement failed with ORACLE error 2304:
"CREATE TYPE "T_CASE_NBRS" TIMESTAMP '2009-12-16:12:04:50' OID '7296549639D8"
"2076E0430A6488C02076' "
" is table of varchar2(20);"
IMP-00003: ORACLE error 2304 encountered
ORA-02304: invalid object identifier literal
IMP-00017: following statement failed with ORACLE error 2304:
Please let me know about this error.
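ORA-02304 on import usually means the object types already exist in the target database with different object identifiers (OIDs). A sketch of one commonly used workaround (the file, user, and password names here are placeholders):

```
imp system/password file=export.dmp fromuser=srcuser touser=tgtuser \
    toid_novalidate=(T_CARTON_NBRS,T_CASE_NBRS)
```

TOID_NOVALIDATE tells imp not to validate the type OIDs, so the existing target types are reused; alternatively, drop the types in the target schema first and let the import recreate them.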
Thanks in advance.
damorgan, if you have any power to influence how the Oracle Forum works, I would buy you a six-pack ;-) I totally, 300% agree with you.
It would be really helpful if, next to SUBJECT and MESSAGE, there were two text boxes for OPERATING SYSTEM and ORACLE VERSION (these could even be drop-down lists).
thanks a lot
jiri -
Short dumps in every transaction during the Support pack import
I am in the middle of applying support pack SP12 on NetWeaver 7.0. In SPAM, during the import, I received short dumps, and now I get a short dump for every transaction I execute. I am unable to get back into SPAM to continue the import; all I get is short dumps. This is our new training system that we are building. I had the same issue before and got help from SAP, where they suggested a tp command with lots of options to run at the OS level. Since I moved companies, I am not able to access that same ticket. Does anyone know the OS-level tp command to continue the SPAM import phase?
Please and thanks
Kumar
Hi,
SAP does not recommend running support packs at the OS level. Most of the time when you are running SAP_BASIS you will receive ABAP dumps; then the option is to run the SP using the tp command, and you have to analyze many things before running this command.
<removed_by_moderator>
Please read the Rules of Engagement
Edited by: Juan Reyes on Jul 28, 2008 10:12 AM