ODI master repo creation log (via JohnGoodwin's blog)
Hi,
ODI 10.1.3.5.0 has been installed successfully.
I read JohnGoodwin's blog, but I can't create a master repo with ODI 10.1.3.5.0 on Windows 2003 Server and MS SQL Server 2005. The connection test is OK.
Please tell me where I can read the log of the master repo creation.
Hi,
I lost a week but resolved my trouble.
My config is:
Win 2003 EE SP1
MS SQL Server 2000 SP4
Oracle (Hyperion) EPM System 11.1.1.1.0
ODI 10.1.3.5.0
My trouble:
Couldn't create the master repo.
My actions to resolve it:
0. Install JRE 6 update XX (latest version) from here
1. Install ODI (as described here)
2. Replace the JDBC drivers for SQL Server with these
2.1. To use sqljdbc 2.0 with JRE 6, delete sqljdbc.jar and rename sqljdbc4.jar to sqljdbc.jar
3. Delete all files from ODI_Home\jre\1.4.2\
4. Copy all files from :\Program Files\Java\jre6\ to ODI_Home\jre\1.4.2\
5. Run ODI to create the master repo
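Steps 2-4 above can be sketched as a small script. This is only an illustration of the file operations involved; the helper names `install_sqljdbc` and `swap_jre` and the exact paths are assumptions, not part of ODI.

```python
import shutil
from pathlib import Path

def install_sqljdbc(driver_dir: Path, sqljdbc4: Path) -> None:
    """Step 2.1: delete the old sqljdbc.jar and put sqljdbc4.jar in its place under the old name."""
    old = driver_dir / "sqljdbc.jar"
    if old.exists():
        old.unlink()                      # delete the old JDBC driver
    shutil.copy(sqljdbc4, old)            # sqljdbc4.jar takes over the sqljdbc.jar name

def swap_jre(odi_jre_dir: Path, jre6_dir: Path) -> None:
    """Steps 3-4: empty ODI_Home\\jre\\1.4.2 and copy the JRE 6 files into it."""
    if odi_jre_dir.exists():
        shutil.rmtree(odi_jre_dir)        # step 3: delete all old JRE files
    shutil.copytree(jre6_dir, odi_jre_dir)  # step 4: copy JRE 6 in their place
```

After the swap, ODI picks up the newer JRE from its usual `jre\1.4.2` location without any configuration change.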
Similar Messages
-
Getting Error in Master Repo creation in ODI 10.1.3.5
Hi All,
I am creating a master repo with MS SQL Server 2000 using two approaches, but I am getting errors.
First approach -
I am using the driver com.microsoft.sqlserver.jdbc.SQLServerDriver and
the URL jdbc:sqlserver://localhost:1433;selectMethod=cursor;database=Master_ODI;integratedSecurity=false
Master repo creation starts, but partway through I get the error "connection closed" and sometimes "Not delete permission for user on SNP_LICENSE."
Second approach -
Using the driver sun.jdbc.odbc.JdbcOdbcDriver
and the URL jdbc:odbc:<DSN name with master database in SQL Server>
With this approach I am able to create the master repo, but in Topology I am not able to see the master repo in the Repository window.
Please help me out if I am doing something wrong or missed some steps.
Regards,
Vibhav
Just follow the steps below to create a master repository in MS SQL Server; you can find these steps in the ODI installation folder.
Creating Repository Storage Spaces
Create a database db_snpm to host the master repository and a database db_snpw to host the work repository. Create two logins, snpm and snpw, that use these databases by default.
Use Enterprise Manager to create the two databases db_snpm and db_snpw (allow about 40 MB for data and 20 MB for log for each of them).
Use Query Analyzer or I-SQL to launch the following commands:
CREATE LOGIN <mylogin>
WITH PASSWORD = '<mypass>',
DEFAULT_DATABASE = <defaultbase>,
DEFAULT_LANGUAGE = us_english;
USE <defaultbase>;
CREATE USER dbo FOR LOGIN <mylogin>;
GO
Where:
<mylogin> corresponds to snpm or snpw
<mypass> corresponds to a password for these logins
<defaultbase> corresponds to db_snpm and db_snpw respectively
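Since the script above has to be run once per repository with the placeholders filled in, the substitution can be sketched as a tiny helper; `render_login_sql` is a hypothetical name for illustration, not part of ODI or SQL Server.

```python
def render_login_sql(login: str, password: str, default_db: str) -> str:
    """Fill in <mylogin>, <mypass> and <defaultbase> in the login-creation script above."""
    return (
        f"CREATE LOGIN {login}\n"
        f"WITH PASSWORD = '{password}',\n"
        f"DEFAULT_DATABASE = {default_db},\n"
        f"DEFAULT_LANGUAGE = us_english;\n"
        f"USE {default_db};\n"
        f"CREATE USER dbo FOR LOGIN {login};\n"
        f"GO"
    )

# One call per repository: snpm/db_snpm for the master repo, snpw/db_snpw for the work repo.
master_sql = render_login_sql("snpm", "snpm_pass", "db_snpm")
work_sql = render_login_sql("snpw", "snpw_pass", "db_snpw")
```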
Creating the Master Repository
Creating the master repository consists of creating the tables and automatically importing the definitions for the different technologies.
To create the master repository:
In the Start Menu, select Programs > Oracle Data Integrator > Repository Management > Master Repository Creation, or launch bin/repcreate.bat or bin/repcreate.sh.
Complete the fields:
Driver: the driver used to access the technology that will host the repository. For more information, refer to the section JDBC URL Sample.
URL: the complete path of the data server that will host the repository. For more information, refer to the section JDBC URL Sample.
User: the user ID / login of the owner of the tables (previously created under the name snpm).
Password: this user's password.
ID: a specific ID for the new repository, rather than the default 0. This will affect imports and exports between repositories.
Technologies: from the list, select the technology your repository will be based on.
Language: select the language of your master repository.
Validate with OK.
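Before validating, it can help to sanity-check that the Driver and URL fields match each other. The sketch below is illustrative only: the `KNOWN_PAIRS` table and `url_matches_driver` helper are assumptions built from the MS SQL Server example earlier on this page, not something ODI provides.

```python
import re

# Driver class -> expected JDBC URL shape, based on the examples seen in this thread.
KNOWN_PAIRS = {
    "com.microsoft.sqlserver.jdbc.SQLServerDriver": r"^jdbc:sqlserver://[^:/;]+(:\d+)?(;.*)?$",
    "oracle.jdbc.OracleDriver": r"^jdbc:oracle:thin:@.+$",
}

def url_matches_driver(driver: str, url: str) -> bool:
    """Return True when the JDBC URL has the shape expected for the given driver class."""
    pattern = KNOWN_PAIRS.get(driver)
    return bool(pattern and re.match(pattern, url))
```

A mismatch here (for example an ODBC-style URL with the SQL Server JDBC driver class) is a common cause of failed repository creation.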
Creating the dictionary begins. You can follow the procedure on your console. To test your master repository, refer to the section Connecting to the master repository. -
Master repo,Work Repo and ODI installation
Hi,
I have the following doubts for the Windows platform.
I am integrating two application systems, say A (source) and B (target); their data is located on different Oracle DB servers.
1) Should the ODI server installation be on the source server, the target server, or any other machine?
2) Where should I create the master and work repositories, on the source or the target server?
3) If my target is a remote host, where can I install and run agents?
4) While creating physical schemas, we can select a different work schema; does this work schema act like a staging area? While designing interfaces, should I select this work schema as the staging area? What is the benefit of having this work schema different from my source/target schema?
5) I read that ODI creates temporary tables each time it executes ODI objects, and that these tables are junk data; should I drop all these temp tables in the work schema, either before or after execution of the interface?
Please clarify,
Thanks.
MNK
Hi,
Find my answers as below,
1) Should the ODI server installation be on the source server, the target server, or any other machine? It is always recommended to install ODI on your TARGET server for good performance.
2) Where should I create the master and work repositories, on the source or the target server? On the TARGET server, and make sure you have a dedicated schema for the work and master repos.
3) If my target is a remote host, where can I install and run agents? Again on the TARGET host; you can install only the ODI run-time agent on separate servers. Have a look at the ODI installation guide.
4) While creating physical schemas, we can select a different work schema; does this work schema act like a staging area? Yes, ODI will use the work schema to create temporary tables ($ tables), and it will act like a staging area.
While designing interfaces, should I select this work schema as the staging area? There is no need to select any work schema as such while designing an interface; you only need to select the respective LOGICAL schema, which implicitly creates the $ tables in the work schema you selected in the PHYSICAL schema.
What is the benefit of having this work schema different from my source/target schema? You will not need a dedicated "staging area" to consolidate your data from single or multiple sources.
5) Should I drop all these temp tables in the work schema? No need. ODI takes care of DROPPING and CREATING the $ tables on the FLY.
For a simple data integration, two tables are created at runtime, C$ and I$, which in turn are dropped after loading into the TARGET table.
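The C$/I$ flow just described can be illustrated with a small naming sketch. The exact table names ODI generates depend on the knowledge module, so the `staging_table_names` helper and the `C$_`/`I$_` prefixes here are assumptions for illustration only.

```python
def staging_table_names(target_table: str) -> dict:
    """Illustrate the work-schema ($) table names an interface might create
    and later drop while loading a given target table."""
    return {
        "loading":     f"C$_{target_table}",   # per-source extract staged in the work schema
        "integration": f"I$_{target_table}",   # consolidated flow table before the final load
    }
```

Seeing such tables linger in the work schema usually just means a session failed mid-run; a successful run drops them itself.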
Makes sense?
P.S: Experts comments are welcome.
Thanks,
Guru -
Change master repo and work repo connection password
Hi,
My Oracle DB password expired and I changed it, but now my ODI is not working.
The Oracle user (system) and the ODI connection user (system) with the same password were used when creating the work and master repos.
How can I change the master and work repo passwords? In the ODI login screen, connecting to the repo throws a password failure error.
Please help.
Thanks,
Arun
At the risk of repeating myself again: update the password ODI uses to connect to the repository, in the connection profile of the ODI Studio login, with the new password you created via SQL*Plus. To make it even easier, take a look at the following:
When you initially try to connect to the repository you get this screen
Click the Pencil icon (Edit) to bring up the connection profile details screen
In the section for Database Connection (Master Repository) change the password to your new repository password.
If the work repository password has also been changed at this point you will have to select the Master Repository Only option and then Test the connection which should now work.
Click OK and then OK on the login screen.
*** Do this if Work Rep password has also changed ***
Once successfully logged in goto the Repositories tab in the Topology Manager and update the Work Repository connection details with the correct password.
Log out of ODI and edit the connection profile to re-enable the association with the chosen work repository, then log in again.
Pretty straightforward, all in all. -
Vendor Master Change Log Details
Hello All,
I changed the vendor master data last May-09, but the changed master data is wrong.
So I'd like to reset the vendor master data to its original values.
Could you please let me know how to check the vendor master change logs (i.e. with any t-codes or tables), and how to identify the original values (i.e. before the change)?
Best regards,
Kesav
You can see this in transaction XK02 / XK03 - Environment menu - Field changes.
-
Hello,
I am facing a problem with the upgrade of the master repository from ODI 10.1.3.5.0 to 10.1.3.6.1 using odi_patch_10.1.3.6.1 (p9377717_101360_Generic).
I followed the steps as described in the upgrade procedure (remove the content of the oracledi/lib/scripts/ sub-directory of your Oracle Data Integrator installation directory;
copy the content of the oracledi sub-directory of the temporary directory to your Oracle Data Integrator installation directory - the temporary directory content should overwrite the Oracle Data Integrator installation directory content).
When running the upgrade script (./mupgrade.sh) there are NO Oracle technologies appearing in the wizard...
only Hypersonic SQL and Informix are left.
The same happens on Windows XP and Linux... I just downloaded and unpacked odi_patch_10.1.3.6.1 (p9377717_101360_Generic.zip) from support.oracle.com.
How can I check? Here is the structure of the patch:
bin
demo
doc
drivers
impexp
lib
tools
./bin:
startcmd.bat
./demo:
xml
./demo/xml:
personal.xsd
./doc:
index_km.htm
km
webhelp
./doc/km:
odiafm_93110_readme.pdf
odiap_93110_readme.pdf
odiess_readme.pdf
odiess_users.pdf
odigs_sapabapbw.pdf
odigs_sapabap.pdf
odi_km_ref_guide v1.3.pdf
./doc/webhelp:
en
./doc/webhelp/en:
index.hhc
index.hhk
printable
ref_tools
release_snps.htm
setup
usermanual
whgdata
whxdata
./doc/webhelp/en/printable:
snps_ref_tools.pdf
snps_setup.pdf
snps_users.pdf
./doc/webhelp/en/ref_tools:
odiftpget.htm
odiftpput.htm
odiscpget.htm
odiscpput.htm
odisftpget.htm
odisftpput.htm
snpsfiledelete.html
./doc/webhelp/en/setup:
setup.htm
sunopsis.log
./doc/webhelp/en/usermanual:
technos
./doc/webhelp/en/usermanual/technos:
how_to.htm
jms
jms_xml
./doc/webhelp/en/usermanual/technos/jms:
creating_a_jms_data_server.htm
creating_a_physical_schema_for_jms.htm
defining_a_jms_model.htm
choosing_the_right_kms_for_jms.htm
jms_standard_properties.htm
using_jms_properties.htm
./doc/webhelp/en/usermanual/technos/jms_xml:
creating_a_jms_xml_data_server.htm
creating_and_reverse-engineering_a_jms_xml_model.htm
creating_a_physical_schema_for_jms_xml.htm
choosing_the_right_kms_for_jms_xml.htm
./doc/webhelp/en/whgdata:
whlstfl0.htm
whlstfl11.htm
whlstfl16.htm
whlstfl18.htm
whlstfl20.htm
whlstfl21.htm
whlstfl22.htm
whlstfl23.htm
whlstfl24.htm
whlstfl25.htm
whlstfl26.htm
whlstfl3.htm
whlstfl4.htm
whlstfl7.htm
whlstfl8.htm
whlstf0.htm
whlstf1.htm
whlstf10.htm
whlstf11.htm
whlstf12.htm
whlstf13.htm
whlstf14.htm
whlstf15.htm
whlstf16.htm
whlstf17.htm
whlstf18.htm
whlstf19.htm
whlstf2.htm
whlstf20.htm
whlstf21.htm
whlstf22.htm
whlstf23.htm
whlstf24.htm
whlstf25.htm
whlstf26.htm
whlstf27.htm
whlstf28.htm
whlstf29.htm
whlstf3.htm
whlstf30.htm
whlstf31.htm
whlstf32.htm
whlstf33.htm
whlstf34.htm
whlstf35.htm
whlstf36.htm
whlstf37.htm
whlstf38.htm
whlstf39.htm
whlstf4.htm
whlstf40.htm
whlstf41.htm
whlstf42.htm
whlstf43.htm
whlstf44.htm
whlstf45.htm
whlstf46.htm
whlstf47.htm
whlstf48.htm
whlstf49.htm
whlstf5.htm
whlstf50.htm
whlstf51.htm
whlstf52.htm
whlstf53.htm
whlstf54.htm
whlstf55.htm
whlstf56.htm
whlstf57.htm
whlstf58.htm
whlstf59.htm
whlstf6.htm
whlstf60.htm
whlstf61.htm
whlstf62.htm
whlstf63.htm
whlstf64.htm
whlstf65.htm
whlstf66.htm
whlstf67.htm
whlstf68.htm
whlstf69.htm
whlstf7.htm
whlstf70.htm
whlstf71.htm
whlstf72.htm
whlstf8.htm
whlstf9.htm
whlsti0.htm
whlsti1.htm
whlsti2.htm
whlstt0.htm
whlstt1.htm
whlstt10.htm
whlstt100.htm
whlstt101.htm
whlstt102.htm
whlstt103.htm
whlstt104.htm
whlstt11.htm
whlstt12.htm
whlstt13.htm
whlstt14.htm
whlstt15.htm
whlstt16.htm
whlstt17.htm
whlstt18.htm
whlstt19.htm
whlstt2.htm
whlstt20.htm
whlstt21.htm
whlstt22.htm
whlstt23.htm
whlstt24.htm
whlstt25.htm
whlstt26.htm
whlstt27.htm
whlstt28.htm
whlstt29.htm
whlstt3.htm
whlstt30.htm
whlstt31.htm
whlstt32.htm
whlstt33.htm
whlstt34.htm
whlstt35.htm
whlstt36.htm
whlstt37.htm
whlstt38.htm
whlstt39.htm
whlstt4.htm
whlstt40.htm
whlstt41.htm
whlstt42.htm
whlstt43.htm
whlstt44.htm
whlstt45.htm
whlstt46.htm
whlstt47.htm
whlstt48.htm
whlstt49.htm
whlstt5.htm
whlstt50.htm
whlstt51.htm
whlstt52.htm
whlstt53.htm
whlstt54.htm
whlstt55.htm
whlstt56.htm
whlstt57.htm
whlstt58.htm
whlstt59.htm
whlstt6.htm
whlstt60.htm
whlstt61.htm
whlstt62.htm
whlstt63.htm
whlstt64.htm
whlstt65.htm
whlstt66.htm
whlstt67.htm
whlstt68.htm
whlstt69.htm
whlstt7.htm
whlstt70.htm
whlstt71.htm
whlstt72.htm
whlstt73.htm
whlstt74.htm
whlstt75.htm
whlstt76.htm
whlstt77.htm
whlstt78.htm
whlstt79.htm
whlstt8.htm
whlstt80.htm
whlstt81.htm
whlstt82.htm
whlstt83.htm
whlstt84.htm
whlstt85.htm
whlstt86.htm
whlstt87.htm
whlstt88.htm
whlstt89.htm
whlstt9.htm
whlstt90.htm
whlstt91.htm
whlstt92.htm
whlstt93.htm
whlstt94.htm
whlstt95.htm
whlstt96.htm
whlstt97.htm
whlstt98.htm
whlstt99.htm
./doc/webhelp/en/whxdata:
whftdata0.xml
whftdata1.xml
whftdata2.xml
whftdata3.xml
whfts.xml
whfwdata0.xml
whfwdata1.xml
whfwdata10.xml
whfwdata11.xml
whfwdata12.xml
whfwdata13.xml
whfwdata14.xml
whfwdata15.xml
whfwdata16.xml
whfwdata17.xml
whfwdata18.xml
whfwdata19.xml
whfwdata2.xml
whfwdata20.xml
whfwdata21.xml
whfwdata22.xml
whfwdata23.xml
whfwdata24.xml
whfwdata25.xml
whfwdata26.xml
whfwdata27.xml
whfwdata28.xml
whfwdata29.xml
whfwdata3.xml
whfwdata30.xml
whfwdata31.xml
whfwdata32.xml
whfwdata4.xml
whfwdata5.xml
whfwdata6.xml
whfwdata7.xml
whfwdata8.xml
whfwdata9.xml
whidata0.xml
whidata1.xml
whidata2.xml
whidata3.xml
whidata4.xml
whidata5.xml
whidata6.xml
whidx.xml
whtdata0.xml
whtdata7.xml
./drivers:
ess_es_server.jar
ess_japi.jar
HFMDriver.dll
HFMDriver64.dll
odihapp_common.jar
odihapp_essbase.jar
odi_hfm.jar
odi-sap.jar
snpsfile.jar
snpsxmlo.jar
./impexp:
GAC_Hypersonic SQL Default.xml
GAC_Informix Default.xml
KM_CKM Teradata.xml
KM_IKM File to Teradata (TTU).xml
KM_IKM Oracle Slowly Changing Dimension.xml
KM_IKM SQL to Hyperion Essbase (DATA).xml
KM_IKM SQL to Teradata (TTU).xml
KM_IKM Teradata Control Append.xml
KM_JKM DB2 400 Simple (Journal).xml
KM_JKM Oracle to Oracle Consistent (OGG).xml
KM_LKM DB2_400 Journal to SQL .xml
KM_LKM File to Netezza (NZLOAD).xml
KM_LKM File to Oracle (SQLLDR).xml
KM_LKM File to Teradata (TTU).xml
KM_LKM MSSQL to Oracle (BCPSQLLDR).xml
KM_LKM SQL to Teradata (TTU).xml
KM_RKM MSSQL.xml
KM_RKM Oracle Olap (Jython).xml
KM_RKM Oracle.xml
KM_RKM Salesforce.com.xml
KM_RKM Teradata.xml
LANG_SQL.xml
PROF_DESIGNER.xml
PROF_METADATA ADMIN.xml
PROF_NG DESIGNER.xml
PROF_NG METADATA ADMIN.xml
PROF_NG REPOSITORY EXPLORER.xml
PROF_NG VERSION ADMIN.xml
PROF_OPERATOR.xml
PROF_REPOSITORY EXPLORER.xml
PROF_SECURITY ADMIN.xml
PROF_TOPOLOGY ADMIN.xml
PROF_VERSION ADMIN.xml
TECH_Attunity.xml
TECH_BTrieve.xml
TECH_DBase.xml
TECH_Derby.xml
TECH_File.xml
TECH_Generic SQL.xml
TECH_Hyperion Essbase.xml
TECH_Hyperion Financial Management.xml
TECH_Hyperion Planning.xml
TECH_Hypersonic SQL.xml
TECH_IBM DB2 UDB.xml
TECH_IBM DB2400.xml
TECH_Informix.xml
TECH_Ingres.xml
TECH_Interbase.xml
TECH_JMS Queue.xml
TECH_JMS Queue XML.xml
TECH_JMS Topic.xml
TECH_JMS Topic XML.xml
TECH_LDAP.xml
TECH_Microsoft Access.xml
TECH_Microsoft Excel.xml
TECH_Microsoft SQL Server.xml
TECH_MySQL.xml
TECH_Netezza.xml
TECH_Operating System.xml
TECH_Oracle BAM.xml
TECH_Oracle.xml
TECH_Paradox.xml
TECH_PostgreSQL.xml
TECH_Progress.xml
TECH_Salesforce.com.xml
TECH_SAP ABAP.xml
TECH_SAP Java Connector.xml
TECH_SAS.xml
TECH_Sunopsis Engine.xml
TECH_Sybase AS Anywhere.xml
TECH_Sybase AS Enterprise.xml
TECH_Sybase AS IQ.xml
TECH_Teradata.xml
TECH_Universe.xml
TECH_XML.xml
./lib:
scripts
snpshelp.zip
snpsws.zip
sunopsis.zip
./lib/scripts:
DERBY
HYPERSONIC_SQL
IBM_DB2_UDB
IBM_DB2_400
INFORMIX
MICROSOFT_SQL_SERVER
ORACLE
POSTGRESQL
SYBASE_AS_ANYWHERE
SYBASE_AS_ENTERPRISE
SYBASE_AS_IQ
xml
./lib/scripts/DERBY:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/DERBY/patches:
E_04.02.02.01_04.02.03.01.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/HYPERSONIC_SQL:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/HYPERSONIC_SQL/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/IBM_DB2_UDB:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/IBM_DB2_UDB/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/IBM_DB2_400:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/IBM_DB2_400/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/INFORMIX:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/INFORMIX/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/MICROSOFT_SQL_SERVER:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/MICROSOFT_SQL_SERVER/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/ORACLE:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/ORACLE/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/POSTGRESQL:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/POSTGRESQL/patches:
E_04.02.02.01_04.02.03.01.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/SYBASE_AS_ANYWHERE:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/SYBASE_AS_ANYWHERE/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/SYBASE_AS_ENTERPRISE:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/SYBASE_AS_ENTERPRISE/patches:
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/SYBASE_AS_IQ:
E_CREATE.xml
E_DROP.xml
M_CREATE.xml
M_DROP.xml
patches
W_CREATE.xml
W_DROP.xml
./lib/scripts/SYBASE_AS_IQ/patches:
E_CREATE.xml
E_DROP.xml
E_04.02.02.01_04.02.03.01.xml
E_300101.xml
E_300101_300102.xml
E_300102_300103.xml
E_300103_300104.xml
E_300104_310101.xml
E_310101_320301.xml
E_320301_400101.xml
E_400101_410101.xml
E_410101_410201.xml
E_410201_420101.xml
E_420101_420201.xml
M_CREATE.xml
M_DROP.xml
M_04.02.02.01_04.02.03.01.xml
M_300101.xml
M_300101_300102.xml
M_300102_300103.xml
M_300103_300104.xml
M_300104_300105.xml
M_300105_310101.xml
M_310101_310201.xml
M_310201_320301.xml
M_320301_400101.xml
M_400101_410101.xml
M_410101_410201.xml
M_410201_420101.xml
M_420101_420201.xml
W_CREATE.xml
W_DROP.xml
W_04.02.02.01_04.02.03.01.xml
W_300101.xml
W_300101_300102.xml
W_300102_300103.xml
W_300103_310101.xml
W_310101_310201.xml
W_310201_320301.xml
W_320301_400101.xml
W_400101_410101.xml
W_410101_410201.xml
W_410201_420101.xml
W_420101_420201.xml
./lib/scripts/xml:
CONN_SecurityConnection.xml
CONVDT_CONVDATATYPESLST.xml
DT_DATATYPESLST.xml
FIELD_FIELD_LST.xml
FLOOK_LOOKUP_LST.xml
LANG_SQL.xml
LOCREP_MASTERREPOSITORY.xml
OBJ_Column.xml
OBJ_DataServer.xml
OBJ_Datastore.xml
OBJ_Model.xml
OBJ_OBJ_SNPOPENTOOL_7800.xml
OBJ_OBJ_SNPSCENFOLDER_7700.xml
OBJ_Solution.xml
PROF_DESIGNER.xml
PROF_METADATAADMIN.xml
PROF_NGDESIGNER.xml
PROF_NGMETADATAADMIN.xml
PROF_NGREPOSITORYEXPLORER.xml
PROF_NGVERSIONADMIN.xml
PROF_OPERATOR.xml
PROF_REPOSITORYEXPLORER.xml
PROF_SECURITYADMIN.xml
PROF_TOPOLOGYADMIN.xml
PROF_VERSIONADMIN.xml
TECH_Attunity.xml
TECH_File.xml
TECH_HyperionEssbase.xml
TECH_HyperionFinancialManagement.xml
TECH_HypersonicSQL.xml
TECH_Informix.xml
TECH_SAPABAP.xml
TECH_SAPJavaConnector.xml
TECH_SAS.xml
./tools:
cdc_iseries
web_services
./tools/cdc_iseries:
SAVPGM0110
./tools/web_services:
odi-public-ws.aar -
Wrong connection to ODI Designer by JohnGoodwin's blog
Hi,
I'd like to set up the connection to the master repository and work repository (see the 1st screenshot here) for the ODI Designer login. All tests pass successfully, but when I try to log in with these settings I get the error "Designer can't connect to Execution repository."
My config:
ODI 10.1.3.5.0
MS SQL Srv 2005 SP2
Hyperion PMSystem 9.3.1
Where can I see a log file to resolve my problem?
Hi,
Execution is only for running production scenarios; basically it means you can only execute pre-defined and completed interfaces.
You will need to use development if you want to design and create models/interfaces etc...
You should be able to delete the work repository from the topology manager and recreate it, then set up the connection again when logging in to the designer.
Cheers
John
http://john-goodwin.blogspot.com/ -
Business Partner Creation ( Master Tenant with Customer Account)
Hello Experts
I am trying to create a master tenant with a customer account, but when I save the business partner there is no corresponding creation of the master tenant with a customer account in the company code. The system only reports that a business partner was created. I have checked the settings on the business partner customer and have the correct FI customer account assigned to the business partner; I have also checked the assignment of the reconciliation account to the BP, and again it is compatible with the customer reconciliation account in Financials. I also checked the synchronization data to see whether the synchronization object was activated; it was not, so I activated it, but I am still not able to create a master tenant with a customer account.
Please help me on this, Experts; I am quite stranded.
Regards
David Mavi
Hi David,
The following activities need to be done to create the customer simultaneously with the creation of the business partner.
Business Partner Number Range: IMG>Flexible Real Estate Management (RE-FX)>Business Partner>Relevant Settings for Business Partner in RE Context>Number Range>Business Partner Number Range
Define Grouping and number range: IMG>Flexible Real Estate Management (RE-FX)>Business Partner>Relevant Settings for Business Partner in RE Context>Number Range>Define Groupings and Assign Number Ranges
Also make sure that the customer account group is created with a number range which should be external. The number range for the business partner should be internal.
Master Data synchronization: IMG>Cross Application Components>Master Data Synchronization>Customer/Vendor Integration>Business Partner Settings>Settings for Customer Integration>Field Assignment for Customer Integration>Assign Keys>Define Number Assignment for Direction of BP to Customer
In these settings, select the same number range only if the number ranges for the customer account group and the BP group are the same.
If you have done all these things and the customer is still not getting created, there might be a problem with mandatory fields, i.e. some fields in the customer account group may be mandatory that are not copied from the BP. So make all the fields in the customer account group optional and try to create a new BP again.
Regards,
Deepak M -
How to restrict inbound delivery creation by incomplete log
Guru's
I have a requirement not to create an inbound delivery if the field "means of trans ID" in the delivery header is BLANK. I created an incompletion log for the field LIKP-TRAID (warning message enabled) for the delivery header in transaction OVA2. Now if I create an inbound delivery with "means of trans ID" BLANK, it only gives a warning message; it does not stop me from creating the inbound delivery.
Can you please let me know whether we can restrict the creation of inbound deliveries?
Thanks
Try a user exit: if the system does not find the means of trans ID, the system stops creating the inbound delivery. Check user exit V50Q0001 with your ABAPer; it may help you.
-
Database Engine won't start - master DB log file issue
I had a server patched and rebooted yesterday, and now the SQL services won't start because SQL Server says it can't find the master log file. I checked the path and the log file is present. Any suggestions?
2014-04-21 09:48:55.22 Server Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)
Jun 28 2012 08:36:30
Copyright (c) Microsoft Corporation
Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)
2014-04-21 09:48:55.22 Server (c) Microsoft Corporation.
2014-04-21 09:48:55.22 Server All rights reserved.
2014-04-21 09:48:55.22 Server Server process ID is 4008.
2014-04-21 09:48:55.22 Server System Manufacturer: 'HP', System Model: 'ProLiant DL380 G5'.
2014-04-21 09:48:55.22 Server Authentication mode is MIXED.
2014-04-21 09:48:55.22 Server Logging SQL Server messages in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG'.
2014-04-21 09:48:55.22 Server This instance of SQL Server last reported using a process ID of 4080 at 4/21/2014 9:47:41 AM (local) 4/21/2014 2:47:41 PM (UTC). This is an informational message only; no user action is required.
2014-04-21 09:48:55.22 Server Registry startup parameters:
-d C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\master.mdf
-e C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG
-l C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf -g512
2014-04-21 09:48:55.22 Server SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2014-04-21 09:48:55.22 Server Detected 4 CPUs. This is an informational message; no user action is required.
2014-04-21 09:48:55.33 Server Using dynamic lock allocation. Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node. This is an informational message only. No user action is required.
2014-04-21 09:48:55.35 Server Node configuration: node 0: CPU mask: 0x000000000000000f:0 Active CPU mask: 0x000000000000000f:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2014-04-21 09:48:55.38 spid7s Starting up database 'master'.
2014-04-21 09:48:55.39 spid7s Error: 17204, Severity: 16, State: 1.
2014-04-21 09:48:55.39 spid7s Error: FCB::Open failed: Could not open file C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf -g512 for file number 2. OS error: 2(The system cannot find the file specified.).
2014-04-21 09:48:55.39 spid7s Error: 5120, Severity: 16, State: 101.
2014-04-21 09:48:55.39 spid7s Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf -g512". Operating system error 2: "2(The system cannot find the file specified.)".

That did it. Not sure why that switch was there.
It might have been intended, but it was added incorrectly. -g512 is a legal switch. Then again, since you have 64-bit SQL Server, it is not likely that you will need it.
(The -g switch increases the area known as memtoleave, which is the virtual address space (VAS) not used for the buffer cache. On 32-bit SQL Server you may need to increase this area from the default of 256 MB. On 64-bit SQL Server the full physical memory
is available for VAS; on 32-bit it's only 2GB.)
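The root cause can be illustrated outside SQL Server: the -g512 switch was appended to the -l registry value instead of being stored as its own startup parameter, so the engine tried to open a file literally named `mastlog.ldf -g512`. A minimal Python sketch (the paths are illustrative, nothing is read from a real registry) that detects and splits such a fused trailing switch:

```python
# Minimal sketch: detect a startup switch fused onto a registry startup value.
# Paths are illustrative only; the real fix is done in Configuration Manager.
params = [
    r"-dC:\MSSQL\DATA\master.mdf",
    r"-eC:\MSSQL\Log\ERRORLOG",
    r"-lC:\MSSQL\DATA\mastlog.ldf -g512",  # broken: -g512 fused onto the -l value
]

def split_fused(entries):
    """Return a cleaned list: a lone trailing ' -xxx' switch becomes its own entry.

    Heuristic sketch only -- a path that itself contains ' -' would fool it.
    """
    cleaned = []
    for entry in entries:
        prefix, value = entry[:2], entry[2:]        # e.g. '-l' + path
        head, sep, tail = value.rpartition(" -")
        if sep and " " not in tail:                 # a single trailing switch
            cleaned.append(prefix + head)
            cleaned.append("-" + tail)
        else:
            cleaned.append(entry)
    return cleaned

fixed = split_fused(params)
print(fixed[-2:])  # the -l entry now ends at mastlog.ldf; -g512 stands alone
```

The actual repair is the same operation done by hand: edit the instance's startup parameters so -g512 is its own entry, or drop it entirely on 64-bit where it is rarely needed.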
Erland Sommarskog, SQL Server MVP, [email protected] -
Deleted cost centre master data log
Hello,
Does anyone know if there is a way/report to see which cost centre master data was deleted, i.e. something like a log file?
I found a report that can show change documents, but it does not include deleted cost centres.
Thanks in advance!
René

Hi,
Refer to SAP Note 1090861 (KS02: Change documents for time periods deleted/Create).
regards
Waman -
Bapi_material_savedata and material master change log
Hi,
The BAPI_MATERIAL_SAVEDATA is called to extend a part number to a storage location and change MRP4 Re-order point qty and Replenishment qty.
The material master (MM03) correctly shows that the part number was extended successfully and the Re-order point qty and Replenishment qty are reflected correctly in MRP4.
However, when I view the Change Documents (menu-Environment-Display Changes), only the extension to storage location is listed. There is no record for the quantities.
Is this as expected? Is there something that can be done to enable recording of the quantities as well?
Because it's a new part number in the storage location, I expect to see the usual as below, but there's none.
old value / new value
0 / 02
0 / 10
Many thanks,
Huntr

Hi,
The solution is to call the bapi twice: once to create the mrp view and once to change the values in the view.
The first call will generate a Create event in the document log and the second call will generate a field Change event.
Thanks,
Reyleene -
Cannot activate master data - logs fill
Hi All,
We recently had to do a full load of master data for the 0MAT_PLANT object. The MARC table in our source system contains 31 million records, so that's how many records are now in status M in our master data table.
We are now trying to activate the master data, and it keeps terminating because the database logs keep filling. We have tried the activation via Tools --> Apply Hierarchy/Change Run. We have also tried right-clicking on the 0MAT_PLANT InfoObject. Both methods start the activation, and everything looks OK in SM50. It gets through the deletion in XMAT_PLANT, then moves on to PMAT_PLANT. Then it fills the logs, sending messages to SM21.
Then the database rollback (DB2) occurs.
Our basis people have increased the log space for the table space partition from 45 GB to 60 GB. The logs still filled.
Are there any methods we can use to activate the master data for 0MAT_PLANT in smaller subsets? Or to make the activation commit during the process?
Any help would be greatly appreciated.
Thanks
Charla

Hi,
Try to activate the DataSource by using report 'RS_TRANSTRU_ACTIVATE_ALL':
1. Go to SE38 --> select 'RS_TRANSTRU_ACTIVATE_ALL' --> click Execute.
2. Give the InfoSource/DataSource --> click Execute.
3. Replicate the DataSource on the BI side.
4. Apply the attribute change run for that object:
select Tools --> 'Apply Hierarchy/Attribute Change Run' --> click 'InfoObject list' --> select the InfoObjects --> click Save.
Regards. -
Creation of error log on input data and stat report
I am doing a call transaction on <b>C202</b>. Before uploading the file, my client wants some validations on the input file, and he has asked me to create a log for all the validations; I have to produce an error log for all the input data.
An error log will record all errors occurring during upload. For each error the list should contain the data (line) in error and an error text in a subsequent column (subsequent to the data). The change number used to perform the upload will be stated in the header of the error list. A file containing the error log will get the same name as the input file, but with the ending err.xls.
Therefore no Batch-Input-Session is needed.
After execution of the batch input program, the following analysis regarding execution will be shown:
o Number of records in input file (including title, first line)
o Number of records successfully updated
o Number of records in error
Example:
Number of records in input file (incl. first line) 4
Number of records successfully updated: 3
Number of records in error: 0
How can I do this according to the client requirements? Can you help me with a statistical way to represent the errors? Please send me some sample code for the number of errors, the number of successful records, and the number of failed records.
Thanks
chandrasekhar

Hi Chandrasekhar,
Go through the following code:
report Z_CALLTRANS_VENDOR_01
no standard page heading line-size 255.
* Generated data section with specific formatting - DO NOT CHANGE ***
data: begin of it_lfa1 occurs 0,
KTOKK like lfa1-ktokk,
NAME1 like lfa1-name1,
SORTL like lfa1-sortl,
LAND1 like lfa1-land1,
end of it_lfa1.
* End generated data section ***
data : it_bdc like bdcdata occurs 0 with header line.
*DATA: IT_MESSAGES TYPE TABLE OF BDCMSGCOLL WITH HEADER LINE.
*DATA: LV_MESSAGE(255).
data : it_messages like bdcmsgcoll occurs 0 with header line.
data : V_message(255).
data : V_flag.
data : V_datum1 type sy-datum.
data : begin of it_mesg occurs 0,
message(100),
end of it_mesg.
*V_datum1 = sy-datum-1.
parameters : P_Sess like APQI-GROUPID.
start-of-selection.
perform Get_data.
*perform open_group.
loop at it_lfa1.
perform bdc_dynpro using 'SAPMF02K' '0100'.
perform bdc_field using 'BDC_CURSOR'
'RF02K-KTOKK'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'RF02K-KTOKK'
it_lfa1-KTOKK.
perform bdc_dynpro using 'SAPMF02K' '0110'.
perform bdc_field using 'BDC_CURSOR'
'LFA1-LAND1'.
perform bdc_field using 'BDC_OKCODE'
'=UPDA'.
perform bdc_field using 'LFA1-NAME1'
it_lfa1-name1.
perform bdc_field using 'LFA1-SORTL'
it_lfa1-sortl.
perform bdc_field using 'LFA1-LAND1'
it_lfa1-land1.
call transaction 'XK01' using it_bdc
mode 'N'
update 'S'
messages into it_messages.
if sy-subrc <> 0.
if V_flag <> 'X'.
perform open_group.
V_flag = 'X'.
endif.
perform bdc_transaction. "using 'XK01'.
endif.
perform format_messages.
refresh : it_bdc,it_messages.
endloop.
if V_flag = 'X'.
perform close_group.
endif.
*&---------------------------------------------------------------------*
*&      Form  Get_data
*&---------------------------------------------------------------------*
*       text
*      -->  p1        text
*      <--  p2        text
*----------------------------------------------------------------------*
FORM Get_data .
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
FILENAME = 'C:\srinu_vendor.txt'
FILETYPE = 'ASC'
TABLES
DATA_TAB = it_lfa1
EXCEPTIONS
CONVERSION_ERROR = 1
INVALID_TABLE_WIDTH = 2
INVALID_TYPE = 3
NO_BATCH = 4
UNKNOWN_ERROR = 5
GUI_REFUSE_FILETRANSFER = 6
OTHERS = 7.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDFORM. " Get_data
*&---------------------------------------------------------------------*
*&      Form  bdc_dynpro
*&---------------------------------------------------------------------*
*       text
*      -->P_0061   text
*      -->P_0062   text
*----------------------------------------------------------------------*
FORM BDC_DYNPRO USING PROGRAM DYNPRO.
CLEAR it_BDC.
it_BDC-PROGRAM = PROGRAM.
it_BDC-DYNPRO = DYNPRO.
it_BDC-DYNBEGIN = 'X'.
APPEND it_BDC.
ENDFORM.
*        Insert field                                                  *
FORM BDC_FIELD USING FNAM FVAL.
CLEAR it_BDC.
it_BDC-FNAM = FNAM.
it_BDC-FVAL = FVAL.
APPEND it_BDC.
ENDFORM.
*&---------------------------------------------------------------------*
*&      Form  format_messages
*&---------------------------------------------------------------------*
*       text
*      -->  p1        text
*      <--  p2        text
*----------------------------------------------------------------------*
FORM format_messages .
loop at it_messages.
CALL FUNCTION 'FORMAT_MESSAGE'
EXPORTING
ID = it_messages-MSGID
LANG = 'EN'
NO = it_messages-MSGNR
V1 = it_messages-MSGV1
V2 = it_messages-MSGV2
V3 = it_messages-MSGV3
V4 = it_messages-MSGV4
IMPORTING
MSG = V_message
EXCEPTIONS
NOT_FOUND = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
write : / V_message.
clear : V_message.
endloop.
ENDFORM. " format_messages
*&---------------------------------------------------------------------*
*&      Form  open_group
*&---------------------------------------------------------------------*
*       text
*      -->  p1        text
*      <--  p2        text
*----------------------------------------------------------------------*
FORM open_group .
CALL FUNCTION 'BDC_OPEN_GROUP'
EXPORTING
CLIENT = SY-MANDT
GROUP = P_Sess
HOLDDATE = V_datum1
KEEP = 'X'
USER = SY-UNAME.
IF SY-SUBRC = 0.
write : / 'Session created with name: ', P_Sess.
ENDIF.
ENDFORM. " open_group
*&---------------------------------------------------------------------*
*&      Form  close_group
*&---------------------------------------------------------------------*
*       text
*      -->  p1        text
*      <--  p2        text
*----------------------------------------------------------------------*
FORM close_group .
CALL FUNCTION 'BDC_CLOSE_GROUP'.
ENDFORM. " close_group
*&---------------------------------------------------------------------*
*&      Form  bdc_transaction
*&---------------------------------------------------------------------*
*       text
*      -->P_0132   text
*----------------------------------------------------------------------*
FORM bdc_transaction. "USING VALUE(P_0132).
CALL FUNCTION 'BDC_INSERT'
EXPORTING
TCODE = 'XK01'
POST_LOCAL = NOVBLOCAL
PRINTING = NOPRINT
SIMUBATCH = ' '
CTUPARAMS = ' '
TABLES
DYNPROTAB = it_bdc
EXCEPTIONS
INTERNAL_ERROR = 1
NOT_OPEN = 2
QUEUE_ERROR = 3
TCODE_INVALID = 4
PRINTING_INVALID = 5
POSTING_INVALID = 6
OTHERS = 7.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDFORM. " bdc_transaction
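The summary figures requested in the question (records read, records updated, records in error) can be sketched compactly outside ABAP. A small Python illustration, where the semicolon record layout and the validation rule are hypothetical stand-ins for the client's actual checks:

```python
# Illustrative sketch of the requested error log / summary, not the ABAP program:
# each input line is validated, failures are collected with an error text column,
# and the three summary counters from the spec are reported at the end.
def process(lines, validate):
    """lines includes a title row; returns (total, ok, failed) per the spec."""
    header, *records = lines
    failed = []                      # (original line, error text) pairs
    ok = 0
    for rec in records:
        problem = validate(rec)
        if problem:
            failed.append((rec, problem))
        else:
            ok += 1
    return len(lines), ok, failed

# Hypothetical validation: the material field (first column) must not be empty.
validate = lambda rec: "" if rec.split(";")[0] else "material number missing"

lines = ["MATNR;QTY", "M-01;10", "M-02;5", ";7"]
total, ok, failed = process(lines, validate)
print(f"Number of records in input file (incl. first line) {total}")
print(f"Number of records successfully updated: {ok}")
print(f"Number of records in error: {len(failed)}")
```

In the ABAP program above, the same counters would be incremented around CALL TRANSACTION and written out with WRITE, while the failed pairs go to the err.xls companion file.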
Regards
Sreeni
Does external table creation create a redo log?
Hi all,
I am on Oracle 10.1 on Solaris, but my question is generic.
I am creating an external table from a CSV text file. In the process of creating and reading (using) this external table, will any REDO information be generated?
Up to 8i, for loading CSV files, I would load the file with SQL*Loader into a staging table and then process the data. This load into the staging table creates redo log information (unless I use NOLOGGING, which I cannot use for some reason).
Since 9i and 10g, I can create an external table over the CSV file and use it directly. What is the internal working of an external table? Does it load the data into some temporary table internally (transparent to us), or does Oracle read the external source file as it needs it?
Important question is, in the whole affair of using a read-only external table, will Oracle create redo log information?
Please help.
RegardsI am creating an external table from a CSV text file. In the process of creating and reading (using) this external table any REDO information will be generated? <<The create statement will generate redo for the rdbms dictionary changes. If the nologging parameter is legal on a create statement for an external table it will have no effect since DDL is always logged. The insert select will generate redo for the inserts into the target table so again creating the external table as nologging would have no effect on the subsequent use of the table.
If you just select against the external table then no redo should be generated.
HTH -- Mark D Powell --