Hide confidential data tables from SYS, DBAs, and application owners
Hello All,
I have been developing an application that is highly confidential, for higher-level managers. I would like my tables and views, in fact all the data, to be accessible only from my client form, and not via SQL*Plus or any other tool. Is there any way of restricting all users, including SYS and the application owner, from the tables that store this confidential data, using any database tool?
Best Regards,
Prashantha
Hi,
You can see [Creating Secure Application Roles to Control Access to Applications|http://download.oracle.com/docs/cd/B28359_01/network.111/b28531/app_devs.htm#i1006262]
From [10 Keeping Your Oracle Database Secure|http://download-uk.oracle.com/docs/cd/B28359_01/network.111/b28531/guidelines.htm]:
Use secure application roles to protect roles that are enabled by application code.
Secure application roles allow you to define a set of conditions, within a PL/SQL package, that determine whether a user can log on to an application. Users do not need to use a password with secure application roles.
Another approach to protecting roles from being enabled or disabled in an application is the use of role passwords. This approach prevents a user from directly accessing the database in SQL (rather than the application) to enable the privileges associated with the role. However, Oracle recommends that you use secure application roles instead, to avoid having to manage another set of passwords.
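A minimal sketch of a secure application role, following the documentation above. All names here are hypothetical, and the enabling condition is just an example:

```sql
-- Role that can only be enabled through the app_sec.role_check package
CREATE ROLE hr_app_role IDENTIFIED USING app_sec.role_check;

-- The package must use invoker's rights (AUTHID CURRENT_USER)
CREATE OR REPLACE PACKAGE app_sec.role_check AUTHID CURRENT_USER AS
  PROCEDURE enable_role;
END role_check;
/
CREATE OR REPLACE PACKAGE BODY app_sec.role_check AS
  PROCEDURE enable_role IS
  BEGIN
    -- Example condition (an assumption): only enable the role for
    -- sessions coming from the application server's IP address
    IF SYS_CONTEXT('USERENV', 'IP_ADDRESS') = '10.0.0.5' THEN
      DBMS_SESSION.SET_ROLE('hr_app_role');
    END IF;
  END enable_role;
END role_check;
/
```

The application calls app_sec.role_check.enable_role after connecting; a user in SQL*Plus who does not satisfy the condition never gets the role's privileges. Note this does not hide data from SYS itself; restricting SYS and DBAs requires a separate product such as Oracle Database Vault.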
Regards,
Edited by: Walter Fernández on Jul 2, 2009 12:46 PM - Adding other url...
Similar Messages
-
Cannot insert a row into a table from Oracle DBA Studio
Hi,
I try to use the Oracle DBA Studio to create a table and put some test data in it. Here is what I found:
I start Oracle DBA Studio on the client machine by connecting to an Oracle server 8i (8.1.7). I create a simple table called Test with two columns: one is USER, VARCHAR2, 10; and the other is USER2, CHAR, 2. After that, I right click on the table name and select "Table Data Editor" from the popup menu. The table editor window shows up. I type in the value 'AA' and 'BB' in the grid of two columns. Then I click the Apply button. An error message box popped up:
ORA-00928 missing SELECT keyword.
I click Show SQL button. The SQL statement is as following:
INSERT INTO "TASYS"."TEST" (USER ,USER2 ) VALUES ('AA' ,'BB').
I copy the statement to SQL*Plus and run it. The same error is generated. But if I remove (USER, USER2) from the statement, it inserts a row without complaint. In other words, the database will not allow the insert if the individual columns are listed. I realize something must be wrong with the newly installed server and client, but I could not figure out what is happening. Hope someone can help.
Thanks.
I am not sure, but could it be that USER is a keyword or reserved word? Check Oracle's list of reserved words.
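USER is indeed a reserved word in Oracle (it is also a built-in function returning the current user name), which is what triggers the ORA-00928 here. A sketch of two possible fixes (the RENAME COLUMN variant assumes Oracle 9i or later):

```sql
-- Quoting the identifiers (in the exact case they were created) works:
INSERT INTO "TASYS"."TEST" ("USER", "USER2") VALUES ('AA', 'BB');

-- A more robust fix is to avoid the reserved word altogether (9i+):
-- ALTER TABLE "TASYS"."TEST" RENAME COLUMN "USER" TO username;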
-
Error in renaming the table from SYS user
Hi
I am in a schema named jc
and I have a table named tab1.
Now I log in as the SYS user
and issue the command:
rename jc.tab1 to tab2 ;
getting the following error
rename jc.tab1 to tab2
ERROR at line 1:
ORA-01765: specifying table's owner name is not allowed
Query is
Is it not possible to rename a table in another schema while logged in as the SYS user?
Is there any alternative method of doing so?
Thanks and Regards
JC
Sorry Guido,
I have not stolen anything from anywhere :)
They are all mine... I have created some normal users and some users with DBA privileges.
I use these schemas for testing purposes.
Also, if I give a command like the following from the SYS user:
Alter table jc.tab1 rename to tab2 ;
It will create tab2 in the jc schema only, and not in the SYS schema.
So there is no question of polluting the SYS schema.
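The behaviour described above can be summarized in a short sketch:

```sql
-- RENAME only works on objects in the current user's own schema:
RENAME tab1 TO tab2;        -- OK when connected as JC
-- RENAME jc.tab1 TO tab2;  -- ORA-01765 from any session, including SYS

-- ALTER TABLE accepts a schema qualifier, so SYS (or any user with the
-- ALTER ANY TABLE privilege) can rename JC's table; the renamed table
-- remains in the JC schema:
ALTER TABLE jc.tab1 RENAME TO tab2;
```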
Also, whatever I ask in this forum is related to my application;
I never post the entire program, which would be useless for others and time-consuming to understand.
I just simulate whatever I require in as simple a format as I can.
Someone who is an expert in this forum may find such queries rubbish, but in the learning process no question is rubbish, though there can be a rubbish answer to every intelligent question too... :)
Regards
JC -
Hi,
I have a table called Employe, and the columns are Emp_ID, EMP_NAME, SRC_SYS_CD, DOB.
I have a query like:
Select
COALESCE(MAX(CASE WHEN src_sys_cd='1' THEN Emp_id END), MAX(CASE WHEN src_sys_cd='2' THEN Emp_Id END))Emp_Id,
COALESCE(MAX(CASE WHEN src_sys_cd='1' THEN Emp_name END), MAX(CASE WHEN src_sys_cd='2' THEN Emp_Name END))Emp_name,
COALESCE(MAX(CASE WHEN src_sys_cd='1' THEN dob END), MAX(CASE WHEN src_sys_cd='2' THEN dob END)) dob
from Employe
group by dob;
I want to generalize the query: get the columns for the given table name from the SYS.ALL_COLUMNS table and pass them to the COALESCE() function. I tried with a cursor, but I did not get the appropriate results.
Is there any way to achieve this? Please help me out in this regard.
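One Oracle-side sketch (untested, and assuming the table is named EMPLOYE with SRC_SYS_CD as the discriminator column) is to generate the COALESCE list from the ALL_TAB_COLUMNS dictionary view with dynamic SQL:

```sql
DECLARE
  v_sql VARCHAR2(32767);
BEGIN
  -- Build one COALESCE(...) term per column, skipping the discriminator
  FOR c IN (SELECT column_name
              FROM all_tab_columns
             WHERE table_name = 'EMPLOYE'
               AND column_name <> 'SRC_SYS_CD'
             ORDER BY column_id)
  LOOP
    v_sql := v_sql
      || 'COALESCE(MAX(CASE WHEN src_sys_cd=''1'' THEN ' || c.column_name
      || ' END), MAX(CASE WHEN src_sys_cd=''2'' THEN ' || c.column_name
      || ' END)) ' || c.column_name || ', ';
  END LOOP;
  v_sql := 'SELECT ' || RTRIM(v_sql, ', ') || ' FROM employe GROUP BY dob';
  DBMS_OUTPUT.PUT_LINE(v_sql);  -- inspect, then EXECUTE IMMEDIATE or OPEN a ref cursor
END;
/
```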
Thanks,
Is this the kind of thing you're after?
Add a filter to the queries to get just a single table.
WITH allCols AS (
SELECT s.name as sName, o.name AS oName, c.name AS cName, column_id,
CASE WHEN st.name in ('float','bigint','tinyint','int','smallint','bit','datetime','money','date','datetime2','uniqueidentifier','sysname','geography','geometry') THEN st.name
WHEN st.name in ('numeric','real') THEN st.name + '('+CAST(c.scale AS VARCHAR)+','+CAST(c.precision AS VARCHAR)+')'
WHEN st.name in ('varbinary','varchar','binary','char','nchar','nvarchar') THEN st.name + '(' + CAST(ABS(c.max_length) AS VARCHAR) + ')'
ELSE st.name + ' unknown '
END + ' '+
CASE WHEN c.is_identity = 1 THEN 'IDENTITY ' ELSE '' END +
CASE WHEN c.is_nullable = 0 THEN 'NOT ' ELSE '' END + 'NULL' AS bText,
f.name AS fileGroupName
FROM sys.columns c
INNER JOIN sys.objects o
ON c.object_id = o.object_id
AND o.type = 'U'
INNER JOIN sys.systypes st
ON c.user_type_id = st.xusertype
INNER JOIN sys.schemas s
ON o.schema_id = s.schema_id
INNER JOIN sys.indexes i
ON o.object_id = i.object_id
AND i.index_id = (SELECT MIN(index_id) FROM sys.indexes WHERE object_ID = o.object_id)
INNER JOIN sys.filegroups f
ON i.data_space_id = f.data_space_id
), rCTE AS (
SELECT sName, oName, cName, column_id, CAST(cName + ' ' + bText AS VARCHAR(MAX)) as bText, CAST(cName AS VARCHAR(MAX)) AS colList, fileGroupName
FROM allCols
WHERE column_id = 1
UNION ALL
SELECT r.sName, r.oName, r.cName, c.column_id, CAST(r.bText +', ' + c.cName + ' ' +c.bText AS VARCHAR(MAX)), CAST(r.colList+ ', ' +c.cName AS VARCHAR(MAX)), c.fileGroupName
FROM allCols c
INNER JOIN rCTE r
ON c.oName = r.oName
AND c.column_id - 1 = r.column_id
), allIndx AS (
SELECT 'CREATE '+CASE WHEN is_unique = 1 THEN ' UNIQUE ' ELSE '' END+i.type_desc+' INDEX ['+i.name+'] ON ['+CAST(s.name COLLATE DATABASE_DEFAULT AS NVARCHAR )+'].['+o.name+'] (' as prefix,
CASE WHEN is_included_column = 0 THEN '['+c.name+'] '+CASE WHEN ic.is_descending_key = 1 THEN 'DESC' ELSE 'ASC' END END As cols,
CASE WHEN is_included_column = 1 THEN '['+c.name+']'END As incCols,
') WITH ('+
CASE WHEN is_padded = 0 THEN 'PAD_INDEX = OFF,' ELSE 'PAD_INDEX = ON,' END+
CASE WHEN ignore_dup_key = 0 THEN 'IGNORE_DUP_KEY = OFF,' ELSE 'IGNORE_DUP_KEY = ON,' END+
CASE WHEN allow_row_locks = 0 THEN 'ALLOW_ROW_LOCKS = OFF,' ELSE 'ALLOW_ROW_LOCKS = ON,' END+
CASE WHEN allow_page_locks = 0 THEN 'ALLOW_PAGE_LOCKS = OFF' ELSE 'ALLOW_PAGE_LOCKS = ON' END+
')' as suffix, index_column_id, key_ordinal, f.name as fileGroupName
FROM sys.indexes i
LEFT OUTER JOIN sys.index_columns ic
ON i.object_id = ic.object_id
AND i.index_id = ic.index_id
LEFT OUTER JOIN sys.columns c
ON ic.object_id = c.object_id
AND ic.column_id = c.column_id
INNER JOIN sys.objects o
ON i.object_id = o.object_id
AND o.type = 'U'
AND i.type <> 0
INNER JOIN sys.schemas s
ON o.schema_id = s.schema_id
INNER JOIN sys.filegroups f
ON i.data_space_id = f.data_space_id
), idxrCTE AS (
SELECT r.prefix, CAST(r.cols AS NVARCHAR(MAX)) AS cols, CAST(r.incCols AS NVARCHAR(MAX)) AS incCols, r.suffix, r.index_column_id, r.key_ordinal, fileGroupName
FROM allIndx r
WHERE index_column_id = 1
UNION ALL
SELECT o.prefix, COALESCE(r.cols,'') + COALESCE(', '+o.cols,''), COALESCE(r.incCols+', ','') + o.incCols, o.suffix, o.index_column_id, o.key_ordinal, o.fileGroupName
FROM allIndx o
INNER JOIN idxrCTE r
ON o.prefix = r.prefix
AND o.index_column_id - 1 = r.index_column_id
)
SELECT 'CREATE TABLE ['+sName+'].[' + oName + '] ('+bText+') ON [' + fileGroupName +']'
FROM rCTE r
WHERE column_id = (SELECT MAX(column_id) FROM rCTE WHERE r.oName = oName)
UNION ALL
SELECT prefix + cols + CASE WHEN incCols IS NOT NULL THEN ') INCLUDE ('+incCols ELSE '' END + suffix+' ON [' + fileGroupName +']'
FROM idxrCTE x
WHERE index_column_id = (SELECT MAX(index_column_id) FROM idxrCTE WHERE x.prefix = prefix) -
How to access repository data tables from the backend in TOAD
Hello ,
I just started working on ODI; I was following the Oracle Getting Started guide, working in the demo environment. I came across source tables like SRC_Customer and src_sales, and target tables like trg_customer and trg_sales.
I want to access them from the backend, from TOAD or from SQL*Plus.
Can anybody guide me, as I am not able to access them? That will help me understand the scenarios well.
Thanks
Ananth
Hi,
It's true, this is not an Oracle database but an HSQL (HyperSonic SQL) database, in this case file-based (not in-memory), loaded from the path oracledi\demo\hsql\demo_src.script (and the corresponding target script).
So it's possible to access the data using a query tool named Squirrel-Sql (a generic query tool over a JDBC connection): http://squirrel-sql.sourceforge.net/
In Squirrel-Sql use: HSQL Server
with the URL: jdbc:hsqldb:hsql://<server>
User : sa
no password
Stephane -
Hi,
I have a SharePoint list like below:
ID name AdminEmail Useremail URL DueDate UploadStatus
1 ppp [email protected] [email protected] url some date uploaded
2 yyy [email protected] [email protected] url somedate empty
3 xxx [email protected] [email protected] url somedate empty
4 jjj [email protected] [email protected] url somedate empty
AdminEmail and UserEmail are lookup columns.
I query the list using a CAML query.
Inside a foreach I am checking two conditions, like below.
One is that the upload status is not empty:
I need to send mail to the admin user. This part I have done by adding all such list items to a DataTable and applying a group-by, which is working fine.
In the second condition I am checking the difference between the DueDate and the current date:
if the value is =1 or -1
if the value is i
Thanks.
I am getting the table like below:
ID name AdminEmail Useremail URL DueDate UploadStatus
2 yyy [email protected] [email protected] url somedate empty
3 xxx [email protected] [email protected] url somedate empty
4 jjj [email protected] [email protected] url somedate empty
My issue is here: how can I split the table rows dynamically, so that rows with the same AdminEmail and UserEmail values form one set, and the distinct rows form another set?
Set 1 (rows where the emails are the same):
3 xxx [email protected] [email protected] url somedate empty
4 jjj [email protected] [email protected] url somedate empty
set 2
2 yyy [email protected] [email protected] url somedate empty
How can I get this separation? Can anyone tell me? I need to send mail only one time to each user (admin and user), to avoid duplicate mails.
Srinivas
In your case it is better to use two data tables to store the data:
DataTable dt = list.Items.GetDataTable();
foreach (DataRow row in dt.Rows) -
Data Extraction from FI,CO and SD
Hi
Can you please update me:
What type of extraction method (LIS, LO-****) do I need to use
to extract data from
FI
CO and
SD
and steps involved in them
Thanks
FI extraction
http://help.sap.com/saphelp_bw32/helpdata/en/af/16533bbb15b762e10000000a114084/frameset.htm
Extractions in BI
https://www.sdn.sap.com/irj/sdn/wiki
New to Materials Management / Warehouse Management?
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1b439590-0201-0010-ea8e-cba686f21f06
CO-PA
http://help.sap.com/saphelp_46c/helpdata/en/7a/4c37ef4a0111d1894c0000e829fbbd/content.htm
CO-PC
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/fb07ab90-0201-0010-c489-d527d39cc0c6
Inventory
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f83be790-0201-0010-4fb0-98bd7c01e328
Extractions in BI
https://www.sdn.sap.com/irj/sdn/wiki
LO Extraction:
/people/sap.user72/blog/2004/12/16/logistic-cockpit-delta-mechanism--episode-one-v3-update-the-145serializer146
LOGISTIC COCKPIT DELTA MECHANISM - Episode three: the new update methods
LOGISTIC COCKPIT - WHEN YOU NEED MORE - First option: enhance it !
LOGISTIC COCKPIT DELTA MECHANISM - Episode two: V3 Update, when some problems can occur...
LOGISTIC COCKPIT: a new deal overshadowed by the old-fashioned LIS ?
Master data enhancement
CMOD Enhancements
Enhancing master data extractor
Lo Enhancement
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/b0af763b-066e-2910-a784-dc6731660f46
UD connect
http://help.sap.com/saphelp_nw04s/helpdata/en/78/ef1441a509064abee6ffd6f38278fd/frameset.htm
http://help.sap.com/saphelp_nw04s/helpdata/en/43/e35b3315bb2d57e10000000a422035/frameset.htm
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/c0d70315-958f-2a10-bc88-eb91e36f4037
HR Business content
BW HR Business content Data Source
HR Business Content Reports
Standard reports for HR-PY, PM, QM
Extraction using DB connect
http://help.sap.com/saphelp_nw70/helpdata/EN/58/54f9c1562d104c9465dabd816f3f24/frameset.htm
http://help.sap.com/saphelp_nw04/helpdata/en/c6/0ffb40af87ee6fe10000000a1550b0/frameset.htm
Extract data from oracle DB to SAP BI 7.0
Generic extraction
http://www.erpgenie.com/sapgenie/docs/MySAP%20BW%20Cookbook%20Vol%202.pdf
Assign points if useful.
Edited by: Aravindan Pagadala on Oct 6, 2008 11:40 PM -
Data coming from Three Applications and needs to post into SAP
Hi all,
I have a scenario like this:
XI needs to take data from 3 applications, do some validations like data comparison of common records, and post the common data among the three to SAP.
Application1 will send text file and Application2 will send CSV file and Application3 will send xml file
Ex:
Application1:
Emp No
Emp Name
Sal
Location
Application2:
Emp No
Emp Name
Designation
Application3:
Emp No
Emp name
Designation
Location
Now the Target data should be
Emp No
Emp Name
Designation
This is the only common record that is present in all 3 applications.
Regards
Hi Vamsi,
As the experts mentioned above, BPM is the way to achieve your business requirement. When you use BPM there is a step to wait until the 3 messages have arrived in XI before processing further. Refer to the following links and choose the one that is useful to you.
Check this link for more information...
http://help.sap.com/saphelp_nw04/helpdata/en/0e/56373f7853494fe10000000a114084/content.htm
take a look at this blog..
/people/alexander.bundschuh/blog/2006/01/04/scheduling-messages-in-sap-xi
For your case, yes, it's not mandatory to create communication channels for each step if the input files are the same and come from the same system. Otherwise you obviously need to create a different communication channel for each different file; moreover, you mentioned that the files are coming from 3 different applications and should be merged, so 3 communication channels are required.
Refer the following link too to get better understanding on message merge
Re: BPM - Message merge
All the best,
Ram.
Edited by: Ramakrishna kopparaju on Apr 24, 2009 7:55 PM -
How to synchronize the data acquisition from both GPIB and DAQ card
I want to begin/stop acquiring data from GPIB and from a PCI-6024E card into LabVIEW at the same time. Since acquisition from GPIB is quite slow compared to that from the DAQ card, I cannot put both of them in the same loop structure. Is it possible to synchronize them?
Thank you!
Hi,
I wanted to save data acquired from NI-DAQ (for example, NI 9234) in a file using the DAQ-mx ANSI C Code. The response I got was as follows:-
One way to do it is with TDMS logging. DAQmx comes with functions designed to log to a TDMS file. This is a special file type that is used for collecting data in a logical format. It can be displayed in a TDMS viewer where data is separated into groups and channels. NI-DAQmx provides examples for how to log to TDMS. Look at the TDMS examples in the C:\Users\Public\Documents\National Instruments\NI-DAQ\Examples\DAQmx ANSI C\Analog In\Measure Voltage directory.
However, now I want to know: is there a way, using that same C code, to save the data in a .txt (text) file format instead of a TDMS file? We actually want to access that file through MATLAB (that's why we want to save it in text format).
Also, on another note, is there a way we can access and open TDMS files from MATLAB?
Thanks,
Sauvik Das Gupta -
How can I use a date/time from a cursor and insert it into a table later?
I have a cursor which retrieves data from a table which includes a date column, which has date and time.
I want to insert this value into a table which has another date column but when I do, it is dropping off the time data and putting in 00:00:00.
I am using execute immediate and trapped the SQL I am executing:
insert into ifsapp.GSCDB_TRANS_DTL_TAB values (
'GSCDB_COMPANY',to_date('01-DEC-06',3,0 )
and so you can see the date has no time.
The PL/SQL is:
v_sql := 'insert into ifsapp.GSCDB_TRANS_DTL_TAB values ( '''||r_gscdbio.name||''','''||r_gscdbio.last_exec_time||''','||v_count||',0 )';
Any help appreciated, I feel I need to use to_date and/or to_char but my head is spinning!
Thank you
Why are you using EXECUTE IMMEDIATE for this? And why are you not using bind variables? It'll be much easier (and more efficient, easier to debug, etc.) to just have static SQL:
INSERT INTO table_name
VALUES( r_gscdbio.name, r_gscdbio.last_exec_time, v_count, 0 );
Additionally, you'll want to explicitly specify the column list for your INSERT statement. Otherwise, if you add another column to the table, your code will break.
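If the dynamic SQL really is required, the time portion survives only when the date is formatted explicitly on both sides of the round-trip. A sketch, using the same variables as the original snippet:

```sql
-- Format the date with an explicit time component instead of relying on
-- the session's default date format, which drops the time here
v_sql := 'insert into ifsapp.GSCDB_TRANS_DTL_TAB values ( '''
      || r_gscdbio.name || ''', to_date('''
      || to_char(r_gscdbio.last_exec_time, 'YYYY-MM-DD HH24:MI:SS')
      || ''', ''YYYY-MM-DD HH24:MI:SS''), ' || v_count || ', 0 )';
```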
Justin -
Hello everyone,
I’ve been assigned one requirement wherein I would like to read around 50 CSV files from a specified folder.
In step 1 I would like to create the schema for these files: take the CSV files one by one and create a SQL table for each, if it does not already exist at the destination.
In step 2 I would like to append the data of these 50 CSV files into respective table.
In step 3 I would like to purge data older than a given date.
Please note, the data in these CSV files would be very bulky, I would like to know the best way to insert bulky data into SQL table.
Also, in some of the CSV files, there will be 4 rows at the top of the file which have the header details/header rows.
As far as I know I will be asked to implement this on SSIS 2008, but I'm not 100% sure of it.
So, please feel free to provide multiple approaches if we can achieve these requirements elegantly in newer versions like SSIS 2012.
Any help would be much appreciated.
Thanks,
Ankit
Thanks, <b>Ankit Shah</b> <hr> Inkey Solutions, India. <hr> Microsoft Certified Business Management Solutions Professionals <hr> http://ankit.inkeysolutions.com
Hello Harry and Aamir,
Thank you for the responses.
@Aamir, thank you for sharing the link. Yes, I'm going to use a Script task to read the header columns of the CSV files, preparing one SSIS variable that holds the SQL script to create the required table (with an IF EXISTS condition) inside the script task itself.
An "Execute SQL Task" will follow the script task, and this will create the actual table for a CSV.
Both these components will be inside a for each loop container and execute all 50 CSV files one by one.
Some points to be clarified,
1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50, we need to first clean the tables and then perform the data insert, while the rest of the 48 files should be appended on a daily basis.
Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
2. For some of the CSV files we will have more than one file with the same name. For example, out of 50 the 2nd file is divided into 10 different CSV files, so in total we have 60 files, wherein 10 out of 60 have repeated file names. How can we manage this criterion within the same loop? Do we need one more foreach loop inside the parent one? What is the best way to achieve this requirement?
3. There will be another package, which will be used to purge data from the SQL tables. Unlike the above package, this package will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
Please know, I'm very new in SSIS world and would like to develop these packages for client using best package development practices.
Any help would be greatly appreciated.
Thanks, <b>Ankit Shah</b> <hr> Inkey Solutions, India. <hr> Microsoft Certified Business Management Solutions Professionals <hr> http://ankit.inkeysolutions.com
1. In the bunch of these 50 CSV files there will be some exceptions for which we first need to purge the tables and then insert the data. Meaning for 2 files out of 50, we need to first clean the tables and then perform the data insert, while the rest of the 48 files should be appended on a daily basis.
Can you please advise what is the best way to achieve this requirement? Where should we configure such exceptional cases for the package?
How can you identify these files? Is it based on the file name, or is there some info in the file which indicates that it requires a purge? If yes, you can pick up this information during the file-name or file-data parsing step and set a boolean variable. Then, in the control flow, have a conditional precedence constraint which checks the boolean variable and, if set, executes an Execute SQL Task to do the purge (you can use TRUNCATE TABLE or DELETE FROM TableName statements).
2. For some of the CSV files we will have more than one file with the same name. For example, out of 50 the 2nd file is divided into 10 different CSV files, so in total we have 60 files, wherein 10 out of 60 have repeated file names. How can we manage this criterion within the same loop? Do we need one more foreach loop inside the parent one? What is the best way to achieve this requirement?
The best way to achieve this is to append a sequential value to the filename (maybe a timestamp) and then process the files in sequence. This can be done prior to the main loop, so that you can use the same loop to process these duplicate filenames as well. The best thing would be to use the file-creation-date attribute value, so that each file gets processed in the right sequence. You can use a script task to get this for each file, as below:
http://microsoft-ssis.blogspot.com/2011/03/get-file-properties-with-ssis.html
3. There will be another package, which will be used to purge data from the SQL tables. Unlike the above package, this package will not run on a daily basis. At some point we would like these 50 tables to be purged with an older-than criterion, say remove data older than 1st Jan 2015. What is the best way to achieve this requirement?
You can use a SQL script for this. Just call a SQL procedure with a single cut-off-date parameter and write logic like below:
CREATE PROC PurgeTableData
@CutOffDate datetime
AS
DELETE FROM Table1 WHERE DateField < @CutOffDate;
DELETE FROM Table2 WHERE DateField < @CutOffDate;
DELETE FROM Table3 WHERE DateField < @CutOffDate;
GO
@CutOffDate denotes the date before which data has to be purged.
You can then schedule this stored procedure in a SQL Agent job to execute at your required frequency.
Please Mark This As Answer if it solved your issue
Please Vote This As Helpful if it helps to solve your issue
Visakh
My Wiki User Page
My MSDN Page
My Personal Blog
My Facebook Page -
Import conversion data table from SAP R/3 into value mapping table in XI
Hi:
Does somebody know how to import a table with conversion data from SAP R/3 into a value mapping table in XI?
The purpose is to use a mapping table that can change in the future. Must I use an ABAP program that retrieves the data and builds the value mapping table?
If so, how do I specify, in the ABAP program, the group id, the scheme, the agency, and the corresponding value?
Please, help me.
Regards!!!
Hi David,
please refer to this section in the help: http://help.sap.com/saphelp_nw04/helpdata/en/2a/9d2891cc976549a9ad9f81e9b8db25/content.htm
There is an interface for mass replication of mapping data. The steps you need to carry out to use this are:
Activities
To implement a value-mapping replication scenario, proceed as follows:
1. Register the Java (inbound) proxies.
To do so, call the following URLs in the following order in your Internet browser:
- http://:/ProxyServer/register?ns=http://sap.com/xi/XI/System&interface=ValueMappingReplication&bean=localejbs/sap.com/com.sap.xi.services/ValueMappingApplication&method=valueMappingReplication (for the asynchronous replication scenario)
- http://:/ProxyServer/register?ns=http://sap.com/xi/XI/System&interface=ValueMappingReplicationSynchronous&bean=localejbs/sap.com/com.sap.xi.services/ValueMappingApplicationSynchronous&method=valueMappingReplicationSynchronous (for the synchronous replication scenario)
You only need to perform this step once (for each installation).
2. Application programming
The ABAP program must perform the following tasks:
- Read the value mapping data from the external table
- Call the outbound proxy used to transfer the data to a message, which is then sent to the Integration Server
3. Configuration of the replication scenario in the Integration Directory
This involves creating all the configuration objects you need to execute the scenario successfully. One special aspect of the value-mapping replication scenario is that the receiver is predefined (it must be on the Integration Server). The sender, however, is not predefined in the replication scenario and can be defined to meet your individual requirements.
For example, you can use the shipped ABAP proxies.
In the case of the receiver communication channel, choose the adapter type XI. Ensure that you configure a channel for the Java proxy receiver in this case.
Enter the path prefix /MessagingSystem/receive/JPR/XI for this purpose.
Regards
Christine -
Import Data Table from Different DB Schema in BI Administration
Here is my scenario:
Tables are imported from different DB schemas in Administration and are then joined together.
When I select fields from more than one schema in BI Answers, an incorrect result is returned.
Below is the SQL it returns:
-------------------- Sending query to database named DW_TESTING (id: <<15875>>):
select D1.c4 as c1,
D1.c5 as c2,
D1.c2 as c3,
D1.c1 as c4,
D1.c3 as c5
from
(select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5
from
(select sum(T34.PRODUCT_SALES) as c1,
T52.DESCRIPTION as c2,
T52.BE_TYPE as c3,
T137.YEAR as c4,
T137.QUARTER as c5,
ROW_NUMBER() OVER (PARTITION BY T52.BE_TYPE, T137.QUARTER ORDER BY T52.BE_TYPE ASC, T137.QUARTER ASC) as c6
from
BEADMIN.BE_MASTER T52,
DWADMIN.DATE_MASTER T137,
BEADMIN.BE_DAILY_SALES T34
where ( T34.BE_TYPE = T52.BE_TYPE and T34.SALES_DATE = T137.CALENDAR_DATE )
group by T52.BE_TYPE, T52.DESCRIPTION, T137.YEAR, T137.QUARTER
) D1
where ( D1.c6 = 1 )
) D1
order by c2
+++Administrator:7e0000:7e001f:----2009/06/08 17:22:09
-------------------- Sending query to database named DW_TESTING (id: <<15938>>):
select D2.c2 as c1,
D2.c3 as c2,
D2.c1 as c3
from
(select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3
from
(select sum(T109.PRODUCT_SALES) as c1,
T137.YEAR as c2,
T137.QUARTER as c3,
ROW_NUMBER() OVER (PARTITION BY T137.QUARTER ORDER BY T137.QUARTER ASC) as c4
from
DWADMIN.DATE_MASTER T137,
DWADMIN.DAILY_SALES T109
where ( T109.SALES_DATE = T137.CALENDAR_DATE )
group by T137.YEAR, T137.QUARTER
) D1
where ( D1.c4 = 1 )
) D2
order by c2
It seems that the query has been separated into two, and the two parts have no relationship to each other.
However, I also tested the same situation in another environment, and it returns the correct result. I found that there the query is not separated into two, but is one. The query is as below:
-------------------- Sending query to database named BI (id: <<2392>>):
WITH
SAWITH0 AS (select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4
from
(select sum(T29.PRODUCT_SALES) as c1,
T96.DESCRIPTION as c2,
T120.YEAR as c3,
T120.QUARTER as c4,
ROW_NUMBER() OVER (PARTITION BY T96.DESCRIPTION, T120.QUARTER ORDER BY T96.DESCRIPTION ASC, T120.QUARTER ASC) as c5
from
SIMULATION.BE_MASTER T96,
SIMULATION.DATE_MASTER T120,
BIEE.BE_DAILY_SALES T29
where ( T29.BE_TYPE = T96.BE_TYPE and T29.SALES_DATE = T120.CALENDAR_DATE )
group by T96.DESCRIPTION, T120.YEAR, T120.QUARTER
) D1
where ( D1.c5 = 1 ) ),
SAWITH1 AS (select D1.c1 as c1,
D1.c2 as c2,
D1.c3 as c3
from
(select sum(T102.PRODUCT_SALES) as c1,
T120.YEAR as c2,
T120.QUARTER as c3,
ROW_NUMBER() OVER (PARTITION BY T120.QUARTER ORDER BY T120.QUARTER ASC) as c4
from
SIMULATION.DATE_MASTER T120,
SIMULATION.DAILY_SALES T102
where ( T102.SALES_DATE = T120.CALENDAR_DATE )
group by T120.YEAR, T120.QUARTER
) D1
where ( D1.c4 = 1 ) )
select distinct case when SAWITH0.c3 is not null then SAWITH0.c3 when SAWITH1.c2 is not null then SAWITH1.c2 end as c1,
case when SAWITH0.c4 is not null then SAWITH0.c4 when SAWITH1.c3 is not null then SAWITH1.c3 end as c2,
SAWITH0.c2 as c3,
SAWITH1.c1 as c4,
SAWITH0.c1 as c5
from
SAWITH0 full outer join SAWITH1 On nvl(SAWITH0.c4 , 8) = nvl(SAWITH1.c3 , 8) and nvl(SAWITH0.c4 , 9) = nvl(SAWITH1.c3 , 9)
order by c1, c2, c3
This query returns correct result and seems reasonable.
Assuming that the setup in BI Administration is the same for both environments, why does the first situation return queries that are separated?
I have made the same remark, but not with two different environments and the same design.
What can influence the construction of two queries?
Here are some ideas:
- if you have two connection pools (I assume not)
- if, in your database features (double-click on the icon and choose the Features tab), full outer join is not supported
But I have these two requirements on my laptop and I can see that I also have two SQLs fired.
What you can do is make a comparison of your two repositories.
In the Administration Tool, File/Compare.
The result will be very interesting. -
Master data coming from 1 backend and GOA distributed to many backends
Hello,
In our case (using SRM 5.0), one backend system is used to replicate all material master data, and the GOA will be distributed to multiple backends, including the one that is used for master data. As far as I know, the logical backend system is determined through materials/material groups when the GOA is distributed. In the above case, how can we intervene in this logic so as to distribute the GOA to whatever backend we want, even if the master data is not coming from it?
Thank you,
Hi,
You can use the BADI BBP_DETERMINE_LOGSYS for this purpose.
The method :CONTRACT_LOGSYS_DETERMINE (Determine Target System of Contract) has been provided for the same purpose.
Sample BADI Implementation at SDN Wiki:
https://www.sdn.sap.com/irj/sdn/wiki?path=/display/srm/bbp_determine_logsys-contract_logsys_determine&
Regards
Kathirvel
Edited by: Kathirvel Balakrishnan on Feb 25, 2008 1:00 PM -
Mass Master data Upload from MS Excel and JDE into SAP ( ECC 6.0)
Hi
We are deciding on the best method of uploading 2 million fixed-asset master data records from Excel and JDE.
We are following the Batch Input (RAALTD001) and Direct Input (RAALTD11) methods.
I am looking for some other, more efficient alternative for this upload.
Looking forward to hearing from you experts!
Thanks
Milind
Hi,
Good. Both methods are not possible simultaneously.
If you want to do it as two separate tasks, then you can use the GUI_UPLOAD function module to fulfill your requirement.
Thanks,
Mrutyun