Composite key duplicate values
Hi All,
I have a following query
select a, b, c_num, d, e, f, g, h, i_name,
max(l_num) "xxx"
from table
group by a, b, c_num, d, e, f, g, h, i_name;
Here, c_num and l_num are in composite primary key.
i_name can change for each l_num, so this query lists more than one row per c_num. I want to get the i_name for only the last l_num of each c_num, so that each c_num is listed only once.
How to do it?
Any Ideas
Thanks in advance
R.R
Hi Bhaskar,
Thanks for your help. It helped me solve the problem in one way, but I still need to find the i_name for the last l_num of each c_num.
I am explaining the problem in more detail here.
select a, b, batch_number, technician_name,
max(line_num) "Last Line"
from SAMPLE
where p = '###'
group by a, b, batch_number, technician_name;
Here is the sample result data
a b batch_number technician Last Line
13 3G R100401Y xxxxxx 189
13 3G R100402Y xxxxxx 189
13 3G R100412Y xxxxxx 189
13 3G R100413Y xxxxxx 189
13 3G R100414Y xxxxxx 189
13 3G R100415Y xxxxxx 189
13 3H R131201Y Mike 189
13 3H R131201Y Sachin 183
13 3H R131201Y James 184
13 3H R132001Y xxxxxx 189
13 3H R151005Y xxxxxx 189
If you look at the results above, the maximum (i.e., last) line_num for batch_number R131201Y is 189. In the same way, I want to find the maximum line_num for each batch and display the technician_name for only that last (i.e., maximum) line_num of each batch_number. So the desired results should look like this...
a b batch_number technician Last Line
13 3G R100401Y xxxxxx 189
13 3G R100402Y xxxxxx 189
13 3G R100412Y xxxxxx 189
13 3G R100413Y xxxxxx 189
13 3G R100414Y xxxxxx 189
13 3G R100415Y xxxxxx 189
13 3G RM11925Y xxxxxx 189
13 3H R100113Y xxxxxx 189
13 3H R131201Y Mike 189
13 3H R132001Y xxxxxx 189
13 3H R151005Y xxxxxx 189
Here we display only the row with the last (i.e., maximum) line_num (189) for batch_number 'R131201Y'.
How can this be done? If you have any ideas, please reply.
Here is the structure of the table
PRIMARY KEY ("P", "LINE_NUM", "BATCH_NUMBER","column_X")
column A Not Null
column B Not Null
Thank you All for your responses
Gopureddi
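One common way to do this in Oracle is MAX(...) KEEP (DENSE_RANK LAST ...), which returns the technician_name from the row holding the highest line_num in each group. A sketch against the SAMPLE table described above (column names as posted; the p filter value is a placeholder):

```sql
SELECT a,
       b,
       batch_number,
       -- technician_name taken from the row with the highest line_num in the batch
       MAX(technician_name) KEEP (DENSE_RANK LAST ORDER BY line_num) AS technician,
       MAX(line_num) AS last_line
FROM   sample
WHERE  p = '###'
GROUP  BY a, b, batch_number;
```

An equivalent formulation ranks the rows per batch with the ROW_NUMBER() analytic function (ordered by line_num descending) and keeps only rank 1.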
Similar Messages
-
Inserting duplicate values when only MANDT is primary key
Hello experts,
I have 4 fields in my UDT(user defined table) namely MANDT, ZEVENT, ZRECIPIENT and ZEMAIL.
Now, Only MANDT is the primary key. My question is, how can I insert duplicate values
via SM30?
Again, thank you guys and take care!
Hi again,
1. Open the layout of that screen.
2. Using drag & drop, just REMOVE the field from the table control.
3. Also:
4. In the flow logic of that screen, remove the line (or comment it out) which has been put in CHAIN, for example:
FIELD YHRT_FUNMST-FUNSORT.
5. Activate everything.
6. Try again via SM30 in a new session.
7. Now it will work. I just tried it.
regards,
amit m. -
Composite Key Validation in EOImpl
Hello,
Please anyone can give example of how to validate composite key validation?
I have tried by following in EOImpl code but its not working:
OADBTransaction transaction = getOADBTransaction();
// Composite key: all key attributes, in the order they are defined on the EO
Object[] itemKey = { getOrganizationId(), getUserId(), getProcess() };
EntityDefImpl def = XxEOImpl.getDefinitionObject();
XxEOImpl existing = (XxEOImpl) def.findByPrimaryKey(transaction, new Key(itemKey));
if (existing != null) {
    throw new OAException("Error: duplicate record");
}
Please suggest; it's urgent.
Thanks,
Swati Thakkar
Hello Gurus,
Can you please confirm whether the following validation approach is correct?
We have defined a composite primary key on 3 attributes and created the EO and VO.
To add a new row, we ask the user for values via LOVs.
Is handling the unique-constraint violation via a TooManyObjectsException a valid approach to raising the error?
EOImpl Code:
try {
    setAttributeInternal(PROCESSTYPE, value);
} catch (TooManyObjectsException tooMany) {
    throw new OAException("Value already exists.");
}
Please suggest the right approach to handling Composite key validation..
Thanks,
Swati -
Need help-SQL with result format suppressing duplicate values
I am trying to write a SQL statement that selects data from the tables below (in reality I have many other tables as well), and I want to know how to get the data in the format given below.
The scenario: a training plan can have N objectives, and each objective has N activities.
Insert into TEST_TRAINING_PLAN
(TPLAN_ID, TPLAN_NAME, TPLAN_DESC, T_PERSON_ID)
Values
('111', 'test_name', 'test_name_desc', '****');
Objectives table:
Insert into TEST_TRAINING_OBJECTIVE
(T_OBJECTIVE_ID, T_OBJECTIVE_NAME, T_OWNER)
Values
('10', 'objective1', '1862188559');
Objective and training-plan relationship table, where TPLAN_ID is the foreign key:
Insert into TEST_TP_OBJECTIVE
(TPLAN_TOBJ_ID, TPLAN_ID, T_OBJECTIVE_ID, REQUIRED_CREDITS)
Values
('1', '111', '10', 5);
Objective and Activity relationship table where T_OBJECTIVE_ID is the foreign key from the TEST_TRAINING_OBJECTIVE table.
Insert into TEST_TRAIN_OBJ_ACTIVITY
(TOBJ_TRAIN_ACTIVITY, T_OBJECTIVE_ID, ACTIVITY_ID, IS_REQUIRED, STATUS,
ACTIVITY_TYPE, ITEM_ORDER, IS_PREFERRED)
Values
('1000', '10', 'selfstudy event', SS1, NULL,
'Event', 0, 0);
Insert into TEST_TRAIN_OBJ_ACTIVITY
(TOBJ_TRAIN_ACTIVITY, T_OBJECTIVE_ID, ACTIVITY_ID, IS_REQUIRED, STATUS,
ACTIVITY_TYPE, ITEM_ORDER, IS_PREFERRED)
Values
('1001', '10', 'SQLcourse', 1, NULL,
'Course', 1, 0);
Insert into TEST_TRAIN_OBJ_ACTIVITY
(TOBJ_TRAIN_ACTIVITY, T_OBJECTIVE_ID, ACTIVITY_ID, IS_REQUIRED, STATUS,
ACTIVITY_TYPE, ITEM_ORDER, IS_PREFERRED)
Values
('1002', '10', 'testSQL', 1, NULL,
'test', 2, 0);
COMMIT;
firstname emplid Tplan name Number of activities/credits completed(for TP) Objective Name Number of required activities/Credits (for objective) Number of activities/credits completed(for objective) activity name activity completion status
U1 U1 TP1 5
OBJ1 4 3 C1 PASSED
C2 PASSED
C3 WAIVED
T1 ENROLLED
T2 ENROLLED
OBJ2 3 2
S1 ENROLLED
S2 PASSED
T3 WAIVED
U1 U1 TP2 C4 INPROGRESS
50 OBJ11 50 30 C11 PASSED
**The second row shows another training_plan record and, accordingly, all of its objectives.** Similarly, I need to display many training_plan records in this tabular format. Please help with the SQL query to select and display the data in the above format.
I want to suppress duplicate values in some of my result columns.
I am using TOAD 9.1 with Oracle 10g Release 2.
Hi,
You can use the BREAK command to suppress duplicate values.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14357/ch12009.htm#SQPUG030
(scroll down for an example)
It's a 'SQL*Plus-ism'; I'm not sure whether TOAD can handle it.
Simple example:
HR%xe> break on department_name
HR%xe> select l.department_name
2 , e.last_name
3 , e.first_name
4 from departments l
5 , employees e
6 where e.department_id = l.department_id;
DEPARTMENT_NAME LAST_NAME FIRST_NAME
Executive King Steven
Kochhar Neena
De Haan Lex
IT Hunold Alexander
Ernst Bruce
Austin David
Pataballa Valli
Lorentz Diana
Finance Greenberg Nancy
Faviet Daniel
Chen John
Sciarra Ismael
Urman Jose Manuel
Popp Luis
Purchasing Raphaely Den
Khoo Alexander
Baida Shelli
Tobias Sigal
Himuro Guy
Colmenares Karen
Shipping Weiss Matthew
Fripp Adam
Kaufling Payam
Vollman Shanta
Mourgos Kevin
Nayer Julia
Mikkilineni Irene
Landry James
Public Relations Baer Hermann
Accounting Higgins Shelley
Gietz William
106 rows selected. -
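Since BREAK is a SQL*Plus command, a tool-independent alternative is to blank out the repeats in the SQL itself with the LAG analytic function. A sketch against the same HR sample schema used above:

```sql
SELECT CASE
         WHEN l.department_name =
              LAG(l.department_name) OVER (ORDER BY l.department_name, e.last_name)
         THEN NULL                         -- suppress the repeated value
         ELSE l.department_name
       END AS department_name,
       e.last_name,
       e.first_name
FROM   departments l
JOIN   employees e ON e.department_id = l.department_id
ORDER  BY l.department_name, e.last_name;
```

This runs in any client (TOAD included) because the suppression happens in the query rather than in the display layer.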
hi,
Please see the table structure below.
-- Create table
create table RN_RPT_FIG
(
  DT         DATE not null,
  RPT_ID     NUMBER(8) not null,
  RPT_ROW_ID NUMBER(4) not null,
  RPT_COL_ID NUMBER(4) not null,
  FIG        NUMBER(20,2)
)
tablespace TS_IRS
  pctfree 10
  initrans 1
  maxtrans 255
  storage
  (
    initial 64K
    minextents 1
    maxextents unlimited
  );
-- Create/Recreate primary, unique and foreign key constraints
alter table RN_RPT_FIG
  add constraint PK_RN_RPT_FIG primary key (DT, RPT_ID, RPT_ROW_ID, RPT_COL_ID)
  using index
  tablespace TS_IRS
  pctfree 10
  initrans 2
  maxtrans 255
  storage
  (
    initial 128K
    minextents 1
    maxextents unlimited
  );
I would like to find any duplicate values that have been entered in this table. How can I write this query, keeping in mind the constraint PK_RN_RPT_FIG? I am actually trying to find out which those columns are.
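Note that the primary key already prevents two rows with the same (DT, RPT_ID, RPT_ROW_ID, RPT_COL_ID), so "duplicates" here can only mean the same FIG value appearing more than once within some subset of the key. A possible sketch (adjust the GROUP BY columns to whatever you consider a duplicate):

```sql
SELECT dt, rpt_id, fig, COUNT(*) AS occurrences
FROM   rn_rpt_fig
GROUP  BY dt, rpt_id, fig
HAVING COUNT(*) > 1
ORDER  BY dt, rpt_id, fig;
```

Each row returned names a (date, report, figure) combination that occurs on more than one row/column position.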
below is the data in that table
DT RPT_ID RPT_ROW_ID RPT_COL_ID FIG
1 31-Mar-11 2000101 1 2 8157500.00
2 31-Mar-11 2000101 1 1 17.00
3 31-Mar-11 2000101 1 4 530000.00
4 31-Mar-11 2000101 1 3 5.00
5 31-Mar-11 2000101 2 2 96500.00
6 31-Mar-11 2000101 2 1 6.00
7 31-Mar-11 2000101 3 2 8301000.00
8 31-Mar-11 2000101 3 1 7.00
9 31-Mar-11 2000101 3 4 669000.00
10 31-Mar-11 2000101 3 3 6.00
11 31-Mar-11 2000101 5 2 25184500.00
12 31-Mar-11 2000101 5 1 61.00
13 31-Mar-11 2000101 5 4 22609000.00
14 31-Mar-11 2000101 5 3 19.00
15 31-Mar-11 2000101 6 2 14103500.00
16 31-Mar-11 2000101 6 1 24.00
17 31-Mar-11 2000101 7 5 9153500.00
18 31-Mar-11 2000102 1 1 6.00
19 31-Mar-11 2000102 1 2 664000.00
20 31-Mar-11 2000102 1 3 1.00
21 31-Mar-11 2000102 1 4 18500.00
22 31-Mar-11 2000102 2 1 4.00
23 31-Mar-11 2000102 2 2 4800000.00
24 31-Mar-11 2000102 2 3 1.00
25 31-Mar-11 2000102 2 4 98000.00
26 31-Mar-11 2000102 3 3 1.00
27 31-Mar-11 2000102 3 4 98000.00
28 31-Mar-11 2000102 4 1 18.00
29 31-Mar-11 2000102 4 2 16476000.00
30 31-Mar-11 2000102 4 3 10.00
31 31-Mar-11 2000102 4 4 21257000.00
32 31-Mar-11 2000102 5 1 9.00
33 31-Mar-11 2000102 5 2 6069000.00
34 31-Mar-11 2000102 5 3 6.00
35 31-Mar-11 2000102 5 4 79000.00
36 31-Mar-11 2000102 6 1 26.00
37 31-Mar-11 2000102 6 2 2938000.00
38 31-Mar-11 2000102 6 3 2.00
39 31-Mar-11 2000102 6 4 1167500.00
40 31-Mar-11 2000102 7 1 2.00
41 31-Mar-11 2000102 7 2 257000.00
42 31-Mar-11 2000103 1 1 3.00
43 31-Mar-11 2000103 1 2 169500.00
44 31-Mar-11 2000103 2 3 3.00
45 31-Mar-11 2000103 2 4 2266500.00
46 31-Mar-11 2000103 3 1 4.00
47 31-Mar-11 2000103 3 2 6400000.00
48 31-Mar-11 2000103 4 1 17.00
49 31-Mar-11 2000103 4 2 9369500.00
50 31-Mar-11 2000103 4 3 8.00
51 31-Mar-11 2000103 4 4 20149000.00
52 31-Mar-11 2000103 5 1 25.00
53 31-Mar-11 2000103 5 2 7307000.00
54 31-Mar-11 2000103 5 3 3.00
55 31-Mar-11 2000103 5 4 125500.00
56 31-Mar-11 2000103 6 1 7.00
57 31-Mar-11 2000103 6 2 194500.00
58 31-Mar-11 2000103 6 3 5.00
59 31-Mar-11 2000103 6 4 68000.00
60 31-Mar-11 2000103 7 1 2.00
61 31-Mar-11 2000103 7 2 247000.00
62 31-Mar-11 2000103 8 1 2.00
63 31-Mar-11 2000103 8 2 1375000.00
64 31-Mar-11 2000103 11 1 1.00
65 31-Mar-11 2000103 11 2 122000.00
66 31-Mar-11 2000104 3 4 432500.00
67 31-Mar-11 2000104 3 3 27.00
68 31-Mar-11 2000104 3 6 115000.00
69 31-Mar-11 2000104 3 5 8.00
70 31-Mar-11 2000104 4 4 172000.00
71 31-Mar-11 2000104 4 3 4.00
72 31-Mar-11 2000104 4 6 294000.00
73 31-Mar-11 2000104 4 5 3.00
74 31-Mar-11 2000104 5 4 7030000.00
75 31-Mar-11 2000104 5 3 23.00
76 31-Mar-11 2000104 5 6 1150000.00
77 31-Mar-11 2000104 5 5 1.00
78 31-Mar-11 2000104 6 4 17550000.00
79 31-Mar-11 2000104 6 3 7.00
80 31-Mar-11 2000104 6 6 21050000.00
81 31-Mar-11 2000104 6 5 7.00
82 31-Mar-11 2000105 5 4 664000.00
83 31-Mar-11 2000105 5 3 6.00
84 31-Mar-11 2000105 5 6 18500.00
85 31-Mar-11 2000105 5 5 1.00
86 31-Mar-11 2000105 6 4 4800000.00
87 31-Mar-11 2000105 6 3 4.00
88 31-Mar-11 2000105 7 6 98000.00
89 31-Mar-11 2000105 7 5 1.00
90 31-Mar-11 2000105 8 4 16525500.00
91 31-Mar-11 2000105 8 3 23.00
92 31-Mar-11 2000105 8 6 21325000.00
93 31-Mar-11 2000105 8 5 15.00
94 31-Mar-11 2000105 9 4 2938000.00
95 31-Mar-11 2000105 9 3 26.00
96 31-Mar-11 2000105 9 6 1167500.00
97 31-Mar-11 2000105 9 5 2.00
98 31-Mar-11 2000105 10 4 257000.00
99 31-Mar-11 2000105 10 3 2.00
100 31-Mar-11 4000100 3 3 0.00
101 31-Mar-11 4000100 3 4 18.00
102 31-Mar-11 4000100 3 5 18.00
103 31-Mar-11 4000100 4 3 0.00
104 31-Mar-11 4000100 4 4 7.00
105 31-Mar-11 4000100 4 5 7.00
106 31-Mar-11 4000100 5 3 0.00
107 31-Mar-11 4000100 5 4 10.00
108 31-Mar-11 4000100 5 5 10.00
109 31-Mar-11 4000100 6 4 8.00
110 31-Mar-11 4000100 6 5 8.00
111 31-Mar-11 4000100 7 4 27.00
112 31-Mar-11 4000100 7 5 27.00
113 31-Mar-11 4000100 8 4 41.00
114 31-Mar-11 4000100 8 5 41.00
115 31-Mar-11 4000100 9 3 0.00
116 31-Mar-11 4000100 9 4 4.00
117 31-Mar-11 4000100 9 5 4.00
118 31-Mar-11 4000100 10 3 0.00
119 31-Mar-11 4000100 10 4 2.00
120 31-Mar-11 4000100 10 5 2.00
121 31-Mar-11 4000100 11 3 0.00
122 31-Mar-11 4000100 11 4 0.00
123 31-Mar-11 4000100 11 5 0.00
124 31-Mar-11 4000100 12 3 0.00
125 31-Mar-11 4000100 12 4 4.00
126 31-Mar-11 4000100 12 5 4.00
127 31-Mar-11 4000100 13 3 0.00
128 31-Mar-11 4000100 13 4 2.00
129 31-Mar-11 4000100 13 5 2.00
130 31-Mar-11 4000100 14 3 0.00
131 31-Mar-11 4000100 14 4 0.00
132 31-Mar-11 4000100 14 5 0.00
133 31-Mar-11 4000100 15 3 0.00
134 31-Mar-11 4000100 15 4 6.00
135 31-Mar-11 4000100 15 5 6.00
136 31-Mar-11 4000100 16 5 0.00
137 31-Mar-11 4000100 17 5 0.00
138 31-Mar-11 4000100 18 5 0.00
139 31-Mar-11 4000100 19 5 212.81
140 31-Mar-11 4000100 20 5 0.00
141 31-Mar-11 4000100 21 5 39.20
142 31-Mar-11 4000100 22 5 0.00
143 31-Mar-11 4000100 23 5 0.00
144 31-Mar-11 4000100 26 5 0.00
145 31-Mar-11 4000100 27 5 92.96
146 31-Mar-11 4000100 28 5 21.07
147 31-Mar-11 4000100 29 5 0.00
148 31-Mar-11 4000100 30 5 44.24
149 31-Mar-11 4000100 31 5 10.09
150 31-Mar-11 5000100 3 3 57700.00
151 31-Mar-11 5000100 4 3 137900.00
152 31-Mar-11 5000100 5 3 41700.00
153 31-Mar-11 5000100 6 3 20900.00
154 31-Mar-11 5000100 7 3 32800.00
155 31-Mar-11 5000100 8 3 188100.00
156 31-Mar-11 5000100 9 3 372730.00
157 31-Mar-11 5000100 10 3 63100.00
158 31-Mar-11 5000100 11 3 126200.00
159 31-Mar-11 5000100 12 3 65100.00
160 31-Mar-11 5000100 13 3 157200.00
161 31-Mar-11 5000100 15 3 61100.00
162 31-Mar-11 5000100 16 3 38900.00
163 31-Mar-11 5000100 17 3 208500.00
164 31-Mar-11 5000100 18 3 1167700.00
165 31-Mar-11 5000100 19 3 67100.00
166 31-Mar-11 5000100 20 3 68100.00
167 31-Mar-11 5000100 21 3 81100.00
168 31-Mar-11 5000100 22 3 82100.00
169 31-Mar-11 5000100 24 3 90500.00
170 31-Mar-11 5000100 25 3 20800.00
171 31-Aug-11 3000107 3 4 1000.00
172 31-Aug-11 3000107 3 3 1.00
173 31-Aug-11 3000107 3 6 1000.00
174 31-Aug-11 3000107 3 5 1.00
175 31-Aug-11 3000108 8 3 4.00
176 31-Aug-11 3000108 8 4 45100.00
177 31-Aug-11 3000108 15 3 4.00
178 31-Aug-11 3000108 15 4 60000.00
179 31-Aug-11 3000108 16 3 3.00
180 31-Aug-11 3000108 16 4 71020.56
181 31-Aug-11 3000108 17 3 4.00
182 31-Aug-11 3000108 17 4 97038.72
183 31-Aug-11 3000108 18 3 4.00
184 31-Aug-11 3000108 18 4 17096.91
185 31-Aug-11 3000108 19 3 8.00
186 31-Aug-11 3000108 19 4 106569.85
187 31-Aug-11 3000108 20 3 3.00
188 31-Aug-11 3000108 20 4 30456.12
189 31-Aug-11 3000108 21 3 4.00
190 31-Aug-11 3000108 21 4 63805.83
191 31-Aug-11 3000108 22 3 4.00
192 31-Aug-11 3000108 22 4 60000.00
193 31-Aug-11 3000108 23 3 4.00
194 31-Aug-11 3000108 23 4 16208.52
195 31-Aug-11 3000108 24 3 4.00
196 31-Aug-11 3000108 24 4 57784.26
197 31-Aug-11 3000108 25 3 3.00
198 31-Aug-11 3000108 25 4 45000.00
199 31-Aug-11 3000108 26 3 4.00
200 31-Aug-11 3000108 26 4 17212.24
201 31-Aug-11 3000108 28 3 4.00
202 31-Aug-11 3000108 28 4 30567.03
203 31-Aug-11 3000108 33 3 3.00
204 31-Aug-11 3000108 33 4 30130.34
205 31-Aug-11 4000100 3 3 0.00
206 31-Aug-11 4000100 3 4 18.00
207 31-Aug-11 4000100 3 5 18.00
208 31-Aug-11 4000100 4 3 0.00
209 31-Aug-11 4000100 4 4 7.00
210 31-Aug-11 4000100 4 5 7.00
211 31-Aug-11 4000100 5 3 0.00
212 31-Aug-11 4000100 5 4 10.00
213 31-Aug-11 4000100 5 5 10.00
214 31-Aug-11 4000100 6 5 8.00
215 31-Aug-11 4000100 7 5 27.00
216 31-Aug-11 4000100 8 5 41.00
217 31-Aug-11 4000100 9 3 0.00
218 31-Aug-11 4000100 9 4 4.00
219 31-Aug-11 4000100 9 5 4.00
220 31-Aug-11 4000100 10 3 0.00
221 31-Aug-11 4000100 10 4 2.00
222 31-Aug-11 4000100 10 5 2.00
223 31-Aug-11 4000100 11 3 0.00
224 31-Aug-11 4000100 11 4 0.00
225 31-Aug-11 4000100 11 5 0.00
226 31-Aug-11 4000100 12 3 0.00
227 31-Aug-11 4000100 12 4 4.00
228 31-Aug-11 4000100 12 5 4.00
229 31-Aug-11 4000100 13 3 0.00
230 31-Aug-11 4000100 13 4 2.00
231 31-Aug-11 4000100 13 5 2.00
232 31-Aug-11 4000100 14 3 0.00
233 31-Aug-11 4000100 14 4 0.00
234 31-Aug-11 4000100 14 5 0.00
235 31-Aug-11 4000100 15 3 0.00
236 31-Aug-11 4000100 15 4 6.00
237 31-Aug-11 4000100 15 5 6.00
238 31-Aug-11 4000100 16 5 0.00
239 31-Aug-11 4000100 17 5 0.00
240 31-Aug-11 4000100 18 5 0.00
241 31-Aug-11 4000100 19 5 212.81
242 31-Aug-11 4000100 20 5 0.00
243 31-Aug-11 4000100 21 5 39.20
244 31-Aug-11 4000100 22 5 0.00
245 31-Aug-11 4000100 23 5 0.00
246 31-Aug-11 4000100 26 5 0.00
247 31-Aug-11 4000100 27 5 92.96
248 31-Aug-11 4000100 28 5 21.07
249 31-Aug-11 4000100 29 5 0.00
250 31-Aug-11 4000100 30 5 44.24
251 31-Aug-11 4000100 31 5 10.09
252 31-Aug-11 5000100 3 3 97700.00
253 31-Aug-11 5000100 4 3 100600.00
254 31-Aug-11 5000100 5 3 82700.00
255 31-Aug-11 5000100 6 3 33900.00
256 31-Aug-11 5000100 7 3 59800.00
257 31-Aug-11 5000100 8 3 196400.00
258 31-Aug-11 5000100 9 3 287600.00
259 31-Aug-11 5000100 10 3 77100.00
260 31-Aug-11 5000100 11 3 154200.00
261 31-Aug-11 5000100 12 3 79100.00
262 30-Nov-07 4000100 6 4 8.00
263 30-Nov-07 4000100 7 4 27.00
264 30-Nov-07 4000100 8 4 41.00
265 30-Nov-07 4000100 16 4 0.00
266 30-Nov-07 4000100 18 4 0.00
267 30-Nov-07 4000100 19 4 0.03
268 30-Nov-07 4000100 20 4 0.00
269 30-Nov-07 4000100 21 4 0.01
270 30-Nov-07 4000100 26 4 0.00
271 30-Nov-07 4000100 27 4 0.00
272 30-Nov-07 4000100 28 4 0.00
273 30-Nov-07 4000100 29 4 0.00
274 30-Nov-07 4000100 30 4 0.00
275 30-Nov-07 4000100 31 4 0.00
276 31-Aug-11 1000101 6 2 1000.00
277 31-Aug-11 1000101 6 1 5.00
278 31-Aug-11 1000101 6 3 50800.00
279 31-Aug-11 1000101 7 2 4000.00
280 31-Aug-11 1000101 7 1 11.00
281 31-Aug-11 1000101 7 3 129828.26
282 31-Aug-11 1000101 12 2 14000.00
283 31-Aug-11 1000101 12 1 27.00
284 31-Aug-11 1000101 12 3 244433.51
285 31-Aug-11 1000101 30 2 0.00
286 31-Aug-11 1000101 30 1 6.00
287 31-Aug-11 1000101 30 3 1415254.00
288 31-Aug-11 1000101 39 2 0.00
289 31-Aug-11 1000101 39 1 8.00
290 31-Aug-11 1000101 39 3 5300.00
291 31-Aug-11 3000103 6 6 46.00
292 31-Aug-11 3000103 6 7 585509.54
293 31-Aug-11 3000104 3 13 7.00
294 31-Aug-11 3000104 3 14 47721.81
295 31-Aug-11 3000104 4 13 6.00
296 31-Aug-11 3000104 4 14 80805.83
297 31-Aug-11 3000104 5 13 6.00
298 31-Aug-11 3000104 5 14 46569.72
299 31-Aug-11 3000104 6 13 7.00
300 31-Aug-11 3000104 6 14 112907.50
301 31-Aug-11 3000104 7 13 6.00
302 31-Aug-11 3000104 7 14 75500.00
303 31-Aug-11 3000104 8 13 13.00
304 31-Aug-11 3000104 8 14 123471.04
305 31-Aug-11 3000104 9 13 6.00
306 31-Aug-11 3000104 9 14 101401.56
307 31-Aug-11 3000104 20 13 6.00
308 31-Aug-11 3000104 20 14 60489.10
309 31-Aug-11 3000104 22 13 6.00
310 31-Aug-11 3000104 22 14 75567.03
311 31-Aug-11 3000104 23 13 6.00
312 31-Aug-11 3000104 23 14 61339.10
313 31-Aug-11 3000104 24 13 20.00
314 31-Aug-11 3000104 24 14 94688.52
315 31-Aug-11 3000104 25 13 45.00
316 31-Aug-11 3000104 25 14 5772805.47
317 31-Aug-11 3000104 26 13 6.00
318 31-Aug-11 3000104 26 14 76462.24
319 31-Aug-11 3000104 27 13 16.00
320 31-Aug-11 3000104 27 14 63742.32
321 31-Aug-11 3000104 31 13 6.00
322 31-Aug-11 3000104 31 14 60141.20
323 31-Aug-11 3000105 7 3 14.00
324 31-Aug-11 3000105 7 4 162632.63
325 31-Aug-11 3000105 7 5 11.00
326 31-Aug-11 3000105 7 6 92950.93
327 31-Aug-11 3000105 8 3 15.00
328 31-Aug-11 3000105 8 4 128938.59
329 31-Aug-11 3000105 8 5 14.00
330 31-Aug-11 3000105 8 6 1491571.98
331 31-Aug-11 3000105 9 3 14.00
332 31-Aug-11 3000105 9 4 1666338.36
333 31-Aug-11 3000105 9 5 13.00
334 31-Aug-11 3000105 9 6 1377521.70
335 31-Aug-11 5000100 13 3 91100.00
336 31-Aug-11 5000100 15 3 75100.00
337 31-Aug-11 5000100 16 3 60100.00
338 31-Aug-11 5000100 17 3 448700.00
339 31-Aug-11 5000100 18 3 1117400.00
340 31-Aug-11 5000100 19 3 81100.00
341 31-Aug-11 5000100 20 3 82100.00
342 31-Aug-11 5000100 24 3 157500.00
343 31-Aug-11 5000100 25 3 48800.00
344 31-Aug-11 7000001 19 1
345 31-Aug-11 7000001 32 1
346 31-Aug-11 7000001 40 1
347 31-Aug-11 7000001 42 1
348 31-Aug-11 7000001 43 1
349 31-Aug-11 7000001 45 1
350 31-Aug-11 7000001 48 1 0.15
351 31-Aug-11 7000001 50 1 0.15
352 31-Aug-11 7000001 51 1
353 31-Aug-11 7000001 54 1
354 31-Aug-11 7000001 55 1
355 31-Aug-11 7000001 66 1 4417.45
356 31-Aug-11 7000001 66 2
357 31-Aug-11 7000002 12 2
358 31-Aug-11 7000002 12 3
359 31-Aug-11 7000002 13 2
360 31-Aug-11 7000002 13 3
361 31-Aug-11 7000002 16 2
362 31-Aug-11 7000002 16 3
363 31-Aug-11 7000002 17 2
364 31-Aug-11 7000002 17 3
365 31-Aug-11 7000002 18 2
366 31-Aug-11 7000002 18 3
367 31-Aug-11 7000002 19 2
368 31-Aug-11 7000002 19 3
369 31-Aug-11 7000002 20 2
370 31-Aug-11 7000002 20 3
371 31-Aug-11 7000002 22 2
372 31-Aug-11 7000002 23 2
373 31-Aug-11 7000002 23 3 14900.00
374 31-Aug-11 7000002 27 2
375 31-Aug-11 7000002 27 3
376 31-Aug-11 7000002 28 2
377 31-Aug-11 7000002 29 2
378 31-Aug-11 7000002 30 2
379 31-Aug-11 7000002 30 3 59100.00
380 31-Aug-11 7000002 32 2
381 31-Aug-11 7000002 32 3 56100.00
382 31-Aug-11 7000002 34 2
383 31-Aug-11 7000002 34 3 373880.00
384 31-Aug-11 7000002 35 2
385 31-Aug-11 7000002 35 3 119363875.00
386 31-Aug-11 7000002 36 2
387 31-Aug-11 7000002 36 3 9629230.00
388 31-Aug-11 7000002 38 2
389 31-Aug-11 7000002 38 3 2287214.00
390 31-Aug-11 7000002 40 2
391 31-Aug-11 7000002 40 3 1935671.00
392 31-Aug-11 7000002 41 2
393 31-Aug-11 7000002 41 3 2515010.00
394 31-Aug-11 7000002 42 2
395 31-Aug-11 7000002 42 3 36348309.00
396 31-Aug-11 7000002 43 2
397 31-Aug-11 7000002 43 3 3212922.00 -
Error while inserting record in Key-Flex Values Set interface
Guys
on 11.5.10.2
Actually, the problem occurs while inserting a value into the Job flexfield value set. We created two segments for the job key flexfield (number and name), and it was working fine. But since we uploaded bulk data (around 100 records) via the API "fnd_flex_values_pkg.INSERT_ROW", the system no longer accepts new values through the value set interface and shows the message "APPS-FND-01206: You entered duplicate values or sequence of values that must be unique for every record".
However, when we insert the job number through the API, it inserts without error, while the interface does not take the value and raises the duplicate-value error APPS-FND-01206.
Please advise.
Thanks for supporting, Duncan.
I have mentioned both of segment detail . please advice,
1.One is Job No segment
Segment1 number 10
Required Check Yes
Value set Info JOB_NO
List Type : List of Values
Security Type : No Security
Format Validation : select number 7,0 and check on number only (0-9)
Validation Type Independent
2.One is Job Name segment
Segment2 number 20
Required Check Yes
Value set Info JOB_NAME
List Type : List of Values
Security Type : No Security
Format Validation : select CHAR 60 Validation Type Independent
It is strange: how could there be a duplicate value when it is not in the database? I am unable to figure this issue out.
Edited by: oracle0282 on Jan 17, 2011 3:32 AM -
Primary Key in DW and indexes - composite key of all dimensional FKs?
Hi guys,
I have a DW and indexing question. Our metadata DW and physical DW are the same (star schema, no snowflakes). Should all dimensional FKs (such as TIME_KEY, ORG_KEY) be included as a part of a fact table's PK (a composite key becomes a combination of the TIME_KEY, ORG_KEY, etc.) in the database. Second question is, should we create just 1 composite index per fact table (resembling fact table's PK) or should we build several indexes (1 per dimension) ? Also, does it pay to build indexes on dimension tables?
I'm mostly worried about a fact table with a few hundred million records that has 7 dimensional-FK columns (numeric codes only), 1 value column, and several ETL-related date columns. I wonder what the best course of action would be for that one.
Thank you.
You can use bitmap indexes on the foreign keys to enable star transformations to be carried out. There are full details in the data warehousing documentation. Unfortunately I can't access it at the moment to show you where!
As with anything, you can check your performance using Oracle trace events (10046) to ensure it's performing as you want.
http://www.dwoptimize.com/2007/06/101010-answer-to-life-universe-and.html
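For reference, enabling star transformation generally means one single-column bitmap index per fact-table foreign key rather than a single composite index. A sketch with hypothetical names (fact table SALES_FACT, keys TIME_KEY and ORG_KEY; your table and column names will differ):

```sql
-- One bitmap index per dimensional foreign key on the fact table
CREATE BITMAP INDEX sales_fact_time_bix ON sales_fact (time_key);
CREATE BITMAP INDEX sales_fact_org_bix  ON sales_fact (org_key);

-- Star transformation must be enabled for the session or instance
ALTER SESSION SET star_transformation_enabled = TRUE;
```

With this layout, the optimizer can combine the bitmaps of only the dimensions actually filtered in a query, which a single composite index cannot do.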
Edited by: Matt T on Oct 22, 2008 4:50 PM -
Composite key constraint in SSIS
Hi All,
I have created an SSIS package to move data from one table to another in the same database using an OLE DB connection. Now I want to enforce a composite key constraint while moving the data, as follows:
Table A has following contents :
Col1 col2 col3
1 a b
2 c d
3 a b
So, while moving data, I want to verify the contents of col2 and col3 (col2 + col3), i.e., a composite key made of col2 and col3. In this case I want to move only the data of row 2; the data of rows 1 and 3 should go to the error log file.
I am trying to use a Lookup here but no luck yet. Can anybody help me achieve this?
Thanks in advance,
Sanket
Hi Sanket,
I agree with Visakh's approach: if the tables reside on the same server, why go for SSIS? But if you still want to do it, here are the steps (it is a bit complex for a simple operation like this; I didn't find any other approach). I am using the same table structure as mentioned above:
create table sampletest
(
  col1 int,
  col2 varchar(10),
  col3 varchar(10)
)
GO
insert into sampletest
values (1,'a','b'),(2,'c','d'),(3,'a','b')
GO
1.) Load the data from the source with all columns.
2.) Place an Aggregate task. Here is the configuration:
Column - Operation
Col1 - Max/Min
Col2 - Group by
Col3 - Group by
(*) with output alias (say Cnt) - Count all
Figure 1:
3.) Place a Conditional Split with the expression:
(DT_I4)Cnt == 1
4.) Move the matching rows to the destination table and the other conditional-split output to the error table.
Full Diagram:
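If both tables really are on the same server (Visakh's point), the whole Aggregate plus Conditional Split flow collapses into plain T-SQL. A sketch, assuming a destination table dest and an error table err with the same columns as sampletest (both names are hypothetical):

```sql
-- Rows whose (col2, col3) combination is unique go to the destination
WITH counted AS (
  SELECT col1, col2, col3,
         COUNT(*) OVER (PARTITION BY col2, col3) AS cnt
  FROM   sampletest
)
INSERT INTO dest (col1, col2, col3)
SELECT col1, col2, col3 FROM counted WHERE cnt = 1;

-- Rows that share a (col2, col3) combination go to the error table
WITH counted AS (
  SELECT col1, col2, col3,
         COUNT(*) OVER (PARTITION BY col2, col3) AS cnt
  FROM   sampletest
)
INSERT INTO err (col1, col2, col3)
SELECT col1, col2, col3 FROM counted WHERE cnt > 1;
```

With the sample data, row 2 ('c','d') lands in dest and rows 1 and 3 ('a','b' twice) land in err.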
Regards Harsh -
46C Migration Oracle/HP-UX to MAXDB/SLE10 error: Duplicate value QCMT066
To migrate a productive system from 46C to ECC600, I got a system copy export from our business partner.
The source system is running on HP-UX/Oracle. The target system should be MaxDB/SLES 10.
I ran the import with R3setup and got the error: Duplicate value QCMT066.
To try to continue the import I set in DBMIG.R3S:
[DBR3LOADEXEC_IND_ADA]
from STATUS=ERROR to STATUS=OK
[DBR3LOADVIEW_IND_ADA]
from STATUS=ERROR to STATUS=OK
[DBR3LICENSECREATE_IND_ADA]
STATUS=ERROR to STATUS=OK
[DBR3START_IND_ADA]
from STATUS=ERROR to STATUS=OK
The system comes up, but in SAPGUI I get: GEN_TABLE_NOT_EXISTS .. Table VSEOPARENT does not exist.
I believe that I have to run the import again. How can I solve the duplicate value QCMT066 problem?
Edited by: Trieu Chi Phung on Aug 4, 2009 4:01 PM
Answer from SAP:
Did you skip the error? Because there are errors in both SAPVIEW.log and SAPAPPL1.log.
The SAPVIEW has an error because the table VBKD was not imported yet, and this table belongs to package SAPAPPL1, so you have to finish importing SAPAPPL1 first.
You have the error "Duplicate key in index" for table QCMT066.
One of the most important things to do before a migration starts is to check the consistency between the database and the ABAP DDIC (DB02 > Checks > button Database <-> ABAP/4 DDIC) and to look for QCM tables from previously failed table conversions. These temporary objects are used during conversion (see attached note # 9385; this note is for 3.0F but explains the situation).
I would like you to proceed as follows:
1. In the source system, check SE14 > Extras > Invalid Temp. Tables and remove all entries from there.
2. Switch to the <sid>adm user, create a new temp directory, and run 'R3ldctl' without any parameters.
3. Check the STR files created and grep for entries starting with QCM.
4. For those objects, use function module 'DD_NAMETAB_DELETE' to remove them from the nametab.
5. Repeat the export from scratch.
If you want a workaround, you can modify the <package>.STR file, remove the QCMT066 entry, and restart the import to continue.
However, this may be tedious if you have lots of objects of this kind. -
Delete Reconciliation fails when a Composite Key is used
Hi Guys ,
Problem Statement :-
I am facing a problem performing delete reconciliation when a composite key is used. It fails whenever we have more than one attribute as the key in the reconciliation field mappings.
I am using the prepareDeleteReconData() etc. APIs to perform delete reconciliation. I am not using createDeleteReconciliationEvent() as I don't know which users were deleted.
Use case:
For example, consider the Oracle Database UM connector, where the composite key is defined as (UserID, ITResource); it fails to generate a delete reconciliation event.
Has anybody faced this? Any workarounds?
Thanks
Surendra Singh
Hey Surendra,
This is what you can do to get rid of this problem. I know you cannot use the 'createDeleteReconciliationEvent' API, but just to let you know, it works absolutely fine. The approach you might be using has the following flow-
- provideDeletionDetectionData()
- getMissingAccounts()
- deleteDetectedAccounts()
Now you must be aware that getMissingAccounts() returns a ResultSet for all the instances which needs to be revoked in OIM. If you see the contents of this ResultSet, here is what it contains (4 columns):
Objects.Key, Objects.Name, Structure Utility.Table Name, Process Instance.Key
Now what I suggest is do not use the deleteDetectedAccounts API as of now. And Revoke the object instance using API call. Follow the following steps:
1) Just iterate through the ResultSet *(deletedUsers)* obtained from 'getMissingAccounts()' to fetch the value 'Process Instance.Key' and store it in an Array.
2) You must be passing the Object Name as a Task Attribute. Use this attribute to fetch the 'Object Key'. Once you get this value, use the 'getAssociatedUsers' API of objectOperationsIntf to find all the users associated with this object. This API will return a ResultSet. Let's name it as *'AssoUsers'*.
3) Iterate the above ResultSet(AssoUsers) and fetch the *'Process Instance.Key'* column from its rows. Compare this value to the already created Array in step-1. If there is a match then you will know that this resource instance needs to be revoked.
4) Now fetch the following two values from the ResultSet(AssoUsers):
- Users.Key
- Object Instance.Key
5) Once you get the User Key, you will have to find its corresponding resources. Do it by using *'getObjects'* API of userOperationsIntf. This will again return a resultSet *(userObjects)*.
6) Iterate through all the rows and check the value of column *'Objects.Name'*. If this value equals to your resource, then fetch the value of column- Users-Object Instance For User.Key from this row.
7) This will give you the 'Object instnace for User key'.
8) Call the revokeObject API of userOperationsIntf interface.
Below is a sample code snippet for your reference.
// Placeholders in comments and string literals must be replaced with your values
Set<String> deletedPiKeys = new HashSet<String>(); // filled in step 1 from getMissingAccounts()
String objectName = "Your Object Name as it comes from Task Attribute";
long objectKey = 1; // fetch it from the object name above using the API
HashMap dummy = new HashMap();
tcResultSet assoUsers = objectOperationsIntf.getAssociatedUsers(objectKey, dummy);
for (int i = 0; i < assoUsers.getRowCount(); i++) {
    assoUsers.goToRow(i);
    String piKey = assoUsers.getStringValue("Process Instance.Key");
    if (deletedPiKeys.contains(piKey)) {
        long userKey = assoUsers.getLongValue("Users.Key");
        long obiKey = assoUsers.getLongValue("Object Instance.Key");
        logger.debug("userKey extracted is : " + userKey);
        logger.debug("obiKey extracted is : " + obiKey);
        tcResultSet userObjects = userOperationsIntf.getObjects(userKey);
        for (int j = 0; j < userObjects.getRowCount(); j++) {
            userObjects.goToRow(j);
            if (objectName.equalsIgnoreCase(userObjects.getStringValue("Objects.Name"))) {
                long obiuKey = userObjects.getLongValue("Users-Object Instance For User.Key");
                userOperationsIntf.revokeObject(userKey, obiuKey);
                logger.debug("Resource has been revoked");
            }
        }
    }
}
This should work. I know it looks quite complex, but you have to do it this way. Give it a try.
Thanks
Sunny -
CQL Join on Coherence Cache with Composite Key
I have a Coherence Cache with a composite key and I want to join a channel to records in that cache with a CQL processor. When I deploy the package containing the processor, I get the following error:
14:32:35,938 | alter query SimpleQuery start | [ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)' | CQLServer | FATAL
14:32:35,938 | alter query >>SimpleQuery<< start
specified predicate requires full scan of external source which is not supported. please modify the join predicate | [ACTIVE] ExecuteThread: '7' for queue: 'weblogic.kernel.Default (self-tuning)' | CQLServer | FATAL
I think that I'm using the entire key. If I change the key to a single field, it deploys OK. I found a similar issue when I defined a Java class to represent the composite key. Is it possible to join in this way on a composite key cache?
I could define another field which is a concatenation of the fields in the composite key but this is a little messy.
My config is as below:
<wlevs:caching-system id="MyCache" provider="coherence" />
<wlevs:event-type-repository>
<wlevs:event-type type-name="SimpleEvent">
<wlevs:properties>
<wlevs:property name="field1" type="char" />
<wlevs:property name="field2" type="char" />
</wlevs:properties>
</wlevs:event-type>
</wlevs:event-type-repository>
<wlevs:channel id="InChannel" event-type="SimpleEvent" >
<wlevs:listener ref="SimpleProcessor" />
</wlevs:channel>
<wlevs:processor id="SimpleProcessor">
<wlevs:listener ref="OutChannel" />
<wlevs:cache-source ref="SimpleCache" />
</wlevs:processor>
<wlevs:channel id="OutChannel" event-type="SimpleEvent" >
</wlevs:channel>
<wlevs:cache id="SimpleCache" value-type="SimpleEvent"
key-properties="field1,field2"
caching-system="MyCache" >
</wlevs:cache>
and the processor CQL is as follows:
<processor>
<name>SimpleProcessor</name>
<rules>
<query id="SimpleQuery">
<![CDATA[
select I.field1, I.field2 from InChannel [now] as I,
SimpleCache as S where
I.field1 = S.field1 and
I.field2 = S.field2
]]> </query>
</rules>
</processor>
Thanks
Mike
Unfortunately, joining on composite keys in a Coherence cache source is not supported in the released versions. This will be supported in the 12g release.
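One way to apply the concatenation workaround mentioned in the question is to expose a single synthetic key property built from the two composite-key fields, and join on that one property. Below is a minimal sketch; the class name, method name, and delimiter are illustrative assumptions, not part of the original config:

```java
// Illustrative helper for the concatenated-key workaround: combine the two
// composite-key fields into one synthetic key so the cache join predicate
// becomes a single equality. The delimiter is an assumption; choose one
// that cannot appear in the data.
public class CompositeKeyUtil {
    private static final String DELIMITER = "|";

    public static String combinedKey(String field1, String field2) {
        return field1 + DELIMITER + field2;
    }

    public static void main(String[] args) {
        System.out.println(combinedKey("ABC", "42")); // prints ABC|42
    }
}
```

The cache would then declare the synthetic field as its single key property, and the CQL predicate would compare the event's concatenated value against that one cache field.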
As you mention, defining another field as the key, formed by concatenating the original key fields, is the workaround. -
I'm defining a class with a composite @PrimaryKey, e.g.:
@Persistent
public class NewsKey {
    @KeyField(1) Long datetime;
    @KeyField(2) String ccy;
}
@Entity
public class News {
    @PrimaryKey
    private NewsKey newsKey;
}
I have a PrimaryIndex for this key:
PrimaryIndex<NewsKey, News> newsByKey = mystore.getPrimaryIndex(NewsKey.class, News.class);
When I do the search by this composite key:
long fromTime = ...; // some time value
long toTime = ...;   // some time value
NewsKey fromKey = new NewsKey(fromTime, ccy);
NewsKey toKey = new NewsKey(toTime, ccy);
boolean fromInclusive = true;
boolean toInclusive = true;
EntityCursor<News> cursor = newsByKey.entities(fromKey, fromInclusive, toKey, toInclusive);
The entities returned don't take the ccy String value into account.
But when I invert the order of the key fields, e.g.
@Persistent
public class NewsKey {
    @KeyField(1) String ccy;
    @KeyField(2) Long datetime;
}
the search by ccy and by datetime returns the correct result.
Does the order of @KeyField influence the results?
Please see the example and explanation at the top of this page:
http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/persist/model/KeyField.html
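The behavior can also be illustrated outside Berkeley DB with plain Java collections (this is an illustrative sketch, not the BDB API): composite keys sort field by field in @KeyField order, and entities() performs a range scan over that sort order.

```java
import java.util.Comparator;
import java.util.NavigableSet;
import java.util.SortedSet;
import java.util.TreeSet;

// Illustration of why @KeyField order matters: with datetime as the leading
// sort field, a range scan returns every ccy whose datetime lies inside the
// range; only with ccy first does the scan stay within one currency.
public class KeyOrderDemo {
    static final class Key {
        final long datetime;
        final String ccy;
        Key(long datetime, String ccy) { this.datetime = datetime; this.ccy = ccy; }
    }

    // datetime is the leading sort field, mirroring @KeyField(1) Long datetime.
    static final Comparator<Key> DATETIME_FIRST =
        Comparator.<Key>comparingLong(k -> k.datetime).thenComparing(k -> k.ccy);

    public static void main(String[] args) {
        NavigableSet<Key> index = new TreeSet<>(DATETIME_FIRST);
        index.add(new Key(100, "EUR"));
        index.add(new Key(150, "USD")); // different ccy, datetime inside the range
        index.add(new Key(200, "EUR"));

        // Range scan for ccy = "EUR", datetime 100..200 inclusive: the USD
        // entry is included because only the leading field bounds the scan.
        SortedSet<Key> hits =
            index.subSet(new Key(100, "EUR"), true, new Key(200, "EUR"), true);
        System.out.println(hits.size()); // prints 3, not 2
    }
}
```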
Does this clarify things?
--mark -
Customize Alert messages for checking duplicate value and Success alert
Hi All,
I want to show two alerts from the same "Submit" button:
1. An alert that checks for a duplicate value and shows a "Duplicate value found" message, and
2. A success alert if the form commits successfully and no duplicate value is found.
I have done all of this, but when alert 1 shows and I press its OK button, the second alert also shows, which I don't want.
What can I do about this issue? Can anyone help me?
Arif
Hi Manu,
I have placed the following code:
1. Against the Submit button:
if error_code = 40508 then
  commit_form;
elsif :USERDELETION.CANCELLATION_USERID is not null then
  do_key('COMMIT_FORM');
else
  null;
end if;
2. Code in the form-level KEY-COMMIT trigger:
commit_form;
DECLARE
  vAlert NUMBER;
BEGIN
  set_alert_property('ALERT_TO_VERIFY', ALERT_MESSAGE_TEXT,
    'Your form has been successfully submitted and your reference id is '||:USERDELETION.REF_NO);
  vAlert := SHOW_ALERT('ALERT_TO_VERIFY');
END;
3. Code in the form-level ON-ERROR trigger:
DECLARE
  vAlert NUMBER;
BEGIN
  if ERROR_CODE = 40508 then
    set_alert_property('ERROR_ALERT', ALERT_MESSAGE_TEXT,
      'A user deletion request has already been submitted for the user named "'||:USERDELETION.FULLNAME||'"');
    vAlert := SHOW_ALERT('ERROR_ALERT');
  elsif ERROR_CODE = 40202 then
    set_alert_property('ERROR_ALERT', ALERT_MESSAGE_TEXT,
      'Existing userid must be filled in!');
    vAlert := SHOW_ALERT('ERROR_ALERT');
  else
    message(error_type||to_char(error_code)||': '||error_text);
  end if;
END;
If anything is unclear, please ask me.
Arif -
Avoiding null and duplicate values using model clause
Hi,
I am trying to use the model clause to get a comma-separated list of data. The following is the scenario:
testuser>select * from test1;
ID VALUE
1 Value1
2 Value2
3 Value3
4 Value4
5 Value4
6
7 value5
8
8 rows selected.
the query I have is:
testuser>with src as (
2 select distinct id,value
3 from test1
4 ),
5 t as (
6 select distinct substr(value,2) value
7 from src
8 model
9 ignore nav
10 dimension by (id)
11 measures (cast(value as varchar2(100)) value)
12 rules
13 (
14 value[any] order by id =
15 value[cv()-1] || ',' || value[cv()]
16 )
17 )
18 select max(value) oneline
19 from t;
ONELINE
Value1,Value2,Value3,Value4,Value4,,value5,
What I find is that the query output contains a duplicate value and a null (',,'), because the data has nulls and duplicates. Is there a way to avoid the nulls and the duplicate values in the query output?
thanks,
Edited by: orausern on Feb 19, 2010 5:05 AM
Hi,
Try this code.
with
t as ( select substr(value,2) value, ind
       from test1
       model
       ignore nav
       dimension by (id)
       measures (cast(value as varchar2(100)) value, 0 ind)
       rules
       ( ind[any] = instr(value[cv()-1], value[cv()]),
         value[any] order by id = value[cv()-1] || CASE WHEN value[cv()] IS NOT NULL
                                                        AND ind[cv()] = 0 THEN ',' || value[cv()] END
       )
     )
select max(value) oneline
from t;
SQL> select * from test1;
ID VALUE
1 Value1
2 Value2
3 Value3
4 Value4
5 Value4
6
7 value5
8
8 rows selected.
SQL> with
2 t as ( select substr(value,2)value,ind
3 from test1
4 model
5 ignore nav
6 dimension by (id)
7 measures (cast(value as varchar2(100)) value, 0 ind)
8 rules
9 ( ind[any]= instr(value[cv()-1],value[cv()]),
10 value[any] order by id = value[cv()-1] || CASE WHEN value[cv()] IS NOT NULL
11 and ind[cv()]=0 THEN ',' || value[cv()] END
12 )
13 )
14 select max(value) oneline
15 from t;
ONELINE
Value1,Value2,Value3,Value4,value5
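The skip-nulls-and-already-seen-values logic of the rules above can be cross-checked outside SQL. Here is an illustrative Java sketch (class and method names are mine, not from the thread):

```java
import java.util.Arrays;
import java.util.List;

// Cross-check of the model-clause logic: append each value to the running
// string only when it is non-null and not already contained in it, mirroring
// the "value[cv()] IS NOT NULL and ind[cv()] = 0" condition. Note that, like
// the instr() test in the query, this also suppresses values that are mere
// substrings of earlier values.
public class CsvAggregate {
    public static String aggregate(List<String> values) {
        StringBuilder out = new StringBuilder();
        for (String v : values) {
            if (v != null && out.indexOf(v) == -1) {
                if (out.length() > 0) out.append(',');
                out.append(v);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        List<String> data = Arrays.asList(
            "Value1", "Value2", "Value3", "Value4", "Value4", null, "value5", null);
        System.out.println(aggregate(data)); // prints Value1,Value2,Value3,Value4,value5
    }
}
```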
SQL> -
Unable to Enforce Unique Values, Duplicate Values Exist
I have a list in SharePoint 2010 containing roughly 1000 items. I would like to enforce unique values on the title field. I started by cleaning up the list, ensuring that all items already had a unique value. To help with this, I used the export-to-Excel action, then highlighted duplicates within Excel. So as far as I can tell, there are no duplicates within that list column.
However, when I try to enable the Enforce Unique Values option, I receive an error that duplicate values exist within the field and must be removed.
Steps I've taken so far to identify / resolve duplicate values:
- Multiple exports to Excel from an unfiltered list view, then using highlight duplicates feature > no duplicates found
- deleted ALL versions of every item from the list (except current), ensured they were completely removed by deleting from both site and site collection recycle bins
- Using the SP Powershell console, grabbed all list items and exported all of the "Title" type fields (Item object Title, LinkTitle, LinkTitleNoMenu, etc) to a csv and ran that through excel duplicate checking as well.
Unless there's some ridiculous hidden field value that MS expects anyone attempting to enforce unique values on a list to know about (which would be simple enough for anyone to figure out, if it didn't just throw an error), I've exhausted everything I can think of that might cause the list to report duplicate values for that field.
While I wait to see if someone else has an idea, I'm also going to see what happens if I wipe the Crawl Index and start it from scratch.
- Jon
First, I create an index for a column in list settings; it works fine whether a duplicate value exists or not.
Then I set enforce unique values on the field; after clicking OK, I get the duplicate-values error message.
With SQL Server Profiler, I find the call to proc_CheckIfExistingFieldHasDuplicateValues and its parameters. After reviewing this stored procedure in the content database,
I create the following script in SQL Server Management Studio:
declare @siteid uniqueidentifier
declare @webid uniqueidentifier
declare @listid uniqueidentifier
declare @fieldid uniqueidentifier
set @siteid = 'F7C40DC9-E5D3-42D7-BE60-09B94FD67BEF'
set @webid = '17F02240-CE04-4487-B961-0482B30DDA84'
set @listid = 'B349AF8D-7238-419D-B6C4-D88194A57EA7'
set @fieldid = '195A78AC-FC52-4212-A72B-D03144DC1E24'
SELECT *
FROM TVF_UserData_List(@ListId) AS U1
INNER MERGE JOIN NameValuePair_Latin1_General_CI_AS AS NVP1
    WITH (INDEX=NameValuePair_Latin1_General_CI_AS_MatchUserData)
    ON NVP1.ListId = @ListId
    AND NVP1.ItemId = U1.tp_Id
    AND ((NVP1.Level = 1 AND U1.tp_DraftOwnerId IS NULL) OR NVP1.Level = 2)
    AND NOT((DATALENGTH(ISNULL(NVP1.Value, ...)) = 0))
    AND U1.tp_Level = NVP1.Level
    AND U1.tp_IsCurrentVersion = CONVERT(bit, 1)
    AND U1.tp_CalculatedVersion = 0
    AND U1.tp_RowOrdinal = 0
INNER MERGE JOIN NameValuePair_Latin1_General_CI_AS AS NVP2
    WITH (INDEX=NameValuePair_Latin1_General_CI_AS_CI)
    ON NVP2.SiteId = @SiteId
    AND NVP2.ListId = @ListId
    AND NVP2.FieldId = @FieldId
    AND NVP2.Value = NVP1.Value
    AND NVP2.ItemId <> NVP1.ItemId
CROSS APPLY TVF_UserData_ListItemLevelRow(NVP2.ListId, NVP2.ItemId, NVP2.Level, 0) AS U2
WHERE ((NVP2.Level = 1 AND U2.tp_DraftOwnerId IS NULL) OR NVP2.Level = 2)
AND NOT((DATALENGTH(ISNULL(NVP2.Value, ...)) = 0))
I can find the duplicate list items based on the result returned by the query above.
Note that you need to change the parameter values accordingly, and change the name of the NameValuePair_Latin1_General_CI_AS table based on the last parameter of the proc_CheckIfExistingFieldHasDuplicateValues stored procedure. You can review the code of this stored procedure yourself.
Note that direct operation on the content database in a production environment is not supported; please do all of this in a test environment.
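One reason Excel may report no duplicates while SharePoint does is that the stored procedure compares values under a case- and accent-insensitive collation (Latin1_General_CI_AS). A rough client-side check along those lines can be sketched as follows (illustrative code, not the SharePoint API; the normalization only approximates the SQL collation):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Find titles that collide under a case-insensitive comparison, approximating
// the CI collation used by the duplicate check. Trimming is an extra
// assumption to catch stray whitespace.
public class DuplicateTitleCheck {
    public static List<String> findCollisions(List<String> titles) {
        Map<String, String> seen = new HashMap<>();
        List<String> collisions = new ArrayList<>();
        for (String title : titles) {
            String norm = title.trim().toLowerCase(Locale.ROOT);
            String prev = seen.putIfAbsent(norm, title);
            if (prev != null) {
                collisions.add(prev + " <-> " + title);
            }
        }
        return collisions;
    }

    public static void main(String[] args) {
        List<String> titles = Arrays.asList("Report A", "report a ", "Report B");
        // "Report A" and "report a " collide under the CI comparison.
        System.out.println(findCollisions(titles).size()); // prints 1
    }
}
```

Running the list's title values through a check like this may surface pairs that Excel's exact-match highlighting misses.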