File-to-File: fetching the record based on a field value
Hi,
I am working on a file-XI-file scenario. I am sending 4 records from the sender file. Now, based on one field's value, I want that particular record to be appended to my target file and the rest of the records to be appended to the other file.
regards,
Raghu
What you need here is a combination of XPath conditions in receiver determination (in case of different receiver systems), interface determination, and a message split.
Ref:
Receiver Determination - XPath:
/people/shabarish.vijayakumar/blog/2006/06/07/customise-your-xpath-expressions-in-receiver-determination
/people/shabarish.vijayakumar/blog/2005/08/03/xpath-to-show-the-path-multiple-receivers
XPath in Interface Determination:
/people/suraj.sr/blog/2006/01/05/multiple-inbound-interfaces-within-a-service
Multi-mapping:
/people/jin.shin/blog/2006/02/07/multi-mapping-without-bpm--yes-it146s-possible
Similar Messages
-
How do I generate the PDF file using the name of a field?
Hi,
here's a sample.
LiveCycle Blog: "Formulare in bestimmte Verzeichnisse speichern und nach Inhalt aus Formularfeld benennen" (Save forms to specific directories and name them based on form field content) -
Can we split and fetch the records in Database Adapter
Hi,
I designed a Database Adapter to fetch records from an Oracle database. Sometimes the adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process chokes and fails with the error:
java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
Could someone help me to resolve this?
In the Database Adapter, can we split and fetch the records if there are more than 1,000?
For example: the first 100 records as one set, the next 100 as a second set, and so on.
Thank you.

You can send the records in batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.
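The debatching idea — handing the process fixed-size chunks of a large result set instead of everything at once — can be sketched in plain Java for intuition. This is only an illustration (the class name, chunk size, and record type are made up here); the real splitting is configured on the DB adapter itself, not hand-coded:

```java
import java.util.ArrayList;
import java.util.List;

public class Debatch {
    // Split a list of records into consecutive chunks of at most 'size' elements.
    static <T> List<List<T>> chunks(List<T> records, int size) {
        List<List<T>> result = new ArrayList<>();
        for (int i = 0; i < records.size(); i += size) {
            // Copy the view returned by subList so each chunk is independent.
            result.add(new ArrayList<>(records.subList(i, Math.min(i + size, records.size()))));
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> records = new ArrayList<>();
        for (int i = 0; i < 250; i++) records.add(i);
        List<List<Integer>> batches = chunks(records, 100);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 50 (the last, partial batch)
    }
}
```

Processing each chunk separately keeps the per-invocation memory bounded, which is exactly what avoids the OutOfMemoryError above.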
-
Hi,
Please suggest the best way to fetch records from the table described below. It is Oracle 10gR2 on Linux.
Whenever a client visits the office, a record is created for him. Company policy is to maintain 10 years of data in the transaction table, but the table accumulates about 3 million records per year.
The table has the following key columns for the SELECT (sample table):
Client_Visit
ID NUMBER(12,0) -- sequence-generated number
EFF_DTE DATE -- effective date of the customer (sometimes the client becomes invalid and later becomes valid again)
CREATE_TS TIMESTAMP(6)
CLIENT_ID NUMBER(9,0)
CASCADE_FLG VARCHAR2(1)
On most reports, records are fetched by MAX(eff_dte), MAX(create_ts), and cascade_flg = 'Y'.
I have the following two queries, but neither is cost-effective and both take 8 minutes to display the records.
Code 1:
SELECT au_subtyp1.au_id_k,
       au_subtyp1.pgm_struct_id_k
  FROM au_subtyp au_subtyp1
 WHERE au_subtyp1.create_ts =
       (SELECT MAX (au_subtyp2.create_ts)
          FROM au_subtyp au_subtyp2
         WHERE au_subtyp2.au_id_k = au_subtyp1.au_id_k
           AND au_subtyp2.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
           AND au_subtyp2.eff_dte =
               (SELECT MAX (au_subtyp3.eff_dte)
                  FROM au_subtyp au_subtyp3
                 WHERE au_subtyp3.au_id_k = au_subtyp2.au_id_k
                   AND au_subtyp3.create_ts < TO_DATE ('2013-01-01', 'YYYY-MM-DD')
                   AND au_subtyp3.eff_dte <= TO_DATE ('2012-12-31', 'YYYY-MM-DD')))
   AND au_subtyp1.exists_flg = 'Y'
Explain Plan
Plan hash value: 2534321861
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 1 | FILTER | | | | | | |
| 2 | HASH GROUP BY | | 1 | 91 | | 33265 (2)| 00:06:40 |
|* 3 | HASH JOIN | | 1404K| 121M| 19M| 33178 (1)| 00:06:39 |
|* 4 | HASH JOIN | | 307K| 16M| 8712K| 23708 (1)| 00:04:45 |
| 5 | VIEW | VW_SQ_1 | 307K| 5104K| | 13493 (1)| 00:02:42 |
| 6 | HASH GROUP BY | | 307K| 13M| 191M| 13493 (1)| 00:02:42 |
|* 7 | INDEX FULL SCAN | AUSU_PK | 2809K| 125M| | 13493 (1)| 00:02:42 |
|* 8 | INDEX FAST FULL SCAN| AUSU_PK | 2809K| 104M| | 2977 (2)| 00:00:36 |
|* 9 | TABLE ACCESS FULL | AU_SUBTYP | 1404K| 46M| | 5336 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("AU_SUBTYP1"."CREATE_TS"=MAX("AU_SUBTYP2"."CREATE_TS"))
3 - access("AU_SUBTYP2"."AU_ID_K"="AU_SUBTYP1"."AU_ID_K")
4 - access("AU_SUBTYP2"."EFF_DTE"="VW_COL_1" AND "AU_ID_K"="AU_SUBTYP2"."AU_ID_K")
7 - access("AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
filter("AU_SUBTYP3"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND
"AU_SUBTYP3"."EFF_DTE"<=TO_DATE(' 2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
8 - filter("AU_SUBTYP2"."CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00')
9 - filter("AU_SUBTYP1"."EXISTS_FLG"='Y')

Code 2:
I already raised a thread a week back, and Dom suggested the following query. It is cost-effective, but the performance is the same and it uses the same amount of temp tablespace:
select au_id_k,pgm_struct_id_k from (
SELECT au_id_k
, pgm_struct_id_k
, ROW_NUMBER() OVER (PARTITION BY au_id_k ORDER BY eff_dte DESC, create_ts DESC) rn,
create_ts, eff_dte,exists_flg
FROM au_subtyp
WHERE create_ts < TO_DATE('2013-01-01','YYYY-MM-DD')
AND eff_dte <= TO_DATE('2012-12-31','YYYY-MM-DD')
) d where rn =1 and exists_flg = 'Y'
--Explain Plan
Plan hash value: 4039566059
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 1 | VIEW | | 2809K| 168M| | 40034 (1)| 00:08:01 |
|* 2 | WINDOW SORT PUSHED RANK| | 2809K| 133M| 365M| 40034 (1)| 00:08:01 |
|* 3 | TABLE ACCESS FULL | AU_SUBTYP | 2809K| 133M| | 5345 (2)| 00:01:05 |
Predicate Information (identified by operation id):
1 - filter("RN"=1 AND "EXISTS_FLG"='Y')
2 - filter(ROW_NUMBER() OVER ( PARTITION BY "AU_ID_K" ORDER BY
INTERNAL_FUNCTION("EFF_DTE") DESC ,INTERNAL_FUNCTION("CREATE_TS") DESC )<=1)
3 - filter("CREATE_TS"<TIMESTAMP' 2013-01-01 00:00:00' AND "EFF_DTE"<=TO_DATE('
2012-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Thanks,
Vijay

Hi Justin,
Thanks for your reply. I am running this on our test environment, as I don't want to run it on production right now. The test environment holds 2,809,605 records (about 2.8 million).
The query output count is 281,699 records, so the selectivity is 0.099. There are 2,808,905 distinct combinations of create_ts, eff_dte, and exists_flg. I am sure the index scan is not going to help much, as you said.
The core problem is that both queries use a lot of temp tablespace. When we use this query to join to other tables (which have the same design as above), the temp tablespace grows even bigger.
Both the production and test environment are 3 Node RAC.
First Query...
CPU used by this session 4740
CPU used when call started 4740
Cached Commit SCN referenced 21393
DB time 4745
OS Involuntary context switches 467
OS Page reclaims 64253
OS System time used 26
OS User time used 4562
OS Voluntary context switches 16
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 2487
bytes sent via SQL*Net to client 15830
calls to get snapshot scn: kcmgss 37
consistent gets 52162
consistent gets - examination 2
consistent gets from cache 52162
enqueue releases 19
enqueue requests 19
enqueue waits 1
execute count 2
ges messages sent 1
global enqueue gets sync 19
global enqueue releases 19
index fast full scans (full) 1
index scans kdiixs1 1
no work - consistent read gets 52125
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time cpu 1
parse time elapsed 1
physical write IO requests 69
physical write bytes 17522688
physical write total IO requests 69
physical write total bytes 17522688
physical write total multi block requests 69
physical writes 2139
physical writes direct 2139
physical writes direct temporary tablespace 2139
physical writes non checkpoint 2139
recursive calls 19
recursive cpu usage 1
session cursor cache hits 1
session logical reads 52162
sorts (memory) 2
sorts (rows) 760
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 1
user calls 11
workarea executions - onepass 1
workarea executions - optimal 9
Second Query
CPU used by this session 1197
CPU used when call started 1197
Cached Commit SCN referenced 21393
DB time 1201
OS Involuntary context switches 8684
OS Page reclaims 21769
OS System time used 14
OS User time used 1183
OS Voluntary context switches 50
SQL*Net roundtrips to/from client 9
bytes received via SQL*Net from client 767
bytes sent via SQL*Net to client 15745
calls to get snapshot scn: kcmgss 17
consistent gets 23871
consistent gets from cache 23871
db block gets 16
db block gets from cache 16
enqueue releases 25
enqueue requests 25
enqueue waits 1
execute count 2
free buffer requested 1
ges messages sent 1
global enqueue get time 1
global enqueue gets sync 25
global enqueue releases 25
no work - consistent read gets 23856
opened cursors cumulative 2
parse count (hard) 1
parse count (total) 2
parse time elapsed 1
physical read IO requests 27
physical read bytes 6635520
physical read total IO requests 27
physical read total bytes 6635520
physical read total multi block requests 27
physical reads 810
physical reads direct 810
physical reads direct temporary tablespace 810
physical write IO requests 117
physical write bytes 24584192
physical write total IO requests 117
physical write total bytes 24584192
physical write total multi block requests 117
physical writes 3001
physical writes direct 3001
physical writes direct temporary tablespace 3001
physical writes non checkpoint 3001
recursive calls 25
session cursor cache hits 1
session logical reads 23887
sorts (disk) 1
sorts (memory) 2
sorts (rows) 2810365
table scan blocks gotten 23856
table scan rows gotten 2809607
table scans (short tables) 1
user I/O wait time 2
user calls 11
workarea executions - onepass 1
workarea executions - optimal 5

Thanks,
Vijay
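For intuition, the ROW_NUMBER() approach in Code 2 — keep exactly one row per au_id_k, the one with the greatest (eff_dte, create_ts), then apply the exists_flg = 'Y' filter — can be sketched outside the database in plain Java. This is only an illustrative model (the String[] row layout and sample dates are made up), not a replacement for letting Oracle do the work:

```java
import java.util.*;

public class LatestPerKey {
    // Each row: {id, eff_dte, create_ts, exists_flg}; ISO-format dates compare correctly as strings.
    static List<String[]> latest(List<String[]> rows) {
        Map<String, String[]> best = new HashMap<>();
        for (String[] r : rows) {
            String[] cur = best.get(r[0]);
            // Keep the row with the greatest (eff_dte, create_ts) per id.
            if (cur == null
                    || r[1].compareTo(cur[1]) > 0
                    || (r[1].equals(cur[1]) && r[2].compareTo(cur[2]) > 0)) {
                best.put(r[0], r);
            }
        }
        List<String[]> out = new ArrayList<>();
        for (String[] r : best.values()) {
            if ("Y".equals(r[3])) out.add(r);  // mirrors "rn = 1 and exists_flg = 'Y'"
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
            new String[]{"1", "2012-05-01", "2012-05-01 10:00", "Y"},
            new String[]{"1", "2012-06-01", "2012-06-01 10:00", "Y"},
            new String[]{"2", "2012-03-01", "2012-03-01 10:00", "N"});
        List<String[]> result = latest(rows);
        // Only id 1's 2012-06-01 row survives: id 2's latest row has exists_flg = 'N'.
        System.out.println(result.size() + " row(s); id=" + result.get(0)[0]
            + " eff_dte=" + result.get(0)[1]);
    }
}
```

Note this single-pass model needs no sort at all, which is why the window-sort temp usage in the plans above is the cost worth attacking.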
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:17 AM
Edited by: Vijayaraghavan Krishnan on Nov 28, 2012 11:19 AM -
Hello,
Could someone help me please ?
I have a listing of my sales orders, and I want to make changes to an order by opening the form pre-populated with that record. When I click on a particular order number in my order listing and call the form to display the details, the form opens but says "Query could not fetch the record". I do not know why. Please help me with the solution.
Thanx

Hello,
I think you are passing orderno to the called form as a parameter. If you are using a parameter list, check:
1. Is the parameter data arriving in the form correctly?
2. Have you changed the WHERE clause of the other block so that it will display the record with the passed orderno?
I am expecting more details from you.
Thanx
Adi -
Say I have an emp table:
eno ename sales
1 david 1100
2 lara 200
3 james 1000
1 david 1200
2 lara 5400
4 white 890
3 james 7500
1 david 1313
eno can be duplicated.
When I give empno 1, I want to display his sales, i.e. 1100, 1200, 1313.
The first time, I will go to the database and fetch the records, but from then on I don't want to go to the database; I will fetch the records from a cache.
I thought of doing it using a HashMap or Hashtable, but neither allows duplicate keys (and empno has duplicate values).
How do I solve this problem?

Hi,
Have you ever considered splitting that table up? You are thinking about caching — that's a very good idea. But doesn't it make it very evident that the table structure you have keeps a lot of redundant data? It hardly makes sense to have sales figures in an emp table. Instead, you can have an Emp table containing eno and ename, with eno as the primary key, and another table called Sales with eno and sales columns, where eno references the Emp table.
If you still want to continue with this structure, then I think you can go ahead with the solution already suggested to you.
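A minimal sketch of the cache itself, assuming the current single-table structure: key the map by eno and store a list of sales values, so duplicate eno rows simply append to the list instead of colliding. Class and method names here are illustrative, not from any framework:

```java
import java.util.*;

public class SalesCache {
    private final Map<Integer, List<Integer>> cache = new HashMap<>();

    // Called once per row fetched from the database on the first lookup.
    void put(int eno, int sale) {
        // computeIfAbsent creates the list on first sight of this eno, then appends.
        cache.computeIfAbsent(eno, k -> new ArrayList<>()).add(sale);
    }

    // Subsequent lookups hit the cache instead of the database.
    List<Integer> salesFor(int eno) {
        return cache.getOrDefault(eno, Collections.emptyList());
    }

    public static void main(String[] args) {
        SalesCache c = new SalesCache();
        int[][] rows = {{1, 1100}, {2, 200}, {3, 1000}, {1, 1200},
                        {2, 5400}, {4, 890}, {3, 7500}, {1, 1313}};
        for (int[] r : rows) c.put(r[0], r[1]);
        System.out.println(c.salesFor(1)); // [1100, 1200, 1313]
    }
}
```

The map key is unique (one entry per eno) while the list holds the duplicates, which is exactly what HashMap alone could not express.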
Aviroop -
Hi,
I am facing a peculiar situation with a costly DB view.
I attempted to reduce the total view cost, specifically for 223 records fetched: the cost dropped from 149 to 74, the recursive calls from 796 to 224, the consistent gets from 311,516 to 310,341, and the physical reads from 7 to 0.
But the amount of time needed to fetch the results is greater than with the old version of the view... it may even be double.
Do you have any idea about this?
Note: I have gathered fresh statistics.
I am using DB 10g Release 2.
Thanks,
Sim

Try tracing the query and see what tkprof shows you.
alter session set events '10046 trace name context forever, level 12';
run the query
alter session set events '10046 trace name context off';
Then run tkprof on the trace file produced to see where the database is spending its time/effort. Do likewise for the baseline query in a different session (so you will generate a different trace file).
If that does not produce anything useful, try using
alter session set events '10053 trace name context forever';
run the query
alter session set events '10053 trace name context off';
Then examine the trace file to see if you can learn anything.
Since you have not given us anything more to go on, that is about all the help I can give.... -
Hibernate fails to fetch the record in the Database
Hi ,
We are developing a J2EE based server application which has to cater to different types of mobile clients.
The clients will be pinging the server at a rate of 4-5 requests per sec.
I use HibernateUtil.getSession().load(PK) and it returns null, but the record is very much there in the DB; I have not inserted that record using JDBC.
This happens at times, not always.
Exception: No row with the given identifier exists in the session.
If I destroy and recreate the session factory it works fine, but that is an expensive operation.
DB : Mysql 5
Hibernate version 3
My mapping file :
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class name="client.Contacts" table="contacts">
<id name="cId" column="c_Id" unsaved-value="null">
<generator class="identity" />
</id>
<property name="ContactID">
<column name="ContactID" />
</property>
<property name="firstName">
<column name="FirstName" />
</property>
<property name="lastName">
<column name="LastName" />
</property>
<property name="address">
<column name="address" />
</property>
<property name="email">
<column name="Email" />
</property>
<property name="url">
<column name="url" />
</property>
<property name="birthday">
<column name="birthday" />
</property>
<property name="note">
<column name="note" />
</property>
<property name="telephone">
<column name="telephone" />
</property>
<property name="telephoneFax">
<column name="telephoneFax" />
</property>
<property name="telephoneHome">
<column name="telephoneHome" />
</property>
<property name="telephoneWork">
<column name="telephoneWork" />
</property>
<property name="telephoneCell">
<column name="telephoneCell" />
</property>
<property name="nickName">
<column name="nickName" />
</property>
<property name="version">
<column name="version" />
</property>
<property name="userPhoneFK">
<column name="userPhoneFK" />
</property>
<property name="isDeleted">
<column name="isDeleted"/>
</property>
<property name="userModifiedFlag">
<column name="isModifiedbyUser"/>
</property>
<property name="isRestored">
<column name="isRestored"/>
</property>
<property name="lastBackUpTime">
<column name="lastBackUpTime" />
</property>
<property name="isAddedOnServer">
<column name="isAddedOnServer" />
</property>
</class>
</hibernate-mapping>
Java code :
String sortContactsQuery = "from Contacts c where c.isRestored = 'false' and c.isDeleted = 'false' and c.userPhoneFK = " + msisdnID + " order by c.ContactID ASC";
contactList = getSession().createQuery(sortContactsQuery).list();
// Note: the original code then assigned contactList = criteria.list(), but 'criteria'
// was never defined, and that assignment overwrote the query result above.
This list is empty even though there are records.
Thanks in advance.
Ven

Sorry, but you have posted your question in the wrong forum. Your question is related to neither Patterns nor OO Design.
-
How to quickly fetch the records from an SQL recordset
I'm using LW/CVI V7.1.1 and SQL Toolkit V2.06.
When displaying the recordset from a SELECT statement I use the following construct:
SQLhandle1 = DBActivateSQL (DBhandle, SELECTtext);
/* DBBindCol() statements... */
numRecs = DBNumberOfRecords (SQLhandle1);
for (n = 1; n <= numRecs; n++) {
    DBFetchNext (SQLhandle1);
    /* display record to the user... */
}
DBDeactivateSQL (SQLhandle1);
This has always worked fine for me with local databases. Now I am developing an app for a remote database, and fetching each selected record is proving to be an issue: it takes, at best, 60 ms of round-trip network access to fetch each record. When selecting many records, the fetches add up to a considerable delay.
My question is, how can I bind the entire recordset to my application variables, (or to a local table?) in a single request to the database? Does LW/CVI support such a method? Or perhaps someone knows an SQL method to help me?
Thanks

Hi Michael,
Thanks for the help. This is what I was looking for; not sure why I missed it!
However, after trying it out, it doesn't seem to help. The statement DBGetVariantArray(SQLhandle, &array, &recs, &fields); takes the same amount of time to get the records as the individual DBFetchNext() calls. So if my SQL statement matches 100 records, the DBGetVariantArray() call takes 100 * 60 ms to complete.
Is there a DB attribute setting that needs changed? -
Hi all
I want to fetch just twenty thousand records from a table. My query takes too long to fetch them. I have posted my working query below; could you correct it for me? Thanks in advance.
Query
select
b.Concatenated_account Account,
b.Account_description description,
SUM(case when(Bl.ACTUAL_FLAG='B') then
((NVL(Bl.PERIOD_NET_DR, 0)- NVL(Bl.PERIOD_NET_CR, 0)) + (NVL(Bl.PROJECT_TO_DATE_DR, 0)- NVL(Bl.PROJECT_TO_DATE_CR, 0)))end) "Budget_2011"
from
gl_balances Bl,
gl_code_combinations GCC,
psb_ws_line_balances_i b ,
gl_budget_versions bv,
gl_budgets_v gv
where
b.CODE_COMBINATION_ID=gcc.CODE_COMBINATION_ID and bl.CODE_COMBINATION_ID=gcc.CODE_COMBINATION_ID and
bl.budget_version_id =bv.BUDGET_VERSION_ID and gv.budget_version_id= bv.budget_version_id
and gv.latest_opened_year in (select latest_opened_year-3 from gl_budgets_v where latest_opened_year=:BUDGET_YEAR )
group by b.Concatenated_account, b.Account_description

Hi,
If this question is related to SQL, please post it in the SQL forum.
Otherwise, provide more information on how this SQL is being used, and whether you want to tune the SQL itself or the way it fetches the information from the DB and displays it in OAF.
Regards,
Sandeep M. -
Fetch the records based on number
Hi experts,
I have a requirement: if the user gives an input of 5, the program should fetch 5 records from the database.
for example database values are
SERNR MATNR
101 A
102 A
103 A
104 A
105 A
106 A
107 A
If the user gives the input as 5, it should fetch 5 records, i.e. 101 to 105. Can anybody please help me write the select query for this?
Thanks in advance,
Veena.
Edited by: s veena on Jan 18, 2011 5:52 AM

Hi Veena,
You can use UP TO n ROWS in your SELECT query. For example:
SELECT matnr FROM mara INTO TABLE it_matnr UP TO p_number ROWS.
" Here p_number is the selection-screen parameter.
It will fetch only as many records as the parameter specifies.
Thanks & Regards,
Faheem. -
How to fetch the description of a field
Hi All,
Could anybody please tell me how to fetch the field description for a given field name? Is there a function module for it, or how else can I fetch it?
Thanks in Advance.
Regards,
Rakesh.

Hi,
Copy and paste this code, and at the end check the table len_table:
DATA: len_table TYPE STANDARD TABLE OF dfies.
CALL FUNCTION 'DDIF_FIELDINFO_GET'
EXPORTING
tabname = 'JKAP' " change as per your requirement
FIELDNAME = 'VBELN' " change as per your requirement
* LANGU = SY-LANGU
* LFIELDNAME = ' '
* ALL_TYPES = ' '
* GROUP_NAMES = ' '
* UCLEN =
* IMPORTING
* X030L_WA =
* DDOBJTYPE =
* DFIES_WA =
* LINES_DESCR =
TABLES
DFIES_TAB = len_table
* FIXED_VALUES =
* EXCEPTIONS
* NOT_FOUND = 1
* INTERNAL_ERROR = 2
* OTHERS = 3
IF sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
Check out the internal Table len_table.
Hope it helps you,
Regards,
Abhijit G. Borkar -
Not fetching the records through DTP
Hi Gurus,
I am facing a problem while loading data into an InfoCube through a DTP.
I have successfully loaded the data up to the PSA, but I am not able to load the records into the InfoCube.
The request finished successfully with status green, but I can see only 0 records loaded.
Later, one of my friends executed the same DTP successfully, with all the records loaded.
Can you please tell me why it is not working with my user ID?
I have found the following difference in the monitor:
I am not able to see any selections for my request, but I can see REQUID = 871063 in the selection of the request started by my friend.
Can anyone tell me why REQUID = 871063 is not filled in automatically when I start the schedule?

Hi,
I guess the DTP update mode is DELTA. You and your friend/colleague executed the SAME DTP object within a small time gap, and during that period no new transactions were posted in the source.
Try executing it again after a couple of hours.
Regards -
How to Ignore records having duplicate field value
Hi,
I have a fixed length flat file having multiple records.
Ex:
1000423421A8090
1000802421A8091
1000454421B8092
1000412421C8093
The first two records have 'A' at position 11, so I have to take only one of them and ignore the second. In other words, if there is a duplicate field value at the 11th position, I have to select only one record.
Can you please let me know how to handle this.
Thanks
Mukesh

Mukesh,
Use the following UDFs to achieve this.

getchar:
char ret = Values.charAt(10);
return "" + ret;

Maptarget:
// write your code here
int len = Values.length;
String char_at = "";
String pos = "";
for (int j = 0; j < len; j++) {
  if (j == 0) {
    result.addValue(Values[j]);
  } else {
    char_at = "" + Values[j - 1].charAt(10);
    pos = "" + Values[j].charAt(10);
    if (!char_at.equals(pos))
      result.addValue(Values[j]);
  }
}
Mapping Logic:
1. Use the sortByKey function: map Source -> UDF(getchar) to the first parameter, and Source itself to the second parameter.
2. Map the output of sortByKey to UDF(Maptarget).
3. Finally, map the output of the UDF to the target.
I checked it; it's working as per your requirement.
If you have any doubts, reply and I'll help you out.
Note: In UDF(MapTarget) use Queue
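Outside the XI mapping runtime, the same dedup-by-position rule — keep only the first record for each distinct character at the 11th position (zero-based index 10) — can be sketched in plain, self-contained Java. The class and method names are illustrative only:

```java
import java.util.*;

public class DedupByPosition {
    // Keep only the first record for each distinct character at position 11 (index 10).
    static List<String> dedup(List<String> records) {
        Set<Character> seen = new HashSet<>();
        List<String> out = new ArrayList<>();
        for (String rec : records) {
            // Set.add() returns false when the key was already present,
            // so duplicates at the 11th position are skipped.
            if (seen.add(rec.charAt(10))) {
                out.add(rec);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> in = Arrays.asList(
            "1000423421A8090", "1000802421A8091", "1000454421B8092", "1000412421C8093");
        // The second 'A' record (1000802421A8091) is dropped.
        System.out.println(dedup(in));
    }
}
```

The HashSet plays the role that sortByKey plus the adjacent-value comparison plays in the UDF pair above: it guarantees one record per 11th-position value.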
Best Regards,
Raj. -
How to find the list of org. field values in SAP
Hi,
I need a list of org field values.
For example, I want to know:
Plant = WERKS, Cost Center = KOSTL.
Similarly, I need the list of all org values.
What is the table name?
Thanks in advance
SR
Message was edited by:
sunny raj

My apologies...
I need the list of non-org field values with their descriptions, for example:
BEGRU Authorization group
BSART Order type
BWART Movement type (inventory management)
I have already downloaded the org field values from the USVAR table, as Ben said.
Thanks,
sr