Query for extracting specific data
Hi,
I have two tables, each containing more than 100,000 records.
customer_mast table
(CUSTOMER_ID
LOGIN_NAME
CREATE_DATE
CREDENTIAL_TYPE)
and customer_prod table
(CUSTOMER_ID
CREDENTIAL_TYPE
IND_PROD_LOGIN_ID)
IND_PROD_LOGIN_ID (numeric) is the customer's card number and LOGIN_NAME is the user ID (alphanumeric).
Now, from these two tables I need to select only those records that meet the conditions below:
1) LOGIN_NAME is the same as IND_PROD_LOGIN_ID.
2) For a given IND_PROD_LOGIN_ID there is only one row in the customer_mast table, i.e. no duplicate rows.
The problem is that earlier a user ID was allowed to be numeric as well as alphanumeric. After recent changes I changed the rules to allow only alphanumeric user IDs, and now I want to clean up the records whose LOGIN_NAME is numeric. But there is a problem: some cards (IND_PROD_LOGIN_ID) have two entries in the customer_mast table, one numeric and one alphanumeric, due to the registration process on my site, and for those customers I don't want to delete any records.
Please help me with this.
Please avoid cross-posting across different forums, especially since this forum is not for SQL.
Here your other thread :
Select query
Also, when asking such a question, it is better to mention your Oracle version and what you have tried so far.
Nicolas.
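The cleanup the poster describes can be sketched on a toy dataset (table and column names are taken from the post, but the sample rows and the "card appears once" rule are invented for illustration; on Oracle the all-digits test would typically be REGEXP_LIKE(login_name, '^[0-9]+$') rather than SQLite's GLOB):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical miniatures of the two tables described in the post.
cur.execute("CREATE TABLE customer_mast (customer_id INTEGER, login_name TEXT)")
cur.execute("CREATE TABLE customer_prod (customer_id INTEGER, ind_prod_login_id TEXT)")
cur.executemany("INSERT INTO customer_mast VALUES (?, ?)",
                [(1, "12345"),      # numeric-only login on a card with one entry
                 (2, "67890"),      # numeric login, but the same card also has
                 (3, "alice01")])   # an alphanumeric registration (customer 3)
cur.executemany("INSERT INTO customer_prod VALUES (?, ?)",
                [(1, "12345"), (2, "67890"), (3, "67890")])

# Delete all-digit LOGIN_NAMEs only when the card has a single entry,
# i.e. no second (alphanumeric) registration exists for the same card.
cur.execute("""
    DELETE FROM customer_mast
    WHERE login_name GLOB '[0-9]*' AND login_name NOT GLOB '*[^0-9]*'
      AND customer_id IN (
            SELECT p.customer_id FROM customer_prod p
            WHERE (SELECT COUNT(*) FROM customer_prod p2
                   WHERE p2.ind_prod_login_id = p.ind_prod_login_id) = 1)
""")
remaining = [r[0] for r in cur.execute(
    "SELECT login_name FROM customer_mast ORDER BY customer_id")]
print(remaining)  # card 12345 had only one entry, so its numeric login is removed
```

The subquery stands in for the "card registered both numerically and alphanumerically" case the poster wants to preserve; the real rule would depend on the actual schema.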
Similar Messages
-
Query for extracting the data in tags
I have data in table like
col1 col2
1 Appli<resourceId>Page_Attorny</resourceId>
How I will get output of query as
Page_Attorny
Please help.
Thanks in advance.

Hi,
I hope you are looking for something like:
SQL>create table XMLTable (col1 number, col2 XMLType);
Table created.
SQL>insert into XMLTable values (1,XMLType('<resourceId>Page_Attorny</resourceId>'));
1 row created.
SQL>SELECT col1, extractValue(col2,'resourceId') FROM XMLTable;
      COL1 EXTRACTVALUE(COL2,'RESOURCEID')
---------- -------------------------------
         1 Page_Attorny
1 row selected.
Regards -
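Outside the database, the same tag extraction can be sketched in a few lines (a minimal illustration, not part of the original answer; the sample value comes from the thread):

```python
import xml.etree.ElementTree as ET

# Parse the stored fragment and read the text of the <resourceId> element,
# which is what EXTRACTVALUE does in the SQL example above.
fragment = "<resourceId>Page_Attorny</resourceId>"
value = ET.fromstring(fragment).text
print(value)
```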
Query taking a long time (more than 24 hours) to extract data
Hi,
A query extracting data is taking a very long time, more than 24 hours. Please find the query and explain plan details below; even though indexes are available on the tables, it still does FULL TABLE SCANs. Please advise.
SQL> explain plan for
select a.account_id,
       round(a.account_balance,2) account_balance,
       nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
       to_char(ah.effective_start_date,'DD-MON-YYYY') transaction_date,
       to_char(nvl(i.payment_due_date,
                   to_date('30-12-9999','dd-mm-yyyy')),'DD-MON-YYYY') due_date,
       ah.current_balance - ah.previous_balance amount,
       decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
  and a.account_type_id = 1000002
  and round(a.account_balance,2) > 0
  and (ah.invoice_id is not null or ah.adjustment_id is not null)
  and ah.current_balance > ah.previous_balance
  and ah.invoice_id = i.invoice_id(+)
  and a.account_balance > 0
order by a.account_id, ah.effective_start_date desc;
Explained.
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
| 0 | SELECT STATEMENT | | 544K| 30M| | 693K (20)|
| 1 | SORT ORDER BY | | 544K| 30M| 75M| 693K (20)|
|* 2 | HASH JOIN | | 544K| 30M| | 689K (20)|
|* 3 | TABLE ACCESS FULL | ACCOUNT | 20080 | 294K| | 6220 (18)|
|* 4 | HASH JOIN OUTER | | 131M| 5532M| 5155M| 678K (20)|
|* 5 | TABLE ACCESS FULL| ACCOUNT_HISTORY | 131M| 3646M| | 197K (25)|
| 6 | TABLE ACCESS FULL| INVOICE | 262M| 3758M| | 306K (18)|
Predicate Information (identified by operation id):
2 - access("A"."ACCOUNT_ID"="AH"."ACCOUNT_ID")
3 - filter("A"."ACCOUNT_TYPE_ID"=1000002 AND "A"."ACCOUNT_BALANCE">0 AND
ROUND("A"."ACCOUNT_BALANCE",2)>0)
4 - access("AH"."INVOICE_ID"="I"."INVOICE_ID"(+))
5 - filter("AH"."CURRENT_BALANCE">"AH"."PREVIOUS_BALANCE" AND ("AH"."INVOICE_ID"
IS NOT NULL OR "AH"."ADJUSTMENT_ID" IS NOT NULL))
22 rows selected.
Index Details:
SQL> select INDEX_OWNER,INDEX_NAME,COLUMN_NAME,TABLE_NAME from dba_ind_columns where
2 table_name in ('INVOICE','ACCOUNT','ACCOUNT_HISTORY') order by 4;
INDEX_OWNER INDEX_NAME COLUMN_NAME TABLE_NAME
OPS$SVM_SRV4 P_ACCOUNT ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT CUSTOMER_NODE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_ACCOUNT_TYPE ACCOUNT_TYPE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE INVOICE_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_PREVIOUS_INVOICE PREVIOUS_INVOICE_ID ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_NAME ACCOUNT
OPS$SVM_SRV4 U_ACCOUNT_NAME_ID ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_LAST_MODIFIED_ACCOUNT LAST_MODIFIED ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_INVOICE_ACCOUNT INVOICE_ACCOUNT_ID ACCOUNT
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ACCOUNT SEQNR ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_INVOICE INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA CURRENT_BALANCE ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA INVOICE_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_CIA ACCOUNT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_LMOD LAST_MODIFIED ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADINV ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_PAYMENT PAYMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_ADJUSTMENT ADJUSTMENT_ID ACCOUNT_HISTORY
OPS$SVM_SRV4 I_ACCOUNT_HISTORY_APPLIED_DT APPLIED_DATE ACCOUNT_HISTORY
OPS$SVM_SRV4 P_INVOICE INVOICE_ID INVOICE
OPS$SVM_SRV4 U_INVOICE CUSTOMER_INVOICE_STR INVOICE
OPS$SVM_SRV4 I_LAST_MODIFIED_INVOICE LAST_MODIFIED INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT ACCOUNT_ID INVOICE
OPS$SVM_SRV4 U_INVOICE_ACCOUNT BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_BILL_RUN BILL_RUN_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_INVOICE_TYPE INVOICE_TYPE_ID INVOICE
OPS$SVM_SRV4 I_INVOICE_CUSTOMER_NODE CUSTOMER_NODE_ID INVOICE
32 rows selected.
Regards,
Bathula
Oracle DBA

I have some suggestions. But first, you realize that you have some redundant indexes, right? You have an index on account(account_name) and also account(account_name, account_id), and also account_history(invoice_id) and account_history(invoice_id, adjustment_id). No matter, I will suggest some new composite indexes.
Also, you do not need two lines for these conditions:
and round(a.account_balance, 2) > 0
AND a.account_balance > 0
You can just use: and a.account_balance >= 0.005
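The equivalence claimed here can be spot-checked: with Oracle's ROUND (which rounds half away from zero), round(x, 2) > 0 holds for a non-negative x exactly when x >= 0.005. A quick check using Python's Decimal with ROUND_HALF_UP to mimic that behaviour:

```python
from decimal import Decimal, ROUND_HALF_UP

def ora_round2(x: str) -> Decimal:
    # Oracle's ROUND(x, 2) rounds half away from zero; ROUND_HALF_UP
    # matches that behaviour for the non-negative balances used here.
    return Decimal(x).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Just below the boundary: rounds to 0.00, so round(x,2) > 0 is false.
print(ora_round2("0.0049"))
# At the boundary: rounds to 0.01, so round(x,2) > 0 becomes true.
print(ora_round2("0.005"))
```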
So the formatted query is:

select a.account_id,
round(a.account_balance, 2) account_balance,
nvl(ah.invoice_id, ah.adjustment_id) transaction_id,
to_char(ah.effective_start_date, 'DD-MON-YYYY') transaction_date,
to_char(nvl(i.payment_due_date, to_date('30-12-9999', 'dd-mm-yyyy')),
'DD-MON-YYYY') due_date,
ah.current_balance - ah.previous_balance amount,
decode(ah.invoice_id, null, 'A', 'I') transaction_type
from account a, account_history ah, invoice i
where a.account_id = ah.account_id
and a.account_type_id = 1000002
and (ah.invoice_id is not null or ah.adjustment_id is not null)
and ah.CURRENT_BALANCE > ah.previous_balance
and ah.invoice_id = i.invoice_id(+)
AND a.account_balance >= .005
order by a.account_id, ah.effective_start_date desc;

You will probably want to select:
1. From ACCOUNT first (your smaller table), for which you supply a literal on account_type_id. That should limit the accounts retrieved from ACCOUNT_HISTORY
2. From ACCOUNT_HISTORY. We want to limit the records as much as possible on this table because of the outer join.
3. INVOICE we want to access last because it seems to be least restricted, it is the biggest, and it has the outer join condition so it will manufacture rows to match as many rows as come back from account_history.
Try the query above after creating the following composite indexes. The order of the columns is important:

create index account_composite_i on account(account_type_id, account_balance, account_id);
create index acct_history_comp_i on account_history(account_id, invoice_id, adjustment_id, current_balance, previous_balance, effective_start_date);
create index invoice_composite_i on invoice(invoice_id, payment_due_date);

All the columns used in the where clause will be indexed, in a logical order suited to the needs of the query. Plus each selected column is indexed as well, so we should not need to touch the tables at all to satisfy the query.
Try the query after creating these indexes.
A final suggestion is to try larger sort and hash area sizes and a manual workarea policy:

alter session set workarea_size_policy = manual;
alter session set sort_area_size = 2147483647;
alter session set hash_area_size = 2147483647; -
How can I use Automator to extract specific Data from a text file?
I have several hundred text files that contain a bunch of information. I only need six values from each file, and ideally I need them as columns in an Excel file.
How can I use Automator to extract specific data from the text files and either create a new text file or an Excel file with the info? I have looked all over but can't find a solution. If anyone could please help, I would be eternally grateful! If there is another, better solution than Automator, please let me know!
Example of File Contents:
Link Time =
DD/MMM/YYYY
Random
Text
161 179
bytes of CODE memory (+ 68 range fill )
16 789
bytes of DATA memory (+ 59 absolute )
1 875
bytes of XDATA memory (+ 1 855 absolute )
90 783
bytes of FARCODE memory
What I would like to have as a final file (one row per file, six Excel columns):
Column1      Column2    Column3  Column4  Column5  Column6
MM/DD/YYYY   filename1  161179   16789    1875     90783
MM/DD/YYYY   filename2  xxxxxx   xxxxx    xxxx     xxxxx
MM/DD/YYYY   filename3  xxxxxx   xxxxx    xxxx     xxxxx
Is this possible? I can't imagine having to go through each and every file one by one. Please help!

Hello,
You may try the following AppleScript script. It will ask you to choose a root folder where to start searching for *.map files and then create a CSV file named "out.csv" on desktop which you may import to Excel.
set f to (choose folder with prompt "Choose the root folder to start searching")'s POSIX path
if f ends with "/" then set f to f's text 1 thru -2
do shell script "/usr/bin/perl -CSDA -w <<'EOF' - " & f's quoted form & " > ~/Desktop/out.csv
use strict;
use open IN => ':crlf';
chdir $ARGV[0] or die qq($!);
local $/ = qq(\\0);
my @ff = map {chomp; $_} qx(find . -type f -iname '*.map' -print0);
local $/ = qq(\\n);
# CSV spec
# - record separator is CRLF
# - field separator is comma
# - every field is quoted
# - text encoding is UTF-8
local $\\ = qq(\\015\\012); # CRLF
local $, = qq(,); # COMMA
# print column header row
my @dd = ('column 1', 'column 2', 'column 3', 'column 4', 'column 5', 'column 6');
print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
# print data row per each file
while (@ff) {
my $f = shift @ff; # file path
if ( ! open(IN, '<', $f) ) {
warn qq(Failed to open $f: $!);
next;
}
$f =~ s%^.*/%%og; # file name
@dd = ('', $f, '', '', '', '');
while (<IN>) {
chomp;
$dd[0] = \"$2/$1/$3\" if m%Link Time\\s+=\\s+([0-9]{2})/([0-9]{2})/([0-9]{4})%o;
($dd[2] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of CODE\\s/o;
($dd[3] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of DATA\\s/o;
($dd[4] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of XDATA\\s/o;
($dd[5] = $1) =~ s/ //g if m/([0-9 ]+)\\s+bytes of FARCODE\\s/o;
last unless grep { /^$/ } @dd;
}
close IN;
print map { s/\"/\"\"/og; qq(\").$_.qq(\"); } @dd;
}
EOF
Hope this may help,
H -
Sync with Outlook ONLY for a specific date range
I have a Zire 31 that I'm syncing with Outlook via latest Palm Desktop/HotSync. I did just install the latest conduit.
I was getting several repeating calendar entries that were not syncing with the Zire (annual birthdays, etc.) - turns out that they did not have an "end-date" specified when created in Outlook. Creating an end-date solved the problem.
However - I would like to know if it is possible to specify a certain date range for the device to sync (e.g., sync calendar only for years 2007 through 2050, or in other words sync from 1 year ago to 50 years in future, etc...)
I recall specifying this once years ago, but I may have been using a third party to sync with Lotus Notes at the time. I'm wondering if this option is also available with syncing directly with Outlook using HotSync??? I can't imagine it's not, but I can't find where you specify it.
One more interesting note: while my Zire 31 had issues syncing those repeating calendar entries - my much older HandSpring Visor Deluxe syncs them no problem! Go figure.
Thanks much -
Post relates to: Zire 31

I'm not actually looking to purge old items.
Rather - I would like to ONLY sync calendar entries between a specified range of dates. For example - ONLY sync for calendar entries between Jan 1 2007 and Jan 1, 2050.
The problem is that I have some repeating entries (birthdays, etc.) that are set up in Outlook with no end date. Apparently Palm/HotSync has trouble syncing these (they don't show up at all, for any year).
One resolution is to apply an end-date to each repeating calendar entry. However, this is just another step my wife needs to remember to do. If you happen to forget to do this, then HotSync will ignore that entry. That's fine if you realize it's happening, but if you forget and forget to check, you won't even know it didn't sync!
The other possible resolution is to only have HotSync sync for a SPECIFIC date range - one that does not presumably go to infinity. That is my question - how does one sync ONLY for a specific date range. Has nothing to do with purging old entries.
Thanks!!
Post relates to: Zire 31 -
Need a query for export table data .....
Hi,
I need a query for exporting the data in a table to a file.
Can anyone help me ?
Thank you,
Jeneesh

SQL> spool dept.txt
SQL> select * from dept;
DEPTNO DNAME LOC
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SQL> spool off
SQL> ed dept.txt -
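SPOOL just captures the formatted screen output; for a properly delimited file the same export can be sketched outside SQL*Plus (a minimal illustration using the sample DEPT rows above):

```python
import csv
import io

# The DEPT rows from the answer above, written out as comma-separated text.
rows = [(10, "ACCOUNTING", "NEW YORK"),
        (20, "RESEARCH", "DALLAS"),
        (30, "SALES", "CHICAGO"),
        (40, "OPERATIONS", "BOSTON")]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["DEPTNO", "DNAME", "LOC"])  # header row
writer.writerows(rows)
csv_text = buf.getvalue()
print(csv_text.splitlines()[0])  # header line of the generated file
```

Within SQL*Plus itself, something similar can be had with SET COLSEP ',', SET PAGESIZE 0 and SET FEEDBACK OFF before spooling.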
How to get users' login logout time for user IDs for a specific date?
Dear All,
There is a case where I am requested to retrieve the user ID, user name, user group, user dept, date, login time, and logout time for a specific date, for example 21.05.2009.
How should I retrieve this information? The user wants to input a specific date and user group, and then get back the details mentioned above.
I tried SUIM -> Users -> By Logon Date and Password Change... but I can't specify the date that I want.
I tried SM19 (Security Audit Log), but unfortunately in my system this is not activated.
I've sought SAP's advice, and they say I need to ask an ABAPer to develop a report in order to get such details.
Do you guys have any other methods?
Do you guys know which tables will contain the details as mentioned above?
Best Regards,
Ken

Unfortunately, without the audit log you're going to have a hard time finding this information. As mentioned, ST03N will give you some information. If your system's daily workload aggregation goes back to the date you require, then you'll be able to get a list of all users who logged on that day. ST03N doesn't keep timestamps, just response times.
My only idea is VERY labor intensive. If your DB admin can retrieve a save of the database from that day then table USR02 will hold a little more information for you. It will contain last login times for that day. If your system backup policy happened to have saved the contents of folder "/usr/sap/<SID>/<instance>/data" then you potentially have access to all the data you require. The stat file will have recorded every transaction that took place during that day. If that file is restored you could use program RSSTAT20 to query against it.
Good luck and turn on the audit log as it makes your life much easier! -
Need to write a query for extractions
How can I extract the following data from Oracle:
1. Segment1 and description from mtl_system_items_b
2. Onhand quantity that is orderable (Not under the WIP manufacturing or PO)
3. Onhand quantity that not orderable (That is under WIP manufacturing or PO)
4. Retail Price of these items.
There is only one organization_id = 21 that needs to be considered. This needs to be a daily extract. If you can help me write the query for all the fields above with proper joins, I'll really appreciate it.
Thanks & Regards
KM

Hi,
Primary_uom_code and Primary_transaction_quantity show the item quantity in the primary unit of measure,
whereas Secondary_uom_code and Secondary_transaction_quantity show the item quantity in a different unit.
E.g. suppose you have an item Pen set up with the primary code Each, where 1 Each = 1 Pen.
You have a conversion set as 10 Each = 1 Pkt.
If at the time of the transaction you select 1 Pkt,
the primary_transaction_quantity will show 10 Each, whereas the secondary_transaction_quantity will show 1 Pkt.
Now, as the transaction was done in Pkt, the column Transaction_quantity will display 10 Pkt.
If you had done this transaction in the primary code as 10 Each,
the primary_transaction_quantity would show the same 10 Each, but the Secondary_transaction_quantity would be
null and the transaction_quantity would be 10 Each.
Hope this clarifies the issue.
2) Why do you want to join these tables?
How can I join the following tables: qp_list_lines and mtl_system_items_b? I want to find the price of each
item. I tried using inventory_item_id but it does not work, as inventory_item_id can be null in the qp_list_lines table. I have not been able to find anything in table qp_list_lines.
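The symptom described (a nullable join key making the join "not work") is what an inner join does: rows with no match, or with a NULL key, silently disappear. A toy sketch of the difference, using invented miniature tables rather than the real EBS schemas:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Hypothetical miniatures: an item master and a price list whose item
# reference may be NULL, as the poster observed in qp_list_lines.
cur.execute("CREATE TABLE items (inventory_item_id INTEGER, segment1 TEXT)")
cur.execute("CREATE TABLE price_lines (inventory_item_id INTEGER, operand REAL)")
cur.executemany("INSERT INTO items VALUES (?, ?)", [(1, "PEN"), (2, "PKT")])
cur.executemany("INSERT INTO price_lines VALUES (?, ?)", [(1, 9.99), (None, 5.0)])

# A LEFT OUTER JOIN keeps every item and shows NULL where no price matches;
# an inner join would drop item 2 entirely.
rows = cur.execute("""
    SELECT i.segment1, p.operand
    FROM items i LEFT JOIN price_lines p
      ON p.inventory_item_id = i.inventory_item_id
    ORDER BY i.inventory_item_id
""").fetchall()
print(rows)
```

In Oracle the same effect is the (+) outer-join notation already used elsewhere in this digest; the actual price-list linkage in EBS goes through additional tables, so treat this only as an illustration of the join behaviour.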
Custom Report for the Stock and Stock value for a specific date
Hi SAP Gurus,
Is there any SAP standard t-code, or any logic, to get the transactions (additions to the inventory, for example purchases, and subtractions, for example sales) for particular materials in a plant, together with the total stock and total stock value at the time that transaction happened?
Our system is R/3 4.7
I looked at MB5B, MBCE, MBCA, MC44, MB51 and some other standard t-codes but could not find the total stock value at the time the transaction happened.
The history tables MBEWH and MARDH are updated after the month end closing procedures, right, which means I will have the inventory value changing every month if material has Price "S".
Thank you,
-Harter

Hi Harter,
Unfortunately, you cannot see in a single tcode the value of stock and the stock quantity on a specific date. As you yourself have pointed out, we have to make use of the history tables MBEWH and MARDH for the month-wise stock quantity and value. Along with that, you should also make use of the table MBEW to take the stock quantity and value. So the total value of stock on a particular date will be:
Stock qty = MBEWH value until the previous month (for the specific valuation class, period, etc.) + MBEW value for the present date.
But this will not work out if you want to find the stock quantity and stock value for a past date. For past data, only month-wise data is available. For this you can anyway refer to MC.1 and similar reports.
Looking for a specific data in all the cubes and ods
Hi Gurus
"I am looking for all the cubes/ODSs that contain a specific controlling area (let's say 0123) and a specific 0PLANT (let's say plant 4567). I could go into every cube and ODS and search its contents, but I have hundreds of cubes and it would take days. Is there a simple way to look for particular data in all the cubes/ODSs that tells me which cube/ODS contains these plants and this controlling area?"
<b>Based on the above post I got a reply that ABAP can help:</b>
"you could write an ABAP where you call for every InfoProvider function RSDRI_INFOPROV_READ_RFC like
loop at <infoprov-table> assigning <wa>.
call function 'RSDRI_INFOPROV_READ_RFC'
exporting
i_infoprov = <wa>
tables
i_t_sfc = i_t_rsdri_t_sfc
i_t_range = l_t_rsdri_t_range
e_t_rfcdata = l_t_rsdri_t_rfcdata
exceptions
illegal_input = 1
illegal_input_sfc = 2
illegal_input_sfk = 3
illegal_input_range = 4
illegal_input_tablesel = 5
no_authorization = 6
generation_error = 7
illegal_download = 8
illegal_tablename = 9
illegal_resulttype = 10
x_message = 11
data_overflow = 12
others = 13.
endloop.
i_t_sfc should contain 0PLANT and i_t_range the restriction on your plant value.
with a describe table statement on l_t_rsdri_t_rfcdata you can get the hits.
check test program RSDRI_INFOPROV_READ_DEMO for details
best regards clemens "
<b>Now my question is: how do I use this code to check each and every cube in BW? It seems to be meant for only one cube at a time. And what does he mean by calling the function "for every InfoProvider"?</b>
Thanks.
-
Infoobject change for extracting texts data.
Hi BW guys,
Here is my requirement.
I have one info object 'salesmen', which is already used in some other ODS & Cube's.
Now I want to extract texts data for the object 'salesmen', for that I will need to change my infoobject (changes are : adding credit control are object under compounding).
But when I activate the InfoObject again, it gives errors.
Error messages:
1) InfoObject XXXXX (or ref.) is used in data targets with data -> Error:
2) Characteristic XXXXX: Compound or reference was changed
3)InfoObject XXXXX being used in InfoCube XXXX (contains data)
etc....
But I don't want to delete the data in any data target.
Is there any way to solve this problem?
Thanks in advance.

Hi,
If you do not have many cubes and ODSs with this salesman, you can consider another, better, but more time-consuming way.
1. Create a new IO for your salesman, adding the compounding attribute you want.
2. Load master data for the new IO.
3. Create copies of your InfoProviders.
4. In each of them delete the old salesman IO and insert the new one.
5. Create export DataSources for the old cubes.
6. Create update rules for the new data targets based on the old ones.
7. In the URs map your new IO to the old one. All other IOs should be mapped 1:1 (new <- old).
8. Reload the data targets.
That's all.
The way I proposed earlier is less preferable, because you'll have to change the data already loaded into the data targets anyway. And in that case it's better to change the data model the way you want.
Best regards,
Eugene -
Count records for a specific date range
Hi,
I am using BI Publisher with Siebel 8.1.1.1. I have a monthly report where I want to count the number of service requests entered per customer per month.
I am using the following expression that works for one month:
<?count(ssServiceRequest[ssSeverity[.='1-Critical'] and xdoxslt:date_diff('d',psfn:getCanonicalDate(ssCreated), xdoxslt:current_date($_XDOLOCALE,$_XDOTIMEZONE), $_XDOLOCALE, $_XDOTIMEZONE) <=28])?>
However, I want to be able to specify a specific date range for my count, i.e. ssCreated >= '01/02/2011' and <= '28/02/2011', as opposed to counting over the past 28 days. Can you please advise what syntax I can use to specify a date range?
Any help will be greatly appreciated!
Claire

Hello,
Many many thanks for your reply.
I'm doing as you suggest and am using date_diff to the end of the period in question. I've tried the following to get the number of Service Requests with Severity 1-Critical created in Jan 2011 and Feb 2011:
<?count(ssServiceRequest[ssSeverity[.='1-Critical'] and xdoxslt:date_diff('d', (psfn:getCanonicalDate(ssCreated), $_XDOLOCALE, $_XDOTIMEZONE), '2011-01-31', $_XDOLOCALE, $_XDOTIMEZONE) <=31])?>
<?count(ssServiceRequest[ssSeverity[.='1-Critical'] and xdoxslt:date_diff('d', (psfn:getCanonicalDate(ssCreated), $_XDOLOCALE, $_XDOTIMEZONE), '2011-02-28', $_XDOLOCALE, $_XDOTIMEZONE) <=28])?>
In the XML sample data that I'm using I have 2 service requests that meet the criteria for Jan, yet when I run my report I'm getting '0'. The problem seems to be with my end-of-period date '2011-01-31'. I've also tried with both start and end period dates (using a negative value as per your suggestion), but I still get '0':
<?count(ssServiceRequest[ssSeverity[.='1-Critical'] and xdoxslt:date_diff('d', (psfn:getCanonicalDate(ssCreated), $_XDOLOCALE, $_XDOTIMEZONE), '2011-01-01', $_XDOLOCALE, $_XDOTIMEZONE) <=-31 and xdoxslt:date_diff('d', (psfn:getCanonicalDate(ssCreated), $_XDOLOCALE, $_XDOTIMEZONE), '2011-01-31', $_XDOLOCALE, $_XDOTIMEZONE) <=31])?>
Many thanks,
Claire -
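The underlying logic (count records whose creation date falls inside a fixed range, rather than within the last N days) can be sketched independently of the BI Publisher syntax; the sample data below is invented for illustration:

```python
from datetime import date

# Invented sample rows: (severity, created) pairs standing in for the
# ssServiceRequest elements discussed in the thread.
requests = [("1-Critical", date(2011, 1, 5)),
            ("1-Critical", date(2011, 1, 28)),
            ("2-High",     date(2011, 1, 10)),
            ("1-Critical", date(2011, 2, 3))]

# Count 1-Critical requests created within a fixed January 2011 window,
# i.e. start <= created <= end, not "created in the last 28 days".
start, end = date(2011, 1, 1), date(2011, 1, 31)
jan_critical = sum(1 for sev, created in requests
                   if sev == "1-Critical" and start <= created <= end)
print(jan_critical)
```

Whatever the template syntax ends up being, it needs to express both bounds of the window, which is why a single date_diff against one endpoint keeps returning the wrong count.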
CTS+ Configuration in PI 7.4 for SLD specific data
Hello Guys,
I am doing the CTS+ configuration for a PI system (SLD version 7.4) to transport J2EE as well as SLD-specific data through transports. My CTS server is the Solman system.
I have created a CTS user in the Solman system, and in PI NWA I have defined a destination pointing to the Solman system. But I am running into several errors.
Could anyone please tell me which user authorizations are required for the CTS user in Solman to transfer non-ABAP data and SLD-specific data through transports?
It's very urgent.

Did you happen to see this document on a CTS+ configuration for 7.3 (should be fairly the same for 7.4)? CTS+ Configuration for PI 7.3
Steps 1.2 and 8.2 have references to roles. -
SQL Query for SSRS has data but fields don't show that data
I am having a strange issue here with my new report;
First off, this report is an availability report for employees. If they are busy then a 0 should be displayed for that hour, and if they are free then a 1 is to be displayed. There are 2 parameters set up for use in this query: one is a Date/Time parameter, and the other is a Text parameter whose values (Departments) are supplied by another dataset query.
I have 2 Parameters, 1 is for a Department and the other is to select the date.
            Hour1 Hour2 Hour3 Hour4 Hour5 Hour6 Hour7
Smith, John |  1  |  0  |  0  |  0  |  1  |  1  |  1  |
Som, One    |  1  |  1  |  1  |  0  |  0  |  1  |  1  |
When I run the query in the Query Designer for the Dataset the information is displayed correctly and as I would expect it, however, when I run the Report and choose the same information for the 2 parameters then the report only ever shows all 1's;
Smith, John |  1  |  1  |  1  |  1  |  1  |  1  |  1  |
Som, One    |  1  |  1  |  1  |  1  |  1  |  1  |  1  |
I've tried searching but didn't know what term to use that describes what is going on.
Like I said, this works if ran in SSMS and works when ran in the Query Builder of SSRS but when it comes to displaying the data on the report the incorrect information is displayed.
Any help would be appreciated.
EDIT
I have also run the Report Table Wizard with the same query, choosing names as the rows and hours as the columns, and the same thing happens: all 1's are displayed even though the query in Query Builder shows the correct information.

The difference between running the query directly in the query designer and when the report runs is that you manually type in values for the parameters when running the query designer. It is likely that the parameter values from the report have a different syntax than you
expect. This will happen especially when setting the available values of a parameter from a data cube query. A value from an analysis cube may be displayed in the query designer as "\Project\Iteration Node" while the actual value is "[Work Item].[Iteration
Hierarchy].[Iteration2].&[-7189901615194941888]&[-8272609059741292246]". Very different as you can see. This example is from the TFS analysis server.
The best way to validate that your parameters are passing the values (and syntax) you expect is to add text boxes to your report for each parameter and set them to display Parameters!ParameterName.Value.
"You will find a fortune, though it will not be the one you seek." -
Blind Seer, O Brother Where Art Thou
Please Mark posts as answers or helpful so that others may find the fortune they seek. -
UPDATE query for GEOMETRY (spatial data)
Hi,
how can I update values dynamically for this geometry type: MDSYS.SDO_GEOMETRY(2002,8307,MDSYS.SDO_POINT_TYPE(0,0,'null'),MDSYS.SDO_ELEM_INFO_ARRAY(1,2,1),MDSYS.SDO_ORDINATE_ARRAY(-0.44106912,0.46456902,-0.72306504,0.09942102))? Please help me.

Thanks, Reetesh, for your reply.
As this is a simple task I want to do it via an OAF query rather than writing a PL/SQL procedure.
I have two tables, say an error table and an interface table (there is a foreign key relationship between these tables, i.e. I have to show the interface name present in the interface table via the foreign key in the error table). I used the following query to get the data:
SELECT xxgblErrorMasterEO1.ERROR_ID_NO,
xxgblErrorMasterEO1.ERROR_CODE,
xxgblErrorMasterEO1.ERROR_MESSAGE,
xxgblErrorMasterEO1.CREATED_BY,
xxgblErrorMasterEO1.CREATION_DATE,
xxgblErrorMasterEO1.LAST_UPDATED_BY,
xxgblErrorMasterEO1.LAST_UPDATE_DATE,
xxgblIntfProgramMaster.INTERFACE_NAME,
xxgblErrorMasterEO1.ERROR_TYPE
FROM XXEEG.XXGBL_ERROR_MASTER xxgblErrorMasterEO1,
XXEEG.XXGBL_INTF_PROGRAM_MASTER xxgblIntfProgramMaster
where xxgblErrorMasterEO1.INTERFACE_ID_NO =
xxgblIntfProgramMaster.INTERFACE_ID_NO
I liked the idea of an advanced table while going through the tutorial (example 2) and would want to show certain fields by expanding on the + mark (just like in Explorer).
Now I want to let the user update any of the fields that I show (except the WHO fields). Say, for example, the user updates the error_message and interface_name; how should I write the update method in the AM?
Pardon me if this sounds simple :(