Strange Behaviour for sequential SELECT statements
I have all my handles allocated and ready to use.
I do a StmtPrepare, StmtExecute, StmtFetch and get (let's say) 3 rows; the result is correct.
Then, using the very same handles, I do another StmtPrepare, StmtExecute, StmtFetch and again get 3 rows, but this time the result is incorrect: I should be getting 10 rows, yet I only get the first 3.
If the first pass returns 4 rows, then the second pass also returns 4 rows; it is as if result_count_of_second = result_count_of_first!?
I get no error or warning; everything seems OK. I have also checked the SQL statements from TOAD.
What may cause this, and is there any trick to overcome it?
Should I free and re-allocate the statement handle?
(I don't want to do this, but...?)
Could you provide some snippets of the code that is giving the problem?
What do you mean when you say you should be getting 10 results? Are you getting an OCI_NO_DATA earlier than expected (on the third row instead of the 10th)?
I think that if you gave some code snippets of your OCI program, along with an example of what results you are expecting and what results you are actually getting, we might be in a better position to help.
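I can't reproduce OCI here, but the statement-lifecycle point behind the OCI_NO_DATA question can be sketched in Python with sqlite3 (the table name `t` and the row counts are invented for illustration): a reused statement handle must be re-executed before each fetch cycle, otherwise you fetch from a result set that is still positioned at the end of the previous pass.

```python
import sqlite3

# Not OCI, but analogous: a prepared statement must be (re-)executed
# before each fetch cycle, otherwise you fetch from a cursor that is
# still sitting at the end of the previous result set.
conn = sqlite3.connect(":memory:")
conn.executescript(
    "CREATE TABLE t (id INTEGER);"
    "INSERT INTO t VALUES (1), (2), (3);"
)

cur = conn.cursor()              # reusable "statement handle"
cur.execute("SELECT id FROM t")
first_pass = cur.fetchall()      # 3 rows, as expected

# Add more rows, then fetch WITHOUT re-executing: the old result set
# is exhausted, so we see none of the 10 rows now in the table.
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(4, 11)])
stale = cur.fetchall()           # no error, just "no data"

cur.execute("SELECT id FROM t")  # re-execute on the same handle
second_pass = cur.fetchall()     # all 10 rows
print(len(first_pass), len(stale), len(second_pass))
```

If the OCI program behaves similarly, checking where the execute happens relative to the fetch loop on the second pass would be the first thing to look at.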
Similar Messages
-
How to find which select statement performs better
hi gurus
can anyone suggest me: if we have 2 select statements, how to find which select statement performs better?
thanks & regards,
kals.
hi, check this:
1. The select statement in which the primary and secondary keys (indexes) are used will give the better performance.
2. SELECT ... UP TO 1 ROWS is better than SELECT SINGLE when the full key is not specified.
Go to ST05 and check the performance.
regards,
venkat -
Hi experts,
I'd like your help to understand a strange behaviour on a 10.2.0.4 db.
I use a select statement in a PL/SQL cursor with a where condition like the following:
select fields
from table1, table2, table3
where
field1 = parameter1 and
field2 = parameter2 and
field3 = parameter3 and
function(field1, field2) = 0
Usually this statement works without problems.
Sometimes (I think depending on the parameters) it gets stuck.
I have moved the last row of the where condition right after the where keyword:
select fields
from table1, table2, table3
where
function(field1, field2) = 0 and
field1 = parameter1 and
field2 = parameter2 and
field3 = parameter3
and it works with the usual performance (with the same parameters that did not work in the first select).
The function returns its result quickly regardless of the parameters used.
I've compared (in Toad) the execution plans of the two selects and they are the same.
Could you please explain this behaviour?
I can leave the second version of the select in the package, but I'm not sure it fixes the problem; I'd like to understand something more.
thanks in advance
best regards
Stefano
You can verify that the predicate evaluation orders are different in the two queries by looking at the "Predicate Information" section that follows the execution plan:
SQL> create function f(a number,b number) return number is
2 begin
3 return a+b;
4 end;
5 /
Function created.
SQL> select * from t;
A B
2 5
3 7
4 5
3 4
6 6
5 7
3 8
5 9
8 rows selected.
SQL> set autot on exp
SQL> select /*+ ORDERED_PREDICATES */ *
2 from t
3 where f(a,b)=7
4 and a=3
5 and b=4;
A B
3 4
1 row selected.
Execution Plan
Plan hash value: 1601196873
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 26 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 26 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("F"("A","B")=7 AND "A"=3 AND "B"=4)
Note
- dynamic sampling used for this statement
SQL> select /*+ ORDERED_PREDICATES */ *
2 from t
3 where a=3
4 and b=4
5 and f(a,b)=7;
A B
3 4
1 row selected.
Execution Plan
Plan hash value: 1601196873
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 1 | 26 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 26 | 3 (0)| 00:00:01 |
Predicate Information (identified by operation id):
1 - filter("A"=3 AND "B"=4 AND "F"("A","B")=7)
Note
- dynamic sampling used for this statement
As you can see, although the two plans are identical, the predicate information row shows exactly the predicate evaluation order.
If Toad doesn't show predicate information (I don't use it), run your queries in SQL*Plus.
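For intuition only (this is not Oracle internals), the same effect can be sketched in Python, whose `and` short-circuits left to right much like the filter order shown in the Predicate Information: with the function predicate first, the function runs for every row; with the cheap equality predicates first, it runs only for the rows that survive them.

```python
calls = 0

def f(a, b):
    """Stand-in for the PL/SQL function in the WHERE clause."""
    global calls
    calls += 1
    return a + b

# The same 8 rows as in the demo table t above.
rows = [(2, 5), (3, 7), (4, 5), (3, 4), (6, 6), (5, 7), (3, 8), (5, 9)]

# Function predicate first: f() runs for every row.
calls = 0
hits = [r for r in rows if f(*r) == 7 and r[0] == 3 and r[1] == 4]
calls_function_first = calls

# Cheap predicates first: f() runs only for rows passing the filters.
calls = 0
hits = [r for r in rows if r[0] == 3 and r[1] == 4 and f(*r) == 7]
calls_cheap_first = calls

print(hits, calls_function_first, calls_cheap_first)
```

If the PL/SQL function is slow (or hangs) for certain inputs, evaluating it last can make the difference Stefano observed even though the plans are identical.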
Max
[My Italian Oracle blog|http://oracleitalia.wordpress.com/2010/01/17/supporto-di-xml-schema-in-oracle-xmldb/] -
Strange behaviour in "Do - Until" statement
Hi,
I'm having a strange behaviour in a script I'm creating. The purpose of the script is to update some files in some folders. In the case that more folders need to use the script, I'm importing the folders name and path from a CSV file (much easier to read
it from there instead of hardcoding the folders in the code). As you can see, I add a number at the beginning of the folder's name, and then I let them choose which folder number do they want to update.
# Import from CSV file
$name = @()
$path = @()
$bkpPath = @()
Import-Csv $AppsFile |
ForEach-Object {
    $name += $_.Name
    $path += $_.Path
    $bkpPath += $_.bkpPath
}
Write-Host "Choose the folder to update:"
Write-Host ""
$count = 1
foreach ($n in $name) {
    Write-Host $count " - " $n
    $count += 1
}
Do {$opt = read-host -prompt "Enter the option number"}
Until ($opt -lt $count -AND $opt -gt 0)
The problem appeared when I started debugging the script and tried to input strange characters. The idea of the "Do - Until" statement is that the user stays in a loop until they choose a correct option. It works OK in that it doesn't let you input letters, or numbers outside the range I'm using.
The strange behaviour appears when I enter, for example, "0." (the important part is the DOT after the zero). There, it sends me to the last option in my CSV file. If I put "<number>.", it reads it as if it were "<number>".
And if I input a number out of range plus a final dot, it breaks my script (instead of just looping again as it's supposed to).
Does anyone even understand the issue? =P It's really strange; I don't know if there is another way to limit the input...
Thanks,
Regards
$opt is a string, and you're trying to compare it to numbers, which is probably going to give you headaches. I tend to write those types of loops like this:
$opt = $null
while ($true)
{
    $string = read-host -prompt "Enter the option number"
    if ([int]::TryParse($string, [ref]$opt) -and $opt -lt $count -and $opt -gt 0)
    {
        break
    }
}
Simply because the validation can get kind of complex sometimes, and I think it's ugly to jam it in after an "until" statement, even though the end result is the same. -
Slow query results for simple select statement on Exadata
I have a table with 30+ million rows in it which I'm trying to develop a cube around. When the cube processes (SQL Analysis), it queries back 10k rows every 6 seconds or so. I ran the same query SQL Analysis runs to grab the data in Toad and exported the results, and the timing is the same: 10k rows every 6 seconds or so.
I ran an execution plan and it returns just this:
Plan
SELECT STATEMENT ALL_ROWS Cost: 136,019 Bytes: 4,954,594,096 Cardinality: 33,935,576
1 TABLE ACCESS STORAGE FULL TABLE DMSN.DS3R_FH_1XRTT_FA_LVL_KPI Cost: 136,019 Bytes: 4,954,594,096 Cardinality: 33,935,576
I'm not sure if there is a setting in Oracle (I'm new to the Oracle environment) which can limit performance by connection or user, but if there is, what should I look for and how can I check it?
The Oracle version I'm using is 11.2.0.3.0 and the server is quite large as well (Exadata platform). I'm curious because I've seen SQL Server return 100k rows every 10 seconds before; I would assume an Exadata system should return rows a lot quicker. How can I check where the bottleneck is?
Edited by: k1ng87 on Apr 24, 2013 7:58 AM
k1ng87 wrote:
I've noticed the same querying speed using Toad (export to CSV).
That's not really a good way to test performance. Doing that through Toad, you are getting the database to read the data from its disks (you don't have a choice in that), shifting bulk amounts of data over your network (that could be a considerable bottleneck), then letting Toad format the data into CSV format (processing the data adds a little bottleneck), and then writing the data to another hard disk (more disk I/O = more bottleneck).
I don't know Exadata, but I imagine it doesn't quite incorporate all those bottlenecks.
and during cube processing via SQL Analysis. How can I check to see if it's my network speed that's affecting it?
Speak to your technical/networking team, who should be able to trace network activity/packets and see what's happening in that respect.
Is that even possible, as our system resides off site, so the traffic is going through multiple networks?
Ouch... yes, that could certainly be responsible.
I don't think it's the network though, because when I run both at the same time, they both still query at about 10k rows every 6 seconds.
I don't think your performance measuring is accurate. What happens if you actually do the cube in Exadata rather than using Toad or SQL Analysis (which I assume is on your client machine)? -
How to create a mapping for a select statement containing DENSE_RANK()?
Hi,
I want help with a select statement that I want to turn into a mapping in OWB 11.1g. Can anyone please tell me how this code can be incorporated in a mapping?
SELECT DISTINCT MAX (dimension_key) KEEP (DENSE_RANK FIRST ORDER BY day DESC) OVER (PARTITION BY calendar_week_name),
MAX (day) KEEP (DENSE_RANK FIRST ORDER BY day DESC) OVER (PARTITION BY calendar_week_name), calendar_week_end_date, calendar_week_number
FROM time_dim;
I have been trying to use the Aggregator operator but I am not entirely sure how to go about it. Any help will be highly appreciated.
Thanks in advance,
Ann.
Hi Ann
You can just use an EXPRESSION operator. Configure the mapping's code generation and operating mode to be set-based only.
You will have an expression output attribute for each one of your projected columns:
MAX (dimension_key) KEEP (DENSE_RANK FIRST ORDER BY day DESC) OVER (PARTITION BY calendar_week_name),
MAX (day) KEEP (DENSE_RANK FIRST ORDER BY DAY DESC) OVER (PARTITION BY calendar_week_name),
calendar_week_end_date,
calendar_week_number
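For what it's worth, `MAX(x) KEEP (DENSE_RANK FIRST ORDER BY day DESC)` per `calendar_week_name` is essentially an argmax per group: keep the rows with the latest day in each week, then take the MAX over the kept rows. A Python sketch of that logic (field names borrowed from the query; the data is invented):

```python
from collections import defaultdict

# rows: (calendar_week_name, day, dimension_key)
rows = [
    ("CW01", 1, 101), ("CW01", 2, 102), ("CW01", 3, 103),
    ("CW02", 8, 108), ("CW02", 9, 109),
]

# KEEP (DENSE_RANK FIRST ORDER BY day DESC): within each week, keep the
# row(s) with the latest day, then take MAX over the kept rows.
by_week = defaultdict(list)
for week, day, key in rows:
    by_week[week].append((day, key))

result = {}
for week, pairs in by_week.items():
    latest = max(day for day, _ in pairs)                   # DENSE_RANK FIRST
    result[week] = max(k for d, k in pairs if d == latest)  # MAX over kept rows

print(result)
```

Seeing it as "latest row per week" can also help when deciding between the Expression and Aggregator operators in OWB.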
Cheers
David -
Need to wite pl sql procedure for dynamic select statement
Need pl sql procedure for a Dynamic select statement which will drop tables older than 45 days
select 'Drop table'||' ' ||STG_TBL_NAME||'_DTL_STG;' from IG_SESSION_LOG where substr(DTTM_STAMP, 1, 9) < current_date - 45 and INTF_STATUS=0 order by DTTM_STAMP desc;I used this to subtract any data older than 2 years, adjustments can be made so that it fits for forty five days, you can see how I changed it from the originaln dd-mon-yyyy to a "monyy", this way it doesn't become confused with the Static data in the in Oracle, and call back to the previous year when unnecessary:
TO_NUMBER(TO_CHAR(A.MV_DATE,'YYMM')) >= TO_NUMBER(TO_CHAR(SYSDATE - 365, 'YYMM')) -
Cursor - Suggestions for dynamic select statements
Hey,
Am trying to define a cursor like this:
cursor c1 is
select table_name from dba_tables INTERSECT select table_name from dba_tables@SOME_DBLINK
My need is to pass this dblink as an IN parameter to the procedure and use it in the select statement for the cursor. How can I do this?
Any suggestion is highly appreciated. Thanks!
Well that was meant to be my point. If you had two, you wouldn't (I hope) call the second one "c2" - you would be forced to think about what it represented, and name it "c_order_history" or something. Sticking "1" on the end does not make an extensible naming convention for cursors any more than it does for variables, procedures, tables or anything else, and so the "1" in "c1" is redundant because there will never be a c2.
-
Can't figure out the correct syntax for this select statement
Hello,
The following statement works great and gives the desired results:
prompt
prompt Using WITH t
prompt
with t as
(
select a.proj_id,
       a.proj_start,
       a.proj_end,
       case when (
              select min(a.proj_start)
              from v b
              where (a.proj_start = b.proj_end)
              and (a.proj_id != b.proj_id)
            ) is not null then 0 else 1
       end as flag
from v a
order by a.proj_start
)
select proj_id,
       proj_start,
       proj_end,
       flag,
       -- the following select statement is what I am having a hard time
       -- "duplicating" without using the WITH clause
       (
       select sum(t2.flag)
       from t t2
       where t2.proj_end <= t.proj_end
       ) s
from t;
As an academic exercise I wanted to rewrite the above statement without using the WITH clause. I tried this (among dozens of other tries - I've hit a mental block and can't figure it out):
prompt
prompt without with
prompt
select c.proj_id,
       c.proj_start,
       c.proj_end,
       c.flag,
       -- This is what I've tried as the equivalent statement but, it is
       -- syntactically incorrect. What's the correct syntax for what this
       -- statement is intended ?
       (
       select sum(t2.flag)
       from c t2
       where t2.proj_end <= c.proj_end
       ) as proj_grp
from (
     select a.proj_id,
            a.proj_start,
            a.proj_end,
            case when (
                   select min(a.proj_start)
                   from v b
                   where (a.proj_start = b.proj_end)
                   and (a.proj_id != b.proj_id)
                 ) is not null then 0 else 1
            end as flag
     from v a
     order by a.proj_start
     ) c;
Thank you for helping, much appreciated.
John.
PS: The DDL for the table v used by the above statements is:
drop table v;
create table v (
proj_id number,
proj_start date,
proj_end date
);
insert into v values
( 1, to_date('01-JAN-2005', 'dd-mon-yyyy'),
to_date('02-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 2, to_date('02-JAN-2005', 'dd-mon-yyyy'),
to_date('03-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 3, to_date('03-JAN-2005', 'dd-mon-yyyy'),
to_date('04-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 4, to_date('04-JAN-2005', 'dd-mon-yyyy'),
to_date('05-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 5, to_date('06-JAN-2005', 'dd-mon-yyyy'),
to_date('07-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 6, to_date('16-JAN-2005', 'dd-mon-yyyy'),
to_date('17-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 7, to_date('17-JAN-2005', 'dd-mon-yyyy'),
to_date('18-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 8, to_date('18-JAN-2005', 'dd-mon-yyyy'),
to_date('19-JAN-2005', 'dd-mon-yyyy'));
insert into v values
( 9, to_date('19-JAN-2005', 'dd-mon-yyyy'),
to_date('20-JAN-2005', 'dd-mon-yyyy'));
insert into v values
(10, to_date('21-JAN-2005', 'dd-mon-yyyy'),
to_date('22-JAN-2005', 'dd-mon-yyyy'));
insert into v values
(11, to_date('26-JAN-2005', 'dd-mon-yyyy'),
to_date('27-JAN-2005', 'dd-mon-yyyy'));
insert into v values
(12, to_date('27-JAN-2005', 'dd-mon-yyyy'),
to_date('28-JAN-2005', 'dd-mon-yyyy'));
insert into v values
(13, to_date('28-JAN-2005', 'dd-mon-yyyy'),
to_date('29-JAN-2005', 'dd-mon-yyyy'));
insert into v values
(14, to_date('29-JAN-2005', 'dd-mon-yyyy'),
to_date('30-JAN-2005', 'dd-mon-yyyy'));
Hi, John,
Not that you asked, but as you probably know, analytic functions are much better at doing this kind of thing.
You may be amazed (as I continually am) by how simple and efficient these queries can be.
For example:
WITH got_grp AS
(
    SELECT proj_id, proj_start, proj_end
         , proj_end - SUM (proj_end - proj_start) OVER (ORDER BY proj_start) AS grp
    FROM v
)
SELECT ROW_NUMBER () OVER (ORDER BY grp) AS proj_grp
     , MIN (proj_start) AS proj_start
     , MAX (proj_end) AS proj_end
FROM got_grp
GROUP BY grp
ORDER BY proj_start
;
Produces the results you want:
PROJ_GRP PROJ_START PROJ_END
1 01-Jan-2005 05-Jan-2005
2 06-Jan-2005 07-Jan-2005
3 16-Jan-2005 20-Jan-2005
4 21-Jan-2005 22-Jan-2005
5 26-Jan-2005 30-Jan-2005
This problem is an example of Neighbor-Defined Groups. You want to GROUP BY something that has 5 distinct values, to get the 5 rows above, but there's nothing in the table itself that tells you to which group each row belongs. The groups are not defined by any column in the table, but by relationships between rows. In this case, a row is in the same group as its neighbor (the row immediately before or after it when sorted by proj_start or proj_end) if proj_end of the earlier row is the same as proj_start of the later row. That is, there is nothing about 03-Jan-2005 that says the row with proj_id=2 is in the first group, or even that it is in the same group as its neighbor, the row with proj_id=3. Only the relation between those rows - the fact that the earlier row has proj_end=03-Jan-2005 and the later row has proj_start=03-Jan-2005 - says these neighbors belong to the same group.
You're figuring out when a new group starts, and then counting how many groups have already started to see to which group each row belongs. That's a perfectly natural procedural way of approaching the problem. But SQL is not a procedural language, and sometimes another approach is much more efficient. In this case, as in many others, a Constant Difference defines the groups. The difference between proj_end (or proj_start; it doesn't matter in this case) and the total duration of the rows up to that date determines a group. The actual value of that difference means nothing to you or anybody else, so I used ROW_NUMBER in the query above to map those distinct values into consecutive integers 1, 2, 3, ..., which are a much simpler way to identify the groups.
Note that the query above only requires one pass through the table, and only requires one sub-query. It does not need a WITH clause; you could easily make got_grp an in-line view.
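The Constant Difference technique described above can be sketched in Python (dates simplified to day numbers, so 1 = 01-Jan-2005; variable names are mine):

```python
from itertools import groupby

# (proj_id, proj_start, proj_end) with dates simplified to day numbers
rows = [(1, 1, 2), (2, 2, 3), (3, 3, 4), (4, 4, 5),
        (5, 6, 7), (6, 16, 17), (7, 17, 18)]

# Constant difference: proj_end minus the running total of durations is
# the same for every row of a contiguous chain, and changes at each gap.
total = 0
tagged = []
for pid, start, end in rows:
    total += end - start
    tagged.append((end - total, pid, start, end))

# Collapse each group to (min start, max end), numbering groups 1, 2, ...
groups = []
for n, (_, members) in enumerate(groupby(tagged, key=lambda t: t[0]), 1):
    ms = list(members)
    groups.append((n, min(m[2] for m in ms), max(m[3] for m in ms)))

print(groups)
```

The grouping key is exactly the `grp` column of the query above, and the final loop plays the role of the GROUP BY with MIN/MAX and ROW_NUMBER.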
If you used analytic functions (LEAD or LAG) to compute flag, and then to compute proj_grp (COUNT or SUM), you would need two sub-queries, one for each analytic function, but you would still only need one pass through the table. Also, those sub-queries could be in-line views; you would not need to use a WITH clause. -
Using BEGIN TRAN for a SELECT statement
I have inherited a number of stored procedures that are using explicit transactions and structured error handling.
I have had to review the procs due to deadlocks occurring and have found that many of them are SELECT statements.
The syntax is like this:
BEGIN TRAN
BEGIN TRY
    SELECT ####
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
    BEGIN
        ROLLBACK TRAN
        INSERT ErrorLog
        XXXXXX
    END
END CATCH
IF @@TRANCOUNT > 0
BEGIN
    COMMIT TRAN
END
It is very obvious that there is no purpose for explicit transactions in the select, but I am wondering if this could be a factor in the deadlocks. From what I tested, using BEGIN TRAN for a select still only takes a shared lock.
Is it possible that this is a factor in the deadlocks?
David Dye My Blog
There is no need for an explicit transaction in a select statement - what exactly would you want to roll back? Nothing. Although you might use error handling, depending on the complexity of the select involved.
Plus, a drawback is that locks would be held for a longer duration, which MIGHT be a contributing factor in the deadlocks.
Please mark this reply as answer if it solved your issue or vote as helpful if it helped so that other forum members can benefit from it
My Technet Wiki Article
MVP -
Need to create a transaction for multiple select statements?
Hello,
I am a newbie and have a question about database transaction, e.g. whether/not to enclose multiple select statements (and select statements only) into a transaction.
My database is set to transaction isolation level 2: REPEATABLE READ, where dirty read & non-repeatable read are not allowed, only phantom read is allowed.
Now, in my code I have a number of methods that contain select statements only. Since they are merely select statements, which don't make any modifications to the data, I am not sure if I am supposed to enclose them in a transaction.
However, if I don't put them into a transaction will the transaction isolation level takes into effect automatically when another user is modifying the data that I am reading? In other words, I need to make sure the select statements will never do either dirty read or non-repeatable read. But I am not sure if it is necessary to enclose multiple select statements in a transaction, since I believe putting the select statements into a transaction will put some locks to the data being read which may reduce the concurrency of my application.
Any help/advice would be very much appreciated.
Duane
You might want to try asking this on a forum specific to your database. I suspect the answer can vary depending on the database, and probably requires in-depth knowledge of what the database does.
-
Inner Join for Dynamic Select statement
Hi All,
Can some one please help me in rewriting the below select statement where i have to remove the existing table1 by putting a dynamic table name which has the same table structure.
select a~zfield1
a~zfield2
from ztab1 as a
inner join ztab2 as b
on b~zfield3 = a~zfield3
where a~zfield4 = 'A'.
I am looking something as below. But encountering an error when using the below statement
select a~zfield1
a~zfield2
from (v_ztab1) as a
inner join ztab2 as b
on b~zfield3 = a~zfield3
where a~zfield4 = 'A'.
No Separate selects please. Please help me in rewriting the same select statement itself.
Regards,
PSK
hi,
What error you are getting ?
Also INTO is missing from the statement.
SELECT p~carrid p~connid f~fldate b~bookid
INTO TABLE itab
FROM ( spfli AS p
INNER JOIN sflight AS f ON p~carrid = f~carrid AND
p~connid = f~connid )
WHERE p~cityfrom = 'FRANKFURT' AND
p~cityto = 'NEW YORK' .
thanks -
When I run the following code
set nocount on
declare @i table(id int identity(1,1) primary key, sDate datetime)
while((select count(*) from @i)<10000)
begin
insert into @i(sDate) select getdate()
end
select top 5 sDate, count(id) selectCalls
from @i
group by sDate
order by count(id) desc
I get the following results.
sDate selectCalls
2014-07-30 14:50:27.510 406
2014-07-30 14:50:27.527 274
2014-07-30 14:50:27.540 219
2014-07-30 14:50:27.557 195
2014-07-30 14:50:27.573 170
As you can see, the select getdate() function returned the same time, down to the millisecond, 406 times for the first date value. This started happening when we moved our applications to a faster server with four processors. Is this correct, or am I going crazy?
Please let me know
Bilal
Observe that adding 2 ms is accurate only with datetime2. As noted above, datetime does not have millisecond resolution:
set nocount on
declare @d datetime, @i int, @d2 datetime2
select @d = getdate(), @i = 0, @d2 = sysdatetime()
while(@i<10)
begin
select @d2, @d, current_timestamp, getdate(), sysdatetime()
select @d = dateadd(ms,2,@d), @i = @i+1, @d2=dateadd(ms,2,@d2)
end
2014-08-09 08:36:11.1700395 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.1700395
2014-08-09 08:36:11.1720395 2014-08-09 08:36:11.173 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.1700395
2014-08-09 08:36:11.1740395 2014-08-09 08:36:11.177 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.1700395
2014-08-09 08:36:11.1760395 2014-08-09 08:36:11.180 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.1700395
2014-08-09 08:36:11.1780395 2014-08-09 08:36:11.183 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.1700395
2014-08-09 08:36:11.1800395 2014-08-09 08:36:11.187 2014-08-09 08:36:11.170 2014-08-09 08:36:11.170 2014-08-09 08:36:11.1700395
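The .170 / .173 / .177 / .180 pattern in the datetime column comes from datetime storing time in 1/300-second ticks. A small Python sketch of that rounding (my own helper, not a SQL Server API):

```python
def to_datetime_ms(ms):
    """Round a millisecond value the way SQL Server's datetime type
    stores it: time is kept in 1/300-second ticks, so stored values
    always land on the .000 / .003 / .007 millisecond pattern."""
    ticks = int(ms * 3 / 10 + 0.5)    # nearest 1/300-second tick
    return int(ticks * 10 / 3 + 0.5)  # back to (rounded) milliseconds

# Successive millisecond values collapse onto the tick grid:
print([to_datetime_ms(ms) for ms in range(170, 181)])
```

This is why adding 2 ms to a datetime value drifts onto neighboring ticks, while datetime2 keeps the exact 2 ms steps.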
DATE/TIME functions:
http://www.sqlusa.com/bestpractices/datetimeconversion/
Kalman Toth Database & OLAP Architect
SQL Server 2014 Design & Programming
New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012 -
Unsure of syntax for SQL SELECT statement, please help
I am trying to execute the following SQL statement:
ResultSet rs = stB.executeQuery("SELECT quantity FROM stocklevels WHERE code=salesCode[x]");
where salesCode is an integer array and x is a counter used to loop this part of the code. I keep getting the "syntax error - missing operator" message when I run the code and get to this line. Can anybody please tell me the correct syntax/form to use? Many thanks.
Is salesCode an array in Java or in your DB? I don't know anything about SQL arrays, so I'll assume it's in your Java code.
You want to put the value of salesCode[x] into the query - say 123456. But what you've got is like doing System.out.println("The code = salesCode[x]"); and then wondering why you're seeing "salesCode[x]" instead of "123456". You'd need to do System.out.println("The code = " + salesCode[x]); Similarly, you could take the "salesCode[x]" out of the string literal that's forming the query, in order to have it evaluated as an int and stuffed into the string as "123456". It would be better to use a PreparedStatement though:
ps = con.prepareStatement("select .... where code = ?");
ps.setInt(1, salesCode[x]);
rs = ps.executeQuery(); -
HELP FOR ABAP SELECT STATEMENT
I am writing the below query and getting the below current result, which is four rows. I would like to have the result mentioned below under "expected result", which is a single row. Any idea how I can do that? It can be easily done in SQL*Plus using decode or a union clause, but please suggest how to do it in ABAP.
select distinct
qmsm~qmnum
qmel~qmtxt
qmsm~mncod
qmsm~pster
qmsm~peter
ihpa~parvw
ihpa~parnr
from qmsm
inner join qmma
on qmsm~qmnum = qmma~qmnum
inner join qmel
on qmma~qmnum = qmel~qmnum
inner join ihpa
on qmel~objnr = ihpa~objnr
into table ztstnotifications
where
qmel~qmnum = '000100000166'
and qmma~material = wa_material
and qmsm~mncod in ('2','4')
and ihpa~parvw in ('1A','ZY')
order by qmsm~qmnum
qmel~qmtxt
qmsm~mncod
qmsm~pster
qmsm~peter.
current result
100000166 will it work 2 22.10.2009 31.10.2009 SP 1000688
100000166 will it work 2 22.10.2009 31.10.2009 ZY AE001
100000166 will it work 4 01.01.2010 15.01.2010 SP 1000688
100000166 will it work 4 01.01.2010 15.01.2010 ZY AE001
expected result
100000166 will it work 2 22.10.2009 31.10.2009 4 01.01.2010 15.01.2010 SP 1000688 ZY AE001
Thanks
I doubt you'll be able to do so in a straightforward way. I would try some imaginative approaches, like defining an additional internal table with the key fields of your query and a "long string" field, where I'd append the returned rows.
Or something like that. If you know the maximum number of rows this kind of join will return, add that number times the fields needed (parvw1, parnr1, parvw2, parnr2, ...) and use a LOOP to populate it.
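The row-to-column flattening suggested above can be sketched in Python (field names follow the query; the data is the "current result" shown earlier, and the widening idea is the reply's):

```python
from collections import OrderedDict

# Current result: one row per (mncod, partner) combination.
rows = [
    ("100000166", "will it work", "2", "22.10.2009", "31.10.2009", "SP", "1000688"),
    ("100000166", "will it work", "2", "22.10.2009", "31.10.2009", "ZY", "AE001"),
    ("100000166", "will it work", "4", "01.01.2010", "15.01.2010", "SP", "1000688"),
    ("100000166", "will it work", "4", "01.01.2010", "15.01.2010", "ZY", "AE001"),
]

# Collapse to one row per notification: collect the distinct
# (mncod, pster, peter) tasks and (parvw, parnr) partners side by side.
flat = OrderedDict()
for qmnum, qmtxt, mncod, pster, peter, parvw, parnr in rows:
    entry = flat.setdefault(qmnum, {"qmtxt": qmtxt, "tasks": [], "partners": []})
    if (mncod, pster, peter) not in entry["tasks"]:
        entry["tasks"].append((mncod, pster, peter))
    if (parvw, parnr) not in entry["partners"]:
        entry["partners"].append((parvw, parnr))

for qmnum, e in flat.items():
    print(qmnum, e["qmtxt"], e["tasks"], e["partners"])
```

In ABAP the equivalent is the LOOP over the joined result, appending into a wide work area keyed by qmnum, exactly as the reply describes.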