Cursor FOR loop returns fewer records for a certain condition
Hi, I am getting very weird behaviour from a cursor FOR loop.
I have the query below, which returns 128 records.
select su.item_id
  from suv_tmp su
 inner join review_items ri on su.item_id = ri.item_id
 inner join sources s on s.source_id = ri.source_id
                     and s.source_name = 'XYZ';
But when I run this query in a cursor FOR loop, it returns only 100 records.
This is the cursor FOR loop I am writing:
begin
  for v_rec in
    (select su.item_id
       from suv_tmp su
      inner join review_items ri on su.item_id = ri.item_id
      inner join sources s on s.source_id = ri.source_id
      where s.source_name = 'XYZ')
  loop
    dbms_output.put_line(v_rec.item_id);
  end loop;
end;
It prints only 100 records.
The weird behaviour is that if I select any column from the table used in the WHERE clause, it gives 128 records.
The query below gives the expected output:
begin
  for v_rec in
    (select su.item_id, s.source_descr
       from suv_tmp su
      inner join review_items ri on su.item_id = ri.item_id
      inner join sources s on s.source_id = ri.source_id
      where s.source_name = 'XYZ')
  loop
    dbms_output.put_line(v_rec.item_id);
  end loop;
end;
I am not able to find any logic in this behaviour.
Any pointers appreciated.
Edited by: user11266153 on Dec 14, 2011 11:27 PM
1) Can you provide the DDL for those tables and some dummy data, so we can check whether it reproduces in our environment?
2) What is the Oracle Database software version? e.g. 9iR2, 10gR1, 10gR2, 11gR1, etc.
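If the loop really fetches all 128 rows, the discrepancy is on the display side rather than in PL/SQL. One way to rule out client-side output truncation is to count the rows inside the loop and print only the total. A diagnostic sketch, assuming the same tables as in the question:

```sql
-- Diagnostic sketch (assumes the suv_tmp / review_items / sources tables
-- from the question): count the rows inside the loop instead of printing
-- each one, so client-side DBMS_OUTPUT truncation cannot hide any rows.
SET SERVEROUTPUT ON SIZE UNLIMITED
DECLARE
  l_count PLS_INTEGER := 0;
BEGIN
  FOR v_rec IN (SELECT su.item_id
                  FROM suv_tmp su
                 INNER JOIN review_items ri ON su.item_id = ri.item_id
                 INNER JOIN sources s ON s.source_id = ri.source_id
                 WHERE s.source_name = 'XYZ')
  LOOP
    l_count := l_count + 1;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Rows fetched: ' || l_count);
END;
/
```

If this prints 128, the rows were all fetched and the difference comes from how the client displays DBMS_OUTPUT lines, not from the cursor itself.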
Similar Messages
-
SPM Data Loads: fewer records loaded into the Invoice Inbound DSO
Dear Experts,
We are working on a project where data from several non-SAP source systems is loaded into SPM via flat-file loads, and we came across a very weird situation.
For the other master and transaction data objects it worked fine, but when we loaded the invoice file, fewer records were loaded into the inbound DSO. The invoice file contained 80,000 records, but the inbound DSO has only 78,500; we are losing 1,500 records.
We are unable to figure out which 1,500 records are missing. We couldn't find any logs in the inbound invoice DSO, so we cannot tell whether the records are erroneous or something else is wrong. Is there a way to analyse the situation / the inbound invoice DSO?
For the outbound DSO or cube we know it is possible to check the issue via the data-load request, but for the inbound DSO we do not know how to analyse the issue and why it accepts fewer records.
Regards
Pankaj

Hi,
Yes, this can happen in a DSO: data records that share the same semantic keys are collapsed, so after the key-field aggregation you end up with fewer records.
If you have any routines, also check the code for conditions that filter out records.
Regards. -
Fewer records while loading data from one cube to another
Hi,
We are in the process of making major changes to an existing InfoCube.
Before making any changes, we had planned to make a copy of the cube with data.
For this we did the following steps :
1. Created the new cube from the original cube.
2. Generated a DataSource on the original cube.
3. Made an update rule on the new cube by selecting the original cube.
4. Made an InfoPackage on the InfoSource that was created as 8<original cube>.
5. Uploaded the data into the new cube.
We have uploaded the data successfully from the original cube to the new cube.
However, the new cube shows fewer records than the original cube.
But a query shows the same figures from both cubes.
Can anyone please advise what could be the reason for the lower record count, and how the figures can be the same when we run the query against both cubes?
Please help.
Thanks
Ramesh Ganji

Hi Ramesh,
This is possible because your original cube was probably loaded on a daily/weekly basis, so it contains many requests. Also, within the same request, two records with the same dimension keys are automatically added together (aggregated).
If request 1 has the following records:
cust mat amt
1    1   100
1    1    50
they are added up to:
1    1   150
If the records are separated into different requests, both are stored individually.
Therefore, when you load your new cube from the original cube, all records with the same dimension key get aggregated because they are loaded into a single request, so the new cube shows fewer records after the additions.
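The request-level aggregation described above behaves like a GROUP BY over the dimension keys. A minimal SQL sketch (hypothetical table name, for illustration only):

```sql
-- Hypothetical illustration of request-level aggregation: rows sharing
-- the same dimension keys collapse into one, like a GROUP BY over them.
SELECT cust, mat, SUM(amt) AS amt
  FROM fact_rows          -- hypothetical table holding one request's rows
 GROUP BY cust, mat;
-- (1, 1, 100) and (1, 1, 50) become a single row (1, 1, 150)
```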
hope this helps.
Regards,
Purvang
Assigning points is the way to say thanks in SDN. -
SQL help: return the number of records for each day of last month
Hi: I have records in the database with a field that contains the Unix epoch time for each record. Let's say the table name is ED and the field utime contains the Unix epoch time.
Is there a way to get a count of records for each day of the last month? Essentially I want a query that returns one count per day, derived from the utime field. If a particular day has no records, I want the query to return 0 for that day. I have no clue where to start. Would I need another table that has the list of days?
Thanks
Ray

Peter: thanks. That helps, but not completely.
When I run the query to include only records for July, using a statement such as the following:
============
SELECT /*+ FIRST_ROWS */ COUNT(ED.UTIMESTAMP), TO_CHAR((TO_DATE('01/01/1970','MM/DD/YYYY') + (ED.UTIMESTAMP/86400)), 'MM/DD') AS DATA
FROM EVENT_DATA ED
WHERE AGENT_ID = 160
AND (TO_CHAR((TO_DATE('01/01/1970','MM/DD/YYYY')+(ED.UTIMESTAMP/86400)), 'MM/YYYY') = TO_CHAR(SYSDATE-15, 'MM/YYYY'))
GROUP BY TO_CHAR((TO_DATE('01/01/1970','MM/DD/YYYY') + (ED.UTIMESTAMP/86400)), 'MM/DD')
ORDER BY TO_CHAR((TO_DATE('01/01/1970','MM/DD/YYYY') + (ED.UTIMESTAMP/86400)), 'MM/DD');
=============
I get the following
COUNT(ED.UTIMESTAMP) DATA
1 07/20
1 07/21
1 07/24
2 07/25
2 07/27
2 07/28
2 07/29
1 07/30
2 07/31
Some dates do not have any records and so produce no output. Is there a way to show the missing dates with a COUNT value of 0?
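One common way to get the missing days back is to generate a calendar row for every day of the month and outer-join the counts to it. A sketch, assuming the same EVENT_DATA table and AGENT_ID filter as the query above (here restricted to the previous calendar month):

```sql
-- Sketch: build one row per day of last month with CONNECT BY LEVEL,
-- then outer-join the per-day counts so empty days appear with 0.
WITH days AS (
  SELECT TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM') + LEVEL - 1 AS day
    FROM dual
  CONNECT BY TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM') + LEVEL - 1
             < TRUNC(SYSDATE, 'MM')
),
counts AS (
  SELECT TRUNC(DATE '1970-01-01' + ed.utimestamp / 86400) AS day,
         COUNT(*) AS cnt
    FROM event_data ed
   WHERE ed.agent_id = 160
   GROUP BY TRUNC(DATE '1970-01-01' + ed.utimestamp / 86400)
)
SELECT TO_CHAR(d.day, 'MM/DD') AS data,
       NVL(c.cnt, 0)           AS cnt
  FROM days d
  LEFT JOIN counts c ON c.day = d.day
 ORDER BY d.day;
```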
Thanks
Ray -
I have a message on my iPad that states my iOS has crashed due to a 3rd party. It refers to my tablet as a phone (sorry, I don't have an iPhone). It gives a number to call for support, but I know it's a scam. Is there a solution to get rid of it, as it has locked up my Safari?
1. Reset the iPad
Press and hold the Sleep/Wake button on your iPad and
simultaneously press and hold the Home button until the screen turns off.
Turn it back on.
2. Clear information from your device
iOS 7
Tap Settings > Safari > Clear History > Clear Cookies and Data. -
That did not help. The iPhone with serial number (DN******TTN) is locked to an iCloud account and I, as the buyer, cannot activate it; please help.
<Personal Information Edited by Host>

Your question isn't at all clear. Is this a second-hand phone that has been locked by the previous user? In that case only the previous owner can unlock it, either by providing you with the account ID and password or by removing it from his list of devices (as he should have done before selling it) - please see http://support.apple.com/kb/ts4515
If you are unable to contact him to do this, then I'm afraid you will not be able to use the device - there is no other way of unlocking it at all.
You should, if possible, return it to wherever you bought it and ask for a refund, as in this event the device is completely useless.
You should not post serial numbers or any other personal data here; I've asked the Hosts to remove it. -
Max Number of records for BAPI 'BAPI_PBSRVAPS_GETDETAIL'
Hi All,
Can you suggest the maximum number of records that can be fed to 'BAPI_PBSRVAPS_GETDETAIL'?
I am using a few location products for 9 key figures. Whenever the number of records
in the selection table increases, the BAPI behaves in a strange way and the code written after the call does not get executed.
Please guide me; points will be awarded.
Thanks in Advance,
Chandan Dubey

Hi Uma,
The program exits after this code is executed. I have 50 location-product combinations in the vit_selection table.
CALL FUNCTION 'BAPI_PBSRVAPS_GETDETAIL'
  EXPORTING
    planningbook                = planning_book
    period_type                 = 'B'
    date_from                   = l_from_week
    date_to                     = l_to_week
    logical_system              = logical_system
    business_system_group       = business_system_group
  TABLES
    selection                   = vit_selection
    group_by                    = vit_group_by
    key_figure_selection        = vit_kf_selection
    time_series                 = vit_t_s
    time_series_item            = vit_t_s_i
    characteristics_combination = vit_c_c
    return                      = vit_return.
LOOP AT vit_return. -
Pulling records where number of records for unique ID = 6
I have a table that contains address information for everyone in the system. It has numerous fields, though I've only included a few in the CREATE TABLE script below for the sake of brevity. The PIDM uniquely identifies each record as belonging to a particular person in the database. A person can have multiple addresses in the table, though we normally do not allow more than one active address of a particular ATYP_CODE; again, I am simplifying for brevity.
What I need to do is pull all the records for each PIDM, but only where there are >= 6 records per PIDM. The user doesn't care if the data are pivoted (I can do that part if needed). Pulling the actual data isn't the issue; I just need a little help figuring out how to get only the records of PIDMs with six or more records in the table. So, from the example data below, the records for PIDM 12345 and 34567 should be in the output, but the ones for PIDM 23456 should not.
DROP TABLE SPRADDR;
CREATE TABLE SPRADDR
(PIDM NUMBER(8),
ATYP_CODE VARCHAR2(2 CHAR),
STREETLINE1 VARCHAR2(60 CHAR),
CITY VARCHAR2(60 CHAR),
STATE VARCHAR2(2 CHAR),
ZIP VARCHAR2(10));
INSERT INTO SPRADDR VALUES (12345,'PR','1 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (12345,'MA','1 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (12345,'BU','1 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (12345,'PR','2 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (12345,'MA','3 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (12345,'PR','4 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (23456,'PR','1 MAIN','KENT','OH','44240');
INSERT INTO SPRADDR VALUES (23456,'MA','1 MAIN','KENT','OH','44240');
INSERT INTO SPRADDR VALUES (23456,'BU','1 MAIN','KENT','OH','44240');
INSERT INTO SPRADDR VALUES (34567,'PR','1 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (34567,'MA','1 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (34567,'BU','1 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (34567,'PR','2 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (34567,'MA','3 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (34567,'PR','4 MAIN','CANFIELD','OH','44406');
INSERT INTO SPRADDR VALUES (34567,'PR','6 MAIN','CANFIELD','OH','44406');
COMMIT;

I'd greatly appreciate any help you might be able to provide. I'm sure this is easy, but what I've done so far has not worked; I'm not including the code I tried because it's totally cockeyed and not working at all.
Thanks,
Michelle Craig
Data Coordinator
Admissions Operations and Transfer Systems
Kent State University

"PIDM 12345 and 34567 are the ones that should." Why 12345? It is repeated 6 times where you asked for > 6. Anyway:
SQL> select *
2 from (
3 select s.*,
4 count(*) over(partition by pidm) cnt
5 from spraddr s
6 )
7 where cnt > 6
8 order by pidm
9 /
PIDM AT STREETLINE1 CITY ST ZIP CNT
34567 PR 1 MAIN CANFIELD OH 44406 7
34567 MA 1 MAIN CANFIELD OH 44406 7
34567 BU 1 MAIN CANFIELD OH 44406 7
34567 PR 2 MAIN CANFIELD OH 44406 7
34567 MA 3 MAIN CANFIELD OH 44406 7
34567 PR 4 MAIN CANFIELD OH 44406 7
34567 PR 6 MAIN CANFIELD OH 44406 7
7 rows selected.
SQL> select *
2 from (
3 select s.*,
4 count(*) over(partition by pidm) cnt
5 from spraddr s
6 )
7 where cnt >= 6
8 order by pidm
9 /
PIDM AT STREETLINE1 CITY ST ZIP CNT
12345 PR 1 MAIN CANFIELD OH 44406 6
12345 MA 1 MAIN CANFIELD OH 44406 6
12345 BU 1 MAIN CANFIELD OH 44406 6
12345 PR 2 MAIN CANFIELD OH 44406 6
12345 MA 3 MAIN CANFIELD OH 44406 6
12345 PR 4 MAIN CANFIELD OH 44406 6
34567 PR 1 MAIN CANFIELD OH 44406 7
34567 MA 1 MAIN CANFIELD OH 44406 7
PIDM AT STREETLINE1 CITY ST ZIP CNT
34567 BU 1 MAIN CANFIELD OH 44406 7
34567 PR 2 MAIN CANFIELD OH 44406 7
34567 MA 3 MAIN CANFIELD OH 44406 7
34567 PR 4 MAIN CANFIELD OH 44406 7
34567 PR 6 MAIN CANFIELD OH 44406 7
13 rows selected.
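For completeness, the same result can be obtained without analytic functions, using a HAVING filter to find the qualifying PIDMs first and then pulling their detail rows. A sketch against the same SPRADDR table:

```sql
-- Equivalent approach without analytics: find the PIDMs with six or
-- more rows first, then fetch the detail rows for just those PIDMs.
SELECT s.*
  FROM spraddr s
 WHERE s.pidm IN (SELECT pidm
                    FROM spraddr
                   GROUP BY pidm
                  HAVING COUNT(*) >= 6)
 ORDER BY s.pidm;
```

The analytic version reads the table once, while this version scans it twice; on a small table like this the difference is negligible.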
SY.
Edited by: Solomon Yakobson on May 10, 2012 10:02 AM -
Max number of records for 'BAPI_PBSRVAPS_GETDETAIL'.
Hi All,
Can you suggest the maximum number of records that can be fed to 'BAPI_PBSRVAPS_GETDETAIL'?
I am using a few location products for 9 key figures. Whenever the number of records
in the selection table increases, the BAPI behaves in a strange way and the code written after the call does not get executed.
Please guide me; points will be awarded.
Thanks in Advance,
Chandan Dubey

Server memory issue!
-
Maximum number of Records for Emigall Upload
Hi,
Is there any limit on the maximum number of records that can be uploaded via EMIGALL at one time?
Thanks.

Hi Satish Kumar,
There is no limit, apart from some exceptions ;o) These exceptions are objects that require more and more memory during runtime due to growing internal tables. This behaviour leads to performance issues because more and more time is spent working on the internal tables instead of updating the database. This is known for the PARTNER migration object and all MM- and PM-related migration objects, such as CONNOBJ, INST_MGMT, etc.
On the other hand, a long-running import (because it takes a long time to migrate the objects in the import file) limits your options for controlling the data import, for example restarting a cancelled import run. As already pointed out, the Distributed Import should be your choice when migrating huge import files with many objects.
I hope this answers your question.
Kind regards,
Fritz -
Maximum number of records for usage of "For all entries"
Hi,
Is there a limit on the maximum number of records that can be selected from the database using the "FOR ALL ENTRIES" statement?
Thanks in advance

There is an UNDOCUMENTED(??) behaviour:
FOR ALL ENTRIES does a hidden SELECT DISTINCT and drops duplicates.
http://web.mit.edu/fss/dev/abap_review_check_list.htm
Three pitfalls:
"FOR ALL ENTRIES IN..." is very fast, but keep in mind its special features and three pitfalls:
(a) Duplicates are removed from the answer set as if you had specified SELECT DISTINCT. So unless you intend for duplicates to be deleted, include the unique key of the detail line items in your SELECT statement. In the Data Dictionary (SE11) the fields belonging to the unique key are marked with an "X" in the key column.
(b) If the "one" table (the table that appears in the FOR ALL ENTRIES IN clause) is empty, all rows in the "many" table (the table that appears in the SELECT INTO clause) are selected. Therefore make sure the "one" table has rows before issuing a SELECT with the "FOR ALL ENTRIES IN..." clause.
(c) If the "one" table is very large, there is performance degradation; Steven Buttiglieri created sample code to illustrate this. -
Fewer records on a DataSource
Hi,
There is an extractor, 0ASSESSMENT_TEXT. It shows 20 records while the source table LSOTACASSESSMENT has 21 records. Can anyone explain the reason for this?
Thanks in advance!

Hi Pallavi,
I have debugged and traced it: your DataSource extracts data from the application rather than from the table you mentioned. That is why your DataSource shows fewer records than the table; it is part of the configuration made in R/3.
Please assign points.
regards
vadlamudi -
I have a VI that runs through two FOR loops and a WHILE loop; after a certain number of iterations of the inner WHILE loop it is supposed to pass a trigger to a case structure to start a measurement. I have tried using a local variable attached to a Boolean control, which in turn is attached to the TRUE case of the case structure containing the measurement subVI. What is happening is that the Boolean control lights on the front panel at the correct number of iterations of the WHILE loop, as if TRUE, but the measurement is not taken, and the Boolean control never goes back to FALSE; it remains lit. Am I using the local variable incorrectly? If so, where am I going wrong?
Attachments:
GL_Flicker.vi 118 KB
Take_Measurements.vi 147 KB

Hello planar,
There are multiple ways to pass control information between loops and VIs and each one has its place. For simple VIs like your example, a local variable will be fine.
The main reason the subVI is not running is that the case structure is not continuously polling the Boolean control. This is because the case structure is not inside a loop and as such will only read the Boolean value once and execute once. Encasing the case structure and control inside the while loop should solve the issue of the subVI not running.
You may find the following links of help in creating more robust and advanced VI architectures.
Application Design Patterns: Master/Slave
LabVIEW Application Design Patterns
Keep up to date on the latest PXI news at twitter.com/pxi -
Long-running process chain for a small number of records
Hello Experts,
I have a problem with a process chain that is taking a very long time (to load around 20 records).
Problem Description:
1) I have one base ODS and a first-level ODS, and the data up to this point loads without any problem.
2) When the data moves from the first-level ODS to the cube, loading 20 records takes around 5 hrs, although the load is successful.
3) In the DTP monitor, the extraction step from DSO to cube takes around 4.50 min.
4) The remaining steps, such as error handling, transformations, and updates, take normal times (around 5 secs).
Could anyone please help me find out where the exact problem lies that makes this extraction step take so long?
Regards,

Please check for start routines and field-level routines. From DSO to cube, check the transformation and how each field gets updated; then you will be able to track the time taken by the load.
thanks, -
How to count the number of records for a field based on a condition?
Hi guys,
I want to know how to count the records coming from the database for a particular field, based on some condition.
I need this count to suppress some headers; because of that I am not able to use running totals. Is there any other way?
Example scenario:
I have account-number and currency fields coming from the database, and I need to count the accounts whose currency is not Euro.
Thanks in advance,
Vijay.

A simple formula can do that:
//Formula begin
if {your currency field} <> "Euro" then 1
//Formula end
This formula can be summarized (by group or report). Note that the condition must test the currency field, not the account field.
Bryan Tsou@Taiwan