Max number of records to hold in explicit cursor
Hi everyone,
What is the maximum number of records that can be held in an explicit cursor for manipulation? I need to process millions of records. Can I hold them in a cursor, or should I use a temp table to hold those records and do the fixes with volume control?
Thanks
Hi Kishore, sorry for the delayed response.
Table1
prim_oid  sec_oid  rel_oid
pp101     cp102    101
pp101     cp103    101
pp102     cp104    101
pp102     cp105    101
Table2
ID  p_oid  b_oid  rel_oid
1   pp101  -51    102
2   pp102  -51    102
3   cp102   52    102
4   cp103   53    102
5   cp104   54    102
6   cp105   54    102
From table1 I get the parent and child records based on rel_oid = 101; the prim_oid and sec_oid values are related to another column in table2, again via a rel_oid. I need to get all prim_oid values that are linked to a negative b_oid in table2 and whose child sec_oid values are linked to a positive b_oid.
In the above case, parent pp101 is linked to two children, cp102 and cp103, and pp102 is linked to two children, cp104 and cp105. Both pp101 and pp102 are linked to a negative b_oid (table2), but the children of both parents are linked to positive b_oids. However, pp101's children are linked to two different b_oids, while pp102's children are linked to the same b_oid. For my requirement I can only update pp102's b_oid with its children's b_oid; I cannot update pp101's b_oid because its children are linked to different b_oids.
I have a SQL query that returns prim_oid, b_oid, sec_oid, b_oid as records like this:
1 pp101 -51 3 cp102 52
1 pp101 -51 4 cp103 53
2 pp102 -51 5 cp104 54
2 pp102 -51 6 cp105 54
With a cursor SQL that returns records like the above, it would be difficult to process distinct parents and distinct children. So I have a cursor that returns only the parent records, like this:
1 pp101 -51
2 pp102 -51
Then, for each parent, I get the distinct child b_oids; if I get only one child b_oid I update the parent, otherwise I don't. The problem is that table2 has 8 million parent records linked to a negative b_oid, but only 2 million of those have children linked to a single distinct b_oid.
If I include volume control in the cursor SQL, chances are all the returned rows might be like pp101, for which no update is required. So I should not have volume control in the cursor SQL, which will then return all 8 million records (my assumption).
Is there any other feasible solution? Thanks
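To make the rule concrete, here is a minimal Python sketch of the filtering logic (the rows are hard-coded from the example tables above, and the function name is just for illustration). The same grouping can usually be pushed down into SQL with a GROUP BY on the parent plus HAVING COUNT(DISTINCT child b_oid) = 1, which lets the database find the 2 million updatable parents without cursoring over all 8 million rows:

```python
from collections import defaultdict

def parents_to_update(rows):
    """rows: (parent_oid, parent_b_oid, child_oid, child_b_oid) tuples,
    i.e. the joined result set shown in the question."""
    child_b_oids = defaultdict(set)
    for p_oid, p_b, c_oid, c_b in rows:
        if p_b < 0 and c_b > 0:  # parent on a -ive b_oid, child on a +ive one
            child_b_oids[p_oid].add(c_b)
    # a parent is updatable only when all its children share one b_oid
    return {p: b.pop() for p, b in child_b_oids.items() if len(b) == 1}

rows = [
    ("pp101", -51, "cp102", 52),
    ("pp101", -51, "cp103", 53),
    ("pp102", -51, "cp104", 54),
    ("pp102", -51, "cp105", 54),
]
print(parents_to_update(rows))  # pp102 qualifies, pp101 does not
```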
Similar Messages
-
Max number of records in MDM workflow
Hi All
Need urgent recommendations.
We have a scenario where we need to launch a workflow upon import of records. The challenge is that the source file contains 80k records and it is always a FULL load (on a daily basis) in MDM. Is there any limitation in an MDM workflow on the max number of records? Will there be significant performance issues if we have a workflow with such a huge number of records in MDM?
Please share your inputs.
Thanks, Ravi
Hi Ravi,
Yes, it can cause performance overhead, and you will also have to optimise the MDIS parameters for this.
Regarding the WF, I think it is normally 100 records per WF. I think you can set a particular threshold of records after which the WF will auto-launch.
It is difficult to say what the optimum number of records per WF should be, so I would suggest a test run with 100/1000 records per WF. The Import Manager guide says there are several performance implications of importing records in a WF, so it is better to try different ranges.
Thanks,
Ravi -
Max number of records in an internal table
Hi,
Can anyone tell me the maximum number of records we can get into an internal table?
If you have any link to SAP help on this, please forward it.
thanks in Adv.
Regards,
Lakshmikanth.T.V
Hi Lakshmikanth,
Internal Tables as Dynamic Data Objects
Internal tables are always completely specified regarding row type, key and access type. However, the number of lines is not fixed. Thus internal tables are dynamic data objects, since they can contain any number of lines of a particular type.
The only restriction on the number of lines an internal table may contain are the limits of your system installation. The maximum memory that can be occupied by an internal table (including its internal administration) is 2 gigabytes. A more realistic figure is up to 500 megabytes. An additional restriction for hashed tables is that they may not contain more than 2 million entries.
The line types of internal tables can be any ABAP data types - elementary, structured, or internal tables. The individual lines of an internal table are called table lines or table entries. Each component of a structured line is called a column in the internal table.
regards,
keerthi. -
What is the max number of records in table
Hello Friends,
I am using Oracle 11g.
How many records can we store in a table, i.e. what is the maximum size of a table, and what factors does it depend on?
If the number of records is ever-growing, what is the best possible solution?
thanks/kumar
There is a limit, based on the limit of the ROWID.
You may find this limit in Oracle documentation.
From database version 9.0 it is virtually unlimited for practical purposes, as it is hardly likely that we will reach the max value of the ROWID with the data we can store now and with the actual speed of our computers (relative to our lifetime).
Is a subquery in a BO report limited to a max number of records???
Here's my problem:
I received an Excel sheet with 700 records of customers from a client who wants me to create a report with specific data for these customers in my Business Objects universe (BO6.5 on SQL Server).
So I created a data provider with query 1, i.e. the requested data of the customers. Then I created a second data provider, query 2, based on 'personal files', i.e. the Excel sheet. In query 1 I added a condition that each customer should be in (sub)query 2 (CustomerId in list of the query result 'query2.CustomerId').
The syntax I have used for this seems OK.
However, I receive the following error: "Too many selected values (LOV0001)". I know this error has to do with the parameter MAX_INLIST_VALUES, which is limited by default to 99 and can be extended to 256 max. But I thought it referred to the max number of items in lists of values.
When I limit the number of records in the Excel sheet to 99 the result is perfect (proof that I got the syntax right!). I can raise the parameter to 256, and can split the Excel sheet into three, but that will not be useful when next time my client sends me 10,000 customer records.
Can I make reports in BO which use subqueries that result in more than 256 records at all? (hardly imaginable)
What is the best way to do this?
Thanks in advance!
Hi Lucas,
Following is the information regarding the issue you are getting and might help you to resolve the issue.
ADAPT00519195 - Too many selected values (LOV0001) - Select Query Result operand
For XIR2: Fixed. Details: Rejected, as this is by design.
I have found that this is a limitation by design, and when the values exceed 18000 we get this error in BO.
There is no fix for this issue, as it's by design. The product has always behaved in this manner.
Also an ER (ADAPT00754295) for this issue has already been raised.
Unfortunately, we cannot confirm if and when this Enhancement Request will be taken on by the developers.
A dedicated team reviews all ERs on a regular basis for technical and commercial feasibility and whether or not the functionality is consistent with our product direction. Unfortunately we cannot presently advise on a timeframe for the inclusion of any ER to our product suite.
The product group will then review the request and determine whether or not the functionality/feature will be included in a future release.
Currently I can only suggest that you check the release notes in the ReadMe documents of future service packs, as it will be listed there once the ER has been included.
The only workaround which I can suggest for now is:
Workaround 1:
Test the issue by keeping the value of the MAX_Inlist_values parameter at 256 at the designer level.
Workaround 2:
The best solution is to combine 'n' queries via a UNION. You should first highlight the first 99 or so entries from the LOV list box and then combine this query with a second one that selects the remaining LOV choices.
Using UNION between queries is the only possible workaround.
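As a sketch of that chunk-and-UNION idea, the long value list can be split into slices that each stay under the 99-value limit, giving one query per slice to combine via UNION (the table and column names here are hypothetical, and the generated ids stand in for the spreadsheet values):

```python
def chunked(values, size=99):
    """Split a long IN-list into chunks that stay under MAX_INLIST_VALUES."""
    for i in range(0, len(values), size):
        yield values[i:i + size]

# 700 customer ids, as in the Excel sheet (the ids themselves are made up)
customer_ids = [f"C{n:05d}" for n in range(700)]
queries = [
    "SELECT * FROM customers WHERE CustomerId IN ({})".format(
        ", ".join(f"'{v}'" for v in chunk))
    for chunk in chunked(customer_ids)
]
print(len(queries))  # each query can then be combined with the others via UNION
```

This scales to 10,000 ids the same way: more chunks, same pattern.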
Please do let me know if you have any queries related to the same.
Regards,
Sarbhjeet Kaur -
Max. number of records in a package
Hi,
I was asked an interview question: 'What is the maximum number of records a BW package can have?'
Can some one please answer?
Thanks
Sarah
Hi,
The maximum size of a data packet is 20000 kByte.
You can find this in the Scheduler (Maintain InfoPackage) screen. In this screen, in the menu, go to Scheduler ---> DataS.Default Data transfer.
regards
kiran -
Max number of records in a cube & architectural issues
Hi,
Sorry if my question has already been asked, but I can't find the same question with the search button (maybe I don't have the right words to search for; I'm not English).
I am on a BIG BIG IP project. The forecast volume of planned records is about 1,000,000,000 records a year, so we chose to split the records across many cubes.
1) Is there a maximum number of records supported by a cube to be planned? We planned to put at most 100,000,000 records in a cube to be planned. Is that too much?
2) If I make 100 cubes (one for each organizational entity) with 10,000,000 records per cube, and if I "plug" a planning layout onto these 100 cubes with a multiprovider, will IP:
spread the time to find the right cube to write to (based on the selected entity) across the 100 cubes (too much time!), or
search directly in the right cube (thanks to a user exit that matches the cube with the selected entity), so that the response time will be about the same as for one layout plugged onto one cube of 10,000,000 records?
Thanks a lot, and sorry for my english language level
GeorgesHi Georges,
Having too many records in the cube should not be very detrimental to the performance of the planning application, as long as you can ensure that the data volume you fetch in one go is restricted to reasonable limits, using restrictions in filters (the more restrictive the better). Take care of this while modelling both your planning functions/sequences and input-ready queries.
I understand that you'll need to create a multiprovider for reporting purposes, but if you don't need the data from more than one cube for planning purposes, it will be better to create the aggregation level (and the rest of the planning model) on top of the individual cube. In case you want to use the same planning functions/input queries for multiple cubes (which will probably be the case), you can create the aggregation levels on the multiprovider, but make sure you restrict the characteristic 'infoprovider' properly in the filter restrictions to avoid the function reading unnecessary data from many cubes.
Hope this helps. -
Max Number of records for BAPI 'BAPI_PBSRVAPS_GETDETAIL'
Hi All,
Can you suggest the number of records that should be fed to 'BAPI_PBSRVAPS_GETDETAIL'?
I am using a few location products for 9 key figures. Whenever the number of records in the selection table increases, the BAPI behaves in a strange way and the code written below it does not get executed.
Please guide me; full points will be given.
Thanks in Advance,
Chandan Dubey
Hi Uma,
It comes out of the program after this code is executed. I have 50 location-product combinations in the vit_selection table.
CALL FUNCTION 'BAPI_PBSRVAPS_GETDETAIL'
  EXPORTING
    planningbook                = planning_book
    period_type                 = 'B'
    date_from                   = l_from_week
    date_to                     = l_to_week
    logical_system              = logical_system
    business_system_group       = business_system_group
  TABLES
    selection                   = vit_selection
    group_by                    = vit_group_by
    key_figure_selection        = vit_kf_selection
    time_series                 = vit_t_s
    time_series_item            = vit_t_s_i
    characteristics_combination = vit_c_c
    return                      = vit_return.
LOOP AT vit_return. -
Max number of records for 'BAPI_PBSRVAPS_GETDETAIL'.
Hi All,
Can you suggest the number of records that should be fed to 'BAPI_PBSRVAPS_GETDETAIL'?
I am using a few location products for 9 key figures. Whenever the number of records in the selection table increases, the BAPI behaves in a strange way and the code written below it does not get executed.
Please guide me; full points will be given.
Thanks in Advance,
Chandan Dubey
Server memory issue!
-
Optimal number of records to fetch from Forte Cursor
Hello everybody:
I'd like to ask a very important question.
I opened a Forte cursor with approx 1.2 million records, and now I am trying to figure out the number of records per fetch needed to obtain acceptable performance.
To my surprise, fetching 100 records at once gave me only an approx 15 percent performance gain in comparison with fetching records one by one.
I haven't found a significant difference in performance fetching 100, 500 or 10,000 records at once. At the same time, fetching 20,000 records at once makes performance approx 20% worse (a fact I cannot explain).
Does anybody have any experience with how to improve performance when fetching from a Forte cursor with a big number of rows?
Thank you in advance
Genady Yoffe
Software Engineer
Descartes Systems Group Inc
Waterloo On
Canada
You can do this by writing code in the start routine of your transformations:
1. If you have any specific criteria for filtering, go with that and delete the unwanted records.
2. If you want to load a specific number of records based on a count, then in the start routine of the transformations loop through the source package records, keeping a counter until you reach your desired count, and copy those records into an internal table. Delete the records in the source package, then assign the records stored in the internal table back to the source package.
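A rough Python equivalent of those two options, for illustration only (the record structure and field names are made up, and the real thing would be ABAP over the source package):

```python
def start_routine(source_package, keep=lambda rec: True, max_records=None):
    """Sketch of the two options above: filter by criteria, then cap the count."""
    filtered = [rec for rec in source_package if keep(rec)]  # option 1: criteria
    if max_records is not None:                              # option 2: counter cap
        filtered = filtered[:max_records]
    return filtered

# keep only 'A' records, and at most 3 of them
package = [{"id": i, "status": "A" if i % 2 else "D"} for i in range(10)]
result = start_routine(package, keep=lambda r: r["status"] == "A", max_records=3)
print([r["id"] for r in result])
```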
Reg: Find Duplicate Records and select max number of record in Table
Hi Guys,
This is Nagendra, India.
My table structure is:
id  name  tempid  temptime
1   xxx   123     date
1   yyy   128     date
1   sdd   173     date
14  ree   184     date
14  fded  189     date
There are 15,000+ records in the table in total.
My requirement is to show the id and the max(tempid) value:
id  name  tempid  temptime
1   sdd   173     date
14  fded  189     date
I want to show all records like that (after hiding the duplicate values). Could you please solve this issue?
With Regards,
Nagendra
; WITH numbering AS (
   SELECT id, name, tempid, temptime,
          rowno = row_number() OVER (PARTITION BY id ORDER BY tempid DESC)
   FROM   tbl
)
SELECT id, name, tempid, temptime
FROM   numbering
WHERE  rowno = 1
The WITH clause defines a Common Table Expression, which is a locally defined view that exists only for this query. The row_number function numbers the rows, restarting at 1 for every id, numbering in descending tempid order. Thus, by selecting all rows from the CTE with rowno = 1, we get the row with the highest tempid for each id.
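For illustration, the same keep-the-highest-tempid-per-id logic can be emulated in a few lines of Python (a sketch of the semantics, not the T-SQL itself; the rows are the sample data from the question):

```python
from itertools import groupby

rows = [
    (1, "xxx", 123), (1, "yyy", 128), (1, "sdd", 173),
    (14, "ree", 184), (14, "fded", 189),
]

# emulate ROW_NUMBER() OVER (PARTITION BY id ORDER BY tempid DESC) ... rowno = 1:
# sort by id, then by tempid descending, and keep the first row of each id group
rows.sort(key=lambda r: (r[0], -r[2]))
latest = [next(group) for _, group in groupby(rows, key=lambda r: r[0])]
print(latest)
```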
Erland Sommarskog, SQL Server MVP, [email protected] -
Maximum number of records which can be added to custom list
HI,
What is the maximum number of records that can be added to a custom list without increasing the list throttling?
Thanks
Those are two different things you are asking about.
1) The max number of records MSFT supports is 30,000,000 per library/list:
http://technet.microsoft.com/en-us/library/cc262787.aspx#ListLibrary
For List Throttling.
To minimize database contention, SQL Server often uses row-level locking as a strategy to ensure accurate updates without adversely impacting other users who are accessing other rows.
Check this one to understand more about throttling:
http://blogs.msdn.com/b/spses/archive/2013/12/02/sharepoint-2010-2013-list-view-lookup-threshold-uncovered.aspx
Please remember to mark your question as answered &Vote helpful,if this solves/helps your problem. ****************************************************************************************** Thanks -WS MCITP(SharePoint 2010, 2013) Blog: http://wscheema.com/blog -
RE: (forte-users) Optimal number of records to fetch from Forte Cursor
The reason why a single fetch of 20,000 records performs worse than
2 fetches of 10,000 might be related to memory behaviour. Do you
keep the first 10,000 records in memory when you fetch the next
10,000? If not, then a single fetch of 20,000 records requires more
memory than 2 fetches of 10,000. You might have some extra overhead
of Forte requesting additional memory from the OS, garbage
collections just before every request for memory, and maybe even
the OS swapping some memory pages to disk.
This behaviour can be controlled by modifying the Minimum memory
and Maximum memory of the partition, as well as the memory chunk
size Forte uses to increment its memory.
Upon partition startup, Forte requests the Minimum memory from the
OS. Within this area, the actual memory being used grows until
it hits the ceiling of this space. This is when the garbage collector
kicks in and removes all unreferenced objects. If this does not suffice
to store the additional data, Forte requests 1 additional chunk of a
predefined size. Now, the same behaviour is repeated in this slightly
larger piece of memory. Actual memory keeps growing until it hits
the ceiling, upon which the garbage collector removes all unreferenced
objects. If the garbage collector reduces the amount of
memory being used to below the original Minimum memory, Forte
will NOT return the additional chunk of memory to the OS. If the
garbage collector fails to free enough memory to store the new data,
Forte will request an additional chunk of memory. This process is
repeated until the Maximum memory is reached. If the garbage
collector fails to free enough memory at this point, the process
terminates gracelessly (which is what happens sooner or later when
you have a memory leak; something most Forte developers have
seen once or twice).
Pascal Rottier
STP - MSS Support & Coordination Group
Philip Morris Europe
e-mail: [email protected]
Phone: +49 (0)89-72472530
+++++++++++++++++++++++++++++++++++
Origin IT-services
Desktop Business Solutions Rotterdam
e-mail: [email protected]
Phone: +31 (0)10-2428100
+++++++++++++++++++++++++++++++++++
/* All generalizations are false! */
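The growth policy described above can be caricatured in a few lines of Python. This is purely an illustration of the grow-until-ceiling / GC / request-chunk cycle; all the numbers, the function name, and the fixed gc_frees reclaim amount are invented (a real collector frees whatever happens to be unreferenced):

```python
def allocate(needed, used, ceiling, maximum, chunk, gc_frees):
    """Sketch of the policy above (units arbitrary, e.g. MB).
    gc_frees: how much one garbage collection reclaims (a simplification)."""
    while used + needed > ceiling:
        used = max(0, used - gc_frees)      # garbage collector runs first
        if used + needed <= ceiling:
            break
        if ceiling + chunk > maximum:       # cannot grow past Maximum memory
            raise MemoryError("process terminates gracelessly")
        ceiling += chunk                    # request one more chunk; never returned
    return used + needed, ceiling

# fits only after a GC pass plus one extra chunk from the OS
print(allocate(needed=30, used=90, ceiling=100, maximum=160, chunk=20, gc_frees=5))
```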
For the archives, go to: http://lists.sageit.com/forte-users and use
the login: forte and the password: archive. To unsubscribe, send in a new
email the word: 'Unsubscribe' to: [email protected]
Hi Kieran,
According to your description, you want to figure out the optimal number of records per partition, right? As per my understanding, this number varies with your hardware: the better the hardware you have, the more records per partition.
The earlier version of the performance guide for SQL Server 2005 Analysis Services Performance Guide stated this:
"In general, the number of records per partition should not exceed 20 million. In addition, the size of a partition should not exceed 250 MB."
Besides, the number of records is not the primary concern here. Rather, the main criterion is manageability and processing performance. Partitions can be processed in parallel, so the more there are, the more can be processed at once. However, the more partitions you have, the more things you have to manage. Here are some links which describe partition optimization:
http://blogs.msdn.com/b/sqlcat/archive/2009/03/13/analysis-services-partition-size.aspx
http://www.informit.com/articles/article.aspx?p=1554201&seqNum=2
Regards,
Charlie Liao
TechNet Community Support -
RE: (forte-users) Optimal number of records to fetch from Forte Cursor
Guys,
The behavior (1 fetch of 20,000 vs 2 fetches of 10,000 each) may also be DBMS-related. There is potentially high overhead in opening a cursor and initially fetching the result table. I know this covers a great deal of DBMS technology territory here, but one explanation is that the same physical pages may have to be read twice when performing the query in 2 fetches as compared to doing it in one shot. Physical I/O is perhaps the most expensive part of a query (vis-a-vis resources). Just a thought.
Hello members
I have a detail block.
How can I get the number of the record? If the cursor is in the second record, I want it to show me "two", as shown in the status bar on the left side.
Thanks
:SYSTEM.cursor_record gives the record where the cursor is located, i.e. the current record.
I hope I understood your problem correctly.