Unique RD and LSR memory consumption
Hi,
It is a best practice to use a unique per-VRF/per-LSR Route Distinguisher to allow load balancing for dual-homed customer sites (when dual-homing is to two separate PEs).
Let's say you have a large customer which connects to many PEs and you follow the per-VRF/per-LSR approach for it. For ease of deployment you choose the RD to be in the format
IP Address:Value, where IP Address is the loopback address used as the BGP update source. The loopback IP address is different on each LSR. Value could be a running number from 1 to max. If you make sure that the Customer VRF gets a different Value on each ingress/egress LSR, you will get a completely different RD each time.
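As an illustration of the scheme just described (a hypothetical sketch - the loopback addresses and values below are invented, not taken from any real deployment), constructing per-VRF/per-LSR RDs might look like:

```python
def make_rd(loopback_ip: str, value: int) -> str:
    """Build an RD string in the IP Address:Value format."""
    return f"{loopback_ip}:{value}"

# One customer VRF configured on three LSRs: each LSR has a different
# loopback, and the VRF gets a different running Value on each one,
# so every resulting RD is unique.
rds = [
    make_rd("10.0.0.1", 1),  # LSR1
    make_rd("10.0.0.2", 1),  # LSR2 (different loopback is enough)
    make_rd("10.0.0.3", 2),  # LSR3 (different Value as well)
]
print(rds)
```

With unique RDs, the same customer prefix advertised from each PE is carried as a distinct VPNv4 route, which is what enables the load balancing mentioned above.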
Now let's say you have LSRA, which gets MP-BGP updates with Customer prefixes from many other LSRs. Since the RDs on the incoming updates will differ from the RD assigned to the Customer VRF on LSRA, LSRA will import the prefixes into the proper Customer VRF and also place them into the null BGP table (null VRF); this is normal behavior.
Customer routing will work OK, but with many prefixes received this will cause huge memory consumption, because each prefix is stored twice. Now take into account many Customers working in this manner...
How do MPLS/VPN Service Providers deal with this? Maybe they do not follow unique RDs on a per-VRF/per-LSR basis for all Customers...
Thanks,
David
Hello,
while agreeing with Paresh, I would like to add another point of view. From a customer perspective, load sharing might be attractive - my experience is, in fact, that it really is (for whatever reason...).
So the feature can be used as a service differentiator for the SP.
As you already pointed out, the memory needed will (at least) double on a PE. But memory is not always the limiting factor; the number of interfaces, CPU etc. can be as well.
And while a PE today has 256 MB of RAM or more, an average customer with fewer than 1000 routes does not really create a memory hog. So in brief: higher memory utilization is not, in my opinion, such a strong argument that I would skip the whole feature. But every SP has to make its own decision.
Hope this helps! Please rate all posts.
Regards, Martin
Similar Messages
-
I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
Memory consumption during re-indexing.
When updating projects I experienced frequent crashes going from CS5 to CC and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process causing excessive RAM consumption and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting to limit the RAM consumed by Premiere to much less than the 24GB RAM in my system.
I checked that clips played; there were no errors generated; no clips showed as Offline.
Some clips now "Offline: Importer" in CC 2014
Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
The same clips play perfectly well in, for example, Windows Media Player.
I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
It's not the sort of problem expected with a premium product.
After this experience, I will not be updating premiere mid-project ever again.
How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
Any help appreciated. Thanks.
To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe have a workaround or at least know about this issue and are working on it is unknown.
It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB of RAM in my system, and regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off, and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts the re-indexing would eventually complete and the project would be OK after that.
It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
This is very poor support for what is supposed to be a premium product.
Lesson learned: I won't be upgrading Premiere again mid-project after these experiences.
-
Check Process memory consumption and Kill it
Hello
I have just installed Orchestrator and have a problem that I think is perfect for Orchestrator to handle.
I have a process that sometimes hangs, and the only way to spot it is that its memory consumption has stopped growing.
The process is started every 15 minutes and scans a folder; if it finds a file, it reads the file into a system. You can see that it is working by the increasing memory consumption. If the read fails, the memory consumption stops growing. The process is still running and responding, but it is hung.
I'm thinking about building a runbook that checks the memory consumption every 5 minutes and compares it with the previous value. If the last three values are the same, I will kill the process and start it again.
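The comparison logic in that runbook could be sketched like this (a hypothetical illustration in Python; in a real runbook the samples would come from Orchestrator activities or a script, and the kill/restart would be separate activities):

```python
def is_hung(samples, window=3):
    """Return True when the last `window` memory samples are identical,
    i.e. memory consumption has stopped growing and the process is
    presumed hung."""
    if len(samples) < window:
        return False  # not enough history collected yet
    last = samples[-window:]
    return all(s == last[0] for s in last)

# Samples taken every 5 minutes (values in MB, invented for the demo):
print(is_hung([120, 134, 150]))        # still growing -> False
print(is_hung([134, 150, 150, 150]))   # last three equal -> True
```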
My problem is that I have not found a way to check the memory consumption of a process.
I have set up a small test, just to verify that I get the correct process, with the activities Monitor Process -> Get Process Status -> Append Line (process name).
But how do I get the process memory consumption?
/Anders
Now that I think about it a bit more, I don't think there will be an easy way to set up a monitor for your situation in SCOM. Not that it couldn't be done, just not easily. Getting back to SCORCH: what you are trying to do isn't an everyday kind of scenario. I don't think there is a built-in activity for this.
The hardest thing to overcome, whether you use SCORCH or SCOM, is likely going to be determining the error condition of three consecutive samples of the same memory usage. You'll need a way to track the samples. I can't think of a good way to do this without utilizing scripting.
-
BW data model and impacts to HANA memory consumption
Hi All,
As I consider how to create BW models where HANA is the DB for a BW application, it makes sense to move the reporting target from Cubes to DSOs. The next logical progression of thought is that the DSO should store the lowest granularity of data (document level). So a consolidated data model that reports on cross-functional data would combine sales, inventory and purchasing data, all stored at document level. In this scenario:
Will a single report execution that requires data from all 3 DSOs use more memory than it would with the 3 DSOs aggregated, say, at site/day/material? In other words: lower granularity data = higher memory consumption per report execution?
I'm thinking that more memory is required to aggregate the data in HANA before sending it to BW. Is aggregation still necessary to manage execution memory usage?
Regards,
Dae Jin
Let me rephrase.
I got an EarlyWatch that said my dimensions on one of cube were too big. I ran SAP_INFOCUBE_DESIGNS in SE38 in my development box and that confirmed it.
So, I redesigned the cube, reactivated it and reloaded it. I then ran SAP_INFOCUBE_DESIGNS again. The cube doesn't even show up on it. I suspect I have to trigger something in BW to make it populate for that cube. How do I make that happen manually?
Thanks.
Dave
-
Very high memory consumption of B1i and cockpit widgets
Hi all,
finally I have managed to install B1i successfully, but I think something is wrong.
Memory consumption in my test environment (Win2003, 1024 MB RAM), while no other applications and no SAP addons are started:
tomcat5.exe 305 MB
SAP B1 client 315 MB
SAP B1DIProxy.exe 115 MB
sqlservr.exe 40 MB
SAPB1iEventSender.exe 15 MB
others less than 6 MB and almost only system based processes...
For each widget I open (3 default widgets, one on each standard cockpit), tomcat grows bigger and leaves less memory for the SQL server, which has to fetch all the data (several seconds at 100% CPU usage).
Is this heavy memory consumption normal? What happens if several users are logged into SAP B1 using widgets?
Thanks in advance
Regards
Sebastian
Hi Gordon,
so this is normal? Then I guess the dashboards are not suitable for many customers, especially for those who are working on a terminal server infrastructure. Even if the tomcat server has this memory consumption only on the SAP server, when each client needs about 300 MB (plus some hundred more for the several addons they need!), I could not activate the widgets. And generally SAP B1 is not the only application running at the customer's site. Suggesting to buy more memory for some Xcelsius dashboards won't convince the customer.
I hope that this feature will be improved in the future, otherwise the cockpit is just an extension of the old user menu (except for the brilliant quickfinder on top of the screen).
Regards
Sebastian
-
How to measure memory consumption during unit tests?
Hello,
I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
I would like to apply this when running a batch of some test suite targetting tests that exercise memory-sensitive operations.
The intent is, to verify that a modification of code in this area does not introduce regression (raise) of memory consumption.
I would include it in the nightly build and monitor the evolution of the summary figure (a-ha, the "userAccount" test suite consumed 615MB last night, compared to 500MB the night before... what did we check in yesterday?)
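The nightly comparison itself can be trivial - flag a regression when the suite's figure grows beyond a tolerance (a hypothetical sketch; the threshold and figures are invented):

```python
def memory_regressed(previous_mb: float, current_mb: float,
                     tolerance: float = 0.10) -> bool:
    """Flag a regression when tonight's figure exceeds last night's
    by more than the relative tolerance (default 10%)."""
    return current_mb > previous_mb * (1 + tolerance)

# The "userAccount" suite from the example above:
print(memory_regressed(500, 615))  # 615 MB > 550 MB -> True
print(memory_regressed(500, 540))  # within 10% -> False
```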
Running on Win32, the system-level info on memory consumed is known not to be accurate.
Using perfmon is more accurate, but it seems overkill - plus it's difficult to automate; you have to attach it to an existing process...
I've looked at the hprof tool included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular, there isn't a "summary line" of the total memory consumed...
What tools do you use/suggest?
However, this requires manual code in my unit test classes themselves, e.g. in my setUp/tearDown methods.
I was expecting something more orthogonal to the tests, which I could activate or not depending on the purpose of the test.
Some IDEs display memory usage and execution time for each test/group of tests.
If I don't have another option, OK, I'll wire my own pre/post memory counting, maybe using AOP, and will activate memory measurement only when needed.
If you need to check the memory used, I would do this.
You can do the same thing with AOP. Unless you are using an AOP library, I doubt it is worth additional effort.
Have you actually used your suggestion to automate memory consumption measurement as part of daily builds?
Yes, but I have fewer than a dozen tests which fail if the memory consumption is significantly different.
I have more tests which fail if the execution time is significantly different.
Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
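The wrapper approach described above - checking inside the test method rather than in setUp/tearDown - can be sketched as follows. The thread is about Java's Runtime methods; this is only an analogous illustration in Python using tracemalloc, with an invented limit:

```python
import tracemalloc

def run_with_memory_check(real_test, limit_bytes):
    """Run a test and fail if its peak allocation exceeds limit_bytes."""
    tracemalloc.start()
    try:
        real_test()
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    assert peak <= limit_bytes, f"peak {peak} B exceeds limit {limit_bytes} B"
    return peak

def sample_test():
    data = [0] * 100_000  # allocates roughly 800 KB on CPython
    assert sum(data) == 0

peak = run_with_memory_check(sample_test, limit_bytes=5_000_000)
print(f"peak allocation: {peak} bytes")
```

Unlike the setUp/tearDown variant, the measurement here brackets exactly the code under test, so different tests can use different limits.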
Plus, I did not understand your suggestion, can you elaborate?
- I first assumed you meant freeMemory(), which, as you suggest, is not accurate, since it returns "an approximation of [available memory]"
freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
- I re-read it and now assume you do mean totalMemory(), which unfortunately will grow only when more memory than the initial heap setting is needed.
More memory is needed when more memory is used. Unless your test uses a significant amount of memory, there is no way to measure it reliably. I.e., if a GC is performed during a test, the test can appear to use less memory than it consumes.
- Eventually, I may need to include calls to System.gc(), but I seem to remember it is best-effort only (endless discussion) and may not help accuracy.
If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally.
-
Query on memory consumption during SQL
Hi SAP Gurus,
Could I kindly request for your inputs concerning the following scenario?
To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk. Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle for this (for now).
The program retrieves from the database table using a where clause filtering on a single company code value. Kindly note that company code is not the only key in the table. To help with memory consumption, the original developer employed retrieval by packages (also note that the total length of each record is 1803 bytes...).
The problem encountered is as follows:
- Using company code A, retrieving for 700k entries in packages of 277, the program ran without any issues.
- However, using company code B, retrieving 1.8m entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump. This error occurs the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
About the biggest difference between the two company codes is the number of corresponding records they have in the table. I've checked whether company code B had more values in its columns than company code A, but they're just the same.
What I do not quite understand is why memory consumption changed just by changing the company code in the selection. I thought that the memory consumed by both company codes should be the same... at least in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records at once. However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
Could someone please enlighten me on how memory is consumed during database retrieval?
Thanks!
Hi,
with FAE (FOR ALL ENTRIES), the whole query is executed even for a single record in the itab, and all results for the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI, not by the database.
If you use PACKAGE SIZE, the result set is buffered in a system table in the DBI (which allocates memory from your user quota). From there the packages are built and handed over to your application (into table lt_temp).
see recent ABAP documentation:
Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
What you should do:
calculate the size needed for your big company code B: the number of rows multiplied by the line length. That is the minimum amount you need for your user memory quota (quotas can be checked with ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE.
SELECT * FROM <custom table>
INTO TABLE lt_temp
FOR ALL ENTRIES IN lt_bukrs
WHERE bukrs = lt_bukrs-bukrs
ORDER BY primary key.
This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES. Since with FAE it is buffered in the DBI anyway (and subtracted from your quota), you can do it right away and avoid saving portions twice (in the DBI buffer and a portion of that in the package in lt_temp).
If the amount of memory is still too big, you have to either increase the quotas, select less data (additional where conditions), or avoid using FAE in this case so as not to read all the data in one go.
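The sizing arithmetic Hermann describes (rows times line length) is easy to work through for the figures quoted earlier in the thread - 1.8m entries at 1803 bytes per record:

```python
rows = 1_800_000     # entries for company code B
row_length = 1803    # total length of each record, in bytes

total_bytes = rows * row_length
total_mb = total_bytes / (1024 * 1024)
print(f"~{total_mb:.0f} MB")  # roughly 3095 MB, i.e. about 3 GB
```

That raw estimate, before any internal-table overhead, may explain why company code B dumps while company code A (700k rows, roughly 1.2 GB by the same arithmetic) squeezes through.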
Hope this helps,
Hermann
-
Integration Builder Memory Consumption
Hello,
we are experiencing very high memory consumption in the Java IR designer (not the Directory), especially when loading normal graphical IDoc-to-EDI mappings, but also for normal IDoc-to-IDoc mappings. Examples (RAM on the client side):
- open normal idoc to idoc mapping: + 40 MB
- idoc to edi orders d93a: + 70 MB
- a second idoc to edi orders d93a: + 70 MB
- Execute those mappings: no additional consumption
- third edi to edi orders d93a: + 100 MB
(all mappings in the same namespace)
After three more mappings, RAM on the client side reaches 580 MB and then a Java heap error occurs. Sometimes also OutOfMemory; then you have to terminate the application.
Obviously the mapping editor is not well optimized for RAM usage. It seems not to cache the in/out message structures, or it loads a great deal of dedicated functionality for every mapping.
So we cannot really call that fun. Working is very slow.
Do you have similar experiences? Are there workarounds? I know the JNLP mem setting parameters, but the problem is the high load of each mapping, not only the overall maximum memory.
And we are using only graphical mappings, no XSLT!
We are on XI 3.0 SP 21
CSY
Hi,
Apart from raising the tablespace...
Note 425207 - SAP memory management, current parameter ranges
you have configure operation modes to change work processes dynamically using rz03,rz04.
Please see the below link
http://help.sap.com/saphelp_nw04s/helpdata/en/c4/3a7f53505211d189550000e829fbbd/frameset.htm
You can contact your Basis administrator for the necessary action.
-
High memory consumption in XSL transformations (XSLT)
Hello colleagues!
We have the problem of very high memory consumption when transforming XML files with CALL TRANSFORMATION.
Code example:
CALL TRANSFORMATION /ipro/wml_translate_cls_ilfo
SOURCE XML lx_clause_text
RESULT XML lx_temp.
lx_clause_text is a WordML xstring (i.e. it is a Microsoft Word file in XML format) and can therefore not easily be split into several parts. Unfortunately this string can get very large (e.g. 50MB). The problem is that CALL TRANSFORMATION seems to allocate memory for the source and result xstrings but doesn't free them after the transformation.
So in this example this would mean that the transformation allocates ~100MB of memory (50MB for the source, ~50MB for the result) and doesn't free it. Multiply this by a couple of transformations and a good number of users, and you see we get into trouble.
I found this note regarding the problem: 1081257
But we couldn't figure out how this problem could be solved in our case. The note proposes to "use several short-running programs". What is meant by this? By the way, our application is built with Web Dynpro for ABAP.
Thank you very much!
With best regards,
Mario Düssel
Hi,
q1. How come the RAM consumption increased to 99% on all three boxes?
If we continue with the theory that network connectivity was lost between the hosts, the Coherence servers on the local hosts would form their own clusters. Prior to the "split", each cache server would hold 1/12 of the primary and 1/12 of the backup (assuming you have one backup). Since Coherence avoids selecting a backup on the same host as the primary when possible, the 4 servers on each host would hold 2/3 of the cache. After the split, each server would hold 1/6 of the primary and 1/6 of the backup, i.e., twice the memory it previously consumed for the cache. It is also possible that a substantial portion of the missing 1/3 of the cache may be restored from the near caches, in which case each server would then hold 1/4 of the primary and 1/4 of the backup, i.e., three times the memory it previously consumed for the cache.
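The fractions in that explanation can be verified with a little arithmetic (12 cache servers on 3 hosts, one backup copy, figures as given in the reply):

```python
from fractions import Fraction

# Before the split: each of 12 servers holds 1/12 primary + 1/12 backup.
before = Fraction(1, 12) + Fraction(1, 12)

# Each host's 4 servers held 2/3 of the cache; after the split that 2/3
# is redistributed over the 4 servers, as primary plus backup.
after = 2 * (Fraction(2, 3) / 4)

# If near caches restore the missing third: 1/4 primary + 1/4 backup.
restored = Fraction(1, 4) + Fraction(1, 4)

print(after / before)     # 2 -> memory per server doubles
print(restored / before)  # 3 -> memory per server triples
```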
q2. Where is the cache data stored in the Coherence servers - in which memory?
The cache data is typically stored in the JVM's heap memory area.
Have you reviewed the logs?
Regards,
Harv
-
8i Memory Consumption in Solaris 8
Hi,
This time I'm hoping I've posted this question to the right place. :-)
This is the memory consumption I captured on my Solaris 8 Netra T1 with 256 MB RAM after running the
prstat -t command. User oracle is the Oracle 8.1.6 DB owner.
Initial Consumption
===================
NPROC USERNAME SIZE RSS MEMORY TIME CPU
4 oracle 8760K 5976K 2.4% 0:00.00 0.1%
25 root 52M 21M 8.7% 0:00.04 0.1%
1 daemon 2488K 1016K 0.4% 0:00.00 0.0%
After Starting the Listener
==========================
NPROC USERNAME SIZE RSS MEMORY TIME CPU
5 oracle 19M 11M 4.6% 0:00.00 0.4%
25 root 52M 21M 8.7% 0:00.04 0.1%
1 daemon 2488K 1016K 0.4% 0:00.00 0.0%
After Starting Oracle
=====================
NPROC USERNAME SIZE RSS MEMORY TIME CPU
17 oracle 1792M 1502M 99% 0:00.01 0.4%
25 root 52M 21M 1.4% 0:00.04 0.1%
1 daemon 2488K 1016K 0.1% 0:00.00 0.0%
First Access
============
NPROC USERNAME SIZE RSS MEMORY TIME CPU
18 oracle 1941M 1631M 99% 0:00.08 4.3%
1 daemon 2488K 1016K 0.1% 0:00.00 0.0%
25 root 52M 21M 1.3% 0:00.04 0.0%
And the memory stays at 99%; the consumption is not coming down.
Any idea why this is happening? If it's a problem with the configuration, please let me know how to correct it.
Thx
Shafeen
How big is your SGA? This is the memory Oracle is using.
(if you don't know: see init.ora
-> shared_pool
-> large_pool
-> java_pool
-> buffer_cache * db_block_size
-> log_buffer )
-
I have a query that returns about 10MB worth of data when run against my db - it looks something like the following:
'for $doc in collection("VcObjStore")/doc
where $doc[@type="Foo"]
return <item>{$doc}</item>'
When I run this query in dbxml.exe, I see the memory footprint (of dbxml.exe) increase by 125MB. Once the query finishes, it comes back down.
I expected memory consumption to be somewhat larger than what the query actually returns, but this seems quite extreme.
Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
Thanks
Hi Ron,
Thanks for a quick reply!
- I wasn't actually benchmarking DBXML. We observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help to start with that.
- Yes, an environment was created for this db. Here is the code we used to set it up:
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setInitializeLocking(true);
envConfig.setInitializeCache(true);
envConfig.setAllowCreate(true);
envConfig.setErrorStream(System.err);
envConfig.setCacheSize(1024 * 1024 * 100); // 100 MB cache
- I'd like an explanation of the reasons behind the performance difference between these two queries:
Query 1:
dbxml> time query 'for $doc in collection("VcObjStore")/doc
where $doc[@type="VirtualMachine"]
return $doc'
552 objects... <snip>
Time in seconds for command 'query': 0.031
Query 2:
dbxml> time query 'for $doc in collection("VcObjStore")/doc
where $doc[@type="VirtualMachine"]
return <val>{$doc}</val>'
552 objects... <snip>
Time in seconds for command 'query': 5.797
- Any way to make query #2 go as fast as #1?
Thanks!
-
Memory consumption of queries in workbooks
We have an issue with the execution of a workbook which contains several queries. The queries require a great deal of memory, which finally leads to a short dump (TSV_TNEW_PAGE_ALLOC_FAILED). We found that during execution of the workbook the memory is not released after a query has been executed, and therefore at some point the dump occurs. However, if the queries are refreshed manually, one after the other in the workbook, the memory is released, and the workbook can finally be executed via this workaround.
My question is whether anyone has an idea if it is possible to apply a setting somewhere so that the queries release their memory after execution when they are all refreshed together in the workbook.
Thanks a lot in advance for any hint & kind regards,
Hans-Jörg
Hi,
Try this,
You may be able to work around the problem by increasing the free memory available via parameter em/initial_size_MB (contact your Basis team or refer to note 835474).
Also concentrate on parameter ztta/roll_extension (refer to note 146289).
Try increasing the parameter abap/heap_area_dia from tcode RZ11.
Also check the following notes in detail as well,
649327 Analysis of memory consumption
425207 SAP memory management, current parameter ranges
369726 TSV_TNEW_PAGE_ALLOC_FAILED
185185 Application: Analysis of memory bottlenecks
If the issue persists, please review SAP Note 779123 and the query design.
check this,
http://scn.sap.com/thread/288222
http://www.sapfans.com/forums/viewtopic.php?f=3&t=109557
regards,
anand.
-
Memory Consumption: Start A Petition!
I am using SQL Developer 4.0.0.13, Build MAIN 13.80. I was praying that SQL Developer 4.0 would no longer use so much memory and, in doing so, slow to a crawl. But that is not the case.
Is there a way to start a "petition" to have the SQL Developer team focus on the product's memory usage? This problem has been there for years now, with many posts and no real answer.
If there isn't a place to start a "petition", let's do something here that Oracle will respond to.
Thank you
Yes, at this point (after restarting) SQL Developer is functioning fine. Windows reports 1+ GB of free memory. I have 3 worksheets open, all connected to two different DB connections. Each worksheet has 1 to 3 pinned query results. My problem is that after working in SQL Developer for a day or so, with perhaps 10 worksheets open across 3 database connections, and having queried large data sets and performed large exports, it becomes unresponsive even after closing worksheets. It appears to me that it does not clean up after itself.
I will use Java VisualVM to compare memory consumption and see if it reports that SQL Developer is releasing memory, but in the end I don't care about that. I just need a responsive SQL Developer, and if I need to close some worksheets at times I can understand doing so, but at this time that does not help.
-
Hi,
Need some SQL expertise here. May I know how much memory (RAM) a simple query like 'SELECT SUM(Balance) FROM OCRD' consumes?
What about a query like
select (select sum(doctotal) from ordr) + (select sum(doctotal) from odln) + (select sum(doctotal) from oinv)
How much memory would it normally take? The reason is that I have a query quite similar to this, and it would be run quite often. So I wonder if it is feasible to use this type of query without slowing the server to a crawl.
Please note that the real query would include JOINs and such. Thanks.
Any information is appreciated
Hi Melvin,
Not sure I'd call myself an expert, but I'll have a go at an answer.
I think you are going to need to set up a test environment and then stress test your solution to see what happens. There are so many different variables that affect the memory consumption that no-one is likely to be able to say just what the impact will be on your server. SQL Server, by default will allocate 1024Kb to each query but, of course, quite a number of factors will affect whether SQL needs more memory than this to execute a particular query (e.g. the number of joins, the locks created, whether the data is grouped or sorted, the size of the data etc etc). Also, SQL will release memory as soon as it can (based on its own algorithms) so a query that is run periodically has much less impact on the server than a query that will be run concurrently by multiple users. For these reasons, the impact can only really be assessed if you test it in a real-world scenario.
If you've ever seen SQL Server memory usage when XL Reporter is running a very large report then you'll know that this is a very memory hungry operation. XL Reporter bombards SQL with a huge number of separate little queries and SQL Server starts grabbing significant amounts of memory to fulfill these queries. As the queries are coming so fast, SQL hasn't yet got around to releasing the memory used by previous queries so SQL instead grabs available memory from the server.
You'll get better performance and scalability by using stored procedures, but SDK certification does not allow the use of SPs in the SBO databases.
Hope this helps,
Owen
-
Appendbytes with larger memory consumption
hi all,
when I use appendBytes, my memory consumption becomes too large; how can I reduce it?
I tried closing the NetStream, seek(0) and appendBytesAction; it didn't help.
Thanks.
Yes, the URLStream - but I put the data into a ByteArray in the ProgressEvent, then use appendBytes with a Timer.