9 shared objects performance question
I have 9 shared objects, 8 of which contain dynamic data that I can't really consolidate into a single shared object, because each one often has to be cleared. My question is: what performance issues will I experience with this number of shared objects? I may be wrong in thinking that 9 shared objects is a lot. Anybody with experience using multiple shared objects, please respond.
I've used many more than 9 SOs in an application without issue. I suppose what it really comes down to is how many clients are connected to those SOs and how often each one is being updated.
Similar Messages
-
Snap shot remote shared object question
Hi, I'd like to take a snapshot of video from a webcam and then store that snapshot either as a file or as an object in a remote shared object. Anybody know how that can be accomplished? I also need to know how to store a picture in a remote shared object; the picture in this case is being brought into Flash using loadMovie.
If you're using loadMovie, you need to extract an image from the recorded FLV file. FFMPEG does this quite nicely.
As for the shared object: you can use it to store/sync the location of the image file, but you couldn't store the image itself in the SO (well... you could, but you'd have to break it down to pixel data, and that would make for an awfully big SO). -
How to Get property values from Shared Object in client's load event - Very urgent
I am using a shared object to share data between two users. The first user connects to the shared object and sets some values on it. Assume the second user has not yet connected to the shared object.
Now, when the second user connects to the server and tries to get the property set by the first user, he can get the shared object but not the properties set by the first user. I have observed that the second user can sometimes see these properties during the "sync" event. But I would like the second user to be able to get these values at any stage (e.g. in the load event). Whenever the second user tries to get a property of the shared object, the object resets the actual property value and then returns the reset value.
Has anyone faced such an issue while using a shared object between two users? If so, I would appreciate your suggestions on the following questions:
1) Is there any way to get all the properties of the shared object before the sync event fires? I want to get them immediately when the second user connects to the application, and perform the next task based on the values stored in the shared object.
2) Is it possible for the second user to check whether any property has been set by the first user, so that the second user can use the property instead of resetting it?
Any kind of help would be greatly appreciated.
Thank You. -
Query Time Out and Shared Objects
Hi,
I have 2 questions.
1) From ST22 I can get the list of users and time-out information, but how can I find the related query that timed out for that user? Until now I was getting the query information from ST22 by taking the ABAP program name that starts with "G" and finding the related query in the table RSRREPDIR. But recently I have been getting time-outs in ST22 for ABAP programs starting with "G" that don't give me any query information in RSRREPDIR or via SE38.
Can anyone guide me on how to get the query info from ST22? We have many time-outs occurring in production, and I have to trace them and improve the query performance.
2) Second question: how can I get the list of queries that have shared objects such as variables, templates, key figures, structures, etc.?
Is there a specific table or transaction that provides that? We have to delete some queries in production, but before that we want this list.
A.H.P., I am counting on you extensively for help because I have seen that you reply to everybody, and I have asked the same question in another forum on SDN but got no reply!
Waiting for help...
Hi Bhanu,
thank you for your prompt reply.
Well, we don't have workbooks at all. We got the inventory list of all the queries that need to be deleted in production. We are deleting very, very old queries that are not even in use any more. I know the variables, structures, etc. remain, but I still need to know how to find the queries with shared objects. I want that information in hand before I delete anything in the development system and transport it to production, so I will have proof that I didn't delete anything important. I hope you understand what I mean and why I need to know this!
I found a few tables related to this but can't relate them to get the info I need: RSRREPDIR, V_ELTDIR_TXT, RSZELTXREF, etc.
The where-used list in BEx gives us the web template names, nothing else; I mean, not the shared objects. And if it does give shared object info, then I guess I'm not aware of it, because for the few queries I checked I didn't get that info in the where-used list.
In the Metadata Repository? I didn't get it! How can I find queries with shared objects in the Metadata Repository? -
Shared Object - Prompt Data Permissions Dialog
Hello,
I'm creating a small app to run from CD-ROM/local installation that will use multiple shared objects for data storage. To ensure proper saving without surprising the user with a permissions dialog unexpectedly, I'd like to request unlimited data storage on first-time app launch. Joey Lott shows how to do this in the ActionScript Cookbook:
request = mySO.flush(1024 * 500);
My question is: can I perform this permissions request a single time, with a generic app SO, in a global fashion, so that the permissions would be set for any SOs created during the use of the product (all written to the same SO directory)? Or do I have to request permissions for each and every SO created? Since the latter would be unacceptable from a user-experience standpoint, that would mean stuffing all app data into a single SO, which doesn't seem so great from a data-organization perspective...
I really appreciate your attention and help on this!
Thanks in advance,
-Maura
Hmm. I experimented a bit, and it seems that once the permission is set it applies to the Flash Player installation globally, not per SO, not even per domain...
Or, please correct me if I'm mistaken.
Thanks. -
Losing Connection to Remote Shared Object
Hello,
For some reason, and at random intervals, I lose connection to my remote shared object. When it first loads, the data loads up A-OK. However, after 5 minutes I lose connection to the FMS entirely, and no error is thrown at all.
At the same time I have another net connection to a live stream, and it never loses connection. I'm looking for some direction and ideas on what could be the issue. See below:
import com.ambient.classes.AmbientUserData;
[Bindable] public var Branding_so:SharedObject;

public function init():void {
    Application.application.lblLoading.text = "Initializing, Please Wait...";
    AmbientUserData.AmbientServer1();
    AmbientUserData.nc.addEventListener(NetStatusEvent.NET_STATUS, netStatusHandler);
    AmbientUserData.AmbientServer2();
}

public function netStatusHandler(e:NetStatusEvent):void {
    //mx.controls.Alert.show(e.info.code + tFlag);
    switch (e.info.code) {
        case "NetConnection.Connect.Success":
            Application.application.lblLoading.text = "Connected.";
            initRSO();
            break;
        case "NetStream.Play.StreamNotFound":
            break;
        case "NetConnection.Connect.Closed":
            mx.controls.Alert.show('Connection Lost');
            break;
        case "NetConnection.Call.Failed":
            mx.controls.Alert.show('Connection Failed.');
            break;
    }
}

public function initRSO():void {
    Application.application.lblLoading.text = "Loading Server Side Components...";
    // Colors Tab
    Branding_so = SharedObject.getRemote("bgColor1", AmbientUserData.nc.uri, true); // 1
    Branding_so = SharedObject.getRemote("bgColor2", AmbientUserData.nc.uri, true); // 2
    Branding_so = SharedObject.getRemote("txtColor", AmbientUserData.nc.uri, true); // 3
    Branding_so = SharedObject.getRemote("layoutID", AmbientUserData.nc.uri, true); // 4
    Branding_so.addEventListener(SyncEvent.SYNC, rsoHandler);
    Branding_so.connect(AmbientUserData.nc);
    Application.application.lblLoading.text = "Complete.";
}

public function rsoHandler(e:SyncEvent):void {
    Application.application.controlAmbientControl.currentState = Branding_so.data.layoutID;
    Application.application.setStyle("backgroundGradientColors", [Branding_so.data.bgColor1, Branding_so.data.bgColor2]);
    Application.application.setStyle("color", Branding_so.data.txtColor);
}
Hi,
Has this only started happening? If you open the RDP session on the secondary screen and then maximise it, does the same happen?
Any errors in your log files?
maybe this helps:
http://superuser.com/questions/744808/why-did-remote-desktop-performance-drop-when-switching-from-windows-server-2012r
Hope this helps. -
Exception handling not working in a GCC-compiled shared object
Hello,
I am facing a very strange issue on the Solaris x86_64 platform with C++ code compiled using gcc 3.4.3.
I have a compiled shared object that is loaded into the web server's process space during initialization. Whenever any exception is raised in the code base, it is not caught by the exception handler, even though the handlers are there. The same code has been working fine for a long time on the Solaris x86, SPARC, and Linux platforms.
With dbx, I am getting the following stack trace:
dbx: internal error: reference through NULL pointer at line 973 in file symbol.cc
[1] 0x11335(0x1, 0x1, 0x474e5543432b2b00, 0x59cb60, 0xfffffd7fffdff2b0, 0x11335), at 0x11335
---- hidden frames, use 'where -h' to see them all ----
=>[4] __cxa_throw(obj = (nil), tinfo = (nil), dest = (nil), , line 75 in "eh_throw.cc"
[5] OBWebGate_Authent(r = 0xfffffd7fff3fb300), line 86 in "apache.cpp"
[6] ap_run_post_config(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0x444624
[7] main(0x0, 0x0, 0x0, 0x0, 0x0, 0x0), at 0x42c39a
I am using the following options.
Compile:
/usr/sfw/bin/g++ -c -I/scratch/ashishas/view_storage/build/coreid1014/palantir/apache22/solaris-x86_64/include -m64 -fPIC -D_REENTRANT -Wall -g -o apache.o apache.cpp
Link option is
/usr/sfw/bin/g++ -shared -m64 -o apache.so apache.o -lsocket -lnsl -ldl -lpthread -lthread
At line 86 we are just throwing a simple exception, which has catch handlers in place; we also have a catch(...) handler.
The surprising thing is that the same issue is not observed if we build the code as an executable. The issue only comes up when the shared object is loaded into the web server; if it is a plain shared object opened by any other executable, it works fine.
Can someone help me out? This is a completely blocking issue for us. Using the Solaris Sun Studio compiler is not an option as of now.
> shared object that loads into the web server's process space
> ... the same issue is not observed if we make it an executable.
When you "inject" your shared object into some other process, the well-being of your exception handling depends on that other process.
The mechanics of the x64 stack traversal (unwind) performed when you throw an exception are quite complicated, particularly involving a "nearly-standardized" _Unwind interface (say, _Unwind_RaiseException).
When we are talking about g++ on Solaris, there are two implementations of the unwind interface: one in libc and one in libgcc_s.so.
When you g++-compile an executable, it is directly linked with libgcc_s.so, and the _Unwind stuff resolves into libgcc_s.
When a g++-compiled shared object is loaded into a non-g++-compiled executable's process, the _Unwind calls are most likely already resolved into Solaris libc.
That's why you might see the difference.
Now, what exactly causes this difference can vary; I can only speculate.
All of this would not be a problem if the _Unwind interface were completely standardized and properly implemented. However, there are two issues currently:
* gcc (libstdc++ in particular) happens to use additional non-standard _Unwind calls which are not present in Solaris libc. Naturally, the implementation details of the _Unwind implementation in libc differ from those in libgcc_s, so when all the standard _Unwind routines resolve into the Solaris version and one non-standard _Unwind routine resolves into the gcc version, you get a problem (most likely that is what is happening to you).
* libc's _Unwind is sometimes unable to decipher the code generated by gcc. However, that is likely to happen with a modern gcc (say, 4.4+) and not that likely with 3.4.3.
Btw, you can check your call frame to see where the _Unwind calls come from:
where -h -l
If you have indeed stumbled on the "mixed _Unwind" problem, then the only chance for you is to play with the linker so that it binds the _Unwind stuff from your library directly into libgcc_s. I have not tried it myself, though.
regards,
__Fedor. -
Hi Gurus,
I'm trying to upgrade my test 9.2.0.8 RAC to 10.1 RAC. I cannot upgrade to 10.2 because of RAM limitations on my test RAC. The 10.1 Clusterware software was successfully installed and the daemons are up, with the OCR and voting disk created. Then, at the end of the installation of the RAC software, root.sh needs to be run. When I run root.sh, it gives the error: "while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory". I do have libpthread.so.0 in /lib. I looked it up on Metalink and found Doc ID 414163.1. I unset LD_ASSUME_KERNEL in vipca (unsetting LD_ASSUME_KERNEL was not required in srvctl, because there was no LD_ASSUME_KERNEL in srvctl). Then I tried to run vipca manually and received the following error: "Error 0(Native: listNetInterfaces:[3])". I'm able to see xclock and xeyes, so it's not a problem with X.
OS: OEL5 32 bit
oifcfg iflist
eth0 192.168.2.0
eth1 10.0.0.0
oifcfg getif
eth1 10.0.0.0 global cluster_interconnect
eth1 10.1.1.0 global cluster_interconnect
eth0 192.168.2.0 global public
cat /etc/hosts
192.168.2.3 sunny1pub.ezhome.com sunny1pub
192.168.2.4 sunny2pub.ezhome.com sunny2pub
192.168.2.33 sunny1vip.ezhome.com sunny1vip
192.168.2.44 sunny2vip.ezhome.com sunny2vip
10.1.1.1 sunny1prv.ezhome.com sunny1prv
10.1.1.2 sunny2prv.ezhome.com sunny2prv
My questions are:
Should ping on sunny1vip and sunny2vip already be working? As of now they don't work.
If you look at oifcfg getif: I initially had "eth1 10.0.0.0 global cluster_interconnect" and "eth0 192.168.2.0 global public"; then I created "eth1 10.1.1.0 global cluster_interconnect" with setif. Should it be 10.1.1.0 or 10.0.0.0? I looked at a subnet calculator and it says that for 10.1.1.1 the subnet is 10.0.0.0. In Metalink they had used 10.10.10.0, and hence I used 10.1.1.0.
Any ideas on resolving this issue would be very much appreciated. I have been searching the Oracle forums, Google, and Metalink, but all of them refer to Doc ID 414163.1, and it doesn't seem to work. Please help. Thanks in advance.
Edited by: ayyappa on Aug 20, 2009 10:13 AM
A step forward towards resolution, but I need some help from the gurus.
root# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.2.3 sunny1pub.ezhome.com sunny1pub
192.168.2.4 sunny2pub.ezhome.com sunny2pub
10.1.1.1 sunny1prv.ezhome.com sunny1prv
10.1.1.2 sunny2prv.ezhome.com sunny2prv
192.168.2.33 sunny1vip.ezhome.com sunny1vip
192.168.2.44 sunny2vip.ezhome.com sunny2vip
root# /u01/app/oracle/product/crs/bin/oifcfg iflist
eth1 10.0.0.0
eth0 192.168.2.0
root# /u01/app/oracle/product/crs/bin/oifcfg getif
eth1 10.0.0.0 global cluster_interconnect
eth0 191.168.2.0 global public
root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
****ORACLE_HOME environment variable not set!
ORACLE_HOME should be set to the main directory that contain oracle products. set and export ORACLE_HOME, then re-run.
root# export ORACLE_BASE=/u01/app/oracle
root# export ORACLE_HOME=/u01/app/oracle/product/10.1.0/Db_1
root# export ORA_CRS_HOME=/u01/app/oracle/product/crs
root# export PATH=$PATH:$ORACLE_HOME/bin
root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
VIP does not exist.
root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny1pub -o $ORACLE_HOME -A 192.168.2.33/255.255.255.0
root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl add nodeapps -n sunny2pub -o $ORACLE_HOME -A 192.168.2.44/255.255.255.0
root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny1pub -a
VIP exists.: sunny1vip.ezhome.com/192.168.2.33/255.255.255.0
root# /u01/app/oracle/product/10.1.0/Db_1/bin/srvctl config nodeapps -n sunny2pub -a
VIP exists.: sunny2vip.ezhome.com/192.168.2.44/255.255.255.0
Once I executed the add nodeapps command as root on node 1, config nodeapps on node 2 also reported that the VIP exists; the two statements above returned the same values on both nodes. After this I executed root.sh on both nodes and did not receive any errors; it said the CRS resources are already configured.
My questions to the gurus are as follows:
Should ping on the VIPs work? It does not work now.
srvctl status nodeapps -n sunny1pub(same result for sunny2pub)
VIP is not running on node: sunny1pub
GSD is not running on node: sunny1pub
PRKO-2016 : Error in checking condition of listener on node: sunny1pub
ONS daemon is not running on node: sunny1pub
[root@sunny1pub ~]# /u01/app/oracle/product/crs/bin/crs_stat -t
Name Type Target State Host
ora....pub.gsd application OFFLINE OFFLINE
ora....pub.ons application OFFLINE OFFLINE
ora....pub.vip application OFFLINE OFFLINE
ora....pub.gsd application OFFLINE OFFLINE
ora....pub.ons application OFFLINE OFFLINE
ora....pub.vip application OFFLINE OFFLINE
Will crs_stat and srvctl status nodeapps -n sunny1pub only work after I upgrade my database, or should they be working already? I chose to install just the 10.1.0.3 software, and after running root.sh on both nodes I clicked OK, and the End of Installation screen appeared. Under installed products I see the 9i home, the 10g home, and the CRS home. Under the 10g home and the CRS home I see the cluster nodes (sunny1pub and sunny2pub), so it looks like the 10g software is installed. -
Friends:
The latest Firefox won't launch. Here's what I get...
gardei@gardei-lab:~$ ./firefox/firefox
XPCOMGlueLoad error for file /home/gardei/firefox/libxpcom.so:
libxul.so: cannot open shared object file: No such file or directory
Couldn't load XPCOM.
Both .so files exist in ./firefox
Thanks. -- BG
Hello,
Certain Firefox problems can be solved by performing a ''Clean reinstall''. This means you remove the Firefox program files and then reinstall Firefox. Please follow these steps:
'''Note:''' You might want to print these steps or view them in another browser.
#Download the latest Desktop version of Firefox from http://www.mozilla.org and save the setup file to your computer.
#After the download finishes, close all Firefox windows (click Exit from the Firefox or File menu).
#Delete the Firefox installation folder, which is located in one of these locations, by default:
#*'''Windows:'''
#**C:\Program Files\Mozilla Firefox
#**C:\Program Files (x86)\Mozilla Firefox
#*'''Mac:''' Delete Firefox from the Applications folder.
#*'''Linux:''' If you installed Firefox with the distro-based package manager, you should use the same way to uninstall it - see [[Installing Firefox on Linux]]. If you downloaded and installed the binary package from the [http://www.mozilla.org/firefox#desktop Firefox download page], simply remove the folder ''firefox'' in your home directory.
#Now, go ahead and reinstall Firefox:
##Double-click the downloaded installation file and go through the steps of the installation wizard.
##Once the wizard is finished, choose to directly open Firefox after clicking the Finish button.
Please report back to see if this helped you!
Thank you. -
// Server side
application.onAppStart = function() {
    // ... code here
    application.users_so = SharedObject.get("users_so", false);
    // ... more code here
};
application.onConnect = function(newClient, userID) {
    // client object properties being set up, etc.
    application.users_so.setProperty(userID, newClient);
    // ... more code
    application.acceptConnection(newClient);
};
// Client side
nc.connect(rtmpURL, userID);
users_so = SharedObject.getRemote("users_so", nc.uri, false);
users_so.connect(nc);
When I call the above on the client side, I should receive a copy of all users stored in the server's shared object, indexable by users_so.data[userID], correct?
on the server side I perform the following:
application.acceptConnection(newClient);
newClient.call("updateStatus", null, userID);
On the client side the updateStatus method looks like this:
nc.syncQuestions = function(userID:String) {
    trace("--> " + users_so.data[userID] + " <--");
}; (is the ';' necessary?!?)
This prints: --> undefined <--
WHY?!? :( Is this because of a race condition between nc.connect() and users_so.connect()?
NOTE: My problem seems to arise when I try to re-connect (i.e. connect with the client, close the client, re-open the client).
Cheers
I have the feeling it's a timing problem, because you're not waiting for the onStatus event from the NetConnection before you connect the shared object.
Try adding an onStatus handler to your NetConnection and wait for a code of "NetConnection.Connect.Success". Then connect your shared object. -
How can I use a shared object already present in the system from Java?
Explanation:
There are shared objects present in the JDK which are used by Java itself. I want to know whether I can use the methods in any library file (shared object) that is already present in the system. Or the question can be put this way: how does Java call native methods, and can we do that explicitly in our code?
It isn't entirely clear what you mean by 'shared' objects and what the relationship is between these shared objects and calling native code.
There are no shared objects in the Java language, only the Java platform.
The platform system properties are exposed via the System class (java.lang package).
You are free to create your own shared objects by using static member access or some other mechanism.
Your access to methods in any of the APIs is dictated by the access type you have; normally public is the only completely open access allowing complete visibility.
You can call native methods; that's what JNI is for. Calling native methods in classes other than your own is generally done using the API provided by the developer(s) of those classes. -
Linux - libnqwebibotapi.so Cannot open shared object file
BI is running on my Linux box (Redhat). I have configured and started BI Scheduler. It starts without an issue.
When I create an iBot to be delivered immediately I get the error:
libnqwebibotapi.so: cannot open shared object file: No such file or directory [nQSError: 77002] Could not load iBot library libnqwebibotapi.so.
My LD_LIBRARY_PATH is pointing to the location of the file: /opt/oracle/oraclebi/web/bin
declare -x LD_LIBRARY_PATH="/opt/oracle/11.2.0/client_1/lib:/opt/oracle/oraclebi/server/Bin:/opt/oracle/oraclebi/web/bin"
Any ideas? I am not sure what step I am missing.
Thanks!
Eric
Can you post the full error log from Delivers, please?
You've not told us the OS, versions, etc.
http://catb.org/~esr/faqs/smart-questions.html#beprecise -
Confusion about shared objects...
Hi...
I'm building an application using JSP/Servlet technology, and I've encountered some behavior that is not that unexpected, but that I can't seem to figure out how to get around.
I've been using two reference manuals over the last year to learn JSP/Servlet development, and I'm not sure that either one of them does a very good job of explaining how to avoid the problem I'm seeing. (Or maybe they do, but I'm just too dense to figure it out.)
Both are O'Reilly manuals:
Java Servlet Programming - Jason Hunter with William Crawford
Java Server Pages - Hans Bergsten
Anyway, I've tried to model my application using a MVC approach.
My controller servlet UserCtl.java is small and routes requests as a controller should.
My business logic is in a bean, UserBean.java. This object has properties that represent the fields from my UserMaster table, with corresponding setter/getter methods. It also has methods to retrieve an individual user record, insert a record, update a record, delete a record, and retrieve a list of records.
The scenario I'm experiencing is as follows:
I bring up my application in the browser on two different PC's.
I display the user list on each PC.
Now...
On each PC, I simultaneously click the Update User, selecting user 1 on pc 1 and user 2 on pc 2.
My application then, creates a record in a lock file, for each user record. This seems to work properly, even during the simultaneous click.
However...
When the update form is subsequently displayed, the form on each of the 2 PCs contains the same data.
I can verify that 2 different lock records were created, indicating that I did not click the same user by accident; however, the data in the form is clearly for only one of the users.
I've read the sections over and over and I feel like I understand their comments on concurrency and how the condition I'm seeing could occur, however, I've tried many things to overcome this and nothing seems to work.
Originally, I opened my JDBC database connection at the servlet level. I've subsequently tried doing it when I create the bean in the controller and subsequent to that, creating the connection object within the method that retrieves the user data inside the bean.
I've tried moving all my code into functions, so that any bean variables would be localized.
I've tried creating a bean from the JSP session bean object, retrieving the record, and putting the bean back into the session object before moving to the update page.
Note: I've also enclosed my record retrieval code within a synchronized block.
I'm at a complete loss here. Nothing I do seems to work and I can consistently recreate this condition. Obviously, I can't move forward until I find out how to do this properly.
I'm disappointed in the Java Server Pages book, because the author encapsulated/wrapped all of his database i/o into his own database management routines.
This is a terrible practice for an author to perform. I understand the concept of why you would do that, however, it complicates learning the fundamentals first. I say show us how to do it the long way first, then show us how to improve on the implementation. Don't confuse an already complicated process with your own home-grown methodology.
Anyway... I digress. Can anybody give me any pointers or recommend a good book or web based example, that can show me how to overcome the issue I am encountering? My implementation is a straightforward, simple, approach. I am trying to grasp how this stuff works, without adding in all the extra stuff like tag libraries and XSLT, etc.
I just need to understand how to write a simple thread-safe application; I'll tackle the other stuff later. Oh, by the way, I have built the simple samples with servlet class-level counters that demonstrate the difference between global variables at the servlet instance level and localized variable values. I think JDBC database access and/or bean creation is more complex than that, and that's where my problem is.
Any comments, pointers, references to simple samples, will be greatly appreciated.
Thanks,
Brett
You do not need a shared file system; that is just for convenience. You do need identical directory structures for all instances in the cluster.
Mike.
"Tomato Bug" <[email protected]> wrote:
>
>hi all,
> I have a confusion with shared objects in cluster.
> A cluster needs a shared file system to store
> configuration
>files and shared objects. Can I only put the shared things
>in the
>shared file system but not put them on the servers in
>the cluster.
> Thanks.
>
> Tomato.
-
Hi,
we are thinking about using ABAP shared objects in parallel processing, to avoid reading the same data from the DB in every parallel process.
Currently we have 15MB shared memory configured according to SHMM.
My Questions:
1. How do we extend the shared memory? What is the limit?
2. Does anyone have experience with large shared objects? (1-2GB)
Thanks in advance
regards
Steffen
Hi Steffen,
the parameter is abap/shared_objects_size_MB; the limit is defined by your available memory. I have seen sizes of 8 GB in production so far.
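As an instance profile fragment (RZ10; the parameter name is the one from the reply above, and the value is just an illustration for the 1-2 GB case in the question, not a sizing recommendation):

```
abap/shared_objects_size_MB = 2048
```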
Regarding the big shared objects: think carefully about who is reading, who is writing, and when these actions happen. With versioning you can easily end up with much higher memory consumption, since update requests create a new version, while readers attached to the old version keep the old version alive. Besides that, I haven't seen much trouble with shared objects so far, but maybe other people have other experiences.
Please share your experiences with us if you use shared objects.
Kind regards,
Hermann -
Hi all,
I have a problem with a program based on the shared objects technology.
We have a job, scheduled with 18 parallel processes, each of which writes into the SHM controlled by an SHMA class.
When the jobs end, a program reads the content from the area and sends an automatic e-mail with the results.
Everything works well if the writer program is executed online.
In background, however, it seems that nothing is stored in the SHM.
Here's the code executed by the writer program:
FORM shared_memory_access TABLES it_fehler STRUCTURE rpfausg.
DATA: errors_reference TYPE REF TO data.
DATA: lx_pterl00 TYPE REF TO zcx_pterl00_collector.
TRY.
* --> Get SHM Access
CALL METHOD zcl_pterl00_collector_root=>build
EXPORTING
invocation_mode = cl_shm_area=>invocation_mode_explicit.
* --> It's ok?
IF zcl_pterl00_collector_root=>area_exists EQ 'X'.
* --> Fill Data:
GET REFERENCE OF it_fehler[] INTO errors_reference.
CALL METHOD zcl_pterl00_collector_root=>fill_area_with_data
EXPORTING
error_messages_dref = errors_reference.
ENDIF.
CATCH zcx_pterl00_collector INTO lx_pterl00.
MESSAGE lx_pterl00 TYPE 'S' DISPLAY LIKE 'E'. "Non-blocking -> JOBS
ENDTRY.
ENDFORM. " SHARED_MEMORY_ACCESS
Here is the section from the class handling the attachment to the SHMA:
METHOD if_shm_build_instance~build.
DATA: lx_collector TYPE REF TO zcx_pterl00_collector.
* --> Automatic building of instance:
TRY.
CALL METHOD get_handle_for_update( inst_name ).
CATCH zcx_pterl00_collector INTO lx_collector.
MESSAGE lx_collector TYPE 'X'.
CATCH: cx_shm_no_active_version.
TRY.
CALL METHOD get_handle_for_write( inst_name ).
CATCH zcx_pterl00_collector INTO lx_collector.
MESSAGE lx_collector TYPE 'X'.
ENDTRY.
CATCH: cx_shm_inconsistent.
zcl_pterl00_collector=>free_area( ).
TRY.
CALL METHOD get_handle_for_write( inst_name ).
CATCH zcx_pterl00_collector INTO lx_collector.
MESSAGE lx_collector TYPE 'X'.
ENDTRY.
ENDTRY.
ENDMETHOD.
I cannot explain why multiple jobs do not populate the area...
Hi Rob,
if your requirement is to have many (18) active processes all updating the shared object, and very few simply reading it, then versioning is probably not what you require.
Versioning allows readers to continue to attach to and read the active shared object instance while the updater gets its own instance of the shared object. When the updater does a detach_commit, the old instance becomes obsolete and all new attach requests are directed to the new instance. The old instance is cleaned up by garbage collection once all of its readers have detached.
If your programs primarily attach for update, then versioning will decrease performance, because a new instance needs to be created on every attach for update.
Perhaps you should just retry the attach for update after a small period of time has passed?
If, on the other hand, you do have lots of other readers of the shared object, you may well still find that it is more efficient not to use versioning. I built a web shop catalogue using shared objects and found that versioning severely hampered performance: once the catalogue was initialised, updaters were pretty rare but readers were constant.
BTW, make sure you keep the locks on the object as short as possible. Do all your preparation work first, then attach for update, update, and detach as quickly as possible.
Cheers
Graham Robbo