Improving calc time: issue using @CURRMBR in @MAXSRANGE

I have a calc script. If I use @CURRMBR(Entity), it takes 11 minutes; if I hardcode the member to, say, "100", it takes 7 seconds.
Can I modify the calc to make it run faster?
==========================
FIX("Plan", Working, FY12, "Jan":"Dec", "LOC",
@REMOVE(@IDESC("Total_Function"), @LIST(@RELATIVE("Total_Function",0))),
@RELATIVE("Management_Reporting",0), Amount)
"TestAc" = @MAXSRANGE(SKIPBOTH, "TestAc", @CHILDREN("100")); /* This takes 7 sec */
/* "TestAc" = @MAXSRANGE(SKIPBOTH, "TestAc", @CHILDREN(@CURRMBR(Entity))); -- this takes 11 min */
ENDFIX
=================
Thanks,
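One common workaround, if the goal is the maximum across each parent entity's children, is to resolve the range once per pass rather than letting @CURRMBR be re-evaluated for every cell. This is a sketch only: "100" and "200" stand in for the real parent entities, and the other FIX members are trimmed for brevity.

```
/* @CHILDREN("100") is a constant here, so Essbase resolves it once per
   pass; @CHILDREN(@CURRMBR(Entity)) would be evaluated cell by cell. */
FIX("Plan", Working, FY12, "Jan":"Dec", "LOC", @CHILDREN("100"))
    "TestAc" = @MAXSRANGE(SKIPBOTH, "TestAc", @CHILDREN("100"));
ENDFIX
FIX("Plan", Working, FY12, "Jan":"Dec", "LOC", @CHILDREN("200"))
    "TestAc" = @MAXSRANGE(SKIPBOTH, "TestAc", @CHILDREN("200"));
ENDFIX
```

The per-parent FIX blocks could also be generated by a driving script or substitution variables if the list of parents changes often.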

As the discussion is about the @CURRMBR() function, I would like to know: are there any better functions you could use instead of it to get the desired result, especially when you want to find out where the user is making changes to the values and then do some calculations on those cells? Is there a better way to catch that in Hyperion that would make your scripts more efficient and fast, especially when you don't know the value in advance and so cannot hardcode it?
I have used this function a couple of times in some of my scripts/business rules, and it bothers me to think that it might be making them inefficient and slow.
~ Adella
Edited by: Adella on Oct 21, 2011 11:25 AM

Similar Messages

  • Calc time issue.

    Hello,
    My calc script is taking a bit longer to complete than expected. I am doing the following FIX in my calc script:
    FIX (Actual, @RELATIVE("P&L Hierarchy",0), @RELATIVE(Headcount,0))
        FIX(@DESCENDANTS ("Fiscal Year"))
            Amount (
            IF (Amount == #Missing)
                IF (@ISMBR(&Fiscal_Year))
                    IF (@ISMBR(Aug:&Fiscal_Month))
                        IF (@ISMBR(AUG))
                            PreviousHeadcount = @SHIFT("Amount"->Jul, 1, "Fiscal Year");
                        ELSE
                            PreviousHeadcount = @SHIFT(Amount,-1);
                        ENDIF
                        IF (PreviousHeadcount == #Missing)
                            Amount = #Missing;
                        ELSE
                            Amount = 0;
                        ENDIF
                    ENDIF
                ELSE
                    IF (@ISMBR(AUG))
                        PreviousHeadcount = @SHIFT("Amount"->Jul, 1, "Fiscal Year");
                    ELSE
                        PreviousHeadcount = @SHIFT(Amount,-1);
                    ENDIF
                    IF (PreviousHeadcount == #Missing)
                        Amount = #Missing;
                    ELSE
                        Amount = 0;
                    ENDIF
                ENDIF
            ENDIF )
        ENDFIX
    ENDFIX
    Is it possible to do the above in the load rule, so I can eliminate this process from the calc and improve the calc time?
    Thanks in advance.
    Ricky [email protected]


  • Having response time issues using Studio to manage 3000+ forms

    We are currently using Documaker Studio to create and maintain our forms, of which we have thousands. Once we create the form we export it to a very old version of Documerge where it is then used in our policy production. 
    The problem is that because we have so many forms/sections, every time we click on "SECTIONS" in Studio it takes a significant amount of time to load the screen that lists all of the sections. Many of these forms/sections are old and will never change, but we still want to have access to them in the future.
    What is the best way to "back up" all these forms somewhere where they are still accessible? Ideally I would like to have one workspace (let's call it "PRODUCTION") that has all 3000+ forms, and delete the older resources from our existing workspace (called "FORMS") so that it has just the forms we are currently working on. This way the response time in the "FORMS" workspace would be much better. A couple of questions:
    1. How would I copy my existing workspace "FORMS" (and all the resources in it) to a new workspace called "PRODUCTION"?
    2. How would I delete from the "FORMS" workspace all of the older resources?
    3. Once I am satisfied with a new form/section in my "FORMS" workspace how would I move it to "PRODUCTION"?
    4. How could I move a form/section from "PRODUCTION" back into "FORMS" in order to make corrections, or use it as a base for a new form down the road?
    5. Most importantly: is there a better way to do this?
    Again, we are only using this workspace for forms creation and not using it to generate output...we will be doing that in the future once we upgrade from the very old Documerge on the mainframe, to Documaker Studio.
    Many thanks to any of you who can help me with this!

    However, I am a little confused on the difference between extracting and promoting. Am I correct in assuming that I would go into my PROD workspace and EXTRACT the resources that I want to continue to work on. I would then go into my new, and empty, DEV workspace and IMPORT FILES (or IMPORT LIBRARY?) using the file(s) that I created with the EXTRACT? In effect, I would have two totally separate workspaces, one called DEV and one called PROD?
    Extraction is writing a copy of a resource from the library out to disk. Promotion is copying a resource from one library to another, with the option of modifying the metadata values of the source and target resources. You would use extract in a case where you don't have access to both libraries to do a promote.
    An example promotion scenario would go something like this. You have resources in the source (DEV) that you want to promote to the target (PROD). Items to be promoted are tagged with the MODE = "To Promote". When you perform the promotion, you can select the items that you want to promote with the filter MODE="To Promote". When you perform the promotion, you can also configure Studio to set the MODE of the resource(s) in the source to be MODE="To Delete", and set the MODE of the resource(s) in the target to be MODE="" (empty). Then you can go back and delete the resources from the source (DEV) where MODE=DELETE.
    Once you have the libraries configured you could bypass the whole extract/import bit and just use promote. The source would be PROD, and the target would be DEV. During promotion, set the target MODE = "To Do", and source MODE = "In Development". In this fashion you will see which resources in PROD are currently being edited in DEV (because in PROD the MODE = "In Development"). When development is completed, change the MODE in DEV to "To Promote", then proceed with the promotion scenario described above.
    I am a bit confused on the PROMOTE function and the libraries that have the _DEV, _TEST, and _PROD suffixes. This looks like it duplicates the entire workspace to new libraries (_PROD), but it is all part of the same workspace, not two separate workspaces? Any clarification here would be helpful.
    Those suffixes are just attached by default; these suffixes don't mean anything to Documaker. You could name your library PROD and use it for DEV. It might be confusing though ;-) The usual best practice is to name the library and subsequent tablespaces/schemas according to their use. It's possible to have multiple libraries within a single tablespace or schema (but not recommended to mix PROD and non-PROD libraries).
    Getting there, I think!
    -A

  • Adding time issue using Date's getTime method

    The following code is incorrectly adding 5 hours to the resultant time.
    I need to be able to add dates, and this just isn't working right.
    Is this a bug or am I missing something?
    long msecSum = 0;
    DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss.SSS");
    try {
        Date date1 = dateFormat.parse("01:02:05.101");
        Date date2 = dateFormat.parse("02:03:10.102");
        System.out.println("Date1: " + dateFormat.format(date1));
        System.out.println("Date2: " + dateFormat.format(date2));
        msecSum = date1.getTime() + date2.getTime(); // adds 5 hours !!!
        System.out.println("Sum: " + dateFormat.format(msecSum));
    } catch (Exception e) {
        System.out.println("Unable to process time values");
    }
    Results:
    Date1: 01:02:05.101
    Date2: 02:03:10.102
    Sum: 08:05:15.203 // should be 3 hours, not 8

    Dates shouldn't be added, but if you promise not to tell anyone: SimpleDateFormat parses in the JVM's default time zone, so each parsed Date already carries the local offset from GMT. Summing two of them doubles that offset, while formatting removes it only once - in a GMT-5 zone the surplus shows up as the extra 5 hours. Forcing GMT avoids it:
    long msecSum = 0;
    DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss.SSS");
    dateFormat.setTimeZone(TimeZone.getTimeZone("GMT")); // parse and format at GMT so no offset creeps in
    try {
        Date date1 = dateFormat.parse("01:02:05.101");
        Date date2 = dateFormat.parse("02:03:10.102");
        System.out.println("Date1: " + dateFormat.format(date1));
        System.out.println("Date2: " + dateFormat.format(date2));
        msecSum = date1.getTime() + date2.getTime(); // no spurious 5 hours now
        System.out.println("Sum: " + dateFormat.format(msecSum));
    } catch (Exception e) {
        System.out.println("Unable to process time values");
    }
    Me? I would just parse the String "01:02:05.101" to extract hours, minutes, seconds and milliseconds and do the math.
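Doing that math with java.time sidesteps the time-zone trap entirely, since LocalTime carries no zone and Duration is a plain span of time. A sketch (the class and helper names here are mine, not from the thread):

```java
import java.time.Duration;
import java.time.LocalTime;

public class DurationSum {
    // Treat "HH:mm:ss.SSS" as a duration since midnight; LocalTime has
    // no time zone, so no offset can creep into the arithmetic.
    static Duration parse(String hms) {
        return Duration.ofNanos(LocalTime.parse(hms).toNanoOfDay());
    }

    // Format a duration back to HH:mm:ss.SSS (assumes it fits in 24h).
    static String format(Duration d) {
        long ms = d.toMillis();
        return String.format("%02d:%02d:%02d.%03d",
                ms / 3_600_000, (ms / 60_000) % 60, (ms / 1_000) % 60, ms % 1_000);
    }

    public static void main(String[] args) {
        Duration sum = parse("01:02:05.101").plus(parse("02:03:10.102"));
        System.out.println(format(sum)); // 03:05:15.203
    }
}
```

Unlike two epoch-based Dates, two Durations can be added without any hidden offset.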

  • Several of us have an iPhone 6s and are having trouble hearing and speaking at the same time when using the phone. The speakers seem to be too far apart. Are others having this issue? Solutions?

    Several of us have an iPhone 6 Plus and are having trouble hearing and speaking at the same time when using the phone. The speakers seem to be too far apart. Are others having this issue? Solutions?

    Not having the problem, don't personally know anyone who is.

  • Any way to improve response time with iPhoto 8.1.2 using Mountain Lion ?

    Any way to improve response time with iPhoto 8.1.2 using Mountain Lion? Can you store photos on a separate hard drive and use a smaller file for opening iPhoto?

    How did you move your iPhoto library to the new system? The recommended way is to connect the two Macs together (network, FireWire target disk mode, etc.) or use an external hard drive formatted Mac OS Extended (Journaled), and drag the iPhoto library intact, as a single entity, from the old Mac to the Pictures folder of the new Mac. Launch iPhoto on the new Mac and it will open the library and convert it as needed, and you will be ready to move forward.
    LN

  • HT1595 I can't see my Apple TV on my screen. I'm not sure if this is an Apple TV issue or a TV setup issue (although it appears nothing has changed in my TV setup since the last time I used it).

    I can't see Apple TV on my screen. I'm not sure if this is an Apple TV issue or a TV setup issue (although it appears nothing has changed in my TV setup since the last time I used it).

    As far as the flashing question mark and globe icons are concerned, your computer had lost track of its startup volume and was searching through the possibilities. The globe icon indicates that it is looking for a network to start from and the question mark indicates that it is looking for an OS X volume. To fix this, if you haven't already, go to System Preferences > Startup Disk and reselect your internal HD as the startup volume.
    From what you have described, this issue does seem to have corrected itself, and the faster start up is what one would normally expect. I think that you have absolutely no reason to reinstall your system.

  • Any issue using Time Machine on USB drive with AirPort?

    Are there any issues using a USB drive attached to an AirPort Extreme with Time Machine? My guess is no, but figured I would ping the forum and see if there's something I have not thought of. I know Time Capsule does this, but I don't have that - I have an AirPort Extreme.
    The drive is within my network and not exposed to anyone unless they can connect
    My password is required to access the Time Machine files
    Any security implications not covered by these two points?
    I'm currently using a USB drive I attach/detach from my MacBook Air and it would be much more convenient to backup to the drive connected to the AirPort. The first backup will probably take a day short of forever being over the air and not the wire, but after that I imagine it would nicely run in the background.
    Thanks for any feedback or suggestions!

    Are there any issues using a USB drive attached to an AirPort Extreme with Time Machine? My guess is no, but figured I would ping the forum and see if there's something I have not thought of.
    The correct answer is yes. First off, Apple does NOT support Time Machine backups to external USB HDDs attached to the AirPort Extreme ... although they do for the Time Capsule.
    Please check out this excellent Pondini article which provides some details on why this would not be a good idea.

  • Is there any scope for improvement in rollup times by using Basis Aggregates?

    Hello All,
    I would appreciate it if you could let me know: is there any scope for improvement in rollup times by using Basis Aggregates?
    Thanks.

    Hi,
    Go through the links below; they may help you:
    http://wiki.sdn.sap.com/wiki/display/BI/Aggregates--SAPBWQueryPerformance
    http://help.sap.com/saphelp_nw70/helpdata/EN/9a/33853bbc188f2be10000000a114084/frameset.htm
    http://help.sap.com/saphelp_nw70/helpdata/en/c5/40813b680c250fe10000000a114084/content.htm
    http://www.sdn.sap.com/irj/scn/index;jsessionid=(J2EE3417700)ID0168391950DB10940851676120293872End?rid=/library/uuid/3a699d90-0201-0010-bc99-d5c0e3a2c87b&overridelayout=true
    http://sap.seo-gym.com/performance%20tuning%20for%20queries.pdf
    Regards,
    Marasa.

  • Slow calc time with SET CREATEBLOCKONEQ OFF for block creation

    Hello everyone,
    I have a problem with the slow execution of one of my calc scripts:
    A simplified version of my calc script to calculate 6 accounts looks like this:
    SET UPDATECALC OFF;
    SET FRMLBOTTOMUP ON;
    SET CREATEBLOCKONEQ ON;
    SET CREATENONMISSINGBLK ON;
    FIX (
    FY12,
    "Forecast",
    "Final",
    @LEVMBRS("Cost Centre",0),
    @LEVMBRS("Products",0),
    @LEVMBRS("Entities",0))
    SET CREATEBLOCKONEQ OFF;
    "10000";"20000";"30000";"40000";"50000";"60000";
    SET CREATEBLOCKONEQ ON;
    ENDFIX
    The member formula for each of the accounts is relatively complex. One of the changes recently implemented was opening up the cost centre dimension in the FIX. Since then the calculation runs much slower (>1 hour). If I change the setting to SET CREATEBLOCKONEQ ON, the calculation is very fast (1 min); however, no blocks are created. I am looking for a way to create the required blocks and calculate the member formulas while decreasing calc time. Does anybody have any idea what to improve?
    Thanks for your input
    p.s. DataStorage in the member properties for the above accounts is Never Share

    MattRollings wrote:
    "If the formula is too complex it tends not to aggregate properly, especially when using ratios in calculations. Using stored members with member formulas I have found is much faster, more efficient, and less prone to agg issues - especially in Workforce-type apps. We were experiencing that exact problem, hence stored members."
    So why not break it up into steps? Step 1: force the calculation of the lower-level member formulas, whatever they are, and make sure that works. Then take the upper-level members (whatever they are) and make them dynamic. Nothing says you must make them all stored. I try, wherever possible, to make as much dynamic as possible. As I wrote, sometimes I can't for calc-order reasons, but as soon as I get past that I let the "free" dense dynamic calcs happen wherever I can. Yes, the number of blocks touched is the same (maybe), but it is still worth a shot.
    Also, you mentioned in your original post that the introduction of the FIX slowed things down. That seems counter-intuitive from a block count perspective. Does your FIX really select all level zero members in all dimensions?
    Last thought on this somewhat overactive thread (you are getting a lot of advice; who knows, maybe some of it is good ;) ) -- have you tried flipping the member calcs on their heads, i.e., taking what is an Accounts calc and making it a Forecast calc with cross-dims to match? You would have different, but maybe more manageable, block-creation issues at that point.
    Regards,
    Cameron Lackpour
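The "break it into steps" idea could look something like the two-pass sketch below. It is only illustrative: the DATACOPY source "Working" is a hypothetical populated version, and the member names are taken from the post.

```
SET CREATEBLOCKONEQ ON;
/* Pass 1: create the blocks cheaply, e.g. by copying from a version
   that already has data ("Working" is a placeholder name). */
FIX (FY12, "Forecast",
     @LEVMBRS("Cost Centre",0), @LEVMBRS("Products",0), @LEVMBRS("Entities",0))
    DATACOPY "Working" TO "Final";
ENDFIX
/* Pass 2: run the account formulas; the blocks now exist, so
   CREATEBLOCKONEQ can stay ON and the calc stays fast. */
FIX (FY12, "Forecast", "Final",
     @LEVMBRS("Cost Centre",0), @LEVMBRS("Products",0), @LEVMBRS("Entities",0))
    "10000"; "20000"; "30000"; "40000"; "50000"; "60000";
ENDFIX
```

The point is to separate block creation (cheap, bulk) from formula evaluation (expensive per block), so the slow SET CREATEBLOCKONEQ OFF path is never needed.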

  • "....To Improve Reliability, Time Machine Must Create a New Backup for You"

    I bought the new Apple 2 TB Time Capsule thinking it would be a great way to get a powerful wireless router and handle file backup for both my and my wife's iMacs. She is running an up-to-date version of Snow Leopard and I am on the most recent update of Mountain Lion. The Time Capsule was installed and running by the first of August, and we had no problems until this Monday (7 weeks later), when I got the message "Time Machine completed a verification of your backups. To improve reliability, Time Machine must create a new backup." My wife's backup was still OK, but I had to go through the 5-6 hour process of re-copying all my files (using an Ethernet connection) to the device. All went smoothly for a few days until I got the same message again about creating a new backup. Also, my wife cannot enter her Time Machine files even though her computer is still being backed up. We have a stable internet service and both computers receive a strong signal from the router. Is it possible that the Time Capsule drive is defective? Is there any sort of maintenance, such as zapping the PRAM, that would help? This is really frustrating, as it looks like the Time Machine backup is totally unreliable and not worth fooling with. Is there any way to use the storage capability of the Time Capsule without going through Time Machine?
    Thanks,
    Reid McCallister

    hi
    rest assured you are not alone. i have the same problem and i hate it with a passion. in fact, i am the victim of this message as of 20 minutes ago and doing a full backup as I am typing this. I am so overly ****** that I actually posted here and will be calling or emailing apple next because I cannot take this anymore. the problem lies in time machine and unfortunately, there is no fix for that. time machine is the best idea of apple with the worst implementation ever. you have no obligation to use time capsule to benefit from time machine as this would not be legal anyway. i will give you 2 temporary solutions and let you decide which one works best for you:
    1) open time machine preferences and turn automatic backups off. create a new full backup from scratch and only backup manually whenever you remember, preferably every day. the automation process is the problem that lies within time machine. also make sure that your system does not go to standy or sleep whatever that may be called. Power nap, despite all claims, is not compatible with time machine. install CAFFEINE, an app that keeps your mac awake and prevents from going to sleep while back up is in progress. use it like this and pray it will all go fine for a long time. you may wish to duplicate the first full backup to save time in case this should happen again so you have a base to start on
    2) install CCC (carbon copy cleaner) I have not used it myself yet, but probably will be installing it after finishing this post. it is claimed to be better, yet it is yet to be seen by me personally.
    unless time machine can be trained without any addons to use a specific wifi only for backup, these issues will persist.

  • Calc time vs. defragmentation

    I have a database with an average cluster ratio of .44. If I export and reload my data, it will go to 1.0, but as soon as I calc, it goes back to .44. Under my current data settings, this calc takes a mere 5.7 seconds to run and retrieval time is fine. In an effort to improve the cluster ratio, I played with my dense/sparse settings, changed my time dimension to sparse, and was able to get a .995 cluster ratio after calculation; the problem is that the calc script now ran for 127 seconds, which is 22x longer.
    I know that either calc time is minimal by Essbase standards, but I'm still curious which way is "optimal". I would think it is always best to take the enhanced performance over the academic issue of cluster ratio, but I'm concerned at what point this becomes more than an academic question. How important is the cluster ratio, and what kind of implications are there for having a database that is more fragmented? Are there other things besides calc and retrieval time, maybe not visible on the surface, that I should be concerned with? Since defragmentation should improve performance, is it worth it to sacrifice some performance for less fragmentation? Of course, as this database grows this will become more of an issue.
    Any input, thoughts and comments would be appreciated.

    Just my humble opinion: everybody's data has a different natural sparsity, and rather than think in terms of 'fragmentation', think in terms of the nature of your data. If you made EVERY dimension sparse except for Accounts, and had only one member in Accounts, your database would consist solely of single-cell datablocks that are 100% populated - as dense as you can get. The trade-off is that you will have a HUGE number of these small, highly compact datablocks, and your calc times would be enormous. As a general rule, you can take each of your densest dimensions in turn and make them "dense" in the outline until your datablocks approach 80k in size. The tradeoff is that not all cells in each datablock will be populated, but you'll have fewer datablocks and your calcs will zoom. Your goal is not to simply minimize the number of datablocks, or to minimize the datablock size, or to maximize the block density; your goal is to reach a compromise position that maximizes the utility of the database.
    A good approach is to hit a nice compromise spot in terms of sparse/dense settings and then begin optimizing your calcs and considering converting highly sparse stored dimensions to attributes and such. These changes can make a tremendous impact on calc time. We just dropped our calc time on a database from 14 hours to 45 minutes and didn't even touch the dense/sparse settings.
    -dan
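The 80k guideline is simple arithmetic: an uncompressed block holds one 8-byte cell for every combination of stored members across the dense dimensions. A sketch with made-up dimension counts (the class name and the example counts are mine):

```java
public class BlockSize {
    // Expanded (uncompressed) block size: product of the stored member
    // counts of the dense dimensions, times 8 bytes per cell.
    static long blockBytes(int[] denseStoredCounts) {
        long cells = 1;
        for (int n : denseStoredCounts) cells *= n;
        return cells * 8;
    }

    public static void main(String[] args) {
        // Hypothetical dense dimensions: 12 periods x 40 accounts x 17 measures
        long bytes = blockBytes(new int[] {12, 40, 17});
        System.out.println(bytes + " bytes"); // 65280 bytes, just under the ~80k guideline
    }
}
```

Running the same arithmetic on your own outline shows quickly whether a candidate dense/sparse split overshoots the guideline.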

  • Getting this error: Time Machine completed a verification of your backups. To improve reliability, Time Machine must create a new backup for you.

    I keep getting this error on my new Macbook Pro w/ Retina.
    "Time Machine completed a verification of your backups. To improve reliability, Time Machine must create a new backup for you."
    Connected to a wifi network and QNAP storage system.  There are 5 computers on this network, and each backs up just fine.  The issue is isolated to this one machine.
    This error shows up every week or so.

    A third-party NAS is unsuitable for use with Time Machine, especially if it's your only backup. I know this isn't the answer you want. I know Time Machine accepts the NAS as a backup destination. I know that the manufacturer says the device will work with Time Machine, and I also know that it usually seems to work. Except when you try to restore, and find that you can't.
    Apple has published a specification for network devices that work with Time Machine. None of the third-party NAS vendors, as far as I know, meets that specification. They all use the incomplete, obsolete Netatalk implementation of Apple Filing Protocol.
    If you want network backup, use as the destination either an Apple Time Capsule or an external storage device connected to another Mac or to an 802.11ac AirPort base station. Only the 802.11ac base stations support Time Machine, not any older model.
    Otherwise, don't use Time Machine at all. There are other ways to back up, though none of them is anywhere near as efficient or as well integrated with OS X. I don't have a specific recommendation.
    If you're determined to keep using the NAS with Time Machine, your only recourse for any problems that result is to the manufacturer (which will blame Apple, or you, or anyone but itself.)

  • Memory issue using BO XI Enterprise SDK and ISecurityInfo

    Hello everybody
    I have a big issue using the XI R2 SDK when I want to get rights for an object (a universe or an overload, for example). When I start my process, the memory used keeps growing, from 20 MB to more than 100 MB, and if I have too many objects the script hangs. I tried to simplify my code to make it understandable:
    My main class:
    Vector<Integer> vIntOv = OverloadsFactory.getAllOverloadsID();
    Iterator<Integer> itOvIterator = vIntOv.iterator();
    Integer cIdOv = null;
    while (itOvIterator.hasNext()) {
        cIdOv = itOvIterator.next();
        System.out.println("ID OV = " + cIdOv);
        Overload ov = OverloadsFactory.getOverloadById(cIdOv);
        Iterator<PrincipalRestricted> itRestPrin = ov.getPrincipalRestricted().iterator();
        PrincipalRestricted cPrin = null;
        while (itRestPrin.hasNext()) {
            cPrin = itRestPrin.next();
            System.out.println("     REST = " + cPrin.getPrincipalName());
            cPrin = null;
        }
    }
    The getOverloadById method in the OverloadsFactory class:
    public static Overload getOverloadById(int overloadID) throws OverloadException, IOException, ClassNotFoundException, SDKException {
        String name = "";
        String creationTime = "";
        Vector<RowRestriction> vRestrictedRows = new Vector<RowRestriction>();
        Vector<ObjectRestriction> vRestrictedObjects = new Vector<ObjectRestriction>();
        Vector<PrincipalRestricted> vPrincipalRestricted = new Vector<PrincipalRestricted>();
        String boQuery = "SELECT * " +
                "FROM CI_APPOBJECTS " +
                "WHERE SI_KIND='OVERLOAD' AND SI_ID=" + overloadID;
        Iterator<IOverload> itOverload = BoxiRepositoryManager.getInstance().executeQuery(boQuery).iterator();
        if (itOverload.hasNext()) {
            IOverload currentIOverload = itOverload.next();
            name = currentIOverload.properties().get("SI_NAME").toString();
            creationTime = currentIOverload.properties().get("SI_CREATION_TIME").toString();
            Iterator<IRowOverload> itRestrictedRows = currentIOverload.getRestrictedRows().iterator();
            while (itRestrictedRows.hasNext()) {
                IRowOverload currentRestrictedRow = itRestrictedRows.next();
                vRestrictedRows.add(new RowRestriction(currentRestrictedRow.getRestrictedTableName(), currentRestrictedRow.getWhereClause()));
            }
            Iterator<IObjectOverload> itRetsrictedObjects = currentIOverload.getRestrictedObjects().iterator();
            while (itRetsrictedObjects.hasNext()) {
                IObjectOverload currentRestrictedObj = itRetsrictedObjects.next();
                vRestrictedObjects.add(new ObjectRestriction(currentRestrictedObj.getObjectID(), currentRestrictedObj.getObjectName()));
            }
            Iterator<IObjectPrincipal> itIObjectPrincipal = currentIOverload.getSecurityInfo().getObjectPrincipals().iterator();
            while (itIObjectPrincipal.hasNext()) {
                IObjectPrincipal currentIObjPrincipal = itIObjectPrincipal.next();
                vPrincipalRestricted.add(new PrincipalRestricted(currentIObjPrincipal.getID(), currentIObjPrincipal.getName()));
            }
            itOverload = null;
            return new Overload(overloadID, currentIOverload.getUniverse(), name, vRestrictedObjects, vRestrictedRows, vPrincipalRestricted, creationTime);
        } else {
            throw new OverloadException("This Overload ID is not valid");
        }
    }
    At the beginning I thought it was a problem in my own code, but if you comment out the following part of the above method, you'll see that the memory increase no longer happens:
    Iterator<IObjectPrincipal> itIObjectPrincipal = currentIOverload.getSecurityInfo().getObjectPrincipals().iterator();
    while (itIObjectPrincipal.hasNext()) {
        IObjectPrincipal currentIObjPrincipal = itIObjectPrincipal.next();
        vPrincipalRestricted.add(new PrincipalRestricted(currentIObjPrincipal.getID(), currentIObjPrincipal.getName()));
    }
    Here is the error:
    Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
       at java.util.Hashtable.rehash(Unknown Source)
       at java.util.Hashtable.put(Unknown Source)
       at com.crystaldecisions.sdk.occa.security.internal.a.a(Unknown Source)
       at com.crystaldecisions.sdk.occa.security.internal.f.new(Unknown Source)
       at com.crystaldecisions.sdk.occa.security.internal.a.commit(Unknown Source)
       at com.crystaldecisions.sdk.occa.infostore.internal.ap.a(Unknown Source)
       at com.crystaldecisions.sdk.occa.infostore.internal.ar.if(Unknown Source)
       at com.crystaldecisions.sdk.occa.infostore.internal.ar.getObjectPrincipals(Unknown Source)
       at com.crystaldecisions.sdk.occa.infostore.internal.ar.getObjectPrincipals(Unknown Source)
    That is why I think that either there is an issue with getSecurityInfo() or I'm using it the wrong way. I have tried many things, like nulling my objects (even though in Java the garbage collector does that by itself) and changing my code, but with no improvement, which is why I'm requesting help from the experts.
    Thanks to you
    PS: Sorry for my grammar, I'm French.
    PS 2: I just want to note that I'm not a Java expert, so if you have to criticize my code, no problem.
    Edited by: Cyril Amsellem on Aug 8, 2008 5:00 PM

    Hi Merry
    Thanks a lot for answering me. I didn't know that getObjectPrincipals() uses so much memory.
    According to you, is it normal that the used memory keeps growing even after nulling my objects? To my mind, after requesting the principals for an object (an overload or a universe), the memory used should be released. It seems that even after the process, an object created in the ISecurityInfo class is still alive...
    Thanks again for taking time to help me
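More generally, when an SDK call accumulates memory internally, bounding how much you process per iteration is often the only lever available from the calling side. A plain-Java sketch, not BusinessObjects-specific (the class and method names are mine): nulling locals by hand does not force collection, but letting each batch go out of scope before the next begins at least allows it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchProcessor {
    // Walk a large ID list in fixed-size batches. References created in
    // one batch go out of scope before the next begins, so the garbage
    // collector *can* reclaim them; note this cannot free memory a
    // library caches internally across calls. Returns the batch count.
    public static <T> int inBatches(List<T> items, int batchSize, Consumer<List<T>> work) {
        int batches = 0;
        for (int i = 0; i < items.size(); i += batchSize) {
            work.accept(items.subList(i, Math.min(i + batchSize, items.size())));
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 10; i++) ids.add(i);
        int n = inBatches(ids, 3, batch -> System.out.println("batch: " + batch));
        System.out.println(n + " batches"); // 4 batches
    }
}
```

In the overload scenario above, each batch could fetch and discard its Overload objects before the next batch's query runs.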

  • Hi, with the Mix16 Pro app, can anyone tell me: when you set different audio levels for different tracks, does the app remember the levels the next time you use it? Or does it default to the original settings when you power down after a show? Thanks!

    Hi,
    I use backing tracks for some songs with a live band, and we are having some issues with levels - some are higher than others, etc. I use an iPad to run the tracks and am looking for an app where I can control the levels better. Ideally, an app where I can set the level of each track to a desired level and leave it there for good. Mix16 Pro seems to do that, but I wonder whether it saves the settings, as I do not want to set the levels every time I use it.
    Thanks

