Aggregate storage cache warning during buffer commit

h5. Summary
I have followed the documentation to set the ASO storage cache size, but during the buffer load commit I still get a warning saying it should be increased.
h5. Storage Cache Setting
The documentation says:
A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
My database has 127,643,648 KB of base data, which is about 60.9 times larger than 2 GB. The square root of 60.9 is about 7.8, so my optimal cache size should be 7.8 * 32 MB ≈ 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
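The sizing rule from the quoted documentation can be sketched as a quick calculation (a minimal illustration; the function name and unit handling are mine, only the 2 GB / 32 MB baseline comes from the documentation):

```python
import math

BASELINE_DATA_GB = 2.0    # documentation baseline: 2 GB of input-level data
BASELINE_CACHE_MB = 32.0  # documentation baseline: 32 MB cache

def recommended_cache_mb(input_level_kb: float) -> float:
    """Scale the 32 MB baseline cache by the square root of the factor
    by which the input-level data exceeds 2 GB."""
    input_level_gb = input_level_kb / (1024 * 1024)  # KB -> GB
    factor = input_level_gb / BASELINE_DATA_GB
    return BASELINE_CACHE_MB * math.sqrt(factor)

# The 127,643,648 KB database above works out to roughly 250 MB.
size = recommended_cache_mb(127_643_648)
```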
h5. Data Load
The initial data load is done in three MaxL sessions into three buffers. The final import output then looks like this:
MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
MAXL>
h5. The Question
Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
h5. Versions
Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

My understanding is that the storage cache sizing calculation you quoted is based on the cache requirements for retrieval. That recommendation has remained unchanged since ASO was first introduced in v7 (?) and certainly predates the advent of parallel loading.
I think that the ASO cache is used while the buffers are combined. As a result, depending on how ASO works internally, you would get this warning unless your cache was:
1. equal to the final load size of the database,
2. OR, if the cache is only used when data exists for the same "sparse" combination of dimensions in more than one buffer, the required size would be a function of the number of cross-buffer combinations required,
3. OR, if the cache is needed only when compression-dimension member groups cross buffers.
By "Sparse" dimension I mean the non-compressed dimensions.
Therefore you might try some experiments. To test each case above:
1. Forget it; you will get this message unless you have a cache large enough for the final data set size on disk.
2. Sort your data so that no dimensional combination exists in more than one buffer, i.e. sort by all non-compression dimensions and then by the compression dimension.
3. Often your compression dimension is time-based (even though this is very sub-optimal). If so, you could sort the data by the compression dimension only and break the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2, and the remainder in buffer 3.
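Experiment 2 can be sketched as a pre-load sort-and-split step (illustrative Python only; it assumes rows where the last column is the compression-dimension member and the rest are the non-compression dimensions, and the function names are mine, not Essbase's):

```python
from itertools import groupby

def sort_rows(rows):
    """Sort by all non-compression dimensions, then by the compression
    dimension (assumed to be the last column)."""
    return sorted(rows)

def split_by_group(sorted_rows, n_buffers=3):
    """Hand out whole groups (rows sharing a non-compression combination)
    round-robin, so no combination ends up in more than one buffer."""
    groups = [list(g) for _, g in groupby(sorted_rows, key=lambda c: c[:-1])]
    buffers = [[] for _ in range(n_buffers)]
    for i, group in enumerate(groups):
        buffers[i % n_buffers].extend(group)
    return buffers

rows = sort_rows([("East", "Cola", "Jan"), ("West", "Cola", "Jan"),
                  ("East", "Cola", "Feb"), ("East", "Root Beer", "Mar")])
buffers = split_by_group(rows)
```

Splitting whole groups (rather than cutting the sorted file at arbitrary row counts) is what guarantees a dimensional combination never straddles two buffers.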
Also, if your machine is I/O bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files; it could speed things up greatly.
Finally, regarding my comments on a time-based compression dimension: you should consider building a stored dimension for this, along the lines of what I have proposed in some posts on network54 (search for DanP on network54.com/forum/58296; I would give you a link, but it is down now).
OR, better yet, see the forthcoming book (of which Robb is a co-author), Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals: http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
I really hope you will try the suggestions above and post your results.

Similar Messages

  • Change Aggregate Storage Cache

    Does anyone know how to change the aggregate storage cache setting in Maxl? I can no longer see it in EAS and I don't think I can change it in MaxL. Any clue?
    Thanks for your help.

    Try something like
    alter application ASOSamp set cache_size 64MB;
    I thought you could right-click the ASO app in EAS and edit Properties > Pending cache size limit.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Controlling EJB Cache & Warning BEA-010001 in 8.1

    Background: Migrated from 6.1 to 8.1. Using CMP with template.j modified (migrated).
    Problem:
    1. I cannot use any Entities which use the Entity Cache; weblogic.ejb20.cache.CacheFullException is thrown
    during ejbLoad. My existing cache parameters are below (and are no different from
    6.1, where it is working fine):
    <pool>
    <max-beans-in-free-pool>100</max-beans-in-free-pool>
    </pool>
    <entity-cache>
    <max-beans-in-cache>100</max-beans-in-cache>
    <idle-timeout-seconds>610</idle-timeout-seconds>
    <read-timeout-seconds>60</read-timeout-seconds>
    <concurrency-strategy>Database</concurrency-strategy>
    </entity-cache>
    Is there any new configuration which needs to be set for 8.1? Redirection to
    some checklist for controlling these cache parameters would be helpful.
    2. While deploying my beans, Warning BEA-010001 shows up. The documentation says to
    specify in weblogic-ejb-jar.xml: <weblogic-ejb-jar> <disable-warning>BEA-010001
    | BEA-010054</disable-warning>..</weblogic-ejb-jar>. I have thousands of beans
    deployed on the server, and these warning messages flood my console,
    hiding critical error messages. Is there an additional parameter to stop
    this warning or route it to a separate log file?
    TIA
    JAK

    Hi Jak,
    A max-beans-in-cache of 100 is way too low for any real-life application.
    I'm not sure why it worked for you in 6.1; I remember that setting it was
    broken in 6.1 at some point.
    Regards,
    Slava Imeshev

  • -200361 warning during DAQmx acquisition

    Hello,
    I have been getting this warning during acquisition with a 6255 OEM. It interrupts my software's continuous data acquisition. I have tried reducing the sampling rate, but that doesn't help. Can someone help me find a way of handling this situation in my program, to avoid interrupting the continuous data acquisition?
    Thanks.

    MansoorEE wrote:
    This program works fine when I am using a simple USB cable. But now I am required to use a USB extender with a 100 m long Ethernet cable. It works fine for a while, but at some point this warning occurs and acquisition stops.
    That sounds like you were working on the hairy edge before, and now that the communication bus is likely just a little bit slower due to the conversion, you fail.
    Try setting your loop to run a little bit faster, maybe every 100 ms. By reading the samples faster, you are less likely to reach your buffer limit.
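    The effect of the read period can be illustrated with a toy simulation (the rates and durations below are made-up numbers, not DAQmx parameters): the peak backlog between reads scales with the read period, so reading more often keeps you further from the buffer limit.

```python
def max_backlog(sample_rate_hz, read_period_s, duration_s):
    """Model a buffer that fills at sample_rate_hz and is fully drained
    every read_period_s; return the peak number of buffered samples."""
    backlog = peak = 0.0
    t = 0.0
    while t < duration_s:
        backlog += sample_rate_hz * read_period_s  # samples queued between reads
        peak = max(peak, backlog)
        backlog = 0.0                              # the read drains the buffer
        t += read_period_s
    return peak

slow = max_backlog(100_000, 1.0, 10)  # reading once per second
fast = max_backlog(100_000, 0.1, 10)  # reading every 100 ms
```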
    There are only two ways to tell somebody thanks: Kudos and Marked Solutions
    Unofficial Forum Rules and Guidelines

  • Aggregate Storage Backup level 0 data

    When exporting level 0 data from aggregate storage through a batch job, you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?
    Can anyone give me an example of such a report script? That would be very helpful.
    If there is a better way to approach this matter, please let me know.
    Thanks
    /Fredrik

    An example from the Sample:Basic database:
    // This Report Script was generated by the Essbase Query Designer
    <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
    <COLUMN("Year")
    <ROW("Measures","Product","Market","Scenario")
    // Page Members
    // Column Members
    // Selection rules and output options for dimension: Year
    {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
    // Row Members
    // Selection rules and output options for dimension: Measures
    {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
    // Selection rules and output options for dimension: Product
    {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
    // Selection rules and output options for dimension: Market
    {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
    // Selection rules and output options for dimension: Scenario
    {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
    !
    // End of Report
    Note that no attempt was made here to eliminate shared member values.

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    just got version 9.3.1 installed. I can finally load to an aggregate storage database using Excel Essbase send; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, and therefore not much can be done?

    As far as I know, it is an architectural issue.. Further, I would expect it to slow down even further if you have numerous people writing back simultaneously because, as I understand it, they are throttling the update process on the server side so a single user is actually 'writing' at a time. At least this is better than earlier versions where other users couldn't even do a read when the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it)..
    Tim Tow
    Applied OLAP, Inc

  • NullPointerException during UnitOfWork commit

    During UnitOfWork commit the following exception is thrown:
    LOCAL EXCEPTION STACK:
    EXCEPTION [TOPLINK-69] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.DescriptorException
    EXCEPTION DESCRIPTION: A NullPointerException was thrown while extracting a value from the instance variable [id] in the object [ClassB].
    INTERNAL EXCEPTION: java.lang.NullPointerException
    MAPPING: oracle.toplink.mappings.DirectToFieldMapping[id-->DatabaseField(B_TABLE.Id)]
    DESCRIPTOR: Descriptor(ClassB --> [DatabaseTable(B_TABLE)])
         at oracle.toplink.exceptions.DescriptorException.nullPointerWhileGettingValueThruInstanceVariableAccessor(Unknown Source)
         at oracle.toplink.internal.descriptors.InstanceVariableAttributeAccessor.getAttributeValueFromObject(Unknown Source)
         at oracle.toplink.mappings.DatabaseMapping.getAttributeValueFromObject(Unknown Source)
         at oracle.toplink.mappings.DirectToFieldMapping.iterate(Unknown Source)
         at oracle.toplink.internal.descriptors.ObjectBuilder.iterate(Unknown Source)
         at oracle.toplink.internal.descriptors.DescriptorIterator.iterateReferenceObjects(Unknown Source)
         at oracle.toplink.internal.descriptors.DescriptorIterator.startIterationOn(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.discoverUnregisteredNewObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.discoverAllUnregisteredNewObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.assignSequenceNumbers(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.collectAndPrepareObjectsForCommit(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commit(Unknown Source)
         at Main.main(Main.java:33)
    INTERNAL EXCEPTION STACK:
    java.lang.NullPointerException
         at oracle.toplink.internal.descriptors.InstanceVariableAttributeAccessor.getAttributeValueFromObject(Unknown Source)
         at oracle.toplink.mappings.DatabaseMapping.getAttributeValueFromObject(Unknown Source)
         at oracle.toplink.mappings.DirectToFieldMapping.iterate(Unknown Source)
         at oracle.toplink.internal.descriptors.ObjectBuilder.iterate(Unknown Source)
         at oracle.toplink.internal.descriptors.DescriptorIterator.iterateReferenceObjects(Unknown Source)
         at oracle.toplink.internal.descriptors.DescriptorIterator.startIterationOn(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.discoverUnregisteredNewObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.discoverAllUnregisteredNewObjects(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.assignSequenceNumbers(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.collectAndPrepareObjectsForCommit(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitToDatabase(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commit(Unknown Source)
         at Main.main(Main.java:33)
    All attributes of my objects are non-null.
    What's wrong?

    Here are my classes:
    public class ClassA {
        public long id;
        public java.util.List listB;
        public java.lang.String name;
        // Setters and getters are here
    }
    public class ClassB {
        public long id;
        public java.lang.String name;
        public ClassA refA;
        // Setters and getters are here
    }
    I use transparent indirection for ClassA.listB and no indirection for ClassB.refA.
    Here is how I use these classes:
    UnitOfWork uow = session.acquireUnitOfWork();
    ClassA objA = new ClassA();
    objA.setName("objectA");
    objA.setListB(new ArrayList(10));
    for (int i = 0; i < 10; i++) {
        ClassB objB = new ClassB();
        objB.setName("objectB" + i);
        objB.setRefA(objA);
        objA.getListB().add(objB);
        uow.registerNewObject(objB);
    }
    uow.registerNewObject(objA);
    uow.commit();
    Here is log:
    2003.07.31 12:48:44.759--DatabaseSession(160388)--Thread[main,5,main]--acquire unit of work:5396218
    2003.07.31 12:48:44.759--UnitOfWork(5396218)--#registerNew(ClassB@4c4975)
    2003.07.31 12:48:44.769--UnitOfWork(5396218)--#registerNew(ClassB@2da3d)
    2003.07.31 12:48:44.769--UnitOfWork(5396218)--#registerNew(ClassB@6c8909)
    2003.07.31 12:48:44.769--UnitOfWork(5396218)--#registerNew(ClassB@497934)
    2003.07.31 12:48:44.779--UnitOfWork(5396218)--#registerNew(ClassB@280a69)
    2003.07.31 12:48:44.779--UnitOfWork(5396218)--#registerNew(ClassB@40ec97)
    2003.07.31 12:48:44.779--UnitOfWork(5396218)--#registerNew(ClassB@3b60c3)
    2003.07.31 12:48:44.779--UnitOfWork(5396218)--#registerNew(ClassB@7a1bb6)
    2003.07.31 12:48:44.779--UnitOfWork(5396218)--#registerNew(ClassB@5e256f)
    2003.07.31 12:48:44.789--UnitOfWork(5396218)--#registerNew(ClassB@6e1fb1)
    2003.07.31 12:48:44.789--UnitOfWork(5396218)--#registerNew(ClassA@1360e2)
    2003.07.31 12:48:44.789--UnitOfWork(5396218)--begin unit of work commit
    2003.07.31 12:48:44.809--DatabaseSession(160388)--Connection(2913640)--begin transaction
    2003.07.31 12:48:44.819--UnitOfWork(5396218)--#executeQuery(DataModifyQuery())
    2003.07.31 12:48:44.819--UnitOfWork(5396218)--Connection(2913640)--UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + 1 WHERE SEQ_NAME = 'CLASSB'
    2003.07.31 12:48:45.430--UnitOfWork(5396218)--#executeQuery(ValueReadQuery())
    2003.07.31 12:48:45.430--UnitOfWork(5396218)--Connection(2913640)--SELECT SEQ_COUNT FROM SEQUENCE WHERE SEQ_NAME = 'CLASSB'
    2003.07.31 12:48:45.660--DatabaseSession(160388)--Connection(2913640)--commit transaction
    2003.07.31 12:48:45.670--UnitOfWork(5396218)--#assignSequence(112->ClassB@5e256f)
    [ Skipped ]
    2003.07.31 12:48:48.515--DatabaseSession(160388)--Connection(2913640)--begin transaction
    2003.07.31 12:48:48.515--UnitOfWork(5396218)--#executeQuery(DataModifyQuery())
    2003.07.31 12:48:48.515--UnitOfWork(5396218)--Connection(2913640)--UPDATE SEQUENCE SET SEQ_COUNT = SEQ_COUNT + 1 WHERE SEQ_NAME = 'CLASSB'
    2003.07.31 12:48:48.605--UnitOfWork(5396218)--#executeQuery(ValueReadQuery())
    2003.07.31 12:48:48.605--UnitOfWork(5396218)--Connection(2913640)--SELECT SEQ_COUNT FROM SEQUENCE WHERE SEQ_NAME = 'CLASSB'
    2003.07.31 12:48:48.665--DatabaseSession(160388)--Connection(2913640)--commit transaction
    2003.07.31 12:48:48.685--UnitOfWork(5396218)--#assignSequence(121->ClassB@497934)
    2003.07.31 12:48:48.685--DatabaseSession(160388)--Connection(2913640)--begin transaction
    2003.07.31 12:48:48.685--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:48.695--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO A_TABLE (Id, Name) VALUES (13, 'objectA')
    2003.07.31 12:48:48.765--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@6e1fb1))
    2003.07.31 12:48:48.765--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:48.765--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (120, 'objectB9', 13)
    2003.07.31 12:48:48.855--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@40ec97))
    2003.07.31 12:48:48.855--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:48.855--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (114, 'objectB5', 13)
    2003.07.31 12:48:48.905--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@3b60c3))
    2003.07.31 12:48:48.905--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:48.905--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (117, 'objectB6', 13)
    2003.07.31 12:48:48.965--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@6c8909))
    2003.07.31 12:48:48.965--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:48.965--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (115, 'objectB2', 13)
    2003.07.31 12:48:49.015--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@5e256f))
    2003.07.31 12:48:49.015--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:49.015--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (112, 'objectB8', 13)
    2003.07.31 12:48:49.086--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@2da3d))
    2003.07.31 12:48:49.086--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:49.086--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (113, 'objectB1', 13)
    2003.07.31 12:48:49.126--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@4c4975))
    2003.07.31 12:48:49.126--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:49.126--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (118, 'objectB0', 13)
    2003.07.31 12:48:49.166--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@7a1bb6))
    2003.07.31 12:48:49.176--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:49.176--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (116, 'objectB7', 13)
    2003.07.31 12:48:49.236--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@497934))
    2003.07.31 12:48:49.236--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:49.236--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (121, 'objectB3', 13)
    2003.07.31 12:48:49.296--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassB@280a69))
    2003.07.31 12:48:49.296--UnitOfWork(5396218)--#executeQuery(WriteObjectQuery(ClassA@1360e2))
    2003.07.31 12:48:49.296--UnitOfWork(5396218)--Connection(2913640)--INSERT INTO B_TABLE (Id, Name, A_Id) VALUES (119, 'objectB4', 13)
    2003.07.31 12:48:49.326--DatabaseSession(160388)--Connection(2913640)--commit transaction
    2003.07.31 12:48:49.837--UnitOfWork(5396218)--EXCEPTION [TOPLINK-150] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.DescriptorException
    EXCEPTION DESCRIPTION: The mapping for the attribute [listB] uses transparent indirection so the attribute [listB] must be initialized to an appropriate container. Currently the value is [null].
    - JDK 1.1.x: an instance of IndirectList, IndirectMap or Hashtable, or one of their subclasses.
    - JDK 1.2 or higher: an instance of an implementor of Collection or Map.
    MAPPING: oracle.toplink.mappings.OneToManyMapping[listB]
    DESCRIPTOR: Descriptor(ClassA --> [DatabaseTable(A_TABLE)])LOCAL EXCEPTION STACK:
    EXCEPTION [TOPLINK-150] (TopLink - 9.0.3 (Build 423)): oracle.toplink.exceptions.DescriptorException
    EXCEPTION DESCRIPTION: The mapping for the attribute [listB] uses transparent indirection so the attribute [listB] must be initialized to an appropriate container. Currently the value is [null].
    - JDK 1.1.x: an instance of IndirectList, IndirectMap or Hashtable, or one of their subclasses.
    - JDK 1.2 or higher: an instance of an implementor of Collection or Map.
    MAPPING: oracle.toplink.mappings.OneToManyMapping[listB]
    DESCRIPTOR: Descriptor(ClassA --> [DatabaseTable(A_TABLE)])
         at oracle.toplink.exceptions.DescriptorException.indirectContainerInstantiationMismatch(Unknown Source)
         at oracle.toplink.internal.indirection.TransparentIndirectionPolicy.validateAttributeOfInstantiatedObject(Unknown Source)
         at oracle.toplink.mappings.ForeignReferenceMapping.getAttributeValueFromObject(Unknown Source)
         at oracle.toplink.mappings.ForeignReferenceMapping.isAttributeValueInstantiated(Unknown Source)
         at oracle.toplink.mappings.CollectionMapping.mergeChangesIntoObject(Unknown Source)
         at oracle.toplink.internal.descriptors.ObjectBuilder.mergeChangesIntoObject(Unknown Source)
         at oracle.toplink.internal.sessions.MergeManager.mergeChangesOfWorkingCopyIntoOriginal(Unknown Source)
         at oracle.toplink.internal.sessions.MergeManager.mergeChanges(Unknown Source)
         at oracle.toplink.mappings.ObjectReferenceMapping.mergeChangesIntoObject(Unknown Source)
         at oracle.toplink.internal.descriptors.ObjectBuilder.mergeChangesIntoObject(Unknown Source)
         at oracle.toplink.internal.sessions.MergeManager.mergeChangesOfWorkingCopyIntoOriginal(Unknown Source)
         at oracle.toplink.internal.sessions.MergeManager.mergeChanges(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.mergeChangesIntoParent(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commitRootUnitOfWork(Unknown Source)
         at oracle.toplink.publicinterface.UnitOfWork.commit(Unknown Source)
         at Main.main(Main.java:46)

  • Clear Partial Data in an Essbase Aggregate storage database

    Can anyone let me know how to clear partial data from an Aggregate storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all the data. I am aware that version 11 Essbase will allow a partial clear if we write it using MDX commands.
    Can you please help us with this by giving some examples of the same?
    Thanks!

    John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself for v11.1.1.3, right-clicking on the DB gives the option "Clear", with sub-options like "All Data", "All aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data is to be removed, or will it ask us to write some MaxL query for the same?
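    In MaxL, the partial clear takes an MDX set describing the region; a hedged sketch of the statement (the application, database, and member names below are illustrative, not from the poster's outline):

```
alter database AsoSamp.Sample clear data in region
    'CrossJoin({[Jan]}, {[Curr Year]})' physical;
```

    The `physical` keyword removes the cells from disk rather than writing offsetting values (a logical clear); check the MaxL reference for your release before relying on either form.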

  • MacBook shuts down without warning during calibration (39% battery power)

    My MacBook has been shutting down without warning during recent calibrations (which I've done approximately every two months). Most recently it shut down after 2 hours and 20 minutes on battery power with a battery reading of 39% -- i.e., nowhere close to the 0% of a fully rundown battery.
    The result is that:
    (i) I'm not seeing the 'low battery' warning dialog before the computer shuts down; and
    (ii) I'm not able to save current work and close all applications when the battery's charge gets low, before the computer goes to sleep (because it just shuts down without warning)
    Can I do anything about this? Is it just an unavoidable indication of declining battery performance?
    My MacBook is 15 months old. Current 'battery information' is listed below.
    Thanks. Replies appreciated.
    Battery Information:
    Battery Installed: Yes
    First low level warning: No
    Full Charge Capacity (mAh): 4792
    Remaining Capacity (mAh): 4790
    Amperage (mA): 0
    Voltage (mV): 12477
    Cycle Count: 23
    ------------------------------

    stevenray1 wrote:
    1. I'm not sure how to match the path you listed re: the '1.3 battery update' to what's on my MacBook. Maybe I just don't understand how to interpret it, but I don't have a root folder (or any other folder) identified as 'MacBook.' When I used Spotlight, I also didn't come up with a BatteryUpdater.bundle. A file named BatteryUpdate2007001.pkg was all that Spotlight turned up.
    On my MacBook, the hard drive that I boot from is named Mort's MacBook. Yours may be similar; I don't know. It is the drive you boot from. Mine shows in Finder as the top-level drive, the next level in column view. Then you find the System file and go from there.
    If you're not sure what you named your drive or which one it is, click the Apple in the upper left corner and then pick *About This Mac*. You should then see *Startup Disk*, and next to that, the name is what to refer to when you look for the folder path I gave.
    2. You recommended resetting the SMC. However, the info file you provided a link for says it shouldn't be "necessary except as a last resort in cases where a hardware failure of the power management system is suspected." Does that accurately describe the condition I reported? Also, the problems listed as possible reasons to reset the SMC don't include the one I posted about.
    The reason I suggest resetting the SMC is that you're changing one of the components that is controlled by the functions of the SMC. The battery is a big part of the system, which the battery update changes. To be sure it recognizes the new bundle, you reset the SMC. The instructions linked are not all-inclusive, but if you don't want to reset it, that is up to you.
    3. Similarly, re: the 1.2 Battery Update, it's not clear to me that any of the listed 'symptoms' of a faulty battery apply in my case -- unless what I reported indicates a "+low charge capacity/runtime+." Does it?
    The battery update instructions are again not all-inclusive. If you don't want to do it, that is fine. All I know is you have a MacBook Pro update installed at the moment, and you don't have a MacBook Pro. This suggestion of mine has helped many people with problems like the ones you reported. You do not have to do anything you don't want; that is your choice. Is your battery cutting off at 39% normal operation or capacity to you? I personally would call it a faulty battery, but that's just me.
    What would be your definition of a faulty battery?
    4. Apparently, you had a faulty battery which Apple replaced. I wasn't sure from your message whether you had other things going on that made it clear you had a faulty battery, or only the same symptoms I reported. Were there other power-related problems in your case, or only, as you wrote "the same thing as yours except mine cut off at 28%"?
    I guess I misunderstood your original post; everything yours was doing, mine was doing, except mine cut off at 28% not 39%. I can only give advice on what you post. Are there other things happening that you didn't mention?
    My idea of a faulty battery must be different than yours. If I don't get warnings I should and the battery does not work like it used to then that is a faulty battery in my opinion.
    If you're not comfortable with any of my suggestions, contact Apple support and maybe they can help you. Possibly one of the other posters here can give you different advice that you agree with. Never do anything advised in the forums unless you agree with it.

  • YTD Performance in Aggregate Storage

    Has anyone had any problems with performance of YTD calculations in Aggregate storage? Any solutions?

    Did you ever get this resolved? We are running into the same problem. We have an ASO db which requires YTD calcs and TB Last. We've tried using two separate options (CASE and IF statements) on the YTD, Year and Qtr members (e.g. MarYTD). Both worked, and we are now concerned about performance. Any suggestions?

  • Derived Cells in Aggregate storage

    The aggregate storage loads obviously ignore the derived cells. Is there a way to get these ignored records diverted to a log or error file, to view and correct the data at the source system? Has anybody tried any methods for this? Any help would be much appreciated.
    -Jnt


  • Dataload in Aggregate storage outline

    Hi All, my existing code, which works when loading data into a block storage outline, is not working for an aggregate storage outline. When I call the "SendString" API simultaneously about 3-4 times, I get the error "Not supported for agg. storage outline". Are there any API changes for loading data into an agg. storage outline? I didn't find anything related to such changes in the documentation. Regards, Samrat

    I know that EsbUpdate and EsbImport both work with ASO.
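    As a point of comparison (not from the thread), loads driven through MaxL rather than the grid API can target an ASO load buffer in 9.x and later releases. A minimal sketch, with every application, database, file, and rules-file name a placeholder:

    ```
    /* All names below are placeholders -- a hedged sketch only */
    alter database "MyApp"."MyDb"
        initialize load_buffer with buffer_id 1;

    import database "MyApp"."MyDb" data
        from server data_file 'input.txt'
        using server rules_file 'ldall'
        to load_buffer with buffer_id 1
        on error write to 'load.err';

    import database "MyApp"."MyDb" data
        from load_buffer with buffer_id 1;
    ```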

  • Incremental Load in Aggregate Storage

    Hi, from what I understand, Aggregate Storage (ASO) clears all data if a new member gets added to the outline. This is unlike Block Storage (BSO), where we can restructure the cube if a new member is added to the outline. We need to load data daily into an ASO cube, and the cube contains 5 yrs of data. We may get a new member in the customer dimension daily. Is there a way we can retain (restructure) existing data when updating the customer dimension and then add the new data? Otherwise, we will have to rebuild the cube daily and therefore reload 5 yrs of data (about 600 million recs) on a daily basis. Is there a better way of doing this in ASO? Any help would be appreciated. Thanks --- suren_v

    Good information, Steve. Is the System 9 Essbase DB Admin Guide available online? I could not find it here: http://dev.hyperion.com/resource_library/technical_documentation (I recently attended the v7 class in Dallas and it was excellent!)
    quote (originally posted by scran4d): Suren: In the version 7 releases of Essbase ASO, there is not a way to hold on to the data if a member is added to the outline; data must be reloaded each time. This is changed in Hyperion's latest System 9 release, however.

  • SSPROCROWLIMIT and Aggregate Storage

    I have been experimenting with detail-level data in an Aggregate Storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube! Any plans to remove the limit? Craig Wahlmeier

    We are using ASO for a very large (20 dims) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can only be used on small dimensions and only on one dimension. The other big difference is that you need to reload and calc your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.
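    On the original question's limit: SSPROCROWLIMIT is a server-side essbase.cfg setting. A minimal sketch of raising it (the exact ceiling, and whether the setting still applies, varies by release, so verify the value against your version's Technical Reference):

    ```
    SSPROCROWLIMIT 500000
    ```

    As with other essbase.cfg settings, this generally takes effect only after the Essbase server is restarted.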

  • Aggregate storage data export failed - Ver 9.3.1

    Hi everyone,
    We have two production servers: Server1 (App/DB/Shared Services server) and Server2 (Analytics). I am trying to automate a couple of our cubes using Windows batch scripting and MaxL. I can export the data within EAS successfully, but when I use the following command in a MaxL editor, it gives the following error.
    Here's the MaxL I used, which I am pretty sure is correct.
    Failed to open file [S:\Hyperion\AdminServices\deployments\Tomcat\5.0.28\temp\eas62248.tmp]: a system file error occurred. Please see application log for details
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270083)
    A system error occurred with error number [3]: [The system cannot find the path specified.]
    [Tue Aug 19 15:47:34 2008]Local/MyAPP/Finance/admin/Error(1270042)
    Aggregate storage data export failed
    Does anyone have any clue why I am getting this error?
    Thanks in advance!
    Regards
    FG

    This error was due to incorrect SSL settings for our shared services.
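    For reference, a generic ASO data export in MaxL follows the shape below; the application, database, and path are placeholders, not the poster's actual statement. One thing worth checking with errors like the one above: the file path is resolved by the Essbase server process, so a mapped drive (e.g. S:\) that exists only for an interactive user session can fail with "The system cannot find the path specified".

    ```
    /* Placeholder names and path -- a hedged sketch only */
    export database "MyAPP"."Finance" data
        to data_file '/exports/finance.txt';
    ```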
