Change Aggregate Storage Cache

Does anyone know how to change the aggregate storage cache setting in MaxL? I can no longer see it in EAS, and I'm not sure it can be changed in MaxL. Any clue?
Thanks for your help.

Try something like
alter application ASOSamp set cache_size 64MB;
I thought you right-click the ASO app in EAS and edit Properties > Pending cache size limit.
Cheers
John
http://john-goodwin.blogspot.com/
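To expand on that, a minimal MaxL sketch (the ASOsamp.Sample names are just examples; note that cache_size is a pending setting which, if I remember correctly, only takes effect when the application restarts):

/* check the current aggregate storage cache size */
query database 'ASOsamp'.'Sample' get cache_size;
/* set the new (pending) size */
alter application 'ASOsamp' set cache_size 64MB;
/* restart the application so the new size takes effect */
alter system unload application 'ASOsamp';
alter system load application 'ASOsamp';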

Similar Messages

  • Aggregate storage cache warning during buffer commit

    Summary
    Having followed the documentation to set the ASO storage cache size, I still get a warning during buffer load commit saying that it should be increased.
    Storage Cache Setting
    The documentation says:
    A 32 MB cache setting supports a database with approximately 2 GB of input-level data. If the input-level data size is greater than 2 GB by some factor, the aggregate storage cache can be increased by the square root of the factor. For example, if the input-level data size is 3 GB (2 GB * 1.5), multiply the aggregate storage cache size of 32 MB by the square root of 1.5, and set the aggregate cache size to the result: 39.04 MB.
    My database has 127,643,648 KB of base data, which is about 60.9 times bigger than 2 GB. The square root of that factor is 7.8, so my optimal cache size should be 7.8 * 32 MB ≈ 250 MB. My cache size is in fact 256 MB, because I have to set it before the data load based on estimates.
    Data Load
    The initial data load is done in 3 maxl sessions into 3 buffers. The final import output then looks like this:
    MAXL> import database "4572_a"."agg" data from load_buffer with buffer_id 1, 2, 3;
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1270041 - For better performance, increase the size of aggregate storage cache.
    OK/INFO - 1003058 - Data load buffer commit elapsed time : [5131.49] seconds.
    OK/INFO - 1241113 - Database import completed ['4572_a'.'agg'].
    MAXL>
    The Question
    Can anybody tell me why the final import is recommending increasing the storage cache when it is already slightly larger than the value specified in the documentation?
    Versions
    Essbase Release 11.1.2 (ESB11.1.2.1.102B147)
    Linux version 2.6.32.12-0.7-default (geeko@buildhost) (gcc version 4.3.4 [gcc-4_3-branch revision 152973] (SUSE Linux) ) #1 SMP 2010-05-20 11:14:20 +0200 64 bit

    My understanding is that the storage cache sizing calculation you quoted is based on the cache requirements for retrieval. This recommendation has remained unchanged since ASO was first introduced in v7 (?) and certainly predates the advent of parallel loading.
    I think that the ASO cache is used while the buffers are combined. As a result, depending on how ASO works internally, you would get this warning unless:
    1. your cache is equal to the final load size of the database;
    2. OR, if the cache is only used when data exists for the same "sparse" combination of dimensions in more than one buffer, the required size is a function of the number of cross-buffer combinations required;
    3. OR the cache is needed only when compression dimension member groups cross buffers.
    By "sparse" dimensions I mean the non-compressed dimensions.
    Therefore you might try some experiments. To test each case above:
    1. Forget it - you will get this message unless you have a cache large enough for the final data set size on disk.
    2. Sort your data so that no dimensional combination exists in more than one buffer - i.e. sort by all non-compression dimensions, then by the compression dimension (see the MaxL sketch after this list).
    3. Often your compression dimension is time-based (EVEN THOUGH THIS IS VERY SUB-OPTIMAL). If so, you could sort the data by the compression dimension only and break the files so that the first 16 compression members (as seen in the outline) are in buffer 1, the next 16 in buffer 2, and the next in buffer 3.
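    For suggestion 2, the load might look like this in MaxL (a sketch only - the database name is taken from your log above, while the file names and resource_usage value are hypothetical; each buffer is filled from its own MaxL session):

    /* session 1 - repeat with buffer_id 2 and 3 in the other sessions */
    alter database '4572_a'.'agg' initialize load_buffer with buffer_id 1 resource_usage 0.3;
    import database '4572_a'.'agg' data from data_file '/data/sorted_part1.txt'
     to load_buffer with buffer_id 1 on error abort;
    /* once all three buffers are loaded, commit them in one statement */
    import database '4572_a'.'agg' data from load_buffer with buffer_id 1, 2, 3;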
    Also, if your machine is IO-bound (as most are during a load of this size) and your CPU is not, try using OS-level compression on your input files - it could speed things up greatly.
    Finally, regarding my comments on the time-based compression dimension - you should consider building a stored dimension for this along the lines of what I have proposed in some posts on Network54 (search for DanP on network54.com/forum/58296 - I would give you a link, but it is down now).
    OR, better yet, see the forthcoming book (of which Robb is a co-author), Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals: http://www.amazon.com/Developing-Essbase-Applications-Techniques-Professionals/dp/1466553308/ref=sr_1_1?ie=UTF8&qid=1335973291&sr=8-1
    I really hope you will try the suggestions above and post your results.

  • Dataload in Aggregate storage outline

    Hi All,
    My existing code, which works when loading data into a block storage outline, is not working for an aggregate storage outline. When I call the "SendString" API simultaneously about 3-4 times, I get the error "Not supported for agg. storage outline". Are there any API changes for loading data into an aggregate storage outline? I didn't find anything related to such changes in the documentation.
    Regards,
    Samrat

    I know that EsbUpdate and EsbImport both work with ASO.

  • Incremental Load in Aggregate Storage

    Hi,
    From what I understand, Aggregate Storage (ASO) clears all data if a new member gets added to the outline. This is unlike Block Storage (BSO), where we can restructure the cube if a new member is added to the outline.
    We need to load data daily into an ASO cube, and the cube contains 5 years of data. We may get a new member in the customer dimension daily. Is there a way we can retain (restructure) existing data when updating the customer dimension and then add the new data? Otherwise, we will have to rebuild the cube daily and therefore reload 5 years of data (about 600 million records) on a daily basis.
    Is there a better way of doing this in ASO?
    Any help would be appreciated.
    Thanks
    --- suren_v

    Good information Steve. Is the System 9 Essbase DB Admin Guide available online? I could not find it here: http://dev.hyperion.com/resource_library/technical_documentation
    (I recently attended the v7 class in Dallas and it was excellent!)
    Originally posted by scran4d:
    "Suren: In the version 7 releases of Essbase ASO, there is not a way to hold on to the data if a member is added to the outline; data must be reloaded each time. This is changed in Hyperion's latest System 9 release, however."

  • SSPROCROWLIMIT and Aggregate Storage

    I have been experimenting with detail-level data in an Aggregate Storage style cube. I will have 2 million members in one of my dimensions; for testing I have 514,683. If I try to use the spreadsheet add-in to retrieve from my cube, I get the error "Maximum number of rows processed [250000] exceeded [514683]". This indicates that my SSPROCROWLIMIT is too low. Unfortunately, the upper limit for SSPROCROWLIMIT is below my needs. What good is this new storage model if I can't retrieve data from the cube! Any plans to remove the limit?
    Craig Wahlmeier

    We are using ASO for a very large (20 dims) database. The data compression and performance have been very impressive. The ASO cubes are much easier to build, but have far fewer options: no calc scripts, and formulas are limited in that they can only be used on small dimensions and only on one dimension. The other big difference is that you need to reload and calc your data every time you change metadata. The great thing for me about 7.1 is that it gives you options, particularly when dealing with very large, sparse, non-finance cubes. If your client is talking about making calcs faster, ASO is only going to work if it is an aggregation calc.

  • Can I change the storage device for a user on my MacBook Pro

    That basically says it all

    It means I want to change the storage device on the computer. I have the drive built into the MacBook, and I have an external drive. I want to know how I can make the external drive my main source of storage. And, if it's possible, I want it to be for a specific user on the computer.

  • How to change the storage location in Lightroom 3.6.?

    How to change the storage location in Lightroom 3.6?
    I use the internal NTFS hard drive of my MacBook Pro as the storage location for Lightroom 3.6, installed under Snow Leopard, but my photo data is getting too big now. I have two fast, big external eSATA hard drives which I want to use for my data/photos. I know how to copy the data from one hard drive to another, but I don't know how to easily change the settings in connection with Lightroom. I still want to view all photos on my MacBook Pro, but I don't want to save any photos on my internal hard drive any more. Isn't it possible to have the Lightroom program on my internal hard drive but the data stored twice, on two different external hard drives? If I look at Lightroom's catalogue settings, I see that the internal hard drive is my storage location now, but my two external hard drives don't appear as folders. Why? How do I change that so that the external hard drives appear in Lightroom and I can view all my photos immediately (sometimes using external hard drive 1 and sometimes external hard drive 2)?
    My external hard drives are HFS+, but I use the Paragon software, so there shouldn't be any problem reading and writing the data (I also use Final Cut Express, which is why my external hard drives are HFS+ and not NTFS).
    I want to import a lot of data from CDs - should I copy the photos to my external hard drive first and then import them into Lightroom, or is it easier to import the data directly from the CD in Lightroom, so that I don't have to name the folders twice?
    I see that I have to sort the photos from the CDs into different new folders, and I am used to doing that in the Finder of my MacBook Pro. In Lightroom I don't want to have to click each of hundreds of photos to put them into the right folders... But how do I import only part of a CD quickly into the right (newly created) folders?
    Kind regards, Karin

    Would you really leave the names of photos like DSC05271.ARW and DSC05271.jpeg etc.?
    What if I name them, for instance, like ...
    2013-09-11_Austria_1_mountain.jpeg
    2013-09-11_Austria_2_mountain.jpeg
    2013-09-11_Austria_3_valley.jpeg
    I would NOT do this. I would not change the file name.
    I would add keywords and other metadata to the photos to identify the content of the photos, and from that point on, I would use keywords and other metadata to search for these photos.
    However, let's look at the big picture. I think there are three different methods of organizing:
    1. Organize via keywords and other metadata, and not via operating system constructs such as file name and folder name
    2. Organize via operating system constructs such as file name and folder name, and not via keywords and other metadata
    3. Some combination of 1 and 2
    You can choose any one of the three methods above, the choice is yours. I am a very strong believer that for most people (and it sounds like you are one of those people), method 1 is the best method. However, not everyone agrees.
    There are drawbacks to using file names to identify your photos. One big drawback is that you must type the information into each photo's name, which is tedious; whereas if you wanted to assign the keyword "mountain" to many photos, you could assign the keyword to multiple photos at once. Also, if you misspell the information in the file name of any photo, you have just made the photo much harder to find; whereas in Lightroom, a keyword can be assigned via a mouse click (so you can't misspell it), or by typing the first few letters and letting Lightroom auto-fill find the correct keyword name. So using keywords is a much simpler way to go.
    .. is this much too complicated to do in Lightroom, or even dangerous in that photos could get lost?
    It is very simple to add keywords, as I explained for this case. Create keywords "Austria", "Mountain", "Valley"; then select the desired photos, and then click on the check box next to the keyword name. It is much more complicated in my opinion to rename the photos in this manner.
    There is less danger in doing this via keywords than the danger doing this via file names, because of potential mis-spellings.
    If I spend a lot of time editing photos (in Lightroom or Photoshop), I don't want to lose those photos later if, for instance, Lightroom is no longer the software I use for my photos (in 10, 20 years?!)
    Lightroom will OPTIONALLY write your keywords and other metadata to the files (or to sidecar files in the case of RAW). Every photographic application I know of (and I'm sure those in the future) can read the keywords and other metadata. So if you ever want to switch to some other software, your keywords and other metadata are available. You can turn this option on via Edit->Catalog Settings->Metadata, check "Automatically Write Changes to XMP".
    So I have to create two folders - one with the originals and one with the edited exported photos?
    No, you do not have to create two folders. You don't have to export everything, but if you do, you can put the exports wherever you want, including in the same folder as the originals, with a different name.
    Why not edit them in Photoshop and save them in the second folder which I import later in Lightroom? Because it takes longer?
    Wow, you really need to think about: Why are you using Lightroom?
    Could you explain this to me? Why are you using Lightroom?
    If your goal is to edit the photos in Photoshop and then put them in folders with whatever custom names you want to give them, then what is the benefit of using Lightroom? You can do all of this WITHOUT Lightroom. You keep explaining that your intended goal is to use Photoshop and folders/file names, and this is a goal which AVOIDS all the benefits of Lightroom. I can't understand what you think the benefits of using Lightroom are for you.
    It is starting to sound to me like Lightroom is not the right software for you.
    Some people want to see a movie on their TV and are able to watch a DVD (not all!). Flickr seems to be a solution for sharing big movies (many GBs) with friends (?)
    You haven't mentioned this before, and you should check the rules on Flickr regarding how long a video you can upload.

  • I need to change the storage hard drive for both lightroom and Photoshop from my C drive to F drive

    How do I change my storage drive from C to F in both Lightroom and Photoshop, as I am running out of space on my C drive?

    Storage drive? Photoshop? What platform and OS are you running?
    I use a PC, therefore Windows. When I think about an application like Photoshop, I know there are going to be files all over the place. Windows likes everything installed on the boot disk. I gave up that fight many years ago and let Windows have its way when applications are installed using an installer. However, besides the application files you have user files, and I break user files down into two categories: Windows user files (application settings, preferences, caches, etc.) and user data files (images, text documents, PDFs). Applications like Photoshop also have add-ons and need swap space as well. While all the Windows default libraries reside on the C: drive, you can add storage that is not on C: to your Documents, Music, Pictures and Videos libraries. Most of my user data files are actually on external eSATA or USB 3 drives. System files, application files, paging, swapping and backups are on an internal SSD and a fast disk drive. Some of the folders that look like they are on C: are actually shortcut links to folders on other drives.

  • Aggregate Storage Backup level 0 data

    When exporting level 0 data from aggregate storage through a batch job you can use a MaxL script with "export database [dbs-name] using server report_file [file_name] to data_file [file_name]". But how do I build a report script that exports all level 0 data so that I can read it back with a load rule?
    Can anyone give me an example of such a report script? That would be very helpful.
    If there is a better way to approach this matter, please let me know.
    Thanks
    /Fredrik

    An example from the Sample:Basic database:

    // This Report Script was generated by the Essbase Query Designer
    <SETUP { TabDelimit } { decimal 13 } { IndentGen -5 } <ACCON <SYM <QUOTE <END
    <COLUMN("Year")
    <ROW("Measures","Product","Market","Scenario")
    // Page Members
    // Column Members
    // Selection rules and output options for dimension: Year
    {OUTMBRNAMES} <Link ((<LEV("Year","Lev0,Year")) AND (<IDESC("Year")))
    // Row Members
    // Selection rules and output options for dimension: Measures
    {OUTMBRNAMES} <Link ((<LEV("Measures","Lev0,Measures")) AND (<IDESC("Measures")))
    // Selection rules and output options for dimension: Product
    {OUTMBRNAMES} <Link ((<LEV("Product","SKU")) AND (<IDESC("Product")))
    // Selection rules and output options for dimension: Market
    {OUTMBRNAMES} <Link ((<LEV("Market","Lev0,Market")) AND (<IDESC("Market")))
    // Selection rules and output options for dimension: Scenario
    {OUTMBRNAMES} <Link ((<LEV("Scenario","Lev0,Scenario")) AND (<IDESC("Scenario")))
    !
    // End of Report

    Note that no attempt was made here to eliminate shared member values.
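    Depending on your release, it may also be worth checking the native MaxL export before maintaining a report script; since all stored ASO data is input level, a plain data export is effectively a level 0 export. A minimal sketch, assuming a release that supports ASO database export (9.3+, if I remember correctly) and hypothetical application/database/file names:

    /* export all stored (input-level) data from an ASO database */
    export database 'ASOsamp'.'Sample' data to data_file 'lev0_backup.txt';

    The exported file is column-formatted and can be read back with a load rule.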

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    Just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send; however, it is very slow, especially when sending many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, and therefore not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, the update process is throttled on the server side so that only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc

  • How to change the storage location using BAPI_OUTB_DELIVERY_CHANGE

    Hi !
    I want to do a batch split in the delivery using BAPI_OUTB_DELIVERY_CHANGE.
    Can anyone tell me how to pass/change the storage location of each batch item?
    Is there any other BAPI that can do the batch split and also populate the storage location for the split batches?
    Regards,
    Firoz.

    Hi all,
    BAPI_OUTB_DELIVERY_CHANGE can be used to do a batch split and update the storage location against each item of an outbound delivery.
    I have done that in the following way:
    1> First I updated the storage location for each delivery item using 'BAPI_OUTB_DELIVERY_CHANGE', passing some minimal parameters.
    Fetch the item details from the LIPS table based on the outbound delivery into internal table li_lips, and pass the corresponding fields to the item_data, item_control and item_data_spl parameters.
    LOOP AT li_lips INTO lw_lips.
        lw_item_data-deliv_numb        = lw_lips-vbeln.
        lw_item_data-deliv_item        = lw_lips-posnr.
        lw_item_data-material          = lw_lips-matnr.
        lw_item_data-fact_unit_nom     = lw_lips-umvkz.
        lw_item_data-fact_unit_denom   = lw_lips-umvkn.
        lw_item_data-base_uom          = lw_lips-meins.
        lw_item_data-sales_unit        = lw_lips-vrkme.
        lw_item_control-deliv_numb     = lw_lips-vbeln.
        lw_item_control-deliv_item     = lw_lips-posnr.
        lw_item_data_spl-deliv_numb    = lw_lips-vbeln.
        lw_item_data_spl-deliv_item    = lw_lips-posnr.
        lw_item_data_spl-pick_denial   = 'X'.
    *   v_lgort is the storage location you want updated
        lw_item_data_spl-stge_loc      = v_lgort.
    *   Append the work areas to the internal tables passed to the BAPI
        APPEND lw_item_data     TO li_item_data.
        APPEND lw_item_control  TO li_item_control.
        APPEND lw_item_data_spl TO li_item_data_spl.
    ENDLOOP.
    * Pass the delivery number in the header work areas
      lw_header_data-deliv_numb        = v_delivery_no.
      lw_header_control-deliv_numb     = v_delivery_no.
      lw_header_tech_control-upd_ind   = 'U'.
    * Call the BAPI to change the storage location
      CALL FUNCTION 'BAPI_OUTB_DELIVERY_CHANGE'
        EXPORTING
          header_data    = lw_header_data
          header_control = lw_header_control
          delivery       = v_delivery_no
          techn_control  = lw_header_tech_control
        TABLES
          item_data      = li_item_data
          item_control   = li_item_control
          return         = li_return_change
          item_data_spl  = li_item_data_spl.
    * Commit the change
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
    2> Then I used the same BAPI, 'BAPI_OUTB_DELIVERY_CHANGE', again to do the batch split / update the batch (if required) and change the actual delivered quantity.
    Here you pass the same data, along with the actual delivered quantity and the different batches, to do the batch split.
    It is assumed that the batch numbers and actual delivered quantities come from an internal table i_lqua.
    * Loop through the internal table with the bin details
      LOOP AT i_lqua INTO w_lqua.
    *   Clear work areas before use
        CLEAR: lw_item_data, lw_lips, lw_item_control, lw_item_data_spl.
    *   Read the item details, matching on material number
        READ TABLE li_lips INTO lw_lips
          WITH KEY matnr = w_lqua-matnr BINARY SEARCH.
    *   If the read is successful, pass values into the item-level tables
        IF sy-subrc EQ 0.
          lw_item_data-deliv_numb        = lw_lips-vbeln.
          lw_item_data-deliv_item        = lw_lips-posnr.
          lw_item_data-material          = lw_lips-matnr.
          lw_item_data-batch             = w_lqua-charg.
          lw_item_data-dlv_qty           = w_lqua-verme.
          lw_item_data-dlv_qty_imunit    = w_lqua-verme.
          lw_item_data-base_uom          = w_lqua-meins.
          lw_item_data-hieraritem        = lw_lips-posnr.
          lw_item_data-usehieritm        = 1.
          lw_item_data-fact_unit_nom     = lw_lips-umvkz.
          lw_item_data-fact_unit_denom   = lw_lips-umvkn.
          lw_item_data-sales_unit        = lw_lips-vrkme.
          lw_item_control-deliv_numb     = lw_lips-vbeln.
          lw_item_control-deliv_item     = lw_lips-posnr.
          lw_item_control-chg_delqty     = 'X'.
          lw_item_data_spl-deliv_numb    = lw_lips-vbeln.
          lw_item_data_spl-deliv_item    = lw_lips-posnr.
          lw_item_data_spl-stge_loc      = w_lqua-lgort.
          lw_item_data_spl-pick_denial   = 'X'.
    *     Append the work areas to the internal tables passed to the BAPI
          APPEND lw_item_data     TO li_item_data.
          APPEND lw_item_control  TO li_item_control.
          APPEND lw_item_data_spl TO li_item_data_spl.
    *     Clear work areas after use
          CLEAR: lw_item_data, w_lqua, lw_item_data_spl, lw_item_control,
                 lw_vbpok, lw_lips.
        ENDIF.
      ENDLOOP.
    * Pass the delivery number in the header work areas
      lw_header_data-deliv_numb        = v_delivery_no.
      lw_header_control-deliv_numb     = v_delivery_no.
      lw_header_tech_control-upd_ind   = 'U'.
    * Call the BAPI to change the batch / batch split / delivery quantity
      CALL FUNCTION 'BAPI_OUTB_DELIVERY_CHANGE'
        EXPORTING
          header_data    = lw_header_data
          header_control = lw_header_control
          delivery       = v_delivery_no
          techn_control  = lw_header_tech_control
        TABLES
          item_data      = li_item_data
          item_control   = li_item_control
          return         = li_return_change
          item_data_spl  = li_item_data_spl.
    * Commit the change
      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
    This is the best way I found to do the batch split and update the storage location together.
    I hope this code helps.

  • Stock Type Changes for Storage Unit Mgt.

    Hello,
    I am looking to develop an R/3 system program (in ABAP) which, based on certain criteria, can change a storage unit's stock type between unrestricted, quality, and blocked. I know there is a BAPI, 'BAPI_GOODSMVT_CREATE', which can be used to do the posting change in IM, and there are function modules, L_TO_CREATE_SINGLE, etc., which can be used to create/confirm transfer orders for the storage unit in WM, but I was hoping someone could provide guidance on how I could make this simpler. Is there a one-step method I can use to combine the movements for IM and WM (either programmatically or through configuration)? Any help would be greatly appreciated.

    Hello Allen,
    Thanks for the detailed explanation.
    Here is a BAdI available to meet your requirement (SPRO -> Log Exe -> Warehouse Management -> System Modification -> BAdIs in WM -> BAdI for Quant Determination):
    Business Add-In for Quant Determination
    Application components: LE-WM, LE-WM-TFM
    Use of this component
    You can use the BAdI LE_WM_LE_QUANT in the Warehouse Management system (WMS) to influence quant selection for a posting change notice after the system has determined all of the quants that correspond to the specifications in the posting change notice.
    If the system has determined more quants than are necessary for processing the posting change notice, you can restrict the quant selection further, so that you can process the posting change notice in the background.
    Hope this will be helpful.
    regards,
    Arif Mansuri
    Reward if answer is helpful.

  • Can you change flash storage to normal storage

    Can you change flash storage for normal storage?

    Please word your question more clearly or in another way and someone should be able to provide an answer.
    Are you referring to a replacement drive for the MacBook Pro that you already own, or a new Mac that you plan on purchasing but don't want an SSD (Flash) hard drive?
    Other than cost per GB, a "normal", "old-school", 2.5" SATA laptop HDD will underperform nearly any modern "flash" SSD in a laptop.

  • How do I change my storage plan? I'm currently being charged 79p a month for a storage plan I don't use and would like to cancel it!

    How do I change my storage plan? I'm currently being charged 79p a month for a storage plan I don't use and would like to cancel this and obtain a refund!
    Please help!

    The following has information on how to cancel (downgrade to the free 5 GB plan): iCloud storage upgrades and downgrades - Apple Support

  • Clear Partial Data in an Essbase Aggregate storage database

    Can anyone let me know how to clear partial data from an Aggregate Storage database in Essbase v11.1.1.3? We are trying to clear some data in our database and don't want to clear out all of it. I am aware that version 11 Essbase allows a partial clear written with MDX commands.
    Can you please help by giving some examples of the same?
    Thanks!

    John, I clearly get the difference between the two. What I am asking is: in the EAS tool itself for v11.1.1.3, right-clicking on the DB gives the option "Clear", with the sub-options "All Data", "All aggregations" and "Partial Data".
    I want to know more about this option. How will it know which partial data is to be removed, or will it ask us to write some MaxL query for the same?
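    For what it's worth, the "Partial Data" clear corresponds to the MaxL partial clear, which takes an MDX set expression describing the region to remove (my recollection is that the EAS dialog simply prompts you for that expression). A minimal MaxL sketch, with hypothetical application, database and member names:

    /* 'physical' removes the cells from disk; without it the clear is
       logical, writing offsetting negative values instead */
    alter database 'MyApp'.'MyDb' clear data in region
     'Crossjoin({[Jan]}, {[Curr Year]})' physical;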
