liveCache data cache usage - table MONITOR_CACHES

Hi Team,
We have a requirement to capture the data cache usage of liveCache on an hourly basis.
Instead of doing it manually by going into LC10 and copying the data into an Excel sheet, is there a table which captures this data periodically, which we can use to pull the report in one shot?
"monitor_caches" is one table which holds this data, but we are not sure how we can get the data from this table, or even how to view its contents.
As "monitor_caches" is a MaxDB table, I am not sure how I can get the data from it; I have never worked on MaxDB before.
Has anyone had this requirement?
Warm Regards,
Venu

Hi,
For cache usage, the below tables can be referred to (a sample query follows the list):
Data Cache Usage - total (table MONITOR_CACHES)
Data Cache Usage - OMS Data (table MONITOR_CACHES)
Data Cache Usage - SQL Data (table MONITOR_CACHES)
Data Cache Usage - History/Undo (table MONITOR_CACHES)
Data Cache Usage - OMS History (table MONITOR_CACHES)
Data Cache Usage - OMS Rollback (table MONITOR_CACHES)
Out Of Memory Exceptions (table SYSDBA.MONITOR_OMS)
OMS Terminations (table SYSDBA.MONITOR_OMS)
Heap Usage (table OMS_HEAP_STATISTICS)
Heap Usage in KB (table OMS_HEAP_STATISTICS)
Maximum Heap Usage in KB (table ALLOCATORSTATISTICS)
System Heap in KB (table ALLOCATORSTATISTICS)
Parameter OMS_HEAP_LIMIT (KB) (dbmrfc command param_getvalue OMS_HEAP_LIMIT)
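If you have SQL access to the liveCache instance (for example via SQL Studio, transaction DB50/DBA Cockpit, or dbmcli), the figures can also be read with a plain SELECT and collected hourly. The sketch below is only an assumption of how the table is exposed; the owner (SYSDBA here) and the column list depend on the MaxDB/liveCache release, so check them in your system first:
-- hedged sketch: read the full cache statistics row set;
-- narrow the column list once you know which figures you need
SELECT *
  FROM SYSDBA.MONITOR_CACHES
The result can then be appended to a small history table (or downloaded) every hour to build the report in one shot.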
For reporting purposes, look into the following BW extractors and develop a BW report.
/SAPAPO/BWEXDSRC APO -> BW: Data Source - Extractor
/SAPAPO/BWEXTRAC APO -> BW: Extractors for Transactional Data
/SAPAPO/BWEXTRFM APO -> BW: Formula to Calculate a Key Figure
/SAPAPO/BWEXTRIN APO -> BW: Dependent Extractors
/SAPAPO/BWEXTRMP APO -> BW: Mapping Extractor Structure Field
Hope this helps.
Regards,
Deepak Kori

Similar Messages

  • Data Cache usage 96%

    Hi Experts,
    With a total of 7 planning versions and a total size of 9 GB, the data cache is constantly filling up.
    A total of 14 GB is allocated, and each time we create a new version the usage goes up by 10%.
    There are no old sessions older than 4 hours, and when 5 planners start working simultaneously the cache starts filling up.
    Thanks,
    Naren

    > With a total of 7 planning versions and a total size of 9 GB, the data cache is constantly filling up.
    >
    > A total of 14 GB is allocated, and each time we create a new version the usage goes up by 10%.
    >
    > There are no old sessions older than 4 hours, and when 5 planners start working simultaneously the cache starts filling up.
    Now what?
    You create multiple versions of liveCache objects in parallel - of course the cache usage will increase (and also the used space in the data area).
    That's how the multi-version-data-handling works.
    A copy of the current version is created for each new version.
    So either add more cache, reduce your planning version complexity/size, or do less in parallel if you want to prevent the cache from being 100% used.
    regards,
    Lars

  • Kodo Data Cache Usage

    I'm using Kodo 2.3.2 running in a managed environment (JBoss 3.x). I have
    it configured to use the "LocalCache" for caching of object instances and I
    can see that the cache is working.
    Does anyone know how I can get a reference to the DataCache? There is
    example code in section 7.3.2 of the Kodo Manual, but that code doesn't
    compile for me.
    The specific code from the manual is:
    PersistenceManagerFactoryImpl factory = (PersistenceManagerFactoryImpl)
    pm.getPersistenceManagerFactory();
    factory.getDataCache().pin(JDOHelper.getObjectId(o));
    The PersistenceManagerFactoryImpl doesn't have a method "getDataCache".
    Thanks in advance.

    Oops; that should be 'factory.getConfiguration().getDataCache()'.
    -Fred
    Fred Lucas
    SolarMetric Inc.
    202-595-2064 x1122
    http://www.solarmetric.com

  • How to check if table or index bound to data cache loaded in memory?

    In my system, there are many named data caches created, and certain objects are bound to those data caches.
    For example, I have a large table mytab which is bound to data cache mycache.
    When I issue a SQL statement like select * from mytab where x=y, the first run is very slow. Run it again and again, and then it is very fast. That means the data is loaded in the cache and ready for the same query.
    Question:
    1. How can I know if the table mytab has been loaded in data cache mycache?
    2. How can I load mytab into data cache mycache once, for all queries in the future?

    one way to monitor is:
    select CacheName,DBName,OwnerName,ObjectName,IndexID,sum(CachedKB) as "CachedKb"
    from master..monCachedObject
    group by CacheName,DBName,OwnerName,ObjectName,IndexID
    order by CacheName,DBName,OwnerName,ObjectName,IndexID
    But, you should understand that caches have a finite size, and caches contain "pages" of objects including data pages, index pages, and LOB pages. Also, caches may have different pool sizes, so a page can be in only one cache pool. So, if you want a table and all of its indexes and text/image pages to be loaded into a dedicated cache, you need a large enough cache to fit all of those pages, and decide which buffer pool you want them in (typically either the 1-page pool or the 8-page pool).
    Then, simply execute SQL (or dbcc) commands that access all of those pages in the manner you wish to find them in the cache.  For example, two statements, one that scans the table using 2k reads, and another that scans the index (mytab_ind1) using 2k reads.
    select count(*) from mytab plan '( i_scan mytab_cl mytab) ( prop mytab ( prefetch 2 ) ( lru ) )'
    select count(*) from mytab plan '( i_scan mytab_ind1 mytab) ( prop mytab ( prefetch 2 ) ( lru ) )'
    etc etc.
    I used count(*) to limit the result sets of the examples.

  • How to purge data cache table using command line

    Hi:
    Is there a way to purge the data cache table using command line?
    thanks!

    Thanks, Mike.
    I'm thinking about the ldconsole provided with ALDSP.
    The ldconsole has a link for purging the cache. Is there anything I can leverage from there? Is it a JMX component that I can call?

  • Compress all data in existing table

    Hi,
    We are using Oracle 11.2.0.3 and wish to evaluate compression in terms of reducing the time it takes for reports against the tables to run.
    I have had a quick search and can see ALTER TABLE ... COMPRESS, but that only compresses data inserted/updated after the compress operation is executed.
    We don't want to have to create a separate compressed table and replace the existing table with it.
    Is it possible to compress all existing data of an existing table?
    Also, what sort of magnitude of performance gain could we expect on report performance by using compression versus an identical but uncompressed table?
    Also, this is a data warehouse and the large fact table already has bitmap indexes, which are compressed by default.
    We are talking about table compression here.
    Thanks
    Edited by: user5716448 on 05-Mar-2013 05:57

    >
    Trying to make reports as fast as possible.
    >
    Such a blanket 'fast as possible' goal is unrealistic and unattainable. Sounds like a requirement from a business user.
    A goal needs to be specific to be attainable. An index that makes one report faster might make several others slower.
    A report might run fast one day and slow another, or fast the first week of a new month and slow the last week. The cause might be as simple as the varying amount of monthly data at the start versus the end of the month.
    I suggest you itemize your reports and identify a specific goal (SLA - service-level agreement) for each one. Base the goal on the performance you are getting now and state it explicitly: returns results in less than 30 seconds during peak-usage periods (identify a sample peak period).
    >
    Believe we do not need to have a problem in order to look for the most efficient solution.
    >
    You might want to read that again now that you've had a good night's sleep and a cup of coffee.
    A 'solution' cannot exist without a problem: what would it be the solution for?
    >
    If we have quick reports but can make them even quicker, that is something we would want to do.
    >
    Back to that again? You need to establish an SLA for each report and only worry about it if it starts failing to meet that SLA.
    In other words, for any individual report determine what is 'quick enough' and let it go at that. Things can almost always be made quicker but there can be a big price to pay. Without an SLA for a goal you will have a moving target that can never be met because you can NEVER prove that it couldn't be quicker if you just tweaked it some more.
    >
    In your experience, all other things being identical as is good practice with any experiment, what sort of performance gain could we expect from compression on report performance compared to non-compressed data?
    Also, in such star queries is it advisable just to compress the central fact table, or would the dimensions benefit from compression?
    >
    Yes - the benefits can be greater for indexes and dimension tables than for the fact tables themselves. Indexes benefit because fewer index blocks are needed so hot blocks are more likely to be in cache. Same thing with the small dimension (lookup) tables. Fact tables using star-schema data get an additional benefit from the compression of the multiple key values that compress more readily than arbitrary text data like addresses.
    The key missing piece of information is that you haven't specified what compression options are available to you.
    Are you only considering BASIC compression? Or have you licensed Oracle's advanced compression?
    http://www.oracle.com/technetwork/database/storage/advanced-compression-whitepaper-130502.pdf
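    On the original question of compressing rows that are already in the table: with BASIC compression an ALTER TABLE ... MOVE rewrites the existing blocks in compressed form (later conventional inserts are not compressed unless you use the licensed advanced/OLTP option). A hedged sketch with placeholder object names; since MOVE changes rowids, the indexes must be rebuilt afterwards:
    -- rewrite the existing rows of the fact table into compressed blocks
    ALTER TABLE sales_fact MOVE COMPRESS BASIC;
    -- MOVE invalidates the indexes on the table, so rebuild them
    ALTER INDEX sales_fact_bix REBUILD;
    Expect the MOVE to need roughly the table's size in free space and a window without DML against the table.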

  • High cache usage of free memory

    I don't know if this is normal, but my fresh Arch Linux x86_64 install with Xfce uses around 300 MB of 4 GB RAM after boot; then, after I start and close some programs, I get to 3 GB used of 3.87 GB total memory even when all applications are closed. I understand that the kernel caches some data in memory, but I can't clear it all with these commands:
    sync
    echo 1 > /proc/sys/vm/drop_caches
    echo 2 > /proc/sys/vm/drop_caches
    echo 3 > /proc/sys/vm/drop_caches
    After I do that, the system reports 540 MB used, compared to 300 MB after boot. Is there a way to limit the amount of cache usage so I don't end up having all free memory used by cache?
    Last edited by Anril (2009-12-29 13:19:52)

    RAM that is unused is wasted RAM; the Linux kernel manages it very well, so no worries there.
    If you have swap usage it's because most probably you have things stored in the ram that are not being used often but cannot be discarded too so things being used more often take precedence.
    If you think you have enough ram for all your needs then why not disable swap? I've read that there might be a speed penalty for running without swap but I've been running my system without swap (arch64 4G ram) and I've never had any bad surprises.
    There are a few things that might prompt swap usage (don't quote me on that though, it just seems to me to be the case), if you copy many files from one place to the other the kernel will use all the ram available to try to cache them, if you copy many GB then all the ram will get filled (and maybe other things will get evicted to swap).
    It's not a bug; free RAM is there to be used. Things that have been used or needed recently get to stay in RAM, and other things can go to swap. If you can cache a file that has been recently used and need to use it again shortly after, it is much faster if it is in RAM instead of having to be read again from the hard disk. The logic behind it is good; it's just that some corner cases may make it behave badly.

  • Web Application - Data caching of enterprise data

    Sorry in advance if this is off-topic but I can't find anywhere else to post this type of question.
    I am looking for information/suggestions such as books, technology or design methodology for my enterprise web applications. These sites are currently up and functional using only JSP, servlets and regular Java classes stored in a web application session to provide data caching and access. We are using Weblogic Server 6.1 running on an AIX Unix system at this time. I'm not sure that this is the best design architecture as our web sessions are getting too large but I can't think of any other Java technology to use and I need some help. Here's an overview of our environment and our needs.
    Our core data is held in a mainframe based IMS system. Some DB2 is also used. Access to this data is through IMS COBOL transactions which we can execute with IMS Connect. We also use some JDBC to get to the DB2 tables directly where available.
    Some overall application data is cached when the web application is deployed. We use singleton classes which are created and refreshed at deployment and they then refresh themselves from the sources every 24 hours.
    Each time a user logs in we execute several IMS transactions and JDBC calls to cache user specific data in regular Java classes which are then simply placed in the users web session where we access them from JSP, servlets and other Java classes. The fields in these Java classes range from any type of primitive field to TreeMaps of other Java classes. As the data is cached it is sorted and other fields are calculated and stored in these classes. As the user progresses through the system we then may have to do several other IMS transaction and JDBC calls to collect other types of data. All of this is then also added to the users session. Most of this in inquiry. We do allow transactions but those are built from user input and data already cached and are then we just execute the IMS transactions with the input.
    As our application has grown these Java classes have gotten larger and larger. And since these are simply stored in server memory in the web sessions, these are also getting huge. I'm concerned that this is not the best way for this application to be architected. Is there something else we should be doing? I simply don't understand how Entity Java Beans could be used, but then again I don't know much about them. I wouldn't think that caching the data to a local database and accessing it from there would be any more efficient; it would probably just slow down the system from all the I/O.
    Any help or direction would be greatly appreciated.

    The best book you can buy is 'Professional Java Server Programming, j2ee edition' by Wrox. It is by far the best reference I've used. Another quick reference consideration might be the j2ee book provided by codenotes... its quick and to the point.

  • Import Data using Full Table Name

    Is there a way to run the import data wizard so when pulling data from a csv it generates an insert statement not only using the table name but using the Oracle Schema name -- ie fully qualified table name?

    Hi Nilanjan,
    I need help ASAP.
    About this dump-load task, how does it work? Because I was checking out a package called Import SQL, and that one only imports data from a table; however, this table has to be allocated within the database that is being used, am I right? Are these 2 related somehow?
    Does this task order BPC to find data in a SQL table (allocated on a different server, a different instance of SQL) and import it into the SQL Fac2 table?
    Can you help me with a simple explanation of what I need to do to run this task? The help.sap.com page shows a section within DumpLoad Task Usage which is called Importing Into SQL Server...
    * Processing the Application using DumpLoad  -
    Importing into SQL
    You can use DumpLoad to process the application as a standalone procedure, or with data import, export, or clear.
    Prerequisites
    The DumpLoad task (OsoftTaskDumpLoad2008.dll) is registered with Microsoft SSIS. See Registering Custom Tasks.
    Procedure
    1.Open a package or create a new package in Microsoft SSIS on the Planning and Consolidation server.  i already did this for the DumpLoad task ... should I do this for the Import SQL task also ?
    2.Select the task and add it to the package.   WHAT DOES SELECT THE TASK  MEAN?
    3.Open the task, and choose  Data Management  None .
    4.Enter the application set, application, and user ID.
    I'm getting confused; can you please provide more details about the whole procedure? Thanks in advance.
    Velázquez

  • Data service for table in Oracle 8.0.6

    Hi,
    Using WebLogic 8.1.4 and LiquidData 8.5 I am trying to create physical data services for tables in a DB in Oracle 8.0.6. I am aware that that Oracle version is not supported by Oracle anymore, but I need to work with that version anyway (you know how it is sometimes).
    I managed to create a connection pool for this through the WebLogic Server Console by providing the JDBC driver for 8.0.6., but when I want to create a data source using the new connection pool and WebLogic tries to get the metadata, I get pop up windows with messages like:
    "Bigger type length than maximum"
    and
    "OALL8 in an inconsistent state"
    and
    "Protocol violation"
    One more thing to mention: I also added the Oracle 8.0.6 JDBC driver to the WebLogic Server classpath (Tools -> WebLogic Server -> Server Properties ... -> WebLogic Server: added classes12.zip to Server classpath additions) and restarted WebLogic Workshop and Server. Still I get those error messages.
    Is there a special procedure how to provide/configure a specific driver for a DBMS that is not natively supported by WebLogic?
    Any help is appreciated.
    Thanks,
    Wilko

    Hi Mike,
    Thanks for the quick reply. Below the contents of the console window from starting the Workshop and Server. I'll try your next hint next and let you know about the outcome. As far as I see there were no errors issued by the Server while I tried to connect to Oracle 8.0.6 to upload metadata. (I am not sure whether anything was printed out while I started the server). My address is w.eschebach at vsnlinternational dot com.
    Thanks,
    Wilko
    This is what my workshop.cfg looks like:
    C:\bea\weblogic81\workshop
    C:\bea\jdk142_05\jre\bin\java.exe
    -XX:-UseThreadPriorities -Xmx256m -Xms64m -Xss256k -client -Dsun.io.useCanonCaches=false -Dsun.java2d.noddraw=true -Dsun.java2d.d3d=false -Djava.system.class.loader="workshop.core.AppClassLoader" -cp "C:\bea\weblogic81\workshop\wlw-ide.jar" workshop.core.Workshop
    Console output:
    DEBUG: extensions=C:\bea\weblogic81\workshop\\extensions
    INFO: Registering extension com.bea.portal.ide.CommonServices
    INFO: Service com.bea.portal.ide.findrefs.FindRefsSvc registered
    INFO: Handler for urn:com-bea-portal-ide:ref-finders registered
    INFO: Registering extension workshop.control.ControlServices
    INFO: Service com.bea.ide.control.ControlSvc registered
    INFO: Registering extension com.crystaldecisions.integration.weblogic.workshop.report.Bootstrap
    INFO: Registering extension workshop.debugger.DebuggerServices
    INFO: Exit Handler found
    INFO: Service com.bea.ide.debug.DebugSvc registered
    INFO: Handler for urn:com-bea-ide:debugExpressionViews registered
    INFO: Registering extension workshop.jspdesigner.JspDesignerServices
    INFO: Service com.bea.ide.ui.browser.BrowserSvc registered
    INFO: Service com.bea.ide.jspdesigner.PaletteActionSvc registered
    INFO: Handler for urn:com-bea-ide-jspdesigner:tags registered
    INFO: Registering extension workshop.liquiddata.LiquidDataExtension
    INFO: Registering extension workshop.pageflow.services.PageFlowServices
    INFO: Exit Handler found
    INFO: Service workshop.pageflow.services.PageFlowSvc registered
    INFO: Service com.bea.ide.ui.palette.DataPaletteSvc registered
    INFO: Handler for urn:workshop-pageflow-wizard:extension registered
    INFO: Registering extension com.bea.portal.ide.portalbuilder.PortalBuilderServices
    INFO: Service com.bea.portal.ide.portalbuilder.laf.LookAndFeelSvc registered
    INFO: Service com.bea.portal.ide.portalbuilder.laf.css.CssSvc registered
    INFO: Service com.bea.portal.codegen.CodeGenSvc registered
    INFO: Registering extension com.bea.portal.ide.PortalServices
    INFO: Service com.bea.portal.ide.cache.CacheInfoSvc registered
    INFO: Registering extension workshop.process.ProcessExtension
    INFO: Service workshop.process.ProcessSvc registered
    INFO: Service workshop.process.broker.channel.ChannelManagerSvc registered
    INFO: Handler for urn:com-bea-ide-process:process registered
    INFO: Registering extension workshop.shell.ShellServices
    INFO: Exit Handler found
    INFO: Service com.bea.ide.ui.frame.FrameSvc registered
    INFO: Service com.bea.ide.core.datatransfer.DataTransferSvc registered
    INFO: Service com.bea.ide.actions.ActionSvc registered
    INFO: Service com.bea.ide.document.DocumentSvc registered
    INFO: Service com.bea.ide.core.HttpSvc registered
    INFO: Service com.bea.ide.ui.help.HelpSvc registered
    INFO: Service com.bea.ide.ui.output.OutputSvc registered
    INFO: Service com.bea.ide.core.navigation.NavigationSvc registered
    INFO: Service com.bea.ide.filesystem.FileSvc registered
    INFO: Service com.bea.ide.filesystem.FileSystemSvc registered
    INFO: Service com.bea.ide.refactor.RefactorSvc registered
    INFO: Service com.bea.ide.security.SecuritySvc registered
    INFO: Handler for urn:com-bea-ide:actions registered
    INFO: Handler for urn:com-bea-ide:document registered
    INFO: Handler for urn:com-bea-ide:frame registered
    INFO: Handler for urn:com-bea-ide:encoding registered
    INFO: Handler for urn:com-bea-ide:help registered
    INFO: Registering extension workshop.sourcecontrol.SCMServices
    INFO: Service com.bea.ide.sourcecontrol.SourceControlSvc registered
    INFO: Handler for urn:com-bea-ide:sourcecontrol registered
    INFO: Registering extension workshop.sourceeditor.EditorServices
    INFO: Service com.bea.ide.sourceeditor.EditorSvc registered
    INFO: Service com.bea.ide.sourceeditor.compiler.CompilerSvc registered
    INFO: Handler for urn:com-bea-ide:sourceeditor:sourceinfo registered
    INFO: Registering extension com.bea.wls.J2EEServices
    INFO: Service com.bea.wls.ejb.EJBSvc registered
    INFO: Service com.bea.wls.DBSvc registered
    INFO: Registering extension workshop.workspace.WorkspaceServices
    INFO: Exit Handler found
    INFO: Service com.bea.ide.workspace.WorkspaceSvc registered
    INFO: Service com.bea.ide.workspace.ServerSvc registered
    INFO: Service com.bea.ide.workspace.SettingsSvc registered
    INFO: Service com.bea.ide.build.AntSvc registered
    INFO: Service com.bea.ide.workspace.RunSvc registered
    INFO: Handler for urn:com-bea-ide:settings registered
    INFO: Handler for urn:com-bea-ide:project registered
    INFO: Registering extension workshop.xml.XMLServices
    INFO: Service com.bea.ide.xml.types.TypeManagerSvc registered
    INFO: Service com.bea.ide.xml.types.TypeResolverSvc registered
    INFO: Service com.bea.ide.xmlmap.XMLMapSvc registered
    DEBUG: Workshop temp dir: C:\DOCUME~1\TR003137\LOCALS~1\Temp\wlw-temp-18920
    DEBUG: ExtensionsLoaded: 8329ms
    DEBUG: UI Displayed: 11563ms
    DEBUG: Time to load XQuery Functions (in seconds) - 0
    DEBUG: Time to load repository (in seconds) - 0
    DEBUG: LdBuildDriver loaded
    DEBUG: project ProvisioningDataServices activated
    DEBUG: Setting active project to: ProvisioningDataServices
    DEBUG: Workspace Activated: 17126ms
    DEBUG: Document Panel initialized: 17501ms
    DEBUG: *** CompilerProject constructor 1
    DEBUG: WorkspaceLoaded: 17594ms
    DEBUG: getClasspathMapping initiated with 29 item list.
    DEBUG: getClasspathMapping returning 29 item map.
    INFO: Startup Complete
    DEBUG: Time to load repository (in seconds) - 1
    DEBUG: Loading template file wsrp-producer-project.zip
    DEBUG: Loading template file wli-tutorial.zip
    DEBUG: Loading template file wli-schemas.zip
    DEBUG: Loading template file wli-newprocess.zip
    DEBUG: Loading template file wli-helloworld.zip
    DEBUG: Loading template file webflow-project.zip
    DEBUG: Loading template file tutorial-webservice.zip
    DEBUG: Loading template file tutorial-pageflow.zip
    DEBUG: Loading template file tutorial-jbc.zip
    DEBUG: Loading template file tutorial-ejb.zip
    DEBUG: Loading template file portal-project.zip
    DEBUG: Loading template file portal-application.zip
    DEBUG: Loading template file pipeline-application.zip
    DEBUG: Loading template file oag-schemas.zip
    DEBUG: Loading template file netui-webapp.zip
    DEBUG: Loading template file liquiddata-project.zip
    DEBUG: Loading template file liquiddata-application.zip
    DEBUG: Loading template file ejb-template.zip
    DEBUG: Loading template file default-workshop.zip
    DEBUG: Loading template file datasync-template.zip
    DEBUG: Loading template file crystalreports.zip
    DEBUG: Loading template file commerce-project.zip
    DEBUG: Loading template file commerce-application.zip
    DEBUG: URI is null. Delete Version will not show up in the menu
    DEBUG: URI is null. Delete Version will not show up in the menu
    DEBUG: GCThread: performing gc while idle

  • How to configure params for buffer pool for named data cache?

    When creating a named data cache on ASE 12.5, a 2K I/O buffer pool is set up by default with Configured size = 0 and wash size = 60M.
    1. Can the 2K pool be changed to 8K for this buffer pool?
    2. If I add another 16K buffer pool, should the Affected Pool be changed to the right pool?
    3. How do I decide the page size, configured size and wash size for a buffer pool? Are they part of the total memory size allocated for this cache?

    > 1. Can the 2K pool be changed to 8K for this buffer pool?
    You should be able to create an 8K i/o pool, then drop the 2k pool by setting its size to 0.
    > 2. If I add another 16K buffer pool, should the Affected Pool be changed to the right pool?
    If you don't specify the Affected Pool (when calling sp_poolconfig), the procedure takes the space from the pool with the smallest I/O size. So if you had an 8K pool and had dropped the 2K pool, the space for the new 16K pool would come from the 8K pool.
    > 3. How do I decide the page size, configured size and wash size for a buffer pool? Are they part of the total memory size allocated for this cache?
    The wash is included in the pool. I don't think it usually needs to be adjusted.
    Which page size pools to have will depend on how the cache is used.  Tables with a clustered index that have a lot of range queries will benefit from larger page size pools, as will text/image/java.   Syslogs is said to do well on a 4k pool.
    -bret
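    A sketch of the corresponding sp_poolconfig calls (cache name and sizes are placeholders; double-check the exact syntax against your ASE version's documentation):
    -- create an 8K I/O pool of 100M in the named cache; without a fourth
    -- argument the memory is taken from the pool with the smallest I/O size
    exec sp_poolconfig "mycache", "100M", "8K"
    -- add a 16K pool and take its memory explicitly from the 8K pool
    exec sp_poolconfig "mycache", "50M", "16K", "8K"
    -- adjust the wash size of the 8K pool only if the default proves too small
    exec sp_poolconfig "mycache", "8K", "wash=10M"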

  • Moving data in internal tables

    Hi,
    I have defined an internal table (say itab_master), and one of its fields is a table name which will store the name of some other internal table (say itab2, itab3, etc.).
    Now I want to move data into the tables (itab2, itab3, etc.) selected as per the rows of itab_master.
    Kindly suggest a way of doing it.
    Thanks

    Hi there,
    First you declare:
    FIELD-SYMBOLS <fs> TYPE ANY TABLE.
    ASSIGN itab_temp TO <fs>.
    Note: if itab2, itab3, ... all have the same structure there is no problem;
    else you have to use FIELD-GROUPS.
    Doc...
    Defining an Extract
    To define an extract, you must first declare the individual records and then define their structure.
    Declaring Extract Records as Field Groups
    An extract dataset consists of a sequence of records. These records may have different structures. All records with the same structure form a record type. You must define each record type of an extract dataset as a field group, using the FIELD-GROUPS statement.
    FIELD-GROUPS <fg>.
    This statement defines a field group <fg>. A field group combines several fields under one name. For clarity, you should declare your field groups at the end of the declaration part of your program.
    A field group does not reserve storage space for the fields, but contains pointers to existing fields. When filling the extract dataset with records, these pointers determine the contents of the stored records.
    You can also define a special field group called HEADER:
    FIELD-GROUPS HEADER.
    This group is automatically placed before any other field groups when you fill the extract. This means that a record of a field group <fg> always contains the fields of the field group HEADER. When sorting the extract dataset, the system uses these fields as the default sort key.
    Defining the Structure of a Field Group
    To define the structure of a record, use the following statement to add the required fields to a field group:
    INSERT <f1>... <fn> INTO <fg>.
    This statement defines the fields of field group <fg>. Before you can assign fields to a field group, you must define the field group <fg> using the FIELD-GROUPS statement. The fields in the field group must be global data objects in the ABAP program. You cannot assign a local data object defined in a procedure to a field group.
    The INSERT statement, just as the FIELD-GROUPS statement, neither reserves storage space nor transfers values. You use the INSERT statement to create pointers to the fields <fi> in the field group <fg>, thus defining the structures of the extract records.
    When you run the program, you can assign fields to a field group up to the point when you use this field group for the first time to fill an extract record. From this point on, the structure of the record is fixed and may no longer be changed. In short, as long as you have not used a field group yet, you can still extend it dynamically.
    The special field group HEADER is part of every extract record. Consequently, you may not change HEADER once you have filled the first extract record.
    A field may occur in several field groups; however, this means unnecessary data redundancy within the extract dataset. You do not need to define the structure of a field group explicitly with INSERT. If the field group HEADER is defined, an undefined field group consists implicitly of the fields in HEADER, otherwise, it is empty.
    NODES: SPFLI, SFLIGHT.
    FIELD-GROUPS: HEADER, FLIGHT_INFO, FLIGHT_DATE.
    INSERT: SPFLI-CARRID SPFLI-CONNID SFLIGHT-FLDATE
    INTO HEADER,
    SPFLI-CITYFROM SPFLI-CITYTO
    INTO FLIGHT_INFO.
    The program is linked to the logical database F1S. The NODES statement declares the corresponding interface work areas.
    There are three field groups. The INSERT statement assigns fields to two of the field groups.
    Filling an Extract with Data
    Once you have declared the possible record types as field groups and defined their structure, you can fill the extract dataset using the following statements:
    EXTRACT <fg>.
    When the first EXTRACT statement occurs in a program, the system creates the extract dataset and adds the first extract record to it. In each subsequent EXTRACT statement, the new extract record is added to the dataset.
    Each extract record contains exactly those fields that are contained in the field group <fg>, plus the fields of the field group HEADER (if one exists). The fields from HEADER occur as a sort key at the beginning of the record. If you do not explicitly specify a field group <fg>, the EXTRACT statement is a shortened form of the statement EXTRACT HEADER.
    When you extract the data, the record is filled with the current values of the corresponding fields.
    As soon as the system has processed the first EXTRACT statement for a field group <fg>, the structure of the corresponding extract record in the extract dataset is fixed. You can no longer insert new fields into the field groups <fg> and HEADER. If you try to modify one of the field groups afterwards and use it in another EXTRACT statement, a runtime error occurs.
    By processing EXTRACT statements several times using different field groups, you fill the extract dataset with records of different length and structure. Since you can modify field groups dynamically up to their first usage in an EXTRACT statement, extract datasets provide the advantage that you need not determine the structure at the beginning of the program.
    Assume the following program is linked to the logical database F1S.
    REPORT demo_extract_extract.
    NODES: spfli, sflight.
    FIELD-GROUPS: header, flight_info, flight_date.
    INSERT: spfli-carrid spfli-connid sflight-fldate
    INTO header,
    spfli-cityfrom spfli-cityto
    INTO flight_info.
    START-OF-SELECTION.
    GET spfli.
    EXTRACT flight_info.
    GET sflight.
    EXTRACT flight_date.
    There are three field groups. The INSERT statement assigns fields to two of the field groups. During the GET events, the system fills the extract dataset with two different record types. The records of the field group FLIGHT_INFO consist of five fields: SPFLI-CARRID, SPFLI-CONNID, SFLIGHT-FLDATE, SPFLI-CITYFROM, and SPFLI-CITYTO. The first three fields belong to the prefixed field group HEADER. The records of the field group FLIGHT_DATE consist only of the three fields of field group HEADER. The following figure shows the structure of the extract dataset:
    Reading an Extract
    Like internal tables, you can read the data in an extract dataset using a loop.
    LOOP.
    [AT FIRST | AT <fgi> WITH <fgj> | AT LAST.
    ENDAT.]
    ENDLOOP.
    When the LOOP statement occurs, the system stops creating the extract dataset, and starts a loop through the entries in the dataset. One record from the extract dataset is read in each loop pass. The values of the extracted fields are placed in the corresponding output fields within the loop. You can use several loops one after the other, but they cannot be nested. It is also no longer possible to use further EXTRACT statements within or after the loop. In both cases, a runtime error occurs.
    In contrast to internal tables, extract datasets do not require a special work area or field symbol as an interface. Instead, you can process each record of the dataset within the loop using its original field names.
    Loop control
    If you want to execute some statements for certain records of the dataset only, use the control statements AT and ENDAT.
    The system processes the statement blocks between the control statements for the different options of AT as follows:
    AT FIRST
    The system executes the statement block once for the first record of the dataset.
    AT <fgi> WITH <fgj>
    The system processes the statement block if the record type of the currently read extract record was defined using the field group <fgi>. When using the WITH <fgj> option, the currently read record of field group <fgi> must be immediately followed in the extract dataset by a record of field group <fgj>.
    AT LAST
    The system executes the statement block once for the last record of the dataset.
    You can also use the AT and ENDAT statements for control level processing.
    Assume the following program is linked to the logical database F1S.
    REPORT DEMO.
    NODES: SPFLI, SFLIGHT.
    FIELD-GROUPS: HEADER, FLIGHT_INFO, FLIGHT_DATE.
    INSERT: SPFLI-CARRID SPFLI-CONNID SFLIGHT-FLDATE
    INTO HEADER,
    SPFLI-CITYFROM SPFLI-CITYTO
    INTO FLIGHT_INFO.
    START-OF-SELECTION.
    GET SPFLI.
    EXTRACT FLIGHT_INFO.
    GET SFLIGHT.
    EXTRACT FLIGHT_DATE.
    END-OF-SELECTION.
    LOOP.
    AT FIRST.
    WRITE / 'Start of LOOP'.
    ULINE.
    ENDAT.
    AT FLIGHT_INFO WITH FLIGHT_DATE.
    WRITE: / 'Info:',
    SPFLI-CARRID, SPFLI-CONNID, SFLIGHT-FLDATE,
    SPFLI-CITYFROM, SPFLI-CITYTO.
    ENDAT.
    AT FLIGHT_DATE.
    WRITE: / 'Date:',
    SPFLI-CARRID, SPFLI-CONNID, SFLIGHT-FLDATE.
    ENDAT.
    AT LAST.
    ULINE.
    WRITE / 'End of LOOP'.
    ENDAT.
    ENDLOOP.
    The extract dataset is created and filled in the same way as shown in the example for Filling an Extract with Data. The data retrieval ends before the END-OF-SELECTION event, in which the dataset is read once using a loop.
    The control statements AT FIRST and AT LAST instruct the system to write one line and one underscore line in the list, once at the beginning of the loop and once at the end.
    The control statement AT <fgi> tells the system to output the fields corresponding to each of the two record types. The WITH FLIGHT_DATE option means that the system only displays the records of field group FLIGHT_INFO if at least one record of field group FLIGHT_DATE follows; that is, if the logical database passed at least one date for a flight.
    The beginning of the output list looks like this:
    The contents of the field SFLIGHT-FLDATE in the HEADER part of record type FLIGHT_INFO are displayed as pound signs (#). This is because the logical database fills all of the fields at that hierarchy level with the value HEX 00 when it finishes processing that level. This feature is important for sorting and for processing control levels in extract datasets.
    rewards would be appreciated.

  • Mapviewer - Spatial Data Cache

    I am trying to clear the spatial data cache in two different versions (mv10 & mv11ea) of Mapviewer without success. To confirm that the cache is being cleared I have enabled the report_stats as below in the mapViewerConfig.xml for both versions
        <spatial_data_cache   max_cache_size="64"
                              report_stats="true"
        />
    After a few requests I note the following in the log:
    Cache group PARCEL_VIEW_SHAPE_82473_PDT_GEOM statistics:
       capacity: 262144
           size: 145988
    load factor: 0.95
        # of chains: 88837
    max chain depth: 8
    avg chain depth: 1.6433242905546113
    empty bucket %: 0.6611137390136719
    total mem size : 28169KB
    Knowing (assuming) that this cache group is populated by a single theme that references the PARCEL_VIEW table, I then issue the following via the Admin section of the Mapviewer control.
    <?xml version="1.0" standalone="yes"?>
    <non_map_request>
      <clear_theme_cache data_source="vicmap" theme="THEME_PARCEL" />
    </non_map_request>
    Then, after waiting patiently for the next set of statistics to appear in the log (BTW, is there a way to change the frequency from 10 minutes to something more regular?), I notice that the information for the cache group has not changed.
    Am I following the correct steps here? If I wish to clear the spatial cache, should I be monitoring these statistics?
    All advice most welcome.
    Ross.

    Hi Ross,
    we'll review the statistics reported and check why it is not changing. The frequency is currently hard-coded (5 minutes), and there is no parameter on the configuration file to change that. We may consider this in the future.
    Joao

  • Is it possible to cache an index and not cache the table?

    Can someone point me to the syntax? I have been messing with it and can't get the cache command on the indexes to work. I don't want to cache the table, just the index blocks.

    I have to do joins between tables with denormalized data, joining on non-unique columns. The indexes I am using have high clustering factors, and I have no way of solving this right now.
    In performance tests the queries use a lot of physical I/O and take a long time to return. If I run them a second time, they still use a lot of logical I/O, but return quickly. I have enough CPU to handle the logical I/O, and I need to speed up the queries.
    I don't have enough memory to cache the tables' data involved, but I do have enough to cache the indexes. When I run a 10046 trace, virtually all of the work is done in the index searches, so I was hoping to cache the indexes in order to speed up the queries.
    Again, I can't solve the data issues, and I am not concerned about the high logical I/O since there is limited concurrency and I have plenty of CPU.
    I guess my only other option is to find out which table in the join would benefit most from caching and cache that table, since these are big tables and I can really only cache one of them.
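    One option not mentioned in this thread, but commonly used for exactly this split, is a separate KEEP buffer pool: size it for the indexes only, assign just the indexes to it, and leave the table in the default pool. A hedged sketch with placeholder names and sizes (changing db_keep_cache_size may require an spfile or a restart, depending on your setup):
    -- reserve a dedicated pool large enough for the index blocks
    ALTER SYSTEM SET db_keep_cache_size = 2G;
    -- assign the indexes (not the table) to the KEEP pool
    ALTER INDEX big_tab_ix1 STORAGE (BUFFER_POOL KEEP);
    ALTER INDEX big_tab_ix2 STORAGE (BUFFER_POOL KEEP);
    -- the table itself stays in the default buffer pool
    ALTER TABLE big_tab STORAGE (BUFFER_POOL DEFAULT);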

  • Query Data cached? (Virtual Cube)

    Hi folks,
    I have some problems with a query object which gets data out of a Virtual Cube. The Virtual Cube is based on a 3.x InfoSource and gets data out of a table in the ERP system.
    When I call the query from VC, everything works fine and current data is shown. But if I manipulate data in the table which the Virtual Cube points to and send a refresh event to the query object, the manipulated data is not shown; it always returns the data which it fetched on the first call. If I refresh the whole application in the browser (via F5), the manipulated data is shown. I disabled the cache mode in RSRT for this query, but it doesn't work.
    Any chance of getting the current data by just sending a refresh action and calling the query object again, without reloading the whole application? Any ideas?
    Points will be awarded for useful information.

    Hello,
    The reason the manipulated data is not showing up in the query even after a refresh is sent is that the cache for the virtual provider does not get reset as it would for a normal InfoCube.
    So it does not know when to reset its own cache, even when data is manipulated.
    The way we have worked around this is to have a temporary process chain which runs on a frequent basis and executes the function module RSDMD_SET_DTA_TIMESTAMP for the virtual cube in question.
    Thanks
    Dharma.
