Data Cache Settings

Hello,
I am getting a spreadsheet retrieval error very frequently: Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my
databases currently. Can somebody please help me understand if they are right or need some changes:
DataBase A:
Data File Cache setting (KB) 32768
Data cache setting (KB) 100000
Block Size: (B) 143880
Number of existing blocks: 7266
Page file size: 40034304
DataBase B:
Data File Cache setting (KB) 32768
Data cache setting (KB) 300000
Block Size: (B) 91560
Number of existing blocks: 1912190
Page file size: 2147475456
Page file size: 500703056
DataBase C:
Data File Cache setting (KB) 300000
Data cache setting (KB) 37500
Block Size: (B) 23160
Number of existing blocks: 26999863
Page file size: 21 page files x 2 GB = 42 GB
If this is not the issue, please let me know what else might be causing it.
Thanks!
Edited by: user4958421 on Dec 15, 2009 10:43 AM

Hi,
1. For error 1130203, here are the possible problems and solutions, straight from the documentation.
Try any of the following suggestions to fix the problem. Once you fix the problem, check to see if the database is corrupt.
1. Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
2. If you are on a UNIX computer, check the user limit profile.
3. Check the block size of the database. If necessary, reduce the block size.
4. Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
5. Make sure that the Analytic Services computer has enough resources. Consult the Analytic Services Installation Guide for a list of system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Analytic Services needs.
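A rough way to gauge points 3 and 4 against the numbers posted in the question is to work out how many blocks each data cache can actually hold. The sketch below is a back-of-the-envelope Python calculation, not an official DBAG formula; the figures are copied from the question above.

```python
# How many blocks fit in each data cache? Figures are from the question;
# this is only a rough gauge, not an official sizing formula.
KB = 1024

dbs = {
    # name: (data cache KB, block size B, existing blocks)
    "A": (100_000, 143_880, 7_266),
    "B": (300_000, 91_560, 1_912_190),
    "C": (37_500, 23_160, 26_999_863),
}

for name, (cache_kb, block_b, blocks) in dbs.items():
    fits = cache_kb * KB // block_b
    print(f"DB {name}: ~{fits:,} blocks fit in the data cache "
          f"({fits / blocks:.2%} of {blocks:,} existing blocks)")
```

Database C stands out here: a 37,500 KB data cache holds only about 1,658 of its nearly 27 million existing blocks, so its cache settings are worth revisiting first.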
Sandeep Reddy Enti
HCC
http://hyperionconsultancy.com/

Similar Messages

  • How can I transfer the entire Firefox installation/content (directory, "private data" (cache), settings...) to another harddrive unit?

    I want to move all of the Mozilla Firefox browser/program from one Harddrive (for example "C:") to another (for example "D:").
    All of it means all of the program files (directory), all associated data (like in the "\WINDOWS" directory (application data, registry, "\WINDOWS\SYSTEM")), settings, cache, browsing history, download history, saved form information, "private data", add-ons, plugins, cookies, etc.
    Grateful for quick, simple solution.

    Try using [http://mozbackup.jasnapaka.com/ MozBackup], it's a free program that will make backups of various Mozilla programs -- "It allows you to backup and restore bookmarks, mail, contacts, history, extensions, passwords, cache etc. It's an easy way to do Firefox backup, Thunderbird backup ..."
    Run MozBackup to backup your settings, uninstall Firefox, install it in the new location, run MozBackup to restore your settings.

  • Cache Settings

    Hello,
    I am getting a spreadsheet retrieval error very frequently: Essbase Error 1130203. I went through the error interpretation and found that the data cache might be causing it. Here are the details of the cache settings on my
    databases currently. Can somebody please help me understand if they are right or need some changes:
    DataBase A:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 100000
    Block Size: (B) 143880
    Number of existing blocks: 7266
    Page file size: 40034304
    DataBase B:
    Data File Cache setting (KB) 32768
    Data cache setting (KB) 300000
    Block Size: (B) 91560
    Number of existing blocks: 1912190
    Page file size: 2147475456
    Page file size: 500703056
    DataBase C:
    Data File Cache setting (KB) 300000
    Data cache setting (KB) 37500
    Block Size: (B) 23160
    Number of existing blocks: 26999863
    Page file size: 21 page files x 2 GB = 42 GB
    If this is not the issue, please let me know what else might be causing it.
    Thanks!

    You have posted the same question on the essbase forum > Data Cache Settings
    You are more likely to get more responses on the essbase forum to this question.
    From the docs
    Error - 1130203     
    Essbase is unable to allocate memory.
    Possible solutions
    Try any of these suggestions to fix the problem. After you fix the problem, determine whether the database is corrupt (see Checking for Database Corruption).
    Check the physical memory on the server computer. In a Windows environment, 64 MB is the suggested minimum for one database. In a UNIX environment, 128 MB is the suggested minimum for one database. If the error keeps occurring, add more memory to the server computer.
    If you are on a UNIX computer, check the user limit profile (see Checking the User Limit Profile).
    Check the block size of the database. If necessary, reduce the block size.
    Check the data cache and data file cache setting. If necessary, decrease the data cache and data file cache sizes.
    Ensure that the Essbase computer has enough resources. Consult the Oracle Hyperion Enterprise Performance Management System Installation Start Here for system requirements. If a resource-intensive application, such as a relational database, is running on the same computer, the resource-intensive application may be using the resources that Essbase needs.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Cache settings in BI7. Data still not displayed correctly in RSRT

    Good day
    Please assist? I did scan SDN for a similar problem and did check the CACHE settings in RSRT etc.
    My problem is: if I check the data in the DSO, the updated information that was loaded yesterday is displayed. When I run the query, it does not display the last loaded information. I have run the query in RSRT and the updated information does not display there either. I have already changed the cache properties of this query to different settings and run the query from RSRT, with no success.
    I also checked the query design itself; it is not restricted to anything. The user selects 0CALMONTH as the only variable. I have logged out and in again after each cache setting change, with no success in displaying the updated DSO information.
    Can anyone please tell me why the query does not pick up the latest information from the DSO?
    Thanks in advance.
    Cj

    Hi Cj,
    Is the last request marked as Ready for Reporting? This needs to be the case before the latest data comes up in the query.
    Hope this helps!
    Regards,
    Saurabh

  • Cache Settings - Data Load

    Hello All,
    Do we have to set caches while performing a data load?
    Defragmentation - no cache settings needed
    Calculation - set caches to reduce the calculation time (max 2 GB: index cache + data cache)
    Data Load - ???
    Amarnath

    Hi Amarnath,
    There are some configuration settings that can affect data load performance, which are:
    1- DLTHREADSPREPARE - Specifies how many threads Essbase may use during the data load stage that codifies and organizes the data in preparation to being written to blocks in memory.
    2- DLTHREADSWRITE - Specifies how many threads Essbase may use during the data load stage that writes data to the disk. High values may require allocation of additional cache.
    3- DLSINGLETHREADPERSTAGE - Specifies that Essbase use a single thread per stage, ignoring the values in the DLTHREADSPREPARE and DLTHREADSWRITE settings.
    If you set a high value for the 2nd setting, then you need to increase the cache size too.
    Hope it answers your question.
    Regards,
    Atul Kushwaha
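For reference, here is a sketch of how those three settings might look in essbase.cfg. The application/database names and thread counts are purely illustrative (not recommendations); confirm the exact syntax against your version's Technical Reference.

```
; illustrative example only: app "Sample", db "Basic", 4 threads per stage
DLTHREADSPREPARE Sample Basic 4
DLTHREADSWRITE Sample Basic 4
DLSINGLETHREADPERSTAGE Sample Basic FALSE
```

As noted above, if you raise DLTHREADSWRITE you may also need to raise the data cache.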

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I have done some Google and found that we need to add something in Essbase.cfg file like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    1. Set the maximum number of blocks that Analytic Services can allocate to at least 500:
       - If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
       - In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
       - Stop and restart Analytic Server.
    2. Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    3. Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting:
       - Determine the block size.
       - Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the below settings added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlocklow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it out, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    Support doc is saying to change your config file so those settings can be made available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • LR 5.5 - Camera RAW Cache settings reset non-stop

    Lightroom doesn't remember the folder path in the Camera Raw Cache Settings section of the File Handling tab in the Preferences window. I want to set my own Camera Raw Cache folder and its maximum size, but every time Lightroom sets the default path (C:/Users/Name/AppData/Local/Adobe/CameraRaw/Cache/) and a maximum size of 1 GB. When I set all the Camera Raw Cache parameters and click OK, switch between the tabs in the Lightroom Preferences window, or restart Lightroom, and return to the File Handling tab, all the data is reset to default. The same happens when I try to change the Camera Raw Cache parameters from Adobe Bridge and Photoshop.
    Second question:
    When I use the DNG file format with the Embed Fast Load Data option, the "DAT" files don't appear in the Camera Raw Cache folder. Is this connected with the use of the Embed Fast Load Data option? When I use the original NEF files from my Nikon, the "DAT" files do appear.
    [Win 7 64-bit, LR 5.5, Web&Design CS6]

    I have no trouble placing my Camera Raw cache where I want to put it. Perhaps deleting your Preferences file would help with that problem.
    Yes, if you embed fast-load data in your DNGs, LR will use that instead of the CR Cache.
    Hal

  • Properly and accurately calculating application cache settings

    Hello everyone.
    We are running Hyperion Planning 11.1.2.1, and one of the dataforms we have set up is quite heavy (it includes several Dynamic Calc members with a considerable amount of scripting) and it fails to load from time to time, just saying "Cannot open dataform X. Check logs" and such.
    I have tried to increase the cache sizing in the databases of the Essbase application (right-click on each database > Edit > Properties > Caches), as well as the buffer sizes and commit blocks.
    Little by little I have managed to improve performance by modifying the above mentioned caches (it crashes less often), but still I guess it's nuts to infinitely increase caches to make sure it works.
    So, my question is: Is there a way to calculate the accurate cache settings for a given application?
    My current settings are:
    Buffer size: 200 KB
    Short buffer size: 200 KB
    Index cache setting: 30720 KB
    Data file cache setting: 61440 KB
    Data cache setting: 51200 KB
    Commit blocks: 30000
    Do you think these are accurate?
    Thanks a lot,
    G.S.Feliu

    You haven't really provided enough information, to be honest; for example, are you running a 64-bit system?
    But that is rhetorical; as usual, the first port of call is the DBAG. I don't see why a link should be posted, as you must have access to it already if you're administering a production system. It will point out things like the fact that the data file cache setting is only relevant if Direct I/O is being used, and that the index cache should be at least as big as the index file if memory allows.
    Commit blocks....is interesting, personally I have set it to 0 in some projects and seen some improvement but as usual testing is the key.
    However, there is a performance tuning guide out there that you may find very useful:
    https://blogs.oracle.com/pa/entry/epm_11_1_2_epm1
    that focuses a bit more on the infrastructure. It's a bit complicated, though, and I would thoroughly recommend recording a set of benchmarks before applying any of those changes, and applying said changes one by one, or you'll never know what is good and what is not so good.
    Learn from the pain others have already endured ;-)
    Good Luck
    Steve
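Recording benchmarks before and after each change can be as simple as a timing harness. The sketch below is Python; the `benchmark` helper and the CSV layout are made up for illustration, and the lambda stands in for whatever retrieval or calc you are actually measuring.

```python
import csv
import statistics
import time

def benchmark(label, fn, runs=5):
    """Time fn() several times and append best/median timings to a CSV,
    so each tuning change gets a comparable record."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()  # placeholder: call your real retrieval or calc here
        timings.append(time.perf_counter() - start)
    best, median = min(timings), statistics.median(timings)
    with open("benchmarks.csv", "a", newline="") as f:
        csv.writer(f).writerow([label, round(best, 4), round(median, 4)])
    return best, median

# One labelled run per configuration change, e.g.:
benchmark("baseline", lambda: sum(range(100_000)))
```

Run it once per configuration change with a distinct label, then compare rows in the CSV to see which change actually helped.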

  • Dynamic Calc Issue - CalcLockBlock or Data Cache Setting

    We recently started seeing an issue with a Dynamic scenario member in our UAT and DEV environments. When we tried to reference the scenario member in Financial Reports, we get the following error:
    Error executing query: The data form grid is invalid. Verify that all members selected are in Essbase. Check log for details.com.hyperion.planning.HspException
    In SmartView, if I try to reference that scenario member, I get the following:
    The dynamic calc processor cannot allocate more than [10] blocks from the heap. Either the CalcLockBlock setting is too low or the data cache size setting is too low.
    The dynamic calcs worked fine in both environments up until recently, and no changes were made to Essbase, so I am not sure why they stopped working.
    I tried to set the CalcLockBlock settings in the essbase.cfg file, and increased the data cache size. When I increased the CalcLockBlock settings, I would get the same error.
    When I increased the data cache size, it would just sit there and load and load in Financial Reporting and wouldn't show the report. In SmartView, it would give me an error that it had timed out and to try to increase the NetRetry and NetDelay values.

    Thanks for the responses guys.
    NN:
    I tried to double the Index Cache Setting and the Data Cache setting, but it appears when I do that, it crashes my Essbase app. I also tried adding the DYNCALCCACHEMAXSIZE and QRYGOVEXECTIME to essbase.cfg (without the Cache settings since it is crashing), and still no luck.
    John:
    I had already had those values set on my client machine, I tried to set them on the server as well, but no luck.
    The exact error message I get after increasing the cache settings is "Essbase Error (1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the olap.server.netConnectTry and/or olap.server.netDelay values in the essbase.properties. Restart the server and try again."
    From the app's essbase log:
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1023040)
    msg from remote site [[Wed Jun 06 10:07:44 2012]CCM6000SR-HUESS/PropBud/NOIStmt/admin/Error(1042017) Network error: The client or server timed out waiting to receive data using TCP/IP. Check network connections. Increase the NetRetryCount and/or NetDelay values in the ESSBASE.CFG file. Update this file on both client and server. Restart the client and try again.]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Error(1200467)
    Error executing formula for [Resident Days for CCPRD NOI]: status code [1042017] in function [@_XREF]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/Staffing/admin@Native Directory/4340/Warning(1080014)
    Transaction [ 0x10013( 0x4fcf63b8.0xcaa30 ) ] aborted due to status [1042017].
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1013091)
    Received Command [Process local xref/xwrite request] from user [admin@Native Directory]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008108)
    Essbase Internal Logic Error [1060]
    [Wed Jun 06 10:07:44 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008106)
    Exception error log [E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp] is being created...
    [Wed Jun 06 10:07:46 2012]Local/PropBud/NOIStmt/admin@Native Directory/4340/Info(1008107)
    Exception error log completed E:\Oracle\Middleware\user_projects\epmsystem2\diagnostics\logs\essbase\essbase_0\app\PropBud\NOIStmt\log00014.xcp please contact technical support and provide them with this file
    [Wed Jun 06 10:07:46 2012]Local/PropBud///4340/Info(1002089)
    RECEIVED ABNORMAL SHUTDOWN COMMAND - APPLICATION TERMINATING

  • When I remove all website data under Settings > Safari > Advanced > Website Data, it comes back a day later. How do I fix this?

    When I remove all website data under settings>safari>advanced>website data (remove all), the websites reappear when i check a day later.  How do I stop this?

    Hi,
    Wow, you must have been watching me type and hit send, since it took about 1 minute for the reply. Thanks.
    Yes, I understand the purpose; however, I still feel as though I should be able to decide for myself whether I want to use them as an "option" or turn them off entirely. It's sort of akin to the "trash bin" that's not really trash, though in that instance at least Microsoft and Apple listened to consumers (eventually) and allowed users to make the choice. Hey, there's a thought: how about Apple points the delete to the trash bin and loses the annoying popup, so that we can restore cookies? Sorry, I'm being sarcastic, but in my view, if I delete a cookie or a cached web icon, it's not exactly the same thing as "saving us" from deleting a spreadsheet or system files. Oh, but wait, in that instance I CAN delete them without having to endure the annoying "are you sure" pop-up. It looks like someone missed the bus on this one. Thanks again, I'll check the web for the terminal command and try to edit the tag myself.

  • Data File Cache / Data Cache

    I have a few questions regarding the data file cache and data cache, based on the size of the application.
    All the settings are changed by using a maxl script.
    1. Data File cache-50MB,Data Cache - 100MB
    I am using buffered I/O; will memory then be allocated to the data file cache?
    2. It is given in the DBAG that the data cache & index cache should be set as small as
    possible for both buffered & Direct I/O. The size of one of my applications is
    around 11 GB. Data file: 11 GB, index file: 450 MB.
    I have set my index cache to 450MB and data cache to 700MB.
    Is it OK? If not: a. What should my data cache size be?
    b. How do I calculate the optimal Data cache and index cache?
    3. The memory size of our AIX server is 6 GB. If I use Direct I/O, can the sum of
    all caches be 4GB?
    4. If I use buffered I/O, according to (2), what should my cache sizes be?
    Thanks
    Amarnath

    In the DBAG it states that the data file cache is not used with buffered I/O, so the answer to 1) should be NO.
    For 2), there is a hint in the DBAG that you should check the hit ratio of the caches to verify sizing; the only calculatory advice given is for the calculator cache :-( So for 2b), look at the hit ratio: if it stays around 1.0, try to decrease the cache until the ratio drops slightly. Inspect the ratios from time to time.
    3) Don't know; on 64-bit it should be no problem. But why would you do this anyway?
    An example from our settings: .pag total ~20 GB, .ind ~2 GB.
    The outline has 11 dimensions with a block size of ~340 KB, largest dense dimension ~400 members, largest sparse ~4000 members, existing blocks ~2.7 million.
    The data cache is set to 256 MB, the index cache to 64 MB; our hit ratios are 1.0 for the index cache and 0.77 for the data cache. So our data cache could be larger, but the performance of retrievals is around 3.0 seconds, which is fine for our users.
    4) Check your hit ratios and try to increase or decrease the caches in small steps (first I'd do the index cache, then if that's fine I'd tune the data cache).
    hope it helped a bit..
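The "nudge the cache and watch the hit ratio" loop described above can be sketched as a tiny helper. This is Python pseudologic under assumed thresholds: the 0.99 and 0.95 cut-offs and the 25% step are my illustrative numbers, not DBAG values.

```python
def next_cache_size_kb(current_kb, hit_ratio, step=0.25,
                       target=0.95, floor_kb=1024):
    """Suggest the next cache size to try: shrink while the hit ratio
    stays near 1.0, grow back once it drops below the target.
    Thresholds and step size are illustrative only."""
    if hit_ratio >= 0.99:    # ratio pinned at ~1.0: cache may be oversized
        return max(floor_kb, int(current_kb * (1 - step)))
    if hit_ratio < target:   # ratio dropped too far: grow the cache again
        return int(current_kb * (1 + step))
    return current_kb        # ratio in the sweet spot: leave it alone

# e.g. a 100,000 KB cache with a ratio of 1.0 -> try 75,000 KB next
```

Re-check the ratios after each step, as the reply suggests, rather than applying several changes at once.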

  • BW cache settings and WebI

    Hello,
    we're trying to tune the BW cache for WebI queries, we'd like to change the default cache settings for a number of queries, including ones used for BO universes.
    I'd like to know if this changes will be picked and used by WebI or if it is not sensitive to such changes (Cache Mode and Persistence Mode).
    Thanks in advance for your help.

    Hello Pablo,
    I haven't tried it yet, but the OLAP cache should be used by WebI when it accesses the queries. You can verify it does in transaction RSRCACHE by looking at the last access of a cached query. You could also look at the BI stats (if you have installed them and turned them on, of course).
    Regarding the cache settings, I would personally recommend using the persistent cache (across App Servers if you have more than one app server) into a Cluster table (or Transparent table if the data set is quite large). Can find more details here: http://help.sap.com/saphelp_nw70ehp1/helpdata/en/d9/31363dc992752de10000000a114084/frameset.htm
    You could also look into MDX cache instead of OLAP cache, but I haven't played with it yet so I'm not sure it would help with WebI. Anyone else did?
    Hope it helps...

  • Reg: Cache settings

    Hi,
    We have a database of size 15KB only, but it has data from 1999 to 2008. The main issue is that retrieval performance is too low. To increase the retrieval performance of the database, can we increase the cache sizes below, or is there any other option?
    Our database has the below settings:
    Mode:Buffered I/O
    Existing blocks: 1900000
    Index cache: 30000 KB, Data cache: 200000 KB
    Please suggest on this.

    Hi,
    Below is some tuning info which you will hopefully find of interest:
    1 - The index cache setting should equal the size of your index files (ess000001.ind). Note that you may have several essxxxx.ind files in your environment, since a new file is created for every 2GB.
    In the Database Properties screen, select the Storage tab and check the size of each .ind file. Remove the last 3 digits to obtain the value to enter in the Index Cache field (convert from KB to MB)
    2 - The Data File Cache is only used with Direct I/O (No need to set it when using Buffered I/O). It should equal the data file size (ess000001.pag). Remember 1 .pag file is created for every 2GB.
    3 - The data cache should be set to 0.125 times the values of the data file size (ess000001.pag). Again you may have several essxxxxx.pag files in your environment, 1 for every 2 GB.
    You then need to restart the databases for the changes to take effect.
    You may need to look at your isolation mode to improve performance maybe?
    Committed Isolation: the calculation is stored to a temp directory before being written to the server. => Better data consistency
    Uncommitted Isolation: Essbase releases a block after it is updated but does not commit blocks until the transaction is completed or until a specified limit is reached => Better performance and data concurrency
    Hope this helps.
    Seb
    www.taysols.com.au
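Seb's three rules of thumb can be collected into a small helper. This is a Python sketch; the function name is invented, and the factors are the guidelines quoted in this reply (index cache = total .ind size, data cache = 0.125 x total .pag size, data file cache = total .pag size under Direct I/O), so confirm them against your DBAG version.

```python
def cache_settings_kb(ind_bytes, pag_bytes, direct_io=False):
    """Suggested cache settings (KB) per the rules of thumb above."""
    settings = {
        # index cache should equal the combined size of the .ind files
        "index_cache_kb": ind_bytes // 1024,
        # data cache ~= 0.125 x the combined size of the .pag files
        "data_cache_kb": int(pag_bytes * 0.125) // 1024,
    }
    if direct_io:
        # data file cache is only used with Direct I/O
        settings["data_file_cache_kb"] = pag_bytes // 1024
    return settings
```

For example, 2 GB of .pag files and 200 MB of .ind files under buffered I/O would suggest an index cache of 204,800 KB and a data cache of 262,144 KB.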

  • Essbase BSO data cache setting

    Can you guys post your BSO cube's compressed data size (.pag files) and data cache setting? I am trying to get a better understanding of these two areas in terms of optimization. I am also curious to know the maximum data cache setting ever used on a BSO cube... (1 GB is what I know as of now.)
    I think the settings depend on a million things (available RAM on the server, performance checks, etc.), but just a quick check.
    Thanks,
    KK

    Index and data caches work differently, and it is probably good to talk about that. When an Essbase database starts, it will grab all of the memory you have defined for the index cache, and nothing else can get at it. The data cache is grabbed as it is needed until it reaches the maximum. It was truly problematic in the old days, when we had much less memory to work with. If you started too many applications, you took all available memory right away. Man, I feel old talking about it :)

  • Need input on data cache

    Hi All,
    This is the situation..
    Data cache = 3mb (default)
    Data block size = 20kb
    so the number of blocks a memory segment (data cache) can hold is about 157.
    What will happen if a user retrieves 300 blocks in one retrieval?
    I know swapping will take place for multiple retrievals.
    I know that the agg cache in ASO will grow until the OS says no to it.
    Will Essbase try to increase the data cache? If so, how does it work?
    My understanding is that Essbase tries to increase the data cache and may touch the wrong segment in memory, creating a segmentation violation and a crash!!
    As per modern OS virtual memory management, using pages and segments... I think this is what takes place!!
    Please give your inputs!!
    Thanks,
    Jeeth
    Edited by: Jithendra on Mar 28, 2011 7:12 PM

    Hi,
    I just did an initial analysis...
    It is always 3 MB (never increased it)!!
    The XCP log says it stopped while executing the MaxL, like (MaxL pending),
    and it's a segmentation violation!!
    From my understanding, a segmentation violation is something to do with kernel memory allocation, virtual memory, and physical memory.
    I cannot try the same on the prod server (please don't ask why the settings are default, etc.; I'm new to the system).
    I just provided an initial analysis saying that at this point in time, the most visible suspect to me is the data cache with buffered I/O.
    The segmentation violation can also happen at the level of total available RAM,
    but in our case we are using only 1/3rd of the total available RAM
    (memory in all applications = 1/3rd of available RAM),
    and it's a MaxL operation...
    By seeing all this, I suspected the application cache (data cache).
    I posted only to gather a little more info and to have a debate to get a clear understanding.
    Thanks for your support and input...
    Please advice
    Thanks,
    Jeeth
    Edited by: Jithendra on Mar 29, 2011 1:37 AM
