mpv cache size issue.

I update any VCS packages I use about once a month. After my June update mpv (mpv-git 0.38211.af25e0a-1) stopped playing radio streams - or so I thought.
My radio-playing script broke, probably because of https://github.com/mpv-player/mpv/commi … c11e258671. I finally figured out the cache filling was the culprit, when the stream started playing a minute after I launched it. I fixed it by disabling the cache - the file is not seekable anyway. Setting the cache to e.g. 64 KiB instead of the now-default 25000 KiB works too.
If you're using a current build of mpv-git, can you please try
mpv --no-config --cache=no http://xstream1.somafm.com:8880
and
mpv --no-config http://xstream1.somafm.com:8880
and see (hear) if there's any difference? You can try some other radio streams too. SomaFM's Underground Eighties also has another URL they advertise as firewall-friendly: http://ice.somafm.com/u80s (in case you have problems with http://voxsc1.somafm.com:8880 ). Both streams are 128k mp3.
mpv 0.3.10-1 from the repos sets the cache to 320 KiB, but it's from before the change, as its man page still mentions --audiofile-cache.
I've never bothered to read the mpv (or mplayer, for that matter) man page. I wasn't even reading the console output, as it was passed through awk (by my radio-playing script). Disabling the cache seems like the right solution, but I'd like to hear some other opinions and ideas before I put this in the wiki.
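If you'd rather not turn the cache off globally, a profile keeps the workaround scoped to radio. A minimal mpv.conf sketch - the profile name is mine, so verify the option names against your build's man page:

    [radio]
    profile-desc="unseekable radio streams: skip the cache fill wait"
    cache=no

Then launch with: mpv --profile=radio <stream url>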
This thread will also make the issue a bit more visible for anyone who was left scratching their head after an mpv update, like I was.

I have not specified SGA_MAX_SIZE or SGA_TARGET in the pfile.
But show parameter gives the values below:
SQL> sho parameter SGA_MAX_SIZE
NAME          TYPE         VALUE
------------  -----------  -----------
sga_max_size  big integer  10743694360
SQL> sho parameter SGA_TARGET
SQL>
Regards
Mayil

Similar Messages

  • Callable Statement Cache Size

    Hi all,
    while using some dynamic stored procedures I run into the following error:
    [BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
    I'm using WL8.1 and Sql Server 2000.
    The stored procedure contains two different queries where the table name is a
    parameter of the stored procedure.
    The first time it works great; after that I always get this error.
    Reading the BEA docs I found:
    There may be other issues related to caching prepared statements that are not
    listed here. If you see errors in your system related to prepared statements,
    you should set the prepared statement cache size to 0, which turns off prepared
    statement caching, to test if the problem is caused by caching prepared statements.
    If I set the prepared statement cache size to 0 everything works great, but that does
    not seem like the best way.
    Should we expect BEA to solve this problem?
    Or is there some other solution,
    such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
    dynamically?
    thanks in advance
    Leonardo

    Caching works well for DML, and that's what it is supposed to do. But it looks
    like you are doing DDL, which means your tables might be getting
    created/dropped/altered, which effectively invalidates the cache. So you
    should try turning the cache off.
    "leonardo" <[email protected]> wrote in message
    news:40b1bb75$1@mktnews1...
    >
    >
    Hi all,
    while using some dinamyc store procedures I get in the following error:
    [BEA][SQLServer JDBC Driver]Value can not be converted to requested type.
    I'm using WL8.1 and Sql Server 2000.
    Store procedure contains two different queries where table name is a storeprocedure's
    parameter.
    The first time it works great, after that I always have this error:
    Reading bea doc's I found
    There may be other issues related to caching prepared statements that arenot
    listed here. If you see errors in your system related to preparedstatements,
    you should set the prepared statement cache size to 0, which turns offprepared
    statement caching, to test if the problem is caused by caching preparedstatements.
    If I set prepared statement cache size to 0 everything works great butthat does
    not seem the better way.
    Should we expect Bea to solve this problem?
    Or whatever else solution?
    such as using JDBCConnectionPoolMBean.setPreparedStatementCacheSize()
    dynamically ?
    thks in advance
    Leonardo
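    To make the DDL-invalidation point above concrete, here is a rough standalone
    sketch of what the pool does under the covers. Everything in it is
    illustrative - the URL, credentials and procedure name are placeholders, not
    taken from this thread:

        import java.sql.*;

        public class StaleStatementDemo {
            // Hypothetical proc that runs DDL against the table named by its argument.
            private static final String CALL = "{call my_dynamic_proc(?)}";

            public static void main(String[] args) throws SQLException {
                Connection con = DriverManager.getConnection(
                        "jdbc:bea:sqlserver://dbhost:1433", "user", "password");

                // First execution works; a pool with caching on would now
                // keep this handle and return it on the next prepareCall.
                CallableStatement cs = con.prepareCall(CALL);
                cs.setString(1, "some_table");
                cs.execute();
                cs.close();

                // The proc's DDL changed the schema the cached plan was compiled
                // against, so re-using a *cached* handle would fail with
                // "Value can not be converted to requested type".
                // With the cache size at 0, this prepareCall compiles fresh
                // against the current schema, which is why it succeeds.
                cs = con.prepareCall(CALL);
                cs.setString(1, "some_table");
                cs.execute();
                cs.close();
                con.close();
            }
        }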

  • (statement cache size = 0) == clear statement cache ?

    Hi
    I ran this test with WLS 8.1. I set the cache size to 5, and I call a servlet
    which invokes a stored procedure to get the statement cached. I then recompile
    the proc, set the statement cache size to 0 and re-execute the servlet.
    The result is:
    java.sql.SQLException: ORA-04068: existing state of packages has been discarded
    ORA-04061: existing state of package "CCDB_APPS.MSSG_PROCS" has been invalidated
    ORA-04065: not executed, altered or dropped package "CCDB_APPS.MSSG_PROCS"
    ORA-06508: PL/SQL: could not find program unit being called
    ORA-06512: at line 1
    which seems to suggest that even though the cache size has been set to 0, previously cached
    statements are not cleared.
    Rgs
    Erik

    Galen Boyer wrote:
    On Fri, 05 Dec 2003, [email protected] wrote:
    Galen Boyer wrote:
    On 14 Nov 2003, [email protected] wrote:
    [Erik's original post, quoted in full above, ending "... which seems to
    suggest that even though the cache size has been set to 0, previously
    cached statements are not cleared."]
    Galen Boyer:
    This is actually an Oracle message. Do the following test. Open two
    sqlplus sessions. In one, execute the package. Then, in the other, drop
    and recreate that package. Then, go to the previous window and execute
    that same package. You will get that error. Now, in that same sqlplus
    session, execute that same line one more time and it goes through. In
    short, in your above test, execute your servlet twice and I bet on the
    second execution you have no issue.
    Joe:
    Hi. We did some testing offline, and verified that even a standalone
    java program: 1 - making and executing a prepared statement (calling
    the procedure), 2 - waiting while the procedure gets recompiled,
    3 - re-executing the prepared statement gets the exception, BUT ALSO,
    4 - closing the statement after the failure, and making a new identical
    statement, and executing it will also get the exception! Joe
    Galen Boyer:
    I just had the chance to test this within weblogic and not just
    sqlplus.
    Joe:
    Note, I wasn't using SQL-PLUS, I wrote a standalone program using
    Oracle's driver...
    Galen Boyer:
    MY SCENARIO: I had one connection only in my pool. I executed a
    package. Then, went into the database and recompiled that package.
    Next execution from the app found this error. I then subsequently
    executed the same package from the app and it was successful.
    Joe:
    And this was with the cache turned off, correct?
    Galen Boyer:
    What the application needs to do is catch that error and, within the
    same connection, resubmit the execution request. All connections
    within the pool will get invalidated for that package's execution.
    Joe:
    Have you tried this? Did you try to re-use the statement you had,
    or did you make a new one?
    Galen Boyer:
    Maybe Weblogic could understand this and behave this way for
    Oracle connections?
    Joe:
    It's not likely that we will be intercepting all exceptions coming
    from a DBMS driver to find out whether it's a particular failure, and
    then know that we can/must clear the statement cache. Note also that
    even if we did, as I described, the test program I ran did try to make
    a new statement to replace the one that failed, and the new statement
    also failed. In your case, you don't even have a cache. Would you
    verify in your code, what sort of inline retry works for you?
    Joe
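    For what it's worth, the inline retry Joe asks about would look roughly
    like this in application code. A sketch only: the procedure name inside
    the call is hypothetical, and per Joe's standalone test the retry may
    still fail, so treat it as something to verify in your own environment:

        import java.sql.*;

        public class InlineRetry {
            // Retry once on ORA-04068 (existing state of packages discarded).
            static void callWithRetry(Connection con) throws SQLException {
                for (int attempt = 0; ; attempt++) {
                    // Fresh statement each attempt; a cached one would be stale.
                    CallableStatement cs = con.prepareCall(
                            "{call CCDB_APPS.MSSG_PROCS.do_work}"); // hypothetical proc name
                    try {
                        cs.execute();
                        return; // success
                    } catch (SQLException e) {
                        if (e.getErrorCode() != 4068 || attempt > 0) {
                            throw e; // different error, or already retried once
                        }
                        // ORA-04068 discards package state; resubmitting on the
                        // same connection is what worked in Galen's test.
                    } finally {
                        cs.close();
                    }
                }
            }
        }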

  • Increase the size of the cache using the cache.size= number of pages ?

    Hi All,
    I am getting this error when I do load testing.
    I have a connection pool for a Sybase database that I am using in my JPD. I am using the WebLogic Database control to call the Sybase stored procedure.
    I got the following exception when I was doing load testing with 30 concurrent users.
    Any idea why this exception is coming?
    thanks in advance
    Hitesh
    javax.ejb.EJBException: [WLI-Core:484047]Tracking MDB failed to acquire resources.
    java.sql.SQLException: Cache Full. Current size is 2069 pages. Increase the size of the cache using the cache.size=<number of pages>
         at com.pointbase.net.netJDBCPrimitives.handleResponse(Unknown Source)
         at com.pointbase.net.netJDBCPrimitives.handleJDBCObjectResponse(Unknown Source)
         at com.pointbase.net.netJDBCConnection.prepareStatement(Unknown Source)
         at weblogic.jdbc.common.internal.ConnectionEnv.makeStatement(ConnectionEnv.java:1133)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:917)
         at weblogic.jdbc.common.internal.ConnectionEnv.getCachedStatement(ConnectionEnv.java:905)
         at weblogic.jdbc.wrapper.Connection.prepareStatement(Connection.java:350)
         at weblogic.jdbc.wrapper.JTSConnection.prepareStatement(JTSConnection.java:479)
         at com.bea.wli.management.tracking.TrackingMDB.getResources(TrackingMDB.java:86)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:141)
         at com.bea.wli.management.tracking.TrackingMDB.onMessage(TrackingMDB.java:115)
         at weblogic.ejb20.internal.MDListener.execute(MDListener.java:370)
         at weblogic.ejb20.internal.MDListener.onMessage(MDListener.java:262)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:2678)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:2598)
         at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:219)
         at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:178)

    hitesh Chauhan wrote:
    > [original post quoted above]
    Hi. Please note that the stack trace and exception are coming from the
    PointBase DBMS, nothing to do with Sybase. It seems to be an issue
    with a configurable limit for PointBase that you are exceeding.
    Please read the PointBase configuration documents, and/or configure
    your MDBs to use Sybase.
    Joe
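    The knob the message itself names is a PointBase server property, so the
    fix (if you must keep the WLI tracking tables on PointBase) is along these
    lines, e.g. doubling the current 2069 pages. The property name comes
    straight from the error text; the pointbase.ini file name and its location
    are an assumption on my part - check the PointBase docs for your release:

        cache.size=4096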

  • Determining buffer cache size

    Hello,
    I'd like to get the type of information in version 8 that I can get in version 9 through v$db_cache_advice in order to determine the size that the buffer cache should be. I've found sites that say you can set db_block_lru_extended_statistics to populate v$recent_bucket, but they say there is a performance hit. Can anyone tell me qualitatively how much of a performance hit this causes (obviously it would only be run this way for a short period of time), and whether or not this is really the best/right way to do this?
    Thanks.

    Actually ours is a bank database.
    Our database size is 400 GB.
    Last month they got an ORA-00604 error,
    so the production database hung for 15 minutes; the issue resolved itself after 15 minutes.
    At that time the complete buffer cache was flushed out and all Oracle processes were terminated.
    Because of that they increased the buffer cache size.
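    As an aside for anyone on 9i reading this thread: the advisory the original
    poster mentions can be queried directly once DB_CACHE_ADVICE is ON. A
    minimal JDBC sketch - connection details are placeholders; the view and
    columns are as documented for v$db_cache_advice:

        import java.sql.*;

        public class CacheAdviceQuery {
            public static void main(String[] args) throws SQLException {
                try (Connection con = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "system", "secret");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT size_for_estimate, size_factor, estd_physical_reads"
                             + " FROM v$db_cache_advice"
                             + " WHERE name = 'DEFAULT' AND advice_status = 'ON'"
                             + " ORDER BY size_for_estimate")) {
                    while (rs.next()) {
                        // One row per candidate cache size (in MB), with the
                        // estimated physical reads at that size.
                        System.out.printf("%6d MB  factor %5.2f  est. reads %d%n",
                                rs.getLong(1), rs.getDouble(2), rs.getLong(3));
                    }
                }
            }
        }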

  • Oracle buffer cache size

    I need to calculate the buffer cache size for a get operation.
    SELECT o.object_name, h.status, count(*) number_of_blockes
    FROM V$BH h, DBA_OBJECTS o WHERE h.objd=o.data_object_id
    AND o.owner NOT IN('SYS','SYSTEM','SYSMAN')
    AND h.status NOT IN('free')
    GROUP BY o.object_name,h.status
    ORDER BY count(*) DESC;
    I used the above query to get the number of blocks used to cache data.
    I performed a get operation in one DB and noted the number of blocks.
    But the problem is that the same operation in another DB shows a different number of blocks.
    Both DBs have the same configuration.
    Has anyone noticed this issue?

    Why do you expect them to be the same?
    Oracle version of each database?
    Number of objects in each database?
    Size of buffer cache in each database?
    The amount of query activity that would actually load blocks into the buffer cache in each database is not likely to be "the same".
    Identical data can take up a different number of blocks in different databases, depending on how it was loaded, transactions on that data, etc, so the number of blocks used in the buffer cache is likely to be different in different databases, even for the same data set.

  • SBS2011 (Exchange 2010 SP2) - limiting cache size doesn't appear to work

    Hi All,
    Hoping for some clarification here, or extra input at least.  I know there are other posts about this topic, such as
    http://social.technet.microsoft.com/Forums/en-US/smallbusinessserver/thread/5acb6e29-13b3-4e70-95d9-1a62fc9304ac , but these have been
    incorrectly marked as answered, in my opinion.
    To recap the issue.  The Exchange 2010 store.exe process uses a lot of memory.  So much, in fact, that it has a negative performance impact on the server (sluggish access to the desktop etc).  You can argue about this all day - it's by design
    and shouldn't be messed with etc. - but the bottom line is that it does use too much memory and it does need tweaking.  I know this because if you simply restart the Information Store process (or reboot the server) it frees up the memory and the performance
    returns (until its cache is fully rebuilt, that is).  I have verified this on 4 different fresh builds of SBS2011 over the last 6 months (all on servers with 16GB RAM).
    I have scoured the internet for information on limiting how much memory exchange uses to cache the information store and most articles point back to the same two articles (http://eightwone.com/2011/04/06/limiting-exchange-2010-sp1-database-cache/
    and
    http://eightwone.com/2010/03/25/limiting-exchange-2010-database-cache) that deal with exchange 2010 and exchange 2010 SP1, notably not exchange 2010 SP2.  Ergo most articles are out of date since exchange 2010 SP2 has been released since these articles
    were posted.
    When testing with our own in house SBS2011 server (with exchange 2010 SP2) I have found that specifying the min, max and cache sizes in ADSIEDIT has varying results that are not in line with the results documented in the articles I mentioned above. 
    I suspect the behaviour of these settings has changed with the release of exchange 2010 SP2 (as it did between the initial release and SP1).
    Specifically here's what I have found using ADSIEDIT;
    If you set the msExchESEParamCacheSize to a value - it doesn't have any effect.
    If you set the msExchESEParamCacheSizeMax to a value - it doesn't have any effect.
    If you set the msExchESEParamCacheSizeMin to a value - it always locks the store.exe process to using exactly this value.
    I have also tested using combinations of these settings with the result that the size and max size values are always ignored (and the store.exe process uses the maximum available amount of memory - thus causing the performance degradation) but as soon as
    you specify the min value it locks it to this value and it doesn't change.
    As a temporary solution on our in-house SBS2011 I have set the min value to 4GB and it appears to be running fine (only 15 mailboxes though).
    Anyone got some input on this? Thank you for your time.
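    One detail worth spelling out for anyone repeating these tests: the
    msExchESEParamCacheSize* values are page counts, not bytes. Assuming the
    32 KB ESE page size documented for Exchange 2010 (verify for your build),
    the conversions for the 4 GB / 16 GB figures discussed in this thread are:

        4 GB  = 4 * 1024 * 1024 KB / 32 KB per page  = 131072 pages (min)
        16 GB = 16 * 1024 * 1024 KB / 32 KB per page = 524288 pages (max)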

    I concur with Erin. I'm seeing the same behaviour across all SBS2011 boxes, whether running SP1, SP2 or SP3.
    If a minimum value is set, the store cache size barely rises above the minimum. I have one server with 32GB RAM. Store.exe was using 20GB of RAM, plus all the other Exchange services which total 4GB+. That left virtually no free RAM, and trying to do
    anything else on the server was sluggish at best.
    All the advice is that setting a maximum alone has no effect and a minimum must be set too. But when set, the store cache size barely rises above the minimum. I have set a 4GB minimum and 16GB max, but 5 days later it's still using only slightly more than
    4GB and there's 8GB free. Now the server as a whole is responsive, but doing anything with Exchange is sluggish.
    Just saying leave Exchange to manage itself is not an answer. The clue is in the name - Small Business Server. It's not Exchange Only Server - there are other tasks an SBS must handle so leaving Exchange to run rampant is not an option. Besides, there are
    allegedly means to manage the Exchange cache size - they just don't apparently work!
    I'm guessing nobody has an answer to this so the only solution is to effectively fix the cache size to a sensible value by setting min and max to the same value.
    Adam@Regis IT

  • CONFUSED with Cache size versus Increment by in Sequence

    Dear all,
    I have a sequence that has the following attributes
    MINVALUE 1
    MAXVALUE 9999999999999
    CACHE SIZE 20
    INCREMENT BY 1
    and tested with SELECT * FROM all_sequences and verified the above attributes.
    The issue is that when used, it started at 21 and has been incrementing by 20. Is it because the cache size is overriding the increment? I am totally confused. Please help...
    Thank you,

    As long as the Sequence is in the Library Cache in the Shared Pool, you will get increments of 1.
    However, as with any other object in the Shared Pool, a Sequence can get "aged out" of the Shared Pool if Oracle has insufficient memory to allocate for new SQLs. If it does get aged out, at the next load back into memory, it will come back with a value of 21. If you keep hitting the Sequence and keep it "busy" (ie hot), it will return 22, 23, 24 etc. If you use it infrequently and your shared pool size is small, it might be exited out of the shared pool and, at the next call, return with a value of 40 !
    See MetaLink Note #61760.1 on using DBMS_SHARED_POOL.KEEP to "pin" Sequences and other objects. If you "keep" too many objects in the shared pool, you may end up getting ORA-4031 errors!
    Note : Even if you pin your shared pool you can still "lose" values when :
    a. SHUTDOWN, STARTUP cycle happens
    b. Transactions allocate a sequence but do a rollback (a sequence increment does not get rolled-back)
    Edited by: Hemant K Chitale on Jun 5, 2009 10:53 AM

  • Cache Size error

    We have a few users that occasionally receive the following:
    OLAP_error (1200601): Not enough memory for formula execution. Set MAXFORMULACACHESIZE configuration parameter to [2112]KB and try again.
    Our Essbase admin is suggesting that rather than increase MAXFORMULACACHESIZE, we reduce the maximum number of rows that are allowed to be returned.  Thoughts on that?
    2 other questions:
    Are there any issues with increasing the MAXFORMULACACHESIZE to a much larger number than what the error message recommends? (let's say 9000KB for the sake of this discussion).  In the DBAG I think it says it will only use what is needed.
    Are there any issues with setting the maximum rows allowed to be returned to a very high number (such as 1 million rows to reflect that max number of rows excel can handle)?

    The answer to both of your questions is "no". There won't be any problem if you change the cache size, nor in increasing the row limit. But in practical conditions there will be no reports in any financial organization retrieving a million rows, so it is better to split the workbook for faster retrieval and better performance.
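    If the admin does agree to raise it, it is a one-line server setting in
    essbase.cfg. A sketch, using the question's 9000 KB ballpark rounded to
    9216 KB; per the DBAG only what is actually needed gets used, and the
    application needs a restart for essbase.cfg changes to take effect:

        MAXFORMULACACHESIZE 9216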

  • Ndsd preallocated cache size

    I was looking into ndsd memory usage on my OES11 SP2 server and noticed that pre-allocated cache size is set in /var/opt/novell/eDirectory/data/dib/_ndsdb.ini:
    cache=209715200
    This (200 MB) is apparently the default setting.
    The total size of files in /var/opt/novell/eDirectory/data/dib on my server is only 70 MB.
    In that situation, would it be reasonable to reduce the pre-allocated cache size?

    The old documentation (from the NetWare days), which no longer applies
    directly, was to have 2-3 times your DIB size in DIB cache. Newer logic
    (also covered by the new eDirectory 8.8 defaults and the
    performance/tuning guide) says to ignore it since Linux is doing caching
    on its own of data in the filesystem, so the amount there is just really
    immediate needs and should be there, but ultimately the kernel is helping
    you a lot.
    This past year I've had at least two customers report that their large
    DIBs (many GB) were benefited substantially by having multi-GB DIB caches
    even though they can see that the OS is doing caching as well, so
    something happening in there seems to be pre-worked somehow to benefit
    their type of eDirectory functionality in terms of measurable performance
    in searches.
    Considering your current static amount is about 3x your DIB size, and
    since you are not reporting any issues with that, I'd leave it alone. If
    you do want to adjust it, do so based on performance testing that matches
    your environment's needs, specifically by making some adjustment, noting
    performance-related stats, then doing it again, etc., until you can find a
    statistically-significant improvement at some level.
    Good luck.

  • Default Adobe Drive cache size is only 128MB

    The Adobe Drive cache size defaults to 128MB. This doesn't seem a very logical value, as a single file may easily be larger than that. Is there a reason it's so small? Most users would probably benefit from a larger cache size, and today's hard drives should also allow one.
    Would it make sense to have a default cache size of 5-10GB? Maybe depending on the amount of free disk space available during installation?


  • Change browser cache size on BlackBerry Z10

    I love Z10 browser that support flash.
    But I can't find setting to modify browser cache size.
    In Device monitor > Storage, I see that the browser uses only 124MB of space; maybe the cache is 120MB.
    Currently I need cache size around 1GB to play a game.
    Is there a way to increase the browser disk cache size?


  • Paper Size issues with CreatePDF Desktop Printer

    Are there any known paper size issues with PDFs created using Acrobat.com's CreatePDF Desktop Printer?
    I've performed limited testing with a trial subscription, in preparation for a rollout to several clients.
    Standard paper size in this country is A4, not Letter.  The desktop printer was created manually on a Windows XP system following the instructions in document cpsid_86984.  MS Word was then used to print a Word document to the virtual printer.  Paper Size in Word's Page Setup was correctly set to A4.  However the resultant PDF file was Letter size, causing the top of each page to be truncated.
    I then looked at the Properties of the printer, and found that it was using an "HP Color LaserJet PS" driver (self-chosen by the printer install procedure).  Its Paper Size was also set to A4.  Word does override some printer driver settings, but in this case both the application and the printer were set to A4, so there should have been no issue.
    On a hunch, I then changed the CreatePDF printer driver to a Xerox Phaser, as suggested in the above Adobe document for other versions of Windows.  (Couldn't find the recommended "Xerox Phaser 6120 PS", so chose the 1235 PS model instead.)  After confirming that it too was set for A4, I repeated the test using the same Word document.  This time the result was fine.
    While I seem to have solved the issue on this occasion, I have not been able to do sufficient testing with a 5-PDF trial, and wish to avoid similar problems with the future live users, all of which use Word and A4 paper.  Any information or recommendations would be appreciated.  Also, is there any information available on the service's sensitivity to different printer drivers used with the CreatePDF's printer definition?  And can we assume that the alternative "Upload and Convert" procedure correctly selects output paper size from the settings of an uploaded document?
    PS - The newly-revised doc cpsid_86984 still seems to need further revising.  Vista and Windows 7 instructions have now been split.  I tried the new Vista instructions on a Vista SP2 PC and found that step 6 appears to be out of place - there was no provision to enter Adobe ID and password at this stage.  It appears that, as with XP and Win7, one must configure the printer after it is installed (and not just if changing the ID or password, as stated in the document).

    Thank you, Rebecca.
    The plot thickens a little, given that it was the same unaltered Word document that first created a letter-size PDF, but correctly created an A4-size PDF after the driver was changed from the HP Color Laser PS to a Xerox Phaser.  I thought that the answer may lie in your comment that "it'll get complicated if there is a particular driver selected in the process of manually installing the PDF desktop printer".  But that HP driver was not (consciously) selected - it became part of the printer definition when the manual install instructions were followed.
    However I haven't yet had a chance to try a different XP system, and given that you haven't been able to reproduce the issue (thank you for trying), I will assume for the time being that it might have been a spurious problem that won't recur.  I'll take your point about using the installer, though when the opportunity arises I might try to satisfy my cursed curiosity by experimenting further with the manual install.  If I come up with anything of interest, I'll post again.

  • Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).

    Hi,
    Our environment is Essbase 11.1.2.2, working with the Essbase EAS and Shared Services components. One of our users tried to run the calc script of one application and faced this error.
    Dynamic Calc processor cannot lock more than [100] ESM blocks during the calculation, please increase CalcLockBlock setting and then retry(a small data cache setting could also cause this problem, please check the data cache size setting).
    I did some googling and found that we need to add something to the essbase.cfg file, like below.
    1012704 Dynamic Calc processor cannot lock more than number ESM blocks during the calculation, please increase CalcLockBlock setting and then retry (a small data cache setting could also cause this problem, please check the data cache size setting).
    Possible Problems
    Analytic Services could not lock enough blocks to perform the calculation.
    Possible Solutions
    Increase the number of blocks that Analytic Services can allocate for a calculation:
    Set the maximum number of blocks that Analytic Services can allocate to at least 500. 
    If you do not have an $ARBORPATH/bin/essbase.cfg file on the server computer, create one using a text editor.
    In the essbase.cfg file on the server computer, set CALCLOCKBLOCKHIGH to 500.
    Stop and restart Analytic Server.
    Add the SET LOCKBLOCK HIGH command to the beginning of the calculation script.
    Set the data cache large enough to hold all the blocks specified in the CALCLOCKBLOCKHIGH setting. 
    Determine the block size.
    Set the data cache size.
    Actually, in our server config file (essbase.cfg) we don't have the data below added:
    CalcLockBlockHigh 2000
    CalcLockBlockDefault 200
    CalcLockBlockLow 50
    So my doubt is: if we edit the essbase.cfg file, add the above settings, and restart the services, will it work? And if so, why should we change the server config file if the problem is with one application's calc script? Please guide me on how to proceed.
    Regards,
    Naveen

    Your calculation needs to hold more blocks in memory than your current set up allows.
    From the docs (quoting so I don't have to write it, not to be a smarta***):
    CALCLOCKBLOCK specifies the number of blocks that can be fixed at each level of the SET LOCKBLOCK HIGH | DEFAULT | LOW calculation script command.
    When a block is calculated, Essbase fixes (gets addressability to) the block along with the blocks containing its children. Essbase calculates the block and then releases it along with the blocks containing its children. By default, Essbase allows up to 100 blocks to be fixed concurrently when calculating a block. This is sufficient for most database calculations. However, you may want to set a number higher than 100 if you are consolidating very large numbers of children in a formula calculation. This ensures that Essbase can fix all the required blocks when calculating a data block and that performance will not be impaired.
    Example
    If the essbase.cfg file contains the following settings:
    CALCLOCKBLOCKHIGH 500
    CALCLOCKBLOCKDEFAULT 200
    CALCLOCKBLOCKLOW 50
    then you can use the following SET LOCKBLOCK setting commands in a calculation script:
    SET LOCKBLOCK HIGH; 
    means that Essbase can fix up to 500 data blocks when calculating one block.
    Support doc is saying to change your config file so those settings can be made available for any calc script to use.
    On a side note, if this was working previously and now isn't then it is worth investigating if this is simply due to standard growth or a recent change that has made an unexpected significant impact.

  • In EJB3 entities, what is the equiv. of key-cache-size for PK generation?

    We have an oracle sequence which we use to generate primary keys. This sequence is set to increment by 5.
    e.g.:
    create sequence pk_sequence increment by 5;
    This is so weblogic doesn't need to query the sequence on every entity bean creation; it only needs to query the sequence once every 5 creations.
    With CMP2 entity beans and automatic key generation, this was configured simply by having the following in weblogic-cmp-rdbms-jar.xml:
    <automatic-key-generation>
    <generator-type>Sequence</generator-type>
    <generator-name>pk_sequence</generator-name>
    <key-cache-size>5</key-cache-size>
    </automatic-key-generation>
    This works great: the IDs created are 10, 11, 12, 13, 14, 15, 16, etc., and weblogic only needs to hit the sequence once per 5 IDs.
    However, we have been trying to find the equivalent with the EJB3-style JPA entities:
    We've tried
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "native(Sequence=pk_sequence, Increment=5, Allocate=5)")
    @SequenceGenerator(name = "SW_ENTITY_SEQUENCE", sequenceName = "pk_sequence", allocationSize = 5)
    But with both configurations the autogenerated IDs are 10, 15, 20, 25, 30, etc. - weblogic seems to be getting a new value from the sequence every time.
    Am I missing anything?
    We are using weblogic 10.3.
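    For comparison, the standard JPA pairing is below - a sketch with a
    made-up entity, where allocationSize matches the sequence's INCREMENT BY.
    With that pairing the provider should hand out 5 consecutive IDs per
    NEXTVAL; if the IDs still advance by 5 per row, the provider is ignoring
    allocationSize, which would point at provider configuration rather than
    the mapping itself:

        import javax.persistence.*;

        @Entity
        public class SwEntity {  // entity and field names are made up
            @Id
            @GeneratedValue(strategy = GenerationType.SEQUENCE,
                            generator = "SW_ENTITY_SEQUENCE")
            @SequenceGenerator(name = "SW_ENTITY_SEQUENCE",
                               sequenceName = "pk_sequence",
                               allocationSize = 5)  // must match INCREMENT BY 5
            private Long id;

            public Long getId() { return id; }
        }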

