Optimization

Hi All,
I have the following dimensions in Essbase. I am using Essbase 9.3.1.
1. Accounts
2. Period
3. Year
4. Activity
5. Version
6. Currency
7. Company
8. Asset Type
9. Cost Centre
10. Invest_Char
11. Location
Of the above, only Accounts and Period are dense; the rest are sparse.
The following is the code for setting the balance in a particular month to zero.
This script takes more than 14 hours to run.
SET UPDATECALC OFF;
SET CACHE HIGH;
SET CREATENONMISSINGBLK ON;
FIX("Test","HSP_InputValue","SCNZ",@IDESCENDANTS("Location",0),@IDESCENDANTS(Activity,0),@IDESCENDANTS("Cost-Center",0),@IDESCENDANTS("Asset-Type",0),@IDESCENDANTS("Invest-Char",0))
FIX(&currfcst,&startyear)
FIX("PR_5163")
&currmth = 0;
ENDFIX
ENDFIX
ENDFIX;
Explanation of the above code terms:
Test is the Version.
SCNZ is the Location.
&currfcst could be any Scenario.
&startyear could be any year, say FY10.
"PR_5163" is the account code which needs to be zeroed out for the given month.
&currmth = the month.
Any inputs to reduce the run time would be much appreciated.
Once again thanks in advance.
BK

Beyond the different technique suggestions that have been outlined in other posts, I have to ask why you are setting the current month to zero. Block creation? Clearing out data values?
Block creation can always be handled at the time of calculation. CLEARDATA can wipe out data and sometimes blocks -- watch out on that latter one. I got severely burned on that to the point of blogging about my pain. CLEARBLOCK for sure can wipe out blocks and data. http://camerons-blog-for-essbase-hackers.blogspot.com/2011/01/stupid-programming-tricks-6_03.html
Remember, you are creating blocks for every potential combination of dimensions (I am going to guess that you are at least going to rewrite this as @RELATIVE instead of @IDESCENDANTS unless you well and truly want to calculate this for every level) and the data that goes into the calc almost certainly doesn't exist at all of the combinations you have touched. You end up creating blocks where they don't need to be, slowing down your calc, making your database bigger than it needs to be, and generally threatening world peace. Okay, the last one you're not doing, but you get the idea.
Why not describe what you need to calc, and where, and then try different techniques to create blocks, e.g., DATACOPY, CREATENONMISSINGBLK, CREATEBLOCKONEQ, assign to sparse, cross-dim on the left-hand side of the equals sign, XWRITE with LOOPBACK, etc.
Regards,
Cameron Lackpour
Edited by: CL on Feb 6, 2011 9:12 AM
Whoops -- you can't do the XWRITE because you're on 9.3.1, but the rest of the techniques are fine.
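To make Cameron's suggestions concrete, here is a minimal sketch of the CLEARDATA route, with @RELATIVE swapped in for @IDESCENDANTS. The member names are copied from the original post and the script is untested, so treat it as a starting point rather than a finished fix:
SET UPDATECALC OFF;
SET CACHE HIGH;
/* No CREATENONMISSINGBLK: CLEARDATA only touches blocks that already exist,
   so nothing here forces the creation of empty blocks. */
FIX("Test", "HSP_InputValue", "SCNZ", &currfcst, &startyear, &currmth,
    @RELATIVE("Location",0), @RELATIVE("Activity",0), @RELATIVE("Cost-Center",0),
    @RELATIVE("Asset-Type",0), @RELATIVE("Invest-Char",0))
    CLEARDATA "PR_5163";
ENDFIX
One caveat: CLEARDATA writes #MISSING rather than an explicit zero, so if downstream reports must distinguish the two, it is not a drop-in replacement for the assignment.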

Similar Messages

  • How can I optimize just the video on a project timeline?

    Hi everyone,
    I've been working on a 1-hour documentary using original, non-optimized media. Now that I'm approaching the final steps of the edit, I would like to optimize all the video in the timeline, but NOT all the footage I have in the events.
    I did NOT optimize my media on import: all my events and projects are made up of video that was not converted, just imported. I did that because I didn't have enough storage to transcode the whole 40 hours of footage to ProRes.
    The folders now full of media in Final Cut Events are the "original media" ones.
    Now I'll add some titles, subtitles and color correction, and I want things to be a little faster. Then I'll step into the "export" zone, and I know it is much better to export from optimized media than from original media; that's why I want a 'ProRes optimized media' project timeline.
    Thank you in advance to anyone with advice!

    Thank you Tom,
    at least I know there is no need to keep on wondering "WHY?"...
    This inability to transcode footage on the timeline seems to me a big downside of this new version... I just have in mind all the options for managing media in FCP 7...
    Thank you again,
    I always read your tips: very useful!

  • G770 won't boot normally or in safe mode... I suspect it has something to do with the boot optimizer

    My Lenovo G770 will not boot in normal or safe mode.  I usually escape out of the boot optimizer.  Today I let it run and it went to the "starting windows" screen with a brief startup of the windows 7 animation...it freezes for a second, then a quick flash of the bsod, then to the "windows error recovery page" giving me the option of "starting windows normally" or "Launch Startup Repair (recommended)." 
    Starting Windows Normally eventually brings me back to the same place, repeating what I just stated in the paragraph above.
    When I launch the Startup Repair it "Cannot repair this computer automatically".
    So I go to view advanced options for system recovery and support.
    It brings me to 5 options:
    Startup Repair (we already tried this above)
    System Restore (unfortunately I didn't create any restore points)
    System Image Recovery (unfortunately I haven't created an image to recover)
    Windows Memory Diagnostic (no problems found-done several times)
    Command Prompt (don't know what I can do here except for remove a bad/corrupted driver which may be the problem, but I don't know the driver name that is associated with the boot optimizer...can anyone tell me this?)
    I've tried booting to safe mode in all of its incarnations and I can't even do that...it repeats the same things as stated above...windows 7 animation briefly starts then locks up, flash of bsod, then the windows error recovery page.
    I've tried booting to last known good configuration (same thing occurs...brief startup of windows 7 animation, freeze, flash of bsod, then error recovery page).
    The only thing that has given me any kind of result was "disabling system restart on system failure." When I do this, the BSOD doesn't flash briefly -- it stays, and it gives me the error message PAGE_FAULT_IN_NONPAGED_AREA.
    I'm at a loss as to what to do.  Not being able to boot into Safe Mode even is really frustrating.  Any advice from anyone?  can I remove the driver associated with the boot optimizer?  If so, what is the name of the driver and where (directory) is it located? 

    How did you resolve the issue?
    I have exactly the same issue.
    When I go System Image Recovery --> Select System Image --> Advanced, I can open all the drives (Local Drive (C:), LENOVO (D:), Local Disk (E:), and Boot (X:), which is where I think the boot executables live). It comes up with an Open prompt asking me to enter a File Name with a File type of Setup Information.
    I don't know which setup information to use or where to find it on my drives.
    Does anyone know how to fix this?
    I was trying to re-install Win 7 from DVD but it is not executing either.
    Can I boot with USB Ubuntu and install Win 7 from there? But how?
    I need help.

  • Column optimization in GUI_DOWNLOAD -- Excel

    Hi Experts,
    I am writing an Excel file using the GUI_DOWNLOAD function module. Is there any way to do column optimization in the Excel file while downloading?
    Thanks and regards,
    Venkat

    Hi,
    There is complete and very good documentation by SAP available at this URL. Please read it.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/204d1bb8-489d-2910-d0b5-cdddb3227820
    Hope your query gets solved.
    Thanks and regards,
    Ramani N

  • Query optimization in Oracle 8i (tunning)

    Hi everyone,
    The following SQL statement uses more than 15% of the CPU of the machine where it is executed. Could somebody please help me rewrite or hint the query?
    This is the statement:
    SELECT
    /*+ INDEX(APD IDX_ABAPLANI_DET_SOL)  */ 
    apd.sinonimo,
    apd.sinonimo_planificacion,
    apd.cod_despensa,
    apd.estante_cod,
    apd.correlativo_solicitud,
    apd.prioridad,
    apd.correlativo_det_sol,
    apd.insumo_sinonimo,
    apd.cantidad_solicitada,
    apd.cantidad_despachada,
    apd.estado,
    apd.sinonimo_usuario,
    apd.sinonimo_observacion,
    ap.fecha_creacion,
    ap.centro_resultado,
    aud.nombre,
    aud.a_paterno,
    aud.rut,
    aud.username,
    cenres.cod_flex codigocr,
    insumo.cod_flex insumocod,
    cenres.des_flex despensa_descripcion,
    cenres.des_flex crdescripcion,
    insumo.des_flex insumodescripcion
    FROM
    aba_usuario_despachador aud,
    cenres,
    insumo,
    aba_planificacion_detalle apd,
    aba_planificacion ap
    WHERE ap.sinonimo = apd.sinonimo_planificacion
    AND aud.sinonimo = apd.sinonimo_usuario
    AND ap.centro_resultado = cenres.sinonimo
    AND insumo.sinonimo = apd.insumo_sinonimo
    AND apd.sinonimo_usuario = NVL (:b1, apd.sinonimo_usuario)
    AND apd.sinonimo_planificacion = NVL (:b2, apd.sinonimo_planificacion)
    AND apd.correlativo_solicitud = NVL (:b3, apd.correlativo_solicitud)
    AND apd.estante_cod = NVL (UPPER (:b4), apd.estante_cod)
    AND apd.cod_despensa = NVL (UPPER (:b5), apd.cod_despensa)
    AND apd.estado = NVL (:b6, apd.estado)
    AND ap.centro_resultado = NVL (:b7, ap.centro_resultado)
    AND TO_DATE (TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy'), 'dd/mm/yyyy')
    BETWEEN TO_DATE (NVL (:b8,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    AND TO_DATE (NVL (:b9,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    AND apd.estado NOT LIKE :b10
    ORDER BY apd.sinonimo;
    The version of the database is 8.1.7.4.0.
    Here is the output of EXPLAIN PLAN:
    Plan
    SELECT STATEMENT CHOOSE  Cost: 2,907  Bytes: 104,312  Cardinality: 472
         32 SORT ORDER BY  Cost: 2,907  Bytes: 104,312  Cardinality: 472                                          
              31 CONCATENATION                                     
                   15 FILTER                                
                        14 NESTED LOOPS  Cost: 11  Bytes: 52,156  Cardinality: 236                           
                             11 NESTED LOOPS  Cost: 10  Bytes: 177  Cardinality: 1                      
                                  8 NESTED LOOPS  Cost: 9  Bytes: 133  Cardinality: 1                 
                                       5 NESTED LOOPS  Cost: 8  Bytes: 67  Cardinality: 1            
                                            2 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION_DETALLE Cost: 7  Bytes: 52  Cardinality: 1       
                                                 1 INDEX FULL SCAN NON-UNIQUE ADMABA.IDX_ABAPLANI_DET_SOL Cost: 3  Cardinality: 1 
                                            4 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION Cost: 1  Bytes: 15  Cardinality: 1       
                                                 3 INDEX UNIQUE SCAN UNIQUE ADMABA.PK_ABA_PLANIFICACION Cardinality: 1 
                                       7 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_USUARIO_DESPACHADOR Cost: 1  Bytes: 3,498  Cardinality: 53            
                                            6 INDEX UNIQUE SCAN UNIQUE ADMABA.ABA_USUARIO_DESPACHADOR_PK Cardinality: 53       
                                  10 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 14,828  Cardinality: 337                 
                                       9 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 337            
                              13 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 1,037,828  Cardinality: 23,587
                                  12 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 23,587                 
                   30 FILTER                                
                        29 NESTED LOOPS  Cost: 11  Bytes: 52,156  Cardinality: 236                           
                             26 NESTED LOOPS  Cost: 10  Bytes: 177  Cardinality: 1                      
                                  23 NESTED LOOPS  Cost: 9  Bytes: 133  Cardinality: 1                 
                                       20 NESTED LOOPS  Cost: 8  Bytes: 67  Cardinality: 1            
                                            17 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION_DETALLE Cost: 7  Bytes: 52  Cardinality: 1       
                                                 16 INDEX RANGE SCAN NON-UNIQUE ADMABA.IDX_ABAPLANI_DET_SOL Cost: 3  Cardinality: 1 
                                            19 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION Cost: 1  Bytes: 15  Cardinality: 1       
                                                 18 INDEX UNIQUE SCAN UNIQUE ADMABA.PK_ABA_PLANIFICACION Cardinality: 1 
                                       22 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_USUARIO_DESPACHADOR Cost: 1  Bytes: 3,498  Cardinality: 53            
                                            21 INDEX UNIQUE SCAN UNIQUE ADMABA.ABA_USUARIO_DESPACHADOR_PK Cardinality: 53       
                                  25 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 14,828  Cardinality: 337                 
                                       24 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 337            
                              28 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 1,037,828  Cardinality: 23,587
                                   27 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 23,587
    Thanks in advance!
    Edited by: user491853 on 21-Aug-2012 15:29

    A few comments looking at your sql query:
    How much time is the query taking?
    How many rows are there in the tables?
    Make sure the stats are up-to-date.
    Please kindly follow the instructions provided by others as well.
    >
    The version of the database is 8.1.7.4.0
    >
    Suggestion: Upgrade your version. The Oracle cost-based optimizer is much smarter now. Upgrading will make your life much easier, as there are so many enhancements.
    AND TO_DATE (TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy'), 'dd/mm/yyyy')
    BETWEEN TO_DATE (NVL (:b8,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    AND TO_DATE (NVL (:b9,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    Why are you using TO_DATE/TO_CHAR on a date column?
    AND ap.centro_resultado = NVL (:b7, ap.centro_resultado)
    The same can be rewritten as below (note the parentheses; AND binds tighter than OR, so the grouping matters):
    AND ((ap.centro_resultado = :b7 AND :b7 IS NOT NULL) OR :b7 IS NULL)
    This applies to the other predicates you are using as well.
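    On the date point, the range filter might be rewritten along these lines. This is a sketch only, assuming :b8 and :b9 arrive as 'dd/mm/yyyy' strings or NULL:
    AND ap.fecha_creacion >= NVL (TO_DATE (:b8, 'dd/mm/yyyy'), ap.fecha_creacion)
    AND ap.fecha_creacion <  NVL (TO_DATE (:b9, 'dd/mm/yyyy') + 1, ap.fecha_creacion + 1)
    Leaving fecha_creacion bare keeps any index on it usable, and the exclusive "day after :b9" upper bound preserves the day-level BETWEEN semantics of the original.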
    One table used in the plan, e.g. NUC_CODIGOS_FLEXIBLES, is not found in your SQL query.
    Regards
    Biju

  • Unable to optimize album art on my iPod and then display the art on my iPod

    When I optimize album art it gives me an error message "unknown error (-50)"
    I would like to fix that. Any pointers?

    Get the order number from your iTunes account's "Purchase History", then send a mail to the iTunes Store from the following website: www.apple.com/support/itunes/ They will provide you an exception to redownload because it's Apple's problem.

  • Help needed to optimize the query

    Help needed to optimize the query:
    The requirement is to select the record with max eff_date from HIST_TBL, and that max eff_date should be >= '01-Jan-2007'.
    This has a high cost and takes around 15 minutes to execute.
    Can anyone help to fine-tune this?
       SELECT c.H_SEC,
                    c.S_PAID,
                    c.H_PAID,
                    table_c.EFF_DATE
       FROM    MTCH_TBL c
                    LEFT OUTER JOIN
                       (SELECT b.SEC_ALIAS,
                               b.EFF_DATE,
                               b.INSTANCE
                          FROM HIST_TBL b
                         WHERE b.EFF_DATE =
                                  (SELECT MAX (b2.EFF_DATE)
                                     FROM HIST_TBL b2
                                    WHERE b.SEC_ALIAS = b2.SEC_ALIAS
                                          AND b.INSTANCE =
                                                 b2.INSTANCE
                                          AND b2.EFF_DATE >= '01-Jan-2007')
                               OR b.EFF_DATE IS NULL) table_c
                    ON  table_c.SEC_ALIAS=c.H_SEC
                       AND table_c.INSTANCE = 100;

    To start with, I would avoid scanning HIST_TBL twice.
    Try this
    select c.h_sec
         , c.s_paid
         , c.h_paid
         , table_c.eff_date
      from mtch_tbl c
      left
      join (
              select sec_alias
                   , eff_date
                   , instance
                from (
                        select sec_alias
                             , eff_date
                             , instance
                             , max(eff_date) over(partition by sec_alias, instance) max_eff_date
                          from hist_tbl b
                         where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
                             or eff_date is null
                      )
                where eff_date = max_eff_date
                  or eff_date is null
           ) table_c
        on table_c.sec_alias = c.h_sec
       and table_c.instance  = 100;

  • Aggregation script is taking long time - need help on optimization

    Hi All,
    Currently we are working to build a BSO solution (version 11.1.2.2) for a customer, where we are facing a performance issue in aggregating the database. The most common activity of the solution will be to generate data on different scenarios from Actual and Budget (Actual vs Budget difference data in one scenario), to be used mainly for reporting purposes.
    We are aggregating the data to the top level using the AGG command for the sparse dimensions. While doing this, we found that it creates a lot of page files, thereby filling up the available physical space on the drive (to the tune of 70GB). Moreover, it is taking a long time to aggregate. The numbers of stored members are as follows:
    Dimension - Type - Stored member (Total members)
    Account - Dense- 1597 (1845)
    Period - Dense - 13 (19)
    Year - Sparse - 11 (12)
    Version - Sparse - 2 (2)
    CV - Sparse- 5 (6)
    Scenario - Sparse - 94 (102)
    EV - Sparse - 120 (122)
    FC - Sparse- 118 (121)
    CP - Sparse - 1887 (2049)
    M1 - Sparse - 4873 (4874)
    Entity - Sparse - 12020 (32349) - Includes two alternate hierarchies for rolling up the data
    The other properties are as follows:
    Index Cache - 152000
    Data File Cache - 32768
    Data cache - 153600
    ACR = 0.65
    We are using Buffered I/O
    The level 0 data file is about 3 GB (2 years of Budget and 1 year 2 months of Actuals data).
    The customer is going to use SmartView to retrieve the data and has a Planning Plus license only, so we could not go for an ASO solution. We could not reduce the members of the huge sparse dimensions M1 and CP either. To improve the data retrieval time, we had to make upper-level members stored, which resolved the retrieval issue.
    I am seeking help on the following:
    1. How can we optimize the time taken? Currently each dimension takes about an hour to aggregate. CALC DIM takes even longer, hence we opted for AGG.
    2. Will a change of the dense and sparse settings help our cause? ACR is on the lower side. Please note that most calculations are on either the Period dimension or FC; there is no such calculation on the Account dimension.
    3. Will changing a few non-level-0 members from stored to dynamic calc help? Will this slow down calculations in the cube?
    4. What should be the best performance order for this cube?
    Appreciate your help in this regard,
    Regards,
    Sukhamoy

    Please provide the following information:
    1) Block size and other statistics
    2) Aggregation script
    >>Index Cache - 152000
    >>Data File Cache - 32768
    >>Data cache - 153600
    Try these settings:
    Index Cache - 1120000
    Data cache - 3153600
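    If parallel calculation is an option on your box, a sketch like the one below may also cut the elapsed time. This is only a sketch: the dimension names are taken from your post, and the thread count and task-dimension setting are guesses that need benchmarking on the actual server, not recommendations.
    SET UPDATECALC OFF;
    SET CACHE HIGH;
    SET CALCPARALLEL 4;   /* try 2 to 8 threads and measure each run */
    SET CALCTASKDIMS 2;   /* let Essbase split tasks across the last two sparse dimensions */
    /* Aggregate only the sparse dimensions that truly need stored upper levels;
       anything you convert to dynamic calc (your question 3) drops off this list. */
    AGG("Year", "CV", "Scenario", "EV", "FC", "CP", "M1", "Entity");
    Note that AGG only aggregates; if member formulas must fire on any of these dimensions, those still need CALC DIM.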

  • In-Place Element Structures, References and Pointers, Compiler Optimization, and General Stupidity

    [The title of this forum is "Labview Ideas". Although this is NOT a direct suggestion for a change or addition to Labview, it seems appropriate to me to post it in this forum.]
    In-Place Element Structures, References and Pointers, Compiler Optimization, and General Stupidity
    I'd like to see NI actually start a round-table discussion about VI references, Data Value references, local variables, compiler optimizations, etc. I'm a C programmer; I'm used to pointers. They are simple, functional, and well defined. If you know the data type of an object and have a pointer to it, you have the object. I am used to compilers that optimize without the user having to go to weird lengths to arrange it. 
    The 'reference' you get when you right click and "Create Reference" on a control or indicator seems to be merely a shorthand read/write version of the Value property that can't be wired into a flow-of-control (like the error wire) and so causes synchronization issues and race conditions. I try not to use local variables.
    I use references a lot like C pointers; I pass items to SubVIs using references. But the use of references (as compared to C pointers) is really limited, and the implementation is inconsistent, not factorial in capabilities, and buggy. For instance, why can you pass an array by reference and NOT be able to determine the size of the array EXCEPT by dereferencing it and using the "Size Array" VI? I can even get references for all array elements, but I don't know how many there are...! Since arrays are represented internally in Labview as handles, and consist of basically a C-style pointer to the data, and array sizing information, why is the array handle opaque? Why doesn't the reference include operators to look at the referenced handle without instantiating a copy of the array? Why isn't there a "Size Array From Reference" VI in the library that doesn't instantiate a copy of the array locally, but just looks at the array handle?
    Data Value references seem to have been invented solely for the "In-Place Element Structure". Having to write the code to obtain the Data Value Reference before using the In-Place Element Structure simply points out how different a Labview reference is from a C pointer. The Labview help page for Data Value References simply says "Creates a reference to data that you can use to transfer and access the data in a serialized way." I've had programmers ask me if this means that the data must be accessed sequentially (serially)...!!! What exactly does that mean? For those of us who can read between the lines, it means that Labview obtains a semaphore protecting the data references so that only one thread can modify it at a time. Is that the only reason for Data Value References? To provide something that implements the semaphore???
    The In-Place Element Structure talks about minimizing copying of data and compiler optimization. Those kinds of optimizations are built into the compiler in virtually every other language... with no special 'construct' needing to be placed around the code to identify that it can be performed without a local copy. Are you telling me that the Labview compiler is so stupid that it can't identify certain code threads as needing to be single-threaded when optimizing? That the USER has to wrap the code in semaphores before the compiler can figure out it should optimize??? That the compiler cannot implement single threading of parts of the user's code to improve execution efficiency?
    Instead of depending on the user base to send in suggestions one-at-a-time it would be nice if NI would actually host discussions aimed at coming up with a coherent and comprehensive way to handle pointers/references/optimization etc. One of the reasons Labview is so scattered is because individual ideas are evaluated and included without any group discussion about the total environment. How about a MODERATED group, available by invitation only (based on NI interactions with users in person, via support, and on the web) to try and get discussions about Labview evolution going?
    Based solely on the number of Labview bugs I've encountered and reported, I'd guess this has never been done, with the user community, or within NI itself.....

    Here are some articles that can help provide some insights into LabVIEW programming and the LabVIEW compiler. They are both interesting and recommended reading for all intermediate-to-advanced LabVIEW programmers.
    NI LabVIEW Compiler: Under the Hood
    VI Memory Usage
    The second article is a little out-of-date, as it doesn't discuss some of the newer technologies available such as the In-Place Element Structure you were referring to. However, many of the general concepts still apply. Some general notes from your post:
    1. I think part of your confusion is that you are trying to use control references and local variables like you would use variables in a C program. This is not a good analogy. Control references are references to user interface controls, and should almost always be used to control the behavior and appearance of those controls, not to store or transmit data like a pointer. LabVIEW is a dataflow language. Data is intended to be stored or transmitted through wires in most cases, not in references. It is admittedly difficult to make this transition for some text-based programmers. Programming efficiently in LabVIEW sometimes requires a different mindset.
    2. The LabVIEW compiler, while by no means perfect, is a complicated, feature-rich set of machinery that includes a large and growing set of optimizations. Many of these are described in the first link I posted. This includes optimizations you'd find in many programming environments, such as dead code elimination, inlining, and constant folding. One optimization in particular is called inplaceness, which is where LabVIEW determines when buffers can be reused. Contrary to your statement, the In-Place Element Structure is not always required for this optimization to take place. There are many circumstances (dating back years before the IPE structure) where LabVIEW can determine inplaceness and reuse buffers. The IPE structure simply helps users enforce inplaceness in some situations where it's not clear enough on the diagram for the LabVIEW compiler to make that determination.
    The more you learn about programming in LabVIEW, the more you realize that inplaceness itself is the closest analogy to pointers in C, not control references or data references or other such things. Those features have their place, but core, fundamental LabVIEW programming does not require them.
    Jarrod S.
    National Instruments

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger than default file downloads, but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows, SharePoint or IIS optimizations? The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files, it is an HTTP stateless system like any other website in that regard. Given your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted warning against doing it, as if large files are going to cause your SharePoint server and SQL to crash.
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.

  • Is it possible to force 16/32-bit stack alignment without using the optimizer?

    The compiler emits code targeted at the classic Pentium architecture for the -m32 memory model.  I'm running into problems mixing Sun Studio compiled code with code built with other compilers because the other compiler builds under the assumption that the stack is 16-byte aligned.
    The only way I've found to force Sun Studio to comply with that restriction is with -xarch={sse2a,sse3,...}, but this causes the code to pass through the optimizer.  As noted in the documentation, if you want to avoid optimizations you must remove all flags that imply optimizations -- that is to say, there's no way to disable optimizations once enabled.  This should not, however, be treated as an optimization because it's an ABI requirement.
    I've scoured the documentation, spent many hours googling, digging through forums, and asking questions.
    The best I've come up with is the -xarch option which is sub-optimal because it has side effects.  I tried -xchip=pentium4 (this is what my other compilers have set as their default target), but the generated code doesn't force 16-byte stack alignment.
    Is there a way to force the compiler to emit code conforming to a different ABI without using the optimizer?
    -Brian

    Thank you for your response.
    I hope you won't mind my asking: do you have a way to prove that it's not possible to force 16-byte alignment without using the optimizer?  I ask because your username / profile don't give the impression you work for Oracle, so while I think you're probably right it's at least possible that we're both mistaken.  I haven't been able to find any documentation on either stack alignment or altering the targeted ABI short of using the -xarch flag, and even there the details are fairly sketchy.
    -Brian

  • Need to Optimize 3D performance on an Envy 17 3D? (...and other random bits)

    Hey there,
    The name's Darren. Nice to meet you. While I'm new to posting around these parts, I have been lurking for a little bit. Here's the deal: I work for another part of HP -- I run the blog, thenextbench.com. There, I'm working on various stories: How-tos, tweaks, tips and whatnot. What I'm wondering is if you guys would find it useful for me to post bits of some of my stories here. For example, I did one a while back about setting up games to work in 3D on an ENVY 17 3D....and getting better performance. 
    (The story originally ran here)
    You’ve bought an ENVY 17 3D. Awesome. You’re rocking it with 3D movies and I’m going to make the wild assumption that you’ve played some games since the Envy 17 3D got updated with that snazzy TriDef 3D ignition software. It’s actually dead-simple to get up-and-running with its 300-plus supported games…but what if there is no preset profile for that brand new game you just bought or that super-obscure title you downloaded from some cool, underground hipster indie gaming site? Well, I’ve been tinkering a little with this machine and wanted to walk you through the proper steps to get you situated. So strap those fancy goggles firmly to your noggin and read on, my friends.
    For the sake of this story, I’m going to walk you through how I got things set up, step-by-step. If any of this seems a little redundant, bear with me. Also, the fine folks at TriDef have been great to work with on this - and while I don’t have all the answers, feel free to hit the comment box below and I’ll do my best to get the straight scoop from them. Also, I’d highly recommend you check the DDD forums as well. They are a VERY handy resource for 3D gaming on the ENVY 17 3D.
    STEP 1: The initial setup
    The first time you run the TriDef 3D Ignition software, hit the “Scan” button. It checks directories for known EXE files and instantly populates them on the game launch list. If you installed a popular game directly from a disc, it usually doesn’t have a problem. But if you’re like me, you download your games from digital download services like Steam. (What can I say? I lose discs all the time.) That’s when it gets a little trickier.  The game is afoot!
    STEP 2: Manually adding a game
    Click the “Add” button and it calls up a window. The first thing to look at is the drop down menu. It contains a current list of all the games automatically supported. Your game not there? Don’t sweat it yet. There’s a link in the window to the TriDef forums – there is an active community of users always creating new game profiles for you to download. Still nothing? There is still hope. Select the “Generic” profile for now. We’ll get back to that in Step 3.
    In the same window, you’re going to see a prompt to find the game location. You can either click a shortcut to the game or find the actual EXE file yourself. After that, make sure you create a name for the profile and save it.
    STEP 2a: Adding a Steam game
    I figured that it’d be a piece of cake. And it was at first. I downloaded Borderlands through Steam and when I created a profile pointing to the game file in the Steam directory, everything was groovy.
    (PROTIP: The TriDef software can work with game shortcuts, but Steam holds its game files in the “\Steam\steamapps\common” directory).
    Many other Steam-downloaded games started giving me this oddball warning: “This game doesn’t support DirectX 9, 10 or 11.” These were new games – OF COURSE they supported the latest DX files. So I did a little digging and there is an extra step required to make some Steam games work.
         1.       Click the “Add” button in the TriDef menu
         2.       In the “Executable” field, point to the “steam.exe” file in the main Steam directory.
         3.       Find a shortcut for the game you want to download. (If you don’t have one, open up the Steam client, right-click on the game and select “create shortcut on desktop.”)
         4.       Right-click on the shortcut for the game. At the end of the link location it’ll have a number. Copy that number
         5.       Within the TriDef’s Add window, enter “-applaunch [NUMBER]” in the field where it says “Command Line Arguments (optional)”
         6.       Look for the game’s profile as described above in Step 2.
         7.       Save your progress.
    STEP 3: Optimizing your 3D performance
    Once you’ve cleared those first couple steps, it’s actually not that bad from here. You just want to optimize the experience so that you can get good 3D effects and keep the game playable. What you have to remember is that in order to render a 3D image, the Envy is effectively doubling what’s happening on-screen. My gut reaction with any game is to run it at the laptop’s native screen resolution (1920 by 1080). It looks pretty and can handle running those games in 2D just fine. Bring 3D into the equation and your frame rate will drop. But with a couple tweaks, I’ll get you back up to speed.
         1.       First, start up a test game and just sit around in the game environment, not the game menus.
         2.       Next, on your computer’s number pad, hit the “0” key to call up the 3D overlay menu. Use the 8 and 2 to navigate up and down and the 6 key to make selections.
         3.       Push 2 until the “Performance” option is highlighted and hit the 6 key. There you should see the frame rate displayed (It’s labeled “FPS”). If the FPS number is above 30, you should be fine. That, of course, can change if there’s a lot of action happening on screen. In short, the higher the frame rate, the better.
                  a.       If your frame rate is below 30, consider lowering the game’s resolution or move the cursor in the 3D overlay menu and lower the game’s 3D effects settings. Just highlight “Quality” and push the 6 to toggle the 3D effects between High, Medium and Low.
         4.       When you find the performance settings you like, hit Alt-Shift-S to save them. The next time you fire up that game, it’ll remember what you set.
    STEP 4: Tweaking your 3D experience.
    All right, so you’ve got the game running great, the 3D effects are there, but maybe you still want to adjust the settings a little further. For instance, the 3D effect is a little more jarring in real-time strategy games like StarCraft II and MMO games because you have menus and cursors floating over the world out of perspective with the rest of the 3D depth.  (Try selecting a target far downfield in an MMO and you’ll know what I’m talking about). There are all sorts of settings here that you can adjust. Experiment by adjusting the numbers for the “Depth” and “Focus” under the 3D menu. Under the “Options” and “Window and Cursor” sections, there are plenty of other toggles to switch on and off to your liking.
    Goes without saying, make sure to hit Alt-Shift-S when you’re done and the Ignition software will remember all your preferences.
    What About….?
    Just so you know, this story is an on-going work-in-progress that I plan to update as I learn more. Here are a couple things that I’m currently looking into with the Envy 17 3D:
    [This Game] Doesn’t Work at All / Is Glitchy in 3D. Yeah, I run into that problem as well every so often. DC Universe Online looks broken with tearing images when the 3D goggles are on. (Looks great in 2D, though). Other games, like Telltale Games’ new Back to the Future titles look five kinds of crazy. Those might be more specific fixes that require a deeper dive later on.
    What about Flash-based games? My gut reaction is that the technology requires DirectX 9, 10 or 11 to work so this one might not be in the cards.
    What about older games optimized for Windows 7? There are plenty of old-skool classics, I’d love to try in 3D, but they were all created in a pre-DirectX 9 world. That’s not stopping me from looking around for any solutions, but no word yet.
    =-=-=-=-=-
    So....was this even remotely helpful? Would you want to see more stuff like this? Or bits from stories I've written posted here? Heck, if there were topics you wanted tackled in story-form, I'm all ears for that as well. 
    Thanks in advance for any feedback!
    GizmoGladstone
    Blogger-in-Chief, HP's thenextbench.com
    thenextbench.com
    While I professionally blog for HP about the latest laptops and desktops, these words are all mine.
    My job: Come up with unusual angles for talking about HP gear, dissecting how stuff works and provide tips on getting better performance with your tech.

    Hi @fjward ,
    Thank you for visiting the HP Support Forums and welcome. I have looked into your issue with the HP ENVY 17-3090nr 3D Edition Notebook PC and the problems with brightness control and the Catalyst Control Center. I would uninstall any graphics drivers that are listed along with the CCC software, restart the computer, then reinstall only the AMD package (it will include the AMD graphics driver and Catalyst Control Center) and restart the computer again.
    Here is a link to the HP Support Assistant if you need it. Just download and run the application and it will help with the software and drivers on your system.
    You can do a system restore. System restore will help if something automatically updated and did not go well on the Notebook.
    When performing a System restore please note remove any and all USB devices. Disconnect all non-essential devices as they can cause issues.
    Please let me know how this goes.
    Thanks.
    Please click “Accept as Solution ” if you feel my post solved your issue, it will help others find the solution.
    Click the “Kudos, Thumbs Up" on the bottom to say “Thanks” for helping!

  • Report and data coming out wrong after compressing data with full optimization

    In SAP BPC 5.1, we did a full optimization with compress data to increase system performance.
    This process ended with an error; after logging into the system, the report and its values are coming out wrong.
    What is wrong, and how do we rectify it?
    Regards
    prakash J

    This issue is resolved.

  • FI-CA Open Items Optimization

    Hi,
    I'm using trx FPBW to load data into DFKKOPBW table, and then load this data into BW.
    This process takes somewhere between 7 and 8 hours.
    Is there any way to optimize this time?
    We are running it on a weekly basis.
    thanks for the help.
    Mauricio

    Oscar,
    thanks for the tip.
    Do you know what programs are involved?
    I think some new index may help too, but I need to know what tables are being used and what fields are being filtered.
    thanks again.
    Mauricio

  • Animated Gif Optimization (exporting)

    I'm attempting to export an animated GIF using the Export Wizard. My problem is that I used high quality pictures and gradients, and now I cannot seem to find an export optimization that will display these correctly. I tried everything from Adaptive down to Custom, but everything gives me blurred pictures and less than acceptable results. I'm using Fireworks 8.
    If anyone could please help that'd be great! Thanks!

    jmosier14 wrote:
    > I'm attempting to export an animated GIF using the Export Wizard. My problem is
    > that I used high quality pictures and gradients and now I cannot seem to find
    > an export optimization that will display these correctly.
    The GIF format, including GIF89a, is designed for flat, poster-like images. It does not compress gradients or photos very successfully.
    You'll end up with a better looking animation if you export in SWF format instead of GIF. Click on the Quick Export option menu at the top right of the document window and choose Macromedia Flash > Export SWF.
    In the Save As dialog box, click on the Options button to customize the export settings.
    Linda Rathgeber [PVII] *Adobe Community Expert-Fireworks*
    http://www.projectseven.com
    Fireworks Newsgroup: news://forums.projectseven.com/fireworks/
    CSS Newsgroup: news://forums.projectseven.com/css/
    http://www.adobe.com/communities/experts/

  • How to include optimizer hints in Discoverer

    We have a Discoverer report which used to run fine prior to a DB migration from 9.2.0.6 to 10g RAC (10.2.0.4).
    Since the database was migrated to 10g RAC, the same report runs for a longer time and fails with a ROW_ID error.
    We ran the SQL generated by the report in SQL*Plus with the optimizer hint below:
    Select /*+ optimizer_features_enable('9.2.0') */
    This query ran well with the optimizer hint, but I am wondering how to use the optimizer hint in Discoverer Plus/Desktop.
    Select /*+ optimizer_features_enable('9.2.0') */
    C.EMPLOYEE_NUMBER||' '||A.EMPLOYEE_NAME, A.REPORTS_TO, C.SERVICE_CODE, COUNT(B.ACCOUNT_NUMBER)
    FROM PSTAGE.NEW_EMPLOYEE_MASTER A,
    PSTAGE.NEW_ALL_WORK_ORDER_MASTER B,
    PSTAGE.NEW_ALL_WORK_ORDER_DETAIL C
    WHERE ( ( B.WORK_ORDER_NUMBER = C.WORK_ORDER_NUMBER AND B.SITE_ID = C.SITE_ID )
    AND ( A.EMPLOYEE_NUMBER = C.EMPLOYEE_NUMBER AND A.SITE_ID = C.SITE_ID ) )
    AND ( B.WO_STATUS <> 'CN' )
    AND ( C.EMPLOYEE_NUMBER = ANY(SELECT S254_200018.EMPLOYEE_NUMBER
    FROM PSTAGE.NEW_EMPLOYEE_MASTER S254_199317,
    PSTAGE.NEW_ALL_WORK_ORDER_MASTER S254_199854,
    PSTAGE.NEW_ALL_WORK_O
    Thanks in advance

    Hi Sunil
    In the Administrator tool, you can add hints to the driving folder used in your query. A first glance at your report seems to indicate that B might be the driver.
    If you launch the Administrator tool, open the business area then right-click on the folder in question you can select Properties. The second to last property is called Optimizer hints. Try setting the same hint in here exactly the way you would do it inside SQL.
    I am not 100% sure whether this would take, as this isn't a folder hint per se, but it is worth a try. You might also want to look at this thread: how to design Optimizer hints to the generated SQL
    Another thing to check is the code that is being generated by your Discoverer worksheets. Do you by chance see a NOREWRITE hint being added? This is a common issue with newer systems. This hint tells the database that the query cannot be rewritten, which in most cases will cause poor performance. If this is happening to you, I advise you to disable that hint by editing pref.txt.
    Out of the box, Discoverer Plus will sometimes add the NOREWRITE hint. This will cause Plus worksheets to operate much slower than Desktop worksheets. You can disable the NOREWRITE hint by adding a new preference called UseNoRewriteHint to pref.txt in the Database section. After you have done this you will have to run the apply preferences script.
    [Database]
    UseNoRewriteHint = 0
    Be sure to close all of your IE windows so that a new JVM is loaded.
    For example, you might turn on UseNoRewriteHint (i.e. set it to 1) if you want users to always query against the latest data (e.g. created today), even though this might be slower than querying the summary data (e.g. created yesterday). The NOREWRITE hint instructs the optimizer to disable query rewrite for the query block, which overrides the setting of the parameter QUERY_REWRITE_ENABLED.
    Default Value: 0
    Valid Values
    0 = Do not add the NOREWRITE hint. This is the one I recommend.
    1 = Do add the NOREWRITE hint.
    Another possible area is with query prediction. This is taken from an Oracle note: under some circumstances, when you run a query against an Oracle 10g database, the query prediction might take up the majority of the time and CPU may hit 100%.
    The cause for this is an Oracle 10g (10.1) database issue, but seeing as you are on 10.2 this might not be an issue any more. I throw it out there just in case you still have issues and want to raise this with Oracle. The last I heard, the root cause was still under investigation in unpublished Bug 4024370. There was a workaround for the issue:
    1. Disable Query Prediction (strongly recommended anyway):
    For Plus/Viewer:
    Edit pref.txt on the middle-tier server and set QPPEnable=0
    Run the applypreferences script (.sh or .bat)
    For Desktop:
    Edit the registry and set QPPEnable=0
    HKEY_CURRENT_USER\Software\ORACLE\Discoverer <version>\Database
    2. If you still wish to use Query Prediction while the database issue is being investigate, then you can configure the Query Predictor to use the Explain Plan method rather than the Dynamic Views method.
    For Plus/Viewer:
    Edit pref.txt on the middle-tier server and set QPPObtainCostMethod=0
    Run the applypreferences script (.sh or .bat)
    For Desktop:
    Edit the registry and set QPPObtainCostMethod=0
    HKEY_CURRENT_USER\Software\ORACLE\Discoverer <version>\Database
    Hope this helps
    Best wishes
    Michael
