Application optimization

Hi all,
I am trying to optimize the application, but it fails with the following errors:
- Errors in the OLAP storage engine: The attribute key cannot be found: Table: dbo_tblFactLegalApp, Column: ACCTDETAIL, Value: .
- Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
- Errors in the OLAP storage engine: An error occurred while processing the 'LegalApp' partition of the 'LegalApp' measure group for the 'LegalApp' cube from the MEDEVELOP database.
- Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation.
- Internal error: The operation terminated unsuccessfully.

Fiona's definitely got the right diagnosis. You must remove the invalid records from the fact table.
The frustrating thing with this error message, sometimes, is that it only tells you the first member of the dimension that is invalid. If you fix this one, and then re-process, you may get a second member. And then a third... and a fourth.
Try these queries in SQL Server Management Studio to identify all the invalid acctDetail members:
select distinct acctDetail
from tblFactLegalApp
except
select ID from mbrAcctDetail where calc = 'N'

select distinct acctDetail
from tblFac2LegalApp
except
select ID from mbrAcctDetail where calc = 'N'

select distinct acctDetail
from tblFactWBLegalApp
except
select ID from mbrAcctDetail where calc = 'N'
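If it helps to see which fact table each orphan came from, the three checks can be combined into one labelled result set. This is just a sketch, reusing the table and column names from the queries above (and note the NOT IN caveat in the comment):

```sql
-- Same tables/columns as the EXCEPT queries above, as one labelled result set.
-- Caveat: NOT IN returns no rows at all if mbrAcctDetail.ID can be NULL;
-- stick with the EXCEPT form above if that is a possibility.
select distinct 'tblFactLegalApp' as source_table, acctDetail
from tblFactLegalApp
where acctDetail not in (select ID from mbrAcctDetail where calc = 'N')
union all
select distinct 'tblFac2LegalApp', acctDetail
from tblFac2LegalApp
where acctDetail not in (select ID from mbrAcctDetail where calc = 'N')
union all
select distinct 'tblFactWBLegalApp', acctDetail
from tblFactWBLegalApp
where acctDetail not in (select ID from mbrAcctDetail where calc = 'N');
```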
That tells you the complete magnitude of the problem, but not what caused it. Normally I find the cause to be script logic that uses a member property in the *REC statement; there's no validation available to ensure that the property value is a valid base-level member of the appropriate dimension. Or it could be a business rule set up with a DIMLIST property, etc.
After you identify the root cause, you can then decide how to fix the corrupt data. Easiest and safest is to delete these records, and then re-run the logic that caused the problem initially. But sometimes that means you'll lose data. If that's the case, then you can run an update instead, e.g. update tblFactLegalApp set acctDetail = 'somethingBetter' where acctDetail = 'whatever'.
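Both fix-up options, sketched against tblFactLegalApp only (the member values are placeholders; repeat for the other two fact tables as needed):

```sql
-- Option 1 (safest): delete the orphaned rows, then re-run the logic
-- that originally wrote them.
delete from tblFactLegalApp
where acctDetail not in (select ID from mbrAcctDetail where calc = 'N');

-- Option 2 (keeps the data): remap the bad key to a valid base-level member.
update tblFactLegalApp
set acctDetail = 'somethingBetter'   -- placeholder member names
where acctDetail = 'whatever';
```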

Similar Messages

  • How can I optimize just the video on a project timeline?

    Hi everyone,
    I've been working on a one-hour documentary using original, non-optimized media. Now that I'm approaching the final steps of the edit, I would like to optimize all the video in the timeline, but NOT all the footage I have in the events.
    I did NOT optimize my media on import: all my events and projects are made up of unconverted video, just imported. I did that because I didn't have enough storage to transcode the whole 40 hours of footage to ProRes.
    The folders now full of media in Final Cut Events are the "original media" ones.
    Now I'll add some titles, subtitles and color correction, and I want things to be a little faster. Then I'll move on to exporting, and I know it is much better to export from optimized media than from original media; that's why I want a 'ProRes optimized media' project timeline.
    Thank you in advance to anyone with advice!

    Thank you Tom,
    at least I know there's no need to keep on wondering "WHY?"...
    This inability to transcode footage on the timeline seems to me a big downside of this new version... I just have in mind all the options for managing media in FCP 7...
    Thank you again,
    I always read your tips: very useful!

  • G770 won't boot normally or in safe mode... I suspect it has something to do with the boot optimizer

    My Lenovo G770 will not boot in normal or safe mode.  I usually escape out of the boot optimizer.  Today I let it run and it went to the "starting windows" screen with a brief startup of the windows 7 animation...it freezes for a second, then a quick flash of the bsod, then to the "windows error recovery page" giving me the option of "starting windows normally" or "Launch Startup Repair (recommended)." 
    Starting Windows Normally eventually brings me back to the same place, repeating what I just described in the paragraph above.
    When I launch the Startup Repair it "Cannot repair this computer automatically".
    So I go to view advanced options for system recovery and support.
    It brings me to 5 options:
    Startup Repair (we already tried this above)
    System Restore (unfortunately I didn't create any restore points)
    System Image Recovery (unfortunately I haven't created an image to recover)
    Windows Memory Diagnostic (no problems found-done several times)
    Command Prompt (don't know what I can do here except for remove a bad/corrupted driver which may be the problem, but I don't know the driver name that is associated with the boot optimizer...can anyone tell me this?)
    I've tried booting to safe mode in all of its incarnations and I can't even do that..it repeats the same things as stated above...windows 7 animation briefly starts then locks up, flash of bsod, then the windows error recovery page.
    I've tried booting to last known good configuration (same thing occurs... brief startup of the Windows 7 animation, freeze, flash of BSOD, then the error recovery page).
    The only thing that has given me any kind of result was "disabling system restart on system failure."  When I do this, the BSOD doesn't flash briefly: it stays, and it gives me the error message page_fault_in_nonpaged_area.
    I'm at a loss as to what to do.  Not being able to boot into Safe Mode even is really frustrating.  Any advice from anyone?  can I remove the driver associated with the boot optimizer?  If so, what is the name of the driver and where (directory) is it located? 

    How did you resolve the issue?
    I have exactly the same issue.
    When I go System Image Recovery --> Select System Image --> Advanced, I can open all the drives (Local Drive (C:), LENOVO (D:), Local Disk (E:), and Boot (X:), where I think the boot executable lives). It comes up with an Open prompt asking me to enter a File Name, with file type: Setup Information.
    I don't know which setup information it wants or where to find it on my drives.
    Does anyone know how to fix this?
    I was trying to re-install Win 7 from DVD, but that is not executing either.
    Can I boot from a USB Ubuntu drive and install Win 7 from there? But how?
    I need help!

  • Column optimization in GUI_DOWNLOAD--Excel

    Hi Experts,
       I am writing an Excel file using the GUI_DOWNLOAD function module. Is there any way to do column-width optimization in the Excel file while downloading?
    Thanks and regards,
    Venkat

    Hi,
    There is complete and very good documentation by SAP available at this URL. Please read it.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/204d1bb8-489d-2910-d0b5-cdddb3227820
    Hope your query gets solved.
    Thanks and regards,
    Ramani N

  • Query optimization in Oracle 8i (tuning)

    Hi everyone,
    The following SQL statement uses more than 15% of the CPU of the machine where it is executed. Could somebody please help me rewrite or hint the query?
    This is the statement:
    SELECT
    /*+ INDEX(APD IDX_ABAPLANI_DET_SOL)  */ 
    apd.sinonimo,
    apd.sinonimo_planificacion,
    apd.cod_despensa,
    apd.estante_cod,
    apd.correlativo_solicitud,
    apd.prioridad,
    apd.correlativo_det_sol,
    apd.insumo_sinonimo,
    apd.cantidad_solicitada,
    apd.cantidad_despachada,
    apd.estado,
    apd.sinonimo_usuario,
    apd.sinonimo_observacion,
    ap.fecha_creacion,
    ap.centro_resultado,
    aud.nombre,
    aud.a_paterno,
    aud.rut,
    aud.username,
    cenres.cod_flex codigocr,
    insumo.cod_flex insumocod,
    cenres.des_flex despensa_descripcion,
    cenres.des_flex crdescripcion,
    insumo.des_flex insumodescripcion
    FROM
    aba_usuario_despachador aud,
    cenres,
    insumo,
    aba_planificacion_detalle apd,
    aba_planificacion ap
    WHERE ap.sinonimo = apd.sinonimo_planificacion
    AND aud.sinonimo = apd.sinonimo_usuario
    AND ap.centro_resultado = cenres.sinonimo
    AND insumo.sinonimo = apd.insumo_sinonimo
    AND apd.sinonimo_usuario = NVL (:b1, apd.sinonimo_usuario)
    AND apd.sinonimo_planificacion = NVL (:b2, apd.sinonimo_planificacion)
    AND apd.correlativo_solicitud = NVL (:b3, apd.correlativo_solicitud)
    AND apd.estante_cod = NVL (UPPER (:b4), apd.estante_cod)
    AND apd.cod_despensa = NVL (UPPER (:b5), apd.cod_despensa)
    AND apd.estado = NVL (:b6, apd.estado)
    AND ap.centro_resultado = NVL (:b7, ap.centro_resultado)
    AND TO_DATE (TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy'), 'dd/mm/yyyy')
    BETWEEN TO_DATE (NVL (:b8,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    AND TO_DATE (NVL (:b9,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    AND apd.estado NOT LIKE :b10
    ORDER BY apd.sinonimo;
    The version of the database is 8.1.7.4.0.
    Here is the output of EXPLAIN PLAN:
    Plan
    SELECT STATEMENT  CHOOSECost: 2,907  Bytes: 104,312  Cardinality: 472                                               
         32 SORT ORDER BY  Cost: 2,907  Bytes: 104,312  Cardinality: 472                                          
              31 CONCATENATION                                     
                   15 FILTER                                
                        14 NESTED LOOPS  Cost: 11  Bytes: 52,156  Cardinality: 236                           
                             11 NESTED LOOPS  Cost: 10  Bytes: 177  Cardinality: 1                      
                                  8 NESTED LOOPS  Cost: 9  Bytes: 133  Cardinality: 1                 
                                       5 NESTED LOOPS  Cost: 8  Bytes: 67  Cardinality: 1            
                                            2 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION_DETALLE Cost: 7  Bytes: 52  Cardinality: 1       
                                                 1 INDEX FULL SCAN NON-UNIQUE ADMABA.IDX_ABAPLANI_DET_SOL Cost: 3  Cardinality: 1 
                                            4 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION Cost: 1  Bytes: 15  Cardinality: 1       
                                                 3 INDEX UNIQUE SCAN UNIQUE ADMABA.PK_ABA_PLANIFICACION Cardinality: 1 
                                       7 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_USUARIO_DESPACHADOR Cost: 1  Bytes: 3,498  Cardinality: 53            
                                            6 INDEX UNIQUE SCAN UNIQUE ADMABA.ABA_USUARIO_DESPACHADOR_PK Cardinality: 53       
                                  10 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 14,828  Cardinality: 337                 
                                       9 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 337            
                              13 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 1,037,828  Cardinality: 23,587
                                  12 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 23,587                 
                   30 FILTER                                
                        29 NESTED LOOPS  Cost: 11  Bytes: 52,156  Cardinality: 236                           
                             26 NESTED LOOPS  Cost: 10  Bytes: 177  Cardinality: 1                      
                                  23 NESTED LOOPS  Cost: 9  Bytes: 133  Cardinality: 1                 
                                       20 NESTED LOOPS  Cost: 8  Bytes: 67  Cardinality: 1            
                                            17 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION_DETALLE Cost: 7  Bytes: 52  Cardinality: 1       
                                                 16 INDEX RANGE SCAN NON-UNIQUE ADMABA.IDX_ABAPLANI_DET_SOL Cost: 3  Cardinality: 1 
                                            19 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_PLANIFICACION Cost: 1  Bytes: 15  Cardinality: 1       
                                                 18 INDEX UNIQUE SCAN UNIQUE ADMABA.PK_ABA_PLANIFICACION Cardinality: 1 
                                       22 TABLE ACCESS BY INDEX ROWID ADMABA.ABA_USUARIO_DESPACHADOR Cost: 1  Bytes: 3,498  Cardinality: 53            
                                            21 INDEX UNIQUE SCAN UNIQUE ADMABA.ABA_USUARIO_DESPACHADOR_PK Cardinality: 53       
                                  25 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 14,828  Cardinality: 337                 
                                       24 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 337            
                              28 TABLE ACCESS BY INDEX ROWID OPS$NUCLEO.NUC_CODIGOS_FLEXIBLES Cost: 1  Bytes: 1,037,828  Cardinality: 23,587
                                   27 INDEX UNIQUE SCAN UNIQUE OPS$NUCLEO.NUC_CODFLEX_PK Cardinality: 23,587
    Thanks in advance!
    Edited by: user491853 on 21-Aug-2012 15:29

    A few comments, looking at your SQL query:
    How much time is the query taking?
    How many rows are there in the tables?
    Make sure the stats are up to date.
    Please kindly follow the instructions provided by others as well.
    >
    The version of the database is 8.1.7.4.0
    >
    Suggestion: Upgrade your version. The Oracle Cost Based Optimizer is much smarter now. Upgrading will make your life much easier, as there are so many enhancements.
    AND TO_DATE (TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy'), 'dd/mm/yyyy')
    BETWEEN TO_DATE (NVL (:b8,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    AND TO_DATE (NVL (:b9,TO_CHAR (ap.fecha_creacion, 'dd/mm/yyyy')),'dd/mm/yyyy')
    Why are you using TO_DATE/TO_CHAR on a date column?
    AND ap.centro_resultado = NVL (:b7, ap.centro_resultado)
    The same can be rewritten as below:
    AND ((ap.centro_resultado = :b7 AND :b7 IS NOT NULL) OR :b7 IS NULL)
    This applies to the other predicates you are using as well.
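Applying the same idea to the date-range predicate, a sketch (not tested against your data): it keeps fecha_creacion free of functions so an index on it remains usable, and the `+ 1` allows for any time component in the column.

```sql
AND (:b8 IS NULL OR ap.fecha_creacion >= TO_DATE(:b8, 'dd/mm/yyyy'))
AND (:b9 IS NULL OR ap.fecha_creacion <  TO_DATE(:b9, 'dd/mm/yyyy') + 1)
```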
    A table used in the plan is not found in your SQL query, e.g. NUC_CODIGOS_FLEXIBLES.
    Regards
    Biju

  • Unable to optimize album art on my iPod and then display the art on my iPod

    When I optimize album art it gives me an error message "unknown error (-50)"
    I would like to fix that. Any pointers?

    Get the order number from your iTunes account's "Purchase History", then send a mail to the iTunes Store from the following website: www.apple.com/support/itunes/ They will provide you an exception to redownload, because it's Apple's problem.

  • Help needed to optimize the query

    Help needed to optimize the query:
    The requirement is to select the record with the max eff_date from HIST_TBL, and that max eff_date should be >= '01-Jan-2007'.
    This query has a high cost and takes around 15 minutes to execute.
    Can anyone help to fine-tune this?
       SELECT c.H_SEC,
                    c.S_PAID,
                    c.H_PAID,
                    table_c.EFF_DATE
       FROM    MTCH_TBL c
                    LEFT OUTER JOIN
                       (SELECT b.SEC_ALIAS,
                               b.EFF_DATE,
                               b.INSTANCE
                          FROM HIST_TBL b
                         WHERE b.EFF_DATE =
                                  (SELECT MAX (b2.EFF_DATE)
                                     FROM HIST_TBL b2
                                    WHERE b.SEC_ALIAS = b2.SEC_ALIAS
                                          AND b.INSTANCE =
                                                 b2.INSTANCE
                                          AND b2.EFF_DATE >= '01-Jan-2007')
                               OR b.EFF_DATE IS NULL) table_c
                    ON  table_c.SEC_ALIAS=c.H_SEC
                       AND table_c.INSTANCE = 100;

    To start with, I would avoid scanning HIST_TBL twice.
    Try this
    select c.h_sec
         , c.s_paid
         , c.h_paid
         , table_c.eff_date
      from mtch_tbl c
      left
      join (
              select sec_alias
                   , eff_date
                   , instance
                from (
                        select sec_alias
                             , eff_date
                             , instance
                             , max(eff_date) over(partition by sec_alias, instance) max_eff_date
                          from hist_tbl b
                          where eff_date >= to_date('01-jan-2007', 'dd-mon-yyyy')
                             or eff_date is null
                      )
                where eff_date = max_eff_date
                  or eff_date is null
           ) table_c
        on table_c.sec_alias = c.h_sec
       and table_c.instance  = 100;

  • Aggregation script is taking long time - need help on optimization

    Hi All,
    Currently we are working to build a BSO solution (version 11.1.2.2) for a customer, where we are facing a performance issue in aggregating the database. The most common activity of the solution will be to generate data for different scenarios from Actual and Budget (Actual vs. Budget difference data in one scenario), to be used mainly for reporting purposes.
    We are aggregating the data to top level using the AGG command for sparse dimensions. While doing this, we found that it creates a lot of page files, thereby filling up the available space on the drive (to the tune of 70 GB). Moreover, it takes a long time to aggregate. The numbers of stored members present are as follows:
    Dimension - Type - Stored member (Total members)
    Account - Dense- 1597 (1845)
    Period - Dense - 13 (19)
    Year - Sparse - 11 (12)
    Version - Sparse - 2 (2)
    CV - Sparse- 5 (6)
    Scenario - Sparse - 94 (102)
    EV - Sparse - 120 (122)
    FC - Sparse- 118 (121)
    CP - Sparse - 1887 (2049)
    M1 - Sparse - 4873 (4874)
    Entity - Sparse - 12020 (32349) - Includes two alternate hierarchies for rolling up the data
    The other properties are as follows:
    Index Cache - 152000
    Data File Cache - 32768
    Data cache - 153600
    ACR = 0.65
    We are using Buffered I/O
    The level 0 data file is about 3 GB (2 years of Budget and 1 year 2 months of Actuals data).
    The customer is going to use SmartView to retrieve the data and has a Planning Plus license only, so we could not go for an ASO solution. We could not reduce the members of the huge sparse dimensions M1 and CP either. To improve data retrieval time, we had to make upper-level members stored, which resolved the retrieval issue.
    I am seeking for help on the following:
    1. How can we optimize the time taken? Currently each dimension takes about an hour to aggregate. CALC DIM takes even longer, hence we opted for AGG.
    2. Will a change of dense and sparse settings help our cause? The ACR is on the low side. Please note that most calculations are on either the Period dimension or FC; there is no such calculation on the Account dimension.
    3. Will changing a few non-level-0 members from stored to dynamic calc help? Will this slow down calculations in the cube?
    4. What should be the best performance order for this cube?
    Appreciate your help in this regard,
    Regards,
    Sukhamoy

    Please provide the following information:
    1) Block size and other statistics
    2) Aggregation script
    >>Index Cache - 152000
    >>Data File Cache - 32768
    >>Data cache - 153600
    Try these settings:
    Index Cache - 1120000
    Data cache - 3153600
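For reference, those cache sizes can be set per database from MaxL. A sketch only, assuming a hypothetical AppName.DbName and that the figures above are in KB (as they are in the database properties panel):

```
alter database AppName.DbName set index_cache_size 1120000KB;
alter database AppName.DbName set data_cache_size 3153600KB;
```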

  • In-Place Element Structures, References and Pointers, Compiler Optimization, and General Stupidity

    [The title of this forum is "Labview Ideas". Although this is NOT a direct suggestion for a change or addition to Labview, it seems appropriate to me to post it in this forum.]
    In-Place Element Structures, References and Pointers, Compiler Optimization, and General Stupidity
    I'd like to see NI actually start a round-table discussion about VI references, Data Value references, local variables, compiler optimizations, etc. I'm a C programmer; I'm used to pointers. They are simple, functional, and well defined. If you know the data type of an object and have a pointer to it, you have the object. I am used to compilers that optimize without the user having to go to weird lengths to arrange it. 
    The 'reference' you get when you right click and "Create Reference" on a control or indicator seems to be merely a shorthand read/write version of the Value property that can't be wired into a flow-of-control (like the error wire) and so causes synchronization issues and race conditions. I try not to use local variables.
    I use references a lot like C pointers; I pass items to SubVIs using references. But the use of references (as compared to C pointers) is really limited, and the implementation is inconsistent, not factorial in capabilities, and buggy. For instance, why can you pass an array by reference and NOT be able to determine the size of the array EXCEPT by dereferencing it and using the "Size Array" VI? I can even get references for all array elements; but I don't know how many there are...! Since arrays are represented internally in Labview as handles, and consist of basically a C-style pointer to the data, and array sizing information, why is the array handle opaque? Why doesn't the reference include operators to look at the referenced handle without instantiating a copy of the array? Why isn't there a "Size Array From Reference" VI in the library that doesn't instantiate a copy of the array locally, but just looks at the array handle?
    Data Value references seem to have been invented solely for the "In-Place Element Structure". Having to write the code to obtain the Data Value Reference before using the In-Place Element Structure simply points out how different a Labview reference is from a C pointer. The Labview help page for Data Value References simply says "Creates a reference to data that you can use to transfer and access the data in a serialized way.".  I've had programmers ask me if this means that the data must be accessed sequentially (serially)...!!!  What exactly does that mean? For those of us who can read between the lines, it means that Labview obtains a semaphore protecting the data references so that only one thread can modify it at a time. Is that the only reason for Data Value References? To provide something that implements the semaphore???
    The In-Place Element Structure talks about minimizing copying of data and compiler optimization. Those kinds of optimizations are built into the compiler in virtually every other language... with no special 'construct' needing to be placed around the code to identify that it can be performed without a local copy. Are you telling me that the Labview compiler is so stupid that it can't identify certain code threads as needing to be single-threaded when optimizing? That the USER has to wrap the code in semaphores before the compiler can figure out it should optimize??? That the compiler cannot implement single threading of parts of the user's code to improve execution efficiency?
    Instead of depending on the user base to send in suggestions one-at-a-time it would be nice if NI would actually host discussions aimed at coming up with a coherent and comprehensive way to handle pointers/references/optimization etc. One of the reasons Labview is so scattered is because individual ideas are evaluated and included without any group discussion about the total environment. How about a MODERATED group, available by invitation only (based on NI interactions with users in person, via support, and on the web) to try and get discussions about Labview evolution going?
    Based solely on the number of Labview bugs I've encountered and reported, I'd guess this has never been done, with the user community, or within NI itself.....

    Here are some articles that can help provide some insights into LabVIEW programming and the LabVIEW compiler. They are both interesting and recommended reading for all intermediate-to-advanced LabVIEW programmers.
    NI LabVIEW Compiler: Under the Hood
    VI Memory Usage
    The second article is a little out-of-date, as it doesn't discuss some of the newer technologies available such as the In-Place Element Structure you were referring to. However, many of the general concepts still apply. Some general notes from your post:
    1. I think part of your confusion is that you are trying to use control references and local variables like you would use variables in a C program. This is not a good analogy. Control references are references to user interface controls, and should almost always be used to control the behavior and appearance of those controls, not to store or transmit data like a pointer. LabVIEW is a dataflow language. Data is intended to be stored or transmitted through wires in most cases, not in references. It is admittedly difficult to make this transition for some text-based programmers. Programming efficiently in LabVIEW sometimes requires a different mindset.
    2. The LabVIEW compiler, while by no means perfect, is a complicated, feature-rich set of machinery that includes a large and growing set of optimizations. Many of these are described in the first link I posted. This includes optimizations you'd find in many programming environments, such as dead code elimination, inlining, and constant folding. One optimization in particular is called inplaceness, which is where LabVIEW determines when buffers can be reused. Contrary to your statement, the In-Place Element Structure is not always required for this optimization to take place. There are many circumstances (dating back years before the IPE structure) where LabVIEW can determine inplaceness and reuse buffers. The IPE structure simply helps users enforce inplaceness in some situations where it's not clear enough on the diagram for the LabVIEW compiler to make that determination.
    The more you learn about programming in LabVIEW, the more you realize that inplaceness itself is the closest analogy to pointers in C, not control references or data references or other such things. Those features have their place, but core, fundamental LabVIEW programming does not require them.
    Jarrod S.
    National Instruments

  • SharePoint Foundation 2013 Optimization For Large File Transfer?

    We are considering upgrading from  WSS 3.0 to SharePoint Foundation 2013.
    One of the improvements we want to see after the upgrade is a better user experience when downloading large files.  It can be done now, but it is not reliable.
    Our document library consists of mostly average sized Office documents, but it also includes some audio and video files and software installer package zip files ranging from 100MB to 2GB in size.
    I know we can change the settings to "allow" larger than default file downloads but how do we optimize the server setup to make these large file transfers work as seamlessly as possible? More RAM on the SharePoint Foundation server? Other Windows,
    SharePoint or IIS optimizations?  The files will often be downloaded from the Internet, so we will not have control over the download speed.

    SharePoint is capable of sending large files, it is an HTTP stateless system like any other website in that regard. Given your server is sized appropriately for the amount of concurrent traffic you expect, I don't see any special optimizations required.
    Trevor Seward
    Follow or contact me at...
    This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs.
    I see information like this posted, warning against doing it, as if large files are going to cause your SharePoint server and SQL to crash.
    http://blogs.technet.com/b/praveenh/archive/2012/11/16/issues-with-uploading-large-documents-on-document-library-wss-3-0-amp-moss-2007.aspx
    "Though SharePoint is meant to handle files that are up to 2 gigs in size, it is not practically feasible and not recommended as well."
    "Not practically feasible" sounds like a pretty dire warning to stay away from large files.
    I had seen some other links warning that large files in the SharePoint database cause problems with fragmentation and large amounts of wasted space that doesn't go away when files are removed, or that the server may run out of memory because downloaded files are held in RAM.
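For reference, the per-web-application upload ceiling those warnings revolve around is set in the SharePoint 2013 Management Shell. A sketch with a placeholder URL (MaximumFileSize is in MB and caps at 2047):

```powershell
# Raise the upload limit for one web application (placeholder URL).
$wa = Get-SPWebApplication "http://sharepoint.example.com"
$wa.MaximumFileSize = 2047   # MB; 2 GB is the hard ceiling
$wa.Update()
```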

  • Is it possible to force 16/32-bit stack alignment without using the optimizer?

    The compiler emits code targeted at the classic Pentium architecture for the -m32 memory model.  I'm running into problems mixing Sun Studio compiled code with code built with other compilers because the other compiler builds under the assumption that the stack is 16-byte aligned.
    The only way I've found to force Sun Studio to comply with that restriction is with -xarch={sse2a,sse3,...}, but this causes the code to pass through the optimizer.  As noted in the documentation, if you want to avoid optimizations you must remove all flags that imply optimizations -- that is to say, there's no way to disable optimizations once enabled.  This should not, however, be treated as an optimization because it's an ABI requirement.
    I've scoured the documentation, spent many hours googling, digging through forums, and asking questions.
    The best I've come up with is the -xarch option which is sub-optimal because it has side effects.  I tried -xchip=pentium4 (this is what my other compilers have set as their default target), but the generated code doesn't force 16-byte stack alignment.
    Is there a way to force the compiler to emit code conforming to a different ABI without using the optimizer?
    -Brian

    Thank you for your response.
    I hope you won't mind my asking: do you have a way to prove that it's not possible to force 16-byte alignment without using the optimizer?  I ask because your username / profile don't give the impression you work for Oracle, so while I think you're probably right it's at least possible that we're both mistaken.  I haven't been able to find any documentation on either stack alignment or altering the targeted ABI short of using the -xarch flag, and even there the details are fairly sketchy.
    -Brian

  • Need to Optimize 3D performance on an Envy 17 3D? (...and other random bits)

    Hey there,
    The name's Darren. Nice to meet you. While I'm new to posting around these parts, I have been lurking for a little bit. Here's the deal: I work for another part of HP -- I run the blog, thenextbench.com. There, I'm working on various stories: How-tos, tweaks, tips and whatnot. What I'm wondering is if you guys would find it useful for me to post bits of some of my stories here. For example, I did one a while back about setting up games to work in 3D on an ENVY 17 3D....and getting better performance. 
    (The story originally ran here)
    You’ve bought an ENVY 17 3D. Awesome. You’re rocking it with 3D movies and I’m going to make the wild assumption that you’ve played some games since the Envy 17 3D got updated with that snazzy TriDef 3D ignition software. It’s actually dead-simple to get up-and-running with its 300-plus supported games…but what if there is no preset profile for that brand new game you just bought or that super-obscure title you downloaded from some cool, underground hipster indie gaming site? Well, I’ve been tinkering a little with this machine and wanted to walk you through the proper steps to get you situated. So strap those fancy goggles firmly to your noggin and read on, my friends.
    For the sake of this story, I’m going to walk you through how I got things set up, step-by-step. If any of this seems a little redundant, bear with me. Also, the fine folks at TriDef have been great to work with on this - and while I don’t have all the answers, feel free to hit the comment box below and I’ll do my best to get the straight scoop from them. Also, I’d highly recommend you check the ddd forums as well. It is a VERY handy resource for 3D gaming on the ENVY 17 3D.
    STEP 1: The initial setup
    The first time you run the TriDef 3D Ignition software, hit the “Scan” button. It checks directories for known EXE files and instantly populates them on the game launch list. If you installed a popular game directly from a disc, it usually doesn’t have a problem. But if you’re like me, you download your games from digital download services like Steam. (What can I say? I lose discs all the time.) That’s when it gets a little trickier.  The game is afoot!
    STEP 2: Manually adding a game
    Click the “Add” button and it calls up a window. The first thing to look at is the drop down menu. It contains a current list of all the games automatically supported. Your game not there? Don’t sweat it yet. There’s a link in the window to the TriDef forums – there is an active community of users always creating new game profiles for you to download. Still nothing? There is still hope. Select the “Generic” profile for now. We’ll get back to that in Step 3.
    In the same window, you’re going to see a prompt to find the game location. You can either click a shortcut to the game or find the actual EXE file yourself. After that, make sure you create a name for the profile and save it.
    STEP 2a: Adding a Steam game
    I figured that it’d be a piece of cake. And it was at first. I downloaded Borderlands through Steam and when I created a profile pointing to the game file in the Steam directory, everything was groovy.
    (PROTIP: The TriDef software can work with game shortcuts, but Steam holds its game files in the “\Steam\steamapps\common” directory).
    But many other Steam-downloaded games gave me this oddball warning: “This game doesn’t support DirectX 9, 10 or 11.” These were new games - OF COURSE they supported the latest DX files. So I did a little digging, and it turns out there is an extra step required to make some Steam games work.
         1. Click the “Add” button in the TriDef menu.
         2. In the “Executable” field, point to the “steam.exe” file in the main Steam directory.
         3. Find a shortcut for the game you want to add. (If you don’t have one, open up the Steam client, right-click on the game and select “create shortcut on desktop.”)
         4. Right-click on the shortcut for the game. At the end of the link location it’ll have a number. Copy that number.
         5. Within TriDef’s Add window, enter “-applaunch [NUMBER]” in the field where it says “Command Line Arguments (optional)”.
         6. Look for the game’s profile as described above in Step 2.
         7. Save your progress.
    STEP 3: Optimizing your 3D performance
    Once you’ve cleared those first couple steps, it’s actually not that bad from here. You just want to optimize the experience so that you can get good 3D effects and keep the game playable. What you have to remember is that in order to render a 3D image, the Envy is effectively doubling what’s happening on-screen. My gut reaction with any game is to run it at the laptop’s native screen resolution (1920 by 1080). It looks pretty and can handle running those games in 2D just fine. Bring 3D into the equation and your frame rate will drop. But with a couple tweaks, I’ll get you back up to speed.
         1. First, start up a test game and just sit around in the game environment, not the game menus.
         2. Next, on your computer’s number pad, hit the “0” key to call up the 3D overlay menu. Use the 8 and 2 keys to navigate up and down and the 6 key to make selections.
         3. Push 2 until the “Performance” option is highlighted and hit the 6 key. There you should see the frame rate displayed (it’s labeled “FPS”). If the FPS number is above 30, you should be fine. That, of course, can change if there’s a lot of action happening on screen. In short, the higher the frame rate, the better.
            a. If your frame rate is below 30, consider lowering the game’s resolution, or move the cursor in the 3D overlay menu and lower the game’s 3D effects settings. Just highlight “Quality” and push the 6 key to toggle the 3D effects between High, Medium and Low.
         4. When you find the performance settings you like, hit Alt-Shift-S to save them. The next time you fire up that game, it’ll remember what you set.
    STEP 4: Tweaking your 3D experience.
    All right, so you’ve got the game running great, the 3D effects are there, but maybe you still want to adjust the settings a little further. For instance, the 3D effect is a little more jarring in real-time strategy games like StarCraft II and MMO games because you have menus and cursors floating over the world out of perspective with the rest of the 3D depth.  (Try selecting a target far downfield in an MMO and you’ll know what I’m talking about). There are all sorts of settings here that you can adjust. Experiment by adjusting the numbers for the “Depth” and “Focus” under the 3D menu. Under the “Options” and “Window and Cursor” sections, there are plenty of other toggles to switch on and off to your liking.
    It goes without saying, but make sure to hit Alt-Shift-S when you’re done, and the Ignition software will remember all your preferences.
    What About….?
    Just so you know, this story is an on-going work-in-progress that I plan to update as I learn more. Here are a couple things that I’m currently looking into with the Envy 17 3D:
    [This Game] Doesn’t Work at All / Is Glitchy in 3D. Yeah, I run into that problem as well every so often. DC Universe Online looks broken with tearing images when the 3D goggles are on. (Looks great in 2D, though). Other games, like Telltale Games’ new Back to the Future titles look five kinds of crazy. Those might be more specific fixes that require a deeper dive later on.
    What about Flash-based games? My gut reaction is that the technology requires DirectX 9, 10 or 11 to work, so this one might not be in the cards.
    What about older games running on Windows 7? There are plenty of old-skool classics I’d love to try in 3D, but they were all created in a pre-DirectX 9 world. That’s not stopping me from looking around for solutions, but no word yet.
    =-=-=-=-=-
    So....was this even remotely helpful? Would you want to see more stuff like this? Or bits from stories I've written posted here? Heck, if there were topics you wanted tackled in story-form, I'm all ears for that as well. 
    Thanks in advance for any feedback!
    GizmoGladstone
    Blogger-in-Chief, HP's thenextbench.com
    thenextbench.com
    While I professionally blog for HP about the latest laptops and desktops, these words are all mine.
    My job: Come up with unusual angles for talking about HP gear, dissecting how stuff works and provide tips on getting better performance with your tech.

    Hi @fjward ,
    Thank you for visiting the HP Support Forums, and welcome. I have looked into your issue with the HP ENVY 17-3090nr 3D Edition Notebook PC and its brightness control and Catalyst Control Center problems. I would uninstall any graphics drivers that are listed, along with the CCC software, restart the computer, then reinstall only the AMD package; it includes both the AMD graphics driver and the Catalyst Control Center. Then restart the computer again.
    Here is a link to the HP Support Assistant if you need it. Just download and run the application and it will help with the software and drivers on your system.
    You can also do a System Restore. System Restore will help if something updated automatically and did not go well on the notebook.
    When performing a System Restore, please remove any and all USB devices and disconnect all non-essential devices, as they can cause issues.
    Please let me know how this goes.
    Thanks.
    Please click “Accept as Solution” if you feel my post solved your issue; it will help others find the solution.
    Click the “Kudos, Thumbs Up” on the bottom to say “Thanks” for helping!

  • I bought my iPad in the US; now back in Bahrain, I can’t purchase applications because the default country is USA

    I bought my iPad in the US and had been using a prepaid card to purchase applications. Now that I'm back in Bahrain, I'm trying to purchase applications using my Bahrain credit card, but it can't get through because the default country is USA.
    I tried to change my country, but a message says I have a store credit balance and must spend it before changing stores. But how can I spend it when I only have 8 cents left?
    So how will I be able to purchase applications from Bahrain?

    You can try again after a few hours. If after waiting you still get the same error, you'll need to contact support.
    https://expresslane.apple.com/GetproductgroupList.action

  • Report and data coming out wrong after compressing data with full optimization

    In SAP BPC 5.1, to increase system performance, we ran a full optimization with data compression.
    The process ended with an error, and after logging back into the system, the reports and values are coming out wrong.
    What went wrong, and how can we rectify it?
    Regards
    prakash J

    This issue is resolved.

  • FI-CA Open Items Optimization

    Hi,
    I'm using transaction FPBW to load data into the DFKKOPBW table, and then loading this data into BW.
    This process takes somewhere around 7 to 8 hours.
    Is there any way to optimize this time?
    We are running it on a weekly basis.
    thanks for the help.
    Mauricio

    Oscar,
    thanks for the tip.
    Do you know which programs are involved?
    I think a new index may help too, but I need to know which tables are being used and which fields are being filtered.
    thanks again.
    Mauricio
