MDX query performance on ASO cube with dynamic members

We have an ASO cube and we are using MDX queries to extract data from that cube. We are doing some performance testing on the MDX data extract.
Recently we made around 15-20 account dimension members dynamic in the ASO cube, and the query now takes around an hour and a half to run on an empty cube. Earlier, with no dynamic members in the cube, the same query ran in 1 minute on the empty cube.
I am not clear why the MDX extract takes so much time on an empty cube when there is nothing to extract. Performance has also degraded when extracting from the cube with data in it.
Do dynamic members in the outline affect MDX performance? Is there a way to exclude dynamic members from the MDX extract?
I appreciate any insights on this issue.

I guess it depends on what the formulas of those members in the dynamic hierarchy are doing.
As an extreme example, I can write a member formula that counts every unique member combination in the cube and assign it to multiple members; regardless of whether there is any data in the database, that formula has to resolve itself when you query it, and it is going to take a long time. You are probably somewhere between that and a simple function that doesn't require any overhead. So without seeing the MDX it is hard to say what about it might be causing the issue.
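To make that concrete, a pathological formula of that sort might look like the following (the dimension names are illustrative only, not from your outline):

Count(CrossJoin([Product].Members, CrossJoin([Market].Members, [Scenario].Members)))

Put something like that on 15-20 members and every query that touches them has to walk the entire member space, empty cube or not.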
As far as excluding members, there are various functions in MDX to narrow down the set you are querying:
Filter(), Contains(), Except(), Is(), Subset(), UDA(), etc.
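For example, if you tag the formula members with a UDA, Except() can carve them out of the extract. A minimal sketch (the "DynCalc" UDA, the [Accounts] dimension, and the app/db names are all hypothetical):

SELECT
  {[Measures].[Sales]} ON COLUMNS,
  Except(
    Descendants([Accounts], [Accounts].Levels(0)),
    UDA([Accounts], "DynCalc")  /* hypothetical UDA on the dynamic members */
  ) ON ROWS
FROM [MyApp.MyDb]

The same Except()/UDA() pattern works at whatever level you are extracting.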
Keep in mind you did not make members dynamic, you made a hierarchy dynamic. That is not the same thing, and it does impact the way Essbase internally optimizes the database based on stored vs. dynamic hierarchies, so that alone can have an impact as well.

Similar Messages

  • Clear ASO region with dynamic members

    I have a problem clearing a region in an Oracle Essbase ASO cube; it fails with an error like:
    dynamic members are not allowed in data clear region specification
    when I try to execute this query:
    alter database 'aso'.'db' clear data in region '{CrossJoin(CrossJoin(CrossJoin(CrossJoin(CrossJoin(CrossJoin(CrossJoin(CrossJoin(CrossJoin({[var1]},{[var2]}),{[var3]}),{[var4]}),{[var5]}),{[a],[b],[c],[d]}),{Descendants([node],Levels([dim1],0))}),{Descendants([dim2],Levels([dim3],0))}),{Descendants([dim4],Levels([node1],0))}),{Descendants([dim5],Levels([node2],0))})}' physical;
    In my case there are level 0 dynamic members in dim5.
    As you can see, I tried the "physical" approach found on some Oracle forums, but without success. I want to work at leaf level for each node, excluding the members marked as dynamic. How can I achieve this? How can I exclude dynamic members in the hierarchy from the query, and, as in my case, only the dynamic leaves?
    Help is very appreciated.
    Thank you

    There are two things I have done:
    1. Move all of the dynamically calculated members under a parent like Statistical; then for dim5 I would select the level zero members except the Statistical parent.
    2. Put a UDA on each of the members with formulas, then exclude them. (You could conversely put a UDA on the members without formulas and select only those.)
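    A hedged sketch of option 2 against the original statement (the "DynCalc" UDA is hypothetical; you would tag your dynamic leaves with it first, and the Except() resolves to a plain set of stored members, which is what the clear region requires):

    /* hypothetical: dynamic leaves in dim5 carry the UDA "DynCalc" */
    alter database 'aso'.'db' clear data in region
      '{Except(Descendants([dim5], Levels([dim5],0)), UDA([dim5], "DynCalc"))}' physical;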

  • ASO Cube with attributes very slow in retrieval

    Hi,
    I have an ASO cube with 5 base dimensions and 8-9 attributes on the entity dimension. I have only 5-6 measures, which do averages and counts based on a 40-day period. However, the data is loaded at 15-minute increments.
    Entity
    Date - (date-time, lowest level being date)
    Time - (15-minute increments for the full 24-hour period; has an attribute associated with it)
    LocationType
    Measures.
    The sample formula is
    IIF(IsLevel([Locations].CurrentMember, 0), Avg(CrossJoin({[Measure].[Sale]}, {[DateDim].CurrentMember.Lag(40):[DateDim].CurrentMember})), Missing)
    Is there a way I can have this calculated as part of a script? Do you suggest I create a BSO cube to do these calculations and pass on the result?
    In OBIEE, the report is to display the following based on the date input:
    Entity Gen7, Entity Gen 6..... Entity Gen 2, Attr1, Attr2, Attr3, Attr4, Attr5, Attr6, Attr7, Measures

    Two things I would look at.
    1st - I don't know how much performance you would get out of this, but I'm not clear why you are using a CrossJoin in your MDX; it seems unnecessary and may cause more overhead. The following should work; you could also try using IsLeaf instead of IsLevel and see if that is any faster:
    IIF(IsLeaf([Locations].CurrentMember),
        Avg({[DateDim].CurrentMember.Lag(40):[DateDim].CurrentMember},
            [Measure].[Sale], INCLUDEEMPTY),
        Missing)
    2nd - your problem mostly revolves around the fact that you are running a 40-member sum/average for every member you are querying. It also sounds like the average is at the Day level, which is not level 0, so for all forty days ASO also has to calculate the results for each of those days. Remember that aggregations only get you so far; you should really think of everything in ASO as dynamic, and that is why what you have set up is not going to work that well: it is too calc intensive.
    I don't know how practical this is, but to get this to work fast you would probably need to break the 15-minute increments out below the day level into another dimension, so the day level becomes a stored level zero member. The 15-minute increment dimension should also be stored. If at all possible, you would want an alternate stored hierarchy with the 40 days you want to base the average on. Enable alternate hierarchies in your aggregations, then change your MDX calc to be based on the parent of the 40-day hierarchy divided by 40. That would be fast.
    I suppose you could opt not to break out the 15-minute increments and just have the shared hierarchy made up of the 15-minute increments that are below the 40 days. That would still give you a good stored subtotal that, with some query hints, you could get optimized.
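    As a sketch of that alternate-hierarchy approach (the [Last40Days] rollup member is hypothetical; its children would be the 40 stored day members), the formula collapses to one stored retrieve divided by 40:

    IIF(IsLeaf([Locations].CurrentMember),
        ([DateDim].[Last40Days], [Measure].[Sale]) / 40,  /* [Last40Days] is a hypothetical stored rollup */
        Missing)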

  • VAL_FIELD selection to determine RSDRI or MDX query: performance tuning

    According to one of the HTGs, I am working on performance tuning. One of the tips is to try to query base members by using BAS(xxx) in the expansion pane of a BPC report.
    I did so and found an interesting issue in one of the COPA reports.
    With the income statement, when I choose one node, GROSS_PROFIT, i.e. BAS(GROSS_PROFIT), it generates an RSDRI query, as I can see in UJSTAT. When I choose its parent, BAS(DIRECT_INCOME), it generates an MDX query!
    I checked: DIRECT_INCOME has three members, GROSS_PROFIT, SGA, REV_OTHER, and none of them has any formulas.
    Instead of calling BAS(DIRECT_INCOME), I called BAS(GROSS_PROFIT),BAS(SGA),BAS(REV_OTHER), and I got an RSDRI query again.
    So in summary:
    BAS(PARENT) => MDX query.
    BAS(CHILD1) => RSDRI query.
    BAS(CHILD2) => RSDRI query.
    BAS(CHILD3) => RSDRI query.
    BAS(CHILD1),BAS(CHILD2),BAS(CHILD3) => RSDRI query.
    I know VAL_FIELD is a SAP reserved name for BPC dimensions. My question is: why does BAS(PARENT) generate an MDX query?
    Interestingly, I can repeat this behavior in my system. My intention is to always get the RSDRI query.
    George

    Ok - it turns out that Crystal Reports disregards BEx Query variables when they are put in the Default Values section of the filter selection.
    I had mine there, and even though CR prompted me for the variables AND the SQL statement it generated had an INCLUDE statement with those variables, I could see from my result set that it still returned everything in the cube, as if there were no restriction on Plant, for instance.
    I should have paid more attention to the info message I got in the BEx Query Designer. It specifically states that a "Variable located in Default Values will be ignored in the MDX Access".
    After moving the variables to the Characteristic Restrictions, my report worked as expected. The slow response time is still an issue, but at least it's not compounded by trying to retrieve all records in the cube while I'm expecting fewer than 2k.
    Hope this helps someone else.

  • Create SCOM Group with dynamic members takes about 10 minutes!

    In our SCOM 2012 SP1 (CU3) environment we have about 800 Windows agents.
    The OperationsDB is on a Windows cluster (2 physical servers, each with 2 six-core processors); the data warehouse is on a separate cluster.
    When I create a group with dynamic members, it takes about 10 minutes. During this period all the consoles are busy and freezing.
    Is that normal?
    Regards
    Lehugo

    On the management server I got the following event log error during this time:
    OpsMgr Management Configuration Service failed to execute 'ConfigStoreStatsUpdate' engine work item due to the following exception
    Microsoft.EnterpriseManagement.ManagementConfiguration.DataAccessLayer.DataAccessException: Data access operation failed
       at Microsoft.EnterpriseManagement.ManagementConfiguration.DataAccessLayer.DataAccessOperation.ExecuteSynchronously(Int32 timeoutSeconds, WaitHandle stopWaitHandle)
       at Microsoft.EnterpriseManagement.ManagementConfiguration.SqlConfigurationStore.ConfigurationStore.ExecuteOperationSynchronously(IDataAccessConnectedOperation operation, String operationName)
       at Microsoft.EnterpriseManagement.ManagementConfiguration.SqlConfigurationStore.ConfigurationStore.WorkItemCompleted(IConfigServiceEngineWorkItemHandle workItemHandle, IConfigServiceEngineWorkItemResult workItemResult)
       at Microsoft.EnterpriseManagement.ManagementConfiguration.Interop.SharedWorkItem.ExecuteWorkItem()
       at Microsoft.EnterpriseManagement.ManagementConfiguration.Interop.ConfigServiceEngineWorkItem.Execute()
    System.Data.SqlClient.SqlException (0x80131904): Sql execution failed. Error 50000, Level 16, State 1, Procedure WorkItemMarkCompleted, Line 61, Message: Failed to report work item completion. Work item with id 1888748 is not assigned to service instance 'XXXXXX\Default'
       at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
       at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
       at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
       at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
       at System.Data.SqlClient.SqlCommand.CompleteAsyncExecuteReader()
       at System.Data.SqlClient.SqlCommand.EndExecuteNonQuery(IAsyncResult asyncResult)
       at Microsoft.EnterpriseManagement.ManagementConfiguration.DataAccessLayer.NonQuerySqlCommandOperation.SqlCommandCompleted(IAsyncResult asyncResult)

  • Query performance on Inventory Cube

    Hi All,
    I have a query on an Inventory cube with non-cumulative key figures; when I run the query with them, it takes 60 to 70 minutes. When I run the same query with the non-cumulatives removed, it displays results in 25 seconds. Is there any way we can improve the performance of a query that is affected by non-cumulative key figures?
    I have checked the performance-related tools: RSRV on the cube and master data shows no errors; in RSRT > Execute + Debug, most of the query time is consumed in the data manager; in ST03, the DB and data manager times, and also the unassigned time, are high.
    I know the query consumes time because of the non-cumulative key figures, as it needs to perform calculations on the fly, but it is taking a lot more than that. I appreciate your inputs on this query in advance.
      I will reward points.
    Regards
    Satish Reddy

    Hi Anil,
    It's nice to see you. We have compressed the cube with the marker update, and we are using only two InfoSources to the cube (BF and UM). As there are 150 queries on that cube, I don't want to build an aggregate especially for that one query. I also tried a DB statistics refresh; there is a process chain to delete and recreate indexes; and I analysed the cube and master data in RSRV, etc. None of it really helped. Would you please suggest a good solution for this? I appreciate it in advance.
    When I check the application log in the cube's Manage screen, it displays "Mass Upsert of Markers Update", so I assumed the markers are updated.
    Regards
    Satish Arra.

  • How to improve query performance of an ODS- with 320 million records

    Issue:
    The reports are giving time-outs during execution.
    Scenario:
    We have an ODS with approximately 320 million records in it.
    The reports are based on:
    The ODS, and
    InfoSets based on this ODS.
    These reports are giving time-outs during execution.
    Few facts about this ODS:
    There are around 75 restricted and calculated key figures used in the query definition.
    We can't replace this ODS with a cube, as there is a requirement for an InfoSet on it.
    This is in a BW 3.5 environment.
    Few things we tried:
    Secondary indices were created on the fields which appear in the selection screens of the reports. It has not worked.
    The restriction/calculation logic in the query definition could be moved to the backend. Will that make a difference?
    Question:
    Can you suggest ways to improve the query performance of this ODS?
    Your immediate response is highly appreciated. Thanks in advance.

    Hey!
    I think Oliver's questions are good. 320 million records are too much for an ODS. If you can get rid of the InfoSet, that would be helpful; why exactly do you need it? If you don't need it, you could partition your ODS by a characteristic and report over a MultiProvider.
    Is there a way to delete some data from the ODS?
    Maybe you can make an upgrade to 7.0 soon? There you can use InfoSets on InfoCubes.
    You could also try precalculation, as Sam says. This is possible with the Reporting Agent or Information Broadcasting; then you have it in your cache. Make sure your cache is large enough. Maybe you can use a table or something.
    Do you just need to run one or a few special reports at a specific time? Maybe you can make an update into another ODS, writing just the result into it. For this you can use update rules, or maybe the Analysis Process Designer (transaction RSANWB) is the better way.
    Maybe it is also possible to increase the parameter for your dialog runtime, rdisp/max_wprun_time (if you don't know it, your Basis team should; otherwise look here: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ab254cf2-0c01-0010-c28d-b26d04627e61).
    Best regards,
    Peter

  • Filters not getting passed in MDX query while using SAP BW with OBIEE

    Hello,
    I've been working on OBIEE with SAP BW as the back end. I've created some reports, and they work fine when there is a small amount of data. But when I try to run a report with 3 dimensions and 1 fact, it throws an error saying "No more storage space available for extending an internal table". When I checked the MDX query, I found that the filters I had applied to the request, and also those selected from prompts, are not getting passed in the query. So I tried running a simple request using a simple filter in Answers. Although this request returns results, I can't see the filter conditions in the query. The MDX query always shows a crossjoin, but I can't see filter conditions anywhere.
    Is this normal OBIEE behaviour, or am I doing something wrong? Can you please help me out with this?
    Thanks,
    Rocky

    Hello Sainath,
    We tried those things, but it is still giving the same error.
    State: HY00. Code: 10058. [NQODBC][SQL_STATE:HY000][nQSError: 10058] A general error has occurred. XML/A error returned from the server: Fault code: "XMLAnalysisError.0X80000005". Fault string: "The XML for Analysis provider encountered an error: MDX result contains too many cells (more than 1 million)". (HY000)
    The problem here, I think, is that the filter parameters are not getting passed in the MDX query. Any idea why that would happen? Is there any setting for this?
    Thanks in advance for help.
    Regards,
    Rocky

  • Query ID in Virtual Cube with services-Function module

    Hi,
    I am using virtual cube with services linked to a function module.
    The function module has fixed parameters (such as the InfoProvider name). None of these parameters contains query information such as the query ID or query name.
    Does anyone know how to determine which query executed this function module?
    Best Regards,
    Anil

    Hi Claudio,
    I never implemented a virtual InfoCube with services with a FM, but I know there are a couple of How To documents about it, named:
    - How to Reporting from External Data via Virtual InfoProvider
    - How to Implement a Virtual InfoCube with Services
    both with some code samples: did you read them?
    Hope it helps
    GFV

  • How to tune performance of a cube with multiple date dimension?

    Hi, 
    I have a cube with a measure. For a turn-time report, I am taking the difference of two dates and then the average, max, and min of that date difference. The graph is taking a long time to load. I am using Telerik report controls.
    Is there any way to tune up the performance of a cube with multiple date dimensions? What are the key rules and best practices for a cube to perform well?
    Thanks, 
    Amit

    Hi amit2015,
    According to your description, you want to improve the performance of an SSAS cube with multiple date dimensions. Right?
    In Analysis Services, there are many tips for improving the performance of a cube. In this scenario, I suggest you keep only one date dimension, and only include the columns which are required for your calculation. Please refer to "dimension design" in
    the link below:
    http://www.mssqltips.com/sqlservertip/2567/ssas--best-practices-and-performance-optimization--part-3-of-4/
    If you have any question, please feel free to ask.
    Simon Hou
    TechNet Community Support

  • ASO Cube with BSO Partition as a Target

    Hi,
    Can someone please explain the following to me? After reading different blogs as well as some other Essbase documents, I am still not able to understand:
    What is the actual use of having a BSO transparent partition (target) on an ASO cube (source)?
    How exactly does the write-back functionality work if I have a BSO transparent partition (target) on an ASO cube (source)?
    And lastly, in what business scenario would one implement a BSO transparent partition (target) on an ASO cube (source)?
    Your help is much appreciated.
    thanks,
    fikes

    I have a situation where I have a BSO (target) on top of an ASO (source). This was to try to resolve dynamic time series (where time and periods are in different dimensions). The DTS works almost perfectly in the BSO partition.
    The other problem (and why we wanted to go to ASO) was that BSO cubes do not handle attributes across partitions (we have quarterly data in transparent partitions) to the target cube; but we do have dynamic time series.
    What I'm facing now is that the attributes work in the BSO (target) except when using a dynamic time series.
    The attributes are defined in both the BSO and ASO cubes.
    Any suggestions on how to handle attributes AND dynamic time series ?

  • Unable to create ASO cube with 120 attributes in Essbase

    Hi All,
    Currently we have a requirement to build an ASO cube containing 6 hierarchies, and they want to include all the columns of three relational tables. So I am adding all the columns as attributes to each of three hierarchies, because they are all descriptive of each dimension.
    When I build each one individually, it works fine; when I combine them, it throws the error "Network Error. Cannot close the cube". I have seen in the Essbase limitations that it allows up to 256 dimensions; however, in my scenario it is not accepting even 140 (including all).
    Can someone assist? Is there any possible solution to build the cube?
    For your reference
    Using Essbase studio tool to deploy cube in Admin console
    Windows environment and SQL Server 2005 database
    Thanks in Advance

    "D:\ Huperion\Analytics\Bin\libdb42.dll"
    Is that a typo or is your path "Huperion" instead of "Hyperion"?
    I could see where that might cause a problem.

  • Issue with loading data into cube with duplicate members in different dimensions

    We have a cube in which duplicate members are allowed only in two dimensions (Period1 and Period2).
    The outline saves with no errors, but when I try to load data, I get errors like:
    "Member 2012-12 is a duplicate member in the outline
    A6179398-68BD-7843-E0C2-B5EE81808D0B    01011    cd00    st01    2905110000    EK    fo0000    NNNN-NN-NN    2012-12    cust00000$$$    1" 
    These dimensions represent two similar periods of time. Am I supposed to change the member names and aliases slightly in one of them (e.g. yyyy1-mm --> yyyy1-mm1)?
    Users wouldn't like it...
    Period1
      yyyy1
        yyyy1/q1
          yyyy1-mm1
          yyyy1-mm2
          yyyy1-mm3   
        yyyy1/q2
    Period2
      yyyy1
        yyyy1/q1
          yyyy1-mm1
            yyyy1-mm1-dd1 
            yyyy1-mm1-dd2
          yyyy1-mm2
          yyyy1-mm3   
        yyyy1/q2
    Thanks

    You may have to use fully qualified names, something like:
    [Period1].[2012-12]
    [Period2].[2012-12]
    You can refer to the Essbase admin guide section "Creating and Working With Duplicate Member Outlines".
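    For instance, a minimal MDX sketch (the app/db name is hypothetical) that disambiguates the two 2012-12 members via their qualified names:

    SELECT
      {[Period1].[2012-12]} ON COLUMNS,
      {[Period2].[2012-12]} ON ROWS
    FROM [MyApp.MyDb]

    The same qualified form is what a data source needs in order to target duplicate members in a duplicate member outline.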
    Regards,
    Sunil

  • Using PeriodToDate MDX function in an ASO cube (Dynamic Time Series)

    Hyperion Essbase 7.1.2
    Since ASO doesn't support Dynamic Time Series (DTS), I am trying to use the "PeriodsToDate" MDX function to get MTD and YTD totals. Below is my Time dimension:
    Time (Time - Stored)
    ...I---> 2007
    ......l---> Q1-2007
    .........l---> Jan-2007
    ............l---> 2007-01-01
    ............l---> 2007-01-02
    ............l---> 2007-01-03
    .........l---> Feb-2007
    .........l---> Mar-2007
    ......l---> Q2-2007
    ......l---> Q3-2007
    ......l---> Q4-2007
    Syntax:
    PeriodsToDate ( [layer [, member ]] )
    I tried many different ways to get the syntax right, but it didn't work when I set the formula on member "2007" for YTD totals, or on the months to get MTD. Any help would be appreciated.

    ASO does not have Dynamic Time Series at all (unless something has recently changed).
    Period-to-date functionality has to be managed via MDX formulas and/or alternate rollups in your time dimension.
    Time balancing does have to be on a stored hierarchy, which is what you generally want anyway. The issue with time balancing is that an alternate rollup in your time dimension can sometimes cause problems with it.
    If you want to use time balancing, I would suggest a single stored time hierarchy, and use an analytic view dimension for the period-to-date functionality.
    You can see a post I have on my blog that talks about this: garycris.blogspot.com
    One other thing to mention: if you want to use the built-in time balance functionality, you must tag a dimension as Accounts type, and it will have to be a dynamic dimension.
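    As a hedged sketch of that analytic view approach (all member and dimension names here are hypothetical): add a small dynamic View dimension with a stored default member [Periodic] and a [YTD] member whose formula is something like:

    Sum(
      CrossJoin(
        PeriodsToDate([Time].Generations(2), [Time].CurrentMember),  /* year-to-date span of Time members */
        {[View].[Periodic]}  /* anchor to the stored view member to avoid self-reference */
      )
    )

    PeriodsToDate with the year generation as the layer returns the Time members from the start of the year through the current member, and the CrossJoin with [Periodic] keeps the YTD member from referencing itself.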

  • Essbase MDX Query Performance Problem

    Hello,
    I'm doing an analysis in OBIEE against Essbase cubes, but I don't know why OBIEE generates two MDX queries against Essbase. The first one returns in a reasonable time (5 minutes), but the second one never returns.
    With
    set [_Year] as '[Year].Generations(2).members'
    set [_Month] as '[Mês Caixa].Generations(2).members'
    set [_Product2] as 'Filter([Product].Generations(2).members, (([Product].CurrentMember.MEMBER_Name = "SPECIAL" OR [Product].CurrentMember.MEMBER_ALIAS = "SPECIAL") OR ([Product].CurrentMember.MEMBER_Name = "EXECUTIVE" OR [Product].CurrentMember.MEMBER_ALIAS = "EXECUTIVE")))'
    set [_Client Name] as 'Filter([Client Name].Generations(2).members, (([Client Name].CurrentMember.MEMBER_Name = "JOHN DOE" OR [Client Name].CurrentMember.MEMBER_ALIAS = "JOHN DOE")))'
    set [_Service Name] as 'Generate([Service Name].Generations(2).members, Descendants([Service Name].currentmember, [Service Name].Generations(4), leaves))'
    select
    { [Accounts].[Paid Amount]
    } on columns,
    NON EMPTY {crossjoin({[_Year]},crossjoin({[_Month]},crossjoin({[_Product2]},crossjoin({[_Client Name]},{[_Service Name]}))))} properties MEMBER_NAME, GEN_NUMBER, [Year].[MEMBER_UNIQUE_NAME], [Year].[Memnor], [Mês Caixa].[MEMBER_UNIQUE_NAME], [Mês Caixa].[Memnor], [Product].[MEMBER_UNIQUE_NAME], [Product].[Memnor], [Client Name].[MEMBER_UNIQUE_NAME], [Client Name].[Memnor], [Service Name].[Member_Alias] on rows
    from [cli.Client]
    With
    set [_Year] as '[Year].Generations(2).members'
    set [_Month] as '[Mês Caixa].Generations(2).members'
    set [_Product2] as 'Filter([Product].Generations(2).members, (([Product].CurrentMember.MEMBER_Name = "SPECIAL" OR [Product].CurrentMember.MEMBER_ALIAS = "SPECIAL") OR ([Product].CurrentMember.MEMBER_Name = "EXECUTIVE" OR [Product].CurrentMember.MEMBER_ALIAS = "EXECUTIVE")))'
    set [_Client Name] as 'Filter([Client Name].Generations(2).members, (([Client Name].CurrentMember.MEMBER_Name = "JOHN DOE" OR [Client Name].CurrentMember.MEMBER_ALIAS = "JOHN DOE")))'
    set [_Service Name] as 'Generate([Service Name].Generations(2).members, Descendants([Service Name].currentmember, [Service Name].Generations(4), leaves))'
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    select
    { [Accounts].[_MSCM1]
    } on columns,
    NON EMPTY {crossjoin({[_Year]},crossjoin({[_Month]},crossjoin({[_Client Name]},{[_Service Name]})))} properties MEMBER_NAME, GEN_NUMBER, [Year].[MEMBER_UNIQUE_NAME], [Mês Caixa].[MEMBER_UNIQUE_NAME], [Client Name].[MEMBER_UNIQUE_NAME], [Service Name].[Member_Alias] on rows
    from [cli.Client]
    Does anyone know why OBIEE generates these two queries, and how to optimize them, since they are generated automatically by OBIEE?
    Thanks,

    Hi,
    I have been through the queries, and understand that "_MSCM1" is being aggregated over the Product set for Paid Amount, per the query extract below:
    member [Accounts].[_MSCM1] as 'AGGREGATE({[_Product2]}, [Accounts].[Paid Amount])'
    If I am getting it right, there is an aggregation rule missing for [Paid Amount] (I think that's the reason: the query has to aggregate _MSCM1 over "Paid Amount", i.e. just like any other dimension).
    Could you please check this? That is why I think BI is generating two queries. I am sorry if I got this wrong.
    Hope this helps.
    Thank you,
    Dhar
