Reducing WCF Verbosity

Our app has been working for a long time with WCF as is. However, we have a module now where the object mesh is very complex with a lot of data. It is transferring approximately 8 meg in a single WCF call. This takes a long time to transfer. So, I used
a data contract serializer to check what kind of xml is being created when WCF is transferring the data over the wire. It's absolutely massively bloated. It is transferring a lot of properties etc. which are simply null in memory.
My first thought was to implement IDataContractSurrogate so I could write a custom serializer. However, this interface doesn't seem to be implemented in Silverlight, so I can't use it. I think my only option is to serialize the data on one side, then send
the data as a string across WCF, and on the other side, deserialize using my own logic.
Incidentally, I tried converting the xml to json, and the json ended up being 5 meg more - 13 meg!  I used this tool:
http://www.utilities-online.info/xmltojson.
Anyway, I guess what I am asking is: are there any general principles that might help me here? Are there any options etc. to make the default data contract serializer more efficient? All my DataContracts are quite verbose in that most properties are marked
with DataMember attributes, but these properties should not be transferred across the wire when their value is null. I guess this points to an overall, inherent inefficiency in WCF. Ideas?
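One option worth trying before writing a custom serializer, sketched below with made-up type and property names: DataMember has an EmitDefaultValue flag, and setting it to False makes the DataContractSerializer omit a member entirely when its value is Nothing (or the type's default) instead of writing a nil element for it, which can shrink a sparsely populated object graph considerably.

Imports System.Collections.Generic
Imports System.Runtime.Serialization

' Sketch only: an illustrative DTO, not one of the real contracts.
<DataContract()>
Public Class CustomerDto
    ' Written to the XML only when a value is actually present.
    <DataMember(EmitDefaultValue:=False)>
    Public Property Name As String

    <DataMember(EmitDefaultValue:=False)>
    Public Property Notes As String

    ' A Nothing collection disappears from the XML entirely.
    <DataMember(EmitDefaultValue:=False)>
    Public Property Tags As List(Of String)
End Class

The caveats: the flag has to be applied on whichever side does the serializing, and members marked IsRequired:=True cannot be omitted this way.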

Hi,
You could filter the data on the server side using the DomainCollectionView, so you could avoid sending data the client doesn't need.
You could check the article below:
https://code.msdn.microsoft.com/silverlight/Server-Side-Filtering-737becda
Besides, the link below could give you some help:
https://msdn.microsoft.com/en-us/library/ff750808.aspx
Best Regards,
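Another lever that often helps Silverlight clients, if both ends can be changed: swapping the text encoder for the binary message encoder, which keeps the same data contracts but produces a more compact wire format. A rough programmatic sketch with placeholder contract and address names follows; the equivalent in config is a customBinding with binaryMessageEncoding over httpTransport.

Imports System.ServiceModel
Imports System.ServiceModel.Channels

' Hypothetical contract, for illustration only.
<ServiceContract()>
Public Interface IMyDataService
    <OperationContract()>
    Function GetData() As String
End Interface

Public Module BinaryEndpointExample
    Public Function CreateBinaryHttpBinding() As Binding
        ' Binary encoder + HTTP transport instead of the default text encoder.
        Dim encoder As New BinaryMessageEncodingBindingElement()
        Dim transport As New HttpTransportBindingElement() With {
            .MaxReceivedMessageSize = 2147483647
        }
        Return New CustomBinding(encoder, transport)
    End Function

    Public Sub Demo()
        ' Placeholder address; the binding is the interesting part.
        Dim factory As New ChannelFactory(Of IMyDataService)(
            CreateBinaryHttpBinding(),
            New EndpointAddress("http://example.com/DataService.svc"))
        Dim proxy = factory.CreateChannel()
        ' Calls on proxy now exchange binary-encoded SOAP messages.
    End Sub
End Module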

Similar Messages

  • Default Styles

    i would be interested to know, what default styles\setting
    does everyone use for their css pages.
    here are mine.
    body { /* set everything to normal to define the base format */
        font: normal 13px/normal Geneva, Arial, Helvetica, sans-serif;
        color: #000000;
        margin: 0;
        margin-top: 2px;
        margin-bottom: 2px;
    }
    h1 { font-size: 1.5em; margin: 0 .25em 0.65em 0; }
    h2 { font-size: 1.2em; margin: 0 .25em 0.65em 0; }
    h3 { font-size: 1.1em; margin: 0 .25em 0.4em 0; }
    p, td, th, div, blockquote, ul, li, dl, ol { font-size: 1em; }
    p, td, th, blockquote { margin: 0.5em 0; } /* controls spacing between elements */
    .clearfloat { /* this class should be placed on a div or break element
        and should be the final element before the close of a container that
        should fully contain a float */
        clear: both;
        height: 0;
        font-size: 1px;
        line-height: 0px;
    }
    .convinline { display: inline; }
    .convblock { display: block; }
    .pararight { text-align: right; }
    .paraleft { text-align: left; }
    .paracentre { text-align: center; }
    .fleft { float: left; }
    .fright { float: right; }

    You can read about it here -
    http://www.tjkdesign.com (look
    for the article on clear floating).
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    ==================
    "malcster2" <[email protected]> wrote in
    message
    news:[email protected]...
    >
    quote:
    Originally posted by:
    Newsgroup User
    > Seems OK. I normally add -
    >
    > html,body { min-height:100%;margin-bottom:1px; }
    >
    > after any body rule that sets margins. This forces a
    vertical scrollbar
    > on
    > all pages, even when they don't require one to avoid the
    left/right jog as
    > you navigate between pages that do and pages that don't
    exceed the
    > viewport
    > height in certain browsers (FF and Safari).
    >
    > You could reduce the verbosity a bit - for example:
    >
    > instead of this -
    >
    > > margin:0;
    > > margin-top:2px;
    > > margin-bottom:2px;
    >
    > this -
    >
    > margin:2px 0;
    >
    > I no longer use this kind of thing as a rule, opting for
    the
    > overflow:hidden
    > style -
    >
    > > .clearfloat { /* this class should be placed on a
    div or break element
    > > and should be the final element before the close of
    a container that
    > > should fully contain a float */
    > > clear:both;
    > > height:0;
    > > font-size: 1px;
    > > line-height: 0px;
    > > }
    >
    > thanks for that murray
    >
    > to be honest, i was wondering what the difference
    between overflow:hidden,
    > and
    > the .clearfloat style was
    >

  • Reduce File Size doesn't work!

    When I click the (new) option 'Reduce File Size' in Keynote '09, the app shows an estimated new size which is smaller than the current size. When I click 'Reduce', the app doesn't do anything except show an error report which tells me something like "..the file size was not reduced" for all of the files included. They are just regular .jpg/.png and .psd files. Did anyone else notice this bug..?

    PeterBreis0807 wrote:
    I'd say if it is a very large file with multiple large images, it might not be actually freezing, just taking its sweet time.
    It's why I leave Activity Monitor running when I use iWork applications.
    This way, I know if the app is dead or if it's just heavily busy.
    Either that or it has nothing to do. It trims and reduces the resolution of bitmap images to 72dpi. There will be a fair bit of work if you have transparency in the document.
    It doesn't do a very good job IMHO and to reduce the file by 90% is a big ask.
    The 'Reduce FileSize' feature isn't linked to the 'export to PDF' feature.
    As explained, before applying "Reduce file size" it's good practice to "Reduce Image File Size".
    This will drop the parts of the pictures which aren't displayed.
    The task at hand doesn't require reducing the pictures by 90%.
    As I already wrote several times, it's more efficient to crop images to the really used area *_before inserting them in a Pages document_*.
    A Pages document is huge by nature because the Index.xml file describing its contents is highly verbose.
    If the document was saved with the 'embed Preview.pdf' feature active, the size of this PDF is adding a lot of used space.
    I'd print to pdf and reduce the quality in Acrobat Pro.
    Most of us don't own Acrobat Pro.
    I feel that it would be a bit silly to buy an application priced at $450 as a complement to an $80 set of applications.
    Before doing that, I would
    (a) crop pictures before inserting them
    (b) try to enhance Pages' behaviour for free by installing enhanced PDF filters.
    Yvan KOENIG (VALLAURIS, France) dimanche 29 août 2010 09:53:44

  • SSL Negotiation Failure in WCF Service

    Hi,
    I have a WCF service hosted in IIS that is making a call to another cloud-hosted service over HTTPS; however, I get the error 
    'The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel'.
    I have checked the following so far to diagnose this problem:
    - The remote certificate has not expired
    - The remote certificate trust chain is valid on my server up to the root trusting authority, and this root authority certificate is in my server's Trusted Root Certification Authorities
    store
    - The HTTPS connection from my service to the cloud service works for 1 hour after an IIS Reset, and then it fails and doesn't start working again until another IIS Reset is done.
    - I have enabled verbose logging for  System.Net and can see the SSL negotiation starting, and my service receives the remote certificate and prints the cert info, and it all looks
    valid.
    The next messages in the log are
    System.Net Information: 0 : [5832] SecureChannel#10559359 - Remote certificate was verified as invalid by the user.
        ProcessId=3264
        DateTime=2014-09-02T20:27:30.4279756Z
    System.Net.Sockets Verbose: 0 : [5832] Socket#18356823::Dispose()
        ProcessId=3264
        DateTime=2014-09-02T20:27:30.4279756Z
    System.Net Error: 0 : [5832] Exception in HttpWebRequest#41735743:: - The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel..
    - This issue only occurs on the production server.  The same service installed on 3 other servers works flawlessly
    - I have compared the IIS configuration for production and the 3 other servers and there are no obvious differences that could cause this issue.
    - My service is running on Windows Server 2008 R2, IIS 7.0, .NET Framework 4.0
    I'd appreciate any advice on what I can try to understand why the SSL negotiation is failing.
    Thanks

    Try posting to the cloud service forum.
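    One hedged diagnostic idea for the "Remote certificate was verified as invalid by the user" trace line: that wording normally means application code (a ServicePointManager callback or a custom WCF certificate validator) returned false, rather than the OS rejecting the chain. A temporary logging callback, sketched below with illustrative names, can show exactly what the validation code sees once the failures start after the one-hour mark.

    Imports System.Net
    Imports System.Net.Security
    Imports System.Security.Cryptography.X509Certificates

    Module CertificateDiagnostics
        Sub HookValidationLogging()
            ' If a callback is already set elsewhere in the service, add the
            ' logging there instead of overwriting it with this one.
            ServicePointManager.ServerCertificateValidationCallback =
                Function(sender As Object,
                         certificate As X509Certificate,
                         chain As X509Chain,
                         sslPolicyErrors As SslPolicyErrors) As Boolean
                    ' Log the subject and the policy errors so this can be
                    ' correlated with the System.Net trace entries.
                    Console.WriteLine("Validating {0}: {1}",
                                      certificate.Subject, sslPolicyErrors)
                    ' Keep the default behaviour: accept only clean results.
                    Return sslPolicyErrors = SslPolicyErrors.None
                End Function
        End Sub
    End Module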

  • Azure WCF SSL latency problems

    I have a WCF service running in an Azure Worker Role. The service waits for client requests, then fetches some data from a SQL database and finally returns the data to the client. At first, the service was using plain unencrypted NetTcpBinding for communication, and latency
    for each request was about 200ms, which was acceptable. Recently I switched the binding to use SSL (still with NetTcpBinding) and latency jumped to 500ms (which was expected, of course). However, usability of the client suffered greatly because clients make requests
    very frequently (there could be a burst of 10 requests going on at the same time) and almost all requests are really light on the server side, so the 300ms increase in latency really hurt.
    Now I am not sure what I could do to improve latency. Connection pooling does not seem to work very well with Azure because there is a 1 min idle timeout and I cannot reliably tell if a connection has timed out before making a new request. Also I am not sure how connection
    pooling affects the load balancer and scaling out instances: if I force every client to open 10 connections to the WCF service and keep the connections alive artificially, is it possible that the load balancer does not work as expected?
    Are there any other options? I was also thinking that maybe I could use SSL only when logging in and then exchange symmetric crypto-key and afterwards use unencrypted connection and encrypt messages in code, but this is probably a bad idea (maybe it would
    be secure enough but then I couldn't say that all connections are encrypted with SSL which unfortunately is requirement for me).
    Thank you for help!

    Hello,
    Thank you for your answers! However, the problem still remains so let me explain it with more details.
    We're developing a new version of older software for our customers. Previously customers had to own and maintain their own servers which were running our software. Because we wanted to take this burden away from customers we decided to move all servers into
    (Azure) cloud.
    Now imagine following scenario:
    A user is 5 button clicks away from doing whatever he wants to do. Every button click has to do one query to server, usually in order to fetch some data from database. In our old application this was very fast since customers had their own servers running
    literally few meters from their workstations, so latency was minimal.
    Then we moved servers to cloud which is 500km (not 5 or 50 meters) away so latency jumped to approximately 200ms per query (as expected). Now clicking 5 buttons would add 1 second as latency in total, and although our customers were not particularly happy
    about this, they could still accept it. However, now we have to turn SSL on, and since it seems to add about 500ms latency per query, clicking 5 buttons add 2.5 seconds as latency in total which is simply too much, and the application becomes very sluggish
    to use.
    We are already trying to combine small requests into big ones but it is not possible in many cases because we don't know what action user is going to take before previous action has finished. We could try to collect data how users are using the software
    and make decision based on it, and although it would surely help, it would not remove the problem entirely.
    That is why I am hoping I could find a way to minimize the latency between client and server running in Azure cloud. Biggest part of latency comes from TCP or SSL handshaking: opening a channel, doing a request and closing the channel takes about 200ms when
    using plain TCP and 500ms when using SSL. However, if I don't close the channel but instead reuse it, latency is only about 60ms. The problem is that Azure tries to prevent me from reusing channels. In a perfect world I would simply open 10 SSL connections
    whenever application is started and reuse these channels throughout the lifetime of application, as this would result in approx. 60ms latency in every request (I chose number 10 simply because 10 simultaneous queries might be possible in some
    cases). Now, 60ms is a LOT shorter time period than 500ms or even 200ms!
    Now back to my original questions. Azure has 1min timeout for idle channels but there are libraries (e.g. http://code.msdn.microsoft.com/WCF-Azure-NetTCP-Keep-Alive-09f50fd9) that keep channels artificially open for longer time by sending empty packets every
    X seconds so that Azure thinks that channel is active. We are already using this library to prevent Azure from closing channels during some of our longer queries that take more than 1min to finish (like generating a monthly report with lots of data). Now I
    was wondering if I could extend this functionality so that I could open 10 connections at startup, keep them alive for X minutes (5, 10, 30, whole day?) and reuse them, as this would reduce latency to a good level. Some problems:
    1) How is load balancer affected if I keep channels alive? What if I simply open 10 channels per user (lets say there are 1000 users) is it possible that all channels (10000) are opened to same server instance as there is no significant CPU load on the server
    (at this point)?
    2) If Azure still closes the connection for some reason, it is hard to know what happened at client side. Basically it seems that if connection is closed and I try to reuse it (note that at client side WCF does not know that connection has been closed before
    I try to use that connection) WCF simply throws "timeout exception" with absurd timeout value (e.g. if I have configured timeout to be 5mins WCF throws exception "timeout 0.001 seconds"). Now I could catch this exception and parse the timeout value and see
    if it is "too low" and then decide that "ok, Azure closed this channel, I will close it and open new one" but this seems hacky (see 4).
    3) Is it OK to open this many connections to Azure? Would Azure think it is under a DoS attack if this many connections are opened in a small time window (usually people come to work at a specific time, so basically the connections would be opened in a small
    time window)? Also, we expect our userbase to grow to at least 10000 simultaneous users (after which this approach would require 100k simultaneous channels).
    4) This feels very hacky! Is it really so that applications don't usually behave like ours do? Are there other ways to achieve what I want?
    Thanks!
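    For what it's worth, a minimal sketch of the channel-reuse idea discussed above, with a placeholder contract name: create the ChannelFactory once, hand out an open channel, and lazily replace it when Azure (or anything else) drops the connection. A real implementation would also need locking for concurrent callers, plus the keep-alive mechanism already mentioned so the idle timeout doesn't close the channel in the first place.

    Imports System.ServiceModel

    Public Class ReusableClient(Of TContract As Class)
        Private ReadOnly _factory As ChannelFactory(Of TContract)
        Private _channel As TContract

        Public Sub New(ByVal endpointConfigurationName As String)
            _factory = New ChannelFactory(Of TContract)(endpointConfigurationName)
        End Sub

        ' Returns an open channel, recreating it if the previous one was
        ' closed or faulted. Not thread-safe as written.
        Public Function GetChannel() As TContract
            Dim comm = TryCast(_channel, ICommunicationObject)
            If comm Is Nothing OrElse comm.State <> CommunicationState.Opened Then
                _channel = _factory.CreateChannel()
            End If
            Return _channel
        End Function
    End Class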

  • Error in WCF-Custom adapter (sqlbinding)

    There is a stored procedure tempdbo.dbo.InsertArTrxTyp.  I used the Add Generated Items wizard to create the schema and binding file for the stored procedure, imported the binding file, and got a send port of type WCF-Custom with sqlBinding.  When
    the port action header is
    <BtsActionMapping xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Operation Name="InsertArTrxTyp" Action="Procedure/dbo/InsertArTrxTyp" />
    </BtsActionMapping>
    the event log shows the error
    <Operation Name="InsertArTrxTyp" Action="Procedure/dbo/InsertArTrxTyp" /></BtsActionMapping>" was not understood.
    If I reduce the action header to 
    Procedure/dbo/InsertArTrxTyp
    the event log shows
    Object [dbo].[InsertArTrxTyp] of type StoredProcedure does not exist
    In the second case, SQL Profiler shows the code sent to the sql server querying for the existence of
    @ORIGINALOBJECTNAME=N'InsertArTrxTyp',@ORIGINALSCHEMANAME=N'dbo'
    If I run that code, I get results that indicate the existence of that object.
    Why is the first action mapping wrong?  Why does the second action mapping result in no object found?  I've read this
    post but don't see how it applies to this situation as I'm not using an orchestration.

    Hi,
    Try and check following points:
    1. Update the SQL URI with the complete SQL instance name if you are using a named instance.
    2. Verify that the user under which the BizTalk host instance is running has sufficient rights on the target DB to execute the SP.
    Hope this will help.
    HTH,
    Sumit
    Sumit Verma - MCTS BizTalk 2006/2010 - Please indicate "Mark as Answer" or "Mark as Helpful" if this post has answered the question

  • When to use WCF with JSON

    I'm building a WCF service and exploring whether I should use JSON or XML. JSON will reduce the payload size, but I think XML is the default configuration/setting. With JSON, do we need to do any additional setup?
    My calling client is a Windows service. Is JSON applicable only for web applications? Do I need to serialize in the calling code before sending to the service and then deserialize again when the service response comes back?
    Thanks in advance.

    Yes, JSON is generally the lighter option.
    Vote if this helps you.
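    For reference, a hedged sketch of what the extra configuration usually amounts to, with illustrative contract and type names: expose the operations on a webHttpBinding endpoint (with the webHttp endpoint behavior in config) and declare the response format on each operation. WCF then serializes the data contract to JSON itself, so the caller does not serialize or deserialize by hand, and any HTTP-capable client, including a Windows service, can consume it.

    Imports System.Runtime.Serialization
    Imports System.ServiceModel
    Imports System.ServiceModel.Web

    <ServiceContract()>
    Public Interface IOrderService
        ' The returned OrderDto is serialized to JSON by WCF because of the
        ' ResponseFormat declared on the operation.
        <OperationContract()>
        <WebGet(UriTemplate:="orders/{id}", ResponseFormat:=WebMessageFormat.Json)>
        Function GetOrder(ByVal id As String) As OrderDto
    End Interface

    <DataContract()>
    Public Class OrderDto
        <DataMember()>
        Public Property Id As String

        <DataMember()>
        Public Property Description As String
    End Class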

  • HTTP Web Service in SP Designer 2013 Workflow calling WCF Service - 401 403 Error

    Because of the limitations of SP Designer URL's I've had to create a custom webservice to Publish, Checkin, Approve etc....  I pass the List ID and a comment as parms in the HTTP POST. The WCF lives in the /15 ISAPI directory as it should.
    I setup 3 dictionaries:
    POST_RequestHeader
    content-type: application/json;odata=verbose, accept:application/json;odata=verbose, X-HTTP-Method: MERGE, IF-MATCH:*,Authorization:
    If I don't include Authorization as null, I get a 401.  With this I get a 403.
    POST_Metadata
    type string "name of my SP.Data category from the list"
    POST_Parameters
    __metadata with the name of POST_Metadata
    Call the webservice in SP DESIGNER 2013
    Http://Mysitecollection/_vti_bin/MyWCFService/MyWCFService.svc/MyPublish/?id=123&comment='BasicTest2'
    RequestHeaders set to POST_RequestHeader
    RequestContent set to POST_Parameters
    The Interface correctly maps and routes me to the method.  The document is in Draft state ready for "Publish a Major version".
    I cannot get past the security errors.  My GET methods work fine.  Any thoughts?  How can I pass the proper credentials?
    I'm running in an App Step and have granted workflows elevated privileges so no problem there.
    Thanks.
    using (SPSite siteCollection = new SPSite("myservercollectionurl"))
    using (SPWeb web = siteCollection.OpenWeb())
    {
        SPWeb site = siteCollection.RootWeb;
        SPList corpPol = site.Lists["Our Policies"];
        SPListItem spListItem = corpPol.GetItemById(id);
        SPFile file = spListItem.File;
        trace = "In SPsite Loop" + spListItem["Title"].ToString(); // <-- Title is written OK to the log
        diagSvc.WriteTrace(0, category, TraceSeverity.Verbose, trace);
        if (file.CheckOutType == SPFile.SPCheckOutType.None)
        {
            spListItem.File.Publish("done in WCF"); // <-- Blows up every time with 403 Forbidden
        }
    }
    This is in the Sharepoint logs:
    System.Runtime.InteropServices.COMException (0x8102006D): The security validation for this page is invalid. Click Back in your Web browser, refresh the page, and try your operation again.     at Microsoft.SharePoint.SPFile.PublishOrUnPublish(String
    comment, Boolean fPublish)
    Tom
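    On the 0x8102006D "security validation for this page is invalid" exception specifically: when a list item is updated from service code rather than from a normal form post, there is no form digest for SharePoint to validate, and one commonly needed change is allowing unsafe updates on the SPWeb for the duration of the update. A hedged sketch follows, with placeholder names mirroring the snippet above; whether this is acceptable depends on your security requirements.

    Imports Microsoft.SharePoint

    Public Module PublishHelper
        Public Sub PublishItem(ByVal siteUrl As String, ByVal id As Integer)
            Using siteCollection As New SPSite(siteUrl)
                Using web As SPWeb = siteCollection.OpenWeb()
                    Dim previous As Boolean = web.AllowUnsafeUpdates
                    ' No form digest is present in a service call, so unsafe
                    ' updates must be allowed for the Publish to go through.
                    web.AllowUnsafeUpdates = True
                    Try
                        Dim item As SPListItem = web.Lists("Our Policies").GetItemById(id)
                        If item.File.CheckOutType = SPFile.SPCheckOutType.None Then
                            item.File.Publish("done in WCF")
                        End If
                    Finally
                        ' Restore the original setting.
                        web.AllowUnsafeUpdates = previous
                    End Try
                End Using
            End Using
        End Sub
    End Module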

    Hi Tom,
    Here are two blogs for you to check:
    Using SharePoint REST services from workflow with POST method
    http://mysharepointinsight.blogspot.com/2013/05/using-sharepoint-rest-services-from.html
    SharePoint Designer 2013 Workflow: Working with Web Services
    http://blog.appliedis.com/2014/10/09/sharepoint-designer-2013-workflow-working-with-web-services/
    We can use Fiddler to compose the HTTP POST method with the proper request headers and request body, then create your workflow.
    http://www.fabiangwilliams.com/2013/09/03/more-on-sharepoint-2013-rest-api-with-fiddler-and-spd/
    Best Regards
    Dennis Guo
    TechNet Community Support

  • Why poor WCF performance for first calls?

    I'm seeing some weird WCF call timings that I can't explain, and it is causing some real issues in my application. My WCF service calls seem to be taking hundreds of milliseconds if not seconds longer than they should.
    I've set up a simple SL5 project hosted in a web app project just to reduce the variables, and I still see terrible timings.
    I've got a very simple WCF service call, and I'm using ClientBase to instantiate the service instance, then I'm calling the service in a tight loop 30 times (asynchronously).
    The problem is that the first handful of calls take extremely long, according to the IE F12 tools. I'm seeing network times of between 500ms and 2000ms. After that, all of the service call times drop down below 100 ms. The problem for me is that,
    when I am just calling the service once in an application, I am seeing these initial delays, meaning every time I call the service it tends to take a really long time. I only did the tight loop test to see if things get better over time, which they do.
    I would imagine it is doing something like establishing the initial channels, and that is what is taking the hit, and then calls after that just reuse them, but is there any way to reduce that initial hit? Adding tons of extra time to each of my
    calls in the real app is killing my performance.
    Here is a screenshot of F12 with the call results. You can see the first bunch of calls take an extremely long time, then everything gets nice and quick after:
    Here is the calling code in the test app:
    Private Sub TestWcfClientBase()
        Dim client = ServicesCommon.GetService()
        client.Proxy.BeginGetCurrentUser((AddressOf OnGetUserCompletedCommon), Nothing)
    End Sub

    Public Shared Function GetService() As ServiceClient(Of IServiceAsync)
        Dim service As New ServiceClient(Of IServiceAsync)("IServiceAsyncEndpoint")
        Return service
    End Function

    Public Class ServiceClient(Of T As Class)
        Inherits ClientBase(Of T)
        Implements IDisposable

        Private _disposed As Boolean = False

        Public Sub New()
            MyBase.New(GetType(T).FullName)
        End Sub

        Public Sub New(endpointConfigurationName As String)
            MyBase.New(endpointConfigurationName)
        End Sub

        Public ReadOnly Property Proxy() As T
            Get
                Return Me.Channel
            End Get
        End Property

        Protected Sub Dispose() Implements IDisposable.Dispose
            If Me.State = CommunicationState.Faulted Then
                MyBase.Abort()
            Else
                Try
                    ' Close cleanly; fall back to Abort if Close fails.
                    MyBase.Close()
                Catch
                    MyBase.Abort()
                End Try
            End If
        End Sub
    End Class
    The client config is as follows:
    <system.serviceModel>
      <bindings>
        <basicHttpBinding>
          <binding
            name="NoSecurity"
            closeTimeout="00:10:00"
            openTimeout="00:01:00"
            receiveTimeout="00:10:00"
            sendTimeout="00:10:00"
            maxBufferSize="2147483647"
            maxReceivedMessageSize="2147483647"
            textEncoding="utf-8">
            <security mode="None" />
          </binding>
        </basicHttpBinding>
      </bindings>
      <client>
        <endpoint
          name="IServiceAsyncEndpoint"
          address="http://localhost/TestService.svc"
          binding="basicHttpBinding"
          bindingConfiguration="NoSecurity"
          contract="Services.Interfaces.IServiceAsync" />
      </client>
    </system.serviceModel>
    Here is the stripped down service code:
    <AspNetCompatibilityRequirements(RequirementsMode:=AspNetCompatibilityRequirementsMode.Allowed)>
    <ServiceBehavior(InstanceContextMode:=InstanceContextMode.PerCall)>
    Public Class TestProxy
        Implements IServiceAsync

        Public Function GetCurrentUser() As WebUser Implements IServiceAsync.GetCurrentUser
            Dim user As New WebUser
            With user
                .User_Name = "TestUser"
            End With
            Return user
        End Function
    End Class
    And here is the service config:
    <system.serviceModel>
      <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true"/>
      <bindings>
        <basicHttpBinding>
          <binding name="NoSecurity" closeTimeout="00:10:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" textEncoding="utf-8">
            <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="4096" maxNameTableCharCount="16384"/>
          </binding>
        </basicHttpBinding>
      </bindings>
      <behaviors>
        <serviceBehaviors>
          <behavior name="TestProxyServiceBehavior">
            <serviceMetadata httpGetEnabled="true"/>
            <serviceDebug includeExceptionDetailInFaults="false"/>
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <services>
        <service behaviorConfiguration="TestProxyServiceBehavior" name="TestProxy">
          <endpoint address="" binding="basicHttpBinding" bindingConfiguration="NoSecurity" contract="Services.Interfaces.IService" />
          <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
        </service>
      </services>
    </system.serviceModel>

    Hi ChrisMikeC,
    Based on your description and config file, it seems that you host your WCF service in IIS and you consume it from a SL5 project. If I haven't misunderstood you: when we host a WCF service in IIS, there is a high
    cost for the first call. On the first call to the WCF service, IIS must compile the service, then construct and start the service host, so the first call will be slow. For more information about how to improve the time for the first call, please
    refer to
    this article. Meanwhile, please try hosting your WCF service in Windows Activation Services or a Windows Service, where you will have greater control over the service host startup.
    Besides, since your SL5 project is hosted in a web app, ASP.NET applications always take longer on the first call because it includes a JIT compilation step; once the code is compiled, all calls thereafter are faster than the first one. So please
    try a console application or a Windows Forms application to test whether the first-call time goes down.
    Best Regards,
    Amy Peng
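    For what it's worth, a hedged sketch of the warm-up idea that follows from the reply above: pay the one-time IIS compilation / JIT / channel-setup cost with a throw-away call at application startup, so the first real user action is not the one that waits for it. ServicesCommon.GetService and BeginGetCurrentUser are the members shown in the question; where the warm-up is called from is up to the application.

    Private Sub WarmUpService()
        Dim client = ServicesCommon.GetService()
        client.Proxy.BeginGetCurrentUser(
            Sub(ar As IAsyncResult)
                ' Result intentionally ignored; this call exists only to
                ' absorb the expensive first-call path early.
            End Sub,
            Nothing)
    End Sub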

  • Reduce space/distance between tables

    Hi experts!
    I just need to know how to reduce the distance between some tables.
    I have 10 tables of 1 row each, and I would like to show these 10 tables as one reducing the space until they are all together.
    Combining them, as has been suggested for similar requests, is not a solution for me.
    I guess there is a file where I can change the properties of the sections, where I can set the distance property to 0.
    I need this reduced distance just for this case, not for all the dashboards, columns and sections.

    Use the validation tool below.
    http://validator.w3.org/check?verbose=1&uri=http%3A%2F%2Fwww.betteannesteele.com%2F
    You've got a closing paragraph tag on line 81 where none is needed.
    Nancy O.
    Alt-Web Design & Publishing
    Web | Graphics | Print | Media  Specialists
    www.alt-web.com/
    www.twitter.com/altweb
    www.alt-web.blogspot.com

  • Can't reduce previews.lrdata size

    Hi all,
    My Previews.lrdata folder is 45 GB. On a 500 GB SSD (late 2013 15" retina MacBook Pro), this is a reasonable chunk.
    So I've gone through and used the "discard 1:1 previews..." menu item on all photos that I don't expect to edit again. Somewhat to my surprise, the Previews.lrdata folder has not changed in file size by 1 byte. I've used the "optimise catalog" option, but that didn't seem to do anything either.
    I'd prefer not to flush the entire preview cache by deleting Previews.lrdata, as for recent photos that I haven't edited yet I appreciate having the 1:1 previews pre-rendered.
    Am I expecting too much here? Should there not be a relationship between me discarding 1:1 previews and a reduction in the Previews.lrdata folder size?
    Thanks,
    Nick

    ntompson wrote:
    So if I reduce the size of the standard preview to below screen resolution, then Lightroom just uses (down-scales) the 1:1 preview...
    That is correct.
    ntompson wrote:
    Are there any disadvantages to this?
    No - your standard previews were set too high before.
    ntompson wrote:
    why have a standard preview at all?
    For performance reasons - faster to grab/resize a smaller preview if it will do.
    ntompson wrote:
    Would I better off be setting the standard preview to the smallest possible size?
    No. If you do that, then Lr will re-render a 1:1 preview for fit view, even if a standard preview was already available (since it would be too small). Granted, if the 1:1 preview was already available, it wouldn't need to re-render it, but if you're expiring them, it might not be available, and even so, it takes longer to load/resize a larger preview than a smaller one.
    ntompson wrote:
    when I make the standard preview size change in Lightroom, does it then invalidate all of the standard previews in my catalogue? Does it have to crank through and regenerate all the standard previews at the new size?
    No, and no (but in my opinion, user should be prompted for whether he/she would like to rebuild when size or quality changes).
    In many cases, the difference in having too small a setting or too large a setting, will be almost (or completely) inconsequential. To be honest, I'm surprised Adobe exposed this option to the user, since most users don't have a clue what an optimal setting should be. If I were Adobe, I'd assume Lr/main-UI was gonna be on the biggest monitor, and automatically size previews accordingly. Often when I mention this, people point out that the same catalog might also be used on a laptop, at which point previews would no longer be optimal, and there are other dual-monitor considerations.., but hey - whatever...
    PS - Lightroom creates (up to) 7 different preview sizes: from 1:1 to tiny thumb. When it creates a big preview (1:1 or standard), it creates all smaller previews too, by halving the size each time. That is very fast, and no doubt accounts for the reason it won't discard a 1:1 if the next one down (standard) would be greater than half its size (it's actually only creating 6 previews in that case - 1:1 through thumb; there is no big standard preview of the size you specified in that case, or if you prefer another way of looking at it: the 1:1 preview is your largest standard preview).
    If you want to better understand what's stored for previews, you can play with the PreviewExporter plugin I wrote (free) - it allows you to specify which level (1-7) preview you want to export, and if you enable verbose logging it will provide info about what was available, and which level of preview was exported. Here is an excerpt from "post-process action" (export-filter) UI:
    Cheers,
    Rob

  • Performance reduced under Bootcamp after EFI update on Macbook Pro Retina

    After I upgraded the EFI on my Retina MacBook Pro, performance under heavy load in Boot Camp (Windows 7) dropped to the point of being unusable: the CPU and GPU clocks drop. In the GPU clock monitoring screenshot from Windows you can see the GPU running at 270MHz most of the time and trying to get back to 725MHz (yes, not 900MHz!) while a graphics benchmark is running; at the same time the CPU runs at about 1.1GHz. I noticed it runs normally for 10-20 seconds, then the CPU and GPU clocks start dropping. It's almost like SpeedStep is programmed backwards: my CPU is constantly running at 3.1-3.2GHz, and then as soon as a game loads, it drops to 1.2GHz. I've lost track of how many times I reset the SMC. Resetting the SMC and PRAM, reinstalling Windows using Boot Camp, and even erasing the entire drive and starting over from network recovery of Mac OS did not solve this problem. So all the recent games like COD MW3, StarCraft II, BF3, or D3 now run at about 10 fps, even though I was playing them with no problem before the EFI update.
    This is killing me! Please help.
    I'm in NY, and in this season I don't think it is an overheating problem. When I check the temperature it stays at about 80C (fans run at about 4k rpm), but sometimes under heavy load in Mac OS the CPU is about 90 degrees with only the 3k rpm fan kicked in.
    I searched online, and it seems some other people have the same problem with the rMBP or MBA 2012.
    Does anyone know, if I take my MacBook Pro to the store, whether the people at the Genius Bar will help me with this "Windows" problem?
    Thanks!

    Hey Shadowyani, please take the time to file a bug report at http://developer.apple.com/bugreporter/ . The machine is useless for me as it is now because I develop and use very GPU/CPU demanding scientific applications; after the update my realtime applications are completely useless as they drop from something like 25fps to 9fps even on Mac OS X (no SMC reset fixed the issue). In Boot Camp the situation is worse: my software runs at 4fps and gaming is impossible as well.
    Please let Apple know about the problem and file a bug report, even if an SMC reset once in a while fixed the issue for you (this is not normal behaviour and must be fixed). We paid the price for cutting-edge technology.
    Apple should not downgrade the system after a few weeks; remember that most reviews and benchmarks were done BEFORE the EFI update, so people are being misled by Apple in this sense.

  • A simple and free way of reducing PDF file size using Preview

    Note: this is a copy and update of a 5 year old discussion in the Mac OS X 10.5 Leopard discussions which you can find here: https://discussions.apple.com/message/6109398#6109398
    This is a simple and free solution I found to reduce the file size of PDFs in OS X, without the high cost and awful UI of Acrobat Pro, and with acceptable quality. I still use it every day, although I have Acrobat Pro as part of an Adobe Creative Cloud subscription.
    Since quite a few people have found it useful and keep asking questions about the download location and destination of the filters, which have changed since 2007, I decided to write this update, and put it in this more current forum.
    Here is how to install it:
    Download the filters here: https://dl.dropboxusercontent.com/u/41548940/PDF%20compression%20filters%20%28Unzip%20and%20put%20in%20your%20Library%20folder%29.zip
    Unzip the downloaded file and copy the filters in the appropriate location (see below).
    Here is the appropriate location for the filters:
    This assumes that your startup disk's name is "Macintosh HD". If it is different, just replace "Macintosh HD" with the name of your startup disk.
    If you are running Lion or Mountain Lion (OS X 10.7.x or 10.8.x) then you should put the downloaded filters in "Macintosh HD/Library/PDF Services". This folder should already exist and contain files. Once you put the downloaded filters there, you should have for example one file with the following path:
    "Macintosh HD/Library/PDF Services/Reduce to 150 dpi average quality - STANDARD COMPRESSION.qfilter"
    If you are running an earlier version of OS X (10.6.x or earlier), then you should put the downloaded filters in "Macintosh HD/Library/Filters" and you should have for example one file with the following path:
    "Macintosh HD/Library/Filters/Reduce to 150 dpi average quality - STANDARD COMPRESSION.qfilter"
    Here is how to use it:
    Open a PDF file using Apple's Preview app,
    Choose Export (or Save As if you have on older version of Mac OS X) in the File menu,
    Choose PDF as a format
    In the "Quartz Filter" drop-down menu, choose a filter "Reduce to xxx dpi yyy quality"; "Reduce to 150 dpi average quality - STANDARD COMPRESSION" is a good trade-off between quality and file size
    Here is how it works:
    These are Quartz filters made with Apple's ColorSync Utility.
    They do two things:
    downsample images contained in a PDF to a target density such as 150 dpi,
    enable JPEG compression for those images with a low or medium setting.
    Which files does it work with?
    It works with most PDF files. However:
    It will generally work very well on unoptimized files such as scans made with the OS X scanning utility or PDFs produced via OS X printing dialog.
    It will not further compress well-optimized (compressed) files and might create bigger files than the originals.
    For some files it will create larger files than the originals. This can happen in particular when a PDF file contains optimizations other than image compression. There also seems to be a bug (reported to Apple) where in certain circumstances images in the target PDF are not JPEG compressed.
    What to do if it does not work for a file (the target PDF is too big or even larger than the original PDF)?
    First, some good news: since you used a Save As or Export command, the original PDF is untouched.
    You can try another filter for a smaller size at the expense of quality.
    The year being 2013, it is now quite easy to send large files through the internet using Dropbox, yousendit.com, wetransfer.com etc. and you can use these services to send your original PDF file.
    There are other ways of reducing the size of a PDF file, such as apps in the Mac App store, or online services such as the free and simple http://smallpdf.com
    What else?
    Feel free to use/distribute/package in any way you like.

    Thanks ioscar.
    The original link should be back online soon.
    I believe this is a Dropbox error about the traffic generated by my Dropbox shared links.
    I use Dropbox mainly for my business and I am pretty upset by this situation.
    Since the filters themselves are about 5KB, I doubt they are the cause of this Dropbox misbehavior!
    Anyway, I submitted a support ticket to Dropbox, and hope everything will be back to normal very soon.
    In the meantime, if you get the same error as ioscar when trying to download them, you can use the link in the blog posting he mentions.
    This is out of topic, but for those interested, here is my understanding of what happened with Dropbox.
    I did a few tests yesterday with large (up to 4GB) files and Dropbox shared links, trying to find the best way to send a 3 hour recording from French TV - French version of The Voice- to a friend's 5 year old son currently on vacation in Florida, and without access to French live or catch up TV services. One nice thing I found is that you can directly send the Dropbox download URL (the one from the Download button on the shared link page) to an AppleTV using AirFlick and it works well even for files with a large bitrate (except of course for the Dropbox maximum bandwidth per day limit!). Sadly, my Dropbox shared links were disabled before I could send anything to my friend.
    I may have used  a significant amount of bandwidth but nowhere near the 200GB/day limit of my Dropbox Pro account.
    I see 2 possible reasons to Dropbox freaking out:
    - My Dropbox Pro account is wrongly identified as a free account by Dropbox. Free Dropbox accounts have a 20GB/day limit, and it is possible that I reached this limit with my testing; I have a fast 200Mb/s internet connection.
    - Or Dropbox miscalculates used bandwidth, counting the total size of the file for every download begun, and I started a lot of downloads, and skipped to the end of the video a lot of times on my Apple TV.

  • I have just sought to update my Lightroom and am now unable to access the develop function, and I get a note stating that I have reduced functionality. What is this about and how do I get my product back?

    I have just sought to update my Lightroom and am now unable to access the develop function, and I get a note stating that I have reduced functionality. What is this about and how do I get my product back?

    Hi there
    I have version 5.7 and every time I opened it I was told that updates are available and to click on the icon to access them.  Instead it just took me to the
    Adobe page with nowhere visible to update.  I then sought to download Lightroom CC, and this is when I could not access the 'Develop' section due to reduced
    functionality.  It was apparent that my photos had been put in CC but there was no way to access them unless I wanted to subscribe.
    I have since remedied the problem, as my original Lightroom 5.7 icon is still available on the desktop and I have gone back to that.  I do feel that this is a bit
    of a rip-off and an unnecessary waste of my time though.
    Thank you for your prompt reply by the way.
    Carlo

  • Best practice to reduce downtime  for fulllaod in Production system

    Hi Guys ,
    We have options like "Initialization without data transfer" and "Initialization with data transfer".
    To reduce the downtime of the production system for loading the setup tables, I will first trigger an InfoPackage for "Initialization without data transfer" so that the delta pointer is set on the table; from that point onwards any new record is captured as a delta record. I will then trigger the InfoPackage for Delta to get the delta records into BW. Once the delta is successful, I will trigger the InfoPackage for the repair full request to get all historical data into BW from the setup tables, so that the downtime of the production system is reduced.
    Please let me know your thoughts and correct me if I am wrong.
    Please also let me know about the "Early Delta Initialization" option.
    Kind regards,
    hari

    Hi,
    You have some incorrect information.
    An InfoPackage just loads data from the setup tables into the PSA.
    The setup tables need to be filled manually using the related transaction codes.
    I am assuming you are using an LO DataSource.
    In this case a source-system lock is mandatory; otherwise you need to go with the early delta init option.
    Early delta init is useful for loading data into BW without downtime at the source.
    It sets the delta pointer and loads at the same time, according to your settings (init with or without data transfer).
    If the source system cannot be locked to meet the client's needs, then it is better to go with the early delta init option.
    Thanks
