Inbound BGP selection from Google Cache Server (video content)

Scenario:
We have two gateways, A and B, each with 1G of bandwidth (2G total).
Current inbound traffic on gateway A is 98% utilized, while gateway B is under-utilized at below 50%.
Analysis shows that much of the traffic comes from a Google cache server (video content) and uses GW A rather than GW B.
We need a way to divert the video traffic onto the GW B path using BGP.

Before I comment I would like to see more detail, such as a diagram of the existing traffic flow (internal source to Internet) with peering details, etc.
Happy to Help
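In the meantime, here is the usual shape of the answer. Inbound path selection is controlled by how your prefixes are advertised outward, so the common levers are advertising a more-specific prefix only via GW B, or AS-path prepending on GW A for the prefixes that carry the video users. A minimal Cisco-style sketch, assuming placeholder AS numbers, neighbor addresses, and prefixes (adjust all of them to your peering):

    ! On GW A: prepend our AS when advertising the user prefix that attracts
    ! the Google cache traffic, so the return path prefers GW B.
    ip prefix-list VIDEO-USERS seq 5 permit 203.0.113.0/24
    !
    route-map TO-UPSTREAM-A permit 10
     match ip address prefix-list VIDEO-USERS
     set as-path prepend 64500 64500 64500
    route-map TO-UPSTREAM-A permit 20
    !
    router bgp 64500
     neighbor 198.51.100.1 remote-as 64501
     neighbor 198.51.100.1 route-map TO-UPSTREAM-A out

Prepending is only a hint; if it is ignored, advertising a more-specific (e.g., a /24 out of your aggregate) via GW B only is the deterministic option. For a Google Global Cache node specifically, the BGP feed you give Google influences which users map to which path.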

Similar Messages

  • HT203071 How do clients (iOS devices) configure/use the caching server to download apps from the local caching server instead of the App Store?

    Hi ... How do clients (iOS devices) configure/use the caching server to download apps from the local caching server instead of the App Store?

    Hi,
    If you want to restore a removed app, you need to use
    Add-AppxPackage to add a signed app package (.appx) to a user account.
    But we cannot extract them from the ISO.
    The behavior is by design, and this is a software protection regime.
    Thanks for your understanding.
    Regards,
    Kelvin Xu
    TechNet Community Support

  • Configure postfix to accept inbound mail only from Google

    I like to host my own email on a Mac Mini running OS X Server.  I’ve also looked for solutions that allow filtering out SPAM before the mail gets sent to my server.  For many years I used Postini’s spam filtering service to clean incoming email before Postini forwarded the mail on to my server.  I now use Google mail, part of Google Apps service to remove spam and to archive all the mail.
    The approach of letting Google clean email before sending the mail on to my OS X Server uses two domains: one a “public” domain for incoming email and another a “private” domain used only for forwarding the filtered email to OS X Server.
    All went well with the default Postfix configuration that came with OS X Server for a few months, then SPAM started creeping into my “private” domain as various spammers discovered my private email address and started sending mail directly to the Mac Mini, bypassing Google.
    Whenever I had spare time I would search the web looking for how to configure Postfix on OS X server so that email from Google and my other machines would be accepted and all other email would be blocked.  There were lots of write-ups on how to relay outgoing email to Google, but I couldn’t find straightforward configuration instructions for configuring Postfix to only allow incoming email forwarded by Google or coming from my machines and block all other sources.
    With a Google apps account you get telephone support so I gave Google a call and within a few rings got a very pleasant guy who listened to what I wanted to do and didn’t have the configuration setup, but did offer to send me a document showing the blocks of IP addresses used by Google for sending email. 
    I’ve posted several requests for help doing this type of configuration and never received responses that made sense.  So in the interest of helping anyone else that wants to configure Postfix to accept connections from a set of specific IP addresses and refuse connections from all other sources for inbound email, here is what will get you going:
    Use your favorite text editor to edit the Postfix configuration file (I use BBEDIT) but use whatever you like. 
    On the OS X Server open this file:
    /Library/Server/Mail/Config/postfix/main.cf
    Immediately do a “save as…” to make a backup copy with a different name, such as main.cf.back1, in the same directory so you can revert to the backup if necessary.
    substitute your domain names in the following commands:
    public.com   -  change to your publicly advertised  routable domain
    hidden.com  -  change to your OS X Server  routable domain
    lan.com - change to your OS X Server lan domain, should be registered to make things clean and shouldn’t be .local
    10.6.18.0/24 - change to your LAN subnet
    host - change to your host name
    Your Postfix configuration file should contain these commands (and probably more).  Each situation varies so do what you have to for your situation….
    Have Postfix add your public domain name in the email header
    myorigin = public.com
    mydomain_fallback = localhost
    message_size_limit = 41943040
    biff = no
    aaa.bbb.ccc.ddn - Your publicly routable IP addresses provided by your ISP
    Let Postfix know your LAN network, the routable addresses you have from your ISP, and the Google networks where the Google email servers live.  Get the latest list of Google networks hosting email at this address: https://support.google.com/a/answer/3070269
    mynetworks =
              10.6.18.0/24,
              127.0.0.0/8
    # ISP provided routable  IP Addresses, individually or cidr aaa.bbb.ccc.0/24 notation if possible
              aaa.bbb.ccc.dd1,
              aaa.bbb.ccc.dd2,
              aaa.bbb.ccc.dd3,
              aaa.bbb.ccc.dd4,
    # Google networks 
              64.18.0.0/20
              64.233.160.0/19
              66.102.0.0/20
              66.249.80.0/20
              72.14.192.0/18
              74.125.0.0/16
              173.194.0.0/16
              207.126.144.0/20
              209.85.128.0/17
              216.239.32.0/19
    smtpd_client_restrictions =
              permit_mynetworks
              permit_sasl_authenticated
    #  Comment out the spam blacklist sites since Google does spam filtering for you
    #          reject_rbl_client bl.spamcop.net
    #          reject_rbl_client zen.spamhaus.org
    #          permit
    #  If you get this far, reject because the IP address isn’t one of yours or Google’s
          reject
    The rest of the config file should be pretty much what you already have in place:
    recipient_delimiter = +
    smtpd_tls_ciphers = medium
    inet_protocols = all
    inet_interfaces = all
    config_directory = /Library/Server/Mail/Config/postfix
    smtpd_enforce_tls = no
    smtpd_use_pw_server = yes
    relayhost =
    smtpd_tls_cert_file =  your cert file path here
    mydomain = hidden.com
    smtpd_pw_server_security_options = cram-md5,digest-md5,login,plain
    smtpd_sasl_auth_enable = yes
    smtpd_helo_required = yes
    smtpd_tls_CAfile = your file path here
    content_filter = smtp-amavis:[127.0.0.1]:10024
    smtpd_recipient_restrictions =
         permit_mynetworks,
         permit_sasl_authenticated,
         check_policy_service unix:private/policy,
         reject_unauth_pipelining,
         reject_invalid_hostname,
         reject_unauth_destination,
         reject_unknown_recipient_domain,
         reject_non_fqdn_recipient,
         permit
    header_checks = pcre:/Library/Server/Mail/Config/postfix/custom_header_checks
    myhostname = host.hidden.com
    smtpd_helo_restrictions = reject_non_fqdn_helo_hostname reject_invalid_helo_hostname
    smtpd_use_tls = yes
    smtpd_tls_key_file = your path here
    enable_server_options = yes
    recipient_canonical_maps = hash:/Library/Server/Mail/Config/postfix/system_user_maps
    virtual_alias_maps = $virtual_maps hash:/Library/Server/Mail/Config/postfix/virtual_users
    virtual_alias_domains = $virtual_alias_maps hash:/Library/Server/Mail/Config/postfix/virtual_domains
    mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain, ipv6.$mydomain, public.com
    mailbox_transport = dovecot
    postscreen_dnsbl_sites = zen.spamhaus.org*2
    maps_rbl_domains =
    This config file should do the job of keeping out everyone but the Google email servers and devices on your WAN and LAN.
    Any suggestions to make this better or more efficient welcomed!

    After a few telnet tests I can answer my own question: it makes the server an open relay for spammers! But to solve the earlier issue with the connection refusal, I had to switch to virtual hosting in the advanced tab of the mail service and add my own domains.
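    The open relay follows from listing Google's ranges in mynetworks while smtpd_recipient_restrictions begins with permit_mynetworks: anything connecting from a Google address may then relay anywhere. One way to close it, sketched here with a hypothetical table file name, is to keep mynetworks restricted to your own LAN and ISP addresses and admit Google only at the client stage via a cidr lookup table:

    # main.cf — mynetworks now lists only your own networks
    mynetworks = 10.6.18.0/24, 127.0.0.0/8, aaa.bbb.ccc.dd1, aaa.bbb.ccc.dd2

    smtpd_client_restrictions =
              permit_mynetworks
              permit_sasl_authenticated
              check_client_access cidr:/Library/Server/Mail/Config/postfix/google_networks
              reject

    # /Library/Server/Mail/Config/postfix/google_networks (cidr table)
    64.18.0.0/20        OK
    64.233.160.0/19     OK
    66.102.0.0/20       OK
    # ...and so on for the remaining Google ranges listed above

    Because reject_unauth_destination still appears in smtpd_recipient_restrictions and Google's ranges no longer match permit_mynetworks, Google can deliver to your domains but can no longer relay through you.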

  • Selecting from a SQL Server 2005 with long column names (> 30 chars)

    Hi,
    I was able to set up a db link from Oracle 11.2.0.1 to SQL Server 2005 using DG4ODBC.
    My problem is that some column names in the Sql Server are longer than 30 chars and trying to select them gives me the ORA-00972: identifier is too long error.
    If I omit these columns the select succeeds.
    I know I can create a view in the sql server and query it instead of the original table, but I was wondering if there's a way to overcome it with sql.
    My select looks like this:
    select "good_column_name" from sometable@sqlserver_dblink -- this works
    select "good_column_name","very_long_column_name>30 chars" from sometable@sqlserver_dblink -- ORA-00972Thanks

    I tried creating a view with shorter column names but selecting from the view still returns an error.
    create view v_Boards as (select [9650_BoardId] as BoardId, [9651_BoardType] as BoardType, [9652_HardwareVendor] as
    HardwareVendor, [9653_BoardVersion] as BoardVersion, [9654_BoardName] as BoardName, [9655_BoardDescription] as BoardDescription,
    [9656_SlotNumber] as SlotNumber, [9670_SegmentId] as SegmentId, [MasterID] as MasterID, [9657_BoardHostName] as BoardHostName,
    [9658_BoardManagementUsername] as BoardManagementUsername, [9659_BoardManagementPassword] as BoardManagementPassword,
    [9660_BoardManagementVirtualAddress] as BoardManagementVirtualAddress, [9661_BoardManagementTelnetLoginPrompt] as
    MANAGEMENTTELNETLOGINPROMPT, [9662_BoardManagementTelnetPasswordPrompt] as MANAGEMENTTELNETPASSPROMPT,
    [9663_BoardManagementTelnetCommandPrompt] as MANAGEMENTTELNETCOMMANDPROMPT FROM Boards)
    Performing a select * from this view in SQL Server works and shows the short column names.
    This is the error I'm getting when performing a select * from v_boards@sqlserver_dblink:
    ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
    [Microsoft][SQL Native Client][SQL Server]Invalid column name 'BoardManagementTelnetLoginProm'. {42S22,NativeErr = 207}[Microsoft]
    [SQL Native Client][SQL Server]Invalid column name 'BoardManagementTelnetPasswordP'. {42S22,NativeErr = 207}[Microsoft][SQL Native
    Client][SQL Server]Invalid column name 'BoardManagementTelnetCommandPr'. {42S22,NativeErr = 207}[Microsoft][SQL Native Client][SQL
    Server]Statement(s) could not be prepared. {42000,NativeErr = 8180}
    ORA-02063: preceding 2 lines from sqlserver_dblink
    I also tried replacing the * with specific column names but it fails on the columns that have a long name (it doesn't recognize the short names from the view).
    what am I doing wrong?
    Edited by: Pyrocks on Dec 22, 2010 3:58 PM
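    A workaround that often gets past ORA-00972 with DG4ODBC is the gateway's pass-through interface, which sends the statement to SQL Server verbatim so Oracle never parses the long identifiers. A minimal PL/SQL sketch, reusing the db link and one long column name from the post:

    DECLARE
      c   BINARY_INTEGER;
      nr  INTEGER;
      val VARCHAR2(4000);
    BEGIN
      -- The statement below is parsed by SQL Server, not Oracle, so the
      -- 30-character identifier limit does not apply.
      c := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@sqlserver_dblink;
      DBMS_HS_PASSTHROUGH.PARSE@sqlserver_dblink(
        c, 'select [9661_BoardManagementTelnetLoginPrompt] from Boards');
      LOOP
        nr := DBMS_HS_PASSTHROUGH.FETCH_ROW@sqlserver_dblink(c);
        EXIT WHEN nr = 0;
        DBMS_HS_PASSTHROUGH.GET_VALUE@sqlserver_dblink(c, 1, val);
        DBMS_OUTPUT.PUT_LINE(val);  -- or insert into a local staging table
      END LOOP;
      DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@sqlserver_dblink(c);
    END;
    /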

  • Select from a linked server fails when executed as a join to database tables

    Hi
    I have been using the following TSQL to update a results table for several years; it is currently running on SQL Express version 11.0.2100.60 (I have successfully run this on various versions and editions from 2005 onwards).
    select c.Race_id , r.Runner_id , i.Time_secs
      from [CHAMPS INPUT]...Results I
      join  [dbo].[Race] c
      on rtrim(I.Race)COLLATE SQL_Latin1_General_CP1_CI_AS
      = rtrim(c.Race_name)COLLATE SQL_Latin1_General_CP1_CI_AS
      join  [dbo].[Runner] r
      on rtrim(I.[Name]) COLLATE SQL_Latin1_General_CP1_CI_AS
      = rtrim(r.First_Name)COLLATE SQL_Latin1_General_CP1_CI_AS + ' '
      + rtrim(r.Surname)COLLATE SQL_Latin1_General_CP1_CI_AS
     where i.Time_secs > 0
    This worked earlier today, then stopped working with no obvious change.
    No error is given, but no rows are returned.
    Running select * from [CHAMPS INPUT]...Results where Time_secs > 0 returns the expected rows.
    Any ideas where to look for the problem?

    Erland thanks for your reply. I have resolved the issue, it was caused by an error in updating the joined race table. I was looking for a problem with the linked server and missing the obvious.
    Hi philpits,
    You can try to use SQL Profiler to capture some events while running your query.
    In addition, can you get the expected result when you run the query on the remote server? Please also check your underlying tables.
    Regards,
    Elvis Long
    TechNet Community Support

  • No response from web cache server

    Hello,
    I have OCS installed on a Linux box. When I am trying to connect to the web conferencing home page through Windows, after the connectivity check, when I click on login it shows me the error "No response from web cache server".
    For web conferencing on Linux I have run the following scripts: ./dcmctl, ./imtctl and ./webcachectl; they show that the web cache server is running.
    What more do I have to do to get access to the web server?
    can anyone help me please?
    Regards,
    Sarita

    In the OAS Release 2 Web Cache Admin guide, you'll find that the solution related to that particular error is to increase the network timeout between the web cache and the origin server. Change it in the Resource Limits and Timeouts page of the ASControl.
    However, with all the internal errors that you are also getting, the problem is likely something else so increasing the timeout value might only mean it'll take longer before it tells you there's an error.

  • Remove documents from Google cache

    Please help
    I removed a PDF file from the website kttlaw.us. However, when you click on the Quick View option the PDF appears in Google Docs.
    Do you know how to remove or break that link? One can still see it under the View option.
    http://docs.google.com/viewer?a=v&q=cache:ryWJu6BBgCwJ:www.kttlaw.us/F%26A/Injury_AOECOE/HeneenvWestlake.pdf+leigh+ann+heneen&hl=en&gl=us&pid=bl&srcid=ADGEEShWCChSlhZr-n1QYSS6RrgNYO-05KySziufg2q0uc7ozUmzP-aOl6ixC-eAjXQ6sH9nfN4XNX191YnsQI6mYA3vCuO5k-HLcEEc-XYY8xGpJwZKxBkUw7ix4RPNrdplaMhKoJab&sig=AHIEtbTMOhJ24xi8EBo6VDJB9M7pqU71bw
    Thanks

    Contact Google Docs support:
    http://docs.google.com/support/bin/request.py?hl=en&ctx=docs&contact_type=contact_policy

  • I need help resolving issues with inbound mail on 10.8.5 server.

    Let's start from the beginning.
    I had a Mac Mini server running OS X 10.7 since 2011. I have a static IP and domain registered. I used it for mail, calendar, and web service.  It was working beautifully until a week ago.  Suddenly it stopped processing mail for me from google and apple managed domains.  There may be other domains, I do not know.
    I checked my external firewall and the correct ports are being forwarded (25, 587, 993).  Connected to a remote network, I can verify that nmap shows the ports as open.  I can telnet into the server on port 25 and send mail.  I checked with the ISP and they are not blocking/filtering those ports and the DNS they are hosting for me appears to be correct (unchanged from when it was working).  I've looked in the logs, but I'm not sure what I'm looking for, really.  I upgraded to 10.8.5 and server 2.2.2 last night in an attempt to rectify the situation but I'm still unable to receive mail from my other accounts (iCloud and gmail).
    I've been trying to troubleshoot this issue for a while now and I'm all out of ideas.  If anyone has any advice I'd really appreciate it.
    Thanks,
    Trevor

    Hi,
    I can send/receive mail locally.  I send mail to [email protected] from [email protected] and [email protected].  This works while on my LAN and connected to my work via VPN.
    I'm not listed on any blacklist, either by domain or IP using that tool.  The MX lookup tool at that site lists everything as OK, the MX record appears to be correct.  The SMTP test at that site shows a "failed to connect" error.  The exact error is:
    Connection attempt #1 - Unable to connect after 15 seconds. [15.04 sec]
    I'm not sure what I'm looking for in my log files.  I do not see any inbound connection attempts from google or apple domains when I try to send from my other e-mail accounts.
    when I run the dig command, I get the following output:
    dig @8.8.8.8 -t mx bakernet.ca
    ; <<>> DiG 9.8.5-P1 <<>> @8.8.8.8 -t mx bakernet.ca
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1983
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
    ;; QUESTION SECTION:
    ;bakernet.ca. IN MX
    ;; ANSWER SECTION:
    bakernet.ca. 3599 IN MX 10 mail.bakernet.ca.
    ;; Query time: 100 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Sat Jun 21 07:47:08 EDT 2014
    ;; MSG SIZE  rcvd: 50
    I don't see an A record here. My DNS is hosted by my ISP; my server is performing DNS lookups for my LAN.
    When I run dig from inside my LAN I get the following:
    ; <<>> DiG 9.8.5-P1 <<>> -t mx bakernet.ca
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 21448
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
    ;; QUESTION SECTION:
    ;bakernet.ca. IN MX
    ;; ANSWER SECTION:
    bakernet.ca. 10800 IN MX 10 mail.bakernet.ca.
    ;; AUTHORITY SECTION:
    bakernet.ca. 10800 IN NS www.bakernet.ca.
    ;; ADDITIONAL SECTION:
    mail.bakernet.ca. 10800 IN A 172.16.0.17
    www.bakernet.ca. 10800 IN A 172.16.0.17
    ;; Query time: 0 msec
    ;; SERVER: 127.0.0.1#53(127.0.0.1)
    ;; WHEN: Sat Jun 21 08:02:04 EDT 2014
    ;; MSG SIZE  rcvd: 100
    That does show an A record for the mail.bakernet.ca hostname.  Looks like my ISP is to blame?
    Trevor
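    A side note on reading that output: an empty ADDITIONAL section in the external query does not by itself prove the public A record is missing, since resolvers often omit additional data. Querying the A record directly settles it:

    dig @8.8.8.8 -t a mail.bakernet.ca +short

    Also note that the internal answer, 172.16.0.17, is a private (RFC 1918) address; that is expected from inside the LAN, but the record the ISP publishes externally must be your static public IP or outside servers will never connect.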

  • Query.cmd  -s -c -l "select * from ' cachename '" does not return results

    We are working with Coherence 3.6.1 on an RH5 server. We are connecting to the grid and are able to query a cache through the command line. However, when we attempt to query a cache using the following command, we receive only the CohQL prompt:
    query.cmd -s -c -l "select * from '<a cache>'"
    This works on the UNIX side with the query.sh command.
    Additionally:
    redirect to a file = same issue
    input from a file = same issues
    input from a file, redirect to a file = same issue
    etc....
    The examples are OK, it just does not work as expected.
    Thanks!!!

    We have been able to get the file-input-to-file-output form to work:
    query.cmd -s -t -f < myinputfile -f > myoutputfile
    but still cannot get
    query.cmd -l "select * from 'mycache'" to work.

  • Mavericks Caching Server very slow on gigabit Microsoft Network

    Specs.  OS X 10.9.5 / Mac mini / 8 GB Ram - Connected to Gigabit ethernet, 2 TB Hard Drives, 2.3 Ghz i7
    Only a few days old.  I have done all available updates that Apple released.
    I had some difficulty getting the Mavericks caching server to work in the first place on our Windows Server 2012 network. Caching would work for the Server itself but none of my iPads or other Windows machines would cache or pull from the cache.
    My workaround was to manually create a DNS host name and PTR on the Windows Server so that we could actually nslookup and ping the Mavericks server by hostname.domain.local. This immediately fixed the issue and all hosts can cache and pull from the cache.
    Speed is an issue. I currently cannot get any transfer speeds over 2 MB/s in either of these cases:
    - Using the caching server to deploy an app (I verified that the internet connection was not being used and that it was coming from the caching server)
    - Connecting to a file share on the Windows Server and manually copy/pasting files
    I verified the slowness using Activity Monitor on the Mavericks server.
    I have researched and found many similar complaints about the SMB2 protocol. I did not find a Terminal command that I felt comfortable applying and then undoing at a later date when Apple fixes the slowness issue.
    Any help is greatly appreciated.
    Does anybody know the protocol that Mavericks uses by default to send out the cached updates etc (SMB)?
    If you know the correct command to paste into Terminal to force Mavericks to use SMB1 and also the command to undo this change I am willing to try that as well.
    Thank you in advance,
    TechJeff

    I tweaked the DNS a little bit more...Under DHCP IPv4 Properties --> DNS Tab.  I selected  "Dynamically update DNS A and PTR records for DHCP clients that do not request updates".  This allowed our DNS to register the Macintosh server and ipads etc.  My SMB2 speed is up to 300 mbps from 16 mbps.
    My caching server transfers are still maxing out at 2.1 MBps. I have switched out my wireless unit to a 150 mbps unit that the iPad is connected to. My next troubleshooting step will be to download an app in iTunes on a computer that is plugged in to our gigabit ethernet on the same switch as the Mac server to eliminate any potential bottlenecks or wireless issues.
    Still looking for ideas if anybody has troubleshooting ideas.
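    On the two open questions above: the caching server vends its content to clients over HTTP, not SMB, so forcing SMB1 will only affect file-share copies, not cached app downloads. For the SMB side, a commonly circulated and easily reversible tweak for 10.9 clients (offered as a sketch, not an Apple-documented fix) is an nsmb.conf entry:

    # Force SMB1 for connections this Mac makes as an SMB client (OS X 10.9)
    printf '[default]\nsmb_neg=smb1_only\n' >> ~/Library/Preferences/nsmb.conf

    # To undo, delete the file (or just the smb_neg line)
    rm ~/Library/Preferences/nsmb.conf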

  • Cache server getcert http request issue

    Hi,
    We are using a third-party repository for document archiving and storage, and we have a separate application for connecting the third-party repository to SAP.
    We have installed the content server and cache server on the system where SAP is installed.
    Now we want to run a getcert HTTP request from our application so that we will get the certificate from the SAP cache server. What do we need to do in order to achieve this? In other words, how can we make the connection between the cache server and the third-party repository?
    We have verified the following URLs
    http://10.224.1.37:1090/ContentServer/ContentServer.dll?serverInfo
    http://10.224.1.37:1095/Cache/CSProxyCache.dll?serverInfo
    (Where 10.224.1.37 is the IP address of the system where SAP is installed)
    for content server and cache server respectively.
    The URL for the cache server gives the correct server information, but the URL for the content server is not showing any server information at all, even after running for a long time.
    Could you please tell me the step-by-step procedure and configuration steps for running a getcert request to the cache server? We want to know how to send the getcert request to the third-party content management system from the cache server.
    We have given the following URL in the HTTP GET function module, but we are getting a 400 Bad Request response:
    http://10.224.1.37:1095/Cache/CSProxyCache.dll?getcert&pversion=0046&conrep=RH
    Where RH is pointing to the third party content server (through transaction oac0).
    Thanks,
    Ravi

    Hi All,
    I am also facing the same problem, Please help us out to solve this issue.
    Thanks in advance
    Regards
    Harshavardhan.G
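    One detail worth checking, offered only as an assumption: the SAP Content Server HTTP interface documents its query parameters in mixed case (contRep, pVersion), and lower-cased names such as conrep can be enough to draw a 400 Bad Request on some releases. The recased request would be:

    http://10.224.1.37:1095/Cache/CSProxyCache.dll?getCert&pVersion=0046&contRep=RH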

  • Optimal way to retrieve data from a linked server?

    Hi,
    If I create a view for our "Support Calls" list inside our database the SQL code looks like this...
    Code Snippet
    SELECT CALL.REFERENCE AS CallRef,
     CALL.CUSTOMERREF AS CustomerRef,
     CUSTOMER.NAME,
     DATEADD(hh,CALL.OPENHRS, DATEADD(n, CALL.OPENMINS, CALL.OPENDATE)) AS Opened,
     DATEADD(hh, CALL.CLOSEHRS, DATEADD(n, CALL.CLOSEMINS, CALL.CLOSEDATE)) AS Closed,
     DATEADD(hh, CALL.ACTHRS, DATEADD(n, CALL.ACTMINS, CALL.ACTDATE)) AS Actioned,
     CALL.PRODUCTREF,
     SUPPROD.NAME AS ProductName,
     CODES_2.CATEGORY AS CallType,
     REPLACE(CODES_3.CATEGORY, '£ ', '') AS CallStatus,
     CODES_4.CATEGORY AS CallPriority,
     CALL.TITLE,
            CALL.CALLER,
     CALL.HANDLERTAG,
     CODES_1.CATEGORY AS SLA,
            CODES.CATEGORY AS ActionCategory,
     CALL.szemail,
     CALL.szphone,
     CALL.szline,
            CALL.szresolvedby
    FROM    CALL
    INNER JOIN CODES ON CALL.ACT = CODES.NO
    INNER JOIN CODES AS CODES_1 ON CALL.CATAREA = CODES_1.NO
    INNER JOIN CODES AS CODES_2 ON CALL.CATTYPE = CODES_2.NO
    INNER JOIN CODES AS CODES_3 ON CALL.CATSTATUS = CODES_3.NO
    INNER JOIN CODES AS CODES_4 ON CALL.CATPRIORITY = CODES_4.NO
    INNER JOIN SUPCUST AS CUSTOMER ON CALL.CUSTOMERREF = CUSTOMER.REFERENCE
    INNER JOIN SUPPROD ON CALL.PRODUCTREF = SUPPROD.REFERENCE
    Support calls that are closed are simply moved from the CALL table, to a table called ARCHIVED_CALL which is identical in structure.
    So, if I want a single view to present all support calls, the query string would be twice as long as the one above unless I create two views (one for each database) and use a third view to UNION them.
    Thing is, I can't create the views inside the enterprise database (political reasons); I have to do it from another server.
    The enterprise system is on a quad core server on the same gigabit switch as the dual core server hosting the SQL instance that is retrieving the data.
    I tried creating a view based on an openquery statement e.g.
    Code Snippet
    SELECT * FROM OPENQUERY([ENTERPRISE SERVER],
            'SELECT ..... FROM CALL UNION ALL SELECT ..... FROM ARCHIVED_CALL')
    Can't say I'm impressed. The returned dataset has under 4000 records in it, but it takes more than 15 seconds to execute the query.
    This is by far the smallest dataset that we need to work with - the support call notes tables (live and archived) contain over 60,000 records in total. I can boil the kettle, make a pot of tea, leave it to brew and then pour when ready, before the view finishes doing its job!
    I tried using SPs to populate a local table with the data returned by the OPENQUERY, but this turned out to be a real headache when handling situations like users updating or archiving support calls; the only way that seemed to work was by dropping all records and then reimporting them and that didn't seem to be any faster.
    Any ideas?

    Hi,
    Sorry for the delay in getting back to you all. I've got a bit of expanded info for you which'll probably help you understand the situation better.
    We have two databases, call 'em Support and Sales for the sake of argument, and they format data differently so I have a proof-of-concept SP which gets just the customer names and references from each table, then creates a temp table formatted as follows:
    Customer ------- Support Reference - Sales Reference
    ABC SITE1        ABC                 ABC1
    ABC SITE2        ABC                 ABC2
    ABD CUSTOMER                         ABD1     
    BBB SUPPORTONLY  BBB  
    So in Sharepoint, for example, someone does a customer name search and they will get a list of customers who fit the bill, along with links to the various customer records.
    And the code I'm predominately concerned with, is this bit of it:
    Code Snippet
    create table #SupportIDs (
        [Customer Name] varchar(120),
        [Support Reference] varchar(16)
    )
    create table #SalesIDs (
        [Customer Name] varchar(120),
        [Sales Reference] varchar(16)
    )
    INSERT INTO #SupportIDs
        SELECT [Customer Name], [Support Reference] FROM  
        OPENQUERY([SUPPORTSERVER],        'SELECT NAME AS [Customer Name],
                                    REFERENCE AS [Support Reference]
                                    FROM Support.CUSTOMERS')
    INSERT INTO #SalesIDs
        SELECT [Customer Name],
            [Sales Reference] FROM  
        OPENQUERY([SALESSERVER], 'SELECT name as [Customer Name], customer as [Sales Reference]
    FROM         Sales.CUSTOMERLIST')
    SELECT dbo.Neaten([Customer Name]),
       dbo.Neaten([Support Reference]),
       '' AS [Sales Reference]
    FROM #SupportIDs
    UNION ALL
    SELECT dbo.Neaten([Customer Name]),
       '' AS [Support Reference],
       dbo.Neaten([Sales Reference]) FROM #SalesIDs
    -- dbo.Neaten performs left & right trim, strips out illegal characters, and then converts to uppercase.
    It takes no time at all to drag the data into #SupportIDs (<1 second), but it takes >10 seconds to do the same with the data from #SalesIDs, purely because the Sales database is hosted on a SQL 2000 instance on an old server the other side of a slow WAN link from here.
    In total, there are less than 2750 records returned from the SP in the UNION statement, 2100 of which are coming from the #SalesIDs temp table.
    The function Neaten just strips out white space and capitalises the value.
    If I can store this dataset in a local table, and have a trigger on the SQL instance which refreshes the table once a day, then I think we could probably live with it.
    The ADF for BDC would only need to point to the local table if that were the case.
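    If the once-a-day local snapshot is the route you take, note that a trigger cannot fire on a schedule; a SQL Server Agent job is the usual mechanism. A minimal sketch of the job step, assuming a permanent table dbo.CustomerIDs and wrapping the proof-of-concept SP (here given the hypothetical name dbo.GetCustomerIDs):

    BEGIN TRY
        BEGIN TRANSACTION;
        TRUNCATE TABLE dbo.CustomerIDs;
        -- INSERT ... EXEC captures the SP's final UNION ALL result set
        INSERT INTO dbo.CustomerIDs ([Customer Name], [Support Reference], [Sales Reference])
        EXEC dbo.GetCustomerIDs;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Keep yesterday's snapshot if the slow WAN link fails mid-refresh
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    END CATCH;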

  • When using the OS X caching server, after the first update is made, do the rest of the computers have to enter the Apple ID to download the cached update

    Hello,
    I have a question that I have not found an answer to yet. I would like to use the OS X caching server, and I know that it caches a software update as soon as the first computer downloads it. If an application requires authentication using an Apple ID, and the first computer authenticates and downloads that update, do the rest of the computers have to authenticate as well, even though the update is downloaded from the caching server?
    I am looking for a solution in that regard. For the company that I work for, I would like a solution for users to be able to download software updates (OS or application) without having to authenticate with the company Apple ID, for security reasons.
    Would anybody be able to help?

    Yes.  You still need to authenticate on the subsequent machines if you are interactively applying the updates.  Caching server stores the application data, not the App Store authorization information.
    You have some alternatives.
    1:  If you are using a single "corporate" Apple ID to claim your free apps (iMovie, iPhoto, etc), then you can download them on your master machine and then use ARD, JAMF, or another tool to push the full application packages out to all the clients.  They will already be tagged to the Apple ID and since the push is happening in the background, as long as the user is not using the application, they will not know you are updating.  Remember, App Store updates are the entire application, not update packages.  So pushing the entire .app bundle is effective.
    2:  If you are looking to distribute purchased apps, you really should look at the VPP (volume purchase plan) from Apple (http://www.apple.com/business/vpp/) (there is also one for education but you mentioned company).  This allows you as the organization to purchase the correct number of copies and then control the distribution of the apps to your end users.  They can use their own Apple ID to claim the apps but you can reclaim the license should the user leave the organization.
    Reid
    Apple Consultants Network
    Apple Professional Services
    Author "Mavericks Server – Foundation Services" :: Exclusively available in Apple's iBooks Store

  • Stale Near Cache data when all Cache Server fails and are restarted

    Hi,
    We are currently making use of Coherence 3.6.1. We are seeing an issue. The following is the scenario:
    - caching servers and proxy server are all restarted.
    - one of the caches, say "TestCache", is bulk loaded with some data (e.g. key/values: key1/value1, key2/value2, key3/value3) on the caching servers.
    - near cache client connects onto the server cluster via the Extend Proxy Server.
    - near cache client is primed with all data from the cache server "TestCache". Hence, the near cache client now has all key/values locally (i.e. key1/value1, key2/value2, key3/value3).
    - all caching servers in the cluster go down, but the extend proxy server is OK.
    - all cache servers in the cluster come back up.
    - we reload all cache data into "TestCache" on the cache server, but this time it only has key/values: key1/value1, key2/value2.
    - So the caching server's state for "TestCache" is that it should only have key1/value1, key2/value2, but the near cache client still thinks it has key1/value1, key2/value2, key3/value3. So in effect, it still knows about key3/value3, which no longer exists.
    Is there any way for the near cache client to invalidate key3/value3 automatically? This scenario happens because the extend proxy server is not actually down, but all caching servers are; the near cache client for some reason doesn't know about this and does not invalidate its near cache data.
    Can anyone help?
    Thanks
    Regards
    Wilson.

    Hi,
    I do have the invalidation strategy set to "ALL". Remember this cache client is connected via the Extend proxy server, whose connectivity is still OK; just the caching servers holding the storage data in the cluster are all down.
    Please let me know what else we can try.
    Thanks
    Regards
    Wilson.
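    A workaround sketch, offered with the caveat that it sidesteps rather than fixes the event gap: since the Extend connection survives while the storage members bounce, the client can explicitly drop its local front map after the reload so stale entries like key3/value3 are re-read from the back tier. Class names are from the standard Coherence API; detecting when the reload has happened is left to you.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;

    public class NearCacheReset {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("TestCache");
            if (cache instanceof NearCache) {
                // Clears only the local front map; the distributed back cache
                // (with its freshly reloaded data) is left untouched.
                ((NearCache) cache).getFrontMap().clear();
            }
            CacheFactory.shutdown();
        }
    }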

  • Mavericks Caching server 2 for all Apple IDs?

    Good evening,
    On Mavericks Server 10.9.1, does Caching Server 2 cache for any Apple ID that is on the local network?
    Francois.

    The application and the authorization are independent.  The app/book are just data.  When the first system requests it, the file is transferred to the caching server and held there as it is passed to the client that requested it.  iTunes, the App Store, or the iOS device will then talk to the authorization server and assign the asset to the user's account.  When the next user requests the same file, it can now be vended from the caching server, but authorization is handled by Apple's servers.  So there is still an outbound request, but this is trivial compared to the size of the payload.
    One of the best ways to see this in action is with a large OS update.  Update one device.  Then, when complete, review your caching server to see that the data is there.  Then update another system.  It downloads in the blink of an eye.
    R-
    Apple Consultants Network
    Apple Professional Services
    Author "Mavericks Server – Foundation Services" :: Exclusively available in Apple's iBooks Store
