Scale out SSAS server for better performance

Hi,
I have a SharePoint farm running PerformancePoint Services on a server where Analysis Services and Reporting Services are installed, and we have the Analysis Services databases and cubes there, plus a WFE server where the Secure Store Service is running.
We have:
1) one application server + domain controller
2) two WFEs
3) one SQL Server for SharePoint
4) one SSAS server (Analysis Services databases + Reporting Services)
How can I scale out my SSAS server for better performance?
Adil

Just trying to get a definitive answer to the question: can we use a shared VHDX in a SOFS cluster that will be used to store VHDX files?
We have a 2012 R2 RDS solution and store the User Profile Disks (UPDs) on a SOFS cluster that uses "traditional" storage from a SAN. We are planning on creating a new SOFS cluster and wondered if we can use a shared VHDX instead of a CSV as the storage that will then be used to store the UPDs (one VHDX file per user).
Cheers for now
Russell
Sure you can do it. See:
Deploy a Guest Cluster Using a Shared Virtual Hard Disk
http://technet.microsoft.com/en-us/library/dn265980.aspx
Scenario 2: Hyper-V failover cluster using file-based storage in a separate Scale-Out File Server
This scenario uses Server Message Block (SMB) file-based storage as the location of the shared .vhdx files. You must deploy a Scale-Out File Server and create an SMB file share as the storage location. You also need a separate Hyper-V failover cluster.
The following table describes the physical host prerequisites.
Cluster Type
Requirements
Scale-Out File Server
At least two servers that are running Windows Server 2012 R2.
The servers must be members of the same Active Directory domain.
The servers must meet the requirements for failover clustering.
For more information, see Failover Clustering Hardware Requirements and Storage Options and Validate Hardware for a Failover Cluster.
The servers must have access to block-level storage, which you can add as shared storage to the physical cluster. This storage can be iSCSI, Fibre Channel, SAS, or clustered storage spaces that use a set of shared SAS JBOD enclosures.
StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

Similar Messages


  • Cleaning/ clearing out my MacBook for better performance

    How can I clean out some of the temporary files and/or cache from my MacBook (2008)? I think it could be why it's slow sometimes. Any help would be appreciated. Thanks!

    Check the status of your hard drive: Control-click the hard drive icon and select Get Info. On the new pane, the critical numbers are Capacity and Available.
    Empty the Trash.
    Delete emails you do not want, then go to Mailbox (in the menu bar) / Erase Deleted Items / In All Accounts.
    Safari / Empty Cache.
    P.S. Under no circumstances should you use software that claims it will clean things up: MacCleaner, MacKeeper, MacSweeper, whatever it's called. You do not want to use it.

  • Windows Server 2012 R2 Scale out file server cluster using server disks as CSV

    Hi,
    My question is whether I can create a Scale-Out File Server cluster with a CSV using the disks that come with the servers. We have 2 servers with 2 arrays each: 1 array for the OS files and 1 array that we could use for the CSV.
    Regards.

    Hi,
    a SoFS needs some kind of shared storage; in the old days this could be an iSCSI or FC SAN, and now also a shared SAS JBOD with Clustered Storage Spaces.
    If you have 2 servers with "local" disks, you need some sort of software to create a shared disk layer out of those local disks, like StarWind or DataCore.
    Scale-Out File Server for Application Data Overview
    http://technet.microsoft.com/en-us/library/hh831349.aspx
    Check out Step 1: Plan for Storage in Scale-Out File Server.
    Oh, I forgot the usual fourth option: some kind of clustered RAID controller, like HP or Dell offer in some solutions.
    Udo
    Udo, clustered RAID controllers still require SAS disks mounted in an external enclosure *OR* everything mounted in a single chassis for the Cluster-in-a-Box scenario (but that's for OEMs). The only two benefits somebody would get from them are a) the ability to provide RAID LUs to Clustered Storage Spaces (non-clustered RAID controllers would be used as SAS controllers only, in pass-through mode) and b) the ability to have the caches synchronized, so a VM moved from one physical host to another would not start from a "cold" state. Please see the LSI Syncro reference manuals for details:
    Syncro 8i
    http://www.lsi.com/downloads/Public/Syncro%20Shared%20Storage/docs/LSI_PB_SyncroCS_9271-8i.pdf
    "LSI Syncro CS solutions are designed to provide continuous application uptime at a fraction of the cost and complexity of traditional high availability solutions. Built on LSI MegaRAID technology, the Syncro CS 9271-8i enables OEMs and system builders to use Syncro CS controllers to build cost-effective two-node Cluster-in-a-Box (CiB) systems and deliver high availability in a single self-contained unit."
    Syncro 8e
    http://www.lsi.com/downloads/Public/Syncro%20Shared%20Storage/docs/LSI_PB_SyncroCS_9286-8e.pdf
    "LSI Syncro CS solutions are designed to provide continuous application uptime at a fraction of the cost and complexity of traditional high availability solutions. Built on LSI MegaRAID technology, the Syncro CS 9286-8e solution allows a system administrator to build a cost-effective, easy to deploy and manage server failover cluster using volume servers and an off-the-shelf JBOD. Syncro CS solutions bring shared storage and storage controller failover into DAS environments, leveraging the low cost DAS infrastructure, simplicity, and performance benefits. Controller to controller connectivity is provided through the high performance SAS interface, providing the ability for resource load balancing, helping to ensure that applications are using the most responsive server to boost performance and help prevent any one server from being overburdened."
    So, making a long story short: the 8i is for OEMs and the 8e is for end users, but it requires a JBOD.
    Hope this helped :)

  • Scale out file server client access point using public nic

    Thoughts on this one.
    I have a Scale-Out File Server cluster with a Client Access Point. Whenever I talk to the Client Access Point, it uses the public NICs.
    If I talk to the Scale-Out File Server directly, it uses the private NICs like I want it to. How can I get the Client Access Point to use the private NICs?

    Hi JustusIV,
    Could you tell us why you want the CAP to use the "private" network? The CAP is used for client access, and your clients may not be able to access your cluster if you modify your CAP to use the private network. If you want to know how to modify the CAP of a cluster, you can refer to the following KB:
    Modify Network Settings for a Failover Cluster
    http://technet.microsoft.com/en-us/library/cc725775.aspx
    More information:
    Understanding Access Points (Names and IP Addresses) in a Failover Cluster
    http://technet.microsoft.com/en-us/library/cc732536.aspx
    Windows Server 2008 Failover Clusters: Networking (Part 4)
    http://blogs.technet.com/b/askcore/archive/2010/04/15/windows-server-2008-failover-clusters-networking-part-4.aspx
    Hope this helps.

  • How to backup a Scale Out File Server

    Can anyone explain to me how I am supposed to back up my Scale-Out File Server? It is used to host our VDI environment, but we have put a few other minimal shares on there for stuff relevant to the VDI setup (profile disks, scripts, etc.).
    Per this topic http://social.technet.microsoft.com/Forums/en-US/15561919-963b-4164-a307-e3696f5d21e1/adding-scaleout-file-servers-to-dpm2012-sp1?forum=dpmfilebackup
    "DPM does not support protecting scale-out file servers (SOFS) for file shares."
    I do not really know how to proceed to make sure everything is backed up.

    Hi,
    Scale-Out File Server support in DPM is for VMs (and SQL databases, I think) only, and to be honest, because of the metadata overhead, using a Scale-Out File Server with CSV as a general file server is not a good choice. To protect VMs running on a Scale-Out File Server, the VMs are backed up from the Hyper-V hosts, but the Scale-Out File Servers do require a DPM agent on them. SMB 3.0 has a new feature that enables remote VSS, so the volumes (shares) are snapshotted from the Hyper-V hosts.
    This document will give you the basic steps (http://technet.microsoft.com/en-us/library/hh757866.aspx)
    hope this helps,
    Regards,
    Paul

  • How to size a Scale-out File Server

    Hi,
    We are looking to implement a 2-node 2012 R2 Scale-out File Server cluster (using SAS JBOD enclosure) for the primary purpose of storing the VHD files that will be accessed by a 4-node 2012 R2 Hyper-V cluster using 10 gigabit Ethernet (no RDMA).  Our
    environment can be characterised as having a large number of mostly idle VMs that experience sporadic, low intensity use (this is *not* a VDI environment).  We have 2 questions.
    1) To what extent is RAM a consideration for the SoFS servers?  We can't find any documentation to suggest that there are benefits to be gained by having more RAM in the SoFS servers but we don't know if we should go with 8/16/32/64+ GB RAM in each
    of the nodes.
    2) With the need to keep costs down, we don't think RDMA / SMB-Direct NICs are going to be within our reach.  Should we however look to have 2 * dual-port 10 Gbps NICs in both the SoFS & Hyper-V boxes?

    Unless your VMs are read-intensive and you're going to deploy a CSV cache, the memory requirement for serving mostly idle VMs can be pretty low. However, RAM is cheap these days, so going for less than 16 GB per node does not sound reasonable. For a good sample, see:
    Windows Server 2012 File Server Tip: Enable CSV Caching on Scale-Out File Server Clusters
    http://blogs.technet.com/b/josebda/archive/2012/11/14/windows-server-2012-file-server-tip-enable-csv-caching-on-scale-out-file-server-clusters.aspx
    How to Enable CSV Cache
    http://blogs.msdn.com/b/clustering/archive/2013/07/19/10286676.aspx
    HYPER-V OVER SMB: SCALE-OUT FILE SERVER AND STORAGE SPACES
    http://www.thomasmaurer.ch/2013/08/hyper-v-over-smb-scale-out-file-server-and-storage-spaces/
    Hope this helped :)

  • Please help to modify this query for better performance

    Please help to rewrite this query for better performance; it is taking a long time to execute.
    Table t_t_bil_bil_cycle_change contains 1,200,000 rows and table t_acctnumberTab contains 200,000 rows.
    I have created an index on ACCOUNT_ID.
    Query is shown below
    update rbabu.t_t_bil_bil_cycle_change a
       set account_number =
           ( select distinct b.account_number
             from rbabu.t_acctnumberTab b
             where a.account_id = b.account_id );
    Table structure  is shown below
    SQL> DESC t_acctnumberTab;
    Name           Type         Nullable Default Comments
    ACCOUNT_ID     NUMBER(10)                            
    ACCOUNT_NUMBER VARCHAR2(24)
    SQL> DESC t_t_bil_bil_cycle_change;
    Name                    Type         Nullable Default Comments
    ACCOUNT_ID              NUMBER(10)                            
    ACCOUNT_NUMBER          VARCHAR2(24) Y    

    Ishan's solution is good. I would avoid updating rows which already have the right value - it's a waste of time.
    You should have a UNIQUE or PRIMARY KEY constraint on t_acctnumberTab.account_id.
    merge into rbabu.t_t_bil_bil_cycle_change a
    using ( select distinct account_number, account_id
            from   rbabu.t_acctnumberTab
          ) b
    on    ( a.account_id = b.account_id )
    when matched then
      update set a.account_number = b.account_number
      where decode(a.account_number, b.account_number, 0, 1) = 1;
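As a side note, the "only touch rows that actually change" idea can be demonstrated outside Oracle. Below is a minimal, hypothetical sketch using Python's built-in sqlite3 module (the table contents are invented for the demo, and SQLite has no MERGE, so a guarded correlated UPDATE plays the same role):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cycle_change (account_id INTEGER, account_number TEXT);
    CREATE TABLE acct (account_id INTEGER PRIMARY KEY, account_number TEXT);
    INSERT INTO cycle_change VALUES (1, NULL), (2, 'A2'), (3, 'WRONG');
    INSERT INTO acct VALUES (1, 'A1'), (2, 'A2'), (3, 'A3');
""")

# Update only rows whose number is missing or stale; the row that is
# already correct (account_id = 2) is never written, saving wasted work.
cur = conn.execute("""
    UPDATE cycle_change
    SET account_number = (SELECT b.account_number
                          FROM acct b
                          WHERE b.account_id = cycle_change.account_id)
    WHERE EXISTS (SELECT 1
                  FROM acct b
                  WHERE b.account_id = cycle_change.account_id
                    AND (cycle_change.account_number IS NULL
                         OR cycle_change.account_number <> b.account_number))
""")
print(cur.rowcount)  # 2 -- only the NULL and stale rows were touched
print(conn.execute(
    "SELECT account_number FROM cycle_change ORDER BY account_id").fetchall())
```

The point in both cases is the same: rows already holding the correct account_number generate no write at all.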

  • What is the best way to replace inline views for better performance?

    Hi,
    I am using Oracle 9i.
    What is the best way to replace inline views for better performance? I see a lot of performance problems with inline views in my queries.
    Please suggest.
    Raj

    The WITH clause plus the /*+ MATERIALIZE */ hint can do you good.
    See the test case below.
    SQL> create table hx_my_tbl as select level id, 'karthick' name from dual connect by level <= 5
    2 /
    Table created.
    SQL> insert into hx_my_tbl select level id, 'vimal' name from dual connect by level <= 5
    2 /
    5 rows created.
    SQL> create index hx_my_tbl_idx on hx_my_tbl(id)
    2 /
    Index created.
    SQL> commit;
    Commit complete.
    SQL> exec dbms_stats.gather_table_stats(user,'hx_my_tbl',cascade=>true)
    PL/SQL procedure successfully completed.
    Now, this is a normal inline view:
    SQL> select a.id, b.id, a.name, b.name
    2 from (select id, name from hx_my_tbl where id = 1) a,
    3 (select id, name from hx_my_tbl where id = 1) b
    4 where a.id = b.id
    5 and a.name <> b.name
    6 /
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=2 Bytes=48)
    1 0 HASH JOIN (Cost=7 Card=2 Bytes=48)
    2 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    3 2 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    4 1 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    5 4 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    Now I use the WITH clause with the MATERIALIZE hint:
    SQL> with my_view as (select /*+ MATERIALIZE */ id, name from hx_my_tbl where id = 1)
    2 select a.id, b.id, a.name, b.name
    3 from my_view a,
    4 my_view b
    5 where a.id = b.id
    6 and a.name <> b.name
    7 /
    Execution Plan
    0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=8 Card=1 Bytes=46)
    1 0 TEMP TABLE TRANSFORMATION
    2 1 LOAD AS SELECT
    3 2 TABLE ACCESS (BY INDEX ROWID) OF 'HX_MY_TBL' (TABLE) (Cost=3 Card=2 Bytes=24)
    4 3 INDEX (RANGE SCAN) OF 'HX_MY_TBL_IDX' (INDEX) (Cost=1 Card=2)
    5 1 HASH JOIN (Cost=5 Card=1 Bytes=46)
    6 5 VIEW (Cost=2 Card=2 Bytes=46)
    7 6 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
    8 5 VIEW (Cost=2 Card=2 Bytes=46)
    9 8 TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D6967_3C610F9' (TABLE (TEMP)) (Cost=2 Card=2 Bytes=24)
    Here you can see the table is accessed only once; after that, only the result set generated by the WITH clause is accessed.
    Thanks,
    Karthick.

  • My mac says connect to a faster usb for better performance

    Every time I connect my iPod, the Mac says:
    "The iPod is connected to a low-speed USB 1.1 port. For better performance, you should connect it to a high-speed USB 2.0 port if one is available on your computer."
    What's that about?

    It means exactly what it says.
    This message pops up when you connect a device (the iPod) that works with USB 2.0 to a USB 1.1 port.
    USB 1.1 is 12 megabits per second.
    USB 2.0 is 480 megabits per second (40 times faster).
    For better performance, use a USB 2.0 port if your computer has one.
    Unfortunately, your iBook G3 does not have USB 2.0.

  • Scale out SharePoint Server Search to an index server and Microsoft SharePoint Foundation Web Application to a new server

    Hi,
    In my single server, these services are running:
    SharePoint Server Search
    User Profile Service
    Microsoft SharePoint Foundation Web Application
    How can I scale out the Microsoft SharePoint Foundation Web Application to another server and SharePoint Server Search to a new index server?
    Adil

    Check here to see how to add servers to the farm:
    http://technet.microsoft.com/en-us/library/cc261752(v=office.15).aspx
    What Inderjeet meant was that if you chose to install SharePoint as a single server instead of a complete farm, you won't be able to add servers to the farm at a later time. You can check if this is the case by looking at the registry key he mentioned. If that is indeed the case, there's nothing for it but to reinstall the farm.
    That is why you should never choose the single-server option: it locks you in and limits future options, while if you install SharePoint as a complete farm, it is still perfectly acceptable to host it on a single server.
    Kind regards,
    Margriet Bruggeman
    Lois & Clark IT Services
    web site: http://www.loisandclark.eu
    blog: http://www.sharepointdragons.com

  • I need a clarification : Can I use EJBs instead of helper classes for better performance and less network traffic?

    My application was designed based on the MVC architecture, but I made some changes to it based on my requirements. The servlet invokes helper classes, and the helper classes use EJBs to communicate with the database. The JSPs also use EJBs to retrieve results.
    I have two stateless EJBs, one servlet, nearly 70 helper classes, and nearly 800 JSPs. The servlet acts as the controller, and all database transactions are done through the EJBs only. The helper classes contain the business logic: based on the request, the relevant helper class is invoked by the servlet. Session scope is 'page' only.
    Now I am planning to use EJBs (for the business logic) instead of the helper classes. But before doing that, I need some clarification regarding network traffic and better usage of container resources.
    Please suggest which method (helper classes or EJBs) is preferable
    1) to get better performance,
    2) for less network traffic, and
    3) for better container resource utilization.
    I thought that if I use EJBs, the network traffic will increase, because every call to an EJB is a remote call.
    Please give a detailed explanation.
    Thank you,
    Sudheer

    <i>Please suggest which method (helper classes or EJBs) is preferable:
    1) to get better performance</i>
    EJBs have quite a lot of overhead associated with them to support transactions and remoteability. A non-EJB helper class will almost always outperform an EJB, often considerably. If you plan on making your 70 helper classes EJBs, you should expect to see a dramatic decrease in maximum throughput.
    <i>2) for less network traffic</i>
    There should be no difference. Both architectures will probably make the exact same JDBC calls from the RDBMS's perspective. And since the EJBs and JSPs are co-located, there won't be any additional overhead there either. (You are co-locating your JSPs and EJBs, aren't you?)
    <i>3) for better container resource utilization</i>
    Again, the EJB version will consume a lot more container resources.

  • Tuning a SQL subquery for better performance

    I have been given the task of tuning this query for better execution, as it presently takes a very long time to execute. I will appreciate it if someone can help.
    Thank you.
    SELECT a.fid_trx_no, a.fid_seq_no, a.bcc_customer_account,
           a.bcc_msid, a.fid_trx_date, a.fid_trx_type,
           a.fid_trx_initial_amount, a.fid_trx_fidodollar_amount
    FROM   SPOT.SPOT_FIDODOLLAR_TRX a
    WHERE  a.fid_trx_status = 'PE'
    AND    a.fid_seq_no =
           (SELECT MAX(c.fid_seq_no)
            FROM   SPOT.SPOT_FIDODOLLAR_TRX c
            WHERE  c.fid_trx_no = a.fid_trx_no)
    AND    a.fid_trx_date <
           (SELECT MAX(b.fid_trx_date)
            FROM   SPOT.SPOT_FIDODOLLAR_TRX b
            WHERE  b.bcc_customer_account = a.bcc_customer_account
            AND    b.fid_trx_type IN
                   (SELECT par_code
                    FROM   SPOT.spot_parameter
                    WHERE  par_value = :vAccountType
                    AND    par_type = 'FC'));

    Rob...
    You post this link so many times... I think Oracle should put it in an obvious place!
    Greetings,
    Sim
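For what it's worth, the usual tuning approach for this shape of query is to replace the correlated MAX() subqueries with analytic (window) functions, so the table is scanned once instead of once per candidate row. The sketch below is hypothetical: it uses Python's built-in sqlite3 module (with window-function support, SQLite 3.25+) and a cut-down two-column stand-in for SPOT_FIDODOLLAR_TRX, since the real schema and data are not available; in Oracle the same MAX(...) OVER (PARTITION BY ...) syntax applies.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trx (fid_trx_no INTEGER, fid_seq_no INTEGER,
                      fid_trx_status TEXT);
    INSERT INTO trx VALUES (1, 1, 'PE'), (1, 2, 'PE'),
                           (2, 1, 'OK'), (2, 2, 'PE');
""")

# Original style: correlated subquery, re-evaluated for every candidate row.
correlated = conn.execute("""
    SELECT a.fid_trx_no, a.fid_seq_no
    FROM trx a
    WHERE a.fid_trx_status = 'PE'
      AND a.fid_seq_no = (SELECT MAX(c.fid_seq_no)
                          FROM trx c
                          WHERE c.fid_trx_no = a.fid_trx_no)
""").fetchall()

# Analytic rewrite: one pass computes every per-transaction maximum.
windowed = conn.execute("""
    SELECT fid_trx_no, fid_seq_no
    FROM (SELECT fid_trx_no, fid_seq_no, fid_trx_status,
                 MAX(fid_seq_no) OVER (PARTITION BY fid_trx_no) AS max_seq
          FROM trx)
    WHERE fid_trx_status = 'PE' AND fid_seq_no = max_seq
""").fetchall()

print(sorted(correlated) == sorted(windowed))  # the two forms agree
```

The MAX(fid_trx_date) branch of the original query can be rewritten the same way, partitioning by bcc_customer_account; as always, verify the rewrite against the real data and execution plan before adopting it.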

  • Can an OLAP cube be processed into RAM? Can this be done using traditional BI? If we can place the entire SSAS cube in RAM for better performance, how can it be done?

    I'm trying to increase the performance of my OLAP cube, and I thought placing the cube entirely in RAM could solve this problem. Can anyone please tell me whether this can be done using traditional BI, i.e. SSAS, and how?

    Hi Nagarjuna,
    I do not believe you can load the entire cube into RAM, and even if you were able to do so, I don't think it would solve your performance issues.
    Here is a thread with the same discussion; it has some good links that you can visit to learn more about where the performance issues are coming from -
    How to cache all SSAS 2008 cube processed data into RAM
    Also, please take a look at the following guide - Analysis Services Performance Guide
    Hope this helps.
    Faisal Muhammed My Blog

  • Upgrading the Hard Drive for better performance?

    Hello, I have the new 17-inch MacBook Pro. I couldn't wait to special-order from Apple, so I settled for the stock model from the Apple Store. I am looking into buying a hard drive that runs at 7200 rpm, and I was wondering if it is possible to buy the exact drives that they put into the MBP. It is really important that the hard drive is the best quality I can get. I want the Seagate Momentus 500 GB @ 7200 rpm, but they are not available anywhere I look. I have also been reading that Seagate drives tend to fail. Does anyone know of a really good hard drive that I can buy?
    Is an SSD going to be that much better in terms of performance? I mainly do video editing and music creation with my Mac.
    So my question is: what is one of the best hard drives for my unibody MacBook Pro that will be fast and reliable?
    thanks

    I too am suspicious of Seagate drives. If you want a 500 GB, 7200-rpm drive, I would wait for another manufacturer to introduce one.
    I think that Hitachi drives are your best bet. They have been proven very reliable, and Apple has been using them a lot in their new products. I have noted links to some good ones for you to check out.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822145228
    http://www.zipzoomfly.com/jsp/ProductDetail.jsp?ProductCode=10008894
    Read this article before you invest in SSD.
    http://lenovoblogs.com/insidethebox/?p=141
