SQL PATCH IN CLUSTER ENVIRONMENT

Dear All,
One of my clients is running SAP ECC 6.0 on SQL Server 2005 and Windows 2003 64-bit.
They are currently on SQL Server 2005 version 9.00.1399, which according to SAP is no longer supported, so they want to upgrade it to 9.00.4035.
In DEV and QAS (non-cluster environments) they are already on 9.00.4035.
- Can you please guide me on how to apply a SQL Server patch in a cluster environment?
- Can I apply patch 9.00.4035 directly on top of 9.00.1399?
- Are there any recommendations for before and after applying the patch?
Kindly guide.
Regards,
Rmoihnitii

Hello my friend
Unfortunately this is not a zero-downtime solution, since the Database Services and Analysis Services checkboxes will not be available while patching to SP3:
1. Launch the SP3 package on the active node;
2. Select the authentication mode for the instance you are going to update;
3. Enter the credentials of the login you want to run the service pack under;
4. Click Next all the way through to start the installation.
Then you need to update the inactive node; for that it is necessary to execute all the steps except steps 2 and 3. There can be some problems with updating the database service in a cluster. Restarting each node of the cluster and restarting the cluster service usually resolves all of them. After the restart, try to update the database service one more time.
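A quick way to confirm the patch level before and after the service pack is to query SERVERPROPERTY. Below is a minimal JDBC sketch of that check (the server name and credentials are placeholders, not values from this thread); the same SELECT can simply be run in Management Studio instead:

    import java.sql.*;

    public class CheckBuild {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection string - point it at the virtual SQL Server name of the cluster.
            String url = "jdbc:sqlserver://<virtual-sql-name>:1433;databaseName=master;"
                       + "user=<login>;password=<password>";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT CONVERT(varchar(20), SERVERPROPERTY('ProductVersion')), "
                       + "       CONVERT(varchar(20), SERVERPROPERTY('ProductLevel'))")) {
                if (rs.next()) {
                    // Expect 9.00.1399 / RTM before the patch and 9.00.4035 / SP3 afterwards.
                    System.out.println("Build " + rs.getString(1) + ", level " + rs.getString(2));
                }
            }
        }
    }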
Regards,
Effan

Similar Messages

  • Does a SQL Server client application need to be modified for it to benefit from running on a SQL Server 2012 cluster?

    I have a client application in C++ which interacts with a SQL Server database. My question is whether I need to make any changes to the client application code for it to benefit from running against a SQL Server 2012 cluster environment.
    To elaborate on my query: suppose my application has called an API to execute a SQL query and, during the execution of this query, the SQL Server instance (part of the cluster) goes down. As I understand it, the cluster would ensure that another node takes over from the node that went down. Is this transition transparent to the
    client application, or does my client application need to make a new connection and execute the query again?

    Hello,
    As Shanky posted above, when you connect to a database in an availability group and specify the availability group listener in the connection string, the original connection is broken if the availability group fails over, and your application
    should open a new connection after the failover.
    So, when connecting to an availability group, try increasing the connection timeout and implementing connection retry logic to increase the probability of a successful connection.
    Reference: SqlClient Support for High Availability, Disaster Recovery
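    Below is a minimal JDBC sketch of that retry idea (the listener name, database, credentials and retry counts are assumptions, not values from this thread); a native C++ client using ODBC or OLE DB would follow the same pattern of catching the broken-connection error, reconnecting and re-running the statement:

        import java.sql.*;

        public class RetryConnect {
            // Hypothetical availability group listener and credentials - replace with your own.
            static final String URL = "jdbc:sqlserver://<ag-listener>:1433;databaseName=<db>;"
                                    + "user=<login>;password=<password>;loginTimeout=15";

            static Connection connectWithRetry(int maxAttempts) throws SQLException, InterruptedException {
                SQLException last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return DriverManager.getConnection(URL);   // fresh connection after a failover
                    } catch (SQLException e) {
                        last = e;                                  // connection refused/broken while failing over
                        Thread.sleep(2000L * attempt);             // back off before the next attempt
                    }
                }
                throw last;
            }

            public static void main(String[] args) throws Exception {
                try (Connection con = connectWithRetry(5);
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery("SELECT @@SERVERNAME")) {
                    while (rs.next()) System.out.println("Connected to: " + rs.getString(1));
                }
            }
        }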
    Regards,
    Fanny Liu
    TechNet Community Support

  • Steps to upgrade kernel patch in AIX cluster environment

    Hello All,
    We are going to perform a kernel upgrade in an AIX cluster environment.
    Please let me know the other locations where the new kernel files have to be copied:
    default location
    CI+DB server
    APP1
    Regards
    Subbu

    Hi Subbu
    Refer to the SAP link:
    Executing the saproot.sh Script - Java Support Package Manager (OBSOLETE) - SAP Library
    1. Extract the downloaded files to a new location using SAPCAR -xvf <file_name> as <sid>adm.
    2. Copy the extracted files to /sapmnt/<SID>/exe.
    3. Start the DB and the application.
    Regards
    Sriram

  • Service Accounts for Reporting Service in SQL Server Failover Cluster setup

    I am setting up 2 Report Services (SSRS) in SQL Failover Clustering (Version: 2012SP1) on Windows 2012, as part of scale out architecture.
    There are 2 options to configure the service account for SSRS:
    Option 1) Use domain accounts, as I have done for the DB Engine and SQL Agent.
    Option 2) Accept the default, which is the virtual account for SSRS. Per the documentation URL:
    http://msdn.microsoft.com/en-us/library/ms143504.aspx
    Which one is recommended? Is it option 2?
    There is a security note on the above URL as well, but it does not clearly state that option 1 is not recommended.
    Security Note:  Always run SQL Server services by using the lowest possible user rights. Use a MSA or  virtual account when possible. When MSA and virtual accounts are not possible, use a specific low-privilege user account or domain account instead
    of a shared account for SQL Server services. Use separate accounts for different SQL Server services. Do not grant additional permissions to the SQL Server service account or the service groups. Permissions will be granted through group membership or granted
    directly to a service SID, where a service SID is supported.
    Thanks very much for your help!

    Hi Luo Donghua,
    In a SQL Server Failover Cluster Instance, both options can work well. If you use the virtual account for SQL Server Reporting Services: virtual accounts in Windows Server 2008 R2 and Windows 7 are managed local accounts that simplify service administration; a virtual account is auto-managed and can access the network in a domain environment.
    Of course, you can also use domain accounts in your clustering.
    Just make sure your service account is set up properly, or that it is using a proper built-in account. For more information, see: http://ermahblerg.com/2012/11/08/cluster-ssrs-in-2008/
    Thanks,
    Sofiya Li
    TechNet Community Support

  • SCCM 2012 - Migrating SQL components to cluster including Reporting Services

     
    Current SCCM 2012 environment:
     - CAS with database on remote SQL (non-cluster) server in a named instance - SQLSRV\CASINST 
     - SQL Reporting Services in also installed in the SQLSRV\CASINST instance 
     - The SCCM Reporting Services Point role is installed on the SQLSRV\CASINST instance 
     - Primary server with database on the same remote SQL (non-cluster) server in a different named instance - SQLSRV\PRIINST 
    Now I would like to move all of the database stuff to an existing SQL 2008 R2 cluster (SQLCLUSTER) and decommission the SQLSRV server.  SQLCLUSTER already exists and the default instance is already in use (for other applications) for both database engine
    services and reporting services.
    From my understanding:
     - SQL 2008 R2 Reporting Services cannot be clustered
     - Best practice is to install the Reporting Services Point on a remote site server rather than on a CAS server
     - SSRS can be installed on nodes in a SQL cluster as standalone installs, with the caveat that they must not use instance names already in use
    So I plan to:
     - Move CAS database to new named instance on cluster eg SQLCLUSTER\CASINST 
     - Move primary database to new named instance on cluster eg SQLCLUSTER\PRIINST 
     -  Install SQL Reporting Services on one of the cluster nodes (unclustered) using a named SSRS instance eg SQLCLUSTERNODE1\SCCMREPORTING
    -  This article points out some interesting results of not having a default instance -  http://magalhaesv.wordpress.com/2012/05/24/system-center-configuration-manager-x-sql-server-reporting-services-x-wmi-english-version/
     - Move Reporting Services Point from SQLSRV\CASINST to the cluster node eg SQLCLUSTERNODE1\SCCMREPORTING
    I have read on some forums (e.g.
    http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/40ad0cb4-a464-4576-b824-957ed6e1b2e2) things like "If you have an app like SCCM that requires a default instance of SSRS, you are out of luck if your cluster already has a default
    instance of the database engine.", but I cannot find this anywhere in Microsoft documentation. From the MS documentation (http://technet.microsoft.com/en-us/library/7fd0d4f5-14e0-4ec7-b2e6-3b67487df555#BKMK_InstallReportingServicesPoint)
    I understand that as long the current user has read access to WMI on the remote site server where SQL Reporting Services is installed, you should be able to use a named instance for SQL Reporting Services and successfully install the Reporting Services Point
    on it.
    So a 2 part question:
     - Anybody see any obvious flaws in my high level plan?
     - Anybody care to comment on the SCCM requiring a default instance of SSRS?
    My Microsoft Core Infrastructure & Systems Management blog -
    blog.danovich.com.au

    Yes, I know this is an old post - I'm trying to clean them up. Did you figure this out? If so, how?
    http://www.enhansoft.com/
    Hi Garth,
    Yes we moved the DBs and RS as outlined above. As expected, we needed to have a default instance of RS because otherwise the SCCM console doesn't see the Reporting Services instance when you try to deploy the Reporting Services site role.
    So the above plan worked.
    My Microsoft Core Infrastructure & Systems Management blog -
    blog.danovich.com.au

  • Can a SQL Server Failover Cluster Instance (FCI) be Implemented Between Two Hyper-V Hosted Virtual Machines?

    I haven't had the opportunity to implement a SQL Server Failover Cluster Instance (FCI) for over 10 years and that was done with two physical, identical database servers way back in the day of Windows Server 2003 and SQL Server 2000 (old school).
    Can a SQL Server 2008 R2 Failover Cluster Instance (FCI) be implemented between two Hyper-V hosted virtual machines? The environment in question already has Windows Server 2012 R2 Hyper-V hosts in place, so I'm just looking to see if this is even
    possible and/or supported when utilizing virtual machines.
    The client in question is currently using SQL Server 2008 R2 instances running on Win2008R2, Win2012, and Win2012R2, but I'd also be interested how this can be done or not with SQL Server 2012 or 2014 as well. Thanks in advance.
    Bill Thacker

    Yes, it can be done with Hyper-V guests. In fact, with Windows Server 2012 R2 Hyper-V, guests can use the Shared VHDX feature for shared storage used by Windows clusters. The guests can run Windows Server 2008 and higher provided that the Hyper-V Integration
    Services are installed to support Shared VHDX. The only challenge here is making the Hyper-V hosts highly available as well, running it on WSFC.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course

  • Discovery System with a database cluster environment?

    Hi All
    We are thinking of getting the Discovery System and are looking into using SQL Server 2005 Enterprise Edition to allow us to use the Discovery System beyond the 180-day evaluation version of SQL Server that comes with the system.
    Does anyone know if the Discovery System will work with a database cluster environment, or does the database need to be installed on the Discovery System box?
    Sorry, I’m not a BASIS Administrator or DBA.
    Thanks!
    Mike Vondran

    Thanks Rick,
         Just to put our proposed usage of the Discovery System into context, our goal is to use it as a “proof of concept”/“prototyping” system to try out some of the features of the NetWeaver environment for various business cases. We would like to use it for this purpose beyond the 180-day SQL Server evaluation edition constraint that comes with the Discovery System. We have existing SQL Server database clusters that we could leverage and wanted to see if it is possible to use them before purchasing an Enterprise Edition of SQL Server.
    We would NOT, and could not, be using it for any Production purposes.
    Let me know if this usage/environment makes sense.
    Thanks again for your help on this.
    Mike Vondran
    eBay Inc.

  • Deploying oracle SOA on cluster environment ?

    Hi, can anyone provide me with links or information about deploying SOA in a cluster environment? We are not even sure whether SOA can be deployed in a cluster environment. Kindly help.

    I did this at several customers: SOA Suite 10.1.3.4 + MLR patches. I created a document on this:
    http://orasoa.blogspot.com/2009/04/soa-cluster-installation.html
    Marc

  • Is a cluster environment needed for AlwaysOn?

    Hi All,
    I have a question: can we build an 'AlwaysOn group' without a cluster environment?
    If we can, what are the disadvantages of building it that way?
    Or,
    if we cannot, why can we not build it in a non-cluster environment?
    I did some research on Google; some say we can build it on standalone servers, some say we cannot, but I have not found a concrete answer. The background is that we are planning to cut the cost of clustering. Please help me to understand it.
    Also, please suggest some blogs or books on the same topic.
    Thanks
    Best Regards Moug

    Hello Moug,
    You cannot build SQL Server AlwaysOn availability groups without a Windows cluster. Whatever resources out there on the web say that you can are misleading - there is a strict requirement from Microsoft that you need Windows Server Failover Clustering to take advantage of AlwaysOn groups (you can check the AOAG Overview and the WSFC feature documentation).
    Also, since Windows Server 2012, the Failover Clustering feature is supported in Standard edition as well, so that can help with the costs, I guess.
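    If you want to double-check what a given instance reports, a small JDBC sketch like the one below can query the relevant server properties (the server name and credentials are placeholders); IsHadrEnabled only returns 1 once the instance runs on a WSFC node with the AlwaysOn feature enabled:

        import java.sql.*;

        public class CheckAlwaysOn {
            public static void main(String[] args) throws SQLException {
                // Placeholder connection string - point it at the instance you want to inspect.
                String url = "jdbc:sqlserver://<server>:1433;databaseName=master;"
                           + "user=<login>;password=<password>";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT CONVERT(int, SERVERPROPERTY('IsClustered'))   AS IsClustered, "
                           + "       CONVERT(int, SERVERPROPERTY('IsHadrEnabled')) AS IsHadrEnabled")) {
                    if (rs.next()) {
                        // IsClustered = failover cluster instance; IsHadrEnabled = AlwaysOn availability groups enabled.
                        System.out.println("IsClustered = " + rs.getInt("IsClustered")
                                + ", IsHadrEnabled = " + rs.getInt("IsHadrEnabled"));
                    }
                }
            }
        }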
    Regards,
    Ivan
    Ivan Donev MCITP SQL Server 2008 DBA, DB Developer, BI Developer

  • Create secondary node on BOBJ cluster environment

    Hi All,
    I have done a migration from the old BOBJ server to a new BOBJ server. The primary node has been completed successfully and I can view reports. I am currently having an issue proceeding with the installation of the second node in the BOBJ cluster environment. I am stuck at the repository database phase, as no database entry was found.
    Please help. I also attach my steps to install the second node.
    Thanks.
    Regards
    Aiman

    Hi,
    I'm guessing you are doing an Expand installation on the second node and want to enter the credentials of your existing CMS DB, the one you configured during the installation of node 1. Correct?
    What kind of CMS DB is it? MS SQL? Oracle?
    It looks like the ODBC DSN entries are missing on node 2.
    Regards
    -Seb.

  • A transaction problem in a cluster environment, help!

    I need to perform a complicated data operation which has to execute several SQL statements in a loop to insert data into the DB. In order to minimize the cost of DB connections, I use one connection to create five prepared statements, which are used to insert the data repeatedly. You can read the corresponding code below. This code runs very well in a stand-alone environment, but a problem occurs when we switch to a cluster environment: the console shows a timeout because the transaction takes too long. Yet in the stand-alone environment it completes in less than one second! In both situations I use the same TX data source. My guess is that in the cluster environment the transaction processing becomes much more complicated and this leads to a deadlock. Does anybody agree with this, or have any experience with it? Help, thanks a lot!
    conn = getConnection();
    pstmt3 = conn.prepareStatement(DBInfo.SQL_RECEIPTPACK_CREATE);
    pstmt4 = conn.prepareStatement(DBInfo.SQL_RECEIPT_CREATE);
    pstmt5 = conn.prepareStatement(DBInfo.SQL_RECEIPTPACKAUDIT_INSERT_ALL);
    pstmt6 = conn.prepareStatement(DBInfo.SQL_RECEIPTAUDIT_INSERT_ALL);

    int count = (endno + 1 - startno) / quantity;
    for (int i = 0; i < count; i++) {
        int newstartno = startno + i * quantity;
        int newendno = newstartno + quantity - 1;
        String newStartNO = Formatter.formatNum2Str(newstartno, ConstVar.RECEIPT_NO_LENGTH);
        String newEndNO = Formatter.formatNum2Str(newendno, ConstVar.RECEIPT_NO_LENGTH);

        // fetch the next receipt-pack id from its sequence
        pstmt1 = conn.prepareStatement(DBInfo.SQL_RECEIPTPACK_SEQ_NEXT);
        rs1 = pstmt1.executeQuery();
        if (!rs1.next()) return -1;
        int packid = rs1.getInt(1);
        cleanup(pstmt1, null, rs1);

        // queue the receipt-pack insert
        pstmt3.setInt(1, packid);
        pstmt3.setString(2, newStartNO);
        pstmt3.setString(3, newEndNO);
        pstmt3.setInt(4, quantity);
        pstmt3.setLong(5, expiredt);
        pstmt3.setInt(6, ConstVar.ID_UNIT_TREASURY);
        pstmt3.setInt(7, Status.STATUS_RECEIPTPACK_REGISTERED);
        pstmt3.setLong(8, proctm);
        pstmt3.setInt(9, procUserid);
        pstmt3.setInt(10, ConstVar.ID_UNIT_TREASURY);
        pstmt3.setInt(11, typeid);
        pstmt3.addBatch();

        // audit record for the receipt pack
        pstmt5.setInt(1, procUserid);
        pstmt5.setInt(2, packid);
        pstmt5.setInt(3, OPCode.OP_RCT_RGT_RECEIPTPACK);
        pstmt5.setLong(4, 0);
        pstmt5.setLong(5, proctm);
        pstmt5.addBatch();

        // queue the individual receipts of this pack
        for (int j = newstartno; j <= newendno; j++) {
            String receiptNO = Formatter.formatNum2Str(j, ConstVar.RECEIPT_NO_LENGTH);

            // fetch the next receipt id from its sequence
            pstmt2 = conn.prepareStatement(DBInfo.SQL_RECEIPT_SEQ_NEXT);
            rs2 = pstmt2.executeQuery();
            if (!rs2.next()) return -1;
            int receiptid = rs2.getInt(1);
            cleanup(pstmt2, null, rs2);

            pstmt4.setInt(1, receiptid);
            pstmt4.setString(2, receiptNO);
            pstmt4.setInt(3, Status.STATUS_RECEIPT_REGISTERED);
            pstmt4.setDouble(4, 0.0);
            pstmt4.setInt(5, 0);
            pstmt4.setDouble(6, 0.0);
            pstmt4.setDouble(7, 0.0);
            pstmt4.setDouble(8, 0.0);
            pstmt4.setInt(9, procUserid);
            pstmt4.setLong(10, proctm);
            pstmt4.setLong(11, expiredt);
            pstmt4.setInt(12, 0);
            pstmt4.setInt(13, packid);
            pstmt4.setInt(14, typeid);
            pstmt4.addBatch();

            // audit record for the receipt
            pstmt6.setInt(1, procUserid);
            pstmt6.setInt(2, receiptid);
            pstmt6.setInt(3, OPCode.OP_RCT_RGT_RECEIPTPACK);
            pstmt6.setLong(4, 0);
            pstmt6.setLong(5, proctm);
            pstmt6.addBatch();
        }
    }

    // execute all queued batches and release the statements
    pstmt3.executeBatch();
    cleanup(pstmt3, null);
    pstmt5.executeBatch();
    cleanup(pstmt5, null);
    pstmt4.executeBatch();
    cleanup(pstmt4, null);
    pstmt6.executeBatch();
    cleanup(pstmt6, null);

    Hello,
    Are you using any kind of load balancer, like an F5? I am currently troubleshooting this issue for one of our ADF apps, and I originally suspected the F5 was not sending traffic correctly. We have not set up the adf-config file for HA and the dev team will fix that. But my concern is that this will just hide my F5 issue.
    Thanks,
    -Alan

  • SBO Mailer in a cluster environment

    Hi,
    I am currently facing an issue where, when I install SBO Mailer in a cluster environment, the installation removes applications and the license manager for some reason gets uninstalled. When I want to uninstall SBO Mailer, I do not see it under the Control Panel, nor do I see it running in Services. But if I look at the Service Manager, the SBO Mailer service is running.
    The strange part is that after this installation, there is an add-on that stopped working.
    Has anyone come across such a scenario?

    Hi,
    To uninstall, you first have to run the Server Tools and there you can uncheck SBO Mailer. There is no other way to uninstall the SBO Mailer service, and you cannot find it under Control Panel: it is a service, not an application.
    Run the Server Tools and then choose Repair.
    Regards,
    Rakesh N

  • Deploying Java Web Application (WAR-File) into a cluster environment

    Hi,
    we have a web application which has to read from and write to the file system.
    For a short time now we have had a cluster environment (2 parallel servers), and since then we have had the problem that files are processed twice in the cluster. The application now runs on both servers, which is why we have this problem.
    Does anybody know how we have to deploy the application correctly in a cluster environment or do we have to change anything in our source code of the application?
    I didn't find any documentation about this.
    At the moment we have deployed the application on one of the two servers only, but I think there must be a better way to solve this problem.
    Thanks for your replies.
    Regards
    Thorsten

    Hi,
    I think first you need to wrap it into an EAR file, then you can deploy it.
    As far as I know, standalone deployment of a WAR file is deprecated as of 640.
    similar threads:
    How to deploy .war on NWDI
    Deploying an existing WAR file into the Portal
    Hopefully this tutorial also gives some idea:
    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/70/13353094af154a91cbe982d7dd0118/frameset.htm
    Regards,
    Ervin

  • PI 7.1 in a cluster environment (multiple ip-adresses): P4 port

    We want to install PI 7.1 on UNIX in a cluster environment. Therefore we also installed DEV and QA with virtual hostnames, like the production system that will be installed later.
    On all sapinst installation screens we used only the virtual hostname <virtual-hostname-server interface>. We also set SAPINST_USE_HOSTNAME=<virtual-hostname-server interface>. Nevertheless the P4 port seems to have used the physical hostname: in step 57 of sapinst we ran into problems, and dev_icm showed:
    [Thr 05] *** ERROR => client with this banner already exists:
    1:<physical-hostname>:35644 {000306f5} [p4_plg_mt.c 2495]
    After we have set
    icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
    icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
    icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical hostname>
    icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=127.0.0.1
    the sapinst run was successful.
    Now we're not sure how to set these P4-parameters in our  future productive cluster environment.
    Our productive system PX1 will live in a HA environment, so we don't want to use the physical hostnames in any profile.
    Our environment will look like:
    HOST-A (<physical-hostname-A>):
    <virtual-hostname-server interface>
    <virtual-hostname-user interface>
    HOST-B (<physical-hostname-B>):
    Normally our prodsystem will live on Host-A (physical-hostname-A). All parameters should
    only take the virtual hostname <virtual-hostname-server-interface>. During switchover the
    virtual hostnames (server and user interface) will be taken over to HOST-B, while the physical
    hostnames of HOST-A and HOST-B will stay as they are.
    How do the parameters have to be set here?
    Do the physical hostnames of both cluster nodes also have to be set in the
    instance profile, e.g.:
    icm/server_port_1 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-server interface>
    icm/server_port_6 = PROT=P4,PORT=5$$04, HOST=<virtual-hostname-user interface>
    icm/server_port_7 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-A>
    icm/server_port_8 = PROT=P4,PORT=5$$04, HOST=<physical-hostname-B>
    icm/server_port_9 = PROT=P4,PORT=5$$04, HOST=<localhost>
    Any recommendations? SAP Note 1158626 contains some information regarding P4 ports with multiple network interfaces, but it's not 100% clear to us.
    Best regards,
    Uta

    Hi Uta!
    Obviously we are the only human beings in the SAP community having this problem. Nevertheless, let's give it another try with a - hopefully - simpler problem description (and maybe it will be helpful to copy and paste this description into the open SAP CSN message as well).
    So here comes the scenario:
    We have one physical host:
    Physical hostname: physhost
    Physical IP address: 1.1.1.1
    On this physical host there is running one OS: SUN Solaris 10/SPARC
    On top of this we have two virtual hosts where we install 2 completely independent PI 7.1 instances with separate virtual hostnames, separate virtual IP addresses and separate DB2 9.1 databases. That is, this is not an MCOD installation.
    Virtual Host no. 1 is PI 7.1 Development System:
    Virtual hostname: virthostdev
    Virtual IP address: 2.2.2.2
    Java Port numbers: 512xx
    Virtual Host no. 2 is PI 7.1 QA System:
    Virtual hostname: virthostqa
    Virtual IP address: 3.3.3.3
    Java Port numbers: 522xx
    With this constellation we face serious problems with the P4 port. Currently for example the JSPM for virthostdev does not start, because JSPM cannot connect to the P4 port.
    In SAP note 1158626 we have learned that by default always the physical hostname/IP address is used to address the P4 port and that we have to configure instance profile parameter icm/server_port_xx to avoid this.
    So how do we have to configure the instance profile parameter icm/server_port_xx for both systems to resolve these P4 port conflicts?
    Additionally: Is it important to use distinct server port slot numbers xx in both systems?
    Additionally: Is it possible to configure this parameter with hostnames instead of using IP addresses?
    So far we have tried several combinations, but with each combination at least one or even both systems have problems with that f.... P4 port.
    Please help! Thanx a lot in advance!
    Regards,
    Volker

  • How to install a DB Instance in a SQL Server 2005 cluster

    Hi all,
    My installation scenario is to install the SCS/CI on machine A and the DB instance on an already existing cluster. This is based on Windows 2003 SR2 and a SQL Server 2005 cluster. Now, after finishing the SCS, I don't know how to install the DB instance. Do I just need to sign in to the active node of the DB cluster and start sapinst.exe? I am worried that some wrong action of mine will break the cluster. I appreciate any tips or ideas about it, thanks.
    Peter

    Directly start sapinst.exe on the cluster node.
