Unable to shrink tempdb

Hi All,
This is to bring to your notice that when I check free space using Shrink Database, it shows 180 GB free. But when I try to shrink the individual data files, they show free space only in MBs. Further analysis suggests that the space
might be used by internal objects in tempdb. How can I reclaim it without restarting the SQL Server service?
Regards
Rahul

You can shrink tempdb in multiple ways, for example:
DBCC SHRINKFILE
DBCC SHRINKDATABASE
ALTER DATABASE
But you may run into consistency errors by doing this. That is why the safest way to release tempdb space is to restart the instance's SQL Server service.
Atif, can you show any example where running SHRINKFILE or ALTER DATABASE leaves the database in an inconsistent state? I agree a restart is sometimes the last option, but I don't agree with your consistency statement.
Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers
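To make the rebuttal concrete: a commonly suggested sequence for reclaiming internal-object space in tempdb without a restart is to clear the caches that pin those pages and then shrink. This is a sketch, not a guaranteed fix; cache clearing affects the whole instance, and the file name 'tempdev' is an assumption.

```sql
-- Sketch: release cached objects backed by tempdb internal-object pages,
-- then shrink the data file. Run in a quiet period / maintenance window.
USE tempdb;
GO
DBCC FREEPROCCACHE;            -- drops cached plans (their work tables can then be freed)
DBCC FREESYSTEMCACHE ('ALL');  -- drops other system cache entries
GO
DBCC SHRINKFILE ('tempdev', 1024);  -- logical file name and target size (MB) assumed
GO
```

If long-running sessions still hold internal objects (sorts, hash work tables), the shrink may still not release much until those sessions finish.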

Similar Messages

  • Unable to shrink datafile

    Hello Gurus,
    Env: SQL Server 2008 R2 installed on Windows Server 2008.
    DAS based storage.
Issue: DBCC showfilestats shows that all of the extents are used, meaning there is no free space, and I am
unable to shrink the data file.
    Description: The data file is growing by a GB every day approx.
I see that the existing space within the data file is not being used; it allocates fresh extents every time.
    When I do the DBCC showfilestats it says all of the extents are in use and there is almost nothing free.
    The size of the data file is 200 GB.
    When I run the top 50 tables by size, I see the sum of all of these tables is hardly 10GB.
I know there is not much data within the tables.
    When I am trying to shrink the data file, it gives the error message "A severe error occurred on the current command."
I have tried the following so far; nothing has worked:
Ran DBCC CHECKDB and then tried to shrink.
Took the database into single_user mode and then tried to shrink.
Ran a checkpoint, then the log backup job, and then tried to shrink.
    So there are two questions.
    1. How shall we shrink the data file?
2. Why is it not using the existing space within the file?
Please let me know if I need to send more details.

    1. How shall we shrink the data file?
Why would you shrink it, when the data file is growing by about a GB every day? Database files should rather not autogrow; they should be extended well in advance, in a maintenance window. I would suggest that you increase the database
by 50 GB.
    Shrinking a database is a very exceptional operation which you only do when you know that the data size has been reduced on a permanent basis.
2. Why is it not using the existing space within the file?
It is not clear how you have determined that there is space in the file; on the contrary, all evidence points to there being no space available.
It is possible, though, that your data pages are poorly utilized. Heaps are particularly prone to this, and you can be left with pages that hold few or no rows.
    Do you rebuild your indexes regularly? Do you have any heaps? What does this query return:
    SELECT COUNT(*) FROM sys.indexes WHERE index_id = 0
    Erland Sommarskog, SQL Server MVP, [email protected]
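Building on the count query in the reply above, a sketch that lists the heaps by name so you can see which tables to address (the schema/table output columns are my additions, not from the original post):

```sql
-- List user tables that are heaps (index_id = 0 means no clustered index).
SELECT OBJECT_SCHEMA_NAME(i.object_id) AS schema_name,
       OBJECT_NAME(i.object_id)        AS table_name
FROM sys.indexes AS i
JOIN sys.objects AS o
  ON o.object_id = i.object_id
WHERE i.index_id = 0        -- heap
  AND o.type = 'U';         -- user tables only
```

Rebuilding a heap (or giving it a clustered index) compacts its pages and can free the space the shrink could not find.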

  • Transaction log used 200GB unable to shrink

    Hi All,
Currently I am facing a disk-space issue (0 KB available) on one of my application servers.
When I checked, the SQL Server database SMS_UDB (our DB name) holds 70 GB, but SMS_UDB_LOG (the transaction log) holds more than 200 GB. The disk-space report shows the 200 GB as used (green). I am unable to shrink the file. I am running SQL Server 2008 with
the simple recovery model. I have taken a backup to another server. It's urgent; kindly help me reduce the transaction log.
    thanks

    Hello,
Please verify whether the initial size of the log, on the database's properties, is set to a large number of MB; if so, that is why the
shrink does not work.
    Please run the following query and share the results with us:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE log_reuse_wait_desc = 'ACTIVE_TRANSACTION';
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com
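Since the poster is in the simple recovery model, a typical sequence (a sketch; the logical log file name 'SMS_UDB_LOG' is taken from the post, the target size is an assumption) is to checkpoint so the inactive log can be reused, then shrink:

```sql
-- Sketch, assuming simple recovery model:
USE SMS_UDB;
GO
CHECKPOINT;   -- in simple recovery, a checkpoint marks the inactive log reusable
GO
DBCC SHRINKFILE ('SMS_UDB_LOG', 2048);  -- shrink toward ~2 GB; adjust target to taste
GO
```

If log_reuse_wait_desc reports ACTIVE_TRANSACTION, the shrink cannot succeed until that transaction commits or rolls back.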

  • IOS 7 wallpaper unable to shrink and fit ...

iOS 7 is unable to shrink and fit; this is horrible when I use my old wallpaper...
Please fix this bug. I couldn't care less about the parallax ****; I just want my wallpaper back to normal!!!

    Have you tried pinching inwards?
    Regards,
    Steve

  • Disk Utility - Unable to Shrink Disk Error

    hey guys,
I have been trying to shrink my system partition but I keep getting an error.
I had Vista installed on a Boot Camp partition, but I needed to reinstall it and decided to start from scratch. I erased the Vista partition and tried to recreate it. I ran Boot Camp Assistant, but it failed to create a new partition.
I ran Disk Utility to first try to shrink the system drive, then increase it back to full size before rerunning the Boot Camp partitioning. However, I cannot shrink.
This is the error I get:
http://dl.getdropbox.com/u/734117/Screen%20shot%202009-10-25%20at%208.48.54%20AM.png
I had only about 35 GB free and tried to create a 20 GB partition. After that error, I erased some data and now have just under 100 GB free, but the drive still cannot shrink.
I was hoping to find a way to shrink the drive, or defragment, without having to erase the entire disk and reinstall. I do not have the backup capacity to move all my data at this point.
    Thanks for your help

    Hahmad wrote:
    How do i create a clone?
first, you need an external drive. then use Carbon Copy Cloner http://www.bombich.com/index.html
or SuperDuper http://www.shirt-pocket.com/SuperDuper/SuperDuperDescription.html
to clone the OS X partition to the external. make sure the external is formatted with the GUID partition scheme before cloning; if needed, reformat it first with Disk Utility.

  • Unable to shrink the connections

    HI,
I am using WebLogic 6.1 connection pooling, and I am facing a problem with shrinking
the connections.
    I set the connection parameters as
    Initial capacity = 2,
    Increment = 1
    Max capacity = 10
    Allow shrinking = true
    When I open 4 sessions simultaneously it shows the
    connections = 4
    total connections = 4
    Waiters = 0
    Connections high = 3
After accessing the database several times, it shows
         Connections = 0
         Total connections = 13
         Connections high = 3
         Waiters = 0
My problem is that even after the shrink interval has passed, the database side still shows the active connections (I verified using v$session). Are any extra settings needed
for the shrink to take place, other than setting it to true in the console? I even
tested via code (getting the pool and executing the shrink() method).
My second doubt is why it shows total connections greater than the maximum
capacity. I am not clear on the number shown for total connections.
I would be thankful if someone looks into it.
    Thanks

Make sure the application is returning all connections to the pool (close
all connections in finally blocks). If connections are not closed in
your application, they cannot be removed from the pool.
    hth
    sree

  • Unable to shrink tlog

    Hello,
Usually the tlog backup goes to tape, but it is giving some error. I took the backup on disk and then tried to shrink the tlog, but it's not happening.
    Please advise.
    Best regards,
    Vishal

You might need to take the log backup twice to actually be able to shrink the log file. Please run DBCC LOGINFO('db_name') and check whether the status column of the last row is 0; unless it is zero, you won't be able to shrink.
You can also use the query below to see what is holding your log from truncating. If it is LOG_BACKUP, you need to take a log backup:
select name, log_reuse_wait_desc from sys.databases where name = 'db_name';
The problem with tape log backups is that there are two options: a plain log backup, and a log backup that also truncates the log. Hopefully you selected the second option; otherwise, please take the log backup using T-SQL.
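The steps above can be sketched in T-SQL. The database name, logical log file name, backup paths and target size are all assumptions for illustration:

```sql
-- Back up the log so the active portion wraps to the start of the file,
-- then shrink. A second backup/shrink pass is sometimes needed when the
-- active VLF sits at the end of the file.
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log1.trn';
GO
DBCC SHRINKFILE (MyDb_log, 1024);   -- logical log file name, target in MB
GO
-- If DBCC LOGINFO still shows status <> 0 on the last row, repeat:
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log2.trn';
GO
DBCC SHRINKFILE (MyDb_log, 1024);
GO
```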

  • Unable to shrink/resize undo tablespace

    Hi Experts,
I have an Oracle 10.2.0.4 database running on RHEL 4.7 in a production environment. My undo tablespace has grown up to 32 GB; the database has been rebooted, but the tablespace is still full.
I want to shrink/resize the undo tablespace. Please help me.
    Few details are as below
show parameter undo_retention

NAME            TYPE     VALUE
--------------- -------- -----
undo_retention  integer  10
    Please help
    Thanks

This post was submitted twice by mistake; a thread with the same name was posted two minutes before this one.
    Edited by: user1687821 on Jul 9, 2010 9:29 AM
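Since the duplicate thread got no answer here, for reference: the usual approach to an oversized undo tablespace is not to shrink it in place but to create a new, smaller one and switch over. This is a sketch; tablespace names, datafile path and sizes are assumptions:

```sql
-- Create a replacement undo tablespace at the desired size.
CREATE UNDO TABLESPACE UNDOTBS2
  DATAFILE '/u01/oradata/PROD/undotbs02.dbf' SIZE 2G
  AUTOEXTEND ON MAXSIZE 8G;

-- Switch the instance to the new undo tablespace.
ALTER SYSTEM SET undo_tablespace = UNDOTBS2 SCOPE=BOTH;

-- Once no active transactions still use segments in the old tablespace
-- (check v$rollstat / v$transaction), drop it:
DROP TABLESPACE UNDOTBS1 INCLUDING CONTENTS AND DATAFILES;
```

Undo segments in the old tablespace can stay "pending offline" for a while after the switch, so the final DROP may need to wait.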

  • Unable to shrink undo tablespace... Help!

    Hi,
I have a problem shrinking the system undo tablespace, which has grown to 14 GB.
I am on 9.2. The tablespace owner is 'SYSTEM', and undo_management = AUTO.
    I tried to shrink the greatest rollback segments by the commands
    ALTER SESSION SET UNDO_SUPPRESS_ERRORS = TRUE;
    ALTER ROLLBACK SEGMENT "_SYSSMU6$" SHRINK TO 20 M;
    Oracle confirmed these commands, but nothing happened.
    What am I doing wrong?
    Hermann Mueller

You have seen the discussion about the undo segments. On the temporary tablespaces, you should be aware that the sort segment of a given temporary tablespace is created at the time the first sort operation takes place. The sort segment continues to grow by means of extent allocation until the segment size has reached the total storage demands of all of the active sorts running on the instance. Oracle will keep on allocating temporary space on demand unless the physical limit states otherwise.
Temporary segments are produced each time a sort operation (explicit - ORDER BY - or implicit - aggregation, reindexing) needs to sort a set that cannot fit into memory. So if you detect excessive sort usage, you should aim your monitoring at the sort operations (reports, reindexing, MAX, MIN, aggregations, ORDER BY, ...). If your system has DSS behaviour, this kind of operation is frequent, as massive sorting has to be performed against millions of rows.
A temporary tablespace will almost always appear to be nearly 100% full. That's because once Oracle has allocated temporary space it doesn't release it back to free space; it keeps it allocated even after the sort operation has finished. The rationale is similar to the one Oracle applied when rollback segments were in use: Oracle only allocated space, the DBA had to take manual action to release it, and the criterion is performance. Once Oracle has allocated this much space, chances are that the circumstances that pushed the temporary-usage high-water mark to this level will recur, so by keeping the maximum allocated space, Oracle won't have to allocate the same storage again.
The main concern with temporary tablespace growth is not the free space itself, but the reasons why this amount of space was allocated, so I suggest you track the SQL statements with sort operations. If you are certain the circumstances that caused this amount of temporary space to be allocated won't recur, then you could consider resizing your temporary tablespace down. I suggest you create a new temporary tablespace with the desired size, alter the default temporary tablespace to point to this newly created one, and finally get rid of the original temporary tablespace.
    ~ Madrid
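The last suggestion in the reply above can be sketched as follows; tablespace names, datafile path and sizes are assumptions:

```sql
-- Create a replacement temporary tablespace at the desired size.
CREATE TEMPORARY TABLESPACE TEMP2
  TEMPFILE '/u01/oradata/PROD/temp02.dbf' SIZE 2G
  AUTOEXTEND ON MAXSIZE 8G;

-- Repoint the database default to the new temporary tablespace.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP2;

-- Once no sessions still hold sort segments in the old tablespace
-- (check v$sort_usage), drop the oversized one:
DROP TABLESPACE TEMP INCLUDING CONTENTS AND DATAFILES;
```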

  • Unable to remove TempDB files SQL 2012

    Hi Experts,
We have a problem removing the extra TempDB files that were created on the SQL 2012 server. We tried several things:
Restarting the services;
Renaming the *.ndf files.
We used the following query in SQL Server Management Studio:
    USE tempdb;
    GO
    DBCC SHRINKFILE('tempdev4', EMPTYFILE)
    GO
    USE master;
    GO
    ALTER DATABASE tempdb
    REMOVE FILE tempdev4;
    The error returned is:
    DBCC SHRINKFILE: Page 5:40 could not be moved because it is a work table page.
    Msg 2555, Level 16, State 1, Line 1
    Cannot move all contents of file "tempdev4" to other places to complete the emptyfile operation.
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    Msg 5042, Level 16, State 1, Line 1
    The file 'tempdev4' cannot be removed because it is not empty.
It looks like TempDB is still in use. If the services are stopped and the query is fired right after starting, we receive the same error. Any suggestions are welcome.
    Regards,

I would try to restart SQL Server in single-user mode: add -m to the startup options in SQL Server Configuration Manager. You may still fail to remove the file if there is a process that eagerly connects right away and starts to do work.
    Erland Sommarskog, SQL Server MVP, [email protected]
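Before resorting to a single-user-mode restart, one sometimes-suggested workaround (a sketch, not guaranteed: work-table pages come from cached objects, and clearing the caches can let EMPTYFILE succeed) is:

```sql
-- Drop cached objects whose work tables pin pages in the file,
-- then retry the emptyfile/remove sequence from the question.
USE tempdb;
GO
DBCC FREESYSTEMCACHE ('ALL');
DBCC FREEPROCCACHE;
GO
DBCC SHRINKFILE ('tempdev4', EMPTYFILE);
GO
ALTER DATABASE tempdb REMOVE FILE tempdev4;
GO
```

If a work table is recreated before the shrink runs, the same "work table page" error returns, and the single-user-mode restart remains the reliable path.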

  • Tempdb full

In a situation where tempdb is full - both data and log (meaning the disk is full) - what are the possible solutions?
- Find the spid filling up tempdb and kill it to free up space in tempdb - is this possible?
- backup log tempdb with truncate_only/no_log, if a regular backup is not possible.
- Shrink tempdb.
- Move the tempdb mdf/ldf to another, more spacious device with ALTER DATABASE tempdb MODIFY FILE - will this require a SQL Server shutdown, or can it be done while SQL Server is online?
- Restart SQL Server.
Appreciate the insights.

If you kill the transaction, it will take more time and resources to roll back, the database may go into recovery, and that can create more issues.
No, you cannot do anything with tempdb related to backups.
You cannot shrink tempdb while the transaction is active and long-running.
Yes, you have to allow tempdb to grow enough based on the server's behaviour.
TempDB is critical to performance; many, many user and system actions make use of it: cursors, temp tables, hash tables for sorts, reindexing, and so on. Take care of TempDB before you even separate out your OLTP data and log files.
A good rule of thumb is to have one TempDB data file per physical core. For example, if your server has two quad-core processors, you'd want eight TempDB files (one .mdf file and seven .ndf files). But that's only a rule of thumb.
Once again, if you restart SQL Server it will roll back the transaction, and nothing is going to help with your issue.
Divide your transactions into multiple small parts, and make them simple and short; and configure tempdb correctly, with the same size for all files.
Check all the temp tables, variables and other objects, and the server's behaviour, and then decide how to configure tempdb.
For example, if your transaction is very small, say 200 MB, then a restart and the other steps will help you:
Restart SQL Server.
Try to shrink the TempDB database.
But if the transaction is 100 GB then you cannot do this; and if it is a production system you cannot restart at just any time, and it is not a good approach anyway. So you have to find a long-term solution.
    Optimizing tempdb Performance
    http://technet.microsoft.com/en-us/library/ms175527%28v=sql.105%29.aspx
How will you configure tempdb?
This is one of the biggest "It depends..." answers in SQL Server. It all depends on how your SQL Server uses tempdb. If you don't make heavy use of tempdb, you won't have a significant amount of contention on the allocation bitmap pages, and therefore won't need as many files to alleviate the contention. The whole point of having multiple files is that each file gets separate GAM, SGAM, and PFS allocation pages; as round-robin allocation and proportional fill are used to write data to the data files, you spread the access to these pages across multiple files and reduce contention on them. The size question is strictly based on your specific environment: take whatever size tempdb needs to be in total, divide that by the number of files being created, and create each file exactly the same size. Then make sure you set AutoGrow identically for all files so that they grow the same way and maintain the same size; otherwise one file could become an allocation hot spot if it grows larger and holds a larger proportion of the free space.
Last year at PASS 2011, Bob Ward, one of the Sr. Escalation Engineers for SQL, made the following recommendation, which will be reflected in updates to the Microsoft references that others provided on this thread:
As a general rule, if the number of logical processors is less than 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files; then, if contention continues, increase the number of data files in multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels, or make changes to the workload/code.
The other thing I'd recommend, if you have excessive contention at 8+ files, is to follow the last part of that general recommendation and change the workload and code, because excessive use of tempdb is still a bottleneck that in most cases can be fixed by changing how your code works and doing some basic tuning.
    Raju Rasagounder Sr MSSQL DBA
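The equal-size, equal-autogrow guidance above can be sketched in T-SQL. File paths, sizes and the final file count are assumptions; size each file as (total tempdb size ÷ number of files):

```sql
-- Resize the primary tempdb file and set a fixed autogrow increment.
ALTER DATABASE tempdb
  MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);

-- Add secondary files with identical size and growth settings.
ALTER DATABASE tempdb
  ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
            SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb
  ADD FILE (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf',
            SIZE = 4096MB, FILEGROWTH = 512MB);
-- ...repeat ADD FILE up to the recommended file count...
```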

  • Tablespace shrink.

    hi
I am using Oracle Database 10gR2.
One of my databases is 13 GB, of which only 4 GB is used. The HWM has already reached the end of the datafile, so I am unable to shrink the database down to the 4 GB of used space.
The general solution to this problem is to take a tablespace-level export, drop the tablespace, recreate it at 4 GB, and import.
But I suspect that this solution will invalidate my dependent objects - synonyms, triggers, procedures and packages in different schemas.
Is there any other solution I can use that will not affect my dependent objects?

Hello User,
Yes, that approach won't copy all the dependent objects, and you would have to create them manually. But there is a good solution: the dbms_redefinition package, which will create all the dependent objects (triggers/grants) without any extra manual work.
Check this link:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_redefi.htm#CBBHFJAI
Here is a specific example. Create an empty interim table in the new tablespace:
create table my_objects_interim as select * from my_objects where rownum < 1;
DECLARE
   v_num_errors   NUMBER;
BEGIN
   -- README: use the CONS_USE_PK calls if the table has a primary key;
   -- otherwise use the CONS_USE_ROWID variants shown commented out below.

   -- With primary key
   DBMS_REDEFINITION.can_redef_table (USER,
                                      'my_objects',
                                      DBMS_REDEFINITION.cons_use_pk,
                                      NULL);
   -- Without primary key:
   -- DBMS_REDEFINITION.can_redef_table (USER,
   --                                    'my_objects',
   --                                    DBMS_REDEFINITION.cons_use_rowid,
   --                                    NULL);

   -- With primary key
   DBMS_REDEFINITION.start_redef_table (USER,
                                        'my_objects',
                                        'my_objects_interim',
                                        NULL,
                                        DBMS_REDEFINITION.cons_use_pk,
                                        NULL,
                                        NULL);
   -- Without primary key:
   -- DBMS_REDEFINITION.start_redef_table (USER,
   --                                      'my_objects',
   --                                      'my_objects_interim',
   --                                      NULL,
   --                                      DBMS_REDEFINITION.cons_use_rowid,
   --                                      NULL,
   --                                      NULL);

   -- Copy dependent objects (indexes, triggers, constraints, privileges).
   DBMS_REDEFINITION.copy_table_dependents (USER,
                                            'my_objects',
                                            'my_objects_interim',
                                            DBMS_REDEFINITION.cons_orig_params,
                                            TRUE,
                                            TRUE,
                                            TRUE,
                                            FALSE,
                                            v_num_errors,
                                            FALSE);

   DBMS_REDEFINITION.finish_redef_table (USER,
                                         'my_objects',
                                         'my_objects_interim',
                                         NULL);
EXCEPTION
   WHEN OTHERS
   THEN
      DBMS_OUTPUT.put_line (SUBSTR (SQLERRM, 1, 200));
END;
/
Regards
    Edited by: OrionNet on Dec 23, 2008 2:10 AM

  • Log file shrinking

Hi everyone,
I am very happy to be getting replies from you all.
Here is my doubt regarding log file shrinking: in our environment all the high-availability features are in use (log shipping, mirroring, replication). If I shrink the log file of a database that is part of these HA setups, what will happen? Do I have to reconfigure or
not?
Also, can I use a database snapshot for restoring a database?
Waiting for replies with anxiety.
Thanks & regards,
chetan.tk

Hi chetan.tk,
What is the purpose of shrinking the log file? It is recommended to back up the log frequently - every few minutes if the log grows quickly. If more space is required for SQL Server operations, you may consider increasing the disk space instead.
Before shrinking, you should get insight into the reason the log file grew unexpectedly: A transaction log grows unexpectedly or becomes full on a computer that is running SQL
Server.
For log shipping and database mirroring, you can shrink the log file on the primary server without truncating, and the shrink operation will be log shipped to the secondary servers.
As for a replicated database, you will not be able to shrink the log file if replication is not complete. You may try marking all replicated transactions as completed by stopping the log reader agent and restarting it after shrinking. For more information:
    Unable to shrink transaction log on replicated database - SQL 2008
    It is not a good idea to shrink the log file. You may have a look at Tibor’s blog about the problem of shrinking log file:
    Why you want to be restrictive with shrink of database files
    Best Regards,
    Stephanie Lv
    Forum Support
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]

  • Unable to shrink pool

    hi,
              could someone explain this error.
              -Ben Litchfield
              weblogic.common.ResourceException: Unable to shrink pool EacuboPool from 2
              to 1
              at
              weblogic.common.internal.ResourceAllocator.shrink_internal(ResourceAllocator
              .ja
              va, Compiled Code)
              at java.lang.Exception.<init>(Exception.java, Compiled Code)
              at weblogic.common.ResourceException.<init>(ResourceException.java,
              Compiled Code)
              at
              weblogic.common.internal.ResourceAllocator.shrink_internal(ResourceAllocator
              .ja
              va, Compiled Code)
              at
              weblogic.common.internal.ResourceAllocator.trigger(ResourceAllocator.java,
              Comp
              iled Code)
              at
              weblogic.time.common.internal.ScheduledTrigger.executeLocally(ScheduledTrigg
              er.
              java, Compiled Code)
              at
              weblogic.time.common.internal.ScheduledTrigger.execute(ScheduledTrigger.java
              , C
              ompiled Code)
              at
              weblogic.time.server.ScheduledTrigger.execute(ScheduledTrigger.java,
              Compiled C
              ode)
              at weblogic.kernel.ExecuteThread.run(ExecuteThread.java, Compiled
              Code)
              

    please post this to the weblogic.developer.interest.jdbc newsgroup
    Kumar

  • TEMPDB.mdf out of control 100 GB

    Hello,
    using my previous thread and the answers proposed:
    http://social.technet.microsoft.com/Forums/en-US/99f34dc7-93fc-4f8d-b72a-48096707169f/tempdbmdf-out-of-control-30-gb?forum=sqldatabaseengine
    (page counts per session)
    session_id  database_id  user_objects  user_objects   internal_objects  internal_objects
                             alloc_pages   dealloc_pages  alloc_pages       dealloc_pages
    119         2            0             363            12150896          12152552
    78          2            0             0              96                88
    84          2            0             0              48                88
    102         2            0             0              16                8
    57          2            10            0              0                 0
    69          2            6             0              0                 0
    66          2            4             0              0                 0
    65          2            2             0              0                 0
    67          2            2             0              0                 0
    73          2            2             0              0                 0
    I have identified the session ID, user and database; it is session 119.
    What should I do next:
    restart the SQL Server service for this instance?
    kill the session?
    Thanks,
    DOm
    System Center Operations Manager 2007 / System Center Configuration Manager 2007 R2 / Forefront Client Security / Forefront Identity Manager
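    The figures above look like output from the sys.dm_db_session_space_usage DMV; a sketch of such a query (the exact column choice and ordering here are my assumptions):

    ```sql
    -- Per-session tempdb page allocations, worst offenders first.
    SELECT session_id,
           database_id,
           user_objects_alloc_page_count,
           user_objects_dealloc_page_count,
           internal_objects_alloc_page_count,
           internal_objects_dealloc_page_count
    FROM sys.dm_db_session_space_usage
    WHERE database_id = 2          -- tempdb
    ORDER BY internal_objects_alloc_page_count DESC;
    ```

    High internal-object counts (as for session 119) usually point at sorts, hash joins, or cursor work tables rather than explicit temp tables.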

    INSERT INTO [etlStaging].[clarity].[flowsheet] ([flo_meas_id] ,[recorded_time] ,[meas_value] ,[pat_id]) select [flo_meas_id] ,[recorded_time] ,[meas_value] ,[pat_id] from [syn_lsClarity_Flwsht]
    This is an INSERT statement without a WHERE clause, so I guess the number of records will be huge; it looks like a bulk insert to me.
    If I were you, I would add a new log file for tempdb on some other drive and let the operation complete. After the operation is complete, I would shrink tempdb.
    You can kill it, but at your own risk. However, I don't see tempdb being used anywhere in that SQL statement - is this really the statement filling tempdb?
    I shrank tempdb and restarted the SQL Server service on this instance, and it is already back up to 8 GB after 15 minutes with the same insert...
