Dramatic slowdown with large tables--common problem?

I opened an Excel .xls file in Numbers. The file has one sheet, and on that sheet is a table with 45,000 rows and 7 columns, including a date column. I was surprised to see response times of 30 to 60 seconds for tasks like adjusting column widths and formatting the date column. Sorting and filtering also took more than 30 seconds each time I changed something (sort order, filter criteria). There are no formulas in the table at this point--just data.
I hacked off 3/4 of the rows to get the table below 10,000 rows and did the same tasks. The times were roughly proportionately shorter, but still pretty long compared to Excel on Windows.
I'm running a Mac Pro with two quad-core processors and 6 GB of memory, so I don't think the problem is hardware. (When I run the same tasks in Windows on the same machine, the response time is instantaneous for all of them.)
Is this a problem with Numbers that anyone with large tables will experience, or is it likely something wrong with my particular installation of Numbers? (I'm on the most recent version of Numbers.)

Hello
The Terms of Use, which you MUST have read, state:
"...to help you resolve issues, ask questions, get tips and advice, and more."
"If you have a technical question about an Apple product, be sure to check out Apple's support resources first by consulting the application Help menu on your computer and visiting our Support site to view articles and more on our product support pages."
"How do I post a question?"
"If you searched the forums and didn't find an answer to your question or issue, click the Post New Topic link at the top of a relevant forum page to post your own question."
A quick search with the keyword "speed" would already have told you: yes, Numbers is slow!
Yvan KOENIG (from FRANCE, Monday, February 25, 2008 20:46:52)

Similar Messages

  • Address database performance with large tables for attachments (.doc, .jpg, .pdf, etc.)

    Hello Folks,
    I have a database with large tables that contain attachments of various kinds of files, such as *.doc, *.docx, *.xlsx, *.jpg, *.jpeg, *.pdf. Over time this database has grown quickly and become quite a handful to maintain.
    I've been thinking of a proper approach to redesign the tables' structure with a little change on the application side.
    How would you implement a design in which the physical file attachment is not stored in these tables, but instead just a path to the file, which lives on another storage drive? The column's data type could then be just a string rather than an image.
    This topic first appeared in the Spiceworks Community
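    A minimal sketch of that design, assuming hypothetical names (an attachments table with id and file_path columns, and an E:/attachments storage root - none of these are from the original schema): write the physical file to the storage drive and persist only its path over JDBC, so the column becomes a plain string instead of an image.
    import java.nio.file.*;
    import java.sql.*;

    public class AttachmentStore {
        // Assumed storage drive root; adjust to your environment.
        private static final Path STORAGE_ROOT = Paths.get("E:/attachments");

        public static void saveAttachment(Connection conn, int recordId,
                                          String fileName, byte[] content) throws Exception {
            // 1. Write the physical file outside the database.
            Files.createDirectories(STORAGE_ROOT);
            Path target = STORAGE_ROOT.resolve(recordId + "_" + fileName);
            Files.write(target, content);

            // 2. Store only the path; the column is a plain string, not an image/blob.
            try (PreparedStatement ps = conn.prepareStatement(
                    "UPDATE attachments SET file_path = ? WHERE id = ?")) {
                ps.setString(1, target.toString());
                ps.setInt(2, recordId);
                ps.executeUpdate();
            }
        }
    }
    The application then opens the file from file_path on demand; the table rows stay small, and database backups shrink accordingly.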

    Hi,
    I'm sorry you are having this problem, here is another post about the same problem, where the cause of the problem is described:
    https://support.mozilla.com/en-US/questions/894442
    A bug has been filed to track resolution of the issue here, because a true fix isn't yet available:
    https://bugzilla.mozilla.org/show_bug.cgi?id=703015
    I apologize for the inconvenience.
    Regards,
    Michelle

  • Can ui:table deal with large table?

    I have found that h:dataTable can do pagination because its data source is just a DataModel, but ui:table's data source is a data provider, which looks somewhat complex and confusing.
    I have a large table and I want to load the data on demand, so I tried to implement a provider. But I soon found that ui:table seems to always load all data from the provider.
    In TableRowGroup.java, there is a lot of code such as:
    provider.getRowKeys(rowCount, null);
    Passing null makes the provider load all data.
    So ui:table can NOT deal with large tables!?
    thx.
    fan

    But ui:table just uses the TableDataProvider interface. TableDataProvider is a wrapper for the CachedRowSet.
    There are two layers between the ui:table component and the database table: the RowSet layer and the Data Provider layer. The RowSet layer makes the connection to the database, executes the queries, and manages the result set. The Data Provider layer provides a common interface for accessing many types of data, from rowsets, to Array objects, to Enterprise JavaBeans objects.
    Typically, the only time you work with the RowSet object is when you need to set query parameters. In most other cases, you should use the Data Provider to access and manipulate the data.
    What can a CachedRowSet (or CachedRowSetDataProvider?) do?
    Check out the API that I pointed you to to see what you can do with a CachedRowSet.
    Does the Table cache the data itself? Maybe this way is convenient for filtering and ordering?
    Thx.
    I do not know the answers to these questions.
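    As a minimal sketch of paging at the CachedRowSet level (the JDBC URL, credentials, and the person table are assumptions for illustration; the ui:table/TableDataProvider wiring is omitted), setPageSize() plus nextPage() holds only one page of rows in memory at a time instead of materializing the whole result set:
    import javax.sql.rowset.CachedRowSet;
    import javax.sql.rowset.RowSetProvider;

    public class PagedRowSetDemo {
        public static void main(String[] args) throws Exception {
            CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
            crs.setUrl("jdbc:mysql://localhost:3306/demo"); // assumed connection details
            crs.setUsername("user");
            crs.setPassword("secret");
            crs.setCommand("SELECT id, name FROM person");  // assumed table
            crs.setPageSize(100);      // rows held in memory per page
            crs.execute();

            do {
                while (crs.next()) {
                    System.out.println(crs.getInt("id") + " " + crs.getString("name"));
                }
            } while (crs.nextPage()); // fetch the next 100 rows on demand
            crs.close();
        }
    }
    Whether a given data provider exposes this paging to ui:table is a separate question, but it shows the rowset itself does not have to load everything.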

  • Using workspaces with large tables

    Hello
    I've got a few large tables (6-10GB+) that will have around 500k new rows added on a daily basis as part of an overnight batch job. No rows are ever updated, only inserted or deleted and then re-inserted. I want to change the process that adds the new rows from an overnight batch to a near-real-time process, i.e. a queue will be populated with requests to rebuild the content of these tables for specific parent IDs, and a process will consume those requests throughout the day rather than going through the whole list in one go.
    I need to provide views of the data as of a point in time, i.e. what the content of the tables was at close of business yesterday, and for this I am considering using workspaces.
    I need to keep at least 10 days' worth of data, and I was planning to partition the table and drop one partition every day. If I use workspaces, I can see that Oracle creates a view in place of the original table and creates a versioned table with the _LT suffix - this is the table name returned by DBMS_WM.GetPhysicalTableName. Would it be considered bad practice to drop partitions from this physical table as I would do with a non-version-enabled table? If so, what would be the best method for dropping off old data?
    Thanks in advance
    David

    I've just spotted the workspace manager forum, I'll post there. :-)

  • Error in sync group with large tables

    Since a few days ago, the automated sync process configured in the Windows Azure portal has been failing. The following message appears:
    SqlException Error Code: -2146232060 - SqlError Number:40550, 
    Message: The session has been terminated because it has acquired too many locks. 
    Looking for the error on the internet, I've found the following post:
    http://blogs.msdn.com/b/sync/archive/2010/09/24/how-to-sync-large-sql-server-databases-to-sql-azure.aspx
    Basically it says that in order to increase the application transaction size, it's necessary to include some parameters in the remote and local providers. There's an example script for that.
    But how can I apply this change if my data sync process was created through the Azure web portal? Is there a way to access the sync scripts? How can I increase the transaction size from the Azure portal?
    Please, any help is welcome.
    Alvaro

    Hi Alvaro,
    I'm afraid there is no way to access the sync scripts or increase the transaction size from the Azure portal when using a SQL Azure data sync group.
    Error 40550 occurs when a session consumes more than one million locks. You can use the following DMVs to monitor your transactions in SQL Azure. Usually, the solution for this error is to read or modify fewer rows in a single transaction:
    sys.dm_tran_active_transactions
    sys.dm_tran_database_transactions
    sys.dm_tran_locks
    sys.dm_tran_session_transactions
    In your scenario, to overcome error 40550, I recommend you use the bcp utility or SQL Server Integration Services (SSIS) to move data from the large table to SQL Azure.
    With the bcp utility, you can divide your data into multiple sections and upload each section by executing multiple bcp commands simultaneously. With SSIS, you can divide your data into multiple files on the file system and upload each file by executing multiple streams simultaneously.
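    The "fewer rows per transaction" advice can also be sketched in plain JDBC (the dbo.BigTable name, payload column, and batch size are illustrative assumptions, not part of the sync framework): commit in sections so no single transaction accumulates anywhere near a million locks.
    import java.sql.*;
    import java.util.List;

    public class BatchedUploader {
        private static final int BATCH_SIZE = 5000; // assumed; keep each transaction small

        public static void upload(Connection conn, List<String> rows) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO dbo.BigTable (payload) VALUES (?)")) {
                int count = 0;
                for (String row : rows) {
                    ps.setString(1, row);
                    ps.addBatch();
                    if (++count % BATCH_SIZE == 0) {
                        ps.executeBatch();
                        conn.commit(); // release locks before the next section
                    }
                }
                ps.executeBatch(); // flush the remainder
                conn.commit();
            }
        }
    }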
    Reference:
    Optimizing Data Access and Messaging - SQL Azure Connection Management
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • XSU: Dealing with large tables / large XML files

    Hi,
    I'm trying to generate an XML file from a "large" table (about 7 million rows, 512 MB of storage) by means of XSU. I run into "java.lang.OutOfMemoryError" even after raising the heap size to 1 GB (option -Xmx1024m on the java command line).
    For the moment, I'm involved in an evaluation process. But in the near future, our applications are likely to deal with large amounts of XML data (typically hundreds of megabytes of storage, which means possibly gigabytes of XML code), both updating/inserting data and producing XML streams from existing data in a relational DB.
    Any ideas about memory issues regarding XSU? Should we consider using XMLType instead of "classical" relational tables loaded/unloaded by means of XSU?
    Any hint appreciated.
    Regards,
    /Hervi QUENIVET
    P.S. Our environment is Red Hat Linux 7.3 and an Oracle 9.2.0.1 server.

    Try to split the XML before you process it. You can take a look at the XMLDocumentSplitter explained in the book Building Oracle XML Applications by Steve Muench.
    The other alternative is to write your own SAX handler and send the chunks of XML for insert.
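    As a rough illustration of the SAX-handler alternative (the ROW element name and chunk size are assumptions based on XSU's usual <ROWSET>/<ROW> output, and the insert step is left as a stub), the handler below streams the document and hands off rows in fixed-size chunks, so memory use is bounded by the chunk size rather than the document size:
    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;

    public class ChunkingRowHandler extends DefaultHandler {
        private static final int CHUNK_SIZE = 1000; // rows per insert, assumed
        private final List<String> chunk = new ArrayList<>();
        private StringBuilder row;

        @Override
        public void startElement(String uri, String local, String qName, Attributes atts) {
            if ("ROW".equals(qName)) {
                row = new StringBuilder("<ROW>");
            } else if (row != null) {
                row.append('<').append(qName).append('>'); // attributes omitted for brevity
            }
        }

        @Override
        public void characters(char[] ch, int start, int len) {
            if (row != null) row.append(ch, start, len);
        }

        @Override
        public void endElement(String uri, String local, String qName) {
            if (row == null) return;
            if ("ROW".equals(qName)) {
                chunk.add(row.append("</ROW>").toString());
                row = null;
                if (chunk.size() >= CHUNK_SIZE) flush();
            } else {
                row.append("</").append(qName).append('>');
            }
        }

        @Override
        public void endDocument() { flush(); }

        private void flush() {
            if (chunk.isEmpty()) return;
            // Stub: send this chunk for insert (e.g. via XSU or a JDBC batch), then drop it.
            System.out.println("inserting " + chunk.size() + " rows");
            chunk.clear();
        }

        public static void main(String[] args) throws Exception {
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new File(args[0]), new ChunkingRowHandler());
        }
    }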

  • Working with large tables - thumbnail size

    Hi,
    I'm working with some oversized tables in iBooks Author. What I usually do is make the table as needed, then use the "Uses thumbnail on page" option in the Layout section of the inspector and adjust the thumbnail size to fit the page. What happened this time is that after the document was closed and reopened, some tables reset the thumbnail size back to the default, which is small. I can't seem to find what's causing this; one table is not behaving like that, although it was made using the same method. Has anyone else run into the same thing? Any suggestions?
    Thanks in advance.

    Why don't you use a stored proc?
    Why are you ordering it?
    Should I take partial entries in a loop? Yep, because software isn't perfect. There's no point in attempting to process the universe when you know it will fail sometime, and it's easier to handle smaller failures than large ones (and you won't have to redo everything).

  • Final Cut X glacial slowdown with large media library

    I am struggling with a vexing problem. I stupidly went to bed as I told Final Cut X to "import all" on a "prepared to leave my edit bay for the other coast" 4TB data drive containing just three duplicated Final Cut X timelines, but many, many other files. So, after about 12 hours of importing, NONE of my timelines had been recognized, but over 25,000 files were added to the media bin.
    So, with this daunting number of files, Final Cut X (latest) on 10.9 (latest) on a Mac Mini with 8 GB of RAM becomes slower than molasses at absolute zero. Cleverly, I tried a "select all" and "DELETE" to rid myself of the mega media bin (more than 25,000 items) and, after attempting to delete for about 8 HOURS, Final Cut X crashed.
    Now I am trying to delete a few at a time, a few thousand times, but this has proved a no-go. To select, say, three files seen on the screen in front of me takes about 30 minutes, as the daunting age old clean operating issue of MacOS seems to be at play: to select three files, each of the files must ask permission of all other files before allowing themselves to be selected.
    Doing ANYTHING in Final Cut X with so many files at play takes eons of time in a 64 bit world.
    Well, I will continue to post on this issue as I work it out. Fun trying to do this in the midst of moving a house (of stuff) to another house.
    A note to Final Cut X programmers: I am the guy who forced the rewrite of the App Store (remember that?) having found its databasing flaws. Here is how it works, or should work: when a user mistakenly does an "import all" and the process sees the massive nature of what's coming, IMMEDIATELY a dialog should appear. It warns about the huge number of files, which would result in 100+ hours of content, and provides alternative choices, which are:
    1) Only import located Final Cut X timelines and associated media. (most salient)
    2) Only import Quicktime HD movie files.
    3) Only import Quicktime SD movie files.
    4) Only import bitmap images.
    5) Cancel.
    It would be cute to add a date function to each choice, like "Created After MM/YYYY" with editable date fields. Focusing an import is what a pro wants.
    Import All is pointless in the extreme, and clogs Final Cut X beyond functionality.
    About THAT: When a media library grows to massive proportions, whenever a user attempts ANYTHING in Final Cut X, there should instantly be a dialog box explaining the situation, with a radio button to suspend Final Cut X and enter a Media Management Mode of a TEXT LIST ONLY, allowing simple, fast, mass deletion. In reality, with 25,000+ files in the media bin, simply shrinking from showing 30 seconds of media to showing 1 second of media takes about 7-10 hours, during which the computer suffers.

    Well, the fix:
    First, sleep, which I haven't done in a while. Moving a house during Christmas is epic and restless.
    Second, unhide the Library. Not seeing it gives old dogs a head scratch.
    Third, delete Final Cut prefs
    com.apple.FinalCut.LSSharedFileList.plist
    com.apple.FinalCut.plist
    Fourth: get and/or update Event Manager X and select the projects on your connected drives. You DID save everything nowhere near your boot drive, didn't you?
    Then, even on a baby Mac Mini, editing at least 1080 to feature length works fine. I have a 1:58:00 and a 59:00 edit in HD working nicely. I do not travel with my Mac Pro towers, but I must travel with Final Cut.

  • ALV: how to save context space for large tables ?

    Dear collegues,
    We are displaying an ALV table that is quite large, so the corresponding DDIC structure and the WD context are large. This has an impact on performance and the load size of the program. Now we will enhance the ALV table again.
    Example: for an icon and its explanatory tooltip displayed in the ALV, context fields like "SOURCE_FIELDNAME" are required for the tooltip as well as for the icon. (They need a lot of characters for each tooltip and icon.)
    Question: do you have an idea how to save context space for those ALV fields?
    Best regards,
    Christian

    >We are displaying an ALV table that is quite large.
    Do you mean quite large as in a large number of columns, or as in a large number of rows (or both)? I assume the problem is probably more related to a large number of rows. For very large tables, you should consider using the standard Table UI element instead of the ALV. For very large tables you can even use a technique called context paging to keep only a subset of the data in the context memory at a time. Here is a recent blog that I created on the topic, with demonstrations of different techniques for table sharing, shared memory, and context paging when dealing with large tables in Web Dynpro ABAP:
    Web Dynpro ABAP: How Fast Can You Consume 1 Million Rows?

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across 4 file groups; an empty partition, file group and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (0, 15, 30, 45, 60);
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    ($(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script, then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. The RANGE RIGHT function means the left file group (File Group 2) is retained when merging ranges.
    RANGE LEFT would have retained the data in File Group 3, where it already resided; no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (-1, 14, 29, 44, 59);
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    ($(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6);
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0))
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    ([Partition_PK] ASC,
    [GUID_PK] ASC)
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the ‘Confirm Table Partitioning’ query, before and after the MERGE.
    The data in the file and file group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data, so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group were used for all partitions; when partitions are created and dropped on separate file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild might be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups, without having to move data. Using a single file group (multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched-out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range, and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT sliding windows using multiple file groups, and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply to share your solution. That way, other community members can benefit from it.
    Regards.
    Sofiya Li
    TechNet Community Support

  • Problem with ADF tables in dynamic regions

    Hello.
    I have a problem when using ADF tables within dynamic regions. If the table is inside the first task flow that is loaded (by default) in the dynamic region, it retrieves the data perfectly, but if the table is in a second, third, and so on task flow, it does not retrieve data. This happens only with ADF tables and graphs; if I use a form component, for example, there is no problem.
    Any idea what could be happening?
    I'm using JDeveloper 11.1.1.1.0
    Appreciate any ideas or help with this problem
    In Internet Explorer 8 it gives the following error:
    Web page error details
    User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)
    Date: Wed, 2 Dec 2009 16:17:48 UTC
    Message: 'null' is null or not an object
    Line: 1105
    Character: 2
    Code: 0
    URI: http://seisbd101.dgac.cl:7001/SGLadf-ViewController-context-root/afr/partition/ie/default/opt/collection-11-r1.js

    Hi,
    set the managed bean scope for the bean that switches between the regions to a scope broader than request: for example, set it to viewScope or pageFlowScope. This fixes the problem.
    Frank

  • Steps to troubleshoot and get past common problems with Audition

    When you encounter a problem with Audition, the following steps are often useful for getting past the problem and/or determining the cause of the problem:
    1.  Hold down SHIFT while you launch Audition
         This overrides the preference files and launches Audition using the default settings.
    2.  Manually rename or delete the preferences folder
         Step 1 only overrides certain preference files, while others such as Workspace preference files, may be the cause of the problem.  Depending on your OS, locate the "5.0" folder in the location below and rename it or delete it.
         Windows XP: C:\Documents and Settings\<username>\Application Data\Adobe\Audition\5.0\
         Windows Vista/7: C:\Users\<username>\AppData\Roaming\Adobe\Audition\5.0\ *
         Mac OS X: ~/Library/Preferences/Adobe/Audition/5.0/ **
              * "AppData" may be a hidden folder.  You can type it into the location bar, or enable "Show Hidden Files" in Windows.
              ** This is your user-level Library folder, not the system-level tree.
    3. Check your Audition Log.txt file
         To enable a log file with CS 5.5, you must create an empty file in your preferences folder called "Audition Log.txt" using notepad, text edit, or any other editor.  After you create this file, launch Audition and if it fails, open and/or share the log file for more specific information about what's happening.
    4. Check the Operating System console or error log
         Both OS X and Windows can track application errors, and if the problem is occurring outside of the application code - a driver conflict, for example - then the OS error report may be more informative than what we can get from the application.
         OS X: launch /Applications/Utilities/Console.app. Clear the view, then launch Audition and note any error messages that appear.
         Windows: launch Control Panel > Administrative Tools > Event Viewer > Windows Logs > Application then launch Audition and note any error messages that appear.
    5. Re-install Audition
         If at this point, nothing has resolved the issue or the error logs are inconclusive, it's a good time to uninstall Audition, reboot, and reinstall Audition.
    6. Obtain the full crash dump and send it to the Adobe team
         So you've walked through all the above steps, you've disconnected any external hardware interfaces to rule out any device or driver conflicts, and you've exhausted the basic troubleshooting steps.  Visit http://forums.adobe.com/thread/900619 and follow the steps for your OS that Charles has documented to best obtain a full crash memory dump for Audition, and send that to [email protected] with a description of the problem and the steps you've taken, and we'll take a look. 

    Firstly, yes, these are common problems; and secondly, hopefully they will be fixed with software update 2.1, which is due for release this coming Friday!

  • Problem with SUBMIT report [ WITH SELECTION-TABLE ] or [ IN range ]

    Hello Everybody,
    I am trying to call transaction F.80 for mass reversal of FI documents by using the SUBMIT statement and its parameters, like this:
      LOOP AT i_zfi013 INTO wa_zfi013.
        PERFORM llena_params USING 'BR_BELNR' 'S' 'I' 'EQ' wa_zfi013-num_doc ''.
    range_line-sign   = 'I'.
    range_line-option = 'EQ'.
    range_line-low    = wa_zfi013-num_doc.
    APPEND range_line TO range_tab.
    endloop.
          SUBMIT sapf080
            WITH br_bukrs-low = p_bukrs
            WITH SELECTION-TABLE it_params  [ same  problem with -  WITH BR_BELNR IN range_tab]
            WITH br_gjahr-low = p_an1
            WITH stogrd = '05'
            WITH testlauf = ''
            AND RETURN.
    My problem is that when the report is executed, BR_BELNR only picks up one document of all the inputs in the selection criteria from the loop. If I add [VIA SELECTION-SCREEN] to the SUBMIT and open the multiple selection criteria on the screen, I can see that all the documents are set in it from the ABAP code in the loop; I just need to press F8 to copy them and run the program, which then processes all the documents normally.
    Can someone help me with this? Is there a way to execute the transaction via SUBMIT with the multiple selection criteria for the document number working properly?
    Thank you for your time and help.

    This is my code:
      TYPES: BEGIN OF T_ZFI013,
              BUKRS     TYPE BUKRS,
              GJAHR     TYPE GJAHR,
              MONAT     TYPE MONAT,
              ANLN1     TYPE ANLN1,
              ANLN2     TYPE ANLN2,
              NUM_DOC     TYPE BELNR_D,
              DATE     TYPE DATUM,
              TIME  TYPE UZEIT,
              USER     TYPE SYUNAME,
             END OF T_ZFI013.
       DATA: I_ZFI013  TYPE STANDARD TABLE OF T_ZFI013,
          WA_ZFI013 TYPE T_ZFI013.
      DATA: br_belnr       TYPE BELNR_D,
            rspar_tab  TYPE TABLE OF rsparams,
            rspar_line LIKE LINE OF rspar_tab,
            range_tab  LIKE RANGE OF br_belnr,
            range_line LIKE LINE OF range_tab."range_tab.
      LOOP AT i_zfi013 INTO wa_zfi013.
        range_line-sign   = 'I'.
        range_line-option = 'EQ'.
        range_line-low    = wa_zfi013-num_doc.
        APPEND range_line TO range_tab.
      ENDLOOP.
      SUBMIT sapf080
        WITH br_bukrs-low = p_bukrs
        WITH br_belnr IN range_tab
        WITH br_gjahr-low = p_an1
        WITH stogrd = '05'
        WITH testlauf = ''.
    This is the RANGE_TAB table before submit:
    1     I     EQ     1001xxxxxx
    2     I     EQ     1002xxxxxx
    3     I     EQ     1003xxxxxx
    4     I     EQ     1004xxxxxx
    5     I     EQ     1005xxxxxx
    6     I     EQ     1006xxxxxx
    7     I     EQ     1007xxxxxx
    8     I     EQ     1008xxxxxx
    I think this won't work for some reason, so I will start to do this with a BDC.
    Many thanks for your help.

  • My iPod Touch 3rd Gen is having problems with the touch screen being unresponsive, too responsive, freezing and constantly resizing. Is this a common problem? Is there a fix?

    My iPod Touch (3rd Gen) is having problems with the touch screen being unresponsive, too responsive, freezing, and constantly resizing. It varies between not responding when I tap the screen and acting as if I double-tapped or tapped and held. Every so often it freezes for a minute or so and then comes back. The screen is constantly changing size too, such that I often can't see most of the top status bar or other edges. This makes typing almost impossible, ruins most games, and generally makes everything difficult, but otherwise the iPod is working correctly. Powering off, shutting down apps, and letting it sit have not worked. This happened once before, months ago, but the issue resolved itself somehow. Is this a common problem? Is there a fix?

    Try this for the freezing
    http://www.apple.com/support/ipodtouch/assistant/ipodtouch/
    Or if they don't work put it into recovery mode
    See here
    http://www.apple.com/support/ipodtouch/assistant/restore/
    Then restore
    http://support.apple.com/kb/HT1414

  • Out.println() problems with large amount of data in jsp page

    I have this kind of code in my JSP page:
    out.clearBuffer();
    out.println(myText); // size of myText is about 300 kb
    The problem is that I manage to print the whole text only sometimes. Very often the receiving page gets only the first 40 kb and then the printing stops.
    I have made tests where I split myText into smaller parts and out.print() them one by one:
    Vector texts = splitTextToSmallerParts(myText);
    for(int i = 0; i < texts.size(); i++) {
      out.print(texts.get(i));
      out.flush();
    }
    This produces the same kind of result. Sometimes all parts are printed, but mostly only the first parts.
    I have tried to increase the buffer size, but even that doesn't make the printing reliable. I have also tried autoFlush="false" so that I flush before the buffer overflows; again the same result, sometimes it works, sometimes it doesn't.
    Originally I use a system where Visual Basic in Excel calls a JSP page. However, I don't think that matters, since the same problems occur if I use a browser.
    If anyone knows something about problems with large JSP pages, I would appreciate it.

    Well, there are many ways you could do this, but it depends on what you are looking for.
    For instance, generating an Excel Spreadsheet could be quite easy:
    import javax.servlet.*;
    import javax.servlet.http.*;
    import java.io.*;
    public class TableTest extends HttpServlet{
         public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
              response.setContentType("application/xls");
              PrintWriter out = new PrintWriter(response.getOutputStream());
                    out.println("Col1\tCol2\tCol3\tCol4");
                    out.println("1\t2\t3\t4");
                    out.println("3\t1\t5\t7");
                    out.println("2\t9\t3\t3");
              out.flush();
              out.close();
         }
    }
    Just try this simple code; it works just fine... I used the same approach to generate a report of 30000 rows and 40 cols (more or less 5MB), so it should do the job for you.
    Regards
