Database fragmentation

I have a question about database fragmentation.
I know that fragmentation can reduce query performance: when a segment's blocks are distributed
across many extents, a full scan takes longer because the Oracle engine has to locate the address
of the next extent.
I want to know if there is any system view in which you can check whether your table or index
is highly fragmented. If needed I will re-create, move, or rebuild the table or index, but first I want to know whether the degree of fragmentation is high.
Any useful script or query to do this, or any interesting Oracle system view?
Any advice will be greatly appreciated.
Thanks in advance
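For what it's worth, a quick way to see which segments are spread across many extents is a query like this (a sketch against DBA_SEGMENTS; the threshold is arbitrary, and in locally managed tablespaces a high extent count is not by itself a problem):
-- Segments with many extents (requires DBA privileges).
SELECT owner, segment_name, segment_type, extents, blocks
FROM   dba_segments
WHERE  extents > 100
ORDER  BY extents DESC;
For a single index, ANALYZE INDEX ... VALIDATE STRUCTURE populates INDEX_STATS (compare DEL_LF_ROWS against LF_ROWS to gauge deleted-row waste); the index name here is a placeholder:
ANALYZE INDEX scott.emp_pk VALIDATE STRUCTURE;
SELECT name, lf_rows, del_lf_rows FROM index_stats;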

moslee wrote:
Hi Sybrand
Thanks for your prompt reply.
Here's a bit of my current situation, which I have just taken over:
Up to now my company is still using 8.1.6 for 3 identical DBs (DMT), and they are supposed to be in sync. Roughly every 50-60 days they fall out of sync and I receive an alert. The previous DBAs always reckoned that the issue was due to fragmentation in one of the DBs, and they would always do a physical copy-and-paste of the system and data files to resolve the issue. Thus, the idea of 'defragmenting databases' still exists.
This, in itself, makes no sense. What defines a database not being 'in sync' with another? Exactly what situation is it that triggers this alert? The term 'in sync' could mean anything at all, but it usually has something to do with data. I've never seen it mean anything that would be remotely related to fragmentation.
I want to change them to LMT, but they are all top-critical production DBs, and I have only 1 such DB for development, not 3. One way I could prove that LMT reduces or eliminates fragmentation is to execute the DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL procedure in those 3 top-critical production DBs and hope that I will not receive an alert in the future, but this would be very, very risky.
This is the reason why I need to prove and test that LMT actually works before implementing it in production. With that proof and testing done, I can then confidently claim that LMT reduces or eliminates fragmentation.
You are not going to be able to "prove" anything until you have a firm definition of "fragmentation" and whatever it is that you refer to as "being in sync". In the broader sense, nothing can ever be "proven" until the terms are defined and the definitions accepted by all interested parties.
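As a sketch of the kind of test that could be run on the single development database first (the tablespace name is a placeholder, and migrating SYSTEM has additional prerequisites):
-- Migrate one dictionary-managed tablespace to locally managed:
EXEC DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('USERS');
-- Confirm which tablespaces are now locally managed:
SELECT tablespace_name, extent_management, allocation_type
FROM   dba_tablespaces;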

Similar Messages

  • Remove database fragmentation in Essbase

    Hi,
    Can somebody guide me on the best option to remove database fragmentation in Essbase on the production server?
    I see the average clustering ratio is 0.5, which I think should be higher.
    The option I know of:
    export data (all) > clear database > load data again.
    Regards
    Kumar

    Either a full restructure or an export/clear/import of the data should remove the fragmentation.
    Personally I prefer to export/import.
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • Need the report for database fragmentation

    Hi,
    Is there any script to get a database fragmentation report?
    Thanks

    Hi;
    Please also check:
    http://www.orafaq.com/node/1936
    Script to Report Extents and Contiguous Free Space [ID 162994.1]
    Script to Report Tablespace Free and Fragmentation [ID 1019709.6]
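    A simplified query in the spirit of those notes, showing how chopped up the free space in each tablespace is (a sketch, not the MOS script itself):
    SELECT tablespace_name,
           COUNT(*) AS free_chunks,
           ROUND(MAX(bytes)/1024) AS largest_chunk_kb,
           ROUND(SUM(bytes)/1024) AS total_free_kb
    FROM   dba_free_space
    GROUP  BY tablespace_name
    ORDER  BY free_chunks DESC;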
    Regards
    Helios

  • Data distribution scheme and database fragmentation

    Hi all,
    I'm working on a scenario (university coursework) involving the fragmentation of a central database. A company has regional offices (England, Wales, Scotland) and each regional office has a different combination of business areas. They currently have one central database in their head office, and my task is to "design a data distribution scheme". By 'scheme', does this mean something like horizontal/vertical fragmentation? Also, can somebody point me to an Oracle-specific example of creating a fragmented table? I've tried to search online and have found the "partition by" keyword, but not much else except for database links - though I'm thinking those are more concerned with querying than with actually creating the fragments.
    Many thanks for your time

    >
    Partitioning is what the tutor meant by "fragmentation". So if there is a current central database and I have created new databases for each regional office I could run something like the below statement on the regional databases to create a bespoke version of the employee table filtered by data relevant to them? This is all theoretical and we don't have to develop the database, I just want to get the syntax correct - Thanks!
    >
    There you go talking about 'new databases' again. You said your original task was this
    >
    my task is to "design a data distribution scheme".
    >
    Is the task to give the regions access to their own data in the ONE central DB? Or to actually create a new DB for each region that contains ONLY that region's data?
    So are we talking ACCESS to a central DB by region? Or are we talking replication of the entire central DB to multiple regions?
    Your example table is partitioned by region. But if each region has their own DB why would you put data for other regions in it?
    If you are wanting each region to have access to their own data in the central DB then you could partition the central DB tables like your example:
    CREATE TABLE employees (
      id        NUMBER NOT NULL,
      fname     VARCHAR2(30),
      lname     VARCHAR2(30),
      hired     DATE DEFAULT DATE '1970-01-01' NOT NULL,
      separated DATE DEFAULT DATE '9999-12-31' NOT NULL,
      job_code  NUMBER,
      store_id  NUMBER,
      region_id NUMBER        -- list-partitioning key column
    )
    PARTITION BY LIST (region_id) (
      PARTITION wales VALUES (2)
    );
    But if you are creating a regional DB that includes data only for that region there is no need to partition it.

  • Server cleanup wizard problem - unable to connect to the WSUS Server Database.

    I'm trying to run server cleanup wizard.. it starts to run and then after a while it gives me this error:
    The WSUS administration console was unable to connect to the WSUS Server Database.
    Verify that SQL server is running on the WSUS Server. If the problem persists, try restarting SQL.
    System.Data.SqlClient.SqlException -- Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
    The statement has been terminated.
    Source
    .Net SqlClient Data Provider
    Stack Trace:
       at Microsoft.UpdateServices.Internal.BaseApi.SoapExceptionProcessor.DeserializeAndThrow(SoapException soapException)
       at Microsoft.UpdateServices.Internal.DatabaseAccess.AdminDataAccessProxy.ExecuteSPSearchUpdates(String updateScopeXml, String preferredCulture, ExtendedPublicationState publicationState)
       at Microsoft.UpdateServices.Internal.BaseApi.Update.SearchUpdates(UpdateScope searchScope, ExtendedPublicationState publicationState, UpdateServer updateServer)
       at Microsoft.UpdateServices.Internal.BaseApi.UpdateServer.GetUpdates(UpdateScope searchScope)
       at Microsoft.UpdateServices.UI.AdminApiAccess.UpdateManager.GetUpdates(ExtendedUpdateScope filter)
       at Microsoft.UpdateServices.UI.AdminApiAccess.WsusSynchronizationInfo.InitializeDerivedProperties()
       at Microsoft.UpdateServices.UI.AdminApiAccess.WsusSynchronizationInfo.get_NewUpdatesCount()
       at Microsoft.UpdateServices.UI.SnapIn.Pages.SyncResultsListPage.GetSyncInfoRow(WsusSynchronizationInfo syncInfo)
       at Microsoft.UpdateServices.UI.SnapIn.Pages.SyncResultsListPage.GetListRows()
    Thanks

     Some questions:
    Are there any other databases running on this Std Edition SQL service?
    [a] Yes there are, we have Kaspersky enterprise DB, Report Server DB and local application DB.
    Are there any other services running on this WSUS Server?
    [b] Yes there are, we have Active Directory, Kaspersky enterprise, SQL Server 2005, and WSUS all on the same server.
    How many days since your WSUS server was first installed?
    [c] It's been about a year now.
    What is the physical size of the SUSDB.mdf file?
    [d] 9,666,752 KB
    What is the hardware configuration of this machine, including disk drives?
    [e] Intel Xeon 1.86, 2GB Ram, HD C: 39GB - E: 25.2, running Windows Server 2003 R2 SP2.
    How many client systems are you servicing from this WSUS Server?
    [f] Around 40.
    What products/classifications are you synchronizing?
    [g] Windows XP-vista, Windows Server 2003, Office 2003-2007, SQL Server 2005.
    Okay, for starters, you have an underpowered, overextended machine running Active Directory, ASP.NET, and a database server, all on a sub-2GHz CPU with 2GB RAM and not enough disk spindles. The machine has had WSUS running for about a year, and the WSUS database is 9GB in size.
    There's no doubt in my mind that some of your performance issues are directly related to disk and database fragmentation.
    There's also no doubt that some of your performance issues are directly related to memory starvation.
    I'd suggest the following long-term fixes:
    1. Get a second machine. Make it a dedicated database server. Provision it accordingly to support servicing multiple database applications.
    2. Lacking #1, this machine needs more memory. It also needs more disk spindles. At a minimum the databases being serviced should be on a dedicated physical drive; ideally there would be two dedicated drives allocated for supporting database services.
    For the short-term fixes, do this:
    1. During after-hours time, if you don't already have one, build a temporary machine that can act as a DC/GC while you take this machine temporarily offline.
    2. Shut down the Update Services service, the SQL Server database engine, and any other services dependent on the SQL Server database engine (Kaspersky and other reporting applications). Disconnect from the network to temporarily eliminate DC traffic. (You could also shut down the AD services, but disconnecting the network cable is ever-so-much easier.) Defragment ALL drives.
    3. Restart ONLY the SQL Server service. Obtain the SQL script to reindex the WSUS database (a generic sketch follows this list).
    4. Restart ONLY the Update Services service. Attempt the Server Cleanup Wizard again. Run it in two passes: pass 1 performing everything except "remove unused updates", pass 2 running only "remove unused updates".
    5. After completion of the Server Cleanup Wizard, reconnect the machine to the network and resume all other services.
    6. If you're able to complete #4, secure the services of a well-qualified DBA to determine if there are any misconfigurations in your SQL Server setup that would account for why your WSUS database is 9GB in size -- such as improperly configured autogrowth parameters. Based on the products you're updating and only forty clients, 9GB is about 3x the maximum size I would expect to see in a WSUS database. It's possible this is simply caused by excess unused updates; it's possible it's caused by fragmentation; it's probable it was caused by unnecessary autogrowth of the database due to insufficient update maintenance. You'll want a DBA to assist you in shrinking that database after you successfully run the database maintenance and the Server Cleanup Wizard.
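    For the reindex in step 3, a generic pass along these lines can stand in if the linked script is unavailable (a sketch only; Microsoft's official WSUS reindex script also rebuilds statistics and is the better choice):
    USE SUSDB;  -- the WSUS database
    DECLARE @sql nvarchar(max);
    SET @sql = N'';
    -- Build one ALTER INDEX ... REBUILD statement per user table, then run them.
    SELECT @sql = @sql + N'ALTER INDEX ALL ON '
           + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' REBUILD;'
    FROM sys.tables t
    JOIN sys.schemas s ON s.schema_id = t.schema_id;
    EXEC sp_executesql @sql;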
    Lawrence Garvin, M.S., MCITP(x2), MCTS(x5), MCP(x7), MCBMSP
    Principal/CTO, Onsite Technology Solutions, Houston, Texas
    Microsoft MVP - Software Distribution (2005-2009)

  • Database creation in Oracle9i

    Hi,
    I am creating a new database, aflin, by using the following steps from the Oracle documentation, but it's giving an error.
    1) set oracle_sid=aflin
    2) SQLPLUS /nolog
    CONNECT sys/password AS sysdba
    When I issue this CONNECT command, it returns "insufficient privileges or invalid username/password".
    I changed the init<sid>.ora and init.ora to reflect this new database, aflin.
    Most probably it is not picking up the new database name, aflin.
    How do I set ORACLE_SID so that I can create this new database, aflin?
    Also, when I skip the command in step 1, Oracle connects to some other database (as I can see from SELECT instance_name FROM v$instance).
    Can you tell me how to go about creating this new database?
    Thanks in advance,
    MAK

    Yes, a full export apparently also creates schemas/tablespaces for you.
    Use "ignore=y" if it should be able to overcome existing tablespaces/users.
    Well... I think I did a full=y exp/imp once, but I simply don't recall much about it (Oracle 7 -> 8).
    I just did a quick check for any wise words from support, and there were none - except that they seem to think "full=y" is good for defragmenting an entire database, according to this plan:
    Reducing Database Fragmentation
    You can reduce fragmentation by performing a full database export and import as follows:
    - Do a full database export (FULL=y) to back up the entire database.
    - Shut down the Oracle database server after all users are logged off.
    - Delete the database.
    - Re-create the database using the CREATE DATABASE statement.
    - Do a full database import (FULL=y) to restore the entire database.
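    In command form, those export and import steps would look roughly like this (a sketch; credentials and file names are placeholders):
    exp system/manager FULL=y FILE=full.dmp LOG=exp_full.log
    imp system/manager FULL=y FILE=full.dmp LOG=imp_full.log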
    Not much info, I know :)
    Do you have the space/time to just try it, and redo the task if it fails miserably?
    (and post what you experience :) )
    - but I would hope it works "out of the box" - I mean, they invented "full=y", so they'd better make it work too, and be able to handle duplicates and such :D
    Or, you might consider doing the full exp and then picking from it what you need?
    (But then you lose things that exist only in a full export (triggers / "public synonyms" and ?)
    /Ryberg

  • What is database re-indexing?

    Hi,
    What is database re-indexing, and what is the concept behind it?
    adil

    One can REBUILD or REORGANIZE an existing index.  
    BOL: "This topic describes how to reorganize or rebuild a fragmented index in SQL Server 2012 by using SQL Server Management Studio or Transact-SQL. The SQL Server Database Engine automatically maintains indexes whenever insert, update, or delete operations
    are made to the underlying data. Over time these modifications can cause the information in the index to become scattered in the database (fragmented). Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does
    not match the physical ordering inside the data file. Heavily fragmented indexes can degrade query performance and cause your application to respond slowly.
    You can remedy index fragmentation by reorganizing or rebuilding an index. For partitioned indexes built on a partition scheme, you can use either of these methods on a complete index or a single partition of an index. Rebuilding an index drops and re-creates
    the index. This removes fragmentation, reclaims disk space by compacting the pages based on the specified or existing fill factor setting, and reorders the index rows in contiguous pages. When ALL is specified, all indexes on the table are dropped and rebuilt
    in a single transaction. Reorganizing an index uses minimal system resources. It defragments the leaf level of clustered and nonclustered indexes on tables and views by physically reordering the leaf-level pages to match the logical, left to right, order of
    the leaf nodes. Reorganizing also compacts the index pages. Compaction is based on the existing fill factor value."
    LINK: http://technet.microsoft.com/en-us/library/ms189858.aspx
    The following blog demonstrates how to REBUILD all the indexes in a database:
    http://www.sqlusa.com/bestpractices2008/rebuild-all-indexes/
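    To decide between the two for each index, a query along these lines helps (a sketch using sys.dm_db_index_physical_stats; the 5%/30% thresholds follow the BOL guidance and are tunable):
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           CASE WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
                WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
                ELSE 'leave as is' END AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.index_id > 0   -- skip heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;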
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Dynamic Date dimensions in MS Sql Server 2005

    Environment : BO 4 SP3
    Database : MS Sql 2005
    Trying to create a dynamic date universe with MS SQL as the back end.
    But when I try creating the current year-month date class, it returns an error:
    Parse failed: Exception : DBD, [Microsoft SQL Server Native Client 10.0] : Incorrect syntax near 'From'. State: 42000
    I tested the universe connection, which is working fine, as evident from the configuration below:
    BusinessObjects Configuration
    Version 3.2.1.80
    Build 14.1.1.1036
    Network Layer OLE DB
    DBMS Engine MS SQL Server 2008
    Language en
    Charset CP1252
    Library D:\Programs\BusinessObjects4\SAP BusinessObjects Enterprise XI 4.0\dataAccess\connectionServer\drivers\lib32\dbd_wsqloledb.dll
    SBO D:\Programs\BusinessObjects4\SAP BusinessObjects Enterprise XI 4.0\dataAccess\connectionServer\oledb\sqlsrv.sbo
    RSS D:\Programs\BusinessObjects4\SAP BusinessObjects Enterprise XI 4.0\dataAccess\connectionServer\oledb\sqlsrv.rss
    PRM D:\Programs\BusinessObjects4\SAP BusinessObjects Enterprise XI 4.0\dataAccess\connectionServer\oledb\sqlsrv.prm
    Strategies Not Defined
    Middleware and DBMS Configuration
    Driver architecture 32
    Charset UCS2
    Driver name Microsoft SQL Server Native Client 10.0
    Driver version 10.50.1600.1
    Provider file name sqlncli10.dll
    OLE DB Version 02.80
    DBMS name Microsoft SQL Server
    DBMS version 09.00.5069

  • Reindex Table in MS SQL Server 2005

    Hi gurus,
    Is it possible or effective to reindex all SAP tables in SQL Server 2005?
    I've seen lots of examples of re-indexing tables in SQL Server, but not in an SAP environment (SAP tables are really numerous, around 77,000 tables).
    Should I defragment my database?
    I've executed DBCC SHOWCONTIG and found that some tables have logical fragmentation of more than 70%.
    Is there any tcode in SAP to do this re-index task?
    Thanks before gurus.

    Per SAP Note 1241422 - Database fragmentation and reindexing improves performance
    Summary
    Symptom
    During the lifetime of a database (any DB, not only SAP Business One), and due to inserts/updates/deletes of data, the information in indexes becomes fragmented. Fragmentation exists when indexes have pages in which the logical ordering, based on the key value, does not match the physical ordering inside the data file. Heavily fragmented indexes can cause slow performance.
    Other terms
    Index, performance, re-index, reindex, slow, poor, DB
    Reason and Prerequisites
    FAQ
    Solution
    It is recommended to run the following rebuild procedure once or twice a month:
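    The note's actual rebuild procedure is not reproduced in the post above; as a stand-in, a generic SQL Server 2005-era pass would be something like this (a sketch only; sp_MSforeachtable is undocumented, and fill factor 90 is an assumption):
    -- Rebuild all indexes in the current database with fill factor 90.
    EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'', '''', 90)';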

  • Assist with my SQL Reporting Script

    I have started writing an HTML SQL reporting script based on Jeffrey Hicks' tutorial.
    Here is my entire script:
    ###################START SCRIPT#####################################
    #requires -version 3.0
    #Create a SQL Server report of said SQL environment
    [cmdletbinding()]
    Param(
    [string]$computername=$env:computername,
    [string]$path="$env:temp\sqlrpt.htm"
    )
    #define an empty array to hold all of the HTML fragments
    #the fragments will break apart each HTML section in the final output so that you can output whatever information you like
    $fragments=@()
    #save current location so I can set it back after importing SQL module
    $curr = get-location
    #import the SQL module
    Import-Module SQLPS -DisableNameChecking
    #change the location back
    set-location $curr
    #get uptime
    Write-Verbose "Getting SQL Server uptime"
    $starttime = Invoke-Sqlcmd -Query 'SELECT sqlserver_start_time AS StartTime FROM sys.dm_os_sys_info' -ServerInstance $computername -database master
    $version = Invoke-Sqlcmd -Query 'Select @@version AS Version' -ServerInstance $computername -database master
    #create an object
    $uptime = New-Object -TypeName PSObject -Property @{
     StartTime = $starttime.Item(0)
     Uptime = (Get-Date)-$starttime.Item(0)
     Version = $version.Item(0).replace("`n","|")
    }
    $tmp = $uptime | ConvertTo-HTML -fragment -AS List
    #replace the "|" placeholder with <br>
    $fragments += $tmp.replace("|","<br>")
    #SQL Host Information
    $smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $computername
    $fragments += "<h3>SQL Host Information Details</h3>"
    $fragments += $smo | select ComputerNamePhysicalNetBios,Name, Processors, ProcessorUsage, PhysicalMemory, PhysicalMemoryUsageInKB, MasterDBPath, BackupDirectory | ConvertTo-HTML -Fragment
    #Get Status of all SQL related Services
    Write-Verbose "Querying services"
    $services = Get-Service -DisplayName *SQL* -ComputerName $computername |
    Select Name,Displayname,Status
    $fragments += "<h3>SQL Services</h3>"
    $fragments += $services | ConvertTo-HTML -Fragment
    #get databases
    #path to databases
    Write-Verbose "Querying datases"
    $dbpath = "SQLServer:\SQL\Localhost\default\databases"
    $fragments += "<h3>Database Utilization</h3>"
    $fragments += dir $dbpath | Select Name,Size,DataSpaceUsage,SpaceAvailable,
    @{Name="PercentFree";Expression={ [math]::Round((($_.SpaceAvailable/1kb)/$_.size)*100,2) }} |
    Sort PercentFree | ConvertTo-HTML -fragment
    #get database backup information
    # Create an SMO connection to the instance
    $smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $computername
    $dbbackups = $smo.Databases
    $fragments += "<h3>Last Database Backup Information</h3>"
    $fragments += $dbbackups | select Name,LastBackupDate, LastLogBackupDate | ConvertTo-HTML -Fragment
    #Login & Service Account Information#SQL Host Information
    $smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $computername
    $fragments += "<h3>Login & Service Account Information</h3>"
    $fragments += $smo | select ServiceAccount, Logins | ConvertTo-HTML -Fragment
    #volume usage
    Write-Verbose "Querying system volumes"
    $data = Get-CimInstance win32_volume -filter "drivetype=3" -ComputerName $computername
    $drives = foreach ($item in $data) {
        $prophash = [ordered]@{
        Drive = $item.DriveLetter
        Volume = $item.DeviceID
        Compressed = $item.Compressed
        SizeGB = $item.capacity/1GB -as [int]
        FreeGB = "{0:N4}" -f ($item.Freespace/1GB )
        PercentFree = [math]::Round((($item.Freespace/$item.capacity) * 100),2)
        }
        #create a new object from the property hash
        New-Object PSObject -Property $prophash
    }
    [xml]$html = $drives | ConvertTo-Html -fragment
    #check each row, skipping the TH header row
    for ($i=1;$i -le $html.table.tr.count-1;$i++) {
      $class = $html.CreateAttribute("class")
      #check the value of the last column and assign a class to the row
      if (($html.table.tr[$i].td[-1] -as [int]) -le 25) {
        $class.value = "danger"
        $html.table.tr[$i].Attributes.Append($class) | Out-Null
      }
      elseif (($html.table.tr[$i].td[-1] -as [int]) -le 35) {
        $class.value = "warn"
        $html.table.tr[$i].Attributes.Append($class) | Out-Null
      }
    }
    $fragments += "<h3>Volume Utilization</h3>"
    $fragments += $html.innerxml
    #define the HTML style
    Write-Verbose "preparing report"
    $imagefile = "c:\scripts\db.png"
    $ImageBits = [Convert]::ToBase64String((Get-Content $imagefile -Encoding Byte))
    $ImageHTML = "<img src=data:image/png;base64,$($ImageBits) alt='db utilization'/>"
    $head = @"
    <style>
    body { background-color:#FAFAFA;
           font-family:Arial;
           font-size:12pt; }
    td, th { border:1px solid black;
             border-collapse:collapse; }
    th { color:white;
         background-color:black; }
    table, tr, td, th { padding: 2px; margin: 0px }
    tr:nth-child(odd) {background-color: lightgray}
    table { margin-left:50px; }
    img
    float:left;
    margin: 0px 25px;
    .danger {background-color: red}
    .warn {background-color: yellow}
    </style>
    $imagehtml
    <br><br><br>
    <H2>SQL Server Report: $Computername</H2>
    <br>
    #create the HTML document
    ConvertTo-HTML -Head $head -Body $fragments -PostContent "<i>report generated: $(Get-Date)</i>" |
    Out-File -FilePath $path -Encoding ascii
    Write-Verbose "Opening report"
    Invoke-Item $path
    ######################END SCRIPT##################################
    I have 2 questions for help in regards to the above script:
    1)  For the Login and Service Account portion I can't get my output to show up properly.  Here is the snip from the script:
    #Login & Service Account Information#SQL Host Information
    $smo = new-object ('Microsoft.SqlServer.Management.Smo.Server') $computername
    $fragments += "<h3>Login & Service Account Information</h3>"
    $fragments += $smo | select ServiceAccount, Logins | ConvertTo-HTML -Fragment
    Here is how the output shows for this portion:
    ServiceAccount
    Logins
    domain\svcAcct
                 Microsoft.SqlServer.Management.Smo.LoginCollection
    I would like to have the login information show up in the table above, listing all the different logins. When I run the script without HTML for that portion and just output to the console, it shows the login info as I would expect.
    2) The 2nd question is: how do I add a variable to the bottom of the script to email the report to a given email address? This is probably simple, but I can't get my head wrapped around this part.
    Thanks all in advance!

    Thanks AnnaWY, that resolved the portion on how to email the report.  I was also able to utilize the following code which does the same thing as well:
    #Send an email with the contents of the report
    $MailBody= Get-Content $path
    $MailSubject= "SQL Server Report"
    $SmtpClient = New-Object system.net.mail.smtpClient
    $SmtpClient.host = "smtp.server.com"
    $MailMessage = New-Object system.net.mail.mailmessage
    $MailMessage.from = "[email protected]"
    $MailMessage.To.add("[email protected]")
    $MailMessage.Subject = $MailSubject
    $MailMessage.IsBodyHtml = 1
    $MailMessage.Body = $MailBody
    $SmtpClient.Send($MailMessage)
    I still have not been able to resolve the portion regarding the login/service account information not showing up in the table correctly.  For the time being I have removed it from the environment report and instead included it as a script of its own
    in our Security Auditing process.

  • Error in formula field: Column 'TASK_BCWP' does not belong to table Task

    Hi,
    We have a formula field to calculate a cost KPI, and that field throws an error in the ULS logs. The error is: PWA:PWA, ServiceApp:Project Server Service Application, User:GDFN\IPL_Content, PSI: SSP: Formula Evaluation Failed! - trying to continue - (System.ArgumentException:
    Column 'TASK_BCWP' does not belong to table Task.
    We are running EPM 2010. Recreating the custom field did not work. We are not yet running the latest CU.
    The formula that we use is:
    IIf([BCWP] > 0; [ACWP] / [BCWP]; 1)
    The stack trace from the ULS log is:
    PWA:http://gdfn-ipl-14/PWA, ServiceApp:Project Server Service Application, User:GDFN\IPL_Content, PSI: SSP: Formula Evaluation Failed! - trying to continue - (System.ArgumentException: Column 'TASK_BCWP' does not belong to table Task.
    at System.Data.DataRow.GetDataColumn(String columnName)
    at System.Data.DataRow.get_Item(String columnName)
    at Microsoft.Office.Project.Server.BusinessLayer.FormulaDataProvider.HaveColumn(DataRow row, String columnName)
    at Microsoft.Office.Project.Server.BusinessLayer.FormulaDataProvider.GetTaskData(Guid nodeId, Int32 fieldId)
    at Microsoft.Office.Project.Server.BusinessLayer.FormulaDataProvider.GetDataInternal(Int32 entityId, Guid nodeId, Int32 fieldId, Boolean canChangeEntity)
    at Microsoft.Office.Project.Server.BusinessLayer.FormulaDataProvider.GetData(Int32 entityId, Guid nodeId, Int32 fieldId)
    at Microsoft.Office.Project.Server.BusinessLayer.Formula.FieldExpression.Evaluate(IFieldEvaluator context, Guid nodeId)
    at Microsoft.Office.Project.Server.BusinessLayer.Formula.GreaterExpression.Evaluate(IFieldEvaluator context, Guid nodeId)
    at Microsoft.Office.Project.Server.BusinessLayer.Formula.ConditionalExpression.Evaluate(IFieldEvaluator context, Guid nodeId)
    at Microsoft.Office.Project.Server.BusinessLayer.Formula.FormulaEvaluator.Evaluate(PlatformContext context, Expression formula, Int32 entityId, Int32 fieldId, Guid nodeId, Dictionary`2 entityFormulaFields)) + -- FieldId = 188776525 -- NodeId = 1b1149c1-52c2-4f7f-ba60-6d65effdb6b3
    -- Formula = IIf(Greater([MSPJ188743691], 0), Divide([MSPJ188743800], [MSPJ188743691]), 1)
    I have no clue where to start looking for the cause of this error. Any help would be appreciated.
    Thanks,
    Quint Mouthaan

    Hi Quint,
    This can be because of database fragmentation. I would recommend that you re-index the databases, perform a defragmentation, and check the behavior again.
    On the other hand, when you receive this message you can try an IISRESET and check. Additionally, you can refer to the following link for defragmenting queries:
    http://support.microsoft.com/kb/943345
    Happy troubleshooting.
    Vikram Daruru - MSFT

  • Error in Calculation Script TCP/IP Error

    <p>Hi all,</p><p> </p><p>I am getting a strange error while running a calculation scriptthrough esscmd.</p><p> </p><p>When i run a calculation script from ESSCMD i get"Network Error: The client or server timed out waiting toreceive data using TCP/IP. Check network connections. Increase theNetRetryCOunt and/or NetDelay values in the ESSBASE.CFG file.Update tis file on both client and server. Restart the client andtry again"<br><br>Actually the script was running fine last week but since 3 daysit's throwing an error.<br><br>The scripts are running from ESSCMD and there are 5 calc scriptsruns. First and Second goes through fine and execute with sts id =0. Starting 3rd calc script it is throwing this error.<br><br>All calc scripts starts with<br>//ESS_LOCALE English_UnitedStates.Latin1@Binary<br>SET CACHE HIGH;<br>SET MSG SUMMARY;<br>SET NOTICE DEFAULT;<br>SET UPDATECALC OFF;<br>SET CALCPARALLEL 7;<br>SET CREATEBLOCKONEQ ON;<br><br>1. Calc Script 1 is about 1988 lines. - Executed successfully<br>2. Calc Script 2 is about 1988 lines. - Executed successfully<br>3. Calc Script 3 is about 600 lines - Throwing TCP/ IP Error<br>4. Calc Script 4 is about 600 lines - Throwing TCP/ IP Error<br>5. Aggregation script - Throwing TCP / IP Error.</p><p> </p><p>Any idea... ??</p><p> </p><p>Thanks in advance..    <br><br><br></p>

    While there is a possibility that you are seeing a real network error, you might want to run a couple of checks if the script that runs the calc is on a different unit than the server. If it is the network, changes to the NETDELAY and NETRETRYCOUNT settings will help.
    But more likely, it is a problem with the Essbase server and the specific app process. I'd suspect that the calc and other things happening are swamping the memory and/or overloading IO.
    Take a good look at your cube, its block sizes, and the nature of the calcs you are running. How many blocks are needed to do a particular calc, and will those all fit in memory at the same time?
    You may need to modify your SET MSG and SET NOTICE parameters so that you can identify the specific step where your calc is having problems. Those are long calc scripts; you may find it useful to break them into smaller modules for testing to determine what the problem is. Also, you need to look at the server and app logs to see if there are any hints there.
    The "Network Error" message is quite non-specific; it only says that the communication has failed, not why. In my experience, it more often happens when an app or the main server process freezes up, and it may actually take a shutdown and restart of at least Essbase if not the whole server.
    As an additional note: is the database fragmented? All apps benefit from a periodic export, reset, reload routine to defrag the database.

  • Datasheet view "lists.asmx" timeout

    Hi, 
    I'm currently trying to help a client with an issue regarding a large list and datasheet view. Everything works as expected until the view contains more than one SPFieldType.User.
    The list contains 9000 items (~5000 items per folder) and the view is shown as a "flat view", so all items are visible.
    There are about 10-15 fields (text, currency, note, calculated fields) in total, and the view opens up in datasheet view, but only the first 100 or so items are loaded. After 120 seconds an error is displayed (Could not connect to the server at this time).
    Fiddler2 shows the POST requests to load items from the list using the Lists.asmx GetListItems method; this call lasts about 30 seconds and then "fails". Since the request wasn't returned within the 30 seconds, the same call is made again. This goes on until the 120-second mark is hit.
    Is there a way to increase the timeout for this call to Lists.asmx to last more than 30 seconds?
    Or is it possible to improve the performance regarding people fields?
    Or is it possible to improve the performance regarding people fields ?
    - Have increased connectionTimeout in IIS 
    - Have checked the database fragmentation level
    - Item Threshold increased to 12 000
    Thanks in advance!

    Hi Daniel,
    According to your description, my understanding is that you want to enlarge the web service timeout value to handle large data.
    We can set the web service timeout value like below:
    WebReference.ProxyClass myProxy = new WebReference.ProxyClass();
    //Set the timeout in milliseconds - e.g. 100 seconds
    myProxy.Timeout = 100000;
    Here is a similar thread for your reference:
    http://stackoverflow.com/questions/711311/setting-timeout-value-for-net-web-service
    Best Regards
    Zhengyu Guo
    TechNet Community Support

  • Errors while db defragging

    Hi folks,
    I'm having trouble with some database files while doing defragmentation.
    11:11:27:301: storage_messages_env::do_defrag: checking 1 database, fragmentation level: 22%(140209261)
    11:11:27:301: storage_messages_db::do_defrag(1): starting
    11:13:07:409: adf_storage_db_error_call: m-00000001: __fop_file_setup: Retry limit (100) exceeded
    11:13:07:409: storage_messages_db::copy_msgs_db(1): error opening database: messages: File exists
    11:13:07:409: storage_messages_env::do_defrag: error defragmenting 1 database
    2011/12/20 11:13:07:409: storage_messages_env::do_defrag: checking 9 database, fragmentation level: 14%(19017432)
    2011/12/20 11:13:07:409: storage_messages_env::do_defrag: checking 10 database, fragmentation level: 20%(19333313)
    2011/12/20 11:13:07:409: storage_messages_db::do_defrag(10): starting
    2011/12/20 11:14:47:511: adf_storage_db_error_call: m-00000010: __fop_file_setup: Retry limit (100) exceeded
    2011/12/20 11:14:47:511: storage_messages_db::copy_msgs_db(10): error opening database: messages: File exists
    2011/12/20 11:14:47:511: storage_messages_env::do_defrag: error defragmenting 10 database
    There are no other processes that could lock those files.
    Inside the application, the database is working fine.
    How can that issue be solved, and what might be the reason?
    Thanks in advance.

    1. Is this a Berkeley DB issue? That is, are you actually using BDB, and if so what version, on what OS/platform?
    2. What exactly are you doing in the do_defrag method? Are you somehow trying to open a BDB database using the DB->open() / Db::open() method, and specify the DB_EXCL flag (along with DB_CREATE)?
    Based on the error messages, you are hitting EEXIST, which means that a call to open a file (presumably a BDB database) was made using the O_EXCL flag (along with the O_CREAT flag) and the file already existed. I am not sure what the code in do_defrag looks like, but you should be able to resolve this by carefully controlling which flags you use when opening the database files.
    Regards,
    Andrei

  • 10g Host command imp.exe not working

    Hi,
    Just created a relatively simple form that uses the HOST command.
    I create a SQL script using TEXT_IO to create a new user.
    I then use HOST to run that script on the app server.
    All this works fine.
    Then I use the HOST command to try and import into the DB. This is where it does nothing.
    I have cut it down to bare bones: a button with the command
    host('D:\oracle\database\BIN\imp.exe LOG=D:\test.log');
    This doesn't even create the log "test.log".
    If I copy and paste this into the Run box on the app server, the log is created.
    Any ideas?
    Thanks

    Hi,
    First try the import from DOS: go to Start > Run > cmd, then run something like this:
    c:\> imp user/pass@orcl file=c:\file.dmp log=c:\log_name.log full=y
    What is import/export and why does one need it?
    Oracle's export (exp) and import (imp) utilities are used to perform logical database backup and recovery. When exporting, database objects are dumped to a binary file which can then be imported into another Oracle database.
    These utilities can be used to move data between different machines, databases or schema. However, as they use a proprietary binary file format, they can only be used between Oracle databases. One cannot export data and expect to import it into a non-Oracle database.
    Various parameters are available to control what objects are exported or imported. To get a list of available parameters, run the exp or imp utilities with the help=yes parameter.
    The export/import utilities are commonly used to perform the following tasks:
    Backup and recovery (small databases only, say < 50GB; if bigger, use RMAN instead)
    Move data between Oracle databases on different platforms (for example from Solaris to Windows)
    Reorganization of data / elimination of database fragmentation (export, drop and re-import tables)
    Upgrade databases from extremely old versions of Oracle (when in-place upgrades are not supported by the Database Upgrade Assistant any more)
    Detect database corruption. Ensure that all the data can be read
    Transporting tablespaces between databases
    Etc.
    From Oracle 10g, users can choose between using the old imp/exp utilities, or the newly introduced Datapump utilities, called expdp and impdp. These new utilities introduce much needed performance improvements, network based exports and imports, etc.
    NOTE: It is generally advised not to use exports as the only means of backing-up a database. Physical backup methods (for example, when you use RMAN) are normally much quicker and supports point in time based recovery (apply archivelogs after recovering a database). Also, exp/imp is not practical for large database environments.
    How does one use the import/export utilities?
    Look for the "imp" and "exp" executables in your $ORACLE_HOME/bin directory. One can run them interactively, using command line parameters, or using parameter files. Look at the imp/exp parameters before starting. These parameters can be listed by executing the following commands: "exp help=yes" or "imp help=yes".
    The following examples demonstrate how the imp/exp utilities can be used:
    exp scott/tiger file=emp.dmp log=emp.log tables=emp rows=yes indexes=no
    exp scott/tiger file=emp.dmp tables=(emp,dept)
    imp scott/tiger file=emp.dmp full=yes
    imp scott/tiger file=emp.dmp fromuser=scott touser=scott tables=dept
    Using a parameter file:
    exp userid=scott/tiger@orcl parfile=export.txt
    ... where export.txt contains:
    BUFFER=10000000
    FILE=account.dmp
    FULL=n
    OWNER=scott
    GRANTS=y
    COMPRESS=y
    NOTE: If you do not like command line utilities, you can import and export data with the "Schema Manager" GUI that ships with Oracle Enterprise Manager (OEM).
    Can one export a subset of a table?
    From Oracle 8i one can use the QUERY= export parameter to selectively unload a subset of the data from a table. You may need to escape special chars on the command line, for example: query=\"where deptno=10\". Look at these examples:
    exp scott/tiger tables=emp query="where deptno=10"
    exp scott/tiger file=abc.dmp tables=abc query=\"where sex=\'f\'\" rows=yes
    --------------------------
    You can also use DBMS_DATAPUMP.
    For example, you can start the export job from a PL/SQL package with the following PL/SQL code:
    declare
        handle  number;
    begin
        handle := dbms_datapump.open('EXPORT','SCHEMA');
        dbms_datapump.add_file(handle,'SCOTT3.DMP','DUMPDIR');
        dbms_datapump.metadata_filter(handle,'SCHEMA_EXPR','= ''SCOTT''');
        dbms_datapump.set_parallel(handle,4);
        dbms_datapump.start_job(handle);
        dbms_datapump.detach(handle);
    end;
    /
    sarah
